nfs-client-provisioner: multi-node access problem

Hello,

I'm encountering a weird problem with Rancher and nfs-client-provisioner.
When I try to deploy multiple pods from different nodes that have access to the same NFS server, all the pods get stuck.
From a MySQL pod I get this error:
[ERROR] [FATAL] InnoDB: fsync() returned 5
[ERROR] mysqld got signal 6 ;

I can, however, read and write to the NFS PVC from any pod.

These are the versions of the deployed components:
Rancher: v2.4.0
Kubectl client: v1.16.8
Kubectl server: v1.17.4
nfs-client-provisioner: 1.2.8

Has anyone already encountered this issue?

Hi there, more of a question on how to use the nfs-client-provisioner here. I am trying to create a deployment using my NFS share, which was created on the master node. I created the StorageClass in the Rancher UI with a mount option pointing at the IP of the NFS share, but I'm getting these event logs at the VolumeBinding step. Any help here please?
Rancher: 2.4.2

Events:
Type     Reason            Age   From               Message
Warning  FailedScheduling        default-scheduler  error while running “VolumeBinding” filter plugin for pod “prometheus-cluster-monitoring-0”: pod has unbound immediate PersistentVolumeClaims
Warning  FailedScheduling        default-scheduler  error while running “VolumeBinding” filter plugin for pod “prometheus-cluster-monitoring-0”: pod has unbound immediate PersistentVolumeClaims
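
For context, “pod has unbound immediate PersistentVolumeClaims” means no PersistentVolume was bound to the claim; with dynamic provisioning that usually means the provisioner pod isn’t running, or the PVC’s storageClassName doesn’t match the class. Note also that the nfs-client-provisioner takes the NFS server IP and export path on its own deployment (the NFS_SERVER and NFS_PATH environment variables), not in the StorageClass mountOptions. A minimal StorageClass sketch, assuming the upstream default provisioner name fuseim.pri/ifs (yours may differ; check the provisioner deployment) and an illustrative class name:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client                  # illustrative name
provisioner: fuseim.pri/ifs         # must match PROVISIONER_NAME on the provisioner deployment
parameters:
  archiveOnDelete: "false"
mountOptions:
  - nfsvers=4.1                     # NFS mount options only; the server IP does not go here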

@slefeuvre I also had trouble with the nfs-client-provisioner and MySQL, and ended up switching my datastores to Postgres (where I could).
One of the things I did was verify that the deployments worked correctly without the nfs-client-provisioner, by mounting the NFS exports on every worker node and configuring the pods to use a specific NFS-mounted directory as a host volume (/imports/mysql/data); see the sketch below.
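
A minimal sketch of that test setup, assuming the NFS export is already mounted at /imports/mysql/data on every worker node (the path comes from the post above; the deployment name and root password are illustrative placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-hostpath-test             # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-hostpath-test
  template:
    metadata:
      labels:
        app: mysql-hostpath-test
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme             # test-only credential
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      volumes:
        - name: data
          hostPath:
            path: /imports/mysql/data     # the NFS export mounted on every worker node
            type: Directory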

Also make sure your deployment re-creates the pod instead of doing a rolling update, otherwise you’ll have two mysqld processes writing to the same volume… (see the snippet after these links).
See: https://dev.mysql.com/doc/refman/8.0/en/disk-issues.html#disk-issues-nfs
and https://kb.netapp.com/app/answers/answer_view/a_id/1004952
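
For that point, the relevant fragment of the Deployment spec looks like this (it drops into the test Deployment sketched above, or any MySQL Deployment; by default Deployments use RollingUpdate, which briefly runs the old and new pods side by side against the same volume):

spec:
  replicas: 1
  strategy:
    type: Recreate        # terminate the old pod before the new one starts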

@maurya-m try:

storageClass: (your thing)
accessMode: ReadWriteMany
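
If the chart values don’t expose those settings, the equivalent PersistentVolumeClaim would look something like this (the claim name and size are illustrative; storageClassName must match the class you created in the Rancher UI, here assumed to be nfs-client):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data              # illustrative name
spec:
  storageClassName: nfs-client       # assumed class name; use yours
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi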

Hello @Azulinho,

For me, the error also occurred with MongoDB. I solved my problem by reinstalling the platform with:

  • kubernetes v1.16.3
  • RKE v1.0.0
  • Helm v3.0.0

This solved my problem.
I don’t know what the origin of this issue was.

The installation that had the problem was:

  • Kubernetes v1.17.4
  • RKE v1.6.0
  • Helm 2.16.15 (I took an older version)

Kind Regards,

@slefeuvre so far I have only had problems with MongoDB and NFS; I get corruption every few weeks. It’s not worth it.
Use a different PV type.

@Azulinho
The objective is to have an NFS shared volume so that our customers can deploy their workloads and StatefulSets with the NFS StorageClass without problems and without needing to know anything about the storage backend.
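
For reference, a customer StatefulSet would consume such a class through volumeClaimTemplates, roughly like this (all names and the image are illustrative; the class name is assumed to be nfs-client):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: customer-app                   # illustrative
spec:
  serviceName: customer-app
  replicas: 2
  selector:
    matchLabels:
      app: customer-app
  template:
    metadata:
      labels:
        app: customer-app
    spec:
      containers:
        - name: app
          image: nginx:1.19            # placeholder workload
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: nfs-client   # assumed NFS StorageClass
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 5Gi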

I haven’t had any problems since I reinstalled the platform with the older versions described in my previous message.

Regards,

Hi,

I’ve got nfs-client-provisioner 1.2.8 working fine with an NFS server, running MySQL (InnoDB), MongoDB, and MariaDB. All databases are deployed with multiple pod instances for HA.

K8s v1.13.5