I have a Kubernetes cluster created with Rancher that consists of 4 servers:
- Server1: Hosts the Rancher and kubectl containers.
- Server2: Node with the etcd, control plane and worker roles.
- Server3: Node with the worker role.
- Server4: Node with the worker role.
Additionally, I have an NFS server that stores the persistent volumes.
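For reference, the path used by the persistent volume is exported from that NFS server roughly as follows (a minimal sketch; the export options and the checks below are assumptions about a typical NFSv4 setup, not our exact configuration):

# /etc/exports on the NFS server (options here are assumed)
/opt/k8s-data/postgres  *(rw,sync,no_subtree_check)

# reload the export table and confirm the path is visible from the worker nodes
exportfs -ra
showmount -e nfs-server-ip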
We created the following manifests to bring up a PostgreSQL database with several replicas, distributed across the worker nodes. (The postgres-config ConfigMap referenced by the StatefulSet is not included here; see the sketch after the manifests.)
- Persistent Volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
    app: postgres
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  mountOptions:
    - nfsvers=4.2
  nfs:
    path: /opt/k8s-data/postgres
    server: nfs-server-ip
- Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  namespace: default
  labels:
    app: postgres
spec:
  volumeName: postgres-pv
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
- Stateful Set
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: default
spec:
  selector:
    matchLabels:
      app: postgres
  serviceName: "postgres-set"
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14.6-bullseye
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - name: postgres-pv
              mountPath: /var/lib/postgresql/data/pgdata
      volumes:
        - name: postgres-pv
          persistentVolumeClaim:
            claimName: postgres-pv-claim
- Service
apiVersion: v1
kind: Service
metadata:
  name: postgres-serv
  namespace: default
  labels:
    app: postgres
spec:
  ports:
    - name: postgres
      port: 5432
      nodePort: 30564
  type: NodePort
  selector:
    app: postgres
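The postgres-config ConfigMap referenced by the StatefulSet's envFrom is not shown above. A minimal sketch of how it can be created, assuming the standard environment variables of the official postgres image and a PGDATA that matches the mountPath (database name and password are placeholders):

# ConfigMap consumed by the StatefulSet via envFrom (values are placeholders)
kubectl create configmap postgres-config \
  --from-literal=POSTGRES_DB=mydb \
  --from-literal=POSTGRES_USER=postgres \
  --from-literal=POSTGRES_PASSWORD=changeme \
  --from-literal=PGDATA=/var/lib/postgresql/data/pgdata

In a production setup the password would normally go in a Secret rather than a ConfigMap; it is shown here only to match the envFrom reference in the StatefulSet.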
It is necessary that data can be read and written from any node and that it is available to new replicas of the pod.
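Access from any node goes through the NodePort service, for example (the node IP and credentials are placeholders):

# connect to the database through any node's IP on the NodePort
psql -h <node-ip> -p 30564 -U postgres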
Initially a single pod was brought up and, once the database had been created, the additional replicas were started. At that point we began inserting data from each of the replicated pods to evaluate the behavior, and we found that the data is not reflected in the other pods. That is, if I connect to pod1 and insert a record into a table, it is not reflected in pod2; inserting the record from pod2 instead gives the same result. If pod3 is brought up, it reflects the data from pod1 or the data from pod2, so we do not know which pod has the authority to write to the shared volume.
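Concretely, the test looked like this (with the StatefulSet above the pods are named postgres-0, postgres-1, ...; the table and user are illustrative):

# insert a row from the first replica
kubectl exec postgres-0 -- psql -U postgres -c "INSERT INTO test_table VALUES (1, 'written from postgres-0');"

# query the same table from the second replica: the row inserted above does not appear
kubectl exec postgres-1 -- psql -U postgres -c "SELECT * FROM test_table;"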
Another observation: if we bring the cluster down and then bring up only a single pod, it may contain data entered from any one of the pods, but it never shows ALL the data that was inserted across the pods.
Our intention is to have several replicas distributed across the nodes that can read and write data in the database, with that data always available to the existing pods and to any pods brought up later.