Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share

kubernetes - Mounting a SharedVolumeClaim spawns two Pods

I want to share files between two containers in Kubernetes. Therefore I created a shared PersistentVolumeClaim, which I plan to mount in both containers. But for a start, I tried to mount it in one container only. Whenever I deploy this, two Pods get created.

$ kubectl get pods | grep frontend
dev-frontend-84ca5d6dd6-8n8lf               0/1     ContainerCreating   0          18h
dev-frontend-b4959fb97-f9mgw                1/1     Running             0          18h

The first Pod is stuck in ContainerCreating because it cannot attach the volume (Volume is already attached by pod fresh-namespace/dev-frontend-b4959fb97-f9mgw). But when I remove the lines that mount the volume into the container and deploy again, only one Pod is created. This also happens if I start in a completely new namespace.

$ kubectl get pods | grep frontend
dev-frontend-587bc7f359-7ozsx               1/1     Running   0          5m

Why does the mount spawn a second Pod when there should be only one?

Here are the relevant parts of the deployment files:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-file-pvc
  labels:
    app: frontend

spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi

---

apiVersion: apps/v1
kind: Deployment

metadata:
  name: dev-frontend
  labels:
    app: frontend
    environment: dev

spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: frontend
      environment: dev

  template:
    metadata:
      labels:
        app: frontend
        environment: dev

    spec:
      volumes:
        - name: shared-files
          persistentVolumeClaim:
            claimName: shared-file-pvc

      containers:
        - name: frontend
          image: some/registry/frontend:version
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 100Mi
              cpu: 250m
            limits:
              memory: 300Mi
              cpu: 750m
          volumeMounts:
            - name: shared-files              # works if I remove
              mountPath: /data/shared-files   # these two lines
---



Can anybody tell me what I am missing here?

Thanks in advance!



1 Reply


If you check the exact names of your deployed Pods, you can see that they are managed by two different ReplicaSets (dev-frontend-84ca5d6dd6 and dev-frontend-b4959fb97):

$ kubectl get pods | grep frontend
dev-frontend-84ca5d6dd6-8n8lf               0/1     ContainerCreating   0          18h
dev-frontend-b4959fb97-f9mgw                1/1     Running             0          18h 

Your Deployment uses the RollingUpdate strategy by default. A rolling update waits for the new Pods to become Ready BEFORE scaling down the old ones.

I assume that you first deployed dev-frontend-b4959fb97-f9mgw (the old version of the app) and then deployed dev-frontend-84ca5d6dd6-8n8lf (a new version of the same app).

The newly created Pod (dev-frontend-84ca5d6dd6-8n8lf) cannot attach the volume (the old Pod still uses it), so it will never reach the Ready state and the rollout never completes.
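If the underlying storage only supports ReadWriteOnce, one common workaround (an addition here, not part of the original answer) is to switch the Deployment to the Recreate strategy, so the old Pod is deleted and releases the volume before the new Pod starts. A sketch:

```yaml
# Sketch only: Recreate tears down the old Pod before starting the new
# one, so a ReadWriteOnce volume is released before reattachment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-frontend
spec:
  strategy:
    type: Recreate   # default is RollingUpdate
  # ... rest of the spec (selector, template) unchanged
```

The trade-off is brief downtime during each rollout, since no old Pod keeps serving while the new one starts.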

Check how many dev-frontend-* ReplicaSets you have in this particular namespace and which one is currently managed by your Deployment:

$ kubectl get rs | grep "dev-frontend"

$ kubectl describe deployment dev-frontend | grep NewReplicaSet

These two commands should give you an idea of which ReplicaSet is new and which is old.

I am not sure which StorageClass you are using, but you can check it using:

$ kubectl describe pvc shared-file-pvc | grep "StorageClass"

Additionally, check whether the created PV really supports the RWX (ReadWriteMany) access mode:

$ kubectl describe pv <PV_name> | grep "Access Modes"

The main concept is that a PersistentVolume and a PersistentVolumeClaim are mapped one-to-one. This is described in the Persistent Volumes - Binding documentation.

However, if your StorageClass is configured correctly, it is possible for two or more containers to use the same PersistentVolumeClaim, attached to the same mount on the back end.

The most common examples are NFS and CephFS.
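To illustrate, a hypothetical NFS-backed PersistentVolume that genuinely supports ReadWriteMany might look like this (the server address and export path are illustrative assumptions, not from the original post):

```yaml
# Hypothetical NFS-backed PV; server and path are made-up examples.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-file-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany   # NFS can be mounted read-write by many nodes
  nfs:
    server: nfs.example.internal
    path: /exports/shared
```

Note also that if both containers run in the same Pod, they can share any volume type (even an emptyDir) by mounting the same entry from spec.volumes; ReadWriteMany is only required when the containers live in separate Pods that may be scheduled on different nodes.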

