If you check the exact names of your deployed Pods, you can see that these Pods are managed by two different ReplicaSets (dev-frontend-84ca5d6dd6 and dev-frontend-b4959fb97):
$ kubectl get pods | grep frontend
dev-frontend-84ca5d6dd6-8n8lf 0/1 ContainerCreating 0 18h
dev-frontend-b4959fb97-f9mgw 1/1 Running 0 18h
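If you want to double-check which ReplicaSet owns a given Pod, you can also read its ownerReferences (the Pod name below is taken from your output):
$ kubectl get pod dev-frontend-84ca5d6dd6-8n8lf -o jsonpath='{.metadata.ownerReferences[0].name}'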
Your Deployment uses the RollingUpdate strategy by default. A rolling update waits for new Pods to become Ready BEFORE scaling down the previous Pods.
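You can confirm this in the Deployment spec; if you have not set anything explicitly, the defaults look roughly like this (a sketch, your actual values may differ):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # how many extra Pods may be created during the update
      maxUnavailable: 25%  # how many old Pods may be taken down at the same time
For comparison, type: Recreate terminates the old Pod before creating the new one, which also releases its volume first.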
I assume that first you deployed dev-frontend-b4959fb97-f9mgw (the old version of the app) and then you deployed dev-frontend-84ca5d6dd6-8n8lf (the new version of the same app).
The newly created Pod (dev-frontend-84ca5d6dd6-8n8lf) is not able to attach the volume (the old one is still using it), so it will never reach the Ready state.
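You should be able to see the failed volume attach/mount in that Pod's events, for example:
$ kubectl describe pod dev-frontend-84ca5d6dd6-8n8lf | grep -A 10 "Events"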
Check how many dev-frontend-* ReplicaSets you have in this particular namespace and which one is currently managed by your Deployment:
$ kubectl get rs | grep "dev-frontend"
$ kubectl describe deployment dev-frontend | grep NewReplicaSet
These two commands should give you an idea of which ReplicaSet is new and which one is old.
I am not sure which StorageClass you are using, but you can check it with:
$ kubectl describe pvc shared-file-pvc | grep "StorageClass"
Additionally, check whether the created PV really supports the RWX access mode:
$ kubectl describe pv <PV_name> | grep "Access Modes"
The main concept is that a PersistentVolume and a PersistentVolumeClaim are mapped one-to-one. This is described in the Persistent Volumes - Binding documentation. However, if your StorageClass is configured correctly, it is possible to have two or more containers using the same PersistentVolumeClaim that is attached to the same mount on the back end.
The most common examples are NFS and CephFS.
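For example, a PVC requesting ReadWriteMany could look roughly like this (a sketch; storageClassName: nfs-client is only an assumed example of an NFS-backed class, adjust it to whatever your cluster provides):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-file-pvc
spec:
  accessModes:
    - ReadWriteMany              # allows the volume to be mounted read-write by many Pods
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-client   # assumed NFS-backed StorageClass
If the backing storage really supports RWX, both the old and the new Pod can mount this claim at the same time.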