containers - Delete Kubernetes persistent volume from StatefulSet after scale down

I scaled my StatefulSet up to 4 replicas, and after scaling back down to 1 I saw that I still have 4 persistent volumes, with indices 0 to 3.

I also saw that all of them have the status Bound. I guess this is because they belong to a StatefulSet, which doesn't delete the volumes after a scale down.

I tried to manually delete one of them (the one with index 2), because I was sure that would release my volume, so I ran:

kubectl delete persistentvolume <volume>

Well, that didn't help; it just left the volume stuck in a Terminating state forever... :/

I have no idea how to remove this and all the other unused volumes now.

Here is the volume configuration in the StatefulSet YAML:

  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "default"
        resources:
          requests:
            storage: 7Gi
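For reference, the reclaim policy on the dynamically provisioned volumes shown below comes from the "default" storage class named in this template; it can be checked with the following command (a generic check, nothing specific to this setup):

kubectl get storageclass default -o jsonpath='{.reclaimPolicy}'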

If I run

kubectl get pvc --all-namespaces

I get

NAMESPACE   NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     data-0   Bound    pvc-23af1aec-e385-4778-b0b0-56f1d1dfdfee   7Gi        RWO            default        4h5m
default     data-1   Bound    pvc-34625107-1352-4715-b12c-2fc6ff22ed08   7Gi        RWO            default        4h4m
default     data-2   Bound    pvc-15dbdb53-d951-465d-b9c3-ebadfcc3c725   7Gi        RWO            default        4h3m
default     data-3   Bound    pvc-d317657f-194a-4f4f-8c5f-dff2843b693f   7Gi        RWO            default        4h3m
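As a sanity check, kubectl describe shows which pod (if any) still mounts a claim, which makes it easy to tell the claim in use from the leftovers (the exact field name varies between kubectl versions):

kubectl describe pvc data-0 --namespace default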

If I run

kubectl get --no-headers persistentvolumes

I get this:

pvc-15dbdb53-d951-465d-b9c3-ebadfcc3c725   7Gi   RWO   Delete   Terminating   default/data-2            default         4h4m
pvc-23af1aec-e385-4778-b0b0-56f1d1dfdfee   7Gi   RWO   Delete   Bound         default/data-0            default         4h6m
pvc-34625107-1352-4715-b12c-2fc6ff22ed08   7Gi   RWO   Delete   Bound         default/data-1            default         4h5m
pvc-d317657f-194a-4f4f-8c5f-dff2843b693f   7Gi   RWO   Delete   Bound         default/data-3            default         4h3m
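For what it's worth, a PersistentVolume that hangs in Terminating, like data-2's volume above, usually still carries the kubernetes.io/pv-protection finalizer, which blocks deletion as long as the bound claim exists; that can be verified with:

kubectl get persistentvolume pvc-15dbdb53-d951-465d-b9c3-ebadfcc3c725 -o jsonpath='{.metadata.finalizers}'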


1 Reply

In a StatefulSet, Kubernetes does not automatically delete the PV or PVC when a pod terminates; this is deliberate, to avoid further complications and to keep your data safe. That is why, after scaling down, we need to clean them up manually. Deleting the PVC after the pods have terminated will trigger deletion of the respective PersistentVolume, depending on the storage class and reclaim policy.

Please delete the PersistentVolumeClaim (PVC) instead of the PersistentVolume. If you delete the PVC, it will automatically delete the respective PV, and the PV that is already stuck in Terminating will finish deleting once its claim is gone.
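If the data on a disk should survive the claim's deletion, the volume's reclaim policy can first be switched to Retain (optional; with the Delete policy shown in the listing above, the underlying disk is removed together with the PV). For example, for data-3's volume:

kubectl patch pv pvc-d317657f-194a-4f4f-8c5f-dff2843b693f -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'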

Just run this command in your shell:

kubectl delete pvc data-3
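Since the StatefulSet was scaled down to 1, the claims for ordinals 1 to 3 are all unused and can be deleted together; the volume that is stuck in Terminating should also finish deleting once data-2's claim is gone. Afterwards you can verify that only data-0's volume is left:

kubectl delete pvc data-1 data-2 data-3
kubectl get persistentvolumes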

REF

