PVC Stuck in Terminating State
The affected PersistentVolume, as shown by kubectl get pv:
pvc-52e24191-895e-45c1-8b22-084fa0394321 2Gi RWO Delete Terminating default/storage-prometheus-alertmanager-0 default <unset> 2y13d
Check PVC and PV Status:
kubectl get pvc storage-prometheus-alertmanager-0 -n default -o yaml
kubectl get pv pvc-52e24191-895e-45c1-8b22-084fa0394321 -o yaml
Look for the following (a jsonpath shortcut for pulling these fields is sketched after the list):
- Finalizers section. Usually something like kubernetes.io/pvc-protection or external-provisioner.volume.kubernetes.io/finalizer.
- DeletionTimestamp field. Indicates it's waiting for cleanup.
- ClaimRef in the PV — make sure it matches the PVC namespace/name.
- StorageClass provisioner.
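A quick way to pull just these fields without scanning the full YAML (a jsonpath sketch, using the same object names as above):
# PVC: finalizers and deletion timestamp
kubectl get pvc storage-prometheus-alertmanager-0 -n default \
  -o jsonpath='{.metadata.finalizers}{"\n"}{.metadata.deletionTimestamp}{"\n"}'
# PV: claimRef (should match the PVC namespace/name) and storage class
kubectl get pv pvc-52e24191-895e-45c1-8b22-084fa0394321 \
  -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name}{"\n"}{.spec.storageClassName}{"\n"}'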
If you see something like this:
metadata:
finalizers:
- kubernetes.io/pvc-protection
Then the kubernetes.io/pvc-protection finalizer is blocking deletion because the PVC is still considered in use by a Pod or a volume attachment.
Check if any Pod still references the PVC (a PVC can only be mounted by Pods in its own namespace; kubectl describe lists them under Used By):
kubectl describe pvc storage-prometheus-alertmanager-0 -n default | grep -A 2 "Used By"
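If you prefer to scan every namespace anyway (for example, to rule out anything else referencing the same claim name), a jq-based sketch, assuming jq is installed:
# List every Pod (all namespaces) that mounts a PVC with this claim name
kubectl get pods -A -o json | jq -r '
  .items[]
  | select(.spec.volumes[]?.persistentVolumeClaim.claimName == "storage-prometheus-alertmanager-0")
  | .metadata.namespace + "/" + .metadata.name'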
Inspect the PV:
kubectl get pv pvc-52e24191-895e-45c1-8b22-084fa0394321 -o yaml
Check that the PersistentVolume (PV) is in the Bound phase and has two finalizers:
finalizers:
- kubernetes.io/pv-protection
- external-provisioner.volume.kubernetes.io/finalizer
And also has a deletionTimestamp:
deletionTimestamp: "2025-02-14T08:04:44Z"
That means Kubernetes already tried to delete it back in February, but couldn’t because one or more finalizers never completed.
Understanding the Blockers - Azure
external-provisioner.volume.kubernetes.io/finalizer
This is from the CSI external-provisioner (disk.csi.azure.com). It tells Kubernetes: "Don’t delete this PV until the Azure Disk CSI driver confirms cleanup."
This usually gets stuck when:
- The Azure Disk CSI Controller that created this PV is no longer running (a check for this is sketched after this list).
- The underlying managed disk was deleted manually.
- The identity (service principal or managed identity) used by the CSI driver lost permissions.
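To check the first point, the following is a sketch; the labels assume the upstream Azure Disk CSI driver layout, and on managed AKS the controller may run on the control plane and not be visible as a Pod at all:
# Controller and node components of the Azure Disk CSI driver (labels vary by installation)
kubectl get pods -n kube-system -l app=csi-azuredisk-controller
kubectl get pods -n kube-system -l app=csi-azuredisk-node
# Recent logs from the external-provisioner sidecar, if the controller Pod is visible
kubectl logs -n kube-system -l app=csi-azuredisk-controller -c csi-provisioner --tail=50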
kubernetes.io/pv-protection
This prevents deletion if the PV is still bound to a PVC — which it is. Since the PVC is also stuck (Terminating with pvc-protection), you have a mutual hold:
- The PVC is stuck because the PV has not been released.
- The PV is stuck because the CSI driver never finalized cleanup.
Verify Underlying Azure Disk Exists
az disk show --ids /subscriptions/0ffcc405-cfc7-407e-bfe5-4b66912442ed/resourceGroups/mc_farel_dev_aks_cluster_germanywestcentral/providers/Microsoft.Compute/disks/pvc-52e24191-895e-45c1-8b22-084fa0394321
If this returns an error or “not found”, the disk is already gone — safe to clean up Kubernetes resources.
If it exists, confirm it’s not attached to any VM:
az disk show --ids /subscriptions/0ffcc405-cfc7-407e-bfe5-4b66912442ed/resourceGroups/mc_farel_dev_aks_cluster_germanywestcentral/providers/Microsoft.Compute/disks/pvc-52e24191-895e-45c1-8b22-084fa0394321 --query '[name, managedBy]'
If managedBy is null, it’s detached — cleanup is safe. If attached to a VM, detach it first.
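A detach sketch; <node-resource-group> and <vm-name> are placeholders taken from the managedBy value (on AKS, prefer letting the CSI driver and VolumeAttachment cleanup handle the detach where possible):
# Detach the managed disk from the VM it is currently attached to
az vm disk detach \
  --resource-group <node-resource-group> \
  --vm-name <vm-name> \
  --name pvc-52e24191-895e-45c1-8b22-084fa0394321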
Manual Cleanup Procedure (Safe Sequence)
Remove finalizers from PV:
kubectl patch pv pvc-52e24191-895e-45c1-8b22-084fa0394321 \
-p '{"metadata":{"finalizers":null}}' --type=merge
This clears both the pv-protection and the CSI external-provisioner finalizers. Because the PV already carries a deletionTimestamp, Kubernetes will then delete it immediately.
Once the PV is gone, clear the PVC’s finalizer as well:
kubectl patch pvc storage-prometheus-alertmanager-0 -n default \
-p '{"metadata":{"finalizers":null}}' --type=merge
The PVC should now be deleted as well.
Check for VolumeAttachment remnants
The CSI driver may have left stale VolumeAttachment objects behind (though unlikely in this case):
kubectl get volumeattachment | grep 52e24191 || true
If you see any — and the PV/PVC are gone — delete them:
kubectl delete volumeattachment <name>
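If the delete hangs, the VolumeAttachment usually carries its own CSI finalizer; the same patch pattern used for the PV and PVC applies (a sketch, with <name> as above):
# Inspect which node and PV the attachment references, and its finalizers
kubectl get volumeattachment <name> \
  -o jsonpath='{.spec.nodeName}{"\n"}{.spec.source.persistentVolumeName}{"\n"}{.metadata.finalizers}{"\n"}'
# Last resort: clear the finalizers so the object can be garbage-collected
kubectl patch volumeattachment <name> -p '{"metadata":{"finalizers":null}}' --type=merge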
After cleanup, verify the PV and PVC are gone:
kubectl get pv | grep pvc-52e24191
kubectl get pvc storage-prometheus-alertmanager-0 -n default
Verify no Azure disk remains:
az disk list --query "[?contains(name, '52e24191')].{name:name, managedBy:managedBy}"
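If a detached, orphaned disk does still show up here and you have confirmed the data is no longer needed, it can be removed with the Azure CLI (destructive; double-check the disk ID first, the placeholders below are deliberate):
# Destructive: permanently deletes the managed disk
az disk delete \
  --ids /subscriptions/<subscription-id>/resourceGroups/<node-resource-group>/providers/Microsoft.Compute/disks/pvc-52e24191-895e-45c1-8b22-084fa0394321 \
  --yes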