Scenario #303
Storage
Kubernetes v1.23, GCE PD on GKE
Volume Mount Fails Due to Node Affinity Mismatch
A pod was scheduled on a node that couldn’t access the persistent disk due to a zone mismatch.
What Happened
A StatefulSet PVC was bound to a disk in us-central1-a, but the pod was scheduled in us-central1-b, causing the volume mount to fail (zonal GCE persistent disks can only attach to nodes in their own zone).
Diagnosis Steps
1. Described the pod: events showed MountVolume.MountDevice failed.
2. Described the PVC and PV: zone mismatch confirmed (the PV's node affinity is shown in the sketch after this list).
3. Reviewed scheduler decisions: the pod was placed with no awareness of the volume's zone.
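The mismatch is visible on the PV object itself: dynamically provisioned GCE PD volumes carry a node affinity that pins them to the zone where the disk lives. A trimmed sketch of what step 2 surfaces, with illustrative resource names:

```yaml
# Trimmed PersistentVolume for a zonal GCE PD; names and sizes are illustrative.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-statefulset-data-0        # illustrative name
  labels:
    topology.kubernetes.io/zone: us-central1-a
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: gke-pd-example            # illustrative disk name
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-central1-a       # the disk's zone; the pod landed in us-central1-b
```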
Root Cause
The PV was provisioned and bound in us-central1-a before the pod was scheduled (Immediate volume binding), so the scheduler placed the pod in us-central1-b without honoring the volume's zone constraint.
Fix/Workaround
• Added topology.kubernetes.io/zone node affinity to the pod template so it matches the PV's zone.
• Ensured StatefulSets use storage classes with volumeBindingMode: WaitForFirstConsumer (see the sketches below).
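A minimal sketch of the long-term fix, assuming the GCE PD CSI provisioner on GKE and illustrative names; with delayed binding, the disk is provisioned only after the pod is scheduled, so it is created in the pod's zone:

```yaml
# StorageClass with delayed binding: the PD is provisioned and bound only after
# the consuming pod is scheduled. Provisioner name assumes the GCE PD CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-wait-for-consumer
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer
```

For the already-bound PV, the interim workaround was node affinity on the pod template; a fragment that would sit under the StatefulSet's spec.template.spec:

```yaml
# Fragment of spec.template.spec: pin pods to the zone where the disk exists.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-central1-a
```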
Lessons Learned
Without delayed binding, a PV can be provisioned and bound in a zone that doesn’t match where its pod is later scheduled.
How to Avoid
1. Use WaitForFirstConsumer volume binding for dynamic provisioning.
2. Always define zone-aware topology constraints for storage classes and workloads (see the sketch below).
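Topology awareness can also be expressed on the StorageClass itself: allowedTopologies restricts where volumes may be provisioned, which, combined with WaitForFirstConsumer, keeps disks and pods in the same set of zones. A hedged sketch; the topology key and zone values are assumptions and may differ by provisioner (the GKE PD CSI driver publishes its own topology key):

```yaml
# StorageClass that both delays binding and restricts provisioning to known zones.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-zonal
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone   # assumption: key depends on the provisioner
        values:
          - us-central1-a
          - us-central1-b
```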