Scenario #336
Storage
Kubernetes v1.25, NFS

Persistent Volume Bound But Not Mounted

The pod entered the Running state, but application data was missing because the PV was bound but never actually mounted.

What Happened

The NFS server was unreachable during pod start. The pod started anyway, but the mount failed silently due to the default NFS retry behavior.
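A quick way to confirm this condition from the node is to check both the NFS export and the node's mount table. The server address below is a placeholder, and `showmount` requires the NFS client tools to be installed:

```shell
# Hypothetical NFS server address; substitute your own.
NFS_SERVER=nfs-server.example.com

# Can the node reach the NFS export at all?
showmount -e "$NFS_SERVER" || echo "NFS server unreachable"

# Is the volume actually present in the node's mount table?
mount | grep -i nfs || echo "no NFS mounts present"
```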

Diagnosis Steps
  1. `mount` output lacked an NFS entry.
  2. Pod logs: "No such file or directory" errors.
  3. CSI logs showed a silent NFS timeout.
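The steps above map to concrete commands. The pod name `myapp`, the CSI daemonset name, and its container name are illustrative; adjust for your NFS CSI driver:

```shell
# 1. Check the node's mount table for the NFS entry (run on the node):
mount | grep -i nfs

# 2. Check the pod's logs for file-access errors ("myapp" is a placeholder):
kubectl logs myapp | grep "No such file or directory"

# 3. Inspect the CSI node plugin's logs for mount timeouts
#    (daemonset/container names are assumptions for csi-driver-nfs):
kubectl -n kube-system logs ds/csi-nfs-node -c nfs | grep -i timeout
```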
Root Cause

The CSI driver did not fail pod startup when the mount failed, so the pod ran without the NFS-backed data.

Fix/Workaround
• Added mountOptions: [hard,intr] to the NFS StorageClass.
• Set a pod readiness probe to check for file existence on the volume.
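Together, the two fixes might look like the manifests below. The StorageClass name, provisioner, pod spec, and the `/data/.mounted` sentinel file are all illustrative assumptions:

```yaml
# StorageClass with hard-mount semantics so a failed mount keeps retrying
# instead of silently falling through. Provisioner is an example (csi-driver-nfs).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-hard
provisioner: nfs.csi.k8s.io
mountOptions:
  - hard
  - intr
---
# Readiness probe that checks a sentinel file known to exist on the export;
# the pod is only marked Ready once the NFS data is actually visible.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
      readinessProbe:
        exec:
          command: ["test", "-f", "/data/.mounted"]
        initialDelaySeconds: 5
        periodSeconds: 10
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myapp-data
```

Note that on modern Linux kernels the `intr` mount option is accepted but ignored; `hard` is the option that actually changes the retry behavior.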
Lessons Learned

Mount failures don’t always stop pod startup.

How to Avoid
  1. Validate mounts via init containers or probes.
  2. Monitor CSI driver logs on pod lifecycle events.
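The init-container approach can be sketched like this. The mount path and the `/data/.mounted` marker file are assumptions; the init container blocks the main container from starting until the NFS data is actually visible:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  initContainers:
    # Waits until the sentinel file on the NFS export is readable,
    # so the app never starts against an empty directory.
    - name: wait-for-mount
      image: busybox:1.36
      command:
        - sh
        - -c
        - |
          until [ -f /data/.mounted ]; do
            echo "waiting for NFS mount..."
            sleep 2
          done
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myapp-data
```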