Scenario #400
Storage
Kubernetes v1.23, AWS EKS with CA
Cluster Autoscaler Deleted Nodes with Mounted Volumes
Cluster Autoscaler aggressively removed nodes with attached volumes, causing workload restarts.
What Happened
Nodes were deemed underutilized and deleted while volumes were still mounted.
Diagnosis Steps
1. Volumes detached mid-write, causing file corruption.
2. Events showed the node scale-down was triggered by the CA.
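The diagnosis above can be reproduced with a few kubectl queries; this is a sketch, and the grep patterns assume the default Cluster Autoscaler log phrasing and a CA deployment in kube-system:

```shell
# Look for scale-down events in the cluster event stream
kubectl get events --all-namespaces --sort-by=.lastTimestamp | grep -i scaledown

# Check the Cluster Autoscaler logs for the node-removal decision
kubectl -n kube-system logs deployment/cluster-autoscaler | grep -i "removing node"

# List which volumes are currently attached to which nodes
kubectl get volumeattachments -o wide
```

The VolumeAttachment listing is the quickest way to see whether a node slated for removal still has CSI volumes attached.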
Root Cause
No volume-aware protection in CA.
Fix/Workaround
• Enabled the --balance-similar-node-groups and --skip-nodes-with-local-storage flags on the Cluster Autoscaler.
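A minimal sketch of how these flags appear on the Cluster Autoscaler command line; the scale-down timing flags shown alongside them are illustrative additions, not part of the original fix:

```shell
# Cluster Autoscaler invocation (set as the container command in the
# cluster-autoscaler Deployment in kube-system)
./cluster-autoscaler \
  --cloud-provider=aws \
  --balance-similar-node-groups=true \
  --skip-nodes-with-local-storage=true \
  --scale-down-unneeded-time=10m
```

Note that --skip-nodes-with-local-storage protects nodes whose pods use emptyDir or hostPath volumes; pods backed by persistent volumes are better protected with PodDisruptionBudgets or the scale-down-disabled annotation.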
Lessons Learned
Cluster Autoscaler must be volume-aware.
How to Avoid
1. Configure CA to respect mounted volumes.
2. Tag volume-critical nodes as unschedulable before scale-down.
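Both avoidance steps can be applied per node; this is a sketch, and the node name is illustrative:

```shell
# Tell the Cluster Autoscaler to never scale down this node
kubectl annotate node ip-10-0-1-23.ec2.internal \
  cluster-autoscaler.kubernetes.io/scale-down-disabled=true

# Mark the node unschedulable ahead of a planned scale-down
kubectl cordon ip-10-0-1-23.ec2.internal
```

The scale-down-disabled annotation is honored by the Cluster Autoscaler itself, while cordoning only stops new pods from landing on the node.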