Scenario #416
Scaling & Load
Kubernetes v1.23, on-prem
Kubernetes Downscaled During Rolling Update
Pods were prematurely scaled down during a rolling deployment.
What Happened
During a rolling update, the temporary drop in ready replicas skewed the metrics the Horizontal Pod Autoscaler (HPA) was acting on, and it scaled the Deployment down while it was still serving live traffic.
Diagnosis Steps
1. Observed a spike in 5xx errors during the update.
2. HPA decreased the replica count despite live traffic.
Root Cause
The Deployment's rolling update strategy interfered with the autoscaling logic: pods cycling through not-ready states distorted the metrics the HPA evaluated, so the two control loops worked against each other.
Fix/Workaround
• Tuned maxUnavailable and minReadySeconds on the Deployment so fewer pods were unavailable at any point in the rollout.
• Added a scale-down stabilization window to the HPA so transient metric dips during the rollout no longer triggered scale-down.
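A minimal sketch of the tuned rollout settings on the Deployment side. The Deployment name, image, and concrete values here are illustrative assumptions, not the ones from the incident:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical name for illustration
spec:
  replicas: 6
  minReadySeconds: 30  # a new pod must stay Ready 30s before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never dip below the desired replica count
      maxSurge: 25%       # roll forward by adding surge pods first
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0   # placeholder image
```

With maxUnavailable: 0 the rollout only removes an old pod after its replacement is available, which keeps the ready-replica count (and the HPA's per-pod metrics) stable during the update.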
Lessons Learned
HPA settings must be aligned with rolling deployment behavior; otherwise the rollout and the autoscaler can fight each other, shrinking capacity exactly when traffic is still live.
How to Avoid
1. Use behavior.scaleDown.stabilizationWindowSeconds so the HPA waits out short metric dips before scaling down.
2. Monitor scaling decisions (HPA events and status conditions) during rollouts.
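The stabilization window can be sketched on an autoscaling/v2 HPA (GA in v1.23). The target name, metric, and numbers below are assumed for illustration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web            # hypothetical name for illustration
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 12
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # act only on the highest recommendation seen in the last 5 min
      policies:
        - type: Percent
          value: 50          # remove at most 50% of current pods...
          periodSeconds: 60  # ...per minute, even after the window elapses
```

During a rollout, the stabilization window makes the HPA hold the highest replica recommendation from the recent past, so a brief utilization dip caused by pod churn cannot trigger an immediate scale-down.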