Scenario #458
Scaling & Load
Kubernetes v1.24, IBM Cloud
Memory Pressure Causing Slow Pod Scaling
Pod scaling was delayed due to memory pressure in the cluster, causing performance bottlenecks.
What Happened
New pods were scheduled slowly during periods of high memory usage because the existing nodes were under memory pressure.
Diagnosis Steps
1. Checked node metrics and found significant memory pressure on the nodes, which was delaying pod scheduling.
2. Found that existing pods had already been allocated most of the nodes' memory, leaving too little schedulable capacity for new pods.
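The checks above can be sketched with kubectl (illustrative only; `worker-node-1` is a placeholder node name, and `kubectl top` requires metrics-server to be installed):

```shell
# Inspect node conditions; a node under memory pressure reports
# MemoryPressure=True, and the scheduler stops placing new pods on it.
kubectl describe nodes | grep -A5 "Conditions:"

# Compare actual memory usage against node capacity.
kubectl top nodes

# See how much memory is already requested on a specific node;
# requests near 100% of allocatable memory block new pods.
kubectl describe node worker-node-1 | grep -A8 "Allocated resources:"
```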
Root Cause
High memory pressure on nodes, causing delays in pod scaling.
Fix/Workaround
• Increased the memory available on nodes to alleviate pressure.
• Set resource requests and limits more conservatively so workloads reserved only the memory they actually needed.
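A conservative request/limit spec might look like the following sketch (workload name and values are illustrative, not taken from the incident):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app            # hypothetical workload name
spec:
  containers:
  - name: web-app
    image: nginx:1.25
    resources:
      requests:
        memory: "256Mi"    # what the scheduler reserves on the node
        cpu: "250m"
      limits:
        memory: "512Mi"    # the container is OOM-killed above this
        cpu: "500m"
```

Keeping requests close to real usage leaves more schedulable headroom on each node, so new pods are placed faster during scale-out.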
Lessons Learned
Node memory usage must be managed carefully during scaling events to avoid delays.
How to Avoid
1. Monitor node memory usage and avoid over-allocation of resources.
2. Use memory-based autoscaling to ensure adequate resources are available during traffic spikes.
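Memory-based autoscaling can be expressed with a HorizontalPodAutoscaler using the `autoscaling/v2` API, which is stable in Kubernetes v1.24 (names and target values below are illustrative assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app          # hypothetical target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average memory use exceeds 70% of requests
```

Utilization targets are computed against the pods' memory requests, which is another reason to keep requests accurate: inflated requests make the HPA scale out later than it should.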