Scenario #14
Cluster Management
K8s v1.24, AWS EKS, containerd

Uncontrolled Logs Filled Disk on All Nodes

Application pods generated excessive logs, filling up node /var/log.

What Happened

A debug flag was accidentally left enabled in a backend pod, causing it to log hundreds of lines per second. The journald and container log files quickly consumed all available space under /var/log on the nodes.

Diagnosis Steps
  1. df -h showed /var/log was full.
  2. Checked /var/log/containers/ and found massive log files for a single pod.
  3. Used kubectl logs to confirm the excessive output (see the commands sketched below).
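
A minimal triage sketch for this kind of incident (the pod name, namespace, and exact paths are placeholders):

```bash
# Confirm the log partition is full
df -h /var/log

# Find the pods writing the largest log files (containerd writes them under /var/log/pods)
sudo du -sh /var/log/pods/* | sort -rh | head

# Check journald usage as well
sudo du -sh /var/log/journal

# Confirm the suspect pod is emitting excessive output
kubectl logs <noisy-pod> -n <namespace> --tail=50
```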
Root Cause

A log-level misconfiguration in the backend Deployment (debug logging left enabled) caused explosive growth in log volume; a hypothetical example of the offending setting is sketched below.
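
This is a hypothetical illustration of the kind of misconfiguration involved, assuming the backend reads its log level from a LOG_LEVEL environment variable (the variable name, image, and labels are assumptions, not taken from the incident):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: registry.example.com/backend:1.0   # placeholder image
          env:
            - name: LOG_LEVEL
              value: "debug"   # accidentally left at debug; should be "info" in production
```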

Fix/Workaround
• Rotated and truncated the oversized logs.
• Restarted the container runtime after cleanup.
• Disabled debug logging on the offending Deployment (example commands sketched below).
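
An illustrative cleanup sequence, assuming shell access to the affected node (the log path and Deployment name are placeholders, and the LOG_LEVEL variable is an assumption about the application):

```bash
# Reclaim space without deleting the file the runtime still holds open
sudo truncate -s 0 /var/log/pods/<namespace>_<pod>_<uid>/<container>/0.log

# Trim journald down to a bounded size
sudo journalctl --vacuum-size=500M

# Restart the container runtime after cleanup
sudo systemctl restart containerd

# Disable debug logging on the offending workload
kubectl -n <namespace> set env deployment/<backend> LOG_LEVEL=info
```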
Lessons Learned

Log output must be bounded and rotated; a single noisy pod can otherwise fill node disks and disrupt every workload on those nodes.

How to Avoid
  1. Set log rotation policies for the container runtime (see the kubelet configuration sketch below).
  2. Enforce sane log levels via CI/CD validation.
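
With a CRI runtime such as containerd, the kubelet itself rotates container logs. Below is a sketch of the relevant KubeletConfiguration fields; containerLogMaxSize and containerLogMaxFiles are standard kubelet settings, and the values shown are illustrative:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate each container's log file once it reaches 50 MiB
containerLogMaxSize: 50Mi
# Keep at most 3 rotated files per container
containerLogMaxFiles: 3
```

On EKS these fields are typically supplied through the node group's kubelet configuration; the exact mechanism depends on the AMI and bootstrap method. Pairing rotation with a disk-usage alert on /var/log catches regressions that rotation alone cannot absorb.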