Scenario #10
Cluster Management
K8s v1.18, On-prem, Flannel CNI
Control Plane Unavailable After Flannel Misconfiguration
Misaligned pod CIDRs caused overlay misrouting, leaving the API server unreachable from the nodes.
What Happened
A new node was added with a pod CIDR different from the range Flannel expected. This broke pod-to-pod traffic and node-to-control-plane communication.
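The mismatch shows up when the podCIDR the control plane allocated to the node is compared with the network range Flannel was deployed with. A minimal way to see both, assuming the stock kube-flannel manifest (ConfigMap kube-flannel-cfg in kube-system) and a host that can still reach the API server:

```bash
# podCIDR assigned to each node by the control plane
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR

# Network range Flannel was configured with (net-conf.json in its ConfigMap)
kubectl -n kube-system get configmap kube-flannel-cfg \
  -o jsonpath='{.data.net-conf\.json}'
```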
Diagnosis Steps
1. kubectl timed out from the nodes.
2. Logs showed traffic being dropped by iptables.
3. Compared --pod-cidr on the kubelet with the Flannel config (see the commands sketched below).
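A sketch of these checks as run on the affected node. The exact location of the kubelet's --pod-cidr setting varies with how the node was provisioned (command-line flag vs. KubeletConfiguration file), and /run/flannel/subnet.env is where Flannel records the subnet it actually assigned:

```bash
# 1. Confirm the API server is unreachable from the node
kubectl --request-timeout=5s get nodes

# 2. Look for dropped traffic: non-zero packet counters on DROP rules
sudo iptables -L FORWARD -v -n

# 3. The CIDR the kubelet was started with (if passed as a flag)
ps aux | grep -o -- '--pod-cidr=[^ ]*'

# 4. The subnet Flannel actually assigned to this node
cat /run/flannel/subnet.env
```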
Root Cause
The pod CIDR assigned to the node was not consistent with the network range configured in Flannel.
Fix/Workaround
• Reconfigured the node with the correct pod CIDR range.
• Flushed stale iptables rules and restarted Flannel (see the recovery sketch below).
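A recovery sketch matching the workaround above, assuming the controller-manager allocates node CIDRs (--allocate-node-cidrs=true) and the stock Flannel DaemonSet name; <node-name> is a placeholder. Note that spec.podCIDR is immutable once set, so the node object has to be recreated rather than patched, and flushing iptables also removes kube-proxy rules until they are re-synced:

```bash
# From a control-plane host that still reaches the API server:
kubectl delete node <node-name>        # <node-name> is a placeholder

# On the affected node: clear stale rules and re-register the kubelet so a
# podCIDR is allocated from the cluster range.
sudo iptables -F && sudo iptables -t nat -F
sudo systemctl restart kubelet

# Back on the control plane: restart Flannel so it reprograms the overlay.
# (DaemonSet name varies by manifest version, e.g. kube-flannel-ds-amd64.)
kubectl -n kube-system rollout restart daemonset kube-flannel-ds
```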
Lessons Learned
CNI plugins require strict configuration consistency: a single node with a mismatched podCIDR can break cluster-wide networking, including access to the control plane.
How to Avoid
1. Enforce CIDR policy via admission control.
2. Validate podCIDR ranges before adding new nodes (a pre-join check is sketched below).
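Item 1 typically means a validating admission webhook on Node objects; for item 2, a minimal pre-join check might look like the sketch below. check-node-cidr.sh is a hypothetical helper name, and the script assumes the stock kube-flannel-cfg ConfigMap and that python3 is available for the CIDR containment math:

```bash
#!/usr/bin/env bash
# Hypothetical pre-join check: verify a candidate node's podCIDR is inside
# the Network range in Flannel's net-conf.json before letting it join.
set -euo pipefail

NODE_CIDR="$1"   # e.g. ./check-node-cidr.sh 10.244.5.0/24

NET_CONF=$(kubectl -n kube-system get configmap kube-flannel-cfg \
  -o jsonpath='{.data.net-conf\.json}')

python3 - "$NODE_CIDR" "$NET_CONF" <<'EOF'
import ipaddress, json, sys
node = ipaddress.ip_network(sys.argv[1])
net = ipaddress.ip_network(json.loads(sys.argv[2])["Network"])
if not node.subnet_of(net):
    sys.exit(f"ERROR: {node} is outside the Flannel network {net}")
print(f"OK: {node} is within {net}")
EOF
```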