Scenario #183
Networking
K8s v1.21, OpenShift
High Latency in Pod-to-Node Communication Due to Overlay Network
High latency was observed in pod-to-node communication due to network overhead introduced by the overlay network.
What Happened
The cluster was using Flannel as the CNI plugin, and network latency increased as the overlay network was unable to efficiently handle the traffic between pods and nodes.
Diagnosis Steps
1. Used kubectl exec to measure network latency between pods and nodes.
2. Analyzed the network traffic and identified high latency caused by the overlay network's encapsulation.
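The first step can be sketched without ICMP ping, which many minimal pod images lack. A small hypothetical helper (not from the scenario), run inside a pod via kubectl exec, times TCP connects to a node IP and port as a rough proxy for round-trip latency:

```python
import socket
import statistics
import sys
import time

def tcp_latency_ms(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds -- a rough RTT proxy
    when ping is unavailable inside the pod image."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # Each connect performs a fresh TCP handshake across the overlay.
        with socket.create_connection((host, port), timeout=2):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

if __name__ == "__main__" and len(sys.argv) >= 3:
    host, port = sys.argv[1], int(sys.argv[2])
    print(f"median connect latency to {host}:{port}: "
          f"{tcp_latency_ms(host, port):.2f} ms")
```

Invoked, for example, as `kubectl exec <pod> -- python3 latency.py <node-ip> 10250` (pod name and target port are placeholders); comparing results against a node-to-node baseline isolates the overlay's contribution.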
Root Cause
The Flannel overlay network introduced additional overhead, which caused latency in pod-to-node communication.
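The overhead is concrete: Flannel's default VXLAN backend wraps each inner Ethernet frame in outer IP, UDP, and VXLAN headers, shrinking the payload that fits in one physical frame. A back-of-the-envelope calculation (the header sizes are standard; the 1500-byte NIC MTU is an assumption about this cluster):

```python
# Per-packet bytes added by VXLAN encapsulation, relative to the NIC MTU.
INNER_ETH = 14  # encapsulated inner Ethernet header (carried as payload)
OUTER_IP = 20   # outer IPv4 header, no options
OUTER_UDP = 8   # outer UDP header
VXLAN_HDR = 8   # VXLAN header

overhead = INNER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR

physical_mtu = 1500                  # assumed node NIC MTU
pod_mtu = physical_mtu - overhead    # what remains for pod traffic

print(f"VXLAN overhead: {overhead} bytes, effective pod MTU: {pod_mtu}")
```

Every pod-to-node packet pays this encapsulation tax, and a mismatched MTU additionally forces fragmentation, which is consistent with the latency observed here.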
Fix/Workaround
• Switched to a different CNI plugin (Calico) that offered better performance for the network topology.
• Retested pod-to-node communication after switching CNI plugins.
Lessons Learned
Choose the right CNI plugin based on network performance needs, especially in high-throughput environments.
How to Avoid
1. Perform a performance evaluation of different CNI plugins during cluster planning.
2. Monitor network performance regularly and switch plugins if necessary.
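The "monitor regularly" step can be automated with a simple threshold check over recorded latency samples. A minimal sketch, assuming a hypothetical 5 ms p95 latency budget (the budget is not from the scenario and should reflect your own SLO):

```python
def latency_alert(samples_ms: list[float], p95_budget_ms: float = 5.0) -> bool:
    """Return True when the nearest-rank p95 of the latency samples
    exceeds the budget, signaling that the CNI should be re-evaluated."""
    if not samples_ms:
        return False
    ordered = sorted(samples_ms)
    # Nearest-rank method: the sample at rank ceil(0.95 * n).
    idx = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[idx] > p95_budget_ms
```

Feeding this with samples gathered by a latency probe (such as the kubectl exec measurement above) on a cron-like schedule gives an early signal before users notice degradation.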