I was following along with a demo of Networking in Kubernetes where the presenter was showing how different containers (in their own pods) are able to communicate across a flat network with other pods.
As part of this, they pulled up the network interface using the ip a command.
(ip a and ip addr show replace the deprecated ifconfig command on CentOS 7 and newer versions of Ubuntu, among others.)
When I followed along, I noticed that my output was a little different…
For some reason, the GCP-hosted container had a network interface with a 24-bit netmask (/24), while my AWS-hosted container had a 32-bit netmask (/32).
Both Kubernetes clusters had the same configuration applied, so as I understand it, their networking should be identical.
And as expected, I was able to ping the other nodes just fine even though I had a 32-bit netmask associated with my IP address.
But the bigger “concern” here, and what caught my eye even though it wasn’t mentioned in the video, is that IPv4 addresses are only 32 bits long.
So if the netmask is 32 bits long, what portion of the IP address identifies the host?
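One way to see what that question means in concrete terms is to poke at the math with Python’s standard ipaddress module (the pod addresses below are made up purely for illustration):

```python
import ipaddress

# A /24 network reserves the last 8 bits of the address for hosts;
# a /32 "network" has zero host bits left over.
gcp_style = ipaddress.ip_network("10.8.0.0/24")   # hypothetical GCP pod subnet
aws_style = ipaddress.ip_network("10.8.0.5/32")   # hypothetical AWS pod address

print(gcp_style.num_addresses)  # 256 addresses in the subnet (254 usable hosts)
print(aws_style.num_addresses)  # exactly 1 -- the address IS the entire network
```

In other words, with a /32 there is no host portion at all: the “network” contains exactly one address, so the kernel can’t reach anything on-link through it and has to rely on explicit route table entries instead.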
I’m not quite sure what’s going on here, but it’s about 1 AM so I’m leaving this here for now and I’ll come back to dig into it some more later…
A rabbit hole to go down. Rabbit holes like this are some of the best ways to learn about the nuances of how a system works. 😎
(Now if only it didn’t turn the 10-minute video I was watching into a much longer wild goose chase.) 🤣
My current working theory…
Maybe this forms a loopback-like address with a route to “nowhere,” and the kube-proxy process on the node is capturing the traffic and dealing with it accordingly.
I also found this reference to the Amazon VPC CNI plugin for Kubernetes.
Pod Networking (CNI)
Maybe this CNI plugin allows or requires that the interfaces have a 32-bit netmask???
Feel free to comment or drop me an E-Mail if you know what’s up. 🤔