There's only one correct answer to such a question: leave it. It's a very, very bad idea. Believe me, with such a solution you'll only create more problems that you'll then need to solve. What about updating such a cache when a `Pod` gets recreated and both its name and IP change?

A `Pod` in Kubernetes is an object of ephemeral nature and it can be destroyed and recreated in totally normal circumstances, e.g. when a cluster is scaled down and a node is drained, its `Pods` are evicted and rescheduled on a different node, with completely different names and IP addresses. The only stable manner of accessing your `Pods` is via a `Service` that exposes them.
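As a quick illustration, a minimal sketch (the names here, like `web` and port 80, are made up for the example): create a `Deployment`, expose it with a `Service`, and always address the `Service` rather than individual `Pods`:

```sh
# Hypothetical example: three nginx replicas behind a Service named "web".
kubectl create deployment web --image=nginx --replicas=3
kubectl expose deployment web --port=80

# From any Pod in the same namespace, the stable address is the
# Service's DNS name, never an individual Pod IP:
#   curl http://web
```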
> To minimize the latency when receiving each new request, I need to use the IPs from a cache instead of trying to get the IP of the pod from the Kubernetes API
That's really reinventing the wheel. Every time a `Service` is created (except `Services` without selectors), a corresponding `Endpoints` object is created as well, and in fact it acts exactly like the caching mechanism you need: it keeps track of the IP addresses of all matching `Pods` and gets updated whenever a `Pod` is recreated and its IP changes. This way you have a guarantee that it is always up to date. If you implemented your own cache, you would need to call the Kubernetes API anyway, to make sure that a `Pod` with a given IP still exists and, if it doesn't, what was created in its place, with what name and what IP address. Quite bothersome, isn't it?
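You can watch that built-in "cache" at work yourself. A sketch, sticking with the hypothetical `web` Service from above:

```sh
# List the Pod IPs the Endpoints object currently holds for the Service:
kubectl get endpoints web

# Example output (the IPs are illustrative):
# NAME   ENDPOINTS                                   AGE
# web    10.244.1.5:80,10.244.2.7:80,10.244.3.9:80   2m

# Delete one of the backing Pods; once its replacement is Running,
# the Endpoints object is updated automatically, with no work on your side:
kubectl delete pod <one-of-the-web-pods>
kubectl get endpoints web
```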
And it is not true that each time you access a `Pod` you need to make a call to the Kubernetes API to get its IP address. In fact, a `Service` is implemented as a set of iptables rules on each node. When a request hits the virtual IP of the `Service`, it falls into a specific iptables chain and gets routed by the kernel to the backend `Pod`.

Kubernetes networking is a really wide topic, so I recommend reading about it, e.g. using the resources attached at the end, but without going into unnecessary detail it's worth mentioning that each time the cluster configuration changes (e.g. some `Pod` is recreated with a different IP, so the respective `Endpoints` object, which keeps track of the `Pod` IPs for the specific `Service`, changes), `kube-proxy`, which runs on every node, takes care of updating the above-mentioned iptables forwarding rules. When running in iptables mode (the most common implementation), `kube-proxy` configures `Netfilter` chains so the connection is routed directly to the backend container's endpoint by the node's kernel.
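If you're curious, you can inspect those rules on a node. A sketch, assuming iptables mode and the hypothetical `web` Service (the hash suffixes in the chain names vary per cluster):

```sh
# Entry point: every Service's ClusterIP gets a rule in KUBE-SERVICES:
sudo iptables -t nat -L KUBE-SERVICES -n | grep web

# Each Service has a KUBE-SVC-... chain that load-balances across
# KUBE-SEP-... chains (one per endpoint) using the "statistic" match;
# each KUBE-SEP-... chain ends in a DNAT to a concrete Pod IP and port:
sudo iptables -t nat -L 'KUBE-SVC-<hash>' -n
sudo iptables -t nat -L 'KUBE-SEP-<hash>' -n
```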
So an API call is made only when the cluster configuration changes, so that `kube-proxy` can update the iptables rules. Normally, when you're accessing a `Pod` via a `Service`, the traffic is routed to the backend `Pods` based on the current `iptables` rules, without any need to ask the Kubernetes API for the IP of such a `Pod`.
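To make that concrete: resolving the `Service` name is handled by the cluster DNS, and the forwarding itself by the node's kernel, so the request path never touches the API server. A quick sketch, again with the hypothetical `web` Service:

```sh
# Resolve the Service name from inside the cluster; this returns the
# Service's ClusterIP (answered by cluster DNS, not the Kubernetes API):
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup web

# A request to that ClusterIP is then DNAT-ed by the kernel to one of
# the backend Pods, as described above.
```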
In fact, Krishna Chaurasia has already answered your question (briefly but 100% correctly) by saying:
> You should not access pods by their IPs. They are not persisted across pod restarts.

and

> that's not how K8s works. Requests are forwarded based on the `Service` generally and they are redirected towards the matching pods based on labels/selectors. – Krishna Chaurasia
I can only agree with that, and my reasons why have been explained in detail above.
Additional resources: