Shamim Shahraeini
4 min read · 2 years ago

My Understanding of K8s Networking :: ep1

Introduction

Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work and how it actually works. Overall, there are 4 distinct networking challenges to address:

  1. Highly-coupled container-to-container communications: This is solved by Pods; containers in the same Pod share a network namespace, so they reach each other over localhost and the exposed port numbers.
  2. Pod-to-Pod communications: Delegated to network plugins (CNI). Every Pod gets its own cluster-private IP address, so you do not need to explicitly create links between Pods or map container ports to host ports; one Pod can reach another simply by addressing its IP directly (see the sketch after this list).
  3. Pod-to-Service communications: This is covered by services.
  4. External-to-Service communications: This is covered by services.
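
To make point 2 concrete, here is a minimal sketch of one Pod talking to another by addressing the target Pod's cluster-private IP directly; the IP address, port, and path are hypothetical placeholders, assuming the target Pod runs an HTTP server:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical cluster-private IP of another Pod. No NAT or host port
	// mapping is involved; the Pod IP is directly routable inside the cluster.
	const peerPodURL = "http://10.244.1.23:8080/healthz"

	resp, err := http.Get(peerPodURL)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s body=%s\n", resp.Status, body)
}
```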


CNIs?

CNI (Container Network Interface) is a Cloud Native Computing Foundation project that consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with the network connectivity of containers and with removing allocated resources when the container is deleted. Because of this focus, CNI enjoys wide support and the specification is simple to implement.

In other words, a CNI plugin is responsible for inserting a network interface into the container network namespace (e.g., one end of a virtual ethernet (veth) pair) and making any necessary changes on the host (e.g., attaching the other end of the veth to a bridge). It then assigns an IP address to the interface and sets up routes consistent with the IP Address Management section of the configuration by invoking the appropriate IP Address Management (IPAM) plugin.
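
For a feel of what a plugin is actually given, below is a minimal sketch of a CNI network configuration using the standard bridge and host-local reference plugins; the network name, bridge name, subnet, and version are assumptions for illustration, not values taken from this article:

```go
package main

import "fmt"

// Minimal CNI network configuration (sketch). The runtime feeds this JSON to
// the plugin on stdin; on a node it would typically live under /etc/cni/net.d/.
const bridgeNetConf = `{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}`

func main() {
	fmt.Println(bridgeNetConf)
}
```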

Flow of how it actually works:

  1. When the container runtime needs to perform network operations on a container, it (the kubelet, in the case of K8s) calls the CNI plugin with the desired command.
  2. The container runtime also provides related network configuration and container-specific data to the plugin.
  3. The CNI plugin performs the required operations and reports the result.

Note: CNI is called twice by K8s (kubelet) for each pod: once to set up the loopback interface and once to set up the eth0 interface.

  • Detailed information (a sketch of a runtime invoking a plugin follows this list):

- Basic commands are: ADD, DEL, CHECK and VERSION

- Plugins are executables

- Spawned by the runtime when network operations are desired

- Fed JSON configuration via stdin

- Also fed container-specific data via stdin

- Report structured result via stdout
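
Putting the list above together, a rough sketch of how a runtime might drive a plugin looks like this: the command and container-specific data go in the standard CNI_* environment variables, the JSON configuration is fed on stdin, and the structured result comes back on stdout. The plugin path, container ID, and netns path below are hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// JSON network configuration, as in the earlier sketch.
	netConf := `{"cniVersion":"0.4.0","name":"mynet","type":"bridge","ipam":{"type":"host-local","subnet":"10.244.1.0/24"}}`

	// The plugin is just an executable; ADD/DEL/CHECK/VERSION are selected via
	// CNI_COMMAND, and container-specific data travels in the CNI_* env vars.
	cmd := exec.Command("/opt/cni/bin/bridge")
	cmd.Env = append(cmd.Env,
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID=0123456789abcdef", // hypothetical container ID
		"CNI_NETNS=/var/run/netns/example", // hypothetical netns path
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = strings.NewReader(netConf)

	// The structured result (interfaces, assigned IPs, routes) comes back on
	// stdout as JSON; errors are reported as JSON too.
	out, err := cmd.CombinedOutput()
	fmt.Printf("result: %s\n", out)
	if err != nil {
		fmt.Println("plugin returned error:", err)
	}
}
```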



Services?

A Service is an abstract way to expose an application running on a set of Pods as a single network service, together with a policy by which to access those Pods (this is used mostly in microservice-based architectures, where workloads need a stable way to reach each other). The set of Pods targeted by a Service is usually determined by a selector.
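
As a sketch of what that looks like, the snippet below (using the k8s.io/api Go types) defines a Service that selects Pods labeled app: my-app and forwards port 80 to the Pods' port 8080; the name, labels, and ports are assumptions for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "my-service", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			// The selector decides which Pods back this Service.
			Selector: map[string]string{"app": "my-app"},
			Ports: []corev1.ServicePort{{
				Port:       80,                   // port exposed on the Service's cluster IP
				TargetPort: intstr.FromInt(8080), // port the backend Pods listen on
			}},
			// Type defaults to ClusterIP; NodePort and LoadBalancer are
			// covered in the list below.
			Type: corev1.ServiceTypeClusterIP,
		},
	}

	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```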

There are three main ways of exposing and accessing a Kubernetes Service (a rough sketch of the three access paths follows this list):

  • Cluster IP - which is the usual way of accessing a service from inside the cluster.
(source: https://projectcalico.docs.tigera.io/about/about-kubernetes-services)


  • Node port - which is the most basic way of accessing a service from outside the cluster.
(source: https://projectcalico.docs.tigera.io/about/about-kubernetes-services)


  • Load balancer - which uses an external load balancer as a more sophisticated way to access a service from outside the cluster.
(source: https://projectcalico.docs.tigera.io/about/about-kubernetes-services)
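
A rough, purely illustrative sketch of how a client would reach each of these three types; every address and port below is a hypothetical placeholder:

```go
package main

import "fmt"

func main() {
	// ClusterIP: reachable only from inside the cluster.
	clusterIPURL := "http://10.96.0.42:80" // hypothetical Service cluster IP

	// NodePort: reachable from outside via <any node IP>:<node port>.
	nodePortURL := "http://192.168.1.10:30080" // hypothetical node IP and NodePort

	// LoadBalancer: reachable from outside via the external load balancer's address.
	loadBalancerURL := "http://203.0.113.7:80" // hypothetical external IP

	fmt.Println("inside the cluster: ", clusterIPURL)
	fmt.Println("outside, via a node:", nodePortURL)
	fmt.Println("outside, via an LB: ", loadBalancerURL)
}
```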



Virtual IPs?

Every node in a Kubernetes cluster runs kube-proxy, which is responsible for implementing a form of virtual IP for Services of types other than ExternalName.

kube-proxy can run in three proxy modes:

  • User space proxy mode - kube-proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoint objects. For each Service, it opens a randomly chosen port on the local node and installs iptables rules that capture traffic to the Service's clusterIP (which is virtual) and port and redirect it to that proxy port. The user-space proxy then forwards the traffic to a backend Pod and, if the connection fails, automatically retries with a different backend Pod.
(source: https://kubernetes.io/docs/concepts/services-networking/service/)


  • iptables proxy mode - kube-proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoint objects. For each Service, it installs iptables rules which capture traffic to the Service's clusterIP and port and redirect that traffic to one of the Service's backend sets. For each Endpoint object, it installs iptables rules which select a backend Pod, but only backends that test out as healthy. The traffic is handled by Linux netfilter, without the need to switch between user space and kernel space (a rough sketch of this backend selection follows the list).
(source: https://kubernetes.io/docs/concepts/services-networking/service/)


  • IPVS proxy mode - kube-proxy watches Kubernetes Services and Endpoints, calls the netlink interface to create IPVS rules accordingly, and synchronizes IPVS rules with Kubernetes Services and Endpoints periodically. IPVS uses a hash table as the underlying data structure and works in kernel space.
(source: https://kubernetes.io/docs/concepts/services-networking/service/)
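
To make the iptables mode a bit more concrete, here is a rough sketch, in Go rather than actual netfilter rules, of the behavior those rules implement: traffic addressed to the Service's virtual clusterIP and port ends up at one of the healthy backend Pods, chosen at random. This is only an illustration of the idea, not kube-proxy's real implementation, and all addresses are hypothetical:

```go
package main

import (
	"fmt"
	"math/rand"
)

// endpoint is one backend Pod address behind a Service.
type endpoint struct {
	addr    string
	healthy bool
}

// pickBackend mimics what the iptables rules effectively do: choose one of
// the healthy backends for traffic destined to the Service's virtual IP.
func pickBackend(endpoints []endpoint) (string, bool) {
	var healthy []string
	for _, ep := range endpoints {
		if ep.healthy {
			healthy = append(healthy, ep.addr)
		}
	}
	if len(healthy) == 0 {
		return "", false // no healthy backend: the connection fails
	}
	return healthy[rand.Intn(len(healthy))], true
}

func main() {
	// Hypothetical Endpoints for a Service with clusterIP 10.96.0.42:80.
	backends := []endpoint{
		{"10.244.1.23:8080", true},
		{"10.244.2.7:8080", true},
		{"10.244.3.4:8080", false}, // failed readiness probe, excluded
	}

	for i := 0; i < 3; i++ {
		if addr, ok := pickBackend(backends); ok {
			fmt.Println("10.96.0.42:80 ->", addr)
		}
	}
}
```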


Tags: kubernetes, ip address, kube proxy, backend pod, cni plugin