Kubernetes architecture and workflow


TABLE OF CONTENTS
  1. Kubernetes Architecture

    1. Control Plane

    2. Worker Node

  2. Kubernetes Control Plane Components

    1. kube-apiserver

    2. etcd

    3. kube-scheduler

    4. Kube Controller Manager

    5. Cloud Controller Manager (CCM)

  3. Kubernetes Worker Node Components

    1. Kubelet

    2. Kube proxy

    3. Container Runtime

  4. Kubernetes Cluster Addon Components

    1. CNI Plugin

  5. Kubernetes Workflow for Absolute Beginners

Kubernetes Architecture

A Kubernetes cluster consists of control plane nodes and worker nodes.

Control Plane

The control plane is responsible for container orchestration and maintaining the desired state of the cluster. It has the following components.

  1. kube-apiserver

  2. etcd

  3. kube-scheduler

  4. kube-controller-manager

  5. cloud-controller-manager

Worker Node

The worker nodes are responsible for running containerized applications. A worker node has the following components.

  1. kubelet

  2. kube-proxy

  3. Container runtime

Kubernetes Control Plane Components

1. kube-apiserver

The kube-apiserver is the central hub of the Kubernetes cluster that exposes the Kubernetes API.

So when you use kubectl to manage the cluster, at the backend you are actually communicating with the API server through its HTTP REST API. The internal cluster components, like the scheduler and controllers, talk to the API server over the same REST API (typically via the client-go library).

The communication between the API server and other components in the cluster happens over TLS to prevent unauthorized access to the cluster.

The Kubernetes API server is responsible for the following:

  1. API management: Exposes the cluster API endpoint and handles all API requests.

  2. Authentication (using client certificates, bearer tokens, etc.; HTTP basic authentication has been removed in newer Kubernetes versions) and authorization (ABAC and RBAC evaluation)

  3. Processing API requests and validating data for the API objects like pods, services, etc. (Validation and Mutation Admission controllers)

  4. It is the only component that communicates with etcd.

  5. api-server coordinates all the processes between the control plane and worker node components.

  6. The api-server has a built-in apiserver proxy (a bastion that is part of the API server process). It is primarily used to enable access to ClusterIP services from outside the cluster, even though these services are normally only reachable from within the cluster.
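Since kubectl is just an API client, you can see these REST calls by raising kubectl's verbosity. A minimal sketch (the exact URLs and log format depend on your cluster and kubectl version):

# Print the HTTP requests kubectl sends to the kube-apiserver
kubectl get pods -v=6

# Sample log line (illustrative):
#   GET https://<api-server>:6443/api/v1/namespaces/default/pods?limit=500 200 OK in 12 ms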

2. etcd

etcd is an open-source, strongly consistent, distributed key-value store. Within Kubernetes, it acts as both the cluster's database and a backend for service discovery.

  1. etcd stores all configurations, states, and metadata of Kubernetes objects (pods, secrets, daemonsets, deployments, configmaps, statefulsets, etc).

  2. etcd stores all objects under the /registry directory key in key-value format. For example, information on a pod named nginx in the default namespace can be found under /registry/pods/default/nginx (see the example after this list).

  3. Also, etcd is the only stateful component in the control plane.
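If you want to see this for yourself on a kubeadm-based cluster, you can query etcd through the etcd pod. This is a hedged sketch: the pod name etcd-controlplane, the certificate paths (kubeadm defaults), and the nginx pod are assumptions, so adjust them for your cluster:

# List the etcd key that stores a pod named nginx in the default namespace.
# Values are stored in protobuf, so only the key names are printed here.
kubectl -n kube-system exec etcd-controlplane -- etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  get /registry/pods/default/nginx --prefix --keys-only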

3. kube-scheduler

The kube-scheduler is responsible for scheduling pods on worker nodes.

When you deploy a pod, you specify the pod requirements such as CPU, memory, affinity, taints or tolerations, priority, persistent volumes (PV), etc. The scheduler watches for newly created pods that have no node assigned and chooses the best node that satisfies those requirements.
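For example, a pod like the one below gives the scheduler concrete requirements to evaluate. This is a minimal sketch; the disktype=ssd node label is just an assumed example:

# A pod with explicit scheduling requirements (resource requests and a node selector)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd          # only nodes labeled disktype=ssd qualify (example label)
  containers:
    - name: nginx
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"      # the scheduler looks for a node with this much allocatable CPU
          memory: "128Mi"
EOF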

4. Kube Controller Manager

The kube-controller-manager runs continuously and watches the actual and desired state of objects. If the actual state differs from the desired state, it works to bring the Kubernetes resource/object back to the desired state.

Let’s say you want to create a deployment. You specify the desired state in a manifest YAML file (a declarative approach): for example, 2 replicas, one volume mount, a configmap, etc. The built-in deployment controller ensures that the deployment stays in the desired state at all times. If a user updates the deployment to 5 replicas, the deployment controller recognizes the change and reconciles to 5 replicas.
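Here is a minimal sketch of that flow (the names and image are illustrative):

# Declare the desired state: a deployment with 2 replicas
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
EOF

# Change the desired state to 5 replicas; the deployment controller reconciles it
kubectl scale deployment nginx-deployment --replicas=5
kubectl get deployment nginx-deployment    # READY moves toward 5/5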

The Kube controller manager is the component that runs all the built-in Kubernetes controllers. Kubernetes resources/objects like pods, namespaces, jobs, and replicasets are managed by their respective controllers. (The kube-scheduler follows a similar watch-and-act pattern, but it runs as its own control plane component rather than inside the controller manager.)

Following is the list of important built-in Kubernetes controllers.

  1. Deployment controller

  2. Replicaset controller

  3. DaemonSet controller

  4. Job Controller (Kubernetes Jobs)

  5. CronJob Controller

  6. Endpoints controller

  7. Namespace controller

  8. Service accounts controller

  9. Node controller

Here is what you should know about the Kube controller manager.

  1. It manages all the controllers and the controllers try to keep the cluster in the desired state.

  2. You can extend Kubernetes with custom controllers associated with a custom resource definition.

A custom resource definition (CRD) is a way to add new features or enhance the capabilities of a Kubernetes cluster. For example, say you have an application deployed on Kubernetes and want to use advanced load-balancing capabilities provided by F5, NGINX, etc. Out of the box, Kubernetes does not have a resource type for that; it only knows about built-in resources such as deployments, services, and ingress.

To solve this, CRDs let vendors such as F5 define custom resources, which are then managed by custom controllers: the controller watches for the custom resources and implements the behavior described by the CRD.
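As a hedged illustration, here is roughly what a minimal CRD looks like. The group, names, and schema below are made up for this example; real vendor CRDs (F5, NGINX, etc.) are much richer:

# A minimal CustomResourceDefinition (illustrative names only)
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: loadbalancerpolicies.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: loadbalancerpolicies
    singular: loadbalancerpolicy
    kind: LoadBalancerPolicy
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                algorithm:
                  type: string
EOF

# The cluster now understands the new resource type; a custom controller
# (written by the vendor) would watch these objects and act on them
kubectl get crd loadbalancerpolicies.example.com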

5. Cloud Controller Manager (CCM)

When Kubernetes is deployed in cloud environments, the cloud controller manager acts as a bridge between Cloud Platform APIs and the Kubernetes cluster.

Cloud controller integration allows the Kubernetes cluster to provision cloud resources like instances (for nodes), load balancers (for services), and storage volumes (for persistent volumes).

Following are some of the classic examples of cloud controller manager.

  1. Deploying a Kubernetes Service of type LoadBalancer. Here Kubernetes provisions a cloud-specific load balancer and integrates it with the Kubernetes Service (see the sketch after this list).

  2. Provisioning storage volumes (PV) for pods backed by cloud storage solutions.
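For the first example, the manifest is just a regular Service with type: LoadBalancer. On a cloud-managed cluster with a cloud controller manager configured, applying it triggers provisioning of a cloud load balancer (names here are illustrative):

# A Service that asks the cloud controller manager for a cloud load balancer
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
EOF

# EXTERNAL-IP stays <pending> until the cloud controller manager finishes provisioning
kubectl get service nginx-lb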

Kubernetes Worker Node Components

1. Kubelet

Kubelet is an agent component that runs on every node in the cluster. It does not run as a container; instead, it runs as a daemon, managed by systemd.

The kubelet mainly works with podSpecs (pod specifications in YAML or JSON), primarily received from the API server. A podSpec defines the containers that should run inside the pod, their resources (e.g. CPU and memory limits), and other settings such as environment variables, volumes, and so on.

To put it simply, kubelet is responsible for the following.

  1. Creating, modifying, and deleting containers for the pod.

  2. Responsible for handling liveness, readiness, and startup probes.

  3. Responsible for mounting volumes by reading the pod configuration and creating the respective directories on the host for the volume mounts.

  4. Collecting and reporting Node and pod status via calls to the API server.

Other than podSpecs from the API server, the kubelet can accept podSpecs from a file, an HTTP endpoint, or an HTTP server. A good example of “podSpec from a file” is Kubernetes static pods.

Static pods are controlled by the kubelet, not by the API server. This means you can create pods by pointing the kubelet at a pod YAML location on disk. The API server only sees static pods through read-only “mirror pods” that the kubelet creates for visibility; it does not manage them.

The kubelet also interacts with the container runtime through the CRI APIs to manage the lifecycle of containers, and it collects container information from the runtime and reports it to the control plane.

Here is a real-world example use case of the static pod.

While bootstrapping the control plane, the kubelet starts the api-server, scheduler, and controller manager (and, in kubeadm's stacked topology, etcd) as static pods from podSpecs located at /etc/kubernetes/manifests.
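You can use the same mechanism yourself. The sketch below assumes a node where the kubelet's staticPodPath is the kubeadm default of /etc/kubernetes/manifests; drop a manifest there and the kubelet starts the pod without being asked by the API server:

# Run on the node itself (requires root); the kubelet watches this directory
cat <<'EOF' | sudo tee /etc/kubernetes/manifests/static-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
EOF

# The kubelet creates the pod; the API server only shows a read-only mirror pod,
# named with the node name appended (e.g. static-nginx-<node-name>)
kubectl get pods -A | grep static-nginx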

Following are some of the key things about kubelet.

  1. Kubelet uses the CRI (container runtime interface) gRPC interface to talk to the container runtime.

  2. It also exposes an HTTP endpoint to stream logs and provides exec sessions for clients.

  3. Uses the CSI (container storage interface) gRPC to configure block volumes.

  4. It uses the CNI plugin configured in the cluster to allocate the pod IP address and set up any necessary network routes and firewall rules for the pod.

2. Kube proxy

Kube-proxy runs on every node, typically deployed as a DaemonSet. It is a proxy component that implements the Kubernetes Services concept for pods (a single virtual IP and DNS name for a set of pods, with load balancing). It primarily proxies UDP, TCP, and SCTP and does not understand HTTP.

When you expose pods using a Service (ClusterIP), Kube-proxy creates network rules to send traffic to the backend pods (endpoints) grouped under the Service object. Meaning, all the load balancing, and service discovery are handled by the Kube proxy.

So how does Kube-proxy work?

Kube proxy talks to the API server to get the details about the Service (ClusterIP) and respective pod IPs & ports (endpoints). It also monitors for changes in service and endpoints.

Kube-proxy then programs rules on the node to create/update the routing of traffic to the pods behind a Service. The most common mode is described below.

  1. IPTables: This is the default mode. In iptables mode, the traffic is handled by iptables rules, and kube-proxy picks a backend pod at random for load balancing. Once the connection is established, requests keep going to the same pod until the connection is terminated.
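If you are curious, you can inspect these rules on a node. The chain names below are the ones kube-proxy conventionally creates in iptables mode; the exact output depends on your Kubernetes version:

# List the NAT rules kube-proxy programs for Services (iptables mode)
sudo iptables -t nat -L KUBE-SERVICES -n | head -20

# Per-service chains (KUBE-SVC-*) fan out to per-endpoint chains (KUBE-SEP-*),
# which is where the random backend selection happens
sudo iptables -t nat -L -n | grep -E 'KUBE-(SVC|SEP)' | head -20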

3. Container Runtime

You probably know about Java Runtime (JRE). It is the software required to run Java programs on a host. In the same way, container runtime is a software component that is required to run containers.

Container runtime runs on all the nodes in the Kubernetes cluster. It is responsible for pulling images from container registries, running containers, allocating and isolating resources for containers, and managing the entire lifecycle of a container on a host.
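If your cluster uses a CRI-compatible runtime such as containerd, you can talk to it directly on a node with crictl, over the same interface the kubelet uses. This is a sketch; the runtime endpoint below is the containerd default and may differ on your nodes:

# Inspect the container runtime through the CRI on a node
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps       # running containers
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images   # cached images
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull nginx:1.25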

Kubernetes Cluster Addon Components

Apart from the core components, a Kubernetes cluster needs addon components to be fully operational. Choosing an addon depends on the project requirements and use cases.

Following are some of the popular addon components that you might need on a cluster.

  1. CNI Plugin (Container Network Interface)

  2. CoreDNS (For DNS server): CoreDNS acts as a DNS server within the Kubernetes cluster. By enabling this addon, you get DNS-based service discovery (see the example after this list).

  3. Metrics Server (For Resource Metrics): This addon helps you collect performance data and resource usage of Nodes and pods in the cluster.
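A quick way to see DNS-based service discovery in action (busybox is just an example image; any image with nslookup will do):

# Resolve the built-in kubernetes Service through CoreDNS from inside the cluster
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup kubernetes.default.svc.cluster.local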

CNI Plugin

The CNI plugin architecture allows users to choose a networking solution that best fits their needs from different providers.

How does CNI Plugin work with Kubernetes?

  1. The Kube-controller-manager is responsible for assigning a pod CIDR to each node. Each pod gets a unique IP address from its node's pod CIDR (see the example after this list).

  2. Kubelet interacts with the container runtime to launch the scheduled pod. The CRI plugin, which is part of the container runtime, interacts with the CNI plugin to configure the pod network.

  3. CNI Plugin enables networking between pods spread across the same or different nodes using an overlay network.

    Following are high-level functionalities provided by CNI plugins.

    1. Pod Networking

    2. Pod network security & isolation using Network Policies to control the traffic flow between pods and between namespaces.
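For the first point above, you can see the pod CIDR that was assigned to each node, and on the node itself you can look at the installed CNI configuration (the podCIDR field assumes the default node IPAM behavior; some managed clusters leave it empty):

# Show the pod CIDR allocated to each node by the controller manager
kubectl get nodes -o custom-columns=NODE:.metadata.name,POD_CIDR:.spec.podCIDR

# On a node, the CNI plugin drops its configuration into the standard CNI config directory
ls /etc/cni/net.d/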

Some popular CNI plugins include:

  1. Calico

  2. Flannel

  3. Weave Net

  4. Cilium (Uses eBPF)

  5. Amazon VPC CNI (For AWS VPC)

  6. Azure CNI (For Azure Virtual Network)

Kubernetes networking is a big topic, and it differs based on the hosting platform.

Kubernetes Workflow for Absolute Beginners

Let’s discuss the Kubernetes workflow. When you run a command, say kubectl run nginx --image=nginx, here is how the request is handled in the backend and how the pod gets created:

  1. The request is first authenticated and validated.

  2. The kube-apiserver creates a pod object (without assigning it to a node), stores the information about the newly created pod in etcd, and returns a message that the pod has been created.

  3. The kube-scheduler, which continuously watches the kube-apiserver, notices that a new pod has been created with no node assigned to it.

  4. The kube-scheduler identifies the right node for the new pod (based on pod resource requirements, pod/node affinity rules, labels and selectors, etc.) and communicates this decision back to the kube-apiserver.

  5. The kube-apiserver updates etcd with the node information received from the kube-scheduler.

  6. The kube-apiserver then passes this information to the kubelet on the worker node identified by the kube-scheduler in step 4.

  7. The kubelet creates the pod on the node and instructs the container runtime to pull the image and start the container.

  8. Once done, the kubelet reports the pod status back to the kube-apiserver.

  9. The kube-apiserver updates the status in etcd.
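You can watch this whole flow with a couple of commands (a minimal sketch; clean up afterwards with kubectl delete pod nginx):

# Create a pod; this request goes through the flow described above
kubectl run nginx --image=nginx

# The NODE column is empty until the scheduler assigns a node (steps 3-5)
kubectl get pod nginx -o wide

# The Events section at the bottom shows the Scheduled, Pulling, Created, and Started steps
kubectl describe pod nginx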

Reference: https://devopscube.com/kubernetes-architecture-explained/