Remarkable Features of Kubernetes – Architecture and its Configuration

Introduction

Kubernetes, or k8s, is an open-source container orchestration and cluster management tool for managing containerized applications such as Docker containers.

Kubernetes is an open-source system that automates the deployment, management, and scaling of containerized applications. It groups the containers that make up an application into logical units for easy discovery and management.

Kubernetes makes deploying containers on multiple hosts very easy using a declarative YAML (YAML Ain't Markup Language) file. We specify how the containers must be deployed, and Kubernetes takes care of the rest.
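
As a minimal sketch, such a declarative Deployment manifest might look like the following (the names and image are placeholders used for illustration, not taken from the article):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app                 # hypothetical application name
    spec:
      replicas: 3                    # desired state: three identical pods
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
          - name: demo-app
            image: nginx:1.25        # any container image works here
            ports:
            - containerPort: 80

Applying it with kubectl apply -f deployment.yaml is enough; Kubernetes decides where and how to run the three replicas.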

Need for Kubernetes

Why do we need Kubernetes at all? That is the first question that pops up when we start managing containers.

Deploying Containers:

If we have a microservice-based application containing various services such as user management and credit card transactions, these services must communicate with each other using REST APIs or other networking protocols.

As the application has multiple services, we cannot deploy all of them in a single container or on a single server. The application must be decoupled, and each microservice must be scaled and deployed individually. This approach makes development, deployment, and scaling much easier and faster.

When it comes to managing containers for microservice applications, it is essential to handle networking, load balancing, service discovery, and file system management for all the containers as the application scales. With Kubernetes, developers only need to worry about application development and deployment strategies; Kubernetes handles all the management tasks.

Overall, Kubernetes helps in:

  1. Self-Healing
  2. Automatic Container Scheduling
  3. Vertical and Horizontal Scaling
  4. Application upgrades and downgrades with minimal downtime.
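
For instance, scaling out, upgrading, and rolling back are single commands (illustrative; demo-app is the hypothetical Deployment sketched earlier):

    kubectl scale deployment demo-app --replicas=5               # horizontal scaling
    kubectl set image deployment/demo-app demo-app=nginx:1.26    # rolling upgrade with minimal downtime
    kubectl rollout undo deployment/demo-app                     # roll back to the previous version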

Kubernetes Architecture and High Availability

Kubernetes has a distributed architecture with multiple servers connected over a network. These servers can be virtual machines or bare-metal servers, and together they are called a Kubernetes Cluster.

A Kubernetes Cluster contains worker nodes and control plane nodes.

Control Plane

The control plane is responsible for container orchestration and for maintaining the desired state of the cluster. Its components are:

  1. Kube-apiserver

It’s the central hub of the Kubernetes cluster and acts as a front end for end users to communicate with the cluster components. The communication between the API server and other components happens over Transport Layer Security (TLS) to prevent unauthorized access to the cluster.

It has other responsibilities like:

  1. Processing API requests and validating data for API objects like pods and services.
  2. Coordinating the communication between the worker node and the control plane.
  3. Authentication and Authorization
  4. Exposing the cluster API endpoint and handling all API requests.
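
In practice, every kubectl call is an API request to the kube-apiserver; the raw REST endpoint can also be queried directly (a sketch, assuming a working kubeconfig):

    kubectl get pods -n kube-system           # request validated and served by the kube-apiserver
    kubectl get --raw /api/v1/namespaces      # hit the cluster API endpoint directly
    kubectl auth can-i create deployments     # authorization decision made by the API server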

  2. etcd

Kubernetes requires a database that matches its distributed nature. etcd is a consistent, distributed key-value store that acts as the cluster database and as a backend for service discovery. It can be considered the brain of the Kubernetes cluster.

Its main functionalities are:

  1. etcd stores all configurations, states, and Kubernetes metadata of Kubernetes objects (secrets, pods, deployments, configmaps, daemon sets, etc.)
  2. etcd stores all objects under the /registry directory (prefix) in a key-value format.
  3. Kube-apiserver uses the watch() API to receive state-change notifications for an object.
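
On a kubeadm-based cluster, these keys can be listed straight from etcd (a sketch; the endpoint and certificate paths are kubeadm defaults and will differ on other setups):

    ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      get /registry --prefix --keys-only | head    # list the first few object keys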

  3. Kube-scheduler

The Kube-scheduler is mainly responsible for scheduling Kubernetes pods on worker nodes. When a pod is deployed, its requirements are specified, such as CPU and memory requests, taints and tolerations, priority, and persistent volumes.

The scheduler's primary job is to select the best node for the pod, i.e., one that satisfies all of these requirements.
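
A pod spec can express such requirements declaratively (a minimal sketch; the names, image, and values are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: scheduled-demo            # hypothetical pod name
    spec:
      containers:
      - name: app
        image: nginx:1.25
        resources:
          requests:
            cpu: "500m"               # scheduler only considers nodes with this much free CPU
            memory: "256Mi"           # and this much allocatable memory
      tolerations:
      - key: "dedicated"              # lets the pod land on nodes tainted dedicated=batch:NoSchedule
        operator: "Equal"
        value: "batch"
        effect: "NoSchedule"

The scheduler filters out nodes that cannot satisfy these requests and tolerations, scores the remaining candidates, and binds the pod to the best one.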

  4. Kube-controller-manager

It manages all the Kubernetes controllers. Kubernetes resources such as pods, jobs, namespaces, and replica sets are managed by their respective controllers. Controllers run continuously, watching the desired and actual state of objects.

If it finds a difference, it brings the object back to the desired state. We can also extend Kubernetes with custom controllers associated with custom resource definitions.
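
The reconciliation loop is easy to observe: delete a pod owned by a Deployment, and the ReplicaSet controller immediately recreates it (illustrative commands; demo-app is the hypothetical Deployment from earlier):

    kubectl get pods -l app=demo-app            # three pods created for the Deployment
    kubectl delete pod <one-of-the-pod-names>   # actual state now differs from the desired state
    kubectl get pods -l app=demo-app            # a replacement pod appears within seconds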

Worker node

  1. Kubelet

It’s an agent component that runs on every node in the cluster. It does not run as a container; instead, it runs as a daemon process managed by systemd.

Kubelet uses the Container Runtime Interface (CRI) to talk to the container runtime.

It also exposes the HTTP endpoint for streaming logs and provides exec sessions for clients.

It uses the CNI (Container Network Interface) plugin to allocate the pod IP address and set up the network routes necessary for the pod.
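
Because kubelet runs as a systemd-managed daemon on every node, it can be inspected like any other service (a sketch, assuming a kubeadm-style installation):

    systemctl status kubelet            # confirm the kubelet daemon is running
    journalctl -u kubelet -f            # stream kubelet logs
    cat /var/lib/kubelet/config.yaml    # kubelet configuration file written by kubeadm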

  2. Kube-proxy

Kube-proxy is the component that implements Kubernetes Services on each node. A Service exposes a set of pods to internal or external traffic through a virtual IP address called the ClusterIP, which is accessible only from within the Kubernetes cluster.

Kube-proxy proxies UDP, SCTP, and TCP traffic and handles service discovery and load balancing. The Endpoints object of a Service holds the IP addresses and ports of the pods backing that Service.
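
A ClusterIP Service for the earlier (hypothetical) demo-app Deployment could look like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-app
    spec:
      type: ClusterIP              # virtual IP reachable only from inside the cluster
      selector:
        app: demo-app              # selects the Deployment's pods
      ports:
      - port: 80                   # port exposed on the ClusterIP
        targetPort: 80             # port the containers listen on

Running kubectl get endpoints demo-app then lists the pod IPs and ports that kube-proxy load-balances across.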

  3. Container Runtime

Container Runtime is the component responsible for actually running containers. Kubernetes interacts with different container runtimes through a collection of APIs called the Container Runtime Interface (CRI), which defines the operations for creating, stopping, and deleting containers.

Its main tasks are to pull images from container registries, run the containers, allocate and isolate resources for containers, and manage the entire lifecycle of containers on a host.
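
On a node, the CRI-compatible runtime can be inspected with crictl (a sketch; crictl must be installed and pointed at your runtime's socket, here containerd's default):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps      # running containers
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images  # images pulled to the node
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods    # pod sandboxes on the node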

Cluster set-up using Kubeadm:

Kubeadm is a tool for setting up a Kubernetes cluster without much complex configuration. Kubeadm makes the entire process easy by running a series of checks to ensure that the server has all the essential configurations and components to run Kubernetes.

Following are the steps for the cluster set-up using Kubeadm; a sketch of the key commands follows the list.

  1. Install the container runtime on all the nodes.
  2. Install the kubeadm, kubectl, and kubelet on all the nodes.
  3. Initiate Kubeadm configuration on the master node.
  4. Save the node join command with the token and install the network plugin.
  5. Join the worker nodes to the control plane (master) node using the join command.
  6. Validate the cluster components and nodes.
  7. Install Kubernetes metrics server.
  8. Deploy the sample app and validate the app.
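
The core of steps 3 to 6 looks roughly like this (a sketch; the pod network CIDR, the network plugin manifest, and the join token all depend on your environment):

    # on the control plane (master) node
    sudo kubeadm init --pod-network-cidr=192.168.0.0/16    # prints a 'kubeadm join' command at the end

    # configure kubectl for the current user
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # install a network (CNI) plugin, e.g. Calico or Flannel
    kubectl apply -f <network-plugin-manifest.yaml>

    # on each worker node, run the join command printed by kubeadm init
    sudo kubeadm join <control-plane-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

    # back on the control plane, validate the nodes and system pods
    kubectl get nodes
    kubectl get pods -n kube-system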

About CloudThat

CloudThat is an official AWS (Amazon Web Services) Advanced Consulting Partner and Training partner, AWS Migration Partner, AWS Data and Analytics Partner, AWS DevOps Competency Partner, Amazon QuickSight Service Delivery Partner, AWS EKS Service Delivery Partner, and Microsoft Gold Partner, helping people develop knowledge of the cloud and helping their businesses aim for higher goals using best-in-industry cloud computing practices and expertise. We are on a mission to build a robust cloud computing ecosystem by disseminating knowledge on technological intricacies within the cloud space. Our blogs, webinars, case studies, and white papers enable all the stakeholders in the cloud computing sphere.

To get started, go through our Consultancy page and Managed Services Package, CloudThat's offerings.

WRITTEN BY Veeranna Gatate
