Kubernetes: container orchestration in the OneCloudPlanet cloud

October 10, 2024

Kubernetes is a key tool for container management, providing automation, scalability, and reliability for applications. What is its architecture, and when is it used? Let’s explore the main components and capabilities of this technology as applied in the cloud solutions of OneCloudPlanet.

 

From virtual machines to containers

 

In the 1990s, virtual machines (VMs) became the primary technology for managing server resources. They allowed multiple applications to run on a single physical server, isolating each application in its virtual environment. This approach enabled companies to flexibly manage and scale their applications as needed.

 

However, as the number of virtual machines grew, a challenge emerged: each VM required its own operating system kernel, which led to significant resource consumption, especially for applications that didn’t need full isolation. The solution was the use of containers, which allowed applications to be isolated without the need to run a separate kernel for each one.

 

Containers: optimizing isolation

 

Containers are isolated environments that allow applications to run with minimal resource consumption compared to VMs. This approach uses a shared kernel, making containers significantly lighter and more efficient.

 

Containers have high portability: an application and its dependencies can be packaged into a single container that can be transferred and run on any server that supports containers. They also support microservice architecture, allowing complex applications to be divided into independent modules, simplifying their deployment and management.

 

Docker: standardization of containers

 

Docker became the most popular platform for working with containers and turned the technology into an industry standard. It provided a simple and convenient way to create, manage, and deploy containers using Dockerfile configuration files. This tool allowed developers to package applications and their dependencies into a single lightweight unit — a container that can easily be deployed in any environment.

 

Docker integrates with Continuous Integration/Continuous Delivery (CI/CD) systems such as GitLab CI or Jenkins, making the process of automated testing and deploying new versions of applications fast and safe.

 

Role of orchestration

 

Docker simplifies the creation and running of containers, but to manage large systems composed of many containers, orchestration is required. Container orchestration allows you to automatically manage their distribution, scaling, and fault tolerance. This is where Kubernetes comes in — a powerful platform for managing containers that automates the deployment of applications, scales them as needed, and monitors their state.

 

Kubernetes: architecture and components

 

Kubernetes (k8s) is a platform for automated container management that solves the problems of scaling, load balancing, and fault tolerance. Kubernetes automatically distributes containers across cluster nodes, manages their state, and restores them in case of failures. With it, you can automate the deployment of complex applications consisting of many containers, ensuring high performance and reliability.

 

Let’s take a look at the main components of Kubernetes architecture that make it such a powerful and reliable tool.

 

Pods
Pods are the basic deployable units in Kubernetes: a group of one or more containers that run in a shared environment and share network and storage resources. Because every container in a pod has access to these shared resources, pods are an optimal fit for microservices where containers must interact closely with each other.
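As a sketch of how containers in one pod share resources, the hypothetical manifest below runs two containers that exchange files through a shared emptyDir volume (all names and paths here are illustrative, not part of any real deployment):

```yaml
# Two-container pod: a sidecar writes a file that nginx then serves.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                # volume shared by all containers in the pod
  containers:
  - name: web
    image: nginx:latest
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-sync          # sidecar container in the same pod
    image: busybox:latest
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Both containers also share the pod’s network namespace, so they could reach each other over localhost as well.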

 

Master Node
The master node is the main controlling component of the cluster. It manages the entire Kubernetes system and ensures everything runs according to plan. It consists of:

 

  • API Server: the point of interaction with the cluster through which users and external systems can create, modify, and delete objects.
  • etcd: a distributed data store that holds all cluster state information.
  • Scheduler: distributes pods across worker nodes based on available resources and established rules (e.g., node affinity).
  • Controller Manager: responsible for maintaining the desired state of the system, controlling pod replication, and restoring them in case of failures.

 

Nodes
Nodes are the working machines (physical or virtual) on which containers are directly run. Each node includes:

 

  • kubelet: an agent that manages pods on the node, monitors their state, and communicates with the master node.
  • Container Runtime: launches and manages containers. Popular runtimes include Docker, containerd, and CRI-O.
  • kube-proxy: a component responsible for network communication between pods and external systems and load balancing.

 

Affinities
Affinities allow Kubernetes to manage the placement of pods on nodes, considering various resource requirements and interactions with other pods. Two main types are distinguished:

 

  • Node Affinity: defines on which nodes pods should or must run depending on resources or other factors.
  • Pod Affinity: specifies which pods should be placed close to each other for optimal interaction or, conversely, avoid joint placement (anti-affinity).
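A minimal sketch of node affinity, assuming a hypothetical disktype=ssd node label, might look like this:

```yaml
# Pod that must be scheduled only on nodes labeled disktype=ssd
# (the label and pod name are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: nginx:latest
```

Replacing `required...` with `preferredDuringSchedulingIgnoredDuringExecution` would turn the hard rule into a soft preference.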

 

Services
Services provide a stable access point to a set of pods that remains the same even when pod IP addresses change or pods restart. This makes the application reliably accessible, regardless of dynamic scaling and container restarts.
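A simple ClusterIP Service might look like the sketch below, which gives a stable virtual IP and DNS name to all pods carrying an illustrative app: orders label:

```yaml
# Service routing cluster traffic to any pod labeled app: orders.
apiVersion: v1
kind: Service
metadata:
  name: orders-service   # illustrative name
spec:
  selector:
    app: orders          # pods matching this label receive the traffic
  ports:
  - protocol: TCP
    port: 80             # port clients inside the cluster connect to
    targetPort: 8080     # port the pod's container listens on
```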

 

Controllers
Controllers manage the lifecycle of pods and applications in Kubernetes. They automate maintaining the right number of pods, managing application updates, and providing fault tolerance.

 

  • ReplicaSet: maintains a specified number of pod replicas.
  • Deployment: an abstraction over ReplicaSet that allows you to manage application updates, ensuring smooth rollout and rollback.
  • StatefulSet: manages pods with unique network identities and persistent states, often used for databases.
  • DaemonSet: ensures that a specific pod runs on every node, often used for monitoring or network agents.
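For example, a Deployment that keeps three replicas of a web pod running could be sketched as follows (names and labels are illustrative):

```yaml
# Deployment maintaining 3 identical nginx pods; Kubernetes replaces
# any replica that fails and performs rolling updates on image changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:                 # pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:latest
        ports:
        - containerPort: 80
```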

 

Thus, the master node includes the API server, responsible for interacting with the cluster, the Scheduler, which distributes pods across nodes, and Controllers, which maintain the desired state of the system. To work with Kubernetes objects such as pods and services, you can use the kubectl command-line tool or interact directly with the Kubernetes API by sending HTTP requests. This allows you to manage application deployments and configurations through declarative YAML or JSON configurations.
 

Kubernetes in action

 

Imagine an application consisting of several microservices: a web interface, an ordering service, a payment service, and a notification service. Each of these microservices is placed in a separate container managed by Kubernetes. If the load on the ordering service increases, Kubernetes automatically adds more containers to handle the requests, and when the load decreases, it reduces the number of containers, saving resources.

 

Kubernetes provides load balancing between containers, monitors their state, and automatically restarts containers if they fail.
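This state monitoring is typically configured through probes. The sketch below adds a liveness probe to a pod; the kubelet polls the container over HTTP and restarts it after repeated failures (the path and timings here are illustrative defaults, not recommendations):

```yaml
# Pod whose container is health-checked via HTTP and restarted on failure.
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx:latest
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5   # wait before the first check
      periodSeconds: 10        # check every 10 seconds
      failureThreshold: 3      # restart after 3 consecutive failures
```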

 


 

Deploying an application in Kubernetes on OneCloudPlanet

 

  1. Creating pod configurations: each microservice is assigned its pod configuration, describing which containers to run and with what resources. These configurations are loaded into Kubernetes via the API server.
  2. Deployment: Kubernetes takes care of deploying all containers, distributing them across worker nodes based on available resources. For example, the ordering service and payment service can run on different nodes, increasing the system’s fault tolerance.
  3. Scaling: during peak hours, the load on the ordering service increases. Kubernetes automatically scales the service by adding more pods to handle the increased load. When the load decreases, the system reduces the number of pods, freeing up resources.
  4. Monitoring and state management: Kubernetes tracks the state of each container. If the notification service fails, the system automatically restarts the container on another node, preserving all necessary data and states using the StatefulSet mechanism.
  5. Load balancing and network services: Kubernetes automatically distributes traffic between container instances using network services. This ensures that user requests are processed even if one container restarts or fails.
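The scaling step above is commonly expressed as a HorizontalPodAutoscaler. As a sketch, assuming a hypothetical Deployment named orders, an autoscaler keeping average CPU utilization near 70% could look like this:

```yaml
# Autoscaler for the (hypothetical) "orders" Deployment:
# scales between 2 and 10 pods to hold average CPU near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```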
     

Advantages of using Kubernetes on OneCloudPlanet

 

OneCloudPlanet is the first cloud service in Ukraine to implement Managed Kubernetes, simplifying cluster administration and infrastructure management.

 

  1. Automatic scaling: when application load increases, Kubernetes automatically increases the number of pods, allowing the application to adapt to changing conditions.
  2. Fault tolerance: if one of the nodes or containers fails, Kubernetes automatically restarts them on other nodes, minimizing downtime.
  3. Flexibility and microservice management: Kubernetes is ideal for working with microservice architectures, making it an excellent choice for dynamic applications with multiple components.

 

Kubernetes offers powerful features for automatic application deployment and scaling, making it a smart choice for cloud platforms like OneCloudPlanet.

 

Example Kubernetes manifest

 

Kubernetes uses declarative manifests written in YAML or JSON to describe cluster objects. Example pod manifest:

 

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: nginx:latest
    ports:
    - containerPort: 80

 

This simple example describes a pod named example-pod that runs a container with the nginx:latest image and exposes port 80.
 

Conclusion

 

Thanks to its architecture, Kubernetes has become an essential tool for managing containerized applications, offering automation, scalability, and fault tolerance. From virtual machines to containers and their orchestration, the technology significantly simplifies the management of complex microservice systems.

 

K8s allows flexible resource distribution, ensuring the stable operation of applications under changing loads. This solution is optimal for cloud platforms like OneCloudPlanet, thanks to its capabilities for automatic deployment, application state management, and data security.

 
