Kubernetes (also known as K8s) is an open-source container orchestration system for automating the deployment, scaling, and management of containerised applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). The main concepts are:
- Master node: This is the central control plane of the Kubernetes cluster, responsible for managing the overall state of the system. The master node (in newer releases called the control-plane node) runs a number of components, including the API server, scheduler, and controller manager.
- Worker nodes: These are the machines that actually run the containers. A worker node runs a container runtime, such as Docker or containerd, as well as a `kubelet` process, which is responsible for communicating with the master node and launching containers on the node.
- Pods: A pod is the smallest deployable unit in Kubernetes. It is a logical host for one or more containers, and all containers in a pod are scheduled on the same node. Pods provide a shared context for the containers, allowing them to communicate with each other and access shared resources such as shared storage.
- Replication controller: A replication controller ensures that a specified number of replicas of a pod are running at any given time. If a pod fails, the replication controller will create a new one to replace it. (In current Kubernetes, ReplicaSets, usually managed through Deployments, have largely replaced replication controllers.)
- Services: A service is a logical abstraction that represents a group of pods and defines how they should be accessed. Services allow you to access your application through a stable, load-balanced endpoint, rather than having to directly access individual pods.
- Deployments: A deployment is a higher-level abstraction that represents a desired state for your application. It specifies the number of replicas of a pod that should be running and manages the process of rolling out updates to your application.
- Volumes: A volume is a persistent storage mechanism that can be mounted into a pod. It allows containers to access data that is stored outside of the container itself, such as on a shared filesystem or in a cloud storage service.
- Custom resource definitions (CRDs): These let you define custom resources in Kubernetes. A custom resource is a resource that is not natively supported by the Kubernetes API, but that you can create and manage as if it were.
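As a sketch of how the pieces above fit together, the following manifests define a Deployment that keeps three replicas of a pod running and a Service that exposes them behind a stable endpoint (names and labels are illustrative):

```yaml
# Hypothetical Deployment: keeps three nginx pod replicas running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# Service: stable, load-balanced endpoint in front of those pods.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web        # matches the pod labels above
  ports:
    - port: 80
      targetPort: 80
```

Applied with `kubectl apply -f`, the Deployment controller creates the pods and the Service routes traffic to whichever replicas are currently healthy.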
Custom Resource Definitions
A Custom Resource Definition (CRD) is an endpoint in the Kubernetes API that stores a collection of API objects. This abstraction permits extending the Kubernetes API with new resource definitions.
For example, you might want to create a custom resource to manage a specific type of resource in your application, such as a database or a message queue. With CRDs, you can define the custom resource and the associated API for managing it, and then use the Kubernetes API to create, delete, and update instances of that resource.
CRDs are useful because they allow you to extend the Kubernetes API with custom resources that are specific to your application or environment. This can make it easier to manage your application within the Kubernetes ecosystem, as you can use the same tools and APIs to manage both native and custom resources.
To use custom resource definitions, you first need to create a definition of the custom resource in the form of a `CustomResourceDefinition` object. This object defines the properties of the custom resource, such as its name, scope, and version, as well as the schema for the resource's data. Once you have created the definition, you can use the Kubernetes API to create and manage instances of the custom resource.
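As a minimal sketch, here is a `CustomResourceDefinition` for the database example above, followed by an instance of the new resource (the `example.com` group and all field names are illustrative):

```yaml
# Hypothetical CRD: adds a namespaced "Database" kind to the API.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
    shortNames: [db]
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
                replicas:
                  type: integer
---
# An instance of the new custom resource.
apiVersion: example.com/v1
kind: Database
metadata:
  name: orders-db
spec:
  engine: postgres
  replicas: 3
```

Once the CRD is applied, instances can be managed with the usual tooling, e.g. `kubectl get databases`.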
In Kubernetes, an ingress is a collection of rules that allow inbound connections to reach the cluster services. It acts as a reverse proxy, routing traffic from the Internet to the appropriate service within the cluster.
An ingress service is a special type of service that exposes an HTTP or HTTPS endpoint for external traffic. It is typically used to provide a single entry point for all traffic to the cluster, allowing you to route traffic to different services based on the incoming request.
To create an ingress service, you first need to define an ingress resource. This resource specifies the rules for routing traffic to the appropriate service, such as the hostname or path that should be used to route traffic to a particular service.
Once you have defined the ingress resource, you can create an ingress controller to implement the ingress rules. An ingress controller is a piece of software that runs within the cluster and listens for incoming traffic, routing it to the appropriate service based on the rules defined in the ingress resource.
Overall, an ingress service is a useful way to provide a single entry point for external traffic to your cluster, allowing you to easily route traffic to the appropriate service based on the incoming request.
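A minimal ingress resource might look like the following (hostname and service name are illustrative; it only takes effect once an ingress controller such as ingress-nginx is running in the cluster):

```yaml
# Hypothetical Ingress: routes traffic for app.example.com to the
# "web" service on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

Additional rules can route other hostnames or paths to different services through the same entry point.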
First of all: what’s the difference between a controller and an operator?
A controller is a piece of software that runs within the cluster and watches for changes to resources. When a change occurs, the controller takes some action to reconcile the current state of the resource with the desired state.
An operator is a controller that is designed to manage a specific type of resource (typically a `CustomResourceDefinition`). For example, a database operator might be responsible for managing a database resource, while a message queue operator might be responsible for managing a message queue resource.
Most operators are written in Go, but there are also operators written in other languages such as Python and Java. In theory, any language can be used to write an operator, as long as it can interact with the Kubernetes API, which is mostly done over HTTP using the Kubernetes OpenAPI client libraries.
However, some languages are better suited for writing operators than others. For example, Go is a good choice because it is a compiled language that produces a single binary, which makes it easy to distribute and run. It also has good support for concurrency, which is important for operators that need to handle multiple requests at once.
Some examples of languages that have good support for writing operators include:
Some operator frameworks also provide support for writing operators in other languages, such as:
A ==Kubernetes selector is a query over labels that is used to select the pods matching certain criteria==. Label selectors match objects based on their labels; field selectors can additionally match on resource fields such as a pod's name or status phase.
Selectors are used in a variety of Kubernetes operations, such as:
- Creating a deployment. When you create a deployment, you can specify a selector to select the pods that should be created.
- Creating a service. When you create a service, you can specify a selector to select the pods that should be exposed by the service.
- Scaling a deployment. When you scale a deployment, you can specify a selector to select the pods that should be scaled.
- Scheduling a pod. When you schedule a pod, you can use a node selector (`nodeSelector`) to constrain which nodes the pod may be scheduled on.
Selectors are a powerful tool that can be used to control how Kubernetes manages your pods.
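As a sketch, the two common selector styles, plus a `nodeSelector`, look like this (all labels and values are illustrative):

```yaml
# Equality-based selector, as used by a Service:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # selects pods labelled app=web
  ports:
    - port: 80
---
# Set-based selector (matchExpressions), as used by a Deployment,
# plus a nodeSelector constraining scheduling to labelled nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchExpressions:
      - key: environment
        operator: In
        values: [production, staging]
  template:
    metadata:
      labels:
        app: web
        environment: production   # satisfies the selector above
    spec:
      nodeSelector:
        disktype: ssd             # only schedule on nodes labelled disktype=ssd
      containers:
        - name: nginx
          image: nginx:1.25
```

The same label-query syntax also works on the command line, e.g. `kubectl get pods -l app=web`.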
Leases in Kubernetes
This is an overview of Kubernetes' coordination mechanisms, which ensure reliable and consistent leadership within a cluster's control plane. One of the key resources facilitating this process is the `Lease`, a lightweight resource in the `coordination.k8s.io` API group that plays a pivotal role in the leader election algorithm. It allows various components of the cluster to safely coordinate operations without clashing, which is essential for high-availability and fault-tolerant systems.
Leader election, enabled by `Leases`, is a strategy to ensure that a single instance of a component, like a controller or operator, is responsible for managing the shared state or performing specific tasks at any given time. This prevents multiple instances from attempting to perform the same operation simultaneously, which could lead to conflicts or inconsistent states within the cluster.
A `Lease` object is utilised by Kubernetes controllers to acquire a lock within a namespace for a specified duration. This lock must be continually renewed by the active leader. If the leader fails or becomes unresponsive, the `Lease` expires, allowing another controller instance to take over as the leader.
To explore more about how Kubernetes handles coordination and utilises `Leases` for maintaining cluster stability and reliability, visit the detailed section on Leases and coordination.
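For illustration, a `Lease` held by the current leader of a hypothetical controller might look like this (the holder identity, namespace, and field values are illustrative):

```yaml
# Hypothetical Lease: the lock acquired by the active leader.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: my-controller-leader
  namespace: kube-system
spec:
  holderIdentity: my-controller-5c9f7c-abcde  # identity of the current leader
  leaseDurationSeconds: 15                    # how long the lock stays valid
  renewTime: "2024-01-01T12:00:00.000000Z"    # leader must refresh before expiry
```

If the leader stops renewing `renewTime`, the lease expires after `leaseDurationSeconds` and another replica can write its own identity into `holderIdentity` to take over.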
Main page on Minikube is here.
Kubernetes tools page is here.