Kubernetes (also known as K8s) is an open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). Its main components and abstractions include:
- Master node: This is the central control plane of the Kubernetes cluster, responsible for managing the overall state of the system. The master node runs a number of components, including the API server, scheduler, and controller manager.
- Worker nodes: These are the machines that actually run the containers. A worker node runs a container runtime, such as Docker or containerd, as well as a kubelet process, which is responsible for communicating with the master node and launching containers on the node.
- Pods: A pod is the smallest deployable unit in Kubernetes. It is a logical host for one or more containers, and all containers in a pod are scheduled on the same node. Pods provide a shared context for the containers, allowing them to communicate with each other and access shared resources such as shared storage.
- Replication controller: A replication controller ensures that a specified number of replicas of a pod are running at any given time. If a pod fails, the replication controller will create a new one to replace it.
- Services: A service is a logical abstraction that represents a group of pods and defines how they should be accessed. Services allow you to access your application through a stable, load-balanced endpoint, rather than having to directly access individual pods.
- Deployments: A deployment is a higher-level abstraction that represents a desired state for your application. It specifies the number of replicas of a pod that should be running and manages the process of rolling out updates to your application.
- Volumes: A volume is a persistent storage mechanism that can be mounted into a pod. It allows containers to access data that is stored outside of the container itself, such as on a shared filesystem or in a cloud storage service.
- Custom resource definitions (CRDs): CRDs let you define custom resources in Kubernetes. A custom resource is a resource that is not natively supported by the Kubernetes API, but that you can create and manage as if it were.
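Several of the concepts above can be seen together in a single manifest. The following is a minimal sketch, not a production configuration: a Deployment that keeps three replicas of a pod running, and a Service that load-balances across them. The names `web` and the label `app: web` are illustrative choices.

```yaml
# Deployment: declares the desired state (3 replicas of an nginx pod).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# Service: a stable, load-balanced endpoint in front of the pods
# selected by the "app: web" label.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

If a pod backing the Deployment fails, Kubernetes replaces it, and the Service automatically routes traffic only to healthy replicas.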
Custom Resource Definitions
A Custom Resource Definition (CRD) is an endpoint in the Kubernetes API that stores a collection of API objects of a given kind. This abstraction permits extending the Kubernetes API with new resource definitions.
For example, you might want to create a custom resource to manage a specific type of resource in your application, such as a database or a message queue. With CRDs, you can define the custom resource and the associated API for managing it, and then use the Kubernetes API to create, delete, and update instances of that resource.
CRDs are useful because they allow you to extend the Kubernetes API with custom resources that are specific to your application or environment. This can make it easier to manage your application within the Kubernetes ecosystem, as you can use the same tools and APIs to manage both native and custom resources.
To use custom resource definitions, you first need to create a definition of the custom resource in the form of a CustomResourceDefinition object. This object defines the properties of the custom resource, such as its name, scope, and version, as well as the schema for the resource's data. Once you have created the definition, you can use the Kubernetes API to create and manage instances of the custom resource.
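As a sketch of the steps above, the following defines a hypothetical `Database` resource (the group `example.com`, the kind name, and the schema fields are all invented for illustration), followed by an instance of it:

```yaml
# CustomResourceDefinition: registers a new "Database" kind with the API.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The name must be <plural>.<group>.
  name: databases.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
                replicas:
                  type: integer
---
# An instance of the new resource, created and managed through the
# Kubernetes API like any native object.
apiVersion: example.com/v1
kind: Database
metadata:
  name: orders-db
spec:
  engine: postgres
  replicas: 2
```

Once the definition is applied, instances can be managed with the usual tooling, for example `kubectl get databases`.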
In Kubernetes, an ingress is a collection of rules that allow inbound connections to reach the cluster services. It acts as a reverse proxy, routing traffic from the Internet to the appropriate service within the cluster.
An ingress exposes an HTTP or HTTPS endpoint for external traffic. It is typically used to provide a single entry point for all traffic to the cluster, allowing you to route traffic to different services based on the incoming request.
To create an ingress service, you first need to define an ingress resource. This resource specifies the rules for routing traffic to the appropriate service, such as the hostname or path that should be used to route traffic to a particular service.
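An ingress resource of this kind might look like the following sketch, which routes by path under a placeholder hostname; the hostname and the backing service names (`api-service`, `web-service`) are assumptions for illustration:

```yaml
# Ingress: routes requests for app.example.com to different services
# depending on the request path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```

Note that this resource is only a set of rules; it has no effect until an ingress controller in the cluster implements it.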
Once you have defined the ingress resource, you can create an ingress controller to implement the ingress rules. An ingress controller is a piece of software that runs within the cluster and listens for incoming traffic, routing it to the appropriate service based on the rules defined in the ingress resource.
Overall, an ingress is a useful way to provide a single entry point for external traffic to your cluster, routing each request to the appropriate service.
For more information, see the main Minikube page and the Kubernetes tools page.