What is Kubernetes?
Kubernetes is, as its website puts it, a Container Orchestrator. A Container Orchestrator is a master service that schedules and executes containerized applications across clusters of machines.
But why do we need Container Orchestration? Given a single container, it may be simple to schedule and execute it by hand; however, can you guarantee that your applications:
- Are fault-tolerant
- Can scale, and do so on demand
- Use resources optimally
- Can discover each other automatically, and communicate with each other
- Are accessible from the external world
- Can update/rollback without any downtime
Perhaps; however, what happens when the number of containers you need to maintain grows? This is where Container Orchestration is useful. It fulfills the definition stated above and attempts to satisfy each of the properties listed.
Pods
Pods are the most basic building block within Kubernetes. A Pod represents an entire application, which may be a single container or a group of containers. In general, the most common use case is the former: one container per Pod. Containers within a Pod are always colocated on the same machine, and they share the same unique IP address and network port space. In addition, the containers in a Pod can share storage by mounting a common volume. This is useful for storing persistent data in case one of the Pod's containers fails and needs to be restarted.
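As a sketch of what this looks like in practice, here is a minimal Pod manifest. The names `my-app` and `web`, and the `nginx` image, are placeholders chosen for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # hypothetical Pod name
spec:
  containers:
    - name: web           # a single container, the most common case
      image: nginx:1.25   # any container image works here
      ports:
        - containerPort: 80
```

Applying this manifest asks Kubernetes to schedule one Pod running one container onto a machine in the cluster.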
Deployment
A Deployment is a high-level description of an application's Pods. It can describe the Pod template, how to roll out updates, CPU and memory requirements, and much more.
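A minimal Deployment manifest, continuing the hypothetical `my-app` example above, might look like the following. The replica count, labels, and resource values are illustrative assumptions, not requirements:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # run three identical Pods
  selector:
    matchLabels:
      app: my-app              # which Pods this Deployment manages
  strategy:
    type: RollingUpdate        # replace Pods gradually on updates
  template:                    # the Pod template described above
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.25    # placeholder image
          resources:
            requests:
              cpu: 100m        # example CPU and memory requests
              memory: 128Mi
```

Because the Deployment owns its Pods, changing the template (for example, bumping the image tag) triggers a rolling update rather than downtime.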
Kubernetes Architecture
From a high-level overview, Kubernetes consists of 3 core components: Master Nodes, Worker Nodes, and a distributed key-value store.
Master Node
The Master Node is responsible for all administrative tasks such as scaling, rolling back, providing a GUI dashboard, and responding to CLI requests. There can be more than one Master Node; if there are, the cluster is running in what is called High Availability (HA) mode. In this mode, one Master Node acts as the leader and handles all tasks, while all other Master Nodes are followers.
The Master Node is composed of 4 core components:
- API Server: This is a REST-based API server that users interact with in order to communicate with all other nodes.
- Scheduler: This assigns work to each Worker Node. It takes into account user-defined constraints on a task as well as the state of the Worker Nodes.
- Controller Manager: This is a monitoring service that observes operations done via the API Server. It attempts to correct states that are invalid.
- etcd: This is the distributed key-value store. It may be a part of the Master Node or be configured as a separate entity. It stores the cluster's state.
Worker Node
The Worker Node is responsible for running Pods and their containers.
It contains 3 core components:
- Container Runtime: This is simply a runtime environment that executes the Pods' containers. By default this is Docker.
- kubelet: This communicates with the Master Node to receive Pods and execute them. It also periodically performs health checks on the Pods it is monitoring.
- kube-proxy: This component manages and routes network traffic to each container.