With microservices in place, it is hard to imagine packaging an application without containers, and Docker has done a decent job of providing a way to containerize applications. But Docker alone struggles to manage applications at scale: it does not offer a comprehensive, built-in solution for auxiliary concerns such as auto-scaling, load balancing, and service discovery, which are standard requirements in microservices architectures.
Many of us started leveraging cloud-based services to address these challenges, but those services bind your application to a specific cloud, and there is no effective and efficient way to move out of it.
Kubernetes fits the bill exactly. It helps us design and build scalable, highly maintainable applications that work across all platforms, and it is the most popular tool/framework for building cloud-agnostic, scalable, and maintainable systems.
How does Kubernetes work?
A Kubernetes cluster consists of one or more nodes; one node acts as the master, and the remaining nodes act as workers (slaves).
The Master node contains the following items:
- API Server: The central management entity, and the only component that talks directly with the distributed storage component etcd, over an HTTP API using JSON.
- Scheduler: Selects the nodes on which containers will run.
- Controller manager: It is used to run the controllers.
- Etcd: It is used as a global configuration store.
- Dashboard: It is used for managing the cluster via the web UI running on the master node.
- Kubectl: The command-line alternative for managing the cluster. It is very powerful and supports almost all the features of the dashboard from the command line.
Each slave (worker) node contains the following items:
- Container runtime: Runs the application's containers (for example, Docker Engine).
- Kubelet: It is responsible for starting, stopping, and managing individual containers by requests from the Kubernetes control plane.
- Kube-proxy: It is responsible for networking and load balancing
What are the basic building blocks?
A pod is the smallest deployable unit in Kubernetes. The main use of a pod is to host closely related services, such as an application and its peripheral services like a cache. A pod contains one or more containers; these containers share an IP address and port space, can reach each other via localhost, and can communicate through shared memory. Most of the time, a pod contains just one container.
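As an illustration, a minimal pod manifest might look like the following sketch (the names and image used here are illustrative, not from the original article):

```yaml
# A minimal pod with a single container (name and image are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  labels:
    app: demo
spec:
  containers:
    - name: web            # the main application container
      image: nginx:1.21    # any container image can be used here
      ports:
        - containerPort: 80
```

Applying this manifest with `kubectl apply -f pod.yaml` asks the cluster to schedule the pod on one of the worker nodes.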
Kubernetes gives us a provision to name/label the resources for example pods. A Label is a key/value pair that can be given any resource.
If you want to perform an action on a resource or group of resources, you first need to identify those resources. A selector allows us to find a resource or set of resources by specifying a label or a set of labels.
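For example, a pod can carry labels in its metadata, and a selector can then pick out all matching resources (the label keys and values below are illustrative):

```yaml
# Labels are arbitrary key/value pairs attached to a resource's metadata.
metadata:
  labels:
    app: demo
    tier: frontend
```

With these labels in place, `kubectl get pods -l app=demo,tier=frontend` selects exactly the pods carrying both labels.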
The controller is the main engine of Kubernetes: it manages all resources and owns the responsibility of keeping the cluster in the defined state. It comprises multiple independent processes, each of which takes care of a specific need.
Replication Controller: Replication controller is responsible for running the specified number of pod copies (replicas) across the cluster.
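A replication controller spec declares the desired replica count, a selector for the pods it manages, and a template for creating them. A minimal sketch (names and image are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: demo-rc
spec:
  replicas: 3          # desired number of pod copies across the cluster
  selector:
    app: demo          # pods matching this label are managed by this controller
  template:            # pod template used to create new replicas
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.21
```

If a pod dies, the controller notices that only two replicas match the selector and starts a third to restore the declared state.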
Deployment Controller: The deployment controller takes care of rolling new images out to the desired environment and rolling back as needed; rolling back restores the previous state as the current state.
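In a Deployment manifest, rolling out a new image amounts to updating the image field; Kubernetes keeps rollout history, so the previous state can be restored. A hedged sketch (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.21   # change this tag to trigger a rolling update
```

After a bad rollout, `kubectl rollout undo deployment/demo-deploy` puts the previous revision back in place.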
Node Controller: It continuously monitors the nodes; if any node goes down or stops responding, it gets into action immediately.
A service is a logical group of pods that provides a communication abstraction to other pods. Since pods can die at any time and new pods can start at runtime, tight coupling between groups of pods would lead to serious communication failures; services in Kubernetes facilitate this decoupling.
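A service decouples consumers from individual pods by targeting them through a label selector rather than by address. A minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo          # traffic is routed to any running pod with this label
  ports:
    - port: 80         # port the service exposes inside the cluster
      targetPort: 80   # container port the traffic is forwarded to
```

Consumers talk to the stable service name `demo-svc`; as pods come and go, the service keeps routing to whichever pods currently match the selector.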
Cluster and Nodes:
A cluster is simply a group of nodes. Each node is a physical or virtual machine running the Docker container runtime and the kubelet service, as discussed in the 'How does Kubernetes work?' section.
Kubernetes was built by Google, primarily to handle heavy production workloads for its products. Google open-sourced it in 2014, and it is now supported by a strong community along with Google.
When and Where to use?
The table below lists features offered by Kubernetes and the equivalent services on popular public clouds.
| Kubernetes Feature | AWS | Google Cloud | Azure |
| --- | --- | --- | --- |
| Auto Scaling | AWS Autoscale | Google Cloud Load Balancing | Azure Application Gateway |
| Networking | Amazon Virtual Private Cloud (VPC) | Google Virtual Private Cloud | Azure Virtual Network (VNET) |
| Health checks & resource usage monitoring | Amazon CloudWatch | Google Cloud Health Check | Azure Service Health |
| Service discovery | Amazon ECS | Google Cloud Metadata Server | Azure Service Fabric |
| Load balancing | Amazon Elastic Load Balancing | Google Cloud Load Balancing | Microsoft Azure Load Balancing |
| Rolling update | AWS Autoscale | Google Cloud Load Balancing | Azure Application Gateway |
| Volume management | Amazon Elastic Block Store | Google Cloud Storage | Microsoft Azure Storage |
I hope this article was informative and leaves you with a better understanding of Kubernetes.
At Walking Tree, we are excited about the possibilities that Kubernetes brings in. Stay tuned for more articles on this topic.