Technology of Today: Kubernetes

Geetansh Sharma
13 min read · Dec 26, 2020

What is Kubernetes?

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

Kubernetes: Terminology and Architecture

Kubernetes introduces a lot of vocabulary to describe how your application is organized. We’ll start from the smallest layer and work our way up.

Pods

A Kubernetes pod is a group of containers, and is the smallest unit that Kubernetes administers. Pods have a single IP address that is shared by every container within the pod. Containers in a pod also share resources such as memory and storage volumes. This allows the individual Linux containers inside a pod to be treated collectively as a single application, as if all the containerized processes were running together on the same host, as in more traditional workloads. It’s quite common to have a pod with only a single container, when the application or service is a single process that needs to run. But when things get more complicated, and multiple processes need to work together using the same shared data volumes for correct operation, multi-container pods ease deployment configuration compared to setting up shared resources between containers on your own.

For example, if you were working on an image-processing service that created GIFs, one pod might have several containers working together to resize images. The primary container might be running the non-blocking microservice application taking in requests, and then one or more auxiliary (side-car) containers running batched background processes or cleaning up data artifacts in the storage volume as part of managing overall application performance.
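
As a rough sketch, such a multi-container pod might be declared like this (the pod name, image names, and mount path are hypothetical, not from any real service):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gif-maker                # hypothetical name for the image-processing pod
spec:
  volumes:
    - name: shared-images        # storage volume shared by all containers in the pod
      emptyDir: {}
  containers:
    - name: resize-service       # primary container: the microservice taking in requests
      image: example.com/gif-maker/resize:1.0    # hypothetical image
      volumeMounts:
        - name: shared-images
          mountPath: /data/images
    - name: artifact-cleaner     # side-car: cleans up data artifacts in the shared volume
      image: example.com/gif-maker/cleaner:1.0   # hypothetical image
      volumeMounts:
        - name: shared-images
          mountPath: /data/images
```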

Deployments

Kubernetes deployments define the scale at which you want to run your application by letting you set the details of how you would like pods replicated on your Kubernetes nodes. Deployments describe the number of desired identical pod replicas to run and the preferred update strategy used when updating the deployment. Kubernetes will track pod health, and will remove or add pods as needed to bring your application deployment to the desired state.
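
For illustration, here is a minimal sketch of a Deployment that asks for three identical replicas of the pod above, with a rolling update strategy (all names and the image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gif-maker
spec:
  replicas: 3                    # desired number of identical pod replicas
  strategy:
    type: RollingUpdate          # preferred update strategy
    rollingUpdate:
      maxUnavailable: 1          # at most one replica down during an update
  selector:
    matchLabels:
      app: gif-maker
  template:                      # the pod template Kubernetes replicates and tracks
    metadata:
      labels:
        app: gif-maker
    spec:
      containers:
        - name: resize-service
          image: example.com/gif-maker/resize:1.0   # hypothetical image
```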

Services

The lifetime of an individual pod cannot be relied upon; everything from its IP address to its very existence is prone to change. In fact, within the DevOps community, there’s the notion of treating servers as either “pets” or “cattle.” A pet is something you take special care of, whereas cattle are viewed as somewhat more expendable. In the same vein, Kubernetes doesn’t treat its pods as unique, long-running instances; if a pod encounters an issue and dies, it’s Kubernetes’ job to replace it so that the application doesn’t experience any downtime.

A service is an abstraction over the pods, and essentially the only interface the various application consumers interact with. As pods are replaced, their internal names and IPs might change. A service exposes a single machine name or IP address mapped to pods whose underlying names and addresses may change. A service ensures that, to the outside network, everything appears to be unchanged.
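
A sketch of a Service that gives consumers one stable name and port for whichever pods currently match a label (names and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gif-maker              # the stable DNS name consumers connect to
spec:
  selector:
    app: gif-maker             # traffic goes to whichever pods carry this label
  ports:
    - port: 80                 # port the service exposes
      targetPort: 8080         # port the pod's container actually listens on
```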

Nodes

A Kubernetes node manages and runs pods; it’s the machine (whether virtualized or physical) that performs the given work. Just as pods collect individual containers that operate together, a node collects entire pods that function together. When you’re operating at scale, you want to be able to hand work over to a node whose pods are free to take it.

The Kubernetes control plane

The Kubernetes control plane is the main entry point for administrators and users to manage the various nodes. Operations are issued to it either through HTTP calls or by connecting to a machine and running command-line tools. As the name implies, it controls how Kubernetes interacts with your applications.

Cluster

A cluster is all of the above components put together as a single unit.

Kubernetes components

With a general idea of how Kubernetes is assembled, it’s time to take a look at the software components that make sure everything runs smoothly, on both the control plane and the individual worker nodes.

Control plane

API Server

The API server exposes a REST interface to the Kubernetes cluster. All operations against pods, services, and so forth, are executed programmatically by communicating with the endpoints provided by it.

Scheduler

The scheduler is responsible for assigning work to the various nodes. It keeps watch over the resource capacity and ensures that a worker node’s performance is within an appropriate threshold.
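
One concrete input the scheduler works from is the resource requests declared on a pod’s containers; a minimal sketch (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-worker
spec:
  containers:
    - name: resize-service
      image: example.com/gif-maker/resize:1.0   # hypothetical image
      resources:
        requests:              # the scheduler only places this pod on a node
          cpu: "500m"          # with at least half a CPU core free
          memory: "256Mi"      # and 256 MiB of memory free
        limits:                # hard ceiling enforced on the node
          cpu: "1"
          memory: "512Mi"
```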

Controller manager

The controller manager is responsible for making sure that the shared state of the cluster is operating as expected. More precisely, it oversees various controllers which respond to events (e.g., a node going down).

Worker node components

Kubelet

The kubelet tracks the state of the pods on its node to ensure that all their containers are running. It sends a heartbeat message to the control plane every few seconds. If the control plane stops receiving those messages, the node is marked as unhealthy.
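
The kubelet is also what executes the health checks you declare on a container. A sketch of a liveness probe (image, path, and port are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: example.com/app:1.0    # hypothetical image
      livenessProbe:
        httpGet:
          path: /healthz            # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 5      # wait before the first probe
        periodSeconds: 10           # the kubelet probes every 10 seconds
      # if the probe keeps failing, the kubelet restarts the container
```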

Kube proxy

The kube-proxy routes traffic coming into a node from the services. It forwards requests for work to the correct pods and their containers.

etcd

etcd is a distributed key-value store that Kubernetes uses as the source of truth for the overall state of a cluster. (Strictly speaking, etcd lives alongside the control plane rather than on the worker nodes.) Additionally, components can refer to the global configuration data stored there to set themselves up whenever they are recreated.

Kubernetes: Some Use Cases

Learning Kubernetes by deploying a simple app

The first case where you can make use of Kubernetes may seem controversial, but it is still very useful. Let’s assume that we have a simple three-tier application with a backend written in Python or PHP, a database, and a front-end created in React or Angular. You can use Kubernetes to deploy it. Yes, from a purely practical point of view this would not be very reasonable: Kubernetes is complex, and creating a Kubernetes cluster to run one simple app would mean doing unnecessary work. Further, you could deploy such an app using other, less expensive solutions. But there is an educational purpose that shouldn’t be overlooked: in undertaking such a deployment, you will learn how to run a Kubernetes cluster and deploy applications on it.

There is one more practical and advanced scenario where we can use Kubernetes to deploy apps. Imagine we work in a creative agency that is developing a marketing webpage for a client in the pharmaceutical industry. Each medicine advertised on the main page requires a separate webpage presenting a leaflet of information about the medicine: its ingredients, dosage, possible adverse effects, and so on. Each medicine would also have a dedicated app. In this scenario, we would be well advised to call on the power of Kubernetes. Thanks to the better resource allocation it affords, it will be cheaper to run one dedicated K8s cluster than many separate servers for each website. What’s more, it will be much easier to manage such a cluster than to maintain separate hosts. So, in this case, using Kubernetes is perfectly reasonable; one way to keep the sites isolated on the shared cluster is sketched below.
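
A minimal sketch of that isolation, assuming one namespace per medicine’s site with a resource quota so no single site starves the others (all names and values are hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: medicine-a             # hypothetical: one namespace per medicine's site and app
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: medicine-a-quota
  namespace: medicine-a
spec:
  hard:
    requests.cpu: "2"          # cap on the CPU this site's pods can request
    requests.memory: 4Gi       # cap on the memory this site's pods can request
```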

Microservices architecture

Deploying a more complicated app with many components that communicate with one another is a classic scenario for Kubernetes. In fact, its origins go back to Google looking to deploy, manage and scale apps more efficiently by using containers; that’s how the container orchestration platform Kubernetes was born. So, we now have a K8s cluster with one complicated app deployed. This app has numerous components that communicate with one another, and Kubernetes helps you manage this communication.

This is closely related to another important trend in software development: microservice architecture, which I’ll explain using the example of an Internet bookstore. In such a store, we have different functionalities: managing users, ordering books, managing order lists, etc. There can be many such functionalities, and each of them is a separate app; such components are aptly called microservices. All these apps must communicate with each other, and without an orchestration layer, the code enabling that communication and coordination would have to be written separately for each component, in that component’s programming language.

Here you can clearly see the power of Kubernetes in managing microservices. It handles tasks for developers such as detecting problems with communication between the app’s components, managing the behavior of components in the event of a failure, and managing authentication between components. What’s more, as a particular component needs more or fewer resources, Kubernetes automatically scales it up or down. This is a clear advantage of the microservice architecture: scalability. You can scale a single component rather than the whole app.

Kubernetes has built-in tools like the Horizontal Pod Autoscaler, which helps ensure that each microservice has the optimal number of replicas. Thanks to this, cluster operators can be sure that the application has enough resources to work smoothly but doesn’t waste valuable ones.
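
A sketch of a HorizontalPodAutoscaler that keeps a single microservice’s Deployment between 2 and 10 replicas based on CPU usage (the service name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders                   # hypothetical bookstore microservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders                 # the Deployment being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70% of requests
```
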
Of course, at the design stage, it has to be decided which architecture is better for a given app, as there are many different approaches to software development. Microservices are not always the best choice. Still, if microservice architecture is chosen, Kubernetes offers a number of advantages. It simplifies the entire process of managing app components and considerably reduces the work needed to get the app up and running.

Lift and shift — from servers to cloud

This scenario occurs frequently today, as software is migrated from on-prem infrastructure to cloud solutions. Let’s imagine the following situation: we have an application deployed on physical servers in a classical data center, and for practical or economic reasons it has been decided to move it to the cloud, either onto a virtual machine or into big pods in Kubernetes. Of course, moving it into big pods in K8s isn’t a cloud-native approach, but it can be treated as an intermediary phase. First, the big app working outside the cloud is moved, as-is, into Kubernetes. It is then split into smaller components to become a regular cloud-native app. This methodology is called “lift and shift” and is a good use case where Kubernetes can be applied effectively.

Cloud-native Network Functions (CNF)

A few years ago, big telco companies had a problem. Their network services were based on hardware such as firewalls or load balancers provided by specialized hardware companies. Of course, this left them dependent on the hardware providers, and gave them little in the way of flexibility. If new functionality was needed, operators had to upgrade existing hardware. When a device firmware update was not possible, additional hardware had to be purchased. To address this disadvantage, the telcos opted to have all these network services as software and use Virtual Machines and OpenStack for network function virtualization (NFV). They now want to go a step further and use containers for the same purpose.

This approach is called Cloud-native Network Functions (CNF). R&D projects are now underway, focused on moving from VM-based Virtual Network Functions to container-based network functions. In such a scenario, Kubernetes would be responsible not only for orchestrating the containers, but also for directing network traffic to the proper pods. However, this is still a research area: there are not yet established standards for the various network components, so software providers deliver differing implementations of the same functions. The Cloud Native Computing Foundation (CNCF) and LF Networking (LFN) have joined forces to launch the Cloud Native Network Functions (CNF) Testbed in order to foster the evolution from VNFs to CNFs. We will be keeping abreast of the research in this area and foresee Kubernetes playing an important role here.

Machine learning and Kubernetes

Machine learning techniques are now widely used to solve real-life problems. Successes have come in multiple fields: self-driving cars, image recognition, machine translation, speech recognition, and game playing (Go or poker). Machine learning models have beaten even humans in games like Go, which was once thought too difficult a game for machines to crack. Moreover, AI could lead to real breakthroughs in detecting cancer and in drug discovery. The business world has not failed to get in on the technology, either. Google, Microsoft and Amazon, to name three behemoths, have all put machine learning to good use, while other companies are investing heavily to boost their AI capabilities.

Yet the process of building an effective AI model and using it in production is complicated and time-consuming. Building an app that can reliably recognize whether an image shows a cat or a dog is a case in point. First, a large dataset of images tagged “cat” or “dog” must be uploaded. Then, a machine learning model is trained to classify the data; the goal is a model that correctly recognizes images that appear in neither the training set nor the test set. After the model is trained, it is embedded in an app that is made available to the public.

As you can see, it takes time to put a trained AI model to use in an application. Therefore, many companies would like to simplify this process and make the lives of data scientists and ML engineers easier by introducing a toolkit to speed up the whole pipeline. In this way, the number of operations necessary to deploy such an app is significantly reduced, shortening the app’s time-to-market. In this scenario, enterprises can harness the power of Kubernetes, as all the calculations necessary to train the ML model are performed inside the K8s cluster. The data scientist or ML engineer only needs to clean the data and write the code; the rest is handled by a toolkit based on Kubernetes. Such toolkits are already available on the market: Kubeflow by Google and CodiLime spin-off Neptune both come to mind. The increasing demand for AI-powered solutions will surely further promote the adoption of Kubernetes.
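
At its simplest, a training run inside the cluster could be expressed as a Kubernetes Job; a sketch, assuming a hypothetical training image and command-line flags:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: train-cat-dog            # hypothetical training job
spec:
  backoffLimit: 2                # retry a failed training run up to twice
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: trainer
          image: example.com/ml/train:1.0                      # hypothetical training image
          args: ["--data", "/data/images", "--epochs", "10"]   # hypothetical flags
          resources:
            limits:
              nvidia.com/gpu: 1  # request one GPU for the training run
```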

Some Big Companies using Kubernetes

Tinder’s Move to Kubernetes

Due to high traffic volume, Tinder’s engineering team faced challenges of scale and stability. What did they do?

The answer is, of course, Kubernetes.

Tinder’s engineering team solved interesting challenges to migrate 200 services and run a Kubernetes cluster at scale totaling 1,000 nodes, 15,000 pods, and 48,000 running containers.

Was that easy? No way. However, they had to do it to keep business operations running smoothly as they grew. One of their engineering leaders said, “As we onboarded more and more services to Kubernetes, we found ourselves running a DNS service that was answering 250,000 requests per second.” Tinder’s entire engineering organization now has knowledge and experience on how to containerize and deploy their applications on Kubernetes.

Reddit’s Kubernetes Story

Reddit is one of the busiest sites in the world. Kubernetes forms the core of Reddit’s internal infrastructure.

For many years, the Reddit infrastructure team followed traditional approaches to provisioning and configuration. This only went so far: eventually they saw huge drawbacks and failures that came from doing things the old way, and they moved to Kubernetes.

The New York Times’s Journey to Kubernetes

Today the majority of the NYT’s customer-facing applications are running on Kubernetes. What an amazing story. The biggest impact has been an increase in the speed of deployment and productivity. Legacy deployments that took up to 45 minutes are now pushed in just a few. It’s also given developers more freedom and fewer bottlenecks. The New York Times has gone from a ticket-based system for requesting resources and weekly deploy schedules to allowing developers to push updates independently.

Airbnb’s Kubernetes Story

Airbnb’s transition from a monolithic to a microservices architecture is pretty amazing. They needed to scale continuous delivery horizontally, and the goal was to make continuous delivery available to the company’s 1,000 or so engineers so they could add new services. Airbnb adopted Kubernetes to support over 1,000 engineers concurrently configuring and deploying over 250 critical services to Kubernetes (at a frequency of about 500 deploys per day on average).

Pinterest’s Kubernetes Story


With over 250 million monthly active users and over 10 billion recommendations served every single day, the engineers at Pinterest knew these numbers were only going to grow, and they began to feel the pain of scalability and performance issues.

Their initial strategy was to move their workload from EC2 instances to Docker containers; they first moved their services to Docker to free up engineering time spent on Puppet and to have an immutable infrastructure.

The next step was to move to Kubernetes. Now they can take ideas from ideation to production in a matter of minutes, whereas earlier it used to take hours or even days. They have cut a great deal of overhead cost by utilizing Kubernetes and have removed much of the manual work, without engineers having to worry about the underlying infrastructure.

Pokémon Go’s Kubernetes Story

How was Pokémon Go able to scale so efficiently and become so successful? The answer is Kubernetes. Pokémon Go was developed and published by Niantic Inc., and grew to 500+ million downloads and 20+ million daily active users.

Pokémon Go engineers never thought their user base would increase exponentially to surpass expectations within a short time. They were not ready for it, and the servers couldn’t handle this much traffic.

Pokémon Go also faced a severe challenge when it came to vertical and horizontal scaling because of the real-time activity by millions of users worldwide. Niantic was not prepared for this.

The solution was in the magic of containers. The application logic for the game ran on Google Container Engine (GKE), powered by the open source Kubernetes project. Niantic chose GKE for its ability to orchestrate their container cluster at planetary scale, freeing the team to focus on deploying live changes for their players. In this way, Niantic used Google Cloud to turn Pokémon GO into a service for millions of players that could be continuously adapted and improved. This gave them more time to concentrate on building the game’s application logic and new features rather than worrying about scaling.

*************THANKS FOR READING*************
