AKS: Azure Kubernetes Service

Geetansh Sharma
13 min read · Mar 2, 2021

What is AKS?

AKS is a fully managed container orchestration service, generally available since June 2018 on the Microsoft Azure public cloud, that can be used to deploy, scale, and manage Docker containers and container-based applications in a cluster environment. Azure Kubernetes Service (AKS) offers serverless Kubernetes, an integrated continuous integration and continuous delivery (CI/CD) experience, and enterprise-grade security and governance. It unites your development and operations teams on a single platform to rapidly build, deliver, and scale applications with confidence.

To learn more about Kubernetes itself, refer to my earlier article on the topic.

Azure Kubernetes Service Benefits

Azure Kubernetes Service is currently competing with both Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE). It offers numerous features such as creating, managing, scaling, and monitoring Azure Kubernetes Clusters, which is attractive for users of Microsoft Azure. The following are some benefits offered by AKS:

  • Efficient resource utilization: The fully managed AKS offers easy deployment and management of containerized applications with efficient resource utilization, elastically provisioning additional resources without the overhead of managing the Kubernetes infrastructure.
  • Faster application development: Developers spend much of their time on bug-fixing. AKS reduces debugging time by handling patching, auto-upgrades, and self-healing, and it simplifies container orchestration. This saves considerable time and lets developers focus on building their apps while remaining more productive.
  • Security and compliance: Cybersecurity is one of the most important aspects of modern applications and businesses. AKS integrates with Azure Active Directory (AD) and offers on-demand access to users, greatly reducing threats and risks. AKS is also compliant with standards and regulatory requirements such as System and Organization Controls (SOC), HIPAA, ISO, and PCI DSS.
  • Quicker development and integration: Azure Kubernetes Service (AKS) supports auto-upgrades, monitoring, and scaling, and helps minimize infrastructure maintenance, which leads to comparatively faster development and integration. It also supports provisioning additional compute resources in serverless Kubernetes within seconds, without worrying about managing the Kubernetes infrastructure.

Azure Kubernetes Service Features

Microsoft Azure offers Azure Kubernetes Service to simplify managed Kubernetes cluster deployment in the public cloud environment, and it also manages the health and monitoring of the hosted Kubernetes control plane. Customers can create AKS clusters using the Azure portal or the Azure CLI and can manage the agent nodes.

A template-based deployment using Terraform and Resource Manager templates can also be chosen to deploy the AKS cluster that manages the auto-configuration of master and worker nodes of the Kubernetes cluster. Some additional features such as advanced networking, monitoring, and Azure AD integration can also be configured. Let’s take a look into the features that Azure Kubernetes Service (AKS) offers:
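As a minimal sketch of the CLI path mentioned above, a basic AKS cluster can be provisioned with the Azure CLI; the resource group name, cluster name, and region below are illustrative placeholders:

```shell
# Create a resource group to hold the cluster
az group create --name myResourceGroup --location eastus

# Create a three-node AKS cluster with the monitoring add-on enabled
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Fetch credentials so kubectl can talk to the new cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```

The same cluster definition can instead be captured in a Resource Manager or Terraform template when repeatable, versioned deployments are preferred.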

Open-source environment with enterprise commitment

Over the past few years, Microsoft has hired many engineers to make Kubernetes easier for businesses and developers to use and to participate in open-source projects, becoming one of the largest corporate contributors to the project. It helps make Kubernetes more business-oriented, cloud-native, and accessible by bringing best practices and lessons learned from a diverse set of customers and users back to the Kubernetes community.

Nodes and clusters

In AKS, apps and supporting services run on Kubernetes nodes, and an AKS cluster is a combination of one or more nodes. These AKS nodes run on Azure Virtual Machines. Nodes with the same configuration are grouped together into a node pool. Nodes in the Kubernetes cluster are scaled up and down according to the resources required in the cluster. Nodes, clusters, and node pools are thus the most prominent components of your Azure Kubernetes environment.
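For illustration, node pools can be added and scaled with the Azure CLI; the cluster, pool, and VM-size names here are placeholders:

```shell
# Add a second node pool of larger VMs to an existing cluster
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool \
  --node-count 2 \
  --node-vm-size Standard_DS3_v2

# Scale the pool manually to four nodes...
az aks nodepool scale \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool \
  --node-count 4

# ...or let the cluster autoscaler manage the count within bounds
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool \
  --enable-cluster-autoscaler --min-count 2 --max-count 6
```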

Role-based access control (RBAC)

AKS easily integrates with Azure Active Directory (AD) to provide role-based access, security, and monitoring of Kubernetes architecture on the basis of identity and group membership. You can also monitor the performance of your AKS and the apps.
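As a sketch of that integration, a cluster can be created with Azure AD and Azure RBAC enabled, and access can then be granted per user or group; the names and the angle-bracket placeholders below are illustrative:

```shell
# Create a cluster with Azure AD integration and Azure RBAC for Kubernetes
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-aad \
  --enable-azure-rbac \
  --generate-ssh-keys

# Grant a user read-only access to the cluster via a built-in role
az role assignment create \
  --assignee <user-object-id> \
  --role "Azure Kubernetes Service RBAC Reader" \
  --scope <cluster-resource-id>
```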

Integration of development tools

Another important feature of AKS is its integration with development tools such as Helm and Draft, while Azure Dev Spaces provides developers with a quicker, iterative Kubernetes development experience. Containers can be run and debugged directly in the Azure Kubernetes environment with minimal configuration effort.

AKS also offers support for the Docker image format and can integrate with Azure Container Registry (ACR) to provide private storage for Docker images. Ongoing compliance with industry standards such as System and Organization Controls (SOC), Payment Card Industry Data Security Standard (PCI DSS), Health Insurance Portability and Accountability Act (HIPAA), and ISO makes AKS more trustworthy across various businesses.
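A rough sketch of the ACR integration, with placeholder registry and image names (ACR registry names must be globally unique):

```shell
# Create a private container registry
az acr create --resource-group myResourceGroup --name myregistry --sku Basic

# Grant the AKS cluster pull access to the registry
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --attach-acr myregistry

# Build and push an image in ACR; pods can then reference it as
# myregistry.azurecr.io/myapp:v1
az acr build --registry myregistry --image myapp:v1 .
```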

Running any workload in Azure Kubernetes Service

You can orchestrate any type of workload running in the AKS environment. You can move .NET apps to Windows Server containers, modernize Java apps in Linux containers, or run microservices in Azure Kubernetes Service. AKS will run any type of workload in the cluster environment.

Removes complexities

AKS removes the implementation, installation, maintenance, and security complexities from your Azure cloud architecture. It also reduces costs substantially, since no per-cluster charges are imposed for cluster management.

Use-Cases of Azure Kubernetes Service

We’ll take a look at different use cases where AKS can be applied:

  • Migration of existing applications: You can easily migrate existing apps to containers and run them with Azure Kubernetes Service. You can also control access via Azure AD integration and SLA-based Azure Services like Azure Database using Open Service Broker for Azure (OSBA).
  • Simplifying the configuration and management of microservices-based Apps: You can also simplify the development and management of microservices-based apps as well as streamline load balancing, horizontal scaling, self-healing, and secret management with AKS.
  • Bringing DevOps and Kubernetes together: AKS is also a reliable resource to bring Kubernetes and DevOps together for securing DevOps implementation with Kubernetes. Bringing both together, it improves the security and speed of the development process with Continuous Integration and Continuous Delivery (CI/CD) with dynamic policy controls.
  • Ease of scaling: AKS can also ease scaling by pairing with Azure Container Instances (ACI). With the AKS virtual node, you can provision pods inside ACI that start within a few seconds, enabling AKS to run with exactly the resources it needs. If your AKS cluster runs out of resources, it will scale out additional pods automatically, with no additional servers to manage in the Kubernetes environment.
  • Data streaming: AKS can also be used to ingest and process real-time data streams with data points via sensors and perform quick analysis.
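The virtual-node bursting described above is enabled as an AKS add-on; this sketch assumes a cluster using Azure CNI networking, and the subnet name is a placeholder:

```shell
# Enable the virtual node add-on so pods can burst into Azure Container
# Instances; the subnet must exist in the cluster's virtual network
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons virtual-node \
  --subnet-name myVirtualNodeSubnet
```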

Case Study: WhiteSource simplifies deployments using Azure Kubernetes Service

WhiteSource simplifies open-source usage management for security and compliance professionals worldwide. Now the WhiteSource solution can meet the needs of even more companies, thanks to a re-engineering effort that incorporated Azure Kubernetes Service (AKS).

WhiteSource was created by software developers on a mission to make it easier to consume open-source code. Founded in 2008, the company is headquartered in Israel, with offices in Boston and New York City. Today, WhiteSource serves customers around the world, including Fortune 100 companies. As much as 60 to 70 percent of the modern codebase includes open-source components. WhiteSource simplifies the process of consuming these components and helps to minimize the cost and effort of securing and managing them so that developers can freely and fearlessly use open-source code.

WhiteSource is a user-friendly, cloud-based, open-source management solution that automates the process for monitoring and documenting open-source dependencies. The WhiteSource platform continuously detects all the open-source components used in a customer’s software using a patent-pending Contextual Pattern Matching (CPM) Engine that supports more than 200 programming languages. It then compares these components against the extensive WhiteSource database. Unparalleled in its coverage and accuracy, this database is built by collecting up-to-date information about open-source components from numerous sources, including various vulnerability feeds and hundreds of community and developer resources. New sources are added on a daily basis and are validated and ranked by credibility.

Simplifying deployments, monitoring, availability, and scalability

WhiteSource was looking for a way to deliver new services faster to provide more value for its customers. The solution required more agility and the ability to quickly and dynamically scale up and down, while maintaining the lowest costs possible.

Because WhiteSource is a security DevOps–oriented company, its solution required the ability to deploy fast and to roll back even faster. Focusing on an immutable approach, WhiteSource was looking for a built-in solution to refresh the environment upon deployment or restart, keeping no data on the app nodes.

This was the impetus to investigate containers. Containers make it possible to run multiple instances of an application on a single instance of an operating system, thereby using resources more efficiently. Containers also enable continuous deployment (CD), because an application can be developed on a desktop, tested in a virtual machine (VM), and then deployed for production in the cloud.

Finding the right container solution

The WhiteSource development team explored many vendors and technologies in its quest to find the right container orchestrator. The team knew that it wanted to use Kubernetes because it was the best established container solution in the open-source community. The WhiteSource team was already using other managed alternatives, but it hoped to find an even better way to manage the building process of Kubernetes clusters in the cloud. The solution needed to quickly scale per work queue and to keep the application environment clean post-execution. However, the Kubernetes management solutions that the team tried were too cumbersome to deploy, maintain, and get proper support for.

Fortunately, WhiteSource has a long-standing relationship with Microsoft. A few years ago, the team responsible for Microsoft Visual Studio Team Services (now Azure DevOps) reached out to WhiteSource after hearing customers request a better way to manage the open-source components in their software. Now WhiteSource Bolt is available as a free extension to Azure DevOps in the Azure Marketplace. In addition, Microsoft signed a global agreement with WhiteSource to use the WhiteSource solution to track open-source components in Microsoft software and in the open-source projects that Microsoft supports.

A Microsoft solution specialist demonstrated Azure Kubernetes Service to the WhiteSource development team, and the team knew immediately that it had found the right easy-to-use solution. AKS manages a hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications — without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand, without taking the application offline. AKS also supported a cloud-agnostic solution that could run the WhiteSource application on multiple clouds.

Architecture

The WhiteSource solution was redesigned as a multicontainer application. The application is written mostly in Java and runs under a WildFly (JBoss) application server. It is deployed to an AKS cluster that pulls images from Azure Container Registry and runs in 60 to 70 Kubernetes pods.

The main WhiteSource app runs on Azure Virtual Machines and is exposed from behind a web application firewall (WAF) in Azure Application Gateway. The WhiteSource developers were early adopters of Application Gateway, which provides an application delivery controller (ADC) as a service with Layer 7 load-balancing capabilities — in effect, handling the solution’s front end.

The services that run on AKS communicate with the front end through Application Gateway to get the requests from clients, process them, and then return the answers to the application servers. When the WhiteSource application starts, it samples an Azure Database for MySQL database and looks for an open-source component to process. After finding the data, it starts processing, sends the results to the database, and expires. The process running in the container can be scrubbed entirely from the environment. The container starts fresh, and no data is saved. Then it starts all over.

Containers also make it easy to continuously build and deploy applications. The containerized workflow is integrated into the WhiteSource continuous integration (CI) and continuous deployment in Jenkins. The developers update the application by pushing commits to GitHub. Jenkins automatically runs a new container build, pushes container images to Azure Container Registry, and then runs the app in AKS. By setting up a continuous build to produce the WhiteSource container images and orchestration, the team has increased the speed and reliability of its deployments. In addition, the new CI/CD pipeline serves environments hosted on multiple clouds.
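The commit-to-deploy loop described above can be sketched in a few commands; the image, registry, and deployment names are hypothetical stand-ins, not WhiteSource's actual pipeline:

```shell
# Build the application image and push it to Azure Container Registry
docker build -t myregistry.azurecr.io/myapp:${BUILD_NUMBER} .
az acr login --name myregistry
docker push myregistry.azurecr.io/myapp:${BUILD_NUMBER}

# Roll the new image out to the AKS deployment (triggers a rolling update)
kubectl set image deployment/myapp myapp=myregistry.azurecr.io/myapp:${BUILD_NUMBER}
kubectl rollout status deployment/myapp
```

In a Jenkins pipeline, each of these steps would typically map to a stage triggered by the GitHub push.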

We write our AKS manifests and implement CI/CD so we can build it once and deploy it on multiple clouds. That is the coolest thing!

Uzi Yassef, senior DevOps engineer, WhiteSource

Azure services in the WhiteSource solution

The WhiteSource solution is built on a stack of Azure services that includes the following primary components, in addition to AKS:

  • Azure Virtual Machine scale sets are used to run the AKS containers. They make it easy to create and manage a group of identical, load-balanced, and autoscaling VMs and are designed to support scale-out workloads, like the WhiteSource container orchestration based on AKS.
  • Application Gateway is the web traffic load balancer that manages traffic to the WhiteSource application. A favorite feature is connection draining, which enables the developers to change members within a back-end pool without disruption to the service. Existing connections to the WhiteSource application continue to be sent to their previous destinations until either the connections are closed or a configurable timeout expires.
  • Azure Database for MySQL is a relational database service based on the open-source MySQL Server engine that stores information about a customer’s detected open-source components.
  • Azure Blob storage is optimized for storing massive amounts of unstructured data. The WhiteSource application uses blob storage to serve reports directly to a customer’s browser.
  • Azure Queue storage is used to store large numbers of messages that can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS. The WhiteSource solution uses queues to create a backlog of work to process asynchronously.

Using AKS, we get all of the advantages of Kubernetes as a service without the overhead of building and maintaining our own managed cluster. And I get my support in one place for everything. As a Microsoft customer, for me, that’s very important.

Uzi Yassef, senior DevOps engineer, WhiteSource

Benefits of AKS

The WhiteSource developers couldn’t help comparing AKS to their experience with Amazon Elastic Container Service for Kubernetes (Amazon EKS). They felt that the learning curve for AKS was considerably shorter. Using the AKS documentation, walk-throughs, and example scenarios, they ramped up quickly and created two clusters in less time than it took to get started with EKS. The integration with other Azure components provided a great operational and development experience, too.

Other benefits included:

  • Automated scaling. With AKS, WhiteSource can scale its container usage according to demand. The workloads can change dynamically, enabling some background processes to run when the cluster is idle, and then return to running the customer-facing services when needed. In addition, more instances can run for a much lower cost than they could with the previous methods the company used.
  • Faster updates. Security is a top priority for WhiteSource. The company needs to update its databases of open-source components as quickly as possible with the latest information. The team was accustomed to a more manual deployment process, so the ease of AKS surprised them. Its integration with Azure DevOps and its CD pipeline makes it simple to push updates as often as needed.
  • Safer deployments. The WhiteSource deployment pipeline includes rolling updates. An update can be deployed with zero downtime by incrementally updating pod instances with new ones. Even if an update includes an error and the application crashes, it doesn’t terminate the other pods and performance is not affected.
  • Control over configurations. The entire WhiteSource environment is set up to use a Kubernetes manifest file that defines the desired state for the cluster and the container images to run. The manifest file makes it easy to configure and create all the objects needed to run the application on Azure.
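As an illustrative sketch of the two points above (the names are hypothetical, not WhiteSource's actual manifests), a Kubernetes Deployment manifest can declare both the desired state and a zero-downtime rolling-update strategy:

```shell
# Apply a minimal Deployment manifest inline; the rollingUpdate strategy
# replaces pods incrementally so the service stays available during updates
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down at a time
      maxSurge: 1         # at most one extra pod during the update
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry.azurecr.io/myapp:v1
EOF
```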

Summary

The developers at WhiteSource understand that every improvement they make to the infrastructure of their solution helps enhance the product offering. They are moving forward with containers and beginning to containerize other utilities and small tool sets. In addition, all new development is now being done using AKS.

The simplicity of the AKS deployments has more than made up for any inconvenience related to the move to containers. The entire managed Kubernetes experience — from the first demo of AKS to the most recent application deployment — far exceeded the company’s expectations.

In the past, we were setting up our own Kubernetes cluster, maintaining it, and updating it. It was cumbersome and it took time and specialized knowledge. Now we can just focus on building, deploying, and maintaining our application. AKS shortens the time and helps us to focus more on innovation.

Uzi Yassef, senior DevOps engineer, WhiteSource

Conclusion

Businesses are transforming from on-premises to the cloud very quickly while building and managing modern, cloud-native applications. Kubernetes is an open-source solution that supports building and deploying cloud-native apps with complete orchestration. Azure Kubernetes Service is a robust and cost-effective container orchestration service that helps you deploy and manage containerized applications in seconds, with additional resources assigned automatically and no additional servers to manage.

AKS nodes are scaled out automatically as demand increases. It has numerous benefits, such as security with role-based access, easy integration with other development tools, and the ability to run any workload in the Kubernetes cluster environment. It also offers efficient resource utilization, removes complexity, scales out easily, and supports migrating existing workloads to a containerized environment; all containerized resources can be accessed via the Azure portal or the Azure CLI.
