What is OpenShift?
OpenShift is a family of containerization software products developed by Red Hat. Its flagship product is the OpenShift Container Platform — an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux.
What is the use of OpenShift?
Red Hat OpenShift is a Kubernetes distribution focused on developer experience and application security, and it is platform agnostic. OpenShift helps you develop and deploy applications to one or more hosts. These can be public-facing web applications or backend applications, including microservices and databases.
OpenShift — Architecture
OpenShift is a layered system in which each layer is tightly bound to the others through Kubernetes and Docker. The architecture of OpenShift is designed to support and manage Docker containers, which are hosted on top of all the layers using Kubernetes. Unlike the earlier OpenShift V2, the new OpenShift V3 is built on containerized infrastructure: Docker creates lightweight Linux-based containers, and Kubernetes orchestrates and manages those containers across multiple hosts.
Components of OpenShift
A key responsibility of the OpenShift architecture is managing containerized infrastructure through Kubernetes, which handles the deployment and management of that infrastructure. Any Kubernetes cluster can have more than one master and multiple nodes, which ensures there is no single point of failure in the setup.
Kubernetes Master Machine Components
Etcd − It stores configuration information that can be used by each of the nodes in the cluster. It is a highly available key-value store that can be distributed among multiple nodes. Because it may hold sensitive information, it should be accessible only to the Kubernetes API server.
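Etcd's role as the cluster's configuration store can be pictured with a minimal in-memory stand-in (purely illustrative; real etcd is a replicated, distributed store reached over the network, and the keys shown here are made up for the example):

```python
# In-memory stand-in for the role etcd plays: a key-value store holding
# cluster configuration that control-plane components read and write.
# (Illustrative only -- real etcd is distributed and replicated.)

class ConfigStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

store = ConfigStore()
# The API server persists cluster state under hierarchical keys:
store.put("/registry/pods/default/web", {"node": "node-1", "phase": "Running"})
print(store.get("/registry/pods/default/web")["phase"])  # Running
```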
API Server − The API server is the front end of the Kubernetes control plane; every operation on the cluster goes through the API it exposes. Because it implements a well-defined REST interface, different tools and libraries can readily communicate with it. A kubeconfig file, used together with the client-side tools, holds the server details and credentials needed for this communication.
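This API-driven communication can be sketched with Python's standard library. The sketch only builds the HTTP request a client would issue; the server address and token are invented for the example:

```python
# Sketch of how a client talks to the API server: every cluster operation
# is an HTTP call against a REST path. The request is built but not sent;
# the server URL and token below are placeholders.
import urllib.request

def list_pods_request(server, namespace, token):
    """Build (but do not send) the GET request that lists pods in a namespace."""
    return urllib.request.Request(
        f"{server}/api/v1/namespaces/{namespace}/pods",
        headers={"Authorization": f"Bearer {token}"},
    )

req = list_pods_request("https://master.example.com:6443", "default", "TOKEN")
print(req.full_url)
```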
Controller Manager − This component is responsible for most of the controllers that regulate the state of the cluster. It can be thought of as a daemon running in a non-terminating loop that collects information, sends it to the API server, and works to bring the current state of the cluster to the desired state. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller; the controller manager runs these and other controllers to handle nodes, endpoints, and so on.
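The reconciliation behavior described above can be sketched in a few lines of Python (a simplified, hypothetical illustration; real controllers are Go programs that watch the API server, and the function and names here are invented for the example):

```python
# Sketch of one iteration of a controller's reconciliation loop: compare
# desired state with observed state and emit the actions that close the gap.
# (Hypothetical illustration; real controllers watch the API server.)

def reconcile(desired_replicas, running_pods):
    """Return the actions needed to move current state toward desired state."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: create the missing replicas.
        return [("create_pod", None)] * diff
    if diff < 0:
        # Too many pods: delete the surplus.
        return [("delete_pod", pod) for pod in running_pods[:-diff]]
    return []  # current state already matches desired state

# One pass of the non-terminating loop:
actions = reconcile(desired_replicas=3, running_pods=["pod-a"])
print(actions)  # two create actions bring the cluster up to 3 replicas
```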
Scheduler − A key component of the Kubernetes master, the scheduler is the service responsible for distributing the workload. It tracks resource utilization on the cluster nodes and places workloads on nodes that have resources available and can accept them. In other words, it is the mechanism responsible for allocating pods to suitable nodes.
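The scheduler's placement decision can be illustrated with a small Python sketch. This is hypothetical and greatly simplified: the real scheduler weighs many predicates and priorities, not just free CPU:

```python
# Simplified sketch of the scheduler's placement decision: filter out nodes
# that cannot fit the pod's request, then pick the least-utilized survivor.
# (Illustrative only; node names and the CPU-only model are invented.)

def schedule(pod_request, nodes):
    """nodes: {name: free_cpu_millicores}; returns the chosen node or None."""
    candidates = {n: free for n, free in nodes.items() if free >= pod_request}
    if not candidates:
        return None  # no node can accept the workload
    return max(candidates, key=candidates.get)  # most free capacity wins

nodes = {"node-1": 500, "node-2": 1500, "node-3": 250}
print(schedule(300, nodes))  # node-2 has the most free CPU
```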
Kubernetes Node Components
Following are the key components of the Node server, which are necessary to communicate with the Kubernetes master.
Docker − The first requirement of each node is Docker which helps in running the encapsulated application containers in a relatively isolated but lightweight operating environment.
Kubelet Service − This is a small service on each node that is responsible for relaying information to and from the control plane. It reads configuration details from the etcd store and writes values back, and it communicates with the master components to receive commands and work. The kubelet process then assumes responsibility for maintaining the desired state of work on the node server: managing pods, mounting volumes and secrets, creating new containers, and running health checks.
Kubernetes Proxy Service − This is a proxy service that runs on each node and helps make services available to external hosts by forwarding requests to the correct containers. It is capable of carrying out primitive load balancing and makes sure that the networking environment is predictable and accessible while remaining isolated. It manages the network rules and port forwarding on the node.
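The primitive load balancing mentioned above can be pictured as a round-robin rotation over a service's backend pods. The sketch below is hypothetical (the class name and addresses are invented), and real kube-proxy works at the network-rules level rather than in application code:

```python
# Sketch of primitive service load balancing: requests to a service are
# fanned out round-robin across the backend pod endpoints.
# (Illustrative only; real kube-proxy programs network rules instead.)

import itertools

class ServiceProxy:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)  # rotate through pod endpoints

    def forward(self, request):
        backend = next(self._cycle)
        return f"{request} -> {backend}"

proxy = ServiceProxy(["10.1.0.4:8080", "10.1.0.7:8080"])
print(proxy.forward("GET /"))  # goes to the first backend
print(proxy.forward("GET /"))  # goes to the second backend
```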
Integrated OpenShift Container Registry
The OpenShift container registry is Red Hat's built-in storage unit for Docker images. The latest integrated version of OpenShift adds a user interface for viewing images in OpenShift's internal storage. The registry holds images with specific tags, which are later used to build containers from them.
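The tagged image references such a registry stores have a predictable structure, which a small Python helper can illustrate (a hypothetical, simplified parser; it ignores details such as registry ports and digests, and the reference below is made up):

```python
# Sketch of the structure of a registry image reference:
# registry host / repository (namespace/name) : tag.
# (Simplified: ignores registry ports, digests, and implicit defaults.)

def parse_image_ref(ref):
    """Split 'registry/namespace/name:tag' into parts ('latest' if untagged)."""
    repo, _, tag = ref.partition(":")        # assumes no port in the host part
    registry, _, repository = repo.partition("/")
    return {"registry": registry, "repository": repository, "tag": tag or "latest"}

print(parse_image_ref("registry.example.com/myproject/myapp:v1.2"))
```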
Frequently Used Terms
Image − Kubernetes (Docker) images are the key building blocks of containerized infrastructure. As of now, Kubernetes only supports Docker images. Each container in a pod runs its own Docker image. When configuring a pod, the image property in the configuration file has the same syntax as the Docker command.
Project − A project is the renamed version of the domain that existed in the earlier OpenShift V2.
Container − Containers are created when an image is deployed on a Kubernetes cluster node.
Node − A node is a working machine in a Kubernetes cluster, also known as a minion of the master. Nodes are working units that can be physical machines, VMs, or cloud instances.
Pod − A pod is a collection of containers and their storage inside a node of a Kubernetes cluster. It is possible to create a pod with multiple containers inside it; for example, a pod might hold both a database container and a web server container.
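A multi-container pod of this kind is described declaratively. The following Python dict mirrors the shape of such a pod specification (field names follow the Kubernetes pod API; the container images and names are placeholders chosen for the example):

```python
# A pod spec holding a web server container and a database container side
# by side. (Field names follow the Kubernetes pod API; the images and the
# pod name are placeholders for this example.)

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-with-db"},
    "spec": {
        "containers": [
            {"name": "web", "image": "nginx:1.25", "ports": [{"containerPort": 80}]},
            {"name": "db", "image": "postgres:16", "ports": [{"containerPort": 5432}]},
        ],
    },
}

# Both containers share the pod's network namespace and storage volumes.
print([c["name"] for c in pod["spec"]["containers"]])  # ['web', 'db']
```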
Use Cases and Benefits of Red Hat OpenShift
Containers are highly efficient vehicles for developing and deploying apps. As container usage ramps up, the complexity of managing containers across the totality of your IT infrastructure rises exponentially — making a container management platform essential at the enterprise level. Let’s look at the benefits of OpenShift container platform.
1. Innovate and go to market faster
OpenShift enables your development team to focus on doing what they do best — designing and testing applications. When they are freed from spending excessive time managing and deploying containers, they can speed up the development process and get products to market more rapidly.
Consider the case of a company specializing in the design and sale of integrated circuits. The cycle of innovation in this industry is relentless; as new technologies arise, chipmakers who can most effectively design chips for these new uses will be the ones who gain market share. For example, the rapid rise of the smartphone has been a boon to companies that have designed chips to power it.
Orchestrating container usage via the OpenShift platform provides a marked efficiency advantage to chipmakers who utilize containers for the next-generation virtualization benefits they offer. Deploying an increased number of apps on existing system resources enables a chipmaker to provide its developers with an expanded toolset to increase their ability to innovate. In an industry where a chipmaker’s main products can become outdated — if not obsolete — within a year or less, the ability to innovate and bring a product to market rapidly is a significant competitive advantage.
2. Accelerate application development
Deploying and managing containers at scale is a complicated process. OpenShift enables efficient container orchestration, allowing rapid container provisioning, deploying, scaling, and management. The tool enhances the DevOps process by streamlining and automating the container management process. Cutting down on time that would otherwise be spent managing containers improves your company’s productivity and speeds up application development.
Accelerated application development is especially valuable in enterprises where a company’s IT system must accommodate rapidly evolving functions. An example of this is the cybersecurity industry. Companies in this industry face an arms race against hackers, who are continually looking for software flaws to exploit. When an exploit is found, cybersecurity firms are expected to respond with fixes as rapidly as possible — often in days, if not hours.
If the development team at such a firm is delayed by container management issues, it can mean missing out on developing a timely fix; if this happens often enough, it can erode a company’s competitive position vis-à-vis its rivals. By streamlining and automating the container management process, OpenShift enables cybersecurity developers to focus on accelerating application development, updates, and product distribution.
3. Enterprise-grade, container-based platform with no vendor lock-in
A company’s IT needs can vary greatly from one period to the next. Selecting a proprietary container management platform subjects you to the possibility that your vendor won’t be able to provide an acceptable solution if your company’s IT focus changes. In such cases, the expense and time involved in moving from a proprietary vendor to a new platform can be considerable.
Consider the case of a company with worldwide manufacturing facilities that implements a proprietary container platform tool. If the company shifts its production approach to a process that requires it to change to a new operating system — one that isn’t supported by its containerization platform — the company will face the expensive task of redoing its containerization orchestration on another platform.
With a vendor-agnostic open source platform, users can migrate their container processes to the new operating system quickly — while avoiding the extensive costs often involved in migrating from a proprietary operating framework.
4. Enable DevOps and department-wide collaboration
The DevOps process relies upon transparent communication between all involved parties. Containerization provides a convenient means of enabling your IT operations staff to test instances of a new app. OpenShift assists this process by making it easy to test apps throughout your IT architecture without being impeded by framework conflicts, deployment issues, or language discrepancies.
One industry that can benefit from OpenShift’s enablement of enhanced DevOps processes is the webhosting and development field. Companies competing in this industry are constantly racing to offer their customers enhanced functionality. For instance, as web commerce increases by leaps and bounds, companies and individuals progressively look to sell their products over the web. They can do this by adding web sales functionality to their own sites via widgets designed for this purpose, or by purchasing sites with built-in sales functionality.
A company operating in this field, which requires constantly updated functionality to stay competitive, needs to empower its employees to design and test applications as rapidly and effectively as possible. By enabling developer and operations staff to collaborate efficiently, OpenShift helps web hosting and design companies design, test, and deploy applications effectively.
5. Self-service provisioning
Assembling the proper tools to create applications on your system architecture can be a challenge, especially at the enterprise level. OpenShift makes the process easy by allowing for the integration of the tools you use most across your entire operating environment.
This self-service provisioning helps improve developer productivity by allowing your development team to work with the tools they are most comfortable using — speeding up the development process by enabling faster creation and deployment of applications. At the same time, OpenShift allows your operations staff to retain control over the environment as a whole.
Examples of how this feature benefits users can be seen in companies where the development staff must be fluent in a variety of development tools and languages. For instance, a video game development company can benefit from this feature when they need to develop games that are compatible with a variety of operating systems. OpenShift enables the game developer’s programmers to use their favorite tools while developing games for different systems. This results in effective container usage, without forcing the company’s developers to use tools they aren’t familiar with.
Case Study: Ford Motors
Ford Motor Company adopts Kubernetes and Red Hat OpenShift
Ford Motor Company seeks to provide mobility solutions at accessible prices to its customers, including dealerships and parts distributors who sell to a variety of retail and commercial consumers. To speed delivery and simplify maintenance, the company sought to create a container-based application platform to modernize its legacy stateful applications and optimize its hardware use. With this platform, based on Red Hat OpenShift and supported by Red Hat and Sysdig technology, Ford has improved developer productivity, enhanced its security and compliance approach, and optimized its hardware use to improve operating costs. Now, the company can focus on exploring new ways to innovate, from big data to machine learning and artificial intelligence.
Improved productivity with standardized development environment and self-service provisioning
Enhanced security with enterprise technology from Red Hat and continuous monitoring provided by Sysdig
Significantly reduced hardware costs by running OpenShift on bare metal
Automotive innovation requires modern platform to enhance legacy applications
Ford Motor Company is a leader in creating reliable, technologically advanced vehicles worldwide. Its mission is to provide mobility solutions at accessible prices to its customers, including dealerships and parts distributors who sell to a variety of retail and commercial consumers.
“We’re a well-known brand. Everybody knows the Ford oval,” said Jason Presnell, CaaS [Containers-as-a-Service] Product Service Owner, at Ford Motor Company. “Our mission in becoming a mobility company is to not only find new ways to help people get from place to place, but also to get them the information and tools they need to support their travel, like mobile apps that let you start or unlock your car. We need to support and deliver these capabilities at a global scale.”
Each of Ford’s business units hosts a robust, engaged development community that is focused on building products and services that take advantage of the latest technological innovations, from machine learning for crash analysis and autonomous driving to high-performance computing (HPC) for prototype creation and testing. But this engagement across hundreds of thousands of employees and thousands of internal applications and sites created complexity that Ford’s traditional IT environment and development approaches could not accommodate. Even with hypervisors and virtual machines, the company struggled with inefficient resource use and high staffing costs to maintain this environment.
“We needed faster delivery for our stateful applications,” said Satish Puranam, Technical Specialist, Cloud Platforms, at Ford Motor Company. “Pivotal Cloud Foundry worked fine for newer, stateless applications that were built for portability, but we’re a hundred-year-old company with a lot of stateful, data-heavy, legacy applications. For things like inventory systems, dealer-facing applications, and CI/CD [continuous integration and delivery] that needed data persistence, getting the right infrastructure could take as long as 6 months.”
Ford sought to use Kubernetes container technology, application programming interfaces (APIs), and automation within its datacenters to give its legacy stateful applications the benefits of public cloud: faster delivery, easier maintenance, and automated scalability. Consolidating its hardware and software environments with container orchestration would also help the company use its resources more effectively.
“Containers are an extremely portable way to deliver an application, because you can build in all the dependencies and libraries that allow anyone to run that container and get the same performance in any environment,” said Presnell. “But we wanted to focus on the value we could deliver, not maintaining the container platform. We needed container orchestration that would provide not only application delivery, but also service capabilities to maintain that environment.”
New container-based application platform uses enterprise and community open source technology
After running tests and proofs of concept (POCs) of container technology, Ford began looking for an enterprise partner offering commercially supported open source solutions to help run containers in production and support innovative experimentation.
“We have several open source technologies in our IT environment and products. We want to move toward being able to use and contribute to open source more — to help somebody else in the community take what we’ve done and improve on it,” said Presnell. “But we needed a container platform that had an enterprise offering, one that was well-known in the industry and was well-engineered.”
Past experience with Kubernetes led Ford to adopt CoreOS Tectonic. When CoreOS was acquired by Red Hat, Ford migrated to Red Hat OpenShift Container Platform, a solution that enhanced the strengths of CoreOS’s offering with new automation and security capabilities. Based on Red Hat Enterprise Linux®, OpenShift Container Platform offers a scalable, centralized Kubernetes application platform to help teams quickly and more reliably develop, deploy, and manage container applications across cloud infrastructure.
The company also implemented Red Hat Quay to create a centralized container registry to host and secure all of its container images while offering protected, API-based access to partners and other third parties.
“Red Hat is one of the top engineering-focused Linux companies in the world and produces one of the most significant Linux distributions,” said Presnell. “They are the second biggest contributor to the Kubernetes community. Red Hat is really focused on providing enterprise-quality service alongside engineering excellence.”
Ford has also adopted several open source technologies that Red Hat contributes to, from Open Data Hub — a data and artificial intelligence (AI) platform for hybrid cloud — to Dex, an OpenID-based identity authentication service.
During migration, Ford worked closely with Red Hat Consulting to create an environment that supports more than 100 back-end and dealer-facing stateful applications, including databases and messaging systems, inventory systems, and API managers. After launching OpenShift in production, Ford also adopted Sysdig Secure and Sysdig Monitor, a Kubernetes security solution certified by Red Hat, to add extra visibility and protection for its development and production OpenShift environments.
For its success using OpenShift for modern automotive development and using digital technology to serve customers, Ford was recognized with a 2020 Red Hat Innovation Award.
Performance and security improvements help Ford deliver services and work with partners more efficiently
Significantly increased developer productivity
Using OpenShift Container Platform, Ford has accelerated time to market by centralizing and standardizing its application development environment and compliance analysis for a consistent multicloud experience. For example, OpenShift’s automation capabilities help Ford deploy new clusters more rapidly.
These improvements are enhanced by the company’s shift from a traditional, waterfall approach to iterative DevOps processes and a continuous integration and delivery (CI/CD) workflow.
Now, some of the same processes for stateful workloads take minutes instead of months, and developers no longer need to focus on underlying infrastructure with self-service provisioning. These improvements extend to Ford’s IT hosting, where the company has seen a significant productivity improvement for CaaS support. Dealers and plant operators gain access to new features, fixes, and updates faster through Ford’s multitenant OpenShift environment.
Enhanced security and compliance with enterprise container and monitoring technology
Companies in the automotive industry must comply with various security standards and regulations, such as Payment Card Industry Data Security Standard (PCI DSS) and personal data protection standards. When creating its new container platform, Ford sought to balance providing access to partners and developers with ensuring vulnerabilities and updates were addressed and working toward future adoption of a DevSecOps approach.
“In a container environment, moving applications and code continuously, security needs to be automated and built in from when a container is created,” said Payal Chakravarty, Vice President, Products, Sysdig. “Sysdig provides real-time vulnerability management in CI/CD pipelines. Security checks are in place to analyze code and identify issues before production.”
To support this approach, Ford standardized on Red Hat container images and registries using Red Hat Quay. OpenShift provides a unified management interface across Ford’s entire infrastructure, as well as built-in Security Enhanced Linux (SELinux) capabilities.
Sysdig Secure and Sysdig Monitor help Ford enhance this protection with improved, data-based insight into container infrastructure to run OpenShift in a compliant way. “Sysdig can tell us about a container’s network activity, can help us protect multiple containers running on a single host, and provide continuous monitoring and alerts,” said Puranam.
Significantly reduced hardware costs
Shifting to a container-based approach requires less initial hardware investment — and ongoing savings as Ford continues to modernize and migrate its legacy applications. The company has improved the efficiency of its hardware footprint by running OpenShift on bare metal and using its existing hardware more effectively.
“We were able to initially run OpenShift on a fleet of hardware that had literally been pulled out of our datacenter to be scrapped. We put that hardware back and are successfully running production OpenShift on it today,” said Puranam.
By establishing an approach for controlling costs and increasing profit margins, Ford can reallocate resources to higher-value projects to address new business opportunities faster.
Successful adoption of OpenShift and DevOps creates foundation for new opportunities to innovate
Ford is already experiencing significant growth in demand for its OpenShift-based applications and services. It aims to migrate most of its on-premises, legacy deployments within the next few years.
The company is also looking for ways to use its container platform environment to address opportunities like big data, mobility, machine learning, and AI to continue delivering high-quality, timely services to its customers worldwide.
“Kubernetes and OpenShift have really forced us to think differently about our problems, because we can’t solve new business challenges with traditional approaches… We’re now well-situated for future success.”
Ford Motor Company
“With OpenShift, we have a common framework that can be reused for deploying an application or service, because every major cloud provider has Kubernetes compatibility. We can now deliver features in a more secure, reliable manner.”
CaaS Product Service Owner, Ford Motor Company