Over the last couple of years, microservices and containers have started to redefine the software development landscape. The traditional large Java or C# application has been replaced with multiple smaller components (microservices) that coordinate to provide the required functionality. These microservices typically run inside containers, which provide isolation and portability.

This approach has numerous benefits, including the ability to scale and replace microservices independently and reduced complexity in individual components. However, it also adds complexity at the system level; extra effort and tooling are needed to manage and orchestrate the microservices and their interactions.

This post will describe how Red Hat technology and services can be used to develop, deploy and run an effective microservice-based system.

The diagram below shows the software lifecycle for a container-based system using Red Hat technology:

Red Hat Container Ecosystem

The diagram depicts the major stages in software development: development, testing and production. In a modern continuous delivery or deployment pipeline, the entire cycle happens continuously - it is an ongoing, iterative process, not a one-off sequence of stages. The finished application can run either on-premise using local resources, or in the cloud, using public or dedicated resources.

Small applications can run directly on hosts using the Docker engine. For larger applications, it is advisable to use an orchestration platform. In the Red Hat ecosystem, the recommended solution is OpenShift. This is an open source platform built on top of Kubernetes, which originated at Google. OpenShift can be deployed locally or in the cloud and is available as a supported enterprise product (there is also the upstream OpenShift Origin project).

Both small and large applications can use the Atomic Enterprise Platform package, which includes a managed cluster of Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux Atomic Host instances for running multi-container applications, whether or not they use OpenShift.

Let’s dig into some of the components in a little more detail...

The Red Hat Registry

Public container registries provide catalogues of Docker images that can be reused by developers when building container-based systems. There are images for running common software, such as databases and logging systems, as well as images for common programming languages and platforms such as Ruby on Rails and Node.js. The largest container registry is the Docker Hub, which contains thousands of images, the majority of which are uploaded by users and are of variable quality (the official images are generally of higher quality). In contrast, the Red Hat Registry contains only a small number of certified images that have passed checks carried out by Red Hat. For security reasons, it is recommended that users first look for a suitable Red Hat image, only turning to Docker Hub images where there is no equivalent in the Red Hat Registry.
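As a rough sketch of that workflow, the snippet below uses the Docker SDK for Python to pull a certified base image from the Red Hat Registry, falling back to an official Docker Hub image where there is no Red Hat equivalent. The image names and tags are purely illustrative and may not match what is currently published.

```python
import docker

# Connect to the local Docker daemon (assumes it is running and reachable).
client = docker.from_env()

# Prefer a certified image from the Red Hat Registry.
rhel = client.images.pull("registry.access.redhat.com/rhel7", tag="latest")
print("Pulled:", rhel.tags)

# Fall back to an official Docker Hub image only when there is no
# Red Hat equivalent (here, the upstream Node.js image).
node = client.images.pull("node", tag="latest")
print("Pulled:", node.tags)
```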

Private Registry

Most organizations will want to run their own registry to store their non-public images. Developers can push and pull images to this registry directly, but it is also common to have a CI/CD system (such as Jenkins) push images automatically. In this case the CI/CD system builds the image from source (possibly using a tool such as S2I) whenever a developer checks changes into SCM, runs tests against the image and pushes passing images into the registry. At this point the image can be promoted to production, either automatically with Continuous Deployment or manually.
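The sketch below shows the tail end of such a pipeline using the Docker SDK for Python: building an image from the checked-out source, re-tagging it for a private registry and pushing it. The registry address, image name and credentials are placeholders rather than a prescribed setup.

```python
import docker

client = docker.from_env()

# Build an image from the checked-out source tree (path and tag are illustrative).
image, _build_logs = client.images.build(path=".", tag="myapp:candidate")

# Re-tag the candidate for the private registry; "registry.example.com" stands
# in for whatever registry the organization runs.
image.tag("registry.example.com/team/myapp", tag="1.0")

# Authenticate and push; in a real pipeline the credentials would come from
# the CI/CD system's secret store rather than being hard-coded.
client.login(username="ci-bot", password="secret", registry="registry.example.com")
for line in client.images.push("registry.example.com/team/myapp", tag="1.0",
                               stream=True, decode=True):
    print(line)
```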

Atomic Host or a Red Hat Enterprise Linux Cluster

Organizations that want to manage their own compute hosts, whether they are on-premise or in a cloud, need to choose a Linux distribution to run on those hosts. The traditional choice is Red Hat Enterprise Linux, which will be familiar to most organizations and has full support for containers. The alternative is Red Hat Enterprise Linux Atomic Host, a stripped-down Red Hat distribution designed specifically for running containers. This can provide several advantages: a stripped-down distribution requires fewer resources and less updating, and it reduces the potential attack surface exposed to attackers (if the host doesn’t run a service, it can’t be exploited).

OpenShift

The OpenShift platform provides an advanced orchestration layer for containers, built on top of Kubernetes. It is available both on-premise and as a hosted service. OpenShift takes a lot of the pain out of developing, deploying and scaling microservices. Out-of-the-box features include cross-host networking, container scheduling, automatic scaling and health checking.
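Because OpenShift builds on the Kubernetes API, features such as health checking can be expressed with ordinary Kubernetes objects. The sketch below uses the Kubernetes Python client to attach an HTTP liveness probe to a container definition so the platform can restart it if it stops responding; the image name, port and probe path are assumptions made for illustration.

```python
from kubernetes import client

# A container definition with an HTTP liveness probe: the platform will
# restart the container if /healthz stops answering (values are illustrative).
container = client.V1Container(
    name="myapp",
    image="registry.example.com/team/myapp:1.0",
    ports=[client.V1ContainerPort(container_port=8080)],
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=15,
        period_seconds=10,
    ),
)
```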

Nulecule and Atomic App

The Nulecule specification defines how a multi-container application should be deployed and scaled. It is designed to target multiple “providers”, allowing for an application to be easily ported between clouds or orchestration systems. Red Hat Atomic App implements this specification and can be used to launch containers on a range of providers including OpenShift, Kubernetes and Mesos, as well as plain Docker hosts.
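The example below is a loose Python rendering of the kind of application description Nulecule defines; the real specification is written in YAML and the exact field names should be checked against the spec, but it illustrates how per-provider artifacts let one description target several platforms.

```python
# A hypothetical Nulecule-style application description, expressed as a
# Python dict for illustration (the actual format is YAML).
nulecule_app = {
    "specversion": "0.0.2",
    "id": "myapp",
    "graph": [
        {
            "name": "web",
            # Per-provider deployment artifacts allow the same application
            # to target OpenShift, Kubernetes or plain Docker hosts.
            "artifacts": {
                "kubernetes": ["file://artifacts/kubernetes/web.yaml"],
                "openshift": ["file://artifacts/openshift/web.yaml"],
                "docker": ["file://artifacts/docker/web-run"],
            },
            "params": [{"name": "image_tag", "default": "1.0"}],
        }
    ],
}
```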

Container Development Kit

The Red Hat Container Development Kit (CDK) provides a Vagrant VM for quickly getting started with the Red Hat container stack. The CDK includes Atomic App and an OpenShift installation, so complex, multi-container apps can be run locally in an environment that closely resembles production. Other features include integration with the Eclipse IDE, allowing you to develop and launch containers from within the IDE.

Conclusion

Containers and microservices represent the future for much of the software industry. Their benefits in terms of scalability and agility have driven enormous growth in this area and a profusion of competing technologies. The downside to this explosion of ideas and technologies is that it is difficult to select a set that provides a complete solution which is both stable and feature-rich.

The Red Hat stack provides a proven and integrated platform for developing microservices. It significantly reduces the complexity associated with choosing and managing a stack while still providing cutting-edge features by building on top of existing technologies such as Docker and Kubernetes.