Building containers for applications
Developers need tools for building the application and any necessary dependencies into container images. This process needs to be repeated for each code change and for finished releases. During rollouts, operators or developers also need the ability to deploy the new images in place of the currently running container images. While low-level tools exist for performing these tasks, a container platform makes the process much easier.
Containers that run applications often need languages, runtimes, frameworks, and application servers. These can be pulled in during the build process, using a base container image as the foundation. While there are many sources for base images, the challenge is acquiring them from a known and trusted source. Base images need to be secure, up to date, and free of known vulnerabilities, and they must be updated whenever a vulnerability is discovered. Users also need a way to find out whether their containers are based on out-of-date images.
Public cloud challenges
One of the challenges IT organizations face when adopting the public cloud is that the infrastructure, management, and automation software provided by the public cloud are different from what the IT organization uses in its own datacenters. Many public cloud tools and services are not available to run on-premise, so they cannot be used with applications that run internally.
Many organizations choose to use more than one public cloud for reasons like geographic availability, diversity, and cost. However, each public cloud provider offers vendor-specific interfaces, tools, and services.
Containers, Kubernetes orchestration, and cloud computing have tremendous potential for improving operational efficiency through automation, and containers are an ideal environment for implementing DevOps practices and culture. However, a cloud strategy that uses a different platform in every location where applications are hosted can overload operators and developers with too much to learn and keep track of.
Red Hat’s approach: A cloud experience everywhere
Red Hat® OpenShift® is an enterprise-ready Kubernetes container platform with full-stack automated operations to manage hybrid cloud and multicloud deployments, offering the simplicity and automation of the public cloud. It includes an enterprise-grade Linux® operating system, container runtime, networking, monitoring, registry, and authentication and authorization solutions.
You can deploy Red Hat OpenShift Container Platform on your choice of infrastructure, whether in your on-premise datacenter or in a private cloud. If you prefer not to manage the infrastructure, most public cloud providers offer Red Hat OpenShift as a managed service.
Streamlining operations with a consistent hybrid cloud foundation
Red Hat OpenShift helps address the challenges that arise when legacy applications need to stay on-premise as newer development occurs on cloud platforms. It creates a common application platform by abstracting away the details of the underlying cloud or container platform, easing the transition into hybrid and multicloud deployments.
Red Hat OpenShift’s common operational interface for old and new applications, whether they run internally or externally, streamlines operations. The same tools, consoles, and procedures are used regardless of where the application runs. Operators can be productive faster with a reduced learning curve. They no longer have to remember how things work in different environments, so they can diagnose and resolve problems more quickly.
The common application platform increases application portability and deployment flexibility. Containers alone do not include all of the deployment details needed to orchestrate multiple containers into a complete application; Kubernetes stores those deployment and configuration details in a number of YAML files. One way Red Hat OpenShift adds value over Kubernetes is by providing a graphical user interface (GUI) and deployment templates that eliminate the need for operators and developers to edit YAML files by hand.
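To give a sense of the detail involved, the following is a minimal sketch of the kind of Kubernetes Deployment definition those tools generate and manage; the application name, image reference, and port are placeholders rather than recommended values.

```yaml
# Minimal Kubernetes Deployment manifest; all names and the image are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app                  # hypothetical application name
spec:
  replicas: 2                       # number of container instances to run
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      containers:
      - name: legacy-app
        image: image-registry.openshift-image-registry.svc:5000/demo/legacy-app:latest  # placeholder image reference
        ports:
        - containerPort: 8080
```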
Deployment templates streamline the process of deploying applications on Red Hat OpenShift and of moving an application from one OpenShift cluster to another. The templates can be part of the application code or kept separately. Applications can be added to the Red Hat OpenShift service catalog, which allows for point-and-click deployment of applications and software components.
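As a rough illustration of how a template captures those details, the sketch below defines a single parameter and a Service object that uses it; the names and values are hypothetical, and a real template would typically include the application's other objects as well.

```yaml
# Illustrative OpenShift Template: parameters replace hand-edited values at deploy time.
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: legacy-app-template         # hypothetical template name
parameters:
- name: APP_NAME
  description: Name applied to every object the template creates
  value: legacy-app
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}
  spec:
    selector:
      app: ${APP_NAME}
    ports:
    - port: 8080
      targetPort: 8080
```

A template like this can be processed with the oc process command or registered in the service catalog, so the same application can be deployed with different parameter values on any Red Hat OpenShift cluster.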
For managing multiple Red Hat OpenShift clusters, Red Hat OpenShift 4 introduced a unified hybrid cloud console. This feature provides centralized management and visualization tools across clusters that can run on-premise or on multiple clouds.
Developing applications in containers
Before an application can be migrated to containers, its code needs to be built into a container image. Red Hat OpenShift gives developers a self-service platform where they can build and run containers without waiting for resources to be provisioned. This is one of the key areas where Red Hat OpenShift adds value over Kubernetes.
Through Red Hat OpenShift, developers can set up automated builds for continuous integration and continuous delivery (CI/CD). The builds can be triggered automatically whenever new code is checked into the source code version control system. When the build completes successfully, it can be automatically deployed in place of the previous version. This feature helps with automated testing and continuous improvement. Red Hat OpenShift has rich functionality for creating sophisticated automated build pipelines. Developers can use familiar tools, like Jenkins, without the complexity of trying to create a build environment from scratch.
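As an illustration of how such a build can be wired up, the sketch below shows a Source-to-Image BuildConfig with a webhook trigger; the repository URL, builder image tag, and secret are placeholders rather than recommended values.

```yaml
# Illustrative BuildConfig: Source-to-Image build triggered when new code is pushed.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: legacy-app
spec:
  source:
    type: Git
    git:
      uri: https://example.com/demo/legacy-app.git   # placeholder repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: openjdk-11:latest      # builder/base image from a trusted source; tag is illustrative
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: legacy-app:latest
  triggers:
  - type: GitHub                     # start a build when code is pushed to the repository
    github:
      secret: replace-with-webhook-secret
  - type: ImageChange                # rebuild when the builder image is updated
    imageChange: {}
  - type: ConfigChange
```

When a build finishes and pushes a new image, an image change trigger on the corresponding deployment can roll the new version out automatically in place of the previous one.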
IT operations maintains control, and developers can work without administrative access to the cluster. Red Hat OpenShift securely supports multiple tenants. All of the tasks that developers perform, whether running a build or logging in to debug running code, run inside containers on top of Red Hat OpenShift. Because these development tasks run in containers, they are isolated from other containers and from the cluster itself.
Tools for developers
Red Hat offers many tools to help developers build applications to run in containers:
- Red Hat CodeReady Studio is a traditional desktop integrated development environment (IDE) with a broad set of tooling for containers and multiple programming models.
- Red Hat Container Catalog provides a library of tested container images from a trusted source that developers can use as base images.
- Red Hat OpenShift Application Runtimes is a collection of Red Hat OpenShift integrated runtimes covering multiple languages and programming styles to simplify cloud-native development.
- Red Hat Application Migration Toolkit is an assembly of tools that helps developers evaluate code from legacy applications to determine what changes are necessary to run on modern platforms such as current application servers and middleware.
Moving legacy applications into containers
Once the application’s containers are built, the next steps in deploying the application are configuring storage and networking. To meet the need for permanent storage, applications defined in Red Hat OpenShift can be configured to use persistent storage volumes that are automatically attached to the application’s containers when they run. Developers can manage elastic storage for container-based applications, drawing from storage pools provisioned by operations. Red Hat OpenShift Container Storage can provide this software-defined persistent storage, offering block, file, or object access to applications running on a Red Hat OpenShift cluster.
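As a minimal sketch, a persistent volume claim like the following requests storage from a pool provisioned by operations; the claim name, size, and storage class are placeholders.

```yaml
# Illustrative PersistentVolumeClaim; the application references it by name in its pod template.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: legacy-app-data              # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce                    # volume is mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi                  # illustrative size
  storageClassName: standard         # placeholder; use a class backed by the provisioned storage pool
```

The claim is then referenced in the application’s pod template through a volumes entry and a volumeMounts path, so the storage is attached automatically wherever the containers run.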
Virtual private networking, routing, and load balancing for applications running in containers are built in as part of the platform provided by Kubernetes and Red Hat OpenShift. Networking is specified in a declarative manner as part of the application’s deployment configuration. Application-specific network configuration can be stored with the source code to become infrastructure as code. Tying application-specific infrastructure configuration to each application improves reliability when moving, adding, or changing application deployments.
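For example, a Service and a Route such as the following declare how traffic reaches an application; the names and hostname are hypothetical and would normally be stored alongside the application’s other deployment configuration.

```yaml
# Illustrative Service and OpenShift Route declaring how traffic reaches the application.
apiVersion: v1
kind: Service
metadata:
  name: legacy-app
spec:
  selector:
    app: legacy-app                  # pods carrying this label receive the traffic
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: legacy-app
spec:
  host: legacy-app.apps.example.com  # placeholder external hostname
  to:
    kind: Service
    name: legacy-app
  port:
    targetPort: 8080
```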
Software-defined routing and load balancing play a key role in enabling applications to automatically scale up or down. Additionally, applications running on Red Hat OpenShift can take advantage of rolling deployments to reduce risk. With Red Hat OpenShift’s built-in service routing, strategies for rolling deployments can be used to test new code on subsets of the user population. If something goes wrong, rolling back to a previous version is easier with containers on Red Hat OpenShift.
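One common pattern splits a Route’s traffic by weight between the current and new versions; the sketch below uses hypothetical service names and weights.

```yaml
# Illustrative weighted Route: most traffic stays on the current version while a small
# share exercises the new one. Service names and weights are placeholders.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: legacy-app
spec:
  to:
    kind: Service
    name: legacy-app-v1
    weight: 90                       # roughly 90% of requests stay on the current version
  alternateBackends:
  - kind: Service
    name: legacy-app-v2
    weight: 10                       # roughly 10% of requests exercise the new version
  port:
    targetPort: 8080
```

If the new version misbehaves, shifting the weights back restores the previous behavior without rebuilding anything.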
Finally, Red Hat OpenShift Service Mesh provides increased resilience and performance for distributed applications. OpenShift Service Mesh abstracts the logic of interservice communication into a dedicated infrastructure layer, so communication is more efficient and distributed applications are more resilient. OpenShift Service Mesh incorporates Istio service mesh, Jaeger (for tracing), and Kiali (for visibility) on a security-focused, enterprise platform.
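As a minimal sketch of the kind of policy the mesh layer takes over from application code, the following Istio VirtualService adds retries and a request timeout for a hypothetical in-mesh service.

```yaml
# Illustrative Istio VirtualService: retries and timeouts are handled by the mesh,
# not by each application. The host name and values are placeholders.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: inventory                    # hypothetical service name
spec:
  hosts:
  - inventory
  http:
  - route:
    - destination:
        host: inventory
    retries:
      attempts: 3                    # retry a failed call up to three times
      perTryTimeout: 2s
    timeout: 10s                     # overall deadline for the request
```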
Improving your application landscape
Once your legacy applications are running in containers on Red Hat OpenShift, opportunities for improvement emerge. New code releases can happen more frequently and more reliably using CI/CD, build and deployment automation, automated testing, and rolling deployments. The ability to release code more often means your organization can respond better to changing business demands.
A common approach to modernization is to put new interfaces and services implemented in newer technologies in front of legacy systems. This approach is much easier when everything is running in containers, where it does not matter what languages or technologies are running inside each container. The virtual networking capabilities and service mesh in Red Hat OpenShift make it easier to reliably connect application components.
Red Hat OpenShift also makes it easier to deploy the latest middleware alongside your legacy applications. Red Hat offers integration and messaging systems, business process management, and decision management software ready to run on OpenShift clusters in containers. You can use these to connect your applications for agile integration.
Conclusion
Red Hat’s approach to hybrid cloud and multicloud provides a common application platform that serves old and new applications, whether they run on-premise or in the public cloud. The resulting application portability gives organizations the flexibility to run workloads where it makes the most sense. The details of the underlying cloud and container platforms are abstracted away, making operators and developers more productive regardless of where the application runs.
There are many benefits to containerizing legacy apps and running old and new applications on Red Hat OpenShift. A container-based architecture, orchestrated with Kubernetes and OpenShift, improves application reliability and scalability while decreasing developer and operations overhead. Red Hat OpenShift’s full-stack automation, developer self-service, and CI/CD capabilities also provide a foundation for continuous improvement processes.
Learn more about containers and running containers at scale at https://www.redhat.com/en/solutions/hybrid-cloud-infrastructure#scale.