Red Hat Blog
We've published a new guide to help you select the right container hosts and images for your container workloads - whether that's a single container running on a single host, or thousands of workloads running in a Kubernetes/OpenShift environment. Why? Because people don't know what they don't know, and we are here to help.
Like "The Cloud" before it, a lot of promises are being made about what capabilities containers might deliver - does anybody remember the promises of cloud bursting? No, not that cloud bursting, this cloud bursting :-)
Once the dust settles from the hype around a new technology, people learn how to leverage it, while still applying much of their existing knowledge. Containers are no different. While they do enable a great deal of portability, they are not magic. They do not guarantee that your application is completely portable through time and space. This is especially true as the number of supported workloads on Kubernetes expands, Kubernetes clusters grow larger, and cluster nodes become more differentiated with specialized hardware. There will be an ever-expanding role for Linux and Kubernetes in tying these disparate technologies together in a consumable way.
Building rock-solid Kubernetes clusters starts with a solid foundation - selecting the right container hosts and container images. When selecting these components, architects are making a big decision about the lifecycle and future supportability of their Kubernetes clusters. The supportability of the underlying cluster nodes doesn't change in a containerized environment - administrators still need to think about configuration management, patching, lifecycle, and security. They also need to think about compatibility with all of the different container images which will run in the environment. Not just simple web servers, but all of the workloads which will run, ranging from HPC, big data, and DNS, to databases, a wide range of 3rd party applications, and even administrative workloads for troubleshooting the containerized clusters (aka distributed systems). All of these different types of applications are moving to Kubernetes. Workloads drive the need for libraries, language runtimes, and compilers. Sound familiar? Most of these needs are delivered by Linux distributions, like Red Hat Enterprise Linux.
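To make the host/image compatibility concern a little more concrete, here is a toy sketch in Python - not Red Hat tooling, and the function names (`parse_os_release`, `is_supported_combo`) are hypothetical. It illustrates one piece of the decision: reading a container image's `/etc/os-release` contents and checking whether that base distribution is one the container host's support policy covers.

```python
def parse_os_release(text: str) -> dict:
    """Parse /etc/os-release-style KEY=value pairs into a dict."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip('"')
    return info


def is_supported_combo(image_os_release: str, host_supported_ids: set) -> bool:
    """Return True if the image's base distribution ID is in the host's
    supported set. A real support policy would also weigh versions,
    lifecycle dates, and the workload itself - this only checks the ID."""
    info = parse_os_release(image_os_release)
    return info.get("ID") in host_supported_ids


# Example: a RHEL-based image against a host that supports rhel and centos
ubi = 'NAME="Red Hat Enterprise Linux"\nID="rhel"\nVERSION_ID="8.4"\n'
print(is_supported_combo(ubi, {"rhel", "centos"}))  # True
```

The point of the sketch is that "compatibility" is a policy decision the architect encodes up front, not something the container runtime figures out for you.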
We've published a new guide to help you leverage the architectural knowledge you have and apply it as you are building your Kubernetes/OpenShift environment. If you have questions, please post them below, and we will be happy to help guide you:
About the author
At Red Hat, Scott McCarty is Senior Principal Product Manager for RHEL Server, arguably the largest open source software business in the world. Focus areas include cloud, containers, workload expansion, and automation. Working closely with customers, partners, engineering teams, sales, marketing, other product teams, and even in the community, he combines personal experience with customer and partner feedback to enhance and tailor strategic capabilities in Red Hat Enterprise Linux.
McCarty is a social media start-up veteran, an e-commerce old timer, and a weathered government research technologist, with experience across a variety of companies and organizations, from seven-person startups to 20,000-employee technology companies. This has culminated in a unique perspective on open source software development, delivery, and maintenance.