Applications don’t always work as expected, and “it works fine on my machine” -- often the first response when an issue is reported -- has been around for decades. One way to avoid application issues in production is to maintain identical environments for development, testing, and production. Another is to create a Continuous Integration environment, where code is compiled, deployed to test machines, and vetted with every code check-in, long before it is pushed to production.

Enter containers.

Developers love containers because, with the help of the Docker CLI, they promise to solve numerous pain points in delivering applications, including the challenge of shipping the same functionality to multiple deployment environments. Containers give applications a degree of autonomy: an application can be packaged with its dependencies rather than relying on those dependencies being installed and configured on the host machine.
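
As a rough sketch of that promise, packaging an application with the Docker CLI looks something like this (the base image, file names, and tag are illustrative placeholders, not a prescribed setup):

    # Describe the application and its dependencies in a Dockerfile
    # (the Python base image and app files are placeholders):
    cat > Dockerfile <<'EOF'
    FROM python:2.7
    COPY app.py requirements.txt /opt/myapp/
    RUN pip install -r /opt/myapp/requirements.txt
    CMD ["python", "/opt/myapp/app.py"]
    EOF

    # Build an image that bundles the application with its dependencies...
    docker build -t myapp:1.0 .

    # ...and run that same image on a laptop, a test server, or a cloud VM:
    docker run -d myapp:1.0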

At least that’s the theory.

The reality, unfortunately, is not so simple. Avishai Ish-Shalom, co-founder and CTO of Fewbytes, discovered this when he attempted to create container images of omnibus packages for multiple Linux distributions. After multiple failed attempts in which the build appeared to succeed but the resulting images would not work across even two of the three distros, Avishai concluded:

“While docker enthusiasts claim you can ‘run any app anywhere’ this is unfortunately not true in many cases. Many userland tools are coupled to kernel features, kernel modules, distro specific kernel configurations, etc... Over the years we have built a complex web of interdependence between kernelspace, userspace, compile-time configurations and runtime configurations; it will take years to untangle this mess.”
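
Part of the reason the coupling runs so deep is that a container shares the host’s kernel; only the userland ships in the image. A quick, hedged illustration (the image names are simply examples of two different distros):

    uname -r                            # kernel version on the host
    docker run --rm ubuntu uname -r     # same kernel inside an Ubuntu container
    docker run --rm centos:7 uname -r   # still the host's kernel, different userland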

Red Hat agrees that containers hold a lot of promise, provided you recognize that the underlying operating system plays a vital role. You can’t throw Linux distro A and Linux distro B together in a Frankenstein’s Monster approach to infrastructure and expect them to play well together -- and Fewbytes has shown just one classic example of this.

I was talking with Bhavna Sarathy, senior technology product manager at Red Hat, about this very topic. As Bhavna pointed out, “Linux containers is a capability of the operating system; if anyone tells you otherwise, they are wrong. Containers depend on key capabilities in the Kernel and the operating system to function. Resource management, isolation, abstraction, and security - all of these are fundamental building blocks for Linux containers.”
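
To make those building blocks concrete, here is a minimal sketch, assuming a Linux host with util-linux installed, of the kernel primitives that container runtimes build on: namespaces for isolation and cgroups for resource management.

    # Start a shell in its own PID, mount, and UTS namespaces --
    # the same kernel-level isolation mechanism containers rely on:
    sudo unshare --pid --fork --mount --uts --mount-proc /bin/bash

    # Inside those namespaces, the shell sees itself as PID 1:
    ps aux | head

    # Resource management comes from another kernel feature, cgroups:
    cat /proc/self/cgroup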

Developers and system administrators have a cornucopia of tools to address the challenge of delivering containerized applications. What they need, though, is true portability.

Red Hat delivers true container portability with deployment across physical hardware, hypervisors, private clouds, and public clouds.

We’re working hard at Red Hat to advance both container technology and the ecosystem that supports it, to make it enterprise-consumable, as we did with Linux. We at Red Hat were early proponents of Docker technology, and we quickly became one of the leading contributors to the community project, thanks to our extensive expertise in the kernel and operating system. This enables us to standardize Linux containers across our own solutions, including Red Hat Enterprise Linux, OpenShift, Red Hat Enterprise Linux Atomic Host, and more, even as we help drive standards for Linux containers in the industry. Because Linux containers work the same across Red Hat solutions, customers and partners can deploy containerized applications anywhere and everywhere.