Editor’s note: This post is the third in a four-part series on private cloud from Red Hat technology evangelist Gordon Haff. Read the earlier posts:
Early DevOps discussions often focused on breaking down the wall between developers and operations. The thinking went that if developers didn’t just toss their new applications over to operations and run away, the world of IT would be a better place. That was how operations viewed the state of affairs, in any case.
Don’t talk when you don’t have to
There’s certainly truth in that stereotype of standard practice. We can probably all agree that open communication lines and mutual understanding are good things. But eliminating unnecessary communication can be a good practice too. It’s possible to put infrastructure, processes, and tools in place so that Dev doesn’t need to interact with Ops as much while being (even more) effective. One analogy I like to use is that I don’t want to streamline my interactions with a bank teller. For routine and even not-so-routine transactions, I just want to use an ATM or my smartphone.
It’s up to Ops to build and operate the infrastructure supporting those streamlined transactions, such as a private Infrastructure-as-a-Service (IaaS) cloud, and then to provide core services through a modern container platform. In Red Hat’s case, this takes the form of Red Hat OpenShift Container Platform running on top of Red Hat OpenStack Platform, or as an integrated offering in the form of Red Hat Cloud Suite, which combines a container-based app-development platform (OpenShift), private cloud infrastructure (OpenStack), and interoperability/management (Red Hat CloudForms).
Why a modern platform?
Optimizing development processes and building cloud-native applications are often best done on a modern platform.
You may need scale-out architectures to meet highly elastic service requirements. Application designs with significant scale-up components can’t readily accommodate shifting capacity needs, or may simply not be able to scale as far as they need to go.
Modern platforms are software-defined because functions implemented in software, such as network function virtualization (NFV) and software-defined storage, are much more flexible than the same functions embedded in hardware.
Modern applications are composed of loosely coupled services because large monolithic applications can be fragile and can’t be updated quickly. A modern container platform enables iterative software development and deployment, in part because the individual services making up a modern application are often short-lived and require frequent refreshes and replacements.
Containers started out as just another way to partition a system, a lightweight alternative to hardware virtualization. But transforming containers into a way to package applications is what made them broadly interesting. By providing an image that also contains an application’s dependencies, a container becomes a packaging construct that stays portable and consistent as it moves from development to testing and, finally, to production.
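As a concrete sketch of that packaging idea, a minimal image definition might look like the following. (The base image, file names, and port here are hypothetical illustrations, not something from a specific Red Hat product guide.)

```dockerfile
# Start from a base image that supplies the OS userland and language runtime.
FROM registry.access.redhat.com/ubi8/python-39

WORKDIR /app

# Bake the application's dependencies into the image itself...
COPY requirements.txt .
RUN pip install -r requirements.txt

# ...along with the application code, so the same artifact moves
# unchanged from development to testing to production.
COPY app.py .

EXPOSE 8080
CMD ["python", "app.py"]
```

Because the dependencies travel inside the image rather than being installed on each host, the “works on my machine” gap between environments largely disappears.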
Some other pieces came together as well. Specifications now under the governance of the Open Container Initiative (OCI) standardized the image format and runtime for containers. Together, these specifications define the contents of a container image and the dependencies, environments, arguments, and so forth necessary for the image to run properly. As a result of these standardization efforts, the OCI has opened the door for many other tooling efforts that can now depend on stable runtime and image specs. For example, Red Hat has been heavily involved in container registry and container building projects such as Project Atomic, Skopeo, and Buildah.
An OCI-compliant container runtime, by itself, is very good at managing single containers. However, when you start using more and more containers and containerized apps, broken down into hundreds of pieces, management and orchestration can get tricky. Eventually, you need to take a step back and group containers to deliver services.
Orchestrating and managing
That’s where Kubernetes comes in. Originally developed by Google but now a large community project under the auspices of the Cloud Native Computing Foundation (CNCF), Kubernetes lets you cluster together groups of hosts running Linux containers, and orchestrate (manage) them.
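To make “cluster and orchestrate” a little more concrete, a minimal Kubernetes manifest along the following lines asks the cluster to keep several copies of a container running and groups them behind one stable address. (The names and image reference are hypothetical, used only for illustration.)

```yaml
# A Deployment tells Kubernetes to keep three replicas of the
# container running somewhere in the cluster, replacing any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: quay.io/example/hello-app:1.0  # hypothetical image
        ports:
        - containerPort: 8080
---
# A Service groups those containers into a single addressable endpoint.
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
```

The point is that you declare the desired state (three replicas, one service) and Kubernetes does the ongoing work of scheduling, restarting, and load-balancing to maintain it.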
Kubernetes relies on additional projects to provide the services developers and operators need to deploy and run cloud-native applications in production. These include a container registry, telemetry, networking, and security. A significant amount of open source innovation is coming together around containers; I’ve only touched on a small piece of it.
Integrating and packaging
OpenShift natively integrates technologies such as OCI-compliant containers and Kubernetes and combines them with an enterprise foundation in Red Hat Enterprise Linux. OpenShift also integrates the architecture, processes, platforms, and services needed by development and operations teams. It’s fully open source with OpenShift Origin as its upstream community project. Users, contributors, and partners come together in OpenShift Commons.
If you consider the level of activity happening in the cloud-native space, the challenges of DIY integration can be pretty clear. (This applies to Infrastructure-as-a-Service as well as container platforms.) There’s a rapid pace of change within projects, new projects are popping up all the time, and different approaches go in and out of favor as developers and users gain experience with different types of tasks and use cases.
In its December 2016 report commissioned by Red Hat, “OpenStack Platform Delivers for Private Cloud Users, Organizations show preference for trusted third parties,” 451 Research wrote: “[our] Voice of the Enterprise research survey shows that nearly 63% of users choose a vendor’s distribution of OpenStack or simply sign on with a service provider, which also may well use a popular distribution. Just 21% of organizations choose a do-it-yourself course.”
The goal for integrated platforms like OpenShift, OpenStack, and Cloud Suite is to simplify the startup process, reduce the level of skills required, and generally help organizations focus on their own application delivery needs rather than on the underlying platform.