This is the fifth and final post in a series that delves deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The fifth question asked:
What types of technologies are available to facilitate the integration of multiple generations of infrastructure and applications as hybrid cloud-native and conventional architectures evolve?
Mary and Gary write that “We expect that as these next-generation environments evolve, conventional and cloud-native infrastructure and development platforms will extend support for each other. As an example, OpenStack was built as a next-generation cloud-native solution, but it is now adding support for some enterprise features.”
This is one aspect of integration. Today, it’s useful to draw a distinction between conventional and cloud-native infrastructures, in part because they often use different technologies and those technologies are changing at different rates. However, as projects and products that are important for many enterprise cloud-native deployments--such as OpenStack--mature, they’re starting to adopt features associated with enterprise virtualization and enterprise management.
At the same time, enterprises are increasingly adopting practices and approaches more associated with cloud-native. As I noted in an earlier post in this series, Ansible, recently acquired by Red Hat, is an example of a DevOps tool being used by organizations even in their more traditional environments. Ansible provides a simple automation language for application infrastructure automation from servers to virtualized cloud infrastructures to containers. (Ansible also helps make automation pervasive and consistent across both classic and cloud-native IT using a common language in its “Playbooks.”)
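To give a flavor of that common language, here is a minimal, hypothetical Playbook sketch. The host group (`webservers`) and package name (`httpd`) are invented for illustration and are not from the original post:

```yaml
# Hypothetical example: ensure a web tier is installed and running.
# The "webservers" group and "httpd" package are assumptions for illustration.
- name: Configure the web tier
  hosts: webservers
  become: yes
  tasks:
    - name: Install the web server package
      yum:
        name: httpd
        state: present
    - name: Ensure the web service is started and enabled at boot
      service:
        name: httpd
        state: started
        enabled: yes
```

The same declarative YAML style applies whether the targets are bare-metal servers, virtual machines, or cloud instances, which is what makes Playbooks usable across both classic and cloud-native IT.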
The result is that we should expect both the technologies and the practices of “conventional” and “cloud-native” IT to increasingly converge, and the distinction between these two modes to blur over time. See, for example, this video about OpenStack VM Live Migration from the 2015 OpenStack Summit in Vancouver.
Mary and Gary also write that “Emerging technologies including container packaging systems such as Docker can offer portability across very diverse underlying infrastructure and can potentially work with both new and conventional applications. At the network level, software-defined networks can carry traffic and interconnect both legacy and cloud-native environments.”
This is a second aspect of integration. Infrastructures are likely to be hybrid in any case: there will be private and public clouds, bare metal and virtualized infrastructure, and perhaps clusters of hardware optimized for specific tasks. The need is often not to create a single homogeneous pool but to allow workloads to move across a hybrid and heterogeneous set of resources as needed. This is why portable container packaging is one of the main reasons the underlying container technology--also known as operating system virtualization, and dating back to at least BSD jails--suddenly became so interesting.
In addition to the software-defined networks mentioned by Mary and Gary, software-defined storage provides another tool to integrate across conventional and cloud-native IT. Storage is particularly important because application portability isn’t very useful if those applications can’t access the data they need. Gluster and Ceph are examples of open source, scale-out software-defined storage solutions that run on commodity hardware and have durable, programmable architectures. Red Hat Gluster Storage streamlines file and object access across physical, virtual, and cloud environments, while Red Hat Ceph Storage offers a scalable block and object storage platform for enterprises deploying public or private clouds. Returning to the first theme, these projects have also gained enterprise features, such as a solution integrating Red Hat Gluster Storage and Red Hat Enterprise Virtualization.
A third aspect of integration is the management unifying different modes and types of IT infrastructure. This is more or less the definition of a cloud management platform (CMP). A CMP like Red Hat CloudForms (based on the upstream ManageIQ project) manages the lifecycle of applications, places virtual workloads according to business priorities, and--automatically, through policies--monitors performance, security, and reliability across cloud platforms. Thus, OpenStack private clouds can be quickly deployed and scaled while also being combined with existing IT infrastructure investments and federated public cloud deployments. This helps avoid creating new silos when deploying new infrastructure.
There are other aspects of integration as well that I’m not going to get into here. Just as Web “wrappers” were applied to many existing applications during the first Internet boom, I expect that we will see many of today’s existing applications wrapped with APIs to allow them to communicate with new services. In many cases, existing databases will remain the canonical data sources for new systems of engagement. Business rules and process management, messaging, and service buses are needed to connect and integrate applications, data, and devices across on-premises, mobile, and cloud environments.
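A minimal sketch of that wrapper pattern: a legacy database stays the canonical data source while a thin function exposes it as JSON to new services. Everything here (the `parts` table, its columns, the `get_part` name) is invented for illustration; an in-memory SQLite database stands in for the legacy system.

```python
import json
import sqlite3


def get_part(conn, part_id):
    """Thin API wrapper: expose a row from a legacy inventory table as JSON.

    The table name and columns are hypothetical. The point is that the
    legacy database remains the canonical data source while new services
    consume a modern JSON interface instead of querying it directly.
    """
    row = conn.execute(
        "SELECT part_id, name, quantity FROM parts WHERE part_id = ?",
        (part_id,),
    ).fetchone()
    if row is None:
        return json.dumps({"error": "not found"})
    return json.dumps({"part_id": row[0], "name": row[1], "quantity": row[2]})


# Example: an in-memory stand-in for the legacy database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (part_id INTEGER, name TEXT, quantity INTEGER)")
conn.execute("INSERT INTO parts VALUES (42, 'widget', 7)")
print(get_part(conn, 42))  # → {"part_id": 42, "name": "widget", "quantity": 7}
```

In practice such a wrapper would sit behind an HTTP endpoint, but the design point is the same: new systems of engagement talk to the API, not to the legacy schema.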
In closing out this series of posts, I’m going to leave you with one final thought that’s been something of a common thread throughout. The idea of bimodal IT or two-speed IT is a useful model. It gives IT executives charged with keeping today’s lights on and today’s trains running license to simultaneously consider the frenetic innovation most associated with the Web giants. At the same time, however, it’s important not to treat bimodal IT as an invitation to create new silos, or to simply wall off existing systems and applications and ignore them.
Build your new platforms and applications but modernize your existing ones--and integrate with them.
Read the entire series: