The following is an excerpt from my Red Hat Summit keynote today.

Over the past few years, Paul, Matt and I have talked about how the next evolution in computing needs to be the open hybrid cloud. Linux won in private datacenters, and it has won on public cloud platforms.

But if we leave things as they are, the biggest competitors in this space will continue to find ways to lock customers into their technologies and services.

To achieve a true open hybrid cloud approach, we need to continue to create and foster communities that share this common purpose: to democratize the tremendous power of cloud platforms through open standards at every level—from service APIs to hardware.

Every time we explore a new dimension of these technologies, we have to recommit to keeping things open. That’s the only way to ensure that the best ideas win and the future remains open to all.

When I think about what I’m seeing in the industry right now, I focus on the concurrent evolution of cloud services and hardware. But first, let’s understand where the pressure to innovate is coming from.

Businesses are feeling immense pressure to increase agility and automate processes to meet customer needs and, of course, grow financially. The transformation to become a digital business by giving cloud-like flexibility to critical business systems is not optional anymore; it’s an imperative.

Developers are looking for every option to reduce their time to success with each project. And operations teams need to maintain infrastructure security and reliability. So, we can’t just focus on perfecting the functionality of the software that we build.

In an environment where developers need to innovate as rapidly as their cloud platforms will allow, the stability and availability of software at massive scale is as important as that software’s ability to function at all.

Operate First

Good software with a poor operational experience has limited value, especially as more enterprise software moves into the cloud.

Right now, all of the operational expertise for running software is removed from the production of the code. If our goal is to maintain the “openness” in open hybrid cloud, we have to bridge the gap between that knowledge (and the tools, like automation, that support effective operations at scale) and the community that’s producing the software. This means really considering the operator’s software experience and “operationalizing” development.

This critical concept is something that I call “Operate First”.

Operate First is about supporting the development of operational knowledge that can be encoded into upstream open source projects. Without a community focus on Operate First, companies consume the efforts of open source communities, learn how to operate those systems at scale and then reserve that knowledge for themselves, rather than contributing back to the community. As the world moves towards cloud services, we have to bring the community along by building on the software and the strong relationships we’ve built.

A great example of what this looks like is OpenInfra Labs (OI Labs). It’s a project of the Open Infrastructure Foundation that works to integrate and optimize open source projects in production environments. It also publishes complete, reproducible stacks for existing and emerging workloads. As Paul mentioned in his keynote, Red Hat supports the Open Infrastructure Foundation through financial and engineering contributions to the Mass Open Cloud.

One of the newest OI Labs community efforts is the Telemetry Working Group. There are three core tenets to operating software: automation, software architecture—how many services you need, what APIs they surface—and telemetry. You can’t operate software—or have an Operate First mindset—without telemetry. The Telemetry Working Group’s work is important, because this is how we keep things open: by starting with collaboration.
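To make that third tenet concrete, here is a minimal sketch (my own illustration, not any particular project’s API) of the kind of metrics surface a service might expose so operators can observe it at all:

```python
from collections import defaultdict

class Metrics:
    """A toy in-process metrics registry: counters and gauges only."""

    def __init__(self):
        self.counters = defaultdict(int)
        self.gauges = {}

    def inc(self, name, value=1):
        """Counters only go up: requests served, errors seen."""
        self.counters[name] += value

    def set_gauge(self, name, value):
        """Gauges capture a current level: queue depth, memory in use."""
        self.gauges[name] = value

    def scrape(self):
        """Render everything as simple 'name value' lines for a collector."""
        lines = [f"{k} {v}" for k, v in sorted(self.counters.items())]
        lines += [f"{k} {v}" for k, v in sorted(self.gauges.items())]
        return "\n".join(lines)

metrics = Metrics()
metrics.inc("http_requests_total")
metrics.inc("http_requests_total")
metrics.set_gauge("queue_depth", 7)
print(metrics.scrape())
```

Real systems use richer formats and instrumentation libraries, but the principle is the same: without a scrapeable surface like this, there is nothing for automation to act on.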

In order to innovate on platforms like the MOC, we first need to guarantee optimal system performance and reliability. For IT ops teams, this means intelligently distilling insights from the crush of data generated across multiple cloud environments. According to the Linux Foundation, 175 zettabytes of data will be generated by 2025, 10 times more than in 2016. And teams are using multiple monitoring tools and technologies to keep track of it, making it difficult to quickly correlate and analyze application performance metrics to solve complex emerging problems.

Today’s dynamic IT infrastructure cannot be efficiently managed using a traditional, rules-based approach. Red Hat is working on this challenge with Advanced Cluster Management for Kubernetes.

Infusing AI into CI/CD workflows with AIOps

We continue to advance that approach, which is where the modern AIOps paradigm comes in. It combines platforms, big data and AI to help IT operations teams become more efficient—similar to Operate First. It’s an approach we’re already seeing customers adopt successfully—for example, using AI to enable compliance.

With how much is now moving through CI/CD pipelines, automating repeated tasks like resource configuration, code analysis and vulnerability checks is no longer an option—it’s a necessity. Infusing AI into a common CI/CD pipeline enables short, frequent development cycles with minimal disruption to operations.
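As a simple illustration of what “infusing AI” into a pipeline can mean in practice, here is a sketch (my own example, using a basic statistical test where a real AIOps system would train a richer model) that flags a pipeline run whose duration deviates sharply from history:

```python
import statistics

def flag_anomaly(durations, new_duration, z_threshold=3.0):
    """Flag a build whose duration is a statistical outlier.

    A z-score against historical runs stands in for the learned
    models a production AIOps system would use.
    """
    mean = statistics.fmean(durations)
    stdev = statistics.stdev(durations)
    if stdev == 0:
        return new_duration != mean
    return abs(new_duration - mean) / stdev > z_threshold

history = [212, 208, 215, 210, 207, 214, 209]  # past run times, seconds
print(flag_anomaly(history, 211))  # a typical run: not flagged
print(flag_anomaly(history, 640))  # likely a stuck stage or regression: flagged
```

The payoff is that operators are alerted to the handful of runs worth looking at, instead of eyeballing every pipeline execution.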

A great example of AI in a CI/CD workflow is Project Thoth, which is building an intelligent software recommendation system based on collective knowledge for Open Data Hub. This upstream AI-as-a-Service project runs in the Mass Open Cloud.

Securing apps across the hybrid cloud

Red Hat is focused on transforming how cloud-native workloads can be secured by shifting security left into the container build and CI/CD phases.

Our recent acquisition of StackRox underscores this focus. By bringing Kubernetes-native security to OpenShift, we’re furthering our vision of a single, holistic platform that not only enables developers to build secure applications but also lets teams efficiently enforce security policies across the hybrid cloud.

Meeting the demand for computational resources at the edge

Beyond the need for simplified security, systems are also changing to cope with exponential data growth. Modern enterprise applications run on data, and the proliferation of endpoints in the hybrid cloud has led to staggering amounts of data to process. In the past, companies focused on getting this data back to a central point for processing. But that approach isn’t going to work anymore.

According to FinancesOnline, there will be 75 billion IoT devices by 2025. And those devices are generating a lot of data. For example, the Automotive Edge Computing Consortium and IHS Automotive say that—at a low estimate—the average connected vehicle will produce up to 30 terabytes of data in a single day of driving. With this volume of information flowing into enterprise systems, we’re hitting actual physical limitations with bandwidth and latency.
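A quick back-of-the-envelope calculation shows why backhauling that data hits physical limits. Taking the consortium’s low-end figure of 30 terabytes per vehicle per day:

```python
# Back-of-the-envelope: the sustained uplink one connected vehicle
# would need to ship 30 TB/day back to a central datacenter.
TB = 10**12                      # terabyte, in bytes (decimal)
bytes_per_day = 30 * TB
seconds_per_day = 24 * 60 * 60

sustained_bits_per_sec = bytes_per_day * 8 / seconds_per_day
print(f"{sustained_bits_per_sec / 1e9:.2f} Gbit/s per vehicle, sustained")
```

That works out to roughly 2.8 Gbit/s, around the clock, for a single vehicle, before you multiply by a fleet.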

In order to keep up, we have to push the ability to process and react to information out to those edge endpoints. This will mean tackling challenges that are unique to the environments where edge devices are deployed.

Some of these challenges include:

  • Device resource constraints—like limited CPU, RAM and storage,

  • Securing publicly accessible devices, and

  • Building tolerance for interruptions across massive, distributed networks.
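That last challenge, tolerating interruptions, usually comes down to store-and-forward: buffer readings locally and flush them when the uplink returns. Here is a minimal sketch (my own illustration, with a deliberately flaky stand-in for the network) of that pattern:

```python
import random

def flaky_send(batch, failure_rate=0.5, rng=random):
    """Stand-in for an uplink that drops out intermittently."""
    if rng.random() < failure_rate:
        raise ConnectionError("uplink unavailable")

def drain(buffer, send=flaky_send, max_attempts=3):
    """Try to flush buffered readings; keep whatever couldn't be sent.

    A real edge agent would add backoff, persistence across reboots and
    size caps, so a long outage doesn't exhaust limited RAM and storage.
    """
    sent = []
    for _ in range(max_attempts):
        if not buffer:
            break
        batch = buffer[:10]
        try:
            send(batch)
        except ConnectionError:
            continue  # leave readings buffered; retry next cycle
        sent.extend(batch)
        del buffer[:10]
    return sent

readings = list(range(25))
delivered = drain(readings)
print(f"delivered {len(delivered)}, still buffered {len(readings)}")
```

The point is that the device keeps functioning, and no data is silently lost, whether or not the network happens to be up at any given moment.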

Few of our customers understand that challenge better than Swiss Federal Railways, whose software developers see the increasing demand for computational resources at the edge firsthand.

It’s simply not possible to operate at the edge without processing and interpreting data at the endpoint itself: running on local data, without relying on a persistent network connection.

For example, CPUs are no longer providing the yearly performance gains that enterprise software has come to depend on—and that edge computing needs. A host of domain-specific acceleration hardware has emerged to augment the CPU and get workloads the performance they need.

SmartNICs represent an example of this approach to improving compute performance by offloading compute-intensive tasks to optimized components. However, from a computing perspective, SmartNICs still rely on a general-purpose CPU to coordinate and dispatch workloads. In addition, they are generally only extensible through proprietary, manufacturer-specific code.

This creates interoperability challenges and conflicts with the architectural principles of Kubernetes containerized applications. It also conflicts with the idea of abstracting hardware and virtualization and creates security challenges by crossing isolation boundaries.

So we’re excited about the next evolution of compute that addresses these challenges: an accelerator with flexible compute capabilities and software-defined functions.

This accelerator works as its own computer and can offload entire subsystems and maintain security isolation. It essentially moves hardware virtualization from the hypervisor to a software-defined device function.

A great example of this new type of accelerator is NVIDIA’s BlueField-2 Data Processing Unit, or DPU. Red Hat is working closely with NVIDIA and our ISV ecosystem to make DPU accelerators accessible to RHEL and OpenShift users through software-defined device functions. With the DPU, NVIDIA is pioneering the concept of disaggregated and accelerated data processing for the next generation of datacenters.

The datacenter of the future is an open hybrid cloud datacenter

This new system model is truly exciting. What we are talking about here is defining the datacenter of the future: a truly open hybrid cloud datacenter.

At Red Hat, we want to see open source become the de facto standard for creating these new composable architectures.

Our partners—chip makers like NVIDIA and Intel—are committed to working with us and open source communities to develop open, composable architectures.

This work shows the value of participating in something that’s bigger than any one person or company, learning how to innovate together and then contributing that knowledge back to the community.

It’s an approach that will be especially important this year, because we see 2021 as a turning point where—instead of just talking about cloud-first—organizations are evolving to deploying cloud everywhere, extending to the edge.

Emerging technologies rely on a future rooted in open standards, open code and open knowledge. And that’s why open hybrid cloud is so important. Whether you’re a customer, partner, open source community member or fellow Red Hatter, we are all working together, and we all have an important role to play in supporting this vision for the open hybrid cloud. It’s one of the things I really love about open source—we create the future together.


About the author

Chris Wright is senior vice president and chief technology officer (CTO) at Red Hat. Wright leads the Office of the CTO, which is responsible for incubating emerging technologies and developing forward-looking perspectives on innovations such as artificial intelligence, cloud computing, distributed storage, software defined networking and network functions virtualization, containers, automation and continuous delivery, and distributed ledger.
