At Red Hat Summit and beyond, we are exploring how we can help our customers and ecosystem partners expand their possibilities. We're demonstrating how Red Hat's platforms, built around Red Hat Enterprise Linux (RHEL) and Red Hat OpenShift, are truly your trusted platforms for a new cycle of innovation. We're revealing how our software partners are helping operations teams and developers use artificial intelligence (AI), machine learning (ML), and deep learning (DL) in ways never before available. And we're showing you how these new capabilities are helping businesses deliver value in disruptive ways--not only for themselves but for entire industries.

To innovate and build a thriving business, you depend on a platform that delivers a consistent, stable environment for you to quickly and easily leverage technology. You require a platform that lets you focus on the innovation that is core to your business. A hybrid cloud platform provides consistency independent of the underlying infrastructure, giving you the flexibility to choose where you create and innovate. Red Hat’s hybrid cloud platform creates the consistency and stability you need now without cutting you off from future innovation.

Let’s take a look at what we’re focused on and the innovations that excite us.

Innovation--and change--happens incrementally

Technology innovation does not have to be a single radical change; it is often a series of incremental improvements. This is the open source way. Those improvements, over time, compound to create disruptive business impacts. Across all industries, enterprises have recognized the business value that can be created with new or novel technology innovations. With open source as a key innovation engine for the industry, the rate of technology change is accelerating. As a result, businesses are working hard to absorb all of this change and become digital leaders in their market. Enterprise businesses are expected to not only meet the changing needs of their end-users, but also ensure continued support for preexisting solutions.  

Over the last two decades, Red Hat has been working alongside the open source software industry to create the first generation of community-driven innovation in IT. This trend shows not only the value of open technologies rapidly evolving to meet end-user demands, but also the importance of developing these innovations the open source way to provide stability across the generations of technologies.  

In that time, innovations like the internet, smartphones, DevOps, big data, and cloud computing have taken off. Reliant on open source, these technologies have combined to deliver so much value that they are now ubiquitous in our everyday lives. Through it all, Red Hat has remained the common platform businesses trust to develop and deploy on, with all the benefits of open source.

Now, new trends in open source innovation are emerging: hardware-based acceleration, data-centric application workloads and the mass adoption of machine learning, and distributed computing. These technological advancements are set to unleash a new cycle of business disruption, the early effects of which we can already see today. What are the technologies driving these trends, and how is Red Hat leveraging them?

Hardware enables platform innovation

Innovation begins at the hardware level. As Moore's Law runs up against the laws of physics, we would quickly exhaust the capacity of our platforms without changes in how we use hardware to accelerate workloads. Fortunately, modern platforms are optimizing end-to-end, systems-level performance, using software to take advantage of acceleration capabilities in hardware to support performance-sensitive workloads like ML and DL. For example, we're seeing tensor processing units (TPUs) from Google, Intel augmenting its central processing units (CPUs) with DL Boost, and graphics processing units (GPUs) as ways to accelerate ML and DL workloads.

Think about where we were just one year ago, looking at CPUs with just under 30 cores compared with GPUs with over 5,000 cores. We continue to see model training times improve with newer hardware and optimized software, but we also see efficiency increase with lower-power devices that are well suited for inference at the edge. As systems continue to evolve to support hardware-accelerated ML/DL workloads, we're pushing boundaries and expanding our possibilities.
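To make this concrete, here is a minimal sketch of how an application taps into an accelerator when one is present. The framework (PyTorch) and the tiny model are my own illustrative choices, not something prescribed by any Red Hat or NVIDIA product; the point is simply that the same training code can target a GPU's thousands of cores or fall back to the CPU.

```python
# Minimal sketch: the same training loop runs on a GPU when available,
# otherwise on the CPU. PyTorch is used here purely as an example framework.
import torch
import torch.nn as nn

# Pick the fastest device available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny model and synthetic data, just to illustrate the pattern.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

inputs = torch.randn(256, 64, device=device)
targets = torch.randn(256, 1, device=device)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()      # gradients are computed on the accelerator
    optimizer.step()
```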

One of our collaborators in this work is NVIDIA. NVIDIA has been with Red Hat all along, making sure their newest hardware capabilities are accessible, available, certified, and supported end-to-end in our solutions. Achieving peak performance out of hardware also depends on the end-to-end optimization of specialized toolchains, network fabric, hardware and drivers, software stacks, ML frameworks and applications. And now, we are deepening our alliance to accelerate and scale ML workloads across hybrid cloud environments by combining Red Hat’s industry-leading enterprise open source software solutions with NVIDIA’s GPU hardware and NVIDIA GPU Cloud (NGC).

The combination of Red Hat and NVIDIA enables applications to more efficiently tap into the raw computational power needed to run resource-intensive workloads and scale them across hybrid environments.

Machine learning enhances operations

With next-generation hardware in place, let's not forget the people responsible for keeping the environment running and delivering optimal performance and high throughput to the business-critical applications running on Red Hat Enterprise Linux (RHEL) and Red Hat OpenShift -- the operations (ops) teams. Operating and managing modern software stacks is complex work. Cloud-native platforms like OpenShift have simplified some of this work, making it easier to deploy applications on demand. In addition to the ever-present pressure to deliver more value with less, operations teams are also expected to look for ways to make things run more efficiently while delivering new capabilities for developers, applications, and customers. Unexpected hardware failures, demand spikes the infrastructure can't support, or puzzling performance issues can add more work to already mounting pressure. Machine learning models trained on data collected from the infrastructure can help ops teams scale.

Reliability is paramount for enabling innovation; everything must be running properly before ops can think about maximizing performance. This is where AIOps comes into play. AIOps is the combination of platforms, big data, and AI/ML, used to enhance practices like performance monitoring, event correlation and analysis, and management. At Red Hat, we're actively enabling AIOps with solutions like Red Hat Insights and with core concepts of Red Hat OpenShift 4, such as Kubernetes Operators. With AIOps, the infrastructure learns from its data and gains the ability to predict issues before they become problems. Think autonomous clouds or self-driving clusters.
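As a rough illustration of the idea (this is my own sketch, not a Red Hat Insights API or any specific product's pipeline), an AIOps workflow might train an unsupervised anomaly detector on "known good" infrastructure telemetry and flag deviations before they become outages. The metric names and thresholds below are hypothetical.

```python
# Minimal sketch: train an anomaly detector on healthy infrastructure metrics,
# then flag new observations that look abnormal.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-node metrics: [cpu_util, mem_util, disk_latency_ms]
normal_samples = np.random.normal(loc=[0.4, 0.5, 5.0],
                                  scale=[0.1, 0.1, 1.0],
                                  size=(1000, 3))

# Fit on known-good behavior; no labeled failures are required.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_samples)

# The second observation looks like a node under memory and I/O pressure.
new_samples = np.array([[0.42, 0.51, 4.8],
                        [0.95, 0.97, 80.0]])
flags = detector.predict(new_samples)   # 1 = normal, -1 = anomaly
for sample, flag in zip(new_samples, flags):
    if flag == -1:
        print("Potential issue detected:", sample)
```

In a real cluster, an alert like this would feed back into remediation workflows or an Operator, which is where the "self-driving" behavior comes from.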

One collaborator in this space is ProphetStor, whose solutions are built on Red Hat OpenShift and enhance its scaling and scheduling capabilities in multi-cloud environments. ProphetStor and AIOps help operations teams predict and optimize workloads and resources in your cluster.

Making machine learning easier

Data is required to train machines to learn. It is what enables predictions about the future, recommendations, and other valuable insights.

The work of using data for training machine learning models has historically been done by small groups of highly skilled data scientists. These data scientists are a precious resource, but in a traditional machine learning workflow, they can become a bottleneck. We need to enable developers to help data scientists scale.

With OpenShift, Red Hat, along with our partner ecosystem, is enabling data scientists and developers to connect data and key insights to applications. These intelligent applications allow businesses to use data to respond in real time to customer needs, building more value.

Thanks to our partnerships with companies like PerceptiLabs and H2O, developers can now easily create and deploy machine learning models to power intelligent applications on OpenShift.
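To show the shape of that workflow (this is my own sketch, not tied to PerceptiLabs, H2O, or any specific OpenShift feature), a data scientist trains a model and a developer wraps it in a small service that can be containerized and deployed on OpenShift to power an intelligent application. The dataset, model, and endpoint below are placeholders.

```python
# Minimal sketch: train a simple model, then expose it as an HTTP prediction
# service that could be built into a container image and deployed on OpenShift.
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Stand-in for the data scientist's trained model artifact.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [5.1, 3.5, 1.4, 0.2]}
    features = request.get_json()["features"]
    prediction = int(model.predict([features])[0])
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Once containerized, a service like this scales and updates like any other OpenShift workload, which is what lets developers, rather than only data scientists, put models into production.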

Innovation never stops

Together with the open source community and our partner ecosystem, we create a platform that enables our customers to openly collaborate and innovate to satisfy their own business needs. In doing so, they’re improving their own business and occasionally creating something truly disruptive.

These disruptive innovations create fundamental changes not only in one business, but across whole industries. Combined, these changes can impact the world. At the same time, open source has taught us that creativity comes from collaboration, trust, and a wealth of diversity. This is why I believe that open unlocks the world’s potential.

For a more in-depth look at how we are collaborating with industry innovators to further technology innovation, especially around machine learning and artificial intelligence, watch my keynote from Red Hat Summit 2019.


About the author

Chris Wright is senior vice president and chief technology officer (CTO) at Red Hat. Wright leads the Office of the CTO, which is responsible for incubating emerging technologies and developing forward-looking perspectives on innovations such as artificial intelligence, cloud computing, distributed storage, software defined networking and network functions virtualization, containers, automation and continuous delivery, and distributed ledger.
