
It’s one thing to answer the IT challenges of today – at Red Hat Summit and AnsibleFest you’ve seen our own responses to a variety of technology demands, from delivering an open source foundation model platform for AI, to new IT policies as code, and so much more. But today is fleeting, and tomorrow is here before we know it. So how do you, right now, solve the technology puzzles that haven’t materialized yet? Luckily, we’ve got a time-tested way to help us plan for and create the future: Open source communities and projects.

Today, many people look at AI/ML as a future technology. It’s still nascent in many organizations, for sure, with strategies, planning and big ideas taking center stage rather than deployments and production. Not so in the open source world: We’re already looking ahead to how to answer the next wave of AI-driven questions.

You could fill an entire conference keynote with what the future holds for AI, but I want to focus on three distinct areas being tackled by open source projects:

  • Democratization
  • Sustainability
  • Trust

If you solve or at least start to solve these issues, then the rest of an AI strategy may start to feel a little less complex and more attainable.


Democratization

We need to be blunt when it comes to AI terminology: It’s hard not to raise an eyebrow at “open” models where the word arrives wrapped in quotation marks or trailed by an asterisk. Don’t get me wrong: These models are critical to the field of AI, but they aren’t open in the sense of open source. They’re open for usage – many with various restrictions or rules – but they may not be open for contributions, nor do they come with open training data sets or weights.

This is a challenge that we addressed today, and will continue to address, in collaboration with IBM Research. Alongside the launch of InstructLab, IBM Research is now applying the open source Apache 2.0 license to its Granite language and code models. This is huge – not because it’s unique to have a model governed by an open source license, but because anyone, through InstructLab, can now contribute to these models to make them better.

More than that, you can actually make an AI model YOUR AI model. Do you want to build a chatbot focused around fishing? Go for it, contribute it back, let’s make ChatGoFish. Want to focus a troubleshooting bot around a really specific niche technology area? Do it with InstructLab. The possibilities become boundless when you really, truly apply open source principles to AI models, and we’re here for it.


Sustainability

OK, let’s get straight to the point: Model training and AI inference require a lot of power. The International Energy Agency expects electricity demand from the AI industry to grow tenfold by 2026. So what does this mean, other than that crypto miners now have a rival for the energy industry’s attention?

It means we need to bring software – open source software – to bear to help solve this challenge. Getting started with AI will almost always be power-hungry, but we can be smart about it. We’ve already taken steps in this regard with modern enterprise IT through the Kepler project, which helps provide insights into the carbon footprint and energy efficiency of cloud-native applications and infrastructure. It’s currently available as a technology preview in Red Hat OpenShift 4.15.

But what if we can, through the power of open innovation, turn Kepler into a tool that can also watch power consumption of GPUs, not just CPUs?

We’re doing just that, using Kepler to measure the power consumption of ML models for both training and inference. This provides a full view of the power consumption of both traditional IT and your AI footprints – once again, brought to you by open source.
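Kepler exposes its measurements as cumulative energy counters that can be aggregated per workload. As a minimal sketch of that aggregation step – the sample shape and values below are illustrative stand-ins, not Kepler’s actual metric API – consider:

```python
# Minimal sketch: aggregating Kepler-style energy counters per namespace.
# Kepler reports cumulative joules per container; the sample shape below
# is an illustrative assumption, not a guaranteed interface.

from collections import defaultdict

def joules_to_kwh(joules: float) -> float:
    """Convert joules to kilowatt-hours (1 kWh = 3.6 million joules)."""
    return joules / 3_600_000

def energy_by_namespace(samples):
    """Sum energy counters (joules) per namespace and convert to kWh.

    `samples` is a list of dicts like:
      {"namespace": "ml-training", "container": "trainer", "joules": 7_200_000}
    """
    totals = defaultdict(float)
    for s in samples:
        totals[s["namespace"]] += s["joules"]
    return {ns: joules_to_kwh(j) for ns, j in totals.items()}

samples = [
    {"namespace": "ml-training", "container": "trainer", "joules": 7_200_000},
    {"namespace": "ml-training", "container": "loader", "joules": 1_800_000},
    {"namespace": "web", "container": "frontend", "joules": 900_000},
]
print(energy_by_namespace(samples))
# {'ml-training': 2.5, 'web': 0.25}
```

In a real deployment you would scrape these counters from Kepler’s metrics endpoint and take deltas over a time window rather than raw totals, which is how you would compare, say, a training job’s draw against the rest of the cluster.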


Trust

As with any exciting new technology, we need to be able to effectively secure AI workloads, models and platforms. Innovation without security is simply “risk,” which is something that both enterprises and open source communities want to minimize.

For software, supply chain and provenance are key to delivering a more secure experience. This means having a clear understanding of where given bits come from, who coded them and who touched them before they make it into production. The Red Hat-led sigstore project helps verify the provenance of the open source code that you’re using across all stages of application development.

Now we need to apply this same level of forethought, discipline and rigor to AI models. That’s exactly what Red Hat and the open source community are doing by working to create an AI Bill of Materials, which delivers greater assurances around model builds using our secure supply chain tooling.
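To make the idea concrete, here is a hypothetical sketch of the kind of provenance record an AI Bill of Materials might capture – the field names and structure are illustrative assumptions, not a published Red Hat schema:

```python
# Hypothetical sketch of an AI bill-of-materials record: pin the model
# artifact and its training inputs by cryptographic digest so anyone
# downstream can verify them. Field names here are illustrative only.

import hashlib
import json

def sha256_of(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def build_aibom(model_name, model_bytes, license_id, training_sources):
    """Assemble a provenance record for a model artifact.

    `training_sources` is a list of (name, bytes) pairs standing in for
    training data sets; in practice these would be dataset URIs + digests.
    """
    return {
        "model": model_name,
        "license": license_id,
        "model_digest": "sha256:" + sha256_of(model_bytes),
        "training_data": [
            {"name": name, "digest": "sha256:" + sha256_of(blob)}
            for name, blob in training_sources
        ],
    }

bom = build_aibom(
    "granite-example",
    b"model-weights-placeholder",
    "Apache-2.0",
    [("docs-corpus", b"dataset-placeholder")],
)
print(json.dumps(bom, indent=2))
```

Pinning digests for the model and its training inputs is the same trick sigstore applies to source code: anyone downstream can recompute the hashes and confirm that nothing changed along the way.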

Going hand in hand with security is the concept of trust – how do you and your organizations know that you can trust the AI models and workloads that you’re banking the future on? This is where TrustyAI comes in. It helps a technology team understand the justifications of AI models as well as flag potentially problematic behavior.

Through these examples, I hope you can see how open source is working to bring greater accessibility, more sustainability, and enhanced security and trust to AI for the future. And at Red Hat, we’re proud to be at the forefront of driving all of these technologies, none of which would be possible without open source community collaboration that spurs new ways of thinking.

About the author

Chris Wright is senior vice president and chief technology officer (CTO) at Red Hat. Wright leads the Office of the CTO, which is responsible for incubating emerging technologies and developing forward-looking perspectives on innovations such as artificial intelligence, cloud computing, distributed storage, software defined networking and network functions virtualization, containers, automation and continuous delivery, and distributed ledger.

During his more than 20 years as a software engineer, Wright has worked in the telecommunications industry on high availability and distributed systems, and in the Linux industry on security, virtualization, and networking. He has been a Linux developer for more than 15 years, most of that time spent working deep in the Linux kernel. He is passionate about open source software serving as the foundation for next generation IT systems.

