Build your AI. On your terms.

Join our webinars and demos to see how Red Hat AI helps you build with flexibility, deploy with confidence, and innovate on your terms.


AI Tech Journey: A series of webinars and demos

Join our Red Hat experts for an insightful Tech Journey series designed to help put artificial intelligence (AI) to work for you, the open source way. Take a look at the sessions taking place over the coming months and register for those that interest you.


Watch on demand

Webinar: Mastering AI-assisted Enterprise Application Development with Red Hat

🗓  Date: October 29, 2025


Join us to explore how AI-assisted app development can be adopted in a scalable way while maintaining security and compliance.

  • Explore how the Red Hat Hybrid Application Platform powers enterprise AI strategies with essential components and resources
  • Learn the differences between vibe coding and agentic coding, their key use cases, and the risks and challenges of enterprise adoption

Webinar: Harnessing the Power of AI to Solve Enterprise Challenges with Red Hat and Intel AI Platforms

🗓  Date: October 7, 2025
 

Join us to explore how Red Hat’s AI platforms and Intel’s AI-optimized hardware simplify GenAI development.

  • Discover how enterprises are achieving faster AI project deployment, increased data science productivity, and lower infrastructure costs
  • Learn from real-world success stories and apply for a free GenAI proof of concept

Webinar: Building Smart Agents with Llama-Stack on OpenShift AI

🗓  Date: October 1, 2025


Join us to explore agentic GenAI with Llama Stack on OpenShift AI.

  • Watch Red Hat experts build live AI agents, including a web-search assistant and an OpenShift admin bot powered by MCP servers
  • Learn how to iterate locally, deploy globally, and monitor everything from model latency to audit logs using OpenShift AI

Webinar: Scaling Generative AI with Confidence: LLM-d and OpenShift for Distributed Inference

🗓  Date: September 11, 2025
 

Join us to explore LLM-d, a Kubernetes-native framework for distributed inference.

  • Learn how LLM-d integrates with OpenShift AI to support multi-GPU workloads and more
  • Explore how LLM-d enables declarative model deployment and delivers distributed serving for large models

Webinar: Optimize your GPU investments for AI performance

🗓  Date: August 14, 2025
 

Join us to explore GPU management challenges and unlock AI performance with centralized GPU-as-a-Service.

  • Learn how to simplify GPU management with a unified approach

  • Discover how to optimize GPU usage and improve AI performance


Webinar: AI in action: Top real-world use cases

🗓  Date: August 13, 2025
 

Join us to learn how to reduce costs and risks while building scalable production environments with Red Hat AI.

  • Learn how to craft the right AI strategy for your organization

  • Explore real-world use cases and see how Red Hat AI powers flexible, cost-effective deployment


Webinar: Accelerate private AI adoption with Models-as-a-Service

🗓  Date: August 12, 2025
 

Join us to learn the pitfalls of self-hosted AI and how Models-as-a-Service (MaaS) helps you avoid them.

  • Explore how MaaS and API gateways enable self-serve access to private AI models (see the client sketch below)

  • Discover the best ways to optimize costs, track usage, and enable chargeback for LLM deployments
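
For context on what self-serve access through a gateway looks like in practice, here is a minimal sketch. It assumes the gateway exposes an OpenAI-compatible endpoint, a common pattern for MaaS setups; the URL, token, and model name are placeholders, not details from the webinar.

    # Illustrative only: querying a privately hosted model through an
    # OpenAI-compatible API gateway. The base_url, API key, and model
    # name are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://maas-gateway.example.com/v1",  # hypothetical gateway endpoint
        api_key="YOUR_GATEWAY_TOKEN",  # per-team token, enabling usage tracking and chargeback
    )

    response = client.chat.completions.create(
        model="private-llm",  # placeholder name for a model exposed by the gateway
        messages=[{"role": "user", "content": "Summarize our Q3 support tickets."}],
    )
    print(response.choices[0].message.content)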


Webinar: Optimized AI Inference: Boosting Performance & Slashing Costs with Red Hat AI Inference Server

🗓  Date: July 25, 2025
 

Join us to explore scalable, fast, and cost-effective inference with Red Hat AI Inference Server.

  • Learn how to serve models quickly and efficiently with Red Hat AI’s Hugging Face repository

  • Discover how to optimize custom models with Red Hat’s LLM Compressor tool


Webinar: Deploying and Scaling Gen AI with OpenShift AI

🗓  Date: July 23, 2025
 

Join us to explore how to deploy, manage, and scale large language models (LLMs) using OpenShift AI. 

  • Discover vLLM for lightning-fast, memory-efficient inference (see the sketch below)

  • Learn strategies to optimize resource usage and performance at scale 
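
To give a feel for vLLM itself, the snippet below is a minimal local sketch; the model name and prompt are placeholders, and production deployment on OpenShift AI goes through its serving runtimes rather than this local API.

    # Illustrative only: running a model locally with vLLM's offline API.
    from vllm import LLM, SamplingParams

    llm = LLM(model="ibm-granite/granite-3.1-8b-instruct")  # placeholder model choice
    params = SamplingParams(temperature=0.2, max_tokens=128)

    outputs = llm.generate(["Explain what OpenShift AI provides."], params)
    print(outputs[0].outputs[0].text)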


Webinar: Building and Enhancing LLMs with RHEL AI

🗓  Date: June 25, 2025
 

Join us to find out how RHEL AI supports efficient LLM development and tuning.  

  • Learn how to leverage tools to generate synthetic data and fine-tune LLMs

  • Gain insights into aligning models and maximizing performance without proprietary price tags


Webinar: How Red Hat AI accelerates your Generative AI journey

🗓  Date: May 28, 2025


Join us as we unveil the innovative potential of Red Hat AI for Gen AI workloads. 

  • Learn how to streamline data generation and refinement, and how to host language models

  • Discover how to simplify model alignment, reduce costs, and deploy flexibly 

Build the solutions you need with Red Hat AI

Red Hat AI is the open source AI platform that works the way you do. Reduce costs with efficient models, customize them with your data and domain expertise, and deploy and manage workloads consistently across any infrastructure. All with tools designed to help your teams collaborate and scale.

Red Hat AI, explained

Isn’t Red Hat just Linux? Video duration: 0:50

Why is an AI platform essential? Video duration: 0:41

How can I cost effectively run AI models? Video duration: 0:35

Your vendors are your choice

That’s why they’re our partners, too. We work with open source communities and with software and hardware vendors to help you construct an AI solution of your own design.

Integrate your tech with ours

Contact Us

Talk to a Red Hatter about Red Hat AI