Gen AI Bootcamp

A five-day immersive and hands-on learning program

Event overview

In this immersive learning program, Red Hat experts guide your teams through the fundamentals of enterprise AI, real-world Generative AI scenarios, and deployment strategies that scale. Over five days, you’ll gain not just skills, but the clarity and confidence to operationalize AI for business impact. Fees apply. 

At the end of five days, your teams will walk away with:

  • A clear understanding of AI fundamentals on OpenShift AI
  • Hands-on experience with Generative AI use cases
  • Best practices for scaling and deploying enterprise-ready AI solutions
  • Clarity on your AI roadmap and confidence to navigate emerging technologies

*Only available to participants in the Asia-Pacific (APAC) region.

Why attend this bootcamp

Accelerate AI delivery

Deploy faster with streamlined pipelines and proven open-source practices.

Make smarter decisions

Harness Generative AI for insight extraction, summarization, and custom model development.

Scale with confidence

Securely operationalize AI across the enterprise using Red Hat OpenShift AI.

Simplify operations

Reduce complexity with flexible toolchains and hands-on guidance from practitioners.

Stay future-ready

Identify your next steps to evolve AI capabilities with confidence.

What you’ll learn over 5 days

Day 1

Foundations of AI and OpenShift Basics

Get teams started with collaborative ways of working, OpenShift essentials, and the basics of AI in the enterprise.

  • Explore OpenShift architecture: control plane, worker nodes, Operators, and GPU enablement for AI and ML workloads (a short GPU-check sketch follows this list).
  • Deploy containerized applications using Source-to-Image (S2I), BuildConfigs, and OpenShift Pipelines (Tekton) for automated build and deployment.
  • Learn how OpenShift AI integrates with the OpenShift platform, combining Jupyter workbenches, model training environments, and hybrid cloud capabilities.
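
To make the GPU-enablement topic concrete, here is a minimal sketch, not part of the official lab material, that uses the Kubernetes Python client to report which worker nodes advertise NVIDIA GPUs. It assumes cluster access via a kubeconfig and that the NVIDIA GPU Operator exposes the nvidia.com/gpu resource.

```python
# Minimal sketch: list cluster nodes and report allocatable NVIDIA GPUs.
# Assumes the `kubernetes` Python client is installed and a kubeconfig
# (or in-cluster service account) grants permission to list nodes.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod

for node in client.CoreV1Api().list_node().items:
    allocatable = node.status.allocatable or {}
    gpus = allocatable.get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: {gpus} allocatable GPU(s)")
```
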
Day 2

Red Hat OpenShift AI and ML Platform

Go deeper into Red Hat OpenShift AI: workbenches, model serving, and the MLOps workflows that take models from data preparation to automated deployment.

  • Get familiar with Red Hat OpenShift AI (formerly Red Hat OpenShift Data Science): Workbenches, Model Serving, and managed environments supporting TensorFlow, PyTorch, and scikit-learn.
  • Build end-to-end MLOps workflows covering data preparation, model training, versioning, and automated deployment using Pipelines and the Model Registry (tech preview); a short training sketch follows this list.
  • Align enterprise agile practices, such as the Big Picture and Event Storming, with model experimentation, governance, and continuous delivery in OpenShift AI.
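
As an illustration of the workflow above, here is a minimal, notebook-style sketch of the train-and-save step as it might run in an OpenShift AI workbench. The dataset, model, and file path are placeholders; pushing the artifact to object storage and registering it for serving is left to the labs.

```python
# Minimal sketch: train a scikit-learn model and save it in a format a
# serving runtime can load. Dataset, model choice, and path are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import joblib

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")

# Persist the model; in a real pipeline this artifact would be pushed to
# object storage (e.g., S3) and registered before deployment.
joblib.dump(model, "model.joblib")
```
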
Day 3

Applied AI with Generative Models and RAG

Dive into LLMs, Retrieval Augmented Generation (RAG), and vector databases, applying these capabilities to real-world use cases such as summarization and contextual search.

  • Deploy and fine-tune Large Language Models (LLMs) using GPU-enabled OpenShift clusters and vLLM serving runtime.
  • Implement Retrieval-Augmented Generation (RAG) using OpenShift AI Model Serving with vector databases such as Milvus or pgvector for contextual enhancement; a short RAG sketch follows this list.
  • Apply generative AI for document summarization, contextual search, and knowledge retrieval using OpenShift’s data science pipelines and embedding features.
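
The following is a minimal RAG sketch under stated assumptions: documents and embeddings already live in a pgvector-enabled PostgreSQL table, query embeddings come from a sentence-transformers model, and the LLM is served by vLLM behind its OpenAI-compatible API. Hostnames, table and column names, and model names are placeholders, not courseware.

```python
# Minimal RAG sketch (all connection details and names are placeholders).
import psycopg2
from sentence_transformers import SentenceTransformer
from openai import OpenAI

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # embedding model (assumed)
llm = OpenAI(base_url="http://llm-server:8000/v1",   # vLLM OpenAI-compatible endpoint (assumed)
             api_key="not-used")

question = "How do I enable GPUs for model training on OpenShift?"
query_vec = embedder.encode(question)
vec_literal = "[" + ",".join(str(x) for x in query_vec) + "]"

# Retrieve the closest documents using pgvector's cosine-distance operator (<=>).
conn = psycopg2.connect("dbname=rag user=rag host=pgvector-db")  # assumed database
with conn.cursor() as cur:
    cur.execute(
        "SELECT content FROM docs ORDER BY embedding <=> %s::vector LIMIT 3",
        (vec_literal,),
    )
    context = "\n\n".join(row[0] for row in cur.fetchall())

# Ask the served LLM to answer using only the retrieved context.
response = llm.chat.completions.create(
    model="served-llm",  # whatever model the serving runtime hosts (assumed)
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```
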
Day 4

Build an AI Assistant for Contextual Search and Summarization

Get hands-on with a real-world use case: building a practical AI assistant for summarization and contextual search.

  • Architect an AI assistant using OpenShift AI Workbenches for development and Model Serving for scalable deployment.
  • Integrate embeddings and vector search with summarization models to deliver context-aware answers and document insights.
  • Expose the assistant securely through OpenShift routes and Service Mesh, with role-based access and API integration for enterprise applications; a short service sketch follows this list.
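
Here is a minimal sketch of the service layer only, assuming FastAPI as the web framework. The retrieval and summarization helpers are hypothetical stand-ins for the vector search and model-serving calls built earlier in the week, and the route or Service Mesh exposure is configured in OpenShift, not in this code.

```python
# Minimal sketch of an assistant API that could sit behind an OpenShift route.
# The two helper functions are placeholders, not real lab code.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Docs Assistant")

class Question(BaseModel):
    text: str

def retrieve_context(question: str) -> str:
    # Placeholder: in the lab this would query the vector database (Day 3).
    return "…relevant passages retrieved from the vector store…"

def summarize(question: str, context: str) -> str:
    # Placeholder: in the lab this would call the served LLM for a grounded answer.
    return f"Summary for '{question}' based on {len(context)} characters of context."

@app.post("/ask")
def ask(question: Question) -> dict:
    context = retrieve_context(question.text)
    return {"answer": summarize(question.text, context)}

# Run locally with, e.g.:  uvicorn assistant:app --port 8080
# (assuming this file is saved as assistant.py); on OpenShift the same
# container is exposed through a Route or a Service Mesh gateway.
```
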
Day 5

GitOps, Pipelines, and Model Inferencing

Move from lab to production with GitOps, model inferencing best practices, and team showcases that prove real-world readiness.

  • Implement GitOps with OpenShift GitOps (Argo CD) and OpenShift Pipelines (Tekton) to automate model builds, deployments, and environment promotions.
  • Manage model inferencing using single-model or multi-model serving on KServe, with monitoring for drift and bias through TrustyAI integration (tech preview); a short InferenceService sketch follows this list.
  • Ensure scalable and compliant AI operations via resource quotas, security policies, and multi-cluster governance using Advanced Cluster Management (ACM).
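
For reference, here is a minimal sketch of creating a KServe InferenceService with the Kubernetes Python client. The namespace, model name, and storage URI are placeholders, and in the GitOps labs the same manifest would live in Git and be synced by Argo CD rather than applied imperatively as shown here.

```python
# Minimal sketch: apply a KServe InferenceService custom resource.
# Assumes a kubeconfig with permission to create resources in the namespace.
from kubernetes import client, config

config.load_kube_config()

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "iris-model", "namespace": "demo"},  # placeholders
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "sklearn"},
                "storageUri": "s3://models/iris/",  # placeholder model location
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="demo",
    plural="inferenceservices",
    body=inference_service,
)
```
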

Your journey with Red Hat AI

The workshop is the ideal starting point for a broader, high-impact AI adoption journey with Red Hat’s comprehensive services:

Gen AI workshop

Build skills and define a path

AI quick starts

Accelerate your first project

Enterprise AI solutions

Scale and optimize across the organization

Delivered by Red Hat experts

Learn from those who build, scale, and deliver enterprise AI every day. Led by experienced Red Hat instructors, the bootcamp gives you deep technical expertise and real-world implementation experience in advanced topics such as LLMs, MLOps, and deployment automation.

Who should attend?

  • Enterprise IT and platform teams looking to operationalize AI at scale.
  • Data science and ML engineers ready to move beyond experiments and deliver production-ready solutions.
  • Innovation leaders aiming to shape AI strategy with confidence and business alignment.

Turn your GenAI goals into enterprise reality

Move beyond pilots. Schedule your Red Hat Gen AI Bootcamp and take the next step toward production-scale AI. Contact us for special pricing and scheduling.