Virtual event

Pulp Citron: Guardrailing customer Lemonade stand chatbot from attackers

Jump to section

Workshop | AI Tech Journey Series

As organizations integrate generative AI into their applications, traditional security measures like firewalls, authentication, and encryption can’t fully protect against threats such as prompt injection, data leakage, and policy violations. These risks can expose sensitive data, damage brand reputation, and undermine compliance.

Join this workshop to learn how to secure AI applications at scale with TrustyAI by delivering guardrails-as-a-service on Red Hat® OpenShift® AI. Through hands-on labs, our experts will show you how to:

  • Build a modular guardrail architecture where all AI traffic passes through an orchestration layer that enforces security and compliance checks by default, with added monitoring and visualizations.
  • Combine and customize different detectors—such as prompt injection detection, content safety filters, and language or business policy validation.

By the end of the session, you'll know how to standardize, scale, and automate AI security practices on Red Hat OpenShift AI, as well as how to continuously test and evolve guardrails against emerging threats using open source tools. 


Anneli Sara Banderby

Anneli Sara Banderby

Senior Specialist Solution Architect, AI, Red Hat

Cansu Kavılı Örnek

Cansu Kavılı Örnek

Principal AI Platform Architect, Red Hat

Robert Lundberg

Robert Lundberg

Principal AI Platform Architect, Red Hat

AI Tech Journey Hub