When Red Hat introduced our AI quickstarts, EDB proposed a use case that balances the business need for data access with the non-negotiable demand for governance. We often treat this as a zero-sum game, but what if the architecture itself could negotiate peace?
This Data Governance Co-Pilot AI quickstart, built on Red Hat OpenShift AI and the EDB Postgres AI (PGAI) platform, treats safe data discovery as a first-class requirement. It provides a protected workspace where any data consumer can navigate complex schemas and extract insights with less risk of tripping compliance wires.
Retrieval-augmented generation (RAG) with privacy
The real governance challenge with agentic AI isn't just what the large language model (LLM) knows; it's what it can do. Agents with direct database access can execute queries autonomously, bypass access controls, and return raw personally identifiable information (PII) to the chat interface. The architecture needs to make compliant behavior the only possible behavior.
The AI quickstart is built on pg-airman-mcp, EDB's open source Model Context Protocol (MCP) server for Postgres integrated with PGAI. It uses agentic tool calling to Postgres for an innovative safeguard—an analytics tool that merges static data governance policies with a natural language query interface. This means that policies concerning data privacy, confidentiality, and data timeliness become an active filter directly embedded within the analyst's tooling and workflow. User queries are processed through an uploaded governance policy and the LLM to generate policy-compliant SQL statements, preventing data leakage to the chat interface.
Because masking happens at the SQL level, the LLM never touches the raw data payload.
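To make the idea concrete, here is a minimal sketch of SQL-level masking. The policy format, column names, and helper functions are illustrative assumptions, not pg-airman-mcp's actual API; the point is that the masking expression is part of the generated SQL, so raw PII never leaves Postgres.

```python
# Minimal sketch: enforce a masking policy by rewriting the projected
# columns before the SQL ever runs. The POLICY structure and helper
# names are invented for illustration.

# Governance policy mapping sensitive columns to a masking strategy.
POLICY = {
    "customers": {
        "email": "partial",   # keep the domain, hide the local part
        "ssn": "redact",      # never return the value
    }
}

def mask_expression(column: str, strategy: str) -> str:
    """Return a SQL expression that replaces the raw column value."""
    if strategy == "redact":
        return f"'***' AS {column}"
    if strategy == "partial":
        # Mask everything before the @ sign server-side.
        return f"regexp_replace({column}, '^[^@]+', '***') AS {column}"
    return column

def compliant_select(table: str, columns: list) -> str:
    """Build a SELECT whose projection already applies the policy."""
    rules = POLICY.get(table, {})
    projected = [mask_expression(c, rules[c]) if c in rules else c
                 for c in columns]
    return f"SELECT {', '.join(projected)} FROM {table}"
```

Calling `compliant_select("customers", ["name", "email"])` produces a query where the email is masked inside Postgres itself, so the chat interface only ever receives the masked result set.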
Governance policies as filters
Usually, data governance policies are static documents. This Data Governance Co-Pilot AI quickstart changes the paradigm by allowing users to upload a data governance policy directly into the workflow, turning it into an active filter.
If a trusted analyst asks for a dataset that violates a residency rule, the system understands the policy and dynamically enforces rules, such as masking geographic data. It bridges the gap between legal intent and technical execution, helping non-technical teams understand database boundaries without requiring a lecture from the security team.
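A rough sketch of how an uploaded policy might act as an active filter that both enforces a rule and explains it back to the analyst. The rule structure and region wording are invented; in the quickstart, the LLM derives enforcement from the uploaded governance document.

```python
# Sketch: a policy rule becomes an active filter that masks a column
# AND surfaces the legal reason to the analyst. Rule shape is hypothetical.

POLICY_RULES = [
    {"column": "country", "action": "mask",
     "reason": "Data residency policy: geographic detail must be masked"},
]

def apply_policy(requested_columns):
    """Return (allowed_projection, notices) after enforcing policy rules."""
    allowed, notices = [], []
    for col in requested_columns:
        rule = next((r for r in POLICY_RULES if r["column"] == col), None)
        if rule and rule["action"] == "mask":
            allowed.append(f"'<masked>' AS {col}")
            notices.append(rule["reason"])   # explain the boundary, don't just block
        else:
            allowed.append(col)
    return allowed, notices
```

Returning the `reason` alongside the masked projection is what lets a non-technical user learn the boundary from the tool itself rather than from the security team.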
An air-gapped cloud native stack
Sending database schemas and query context to external AI providers introduces real risk—latency, exposure, and dependency on infrastructure you don't control. This AI quickstart eliminates that risk by running the entire stack within your Red Hat OpenShift cluster—the UI, the MCP server, the database, and the backend orchestrator. No data crosses the public internet to reach a third-party model.
By keeping the reasoning engine and the data store co-located within the same cluster, you dramatically reduce your attack surface and remove any dependency on third-party AI infrastructure. This gives you sovereign data and AI, fully realized within your own infrastructure.
"Restricted Mode" by default
It's realistic to be concerned about agentic AI executing a DROP TABLE command in production. The underlying pg-airman-mcp engine addresses this integrity risk by implementing two distinct access modes; DROP operations are strictly prohibited in both.
The system defaults to "Restricted Mode" for production, while developers can opt into "Unrestricted Mode." In Restricted Mode, the engine performs deep AST-level validation that limits operations to read-only transactions and imposes constraints on resource utilization, so a curious analyst can't accidentally lock up the database or modify the schema. Unrestricted Mode executes SQL directly (subject to standard database permissions), but it still maintains the global block on DROP commands.
The only exception is metadata maintenance: users can update object comments via a configurable ALLOW_COMMENT_IN_RESTRICTED flag, allowing collaborative documentation without risking schema integrity. This gives users a powerful tool to explore data without giving them the keys to the kingdom.
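The mode logic can be summarized with a toy validator. Note that pg-airman-mcp performs real AST-level validation; the keyword-level checks below only sketch the policy it enforces (read-only in Restricted Mode, an optional COMMENT carve-out, and DROP blocked everywhere).

```python
# Toy illustration of the two access modes. This is NOT the real
# validator: the engine uses AST-level analysis, not string prefixes.

ALLOW_COMMENT_IN_RESTRICTED = True  # mirrors the configurable flag

READ_ONLY_PREFIXES = ("SELECT", "EXPLAIN", "SHOW", "WITH")

def validate(sql: str, restricted: bool = True) -> bool:
    """Return True if the statement is permitted under the given mode."""
    stmt = sql.strip().rstrip(";").upper()
    if stmt.startswith("DROP"):
        return False          # globally prohibited in both modes
    if not restricted:
        return True           # standard DB permissions still apply
    if ALLOW_COMMENT_IN_RESTRICTED and stmt.startswith("COMMENT ON"):
        return True           # collaborative documentation carve-out
    return stmt.startswith(READ_ONLY_PREFIXES)
```

A write like `UPDATE` fails in Restricted Mode but passes in Unrestricted Mode, while `DROP` fails in both—exactly the split described above.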
Fast time-to-insight
New analysts often have database access, but don't know the table names or relationships. "Schema Intelligence" guides the user through the schema via context-aware SQL generation based on a deep understanding of database objects and attached metadata. This metadata gives the LLM semantic context about both the individual objects and the broader schema. The AI becomes a mentor, supporting exploratory understanding of database objects and empowering the user to unpack the company's data structure independently, reducing the burden on technical staff.
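One way to picture this is metadata being rendered into prompt context for the LLM. The metadata shape below is an invented stand-in; in the quickstart, this information comes from Postgres catalogs and the attached metadata, not a hand-written dictionary.

```python
# Sketch: turning table and column descriptions into semantic context
# for context-aware SQL generation. SCHEMA_METADATA is illustrative.

SCHEMA_METADATA = {
    "orders": {
        "comment": "One row per customer order; joins to customers.id",
        "columns": {
            "id": "primary key",
            "customer_id": "foreign key -> customers.id",
        },
    },
}

def schema_context(metadata: dict) -> str:
    """Render schema metadata as prompt context for the LLM."""
    lines = []
    for table, info in metadata.items():
        lines.append(f"Table {table}: {info['comment']}")
        for col, desc in info["columns"].items():
            lines.append(f"  - {col}: {desc}")
    return "\n".join(lines)
```

Prepending this context to the user's natural-language question is what lets the model propose correct joins without the analyst ever memorizing the schema.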
Final takeaway
EDB's Data Governance Co-Pilot AI quickstart provides a layer of architectural glue that sits between your users and your data, actively enforcing safety. You can try it today.
About the authors
For Shane Heroux, technology has always been about connections: connecting systems, people, and ideas. His open source journey kicked off in a college dorm room in the mid-90s, tinkering with Slackware just for fun. It wasn't long before he found his way to Red Hat, and he's been an active part of the Linux and open-source communities ever since.
He officially joined the team in 2018, first diving deep into the world of containers as an OpenShift Consultant. He then moved into the partner space as a Technical Account Manager, where he discovered a passion for building success with partners, not just for them.
Today, that focus is his pride and joy. Shane thrives on collaborating with the incredible Red Hat partner ecosystem to design and develop creative solutions that solve real-world problems. For him, it's all about using the power of open, collaborative technology to build a better, more efficient, and more connected world for everyone.
Bilge Ince is a Machine Learning Engineer at EDB, where she works at the intersection of artificial intelligence and PostgreSQL. Her role involves both extending PostgreSQL with AI capabilities and building AI-driven solutions that use PostgreSQL as a core platform.
She is active in the open source and PostgreSQL communities and co-organises Diva: Dive Into AI, a conference dedicated to artificial intelligence, with a strong focus on accessibility and inclusion. Bilge also contributes to education and mentorship, having taught Machine Learning and Web Application Security at the Turkish Linux Users Association summer camps, and led courses at the University of Bremen’s Informatica Feminale, an international summer school for women in technology.
Originally from Turkey and now based in London, she balances her professional work with Muay Thai training and long-distance running, and completed the 2024 Amsterdam Marathon.