When Red Hat unveiled our AI quickstarts, EDB suggested a use case that balances the business need for data access with the non-negotiable demand for governance. We often treat this as a zero-sum game, but what if the architecture itself could negotiate peace?

This Data Governance Co-Pilot AI quickstart, built on Red Hat OpenShift AI and EDB Postgres AI (PGAI) platform, treats safe data discovery as a requirement. It provides a protected workspace where any data consumer can navigate complex schemas and extract insights with less risk of tripping compliance wires.

Retrieval-augmented generation (RAG) with privacy

The real governance challenge with agentic AI isn't just what the large language model (LLM) knows, it's what it can do. Agents with direct database access can execute queries autonomously, bypass access controls, and return raw personally identifiable information (PII) to the chat interface. The architecture needs to make compliant behavior the only possible behavior.

The AI quickstart is built on pg-airman-mcp, EDB's open source Model Context Protocol (MCP) server for Postgres, integrated with PGAI. It uses agentic tool calling to Postgres for an innovative safeguard: an analytics tool that merges static data governance policies with a natural language query interface. Policies concerning data privacy, confidentiality, and data timeliness become an active filter embedded directly in the analyst's tooling and workflow. User queries are processed through an uploaded governance policy and the LLM to generate policy-compliant SQL statements, preventing data leakage to the chat interface.

Handling masking at the SQL level prevents the LLM from ever accessing the raw data payload.
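To make the idea concrete, here is a minimal, purely illustrative sketch of what "masking pushed into the generated SQL" could look like. The `PII_COLUMNS` set, the `masked_select` helper, and the masking expression are all assumptions for illustration; pg-airman-mcp's actual query rewriting is not shown in this post.

```python
# Illustrative sketch only: the real engine derives its policy from an
# uploaded governance document. Here a hypothetical policy-derived set of
# PII columns is masked inside the database, so raw values never reach
# the LLM or the chat interface.

PII_COLUMNS = {"email", "ssn", "phone"}  # hypothetical policy output

def masked_select(table: str, columns: list[str]) -> str:
    """Build a SELECT in which policy-flagged columns are masked in-database."""
    exprs = []
    for col in columns:
        if col in PII_COLUMNS:
            # Replace all but the last 4 characters with '****', in Postgres,
            # using the standard overlay() string function.
            exprs.append(
                f"overlay({col} placing '****' from 1 for length({col}) - 4) AS {col}"
            )
        else:
            exprs.append(col)
    return f"SELECT {', '.join(exprs)} FROM {table};"

print(masked_select("customers", ["id", "email", "country"]))
```

The key design point is that the mask is part of the SQL the database executes, not a post-processing step: the sensitive payload is redacted before it leaves Postgres.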

Governance policies as filters

Usually, data governance policies are static documents. This Data Governance Co-Pilot AI quickstart changes the paradigm by allowing users to upload a data governance policy directly into the workflow, turning it into an active filter.

If a trusted analyst asks for a dataset that violates a residency rule, the system understands the policy and dynamically enforces rules, such as masking geographic data. It bridges the gap between legal intent and technical execution, helping non-technical teams understand database boundaries without requiring a lecture from the security team.

An air-gapped cloud native stack

Sending database schemas and query context to external AI providers introduces real risk—latency, exposure, and dependency on infrastructure you don't control. This AI quickstart eliminates that by running the entire stack within your Red Hat OpenShift cluster—the UI, the MCP server, the database, and the backend orchestrator. No data crosses the public internet to reach a third-party model.

By keeping the reasoning engine and the data store co-located within the same cluster, you dramatically reduce your attack surface and remove any dependency on third-party AI infrastructure. This gives you sovereign data and AI, fully realized within your own infrastructure.

"Restricted Mode" by default

It's realistic to be concerned about agentic AI executing a DROP TABLE command in production. The underlying pg-airman-mcp engine addresses this integrity risk by implementing two distinct access modes; DROP operations are strictly prohibited in both.

The system defaults to "Restricted Mode" for production, while developers can opt into "Unrestricted Mode." In Restricted Mode, the engine performs deep AST-level validation that limits operations to read-only transactions and constrains resource utilization, so a curious analyst can't accidentally lock up the database or modify the schema. Unrestricted Mode executes SQL directly, subject to standard database permissions, but still maintains the global block on DROP commands.

The only exception is metadata maintenance: users can update object comments via a configurable ALLOW_COMMENT_IN_RESTRICTED flag, allowing collaborative documentation without risking schema integrity. This gives users a powerful tool to explore data without handing them the keys to the kingdom.
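The mode rules above can be sketched as a small validator. This is a toy approximation: the real engine does deep AST-level validation, whereas this keyword check only illustrates the policy surface. The mode names and the ALLOW_COMMENT_IN_RESTRICTED flag come from the quickstart; the function shape and prefix list are assumptions.

```python
# Toy approximation of the mode rules described above. A production engine
# would parse the statement into an AST rather than inspect keywords.

ALLOW_COMMENT_IN_RESTRICTED = True  # configurable carve-out for metadata docs

READ_ONLY_PREFIXES = ("SELECT", "EXPLAIN", "SHOW", "WITH")

def validate(sql: str, restricted: bool = True) -> bool:
    """Return True if the statement is permitted under the current mode."""
    stmt = sql.strip().rstrip(";").upper()
    if stmt.startswith("DROP"):          # DROP is blocked in *both* modes
        return False
    if not restricted:
        return True                      # standard DB permissions still apply
    if stmt.startswith("COMMENT ON") and ALLOW_COMMENT_IN_RESTRICTED:
        return True                      # collaborative documentation exception
    return stmt.startswith(READ_ONLY_PREFIXES)
```

Note one reason keyword checks aren't enough and AST validation matters: in Postgres, a `WITH` clause can contain data-modifying CTEs, so a real read-only guard must walk the parse tree, not just the first token.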

Fast time-to-insight

New analysts often have database access, but don't know the table names or relationships. "Schema Intelligence" guides the user through the schema via context-aware SQL generation based on a deep understanding of database objects and attached metadata. This metadata provides the LLM with powerful semantic context regarding both the individual objects and the broader schema. The AI becomes a mentor, facilitating an exploratory use case for understanding database objects, empowering the user to unpack the company's data structure independently and reducing the burden on technical staff.
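As a rough sketch of how attached metadata might become semantic context for the LLM, the snippet below renders table and column comments into plain text a prompt can include. The dictionary shape, the `build_schema_context` helper, and the output format are all hypothetical; the quickstart's actual prompt assembly is not shown in this post.

```python
# Hypothetical illustration: database comments (e.g. from COMMENT ON)
# rendered as plain-text context the LLM can condition on when generating SQL.

SCHEMA_METADATA = {
    "orders": {
        "comment": "One row per customer order; partitioned by month.",
        "columns": {
            "order_id": "Surrogate key.",
            "region": "ISO country code; subject to residency policy.",
        },
    },
}

def build_schema_context(metadata: dict) -> str:
    """Flatten attached metadata into prompt-ready text."""
    lines = []
    for table, info in metadata.items():
        lines.append(f"Table {table}: {info['comment']}")
        for col, desc in info["columns"].items():
            lines.append(f"  - {col}: {desc}")
    return "\n".join(lines)

print(build_schema_context(SCHEMA_METADATA))
```

The point is that well-maintained object comments do double duty: they document the schema for humans and give the model the semantic grounding it needs to generate correct, policy-aware SQL.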

Final takeaway

EDB's Data Governance Co-Pilot AI quickstart provides a layer of architectural glue that sits between your users and your data, actively enforcing safety. You can try it today.


About the authors

Giri Venkataraman is a Principal Solution Architect with the Global Ecosystem team at Red Hat. He works with partners to enable and market joint solutions that advance customers' cloud-native and AI journeys by modernizing application development and delivery and reducing time-to-market. Prior to joining Red Hat in 2021, Giri spent 20+ years working with organizations in the financial services, data integration, and insurance industries to develop, mature, and automate their DevSecOps processes, helping their lines of business deliver innovative customer experiences and lower operational costs.

For Shane Heroux, technology has always been about connections: connecting systems, people, and ideas. His open source journey kicked off in a college dorm room in the mid-90s, tinkering with Slackware just for fun. It wasn't long before he found his way to Red Hat, and he's been an active part of the Linux and open-source communities ever since.

He officially joined the team in 2018, first diving deep into the world of containers as an OpenShift Consultant. He then moved into the partner space as a Technical Account Manager, where he discovered a passion for building success with partners, not just for them.

Today, that focus is his pride and joy. Shane thrives on collaborating with the incredible Red Hat partner ecosystem to design and develop creative solutions that solve real-world problems. For him, it's all about using the power of open, collaborative technology to build a better, more efficient, and more connected world for everyone.

Bilge Ince is a Machine Learning Engineer at EDB, where she works at the intersection of artificial intelligence and PostgreSQL. Her role involves both extending PostgreSQL with AI capabilities and building AI-driven solutions that use PostgreSQL as a core platform.

She is active in the open-source and PostgreSQL communities and co-organises Diva: Dive Into AI, a conference dedicated to artificial intelligence, with a strong focus on accessibility and inclusion. Bilge also contributes to education and mentorship, having taught Machine Learning and Web Application Security at the Turkish Linux Users Association summer camps, and led courses at the University of Bremen's Informatica Feminale, an international summer school for women in technology.

Originally from Turkey and now based in London, she balances her professional work with Muay Thai training and long-distance running, and completed the 2024 Amsterdam Marathon.

Peter Samouelian is a Principal Software Engineer at Red Hat, where he leads the development of AI-based software within a global partner ecosystem. His recent work focuses on architecting agentic conversational analytics applications and AI-driven recommenders designed to operate at massive scale.

Peter’s career is defined by the high-stakes intersection of cybersecurity, machine learning robustness, and system resilience. Previously serving as a Principal Investigator and Research Engineer, he led defense-grade studies into making complex digital architectures survive in adversarial and rapidly changing environments. This background gives him a unique perspective on AI: he is driven by the conviction that the truth is rarely simple and things are seldom what they seem.

Despite his leadership in high-level research, Peter remains a dedicated hands-on engineer, driven by the belief that the act of building is the only reliable way to overcome the "illusion of explanatory depth" that often clouds one's true understanding of a topic—plus, it’s just damn more fun. A Senior Member of the IEEE and a member of ACM SIGKDD, Peter remains focused on the next frontier: architecting microservice-based AI applications that are as secure and resilient as they are scalable.
