Red Hat further deepened its commitment to secure development in 2025 by expanding and refining Secure Development Lifecycle (SDLC) activities. This sustained focus on the SDLC aims to mitigate risk for customers by building trustworthiness and reliability into every Red Hat product and service. Red Hat is actively using AI tools and techniques to expedite and enhance existing SDLC activities.
Security Architecture Review
Red Hat has significantly advanced its security assurance processes using an augmented workflow for the Security Architecture Review (SAR). While the fundamentals of a SAR remain unchanged, the updated process uses AI tools to help expedite and enhance the review. The new SAR process is divided into multiple steps, each incorporating human review to ensure confidence in the analysis. AI capabilities parse architectural diagrams, apply relevant security guidance, and generate a security review checklist tailored to the product's architecture.
The new SAR process aims to provide efficiency and expertise. AI tools allow rapid processing of large amounts of data while human engineers from both Product Security and Product Development teams provide manual validation.
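The checklist-generation step described above can be sketched as a mapping from detected architectural components to review items drawn from security guides, with unrecognized components escalated to human reviewers. This is an illustrative sketch only; the component names and guide entries are invented, not Red Hat's actual SAR tooling.

```python
# Illustrative sketch: map components detected in an architecture diagram
# to tailored review items from security guides. All names are invented.

GUIDE = {
    "database": ["Verify encryption at rest", "Review access controls"],
    "api_gateway": ["Check authentication on all routes", "Review rate limiting"],
    "message_queue": ["Verify transport encryption"],
}

def build_checklist(components):
    """Return tailored review items, flagging components with no guidance."""
    checklist, unknown = [], []
    for comp in components:
        items = GUIDE.get(comp)
        if items is None:
            unknown.append(comp)   # escalate to a human reviewer
        else:
            checklist.extend(items)
    return checklist, unknown

items, needs_review = build_checklist(["database", "api_gateway", "plugin_host"])
print(len(items), needs_review)  # 4 ['plugin_host']
```

The escalation list is where the human-in-the-loop step fits: anything the automated mapping cannot cover goes straight to an engineer.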
Penetration testing
As of the end of November 2025, the Product Security pen-test team has successfully completed penetration testing on 38 products, yielding a total of 150 findings across the Red Hat software and managed service portfolio. As with SAR, our penetration testing capabilities are now heavily augmented with AI, for both test plan creation and general workflow automation.
| Finding type | Count |
| --- | --- |
| Vulnerability | 25 |
| Weakness / hardening opportunities | 86 |
| Informational | 39 |
Table 6. Finding type results
These findings provide granular insight: 25 vulnerabilities, 86 weaknesses or hardening opportunities, and 39 informational findings.
SAST automation
Another area where AI provided significant assistance in 2025 is our SAST scanning capability. To help mitigate the ongoing challenge of alert fatigue with SAST results, we have deployed an AI-powered interpreter into the workflow. Rather than simply labeling a line of code as potentially vulnerable, the system uses a specialized AI model combined with a retrieval system that analyzes each finding in the context of both the larger code base and external reference materials. This system has already been validated against major codebases and has significantly reduced noise and toil.
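The enrichment step behind such a triage system can be sketched as follows: gather the code surrounding a flagged line, retrieve relevant reference material, and only then classify. This is a hypothetical sketch; the function names, field names, and the trivial placeholder verdict rule are illustrative, not Red Hat's implementation, where the verdict comes from a specialized model.

```python
# Hypothetical sketch: enrich a raw SAST finding with surrounding code
# context and retrieved reference material before classification.
# All names and the verdict heuristic are illustrative placeholders.

def gather_context(source_lines, line_no, window=3):
    """Return the lines surrounding a flagged line (1-indexed)."""
    start = max(0, line_no - 1 - window)
    end = min(len(source_lines), line_no + window)
    return source_lines[start:end]

def triage(finding, source_lines, references):
    """Attach context and references, then apply a placeholder verdict rule."""
    context = gather_context(source_lines, finding["line"])
    docs = [r for r in references if finding["rule_id"] in r["rules"]]
    # Placeholder heuristic standing in for the AI model's judgment:
    # findings inside test code are deprioritized.
    verdict = "low_priority" if finding["path"].startswith("tests/") else "review"
    return {**finding, "context": context, "references": docs, "verdict": verdict}

source = ["import os", "password = os.environ['PW']", "print(password)"]
refs = [{"rules": ["hardcoded-secret", "env-secret"], "title": "Secret handling guide"}]
result = triage(
    {"rule_id": "env-secret", "path": "app/main.py", "line": 2},
    source,
    refs,
)
print(result["verdict"])  # review
```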
RapiDAST
Our RapiDAST tool has continued its evolution through 2025, with improvements aimed at tracking emerging threats as well as improvements to accuracy and efficiency. One standout innovation is the introduction of support for large language model (LLM) security scanning via integration of the Garak scanner, allowing for targeted evaluation of LLM-specific vulnerabilities and weaknesses.
Additionally, we have improved the usability and reliability of scan results. We introduced advanced SARIF filtering for false positives, allowing results to be filtered using Common Expression Language (CEL) expressions, which can dramatically reduce noise.
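Conceptually, this kind of filtering walks the SARIF `results` array and drops any result matching a suppression expression. The sketch below illustrates the idea with a plain Python predicate standing in for RapiDAST's actual CEL engine; the SARIF structure shown is minimal and the rule IDs are invented.

```python
# Illustrative sketch of suppressing false positives in SARIF output.
# A plain Python predicate stands in for a CEL expression here so the
# example stays self-contained; RapiDAST evaluates real CEL.

sarif = {
    "runs": [{
        "results": [
            {"ruleId": "xss", "locations": [{"physicalLocation": {
                "artifactLocation": {"uri": "static/docs.html"}}}]},
            {"ruleId": "sqli", "locations": [{"physicalLocation": {
                "artifactLocation": {"uri": "api/query.py"}}}]},
        ]
    }]
}

def uri_of(result):
    loc = result["locations"][0]["physicalLocation"]["artifactLocation"]
    return loc["uri"]

# Stand-in for a CEL filter along the lines of:
#   !result.locations[0].physicalLocation.artifactLocation.uri.startsWith("static/")
def keep(result):
    return not uri_of(result).startswith("static/")

for run in sarif["runs"]:
    run["results"] = [r for r in run["results"] if keep(r)]

print([r["ruleId"] for r in sarif["runs"][0]["results"]])  # ['sqli']
```

The benefit of expressing the predicate in CEL rather than code is that suppression rules live in configuration and can be changed without touching the scanner.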
The integration of RapiDAST into Red Hat Engineering teams’ CI/CD pipelines has steadily progressed, receiving positive feedback and achieving widespread adoption. This momentum is expected to continue into next year, reinforcing RapiDAST’s ongoing relevance and impact in secure software development practices.
For more information on RapiDAST, find the project on GitHub. Community contributions and feedback are always encouraged.
Project Boann
As the size and scope of our operations continue to grow in both volume and complexity, integrating results from various tools and data sources into a single view becomes essential. In 2025 we launched our open source single-pane-of-glass (SPoG) project, dubbed “Project Boann.”
The project has two key components: a pipeline that collects data and normalizes it to the Open Cybersecurity Schema Framework (OCSF), and an LLM chat interface, which will evolve into an agent interface going forward.
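The normalization step can be pictured as a set of per-tool adapters that all emit the same shape. The sketch below uses a heavily abbreviated, OCSF-inspired record; the real OCSF Vulnerability Finding class carries far more fields, and the adapter and field names here are invented for illustration, not Project Boann's code.

```python
# Simplified sketch of normalizing findings from different tools into one
# OCSF-inspired shape. Field names are abbreviated for illustration; the
# real OCSF schema is much richer.

def from_sast(raw):
    return {
        "class_name": "Vulnerability Finding",
        "finding": {"title": raw["rule"], "uid": raw["id"]},
        "severity": raw["level"].capitalize(),
        "source_tool": "sast",
    }

def from_dast(raw):
    return {
        "class_name": "Vulnerability Finding",
        "finding": {"title": raw["alert"], "uid": raw["alert_ref"]},
        "severity": raw["risk"],
        "source_tool": "dast",
    }

pipeline = {"sast": from_sast, "dast": from_dast}

events = [
    ("sast", {"rule": "hardcoded-key", "id": "S-101", "level": "high"}),
    ("dast", {"alert": "Missing CSP header", "alert_ref": "D-7", "risk": "Medium"}),
]

normalized = [pipeline[tool](raw) for tool, raw in events]
print({e["source_tool"]: e["severity"] for e in normalized})
```

Once every tool's output lands in one schema, a single query layer (or a chat interface on top of it) can answer questions across the whole portfolio.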
The project is under active development, with more details and source code available in our GitHub repositories.
We welcome feedback and contributions from the community.
Post-quantum cryptography migration
The advent of quantum computing represents a fundamental, long-term threat to digital trust and software supply chain integrity. In 2025, Red Hat moved from planning to execution, leading the industry by shipping hybrid Post Quantum Cryptography (PQC) key exchange (ML-KEM) by default in Red Hat Enterprise Linux 10.1. This provides customers with immediate, out-of-the-box protection against Harvest Now, Decrypt Later data secrecy attacks.

Furthermore, we modernized our production signing infrastructure to dual-sign RPM packages with both classical RSA and PQC (ML-DSA) signatures, a critical step in providing verifiable authenticity and protecting our customers against the future threat of quantum forgery.

Initial analysis across the portfolio has confirmed that, as at Red Hat, most organizations’ greatest risks lie not in their own code but in the complex, interdependent technology supply chain. Critical dependencies, from upstream open source projects like Go (which underpins Kubernetes and Sigstore) to hardware-level components such as HSMs and firmware, are not yet aligned on PQC timelines. Red Hat is actively leading cross-industry collaboration to establish clear roadmaps and help unblock the vast open source ecosystem.
AI security update
In 2025, the focus of AI shifted from traditional LLMs to tool orchestration via the Model Context Protocol (MCP), which lets LLMs control external tools for increased organizational efficiency. However, this shift introduces new security risks, specifically the potential for an LLM to execute unsafe commands, underscoring the need for proper risk assessment and controls. This interest in MCP highlights the broader movement toward autonomous agentic AI systems, a field that is expanding with the emergence of new interoperable standards like Google's Agent2Agent (A2A) protocol, which defines how independent AI agents communicate and collaborate for modular, enterprise-scale automation.
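One common control for the unsafe-command risk is an allowlist gate between the model's requested tool call and its execution. The sketch below shows that pattern in isolation; the function and tool names are hypothetical and are not part of the MCP specification.

```python
# Illustrative sketch of one mitigation for unsafe tool execution: an
# allowlist gate between a model's requested tool call and execution.
# Names are hypothetical, not part of the MCP specification.

ALLOWED_TOOLS = {"read_file", "search_docs"}

def guard_tool_call(call: dict) -> dict:
    """Reject tool calls outside the allowlist before they reach a tool."""
    if call["name"] not in ALLOWED_TOOLS:
        return {"status": "denied", "reason": f"tool {call['name']!r} not allowed"}
    return {"status": "approved", "call": call}

print(guard_tool_call({"name": "read_file", "args": {"path": "README.md"}})["status"])  # approved
print(guard_tool_call({"name": "delete_repo", "args": {}})["status"])                   # denied
```

In practice such a gate is one layer among several (argument validation, sandboxing, human approval for destructive actions), but it makes the risk assessment concrete: every tool the model can reach must be explicitly justified.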
Red Hat Product Security continues its research on AI security, safety, and governance. Our whitepaper, “Blueprints of Trust: AI System Cards for End‑to‑End Transparency and Governance,” introduces the Hazard-Aware System Card (HASC), a novel framework designed to enhance transparency and accountability in the development and deployment of AI systems. The HASC framework expands on existing system card concepts by integrating a dynamic record of an AI system's security and safety posture. It proposes a standardized system of identifiers, including a new AI Safety Hazard (ASH) ID, to work alongside existing identifiers like CVEs, ensuring clear communication of fixed flaws.
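The identifier scheme can be pictured as a hazard record that carries an ASH ID, tracks its lifecycle, and links to related CVE IDs. The sketch below is hypothetical: the field names, ID format, and lifecycle states are invented for illustration and are not the whitepaper's exact schema.

```python
# Hypothetical sketch of a hazard record in the spirit of the HASC
# framework: an AI Safety Hazard (ASH) identifier tracked alongside any
# related CVE IDs. Field names and the ID format are illustrative only.

from dataclasses import dataclass, field

@dataclass
class HazardRecord:
    ash_id: str                      # e.g. "ASH-2025-0001" (invented format)
    summary: str
    status: str = "open"
    related_cves: list = field(default_factory=list)

    def mark_fixed(self):
        self.status = "fixed"

record = HazardRecord(
    ash_id="ASH-2025-0001",
    summary="Model can be prompted to reveal system instructions",
)
record.mark_fixed()
print(record.status)  # fixed
```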
We have published hazard-aware AI System Cards.
Building on this foundation, 2026 will see an increased focus on transparency as a core aspect of AI security, with further adoption of mechanisms such as model cards, AI Bills of Materials (AIBOMs), and model signing.