Mozilla recently published a fascinating piece titled "The zero-days are numbered," focusing on their collaboration with Anthropic to use AI models to find vulnerabilities in Firefox. The results Mozilla reports are staggering: 22 security-sensitive bugs found in one release cycle, followed by 271 vulnerabilities identified in a subsequent pass. These weren't trivial or theoretical issues; they were real defects, the kind that elite human researchers spend careers finding.
But a machine found them in a fraction of the time.
This is one of those moments where the ground shifts under your feet and you have a choice: panic, or recognize that the ground was always shifting. For those of us who have led defense in depth strategies for years, this isn't just good news; it's a catalyst for progress.
The asymmetry problem
For as long as most of us have been in the world of IT and systems security, the advantage has belonged to attackers. Defenders had to protect everything, while attackers only needed to find one crack. Mozilla describes this eloquently. They've been "fighting to a draw" for decades using overlapping defensive layers, knowing that any single layer could fail. The economics were brutal, as a zero-day exploit could be worth millions on the open market precisely because finding one required either extraordinary skill or extraordinary luck.
AI changes that calculus. When a model can reason about code the way an elite researcher does, but across an entire codebase simultaneously, the economics of vulnerability discovery collapse. Bugs that would have taken months of manual analysis to find can be surfaced in hours. Here is the critical insight from Mozilla's work: the AI didn't find some alien class of vulnerability that humans couldn't have found. It found the same kinds of bugs (memory safety issues, logic errors, edge cases in parsing) just at a scale and speed that no human team could match.
The gap between "machine-discoverable" and "human-discoverable" bugs is closing. This means the reservoir of undiscovered vulnerabilities in any given codebase is now a shrinking, possibly finite pool. Incidentally, this highlights the immense benefit of open source. As these tools are increasingly pointed at open source projects, those projects become more secure at a dramatically faster rate than we've seen before. Linus's law, the old adage that "given enough eyeballs, all bugs are shallow," becomes increasingly relevant here.
For defenders, this is the first genuinely hopeful development in a long time, if not a tectonic shift. The reservoir of unknown and undiscovered vulnerabilities may finally have a floor, and it’s being drained rapidly.
Defense in depth was never optional
But here's the thing that I think gets lost in the excitement about AI-driven bug hunting: finding vulnerabilities faster doesn't help you if your only defense is patching them. Patching is vital, but if your security strategy is solely predicated on the assumption that software will be vulnerability-free, you've already lost. That was true before AI and it's even more true now, because the same tools that help defenders find bugs will inevitably help attackers find them too.
This is where defense in depth (the principle of layering multiple, independent security controls so that no single failure is catastrophic) becomes not just a best practice but a survival strategy. Mozilla talks about sandboxing, Rust adoption for memory safety, and fuzzing. These are examples of defense in depth, the sorts of things we've been doing at Red Hat as well, and I'd argue our platform gives enterprises some of the most comprehensive layered defenses available.
Take Red Hat Enterprise Linux as an example. We compile with stack protection and Position Independent Executables (PIE), along with FORTIFY_SOURCE and ASLR (Address Space Layout Randomization), enabled by default as policy and disabled only by exception where they're technically impractical. These aren't exotic features; they are fundamental requirements for any modern enterprise platform. A buffer overflow vulnerability that might yield arbitrary code execution on a system built without these protections becomes, on RHEL, significantly more complex to exploit, often requiring multiple bugs or primitives to be chained together to work around the mitigations.
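The ASLR piece of that stack is easy to observe directly. As a minimal sketch (assuming a Linux system; the addresses only differ when randomization is enabled), the following launches two separate Python processes and resolves the load address of the same C library symbol in each:

```python
# Sketch: observe ASLR by resolving the address of the same libc symbol
# in two independent processes. With randomization on, the library is
# mapped at a different base address each run, so the values differ.
import subprocess
import sys

probe = (
    "import ctypes;"
    "print(ctypes.cast(ctypes.CDLL(None).printf, ctypes.c_void_p).value)"
)

addr_a = int(subprocess.check_output([sys.executable, "-c", probe]))
addr_b = int(subprocess.check_output([sys.executable, "-c", probe]))

print(hex(addr_a), hex(addr_b))
print("randomized" if addr_a != addr_b else "identical (ASLR likely off)")
```

An attacker writing a return-oriented-programming chain needs these addresses in advance; ASLR forces them to first leak one, which is exactly the kind of extra exploitation step the hardened defaults are meant to impose.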
I've written about this before in the context of why different vendors score the same CVE differently. How software is built materially changes the impact of the vulnerabilities found in it. AI tools might discover a thousand buffer overflows in a C codebase, but if the platform underneath is compiled with hardening flags that provide multiple hurdles to overflows being weaponized, the risk calculus can change dramatically.
Then there's SELinux. I've long described SELinux as the lock on the interior door. An attacker who exploits a vulnerability in a web server process doesn't get free rein of the system. They're confined to that process's security context and can only access resources explicitly permitted by the policy. In a world where AI can churn out exploit chains at machine speed, mandatory access controls like SELinux are the difference between "the attacker got a foothold" and "the attacker got into the kitchen, but every other room is locked."
Combine SELinux with Linux namespaces for isolation, seccomp for system call filtering, and capabilities for fine-grained privilege management, and you have layers upon layers of controls that each independently limit the blast radius of any single vulnerability. These capabilities, enabled out of the box, are more critical than ever. In this age of AI, they should be treated as essential technologies designed and proven to protect systems by reducing the impact of successful exploitation.
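The capability layer in particular is easy to inspect. As a minimal, Linux-only sketch, this reads the kernel's own accounting of the current process's capability sets from /proc; a tightly confined workload shows few or no bits set in CapEff, while a fully privileged root process shows a large bitmask:

```python
# Sketch: inspect the kernel's capability sets for this process (Linux).
# /proc/self/status exposes them as hex bitmasks; a well-confined
# process has few or no bits set in its effective set (CapEff).
cap_sets = {}
with open("/proc/self/status") as status:
    for line in status:
        if line.startswith("Cap"):
            name, _, value = line.partition(":")
            cap_sets[name] = int(value.strip(), 16)

for name in ("CapInh", "CapPrm", "CapEff", "CapBnd"):
    print(f"{name}: {cap_sets[name]:#018x}")
```

Each zero bit here is a privilege an exploited process simply does not have, independent of whatever the vulnerability itself allowed.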
Containers and the OpenShift story
Mozilla's emphasis on process sandboxing maps directly to the containerized world. On Red Hat OpenShift, every workload runs in its own container with its own namespace, network policies, and Security Context Constraints (SCCs) that define exactly what that workload is permitted to do. This represents a fundamental shift: security as an architectural requirement rather than an operational afterthought.
We've integrated Red Hat Advanced Cluster Security (ACS) for runtime vulnerability scanning, configuration management, and threat detection across Kubernetes environments, and the Compliance Operator to continuously validate security baselines. These tools address the human element (misconfigurations, credential abuse, phishing) that accounts for the majority of modern breaches. You can find every vulnerability in your code, but if your Kubernetes cluster is misconfigured with overly permissive role bindings, it hardly matters. Automated policy enforcement isn't glamorous, but it prevents the kinds of mistakes that actually lead to breaches.
Memory safety and the long game
Mozilla rightly highlights their adoption of Rust for memory-safe code as a foundational investment. They still have decades of C++ to contend with, and so do we. The Linux kernel is roughly 30 million lines of C, and rewrites of that scale don't happen overnight. But progress is being made. Rust is increasingly being used in Linux kernel modules, and Red Hat has been involved in supporting this evolution.
In the meantime, the compiler hardening flags I mentioned earlier serve as a pragmatic bridge. They don't eliminate memory safety bugs, but they significantly reduce their exploitability on our platform. As Mozilla aptly noted, "defects are finite." AI is helping us find them faster than ever, and systems like RHEL (with secure defaults enabled out of the box) are helping us contain their impact when they're inevitably exploited before a patch arrives.
Proactive security at Red Hat
On our side, our Product Security organization conducts threat modeling, penetration testing, and static analysis across the Red Hat portfolio. These approaches are complementary, combining long-standing traditional methods with newer AI-driven practices to form an even stronger layered security posture.
Let's use fuzzing as one example. Fuzzing finds bugs by throwing unexpected inputs at running software, while AI-driven analysis reasons about code paths that might never be reached through random input generation. Together, they shrink that finite pool of undiscovered defects from both ends. Using AI to enhance and extend each of these traditional practices gives defenders an advantage we've not had before.
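To make the contrast concrete, here's a toy sketch of the fuzzing side; the `parse_record` parser and its edge-case bug are invented for illustration:

```python
import random

def parse_record(data: bytes) -> str:
    """Toy parser with a deliberate edge-case bug (invented for this
    example): it trusts the leading length byte and assumes the payload
    is ASCII, so malformed input raises instead of being rejected."""
    length = data[0]
    payload = data[1:1 + length]
    return payload.decode("ascii")

def fuzz(target, iterations=10_000, seed=0):
    """Throw random byte strings at `target`, collecting crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
print(f"{len(crashes)} of 10000 random inputs crashed the parser")
```

Real fuzzers such as AFL++ or libFuzzer add coverage feedback and corpus mutation on top of this brute-force loop; AI-driven analysis complements them by reasoning about paths that random mutation rarely reaches.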
Mozilla's findings carry a profound implication. If defects truly are finite, and we can find them at machine scale, then the defenders' job shifts from an impossible game of whack-a-mole to a tractable engineering problem. Comprehensive vulnerability discovery might actually be achievable. This requires an underlying platform built for resilience, because there will always be a window between discovery and remediation. Defense in depth, combined with AI-based vulnerability discovery in the CI/CD pipeline, fills that window.
Expertise matters
Red Hat Engineering has deep expertise across many core open source codebases. That expertise is critical for assessing the legitimacy, relevance, and severity of potential findings from AI scanning tools, and, fed back into the tooling and related artifacts, it improves the quality of the AI-based analysis. Red Hat Product Security applies this expertise in our in-house tooling to prioritize incoming reports by potential severity and impact so we can operate at the scale AI scanning tools are enabling.
Red Hat also has long-standing, strong relationships with the upstreams of the codebases we ship, which is critical when coordinating responses and remediating discovered issues. We understand the responsible disclosure procedures for these upstreams, coordinate and share information to help protect users, and deploy fixes as soon as possible to impacted projects and products. In many ways, this allows Red Hat to act as a filter so upstream maintainers deal only with the real, verified vulnerabilities that are discovered. There is a benefit to our customers as well: explicit, focused fixes to reduce the amount of changing code that must be absorbed to fix the vulnerabilities that truly matter.
Transparency matters
I want to close with something that doesn't get enough attention in these conversations. Mozilla is an open source organization, and so is Red Hat. We can have this discussion, and Mozilla can publish findings of 271 bugs without a PR catastrophe, because transparency is a core value of open source. Many proprietary vendors might view 271 discovered vulnerabilities as a reputational risk to be quietly managed; open source doesn't have that luxury, nor does it want it. The community views those findings as a roadmap for improvement. Transparency allows the community, customers, and competitors alike to validate the work, learn from it, and contribute back. It's also what allows enterprises to understand exactly how their software is built, how it's defended, and what risks remain.
At Red Hat, this is reflected in our public CVE pages, our published security risk reports, and our commitment to providing authoritative CVSS scores based on how we actually build and ship our software. As AI accelerates the pace of vulnerability discovery, that transparency becomes even more critical. Customers need to trust that their vendor isn't just finding bugs, but building platforms where those bugs can't easily be turned into breaches.
The zero-days might be numbered. But the defenses we've been building for decades (SELinux, compiler hardening, container isolation, mandatory access controls, proactive testing) were never about assuming the software would be perfect. They exist so that when software flaws are discovered, the damage is contained. In an era of AI-driven security research, that philosophy is finally being vindicated.
About the author
Vincent Danen lives in Canada and is the Vice President of Product Security at Red Hat. He joined Red Hat in 2009 and has been working in the security field, specifically around Linux, operational security, and vulnerability management, for over 20 years.