This is the third part of Vincent Danen’s “Patch management needs a revolution” series.
- Patch management needs a revolution, part 1: Surveying cybersecurity’s lineage
- Patch management needs a revolution, part 2: The flood of vulnerabilities
Vulnerability ratings are the foundation of a good risk-based vulnerability management program, especially when they come from a trusted party. Recently I was discussing this topic with a customer who said they practiced Zero Trust, as if to explain why they could not trust our ratings. The irony, however, is that they did use the National Vulnerability Database (NVD) and third-party scanners that consume NVD data, meaning they implicitly trust NVD.
This isn’t what “Zero Trust” means; Zero Trust focuses on access and authorization in infrastructure architecture. Nevertheless, trying to apply a principle of “Zero Trust” to vulnerability ratings only makes sense if you’re prepared to do the analysis of each vulnerability on your own, from scratch. Most end-users do not have the time or expertise to do that, so some level of trust must be extended to someone. In fact, this is implied in the demands that vendors provide SBOM (Software Bill of Materials) data on their products. If we apply this kind of “Zero Trust” here as well, every end-user should create their own SBOM. That is an expensive and error-prone proposition, so vendors are expected to provide an SBOM, and rightly so, as the authority on what software they provide. And that’s just one example of mandated implicit trust extended to software vendors.
Red Hat champions the notion of risk-based vulnerability management. For every vulnerability affecting our software, Red Hat Product Security analysts assess how the reported flaw is actually exposed and potentially exploitable in our products before issuing a rating. The common industry inference of risk is based on scores, and the most widely used is CVSS (the Common Vulnerability Scoring System). This tends to result in a lot of comparison between differing scores for the same vulnerability, and leads to conversations around “their score” (usually provided by the NVD) versus scores provided by a vendor. It’s worth noting that Red Hat’s ratings, unlike NVD’s, do not rely solely on CVSS scores. Score accuracy is important, but a score by itself does not capture what a four-point severity scale, such as the one vendors like Red Hat employ, conveys.
This is discussed further in other papers, including the Open Approach to Vulnerability Management whitepaper. To summarize, however: our position is that the producer of software is uniquely positioned to understand the intended use of that software better than anyone else, and the National Telecommunications and Information Administration (NTIA) agrees by noting that suppliers are the authoritative providers of SBOMs (see their SBOM FAQ, page 7, “Q: Who creates and maintains an SBOM?”). If suppliers, or vendors, are required to provide such crucial data, it is proof that they are indeed the best source of this data. While this example is specific to SBOMs, that same trust in vendor data can be extended to things like CVSS scores; that authority doesn’t start and end with SBOMs alone. In the same way, a car manufacturer is uniquely positioned to understand the cars it builds better than a general-purpose mechanic. That’s not to say the mechanic doesn’t understand the car at all, but generally not better than the manufacturer.
In this same way, we believe that Red Hat knows Red Hat products better than NVD, not just in terms of how the software is used, but also how it’s built and configured. I’ll even go out on a limb and suggest that every other vendor would say the same about their own products. Further, organizations like NVD consider a vulnerability in all contexts so by definition must be overly broad; after all, open source software is available on multiple operating systems and can be built in a wide variety of ways. The vendor can be precise on impact and exploitation of a vulnerability to their specific product: how it’s used, configured, composed and compiled.
Digging further into the available 2023 data, out of the 29,065 CVEs published, 24,462 were assigned a 2023 CVE identifier (verified using the CVE v5 JSON repository). Of those, 13,681 entries do not have a Common Vulnerability Scoring System (CVSS) base score provided by the CVE Numbering Authority (CNA). The remaining 10,781 CVEs are categorized as:
- 880 Critical (8%)
- 4,184 High (39%)
- 4,954 Medium (46%)
- 763 Low (7%)
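For readers who want to reproduce this kind of tally, the qualitative bands above follow the standard CVSS v3.x severity ranges (Critical 9.0–10.0, High 7.0–8.9, Medium 4.0–6.9, Low 0.1–3.9). Here is a minimal Python sketch, using a hypothetical handful of base scores rather than the full CVE v5 JSON repository:

```python
# Sketch: bucket CVSS v3.x base scores into the qualitative severity
# bands used in the counts above (per the CVSS v3.1 specification).
from collections import Counter

def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score >= 0.1:
        return "Low"
    return "None"

# Hypothetical sample of base scores; real numbers would come from the
# CNA-provided metrics in each CVE v5 JSON record.
sample_scores = [9.8, 7.0, 8.1, 5.5, 3.1, 10.0, 4.0, 6.9]
distribution = Counter(cvss_severity(s) for s in sample_scores)
print(distribution)  # e.g. Counter({'Medium': 3, 'Critical': 2, 'High': 2, 'Low': 1})
```

Running the same bucketing over every record with a CNA-provided base score yields the distribution shown above.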
Comparing this to NVD: their JSON feed contains 24,460 2023 CVEs, 912 of which have no base score assigned. Based on the CVSS scores assigned by NVD, the remaining 23,548 break down as:
- 3,906 Critical (17%)
- 9,538 High (40%)
- 9,648 Medium (41%)
- 458 Low (2%)
That’s quite different from what CVE.org publishes, which is based on the authoritative scores the CNAs provide to them! Interestingly, NVD decreased 998 scores while increasing 2,555. This is nearly identical to what they did in 2022. We’ll look at one such CVE from 2022 as an example.
CVE-2022-28734 is a vulnerability in grub2’s HTTP handling code that, when parsing a particular type of request, could result in a denial of service. Given grub2 is only used at boot, there’s a fairly limited window of opportunity to take advantage of the vulnerability that, if exploited, could render the device unable to boot.
Red Hat rated this vulnerability as Moderate, with a CVSSv3 base score of 7.0. CVE.org has no CVSS score. NVD gave it a score of 9.8 but (plot twist!) also lists Canonical as the CNA that assigned the CVE, and Canonical provided a score of 8.1. Since grub2 is used by others, a quick search shows that Amazon also gave it a 7.0, as did SUSE. Upstream gave it a score of 7.0.
This one CVE has a variety of scores: 9.8, 8.1 and 7.0! Who do you trust when there’s a variety of potential answers? Every vendor noted here knows how grub2 is used in their products, so the clear direction would be to trust the vendor over the aggregator. The burning question is why NVD rated it higher than upstream and every other vendor. There is no explanation, so we do not know the answer. The same is true for the other 2,555 scores that were increased and the 998 that were decreased (as compared to CVE.org), and that’s not even accounting for differences with vendors.
Tying this back to the exploitation figures above, if we use NVD’s ratings then out of 3,906 Critical issues only 38 (of the 121 discovered in 2023 for 2023 CVEs) were exploited, giving an exploitation rate of 1%. Of the 9,538 High issues only 44 (of the 121) were exploited, or 0.46%, and of the Medium, only 9 of the 9,648, or 0.09% of all the reported Medium issues, were exploited last year.
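Those exploitation rates are simple ratios, and they are easy to check. A quick sketch of the arithmetic using the figures above:

```python
# Exploitation rate per NVD severity band: known-exploited 2023 CVEs
# (a subset of the 121) divided by NVD's total count for that band.
bands = {
    "Critical": (38, 3906),
    "High": (44, 9538),
    "Medium": (9, 9648),
}

for band, (exploited, total) in bands.items():
    rate = 100 * exploited / total
    print(f"{band}: {exploited}/{total} = {rate:.2f}%")
# Critical: 38/3906 = 0.97%
# High: 44/9538 = 0.46%
# Medium: 9/9648 = 0.09%
```

Even the Critical band rounds to roughly 1%, which is the point: the overwhelming majority of highly scored CVEs were never exploited.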
Finally, given that CVSS base scores tend to be used on their own as a risk metric, it’s worth noting that the criticality ratings above from NVD and CVE.org are based on CVSS base scores. These scores are not being used the way they were intended. Per FIRST (the Forum of Incident Response and Security Teams), the authors of CVSS, CVSS scores are meant to prioritize vulnerabilities for remediation by measuring severity, not risk. This is made more explicit in the CVSSv4 user guide. NVD makes the exact same statement on their vulnerability metrics page. It is time to put the final nail in the coffin of CVSS base scores representing risk.
Using vulnerability scoring effectively requires some level of trust, but this shouldn’t matter if you’re just patching everything, right? That sound you just heard was your IT operations teams’ collective head exploding. I’ll explain more in my next post where we look at the myth of patching “all the things.”
About the author
Vincent Danen lives in Canada and is the Vice President of Product Security at Red Hat. He joined Red Hat in 2009 and has worked in the security field, specifically around Linux, operational security and vulnerability management, for over 20 years.