
The National Vulnerability Database (NVD) is a US government repository of vulnerability management data that includes databases of security checklists, security-related software flaws, and impact metrics. It provides a public severity rating for every vulnerability named by CVE (Common Vulnerabilities and Exposures), a list of standardized names for vulnerabilities and other security exposures. The ratings can be “Low,” “Medium,” or “High,” and each is generated automatically from the CVSS (Common Vulnerability Scoring System) score that NVD analysts calculate for the issue.
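To make the automatic rating concrete, here is a minimal sketch of how a CVSS base score translates into an NVD severity label, using the score bands NVD published for CVSS version 2 (Low 0.0–3.9, Medium 4.0–6.9, High 7.0–10.0); the function name is our own:

```python
# Sketch: deriving NVD's severity label from a CVSS v2 base score,
# using NVD's published bands (Low 0.0-3.9, Medium 4.0-6.9, High 7.0-10.0).
def nvd_severity(cvss_base_score: float) -> str:
    """Map a CVSS v2 base score (0.0-10.0) to NVD's severity rating."""
    if not 0.0 <= cvss_base_score <= 10.0:
        raise ValueError("CVSS base score must be between 0.0 and 10.0")
    if cvss_base_score >= 7.0:
        return "High"
    if cvss_base_score >= 4.0:
        return "Medium"
    return "Low"

print(nvd_severity(9.3))  # High
print(nvd_severity(5.0))  # Medium
```

The point to note is that the label is a pure function of the score: it carries no information about how the flaw affects any particular vendor's build of the software.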

At Red Hat, we’ve been interested for some time in seeing how well those NVD ratings map to the severity ratings that Red Hat gives to issues. We use the same ratings and methodology that many others in our industry use: we assign “Critical” to flaws that can be exploited remotely, and we react to and fix those vulnerabilities with the highest priority. Our remaining three levels, “Important,” “Moderate,” and “Low,” also take into account the potential risk of a flaw, and issues are treated according to that risk.

To explore our vulnerability ratings in relation to NVD, we took the last 12 months of vulnerabilities affecting Red Hat Enterprise Linux 4 (126 advisories across all components, drawn from one of our metrics pages) and compared them to NVD using its published XML data files. The results broke down as follows:

  • Red Hat:

    13% Critical
    24% Important
    39% Moderate
    24% Low

  • NVD:

    30% High
    20% Medium
    50% Low

On the surface this seems like a reasonable comparison. But it implies that all the issues Red Hat rated as “Critical” mapped to NVD “High,” which actually isn’t the case. If you look at the breakdown of what NVD rated as “High,” “Medium,” and “Low,” you’ll see that each bucket contains a mix of Red Hat ratings, which skews the comparison.

The NVD rated 90 vulnerabilities as “High.” These 90 included the following vulnerability ratings from Red Hat:

  • 23 Critical
  • 24 Important
  • 35 Moderate
  • 8 Low

The NVD rated 61 vulnerabilities as “Medium.” These 61 included the following vulnerability ratings from Red Hat:

  • 9 Critical
  • 18 Important
  • 22 Moderate
  • 12 Low

The NVD rated 152 vulnerabilities as “Low.” These 152 included the following vulnerability ratings from Red Hat:

  • 7 Critical
  • 32 Important
  • 62 Moderate
  • 51 Low

In summary, NVD rated 90 issues that affected Red Hat as “High.” We rated only 47 of those “Critical” or “Important,” so almost half of the issues NVD rated as “High” actually affected Red Hat with only “Moderate” or “Low” severity. Likewise, some vulnerabilities that Red Hat rated “Critical” or “Important” fell under NVD’s “Low” rating.
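The arithmetic above can be checked with a few lines of Python. The counts in the cross-tabulation are taken directly from the breakdown in this article; the percentage formatting is our own:

```python
# Cross-tabulation: NVD severity (rows) vs. Red Hat severity (columns)
# for the 303 vulnerabilities discussed in this article.
crosstab = {
    "High":   {"Critical": 23, "Important": 24, "Moderate": 35, "Low": 8},
    "Medium": {"Critical": 9,  "Important": 18, "Moderate": 22, "Low": 12},
    "Low":    {"Critical": 7,  "Important": 32, "Moderate": 62, "Low": 51},
}

total = sum(sum(row.values()) for row in crosstab.values())  # 303

# NVD marginal distribution -- matches the ~30% High / 20% Medium / 50% Low
# split quoted earlier in the article.
for nvd_rating, row in crosstab.items():
    count = sum(row.values())
    print(f"NVD {nvd_rating}: {count} ({100 * count / total:.0f}%)")

# Of the 90 NVD "High" issues, how many did Red Hat rate Critical or Important?
high = crosstab["High"]
serious = high["Critical"] + high["Important"]  # 47
print(f'{serious} of {sum(high.values())} NVD "High" issues were '
      f'"Critical" or "Important" for Red Hat')
```

Running this confirms the figures in the text: 90, 61, and 152 issues per NVD bucket, 303 in total, and 47 of the 90 NVD “High” issues rated “Critical” or “Important” by Red Hat.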

This difference in ranking turns out to be less mysterious than one might imagine. Many of the differences are due to the way vulnerabilities affect open source software. Take, for example, the Apache HTTP server. Lots of companies ship Apache in their products, but they all ship different versions with different defaults on different operating systems for different architectures, compiled with different compilers using different compiler options. Many Apache vulnerabilities over the years have affected different platforms in significantly different ways. We’ve seen an Apache vulnerability that led to arbitrary code execution on older FreeBSD, caused a denial of service on Windows, but was unexploitable on Linux, for example. Yet that flaw had a single CVE identifier.

Recent competitive studies have attempted to rate Red Hat’s performance based on NVD “High” severity issues. By doing this, those studies actually include our performance on 43 “Moderate” and “Low” issues, issues that we typically defer to be fixed in future updates. In reality, our policy is to fix the genuinely “Critical” and “Important” issues fastest, and we have an impressive record for fixing “Critical” issues.

So it’s no wonder that recent competitive vulnerability studies that use the NVD mapping when analyzing Red Hat vulnerabilities have some significant data errors.

Basically, we feel that you can’t use the generic severity ratings maintained in third-party databases as an accurate assessment of how an issue affects a specific product, such as Red Hat Enterprise Linux. For multi-vendor software, the severity of a given vulnerability may well be different for each vendor’s version. This is a level of detail that vulnerability databases such as NVD don’t currently capture. So comparisons of response times that rely on the accuracy of third-party severity ratings will always be biased against open source software, which is another reason why such comparisons are pretty meaningless.

Red Hat provides open and transparent metrics on its track record of correcting security issues on its public security metrics pages.
