It seems like a good comparison on the surface. But the implication is that all the issues Red Hat rated as “Critical” were mapped to NVD as “High.” This actually isn’t the case. If you look at the breakdown of what the NVD rated as “High,” “Medium” and “Low,” you’ll see that varied Red Hat ratings are included in each collection, skewing the comparison.
The NVD rated 90 vulnerabilities as “High.” These 90 included the following vulnerability ratings from Red Hat:
- 23 Critical
- 24 Important
- 35 Moderate
- 8 Low
The NVD rated 61 vulnerabilities as “Moderate.” These 61 included the following vulnerability ratings from Red Hat:
- 9 Critical
- 18 Important
- 22 Moderate
- 12 Low
The NVD rated 152 vulnerabilities as “Low.” These 152 included the following vulnerability ratings from Red Hat:
- 7 Critical
- 32 Important
- 62 Moderate
- 51 Low
In summary, the NVD rated 90 issues that affected Red Hat as “High.” Out of those 90, we rated only 47 as “Critical” or “Important.” So almost half of the issues that the NVD rated as “High” actually only affected Red Hat with “Moderate” or “Low” severity. Likewise, some Red-Hat-rated “Critical” and “Important” vulnerabilities appeared under the NVD’s “Low” rating.
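The breakdown above can be double-checked with a quick cross-tabulation. This is a minimal sketch, not part of the original analysis; the dictionary layout and variable names are our own, and the counts are simply transcribed from the lists above:

```python
# NVD severity -> counts of Red Hat's own ratings for the same CVEs,
# transcribed from the breakdown above.
breakdown = {
    "High":     {"Critical": 23, "Important": 24, "Moderate": 35, "Low": 8},
    "Moderate": {"Critical": 9,  "Important": 18, "Moderate": 22, "Low": 12},
    "Low":      {"Critical": 7,  "Important": 32, "Moderate": 62, "Low": 51},
}

for nvd_rating, redhat_counts in breakdown.items():
    total = sum(redhat_counts.values())
    # "Serious" here means Red Hat rated the issue Critical or Important.
    serious = redhat_counts["Critical"] + redhat_counts["Important"]
    print(f"NVD {nvd_rating}: {total} total, "
          f"{serious} serious for Red Hat ({serious / total:.0%})")
```

Running this confirms the figures in the text: of the 90 NVD “High” issues, 47 (about 52%) were serious for Red Hat, while the other 43 were “Moderate” or “Low.”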
This difference in ranking turns out to be less mysterious than one might imagine. It seems that many of the differences are due to the way vulnerabilities affect open source software. Take, for example, the Apache HTTP server. Lots of companies ship Apache in their products, but they all ship different versions with different defaults on different operating systems, for different architectures, compiled with different compilers using different compiler options. Many Apache vulnerabilities over the years have affected different platforms in significantly different ways. For example, we’ve seen an Apache vulnerability that led to arbitrary code execution on older FreeBSD, caused a denial of service on Windows, but was unexploitable on Linux. Yet this flaw had a single CVE identifier.
Recent competitive studies have attempted to rate Red Hat’s performance based on NVD “High” severity issues. By doing this, these studies are actually including our performance for 43 “Moderate” and “Low” issues – issues that we typically defer to be fixed via future updates. In reality, our policy is to fix the things that are genuinely “Critical” and “Important” the fastest, and we have an impressive record for fixing “Critical” issues.
So it’s no wonder that recent competitive vulnerability studies that use the NVD mapping when analyzing Red Hat vulnerabilities have some significant data errors.
Basically, we feel that you can’t use the generic severity ratings maintained in third-party databases as an accurate assessment of how an issue affects specific products, such as Red Hat Enterprise Linux. For multi-vendor software, the severity rating for a given vulnerability may very well be different for each vendor’s version. This is a level of detail that vulnerability databases such as NVD don’t currently capture. So, comparisons of response times that rely on the accuracy of third-party severity ratings will always be biased against open source software, which is another reason why such comparisons are pretty meaningless.
Red Hat provides open and transparent metrics on its track record correcting security issues here.