Please note that daysofrisk.pl is deprecated in favor of new security data formats and solutions. You can read more about these in The future of Red Hat security data.


A few months ago, I wrote my first blog for Red Hat: Getting a list of fixes for a Red Hat product between two dates is easy with daysofrisk.pl

In that post, we explored the daysofrisk.pl script provided on the Red Hat Security Data page and showed how to use it to return a list of Common Vulnerabilities and Exposures (CVEs) and Red Hat Security Advisories (RHSAs) addressed in a particular Red Hat product between two specified dates.

Today I want to build on that post and show you ways to enhance the data with the Red Hat Security Data API.

A quick recap

Before you start, if you haven’t set up the script before, or if something isn't working as expected, see my first blog post linked above. Here's a quick recap to get you up to speed.

The daysofrisk.pl script requires three .txt files: the publication dates of Red Hat Security Advisories (RHSAs), the list of CVEs and the mapping between RHSAs and their published CVEs.

In my last article, we created a directory to hold the script and the required files, used wget to download those files from the Red Hat Security Data page, and set the permissions on the script.

If you don't have the output from the previous article, the commands below will download the script and the required text files and run the report for you, getting you ready for what we cover in this article.

mkdir security_review

cd security_review

wget https://www.redhat.com/security/data/metrics/release_dates.txt https://www.redhat.com/security/data/metrics/rhsamapcpe.txt https://www.redhat.com/security/data/metrics/cve_dates.txt https://www.redhat.com/security/data/metrics/daysofrisk.pl

chmod 700 daysofrisk.pl

./daysofrisk.pl --cpe enterprise_linux:8 --datestart 20220401 --dateend 20220430 --xmlsummary rhel-8-report.xml

This gives us a file called rhel-8-report.xml, and this is where today's post picks up.

What is jq and how can it help me?

The Red Hat Security Data API offers a rich set of information about each erratum. All the data is returned in JSON format, a lightweight format for storing and transporting data. 

To make that data more human-readable, we’ll use a tool called jq, a lightweight and flexible command-line JSON processor. It is available in Red Hat Enterprise Linux (RHEL) 8 and above, so it can be installed with a simple yum command:

yum install jq

You can find out more details about the package in the Red Hat Package Browser.

How do we query the Red Hat Data API and parse the results with jq?

In its simplest form, you can view the data from the API by calling the CVE JSON URL directly. An example of this for CVE-2022-1154 would be:

curl https://access.redhat.com/hydra/rest/securitydata/cve/CVE-2022-1154

This returns a single dense block of JSON from the API. While that block contains all the information we need, it's not easy to read, and if you are looking at a number of CVEs, as in our use case, it quickly becomes hard to follow.

This is where jq comes in. If we request the same CVE but pipe the output to jq, we get a much cleaner response. We run this command:

curl -s https://access.redhat.com/hydra/rest/securitydata/cve/CVE-2022-1154 | jq

This returns the same data, pretty-printed and much easier to scan.
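If you want to see the pretty-printing without hitting the network, you can feed jq a tiny hand-written object instead. The field names below mirror the ones used in the filters later in this post, but the values are made up:

```shell
# jq's identity filter '.' passes the input through unchanged,
# pretty-printing it in the process.
echo '{"name":"CVE-0000-0000","cvss3":{"cvss3_base_score":"7.5"}}' | jq '.'
```

The compact one-line input comes back indented, one key per line.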

Next, we can see the real power of jq when we filter these results down to the information we actually want, renaming the fields to something human-readable along the way.

In my example below, I am filtering the returned keys to only return Name, Bugzilla Description, CVE Score, Details and Statement.

Note that before each key there is a friendly name, followed by a colon, then a dot and the key name we want to filter on. This tells jq to return only the keys we are looking for and to rename them using our friendly names. So if we then run this example:

curl -s https://access.redhat.com/hydra/rest/securitydata/cve/CVE-2022-1154 | jq '{CVE: .name, Description: .bugzilla.description, Score: .cvss3.cvss3_base_score, Details: .details, Statement: .statement}'

We can see the result is exactly what we are looking for: only the fields we asked for, under our friendly names.
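The same renaming pattern can be tried offline with a small hand-written object. The field names below mirror the API's (the values are made up), so the filter is the same shape as the one above, just with fewer keys:

```shell
# Construct a new object: each entry is FriendlyName: .path.to.key,
# so jq both selects and renames the fields in one step.
echo '{"name":"CVE-0000-0000","bugzilla":{"description":"example bug"},"cvss3":{"cvss3_base_score":"7.5"}}' \
  | jq '{CVE: .name, Description: .bugzilla.description, Score: .cvss3.cvss3_base_score}'
```

Any keys not named in the filter are simply dropped from the output.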

Bringing it all together

Now that we have looked at daysofrisk.pl, the Red Hat Security Data API and jq, let's bring them together to parse our output and enhance it with more data.

Starting from the rhel-8-report.xml file we created with daysofrisk.pl, we can query the API using a simple loop:

grep "<cve>" rhel-8-report.xml | sed -e 's/<[^>]*>//g' | while read cve; do curl -s https://access.redhat.com/hydra/rest/securitydata/cve/"$cve" | jq '{CVE: .name, Description: .bugzilla.description, Score: .cvss3.cvss3_base_score, Details: .details, Statement: .statement}'; done

This loops over all the CVEs in the rhel-8-report.xml file and displays each one on your screen in the format we defined in the previous steps.

The command above uses grep to extract the CVE lines from the XML produced by daysofrisk.pl, then sed to remove the XML tags from around each CVE number. This gives us a raw list of CVE numbers to feed into our loop. The loop reads each line into a variable, calls the CVE API with that variable in the URL, and finally pipes the API output to jq to format everything for us.
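The extraction stage can be tested on its own without the full report. Here is a self-contained sketch: a tiny sample file in the same shape as the &lt;cve&gt; lines the grep matches (the rest of the real XML summary is omitted, and the two IDs are just sample values):

```shell
# Write two sample <cve> lines, shaped like the ones in the real report.
cat > sample-report.xml <<'EOF'
<cve>CVE-2022-1154</cve>
<cve>CVE-2022-1227</cve>
EOF

# The sed expression deletes every <...> tag, leaving only the bare IDs,
# one per line - exactly what the while-read loop consumes.
grep "<cve>" sample-report.xml | sed -e 's/<[^>]*>//g'
```

This prints the two bare CVE IDs, which is the raw list the loop then feeds to the API one at a time.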

You can easily write this output to a file with a simple redirect:

grep "<cve>" rhel-8-report.xml | sed -e 's/<[^>]*>//g' | while read cve; do curl -s https://access.redhat.com/hydra/rest/securitydata/cve/"$cve" | jq '{CVE: .name, Description: .bugzilla.description, Score: .cvss3.cvss3_base_score, Details: .details, Statement: .statement}'; done > enhanced_output.txt

This gives you a single file called enhanced_output.txt containing details of every CVE released between the two dates you provided to daysofrisk.pl, enhanced with the CVE score as well as the Details and Statement that would normally be found on our CVE pages.
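As a hypothetical variation, jq's built-in @csv filter can emit the same fields as CSV rows for a spreadsheet instead of JSON objects; only the jq filter changes (the -r flag outputs raw strings rather than JSON-encoded ones). The demo below uses the same kind of hand-written sample object as earlier, with made-up values:

```shell
# Build a JSON array of the fields we want, then format it as a CSV row.
# @csv quotes each string field for us.
echo '{"name":"CVE-0000-0000","cvss3":{"cvss3_base_score":"7.5"}}' \
  | jq -r '[.name, .cvss3.cvss3_base_score] | @csv'
```

Swapping this filter into the loop above would produce one CSV line per CVE, ready to redirect into a .csv file.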

Conclusion

Today we covered using the Red Hat Security Data API, along with jq, to enhance the data we collected with daysofrisk.pl, and condensed the whole process into a single command you can run whenever you need fresh data.

If you would like to take this a bit further, you could use a bash script or Red Hat Ansible Automation Platform to automate downloading the required .txt files so each run uses up-to-date data, but that is outside the scope of this article.

 


About the author

After 10 years working in production technical support, I joined Red Hat in 2021 as a Technical Account Manager. I have a passion for automation and reporting, and I live in Scotland, UK with my wife, Sara, and our dog, Radar.
