I've been participating in open source for a long time, and when I think of open source being ubiquitous, it’s exciting and terrifying! It’s exciting because as a society we can achieve so much more, and so much faster, than we could otherwise. It’s terrifying because if there’s a problem in software, and there always is, there are potentially many places where something might need to be corrected.
Then again, the same is true for proprietary software. In that respect, the two are not so different. Whether you’re patching an operating system for a known vulnerability, or you’re patching your phone with an available update from a proprietary vendor, the potential impact will be as large as the software’s install base. The greater the install base, the greater the potential impact.
The saying “In this world, nothing can be said to be certain except death and taxes” is often attributed to Benjamin Franklin. At this point we should probably add “software vulnerabilities” to that list of things that are certain.
So where does that leave us? Clearly not using software isn’t the answer. The key is understanding how to use software responsibly. And, further, the responsible use of open source software in particular.
Open source software supply chains
Let's start with the software supply chain, which is not so different from a physical one. Consumers get their goods and services from a number of suppliers, be they large discount superstores or smaller family-owned businesses, who in turn obtain goods from their own suppliers, all the way back to the manufacturer of the product in question, and further back still to the suppliers of the components that manufacturer assembles.
When it comes to the software supply chain, I prefer a different analogy for open source: thinking of the open source software supply chain in terms of water.
Consider the open source notions of “upstream” (the author or developer) and “downstream” (the consumer). An end user is downstream from one or more upstreams, whether an enterprise vendor or an open source project directly. The enterprise vendor, in turn, is upstream from the end user and downstream from the original project.
With this in mind, think of that upstream as a large body of water up in the mountains. You can obtain water from this lake directly: make your way up to it, bring containers to carry the water back downstream, have the facilities to boil it to make it safe for consumption, and so on. There is work to be done to make the water usable and safe.
However, this water also flows down and is brought into a water treatment facility. This facility brings the water in, tests it, treats it with various processes and procedures to ensure it is safe for consumption, safeguards access to the water as it is being treated, and then packages it in tamper-proof containers. These containers are then loaded up and delivered to the end customer.
At the end of the day, that water is available whether you obtained it yourself or had it delivered via the water treatment facility. Obtaining it directly requires significant effort to ensure it’s safe for consumption. Purchasing it requires minimal direct effort, but there is a cost for the vendor’s work in collecting, treating and delivering the water.
The same is true for open source software. You can absolutely obtain it yourself, directly from upstream. But there is something to be said for an enterprise vendor that takes care of ensuring that software is trustworthy and resilient – safe for consumption.
Supply chain risk management
All of this is about managing risk. There is inherent risk in using any software, but as a society we’ve determined it is acceptable risk. Software is everywhere and it’s not going away. So the question turns to how that risk is managed, and how much risk is simply too much.
Open source is no more or less risky than proprietary software. All software has bugs, and all software will always have bugs. Some of these bugs are severe, others less so. Open source has distinct advantages (speed of innovation, transparency, freedom from lock-in, to name a few), and unique risk mitigation strategies available. These include the ability to patch the software directly, and a transparency around unfixed vulnerabilities that tends not to exist with proprietary software. That transparency lets you apply more traditional mitigations at the operating system, application configuration or network perimeter level.
One such risk mitigation strategy is the use of an enterprise open source vendor. In brief, an enterprise open source vendor curates open source software on behalf of their customers. This means that software is selected if it meets certain criteria: usefulness of the software to the overall product, upstream community health and activity, security track record, and expertise in-house to support that software.
This software is largely considered one component in the resulting product, be it an operating system or an application platform or service. If the vendor has no confidence in the upstream project or an inability to support the component, they will choose a different one. Incidentally, this is one of the beautiful benefits of open source that differs from proprietary models: options!
Another benefit is the ability to contribute back. Red Hat contributes to a number of open source projects directly and often if changes to code are required, those changes are offered back to the community via our upstream first development model.
The selected component is then brought in and is subject to a number of transformations. The code is reviewed and reasonable configurations for the product’s use-case are chosen. A vendor might use other methods as well to "harden" code and reduce security concerns.
Once the component is composed, it goes through rigorous quality assurance testing to help ensure that it behaves as intended within the product. Finally, once quality testing is completed, the component goes through a release process that includes cryptographic signing, so that the component delivered is the component intended (which end users can verify if they choose), and it is made available for download over secure channels.
Once downloaded, those packages are verified prior to install (when appropriate installation tools are used), and if a package cannot be verified, it is not installed automatically.
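In practice this verification is GPG-signature-based and handled automatically by tools like rpm and dnf, but the underlying idea can be sketched with a simple checksum comparison in Python. The package payload here is hypothetical, and the "published" digest would really come from the vendor's signed metadata:

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str) -> bool:
    """Compare the digest of downloaded bytes against a published value."""
    actual = hashlib.sha256(data).hexdigest()
    return actual == expected_sha256

# Hypothetical package payload; in reality the expected digest comes from
# the vendor's signed repository metadata, not from the payload itself.
payload = b"example package contents"
published = hashlib.sha256(payload).hexdigest()

print(verify_download(payload, published))      # intact download: OK to install
print(verify_download(b"tampered!", published)) # mismatch: refuse to install
```

The real mechanism adds a crucial layer this sketch omits: the metadata carrying the digests is itself cryptographically signed by the vendor, so an attacker cannot simply swap both the package and its checksum.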
This describes the typical method that an enterprise software vendor would use to deliver upstream projects, composed into products, to downstream consumers in a fashion similar to the water treatment facility.
The DIY and "vanilla" upstream models: Shifting responsibility
So what is the alternative? There are two, typically. One is to download and compile source code yourself and install those compiled binaries. The other is to download compiled or composed software as packages and install them. Both seem fairly straightforward and appealing – after all, why pay someone else for something you can do yourself, right?
Well, take a step back and consider what the enterprise vendor does to deliver open source software that has been considered, curated, composed and tested. Those activities should not be considered optional.
Put another way, the same risks exist irrespective of which method you choose; what changes is who bears the responsibility. In the vendor scenario, the responsibility for supply chain risk management is borne by the enterprise vendor.
In the other two, that responsibility falls fully on the end user. Going back to our earlier analogy, either you carry and treat your own water, or someone else does that for you at scale. That work needs to be done by someone, so either you pay a vendor for a subscription, or you bear the cost yourself in time, effort and expertise. To do neither would elevate the risk of using that software to unacceptable levels.
But here’s something else to consider. No enterprise vendor will provide all of the open source that most consumers use. Speaking to Red Hat in particular, we provide and support thousands of open source components.
But there are hundreds of thousands of open source projects! It is not feasible for any vendor to support them all, and you’ll ideally end up with the bulk of the open source in your organization coming from an enterprise vendor, which will lower the overall cost of consumption and decrease the associated risk of using it.
Yet there will likely be some open source that is consumed directly, and this is where that time and effort will be focused. If that number is small enough, the cost and associated risk will be smaller as well. They are not eliminated, though, and not something that can be ignored – the Apache Log4j vulnerabilities demonstrated the need to be aware of what open source you have deployed, the potential risk of using it, and the cost associated with maintaining it.
Where Log4j was part of a Red Hat product, we issued a security bulletin and updates for the products. Organizations that are consuming Log4j as part of another upstream project may not be aware that they are even using the software.
Perhaps one of the largest benefits of a “mid-stream” enterprise vendor is its very tactical, risk-based software management approach. To this point, I’ve mostly attempted to describe this from a vendor-neutral perspective insofar as open source software is concerned; however, I’ll now shift gears slightly and focus on Red Hat and the enterprise open source software we provide.
Red Hat's active role in the software supply chain
Red Hatters work on a daily basis to ensure as best as possible that the software we curate is safe to use. But the curation and delivery of software is only part of the overall life cycle of any product. Red Hat invests significantly in the maintenance of that open source software through the life of every product. For the supported software we ship, we take on the responsibility of not just supporting it but addressing issues of significant concern.
Speaking to security issues in particular, Red Hat’s position is that if an upstream fix is not available or timely, we will develop and provide the fix, both to our customers and to the upstream community if still active. We want to get patches upstream whenever possible!
If the upstream is not active, or there is a fix in a version no longer supported upstream (and hence would not be fixed there), Red Hat will develop that fix for our product, effectively ensuring that every piece of software is actively supported irrespective of the status of upstream.
Fundamentally this means that if there is a serious security issue in a piece of upstream software in our product or service, Red Hat will fix it as per our life cycle policies. If that upstream version is end of life, Red Hat still patches and supports it.
Consider the same scenario for upstream software consumed directly. What are the available options if there is a publicly known vulnerability and no patch to consume? One option, as a benefit of open source, is you can patch it yourself – if you have the expertise. Another is to find another piece of software that provides the same functionality and re-tool your own applications to use it instead, which could be a costly endeavour. You could aim to find another mitigation that doesn’t require a patch, by changing a configuration option or disabling functionality. Certainly there are a number of options, but they all have a cost.
There is also the question of discoverability – how does one even find out about these new security issues that might be relevant? How does one assess the vulnerability sufficiently to determine risk exposure?
With self-service open source, the responsibility is fully on the end user: know what you have and where it is deployed, and follow all the relevant upstream announcement lists or forums for new version updates. Once an issue is known, determine its applicability or relevance to how the software is built or used. Even nuances like how a piece of software is configured or compiled are relevant considerations.
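As a sketch of what that tracking involves, here is a minimal Python example that checks a deployment inventory against versions named in advisories. The component names, versions and advisory data are illustrative; real tooling would pull this from SBOMs and vulnerability feeds, and would compare version ranges rather than exact strings:

```python
# Hypothetical deployment inventory: component -> installed version.
inventory = {
    "log4j-core": "2.14.1",
    "openssl": "3.0.13",
}

# Hypothetical advisory data: component -> versions known to be vulnerable.
advisories = {
    "log4j-core": {"2.14.0", "2.14.1", "2.15.0"},
}

def affected(inventory: dict, advisories: dict) -> list:
    """Return components whose installed version appears in an advisory."""
    return [name for name, version in inventory.items()
            if version in advisories.get(name, set())]

print(affected(inventory, advisories))  # ['log4j-core']
```

Even this toy example shows why the inventory step matters: with no record that log4j-core is deployed, there is nothing to match an advisory against.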
It’s a little different when you get the software from an enterprise vendor. From a Red Hat perspective, this is all handled on behalf of our customers:
Identifying new vulnerabilities in all supported products, and completing assessments to determine applicability, risk and severity.
Determining which products contain the affected software.
Creating patches, then building and testing updates (our Engineering and Quality Engineering teams).
Publishing information on each vulnerability in our CVE database, including a severity rating and scoring based on how the component is used or exposed in the product, with consideration for how it is configured and built.
Providing metadata that can be consumed by automation and vulnerability management tools.
Applying cryptographic signatures and using trusted distribution channels, so the software the end user obtains is what we intended to provide.
Announcing updates through multiple channels, such as Security Bulletins and errata notifications on the Customer Portal and advisory announcements through the RHSA-announce mailing list.
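As one concrete example of that machine-readable metadata, Red Hat publishes a public Security Data API. The sketch below builds a query URL for its CVE endpoint and filters an illustrative response by severity; the endpoint is real, but the query parameters and response fields shown here are assumptions to check against the API documentation, and the CVE ID is made up:

```python
from urllib.parse import urlencode

# Base endpoint of Red Hat's public Security Data API (no authentication needed).
BASE = "https://access.redhat.com/hydra/rest/securitydata/cve.json"

def cve_query_url(package: str, severity: str, after: str) -> str:
    """Build a query URL for CVEs affecting a package since a given date."""
    return BASE + "?" + urlencode({"package": package,
                                   "severity": severity,
                                   "after": after})

url = cve_query_url("openssl", "important", "2024-01-01")
print(url)

# An illustrative, abridged response entry; real entries carry many more fields.
sample = [{"CVE": "CVE-2024-0000", "severity": "important"}]
urgent = [c["CVE"] for c in sample
          if c["severity"] in ("important", "critical")]
print(urgent)
```

Feeding data like this into vulnerability management tooling is what lets the assessment work described above be automated on the consumer's side rather than repeated by hand.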
This is where, perhaps, the value of the Red Hat subscription has traditionally been understated. There is significant value in the subscription that goes beyond support (although that is fantastic value in and of itself!).
Consuming enterprise open source through a vendor like Red Hat meets the goal of minimizing the risk of using software in general, while affording the many benefits that only open source can provide.
Whether it is our amazing Engineering and Quality Engineering teams who patch and test, or the Product Security team that discovers and assesses vulnerabilities and their impact on the products we provide, Red Hat invests significant resources in these efforts so that our downstream customers don’t have to. Want to know more? Learn more about how Red Hat helps organizations build, code and monitor for a trusted software supply chain.
About the author
Vincent Danen lives in Canada and is the Vice President of Product Security at Red Hat. He joined Red Hat in 2009 and has been working in the security field, specifically around Linux, operational security and vulnerability management, for over 20 years.