Keeping Track Of Vulnerabilities With CVEs

  |  Compiler Team  |  Security


About the episode

Every lock has its weakness. That applies to the world of IT security—and any one piece of software can have multiple vulnerabilities. Code changes. Stacks evolve. The potential for incursions keeps growing. How can anyone keep track of it all? Enter the Common Vulnerabilities and Exposures Program.

Jeremy West, Senior Manager of Product Security Engineering at Red Hat, walks us through the CVE tracking and remediation process—and explains why having a common standard is vital for everyone's security.

Compiler team Red Hat original show


Subscribe here:

Listen on Apple Podcasts Listen on Spotify Subscribe via RSS Feed

Transcript

So tiger is outside one person's cabin in the woods versus tiger is out in the middle of the city. Yeah, or assuming that the tiger is always going to be in that position no matter which neighborhood you're in. Like tigers are about in New York City. Gotcha. Not all buildings are the same. Yeah. Not all neighborhoods are the same. Right. And the proximity of that tiger to you and your imminent threat of being eaten by said tiger really depends on where you are. This is Compiler, an original podcast from Red Hat. I'm your host, Emily Bock, a senior product manager at Red Hat. And I'm Vincent Danen, Red Hat's vice president of product security. On this show, we go beyond the buzzwords and jargon and simplify tech topics. And in this episode, we cover the CVE Program, the program that helps everyone keep track of the dangers lurking in your stack. All season, we've been talking about the elements of product security and the potential threats security teams face on the daily. But how do these teams know what threats are out there or when new ones appear? Which ones apply to their stack and which ones they can ignore? Enter the Common Vulnerabilities and Exposures Program, or CVEs for short. We spoke with Jeremy West, who leads the Product Security Incident Response Team here at Red Hat. He knows the CVE program inside and out. The intent behind the CVE program was to provide partners, vendors, hardware vendors and others with some sort of a standard for which we could report and share vulnerability information. Prior to that, it was okay. Well, I've done this research, and I found this to be true. And I say it's terribly important. And somebody else may say, by what standard? Right. Like, why do you say it's terribly important, or somebody else may do some research and provide information in a completely different format. So part of what the CVE program brings to the security industry is a set of standards for us to be able to talk.
Everybody uses the same numbering platform. So all of this information goes into a database, and I think the key thing that people should probably understand about the program is you've got to think of the CVE program more as like a governance body, almost. So it helps people, organizations, customers, communities be able to all talk the same language and be able to, you know, do vulnerability management and remediation together. But it is not necessarily the database per se. And there is a database out there for it, but it is not... that's not the primary purpose of it. Okay. Let's break that down. Why do we even need to name and number vulnerabilities? I mean, there's a simple answer to that. There's a lot, and it's easy to get them confused. Like, if I remember back, I mean, we're talking like the early 2000s back in the day, when you had a vulnerability, say, in Sendmail or some other piece of software. And it was a buffer overflow. And you happened to have two. Mhm. Which two are you talking about? There's no way to differentiate between them. Right. So they came up with the idea of numbering these things, and there were a bunch of other systems to number them. And CVE just kind of came together and said, hey, this is the way that we're going to do it. So it's consistent and everybody could know that I'm referring to this buffer overflow in this piece of software as opposed to another one. Oh, yeah. I think that's not too far off from how, you know, we manage development work either. Like, it helps to have a ticketing system and a specific ticket with a name and a number so everybody knows what you're referencing. It's a common language essentially. Which problem are you trying to solve? Exactly. Make sure we're all talking about the same thing. So he also talked a little bit about that it's a governance body versus a database. When you break that down, what does that mean? Yeah. So I mean, a database is just the information, right?
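The "same numbering platform" the hosts describe is the familiar CVE identifier format: CVE, a year, and a sequence number of at least four digits. As a small illustration (the helper function here is my own sketch, not any official tooling), the format can be parsed like this:

```python
import re

# CVE identifiers follow the pattern CVE-<year>-<sequence>, where the
# sequence number is at least four digits (e.g. CVE-2014-0160, Heartbleed).
CVE_ID = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve_id(identifier: str) -> tuple[int, int]:
    """Split a CVE identifier into its year and sequence number."""
    match = CVE_ID.match(identifier)
    if not match:
        raise ValueError(f"not a valid CVE identifier: {identifier!r}")
    return int(match.group(1)), int(match.group(2))

print(parse_cve_id("CVE-2014-0160"))  # (2014, 160)
```

Because every vendor, scanner, and advisory uses this one scheme, the identifier alone is enough to know everyone is talking about the same buffer overflow.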
The program kind of, I say, forces good behavior when it comes to reporting these things. It requests certain types of information from each reporter or each entity that's reporting that vulnerability, so that there's a consistent corpus of information that researchers and others can refer to. And so we're not missing things. Like, for example, if we didn't have those standards, somebody could submit something and say, oh, there's a buffer overflow in Sendmail, and that's it. Completely context free. It's like, well, what am I supposed to do? Where in Sendmail is this thing? Okay. Can you elaborate? Correct. Okay. So, I mean, it used to be like you would put whatever you want. It's like a small sentence and the reference to a mailing list. Somebody reported something, right? Something like, okay, that's helpful, but it's not actually that helpful. So as the program matured, they were asking for additional information. And they just kind of made that part of the standard. Now, the database part is interesting because there are vulnerability databases out there. But what CVE itself provides, I mean, they used to call it a dictionary. I still think of it as a dictionary. Right. Like here's your CVE identifier. And this is what it is. Right. It doesn't talk about anything else other than, I guess, the concrete facts of the vulnerability's existence. Gotcha. That makes sense. So it's not just a dumping ground that anyone can put stuff in. They're also establishing rules and standards and types of information that need to be submitted with it. Well, and you have to, because otherwise people, and this has happened in the past, they just submit a bunch of CVEs and they're like, yeah, I found stuff. And then over time you're looking at it like, that's not even a security issue. Like, why is this even here? Right. So there was a need to standardize some of these things so that it's actually useful information. Absolutely.
No, I'm no stranger to a useless ticket. I'm with you there. Okay. So, governance body, establishing rules, standards, etc. Why is it important for everyone to use that same standard? I mean, I'll go back to what I said earlier about the days of yore, where, I mean, predating CVE, we would have different systems. So you would have like X-Force and SecurityFocus and other independent bodies naming vulnerabilities, or there were no names whatsoever, and people were getting confused. And I remember having to use this, or I'm like, what's this X-Force ID and this Bugtraq ID, and they're the same thing. And these ones are different. It's very confusing. So when CVE popped up and said, like, hey, we have the answer, people like me were like, this is amazing. Like, I could just refer to it once. Yeah. And that's why I think it's important for everyone, especially today, right now, as we're looking at different competing standards or what the future looks like. There are discussions about diversifying and multiple identifiers. Like, please, let's not do that. We solved the problem. We made it easy. Don't overcomplicate it. I think that makes a lot of sense. With a standard name and number you can reference, with the same standards and practices and information that go with it. And then everyone using that same standard makes it a lot easier to share that information around and make sure we're all talking about the same thing. And that kind of seems like the name of the game there. And it's super important, particularly when you're looking at it in the context of open source. Like, one piece of open source could be used in multiple operating systems, platforms, etc. And are they going to get a unique identifier per platform? Like, that would be insane. I'm a big proponent of, like, the instant you have to have a matrix to know what you're talking about, you've failed. It's overcomplicated. Yeah. Exactly. Exactly. All right. So these CVEs are kind of a big deal, right.
So let's find out how they get created. So, Vincent, do you know how many CVEs there were last year? Unfortunately, yes. I bet you do. So are we talking hundreds? Thousands? Millions? Where are we at? No, thankfully, not at millions. But last year there were over 40,000. That is amazing. That is... okay. I'm doing mental math here. That's over 100 a day, right? Yes. Man, that is a lot of vulnerabilities to go through. And I can't imagine, like, if here at Red Hat we had one team trying to do all of that on their own. Like, I don't think any company could do all of that on their own. But you have to define "do". Do. Because there is one team that does it all on their own, doesn't fix everything, but manages it. Valid. No, I'm more about, like, finding that many, like, all alone. And I think the CVE Numbering Authorities, or CNAs, decide what problems get elevated to that CVE status. They follow a regularly updated set of rules to make those decisions. And there's about 460 CNAs at the moment, which distributes the burden across the tech industry. Jeremy walked us through how Red Hat processes a CVE once a vulnerability comes to their attention. So let's say you have a researcher that has found a weakness. They found something that can be exploited. They will generally reach out. If it's within Red Hat software, they'll reach out to Red Hat with our security mailing list and say, hey, I found this particular vulnerability. We will then review it. We'll assign an ID to it. We assign the weakness associated with it. We map a weakness. We determine the impact on things like confidentiality, integrity, availability of software, you know, that which uses that particular component. And then we... my team specifically will work with the maintainers of that code to fix it. We will do all of this work to alert engineering teams that something needs to be fixed. But in the back end, one of the things we're doing is publishing data very quickly.
So reading between the lines here, a CVE is issued before a fix is ready. Is that right? Sometimes. It's a little complicated. Actually, it's not that complicated. It's pretty simple. If the vulnerability is found and it's public, that vulnerability will be found, and then a CVE will be assigned to it. If that vulnerability is reported in, like Jeremy was talking about in our case, to Red Hat, it's under what we call an embargo. It's not public information. We will assign that CVE before we disclose the vulnerability to the public. Gotcha. That makes sense. So it depends a little bit on where it's at, who's responsible for it. I imagine a little bit about how bad it might be as well. And when it's found. Yes. Right? Like sometimes those vulnerabilities are fixed and then a CVE is issued because somebody noticed it was there. Mhm. I mean, it's better today than maybe it was, say, ten years ago. But some people who are maybe not as interested in CVE or whatnot, they would publish fixes, release a new release, and then somebody would go through the change log. And I remember doing this, and I won't say which project this was, but I would look through it like, oh yeah, there's a buffer overflow, that should probably have a CVE. Oh yeah, there's a format string, that should probably have a CVE. So I mean, it really depends on the context of the vulnerability. Gotcha. That makes sense. So we also heard another phrase in there, which was, I think, confidentiality, integrity, availability, you know, CIA maybe for short. In the product security world, what does that really mean? I mean, that means when you're looking at a vulnerability, let me take a step back first. Right? Yeah. Not all vulnerabilities are equal. Mhm. Not all quote-unquote vulnerabilities are actual vulnerabilities, and some things referenced as bugs are actual vulnerabilities as well. Right, so...
when you're looking at these things, all told, when you're looking at a vulnerability, it's like, what is the impact to the confidentiality, the integrity and the availability of a system? There could be something where, you know, you're targeting a particular flaw. And it can, I don't know, do a database dump. That would be an impact to confidentiality. If you were able to also change the contents of the database, that would be an impact to integrity. Because now I have a trusted system where somebody else who's unauthorized is making changes to it. Now, if all I can do is crash the database so that I can't access it anymore, that's an impact to availability. Gotcha. So I imagine any one of those things is pretty bad. More than one of those things? Probably even worse. Yeah, I mean, it depends on the scope of the impact. I mean, it could be like a total breach of a system or just a small part of it or whatever. So there's like varying levels, which I think we'll get into. Yeah. We don't want to get too far ahead of ourselves, but I think it is always good to establish, like, 40,000 CVEs, not all of them are going to be the same level of severity, I imagine. Yeah, they're not all insanely critical. Phew. But they're also not all nothing either. Right, exactly. I think the last piece there that I picked up on also was Jeremy ended on a note about publishing data. What here is being published, and where? Right. So from a Red Hat perspective, we publish information about the vulnerability. We have our own... you can call it a database, but we call it our CVE pages. And what it does is it describes a vulnerability, it describes any mitigations, describes which products are affected by it, and maybe external references or links to an upstream advisory or a patch file, that sort of thing.
So that information is being published on our CVE pages, and then it's also being published in terms of what we call VEX, or the Vulnerability Exploitability eXchange, which is a machine-readable document providing similar information about a vulnerability that can be used by other systems. So there's a lot of information around a vulnerability, because part of the intent here is to expose the existence of that thing, its severity, etc., to our customers, to, you know, the ecosystem at large. And so we have to be able to publish that information in a way to be transparent. That's part of the open source ethos that we have. Gotcha. Yeah. It makes sense. There was this vulnerability. Here's what we did about it. Here's what you need to do about it, if anything. Kind of a... yeah, stance, that makes sense. So now we know what the problem is when we're talking about the world of CVEs, and that likely a fix is on the way. But what might that look like from a customer's perspective? So let's talk about prioritization, because the world does not have infinite bandwidth to fix every bug or implement every feature in our software. Right. You have to prioritize. Same thing is true on the customer end. You know, you have to have some sort of methodology for how you're going to address issues. At Red Hat, we use a four-point scale. This is not any different than a lot of other vendors, right? A lot of... most vendors actually have a four-point scale that they use for assessing the severity of a vulnerability. I think where Red Hat differs in our use of this scale is that we don't map it directly to the base CVSS score. But if you go back to the standard itself and look, CVSS was never intended to be used as the base score alone; the base score, temporal scores and environmental scores were meant to be used together. Right. So, you know, for a customer, you utilize software differently than maybe how the vendor had shipped it.
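The appeal of VEX being machine readable is that other systems can act on it without a human reading an advisory. As a rough sketch only — the real CSAF/VEX schema is far richer, and these field names are assumptions for illustration — a scanner might consume a statement like this:

```python
import json

# A minimal, illustrative record in the spirit of a VEX statement; the
# actual specification defines a much fuller document structure.
vex_statement = json.loads("""
{
  "cve": "CVE-2021-44228",
  "product": "example-app-1.0",
  "status": "not_affected",
  "justification": "vulnerable_code_not_present"
}
""")

# A machine-readable status lets tooling suppress findings automatically
# instead of a human triaging every scanner hit by hand.
if vex_statement["status"] == "not_affected":
    print(f"{vex_statement['cve']}: no action needed for {vex_statement['product']}")
```

That's the transparency point: the same facts published on a human-readable CVE page can also flow straight into customers' vulnerability management pipelines.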
There's also temporal characteristics, like how exploitable is it? Is this just completely implausible? Right. So, you know, if you take those factors into consideration, those factors really help you better assess the risk in terms of how you prioritize all of those collective risks. Right. So we fully embrace that with our four-point scale. So something that is listed as a low or a moderate for Red Hat may still occasionally have a CVSS base score of, like, 8.5. All right. So new acronym here. What is a CVSS score? I mean, CVSS is great in some respects, and it's annoying in some others, you know. So what it is, is it's basically a numeric score, 0 to 10, that basically talks about the immutable characteristics of a vulnerability. Mhm. And this is really important. Like Jeremy had noted, those three metrics groups: the base, the temporal and the environmental. The base is the immutable part of the vulnerability. Environmental is effectively where you deploy it. Is it production, is it dev, QE, whatever. Gotcha. And then the temporal one is like, what's the actual threat landscape look like today? Like, is this proof of concept? Is there no exploit out in the wild? Is it like, yeah, there's a problem, but nobody's exploiting it, we haven't heard or seen anything? The temporal metrics would reflect that. The important thing to note is, as Jeremy had noted, the CVSS score is... I look at it as like a three-legged stool. Yeah. You really have to have all three metrics groups before you can even think about talking about risk. Gotcha. And this is something I've had to point out a number of times to people, because they're like, oh my God, this base score is X, it's an 8.5 like he said or whatever, and it's just based on the base characteristics. And it's like, well, where are you deploying it? What do your environmental metrics look like? There's no known exploits, so the temporal, you know, should reflect that as well.
You need to have all three to have that true view of what that vulnerability is from a programmatic perspective. Gotcha. I'm kind of seeing it like, okay, there is a tiger approaching. The tiger is behind a fence. Not as bad as the tiger is right here in our face. Yeah. I mean, in a lot of ways it's similar to that. I mean, I love the analogy actually, like... but I mean, even, like, CVSS even becomes more complicated because you're looking at the way that software is built. Like, if you have a piece of software, and I'll pick on Adobe or Microsoft or some proprietary company, right. Like, it's literally just them. They're the only ones who are providing it. Right. Right. There's really no other considerations, like they built it this way for Xbox or this way for Windows or whatever. Right? When you're looking at open source software, you have different compilers, you have different operating platforms, you have different built-in mitigations on those systems. All those things should be reflected in that base score. They're all going to be different. Gotcha. So tiger is outside one person's cabin in the woods versus tiger is out in the middle of the city. Yeah. Or, you know, assuming that the tiger is always going to be in that position no matter which neighborhood you're in, like tigers are about in New York City. Gotcha. Not all buildings are the same. Yeah. Not all neighborhoods are the same. Right. And the proximity of that tiger to you and your imminent threat of being eaten by said tiger really depends on where you are. And what the environment is. Right? Exactly. And so he also mentioned in there a four-point score. So how is that different from CVSS? Right. So again, CVSS is just that numeric 0 to 10. Our four-point scale, critical, important, moderate and low, equates somewhat with the way CVSS number ranges have severity ratings. So, like, 9 to 10 would be critical. I think it's seven; 7 to 9 is high, not important. I think it's four.
4 to 7 is medium, which would be similar to our moderate. And then below that is low, right. The difference is it's basically based on whatever the characteristics of that vulnerability turn into a number. That's where we're going to infer our risk from. Gotcha. But if you look at the CVSS guide, it kind of says not to do that. It says that CVSS is not a representation of risk. It's a representation of priority. So when you're looking at that critical, important... critical, high, medium and low, that's the urgency with which you should remediate that thing. Yeah. Not the severity of that thing. Now you contrast that to our four-point rating scale, which, incidentally, didn't originate with us. This came from the old dogs at Microsoft, over 20 years ago. Right. And they still have their four-point rating. And it's basically about the severity of impact were that thing to be successfully exploited. Gotcha. And the conditions to successfully exploit. Right. So when we're talking about critical issues, that means they don't require any authentication. I can do, like, a drive-by attack in a web browser, which is really where most of our criticals come from, because, I mean, think about the whole point of a browser. It is meant to visit untrusted sites. Yeah. So any untrusted site that you might visit, if it can execute arbitrary code with your user privileges on your system and escape the browser sandbox, by definition, that's critical, because they did that attack and you didn't even know it was happening. Exactly. That's critical. Gotcha. So that four-point score is kind of taking it out of the here-are-the-bare-metal-facts about the CVE and more putting it, like, contextually: for us, this is how bad it would be if something happened. Right. And it's more or less looking at, like, if we go back to that number of CVEs, 40,000. Mhm. I mean, not everybody is going to have to fix 40,000 CVEs, right? Red Hat doesn't even have to fix 40,000.
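The score-to-band mapping the hosts walk through (9 to 10 critical, 7 to 9 high, 4 to 7 medium, below that low) is easy to express directly. A minimal sketch of that mapping, using the thresholds as described in the conversation:

```python
def cvss_band(score: float) -> str:
    """Map a CVSS base score (0-10) to the qualitative bands discussed above."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

print(cvss_band(8.5))  # high -- yet Red Hat might still rate the flaw moderate
```

This also illustrates the episode's caveat: the band is derived purely from the base score, while a vendor's own rating folds in context the number alone doesn't capture.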
Ours is, like, thankfully, you know, in the thousands, not the tens of thousands. Much better. But when you have... so much better, so much better. Better, not good. Not good, but definitely better. Let's say I'm an IT guy and I'm scanning my system and I see, okay, there's all these affected CVEs, right. Mhm. I mean, I would prioritize based on Red Hat severity. Like, if we say it's critical, man, you should be applying that patch yesterday. Yeah. But if you're looking at, like, okay, I have, you know, 300 moderate findings. Right. I would then use the CVSS score to go, okay, which are the highest? And I would work my way from, like, high to low, because I can't remediate 300 all at once. Right? Yeah. So which of those 300 am I going to fix first? I'm going to put the ones at the top of the list and work my way down. That's what CVSS does. It gives you that kind of order of priority in which to remediate things, versus a... this is the amount of risk that you're exposed to in your environment. Gotcha. Does that make sense? Yeah, that makes absolute sense. Like, now we've got lots of tigers coming for us. It's telling us essentially the order in which we should deal with the tigers. Yeah. I mean, I don't want to condone violence to animals in any way. Like, maybe it's like, which ones do you run away from first? Yes, exactly. The analogy falls a little flat there, but you know. Which ones do you throw meat at first to distract? No, no harm done to the tigers in this analogy. There were no animals hurt in the recording of this podcast. Agreed. All right. So we also talked a little bit about base and temporal and environmental elements. So I want to dig a little bit more into that. How would you say these apply to vulnerability assessment and risk management? Like, what are the options you have once you've made that assessment? This is so unique to every environment. Right. Because everybody has different amounts of risk tolerance.
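The triage order Vincent describes — vendor severity first, then CVSS score high-to-low within each band — amounts to a simple two-key sort. A sketch of that idea, with made-up findings for illustration:

```python
# Vendor severity rating first (critical -> low), then CVSS base score
# descending within each band. The findings below are invented examples.
SEVERITY_ORDER = {"critical": 0, "important": 1, "moderate": 2, "low": 3}

findings = [
    {"cve": "CVE-2024-0001", "severity": "moderate", "cvss": 6.1},
    {"cve": "CVE-2024-0002", "severity": "critical", "cvss": 9.8},
    {"cve": "CVE-2024-0003", "severity": "moderate", "cvss": 7.5},
]

# Negating the CVSS score sorts it high-to-low while severity sorts low-to-high.
findings.sort(key=lambda f: (SEVERITY_ORDER[f["severity"]], -f["cvss"]))
print([f["cve"] for f in findings])
# ['CVE-2024-0002', 'CVE-2024-0003', 'CVE-2024-0001']
```

The critical finding jumps the queue regardless of score, and the two moderates fall into CVSS order behind it — exactly the "which of those 300 do I fix first" workflow from the conversation.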
You might see, like, Bob & Alice's flower shop tolerates a lot more risk than, you know, the National Bank of Canada, if such a thing existed. Right. So every organization is going to have their own tolerance of risk. And then based on that risk tolerance, you're going to determine in what areas can we remediate things slower. Mhm. Or maybe their risk tolerance is so low that they're like, we can't just deploy a vendor patch because a vendor said to do it. We have to go through our own testing process. Now, what does that look like? Right. So which ones may be... is it a prioritization of which ones go into my testing first before they get deployed to production? Right. So there is no one-brush-paints-all kind of thing. It's very unique to each individual or organization. But you have to take that into account. Like, I had a call with someone not that long ago where I asked them, you know, they wanted a bunch of fixes. I said, well, I mean, your list is pretty big. Which ones do you want me to focus on first? Like, I can't fix all 300 or whatever it was, but maybe give me, like, 10%. What are the top 10% that you want? Yeah. They couldn't tell me. Mhm. Like, well, that's a problem, because you need to do your risk management part here. Yeah. Your risk management process would tell you which of the ten... which 10% are the most problematic for you in your environment. So that you could tell me which ones you want me to fix tomorrow versus the ones that I could take a quarter to do. Gotcha. Yeah. You can't have everything be critical at the same level. You've got to order them out a little bit. I mean, it's that old saying, right? Like, if everything is important, then nothing is. Exactly, yes. 100%. No, I think that makes a lot of sense. And so all of this is telling me: you've got to keep track of incoming CVEs for your stack, you assess the severity of your own exposure, and then you make your own decision on how to act in response. Yeah.
And I'll just add one thing to that, Emily. I think it's really important that, you know, not every security vulnerability requires a patch. Mhm. A lot of times there are mitigating or compensating controls that can be in place that reduce the severity of that thing. Right. So it might be very difficult to patch something, and all I have to do is flip a switch in, you know, my web application firewall or whatever, right, something else. So not everything requires a patch. Sometimes that risk management posture will be, like, mitigations are sufficient. Gotcha. Makes sense. Slow the tigers down might be enough. That's right. Absolutely. How do you even come up with a score? What makes a critical vulnerability? You're watching the news cycle and you see something happen. Or maybe you're the victim of some sort of a cyber attack. You know, in the context of, like, a cyber attack, these are multifaceted cyber attacks, right? There's usually malware involved. There's maybe social engineering involved. Very rarely is it just the result of one singular vulnerability. Right. But there can be vulnerabilities at play in these cyber attacks. Which is why, I think, if you really want to talk about accountability, in that four-point scale, critical vulnerabilities, we talk about... we put things in the context of, you know, the CIA triad: confidentiality, integrity, availability. Right. So, you know, for a critical vulnerability, it has to exploit one of those three things or all of those three things. So, in the case of, like, the Log4j issue that we saw several years ago, you had the ability for root-level access on a system where user accounts were available. So integrity is no longer intact, right? Confidentiality has been breached. And if you have one of those, then it's very easy to kind of destroy the system at that point, which makes availability an issue.
So all three impacting, which, you know, helps explain why something like that Log4j issue was rated, you know, as a critical vulnerability. And there's that CIA again. I'm glad we talked about it earlier. You know, Jason Bourne always shows up. I don't know if that would make it more fun or less. Or it's probably just as fun as it is. I think it's... I actually remember the Log4j issue, and it's always kind of a trip when these things show up, like, on the news. Oh yeah. It actually reminds me of the first time we had a named vulnerability, Heartbleed, which I think was 2014. And my Mom heard about this on the radio, and she's asking me like, what are you even talking about? Like, how did you hear about this? And so it's a... It's a metal name too. Totally. And it's a weird phenomenon that we've had over the last decade of these named vulnerabilities. And some of them are, you know, like Log4Shell, totally important. Right? Like, that was a big deal. Heartbleed, it was a big deal. There's some other ones that get names because somebody is excited, and you're like, that actually wasn't such a big deal. It didn't need a name. It's not too far off from, like, naming hurricanes, I suppose. It's... True. But thank God that we don't name every vulnerability like we do for every hurricane. Or at least I think we do. I think, not unlike our CVE situation here, they have to be a sufficient level of bad for names to become involved. Okay. Well, maybe. Yeah. I don't know if they have quite as formalized of a, you know, spread or scores as we do, but... Yeah. Well, I mean, they do have some scores, I'm sure. I'm a little less familiar with them, I suppose. So I think we talked a little bit about that CIA trifecta a little earlier. And so it's any one of those elements, if it factors in, it counts as a critical vulnerability. Is that right? No. No, no, I want to clarify that. Yeah.
In order for it to be a vulnerability to begin with, there has to be an impact to the confidentiality, integrity, or availability, at least one of them. Gotcha. If there's a bug that doesn't affect any of those, sorry, I'm taking my CVE back, it's not a security issue. Now, a critical vulnerability would probably affect all three. Right. It's kind of like the worst level. So in, like, CVSS parlance, it would be like complete exposure to any one of those things. Right. So you can kind of have, like, no exposure, partial exposure or complete. Yeah. I think it's complete. But it's basically like all bets are off, like the availability is gone. It's not just interrupted. Like, you think of a denial of service, right? Yeah. At some point, maybe you can get a couple, you know, connections through. Yeah. Right. And so it's like, it slows everything down, or some people don't have access but other people do. Like, that would be partial. Gotcha. If it's like, no, we just turned the database off, no one can talk to it, not even an administrator. Yeah. That would be like a complete and total impact to availability. Right. Gotcha. I'm very selfishly thinking of my own internet service. Like, if it's slow, that's still on. It's off, it's off. Yeah, yeah, yeah, exactly. And when you look at all elements of the CIA, a critical vulnerability would be one with, like, the highest degree of impact to any one of those, but not exclusively. Right. Like, if you have to basically, we like analogies here, so, like, create the carnival, staff it and do all the tricks before you can get even partial access to one of those areas in that CIA triad, probably not critical. But boy, is that a lot of work. Gotcha. Makes sense. But if the thing already exists and I can, like, snap my fingers and I just pop in and have, like, total access or can change anything I want very easily, that one is definitely critical. Understandable.
So, like, does the number of those categories that it affects factor into the score, or is it more like... is it kind of a one-to-one ratio? Like, are they all independent, or does it matter how many of them it affects? Does that make sense? I mean, they are all independent. And so, like, on the CVSS side, they all add to, like, increase that score. So, I mean, I don't know what the numbers are. The math is not something I can do in my head. But, like, let's just say, for example, none would be, like, a modifier of zero. Plus zero. Mhm. A partial might be a modifier of, like, 0.5. Yeah. So if I have integrity at partial, it adds 0.5 to my score. If I have availability at partial, that adds another 0.5. Now my score has gone up by one. Gotcha. Maybe complete is a whole point. It's one. Right. So partial for all three might be 1.5. Complete for all three might be three. Right. In terms of how additive that score gets. Mhm. From a Red Hat perspective, because we don't use CVSS metrics in that way, it has zero relevance to our rating scale. Gotcha. Like, for us, critical, like, it could be partial for any one of those things. And if it's, like, I can do it with my eyes closed, sleeping in the back of a car kind of thing? Probably going to be critical. Gotcha. It doesn't have to be all three at, like, full extent. It could be two at full extent, one partial, or whatever. Right? Yeah. Makes sense. There's writeups on our website that explain all the different categories. But for us, like, critical is wormable and super easy to do; important is the same vulnerability, but requires a few more steps or unlikely configurations in order to exploit. Yeah, no, it makes perfect sense. I just never met a formula I didn't try to pick apart. Yeah, go pick it apart. There's a lot of math to it. It's really kind of interesting. You should check it out. Yeah.
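The toy arithmetic Vincent walks through can be sketched in a few lines of Python. To be clear, these modifiers (none = 0, partial = 0.5, complete = 1) are the round numbers from the conversation, not the actual CVSS formula, which is considerably more involved:

```python
# Toy illustration of the additive idea from the conversation. These
# modifiers are hypothetical round numbers, NOT the real CVSS equation.
MODIFIERS = {"none": 0.0, "partial": 0.5, "complete": 1.0}

def toy_impact_score(confidentiality, integrity, availability):
    """Sum an independent modifier for each leg of the CIA triad."""
    return sum(MODIFIERS[level] for level in (confidentiality, integrity, availability))

# Partial for all three adds up to 1.5; complete for all three adds up to 3.
print(toy_impact_score("partial", "partial", "partial"))     # 1.5
print(toy_impact_score("complete", "complete", "complete"))  # 3.0
```

The point of the sketch is only that each CIA element contributes independently, so two partials raise the score as much as one complete.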
So, like, talking about some of the, you know, tens on the ten level of this scale, like, we talked about Heartbleed and Log4j. Like, how often do they show up in the news like that? Like, is that... it doesn't seem super common. It's not, right. Like, I mean, I guess it's... we'll call it mainstream news. Of course. Right. Because, I mean, I subscribe to a bunch of different security blogs and security sites, and a lot of these vulnerabilities show up there. Yeah. How often does your Mom talk to you about them? Right. Almost never. Almost never. So, I mean, there will be some that show up on the news, but it's going to be, like, broad impact. And that typically means, like, it's either new or novel. Mhm. Or it has affected a large swath of the population, maybe not directly, but through, like, their vendor. Like, if we look at Equifax a number of years ago. Yeah. Right. The Apache Struts 2 vulnerability. Mhm. I mean, it's pretty bad, but in and of itself, no worse than a lot of other vulnerabilities. Yeah. The reason why it made the news was because Equifax was unfortunately exploited by it, and all of these people had their credit reporting history or whatnot potentially exposed. Well, now the aperture is much broader in terms of the number of people impacted. Those are the ones that hit the news. So they're not necessarily always super critical, or more than any other critical vulnerability, but, like, the impact was much more broad reaching. Yeah. Makes sense. Now, it probably correlates with, like, higher scores, but it has more to do with how many people does it impact and in what way. All right, so obviously, critical vulnerabilities: bad. But let's talk a little bit more about the lower-rated vulnerabilities. So, low-level vulnerabilities. You know, it's something that could be used in combination with other vulnerabilities.
So if somebody, you know, did some social engineering, had access to the system, and they gained access to that file, then sure. Yeah. It could provide them with more information. Your lower-level vulnerabilities are usually, like, stepping stones to something else that they facilitate, kind of a greater problem. But, you know, if you've addressed all of your more critical and important vulnerabilities, usually those low vulnerabilities are non-issues. But if you've got a couple of those critical, important vulnerabilities that exist and somebody exploits them, then suddenly there are a lot of other vulnerabilities that can be taken advantage of as well. If your only unaddressed vulnerabilities are low level, that means an attacker probably won't be able to do as much. That's right. I mean, it depends on the context of the system. Mhm. But by definition, a low vulnerability is one that either... it's very, very difficult to exploit, or it yields a very low level of access, I'll call it, like, whether it's elevated privileges or access to information that they wouldn't otherwise have. Typically a low vulnerability is something that we would see, like, on an operating system, where you have to have shell access to do it. And we're not back 20 years ago, where a lot of people accessed the internet, you know, using shell access. We don't do that anymore. So we don't grant shell access to a lot of users. Yeah. So in your system, if it's basically, we don't allow shell access to it, whether it's a system, a container, or whatever; if it requires local shell access to exploit something, the fact that they got that local shell access to begin with is probably a bigger problem. Gotcha. Right. At that point they don't need to exploit that thing. They've already got what they need. Yeah. Makes sense. So it more piles on, or piles together to create a bigger issue, than any one of
them really being an issue on their own, either through being super hard to do or requiring some other major vulnerability slippage to get into. Right. And people do a lot of talking about, like, vulnerability chaining. Like, okay, I have this low, and using it I can exploit this moderate, and using it I can exploit this other thing. I say, great, but you're only going to get the maximum amount that any one of those things would give you to begin with. Yeah. Now, if you're not fixing anything, so you've got unpatched criticals and you've got unpatched importants, yes, that low could be a stepping stone to exploit something much more impactful. But in and of itself, it's not... exploiting a low isn't going to turn the next moderate into a critical because you exploited that low first. Yeah. Makes sense. Right. So, like, what Jeremy was talking about there, like, yeah, make sure your criticals and importants are fixed. That's why those are the first things that you should be fixing, all day every day. Mhm. The other ones, mitigate them, you know, have your other mitigations in place and whatnot. But those things aren't going to have the same level of impact as those criticals and importants. Right? Yeah. You're going to spend a lot fewer resources fixing that loose post in your fence than, you know, the fully broken window on your house. That's right. Exactly. It still may be helpful to get into your backyard, but it's not going to get you anywhere on its own. Right? Yeah. And, you know, there's probably other protections you can have in place before they even get in there. Mhm. And sometimes it's like, okay, well, why would I have to fix that thing if I built a moat around my house? So, like, the fence doesn't matter. Right. Like, they're going to have to swim the moat before they can get there. There you go. That'll take care of... well, no, that wouldn't take care of the tigers too. They swim... I don't know, can a tiger swim... do they? They do. Scary. No.
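Vincent's chaining point — that a chain only ever yields what its single most impactful link would have given you on its own, rather than the links adding together — can be sketched like this. The severity names and the severity-to-access mapping are purely illustrative:

```python
# Hypothetical sketch of the vulnerability-chaining point above: the access
# gained from a chain tops out at what its worst single link grants, so a
# low can't "upgrade" a moderate into a critical. Numbers are illustrative.
ACCESS_LEVELS = {"low": 1, "moderate": 2, "important": 3, "critical": 4}

def chained_access(chain):
    """Access from a chain is the max that any single vulnerability grants."""
    return max(ACCESS_LEVELS[severity] for severity in chain)

# A low chained into a moderate still tops out at moderate-level access.
print(chained_access(["low", "moderate"]))              # 2
# Only an unpatched critical in the chain yields critical-level access.
print(chained_access(["low", "moderate", "critical"]))  # 4
```

Which is exactly why patching the criticals and importants first removes most of what a chain could ever reach.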
So, yeah, what I'm getting at there is it still can be an issue, especially if they're strung together or used in conjunction with something else, but less of a big deal to address these on their own. I imagine we wouldn't necessarily tell anyone, just ignore them, it's fine, but it's, like, a mitigation plan versus a full patch for every single one. Is that kind of what we're driving at? No, actually, we do tell them to ignore them. Sorry. I'm bursting your bubble here. Oh no! Because in some cases, actually, if you try to mitigate... like, say, I mean, the ultimate mitigation is a patch. Right. And so that would require a vendor like Red Hat, or anybody else, to produce a change in code to eliminate that problem. Well, what if that change in code actually introduces another problem that's worse than the thing that we fixed? Yeah. Gotcha. Not only are you increasing your risk because you've changed the code, but now I'm telling you, go ahead and spend some money, some time, and some resources to fix this thing that didn't matter anyway. Like, why would I do that? That's not your business model. Your business model isn't to patch everything. So I'm not going to do that for those things that I know are... they literally mean nothing. And that's with the presumption that you're applying all the other patches we produce, because we wouldn't produce them if the risk didn't outweigh the risk of that code change being introduced. Right? Yeah. Those are the things that you fix. The other ones, you can ignore them. And we do tell people to ignore them. Yeah. It comes down to the ratio of hassle versus risk. Yeah. Like, I did the math. This was a couple of years ago. But on the number of vulnerabilities that Red Hat produces fixes for, say we had, like, a simple number just for easy math, right, $1,000 for me to remediate a vulnerability. So this is the end user: their time to test, to deploy, to, you know, reboot or restart a service or whatever. Right.
A thousand bucks, and it was, like, over $1 million. Yeah. That it would cost, you know. And there were, like, 20... I don't even think it was that high. It was, like, ten known exploited vulnerabilities. Mhm. So you could spend $10,000 to remediate those, or maybe $300,000, because obviously you don't know which ones are going to be exploited or not. But if you went for the whole gamut of everything from critical to low, and fixed everything at $1,000 a pop, you'd be spending over $1 million to avoid being exploited by ten things. Yeah, and ten very unlikely things at that. So very much diminishing returns. 100%. All right. So, yeah, to what you said, we do kind of tell them to ignore them, because they don't matter. Yeah. Understood. No, that makes sense. And I meant, you know, not the bad ones. Yeah, yeah. We don't fix every bug, you know... We don't, we don't fix them all. It is really a risk versus reward. And where the risk is lower with the fix, we'll fix it. If the risk is greater with the fix than the thing that's being fixed, we tend not to do it, because people want their stuff to run. Mhm. Right. And we don't want to introduce new problems. Right, or bigger problems. Don't rock the boat. Don't rock the boat, I like that. Unless it might sink. There you go. There you go. Yeah. It all makes sense. All right. So I feel like we covered a lot of ground in this one. So I want to kind of go back and recap. What all did we talk about here? We talked about the CVE program existing in general as more of a governance body than a database. That means, you know, we're setting standard rules, standard kind of processes, standard types of information with common names and numbers for trackability. Number one. Number two, not everyone has the same exposure to the same vulnerabilities. So, like, there's that central CVSS score just really trying to get down to the bare metal facts about a CVE.
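Vincent's back-of-envelope cost math works out like this. The per-fix cost and the vulnerability counts are the hypothetical round numbers from the conversation, not real figures:

```python
# Back-of-envelope version of the patch-everything-vs-patch-what-matters
# math discussed above. All figures are the conversation's round numbers.
COST_PER_FIX = 1_000   # end-user cost to test, deploy, and restart per fix
TOTAL_VULNS = 1_100    # hypothetical: everything from critical to low in a year
KNOWN_EXPLOITED = 10   # the handful actually known to be exploited

fix_everything = TOTAL_VULNS * COST_PER_FIX
fix_exploited = KNOWN_EXPLOITED * COST_PER_FIX

print(f"Patch everything:       ${fix_everything:,}")  # over $1 million
print(f"Patch known-exploited:  ${fix_exploited:,}")   # $10,000
```

The two orders of magnitude between those totals are the diminishing returns being described.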
But everyone will kind of have their own take on it, their own context, which might influence their own priority for it. Not just, like, at the individual level, but at the producer or the vendor level. Mhm. So it's really complicated. Like, there is not one static rating for these things across the board. Companies are different. Vendors are different. Software is different. Configurations are different. Like, I could go on. It's complicated. Exactly. You can try to make order from chaos, but the chaos will always be there. It still remains. For sure. And I think the last main bit we talked about was that some vulnerabilities are much worse than others. Obviously, they're not all created equal, even, you know, with that kind of context defining how bad they are for specific vendors or people. Yep. Nope, they are not all created equal, and some are ridiculously worse than others. And you'll see that. Right? And they get called out, and they're the ones that get press attention, like you said. And they're the ones that, you know, patches are produced for. And those are the ones that people... like, my personal pet peeve is people who don't apply patches that are made available. I don't care. This is even with my wife and her phone. Right. It's there. Apply it. Like, they're produced for a purpose. Exactly. No, absolutely. I know I'm guilty of procrastinating on my updates on occasion, but I see that word critical and I'll get to it faster. Yes. You should. Just for you, I will do better, I promise. Thanks, Emily. I love it. All right. So, a lot of ground covered. I think we've talked a little bit about everything I know about CVEs. So you've heard our thoughts. Now it's time to add yours to the conversation. So, if you're listening to us here on this podcast, hit us up on social media at Red Hat and use the #CompilerPodcast. And I think that will do it for this episode of Compiler. This episode was written by Johan Philippine.
And a big thank you to our guest, Jeremy West. Compiler is produced by the team at Red Hat with technical support from Dialect. And if you liked today's episode, don't keep it to yourself. Follow the show, rate the show, leave a review, or share it with someone you know. And we'll see you next time.

About the show

Compiler

Do you want to stay on top of tech, but find you’re short on time? Compiler presents perspectives, topics, and insights from the industry—free from jargon and judgment. We want to discover where technology is headed beyond the headlines, and create a place for new IT professionals to learn, grow, and thrive. If you are enjoying the show, let us know, and use #CompilerPodcast to share our episodes.