
Collaboration In Product Security


About the episode

How do security professionals work together to manage the known and the unknown? The Compiler team wraps up their exploration of product security by focusing on how teams across companies and projects work together effectively.

Jeremy West returns to the show to share how transparency, standards, and accountability drive better outcomes when things go wrong.

Compiler team Red Hat original show

Subscribe

Subscribe here:

Listen on Apple Podcasts
Listen on Spotify
Subscribe via RSS Feed

Transcript

There's a very fine line between blame and accountability. Accountability is good, but blame doesn't really do anything. It doesn't help. It frankly just makes things worse, right? Because then you start blaming people who are there to help. And all of a sudden this goodwill gets destroyed and maybe they don't want to help anymore.

Yeah. You're wasting your effort on focusing on the problem rather than fixing it.

Right, and then we end up in a worse place than where we started.

This is Compiler, an original podcast from Red Hat. I'm Emily Bock, and I'm joined by Vincent Danen this season to talk about product security. And in our final episode of the season, we're sharing how security professionals collaborate effectively when dealing with both the known and the unknown. We've had some great discussions this season, and it's time we wrapped everything together. Today, we'll be hearing from Jeremy West, who leads Red Hat's Product Security Incident Response Team. Vincent, you work with Jeremy, right?

Sure do. He's a pretty smart guy, and I'm looking forward to what he has to say.

Speaking of, what does the Incident Response Team do exactly?

I mean, the name sort of implies that a little bit. They respond to incidents, but in this case, PSIRT, what they actually do is respond to vulnerabilities that are newly discovered. So they're the folks who kind of ingest all of this information, learn about these vulnerabilities from a bunch of random sources, and then orchestrate the remediation of those vulnerabilities as appropriate, across all of our products and services.

Nice. So, like first responders and researchers all wrapped into one.

Definitely first responders. That's an analogy I use a lot. We're the fire people, or, like, we're the fire brigade. We're putting out fires. We're finding out about fires, and we're orchestrating the management of those things.

Nice. Yeah.

No, we use the word triage a lot in the tech sphere, and I think they're probably closest to the real thing. In fact, that is one of the phases of the incident response process. That initial piece is called the triage. So, I mean, maybe they're nurses as well. I don't know, but they have all the hallmarks of first responders. These guys are amazing.

So many hats.

Yeah. No, I think that makes a lot of sense. And speaking of that kind of research and knowledge that they're digging into, one thing that's not always clear about vulnerabilities: who knows what and when?

Generally speaking, most vulnerabilities come to us already public, and at Red Hat, we're very open and transparent about those vulnerabilities. So if we are alerted to something by a researcher, or if we discover a vulnerability on our end, you know, we publish that information within 24 hours to our website. So it will show up to customers as affected. It won't show up with a fix, because a fix isn't available yet, but it will show up as affected. We publish that information because it's important for us to get it out as quickly as possible, so that we help, you know, customers and partners make good decisions on how to remediate and prioritize risk.

And that is what I call a powerful commitment to openness, because it means that our customers are informed very quickly, even before a fix is ready, which allows them to assess their own risk. Like, it's not just about sharing the information. It's about empowering people to make those sound decisions for themselves, having access to that information.

Yeah. And I see how that's important too. But not every company is Red Hat. So how can companies trust that information? It's a little bit back to our whole medical analogy. It's a little bit like a medical assessment. You get a diagnosis from one doctor. You go to another for a second opinion.

They may or may not agree or say the same things, I imagine.

Yeah. And that's why Red Hat utilizes specific standards, publishing not only to our web portal, but also providing other information that is machine readable. Like VEX, for example. VEX is the Vulnerability Exploitability eXchange, a data stream that embeds status information about vulnerabilities into CSAF, the Common Security Advisory Framework format. And this allows customers to overlay Red Hat's findings onto their own security assessments, actually understanding how Red Hat assessed and validated that information for that particular vulnerability, for our particular products.

Now, I also know that Jeremy acknowledged that there are rare exceptions where, you know, word about a vulnerability might not yet be disclosed.

I think there are times when we, you know, we'll have a vulnerability that we may keep as embargoed because we need to really protect customers. You know, maybe it's a serious vulnerability which could cause financial loss for customers. So we're not being reckless.

So that makes sense to me, but there is something else on my mind. Whether it's a massive outage or a vulnerability that gets found, when things go wrong in a very big, very public way, there's always a sense of wanting someone to blame. And Jeremy had some thoughts on that too.

I do not think that the industry should place any focus on fault or blame. Part of the reason why your question surprised me is because I've never even considered that. I don't know that anybody on my team considers that, right? Weaknesses in software are a natural thing, right? And, you know, this kind of goes hand in hand with this idea. There's this kind of crazy idea out there that, you know, you can have software that's vulnerability free. I don't think that it's realistic. There are always going to be bugs.

And unfortunately, some of those bugs are some sort of a weakness in design, right. And that weakness can be exploited, and that's going to be a vulnerability. I think that really the focus should be on accountability, right. Making sure that instead of saying, hey, you, Mr. Maintainer or Mrs. Maintainer or whoever is working on this code, are at fault for implementing this particular weakness, you know, just making sure that people are developing software, taking accountability for it.

Yeah. It's important to note, you know, this type of accountability goes beyond developers and maintainers, right? There's another actor in all of this as well: customers, right? Customers need to apply patches so that they avoid remaining vulnerable to those things for which there is a fix.

And so we're talking about a shared responsibility here. And a commitment to security overall.

Exactly. And it's not just about finding and fixing things. It's also about acknowledging the inherent challenges of software development and being all in on securing the products we build, deploy, and use.

Chasing perfection is a great goal, but it's not something that you can ever actually attain. It's entirely unrealistic, right. Because all software has bugs, right? And some of those bugs today might seem like an inconvenience, and then somebody clever comes along later and finds a way to exploit that, and all of a sudden that bug has become a vulnerability. It didn't necessarily start that way until somebody had that knowledge or the grit to go look and figure it out.

Exactly. And like we've talked about before, product security is very much a team effort. And putting some of that onus on a customer or an end user to be applying those patches, like, there are so many variables in that entire situation that perfection is just flat out impossible, I think.

It is. We can do our best to get close. Right. And it's a team sport, right, with multiple teams, not just singular teams, right?

Like, those customers can't apply a fix that doesn't exist. So somebody has to create it. But once that thing is created, it needs to be applied.

Yeah. And I really like what Jeremy said around the concept of blame in all of this too, because it's a very complex topography, product security in general and tech as a greater sphere. And I think there's a very fine line between blame and accountability. Accountability is good, but blame doesn't really do anything. It doesn't help. It frankly just makes things worse, right? Because then you start blaming people who are there to help. And all of a sudden this goodwill gets destroyed and maybe they don't want to help anymore.

Yeah. You're wasting your effort on focusing on the problem rather than fixing it.

Right. And then we end up in a worse place than where we started.

Exactly. So we started off with the practical ways people can collaborate in product security: info sharing, building standards around security assessments. But where does the work start in open source communities? This can be a little controversial, and we'll talk a little bit more about this after the break.

That's Jeremy West again. We left off with his opinions on accountability when things go wrong. Now he's talking about where the work in security begins.

So Red Hat, as a downstream consumer of most of those products, we can fix things. We can, you know, suggest improvements upstream, but we really need the collaboration from those upstream maintainers to fix that code and kind of adopt this secure by design mindset from the very beginning. We also need buy-in within the community of upstream projects to actually focus on fixing vulnerabilities.

Yeah. And this applies specifically to a company like Red Hat. And I can tell you, sometimes it's not always easy to get that buy-in.

No. Definitely not. That's a challenge common in many development cycles.

And devs can even go back and forth about whether thinking about vulnerabilities is even in their purview versus adding features.

I know sometimes it's seen as distraction and frustration, because a lot of developers just want to focus on cool functionality, right? They don't want to be distracted by fixing a vulnerability. You know, even within the Red Hat community, I'd say there are people that sometimes will argue and say, well, this is not necessarily a vulnerability.

But Jeremy stresses the security landscape is ever changing and the community needs to adapt.

I think that we have to, as a community, as an open source community, agree that the security landscape is changing. Customers are using our software in ways that we don't imagine. And so you have to build for that. You have to plan for that.

And I suspect this is happening more than we'd like to admit.

Oh, that's so true. We often don't have any say in how our customers use open source code, as long as they're not, you know, actively breaking a law. How should a tech professional or maintainer handle this, do you think?

Well, first, I don't think we have much say as to whether they're breaking the law or not, because they're using the software, right, in whichever way that they want to. But I think that's kind of the point, right. And this is one of the beautiful things about open source. You can use it in whatever way that you want. And so when we're looking at it from a product security perspective, and we're assessing, whether it's CVSS scores or severity ratings or whatnot, we're doing it in the context of how that software is being used. And that's actually a really challenging thing to do. With some things, it's really easy, right? If you're looking at, like, an operator or something like that in, say, the Kubernetes or the OpenShift ecosystem, those pieces of software might be used in a very specific way. Right. Something that I would often refer to as plumbing.

An attacker can't get to it. An end user can't get to it. It's just something that maybe shuffles data from one system to another, right, that has no actual access to it. When you're looking at something like an operating system, you're providing libraries there, and you know what you've built against it, and so you know how that's being used. But other people, like customers, could be building their own applications using that same library in a way that you never envisioned, because you're not building their application. They're building it. So we have to account for all of these things. And even upstream, right, they literally have no idea how people are using their software.

The one thing that actually comes to mind here is the Linux kernel, right? The Linux kernel used to be very hesitant to call anything a security issue, because they said everything could be a security issue, because the kernel can be used in a thousand different ways or in a thousand different systems. You have different architectures. You have different use cases. Like, I mean, I swear somebody has put the Linux kernel in a toaster somewhere, right? But that's not your typical use case that upstream is thinking about. And it might be using it in some really strange way, right. And so the beautiful thing about open source, again, is that you can use it any way that you want, like you have that full license to use it however you want. And a security practitioner, whether it's somebody like Red Hat or whether it's the upstream maintainer, sometimes we have to think about it in those ways.

Yeah, for sure. Code is a tool. And, you know, I've been a product manager for quite a while now. And the one consistent thing I have always seen is that if you sit down with a customer using something that you have designed and built, they will not use it exactly as you expected, ever. They will use it in ways that you never thought were possible.

Yes, yes.

And so, yeah, there's always going to be some kind of blind spot. You can never be sure of every way that your software might be used. So we have to plan for the intent, and then maybe a little bit for anything else we can anticipate. But, like, perfection isn't real. The theme of the day, I suppose.

No, and that's true. But I mean, even then, like, there's the intent. There's, you know, what could reasonable use be. But then I think there's also this place, especially in an ecosystem like open source: listen to what people are saying about how they're using something, right. Like, that might actually be a legitimate use case that you as an author didn't think about, but a number of people are using, and it may make sense to support that. And we see that all the time. Conversely, they may be using it in a very dangerous way, and you should be saying, like, hey, that's actually not a good idea, but I can see what the thing that you're trying to accomplish is. Maybe there's another way to do it.

Yeah, absolutely. Absolutely. So when we begin the work in security, in that open source space, it's about understanding the real world scenarios and impacts, not just the severity of the threat itself. And that can help allow engineering teams to prioritize the critical and important fixes a little more effectively.

Yeah, and it ties back to everything that we've discussed this season, right. Things like transparency and communication. Accountability and decision making. And collaboration between communities and organizations to ensure that everyone is on the same page.

So this is it, the final episode of the season. The end of this season also means that we'll be saying goodbye to Vincent, my intrepid co-host. But before we do that, I want to do a little bit of a recap. We talked about a whole lot of things throughout the episodes of this season, all product security related.

But we covered a lot of ground.

We did, we did.

First, I just want to say this has been fantastic. Thank you guys so much for having me on. Like, being able to talk about product security is one of my favorite things to do. So having an entire season dedicated to it was like Christmas come early, right? So it was a beautiful thing. And I think that we covered some really important topics, right. Like, there's the whole, you know, vulnerability response thing, which is what people typically think about when they think about product security. But it goes so much beyond that, because all of these products deal with so many different things. We were talking about, you know, data. We were talking about cryptography. We're talking about so many things that are in this discipline that we call product security. It's been great.

Yeah, I think it's really cool to shine a spotlight on something that often shows up more or less as a checklist item in someone's, you know, documentation, because it might be a checkbox, but there's a whole lot wrapped up in there that not a lot of people know the inner workings of.

It's not a checkbox. It's not a checkbox. Like, security by checkbox is the worst kind of security. And I think that product security does some of the most amazing stuff out there. Like, when you're thinking about some disciplines, when it comes to security, it's all about protecting yourself, right. Like, you look at information security: I'm protecting the network. I'm protecting my assets. I'm protecting my company. But when you're looking at product security, yeah, there's an element of protecting ourselves. We have supply chains. We have build systems we want to protect as well. Right. But ultimately it's about protecting our end users and our customers, which is an entirely different thing. And it's super cool, because you get to do something that's, I mean, maybe this is a little egomaniacal on my part, right, but I always look at it like we're helping the planet.

You know, when we look at, whether it's open source, which has won, by the way, for anyone who's doubting, you know, if we're looking at Red Hat and our customers and all of these things, we are literally helping the planet, keeping the planet secure. Yeah. And that is an awesome and terrifying thing all at the same time.

No, I absolutely agree with that too. And I think the fact that anyone can even perceive product security as a checklist item means that we put so much effort into it that people can take for granted, to some level, that the products that they're using are safe. And I think that's cool.

And they should, right? I mean-

Yeah, that is kind of the promise that all of us who develop software and sell it are giving them, right. Like, nobody wants to develop things that are terrible and cause problems. We want to make wonderful things that, you know, delight and excite the customers, as some people in marketing would put it, right. But we want people to actually enjoy and find value in the things that we're producing. In order to do that, you have to build reliable, trustworthy software that people can depend on.

Yeah, I think it's hard to-

Yeah. Right. So I mean, at the end of the day, that's what it is.

I think it's hard to be delighted if you don't feel safe.

Yeah. You're probably right about that one. For sure, for sure. And it's a very deep topic. We have a whole lot more to talk about that we didn't even get to touch on in these episodes. And I know we'll be saying goodbye to you here in a minute, but I do very much hope that we have you back on the show at some point, because it has been an absolute delight.

I would absolutely love that. And again, like, it's been great. And I hope, for all of those who've been listening throughout the season, that you've learned a lot and it's given you a lot of things to think about, because there's a lot in this space.

I know I learned a lot, so I can only hope that some of our listeners have as well.

Mission accomplished.

Yes. And for all of you listeners, do hit us up on social media. You can tag us @RedHat and use the hashtag #CompilerPodcast if you want to give us any of your own feedback or thoughts on what we've talked about so far.

And that does it for this episode of Compiler. This episode was written by Kim Huang, and thank you very much to our guest, Jeremy West. Compiler is produced by the team at Red Hat with technical support from Dialect. If you liked today's episode, follow and review our show on your platform of choice. To all of our listeners, and to you, Vincent, until next time.
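A note for readers who want to try the "overlay" idea from the transcript: VEX data is published as JSON documents following the CSAF 2.0 csaf_vex profile, where each vulnerability entry carries product_status lists (known_affected, fixed, known_not_affected, and so on). The sketch below shows, in plain Python, how a consumer might look up the status of a product ID in such a document. The top-level field names follow the CSAF spec, but the CVE number and product IDs here are invented for illustration and the document is heavily trimmed; real vendor VEX files carry much more detail.

```python
import json

# A minimal, hypothetical CSAF VEX document. Field names follow the
# CSAF 2.0 "csaf_vex" profile; the CVE and product IDs are made up.
vex_doc = json.loads("""
{
  "document": {"category": "csaf_vex", "title": "Example advisory"},
  "vulnerabilities": [
    {
      "cve": "CVE-2099-0001",
      "product_status": {
        "known_affected": ["example-product-1.0"],
        "fixed": ["example-product-1.1"]
      }
    }
  ]
}
""")

def status_for(doc, product_id):
    """Return (cve, status bucket) for the first vulnerability whose
    product_status lists the given product ID, or None if absent."""
    for vuln in doc.get("vulnerabilities", []):
        for status, products in vuln.get("product_status", {}).items():
            if product_id in products:
                return vuln["cve"], status
    return None

print(status_for(vex_doc, "example-product-1.0"))
# prints ('CVE-2099-0001', 'known_affected')
```

In practice a consumer would map its own deployed component identifiers onto the advisory's product IDs first; the lookup itself stays this simple, which is exactly the point of publishing machine-readable status alongside the human-readable advisory.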

About the show

Compiler

Do you want to stay on top of tech, but find you’re short on time? Compiler presents perspectives, topics, and insights from the industry—free from jargon and judgment. We want to discover where technology is headed beyond the headlines, and create a place for new IT professionals to learn, grow, and thrive. If you are enjoying the show, let us know, and use #CompilerPodcast to share our episodes.