
AI Is Changing The Threat Landscape

Compiler Team | Security


About the episode

The rapid adoption of AI often means security is an afterthought. And let's face it—humans are not always great at assessing risk. But how has AI transformed the security landscape? What can the industry do to stay informed and ready to respond to threats? And what does this mean for product security?

Jeff Crume, distinguished engineer at IBM, stops by to talk about AI as "the new attack surface," and explains why the technology, like so many others, can be used for both altruistic and malicious purposes.


Subscribe


Listen on Apple Podcasts | Listen on Spotify | Subscribe via RSS Feed

Transcript

Which do you think kills more people in the US? Cows or sharks? It's a trick question, but it's cows. Yeah, and that's not cows that we eat and get food poisoning from. That's cows causing trampling deaths and things like that. It's about 22 people a year. Now, if everyone listened to these statistics, now you look at a field of cows and they give you that blank stare... now you know they have dark hearts. Now, I'm being facetious, but the point is we're not always great at assessing risk. This is Compiler, an original podcast from Red Hat. I'm Emily Bock, and this season I'm joined by Vincent Danen to learn more about product security. Today's episode: how AI has transformed the security landscape, and what the industry can do to stay informed and engaged. All right. First up, we have Jeff Crume. For those of you who follow cybersecurity, he might be a familiar face. He's a cybersecurity architect, distinguished engineer, and master inventor at IBM. He's also known on YouTube as the security guy. And according to him, the developments around AI have pros and cons for those working to defend systems. I remember when I was studying AI as an undergrad a thousand years ago, riding my dinosaur to class, that there was nothing even close to natural language. And now all of a sudden, we have it, and it answers with great confidence. And sometimes it's right, but sometimes it's not. And when it's not, it is just as confident as when it is. AI can lie. It can be told to lie and it will follow the instructions it's given. So if someone gets to the system before you do or changes the system prompt on this, then absolutely it will give you wrong information intentionally, because that's what it's been told to do. So this is great news for the bad guys and bad news for the good guys. Yeah. The bottom line there: AI is very self-assured, even when it knows it's lying to you. So, Vincent, I want to get your take on this. What do you see as so dangerous about an overly confident large language model? I mean, I think the big thing is the trust. We assume this thing is giving us accurate information. It's kind of the same as when you do a Google search: whatever you're looking at, you're reading it, you're believing it. Sometimes you shouldn't, sometimes you could. And I think that with AI it makes it even more challenging, particularly when it's so confident and it's telling you, no, no, I'm right. Absolutely. No, I think I'm 100% on board with you there. It's not only that it can be told to lie to you; it will be very convincing while it does it, too. Oh, 100%. Yeah. For sure. And it's not just us with these tools. Also, if you go back to Jeff, the security guy, if you ask him what the real danger is, it's simple. It's that the democratization and proliferation of AI tools means that AI technology can be accessed and used by anyone. Like you talked about good and bad actors. They've all got it. There's a talk that I've been doing at a lot of industry conferences lately called AI, The New Attack Surface. Every time we come out with a new technology, it's something that can be used for good. It's something that can be used for bad. We call it dual use. You know, fire is an example of a dual use technology. You can use it to warm yourself and cook your food, or you can burn down the house. So it could do good or bad depending on whose hands it's in. And we'll use generative AI to defend. The bad guys will use it to improve their attacks.
Another example that we're already beginning to see is improved phishing attacks. We trained everybody to look for bad grammar, spelling errors, things like that. And that would indicate this is a phishing email. Just hit delete. We need to unlearn that from everyone because, moving forward, the smart phishing attackers will use generative AI to generate the phishing email, and it will be in perfect English. It will have no spelling errors. So if people are looking for spelling errors and don't find them, then they're likely to fall for it. Also, the attackers can use generative AI to go off and do research on you as an individual. Look up all your social media feeds, and then it might generate a hyper-personalized phishing email that makes sense only to you. It would reference, you know, your family and friends, your hobbies, where you work; all this kind of very detailed stuff that nobody would have time to do on any sort of mass scale. But you can put it all into AI, have it do the research, and generate a different phishing email for every single person out there, and it will be far more convincing and people will fall for it. So we're going to see the bad guys using the technology. That means the good guys have to use it too, and use it to do better defense. And that is really the crux of the matter. AI is a tool, and it's a powerful tool, and it can be used for whatever you use a tool to do. So let's tie this back into our season theme of product security. How does this relate specifically to product security as a whole? Yeah, it's a... I mean, it's a challenging thing, right? We think about generative AI and all the fancy stuff we can do with it, and we have to remember that there's a whole bunch of fancy stuff that these attackers can do as well. Right? So, I mean, things like phishing and the social media stuff, as a security practitioner, yeah, this stuff scares me, right? Is it irrelevant to product security itself? No, not necessarily. You don't phish software, but you can use generative AI to look for vulnerabilities. You can, you know... we're looking at these AI-assisted tools. Have you ever tried any of these coding tools that have generative AI capabilities in them? A little bit, yeah. We've been playing around with a few of them in Ansible, and lately they're really impressive. Like, it's a whole new world. They totally are. But you've got to pay attention to what it's producing, because it could be producing vulnerabilities in your software that you don't know about. And you also have to think about maintainability. Right. Like, I've had it create a ton of code for me. I have no idea what it does, but it works, it's magic. And then I have to come back to it later, and then I have to rely on the AI to figure out what it generated before. Right? So I mean, there's definite applicability to the product security space when it comes to these tools, and it really comes down to knowing what the limitations are and knowing what the risks are. Exactly. And it's not just that AI can be used in, you know, the detecting or the attacking. It's also that whole surface area of all the code that's going out. If AI is used in that, there are implications there, both good and bad. I think you're absolutely right. It's really cool to see it generate something and even have it work. But then when you try to fit it into the rest of the environment as a whole, it's a little bit more of an adventure. Oh, totally.
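To make the "vulnerabilities you don't know about" point concrete, here is a small, hypothetical illustration of the kind of thing a code assistant can quietly produce. The table, columns, and function names are invented for the example; the unsafe version works fine in a demo, which is exactly why it slips past a casual review.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The sort of code an assistant may happily generate: it runs and returns
    # the right rows in a demo, but the f-string makes it injectable
    # (try a username of "x' OR '1'='1").
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Same behavior, but the driver binds the parameter, so user input
    # cannot change the shape of the query.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    print(find_user_safe(conn, "alice"))
```

Either a human reviewer or, as discussed later in the episode, a second AI pass needs to catch the first version before it ships.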
Everyone's going to have their own preferences, right? If you look at some open source communities, they're embracing AI with open arms. You see these generative AI coding tools and they're okay with it. And others have policies that say absolutely not. Right. And everyone has to kind of feel that out; as with all things, feel out your own risk tolerance here. Absolutely. And from a security professional standpoint, in this dynamic where AI can strengthen and weaken both systems and applications, how do you think about that? Like, how does that fit into your world? Well, I mean, I try not to think too much about the attacker part of it, right? Because, I mean, this is going to sound very selfish, but on the product security piece, where things are being deployed, a lot of what AI is going to do is take advantage of humans, more so than software. Right? So from a product security attack perspective, it's there; it could probably find vulnerabilities we will have to respond to. But again, software doesn't get phished, right? It gets attacked. But when we're looking at it from the development of software, like we already talked about, you know, it's the code assistant piece that's going to be a challenge. But I actually look at some of the positive benefits. I don't want to just focus on the negatives. When I think about things like SAST tools. Right. Like, if you've ever run one and you scan a piece of software and it spits out a thousand findings, only 1 or 2 of which are actually valid. The rest are, and no offense to anybody, kind of garbage. Right? And I mean, everybody knows this. We use these tools and we accept that limitation. Yeah. But with generative AI, we could maybe use that to actually weed that stuff out for us and get more signal and less noise. And that's really exciting. So going all the way back to that AI-programmed confidence, through to its status as a dual-use technology. It can be good or bad. IT professionals could be hard pressed, both in hampering the efforts of bad actors and addressing newer attacks. 100%. What can be done about this? Figuring this out might be a little above my pay grade, but Jeff has a novel idea that is getting some groundswell from experts in his space: AI as a research assistant, helping humans identify and document risks and security trends. You know, things that mere mortals like us don't always have time or headspace to digest. We've already passed the point where individuals can keep up with all of this, but guess what? We've got AI. So AI could be doing constant research and summarizing the results for us and telling us, yeah, I just read about all these new vulnerabilities, all of these new attacks. What's the latest news that I need? I could have an AI agent that's going off and giving me a daily news summary of here's what you need to know, for instance. And then I can drill in and get more detail on some of the other things. So that's where AI can become a force multiplier for us, where we would otherwise be just drowning in information. If it's working well, it can hopefully surface for us what the most relevant things are that we need to know. So Vincent, how realistic is this? Like, is this something that you do? I don't, but I'm going to do it now. I mean, that sounds awesome, right? Like, I've used AI to do a lot of kind of research and summarization of very specific things, compliance-related boring stuff. Right.
But like getting the news of the day, or what's the hot topic out there, covering a multitude of different sites and whatnot? That sounds very compelling to me. And I hadn't actually thought about it before. And I'm going to look into it. Yeah, I think that's a great concept, actually. And as a whole, I kind of see AI as it exists currently as a very enthusiastic and prolific intern. And so I like to check the work, but especially when it comes to distilling down a lot of information, it's gonna pick up some things that maybe I wouldn't have seen otherwise, and it might be a little better at summarizing than, you know, me reading hundreds and hundreds of articles. Oh no, that's totally true. And I mean, selfishly, I do read a lot of security news on a daily basis. But if I told the AI to look for very specific sorts of things, then I would miss other things that are maybe interesting to me, that I didn't prompt the AI to go look for. Right. So I'm not quite sure that I would replace what I currently do today, but it would be really great to have that summarization of very specific things that I'm looking for. Exactly. It's an enhancing tool, not necessarily a replacement for what we're doing already. See, again, it can be very confidently wrong. And that's true. And I've seen that happen. And I was just like, wow, that's a really weird take for this. So I mean, I don't trust it 100% yet. Yeah, for sure, for sure. So do you have any of your own examples, just off the top of your head, of things we could do in the future where AI can help meet those challenges that it introduces by its own advancement? Oh, yeah, that's a good one. That SAST example is a good one, right? Because I think that a lot of these tools are going to be AI-infused. If we look around, everything is AI-infused these days. Which is great. Right? Because maybe it'll clean up its own mess. Right. We can look for AI to do better detection and reduce all of the noise. I think there are going to be places where, you know, can we use AI to detect code that's generated by AI? Mhm. You know, are there particular patterns that can be detected by AI to know that, hey, this chunk of code or this commit was generated by an AI agent of some sort versus a human being, right? Like, I think things like that would be helpful. Yeah. I think that's kind of the conundrum of the day: how do we tell what's AI or not? And if AI can help us, all the better. Oh yeah, and I imagine, looking at a different field, this is going to be very important for teachers. Yeah. Teachers are really going to want to know what's AI-generated and what's not. And I think that that's something that's going to be applicable to multiple disciplines, multiple fields. Certainly within the product security space. 100%. I think we're definitely on to something. And I think Jeff set us off in a really nice direction. I'm going to turn the tables a little bit. So who ultimately is responsible for AI security? Companies are racing to adopt AI, especially generative AI, to figure out ways it can address business challenges and make their business more agile. But the race to adoption can leave security as an afterthought. But why? We're smart, right? Which do you think kills more people in the US? Cows or sharks? Yeah. I mean, it's a trick question, but it's cows. Yeah. And that's not cows that we eat and get food poisoning from. That's cows causing trampling deaths and things like that. It's about 22 people a year.
But shark attacks, the things that we're so afraid of, that's about one person a year. So it's a 20-fold increase for cows. So if you look at the pure numbers, the risk is clearly with cows. Yet we have farmers that raise them and they're in proximity with them all the time, which is, by the way, why it's a greater risk as well. But the point is we are not great at... I mean, we don't have Cow Week. We have Shark Week, right? Now, if everyone listened to these statistics, now you look at a field of cows and they give you that blank stare. Now you know they have dark hearts, right? They really want to attack. So it's a different understanding of risk when you look at it that way. Now, I'm being facetious, but the point is we're not always great at assessing risk. Alas, it is we smart humans that are the problem. We're not very good at assessing risk. We do tend to run to adopt new things without necessarily thinking through the security. So, Vincent, very important question for you. When do you think we should go about airing Cow Week? Honestly, I didn't know those statistics. I think it's a great idea. I would watch it. Honestly, for real. I'm turning over a whole new leaf. I see a whole new side to cows. But seriously, how do IT professionals surface risk on busy teams under pressure to get products to market? Like, we need to make money too. How do we balance these things? It's super hard, because you can't take a one-size-fits-all approach. If you look at it, irrespective of AI, you have to look at risk tolerance and risk mitigation. Like, there are risks that you are willing to accept, and you have to know what those are; those have to be clearly understood, clearly documented, and then followed. Absolutely. And a major theme of the last episode, too, is all around balance. And I think we've got to balance those security needs with everything else that needs to keep the lights on and keep customers coming to our doors. Right. You're looking at it as security risk versus business risk, at the end of the day. I mean, it's like the perfectly secure computer: it generates no value, no revenue, doesn't pay your employees, because it's encased in concrete and powered off. Right. Exactly. It's a useless... it's a useless thing. And so we have to accept some risk. And I mean, okay, maybe we're not so good at it if we're worried about sharks instead of cows, but at the same time, we do risk assessments on a daily basis. Somewhere along the line, we got really scared by computers, more so than we did by cars. Mhm. And I don't know why. That's actually a really good point. I think cars kill a lot more people than computers. Although, I don't know, maybe that's a big philosophical topic. It probably is. Yes. Now we need Computer Week too. So we're filling up the calendar for sure. Yes. Yes. So how do you think we can get people to take risks seriously when AI is so new and intangible and shiny? I think it's an understanding of what the possible outcomes could be. And this is where you have to step through, and not to be a fear monger or anything like that, but, like, what's the worst that could happen? Right. And you actually have to have that honest conversation with yourself or with your company, your coworkers, whatever. You say, like, literally, what's the worst that could happen? And if the worst that could happen is truly terrible, then you have to kind of build in some safeguards around it or find a different path forward.
But I don't think it's an option to just not do it. Absolutely. No, I think you're right. And I think step one is learning about it. The more you know about AI and what's happening with it and what it can do, the more accurate that picture of the actual risks and worst-case scenarios will be. Yeah, you know, they say you fear what you don't know. But at the same time, we also don't fear what we don't know. And it's a really weird paradigm that you kind of have to navigate. And so, like, step number one is education. Absolutely, that's straightforward, I think. And a solid first step. So what actually fixes this problem we're talking about? Humans are both the cause and the solution, and it's going to take all of us. It's going to be a little bit of all of us in this. There's the cybersecurity defense side. So I think I, as someone who works in that space, need to try to inform people who are working on the product space: oh, by the way, the bad guys are doing this kind of stuff. They haven't done it to your product yet, but we've seen it happen with three others, so there's no reason to believe it can't happen to you as well. An example that just occurred to me is secrets management. One of the things that a lot of people do in programming is they might have API keys or cryptographic keys or passwords, and they just hardcode those things into the software. Terrible idea. Not only is it not flexible, but if somebody gets a hold of the code, they might be able to reverse engineer it and pull that stuff out. In fact, in some cases it ends up in source code, which then ends up out in a source code repository. And we've seen this happen before. So we need a better system for doing secrets management. So people on, you know, my side of that develop a system for secrets management where you can manage credentials in a secure way, and then have the code make calls in order to use that service, that vault service that keeps all the secrets, as opposed to writing those secrets directly into your code. We've got new generations of people who haven't... who don't bear the scars of those past learnings, and some of the old, old folks like me who've just gotten lazy and don't want to do the work, you know, because it's just easier to do it this way. So there's a lot of work yet to do, and it's going to require everybody. There are a lot of people involved in security, more than just, you know, people with security in their job title. And I agree, I think it's going to take all of us. So I'm going to go through kind of a list of the people we're thinking of who have a stake in this, and I want to talk through what each of those categories can do to help. So let's start in the world of the non-security technologists. That's going to be your developers, your operations people, your systems folks, and end users. What can they do to help? Well, I think first we need a Jeff bot, and Jeff is going to teach us all of these things that he knows. Or at least, you know, honestly, jokes aside, I think that something like that, tooling of some sort... all of the things that he described are things that we do today. Right? So can we have these bots, these tools, to prevent us from making these mistakes? Because we make them over and over and over again. Every person involved in that whole process is a surface where something can go wrong.
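To make Jeff's secrets management point concrete, here is a minimal sketch of the pattern he describes: the credential is resolved at runtime instead of being written into the source. The environment variable name is made up, and the vault call mentioned in the docstring stands in for whatever secrets service a team actually runs; it is not a real client API.

```python
import os

# Anti-pattern Jeff warns about: the credential ships with the source and
# every clone of the repository.
# PAYMENTS_API_KEY = "sk-live-abc123"   # hardcoded secret -- don't do this

def get_api_key() -> str:
    """Resolve the API key at runtime instead of baking it into the code.

    The environment-variable lookup works as written; in a real deployment
    the fallback would be a call to a vault/secrets service, e.g.
    client.get_secret("payments/api-key"), which is only sketched here.
    """
    key = os.environ.get("PAYMENTS_API_KEY")
    if key:
        return key
    raise RuntimeError(
        "PAYMENTS_API_KEY is not set and no secrets service is configured"
    )

if __name__ == "__main__":
    print("key available:", "PAYMENTS_API_KEY" in os.environ)
```

The code itself only ever references where the secret lives, so the repository, and anyone who clones it, never sees the credential.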
You know, we've talked about social engineering and how, you know, people are both the cause and the solution for some of these security things, so... Yes. If we can all kind of keep in mind that security should be a part of every conversation, I think that helps. Right. Raise it everywhere, is what they say. Like, security is everybody's job. It's not just my job being in product security. It's not just the information security team's job. It's everybody's job, right? This is where we get these mantras of shift security left. Like, you don't necessarily have security teams at the beginning designing your product, but the involvement is there. And we can have some of these tools do this work for us. Like, one thought that I had for these non-security technologists is, you know, the practice of peer reviews. Yeah. Right. Like, why couldn't you have generative AI be the peer reviewer? Or conversely, if you're going to have generative AI generate code, a human being has to be that peer reviewer, or another, different AI agent. Exactly. More brains are better than one, even if they're, you know, computer-based. Yeah. I mean, I shouldn't say I don't trust myself, but if I'm tired or I'm busy or I'm distracted, I'm going to make mistakes. And if I just push those things through without any sort of guardrails or safeguards or anything like that, those mistakes... I mean, nothing I do would be catastrophic, probably, but it would be inconvenient or bad. Right. Exactly. And I think that that also is where the processes come in, and those security policies; taking out some of the guesswork and having to think through every aspect of it in every step, you can replace some of those with guardrails that are kind of consistent and built in to catch those mistakes before they cause any issues. Sure. And unlike a person, who can decide whether or not they want to follow a policy, perhaps a generative AI is told you must follow this policy, and therefore it does. Exactly. The gatekeepers of things going through. You really can't sweet talk or bribe a computer quite as easily as a person. Not yet. Wait... well, maybe, I don't know... Oh yes, you can trick it. Bribe it, I'm not sure yet, but trick it, absolutely. Exactly, exactly. So, new category of person. How about information security personnel? Oh, I think generative AI is going to be a huge boon for that. If you look at things like log aggregation, log analysis... I mean, you know, like the old A-Team show: "I pity the fool." Like, I pity the person who actually has to go through and do this stuff. And I know that there are tools that help, and they're, you know, regular expression-based or whatever, but having generative AI do that and maybe contextualize it, I think, would be very helpful and would probably help us to detect some of these signals buried in all the noise of all these aggregated logs, and to be able to maybe detect those attacks earlier. And I think that's exciting. Yeah, for sure. I saw a scary statistic in a presentation like a year ago, so I'm sure it's maybe even worse than it was then: only about 5% of alerts from security systems actually get looked at. I hadn't seen that statistic, but it wouldn't surprise me. Very much *citation needed. It was a while ago, but the gist of it was, there's so much noise out there that only a fraction of it, like the most important, the most scary things, really get looked at.
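As a rough sketch of what "more signal, less noise" could look like for that flood of alerts, here is a minimal, hypothetical triage helper. call_llm is a placeholder for whatever model a team actually uses (nothing here names a real API), and a production system would chunk and enrich the alerts rather than simply truncate them.

```python
from typing import List

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (local model, hosted endpoint, etc.)."""
    raise NotImplementedError("wire this up to your LLM of choice")

def triage_alerts(alerts: List[str], max_chars: int = 8000) -> str:
    """Ask the model to rank a batch of aggregated alerts by likely urgency.

    This is the "force multiplier" idea from the episode: the analyst still
    makes the call, but reads a short ranked summary instead of the raw feed.
    """
    batch = "\n".join(alerts)[:max_chars]  # crude truncation; real pipelines chunk properly
    prompt = (
        "You are assisting a security analyst. Rank the following alerts from "
        "most to least urgent, group obvious duplicates, and flag anything that "
        "looks like an early indicator of a larger attack:\n\n" + batch
    )
    return call_llm(prompt)
```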
AI might help cut through some of that noise and raise things that might otherwise not be seen. Or maybe one correction: the things that look scary. Yes. Right. Because we don't know; there could be those early indicators that aren't scary until the big thing that is scary, and then it takes somebody a while to find that. Or even if they find it quickly, the attackers have already pivoted and moved on. Exactly. Right. So if I look at just the volume of logs, and keep in mind a lot of regulation nowadays is demanding that everything log stuff, and it has to log all of these certain types of events, and we have all of these systems. I mean, going from bare metal machines to a containerized environment, now you've got thousands of things generating logs. It's a massive amount of data to sift through, and you need something that can do that accurately and at speed and pick up those early warning signs. Exactly. Tailor made. Tailor made. All right, another one. How about data security personnel? That one I'm not actually 100% sure of, because if I'm looking at, like, data security, it's probably going to be along the same lines of, like, data exfiltration and how to detect it. But that's probably something that an information security person would be observing as, you know, traffic is flowing through your network. Yeah. So that one I'm actually not 100% sure what generative AI specifically for data security might assist with. I gotcha. So I have a thought on that, and it's kind of in the shift left realm of things. And I won't spend too much time on it because I'm pretty sure we're going to talk about it a lot more in a little bit. But when it comes to, like, the principle of least data, where you want to collect the least amount of data that you possibly can in order to complete the tasks that you need to do. I could see it being helpful in, you know, helping to identify what pieces of information maybe you do or don't need, or which ones might be dangerous, or in a place of vulnerability where they're stored. That's kind of the closest I can think of. No, actually, you're right. And you actually triggered a thought. It is more around the generative AI accessing data. Yeah. So you're not looking at, say, a chat bot that's out there like ChatGPT or Gemini or whatever. You set up an internal LLM. It has RAG. You're augmenting it with your own data because you want it to give you, you know, insights on your own customers or something like that. How do you protect that? Like, here's another consideration. How do you protect that data that this LLM is referencing, especially if maybe this bot or whatever is used by customers? Yeah. And how are you not leaking one customer's data to another? Exactly. That would be a challenge. That would be a challenge. And that is definitely something you don't want to have happen. No. All right. One more category, I think, before we start wrapping up here. And this one is all your domain, so I'm interested in what you think. How about product security folks? Oh, a ton of applications there. So like I said, the test results, the security testing results. Right. Let's find a way to filter through that noise. Give me more signal. I don't have enough time, I don't have enough people, I don't have enough desire to go filter through noise. Aside from the security testing pieces, and the code generation pieces, which are also a concern, I think just better assessments of, say, emerging vulnerabilities.
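An aside on the multi-tenant question raised a moment ago, how an internal RAG setup avoids leaking one customer's data to another: the usual guardrail is to scope every retrieval to the requesting tenant before anything reaches the model. This is a toy sketch with an in-memory list and keyword scoring; a real system would apply the same tenant filter inside the vector store query, and all names here are invented.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Document:
    tenant_id: str
    text: str

def retrieve(index: List[Document], tenant_id: str, query: str, k: int = 3) -> List[str]:
    """Return candidate passages for a RAG prompt, restricted to one tenant.

    The filter runs before any relevance scoring, so documents belonging to
    other customers are never even candidates for the prompt. The scoring is
    a toy keyword overlap; embeddings would replace it in practice, but the
    tenant filter is the part that answers the leakage question.
    """
    candidates = [d for d in index if d.tenant_id == tenant_id]
    ranked = sorted(
        candidates,
        key=lambda d: len(set(query.lower().split()) & set(d.text.lower().split())),
        reverse=True,
    )
    return [d.text for d in ranked[:k]]

if __name__ == "__main__":
    index = [
        Document("acme", "ACME renewal is due in March."),
        Document("globex", "Globex contract includes a discount clause."),
    ]
    # A query on behalf of ACME can never surface the Globex document.
    print(retrieve(index, "acme", "when is the renewal due"))
```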
A new vulnerability comes out, and currently, today, I have a person who's reading it, trying to understand the vulnerability, where it's applicable, what products are affected, all of these sorts of things. If I can have AI assist with this, maybe not do it entirely, and certainly not end-to-end without a human in the loop, but if I can make that faster and more accurate, that's a huge win for me. Okay, so I think the main idea that we're playing with here is people, process, and technology. People are always going to be the weakest link in our cyber defense. What can we do about that? You're not wrong. And it's something that I unfortunately have to repeat often, because in my line of work, when you're looking at it from a product security perspective, everyone wants to blame the vulnerabilities in software. But if you go look at, you know, these data breach incident response reports, right? You're looking at, depending on the year, between 5 and 15% of breaches being caused by exploited software. Mhm. The other 85 to 95% is the human element. Yeah. And can we use AI to reduce some of this? Like, there are things that, my goodness, we could have been doing for ten years and we haven't done yet. Why not? Can we do it now, please? Absolutely. I think that's the first step. And I think we all have a hand in figuring out what we can do with it. I think so. There's the creativity and the action from everybody. Yeah. No, I am not asking you, Emily, to do all the work. But I'll do my fair share. But collectively, like we were talking about, this is an issue that all of us can participate in. It affects all of us, and we can all be part of the solution. Nailed it. I think that's exactly right. You've heard all of our thoughts about AI security and threats. Now we want to hear a little bit from all of you. Let us know what you think. You can hit us up on social media at Red Hat and use the hashtag #CompilerPodcast. And other than that, I think that does it for this episode of Compiler. This episode was written by Kim Huang. And a big thank you to our guest, Jeff Crume. Compiler is produced by the team at Red Hat with technical support from Dialect. And if you like today's episode, don't keep it to yourself: follow the show, rate it, leave a review, or share it with someone you know. And we'll see you next time.

About the show

Compiler

Do you want to stay on top of tech, but find you’re short on time? Compiler presents perspectives, topics, and insights from the industry—free from jargon and judgment. We want to discover where technology is headed beyond the headlines, and create a place for new IT professionals to learn, grow, and thrive. If you are enjoying the show, let us know, and use #CompilerPodcast to share our episodes.