Robot as Threat

Command Line Heroes Team | Tech history



About the episode

When a robot goes bad, who is responsible? It’s not always clear if the user or the manufacturer is liable when a robot leaves the lot. Human behavior can be complex—and often contradictory. Asking machines to interpret that behavior is quite the task. Will it one day be possible for a robot to have its own sense of right and wrong? And barring robots acting of their own accord, whose job is it to make sure their actions can’t be hijacked?

AJung Moon explains the ethical ramifications of robot AI. Ryan Gariepy talks about the levels of responsibility in robotic manufacturing. Stefanie Tellex highlights security vulnerabilities (and scares us, just a little). Brian Gerkey of Open Robotics discusses reaching the high bar of safety needed to deploy robots. And Brian Christian explores the multi-disciplinary ways humans can impart behavior norms to robots.

Command Line Heroes Team | Red Hat original show

Subscribe

Subscribe here:

Listen on Apple Podcasts | Listen on Spotify | Subscribe via RSS Feed

Transcript

Are you still struggling to keep those pesky pieces of paper together? No more, my friend. Introducing the Paperclip Maximizer Bot 3000, a robot whose sole purpose is to produce as many paperclips as possible. The future of office supplies has never been so bright.

We're interrupting this broadcast to bring you updates on the catastrophe playing out downtown. It looks like the Paperclip Maximizer has torn apart most of the city's buildings. It's repurposing them into piles of … well … paperclips. I'm told the company's founders ... [voice trails off]

When it comes to robots, even the most innocent of intentions can go awry. They obey the letter of the law, but not the spirit. A Roomba might try to vacuum up your cat, for example. Making sure robots don't cause harm has become a crucial field of research, and figuring out who is responsible for what as robots become more a part of our lives is more difficult than you might imagine. When a machine has some measure of autonomy, like a lot of robots do, is the manufacturer responsible for its actions? Is the user? Could a robot be held responsible?

I'm Saron Yitbarek and this is Command Line Heroes, an original podcast from Red Hat. All season, we've been tracking the fast-evolving field of robotics. And this time we're asking, what happens when good robots go bad? Who's responsible for their actions, and who do we blame when a Paperclip Maximizer Bot 3000 decides to destroy the city? We'll come back to that disaster scenario, an interesting thought experiment by philosopher Nick Bostrom. But first we need to grapple with some immediate worries, because questions about robotic responsibility are already here—and the stakes are high.

Paperclips… More paperclips.

So, who is responsible when robots do harmful things? If I cut my hand while preparing dinner, I'm not going to blame the company that made my knife. But robots are different. Sometimes robots have a degree of autonomy. Sometimes their inherent wiring controls their decisions—and that means responsibility in the world of robotics is a lot more confusing. Our search for robot responsibility begins with the folks who make them, the manufacturer. What responsibility might they bear even after the robot's been sold?

No one's really teaching me what I'm supposed to do, what I can or cannot build using the powers I have.

AJung Moon is an assistant professor at McGill University. She studies the ethical consequences of AI and robotics. Moon says her students are engaged by these questions in a way that previous generations might not have been. They're pushing to understand the multifaceted responsibilities of this field, and the role manufacturers play when designing robots is especially blurry.

They don't have a lot of legal responsibilities to make sure that everything that they put out the door is used for ethical reasons and purposes.

Fact is, it's almost impossible to know how somebody will use a robot once it's sold. People constantly come up with innovative ways to use robots. The New York Police Department, for example, purchased robotic dogs and repurposed them to help with police work. They didn't exactly weaponize their robots, but they did use them on patrols and in some perceived dangerous situations. That got a lot of people anxious. New uses for technologies are often positive. They can move things forward. But manufacturers that are wary of unplanned uses could revisit their user agreements.
The fact that these machines can make certain "decisions" or behave in a particular way in contexts that the designer hasn't necessarily hard-coded into the system or has thought through fully—that allows for a little bit of uncertainty to be built into how users interact with the system.

A user agreement might include a promise that you won't use the robot to harm a human, or won't allow the robot to be easily hacked. Both those things are easier said than done. And the more powerful the robot, the more specific that contract needs to get.

For example, back in 2014, ClearPath Robotics released a statement saying we are building these field robots that can be used underwater, above ground, and so forth. And it has a lot of military use and it continues to have military clients. But they've recognized that retrofitting these systems to become "killer" robots or robots that are weaponized is not good for society. So they've essentially made it their responsibility to communicate that their clientele will not be using the technologies for those particular killer-robot purposes.

Moon has recently been purchasing robots for her lab, so she's seen a few user agreements lately. ClearPath's contractual language about ethical boundaries was some of the most direct out there. Why the use of such strong language for their robots?

I'm Ryan Gariepy, and I am the CTO and co-founder of ClearPath Robotics and OTTO Motors.

ClearPath created the Husky robot, an all-terrain four-wheeler about the size of a dog. They've made other animal-named robots too: the Jackal, the Grizzly, the Moose. And because ClearPath's products are so heavy-duty, they can't just sell them and walk away. They need to think seriously about how they might be used.

We don't want to contribute to the proliferation of autonomous weapons systems. That's not good for anyone, and I think it's pretty clear on the research front that the proliferation of these weapons is actually bad for everybody involved. So we made a statement internally and publicly that we are not going to knowingly sell to anybody who's going to use our technology to make autonomous weapons.

ClearPath's clients include several militaries, but they draw a line. They won't sell to anyone who wants to attach weapons to their robots. This is one way manufacturers are taking responsibility for their creations. But policing the use of technology once it's been sold is complicated.

We can't really follow each one of these robots around forever, right? So we do our due diligence, we ask the customers what they're going to do with it. If they tell us the truth, then we can make an informed decision. If they don't, then we can't really do anything about that, because we're not going to send somebody to follow the robot around for the rest of its life.

Once a robot has been sold, the user becomes a big part of the responsibility equation. So, what obligations do users have?

From a user perspective, you need to follow the law, you need to be responsible about what you're doing, and you need to understand that these things, while they can be quite capable, are not infallible.

That last point about robots being fallible is a big one. Users need to understand the limits of their robots. They can't expect robots to always make perfect decisions, especially in complex or unexpected situations. But what happens when a robot isn't just making poor decisions because of its limitations, but because someone has hacked into it?
Then we've got a whole different level of responsibility to think about.

I'm Stefanie Tellex, and I'm an associate professor of computer science at Brown University. And I work on robotics and AI.

Tellex recently made headlines for exposing security vulnerabilities in robots around the world. She and her team scanned the internet for robots that were connected to the web and found that many of them were completely unsecured.

So we found about 100 different types of robots in 19 different countries. And these included things like industrial robots, service robots, robots in homes, telepresence robots, and even some robots that were designed for military and law enforcement applications.

None of these robots had any security protection. Anyone with basic technical knowledge could take control of them remotely. Tellex could have made a factory robot move in dangerous ways, or turned on cameras and microphones in private homes.

We could have caused the robots to move in ways that could injure people or damage property. We could have accessed audio and video feeds. We could have stolen data from these robots. But of course, we didn't do any of those things, because that would be unethical and illegal.

Instead, Tellex's team responsibly disclosed their findings to the robot manufacturers and users. But the fact that so many robots were vulnerable was shocking. It revealed a widespread lack of understanding about robot security.

I think a lot of people are thinking about robots as just physical devices, and they're not thinking about the fact that they're connected to the internet, and that makes them vulnerable to the same kinds of cyberattacks that we see with other internet-connected devices.

The security issue becomes even more serious when you consider that robots aren't just collecting data like other internet-connected devices. They're also capable of taking physical actions in the real world. The difference between a robot and, say, a smart light bulb is that if someone hacks your smart light bulb, the worst they can do is turn your lights on and off. But if someone hacks your robot, they could potentially cause physical harm. So who's responsible for robot security? Is it the manufacturer's job to build secure robots? Is it the user's job to set them up securely? Tellex thinks it's both.

I think manufacturers have a responsibility to build security into their robots from the ground up. They shouldn't just bolt it on as an afterthought. But users also have a responsibility to follow best practices when they deploy these robots.

The problem is that many users don't know what those best practices are. And many manufacturers are still learning how to build secure robots. It's a field that's moving so fast that security often gets left behind. We need to do a better job of educating both manufacturers and users about robot security. And we need to develop better tools and standards for securing robots.

One organization that's working on these challenges is Open Robotics, the group behind the Robot Operating System, or ROS. Brian Gerkey is the CEO and co-founder.

I'm Brian Gerkey, CEO and co-founder of Open Robotics. We're the organization that stewards the open source Robot Operating System, ROS.

ROS is used by thousands of robots around the world, including many of the ones that Tellex found to be vulnerable. Gerkey knows that this creates a big responsibility for his organization. ROS was originally designed for research environments where security wasn't the primary concern. Researchers were more focused on getting their robots to work than on securing them against attacks. But as ROS has been adopted more widely, security has become a much bigger issue.
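To see why an exposed robot is such an inviting target, it helps to know that the classic ROS 1 master answers unauthenticated XML-RPC requests on a well-known TCP port (11311 by default). The sketch below is a minimal Python illustration of the kind of check an operator might run against their own robot to find out whether that interface is reachable; the hostname is a placeholder, and this is an outline of the idea rather than a vetted audit tool.

```python
# Minimal sketch (not a vetted audit tool): check whether a ROS 1 master
# is reachable and, if so, report how much of the robot it exposes.
# Audit only robots you own; "robot.example.org" is a placeholder.
import socket
import xmlrpc.client

HOST = "robot.example.org"  # placeholder hostname for your own robot
PORT = 11311                # default ROS 1 master port


def probe_ros_master(host: str, port: int = PORT) -> None:
    # Step 1: is anything listening on the master port at all?
    try:
        with socket.create_connection((host, port), timeout=3):
            pass
    except OSError:
        print(f"{host}:{port} is not reachable from here -- good.")
        return

    # Step 2: the ROS 1 master speaks unauthenticated XML-RPC, so anyone
    # who can reach the port can ask it what the robot is running.
    master = xmlrpc.client.ServerProxy(f"http://{host}:{port}")
    code, status, state = master.getSystemState("/security_audit")
    if code == 1:
        publishers, subscribers, services = state
        print(f"Exposed ROS master at {host}:{port}:")
        print(f"  {len(publishers)} published topics, "
              f"{len(subscribers)} subscribed topics, "
              f"{len(services)} services visible to anyone on this network.")
    else:
        print(f"Master answered but returned an error: {status}")


if __name__ == "__main__":
    probe_ros_master(HOST)
```

ROS 1 has no built-in authentication, so a master that answers this call will also accept new publishers and subscribers from strangers, which is exactly the kind of exposure Tellex's team reported.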
Open Robotics has been working to address these security concerns in ROS 2, the newer version of the system.

ROS 2 was designed with security in mind from the beginning. We've implemented encryption, authentication, and access control. We've also worked with security researchers to identify and fix vulnerabilities.

But even with these improvements, Gerkey emphasizes that robot security is an ongoing challenge that requires collaboration between manufacturers, users, and the broader robotics community.

Security is not a one-time thing. It's an ongoing process. We need to keep updating our systems, keep learning about new threats, and keep improving our defenses.

The security vulnerabilities that Tellex discovered highlight a crucial point about robot responsibility: it's not just about preventing robots from doing bad things on their own. It's also about preventing bad actors from taking control of robots and making them do harmful things. We need to think about robots not just as autonomous agents, but as potential tools for malicious actors. That changes the whole responsibility equation.

With ROS 2, you can actually enable security at the application level. We would still advise you not to expose your ROS 2-based robot directly to the internet. I mean, frankly, that's bad practice for any device anywhere. Basically, no device should be directly exposed to the internet without incredibly high levels of security applied, which are generally not applied to most devices.
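The application-level security Gerkey mentions is opt-in in ROS 2: the SROS2 tools generate a keystore of certificates, and the middleware only enforces authentication and encryption when the security environment variables are set. As a rough sketch of what that looks like in practice (the variable names and the ros2 security create_keystore command come from the SROS2 documentation, but details differ between ROS 2 releases, and the keystore path and enclave name below are examples only), a small Python launcher could refuse to start a node unless security is switched on:

```python
# Sketch: refuse to launch a ROS 2 node unless SROS2 security is enabled.
# Environment variable names follow the SROS2 documentation, but exact
# behavior depends on the ROS 2 release and DDS vendor; the keystore path
# and enclave name here are hypothetical examples.
import os
import subprocess
import sys

KEYSTORE = os.path.expanduser("~/sros2_keystore")  # e.g. created with: ros2 security create_keystore
ENCLAVE = "/my_robot/base_driver"                  # hypothetical enclave name


def launch_secured(cmd: list) -> int:
    if not os.path.isdir(KEYSTORE):
        sys.exit("No keystore found -- create one with `ros2 security create_keystore` first.")

    env = os.environ.copy()
    env["ROS_SECURITY_KEYSTORE"] = KEYSTORE   # where certificates and permissions live
    env["ROS_SECURITY_ENABLE"] = "true"       # turn authentication and encryption on
    env["ROS_SECURITY_STRATEGY"] = "Enforce"  # fail closed if security can't be set up

    # Pass the enclave name so the node loads its own keys and permissions.
    return subprocess.call(cmd + ["--ros-args", "--enclave", ENCLAVE], env=env)


if __name__ == "__main__":
    # Example: python launch_secured.py ros2 run demo_nodes_py talker
    sys.exit(launch_secured(sys.argv[1:] or ["ros2", "run", "demo_nodes_py", "talker"]))
```

Even with all of that switched on, Gerkey's broader advice still applies: don't put the robot directly on the public internet.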
Gerkey is aware of Stefanie Tellex's work exposing those vulnerable ROS users, and he's not totally surprised by it.

Back in the day when we had the PR2 robots running around, one of our interns showed up to the office one day—I think either very early or on a weekend, and didn't have his key with him. And he was a very creative guy, and he got on his laptop, connected to the Wi-Fi network in the building, which required credentials to access it. It wasn't open to the public. But then used that connection to get onto one of the robots and then drive the robot over to the door and push the door from the inside and let him in. So it was actually a robot-mediated break-in.

That example, of course, is a bit like somebody taking over your Roomba. Not too big a worry. But what if the same kind of hack allowed someone to take over a car or a piece of heavy machinery at a factory? Suddenly you've got a major problem. We've all heard stories about hackers getting into IoT devices, but the difference with internet-connected robots is that they're moving. They're manipulating the world—and that instantly becomes more serious.

There's certainly a higher bar whenever you deploy something out in the world.

And to meet that higher bar, we're going to explore one final level of responsibility. We've already looked at the manufacturer's role and the user's role. But what about the robot itself? Can we hold robots responsible for their own actions?

Science fiction authors have spun up disastrous robot scenarios ever since the word robot was first coined. There have been fantasies of Skynet and a laundry list of robot rebellions. But it was a philosopher called Nick Bostrom who described a subtler and maybe more likely disaster. Bostrom suggested that a superintelligent, goal-oriented robot would consume everything in sight, including the planet's resources, in order to accomplish its one goal.

It goes about turning the entire planet into paperclips, including all of the human beings and everything that we hold dear.

Brian Christian is telling us about this thought experiment. He's an author and researcher at UC Berkeley and has become a leading thinker on the future of tech and how it impacts humanity.

It offers us a different vision than the Skynet Terminator vision. In this case, it is not necessarily that the AI takes on a set of goals that are contrary to our own, but rather it's trying in earnest, in good faith, to do exactly what we asked it to do.

Hmm. So how do we avoid getting turned into paperclips? It's a harder problem than you might think. If we give a machine a goal and then let it run on its own, we've got to be absolutely positive that goal doesn't interfere with other goals, like human safety. In other words, how do we give robots a sense of responsibility toward not just a narrow goal, but the larger interest of humanity? How do we make sure that our robots are pursuing the things we truly desire? We might try the machine learning route, giving robots fewer explicit instructions, letting them learn through countless examples all on their own. But that has its problems, too.

It turns out to be extremely difficult to make sure that the system is learning exactly the thing that you have in mind when you taught it and not something else.

So what's the solution? Christian suggests that we could avoid the Paperclip Maximizer robot by moving our thinking beyond the single-mindedness of corporate goals and thinking about the goals of multiple disciplines at once.

I think it's really going to take a pretty holistic approach to solving this problem. There has been an increasing awareness dawning on people in the computer science and machine learning community that they really need to address these problems in dialogue with people in other fields—people who have disciplinary expertise, whether it's doctors in the medical context or people with criminal justice expertise. And a lot of these problems, I think, exist in the boundaries between those disciplines.

Aligning a robot's goals not just with the goals of those who created the robot, but also with the goals of people who actually interact with that robot, people whose lives are touched by that robot, is one level of work. And then, in addition, we may need to move beyond simply shoveling millions of data points at our robots. We may need to find some entirely new way to help robots learn what really matters to human life.

There's been a lot of work by the theoretical computer science community around whether we can avoid having to translate all of the things that we want into this numerical form. Might there be other ways to impart our norms, our values, our desires into a system?

Can we, in other words, give robots a sense of right and wrong instead of uploading every single example of what is considered right and wrong? It's a problem as sprawling as the field of robotics itself.
But that's the next level of robot responsibility—translating our real, complex, messy values and desires. And it means that in addition to making manufacturers and users responsible for robots' behavior, we need to start giving robots a sense of responsibility that's all their own.

You might've noticed that in this episode, there are a lot of stakeholders. You've got the manufacturer, you've got the user, you've got the robot itself, and that's the point, really. Designing a robot future that works for everybody means bringing everybody to the table. The more that robots move through our lives, the higher the stakes get. We're forced to think about who gets to offer input and who gets to help design our robotic future.

Next time, it's our season finale. We're looking at the robot revolution that's been rolling toward us for over a century: the self-driving car.

I'm Saron Yitbarek, and this is Command Line Heroes, an original podcast from Red Hat. Keep on coding.

About the show

Command Line Heroes

During its run from 2018 to 2022, Command Line Heroes shared the epic true stories of developers, programmers, hackers, geeks, and open source rebels, and how they revolutionized the technology landscape. Relive our journey through tech history, and use #CommandLinePod to share your favorite episodes.