A.I. Revolutionaries

The People Behind OpenAI

Origins: A.I. + Open Source

The Right Side of the Robots

Possible Futures


The People Behind OpenAI

A.I. Revolutionaries | Part I

You might think, based on the type of research they're doing, that the OpenAI office would be full of gadgets, full of wonder, full of weird experiments.

But you'd be wrong.

There are no Faraday cages. No supercomputers. No giant robots.

Well, okay, there is a robot. But it's small. And it's tucked away in a side room. It's surrounded by cobbled-together protective material so that it doesn't smash into itself if it starts flailing about due to a programming error. As Jack Clark, OpenAI's strategy and communications director, phrases it: "This room is much more tool-sheddy and hacky than you'd expect AI to feel like."

Jack Clark

OpenAI is basically just a lot of desks, laptops, and bean bag chairs. On its surface—minus the robot—it feels like any other tech startup.

And it functions like one, too.

"We do our weekly meetings on Tuesday," Clark says, standing in front of an open area with a few dozen chairs haphazardly strewn about. There's a whiteboard in the corner and a large TV at the front. In these meetings, people stand up and update everyone on their work, whether it's a research breakthrough or details on a new piece of software from engineering.

This space is also used for a daily reading group.

"We have such a broad spread of expertise here—the people who work on robots, the generative adversarial people—all of them come together to soak up different ideas," Clark says.

When you hear about the work people are doing here, you realize there are incredible things happening in this place. Things that have the potential to change the way we use and think about technology, the way the world conducts itself day to day, and the way we think about the nature of intelligence beyond humans.

But before going any further, you need to know about a dinner that happened in August 2015.

A Dinner. And a big conversation.

This dinner took place at a restaurant in Menlo Park, California, just outside of Palo Alto.

"We'd each come to the dinner with our own ideas," Greg Brockman—the co-founder of OpenAI—writes in a blog post.

Brockman, who'd previously been the chief technology officer for the online payment platform Stripe, was becoming increasingly interested in AI—a field in which he saw great promise, but knew little about.

Then a friend set up a meeting between Brockman and tech entrepreneur/Y Combinator (YC) president Sam Altman. They talked about Brockman's emerging interest in AI.

Altman told him, "We've been thinking about spinning up an AI lab through YC. We should keep in touch."

A few months later, Altman invited Brockman to the dinner.

The other guests included Ilya Sutskever—a research scientist on the Google Brain team—and Elon Musk, among others.

During the meal, the conversation quickly turned to AI.

"Elon and Sam had a crisp vision of building safe AI in a project dedicated to benefiting humanity," Brockman recalls.

The two then floated an idea that went against the current mode of AI development at big tech companies. Instead of intensively training algorithms behind closed doors, they wanted to build AI and share its benefits as widely and as evenly as possible.


"The conversation centered around what kind of organization could best work to ensure that AI was beneficial," Brockman writes.

They decided that it would need to be a nonprofit because only then could they prioritize a good outcome for all instead of their own self-interest.

Shortly after the dinner, OpenAI was born, with Brockman and Sutskever at the helm.

Brockman would focus on building the team and getting the culture right. Sutskever would focus on their research agenda. In a short period, they would raise more than $1 billion in funding.

And, one by one, they'd start hiring their team.

Over the next several months, they managed to attract some of the top AI researchers in the country, luring them away from major tech companies and academic institutions with the promise of competitive salaries and freedom from business requirements.

For many of these researchers, it was the best of all worlds, combining the freedom of academia with the backing of a well-funded tech company.

For many of these researchers, it was the best of all worlds, combining the freedom of academia with the backing of a well-funded tech company.

They could focus on what was best for AI.

Like most of OpenAI's researchers and engineers, Vicki Cheung found the proposition intriguing. It was her chance to do what she always wanted to do, the thing that she couldn't quite pull off at other places where she'd worked: Build technology with a big social impact without worrying about whether it made business sense.

If you ask her how she ended up at OpenAI, though, Cheung will start off by telling you how she cheated in her high school physics class back in Hong Kong.

From bots to infrastructure

"We had these online assignments," Cheung says, "and I just didn't want to do them."

So Cheung did what any future software engineer would do: She wrote a bot that filled in the answers for her. And then she shared it with all of her classmates.

To her, the logic was simple. If you can automate something—even if that something is schoolwork and you could get in serious trouble for cheating—you don't need to waste your time doing it yourself.

Cheung is very matter-of-fact when she tells this story, as if it were the quizzes' fault she wrote the program so easily.

Vicki Cheung

"I don't think most high school online assignments were that sophisticated, " she says. "They put a lot of their answers and equations on the page, so it was really easy to crawl."

In other words, if you design an exploitable system, you must know that someone is eventually going to exploit it, right?

The teacher eventually caught on and, without ever confronting Cheung, put an end to it. The new law of the land: No more online assignments.

"It was a win for everyone," Cheung says without the slightest sense of vindication. Because what's better than automating a task? Getting rid of the task altogether.

Cheung was clearly smart. And she clearly needed a better outlet for her talent. So during high school, she started doing low-level engineering work for a professor at the University of Hong Kong.

She also went to a summer camp at Carnegie Mellon University during her final year of high school. One of her computer science professors there was so blown away by her talent that he asked her to apply for admission a week before the fall semester started. She was accepted, and moved to the States.

Eventually, after graduating from Carnegie Mellon, she found her way into the tech industry, becoming a founding engineer at Duolingo, the free language-learning company.

Throughout all of it—from creating quiz-taking bots to becoming the founding engineer of a leading online language training company—Cheung has stuck to her belief that technology should benefit other people, that it should have a positive impact on society, and that it should be shared.

So, when she heard about OpenAI and its mission, Cheung started contacting people who could get her in touch with Greg. And it worked.

In their meeting, Greg explained to Cheung his vision for OpenAI and the type of team he wanted to build. She was immediately on board. In her words, it was "the right problem at the right time." Cheung would become one of the first engineers at OpenAI.

Along with Brockman, she would build the infrastructure needed to do state-of-the-art AI research.

The challenge was that neither Cheung nor Brockman knew exactly what the researchers would be doing.

Cheung explains, "We knew that researchers would need somewhere to run their experiments. But we didn't know what kind of stuff they were going to run."

It was like designing a city grid without knowing the size of a car, or what normal traffic patterns look like. She and Brockman were, more or less, working in the dark.

But they continued to build.

They studied architectures. They spent a lot of time working with researchers, trying to understand how they preferred to work. And eventually, they put the core infrastructure in place for researchers to run thousands of experiments.

When the day finally came to start running those experiments and documenting their results, Cheung and Brockman were amazed to see that the infrastructure held up better than they'd expected.

Out of the entire infrastructure, only a few things were scrapped.

Then one research team started talking about creating a rather ambitious project. It was big and complicated. And it would push OpenAI's infrastructure to its limits.

It was Universe.


Before going into exactly what Universe is, you should probably know some basic AI terminology.

There's the general, all-encompassing term, artificial intelligence (which we won't go into). Then there's machine learning, a subset of AI. And then there's deep learning, a subset of machine learning.

Machine learning is the practice of teaching machines to perform a certain task, rather than coding them to do it—for example, teaching a machine to recognize a photo without writing a line of code that commands it to.

Machine learning is a little like our standard way of teaching math, for instance: A teacher shows a student how to solve a problem in class, and the student applies that lesson to other problems.
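To make that concrete, here's a toy sketch (ours, not OpenAI's) of the difference between coding a rule and learning one: the program below is never told that small numbers belong to class A and big ones to class B. It infers that from labeled examples alone.

```python
# A bare-bones illustration of "teaching rather than coding" (a toy
# example): the rule is never written down; it's learned from data.

def train(examples):
    """Learn a simple model from (value, label) pairs."""
    by_label = {}
    for value, label in examples:
        by_label.setdefault(label, []).append(value)
    # The learned "model" is just each class's average value.
    return {label: sum(vs) / len(vs) for label, vs in by_label.items()}

def classify(model, value):
    # Predict the class whose learned average is closest to the value.
    return min(model, key=lambda label: abs(model[label] - value))

model = train([(1, "A"), (2, "A"), (3, "A"), (10, "B"), (11, "B"), (12, "B")])
print(classify(model, 2.5), classify(model, 9))  # A B
```

Real systems learn far richer models than an average, but the shape is the same: examples go in, a decision rule comes out.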

Deep learning is a form of machine learning that uses layered neural networks, in which the machine teaches itself to perform a task through repeated exposure to massive amounts of data.

deep learning diagram

Sticking with the school analogy: Deep learning is akin to that student teaching herself to solve the problem by tackling it over and over and over and over.

Along with these subsets, you also have an approach that's called "reinforcement learning."

Applied to deep learning, this approach focuses on giving the machine a reward for successfully teaching itself a set of tasks. It's a little like offering the kid who's teaching herself math an ice cream cone if she succeeds at solving her problems.

But, for that analogy to truly hold up, the kid would never sleep, eat, go to the bathroom, text her friends, or get bored. She would spend all her time learning math. And, in one training session, she would be given data sets equal to the amount of information a normal child learns over the course of several months, if not years.

So there are a few limitations.
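For readers who want to see the idea in code, here is a minimal sketch (a toy of ours, not OpenAI's software) of the act-reward-update loop at the heart of reinforcement learning: an agent in a five-cell corridor learns, from a single reward at the far end, that it should always step right.

```python
import random

# A toy reinforcement-learning sketch (illustrative only): the agent
# earns a reward—the "ice cream cone"—only when it reaches cell 4.
random.seed(0)

N_STATES = 5          # cells 0..4; the reward sits in cell 4
ACTIONS = [-1, +1]    # step left or right
# Optimistic initial values nudge the agent to try every action at least once.
q = {(s, a): 1.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit the best-known action; occasionally explore.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward the reward plus the
        # discounted value of the best action from the next state.
        # The terminal state has no future value.
        best_next = 0.0 if nxt == N_STATES - 1 else max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = nxt

# The learned policy should be "step right" in every non-goal cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Deep reinforcement learning replaces the lookup table here with a neural network, but the loop is the same: act, collect reward, update.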

Anyway, this at least gets you to a place where you can understand exactly what Universe is and why it's important.

"A lot of people here at OpenAI are interested in deep reinforcement learning," Dario Amodei—a research scientist using Universe—says.

Dario Amodei

Amodei notes that reinforcement learning wasn't initially a major part of the deep learning revolution, which started around 2011. Only relatively recently did it start to gain traction with researchers.

"It really began picking up steam when it was used by AlphaGo to beat the reigning champion [of Go] back in 2016," he says.

For those unfamiliar with AlphaGo, this is sort of like saying human space travel picked up steam after Yuri Gagarin became the first person to orbit the Earth and return safely.

And that's without being hyperbolic.

In March 2016, Google DeepMind's AlphaGo AI program went up against Lee Sedol, one of the world's highest-ranked Go players, in a five-game match. When it was over, Sedol had won only a single game.

"I am speechless," Sedol was quoted as saying.

So were most of the observers. For decades, researchers had considered Go the Mount Everest of achievements in AI. That's because Go, which dates back to ancient China, involves a ton of strategy. Chess has about 400 possible positions after each player's first move; Go has about 130,000.

In addition to intelligence, Go requires ingenuity and improvisation. These additional aspects made AlphaGo's achievement even more remarkable. The victory pointed to a near future in which AI would no longer be confined to a narrow series of tasks.

Artificial general intelligence—which some have likened to human intelligence—was nearer than previously thought. And deep reinforcement learning was emerging as the method for achieving it.

This is one of the primary reasons that OpenAI decided to develop Universe.

But rather than picking Go as the environment for its platform, the OpenAI team decided to turn to a technology more popular among researchers: video games.

"Over the last three years," Amodei explains, "the tool that most of the people studying reinforcement learning used to test out new approaches and then compare the results with each other was Atari—literally the Atari games from the 1970s."

The Arcade Learning Environment (ALE) was introduced in 2013 by researchers at the University of Alberta. It used an Atari 2600 emulator to train AI to further "the development of algorithms capable of general competency in a variety of tasks and domains without the need for domain-specific tailoring."

In other words, AI got closer to artificial general intelligence by playing and replaying Atari games in the ALE.

And although the ALE didn't explicitly describe itself as an open source project, no one owned it. "So anyone is free to use it," Amodei says. That made it a suitable starting point for Universe.

The one issue with ALE, though, was that it was limited.

"There are only 55 Atari games," Amodei says. "And the graphics—because it's Atari—are quite primitive."

What happens when the AI—or "agent," as it's known—plays and beats all of the Atari games, like Google DeepMind's AI agent did in 2015? What do you do then? Can that agent take what it's learned from these more primitive environments and apply it to more complexly rendered ones?

"We felt that, in order to train an agent that can act more broadly," Amodei says, "we needed more environments than that."

So the Universe team sought to expand beyond Atari.

Doing so would allow an agent to not only play and conquer the 8-bit worlds of Adventure and Pitfall!, but also more modern, graphical first-person shooter games, 3-D exploration worlds, and mobile Flash environments like Candy Crush.

With each game, each world, and each environment that it conquers, the AI agent remembers. And as it remembers, it learns. It uses this knowledge to adapt to the next game, the next world, and the next environment. And so on and so forth, inching further and further toward general intelligence.

As they sought to expand the types of games, worlds, and environments that agents could play in, the Universe team also wanted to avoid creating barriers for other researchers to add new ones in the future.

"We felt that the way to give both ourselves and the community more power to train agents was to build a single interface," Amodei says. "And that way, if you wanted to integrate a new game, you just needed to be able to connect to a server on a machine that's playing that game."

So, while Universe is not itself an AI agent, it's a useful platform for training those agents.
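The "single interface" idea can be sketched in a few lines of toy code (ours, not Universe itself, which—roughly speaking—exposes a similar reset/step-style interface over remote connections): two very different "games" sit behind the same two calls, so one agent loop can drive either.

```python
# A sketch of the single-interface idea (toy code, not the actual
# library): every environment, whatever game it wraps, exposes the
# same two calls—reset() to start an episode, step(action) to advance.

class CoinFlipEnv:
    """A trivial 'game': guess the hidden bit."""
    def reset(self):
        self.target = 1
        return 0  # initial observation
    def step(self, action):
        reward = 1.0 if action == self.target else 0.0
        return 0, reward, True  # observation, reward, done

class CountdownEnv:
    """Another 'game': survive three steps to earn the reward."""
    def reset(self):
        self.t = 3
        return self.t
    def step(self, action):
        self.t -= 1
        done = self.t == 0
        return self.t, (1.0 if done else 0.0), done

def run_episode(env, policy):
    """One agent loop that works against ANY env with reset/step."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

# The same loop drives both games without modification.
scores = [run_episode(env, policy=lambda obs: 1)
          for env in (CoinFlipEnv(), CountdownEnv())]
print(scores)
```

Integrating a new game then means implementing those two calls, not rewriting the agent.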

Universe diagram courtesy of OpenAI

And, like our actual universe, it's ever-expanding. If you're a researcher out there training an agent to play video poker, Universe gives you the power to let that agent go on to learn to play GoldenEye or Super Mario Bros. or even the horrendously awful Atari E.T. game—if you're so inclined. And, in doing so, your agent can begin to learn to apply the knowledge it's gained to games, worlds, and environments it hasn't even encountered yet.

"One of the promises of this platform is real transfer learning," says Catherine Olsson, a software engineer on the Universe project. "This means learning on a set of nine tasks and then doing a tenth task you've never seen before."

This is similar to how humans learn.

We take our experience from a ton of narrow tasks and apply that knowledge—along with common sense gleaned through years of experience—to tasks we've never undertaken.

Take, for instance, riding a motorcycle. When people first get on one, they're not usually stepping into a completely unknown situation. They most likely draw upon knowledge of similar-yet-different tasks: balancing, riding a bike, driving a car through traffic, etc.

Lots of engineers and researchers, like Olsson, joined OpenAI precisely because they wanted to bridge this gap between what we know about human understanding and what we can do with computer algorithms.

The Mind as a Computer

"It was thinking about human cognition as a computational process," Olsson says about what first sparked her interest in AI.

Her interest took root in her public middle school's gifted student program. "They gave us this philosophy of mind course," Olsson says. "They asked a bunch of 12-year-olds to be introspective about the meaning of consciousness. It was very inspiring, thinking: 'Okay, I have this brain and it somehow gives rise to me. What's going on there?'"

That question would remain with her as she progressed through school, developing an interest in programming along the way.

"In high school, I had a very good friend who was interested in programming," Olsson says. "And for some reason, the school was going to let him teach his own computer science elective."

She signed up, despite the fact that her only previous experience with programming—seeing nerdy boys in high school obsess over coding their graphing calculators—had been underwhelming.

Catherine Olsson

"I was almost certain I was going to hate it," she says. "But then it quickly became my favorite class. There was very little lecturing. We read book chapters and had the occasional quiz, but mostly we'd just come in and build something. Totally free choice. Just taking the tools we'd been given and making whatever we wanted."

As an undergraduate at the Massachusetts Institute of Technology (MIT), Olsson pursued her dual interests in philosophy of mind and programming by double-majoring in computer science and cognitive science. Along the way, she gained practical experience doing software engineering internships during her summer breaks. She also got involved in several open source projects.

Following undergrad, Olsson went on to a Ph.D. program in neuroscience at New York University.

Soon afterward, the deep learning revolution began to take off. "It was clear that the next big thing had arrived," Olsson says. Upon realizing that academia wasn't for her, Olsson decided to move from studying how the human brain works to researching how to mimic that process with machines. She would pursue a career in machine learning. Specifically, she wanted a job at OpenAI.

After tracking down Brockman, whom she had briefly met when he was a fellow undergrad at MIT, she asked for a position—any position whatsoever.

"I was not expecting that they would be hiring just engineers," Olsson says. "I thought they'd tell me to spend six months brushing up on my machine learning skills. But Greg was, like, 'No, come build something for us.'"

The OpenAI opportunity appealed to her on two levels. One, it offered the chance to work on cutting-edge deep learning projects. And two, it gave her the ability to develop projects in the open.

"The open source ethic has been extremely important to me," Olsson says. "And that was an important reason to come to OpenAI, specifically."

Now that the Universe platform has been released, Olsson—along with the rest of the team—is excited about the prospect of moving beyond video games entirely.

"We're trying to bridge the gap from games to real world tasks," she says. "Like booking a flight online."

Amodei echoes her. "The goal with Universe is to provide a single platform that allows you to connect to a computer," he says, "and train an agent to do anything a human can do on a computer."

On the one hand, this prospect is extremely exciting. On the other, though, it's somewhat concerning. While humans have done and continue to do amazing things with computers, there's also a very obvious flip side.

If you can train an agent to mimic the beneficial or benign things a human can do on a computer, can't you also train it to mimic what not-so-great people do?

The answer to this is, unfortunately, yes. What's more, you don't have to necessarily train an AI agent to do something not-so-great or downright nefarious. You can, because of numerous security vulnerabilities, trick it into doing those things.

For its part, OpenAI is fully aware of these security concerns.


Ian Goodfellow is a big deal in the world of deep learning. In fact, he literally co-wrote the book on the subject. It's called Deep Learning.

Authored with two other big names in the field, Yoshua Bengio and Aaron Courville, this 2016 book has already racked up 444 citations on Google Scholar—an extremely impressive feat in the slow-moving world of academic publishing.

Ian Goodfellow

Goodfellow's path to artificial intelligence, however, didn't start with a grade-school philosophy class or a high-school hack to an online physics quiz.

It began with a death sentence.

In late 2011, while a Ph.D. student at the University of Montreal, Goodfellow developed a bad headache at the back of his neck.

"I went to the doctor just to confirm that I didn't have meningitis," he says.

After being examined by the doctor, he received a far graver diagnosis: a brain hemorrhage.

"He told me that I was likely to die in the next few hours," Goodfellow says.

While waiting for an MRI to confirm that diagnosis, Goodfellow decided to call a fellow researcher.

"I began brain-dumping all of these machine learning ideas I wanted him to try out if I died," he says.

At that point, Goodfellow realized AI was pretty important to him.

"I was like, 'OK, if this is the way I spend my final moments in life, I'm pretty clearly committed,'" he says.

After the MRI failed to show anything (and after Goodfellow didn't die), he was sent home and told that nothing was wrong with him.

Later, while interning at Google, Goodfellow—who didn't have insurance at the time—paid $600 to see a neurologist in Mountain View about the still-present pain.

"By poking me in the neck, he diagnosed me as having a pinched nerve," he says, shaking his head.

Despite the psychological toll of being told he might die, Goodfellow isn't entirely bitter about the whole experience.

"If there had been a remotely competent doctor in Montreal, I wouldn't have had this experience of realizing that my last wish was to make sure my machine learning ideas got tried out," he says.

Goodfellow says that, in retrospect, the ideas that he brain-dumped to his friend were not that great. "They were things like sparse coding," he says. "No one cares about sparse coding anymore."

Today, he focuses on something people do care a great deal about: adversarial training, or—to put it another way—AI security.

"In the past, security has revolved around application-level security, where you try to trick an application into running the wrong instructions," he explains. "Or network security, where you send messages to a server that can get misinterpreted. Like you send a message to a bank saying, 'Hey, I'm totally the account owner, let me in,' and the bank gets fooled into doing it, even though you're not actually the account owner."

But with AI, and specifically machine learning, security is a different animal.

"With machine learning security, the computer is running all the right code and knows who all the messages are coming from," he says. "But the machine learning system can still be fooled into doing the wrong thing."

Goodfellow equates this with phishing. With standard phishing, the computer isn't tricked, but the person operating the computer is.

It's the same for AI. Its code remains uncorrupted. But it is tricked into doing different tasks than it was trained for.

We've all heard stories about someone's grandfather getting a Nigerian-prince-scam-style phishing email, promising untold riches in exchange for sending $1,000 or $2,000. The grandfather, of course, ends up losing the money and gets nothing in return.

Well, it turns out AI is even more vulnerable than someone's grandfather.

"Machine learning algorithms are really, really gullible, compared to people," Goodfellow says.

To make things worse, AI has the potential to be more powerful than anyone's grandfather. This is no knock against your or anyone else's elder patriarch. It's just that Gramps falling for the Nigerian Prince scam is not as problematic as, say, a machine learning algorithm used for the financial services sector being tricked into helping hackers defraud a major bank or credit card company.

"If you're not trying to fool a machine learning algorithm, it does the right thing most of the time," Goodfellow says. "But if someone who understands how a machine learning algorithm works wanted to try and fool it, that'd be very easy to do."

Furthermore, it's very hard for the person building the algorithm to account for the myriad ways it might be fooled.

Goodfellow's research focuses on using adversarial training on AI agents. This approach is a "brute force solution" in which a ton of examples meant to fool an AI are generated. The agent is given these examples and trained not to fall for them.

For example, you might train the AI used in a self-driving car not to fall for a fake sign telling the AI to halt in the middle of the highway.
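To make the "brute force solution" concrete, here is a small self-contained sketch (our illustration, not Goodfellow's code or the cleverhans library): a simple linear classifier is attacked with the fast gradient sign method, a technique Goodfellow helped popularize, and then trained on the very examples designed to fool it.

```python
import numpy as np

# A minimal adversarial-training sketch (illustrative only): generate
# inputs nudged in the direction that fools the model, then include
# them in the training set so the model learns not to fall for them.

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian blobs, labels 0 and 1.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = np.zeros(2), 0.0

def predict_proba(X, w, b):
    return 1 / (1 + np.exp(-(X @ w + b)))

def fgsm(X, y, w, b, eps):
    """Perturb each input by eps in the direction that raises the loss."""
    grad_x = np.outer(predict_proba(X, w, b) - y, w)  # d(loss)/d(input)
    return X + eps * np.sign(grad_x)

for step in range(300):
    # Half the batch is clean, half adversarially perturbed.
    X_adv = fgsm(X, y, w, b, eps=0.5)
    X_train = np.vstack([X, X_adv])
    y_train = np.concatenate([y, y])
    err = predict_proba(X_train, w, b) - y_train
    w -= 0.1 * (X_train.T @ err) / len(y_train)  # logistic-loss gradient step
    b -= 0.1 * err.mean()

# The hardened model should resist the same attack it was trained against.
acc_clean = ((predict_proba(X, w, b) > 0.5) == y).mean()
acc_adv = ((predict_proba(fgsm(X, y, w, b, 0.5), w, b) > 0.5) == y).mean()
print(round(acc_clean, 2), round(acc_adv, 2))
```

Real adversarial training does the same thing at the scale of deep networks and images, where the perturbations can be invisible to a human eye.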

Goodfellow has developed (along with Nicholas Papernot) cleverhans, a library for adversarial training.

The name comes from a German horse who became famous in the early 20th century for his ability to do arithmetic.

A German math teacher (also a self-described mystic and part-time phrenologist) bought the horse and claimed that he had taught it to add, subtract, multiply, divide, and even do fractions. People would come from all over and ask Clever Hans to, for example, divide 15 by 3. The horse would then tap his hoof 5 times. Or people would ask it what number comes after 7. The horse would tap his hoof 8 times.

The problem was, Clever Hans wasn't that clever—at least not in the way his teacher thought.

A psychologist named Oskar Pfungst discovered that the horse wasn't actually doing math. Rather, he was taking his cues from the people around him. He'd respond to these people's body language, tapping his hoof until he got a smile or a nod. Pfungst illustrated this by having the horse wear a set of blinders. When asked a question, the horse began tapping his hoof. But, unable to see the person who'd asked the question, he just kept tapping indefinitely.

"Machine learning is a little like Clever Hans," Goodfellow says, "in the sense that we've given the AI these rewards for, say, correctly labeling images. It knows how to get the rewards, but it may not always be using the correct cues to get to those rewards. And that's where security researchers come in."

Goodfellow's cleverhans library has been open sourced.

"With traditional security, open source is important because, when everybody can see the code, they can inspect it and make sure it's safe," he says. "And if there's a problem, they can report it relatively easily or even send the fix themselves."

A similar dynamic holds for machine learning security. Generally speaking, that is.

"For machine learning, there isn’t really a fix yet," Goodfellow says. "But we can at least study the same systems that everybody is using and see what their vulnerabilities are."

When asked if there's anything that has surprised him about his experiences doing machine learning research, Goodfellow talks about the time he ran an experiment for a machine learning algorithm to correctly classify adversarial examples.

He had just read a research paper that made some claims he thought were questionable. So he decided to test them. While his experiment was running, Goodfellow decided to step out to grab some lunch with his manager.

"I told him," Goodfellow recalls, "'when we get back from lunch, I'm not sure the algorithm's going to correctly classify these examples. I bet it will be too hard. And, even after this training, it will still misclassify them.'"

But when he came back, Goodfellow found that the algorithm not only recognized the adversarial examples, it had also set a record for accuracy in classifying the normal ones.

"The process of training on the adversarial examples had forced it to get so good at its original task that it was a better model than what we had started with," Goodfellow says.

At that moment, Goodfellow realized that, for AI, adversarial training wasn't just important for finding vulnerabilities.

"By thinking about security," he says, "we could actually make everything better across the board."



This is how AI is developed.

It's Vicki Cheung poring over research, trying to build a Kubernetes cluster. It's Catherine Olsson sitting at a workstation, helping build a platform for an ever-expanding universe for AI agents. It's Ian Goodfellow stepping away to grab a sandwich while an algorithm he's testing gets smarter and more secure.

Ian Goodfellow and Catherine Olsson

It's work that seems mundane.

Developing AI means sitting down each and every day in front of a computer, thinking about problems when you go home at night and during your commute in the morning (or even while you wait for a potentially grave medical diagnosis), focusing on achieving incremental victories, and dealing with unforeseen setbacks.

But amidst this day-in and day-out grind, researchers and engineers are working toward a goal that, for many people outside of AI, is more science fiction than science fact.

And, in many respects, it's always been that way.

The people behind the AI revolution—the people at OpenAI, the people at the MIT AI lab in the 1960s, the people building AI startups today, the people who work tirelessly to make sure the AI we eventually build isn't an a**hole—they're just people.

They just happen to be solving a problem that will change our lives forever.




Ian Goodfellow left OpenAI in February 2017 to return to Google Brain—where he worked prior to coming to OpenAI—in order to work more closely with his research collaborators.

Next up:
Origins: A.I. + Open Source

A faulty printer and a conversation. This is how it all started.
