A.I. Revolutionaries

Possible Futures

A.I. Revolutionaries | Part IV

When you look at the existing literature, theories, and random blog posts about the future of AI, there are three possibilities that everyone is talking about.

One of these futures you already know from decades of science fiction films and novels. And, while we'll go into this one, we should start off on a more positive note.

Let's begin with the first possible future. It's optimistic. It's romantic. And, for many true believers, it's only a few decades away.

Its chief promise is that we'll get to defeat death. That's right. Biological death will no longer be a thing.

This future is called...

The Singularity

Coined by noted mathematics professor and sci-fi author Vernor Vinge, and championed by celebrated inventor, educator, and futurist Ray Kurzweil, the Singularity is the biggest and final advancement of the human species.

The steam engine. Electricity. Flight. Harnessing the atom. Breaking the sound barrier. Walking on the moon. The internet. The Harry Potter series. All of these human achievements changed our societies and our species in profound and previously unimagined ways.

But, compared to the Singularity, they're nothing.

That's because the Singularity is the final capstone achievement for the abstract-thinking and tool-building Homo sapiens. After we achieve the Singularity (in which machines gain some form of consciousness), technological innovation will no longer be solely within our control and its impact will be immeasurable from our current standpoint.

But what exactly the Singularity entails—what we can expect and when we can expect it—is neither clear nor widely agreed upon among Singularity futurists.

Kurzweil has one idea.

We will gain power over our fates. We will be able to live as long as we want.

—Ray Kurzweil, The Singularity is Near

Cancer? Gone. AIDS? Gone. ALS? Gone. Heart disease? A thing of the past. Once we have the Singularity, we'll be able to inject nanobots into our bodies that will cure any and all maladies. Name a disease, illness, or disorder. They'll all be wiped out. And not only that, but actual biological death—the breaking down of organs, aging, etc.—can also be done away with.

In addition to defeating death, we'll also be able to expand our knowledge of things beyond comprehension.

"By the end of [the 21st century]," Kurzweil writes, "the nonbiological portion of our intelligence will be trillions and trillions of times more powerful than unaided human intelligence." Like Neo in The Matrix, we'll have immediate access to all knowledge—literally, all collected human knowledge—which we can download into our brains.

We will indeed be able to learn kung fu in a few seconds.

Kurzweil has filled several very long and dense books laying out his grand vision of AI. He details the requirements—both in the power of computing and the power of the human imagination—to turn the Singularity into a reality. And he goes on to further discuss the benefits of merging humans with machines forever.

The biggest benefit is that we'll finally get world peace. No wars. No conflict. No Black Friday fights in the parking lot of your local retail store over a particular toy your child wants for Christmas but will immediately forget about in a few months. All of these things will cease.

For Kurzweil, human beings will be made perfect. And our future will be a utopia.

Vinge—the guy who coined the term—shares many of Kurzweil's grand visions about the Singularity. For example, on ending death, Vinge writes: "Immortality (or at least a lifetime as long as we can make the universe survive) would be achievable."

But unlike Kurzweil, Vinge is less sure exactly how we'll get there and what precisely will happen once we do. He writes, "when it finally happens it may still be a great surprise and a greater unknown." That's because the Singularity is a change so profound, and so nearly without precedent, that, in Vinge's words, "our models must be discarded." It is a point at which "a new reality rules."

The path to and beyond the Singularity is also largely unknowable because, in order to get there, we need to accomplish two things. And they're far from simple.

For artificial intelligence, there are three big stages of development: ANI, AGI, and finally ASI.

ANI stands for "artificial narrow intelligence." At this stage, an AI can be fed a ton of data and spit out fairly accurate probabilities and decisions. It can also identify images, recognize words in speech, play complex games (like Go), and handle other tasks we've come to expect—think Siri. But, apart from the "narrow" tasks it's trained to accomplish, it can't do much else.

For example, an AI agent trained today to play Atari games is not going to be able to clean our houses, invest in stocks, or fix our cars.

And, just for reference, we as a society are still in the ANI phase.

To get to AGI—which stands for "artificial general intelligence"—there needs to be rapid advancement in both a machine's computing power and its cognitive ability. And clearing those bars is hard.

The reason? AGI is considered equal to human intelligence.

As shown with OpenAI's Universe project, many prominent data scientists believe that deep reinforcement learning is the best way to get to AGI. But while deep learning has made many great strides in ANI, no current advancement indicates that AGI is just around the corner.
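For a sense of what that approach looks like in practice, here is a minimal sketch of the agent-environment loop that reinforcement learning systems such as Universe are built around. It uses OpenAI's Gym library (which Universe extends), a simple control task rather than an Atari game, and a random policy standing in for a learned one; the environment name and the older, pre-0.26 Gym API shown here are assumptions about the installed version, not something taken from this story.

```python
# A minimal sketch of the reinforcement learning loop: observe, act, receive reward.
# Uses the classic OpenAI Gym API (pre-0.26); a random policy stands in for the
# neural network a deep reinforcement learning agent would actually learn.
import gym

env = gym.make("CartPole-v1")          # a simple control task, not an Atari game
observation = env.reset()

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()                  # random "policy"
    observation, reward, done, info = env.step(action)  # act, then observe the result
    total_reward += reward                               # the signal the agent learns from

print("Episode finished with total reward:", total_reward)
```

A deep RL system replaces that random choice with a network that is trained, episode after episode, to pick actions that maximize the reward it collects.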

The route from AGI to ASI—artificial super intelligence—is even less clear.

But the transition will, if it happens at all, be rather fast. At least that's the theory among futurists who believe in the likelihood of the Singularity.

Once a machine is able to achieve the cognitive learning capability of a human, it will most likely surpass that state soon afterward and begin its runaway ascent to super intelligence.

For lack of a better analogy, artificial super intelligence is akin to summoning a god—or a demon, depending on your outlook (but we'll get to that in a moment).

ASI will be the most intelligent creature to ever appear on this planet. And then it will constantly get smarter. In doing so, it will seek to merge us with its intelligence.

First it will cure our mortality. Because why would it want to merge with a sack of meat with an expiration date?

Kurzweil bets that we will get to AGI by 2029. And when I say "bets," I literally mean he has bet on it. Kurzweil has wagered $20,000 against Lotus co-founder Mitch Kapor that AGI will be achieved before the end of 2029.

Kurzweil predicts that, by 2045, we will have summoned the ASI god and have arrived at the Singularity.

Vinge also thinks that we'll jump-start the Singularity by 2030. But, again, he's less sure than Kurzweil. In addition to not betting cold hard cash on the matter, Vinge says that this guesstimate "depends in large part on improvements in computer hardware."

But once it's achieved, it's happily ever after for the human/machine super intelligence. And that's that. At least for the Singularity future.

Now, many of you are probably thinking about the second possible future. The one we've all heard about. The one we're all very familiar with.

And the one that terrifies us all.

Super intelligent machines will intentionally destroy us all

No. They won't.

Why super intelligent machines won't intentionally destroy us all

For a lot of actual AI researchers and engineers, ASI is not all that likely, at least in the "summoning a god" sense. Machines will definitely become more intelligent. But don't expect a Terminator or a (spoiler) conscious Westworld host intent on doing us harm.

However, if a machine should come to obtain super intelligence—basically, any sort of consciousness at all—it's not likely that it will become hell-bent on destroying us out of rage, vengeance, or general dislike.

It's unlikely to do this for the same reason it's unlikely to benefit mankind in the way that Kurzweil imagines: AI is a product of human endeavor. And, right now, we are not developing AI for the sole purpose of either fully eradicating the human species or fully eradicating human death.

As Eliezer Yudkowsky—a researcher who has spent years studying the risks posed by superintelligent AI—writes: "Storytellers spinning tales of the distant and exotic land called Future, say how the future will be. They make predictions. They say, 'AIs will attack humans with marching robot armies' or 'AIs will invent a cure for cancer.' They do not propose complex relations between initial conditions and outcomes—that would lose the audience."

And yet proposing those "complex relations" is precisely what these storytellers should do. Because when we want to look to the future of AI, we shouldn't start with our gaze fixed on some far-off future and then work backward to the present day.

Wishful thinking adds detail, constrains prediction, and thereby creates a burden of improbability.

—Eliezer Yudkowsky

He then gives the analogy of a civil engineer. "What of the civil engineer who hopes a bridge won't fall?" he writes. "Should the engineer argue that bridges in general are not likely to fall?" Or should the engineer start from a more practical place? Well, when we look at actual civil engineers, we see that that's precisely what they do. "A civil engineer starts by desiring a bridge; then uses a rigorous theory to select a bridge design which supports cars; then builds a real-world bridge whose structure reflects the calculated design; and thus the real-world structure supports cars."

And so, if we want to accurately imagine what the future of AI will look like, we need to move away from any and all overly generalized theories of what AI might be like. That means no Singularity and no Terminator deathbots.

We need to stop fixating on that far-off future and working backward.

Instead, we should start by taking stock of where the technology is presently at and then we should anticipate how it might evolve or change in the future based on its current status.

When we do that, a very real possible future emerges. And though it avoids the hyperbole of the two previous futures, especially the killer robot one, this possible future is still unnerving.

An A.I. that takes our jobs and is also an a**hole

"Lots of people talk about fears of the robots attacking us, but don't talk about the much more likely and more immediate scenario of mass unemployment and drastic inequality," says Rachel Thomas, a mathematics Ph.D. who's concerned about this possible and more realistic future.

In this future imagined by Thomas (and others), AI advances to the point where robots do almost all labor currently done by humans. And, to add insult to injury, it's also a huge jerk.

First, why in this future has AI taken our jobs?

Well, it's already started to. Late last year, the Obama administration released a report stating that AI-driven automation could threaten up to 47% of currently available U.S. jobs.

For Thomas, this creates all sorts of problems—especially in a country like the U.S.

"The United States operates in such a way that if you're unemployed, you have few options and a very low quality of life," Thomas says.

Combine increasing unemployment with lack of alternative options, and you have what Thomas calls "a recipe for massive inequality and social unrest."

Thomas is not alone in her worries.

Two leading economists at MIT, Andy McAfee and Erik Brynjolfsson, have been warning about the societal implications of increased automation by way of increased machine intelligence for years.

In a recent article in Foreign Affairs, McAfee and Brynjolfsson write: "Such a radical reshaping of work will call for new policies to protect the vulnerable while reaping gains of the new age."

The two federal policies that McAfee and Brynjolfsson call for are to encourage flexibility and redefine "workers."

By "flexibility," McAfee and Brynjolfsson mean loosening certain restrictions to allow further innovation in the technology sector so it can compete with more traditional sectors. For instance, lifting regulations on taxi companies so that self-driving car companies can better compete. The idea is that, by allowing this increased competition between new and old forms of technology, policymakers can get a better handle on what types of automation and innovation are more likely to succeed than others.

For this reason, McAfee and Brynjolfsson call for greater transparency through data. By better understanding how these intelligent machines are impacting society, the government can create a smarter plan for the possible negative effects, such as unemployment.

Enter proposal number two: redefining "workers."

McAfee and Brynjolfsson report that, at present, only 0.4% of the U.S. workforce earns a living through the "on-demand" economy forged by companies like Uber and Lyft.

Still, that's roughly 600,000 people. And that number is growing.

But, as Uber moves toward automating its vehicles, these people will soon find themselves out of work. For this reason, McAfee and Brynjolfsson call for "rethinking the way workers are classified."

The Obama administration's report from last year called for one possible approach to rethinking employment. It argued that these on-demand or "contingent workers" ought to be afforded access to the same benefits as traditional salaried employees. Currently, they're not. Such benefits include retirement, healthcare, and employer payroll tax contributions. The administration also suggested that policymakers strengthen the social safety net—Social Security, Medicaid, and unemployment insurance—"to ensure that people can still make ends meet, retrain, and potentially transition careers" if automation takes their jobs.

And while such proposals would deal with one side of the issue, they still don't really deal with the second concern Thomas has: AI becoming a giant a**hole.

Think about the worst person you've encountered on Reddit at 3 a.m. on some thread you shouldn't have even been on in the first place.

I'm kind of concerned about automation innovating us out of our jobs.

Anything you can do, I can do better!

Well, I'm not sure it's that simple. I mean—

I can do anything better than you.

Ugh. No you can't.

You're a [bad words]

No you ca— Wait, why am I arguing with a robot?

That's what we might be dealing with.

Not a villainous Skynet-like entity. Not an evil robot out of I, Robot. No. What we're talking about is an AI that acts like a troll. And, what's worse, it's a troll that's given some modicum of power.

Here are some examples of how this is playing out right now.

Last year, Motherboard reported that an AI called "Beauty.ai" ran an online beauty pageant where 600,000 men and women from around the world sent in selfies. The AI judged the entrants based on facial symmetry, wrinkles, and age. It then picked 44 winners it deemed "most attractive." Almost all of them were white.

In May of 2016, ProPublica featured a story about racial bias in risk assessment software that judges use to make sentencing decisions in criminal cases. African American defendants were 77% more likely than white defendants to be identified as "higher risk" for future offenses. The score was based on 137 questions given to the defendants. And, while none of the questions mentioned race, the end result was a racial bias in favor of white defendants.

These news stories point to a painfully obvious fact about AI.

Just as we learn our biases from the world around us, AI will learn its biases from us.

—Nathan Collins, Pacific Standard

In June of last year, Bloomberg ran a story stating that the majority of AI researchers were men. Bloomberg then highlighted two sets of statistics. The first was that only 17% of computer science graduates today are women. The second was that, at 2015's premier AI conference, only 13.7% of the attendees were women. Margaret Mitchell, a Microsoft researcher, is quoted in the story saying that AI has a "sea of dudes" problem. As the article notes: "If everyone teaching computers to act like humans are men, then the machines will have a view of the world that's narrow by default and, through the curation of data sets, possibly biased."

And so, in this way, AI is not intentionally becoming a racist and misogynistic jerk. It's simply acting like this because we haven't taken the proper steps to weed out certain unchecked biases that we've unintentionally taught it.

Because when it comes to algorithms, they're not as neutral as we'd like to think.
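To make that concrete, here is a minimal, hypothetical sketch (the data, numbers, and feature names are invented for illustration and are not drawn from any of the systems mentioned above) of how a model trained on skewed historical decisions reproduces that skew, even when the sensitive attribute is never handed to it as a feature:

```python
# A minimal, hypothetical sketch: a model trained on skewed historical hiring
# decisions reproduces that skew, even though the group label is never a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)            # 0 or 1; never shown to the model
skill = rng.normal(0.0, 1.0, size=n)          # "true" ability, identical across groups
# A proxy feature that happens to correlate with group (think zip code or alma mater)
proxy = skill + 1.5 * group + rng.normal(0.0, 0.5, size=n)
# Historical labels: past decisions favored group 1, independent of skill
hired = (skill + 2.0 * group + rng.normal(0.0, 0.5, size=n) > 1.0).astype(int)

X = np.column_stack([skill, proxy])           # note: group itself is excluded
model = LogisticRegression().fit(X, hired)
preds = model.predict(X)

print("Predicted hire rate, group 0:", round(preds[group == 0].mean(), 2))
print("Predicted hire rate, group 1:", round(preds[group == 1].mean(), 2))
# The historical gap passes straight through: the model "learns" the bias from us.
```

The group label never appears in what the model sees; it learns the disparity through a correlated proxy, which is exactly how unchecked bias sneaks into systems that look neutral on paper.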

If AI is going to become an ever-increasing presence in our lives, where it's given greater and greater autonomy and power, how can we avoid the future where unchecked bias is one of its defining features?

For Rachel Thomas, the solution starts with a relatively simple idea.

Make A.I. Uncool

"There are a lot of qualified people out there who aren't getting hired," Thomas says.

She says that, of the 90,000 or so Black and Latinx computer science graduates in the last ten years, only about half wind up in the tech industry.

One primary cause of this disparity is social networks. Not Facebook or Twitter, but a person's actual network of friends, family, and acquaintances. Most of the leading tech companies, specifically in Silicon Valley, hire through employee referrals.

And since a lot of these referrals are for people already working for other tech giants, which—according to Thomas—"already have appalling diversity stats," you then have a repetitious cycle in which diversity stagnates rather than increases.

But, even for members of underrepresented groups who do manage to get hired at a leading tech company, the retention rate is not that great.

"41% of women working in tech leave within 10 years," Thomas says. "That's over twice the attrition rate for men. And those with advanced degrees, who presumably have more options, are 176% more likely to leave."

Thomas lists the many reasons for such a gender-imbalanced attrition rate: "They can't get promoted because they aren't given high-visibility assignments. They're channeled away from more creative and innovative roles into less fulfilling ones. They're underpaid and unappreciated. Or they received biased performance reviews."

And, for some, it could be a combination of all of the above.

Furthermore, Thomas is talking about the tech industry generally. When it comes to artificial intelligence, the situation is even worse.

"The field requires a lot of math background, even when it's unnecessary," Thomas says. "As a math Ph.D, I know from experience that the culture in math can be toxic, sexist, and overly aggressive."

Added to this is a predominant myth about math.

"We falsely believe it's an innate skill—you either understand it or you don't—and not something that can be built through practice," Thomas says. "This is absolutely false, but it prevents a lot of people from learning math. And combined with gender and racial stereotypes, it's particularly harmful for women and People of Color."

So, what's the solution? Thomas believes the answer is making AI uncool.

"There's so much hype about deep learning being this super cool area—it's an elite field being used at elite companies by experts who went to elite schools working on a narrow scope of problems," she says.

To fight this dominant perception, Thomas believes you need to break AI's cool and exclusive aura. You need to unlock it for outsiders. You need to make it accessible to those with non-traditional and non-elite backgrounds. And when you do all of this—when you make AI's development more inclusive—you make its possible future better and safer for everyone.

The power of prediction and a non-traditional path

"The power of prediction was fascinating to me," Thomas says, reflecting on how she ended up becoming interested in machine learning and AI.

But her path from growing up in Galveston, Texas, to leading a machine-learning startup with her husband, Jeremy Howard, in the tech capital of the world was anything but predictable.

"I went to a very large public high school," she explains. "It was very poor."

A 2012 New York Times article on Thomas's alma mater notes that it is part of the 2% of Texas public schools ranked as "academically unacceptable."

The school was also notable because of its diverse student body, especially in comparison to surrounding schools. "My school was 40% Black, 30% Latino, 25% White," Thomas says, "whereas the wealthier districts near us were predominantly White and Asian."

Although Thomas is White and both of her parents have graduate degrees, her time at the high school first exposed her to people's biases.

"There were times where students from other schools would say things to me such as, 'Aren't you afraid to go to school there?,'" Thomas says. "Or others would say, 'You can't go there, you're White.'"

This sort of exposure to people's biased perceptions of others—in this case, racism and classism—would stay with Thomas long after she graduated.

When talking about her time as a high school student, though, Thomas still has fond memories. In particular, she looks back to her first exposure to programming when she took two years of C++.

"It was really unusual because a lot of schools still don't offer classes like that," Thomas says. "We had a woman teacher who used to be a professional programmer. She was fantastic."

Thomas went on to attend Swarthmore College, where she majored in mathematics and minored in computer science and linguistics. After graduating, she decided to further pursue these interests at a Ph.D. program in mathematics.

As a graduate student, Thomas was further exposed to discriminatory behavior—this time with respect to gender.

"The students in my program were less than 20% women, and during my time there, the female students were more likely to drop out than the male students," Thomas recalls. "There were also no tenured women professors at the time in the math department."

She also experienced more overt problems, such as a male professor telling her that she was "too feminine" to be successful.

It all left me wondering, is this due to sexism or are academics just jerks?

—Rachel Thomas

These issues, coupled with uncertainty about career prospects in academia, influenced Thomas's decision to leave academia after earning a Ph.D. and go into private industry.

She took a position with an energy company in Philadelphia, doing quantitative finance work.

"There, I started working with large data sets," Thomas says. "And through this, I became interested in machine learning."

And it was this interest that prompted Thomas to make another major move.

At the end of 2011, Thomas read an article about data science, an emerging field out in Silicon Valley. At the same time, the company she was doing finance work for was going through a merger. Knowing that she would end up having to relocate once the merger was complete, Thomas thought: Why not leave her job entirely and go west?

I sold my car, got rid of all my furniture, and moved out to San Francisco.

—Rachel Thomas

After landing a job for a digital marketing company, Thomas was quickly introduced to the strange and unique culture of Silicon Valley.

"It was like they were speaking a different language out here," Thomas says.

Not really knowing anyone, Thomas networked by attending meetups and conferences.

After a year of working at the digital marketing company, Thomas began looking around for another tech job. One of her friends knew someone at a growing ride-sharing/transportation company and suggested she speak with him.

"I really didn't know much about the company at the time," Thomas says. "Just that they were a taxi alternative."

She met with this friend of a friend to ask if he knew of any good jobs to apply for. "He said, 'Why don't you come up and check out our offices.'" And so she did.

Finding the issues the company was working on fascinating, Thomas applied for a position as a software engineer and data scientist. After running the prospective hire gauntlet, Thomas was brought on board.

At the same time, Thomas's interests in neural nets and machine learning continued to increase. But as she became more and more interested in this rapidly developing field, her frustrations about the feasibility of doing any sort of work on her own also began to grow.

"I started wondering, how do you do this when you can't afford to set up a large server cluster, which cost hundreds of thousands of dollars" Thomas recalls. "What are the best ways to build fast, effective prototypes? What can you do with a small dataset?"

For someone who had always been a self-starter, Thomas was finding it difficult to begin doing substantive work in deep learning.

She turned to research publications for guidance. However, that also became a frustrating exercise. In almost every research paper she consulted, Thomas found the same thing: the authors would report their hypotheses and findings, along with just enough detail to nominally replicate their experiments, but little else in the way of practical guidance on how to actually do the work.

"I went to a meetup in 2013 where a deep learning expert was presenting on neural networks," Thomas says. "It was all theory."

When Thomas asked about the practical side of doing machine learning and why it was missing from his talk (as well as most publications), the expert responded: "Nobody publishes that part."

The assumption was, if you were doing AI, you already knew all the practical answers and you already had access to everything you needed to do the work.

Thomas decided something needed to change—both for AI and herself.

Changing pace

"I was burnt out," Thomas says, reflecting on her reasons for leaving the ride-sharing company. "It wasn't a great culture fit for me."

But this feeling wasn't just restricted to this one company—it characterized her experiences throughout Silicon Valley.

"When I first moved here, it was so exciting," Thomas says. "It felt like anyone could start their own company. Anyone could have an ambitious idea and people aren't going to think you're crazy. But so many of these companies that people started, no matter the idea, had the same culture and operated in the same way."

In particular, Thomas saw in the tech industry myriad problems she'd experienced elsewhere in her life—elitism, bias against women, and a lack of diversity.

And so Thomas quit.

But while she was feeling tired and frustrated, she knew precisely what she wanted to do next: teach.

She took a position at Hackbright Academy, a school for women software engineers.

"Hackbright is for adult women who are changing careers," Thomas says. "There was such a wide variety of students there. You had some who had worked at startups or at bigger tech companies in non-technical roles who wanted to move into engineering. And then you had others who were coming from completely different walks of life—like former lawyers or former teachers. And that variety was fantastic."

Thomas, of course, could relate. And as she taught and engaged with these students, she began to feel renewed and gained some critical distance.

She also kept pursuing her interest in AI. She was starting to write about it, and she was starting to get a handle on the practical side of the technology as open source frameworks and libraries began to be released.

"And then last spring, all of these background pieces finally just fell into place," Thomas says. Feeling renewed and refreshed, Thomas decided to make another change.

Her husband, Jeremy Howard—who also worked in tech—decided to leave his job. "We'd always talked about starting our own company, but we were always out of sync," Thomas says. "But now we were finally in sync." For both, it only made sense to start a company focused on AI.

And that was how fast.ai was born.

But unlike almost every other AI startup in Silicon Valley, Thomas and Howard decided that fast.ai wouldn't try to define itself based on some cool new algorithm or general take on AI. Instead, the company would differentiate itself by completely countering this idea that AI should be "cool" at all.

Through education and outreach that—unlike almost everyone else's—focused on the practical side of machine learning, Thomas and Howard would do their part to make AI uncool. And they'd make that fast.ai's mission going forward. Literally.

They chose "making neural nets uncool" as their startup's official slogan.

Creating a new future for A.I.

This past fall, Thomas and Howard were given the opportunity to teach a machine-learning course over two semesters at the University of San Francisco's Data Institute.

Unsure of what to expect initially, both were amazed at the response.

We had about 100 students sign up for the first semester.

—Rachel Thomas

Additionally, they were able to offer diversity fellowships that included full tuition waivers, giving students from less privileged backgrounds the ability to pursue their own interests in AI.

One of those fellowships went to an international student, Samar Haider, who was able to attend the class remotely. Haider's research focuses on using natural language processing to study his native language Urdu.

"At fast.ai, this is exactly the type of project that we want to equip people to work on—domains outside of mainstream deep learning research, meaningful but low-resource areas, problems that smart people from a wide variety of backgrounds are passionate about," Thomas writes in a blog post about the class. "And Samar is exactly the kind of passionate person that we want to support."

When asked about where she sees the future of a more inclusive and accessible AI going, Thomas looks to the other pressing problems that currently exist throughout the world.

There are so many smart people in a variety of fields working on meaningful issues, particularly in the developing world.

—Rachel Thomas

"These include: micro-finance, low-resources languages, agricultural yields, and medical imaging. I want people to be able to use AI on these sorts of problems, and others they care about, without having to be an AI expert," Thomas says.

As fast.ai continues onward, Thomas and Howard are doing their part to make AI more inclusive and accessible. In doing so, they hope to stop this trend of unchecked biases harming the development of this powerful and still rapidly evolving technology.

And because of this—and because of the research they are now helping to facilitate—they have a more optimistic view of AI's future.

"There's still a lot more that needs to be done," she says. "But it's a very exciting time to be in this field."

THE END.
