Origins: A.I. + Open Source
A.I. Revolutionaries | Part II
Before there was open source, there was a faulty printer and a conversation.
Let's start with the conversation, since it was about the faulty printer.
The MIT researcher meets the Carnegie Mellon professor
It's 1980.
A 27-year-old artificial intelligence researcher from the Massachusetts Institute of Technology (MIT) is at Carnegie Mellon University's computer science lab. It just so happens that a professor there knows something about the malfunctioning printer that's been giving him—and the whole MIT AI Lab—headaches for the last several months.
The MIT researcher comes into the professor's office. He says:
"Hi, I'm from MIT. Could I have a copy of the printer source code?"
"No, I promised not to give you a copy."
"..."
"...."
(turns and walks out of the room, extremely upset by the encounter)
And that's it. That's how the most important event in open source's history went down.
Or...maybe it didn't. At least, not precisely in that way.
"I have absolutely no recollection of the incident," said Bob Sproull (the professor), when recounting the particulars of his one-on-one with the MIT researcher, Richard Stallman.
Stallman, for his part, doesn't remember much beyond a rough recollection of what was said and how upset he got. Stallman's biographer reports that when he discussed the incident with Stallman years later, "[n]ot only does he not remember the motivating reason for the trip or even the time of year during which he took it, he also has no recollection of the professor or doctoral student on the other end of the conversation."
It was only when Stallman's biographer interviewed a former Carnegie Mellon doctoral student that Sproull emerged as the most likely person Stallman spoke with.
"That code that Stallman was asking for was leading-edge, state-of-the-art code that Sproull had written in the year or so before going to Carnegie Mellon," the student told Stallman's biographer.
So it would make sense that Stallman would track down Sproull.
The printer in question was a new laser printer Xerox Corp. had donated to MIT's AI Lab. It was fast and top-of-the-line, but it had a major problem: It jammed all the time.
To make matters worse, the machine managing the printer, a PDP-11, wouldn't report the jam. Only after trekking all the way over to the printer did Stallman and his colleagues realize that neither their documents nor the ones ahead of them had printed. It was maddening.
The lab's previous printer did the same thing. But Stallman had altered the source code of the program that ran interference between the machine and the printer. After Stallman was finished with it, the program would send a notification to the user: "Go fix the printer."
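To make the hack concrete, here's a minimal, modern-day sketch of the idea in Python: a small watcher program sits between users and the printer, and when the printer reports a jam, everyone with a queued job is told to go fix it. Everything here—the queue format, poll_printer_status, notify—is a hypothetical stand-in; Stallman's original ran on the lab's own timesharing system, not in Python.

```python
# A sketch of the printer-jam hack, assuming a hypothetical queue and
# status interface. Not Stallman's actual code.
import time

print_queue = ["rms: ai-memo.txt", "gls: thesis.txt"]  # jobs waiting

def poll_printer_status():
    """Stand-in for querying the real printer; here it always jams."""
    return "JAMMED"

def notify(user, message):
    """Stand-in for messaging a logged-in user's terminal."""
    print(f"[to {user}] {message}")

while print_queue:
    if poll_printer_status() == "JAMMED":
        # Warn everyone whose job is stuck behind the jam.
        for job in print_queue:
            user = job.split(":")[0]
            notify(user, "The printer is jammed. Go fix the printer.")
        break  # wait for a human to intervene
    time.sleep(5)  # otherwise, keep watching the queue
```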
Stallman wanted to do the same thing with the new Xerox printer.
Only this time, he couldn't access the source code. Xerox had supplied the printer software only in binary form, so no one at the lab could read or alter it.
For Stallman, requesting the source code from Sproull wasn't outrageous in the slightest. But Sproull's refusal to give it to him was.
Unbeknownst to Stallman at the time, Sproull had signed a non-disclosure agreement (NDA) with Xerox. If he gave Stallman the source code, there was indeed potential harm: a lawsuit from a very large corporation.
NDAs were fairly new at that time, but they were gaining popularity with major tech companies like Xerox and IBM. This was Stallman's first encounter with one. And, when he found out about it, he came to see it in apocalyptic terms.
To understand why Stallman viewed Sproull's NDA and refusal to hand over the source code as such a threat, you have to understand the ethic that MIT's AI Lab had embraced for the roughly two decades of its existence—an ethic Stallman held dear.
That ethic was based on sharing. It was based on freedom. And it was based on the belief that individual contributions were just as important as the community in which they were made.
How a model railroad club changed the world and taught it to share
In no uncertain terms, MIT's Tech Model Railroad Club (TMRC) changed the way all of us interact with machines. Without it, our daily use of smartphones, laptops, and even self-driving cars might look completely different.
The club was founded during the 1946-1947 academic year at MIT—and is still around today. Its Signals & Power committee—which takes care of all technical aspects of the club's model railroading—counted among its early members the first generation of programmers, or "hackers," who made up the inaugural AI Lab.
Before going into how they changed the world, I have to define this word, "hacker." Because it's a little problematic.
For some of you—I'm looking at you, readers who work in tech—it's a heavily nuanced term that carries both positive and negative connotations, depending on the context. For others, it's a purely negative term applied to, and sometimes used by, various cybercriminals. And, for a smaller few, it's the title of a really hokey 1995 movie starring a young Angelina Jolie.
For the latter two groups, some background. The noun "hacker" and the verb "hack" were both born at the TMRC, and are still embraced by current members.
As the club notes on its website:
"We at TMRC use the term 'hacker' only in its original meaning: someone who applies ingenuity to create a clever result, called a 'hack' ... . Here at TMRC, where the words 'hack' and 'hacker' originated and have been used proudly since the late 1950s, we resent the misapplication of the word to mean the committing of illegal acts. People who do those things are better described by expressions such as 'thieves,' 'password crackers,' or 'computer vandals.' They are certainly not true hackers, as they do not understand the hacker ethic."
In the book Hackers: Heroes of the Computer Revolution—the definitive history of the early days of computing—author Steven Levy uses the term "the hacker ethic" to describe a common philosophy that he initially found among his interview subjects.
"It was a philosophy of sharing, openness, decentralization, and getting your hands on machines at any cost to improve the machines and to improve the world," Levy writes.
This ethic was precisely what Stallman feared was slipping away when he encountered Robert Sproull and the NDA.
And it was born with the first generation of hackers in the TMRC, who used it to underpin both the problem-solving and the innovating roles that the Signals & Power committee played in the club. On the problem-solving side, if an electrical issue needed fixing but had no official solution, you devised your own—a "hack"—and then shared it with everyone else. Because... why not? The same went for any and all kinds of innovative hacks. Because... what's the harm?
In the spring of 1959, MIT electrical engineering professor John McCarthy offered a course that charted a revolutionary path for computing at MIT. It was a language course. The language was LISP. And its inventor was the course's instructor, McCarthy.
He invented the language a year earlier in the hopes of teaching a machine—in this case, a giant computer, the IBM 704—how to understand both declarative and imperative sentences and to exhibit common-sense logic in carrying out its actions.
In other words, LISP was designed to create artificial intelligence—another term coined by McCarthy.
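LISP's central idea—that programs and data share one symbolic form, the parenthesized list, which a program can read, build, and evaluate—is easier to see than to describe. Here's a minimal sketch in Python of a toy s-expression evaluator; the tiny grammar and every name in it are our own illustration, not McCarthy's actual LISP.

```python
# A toy evaluator for LISP-style s-expressions, written in Python for
# accessibility. Illustrative only; real LISP is far richer.

def tokenize(src):
    """Split an s-expression string into tokens."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Build a nested Python list from the token stream."""
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    return int(token) if token.lstrip("-").isdigit() else token

ENV = {"+": lambda a, b: a + b,
       "*": lambda a, b: a * b,
       ">": lambda a, b: a > b}

def evaluate(expr):
    """Evaluate a parsed s-expression."""
    if isinstance(expr, str):    # a symbol: look it up
        return ENV[expr]
    if isinstance(expr, int):    # a number: self-evaluating
        return expr
    if expr[0] == "if":          # (if test then else)
        _, test, then, alt = expr
        return evaluate(then) if evaluate(test) else evaluate(alt)
    fn = evaluate(expr[0])       # (fn arg1 arg2 ...)
    return fn(*[evaluate(arg) for arg in expr[1:]])

print(evaluate(parse(tokenize("(if (> 3 2) (* 7 6) 0)"))))  # 42
```

Because code is just a list, a LISP program can manipulate other programs the way it manipulates any other data—exactly the property that made the language attractive for symbolic AI.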
McCarthy and fellow MIT professor Marvin Minsky believed machines designed to do simple calculations and told to carry out other rudimentary tasks were capable of much, much more. These machines could be taught to think for themselves. And, in doing so, they could be made intelligent.
Out of this belief and following the invention of LISP, McCarthy and Minsky created the AI Lab, even though the university wouldn't have a formal computer science department until 1975—when the electrical engineering department became the electrical engineering and computer science department.
The Lab drew heavily from the TMRC hackers, who were becoming increasingly interested in the university's small collection of computers. And while many appreciated McCarthy's and Minsky's dream of teaching machines how to think, these hackers really wanted to do something much more basic with the machines.
They wanted to play with them.
In those early days of computing, when machines were gigantic and expensive (the IBM 704 was worth several million dollars), you needed to schedule time to access them. There were only a few computers available, and lots of people wanted to use them. So time was extremely limited—at least during the day.
This first generation of computer hackers would typically do their hacking from late at night until the wee hours of the morning, often at the expense of their actual coursework.
Their hacking consisted of testing the capabilities of these machines. They could calculate and perform various tasks, but could they do other things?
Could they make music?
Two hackers (Peter Samson and Jack Dennis) discovered the answer when they programmed the Lab's TX-0 computer to play the music of Johann Sebastian Bach.
With that feat accomplished, the hackers turned to other potential hacks. Could a machine provide other forms of entertainment?
In 1962, three other hackers (Steve "Slug" Russell, Martin "Shag" Graetz, and Wayne Witaenem) developed one of the world's first video games on the Lab's PDP-1. It was called Spacewar! In it, two players battled via crudely drawn rocket-ship avatars—one long and thin, the other fat and wide—trying to blow each other up with missiles. The game achieved legendary status soon after it was created.
Beyond being novel and fun, the game was also scientifically accurate—at least, cosmologically speaking. Another hacker (Peter Samson) created a routine in which the stars and constellations displayed behind the rocket ships were in their correct positions. The game even accurately depicted the relative brightness of each star.
Playing the game on the PDP-1 itself was another matter: hitting the computer's switches quickly enough was difficult and cumbersome. And so two fellow TMRC hackers—Alan Kotok and Bob Saunders—picked through the random parts and electronics in the club's tool room one day and used those spare parts to fashion the first joysticks.
When all was said and done, more than 10 hackers had left their marks on Spacewar! Its gameplay was the result of successive hackers improving upon previous hackers' works—hacks on hacks on hacks.
Spacewar! represented the best of the hacker ethic. It illustrated the type of innovation and problem-solving that open collaboration brought.
For their parts, McCarthy and Minsky supported these student hackers—whether by developing a video game or teaching a machine to make music. It was these hackers' clever ways of attacking problems that would enable AI research to continue. As Minsky and McCarthy later reflected on this first class of researchers: "When those first students came to work with us, they had to bring along a special kind of courage and vision, because most authorities did not believe AI was possible at all."
The hacker ethic yielded a unique form of ingenuity that pushed the boundaries of AI research and the power of computing forward. But most of the hackers did not see earning potential in their efforts. And neither did most of the tech leaders at that time.
Back then, the computer industry was radically different from the tech industry of today.
Computers were marketed more as support tools than as engines of change. For industry leaders, hacking was less a contribution than a distraction.
It would take several decades and a lot of ups and downs for the hacker ethic to make its way into the mainstream, and for businesses to view it as a legitimate way to make a profit.
In the meantime, as this first generation of hackers graduated from MIT, the hacker ethic passed to a new class. And, in the process, it reached beyond the campus of MIT and the borders of Massachusetts.
The hacker ethic goes west
In 1962, John McCarthy took a position at Stanford University and started the Stanford Artificial Intelligence Lab (SAIL). With him came the hacker ethic, which a new generation soon adopted.
One member of this generation was Fred Moore, a hardware enthusiast who co-founded a hobbyist group that aimed to apply the hacker ethic to a new project: getting computers into people's homes.
He and 40 other people met at SAIL in mid-1975 to come up with a name for this new group. After some back and forth on different suggestions, which included "Midget Brains" and "Eight-Bit Byte Bangers," the group settled on "Bay Area Amateur Computer Users Group—Homebrew Computer Club."
The last three words became the unofficial title for the group. And it's safe to say that without the Homebrew Computer Club, the personal computer and, eventually, the smartphone would not exist as we know them today.
Among the members of the Homebrew Computer Club were Steve "Woz" Wozniak and his friend Steve Jobs.
But while the hacker ethic was spreading to new adherents, AI research was experiencing some tumultuous times. Funding from the Defense Advanced Research Projects Agency (DARPA), which had propelled major AI research projects during the 1960s, disappeared suddenly in 1969 with the passage of the Mansfield Amendment. The amendment stipulated that funding would no longer go to undirected research projects, which included the great majority of AI projects.
Still, a few survived.
In 1974, things got even worse when DARPA pulled nearly $3 million from Robert Sproull's future home, Carnegie Mellon University's AI Lab. The Lab had been working on a speech recognition program, which DARPA hoped could be installed in planes for pilots to give direct commands. The only problem? Those commands had to be spoken in a particular order and cadence. Pilots, it turns out, have a hard time doing that in highly stressful combat situations.
With government funds dwindling, researchers turned to the business world as their primary source of funding and marketing for AI projects. In the late 1970s, these enterprise-related AI projects centered on emulating expert decision-making, and they became known as "expert systems." Using if-then reasoning, these projects and resulting software became highly successful.
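The "if-then" style is simple enough to sketch. Below is a minimal forward-chaining rule engine in Python: a rule fires whenever all of its conditions are known facts, adding its conclusion as a new fact, until nothing more can be derived. The rules and facts are invented for illustration and aren't drawn from any real expert system.

```python
# A toy forward-chaining "expert system." Each rule says: if every
# condition is a known fact, then assert the conclusion as a new fact.
RULES = [
    ({"engine cranks", "engine won't start"}, "fuel or spark problem"),
    ({"fuel or spark problem", "fuel gauge reads empty"}, "out of fuel"),
    ({"out of fuel"}, "recommendation: add fuel"),
]

def forward_chain(facts):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # Fire the rule if its conditions all hold and it adds
            # something new.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"engine cranks", "engine won't start",
                         "fuel gauge reads empty"})
print(derived)  # includes "recommendation: add fuel"
```

Commercial systems of the era encoded thousands of such hand-written rules—a scale that would later prove expensive to maintain.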
Software had boomed into a multimillion-dollar industry. The innovation achieved in the AI Labs and computer science departments at MIT, Stanford, Carnegie Mellon, and other universities had gone mainstream. And, with the help of new companies like Apple, computers were finally becoming vehicles of transformation.
But, while the hacker ethic continued to thrive among hobbyists, proprietary products became the standard in the burgeoning new tech industry.
Businesses relied on competitive advantage. If a company was going to be successful, it needed a product that no one else had. That reality conflicted with the hacker ethic, which prized sharing as one of its foundational principles.
Even more troubling was the line being drawn between hobbyist hackers and professional programmers.
In the former camp, openness and transparency still thrived. But in the latter, those ideals were verboten. If you wanted to make a high-end salary working with computers, you needed to cross that line. And many did. NDAs became the contract these former hackers signed to seal their transformation.
That's why, for Richard Stallman, witnessing Robert Sproull's NDA-driven refusal to hand over the source code for his software was like seeing a canary in a coal mine begin to perform its death flutter.
The hacker ethic turns to dust as the A.I. winter approaches
In the months after Richard Stallman's unfortunate meeting with Robert Sproull, most of the hackers at MIT's AI Lab left for a company started by the Lab's former administrative director, Russell Noftsker.
Noftsker was the one who hired Stallman back in 1971, when Stallman, then a Harvard undergrad, applied to be a programming intern at the lab.
And while Stallman came to fully embrace the hacker ethic over the next nine years, Noftsker began to drift from it. In 1973, he left the Lab and moved to California, where he worked in the software industry. But he still maintained a presence at the Lab, returning occasionally throughout the years. In that time, he became particularly interested in LISP—the AI programming language that John McCarthy helped pioneer—which was gaining popularity in the emerging expert systems market. In 1979, Noftsker proposed that the Lab and its cadre of hackers form a company to take advantage of this market.
Countering Noftsker's vision was the AI Lab's LISP expert and resident hacker supreme, Richard Greenblatt. Where Noftsker proposed a proprietary company model, Greenblatt advocated an open one.
Noftsker's vision eventually won the day with the majority of the AI Lab's hackers, who left to join this new company, called Symbolics. Noftsker later said of this mass exodus: "We took so many [people] that it's going to take years for MIT to build back up."
Stallman, who refused to go the proprietary route, was one of the few who stayed behind at MIT. As he later commented on this period: "20 years or so of work of our community turned to dust."
Symbolics's lifespan, though, would be short.
It and several other LISP businesses found some success in selling specialized hardware to run LISP software. But this focus on hardware rather than software would ultimately lead to these companies' downfalls. In 1987, several desktop computing companies, including Apple and IBM, debuted cheaper architecture to run LISP.
Within a year, the billion-dollar business of specialized LISP hardware collapsed.
A few years later, in the early 1990s, the expert systems market would follow suit. Expert systems ultimately proved ineffective and too costly to maintain: these thinking systems could not keep pace with the rapidly changing business world.
Soon, artificial intelligence entered a long and seemingly endless winter.
Research projects dwindled. Specialized businesses almost died out completely, with few investors finding any kind of promise or value in AI. To most, the field was—according to a 2007 article in The Economist—"associated with systems that have all too often failed to live up to their promises."
Even the term "artificial intelligence" fell out of fashion.
In 2005, The New York Times reported that AI had become so stigmatized that "some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."
The hacker ethic reborn
While artificial intelligence endured a long, slow decline, Richard Stallman—the self-proclaimed "last true hacker"—sought to resurrect the hacker ethic.
On September 27, 1983, Stallman sent out a message on Usenet (a kind of pre-web Reddit) stating that he was going to begin work on a Unix-compatible software system called GNU—a recursive acronym meaning GNU's Not Unix. The goal would be to develop a new operating system based on the principle of sharing.
—Richard Stallman
"I cannot in good conscience sign a nondisclosure agreement or a software license agreement," Stallman wrote.
Stallman closed his message by calling on individual programmers to contribute to his project, especially those "for whom knowing they are helping humanity is as important as money."
This was the beginning of what would become the free software movement. For many, it was the reincarnation of the hacker ethic. It countered the proprietary model of development and emphasized the value of sharing. And it was solidified with the creation of the GNU General Public License (GPL), which was released on February 25, 1989.
With Linus Torvalds' creation of Linux® in 1991 (and his decision to release it under version 2 of the GPL in 1992), the free software movement gained even greater attention and momentum outside the usual hacker channels.
And around the same time that artificial intelligence entered its long winter, the hacker ethic 2.0—in the form of free software—surged in popularity.
In 1998, Netscape made headlines when it released the source code for its proprietary Netscape Communicator Internet Suite. This move prompted a serious discussion among developers about how to apply the Free Software Foundation's ideals to the commercial software industry.
Was it possible to develop software openly and transparently, but still make a profit?
By that point, there had already been a few examples of companies that believed it was.
The first was Cygnus Solutions, started by Michael Tiemann, John Gilmore, and David Henkel-Wallace. Tiemann said that, when he read Stallman's vision of free software, he "saw a business plan in disguise." In short order, Cygnus Solutions became a multimillion-dollar company.
By embracing profits but rejecting the proprietary system, these business-minded hackers—like Tiemann—created a distinct offshoot of the free software movement. It was one that showed you could make money through sharing and transparency.
But while it rejected the proprietary system's reliance on NDAs and secrecy, this offshoot was, for Stallman, also decidedly different from his own vision of free software. For Stallman, the initial motivation must always be the ethical idea of available and accessible software for all—i.e., free as in free speech (one of his favorite analogies). If you went on to make a profit, great; there was nothing wrong with that. In his view, though, this other branch's initial motivation was profit, while the ethical idea of accessibility came second.
As such, he sought to differentiate this branch from free software.
"The main initial motivation of those who split off," Stallman later said, "...was that the ethical ideas of 'free software' made some people uneasy."
Despite the fact that many in this new branch—if not most—disagreed with this characterization, they still sought to differentiate themselves.
In early February 1998, Christine Peterson gave this new branch its official name when she suggested "open source" as an alternative to free software following the Netscape release. Later that same month, Bruce Perens and Eric S. Raymond launched the Open Source Initiative (OSI). The OSI's founding conference adopted Peterson's suggested name to further differentiate it "from the philosophically and politically focused label 'free software.'"
For his part, Stallman doesn't view open source as entirely inimical to free software. As he writes in an essay on the differences between the two, "We in the free software movement don't think of the open source camp as an enemy; the enemy is proprietary (nonfree) software."
Still, for him, there will always be a difference.
"The philosophy of open source, with its purely practical values, impedes understanding of the deeper ideas of free software; it brings many people into our community, but does not teach them to defend it," he writes.
Yet, despite this divide, many who work in open source recognize Stallman as a founding father.
It was his desire to keep the hacker ethic alive—following his conversation with Sproull and the near-apocalyptic mass exodus from MIT—that created the free software movement. And it was the free software movement that would later inspire the emergence of open source.
In the years that followed, open source would come to match, and in some cases surpass, proprietary development.
This was especially true once AI's long winter came to an end.
TO BE CONTINUED...
Next up:
The Right Side of the Robots
Chris Nicholson lived in a Buddhist monastery. Then he became a journalist in Paris. But his latest venture tops it all: Launching an AI startup.
More reading
Here are some more sources on the topics discussed in this article:
- Hackers: Heroes of the Computer Revolution (O'Reilly Media) by Steven Levy
- Free as in Freedom: Richard Stallman's Crusade for Free Software (O'Reilly Media) by Sam Williams
- A Marriage of Convenience: The Founding of the MIT Artificial Intelligence Laboratory by Stefanie Chiou, Craig Music, Kara Sprague, and Rebekah Wahba