The Linux kernel and the second version of the GNU General Public License (GPLv2) turned 30 this year. As part of that major milestone, we asked Red Hatters who have been using or contributing to Linux since the early days about their experiences. What was it like to contribute to Linux, and what was it like to use it? Could they have imagined the impact Linux has had on the world?
Today, we’re talking to Ethan Dicks, who has been using Linux since the early 1990s. Dicks joined Red Hat in 2017 and is a Senior Consultant.
Q: What was your first encounter with Linux?
I first encountered Linux through Usenet, because I was a very avid Usenet reader and contributor, starting in about ‘85 or so. I saw Andrew Tanenbaum's post about the release of Minix, and newsgroups were created for that, so it was an exciting chance to have [something like] Unix on desktop-grade hardware.
I’d been running Unix at work since ‘84, ‘85 and had tried (on a number of occasions) to gather enough hardware to be able to run it at home and just really couldn't ever afford to put it together because disks were expensive. I remember when that famous first message came out from Linus [Torvalds]. I was not a PC guy at the time.
By April of 1992, which was five months after that announcement, I was at a computer show at a fairgrounds and felt that things had gotten cheap enough. So, I went and bought a 386 motherboard and four megabytes of RAM (in April of 1992, it was $35 per megabyte!) specifically to run Linux, popped on a drive, brought it home and put together a PC.
That was in the days of massive stacks of diskettes, where you had to download dozens and dozens of individual files and write them out to the A stack, the D stack, and so on. The very first installs were just two floppies, boot and root. There were jokes at the time about how many floppies it took to install Linux.
Q. When did you start contributing to the project?
On Usenet, there was a very rich open source community. The two newsgroups I spent a lot of time reading and following were comp.sources.unix and comp.sources.games. So, when Linux came out, I'd already been getting open source projects that way.
The first thing I contributed to was the time zone library that keeps track of daylight saving time. At the time I was working in Antarctica, and my workstation didn't have the right time on it. It turned out New Zealand had changed which weekend of the year they did their time change, and this machine didn't have a new enough library. We were using New Zealand time because of the way the flights work: US stations in Antarctica run on New Zealand time.
I went looking for Antarctic-specific time zones and found that the zone information file (the compiled version) was zero bytes. There was no data in the OS for how to keep time in Antarctica. So I jumped in, discovered the mailing list, joined it, read through the archives, and became part of the discussion. The tz database upstream did have some information about Antarctica, which I helped refine and improve, so I've got my name in the source code for helping improve timekeeping in Antarctica.
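That gap is long since fixed: modern tz databases ship Antarctic zones, and McMurdo tracks New Zealand time, just as he describes. A small sketch with Python's standard zoneinfo module (Python 3.9+, assuming tz data is available on the system) shows the link:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# Antarctica/McMurdo follows New Zealand rules in the tz database,
# matching the flight logistics described above.
when = datetime(2024, 1, 15, 12, 0)  # mid-January: NZ daylight saving in effect

mcmurdo = when.replace(tzinfo=ZoneInfo("Antarctica/McMurdo"))
auckland = when.replace(tzinfo=ZoneInfo("Pacific/Auckland"))

# Both zones report the same UTC offset (+13:00 during NZ daylight saving)
print(mcmurdo.utcoffset(), mcmurdo.utcoffset() == auckland.utcoffset())
```

The same Antarctic entries he helped refine are what `ZoneInfo("Antarctica/McMurdo")` loads today.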
In 1995, I moved down to Antarctica to basically be a computer guy. I moved into the science lab that had a little nest of Macintosh computers and Unix machines, and my boss said, “Hey, here's a CD-ROM for Linux.” As far as I know, I'm the first person to put Linux on a machine in Antarctica.
Q: What was the experience like?
It was normal at the time. There weren't any centralized repositories or source code control as we know them now. The mailing list was a sort of assistance, while one person was the actual code keeper, or code librarian. You had to check code out, and nobody else could touch that file while you were editing it. When you worked with a company or team, the problem was you'd have to check your code in before you went to lunch.
That's how the early coding systems prevented two people from working on the same thing at the same time: only one person could have a file. And it was all on you to redo your changes based on the changes made by the previous person. That's why forking and tags, and all that stuff we have now, exist. It used to be very, very painful to have more than one person with their fingers in the pie.
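The exclusive-locking workflow he describes, in the spirit of RCS- and SCCS-era tools, can be modeled in a few lines. This is a toy sketch; the class and method names are illustrative, not any real tool's API:

```python
# Toy model of the "code librarian" workflow: exclusive check-out locks.
# Only one person may hold a file for editing; everyone else is refused
# until the holder checks it back in.
class CodeLibrarian:
    def __init__(self):
        self._locks = {}  # filename -> name of the current lock holder

    def check_out(self, filename, who):
        holder = self._locks.get(filename)
        if holder is not None:
            raise RuntimeError(f"{filename} is checked out by {holder}")
        self._locks[filename] = who

    def check_in(self, filename, who):
        if self._locks.get(filename) != who:
            raise RuntimeError(f"{who} does not hold {filename}")
        del self._locks[filename]

librarian = CodeLibrarian()
librarian.check_out("sched.c", "alice")
try:
    librarian.check_out("sched.c", "bob")  # refused: alice holds the lock
except RuntimeError as refusal:
    print(refusal)
librarian.check_in("sched.c", "alice")     # lock released before lunch
librarian.check_out("sched.c", "bob")      # now bob can take it
```

Modern branching and merging exist precisely so collaboration doesn't have to serialize through one lock like this.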
Q: Back in the day, it was possible to “let the magic smoke out” of various hardware components when installing Linux. Do you have any stories about blowing up monitors or otherwise bricking hardware while you were learning how to do all of this stuff?
The biggest risk for letting the smoke out, in terms of Linux-grade stuff, was monitors: if you had a cheap, fixed-frequency monitor and screwed up your X settings, you could get the oscillators going at the wrong frequency and toast the hardware. I heard about it happening; it never happened to me.
The other one was early hard drives that you had to low-level format. You had to write out the headers and the blocks. They came absolutely blank, or with random data patterns from the factory; they weren't actually broken up into blocks. The earliest IDE drives were different: if you sent one of those a FORMAT command, you would blow away the factory format and could brick the drive. The manufacturers couldn't stop people from using the tools they were used to using and breaking the hard drives, but I never had any of those drives, and it never happened to me.
Q: After you started contributing, why did you stick with it? What kind of future did you see for Linux at the time?
In the science lab, Linux was absolutely a skunkworks. They had $15,000-$25,000 Unix workstations on one side of the house and DOS machines and Novell NetWare on the other. We kind of snuck Linux into the middle. The first thing we set up was a box that could do file sharing and print serving for everything.
In the early days, you really couldn’t get funded by management because they were used to paying lots of money for hardware and software and then turning around and saying, “If something goes wrong, here's who we nail to the wall because it didn't perform like what we paid for.” And Linux had none of that.
What changed over the years was watching enterprises adopt Linux publicly. It was snuck in, and people would set things up, but to actually have a two-year business plan to buy this hardware, set it up and intentionally run Linux on it, was a change.
We were using cheap 486 and Pentium machines (like $2,000 and less) instead of $10,000-$25,000 for server/workstation hardware and licensed OS. Cost was the big thing.
Q: How did you come to work in open source full time—does it go back to the Usenet days for you?
I started that job between high school and college, and then came and went a couple of times over the next several years. It was a data communications company. Our business was making a $25,000 board that would plug into a $100,000 minicomputer to talk to a million-dollar mainframe. And we sold millions of dollars of that stuff, because big machines and little machines wouldn't talk to each other.
Uncharacteristically for the time, in the mid-1980s, we had three major operating systems in the building: VMS, IBM and Unix. Some of our customers were academic or research-oriented; they were using Unix, and they wanted our product for their environment. It's just like today, when people say, “That's a Windows shop” or “This is an IBM shop.” There was very little drifting outside of those core choices.
So working in that place, we had to be a lot more nimble because we had to make everybody talk. I learned a lot and got an appreciation for “I can just go get this piece of software somebody has created for free and is sharing with the world” and “I'm going to customize it for my own use.” That leads to a culture of sharing.
On the flip side, the PC world was into shareware and DOS—here's some software, it's free to try out, but you'd better pay me. And they started adding feature-locking. In the PC world, it wasn't free. In the Unix and Linux world, it was: here's the stuff, have a good time.
Q: You mentioned when you started working there, you were in college. Over the years we’ve seen people from all different educational levels and across a range of disciplines be able to contribute to Linux—from high school grads to college and beyond. What was your background in that sense, and what led you to open source in the first place?
I started writing software for money when I was in high school, for home computers. They didn't have the degree we have now called computer engineering, which is a mix of hardware and software, so I started a double major in computer information systems and electrical engineering.
Very randomly, I took a history class because everything else was closed, and I needed to finish up my schedule. The professor led a trip to Greece, so I ended up going to Greece to do archaeology at the end of my first year of college, which led to another quarter at Oxford. After that year, I wanted to do it again.
I actually changed out of engineering and switched my major to history, and graduated one year sooner. At the time, I had products you could go into a computer store and buy off the shelf. I didn't go to school to get a career, I had a career. So I didn't have a problem looking the dean of engineering in the eye and saying, “Well I'll just leave your college, how does that sound?”
One of my bosses always said that college is your time to do the fun stuff. Take a pottery class, take writing, do something you don't need. You’ve got your whole life to work, go do something fun when you're in college. He said, “You're the only one who listened.”
Q: Did you ever practice with that history degree or in archaeology?
I took my other skills into that world. In the mid-1980s, it was a multi-year process to get an excavation permit to dig in the ground. We were pioneering non-invasive techniques for surveying sites. We were in an olive grove but nobody knew what was underneath the olives.
We brought in a soil resistivity meter, which is a box with a battery, a knob and a dial. You plant electrodes and measure how much resistance there is in the soil, and by moving along a line and taking a reading every meter, you can map density. The other tool was a proton magnetometer, which uses the resonant spin of protons in a fluid like cyclohexane to measure the local strength of the Earth's magnetic field, which hints at what you're standing over.
Part of my job was writing code to take the daily readings and graph them to get an idea of what we were looking at. And when they were slamming that resistivity box around in the dirt and it broke, I would be the guy fixing it with a soldering iron at the end of the day. I took my background, my electronics and computer skills, and put them into history versus the other way around.
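The per-meter survey readings he describes lend themselves to exactly this kind of small program. Here is a hypothetical sketch (the numbers are made up, not real field data) of gridding line-by-line readings and flagging anomalies worth a closer look:

```python
# Hypothetical resistivity survey: one reading (ohms) per meter along
# parallel survey lines. A strong deviation from the background level can
# mean a buried wall, ditch, or pit. Flag cells far from the survey mean.
readings = [  # rows = survey lines, columns = meters along each line
    [120.0, 118.0, 119.0, 121.0],
    [117.0, 62.0, 60.0, 118.0],   # a low-resistance streak: maybe a ditch
    [119.0, 121.0, 120.0, 122.0],
]

flat = [r for line in readings for r in line]
mean = sum(flat) / len(flat)

anomalies = [
    (row, col)
    for row, line in enumerate(readings)
    for col, r in enumerate(line)
    if abs(r - mean) > 0.25 * mean  # crude threshold: 25% off the mean
]
print(anomalies)  # grid coordinates of the suspect cells
```

In practice the flagged cells would be contoured or plotted, which is the graphing step he mentions doing at the end of each day.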
Q: Thinking back to those early days, did you ever imagine where Linux would be now? Did you ever imagine you’d still be contributing so many years later?
Would I be contributing? Absolutely, because I was contributing before Linux came along. That's just a lifelong thing I’ve always done. Did I see where it was going? Well, I certainly hope so because it was that 10%, 20% of the cost to get the same amount of work done. Then, of course, the famous open source freedom of, “If I don't like it, I can go fix it myself.” So it was very much a hope that it would thrive.
Q: You mentioned that what surprised you most was enterprise adoption of Linux. Can you elaborate on that?
Big companies are very risk-averse, and they do long-term strategic plans. Part of the web revolution—beginning in the late-1990s and really picking up speed a little bit later—was watching big companies, especially banks, pick up Linux...that was a big surprise.
Red Hat was right there, providing some of that corporate space, some of that stability. In the early days, kernel updates were happening all the time and Linux was moving so quickly—fixing things and adding features—that businesses couldn't possibly plan a migration, plan a deployment before it was three versions old.
They couldn't keep up, and their processes were so slow. One of the things that Red Hat did that was very helpful was to say, “You know what, we'll do this once or twice a year, and we promise we won’t pull the rug out from under you.” Just promising not to change too quickly really led to the ability of slow-moving organizations like banks to finally sign on.
Q: Did you work specifically with the banking vertical throughout your time? Have you ever encountered any industry-specific challenges for Linux during your career or has adoption been similar across verticals?
Mostly I've done telco through Red Hat. That’s partially because of my background in data communications. There's a lot of vertical-specific stuff with telco—a lot of detail, hardware and methods of connection that you have to learn. I also used to work as a contractor for H&R Block, so I've had some exposure to the financial sector through that. They were using minicomputers, not Linux. But they had all those regulatory issues. Healthcare has regulatory compliance issues too, but they don't have the same reticence that banks do. They're willing to try radical technologies.
With telco, a big challenge was scale. A few switches to patch could mean 800 sites, 10,000 devices. One of the projects I've worked on was trying to regularize inventory. We were literally having to manage the interconnection of a billion pieces of hardware.
Q: Looking forward, where do you think Linux will be 10, 30 years from now?
You look back 10 years: how much different is it now than what we had then? It's really not that different. Things are a little faster, but not a lot faster; a little bigger, but we don't have 100 times the memory footprint we used to have.
But, we've got a couple of things sneaking in on the side that have the potential to really turn things sharply to the left or right. Things like cost, reliability and scaling of quantum computing. You don't know how far it's going to go and how fast.
And I'm watching to see whether we're going to stay with dozens of fat cores, like we have in our boxes now, or go to thousands of small cores. We're not quite there yet, but that could be one of the directions that radically changes things.
As for 30 years from now...you're just making stuff up. Given what hardware looked like 30 years ago versus today, it's really hard to look ahead. Are we still going to have rack after rack of heat-belching machines, using enough electricity to smelt aluminum? Or are we going to completely change: is it going to be optical-based, or small and distributed?
Q: Is there anything else that we didn't ask that you'd like to share?
One of the things I was involved with for a long time was our local Linux user group (LUG). Back in the 1970s when home computers were brand new, people would go to these meetings once a month and would bring machines and show them off publicly, share information, trade software and provide advice.
The kind of people that would show up were people who heard about this stuff, were interested in buying but didn't know what to get. Or they just spent a bunch of money and didn’t know what to do with the machine they just got. Or they bought the machine and they broke it and didn’t know how to get it fixed. So there were different needs.
When Linux was brand new, all the same kinds of people would show up and go, “I've heard about this Linux stuff, tell me more.” Over the next 20 years or so, we had the Ohio LinuxFest and meetings to build the community. One of the things that Linux had that DOS never really had was passionate people, creative and intelligent, who were motivated to go off the paved road and do interesting things. The user groups were really a very helpful thing in the 1990s and early 2000s.
And then once Linux became just this “thing I use at work,” the activity in the user groups really wound down. But in the early days, it was really important to have a place to go, besides just Usenet, to get my questions answered. Where would I find people like me? It was user groups.