This post is brought to you by Command Line Heroes, an original podcast from Red Hat.
Pure Gould
UNIX is C and C is for UNIX.
“I want to be an architect” was the mantra for some teenager in a British period TV series I watched in Ireland as a kid. For my final four years of secondary school (like high school) in Ireland, it had become my mantra. I loved technical drawing and specifically drawing houses in perspective with shadow—it looked so realistic. I didn’t much care for computers, as I couldn’t really afford one. I got to play on my girlfriend’s Commodore 64, and my dad bought me a scientific calculator that had about 2K of memory to write BASIC programs (for the millennials and younger, BASIC was one of the first popular computer languages).
Studying computer science was not even on my radar—I wanted to continue learning the finer art of
drafting. But the Irish college Central Applications Office (CAO) procedure had different plans for me. My top three choices in the application focused on architecture studies, and I let my career guidance counsellor fill in the rest, referencing something about my aptitude for computers. I didn’t object because I didn’t care. I was going to be an architect. A couple of weeks after my results came out in early August, I, like thousands of others, bought a national newspaper, went to my father’s shop, laid it out on the counter, and scoured through the supplement to find my student number and my assigned college and degree. I found ND03. “What’s that?” my dad asked. In a state of panic and massive disappointment, I replied, “I don’t know. I think it might be some computer thing in Dublin.”
The first few months of college I had no idea what was going on in some of the computer classes. I could follow the examples and get projects completed, but the abstractions were escaping me. I prayed and asked God to figure this out for me, because it was going to get embarrassing in the second semester. Then came Computer Science, where I had to program a D5 kit that was essentially a board with a Motorola 6800 chip and a hexadecimal keypad. Following the program counter as it executed instruction after instruction of hexadecimal machine code seemed like magic. Suddenly this was going into the same part of my brain where I was also figuring out Data Structures and Algorithms. I could see the abstractions in my head, I could see what the code was doing, and I could predict where it was going. I could improve the algorithms and make them more efficient. I was finally understanding.
One of my fondest memories of this time was a project for programming an AVL (Adelson-Velsky and Landis) tree—a balanced binary tree—in C. It was during my second year. I wrote it all out in C on computer printer paper at home and debugged it for hours. No computer.
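If you have never written one, the heart of an AVL tree is a little bookkeeping: every node carries the height of its subtree, and a rotation restores balance when an insert tips one side too far. Here is a minimal sketch of that bookkeeping in C (illustrative names and code, not the code from that project):

```c
/* Minimal AVL bookkeeping: a node stores its subtree height, and a right
 * rotation rebalances a left-heavy subtree. Illustrative sketch only. */
struct avl_node {
    int key;
    int height;                     /* height of the subtree rooted here */
    struct avl_node *left, *right;
};

static int height(const struct avl_node *n)
{
    return n ? n->height : 0;
}

static int max(int a, int b)
{
    return a > b ? a : b;
}

/* Balance factor: left height minus right height; AVL keeps it in [-1, 1]. */
static int balance(const struct avl_node *n)
{
    return n ? height(n->left) - height(n->right) : 0;
}

/* Right rotation around y, applied when y's left subtree grows too tall. */
static struct avl_node *rotate_right(struct avl_node *y)
{
    struct avl_node *x = y->left;
    struct avl_node *t = x->right;

    x->right = y;
    y->left = t;

    y->height = 1 + max(height(y->left), height(y->right));
    x->height = 1 + max(height(x->left), height(x->right));
    return x;                       /* x is the new subtree root */
}
```

The insert routine checks the balance factor on the way back up the tree and applies this rotation (or its mirror image) wherever the factor drifts outside [-1, 1].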
I arrived in the computer lab the night before it was due, typed it in, compiled it, tested it, and printed it. My fellow classmates were incredulous. “You’re only starting this now? We’ve been in here for weeks.” I explained that I had also started weeks ago and that I merely had to type it in and quickly test it, because I had already tested it at home on paper. They rolled their eyes. I sat down at a free terminal (many people had finished) and logged into the Gould machine running BSD UNIX. I found it quirky but powerful. Quirky because, back then, the command line seemed a bit sensitive, and the editor, vi, seemed even more sensitive. Powerful because everything on UNIX was a file, and simply listing the contents of a directory using “ls” gave me visibility into all the permissions on a file—for “u” (user, i.e., me), “g” (users in my group), and “o” (other). And, like the VAX/VMS, the whole class was sharing this computer resource. But it seemed much easier to share with people on UNIX, because I could see everyone in my class with a simple “cd ..” to change directory up a level.
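Those user, group, and other bits are just as visible from C as they are from “ls.” As a small illustration of my own (not anything from that lab), stat() returns the same mode bits that “ls -l” renders as rwx triplets:

```c
/* Print the u/g/o permission triplets for a path, the same bits "ls -l"
 * shows. Illustrative example, not code from the original post. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    struct stat st;

    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }

    mode_t m = st.st_mode;
    printf("user: %c%c%c  group: %c%c%c  other: %c%c%c\n",
           (m & S_IRUSR) ? 'r' : '-', (m & S_IWUSR) ? 'w' : '-', (m & S_IXUSR) ? 'x' : '-',
           (m & S_IRGRP) ? 'r' : '-', (m & S_IWGRP) ? 'w' : '-', (m & S_IXGRP) ? 'x' : '-',
           (m & S_IROTH) ? 'r' : '-', (m & S_IWOTH) ? 'w' : '-', (m & S_IXOTH) ? 'x' : '-');
    return 0;
}
```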
I typed in my AVL tree code. I compiled. Had a couple of syntax errors. Fixed the syntax errors. Finished the build process with my Makefile. Tested it for about 20 minutes with test data, and watched it just work on the first try. I printed it and said goodnight to the rest of the class and left the lab.
After struggling so hard in my first semester of my freshman (fresher) year, I was really proud walking out that door. I had fallen in love with that “UNIX box,” or “the Gould.” Others fell in love with the PC and focused many of their elective projects on PC-related work in Turbo Pascal or C. For me, it was going to be C on UNIX. Though I liked the independence of the PC, I loved the power and the distributed, systems nature of UNIX. I loved that, aside from a few lines of assembly language, UNIX was written in C. I also felt that it would benefit my career in the longer term, because I was going to work for businesses and enterprises, and UNIX was definitely going to play a bigger role there.
Here comes the Sun
To each their own. The vendor game.
In my fourth year at university, we had heard about these powerful new “workstations” from Sun that would be located in the “small systems lab.” If we could somehow manage to get some time on those new Sun SPARCstation machines, we’d be working on very cool UNIX machines that ran a flavor of UNIX called SunOS.
These machines were gorgeous. This was the first GUI desktop environment I worked on, and it was UNIX (this was pre-Windows). I did several projects on them, including a C program code analyzer using the complexity and readability software metrics of McCabe, Halstead et al. It also forced me to work with the X11 window system using OpenWindows with SunView. One of the fun parts was trying to generate bars for graphs dynamically. SunView didn’t have a way to do that, so I used a slider bar for my visuals, set its value to 100%, and adjusted its length dynamically. As long as you didn’t use the mouse on the bar, you’d never know it was a slider.
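For anyone curious about those metrics, they boil down to simple arithmetic once you have the counts in hand. Here is a minimal sketch in C (mine, with made-up example numbers, not the original analyzer) of McCabe’s cyclomatic complexity and Halstead’s volume, difficulty, and effort:

```c
/* McCabe's cyclomatic complexity and Halstead's metrics computed from raw
 * counts. Illustrative sketch with invented numbers, not the original tool. */
#include <math.h>
#include <stdio.h>

/* Cyclomatic complexity of one function: decision points plus one. */
static int mccabe(int decision_points)
{
    return decision_points + 1;
}

/* Halstead metrics: n1/n2 are distinct operators/operands, N1/N2 are totals. */
static void halstead(int n1, int n2, int N1, int N2)
{
    double vocabulary = n1 + n2;
    double length = N1 + N2;
    double volume = length * log2(vocabulary);
    double difficulty = (n1 / 2.0) * ((double)N2 / n2);
    double effort = difficulty * volume;

    printf("volume=%.1f difficulty=%.1f effort=%.1f\n",
           volume, difficulty, effort);
}

int main(void)
{
    /* Hypothetical counts for one small C function. */
    printf("cyclomatic complexity: %d\n", mccabe(4));
    halstead(10, 7, 25, 18);
    return 0;
}
```

(Link with the math library when building, e.g. cc metrics.c -lm.)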
Move over, Gould. Sun—with its swanky GUI desktop—was my new favorite UNIX machine.
And SCO, then it was HP
UNIX for a PC
After graduating from Dublin City University in 1990, I spent a couple of years with a consulting company. One of my favorite consulting engagements was a solo programming effort on an HP SCO UNIX distributed system. HP had made a product that was x86-based and ran on cheaper HP PC hardware. I developed a logistics and operations system for a trucking company that operated between Ireland and the UK. It managed and tracked the pick-up and drop-off of containers between locations on the islands of Ireland and Great Britain. This included dropping trailers and their containers off on ships between the islands. It also included notifying drivers of pick-up and drop-off locations. So, essentially, the system handled container logistics and communications between sites and truck drivers. HP SCO machines were located at multiple company sites in Ireland and the UK. My programming language and database were from Progress Software, and I had never used them before. I used UUCP as the communications protocol for transferring data every 15 minutes between the various sites.
Watching this go live and being successful was amazing. I had great help from the customer and a good project manager, but I wrote the whole system myself. Whenever I saw a truck container with that big company logo on it heading towards Belfast or Dublin, I knew the driver was driving with a faxed itinerary that my program had generated. I knew that the information had been transferred across the Irish Sea, and which ship the trailer would be on for pickup.
Little did I know at the time that a different type of container would have a big impact on Linux and enterprises about twenty-five years later.
Full time with UNIX
UNIX all day, every day
After over a year doing consulting work in London (COBOL, Cincom Mantis 4GL with IBM CICS), I decided to go back to school full time to pursue a master’s degree through research. Thankfully, my wonderful supervising professor’s resources included some new Sun SPARCstations. These machines were running Solaris, a combination of BSD, System V, and SunOS. My project partner (shout out to Jane Power) and I would be researching software risk management and developing a risk management system for software development projects. We were to use an artificial intelligence framework known as a blackboard system.
At this time, 1992-94, the internet was a reality but still in its infancy. Universities around the world were connected, and freely shared software projects, including early web browsers like NCSA Mosaic, were being developed. A professor at DCU was doing research in hypertext, one of the foundations of the World Wide Web. While doing my research, I was amazed at how well UNIX adapted to the internet. Because it was built with network connectivity in mind, it was already pretty easy to connect machines and consider distributed computing scenarios beyond the UUCP tool I had used earlier on SCO. I was using telnet and ftp, and there were these things called TCP sockets that would let you connect your application to remote applications. I bought my first TCP/IP book.
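To give a flavor of what a TCP socket looks like from C, here is a minimal client sketch of my own (it uses the modern getaddrinfo() call rather than the older lookup routines of that era, and the host, port, and request are placeholders):

```c
/* Minimal TCP client: resolve a host, connect, send a request, read a bit.
 * Host, port, and request are placeholders for illustration. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;        /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;    /* TCP */

    if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
        fprintf(stderr, "name lookup failed\n");
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        perror("connect");
        freeaddrinfo(res);
        return 1;
    }
    freeaddrinfo(res);

    const char *req = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    write(fd, req, strlen(req));

    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);
    }
    close(fd);
    return 0;
}
```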
Just as I was about to transfer into the PhD program, I received a US permanent resident visa. A green card. I had a decision to make: transfer into the PhD program and risk losing the green card, or finish the master’s program and emigrate. I decided to head west to look for the American dream, figuring that the projects I would be working on in the US would be much larger than anything I had worked on to date in the UK and Ireland. I finished my research, moved to Virginia and then Colorado, and graduated in absentia.
Heavy metal
Why do we “make” it so hard?
After a year or so doing IBM mainframe development at MCI Telecommunications in Colorado, I was transferred to a large distributed computing initiative that was being developed on Sun Solaris and IBM AIX. I was back doing UNIX. The machines were beasts. The Solaris machines, Superservers, had originally been developed by Cray Research and SGI, and the AIX boxes were also impressively large.
This was my start at having to develop UNIX-based applications that could be compiled and run consistently on different vendors’ UNIX distributions. That didn’t come for free, because each of the distributions had its own compilers, libraries, and compiler switches. Trying to “make” applications run consistently across Solaris and AIX took real portability effort. Years later, I would do an entire project making builds portable using Imake across multiple versions each of Solaris, AIX, HP-UX, and Windows. That was a dreadful experience. Portability is a hard problem to solve.
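A trivial illustration of the kind of conditional compilation this involved: the platform macros below are the compilers’ standard predefined ones, though the example itself is mine rather than anything from those projects.

```c
/* Pick a platform label at compile time using the compilers' predefined
 * macros. A toy example of the portability shims described above. */
#include <stdio.h>

#if defined(__sun)
#  define PLATFORM "Solaris"
#elif defined(_AIX)
#  define PLATFORM "AIX"
#elif defined(__hpux)
#  define PLATFORM "HP-UX"
#elif defined(_WIN32)
#  define PLATFORM "Windows"
#else
#  define PLATFORM "unknown"
#endif

int main(void)
{
    printf("built for %s\n", PLATFORM);
    return 0;
}
```

In practice these conditionals multiplied quickly across headers, library names, and compiler flags, which is exactly why tools like Imake existed.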
But at MCI, I was developing distributed applications using Common Object Request Broker Architecture (CORBA). It was so new that many enterprise features like CORBA security, distributed transactions, and directory service had yet to be defined. My colleagues and I at MCI developed our own version of these enterprise features and some of what we defined ended up being used in the CORBA standards. This was my first real participation in an open community. It was open standards but not open source.
Around the same time, Sun Microsystems was developing the Java programming language. The promise of Java was that a developer could “write once, run anywhere.” Because of the Java virtual machine (JVM), your Java programs promised portability across Solaris, AIX, HP-UX, Windows, etc. No Makefile gymnastics. There was even a Java language binding specification for CORBA. This project is where I first realized I was becoming an architect. Not the architect I imagined as a child—but a system and application architect for very large enterprises.
SOA where next?
Services. Services everywhere.
I was now regarded as an object-oriented programming (OOP) “expert.” I’d done C++ and Java on various UNIX flavors and on Windows, and I even had enterprise IBM mainframe experience. But when doing object-oriented programming in a distributed fashion, we started talking more about services. We applied the UNIX “services” language to the API-style interfaces we were developing in CORBA interface definition language (IDL) and as Java interfaces. It wasn’t long before many of us started using the term service-oriented architecture (SOA). The distinction mattered because OO developers tended to forget about the network and often treated objects as if they were local. Programming that way, as OO purists tended to do, was a disaster for writing distributed applications. You had to be service-oriented, not object-oriented.
Around this time, I became aware of Linux. I had left MCI and started a company with a few other guys (Aurora Technologies Inc.). We specialized in CORBA and other OO-based technologies. Now, with no massive resources at my fingertips, I discovered I could have my own, very cheap, UNIX-like operating system on my home desktop. I dual-booted the family IBM Aptiva with Slackware from Walnut Creek in California. That was my first experience with Linux. Watching Slackware boot was exciting. I didn’t have to spend thousands on a high-powered workstation. I could run Linux on my home desktop.
Linux, though increasing in popularity, was not yet in large enterprises. It was, however, growing rapidly with startups and hosting companies. The cheapest way to get your dream website up and running was to go to a hosting provider and ask for a Linux-based host.
Linux with a Red Hat
Linux was going mainstream and other software products were making sure they certified on a supported commercial distribution—Red Hat Linux, or later, Red Hat Enterprise Linux. At the same time, lots of middleware projects were becoming open source. Enterprise-focused projects—CORBA implementations (TAO), J2EE (JBoss), message-oriented middleware (Apache ActiveMQ), enterprise service bus (Apache Camel), and lots of others—were coming along to replace expensive proprietary commercial products.
JBoss, the company behind an open source J2EE implementation, had been bought by Red Hat. Banks and government agencies had decided that supported open source products were often of excellent quality, with great engineers standing behind them. They could transfer the risk of using open source software to a commercial enterprise like Red Hat.
In 2008, Red Hat invited me to join the company on the Emerging Technologies team in the CTO office. Frankly, joining Red Hat was a kick in the arse for me. I had my head so far up the layers of abstraction in the SOA world that I had almost assumed real innovation in the operating system had slowed or stopped. Boy, was I surprised. The work on the real-time kernel, like high-resolution timers, priority inheritance for preemption, the Completely Fair Scheduler, threaded interrupt handlers, TUNA for RT tuning, and much more, meant that boutique financial trading houses no longer had to invest in maintaining all their own kernel patches. Instead, most of the “secret sauce” was coming into the upstream kernel through the community, with organizations like Red Hat, IBM, and Novell contributing. Now the financial services companies could focus on providing value at their trading application layer.
In 2010, I got involved in a project called OpenShift. It was using Linux containers to provide a Platform-as-a-Service (PaaS). During my 2008 Red Hat interview, I had talked about PaaS as a very interesting development. The PaaS offerings I had played with were all virtual machine-based. Red Hat acquired a company called Makara that was using Linux containers. The OpenShift Linux container unit was called a cartridge. I built my first cartridge for MRG Messaging (a product based on Apache Qpid). What I noticed was that building cartridges was fairly difficult, and it wasn’t clear who would maintain a cartridge after it was done.
In 2013, the Docker open source project changed all that—and the entire PaaS and cloud landscape. Not long after, Google announced it would open source the Kubernetes project for orchestrating the deployment of applications that use one or more containers and services. Red Hat quickly saw the advantages of a more consistent and easier container build technology, as well as the larger community around an enterprise-class orchestration technology. OpenShift made the pivot to Docker (OCI) and Kubernetes. I contributed to the Docker project and got involved with Project Atomic efforts. Now, Kubernetes is seen by some as the “kernel” of a distributed operating system. I’ve been involved in related projects—Buildah and Podman. Red Hat OpenShift Container Platform is Red Hat’s distinct distributed OS. The next generation of Linux is here.
It’s been a long love affair since 1987, when I first worked on BSD on that Gould machine. Thirty years later, I still find pleasure in opening up a Fedora or Red Hat Enterprise Linux terminal window, seeing the command line prompt, and wondering, “What shall I play with today?”
Want to hear more stories about the OS?
Check out @rossturk’s post on the Magic of Linux, @ThomasDCameron’s post From police officer to Open Source devotee: One man’s story, @ghaff’s story entitled My journey from BASIC to Linux, @MattTheITGuru’s journey From manual to automated DevOps, or @FatherLinux’s exploration of Power, control structures, and open source.
Subscribe to Command Line Heroes, an original podcast by Red Hat, and be sure to follow William on Twitter at @ipbabble.