Issue #5 March 2005

Tiemann's take on the Summit

Back in 1999 there were two must-see conferences on my list: O'Reilly's Open Source Conference (OSCON) and LinuxWorld. OSCON was a must-see because that was where the hackers were. If you wanted to talk Perl with Larry, Python with Guido, Linux with Linus, Cathedrals and Bazaars with Eric, or G++ with, well, me, it was the place to be. LinuxWorld was also on the map because it was the first time many of us would get a chance to see what was new and exciting in the business world of open source. At LinuxWorld one had the chance to hear people practicing their business pitches, see people sizing up the competition, and wonder "what will all this be like in a few years?" The vast array of choice led to a world of near-infinite possibilities.

Well, more than a few years have passed, and the difference between these two types of events has become wider than ever. OSCON is still the premier event for meeting and mingling with free software and open source hackers, learning about new technologies and methods of implementation, and getting a strong dose of vision without too many corporate strings attached. But the Linux and open source world has expanded and matured, and industry tradeshows no longer thoroughly address the market.

Why the Summit?

A year into Red Hat®'s Enterprise Linux® experience, it occurred to me how silly it was for Red Hat to be focusing on conferences where the majority of attendees had already chosen a solution, yet the presentations were still focused on decision-making. It seemed to me that those who had made a choice wanted to learn how to maximize the value of that choice, to talk about how to go deeper into their decision, and to understand how to participate in and influence the direction and capabilities of the platform they had chosen.

Thus, the kind of attendee we have in mind for the Red Hat Summit is not one who wants only to know what's new, how vendor X is different from vendor Y, or whether vendor Z has a bigger or smaller booth than the year before. Those may be interesting questions for the part of the market that has made no choice, but for thousands of customers and hundreds of thousands of servers, the time for those questions has passed. The relevant questions for those who have made a choice are "What can I do today with the decisions I've already made? What decisions should I revisit in light of changing technologies? Did I underestimate what I could do with commodity hardware? How can I translate what worked in Topeka to Toronto, Turin, and Tokyo?" Those questions go unanswered at a conventional Linux-oriented tradeshow.

Last year we received a report from a large commercial (not investment) bank about their experiences migrating a significant J2EE application from Unix to Linux. They reported to us several facts: (1) performance improved 6x versus Unix on a specific application, (2) the improved performance moved a key performance metric from "unacceptable" to "exceeds requirements," (3) costs could be reduced by more than 30% the first year and by more than 60% over three years, and (4) the cost effect of Red Hat Enterprise Linux in the context of their overall hardware and software costs was less than the cost of changing money from one currency to another at a typical foreign exchange counter. In their case, they understood that the longer they spent evaluating the platform, the smaller their return on the ultimate migration project would be.

Technology liquidity

I have always been a fan of "technology liquidity"—the ability to change out one technology for another at relatively low cost. In fact, two years ago I co-presented a keynote speech at LinuxWorld in New York City on the subject. The title was 100 Million Reasons Why Architecture Matters, and the point of the speech was that when new technologies come along that offer tremendous value compared with conventional technologies (in this case, Linux on commodity hardware versus proprietary Unix on proprietary hardware), the value of an enterprise architecture can be measured according to four factors: (1) how quickly can the new technology be adopted, (2) how broadly can the new technology be applied, (3) how does the new technology objectively compare with the old, in terms of price, performance, or price/performance, and (4) what are the risks and risk mitigation costs of bringing in a new technology versus the old. Using data I collected from various Wall Street evaluations, I showed that when Linux on commodity hardware offered a 4x overall price/performance advantage over a proprietary Unix platform (conservative considering that the Chicago Mercantile Exchange recently reported 5x performance at 1/5th the cost), an architecture that allowed Linux to replace Unix across 30% of the enterprise annually (the average rate of capital replacement) to a maximum of 90% of all systems, achieving 90% of expected performance with 10% risk, was worth over $100M in a scenario with $40M of annual spend over a five-year period.
But using the same analysis, if the architecture limited one to replacing only 10% of the systems per year (perhaps because of a lack of immediate application availability, application portability, or certified hardware or software), to a maximum of 50% (perhaps because some applications will never be ported), achieving only 70% of expected performance (perhaps because of limited technical functionality or expertise) with 30% risk (perhaps because of untested software, untested hardware, political challenges, etc.), one could expect at best a $10M savings—less than what a better-than-average professional negotiator could achieve in a similar spending scenario. Thus, with a good match between architecture and newly available technologies, the benefits are tremendous, whereas with a bad architecture, a bad fit, or both, technology itself becomes irrelevant, and only the negotiators earn their keep.
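The contrast between these two scenarios can be sketched as a back-of-envelope model. The model below is a hypothetical reconstruction, not the actual spreadsheet behind the keynote figures: it assumes a fixed annual spend, assumes the migrated share of that spend costs 1/(price-performance advantage × achieved performance) of its former cost, and discounts each year's savings by the risk factor. The function name and parameters are my own illustration, and the exact dollar outputs will differ from the $100M and $10M cited above, since the underlying model was not published.

```python
def architecture_value(annual_spend, years, adopt_rate, adopt_cap,
                       price_perf_advantage, perf_achieved, risk):
    """Hypothetical savings model: how much does an architecture that
    permits rapid, broad adoption of a cheaper platform save over time?"""
    migrated = 0.0       # fraction of the enterprise running the new platform
    total_savings = 0.0
    for _ in range(years):
        # Each year the architecture lets adopt_rate more of the estate
        # migrate, up to the architectural ceiling adopt_cap.
        migrated = min(migrated + adopt_rate, adopt_cap)
        old_cost = annual_spend * migrated
        # Migrated workloads cost 1/advantage as much, scaled down by the
        # fraction of expected performance actually achieved.
        new_cost = old_cost / (price_perf_advantage * perf_achieved)
        # Discount the year's savings by the risk of the new platform.
        total_savings += (old_cost - new_cost) * (1 - risk)
    return total_savings

# Good fit: 30%/yr adoption to a 90% cap, 4x advantage, 90% perf, 10% risk
good = architecture_value(40e6, 5, 0.30, 0.90, 4.0, 0.90, 0.10)
# Poor fit: 10%/yr adoption to a 50% cap, 70% perf achieved, 30% risk
bad = architecture_value(40e6, 5, 0.10, 0.50, 4.0, 0.70, 0.30)
```

Even this crude sketch shows the keynote's point: the same 4x technology advantage yields savings that differ by several multiples depending purely on how quickly and broadly the architecture lets the new technology in.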

This brief story about how the value of enterprise architecture can be quantified, based on the option value of new technologies, is just one example of the kind of insight that can be gained by talking with leading IT architects who are developing today the best practices of tomorrow. And in the great tradition of New Orleans Jazz, I expect that there will be hundreds of variations on this and many other themes.

In my experience talking with literally hundreds of top IT executives, a good enterprise architecture must be designed around the fact that the future holds unknown innovations. Conversely, architectures that preclude the integration or exploitation of these unknown technologies are going to become competitive albatrosses. But what are the fundamental design decisions, the fundamental tradeoffs that one should consider? Those are the questions we will ask and hopefully answer in New Orleans. How will technologies from Red Hat and the open source community change the way people do identity management? Or storage and storage management? Virtualization? Provisioning and monitoring? Application development and deployment? What is the future of open source Java? How can open source continue to provide greater and greater returns on investment beyond just Unix-to-Linux migrations? Those are questions of nuance, detail, and importance that must be intelligently discussed, not drowned out by thousands of loudspeakers amplifying a thousand one-way messages. Customers who have made their choice—Red Hat Enterprise Linux—will find that everything else can be different. And better.

About the author

Michael Tiemann founded the world's first company exclusively devoted to providing commercial support for free (and later open source) software. He is Vice President of Open Source Affairs at Red Hat, President of the Open Source Initiative, and provides financial and other support to organizations that promote software, civil, and artistic freedom, including the Free Software Foundation, the Electronic Frontier Foundation, and the Creative Commons.