This is the first of a series of posts in which we’ll take a look at the technologies, thinking and open source projects that are feeding into future products on the Red Hat cloud computing roadmap.
Different clouds, both public and private, speak different languages.
That this Tower of Babel exists isn’t especially surprising. The speed with which the technologies and practices that define cloud computing are evolving virtually ensures that there’s no way for any formal standard to get out in front of what customers and vendors are experimenting with and implementing every day. While cloud computing often builds on existing products and approaches, it would be naive to assume at this point in its development that any one vendor or standards organization has the template for the “right” language.
Not that we should expect a single lexicon to be adopted by all clouds in any case. Leave aside for the moment that some vendors and service providers prefer incompatible interfaces, which make it harder for customers to hop over to another vendor’s private cloud or another service provider. Beyond that, not all clouds have the same purposes and goals. One might expose lots of options; another might choose to keep things simple. One is most concerned with giving customers tight control over service levels; another just concentrates on cost. Regulations in a specific industry can mandate audit and compliance interfaces that most users don’t need.
In short, one cloud doesn’t fit all and one cloud language doesn’t fit all. Yet, as Scott Crenshaw, vice president and general manager of our Cloud Business, puts it, “The promise of the cloud can’t be achieved if vendors create islands.” That’s because a big part of the vision of cloud computing is abstracting computing resources so that applications can be given precisely the resources that they need in the physical location that makes business sense. That gets a lot harder if incompatible interfaces foreclose easy movement.
At this point, let us be more technically precise about what we mean by “language” in the context of Infrastructure-as-a-Service (IaaS) clouds. Let’s say you want to run an application container, which is to say a virtual machine (VM), in “the cloud.” To do so, you need to be able to spin up a computing resource of the desired size, perhaps allocate persistent storage, set network interfaces, and configure backup systems and so forth based on the application’s needs. Ideally, you’d be able to do this in the same way on every cloud so that you could treat each cloud as effectively part of a single resource pool. But you can’t, because different clouds have different application programming interfaces (APIs). The commands you’d issue to, say, create and start a VM of a given size or type aren’t the same across clouds.
The obvious answer is to use some sort of translation “shim.” If there’s one incantation to create a small Amazon EC2 instance and a different incantation to do the same thing on another service provider, you just need to map one to the other. Pretty straightforward. And, indeed, we see a degree of interoperability achieved by using just this sort of mapping of basic cloud functions in some public cloud management products.
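To make the idea concrete, here is a minimal sketch of such a shim. The provider names and parameter vocabularies below are invented for illustration; they are not the real API signatures of any actual cloud.

```python
# A toy translation shim: rewrite one generic "create instance" request
# into the differing vocabularies of two hypothetical cloud providers.
# Provider names and parameter names are illustrative, not real APIs.

GENERIC_TO_PROVIDER = {
    "provider_a": {"size": "InstanceType", "image": "ImageId", "count": "MinCount"},
    "provider_b": {"size": "flavor", "image": "template", "count": "instances"},
}

def translate_request(provider, request):
    """Rewrite a generic request dict into a provider-specific one."""
    mapping = GENERIC_TO_PROVIDER[provider]
    return {mapping[key]: value for key, value in request.items()}

generic = {"size": "small", "image": "fedora-14", "count": 1}
print(translate_request("provider_a", generic))
# → {'InstanceType': 'small', 'ImageId': 'fedora-14', 'MinCount': 1}
print(translate_request("provider_b", generic))
# → {'flavor': 'small', 'template': 'fedora-14', 'instances': 1}
```

This one-to-one table is exactly the “least common denominator” translation the next paragraph describes: it works only so long as every provider has an equivalent for every generic parameter.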
However, getting beyond least common denominator translation of base-level functions is a trickier problem.
Red Hat began tackling this problem in September 2009 when it introduced the Deltacloud project. As CTO Brian Stevens wrote at the time, “The goal is simple. To enable an ecosystem of developers, tools, scripts and applications which can interoperate across the public and private clouds.”
However, while the big picture goal may have been simple, the details involved in getting there aren’t so straightforward. Not all of these details were technical. For example, it can be difficult for an interoperability project that’s under the effective control of a single vendor to gain broad support. Even if the code is open source, as a number of interoperability frameworks including Deltacloud are, issues of community involvement and project governance remain. That’s a major reason that Deltacloud was made an Incubator Project under the Apache Software Foundation last May. The incubator proposal stated:
There are also no efforts currently to define a truly open-source cloud API, one for which there is a proper upstream, independent of any specific cloud provider. Deltacloud API strives to create a community around building an open-source cloud API in a manner that fully allows for tried-and-true open source mechanisms such as user-driven innovation.
The project has specific technical concepts and goals as well.
The first of these is to be computer language agnostic. By contrast, a framework like jclouds is written in Java and is specifically focused on Java development. Deltacloud is written in Ruby (together with the Sinatra Web framework), but all communications from clients are handled through a REST interface, a widely used style of lightweight client/server interaction. (For example, Amazon Web Services are accessed through REST interfaces in the vast majority of cases.) Deltacloud can then interface with existing cloud APIs through a modular chunk of code called a driver. Thus, different clients can support different computer languages and different drivers can support different target clouds without affecting the Deltacloud core. (Deltacloud can also support clouds directly with native code.)
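Because the interface is REST over HTTP with XML responses, a client in any language can consume it with ordinary tooling. As an illustration, here is how a Python client could parse a response shaped roughly like Deltacloud’s instances collection using only the standard library; the sample document is simplified, and the real response carries more attributes and action links.

```python
# Parse a simplified XML response of the kind a REST call such as
# GET /api/instances returns from a Deltacloud server. The sample
# document below is an illustration, not a verbatim Deltacloud response.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """\
<instances>
  <instance id="i-1">
    <name>web-frontend</name>
    <state>RUNNING</state>
    <hardware_profile id="m1-small"/>
  </instance>
  <instance id="i-2">
    <name>batch-worker</name>
    <state>STOPPED</state>
    <hardware_profile id="m1-large"/>
  </instance>
</instances>
"""

def list_instances(xml_text):
    """Return (id, name, state) tuples for each instance in the response."""
    root = ET.fromstring(xml_text)
    return [(inst.get("id"), inst.findtext("name"), inst.findtext("state"))
            for inst in root.findall("instance")]

for instance in list_instances(SAMPLE_RESPONSE):
    print(instance)
```

The point of the sketch is the decoupling: the client never sees the driver or the target cloud’s native API, only the common REST/XML representation.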
Another advantage of this model is that the client and the target cloud are decoupled from each other. Different clients written in different languages can speak to an arbitrary set of supported clouds. This becomes particularly important when you consider that cloud computing introduces concepts such as resource abstraction which imply that workloads are largely insulated from operational details such as where they are going to run.
As far as mapping of APIs goes, this happens in a couple of different ways. The first is to smooth over any API differences that are essentially mechanical. In human language terms, “horse” in English is the equivalent of “cheval” in French. The words are different, yes, but they have precisely the same meaning. Translation is therefore a straightforward exercise: a simple one-to-one mapping loses no nuance or richness.
However, other mappings aren’t so straightforward. The much-told story about the unusually large number of words that Eskimos have for snow is apparently an urban legend, but the basic idea is that it’s hard to just translate a rich vocabulary into a simple one without losing something in the process. In cloud computing terms, imagine for example if one cloud provider offered just one size of VM while another let you configure a VM of arbitrary size. A lowest common denominator API would only let you create that single size of VM even when running on the more flexible platform.
Deltacloud deals with this challenge by abstracting the models and methods used by different public and private cloud providers to a few fundamental approaches. In essence, it understands how a given cloud performs a given function such as authentication and what resources a “small” VM instance, for example, includes on a given cloud. Defined hardware profiles which specify the amount of CPU power, storage, and so forth for an instance can be mapped as closely as possible to the options offered by a given cloud.
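The profile-matching idea can be sketched as follows. The profile catalog and the closest-fit rule here are hypothetical simplifications; a real Deltacloud driver discovers the offerings from the target cloud itself.

```python
# Map a requested hardware profile (memory in MB, CPU count) to the
# closest offering a given cloud actually provides. The catalog below
# is invented for illustration, not real provider data.

CLOUD_PROFILES = {
    "fixed-size-cloud": [{"name": "standard", "memory": 2048, "cpu": 1}],
    "flexible-cloud": [
        {"name": "m1-small",  "memory": 1024, "cpu": 1},
        {"name": "m1-medium", "memory": 2048, "cpu": 2},
        {"name": "m1-large",  "memory": 8192, "cpu": 4},
    ],
}

def closest_profile(cloud, memory, cpu):
    """Pick the smallest offered profile that satisfies the request,
    falling back to the largest available if nothing is big enough."""
    offered = sorted(CLOUD_PROFILES[cloud], key=lambda p: (p["memory"], p["cpu"]))
    for profile in offered:
        if profile["memory"] >= memory and profile["cpu"] >= cpu:
            return profile["name"]
    return offered[-1]["name"]

print(closest_profile("flexible-cloud", 1500, 2))    # → m1-medium
print(closest_profile("fixed-size-cloud", 4096, 4))  # → standard (only choice)
```

Note how the fixed-size cloud illustrates the lowest-common-denominator problem from the Eskimo-words-for-snow analogy: whatever you ask for, you get the one profile it offers.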
Given how central concepts like abstraction and workload mobility are to cloud computing, interoperability between clouds will only grow in importance. At the same time, this is a rapidly developing and fluid area, so the idea of locking down a rigid set of interfaces tied to a single vendor or a slow-moving formal standardization effort isn’t particularly appealing either. Deltacloud tackles the problem through a combination of open source development and a specific and complementary focus on community and governance.
Coming next: A podcast discussion with Red Hat technical director Carl Trieloff about what’s important in a cloud architecture.