Best Practices: Migration
by Michael Tiemann, Chief Technical Officer
Birds do it, bees do it, but most managers of IT try to have nothing to do with it. We're talking about migration, of course. While different animal species migrate for different reasons, the fundamental reason is one of nature's best practices: survival. Migration is a remarkable adaptation that makes it possible for a species to go where the food is plentiful or the environment is hospitable without depending on a benevolent zookeeper.
One would expect that migration from one hardware or software platform to another--for reasons of cost, performance, reliability, security, or a combination--would be a natural, if not highly developed best practice of the modern IT manager. Alas, this practice is only partially developed, in limited areas, and most don't even consider it migration at all--merely competition among commodity suppliers.
Moore's Law, as practiced by Intel, has encouraged hardware vendors to migrate ever up the performance curve because to do otherwise is to become extinct. Hardware manufacturers who have honed the best practice of just-in-time manufacturing with the best components deliver not only continuous improvement to customers but also the most competitive products at any point in time. They have used migration best practices to become industry leaders.
For whatever reason, proprietary RISC microprocessors have had a more difficult time migrating to newer and better manufacturing technology, and as a consequence, are unable to maintain competitive performance. Their failure to implement a best practice for microprocessor technology, or to adopt a model that did, has now forced their customers to confront, on their own, the question of migration.
From our experience, the best practice way to handle a migration strategy is to break it down into four sub-questions. Considering all systems that could conceivably benefit from a migration: How complete will the migration be? How much time will it take? How much actual benefit can be realized? How much risk does the plan contain?
How complete will the migration be?
Systems that are not modular can only be migrated when every necessary component is ready to go. Put another way, a modular system lets you migrate 100% of the 80% that would benefit from migration. Conversely, systems that cannot be separated from platform-level dependencies (even if those dependencies are only 1% of the application's functionality) cannot be migrated at all. The three most important issues that define or defy application portability are data formats, network protocols, and documented APIs. If these are documented, and better yet, if you have the same software available on your legacy and future systems, application portability can be a snap. If your data is in a proprietary format, readable and writable only by proprietary applications bound to your legacy platform, you need to migrate your data or write off migrating applications that use that data. The same goes for network protocols and APIs.
How much time will it take?
Guitar-maker Ernie Ball was surprised to discover that the platform on which they had built their business didn't allow them to run their business. Because they had never designed for portability, it took them two years to migrate away from this hostile platform. Amazon.com, by contrast, had designed with portability in mind and was able to migrate a substantial fraction of its infrastructure from one production platform to another in 120 days. In general, a best-practice design goal would be the ability to migrate application components within a few days (many Red Hat customers report migrating applications from IA-32 to IA-64, for example, simply by recompiling), applications within a few weeks, and production environments in 90-180 days.
How much actual benefit can be realized?
When USA Today reports that Dresdner Bank reduced cost by a factor of more than 12 while improving performance by a factor of 90, should we all expect to see a 100x price/performance improvement by replacing any Unix server with an arbitrary Linux server? Of course not. In fact, predicting the actual benefit of replacing one platform with another is one of the most difficult challenges because in most cases, proprietary software makes it impossible to understand the true aspects of a proprietary system's performance. But the difficulty of pinpointing exactly why one system performs better or worse than another is no reason to ignore a difference that works in your favor: any such difference is one you want to leverage. With Moore's Law offering 2x the performance every 12-18 months, it makes no sense to attempt a 12-18 month migration just to get a 2x performance improvement (presuming your platform tracks to Moore's Law). On the other hand, if competing platforms demonstrate 4x, 10x, 40x, or 100x price/performance advantages, in addition to better future performance due to better tracking of Moore's Law, it would clearly be a bad practice not to make a good-faith effort to at least measure what might be possible, if not optimal.
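The Moore's Law argument above can be reduced to a simple break-even test: a migration is only worthwhile if the target platform's advantage exceeds what waiting would have delivered anyway. The sketch below illustrates that arithmetic; the doubling period and advantage figures are illustrative assumptions, not data from any specific migration.

```python
# Break-even sketch for the Moore's Law argument: staying put already
# yields improvement over the migration period, so the target platform's
# advantage must beat that free baseline. Numbers are illustrative.

def moores_law_gain(months, doubling_months=18):
    """Performance multiple gained by simply waiting, per Moore's Law."""
    return 2 ** (months / doubling_months)

def migration_worthwhile(migration_months, target_advantage,
                         doubling_months=18):
    """True if the target's price/performance advantage exceeds the
    improvement Moore's Law would have delivered over the same period."""
    baseline = moores_law_gain(migration_months, doubling_months)
    return target_advantage > baseline

# An 18-month migration for a 2x gain is a wash: Moore's Law delivers
# 2x for free over those 18 months.
print(migration_worthwhile(18, 2.0))    # False
# A 12-month migration for a 10x advantage is clear-cut.
print(migration_worthwhile(12, 10.0))   # True
```

The asymmetry the essay describes falls out directly: small multiples are erased by the baseline curve, while 10x-100x advantages survive any plausible migration timeline.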
How much risk does the plan contain?
When the potential payoff of a migration is an 80% or 90% reduction in direct hardware and software costs, the expected payoff is so great that even a 30% risk of failure favors the bold. Managing to zero risk is not a best practice--it's no practice at all. Knowing one's risk and being able to mitigate it with experience and external resources is good, but better still is knowing how to limit risk by separating it into discrete projects. Modular architecture makes it possible to expose oneself to only a small amount of risk at any one time. Most successful large-scale migration projects start small and then continue small, moving 10-100 applications or platforms at a time, not thousands or tens of thousands. A best practice from the risk perspective is to focus on reducing the cycle time of a migration from months to weeks, rather than trying to increase the lot size from 10s to 1000s (where risk increases 100x with little increase in reward).
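The claim that "even a 30% risk of failure favors the bold" is an expected-value argument, and it can be made concrete with a small sketch. The annual budget and failure-cost figures below are assumptions chosen for illustration; only the 80% reduction and 30% failure probability come from the discussion above.

```python
# Expected-value sketch of the risk argument: a large potential cost
# reduction can justify a migration despite a substantial probability of
# failure. Budget and sunk-cost figures are illustrative assumptions.

def expected_savings(annual_cost, reduction, p_failure, failure_cost):
    """Expected payoff = P(success) * savings - P(failure) * sunk cost."""
    return (1 - p_failure) * annual_cost * reduction - p_failure * failure_cost

# 80% reduction on a hypothetical $1M annual platform budget, with a 30%
# chance the migration fails and sinks $200k of effort:
ev = expected_savings(1_000_000, 0.80, 0.30, 200_000)
print(round(ev))  # ~500000: a strongly positive expected payoff
```

The same function also shows why small lot sizes win: moving 10 applications at a time keeps `failure_cost` small and lets each successful batch fund the next, whereas a thousand-application big bang concentrates the entire downside in one roll of the dice.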
The quality with which one can answer these four questions can mean the difference between remaining chained to a dying platform and being the hero who helps turn loss into profit or profit into heaps more profit.
The following stories, collected by Red Hat and our many partners, illustrate how best migration practices deliver the goods, no matter how long the journey might first appear:
And of course, visit the Red Hat Linux Migration Center for more success stories.
The theme that unites these stories is that IT flexibility and choice, maintained as a continuous competence rather than a one-time event, are the key to keeping up with the ever-changing technology environment. Those who have both an architecture that is flexible and the willingness to always seek the best technology assets are the ones who will survive for another year.
Of course, migration for migration's sake is not a best practice. Indeed, such migrations are a waste of energy. They increase cost, reduce performance, decrease reliability, degrade security, and ultimately expose the organization to new risks that are difficult to assess. Many IT managers have experienced the cost-without-benefit that occurs when vendors force migration by abusing market power (the so-called upgrade treadmill). We believe that a best practice in that scenario is to use the migration skills acquired in such an environment to migrate to a safer haven.
In summary, in an environment where change is constant and competition fierce, IT managers who do not have the agility to migrate--to leave any platform or technology that has outlived its usefulness or trustworthiness--are doomed to share the fate of their legacy systems: extinction.