Red Hat Blog
This is the fourth in a series of posts that delve deeper into the questions that IDC’s Mary Johnston Turner and Gary Chen considered in a recent IDC Analyst Connection. The fourth question asked:
What about existing conventional applications and infrastructure? Is it worth the time and effort to continue to modernize and upgrade conventional systems?
In an earlier post in this series, I discussed how both the economics and the disruption associated with the wholesale replacement of existing IT systems make it infeasible under most circumstances. In their answer to this question, Mary and Gary highlight the need for these existing systems to work together with new applications. As they put it: “Much of the success of cloud-native applications will depend on how well conventional systems can integrate with modern applications and support the integration and performance requirements of cloud-native developers.”
This modernization takes a variety of forms. Mary and Gary touch on a number including migrating from legacy UNIX systems where appropriate, introducing policy-based automation and orchestration, and providing on-demand access to development environments.
One well-established aspect is the migration from old proprietary systems and software to volume hardware and modern open source software like Linux and JBoss middleware. This is a well-mapped migration path for enhancing IT performance and increasing flexibility, and it has helped customers such as CingleVue International (an IT company headquartered in Australia), a large US telecommunications service provider, and a large US publishing company significantly reduce costs as compared to proprietary systems and software. Modernizing a classic IT infrastructure in this way reflects the focus on efficiency and stability when making changes in these types of environments.
Another important aspect of improving efficiency is automation. Automation is a big win in part because it eliminates the labor associated with repetitive tasks. The end goal is to make automation pervasive and consistent, using a common language across both classic and cloud-native IT. For example, Ansible, which was recently acquired by Red Hat, allows configurations to be expressed as “Playbooks” in a data format that can be read by both humans and machines. This makes them easy to audit programmatically and easy for non-developers to read and understand. Even narrowly targeted uses of automation are a highly effective way for organizations to gain immediate value from DevOps.
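To illustrate that readability, here is a minimal sketch of what such a Playbook can look like. The host group, package, and service names below are illustrative assumptions, not taken from the post:

```yaml
# Hypothetical playbook: keep a web tier installed and running.
# Host group ("webservers") and package/service names are examples only.
- name: Ensure web servers are configured
  hosts: webservers
  become: yes
  tasks:
    - name: Install the httpd package
      yum:
        name: httpd
        state: present

    - name: Ensure httpd is started and enabled at boot
      service:
        name: httpd
        state: started
        enabled: yes
```

Even someone who has never written a Playbook can follow the intent of each step, which is what makes this format practical for audits and for non-developers.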
Codifying tasks also documents them so that they can be performed correctly, securely, and repeatably across different infrastructure types and at different scale points. Thus, automation doesn’t just cut out manual work that isn’t adding value. It also improves quality, because manual processes are error-prone. Automation under policy-based control also supports complex task and resource orchestration to help ensure service availability and performance. All this helps IT maintain control of applications and infrastructure capacity, a key part of built-in security features, compliance, and governance.
Closely related to automation is self-service, which makes both developers and operations more productive by making them more self-sufficient. There are many approaches and stages to self-service. It’s perhaps most associated with a full platform-as-a-service (PaaS) environment like OpenShift by Red Hat, which provides self-service as part of an integrated toolset in greenfield cloud-native environments.
However, management tools like Ansible Tower and Red Hat CloudForms can deliver end-to-end self-service in traditional infrastructures. A typical automated workflow begins with operations provisioning a containerized development environment, whether a PaaS or a more customized environment. This provides an example of how a mature DevOps process separates operations and developer concerns; by providing developers with a dynamic self-service environment, operations can focus on deploying and running stable, scalable infrastructure while developers can focus on writing code.
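To sketch how such a workflow might look, an operations team could publish a Playbook that developers invoke on demand to stand up a containerized environment. The image name and port mapping below are illustrative assumptions, not details from the post:

```yaml
# Hypothetical playbook: provision a containerized development environment
# on request. Image name and port mapping are examples only.
- name: Provision a self-service dev environment
  hosts: localhost
  tasks:
    - name: Start a development container
      docker_container:
        name: dev-env
        image: rhel7-dev-base   # assumed base image for the dev toolchain
        state: started
        published_ports:
          - "8080:8080"
```

Because the environment is defined in a reviewable file rather than built by hand, operations keeps control of what runs, while developers get an environment in minutes instead of filing a ticket.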
As I’ve written previously, the idea that different parts of an IT organization will and should operate in different modes or at different speeds sometimes gets a bad rap because it’s interpreted as saying that classic IT infrastructures and applications should continue to be used as-is without any changes or updates. This would indeed be a bad idea. However, selectively modernizing classic IT can be a great asset as part of a hybrid cloud portfolio.
Read more in this series: