
Adoption of OpenStack in the enterprise has been progressing steadily over the last two years. As a Forrester Report* on enterprise adoption from September noted, “OpenStack demonstrates the completeness, robustness, and capability upon which a broader range of adopters can depend.” OpenStack deployments have proven to be complex in larger IT organizations, though not for the reasons you might anticipate. Much has been made of the complexity of installing the software, but we’ve found that the lion’s share of the effort in these implementations goes into integrating IaaS into the fabric of enterprise IT and evolving existing processes to meet the expectations of the user community.

The first area where we’ve seen complexity in the adoption of OpenStack is the deployment of the infrastructure software itself. While most large organizations now have a strong competency in agile development practices at the application layer, very few have a similar competency at the infrastructure layer. Disciplines like incremental release planning, automated testing, and continuous delivery are often applied to the OpenStack deployment process with great success, but these application development processes and tools need to be adapted to the requirements of the infrastructure team and integrated into their workflow. The benefits of this work are large: as infrastructure teams adopt version-controlled configuration management, automated deployments, and automated testing, the scale at which they can operate increases dramatically.
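As a minimal illustration of what automated testing at the infrastructure layer can look like, the sketch below uses the openstacksdk library and an assumed clouds.yaml entry named "mycloud" to run a post-deployment smoke test. It is the kind of check a team might wire into a CI pipeline after every change, not a definitive implementation.

```python
# Minimal post-deployment smoke test for an OpenStack cloud.
# Assumes the openstacksdk package and a clouds.yaml entry named "mycloud".
import openstack


def test_core_apis_respond():
    """Fail the pipeline if any core service API does not answer."""
    conn = openstack.connect(cloud="mycloud")
    # Each list() forces a round trip to the corresponding service API;
    # an unreachable or broken endpoint raises and fails the test.
    list(conn.image.images())       # Glance
    list(conn.compute.flavors())    # Nova
    list(conn.network.networks())   # Neutron


if __name__ == "__main__":
    test_core_apis_respond()
    print("Core OpenStack APIs are reachable.")
```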

Adopting Infrastructure-as-a-Service can mean a number of changes for the deployment of tenant environments as well. Traditional server workloads have been engineered around a hardware life cycle, and they typically stay in service for three to five years. As enterprises have automated the build and initial configuration of these systems, a build time of one to two hours has become fairly common. Automated builds for Windows systems may include one or more reboots as systems are initially configured. Many enterprise system builds are done in two stages: the first stage installs the operating system, performs initial system configuration, and registers the system with a centralized configuration management system; the second stage installs application server software and the application workload. The second stage may immediately follow the first or may be manually initiated by the user who requested the system.

In contrast, most OpenStack workloads expect to be fully configured and in service within ten minutes. The workflow from image instantiation to application deployment is typically fully automated, although it may still progress in two stages. While it is possible that an enterprise’s current configuration management tooling and process would be able to deliver a configured system in this time frame, it’s likely that at least the process and often the tools will need to be re-engineered to provide the expected service level. Given this, many organizations that we’ve worked with include an effort to modernize the operating system build process and tools in their Infrastructure-as-a-Service deployment projects. A benefit of this approach is that the new processes developed to enable elasticity in cloud workloads can be applied to the rest of the server estate to increase elasticity in those environments as well.
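The sketch below shows what that fully automated, single-pass flow can look like with the openstacksdk library: an instance is created with a cloud-init payload and configures itself on first boot. The cloud, image, flavor, and network names, and the contents of the cloud-init script, are illustrative assumptions.

```python
# A sketch of a one-shot provisioning flow: the instance boots, pulls its
# cloud-init user data, and configures itself with no manual second stage.
import base64
import openstack

# Illustrative cloud-init payload: install and start a web server on first boot.
USER_DATA = """#cloud-config
packages:
  - httpd
runcmd:
  - systemctl enable --now httpd
"""

conn = openstack.connect(cloud="mycloud")

# Names below are assumptions for illustration.
image = conn.image.find_image("rhel-7-guest")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("tenant-net")

server = conn.compute.create_server(
    name="app01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    # Nova expects user data base64-encoded on the wire.
    user_data=base64.b64encode(USER_DATA.encode()).decode(),
)

# Block until the instance is ACTIVE; the cloud-init payload then finishes
# configuration inside the guest within the first few minutes of boot.
server = conn.compute.wait_for_server(server)
print(server.name, "is", server.status)
```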

Another large difference in the initial configuration of new workloads in elastic environments comes with the introduction of Software-Defined Networking and the widespread use of Network Address Translation. Many of the configuration management, IP address management, and identity management systems that enterprise IT organizations have adopted use IP addresses or MAC addresses as unique identifiers for managed systems. Elastic workloads, in contrast, typically use private addressing, and IP uniqueness is not guaranteed. Managed systems in cloud environments are instead assigned their personality during second-stage configuration, via a metadata service, as they boot.
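For illustration, a booting instance might pull its personality from the OpenStack metadata service with something like the following; the "role" key is a hypothetical example of custom metadata attached at provisioning time.

```python
# A sketch of a guest reading its identity from the OpenStack metadata service
# rather than deriving it from an IP or MAC address. Runs inside the instance;
# the URL is the well-known link-local metadata endpoint.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"

with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
    metadata = json.load(resp)

# "uuid" is a standard field; the "meta" dict carries whatever key/value pairs
# were attached at provisioning time (the "role" key is an assumption here).
print("instance uuid:", metadata["uuid"])
role = metadata.get("meta", {}).get("role", "unassigned")
print("configuring this instance as:", role)
```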

These two approaches to systems management are illustrated by the different workflows for provisioning a configured system in Red Hat Satellite 5 and Red Hat Satellite 6. In Satellite 5, an administrator assigns an activation key or kickstart profile to a system by MAC or IP address; this system record is defined in Satellite before the system boots for the first time. In OpenStack, however, the MAC address and IP address of an instance are not known before it is provisioned. To compensate, in Satellite 6 the administrator assigns an activation key and host group at provisioning time via the OpenStack metadata service. The managed instance queries the metadata service on boot, registers itself with the Satellite server, and then requests its configuration based on this metadata rather than its network identity.
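A sketch of the provisioning side of that flow, again using openstacksdk: the activation key and host group are attached as instance metadata, which the guest later reads back from the metadata service during registration. The key names and values here are assumptions for illustration, not a fixed Satellite contract.

```python
# Attach registration hints as instance metadata at provisioning time.
import openstack

conn = openstack.connect(cloud="mycloud")

server = conn.compute.create_server(
    name="web01",
    image_id=conn.image.find_image("rhel-7-guest").id,
    flavor_id=conn.compute.find_flavor("m1.small").id,
    networks=[{"uuid": conn.network.find_network("tenant-net").id}],
    # Key/value pairs exposed to the guest under "meta" in meta_data.json;
    # the key names and values are hypothetical examples.
    metadata={
        "activation_key": "ak-web-prod",
        "hostgroup": "web-servers",
    },
)
```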

The changes in tools and processes that an infrastructure management team needs to make to enable elastic workloads aren’t limited to systems and configuration management. Capacity management, performance, and monitoring systems are typically updated or replaced as well during Infrastructure-as-a-Service implementations. OpenStack provides a wealth of performance data for the tenant space via the Ceilometer API, but virtually no performance data for the underlying infrastructure. As a result, organizations need to adopt hybrid management strategies, in which the hypervisors and control plane are monitored as traditional infrastructure and the tenant space is monitored with second-generation tools designed for elastic infrastructure.
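As one possible illustration, tenant-side samples can be pulled from the Ceilometer v2 REST API with a plain HTTP call; the endpoint URL and the Keystone token below are placeholders, and in practice the token would come from a normal Keystone authentication.

```python
# A sketch of reading tenant performance samples from the Ceilometer v2 REST API.
import requests

CEILOMETER_URL = "http://ceilometer.example.com:8777"  # assumed endpoint
TOKEN = "<keystone-token>"                             # placeholder

resp = requests.get(
    f"{CEILOMETER_URL}/v2/meters/cpu_util",
    headers={"X-Auth-Token": TOKEN},
    params={"limit": 10},
    timeout=30,
)
resp.raise_for_status()

# Each sample carries the resource it was measured on, the value, and a timestamp.
for sample in resp.json():
    print(sample["resource_id"], sample["counter_volume"], sample["timestamp"])
```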

Given the large scope of change required for a successful Infrastructure-as-a-Service deployment in the enterprise, many of the companies that we work with are looking for a compelling return on their investment before beginning these projects. The good news is that we’re seeing great returns from the adoption of these new tools and practices. FICO, for example, reported a 50% reduction in time to market and a 30% reduction in total cost of ownership after adopting a Red Hat OpenStack Platform solution. As more organizations apply these new tools and practices to the rest of their enterprise IT footprint, they can expect these kinds of improvements to be amplified.

 

*http://www.openstack.org/enterprise/forrester-report/

