There’s no question DevOps is more than technology. DevOps is a trifecta of people, processes and technology. Its goal is to help IT organizations across industries, including telecommunications, more quickly deliver optimal services, best meet the needs of their internal and external customers, and foster innovation. But what are the technology pillars in a successful DevOps initiative?
We’d argue that the key technology pillars are continuous integration/continuous delivery (CI/CD), configuration and change management, management and monitoring, and deployment pipelines. Each pillar plays a vital role in providing DevOps that is easily and effectively managed, fine-tuned, standardized, automated, measurable and efficiently executed.
CI/CD tools are designed to encourage collaboration among developers, enable early and regular testing and verification, and help organizations cut the time it takes to develop, code and test applications and prepare them for deployment. Tools include Jenkins CI, an open-source CI server; Atlassian’s Bamboo CI/CD server; and Travis CI, an open-source, hosted, distributed continuous integration service used to build and test projects hosted on GitHub.
CI is an agile development practice. Whenever IT begins work on an application, there are typically multiple teams building and testing individual pieces of the overall application. Regularly, even daily, the individual builds are integrated into a common build, or integration build, for testing and verification. CD moves beyond the development process and focuses on getting new features, new applications or changes out to customers as quickly as possible. When a CI build is finished and determined to be functional, it is moved over to quality assurance (QA); from there, if all is well, it is moved to a staging area that serves as a mock production environment for further testing and validation. Automated testing is a critical component of the development and QA processes, and part of CI/CD.
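To make that promotion flow concrete, here is a minimal Python sketch of a CI-style gate: run each stage in order and stop at the first failure. The stage names and commands are illustrative stand-ins, not the behavior of any particular CI tool.

```python
import subprocess

# Hypothetical stage commands; a real project would invoke its own
# build and test tooling (compilers, test runners, packagers) here.
STAGES = [
    ("integration build", ["echo", "building"]),
    ("unit tests", ["echo", "testing"]),
]

def run_ci_pipeline(stages):
    """Run each stage in order; fail fast on the first nonzero exit code."""
    for name, cmd in stages:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return f"FAILED at {name}"
    return "ready for QA"

print(run_ci_pipeline(STAGES))
```

The fail-fast design mirrors what CI servers like Jenkins or Bamboo do: a broken integration build never reaches QA.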
Configuration management and change management have to be considered early on, and with the bigger picture in mind. Every new application release, no matter how small the changes, will have measurable impact across the enterprise. A common, standardized environment with automated configuration and change management helps teams understand the impact of their builds and verify that builds, patches and installs are all working within the parameters of that baseline. This assurance is particularly important because enterprise environments running in a cloud can have thousands of servers, each running hundreds of applications. Artisanal builds of software are no longer a good model for cloud computing. Puppet and Chef are two open-source tools designed to make it easier for developers to manage change and configurations across the enterprise.
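The declarative model behind tools like Puppet and Chef can be sketched in a few lines of Python: declare a desired baseline, compare it to the actual state, and flag (or correct) any drift. The resource names and values below are hypothetical.

```python
# Toy sketch of declarative configuration management: the baseline is
# declared once, and drift is anything that differs from it.
desired = {"ntp": "installed", "httpd": "running", "max_connections": "512"}
actual  = {"ntp": "installed", "httpd": "stopped", "max_connections": "256"}

def detect_drift(desired, actual):
    """Return {setting: (found, wanted)} for every setting off the baseline."""
    return {k: (actual.get(k), v)
            for k, v in desired.items()
            if actual.get(k) != v}

for key, (found, wanted) in detect_drift(desired, actual).items():
    print(f"{key}: found {found!r}, remediating to {wanted!r}")
```

A real configuration management tool would then apply the remediation (restart the service, rewrite the config file) rather than just print it.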
Developers also need to monitor application-specific functionality beyond what infrastructure monitoring tools provide, which means they need application management and monitoring tools that offer a single point of control for deploying, managing and monitoring. Such tools, including Red Hat’s JBoss Operations Network, enable discovery and inventory, configuration management, application deployment, the ability to perform and schedule actions on servers, applications and services, performance and availability monitoring and measurement, and provisioning. Red Hat’s newest version of Satellite provides a single management console and methodology for managing the tools required to build, deploy, run and retire a system. Satellite 6 provides a single content view, a collection of RPMs and/or Puppet modules that have been refined with filters and rules, as well as provisioning, system discovery and drift remediation for automatically correcting system state with reporting, auditing and a history of changes.
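As a toy illustration of the availability monitoring such tools automate, here is a Python sketch that averages response-time samples against an alert threshold. The samples and threshold are made up for the example; a real monitoring stack would collect these from agents or probes.

```python
def check_availability(samples_ms, threshold_ms=500):
    """Average response-time samples and flag when the mean crosses
    the alert threshold. Both inputs are illustrative, not from any
    specific monitoring product."""
    avg = sum(samples_ms) / len(samples_ms)
    return {"avg_ms": avg, "alert": avg > threshold_ms}

status = check_availability([120, 340, 900, 700])
print(status)  # average of 515 ms exceeds the 500 ms threshold
```

In practice the alert would feed back into the deployment pipeline, triggering investigation or a rollback rather than a print statement.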
It’s wise to view DevOps as a manufacturing pipeline for creating and deploying the best software in the shortest time possible, giving organizations a competitive advantage. The deployment pipeline—a CI term—should consist of all the processes that make up DevOps: a plan, coding and automated builds, integrated builds, testing and verification, functional environment testing, QA, production testing that includes load/stress testing, and then release into production. But it doesn’t stop there; the pipeline has no real endpoint. While in operation, management and monitoring provide feedback that may result in application changes, which starts the pipeline process all over again. It’s important to note that, as organizations develop a deployment pipeline, they need to consider any bottlenecks that may occur, a way to view the entire operation and the pipeline components (processes, people and tools) from a single point, and mechanisms for improving the pipeline.
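The stages above can be sketched as a simple gated sequence. This Python toy, with hypothetical stage names and a stub gate function, shows how a failed stage surfaces as a bottleneck rather than a release.

```python
# Illustrative stage names only; each would wrap real tooling in practice.
PIPELINE = ["plan", "build", "integrate", "test", "QA",
            "stage", "load test", "release"]

def run_pipeline(stages, gate):
    """Advance through stages in order; gate(stage) decides whether the
    build is promoted past each one. Returns the completed stages and
    the stage that blocked progress (None if it shipped)."""
    completed = []
    for stage in stages:
        if not gate(stage):
            return completed, stage  # bottleneck: pipeline stopped here
        completed.append(stage)
    return completed, None

# Stub gate: everything passes except load testing.
done, blocked_at = run_pipeline(PIPELINE, gate=lambda s: s != "load test")
print(done, blocked_at)
```

Returning the blocking stage explicitly is one way to get the single-point view of the pipeline the paragraph above calls for: you can see at a glance where work is piling up.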
We’d love to hear your thoughts on DevOps, how your organization is using it, and which technologies you have deployed. Let us know in the comments section below!