Pressure has never been greater for IT to deliver meaningful services to the business and its customers. An organization, however, is only as good as its underlying processes. As new tools, technologies, and frameworks evolve to let developers release new features and functionality quickly, enterprises are turning to DevOps methodologies to facilitate better communication and collaboration.
Andrew Block, a senior consultant at Red Hat, will be speaking on this topic at DevOps Enterprise on October 21, 2015. Below is a primer from him on what he'll cover:
"Although the DevOps movement clearly and correctly centers around culture and the processes that guide or derail our success, the root of this shift often gets overlooked in favor of the symptoms. Sure, we all want our teams better aligned and more consistent, our processes streamlined and automated, our release cycles as short as humanly (or inhumanly) possible, but we have to ask ourselves why that’s the case. It’s not only because the previous way of doing things isn’t completely effective, it’s an entirely different value proposition that’s required. As IT is increasingly expected to do more with less and deliver better services quicker, DevOps addresses a need to more clearly demonstrate practical wins and measurable achievements.
DevOps addresses this need under the assumption that an organization is only as good as its underlying processes, which is, of course, true. By bridging groups that were previously siloed and replacing hindering manual processes, it enables rapid development: new and better features can be continuously released, tested, and optimized, delivering tangible value not just from a technology standpoint, but for the business and its users or customers. That said, technology is a key enabler of a DevOps transformation when it acts as a framework for better communication and collaboration, removing some of the barriers that stand in the way of meaningful productivity.
We've frequently talked about how DevOps practices can be accelerated in a Platform-as-a-Service-enabled environment: seamlessly moving code from an idea to production, facilitating appropriate control points, and automating complex testing and provisioning processes. OpenShift Enterprise 3 offers improvements that further expedite CI/CD adoption. By allowing developers to focus strictly on application code rather than the underlying infrastructure, it becomes much simpler and faster to define what gets built and deployed when you don't have to worry about provisioning. All of that can be done within OpenShift for dev, test, and production environments, while an orchestration tool such as Jenkins manages the pipelines. This promotes code across multiple environments and allows for better transparency at each step of the software delivery lifecycle, encouraging concepts such as infrastructure-as-code and applications packaged into images and deployed to containers, which can easily be orchestrated within OpenShift.
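As a minimal sketch of what promoting code across environments can look like in OpenShift 3, using the `oc` command-line client (the project and application names here are hypothetical, and a real pipeline would typically have Jenkins run these steps):

```shell
# Separate projects per stage (names are illustrative)
oc new-project myapp-dev
oc new-project myapp-prod

# Build and deploy in dev directly from source
oc new-app https://github.com/example/myapp.git -n myapp-dev

# Once the dev deployment passes its tests, promote the exact same
# image to production by re-tagging it -- no rebuild, no drift
oc tag myapp-dev/myapp:latest myapp-prod/myapp:promoted
```

The key idea is that promotion moves an already-built, already-tested image forward rather than rebuilding per environment, so what you tested is what you run.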
The integration that now exists between OpenShift and containers further empowers developers by decoupling application requirements from the base operating system, allowing for better agility, more control, and a CI structure that packages the application and its environment together across multiple platforms. Your image and code need to be created and packaged only once; they can then be deployed and reused across any environment and scaled with ease. Even the most monolithic applications can be broken down into more manageable chunks, simplifying deployment into a single portable artifact.
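Packaging the application and its environment together can be as simple as a container image definition. A minimal sketch (the runtime, paths, and artifact names are hypothetical):

```dockerfile
# The base image carries the OS userland, decoupling the app from the host
FROM registry.access.redhat.com/rhel7

# Install the runtime the application needs (example: a Java runtime)
RUN yum install -y java-1.8.0-openjdk-headless && yum clean all

# The application itself -- built once, deployed anywhere the image runs
COPY target/myapp.jar /opt/myapp/myapp.jar

CMD ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Because the dependencies travel inside the image, the same artifact behaves identically in dev, test, and production.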
Infrastructure-as-code, or programmable infrastructure, is another way developers are empowered: configuration management and automated provisioning become part of their workflow. Those tasks are no longer areas where you must rely on other people or additional technology; they live in your code, written in familiar languages, and are automatically replicated wherever and whenever the application is deployed. And with the ability for developers to define and use templates from within the PaaS solution, as well as workflows describing the delivery pipeline within the continuous integration environment, you ensure environments remain in compliance with policy in a transparent manner."
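In OpenShift 3, for instance, such definitions can be captured as templates and versioned alongside the application code. A minimal, hypothetical sketch of a parameterized template (object names and parameters are illustrative, and a complete template would carry labels and selectors as well):

```yaml
apiVersion: v1
kind: Template
metadata:
  name: myapp-template        # hypothetical template name
parameters:
- name: APP_ENV               # varied per environment (dev, test, prod)
  value: dev
objects:
- apiVersion: v1
  kind: DeploymentConfig      # describes how the app image is deployed
  metadata:
    name: myapp
  spec:
    replicas: 1
    template:
      spec:
        containers:
        - name: myapp
          image: myapp:latest
          env:
          - name: APP_ENV
            value: ${APP_ENV}
```

Because the environment definition is itself code, every environment instantiated from the template is reproducible and reviewable like any other change.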
Technology like containers or an orchestration tool is not DevOps in itself, but it can get you to the point where you can demonstrate progress and keep teams from lapsing into old habits. I'll be exploring some of these concepts in more detail during an upcoming talk and demo at the DevOps Enterprise Summit in San Francisco on October 21, and again during a webinar hosted by Red Hat on October 27, which will also be available on demand immediately after the live session. If you'll be in San Francisco, please stop by, or join the webinar to learn more and participate in the discussion, or feel free to post your thoughts and comments here.
About Andrew Block, Red Hat senior consultant
Andrew is a senior member of the Red Hat Consulting team focused on delivering solutions to business challenges. Throughout his career, he has emphasized the benefits of automation at each step of the software development life cycle. He specializes in systems integration and continuous delivery methodologies and contributes to several open source projects, including Apache Camel, the integration framework within Red Hat Fuse.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License