Value can be a hard word to nail down. Does it refer to the actual dollar amount you can get for your car, today? Is it about the amount you paid for it when it was new? Is it a function of the number of miles you drove it, or the raw tonnage it hauled for you over time? Is it a measurement of some unknown variable, an unquantifiable je ne sais quoi?

The answer depends upon what you use the vehicle for. A truck would probably be measured by tons hauled. A family car could be measured by reliability and ease of maintenance. An old Porsche might be best measured by that certain zest it adds to life, one that no number can capture.

Value can be even harder to judge when the thing being purchased is large and solves numerous problems in different ways. Take Red Hat OpenShift, for example. There are many ways to quantify the value of OpenShift, and not all of them are purely measured in ROI.

There are many other elements of OpenShift that bring benefits to users and customers, but which are more difficult to quantify in charts and graphs: time savings, streamlined development processes, and a faster pace of innovation. Indeed, sometimes the best way to measure the value of a platform engineering endeavor is to measure the new applications coming online on top of it, rather than the nuts and bolts of the platform itself.

Sure, hosting 1,000-plus containers or applications is a great benchmark, but what are the actual business benefits of running those workloads in an automated fashion? The answer varies wildly from business to business, because the lines of business at each company behave very differently.

Virtual Value

Because of this, the value is really based on the way your business is structured. Take, for example, the virtualization capabilities of OpenShift Virtualization. This platform enables numerous IT goals to be fulfilled in one place, but the business wins will be based on the existing infrastructure in your organization.

That’s because OpenShift Virtualization can help eliminate some costly virtualization software layers from your infrastructure. The value of that optimization depends on how much your organization already spends on the virtualization layer, and how much it is investing in containerized applications. For some, this could mean large savings on software licensing costs. For others, it could mean consolidating workloads to reduce operational costs, or leveraging cloud hosting for financial savings over running a datacenter.

Developers, then, reap the benefits of fast provisioning, safe environments for testing, and automated infrastructure for scaling out or up. This is where the value of the platform can really begin to compound, but it is also the area where it is most difficult to attach a hard number to those benefits. What, exactly, is your lead developers’ time worth?

There are a few ways to quantify that value more successfully. To begin with, examine your development CI/CD pipeline. Many a vice president of development has been made by quantifying the time spent building and testing software, and then bringing that number down, overall. If your average developer spends one hour every day waiting on compilation, testing and deployment, you’ve got an easy knob to turn to see results.

If you can remove 50 minutes from that process, you’re already saving each developer 250 minutes per week. That’s more than four hours a week, almost an extra hour per day. While we don’t want to start calculating mythical person-months, and while we all know that adding more people to a software project does not necessarily make it go faster, that amount of time savings every week is enough to move the dial on something, somewhere else in the organization.
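The back-of-the-envelope math above is easy to sketch and to adapt to your own pipeline numbers. The minutes saved, team size and working weeks below are illustrative assumptions, not measurements from any real deployment:

```python
# Illustrative arithmetic for CI/CD pipeline time savings.
# All inputs are assumptions for the sake of the example.

def weekly_savings_minutes(minutes_saved_per_day: float, workdays: int = 5) -> float:
    """Minutes a single developer gets back per work week."""
    return minutes_saved_per_day * workdays

def team_hours_per_year(minutes_saved_per_day: float,
                        team_size: int,
                        workdays: int = 5,
                        weeks_per_year: int = 48) -> float:
    """Hours an entire team gets back per year, assuming 48 working weeks."""
    weekly = weekly_savings_minutes(minutes_saved_per_day, workdays)
    return weekly * weeks_per_year * team_size / 60

print(weekly_savings_minutes(50))              # 250.0 minutes per developer per week
print(team_hours_per_year(50, team_size=10))   # 2000.0 hours across a 10-person team
```

Plugging in the 50-minutes-per-day figure from above, a hypothetical 10-person team recovers roughly 2,000 hours a year, which is the kind of number that survives a budget meeting.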

End-to-End Automation

Then there are the nebulous values for both developers and IT administrators. Automation through Ansible and the overall Kubernetes infrastructure can save your employees from being subsumed by mind-numbing tasks that can suck out their will to keep working. Instead of requiring hand-driven security checks, supply chain controls, and individual approvals for software to be promoted towards production, automation at every level can help to smooth the process for all involved.

And when you’re mixing both virtual machines and containers into the process, your teams can combine modern and legacy infrastructure and tools in the same processes and pipelines. Upgrades take time, and often require side quests into unexpected areas of the infrastructure or applications. Keeping those workloads as close to their original environments as possible, by migrating VMs to a combined platform instead of moving them wholesale to containers, can save everyone involved time and effort, allowing them to focus on the workloads and the business needs instead of the nitty-gritty of keeping applications up to date and running in new environments.

While the value proposition of Red Hat OpenShift is quantifiable through a number of classical business performance metrics, it’s the total package of offerings, working together, that can provide the greatest savings. Integrating Ansible automation allows for easier IT workload management. Integrating the Quay registry can extend trust and security end to end for any application. You can even extend these benefits down to the language layer, with Red Hat Enterprise Application Service and Quarkus, which can keep Java applications humming in the cloud.

Whatever metric you wish to improve, Red Hat OpenShift and its supporting platforms and software projects can help you move the graph in the direction you desire, and can do so in a manner that still meets your compliance, support and long-term roadmap needs.

About the author

Red Hatter since 2018, technology historian and founder of The Museum of Art and Digital Entertainment. Two decades of journalism mixed with technology expertise, storytelling and oodles of computing experience from inception to e-waste recycling. I have taught or had my work used in classes at USF, SFSU, AAU, UC Law Hastings and Harvard Law.

I have worked with the EFF, Stanford and MIT, and helped brief the US Copyright Office and change US copyright law. We won multiple exemptions to the DMCA, accepted and implemented by the Librarian of Congress. My writings have appeared in Wired, Bloomberg, Make Magazine, SD Times, The Austin American-Statesman, The Atlanta Journal-Constitution and many other outlets.

I have been written about by the Wall Street Journal, The Washington Post, Wired and The Atlantic. I have been called "The Gertrude Stein of Video Games," an honor I accept, as I live less than a mile from her childhood home in Oakland, CA. I was project lead on the first successful institutional preservation and rebooting of the first massively multiplayer game, Habitat, for the C64, from 1986. I've consulted and collaborated with the NY MOMA, the Oakland Museum of California, Cisco, Semtech, Twilio, Game Developers Conference, NGNX, the Anti-Defamation League, the Library of Congress and the Oakland Public Library System on projects, contracts and exhibitions.
