
The Red Hat team participates in an "Ask Me Anything" panel at the end of the Gathering.


The OpenShift Commons Gathering at KubeCon Seattle last week was packed with information on the past, present, and future of Red Hat OpenShift in all its forms. More than 350 people from over 115 companies around the world gathered at the event to hear about the future of the platform. The event even included the first live demo of Red Hat OpenShift 4.0, which is currently in development.

This was the first time the outside world got a glimpse of the OpenShift 4.0 platform in action. The goal for the platform, said Derek Carr, senior principal software engineer at Red Hat, is similar to the original goal of Kubernetes: just as Kubernetes was built to enable a 10-fold increase in the velocity of application operations, OpenShift 4.0 aims to provide a 10-fold increase in velocity for Kubernetes-based operations.

This can be seen in the refactoring of many platform services as Operators. OpenShift 4.0 is built from the inside out with Operators, providing a platform that we hope will be more amenable to rolling updates without causing service outages.
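At their core, Operators follow the Kubernetes control-loop pattern: continuously compare the desired state of a service against what is actually running, and take whatever actions close the gap. The sketch below is a minimal, illustrative rendering of that pattern in Python; real Operators are typically written in Go against the Kubernetes API, and the service names here are hypothetical.

```python
# Minimal sketch of the Operator control-loop (reconcile) pattern.
# Illustrative only -- not OpenShift's actual implementation.

def reconcile(desired: dict, actual: dict) -> list[str]:
    """Compare desired vs. observed state and return the actions needed."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name}")
        elif actual[name] != spec:
            actions.append(f"update {name}")  # e.g. a rolling update to a new version
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

# An Operator runs this loop continuously, converging toward desired state.
desired = {"registry": {"version": "4.0"}, "router": {"version": "4.0"}}
actual = {"registry": {"version": "3.11"}, "router": {"version": "4.0"}}
print(reconcile(desired, actual))  # → ['update registry']
```

Because each reconcile pass is idempotent, an Operator can roll a service forward one piece at a time, which is what makes the rolling-update story plausible.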

Chris Wright, Red Hat CTO, said that one of the new trends OpenShift 4.0 is tracking is the community's move away from a small number of large clusters toward many smaller clusters. This change in the ecosystem has spurred work on multi-cluster federation and coordination within the OpenShift community. While this work is not yet complete, Wright noted that it is a planned focus for future releases.

Max Schulze of Vattenfall discusses his company's use of OpenShift and OpenStack to recycle the heat generated by servers.

Keeping the Hot Side Hot

Behind all of the plentiful talk of the future of OpenShift were some interesting use case presentations from OpenShift customers USAA, GE, and Progressive, among others. Perhaps the most distinctive of these use cases was from Vattenfall, the Swedish state-owned power company.

Max Schulze, who heads up new business development at Vattenfall, said that the Swedish government has mandated a move off of fossil fuels within a generation.

While this might not initially sound like a software problem, Schulze detailed the difficulties of prediction in power generation: the system must meet 100% of demand at all times, and storing power for later use is simply not a carbon-neutral option, due to the pollution created by the manufacturing of lithium-ion batteries, he said. Thus, power consumption across Sweden must be made more predictable in order to provide reliable energy from less predictable sources, such as wind and solar.

Another factor affecting Vattenfall was the proliferation of data centers in Sweden. Because the country is quite cold, data centers are easier to cool there. But that very thought sparked a remarkable discovery inside Vattenfall: CPUs convert power into heat at an almost 1-to-1 ratio. That means every datacenter in Sweden was already generating megawatts' worth of heat as part of its daily operations.

The resulting project at Vattenfall saw heat harvesting devices installed into their datacenter racks. In order to handle all this custom hardware, Vattenfall is running OpenShift on OpenStack. “We decided we would build a test bed,” said Schulze. “We first went to Red Hat and said, ‘Maybe we can solve this purely on the software side.’ They immediately jumped on board because they could really align with our vision to build a sustainable digital infrastructure. They said, ‘You can do this with OpenShift,’ and we also found some other partners, like Cloud&Heat, and Nvidia was also excited: for them, cooling these GPUs is a big problem.”

Building this heat harvesting system required some new metrics and goals. “How do we define a sustainable digital infrastructure?” asked Schulze. “For us, we try to reuse 80% of the heat. We cool the chips with hot water, and it flows at an incredible speed. It is 55°F at intake and 140°F at outflow. Managing this required distributing workloads; the datacenter doesn’t have a flat workload all the time. We had to manipulate the workloads. Sometimes, when we really need heat, we ramp up artificial workloads to make heat. We want to solve this problem by concentrating workloads on the machines to generate heat.”
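What Schulze describes amounts to a simple control decision: when heat demand exceeds what the real workloads are producing, schedule filler ("artificial") workloads to make up the difference, relying on the near 1-to-1 conversion of power into heat. The sketch below is a hypothetical illustration of that decision; the numbers, job size, and function name are all assumptions, not Vattenfall's actual system.

```python
import math

# Hypothetical sketch of the workload-for-heat idea: if heat demand exceeds
# what the real workloads radiate, schedule enough artificial jobs to cover
# the shortfall. Assumes power converts to heat at roughly 1-to-1.

def plan_heat_workloads(heat_demand_kw: float, real_load_kw: float,
                        job_size_kw: float = 10.0) -> int:
    """Return how many artificial jobs to schedule to cover the heat shortfall."""
    shortfall = heat_demand_kw - real_load_kw  # CPUs turn power into heat ~1:1
    if shortfall <= 0:
        return 0  # real workloads already generate enough heat
    return math.ceil(shortfall / job_size_kw)

print(plan_heat_workloads(120, 85))  # → 4  (35 kW shortfall / 10 kW per job)
```

A scheduler running this check periodically could also concentrate the filler jobs on specific racks, matching the "concentrating workloads on the machines" approach Schulze mentions.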

Finally, there was one last step in the process at Vattenfall, and it encompassed the last mile of datacenter technology: the physical servers. “In order to make the datacenter efficient you need to physically shut down systems,” said Schulze.

That means shutting down actual servers at the UPS level. The Vattenfall team has now accomplished this, and is able to physically shut down actual hardware on demand, via automated processes in OpenShift.
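The decision of which servers can safely be powered off is itself a small scheduling problem: drain the least-loaded nodes, but only as long as the surviving nodes can absorb the cluster's total load. The sketch below is purely illustrative of that idea, under assumed utilization numbers; it is not Vattenfall's actual tooling.

```python
# Illustrative sketch (not Vattenfall's actual tooling): choose the
# least-loaded nodes that can be drained and powered off while the
# remaining nodes stay at or under a target utilization.

def nodes_to_power_off(utilization: dict[str, float],
                       target_util: float = 0.8) -> list[str]:
    """Given per-node utilization (0.0-1.0), return nodes safe to shut down."""
    total = sum(utilization.values())
    shut = []
    # Walk from least- to most-loaded; a node may shut down only if the
    # remaining nodes can still carry the whole cluster's load.
    for name in sorted(utilization, key=utilization.get):
        remaining = len(utilization) - len(shut) - 1
        if remaining > 0 and total <= remaining * target_util:
            shut.append(name)
    return shut

print(nodes_to_power_off({"n1": 0.1, "n2": 0.2, "n3": 0.5}))  # → ['n1', 'n2']
```

In practice the drain step would evict or reschedule the node's pods before the power-off command is issued at the UPS level.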

Automating Work

Tripti Singhal of Nvidia discusses the use of GPUs inside OpenShift clusters.

One of the speakers at the OpenShift Commons Gathering who was, perhaps, a tad uneasy about the forthcoming OpenShift 4.0 was Jackie Chute, senior site reliability engineer at GE Digital. She said that, while she is excited about OpenShift 4.0, it may put her out of a job: she currently does the work of setting up virtual machines for use inside OpenShift, and OpenShift 4.0 begins laying the groundwork for automating the provisioning and management of those virtual machines, taking over most of her day-to-day work.

She’s part of a small team at GE Digital that has spent the past few years bringing on-demand cloud style provisioned systems to the broader GE organization. At the OpenShift Commons Gathering, she took the stage alongside fellow SRE Timothy Oliver, and staff infrastructure architect Jay Ryan. The three detailed the GE journey toward hybrid cloud infrastructure.

Jay said that the implementation of OpenShift at GE Digital was done in a services model. “We have fully automated OpenShift on AWS. We’re using Amazon Elastic Block Storage. One of the things that made this such a great choice is that Red Hat lays out the architecture. They show you how it should run in production.”
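For context on what "using Amazon Elastic Block Storage" looks like in a cluster of that era, a StorageClass tells OpenShift to dynamically provision EBS volumes for persistent volume claims. The fragment below is a generic, hypothetical example of such a configuration, not GE Digital's actual setup.

```yaml
# Hypothetical StorageClass for OpenShift on AWS backed by EBS gp2 volumes.
# The name and parameters are illustrative, not GE Digital's configuration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Delete
```

With a class like this in place, application teams request storage through ordinary persistent volume claims and never touch the AWS console, which is part of what makes the fully automated services model possible.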

Timothy said that their work is not the flashiest part of their day job. “Orchestration is not sexy: it’s just running containers. But that’s what you want. You want it back there doing the job,” he said, adding that reliable infrastructure for running cloud services enables innovation to happen inside each individual GE department. “Our customers are innovating,” said Jay. “The customers we have in the environment today are teaching us about Kubernetes, and asking about Operators and Helm and wondering how they can get in on the ground floor.”

Brian Gracely of Red Hat discusses the road ahead for OpenShift.

More Ways To Win

Other talks at the OpenShift Commons Gathering covered topics ranging from continuous deployment, to security, to turning a monolithic application into microservices. Ankur Lamba, technical architect at USAA, detailed some of the work his team has done to bring security to their cloud-based applications. This included a few steps along the way, but ended with OpenShift hosting services for the management of certificates across thousands of applications.
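Managing certificates across thousands of applications is, at its simplest, a matter of continuously knowing which certificates are about to expire and renewing them before they do. The sketch below is a hedged, self-contained illustration of that one piece; the application names, dates, and renewal window are hypothetical and this is not USAA's implementation.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of one piece of certificate management at scale:
# flag applications whose certificates expire within a renewal window.

def expiring_soon(certs: dict[str, datetime], now: datetime,
                  window_days: int = 30) -> list[str]:
    """Return app names whose certificates expire within window_days."""
    cutoff = now + timedelta(days=window_days)
    return sorted(name for name, not_after in certs.items() if not_after <= cutoff)

now = datetime(2018, 12, 20)
certs = {
    "billing": datetime(2019, 1, 5),   # expires in 16 days -> flag for renewal
    "claims": datetime(2019, 6, 1),    # well outside the window
}
print(expiring_soon(certs, now))  # → ['billing']
```

A service hosted on OpenShift could run a scan like this on a schedule and trigger automated renewal for whatever it flags, which is the kind of fleet-wide management the talk described.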

James McShane, on the other hand, said that HealthPartners has used OpenShift to speed up its software development processes. As a result, the company can now push an application to production in just 18 minutes, saving time for developers, operators, and everyone else who has a stake in said application.

You can find all of the slides and full videos of the presentations from this OpenShift Commons Gathering elsewhere on the OpenShift Blog. If you missed out on this gathering in Seattle, your next chance to attend in person is at the OpenShift Commons Gathering in London on January 30 at Savoy Place.

About the author

Red Hatter since 2018, technology historian and founder of The Museum of Art and Digital Entertainment. Two decades of journalism mixed with technology expertise, storytelling and oodles of computing experience from inception to ewaste recycling. I have taught or had my work used in classes at USF, SFSU, AAU, UC Law Hastings and Harvard Law. 

I have worked with the EFF, Stanford, and MIT to brief the US Copyright Office and change US copyright law. We won multiple exemptions to the DMCA, accepted and implemented by the Librarian of Congress. My writings have appeared in Wired, Bloomberg, Make Magazine, SD Times, The Austin American-Statesman, The Atlanta Journal-Constitution, and many other outlets.

I have been written about by the Wall Street Journal, The Washington Post, Wired, and The Atlantic. I have been called "The Gertrude Stein of Video Games," an honor I accept, as I live less than a mile from her childhood home in Oakland, CA. I was project lead on the first successful institutional preservation and rebooting of the first massively multiplayer game, Habitat, for the C64, from 1986. I've consulted and collaborated with the NY MOMA, the Oakland Museum of California, Cisco, Semtech, Twilio, the Game Developers Conference, NGNX, the Anti-Defamation League, the Library of Congress, and the Oakland Public Library System on projects, contracts, and exhibitions.

