With all eyes turning towards Shanghai, we’re getting ready for the next Open Infrastructure Summit in November with great excitement. But before we hit the road, I wanted to draw attention to the latest OpenStack upstream release. The Train release continues to showcase the community’s drive toward offering innovations in OpenStack. Red Hat has been part of developing more than 50 new features spanning Nova, Ironic, Cinder, TripleO and many more projects.
But given all the technology goodies (you can see the release highlights here) that the Train release has to offer, you may be curious about the features that we at Red Hat believe will most benefit our telecommunications and enterprise customers and their use cases. Here's an overview of the features we are most excited about in this release.
Extended bandwidth management via Quality of Service (QoS)
For many customers, especially telecommunications providers running network functions virtualization (NFV) workloads, bandwidth management makes it possible to guarantee a minimum bandwidth to critical VMs and to cap the bandwidth each VM can consume. The cap avoids the "noisy neighbor" problem, where one tenant in a multi-tenant environment takes over the available bandwidth and starves other VMs of resources, leaving them with uneven cloud network performance.
This latest OpenStack release extends QoS management so that IT admins can set both minimum and maximum bandwidth, and adds bandwidth-aware scheduling. This helps customers avoid NIC overcommitment: because each virtual machine declares its bandwidth consumption through a QoS policy, the Nova scheduler can avoid overcommitting any physical NIC managed by Neutron.
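Conceptually, bandwidth-aware scheduling treats each physical NIC's bandwidth as a schedulable resource, much like vCPUs or RAM. The toy model below illustrates the idea; it is not Nova's actual scheduler code, and all names are illustrative:

```python
# Toy model of bandwidth-aware scheduling: each host NIC has a fixed
# capacity, and a VM is only placed on a host where its guaranteed
# minimum bandwidth still fits. (Illustrative only -- not Nova's code.)

class Host:
    def __init__(self, name, nic_capacity_mbps):
        self.name = name
        self.nic_capacity_mbps = nic_capacity_mbps
        self.reserved_mbps = 0  # bandwidth already promised to VMs

    def can_fit(self, min_bw_mbps):
        return self.reserved_mbps + min_bw_mbps <= self.nic_capacity_mbps

    def place(self, min_bw_mbps):
        self.reserved_mbps += min_bw_mbps


def schedule(vm_min_bw_mbps, hosts):
    """Return the first host with enough unreserved NIC bandwidth, or None."""
    for host in hosts:
        if host.can_fit(vm_min_bw_mbps):
            host.place(vm_min_bw_mbps)
            return host.name
    return None  # no host can honor the guarantee -> refuse to overcommit


hosts = [Host("compute-0", 10_000), Host("compute-1", 10_000)]
print(schedule(6_000, hosts))  # compute-0
print(schedule(6_000, hosts))  # compute-1 (compute-0 would be overcommitted)
print(schedule(6_000, hosts))  # None: both NICs would exceed capacity
```

In a real deployment the guarantee is expressed as a Neutron QoS policy carrying a minimum-bandwidth rule attached to the VM's port, which the scheduler takes into account at placement time.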
At-scale monitoring of the OpenStack cloud
For large enterprises or telecommunications providers running network functions virtualization infrastructure (NFVi) that need end-to-end telemetry and near real-time metrics at scale, this new capability monitors the underlying infrastructure (nodes, GPUs, networks) and cloud services with low latency, using Prometheus, AMQP and other open source technologies.
It provides APIs at three levels:
At the "sensor" (collectd agent) level, through plugins (Kafka, AMQP 1.0) that connect collectd to the message bus of choice.
At the message bus level, featuring an integrated, highly available AMQ Interconnect message bus wired to collectd.
At the time-series database/management cluster level, based on Prometheus.
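As a sketch of the sensor level, collectd can publish its metrics to an AMQP 1.0 interconnect through its amqp1 write plugin. The host, port and address below are placeholders for a real deployment:

```
<Plugin amqp1>
  <Transport "interconnect">
    Host "amqp-router.example.com"   # placeholder interconnect endpoint
    Port "5672"
    Address "collectd"
    <Instance "telemetry">
      Format JSON
    </Instance>
  </Transport>
</Plugin>
```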
All of this runs on a separate Kubernetes/OpenShift cluster, independent of the monitored OpenStack cloud, providing visibility into infrastructure availability and performance along with troubleshooting insights.
Hardening of persistent storage at the edge
With the bulk of the development that adds persistent storage to edge architectures done in Stein, Train further tests and hardens this emerging use case for OpenStack deployments. Several features contribute to this. Cinder Active/Active allows Cinder volumes to be placed at edge sites, and support for multiple Ceph clusters comes from deploying multiple Heat stacks: one for the core/central datacenter plus one per edge site. Each stack can be managed independently, providing a better experience for day-2 operations such as scaling, configuration changes, updates and upgrades. In addition, Train now provides a unified way to manage images at remote sites, such as distributed compute nodes at the edge, through Glance image caching.
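A per-site deployment might look like the following TripleO sketch, where each site gets its own Heat stack via the `--stack` option; the stack names, role files and environment files are illustrative, not prescriptive:

```shell
# Deploy the central/core datacenter as its own Heat stack
openstack overcloud deploy --templates \
  --stack central \
  -r central-roles.yaml -e central-env.yaml

# Deploy an edge site as a separate, independently managed stack
openstack overcloud deploy --templates \
  --stack edge-site-1 \
  -r edge-roles.yaml -e edge-site-1-env.yaml
```

Because each site is a separate stack, an update or scaling operation at one edge site does not require touching the central stack or any other site.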
Beyond the edge, additional capabilities continue to improve the storage integration with Barbican, including:
The ability to change the encryption when cloning a Cinder volume.
Automatic removal of Barbican keys through Glance when deleting an image created from an encrypted Cinder volume.
Both features focus on enhancing and simplifying the operations of a more secure architecture.
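The Glance-side cleanup can be seen in a workflow like the one below; the volume and image names are placeholders:

```shell
# Create an image from an encrypted Cinder volume; the image metadata
# carries a reference to the volume's Barbican encryption secret
openstack image create --volume encrypted-vol-01 encrypted-image-01

# Deleting the image now also removes the associated Barbican key,
# so no orphaned secrets are left behind
openstack image delete encrypted-image-01
```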
We know from our interactions with customers that requirements for on-premises IaaS architectures vary widely from one organization to the next; it's definitely not "one size fits all" when it comes to building a cloud architecture. With the Train release, we continue to see innovation added to what is now a stable OpenStack community distribution.