Recently, we looked at how OpenStack’s use of vGPUs enables new technology use cases such as time series forecasting and autonomous vehicle image recognition. Now let’s examine the deployment options that can enable those applications.
Red Hat and the OpenStack community recognize that to serve the needs of today's telecommunications service providers, IoT, retail applications, and other workloads, a centralized-only infrastructure may not be feasible. Instead, applications and their underlying infrastructure likely need to move out to the edge, as close to the client or data source as possible, in order to deliver processing and insights in near real time.
Let's look at some of the new capabilities that are available in OpenStack's Stein release or may come to future versions of Red Hat OpenStack Platform.
More deployment options
As part of the upstream Stein release of OpenStack (and future versions of Red Hat OpenStack Platform), the ability for a single management node to deploy and manage multiple standalone clusters may lower the hardware requirements to stand up and extend OpenStack infrastructure to remote locations.
This ability can bring advantages in a number of areas, especially when it comes to management. Fewer total management nodes are required, which can increase the cost-effectiveness of the architecture and leave more budget to spend on the nodes that actually run the workload. This can mean more applications on more nodes in more places: making edge edgier!
Getting your infrastructure close to customers is good, but not all customers are external. What about internal IT? By stretching the OpenStack environment from the data center to the edge, IT can run a distributed architecture with a more consistent and centralized management experience, which can reduce complexity and resource consumption and therefore help increase operational agility and ROI: do more with less!
Stability and resiliency
Let's talk stability. Instead of a single stack spread across distance (which, by design, may leave the door open for downtime), a deployment can now accommodate multiple stacks.
This gives IT the ease of deploying and managing one entity, yet with the resiliency to handle "site A" going down without affecting "site B." Common sense? Sure, but as more and more infrastructure moves out to the edge in remote locations, the risk of an outage, disconnect, etc., increases with it. Infrastructure that can tolerate more variables can help support consistent, performant applications and reduce unplanned downtime.
This is done via the new TripleO ability to separate the control plane from compute and storage, which allows customers to establish the control plane first, then deploy compute and storage nodes in batches as separate deployment stacks. For distributed environments, speeding up deployments of hundreds or thousands of edge nodes and isolating groups of nodes from each other can save many hours, cut travel to remote locations, and reduce the opportunity for errors with a validated, reusable workflow.
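As a rough sketch of how that split-stack workflow can look with TripleO (the stack names and environment files below are illustrative placeholders, not a definitive recipe; consult the TripleO distributed compute node documentation for your environment):

```shell
# Deploy only the control plane as its own Heat stack.
# (control-plane-env.yaml is an illustrative environment file.)
openstack overcloud deploy \
  --stack control-plane \
  --templates \
  -e control-plane-env.yaml

# Later, deploy a batch of edge compute/storage nodes as a separate
# stack that consumes data exported from the control-plane stack.
# (Both environment files here are illustrative placeholders.)
openstack overcloud deploy \
  --stack edge-site-1 \
  --templates \
  -e control-plane-export.yaml \
  -e edge-site-1-env.yaml
```

Because each edge site lives in its own stack, "site B" can be scaled or redeployed without touching the control plane or any other site's stack.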
Management and deployment are great, but what advancements are happening to the edge infrastructure itself? A great management plane isn't the only plane that persists; edge storage gets the same treatment.
As part of Stein, each cluster at the edge can have its own Ceph cluster, which allows for additional use cases where data locality is required. Cinder storage can also achieve higher availability by going active-active, and Ceph performance may also improve thanks to data locality. This may give even the most remote locations the ability to meet stricter SLAs with fewer urgent on-location visits.
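Cinder's active-active support is driven by its `cluster` option: volume services that share the same cluster name form one group, so any of them can handle requests for the backend. A minimal sketch, assuming a Ceph RBD backend (the backend and cluster names are illustrative):

```ini
# cinder.conf on each volume service host at the edge site (sketch)
[DEFAULT]
enabled_backends = ceph-edge
# Hosts configured with the same cluster name operate as one
# active-active group for this backend.
cluster = edge-site-1

[ceph-edge]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
```

Note that the backend driver itself must support active-active operation; the Ceph RBD driver is one that does.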
Finally, features such as the Glance cache service can help reduce boot times and bandwidth usage, which is useful when you have tens or hundreds of remote nodes. These features lay the groundwork for future hyperconverged nodes at the edge, which can combine compute and storage into a single appliance. That smaller footprint, combined with OpenStack's scalability and management, can help make edge deployments faster to stand up and simpler to manage.
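A minimal sketch of enabling the Glance image cache on an edge node (the path and size limit below are illustrative values):

```ini
# glance-api.conf (sketch)
[paste_deploy]
# Add the cache-management middleware alongside keystone auth.
flavor = keystone+cachemanagement

[DEFAULT]
# Where cached images are kept on the local node, so repeat boots
# don't re-fetch the image over the WAN.
image_cache_dir = /var/lib/glance/image-cache/
# Cap the cache so it cannot fill the local disk (bytes; 10 GiB here).
image_cache_max_size = 10737418240
```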
Let's get small
What’s the opposite of a massive deployment? Only need a single node to do basic development? Want the power of OpenStack in a cramped marine environment? What about on a telephone pole? Consider running a single node.
When factors such as uptime and availability come second to cost or footprint, a single node can be the answer: Stein allows for a single OpenStack node. Sometimes smaller and simpler is the answer, and when your infrastructure lives on the edge, this added functionality can open the door to use cases where a massive, multi-node cluster just isn't the right architecture.
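For a sense of what this looks like in practice, here is a rough sketch of Stein's standalone installer deploying control plane and compute onto one node (the IP address, parameter file, and output directory are illustrative; see the TripleO standalone documentation for the full prerequisites):

```shell
# All-in-one OpenStack deployment on a single node.
sudo openstack tripleo deploy \
  --templates \
  --standalone \
  --local-ip 192.168.24.2/24 \
  -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \
  -e "$HOME/standalone_parameters.yaml" \
  --output-dir "$HOME"
```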
As consumers demand more responsive applications, a distributed OpenStack architecture makes it possible to get infrastructure closer to them. Red Hat OpenStack Platform is designed to provide the scale, simplified management, and application availability needed to deliver a great customer experience, balanced with the flexibility for organizations to architect their OpenStack deployment based on business and technical needs.
Stay tuned for more great OpenStack advancements as we continue looking at new Stein features.