It shouldn’t be a surprise that as we demand more complex applications at both the business and consumer level, we’re also, in effect, demanding more computational power. In many cases, these applications are driven by data and artificial intelligence (AI) in some way, either at the user level or on the backend. These workloads require AI models to analyze and parse the huge amounts of data necessary for these apps to actually do their job, which leads to a need for even more processing power. As organizations use technology to differentiate their businesses, they are inevitably becoming more software-defined. For the IT industry, this means we must be even more creative when it comes to developing and supporting emerging hardware to address these challenges.
But given the scale of today’s computing challenges, and those emerging tomorrow, it’s unlikely that a single processor or hardware solution is the answer to our voracious appetite for computational power. Instead, we need to pair technologies together to work in concert to meet these needs, even if they disrupt our view of what "traditional architecture" may be in the datacenter. One of these blended technologies is the data processing unit (DPU), which can impact nearly every level of the IT landscape, from single systems to multi-region cloud deployments.
DPUs combine multiple accelerators, making it possible to offload critical tasks from the CPU to dedicated hardware. The DPU concept is not entirely new. Until recently, implementations focused on a particular datapath or function, such as network acceleration using SmartNICs.
More robust DPU implementations combine an easily programmable multi-core CPU with a state-of-the-art network interface and a powerful set of networking, storage, and security accelerators. These advanced DPUs can be programmed to perform multiple, software-defined, hardware-accelerated functions, exemplified by the recently launched NVIDIA BlueField-2 DPU.
By offloading to BlueField-2, IT organizations can achieve multiple goals, primarily:

- Freeing up a host server’s CPU to run business applications
- Providing a dedicated platform for executing critical management and security functions

This, in turn, leads to a composable datacenter with optimal resource utilization and, as an added bonus, provides extra visibility into how well workloads are actually running.
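The offload pattern described above can be sketched in plain software. The following is an illustrative model only, not a BlueField-2 API: a thread pool stands in for the DPU's dedicated cores, and a checksum function stands in for a datapath task the hardware would accelerate.

```python
# Illustrative sketch of the DPU offload pattern -- NOT a real BlueField-2
# interface. The host hands infrastructure work (here, checksumming) to a
# dedicated engine and stays free for application logic.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def checksum(payload: bytes) -> str:
    """Stand-in for a datapath function a DPU would hardware-accelerate."""
    return hashlib.sha256(payload).hexdigest()

def process_packets(payloads):
    # The "DPU": a dedicated worker pool handling the offloaded datapath.
    with ThreadPoolExecutor(max_workers=4) as dpu:
        futures = [dpu.submit(checksum, p) for p in payloads]
        # The host CPU could run business logic here while work completes.
        return [f.result() for f in futures]
```

On real hardware the same division of labor applies, but the offloaded function runs on the DPU's accelerators rather than host threads, so the host CPU cycles are genuinely reclaimed.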
Developers can take advantage of the BlueField-2's cryptographic, networking, and storage hardware acceleration features to create security, machine learning, edge computing, and storage applications with greater performance while improving server utilization and workload isolation.
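As a rough illustration of the workload-isolation idea, the sketch below runs a hypothetical packet-filtering policy outside the application's code path. On a real DPU this function would execute on the card's own Arm cores, isolated from the host; a thread here merely illustrates the dataflow, and the addresses and policy are made up.

```python
# Conceptual sketch only: a security function enforced in the datapath,
# outside the application's code path. On BlueField-2 this would run on
# the DPU itself; a thread is used here just to show the dataflow.
from queue import Queue
from threading import Thread

BLOCKLIST = {"10.0.0.66"}  # hypothetical security policy

def dpu_filter(ingress: Queue, egress: Queue) -> None:
    """Stands in for a filtering function offloaded to the DPU."""
    while (pkt := ingress.get()) is not None:
        if pkt["src"] not in BLOCKLIST:
            egress.put(pkt)  # only permitted traffic reaches the host
    egress.put(None)         # signal end of stream

def deliver(packets):
    ingress, egress = Queue(), Queue()
    Thread(target=dpu_filter, args=(ingress, egress), daemon=True).start()
    for pkt in packets:
        ingress.put(pkt)
    ingress.put(None)
    delivered = []
    while (pkt := egress.get()) is not None:
        delivered.append(pkt["src"])
    return delivered
```

Because the policy lives outside the application, a compromised or misbehaving workload cannot simply switch the filtering off, which is the isolation property a hardware DPU provides.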
Red Hat and NVIDIA recognize the value customers place on well-balanced and optimized datacenter infrastructure. Red Hat sees a future where the turbo-charged next-generation datacenter can be composed and re-configured on demand with containers. Red Hat plans to support BlueField-2 DPUs with Red Hat Enterprise Linux and Red Hat OpenShift, and we’ll continue to work with NVIDIA on making Red Hat’s industry-leading software available for the datacenters of the future.
"Red Hat has a history of supporting AI, software-defined infrastructure, containerized computing, and accelerated networking," said Dror Goldenberg, vice president of Software Architecture in the Networking Business Unit at NVIDIA. "Having Red Hat support NVIDIA BlueField-2 DPUs gives customers the power to accelerate key data center infrastructure functions on the DPU to deliver servers with higher application performance, greater efficiency, and improved security."
Many of our customers are realizing that their plans, from moving computing workloads closer to the edge to full-scale digital transformation, require a trusted technology partner that can offer powerful, scalable, and fully open software technologies to complement hardware innovations.
BlueField-2 DPU devices adhere to key ecosystem standards, simplifying integration with Red Hat’s software portfolio and enabling customers to standardize their infrastructure across various hardware architectures and solutions.
These same customers also value choice in their technology implementations, and choice is a key benefit of open source solutions. We don’t want our customers to choose between needed hardware and our fully open hybrid cloud portfolio, so we emphasize the delivery of enterprise-grade open source solutions across multiple footprints and multiple architectures.
Red Hat has long been delivering enterprise software that brings consistency across these platforms by working with the leading infrastructure providers in the industry. Our unwavering mission is to enable and tie together advances in hardware technology with the applications that leverage them while providing the support and expertise for which we are known in the industry.
Read more about our ongoing collaboration with NVIDIA, attend our sessions at NVIDIA's GTC 2020 virtual event, and hear from Red Hat’s customers and partners firsthand during the OpenShift Commons Gathering on AI and Machine Learning, which is co-located with GTC 2020.
About the author
Chris Wright is senior vice president and chief technology officer (CTO) at Red Hat. Wright leads the Office of the CTO, which is responsible for incubating emerging technologies and developing forward-looking perspectives on innovations such as artificial intelligence, cloud computing, distributed storage, software defined networking and network functions virtualization, containers, automation and continuous delivery, and distributed ledger.