Red Hat is proud to announce industry-leading results from the latest MLPerf Inference v6.0 benchmarks, achieved through deep engineering co-design with NVIDIA. These results demonstrate that combining Red Hat's open source leadership with NVIDIA's accelerated AI infrastructure yields a versatile, proven platform ready for any enterprise inference workload, from vision and speech to complex reasoning.

Our latest submissions focused on maximizing the potential of the NVIDIA HGX H200 and NVIDIA HGX B200 systems, proving that software optimization is just as critical as raw horsepower for achieving peak ROI.

Results at a glance

Across language, vision, and speech models, Red Hat’s stack delivered top-tier throughput and latency results on NVIDIA AI infrastructure.

| Model Category | Model | GPU Configuration | Scenario | Leading Results |
|---|---|---|---|---|
| Vision | Qwen3-VL-235B | 8× NVIDIA B200 | Server | 67.9 samples/sec |
| Reasoning | GPT-OSS-120B | 8× NVIDIA B200 | Offline | 93,071 tokens/sec |
| Speech | Whisper-Large-v3 | 8× NVIDIA H200 | Offline | 36,396 tokens/sec |

Qwen3-VL-235B (multimodal vision-language model)

The Qwen3-VL-235B model, a massive 235-billion-parameter multimodal vision-language model, represents a significant challenge for inference engines due to highly variable image resolutions. Using NVIDIA Blackwell GPUs running on Red Hat Enterprise Linux (RHEL) with vLLM and NVIDIA Dynamo, we achieved the highest offline throughput in our class. Notably, our Blackwell submission exceeded the next top performer by 50% in the Server scenario.

Key engineering wins:

  • Triton-based improvements: Optimizations to the vision encoder yielded 30-40% faster ViT processing.
  • FlashInfer Mixture-of-Experts (MoE) kernels: These specialized kernels handled the MoE architecture with extreme efficiency.
  • FP8 Multimodal Attention: Leveraging NVIDIA’s advanced data formats to lower cost per token without sacrificing accuracy.
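To make the MoE bullet concrete: the routing step that FlashInfer's kernels accelerate can be sketched in a few lines of pure Python. This is an illustrative toy (the expert count, logits, and top-k value are assumptions), not the fused GPU kernel itself:

```python
# Toy illustration of Mixture-of-Experts routing, the mechanism the
# FlashInfer MoE kernels accelerate: each token's router logits select
# the top-k experts, whose outputs are combined with softmax weights.
import math

def route_token(router_logits, k=2):
    """Return the top-k (expert_index, weight) pairs for one token."""
    top = sorted(range(len(router_logits)),
                 key=lambda i: router_logits[i], reverse=True)[:k]
    # Softmax over only the selected experts' logits.
    exps = [math.exp(router_logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

# One token, 8 experts: experts 3 and 5 carry the strongest logits.
logits = [0.1, -1.2, 0.3, 2.0, 0.0, 1.5, -0.5, 0.2]
print(route_token(logits))  # experts 3 and 5, weights summing to 1
```

Real MoE kernels fuse this gating with the expert matrix multiplies across thousands of tokens at once; the efficiency win reported above comes from doing exactly that dispatch on-GPU.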

GPT-OSS-120B

Our submission for GPT-OSS-120B marks the first time a model of this scale has been benchmarked on Kubernetes infrastructure for MLPerf. By using Red Hat OpenShift AI and the llm-d scheduler, we demonstrated that distributed inference can scale effectively on NVIDIA AI infrastructure (H200 and B200 GPUs) while maintaining strict latency requirements.

We adopted a two-pronged strategy to optimize inference performance. First, our Bayesian optimization–based hyperparameter tuning pipeline on OpenShift identified an optimal configuration for a single replica that reduced P99 time-to-first-token (TTFT) from 3.4 seconds to 2.1 seconds (~38% improvement), meeting the sub-3s target.
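The shape of that tuning loop is easy to picture. The sketch below stands in for the pipeline with an exhaustive sweep over a small synthetic search space; the real pipeline used Bayesian optimization against live vLLM replicas, and the parameter names, candidate values, and latency model here are invented for illustration:

```python
# Stand-in for the OpenShift tuning pipeline: search engine configurations
# for the one minimizing P99 time-to-first-token. The latency model is
# synthetic, with its optimum placed at (64, 8192) by construction; a real
# run would measure a live replica instead of calling a function.
def simulated_p99_ttft(max_num_seqs, max_num_batched_tokens):
    """Synthetic P99 TTFT in seconds for one candidate configuration."""
    # Penalize both under-batching (poor utilization) and over-batching
    # (queueing delay) relative to an assumed sweet spot.
    return (1.5
            + 0.0002 * abs(max_num_seqs - 64) ** 1.5
            + 0.00015 * abs(max_num_batched_tokens - 8192) / 8)

def tune():
    candidates = [(s, t)
                  for s in (16, 32, 64, 128, 256)
                  for t in (2048, 4096, 8192, 16384)]
    return min(((c, simulated_p99_ttft(*c)) for c in candidates),
               key=lambda pair: pair[1])

best_cfg, best_p99 = tune()
print(best_cfg, round(best_p99, 2))  # (64, 8192) 1.5
```

A Bayesian optimizer replaces the exhaustive loop with a surrogate model that proposes promising configurations, which matters when each evaluation is an expensive benchmark run rather than a cheap function call.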

Second, we optimized multi-replica performance by refining our load balancing and scoring strategy. By analyzing request distribution across replicas, we improved utilization and minimized tail latency, enabling more consistent scaling under load.
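A minimal sketch of the scoring idea, assuming a scorer that weighs each replica's queue depth against likely KV-cache prefix reuse (the field names and weights are invented for illustration; llm-d's actual scorers are pluggable and more sophisticated):

```python
# Hedged sketch of the kind of replica scoring an inference scheduler
# performs: favor replicas with short queues, with a bonus for a likely
# KV-cache prefix hit. Weights and fields are illustrative assumptions.
def score(replica, prompt_prefix):
    queue_penalty = replica["queued_requests"] * 1.0
    cache_bonus = 2.0 if prompt_prefix in replica["cached_prefixes"] else 0.0
    return queue_penalty - cache_bonus  # lower is better

def pick_replica(replicas, prompt_prefix):
    return min(replicas, key=lambda r: score(r, prompt_prefix))["name"]

replicas = [
    {"name": "r0", "queued_requests": 4, "cached_prefixes": {"sys-v1"}},
    {"name": "r1", "queued_requests": 1, "cached_prefixes": set()},
    {"name": "r2", "queued_requests": 2, "cached_prefixes": {"sys-v1"}},
]
print(pick_replica(replicas, "sys-v1"))  # r2: modest queue plus a cache hit
```

Balancing those two signals is what keeps tail latency down: always chasing the shortest queue wastes cached prefixes, while always chasing cache hits overloads a few replicas.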

Whisper-Large-v3 (speech-to-text)

We submitted Whisper-large-v3 results on NVIDIA H200 and NVIDIA L40S GPUs, both running Red Hat Enterprise Linux (RHEL) and vLLM.

  • 8× H200 offline: 36,396 tokens per second, the leading H200 result, 13% faster than the next closest submission
  • 2× L40S offline: 3,647 tokens per second, the first and only L40S submission for Whisper in MLPerf Inference v6.0

These results were driven by a systematic ablation study across config parameters to identify the optimizations that matter most for Whisper inference. Batch size tuning delivered a 40% throughput gain by maximizing GPU utilization, asynchronous scheduling contributed a further 12.8% by eliminating CPU-GPU synchronization stalls, and CUDA Graphs provided an additional 6%. With L40S widely deployed in cost-sensitive environments, our results show that an open-source inference stack delivers world-class speech recognition performance across both high-end and cost-efficient hardware.
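Assuming each of those three gains was measured on top of the previous one, they compound multiplicatively, and the combined effect is easy to check:

```python
# Combined effect of the three Whisper optimizations reported above,
# assuming each gain multiplies the throughput left by the previous one.
batch_size_tuning = 1.40   # +40% from batch size tuning
async_scheduling = 1.128   # +12.8% from asynchronous scheduling
cuda_graphs = 1.06         # +6% from CUDA Graphs

combined = batch_size_tuning * async_scheduling * cuda_graphs
print(f"{combined:.2f}x")  # about 1.67x end to end
```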

Delivering greater efficiency and ROI

Red Hat's software stack combines NVIDIA's Dynamo inference software with Red Hat AI's vLLM and llm-d to deliver significant efficiency gains on NVIDIA accelerated computing infrastructure. By optimizing every layer of the stack, from the RHEL kernel to the inference engines, we help enterprises lower their cost per token and improve overall ROI on their NVIDIA investments. Whether you are deploying on-premises or in the cloud, Red Hat provides a proven, high-performance foundation for the next generation of agentic and multimodal AI.

Want to replicate our results? Here’s how…

Check out the full MLPerf Inference v6.0 results at mlcommons.org and learn more about Red Hat AI.


About the author

Ashish Kamra is an accomplished engineering leader with over 15 years of experience managing high-performing teams in AI, machine learning, and cloud computing. He joined Red Hat in March 2017 and currently serves as Senior Manager of AI Performance. In this role, Ashish leads initiatives to optimize the performance and scale of Red Hat OpenShift AI, an end-to-end platform for MLOps, with a specific focus on large language model inference and training performance.

Prior to Red Hat, Ashish held leadership positions at Dell EMC, where he drove the development and integration of enterprise and cloud storage solutions and containerized data services. He also has a strong academic background, having earned a Ph.D. in Computer Engineering from Purdue University in 2010. His research focused on database intrusion detection and response, and he has published several papers in renowned journals and conferences.

Passionate about leveraging technology to drive business impact, Ashish is pursuing a Part-time Global Online MBA at Warwick Business School to complement his technical expertise. In his free time, he enjoys playing table tennis, exploring global cuisines, and traveling the world.

