Welcome to Neural Magic's monthly vLLM roundup! We are excited to announce our agreement to be acquired by Red Hat. Joining forces with the industry's open source leader will enable us to bring our cutting-edge AI model optimization and accelerated inference technology to a worldwide audience of enterprises adopting open LLM capabilities.
Keep scrolling for exciting vLLM updates and opportunities to engage with the community!
Bi-Weekly vLLM Office Hours
Recent Recordings
vLLM Project Update: 2024 Retrospective and 2025 Roadmap | Watch Now
Exploring Machete, a Mixed-Input GEMM Kernel for Hopper GPUs | Watch Now
Disaggregated Prefill and KV Cache Storage in vLLM | Watch Now
SOTA Tool-Calling Implementation in vLLM | Watch Now
Take Your AI Performance to the Next Level
2:4 Sparse Llama: Smaller Models for Efficient GPU Inference
Large language models (LLMs) are approaching the limits of traditional scaling: billions of parameters are added for relatively small accuracy gains, and advanced quantization techniques squeeze out the last possible bits before accuracy plummets.
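For intuition on what 2:4 sparsity means in practice: it keeps only the two largest-magnitude weights in every contiguous group of four, a semi-structured pattern that NVIDIA GPUs can accelerate. The snippet below is a minimal illustrative sketch of that pruning pattern in PyTorch; it reproduces only the sparsity mask, not the sparsity-aware training recipe from the post.

```python
import torch

def prune_2_of_4(weight: torch.Tensor) -> torch.Tensor:
    """Zero out the 2 smallest-magnitude weights in every group of 4.

    Illustrative only: Sparse Llama recovers accuracy with
    sparsity-aware pretraining and fine-tuning on top of this pattern.
    """
    out_features, in_features = weight.shape
    assert in_features % 4 == 0, "in_features must be divisible by 4"
    groups = weight.reshape(out_features, in_features // 4, 4)
    # Keep the top-2 entries by magnitude in each group of 4.
    topk = groups.abs().topk(k=2, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(-1, topk, True)
    return (groups * mask).reshape(out_features, in_features)

w = torch.randn(8, 16)
w_sparse = prune_2_of_4(w)
print((w_sparse == 0).float().mean())  # exactly half the weights are zero
```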
We Ran Over Half a Million Evaluations on Quantized LLMs: Here's What We Found
Quantizing models to lower precision formats, such as 8-bit or 4-bit, significantly reduces computational costs and accelerates inference.
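To see the serving-side payoff, vLLM can load quantized checkpoints directly. The sketch below assumes a 4-bit (w4a16) model repo name for illustration; substitute any quantized checkpoint from the Hugging Face Hub.

```python
from vllm import LLM, SamplingParams

# Illustrative quantized checkpoint; any w4a16 or FP8 model repo works here.
llm = LLM(model="neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(
    ["Explain 4-bit weight quantization in one sentence."], params
)
print(outputs[0].outputs[0].text)
```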
Introducing Machete, a Mixed-Input GEMM Kernel Optimized for NVIDIA Hopper GPUs
Mixed-input quantization is a technique that processes weights and activations at different precisions in neural networks.
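Concretely, a mixed-input GEMM multiplies low-precision weights (for example INT4) against higher-precision activations (for example FP16), dequantizing the weights on the fly. The reference-style sketch below shows the arithmetic that Machete fuses into a single Hopper kernel; the function name and the per-channel scale scheme are illustrative assumptions, not the kernel's actual interface.

```python
import torch

def mixed_input_matmul(x_fp16: torch.Tensor,
                       w_int4: torch.Tensor,
                       scales: torch.Tensor) -> torch.Tensor:
    """Reference mixed-input GEMM: FP16 activations x INT4 weights.

    Machete fuses dequantization and multiplication into one kernel;
    here the weights are dequantized explicitly to show the math.
    """
    # w_int4 holds integers in [-8, 7]; per-output-channel scales
    # restore the original FP16 dynamic range.
    w_fp16 = w_int4.to(torch.float16) * scales  # shape (out, in)
    return x_fp16 @ w_fp16.t()

x = torch.randn(2, 64, dtype=torch.float16)             # activations
w = torch.randint(-8, 8, (128, 64), dtype=torch.int8)   # 4-bit values stored in int8
s = torch.rand(128, 1, dtype=torch.float16) * 0.01      # per-channel scales
y = mixed_input_matmul(x, w, s)
print(y.shape)  # torch.Size([2, 128])
```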
Research From Our Labs 🧪
1️⃣ "Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization | Read Here
2️⃣ PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression | Read Here
3️⃣ QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs | Read Here
4️⃣ The Iterative Optimal Brain Surgeon: Faster Sparse Recovery by Leveraging Second-Order Information | Read Here
5️⃣ MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence | Read Here
vLLM has surpassed 32,000 stars! 🌟 Be sure to add your star and join the community. Thank you for your support.
About the author
Saša Zelenović is a Principal Product Marketing Manager at Red Hat, joining in 2025 through the Neural Magic acquisition, where he served as Head of Marketing. With a passion for developer-focused marketing, Saša drives efforts to help developers compress models for inference and deploy them with vLLM. He co-hosts the bi-weekly vLLM Office Hours, a go-to spot for insights and community around all things vLLM.