The Red Hat Blog: Expert insights for navigating tech complexity

Featured posts

Blog post

Red Hat Insights is now Red Hat Lightspeed: Accelerating AI-powered management

November 4, 2025
Learn about Red Hat Lightspeed, the evolution of Red Hat Insights, and how it brings AI-powered management to Red Hat platforms. Discover how it helps you work faster, smarter, and with greater security.

Latest posts

Blog post

Announcing general availability of SQL Server 2025 on Red Hat Enterprise Linux 10

February 2, 2026
Learn how SQL Server 2025 on Red Hat Enterprise Linux 10 combines Microsoft's modern, AI-ready database with Red Hat's security, stability, and performance. Discover new features like Contained Availability Group sessions, enhanced observability, and Linux DMVs. Try it today on the Red Hat Ecosystem Catalog.
Blog post

Cracking the inference code: 3 proven strategies for high-performance AI

February 2, 2026
Learn how to maximize tokens per dollar, maintain sub-50ms latency, and scale horizontally with optimized runtimes (vLLM), model optimization, and distributed inference (llm-d).
Blog post

Fast and simple AI deployment on Intel Xeon with Red Hat OpenShift

February 2, 2026
Discover how Intel Xeon and Red Hat OpenShift combine to offer a protected and flexible foundation for deploying agentic AI in the enterprise. Simplify the adoption process with AI quickstarts and reduce AI infrastructure costs over the long term.
Blog post

IT automation with agentic AI: Introducing the MCP server for Red Hat Ansible Automation Platform

February 2, 2026
Learn how the MCP server enables Large Language Models (LLMs) to interact with and manage Ansible Automation Platform through natural language interactions, enhancing IT automation and maintaining enterprise-grade security.
Blog post

Friday Five — January 30, 2026

January 30, 2026
The Friday Five is a weekly Red Hat blog post with 5 of the week's top news items and ideas from or about Red Hat and the technology industry.

Featured stories


Red Hat AI Inference Server

Red Hat AI Inference Server optimizes model inference across the hybrid cloud, enabling faster and more cost-effective model deployments.

Browse by channel

Automation

The latest on IT automation for tech, teams, and environments

Artificial intelligence

Updates on the platforms that free customers to run AI workloads anywhere

Open hybrid cloud

Explore how we build a more flexible future with hybrid cloud

Security

The latest on how we reduce risks across environments and technologies

Edge computing

Updates on the platforms that simplify operations at the edge

Infrastructure

The latest on the world’s leading enterprise Linux platform

Applications

Inside our solutions to the toughest application challenges

Virtualization

The future of enterprise virtualization for your workloads, on-premises or across clouds
