The Red Hat Blog: Expert insights for navigating tech complexity

Featured posts

Blog post

AI quickstarts: An easy and practical way to get started with Red Hat AI

January 20, 2026

Discover AI quickstarts, a new catalog of ready-to-run, industry-specific use cases that put the power of open source AI directly into your hands. Learn how to deploy, explore, and extend these practical examples to master Red Hat AI and take your AI ideas from experimentation to production.

Latest posts

Blog post

Achieve more with Red Hat OpenShift 4.21

February 3, 2026

Blog post

Evolution to a sovereign TechCo: Embracing Polycloud, agentic AI, and digital trust

February 3, 2026

Learn about StarHub's journey to become a Trusted Service Provider, focusing on the 3 non-negotiable pillars: polycloud agility, secure agentic AI, and digital trust. Discover the strategic imperative for Communications Service Providers to evolve and the advanced security design principles being adopted.

Blog post

Announcing general availability of SQL Server 2025 on Red Hat Enterprise Linux 10

February 2, 2026

Blog post

Cracking the inference code: 3 proven strategies for high-performance AI

February 2, 2026

Learn how to maximize tokens per dollar, maintain sub-50ms latency, and scale horizontally with optimized runtimes (vLLM), model optimization, and distributed inference (llm-d).

Blog post

Fast and simple AI deployment on Intel Xeon with Red Hat OpenShift

February 2, 2026

Discover how Intel Xeon and Red Hat OpenShift combine to offer a protected and flexible foundation for deploying agentic AI in the enterprise. Simplify the adoption process with AI quickstarts and reduce AI infrastructure costs over the long term.

Featured stories

Red Hat AI Inference Server

Red Hat AI Inference Server optimizes model inference across the hybrid cloud, creating faster and more cost-effective model deployments.

Browse by channel

Automation

The latest on IT automation for tech, teams, and environments

Artificial intelligence

Updates on the platforms that free customers to run AI workloads anywhere

Open hybrid cloud

Explore how we build a more flexible future with hybrid cloud

Security

The latest on how we reduce risks across environments and technologies

Edge computing

Updates on the platforms that simplify operations at the edge

Infrastructure

The latest on the world’s leading enterprise Linux platform

Applications

Inside our solutions to the toughest application challenges

Virtualization

The future of enterprise virtualization for your workloads on-premises or across clouds
