We're hiring
Let’s shape the future of AI, together
The future of open source starts with you. Red Hat® is looking for top talent to join our AI Engineering team and become an integral part of democratizing AI technology, making it available and accessible to all.
Innovate beyond boundaries
Make your mark in the world of AI and take your talents and skills to the next level. At Red Hat, you'll have the support, flexibility, and perks to do your best work.
Our vision is full of possibilities
Modern enterprise challenges require innovative thinking. At Red Hat, we’re not just adapting to change—we’re one of the driving forces behind it, building open AI solutions that are transforming how organizations address their most complex problems. To learn about how our open source culture supports our AI goals, you can read more from our CEO, Matt Hicks.
Why join us?
Collaborate to innovate
AI Engineering is a newly formed team where you’ll have the scope and impact to make your mark and get your ideas into people’s hands. Red Hat’s open culture fosters collaboration and allows the best ideas to grow. Most Red Hatters have flexible work options and can choose the projects and growth opportunities that matter most to them.
Contribute to open source communities
Build innovative AI technologies in the open source way. As part of our growing team, you’ll have the opportunity to actively participate in communities like KServe, Kubeflow, and vLLM that value and anticipate contributions from all levels.
Expand your skillset and impact
Be empowered to innovate, and focus on challenging, high-impact work that makes a difference. Our team is actively solving problems at scale, in the cloud, on premises, and at the edge. Being part of our fast-growing AI Engineering team means your career can grow along with it.
Comprehensive rewards
Along with our competitive base pay and equity programs, we offer a quarterly bonus program, leave opportunities, and a full benefits portfolio. As a global company, we design unique offerings that align with the needs and values of Red Hatters wherever they are.
Take part in limitless innovation
From systems engineering to model tooling, there's bold and meaningful work happening across our AI teams - and something for every kind of builder, thinker, and doer to dive into. Whether you're motivated by infrastructure, inference, or enterprise applications, you'll find a team here where your skills (and curiosity) can thrive.
Inference Platform
This team is at the epicenter of LLM performance, building the next generation of serving and inference platforms. Engineers here don’t just use state-of-the-art models; they build the engines that run them and make GPUs go brrr. They are core contributors to vLLM, the high-throughput engine behind countless LLM inference applications, and they’re pushing boundaries with llm-compressor for advanced quantization and compression, speculators for cutting-edge speculative decoding, and llm-d, a Kubernetes-native serving stack designed for intelligent scheduling and disaggregated inference at scale. If you're the kind of engineer who loves performance tuning, building fast systems, and getting deep into model internals, this is the team for you.
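To give a flavor of the stack this team contributes to, here is a minimal sketch of offline inference with vLLM's Python API. The model name is purely illustrative; any compatible checkpoint would do, and production deployments would typically run behind the serving layer rather than in a script like this.

```python
# Minimal vLLM offline-inference sketch (illustrative model name only).
from vllm import LLM, SamplingParams

prompts = ["What does open source mean to you?"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# Load a small model; vLLM handles batching and KV-cache management internally.
llm = LLM(model="facebook/opt-125m")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```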
Model Customization & AI Tooling
This is where powerful models meet practical applications - bridging the gap between cutting-edge foundation models and the specific needs of enterprise users. In this team, you’ll build open source tools that let others securely fine-tune models on their own data, create sophisticated RAG pipelines, and design smaller, more efficient models tailored to specific tasks. If you want to make AI more accessible, adaptable, and cost-effective, this is where you’ll thrive.
Agents & Application Development
Multi-agent systems are the next frontier in AI, and this team is laying the foundation. They’re building the platform that enables developers to move beyond simple prototypes and single-model applications to full-scale, intelligent workflows. Here, you’ll help design the core framework, unified APIs, and reusable components that power AI-native apps: think coordination, security, and observability at scale. If you’re excited by the idea of designing the architecture for tomorrow’s AI-driven systems, this is the place to be.
AI Platform Core Components (AIPCC)
The AIPCC team is responsible for building the secure, scalable foundation that supports AI across Red Hat. Whether it’s trusted data pipelines, model packaging standards, or deployment blueprints, you’ll create the foundation every team depends on to turn ideas into production-ready AI solutions. AIPCC is the place to be if you're a systems thinker who thrives on enabling others - and building foundational components that power entire ecosystems.
AI Platform and Operations
This team makes everything else possible. They’re building the robust, enterprise-ready MLOps and GenAI Ops platform that powers Red Hat’s AI vision—on OpenShift, across the hybrid cloud, and at scale. You’ll engineer solutions for model lifecycle management, GPU resource allocation, and observability, turning complex infrastructure challenges into elegant, flexible systems. It’s the backbone of AI at Red Hat, and it’s the perfect place for someone who enjoys infrastructure engineering, distributed systems, and building platforms that scale AI across environments.
Join us if you’re passionate about open source and want to help make AI open. You’ll engage with, and become influential in, meaningful open source communities, at a company where that work is highly valued and encouraged.
Explore how Red Hat is building AI into our culture
Learn more about us and what we’re working on.
Gain more AI knowledge
Hear from other Red Hatters
Want more Red Hat content and thought leadership on AI? Visit Red Hat TV, where you can stream videos on all things AI.
Meet Brian, Senior Machine Learning Engineer on the AI Inference team
Language and logic: Mustafa’s journey into AI research
Check out our Journey into AI video series
How our customers are using Red Hat AI solutions
The City of Vienna’s IT department brings GenAI technology to city employees - while keeping the experience human-centric.
Hitachi operationalized AI across its entire business with Red Hat OpenShift AI.
DenizBank is developing AI models to help identify loan opportunities for customers and detect potential fraud. With Red Hat AI, its data scientists gained a new level of autonomy over their data.
Join the talent community
Be a part of a team of passionate problem-solvers who want to make what they build available for everyone. Don’t see a role that’s a fit right now? Leave your details below to stay connected about new opportunities and life at Red Hat.