Artificial intelligence (AI) is shaping the future of industries across the globe. Yet, the intricate mathematics and complex theories often associated with AI research can pose significant barriers to its broader adoption. That’s where Red Hat’s newest podcast, “No Math AI”, comes in.
Hosted by Dr. Akash Srivastava, Red Hat chief architect and a pioneer of the InstructLab project, and Isha Puri, a PhD student in AI at the Massachusetts Institute of Technology, “No Math AI” is a monthly podcast designed to make cutting-edge AI research more accessible. Whether you’re an AI practitioner, a business leader or a tech enthusiast, this podcast offers insights into the real-world impact of AI advancements on business.
Each episode breaks down crucial AI concepts and distills them into actionable takeaways. “No Math AI” makes it easier for enthusiasts and business leaders alike to understand AI and incorporate it into their strategies with confidence.
Episode 1: Inference-time scaling and how small models beat the big ones
In the debut episode of “No Math AI”, Dr. Srivastava and Isha are joined by guest speakers and research engineers Shivchander Sudalairaj, GX Xu and Kai Xu. Together, they dive into a crucial topic that’s making waves in AI performance: inference-time scaling.
Our hosts and guest speakers discuss how this technique is unlocking new levels of performance for AI models, enhancing reasoning capabilities, powering agentic AI and helping ensure higher accuracy.
Tune in to the episode below to discover how inference-time scaling is transforming AI performance in real-world scenarios and how businesses can use it to stay ahead in the rapidly evolving AI landscape.
You can listen to and watch the first episode on Spotify or the Red Hat YouTube channel. Don't forget to join our hosts each month for more insights into how AI can help shape the future of your business.
About the author
Carlos Condado is a Senior Product Marketing Manager for Red Hat AI. He helps organizations navigate the path from AI experimentation to enterprise-scale deployment by guiding the adoption of MLOps practices and integration of AI models into existing hybrid cloud infrastructures. As part of the Red Hat AI team, he works across engineering, product, and go-to-market functions to help shape strategy, messaging, and customer enablement around Red Hat’s open, flexible, and consistent AI portfolio.
With a diverse background spanning data analytics, integration, cybersecurity, and AI, Carlos brings a cross-functional perspective to emerging technologies. He is passionate about technological innovations and helping enterprises unlock the value of their data and gain a competitive advantage through scalable, production-ready AI solutions.