
My journey with artificial intelligence (AI) began with my deep-rooted passion for both computer science and philosophy. I found myself drawn to natural language processing (NLP), where the study of formal logic, languages and semantics seamlessly connect to a world of computer algorithms, graphs and networks. That curiosity led me to research and ultimately to Red Hat’s AI engineering team. 

I first joined Red Hat as an undergraduate intern on the performance and scale team. After joining Red Hat full-time, I began working on MLCommons MLPerf AI inference optimization, as well as leading development for the CodeFlare/Ray distributed machine learning (ML) training stack for Red Hat OpenShift AI. At the same time, I dove further into NLP research in graduate school, where I focused on topics like speech recognition, language model reasoning and natural language to structured query language (SQL) translation. 

Around the time I was finishing grad school, the InstructLab project began taking off. I took the opportunity to join the InstructLab team as an ML engineer, where I worked on developing the model training library and building LAB models used upstream and in Red Hat Enterprise Linux AI.

At this point, I was still trying to figure out how best to continue with my AI research work while aligning my passion with Red Hat’s goals in the open source AI space. That’s when the AI innovation team, which I had been collaborating with, moved from IBM Research to Red Hat. This presented the perfect opportunity to finally unify my interests in AI research and engineering. I joined the AI innovation team at the end of 2024, and began working on a combination of exciting research paths and open source model production and development.

A typical day for a research engineer

One of the most exciting things about working in AI research is that there really is no typical day. Every day presents a new challenge and every month a potential new topic.

I spend most of my days in Red Hat’s Boston office. Research requires constant collaboration, so it's nice to be able to come in, meet with people, work out ideas on a whiteboard, step away to set up experiments and then reconvene to discuss. We’re also pretty close as a team, so it’s great to be able to take a coffee break and chat or play a game of table tennis after a busy day.

Our team operates in a structured yet dynamic way—we meet daily to discuss findings, review experimental results and challenge each other’s ideas. Outside of those sessions, much of the research is independent, but we frequently meet in smaller, focused groups to tackle shared tasks. 

Whenever we have a new project there’s often a “hackathon” phase when multiple workstreams are moving towards a common goal, and meeting times become less formal and more ad hoc. For example, with the launch of DeepSeek R1, we had an exciting few weeks of reasoning, inference scaling and Group Relative Policy Optimization (GRPO) experiments to kickstart our effort to see how far we could push reasoning on custom data.

Why work on open source AI at Red Hat?

Red Hat pushes the idea of open source AI to a unique level—ensuring that not only models but also platforms, methods and pipelines are open.

Red Hat also focuses heavily on making AI approachable and accessible. The ability to lead the way with cutting-edge work and interact directly with the open source AI community with transparency (rather than waiting until we have a final result we are trying to sell) allows us to share our open source approach to AI more deeply with the rest of the world.

Rather than prescribing a one-size-fits-all answer, we aim to provide solutions for users and companies to build their own path in AI. We hope to become a part of a living and evolving community, instead of just capturing an AI space.

Advice for aspiring AI researchers

If you’re looking to break into AI, whether at Red Hat or elsewhere, start with the fundamentals. A basic grasp of probability theory, calculus and linear algebra provides the foundation for understanding machine learning. From there, you can explore core ML concepts like maximum likelihood estimation, perceptrons and neural network architectures.
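To make one of those fundamentals concrete, here is a minimal, illustrative sketch (not from the original post) of a single perceptron learning the logical AND function — the kind of small exercise that builds intuition before moving on to full neural network architectures:

```python
# A minimal perceptron trained on logical AND -- an illustrative
# example of the classic perceptron learning rule.
def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in data:
            # Step activation: fire if the weighted sum is positive
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred
            # Nudge weights toward reducing the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
         for (x1, x2), _ in AND]
print(preds)  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron learning rule is guaranteed to converge here; seeing why it *cannot* learn XOR is a natural next step toward understanding multi-layer networks.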

Ultimately, it’s about following your passions. AI is a vast field, and different aspects resonate with different people. If you are drawn to language and communication, you should delve into recurrent neural networks (RNNs), long short-term memory (LSTM) and transformers. If you love visuals and generative AI, you might find convolutional neural networks (CNNs) and diffusion models more fun. Robotics and gaming enthusiasts should look into reinforcement learning. Don’t worry about what is trending; focus on topics you enjoy learning.

Also, don’t forget to pursue your passions outside of work, too! Since childhood, I’ve been passionate about video games, especially handheld gaming. From the early Game & Watch systems to Game Boys, DSs, PSPs and now the latest gaming handhelds like the ROG Ally and AYN Odin, odds are if I’m playing a single-player game, it’s on a handheld system. I also enjoy modding and collecting retro handhelds. When playing multiplayer games, you’ll typically find me on PC for online play, and on Switch for couch co-op. It’s always super helpful to be able to step away from a research project, clear your head, decompress, and come back with a fresh perspective. For me, that’s the critical space that video games provide. Sometimes, I even find that after spending some time away from a problem and letting the knowledge rest, I come back with a better understanding than when I had stopped!

The next frontier of AI research

AI is evolving rapidly, and I’m excited to see where this field takes us next. The recent interest in language model reasoning has been really cool. Previously, it was seen as a niche or academic topic, which made the opportunities to work on reasoning research in industry and open source a bit more limited.

The idea of AI models that can think more like humans is becoming increasingly popular. It opens up a lot of possibilities for introducing novel reasoning techniques for more practical, accessible applications. We are currently working on some exciting options within Red Hat, so I’m looking forward to seeing where this path ends up taking us.

There is a lot on the horizon in terms of AI accessibility, tools for productivity and more interesting deep search offerings. Model communication protocols and agentic libraries are also making strides in AI model integration. No matter what area you are interested in, one thing is certain: the future of AI research is open source.

Our AI Engineering team is growing, and we’re looking for passionate technologists to join Mustafa in making AI technology available and accessible to all. Learn more about the team and explore our open roles here.


About the author

I am a research engineer focused on language model quality, efficiency, and scalability. I received my B.S. and M.S. in Computer Science from Columbia University, both with a focus on natural language processing and machine learning. I also work with IBM Research on various knowledge/data-related research topics.

