2024 was an exciting year in technology, and Red Hat blog authors delivered news stories, musings and tutorials on all the latest trends and topics. Using a dump of traffic logs and the awk command, I’ve compiled a list of the top 10 articles we published this year on the topic of artificial intelligence (AI).

1. What is RHEL AI? A guide to the open source way for doing AI

AI might be an impressive feat of engineering, but what can it actually do for your organization? AI-powered chatbots are a dime a dozen, but it’s rare to find one that actually provides factual and useful information, especially in specialized domains. To make AI truly useful to your organization, you need to take ownership of the language model itself and how it’s trained, and that’s exactly what Red Hat Enterprise Linux AI (RHEL AI) makes possible.

RHEL AI is a platform designed to simplify the development and deployment of AI models for enterprise applications. It provides a bootable, optimized RHEL image that combines the open source-licensed Granite large language model (LLM) from IBM Research with InstructLab model alignment tooling. With these tools, you can customize the knowledge base of your AI model so that it’s broadly useful across your business. In this article, Tushar Katarki and Jeremy Eder take a look at RHEL AI and how its open source components enable you to make it your own.

Read the article

2. 4 use cases for AI in cyber security

Humans are good at pattern recognition and at spotting aberrations from an established pattern. That can be a useful skill in information security, because computers tend to be fairly predictable systems. However, thorough monitoring of an IT system requires constant vigilance, and humans get fatigued and distracted. As Owen Watkins points out in his article, AI is also good at pattern recognition, and it conveniently never tires. That makes it a natural fit for cyber security. Whether it’s scanning code for potential vulnerabilities, identifying threats from databases of known exploits or reviewing logs, AI can be an invaluable tool for keeping your organization safe from attack.

Read the article

3. Evaluating LLM inference performance on Red Hat OpenShift AI

Red Hat OpenShift AI is a scalable and flexible MLOps platform with tools to build, deploy and manage AI-enabled applications. Built using open source technologies, it enables you to test and serve models, and to develop and deliver innovative apps. The software stack includes KServe, Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh, with more available depending on your LLM and GPU choices. The Red Hat OpenShift team has also created the llm-load-test tool to help you load test models running on your model serving stack.
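
To give a sense of what load testing an inference endpoint involves, here’s a minimal Python sketch that sends concurrent requests to an OpenAI-compatible completion endpoint and records per-request latency. It illustrates the general idea only; it is not the llm-load-test tool, and the endpoint URL and model name are placeholders for your own model server.

```python
# Minimal illustration of load testing an LLM inference endpoint.
# This is NOT llm-load-test; it only sketches the idea: send concurrent
# requests, record per-request latency and summarize the results.
# ENDPOINT and MODEL are placeholders for your own model server.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "http://localhost:8080/v1/completions"  # placeholder URL
MODEL = "granite-7b-lab"                            # placeholder model name
PROMPT = "Summarize the benefits of load testing in one sentence."
CONCURRENCY = 4
REQUESTS = 20

def send_request(_):
    """Send one completion request and return its latency in seconds."""
    payload = {"model": MODEL, "prompt": PROMPT, "max_tokens": 64}
    start = time.perf_counter()
    resp = requests.post(ENDPOINT, json=payload, timeout=120)
    resp.raise_for_status()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(send_request, range(REQUESTS)))

print(f"requests: {len(latencies)}")
print(f"mean latency: {statistics.mean(latencies):.2f}s")
print(f"max latency:  {max(latencies):.2f}s")
```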

If you’re planning to implement AI broadly throughout your organization’s workflow, or just to run a dedicated AI for general use, Red Hat OpenShift AI is likely the foundation you want to build on.

Read the article

4. What to expect in the next era of artificial intelligence in banking

Banks have a long history of using predictive AI to automate and streamline operations, using pattern recognition to reconcile payments or to assist debt collection by predicting who is most likely to repay. However, expanding the adoption of AI throughout a banking organization and across delivery and operations teams is challenging. In his article, Steven Huels explores the factors that influence the acceptance and integration of AI among banks and their customers.

Read the article

5. Tips for generative AI large language model prompt patterns

Generative AI (gen AI) is a probability engine that can produce highly likely sentences, but ensuring that those sentences are relevant and factual is still something of an art. Through a process called prompt engineering, you can design AI prompts to maximize the likelihood of a contextually appropriate response. Well-designed prompts help guide an AI model toward meaningful output, while poorly formulated prompts can lead to incorrect or irrelevant results. There are a variety of factors and techniques to consider when writing a good AI prompt, and Michael Santos explores them in this article.
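
As a simple illustration of the kind of pattern the article discusses, here’s a short Python sketch that assembles a persona-plus-few-shot prompt from a template. The persona text and example questions are mine, not taken from the article.

```python
# A minimal sketch of a common prompt pattern: set a persona, give a few
# worked examples (few-shot), then pose the real question. The template
# text below is illustrative, not taken from the article.
PERSONA = "You are a concise technical support assistant for Linux administrators."

FEW_SHOT_EXAMPLES = [
    ("How do I check free disk space?",
     "Run `df -h` to see usage per filesystem."),
    ("How do I list running services?",
     "Run `systemctl list-units --type=service --state=running`."),
]

def build_prompt(question: str) -> str:
    """Assemble a persona + few-shot prompt for a single user question."""
    lines = [PERSONA, ""]
    for q, a in FEW_SHOT_EXAMPLES:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
        lines.append("")
    lines.append(f"Q: {question}")
    lines.append("A:")
    return "\n".join(lines)

print(build_prompt("How do I check which kernel version is running?"))
```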

Read the article

6. Introducing image mode for Red Hat Enterprise Linux

There are lots of places you might install Linux, including servers, containers, virtual machines, workstations and more. Each of those platforms is a little different from the others, so a single golden image rarely suffices. In his article, Matt Micene explores the image mode technology now available for RHEL. Image mode for RHEL uses the same familiar tools, skills and patterns you’re used to in your container workflow to help you build a customized operating system that’s easy to install and run on any platform. Because it starts with containers, you get a complete inventory of your images and environments, complete with history and version control. It uses bootc and OSTree, so your images can receive regular updates, and rollbacks require only a reboot.

Read the article

7. Developer preview of Red Hat OpenShift Lightspeed

The ultimate test of any software is its usability. An intuitive user interface and well-written documentation go a long way, but when you’re in the middle of a task there’s nothing quite like interactive guidance. Red Hat OpenShift Lightspeed is an integrated gen AI virtual assistant. Using a natural-language interface in English, users can ask the assistant questions related to Red Hat OpenShift. It can assist with troubleshooting and investigating cluster resources by applying Red Hat’s extensive knowledge and experience in building, deploying and managing applications across the hybrid cloud. In this article, you’ll learn more about Red Hat OpenShift Lightspeed and watch a video of it in action as it guides a user through scaling pods.

Read the article

8. RHEL vs. RHEL AI: What’s the difference?

The title of Deb Richardson’s article says it all. Maybe you already know RHEL, and maybe you’ve heard of RHEL AI, but what sets the latter apart from the former? In this article, you not only learn the difference between RHEL and RHEL AI, but you also learn about data transparency, Granite LLM, InstructLab and more.

Read the article

9. How to get started with InstructLab today

InstructLab is a large-scale alignment tool for customizing AI. You can use InstructLab to train or fine-tune an LLM so that your AI applications provide relevant and current answers to user queries. One way to refine your LLM is to provide sample questions and answers about a range of topics. This data set is formatted as a YAML file for easy parsing, so it helps to understand some basic concepts about YAML. It’s a starkly simple format, and you only need to know a few rules of its syntax.
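
To show how little YAML you need to know, here’s a small Python sketch that parses a question-and-answer snippet with PyYAML. The field names below are illustrative only and are not the exact schema InstructLab expects; the article and the InstructLab documentation cover that format.

```python
# Illustrative only: a tiny question/answer data set expressed in YAML and
# parsed with PyYAML. The field names here are examples, not the exact
# schema InstructLab expects; see the InstructLab docs for that format.
import yaml  # pip install pyyaml

SEED_DATA = """
seed_examples:
  - question: What is the capital of France?
    answer: The capital of France is Paris.
  - question: Who wrote the play Hamlet?
    answer: Hamlet was written by William Shakespeare.
"""

data = yaml.safe_load(SEED_DATA)
for example in data["seed_examples"]:
    print(f"Q: {example['question']}")
    print(f"A: {example['answer']}\n")
```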

Read the article

10. Making LLMs environmentally and budget friendly

The future of AI includes smaller models. The days of training an AI on an outdated snapshot of the Internet and hoping for the best are numbered. Costs are already spiraling out of control, and the resources required for a generalist AI model with only a tenuous grasp of context make that approach unrealistic at scale. As organizations adopt AI, the need for fine-tuning and specificity has become obvious. In his article, Ishu Verma explains quantization, pruning and knowledge distillation, so you can make your AI applications affordable and sustainable.
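
To make one of those techniques concrete, here’s a minimal NumPy sketch of the idea behind weight quantization: mapping 32-bit floating point weights to 8-bit integers plus a scale factor. It’s a toy illustration of the concept, not how any particular framework implements it.

```python
# Toy illustration of the idea behind weight quantization: store float32
# weights as int8 values plus a single scale factor, then reconstruct an
# approximation when needed. Real frameworks are far more sophisticated.
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.normal(size=1000).astype(np.float32)  # stand-in for model weights

# Symmetric quantization: map the largest absolute value to 127.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize to see how much precision was lost.
weights_restored = weights_int8.astype(np.float32) * scale
error = np.abs(weights_fp32 - weights_restored).mean()

print(f"storage: {weights_fp32.nbytes} bytes -> {weights_int8.nbytes} bytes")
print(f"mean absolute error after dequantization: {error:.5f}")
```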

Read the article


About the author

Seth Kenlon is a Linux geek, open source enthusiast, free culture advocate, and tabletop gamer. Between gigs in the film industry and the tech industry (not necessarily exclusive of one another), he likes to design games and hack on code (also not necessarily exclusive of one another).

