
Artificial intelligence (AI) has grown remarkably in a short time, and it can be difficult to know when or where to start, or how to keep up with it all. One of the distinct advantages of using Red Hat Enterprise Linux (RHEL) is that you don’t have to spend your own mental cycles cutting through the AI noise. RHEL focuses on technology that’s useful and reliable, so you can use it as both a gateway and a platform for emerging-but-proven AI use cases. Here’s how you can use RHEL to get up to speed with AI.

What’s the business case for AI?

The market is ready for AI to move on from being a fun distraction to something genuinely useful for productivity. Red Hat OpenShift Lightspeed already uses AI to help cloud admins get complex configurations done quickly and accurately. It’s being used in the real world right now to configure, launch, and maintain containerized environments running mission-critical applications. Your use case may differ, and RHEL can help you discover where AI can assist in your business team’s workflow.

With only a rudimentary understanding of a few Python commands, you can run InstructLab and its ilab interactive command-line interface on a RHEL, Fedora Linux, or macOS laptop or desktop computer. It’s just three steps:

  1. Install InstructLab
  2. Download a pre-trained large language model (LLM)
  3. Chat with the LLM
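In practice, the three steps above might look something like this. This is a sketch assuming a Python environment with pip available; the commands come from InstructLab's ilab CLI, and exact subcommands and flags can vary between releases:

```shell
# 1. Install InstructLab into a Python virtual environment
python3 -m venv --upgrade-deps venv
source venv/bin/activate
pip install instructlab

# 2. Initialize the configuration, then download a pre-trained LLM
ilab config init
ilab model download

# 3. Open an interactive chat session with the downloaded model
ilab model chat
```

From the chat prompt you can ask the model questions directly, which is a quick way to probe what it already knows before deciding whether to train it further.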

How to use AI without exposing business secrets

Publicly available AI models don’t know about your specialized business processes, research or plans, and you wouldn’t want them to! That doesn’t mean AI can’t be useful to you, though. You can develop your own private AI model that’s based on the open source work of Red Hat, IBM and the community at large by adding in-house knowledge to a baseline LLM.

AI is about experimentation and finding the boundaries of an LLM’s existing knowledge base. Part of understanding how AI can fit in with your business is determining what the base level of an LLM can provide without further training. Once you’ve identified the limitations of a pre-trained LLM using InstructLab, you can add new knowledge and skills to it by adding information to its taxonomy repository.
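As a sketch of that workflow (again using InstructLab's ilab CLI, with the understanding that subcommands and repository layout may differ by version, and the edit step shown is illustrative), extending the taxonomy and retraining might look like:

```shell
# Clone the community taxonomy repository
git clone https://github.com/instructlab/taxonomy.git

# Add your in-house knowledge by editing or creating a qna.yaml
# file under the appropriate subdirectory of the taxonomy tree
# (the exact path depends on the subject area you are adding).

# Show which taxonomy entries have changed since the last run
ilab taxonomy diff

# Generate synthetic training data from the new entries, then retrain
ilab data generate
ilab model train
```

Once training completes, you can chat with the updated model and compare its answers against the baseline to see whether your new knowledge took hold.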

The easiest way to get started with AI

It doesn’t get any easier than a two-step process, which is all RamaLama requires:

1. Install RamaLama

You can use the dnf package manager:

$ sudo dnf install python3-ramalama

Or you can use Python’s pip command:

$ python3 -m pip install ramalama --user

2. Run an AI model

Use the ramalama command to run and interact with a pre-trained AI model:

$ ramalama run granite3-moe

This opens an interactive prompt. Test the model and see what it knows.

> Write a one-line hello world Python application

print("Hello, World!")

Cut through the noise of AI

There’s a lot of noise about AI right now, but it doesn’t have to be confusing or overwhelming. When you have a curated environment providing the tools you need to explore and develop AI, you can quickly get up to speed with the latest developments, emerging technologies, and, most importantly, the possibilities for useful AI.


About the author

Seth Kenlon is a Linux geek, open source enthusiast, free culture advocate, and tabletop gamer. Between gigs in the film industry and the tech industry (not necessarily exclusive of one another), he likes to design games and hack on code (also not necessarily exclusive of one another).

