
How many times have you had to start running a program on your terminal, and then… you’re just stuck there? Suddenly, you can’t close out your terminal or multi-task.

Well, no more–kind of!

With the release of Red Hat Enterprise Linux (RHEL) AI 1.4, Red Hat is introducing background process management as a developer preview. This feature lets you run the synthetic data generation (SDG) process in the background: you can detach it from the terminal, spin off a child process that keeps running, monitor its status, reattach it to bring it back into the foreground and terminate it if needed.

We are introducing new flags and commands to achieve this. Here’s how it will work in 1.4: 

The ‘-dt’ flag: The detach flag can be used with the ‘ilab data generate’ command to spin off a child process for synthetic data generation.

As you can see, it returns a unique ID for that child process and provides a log file where the logs of the process will be captured.  
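To give a rough idea, a detached run looks something like the following. This is a sketch, not the exact CLI output, and the log file path is a placeholder:

$ ilab data generate -dt
Started SDG in a background process with UUID 56511
Logs: <path to log file>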

Once you’ve sent the process to run in the background and want to check in on its status, you can use a new command, ‘ilab process list’, which lists all of the processes currently running in the background, complete with the UUID, log file location and status for each. The status will indicate if the process is complete, in progress or if there’s an error.
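As a sketch of what that listing might look like (the column names and layout here are illustrative):

$ ilab process list
UUID     Log File               Status
56511    <path to log file>     Running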

For example, we can see the run that we kicked off with the UUID 56511 has begun and is running. 

Finally, to attach it, you can run ‘ilab process attach’ with the flag ‘--latest’ or the specific UUID of the process you want to bring back into the foreground and resume running in the terminal.
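For example, either of the following should bring the SDG run back into your terminal (whether the UUID is passed positionally or via a flag may vary, so check ‘ilab process attach --help’):

$ ilab process attach --latest
$ ilab process attach 56511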

What’s new with model management in RHEL AI 1.4?

With RHEL AI 1.4, you can not only begin customizing and tuning models, but also integrate them into your applications! Additionally, you’re now able to upload your model to Hugging Face and S3 buckets.

To achieve this, you’ll make use of a new command, ‘ilab model upload’. As you can see below, you specify your destination (Hugging Face for the purposes of this example) and pass your model path and any necessary authorizations. In my case, I passed a Hugging Face token that gave me access to the folder.
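The command shape is roughly as follows. Note that the flag names shown here (‘--destination’, ‘--model’, ‘--hf-token’) are assumptions for the sake of illustration; check ‘ilab model upload --help’ for the exact options your version supports:

$ ilab model upload --destination <Hugging Face repo> --model <path to fine-tuned model> --hf-token <your token>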

Once I go to the folder in Hugging Face, I can see that the fine-tuned model is available there for further use.

Fine-tuned model available on Hugging Face

Check out RHEL AI 1.4 to experiment with these features today!


About the authors

Jehlum is on the Red Hat OpenShift Product Marketing team and is passionate about learning how customers are adopting OpenShift and helping them do more with it.


Nathan Weinberg is a senior software engineer at Red Hat based in the Boston area. He currently works on RHEL AI and has previously worked on OpenShift, OpenStack, Ceph, and Gluster.


Charlie is an intern with Red Hat's Container Runtimes team working on products like Podman and Buildah. He is currently a computer science major at the College of the Holy Cross.

