I’m excited to announce the general availability of Red Hat Enterprise Linux AI (RHEL AI) 1.2, our generative AI (gen AI) foundation model platform to develop, test and run large language models (LLMs) for enterprise applications. RHEL AI combines open source Granite LLMs with InstructLab model alignment tools on a purpose-built RHEL image optimized for gen AI use cases.
Building on the success of our initial release of RHEL AI 1.1 on September 5, 2024, this version furthers our commitment to empowering developers, AI engineers and data scientists by lowering the barriers to entry and adoption for enterprise AI. RHEL AI 1.2 brings significant enhancements, allowing organizations to more efficiently fine-tune and deploy LLMs using private, confidential and sovereign data to better align with enterprise use cases. These improvements, powered by InstructLab and a comprehensive software stack, now support a wider range of infrastructure options, including NVIDIA accelerated computing and software and the newly introduced AMD Instinct accelerators. We intend to continue expanding our hardware accelerator support with partners like Intel in upcoming releases.
Key highlights of RHEL AI 1.2:
Support for Lenovo ThinkSystem SR675 V3 servers
RHEL AI 1.2 is now supported on Lenovo ThinkSystem SR675 V3 servers with NVIDIA accelerated computing. Users can also take advantage of factory preload options for RHEL AI on these servers, making deployment faster and easier.
Support for AMD Instinct Accelerators (technology preview)
Language models require powerful computing resources, and RHEL AI now supports AMD Instinct Accelerators with the full ROCm software stack, including drivers, libraries and runtimes. With RHEL AI 1.2, organizations can leverage AMD Instinct MI300X GPUs for both training and inference, and AMD Instinct MI210 GPUs for inference tasks.
Availability on Azure and GCP
RHEL AI is now available on Microsoft Azure and Google Cloud Platform (GCP). With this, users can download RHEL AI from Red Hat, bring it to Azure or GCP, and create RHEL AI-based GPU instances.
Training checkpoint and resume
Long training runs during model fine-tuning can now be saved at regular intervals, thanks to periodic checkpointing. This feature allows InstructLab users to resume training from the last saved checkpoint instead of starting over, saving valuable time and computational resources.
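To illustrate the idea, here is a minimal, generic PyTorch sketch of periodic checkpointing and resume. It is not the InstructLab implementation (InstructLab manages its own checkpoint files during training), and the checkpoint.pt path and 100-step interval are arbitrary assumptions for this example:

```python
# Conceptual sketch of periodic checkpointing and resume (illustration only;
# InstructLab manages its own checkpoint files during training). The path
# checkpoint.pt and the 100-step interval are arbitrary choices for this sketch.
import os

import torch

CKPT_PATH = "checkpoint.pt"


def save_checkpoint(step, model, optimizer):
    torch.save(
        {"step": step, "model": model.state_dict(), "optimizer": optimizer.state_dict()},
        CKPT_PATH,
    )


def load_checkpoint(model, optimizer):
    if not os.path.exists(CKPT_PATH):
        return 0                                  # nothing saved yet: start from step 0
    state = torch.load(CKPT_PATH)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"] + 1                      # resume at the step after the last save


model = torch.nn.Linear(16, 16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
start_step = load_checkpoint(model, optimizer)

for step in range(start_step, 1000):
    loss = model(torch.randn(4, 16)).sum()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if step % 100 == 0:                           # save periodically so an interrupted run loses little work
        save_checkpoint(step, model, optimizer)
```

If the run is interrupted, restarting the script picks up from the last saved step rather than step 0, which is the same time-saving behavior the InstructLab checkpointing feature provides for long fine-tuning jobs.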
Auto-detection of hardware accelerators
The ilab CLI can now automatically detect the type of hardware accelerator in use and configure the InstructLab pipeline accordingly for optimal performance, reducing the manual setup required.
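As a rough illustration of what accelerator detection involves (this sketch is an assumption for clarity, not the actual ilab detection logic), a tool can query PyTorch for the available backend and device name and choose a matching configuration:

```python
# Rough illustration of accelerator auto-detection (an assumption for clarity,
# not the actual ilab detection logic).
import torch


def detect_accelerator() -> str:
    if torch.cuda.is_available():
        name = torch.cuda.get_device_name(0)
        # ROCm builds of PyTorch expose AMD GPUs through the same CUDA-style API,
        # so check the backend to tell AMD and NVIDIA apart.
        return f"rocm: {name}" if torch.version.hip else f"cuda: {name}"
    return "cpu"


print(detect_accelerator())   # e.g. "cuda: NVIDIA H100 80GB HBM3" or "rocm: AMD Instinct MI300X"
```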
Enhanced training with PyTorch FSDP (technology preview)
For multi-phase training of models with synthetic data, ilab train now uses PyTorch Fully Sharded Data Parallel (FSDP). This dramatically reduces training times by sharding a model’s parameters, gradients and optimizer states across data parallel workers (e.g., GPUs). Users can pick FSDP for their distributed training by using ilab config edit.
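For readers unfamiliar with FSDP, the sketch below shows the general pattern in plain PyTorch. It is a conceptual illustration only, not the RHEL AI training code, and the model, sizes and hyperparameters are placeholders: each rank wraps the model in FullyShardedDataParallel so parameters, gradients and optimizer states are split across GPUs rather than replicated.

```python
# Conceptual sketch of PyTorch FSDP (illustration only, not the RHEL AI training
# code): each data-parallel worker keeps only a shard of the parameters,
# gradients and optimizer states, gathering full parameters layer by layer as
# they are needed. Launch with: torchrun --nproc_per_node=<num_gpus> fsdp_sketch.py
import os

import torch
import torch.nn as nn
from torch.distributed import destroy_process_group, init_process_group
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def main():
    init_process_group(backend="nccl")            # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy stand-in for an LLM; FSDP shards its parameters across the ranks.
    model = nn.Sequential(
        nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096)
    ).cuda(local_rank)
    model = FSDP(model)

    # Created after wrapping, so the optimizer states are sharded as well.
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    inputs = torch.randn(8, 4096, device=local_rank)
    loss = model(inputs).sum()
    loss.backward()                               # gradients are reduce-scattered into shards
    optimizer.step()

    destroy_process_group()


if __name__ == "__main__":
    main()
```

Because no single GPU has to hold the full set of parameters, gradients and optimizer states at once, larger models fit into memory and multi-GPU training scales more efficiently.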
These are just a few of the exciting new features in RHEL AI 1.2. Many more improvements and bug fixes are included, making this release a powerful tool for AI development.
Don’t miss out on these powerful new features! Download RHEL AI 1.2 today, deploy it on-premises or on all major public cloud providers, and take your AI development to the next level.
Important notice:
With the introduction of RHEL AI 1.2, we will be deprecating support for RHEL AI 1.1 in 30 days. Please ensure your systems are upgraded to RHEL AI 1.2 to continue receiving support.