I’m excited to announce the general availability of Red Hat Enterprise Linux AI (RHEL AI) 1.2, our generative AI (gen AI) foundation model platform to develop, test and run large language models (LLMs) for enterprise applications. RHEL AI combines open source Granite LLMs with InstructLab model alignment tools on a purpose-built RHEL image optimized for gen AI use cases.
Building on the success of our initial release, RHEL AI 1.1, on September 5, 2024, this version furthers our commitment to empowering developers, AI engineers and data scientists by lowering the barriers to entry and adoption for enterprise AI. RHEL AI 1.2 brings significant enhancements, allowing organizations to more efficiently fine-tune and deploy LLMs using private, confidential and sovereign data to better align them with enterprise use cases. These improvements, powered by InstructLab and a comprehensive software stack, now support a wider range of infrastructure options, including NVIDIA accelerated computing and software and the newly introduced AMD Instinct accelerators. We intend to continue expanding our hardware accelerator support with partners like Intel in upcoming releases.
Key highlights of RHEL AI 1.2:
Support for Lenovo ThinkSystem SR675 V3 servers
RHEL AI 1.2 is now supported on Lenovo ThinkSystem SR675 V3 servers with NVIDIA accelerated computing. Users can also take advantage of factory preload options for RHEL AI on these servers, making deployment faster and easier.

Support for AMD Instinct Accelerators (technology preview)
Language models require powerful computing resources, and RHEL AI now supports AMD Instinct Accelerators with the full ROCm software stack, including drivers, libraries and runtimes. With RHEL AI 1.2, organizations can leverage AMD Instinct MI300X GPUs for both training and inference, and AMD Instinct MI210 GPUs for inference tasks.

Availability on Azure and GCP
RHEL AI is now available on Azure and Google Cloud Platform (GCP). With this, users can download RHEL AI images from Red Hat, bring them to Azure and GCP, and create RHEL AI-based GPU instances.

Training checkpoint and resume
Long training runs during model fine-tuning can now be saved at regular intervals, thanks to periodic checkpointing. This feature allows InstructLab users to resume training from the last saved checkpoint instead of starting over, saving valuable time and computational resources.

Auto-detection of hardware accelerators
The ilab CLI can now automatically detect the type of hardware accelerator in use and configure the InstructLab pipeline accordingly for optimal performance, reducing the manual setup required.

Enhanced training with PyTorch FSDP (technology preview)
For multi-phase training of models with synthetic data, ilab train now uses PyTorch Fully Sharded Data Parallel (FSDP). This dramatically reduces training times by sharding a model’s parameters, gradients and optimizer states across data parallel workers (e.g., GPUs). Users can pick FSDP for their distributed training by using ilab config edit.
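As a rough sketch of what that selection looks like (the configuration key names below are assumptions and may vary by release; consult the RHEL AI 1.2 documentation for the authoritative settings), the relevant excerpt of the InstructLab configuration opened with ilab config edit might resemble:

    # Illustrative excerpt only; the 'distributed_backend' key name is an assumption
    train:
      # Select PyTorch FSDP as the distributed training backend
      distributed_backend: fsdp

After saving the change, subsequent ilab train runs would pick up the FSDP-backed training pipeline.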
These are just a few of the exciting new features in RHEL AI 1.2. Many more improvements and bug fixes are included, making this release a powerful tool for AI development.
Don’t miss out on these powerful new features! Download RHEL AI 1.2 today and deploy it on-premises or across all major public cloud providers and take your AI development to the next level.
Important notice:
With the introduction of RHEL AI 1.2, we will be deprecating support for RHEL AI 1.1 in 30 days. Please ensure your systems are upgraded to RHEL AI 1.2 to continue receiving support.