With the increase in adoption of Kubernetes, it has become popular to run containerized applications on GPUs, specialized processors well suited to artificial intelligence and machine learning (AI/ML) workloads. ROSA provides a managed Red Hat OpenShift service running on AWS so customers can quickly build, deploy and manage containerized applications.

Now, you can use GPU instance types with Red Hat OpenShift Service on AWS (ROSA) as well as Red Hat OpenShift Data Science to accelerate your deployment of AI/ML workloads. Support for the GPU instance type software stack is provided by AWS.

With GPU instance types now enabled for ROSA, you can develop, test and run AI/ML workloads that rely on GPU-accelerated instance types from AWS. Additionally, with ROSA being a managed OpenShift service and OpenShift Data Science providing core machine learning tooling, customers can focus on accelerating the deployment of AI/ML workloads without needing to manage the complexities of the underlying Kubernetes infrastructure.

To see the instance types available to you, run `rosa list instance-types` with the ROSA CLI.
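As a sketch of the workflow, the same CLI can then add a GPU-backed machine pool to an existing cluster. The cluster name, machine pool name and replica count below are illustrative placeholders:

```shell
# List the instance types available to your account
# (requires an authenticated rosa CLI session).
rosa list instance-types

# Add a GPU-backed machine pool to an existing cluster.
# "my-cluster" and "gpu-pool" are placeholder names.
rosa create machinepool --cluster my-cluster \
  --name gpu-pool \
  --instance-type p3.2xlarge \
  --replicas 1
```

Once the new nodes join the cluster, GPU workloads can be scheduled onto them.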

The specific instances being made available are as follows:

  • p3.2xlarge
  • p3.8xlarge
  • p3.16xlarge
  • p3dn.24xlarge
  • p4d.24xlarge
  • g4dn.xlarge through g4dn.metal
  • g5.xlarge through g5.48xlarge
  • dl1.24xlarge (Intel Habana Gaudi)

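Once GPU nodes are available (and the NVIDIA GPU Operator is installed so that `nvidia.com/gpu` is advertised on the nodes), a workload requests an accelerator through the standard Kubernetes resource request. This is a minimal smoke-test sketch; the pod name and image tag are illustrative:

```shell
# Minimal GPU smoke test (pod name and image tag are illustrative).
# Requires the NVIDIA GPU Operator so nvidia.com/gpu is exposed as a resource.
oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda-check
    image: nvidia/cuda:11.6.2-base-ubi8
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1   # schedules the pod onto a GPU-backed node
EOF
```

If the pod completes and its logs show `nvidia-smi` output, the GPU nodes are working end to end.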
For more on this topic, try our learning path on the developer portal to learn how to create a Jupyter notebook that uses GPUs for AI/ML modeling.

Anes Kim is a product marketing manager for Red Hat OpenShift cloud services and has been at Red Hat since 2020.
