Artificial intelligence (AI)-powered applications are increasingly delivering mainstream experiences to users. Whether predicting rainfall and its effects on specific terrain or powering a video game, AI models use machine learning (ML) to humanize the user experience in virtual and real-world applications.
Operationalizing these applications with integrated ML capabilities and keeping them up to date—a practice known as MLOps—ensures prediction accuracy. In this session, we will show you how to accelerate MLOps using an AI battleship game we built recently, played live by more than 8,000 users at Red Hat® Summit.
MLOps requires collaboration among data scientists, developers, ML engineers, and IT operations teams, along with a range of DevOps technologies. This coordination can take significant effort.
We’ll briefly discuss how data scientists build, test, and train ML models on Kubernetes hybrid cloud platforms such as Red Hat® OpenShift®. Next, we will explore how the integrated DevOps CI/CD capabilities in Red Hat OpenShift (GitOps and Pipelines) allow us to automate and accelerate the integration of ML models into the application development process. Ultimately, these capabilities enable consistent, scaled application deployments and accelerate the frequent redeployment of updated ML models into production.
In this webinar, we’ll talk about:
- How data scientists build, test, and train ML models on Kubernetes hybrid cloud platforms like Red Hat OpenShift
- How Red Hat OpenShift incorporates GitOps and Pipelines to automate and accelerate ML model integration during application development
- How the integrated DevOps CI/CD capabilities in Red Hat OpenShift facilitate on-demand application deployment and updates
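To make the CI/CD flow above concrete, a Red Hat OpenShift Pipelines (Tekton) definition for retraining a model and rolling out the updated version might look roughly like the sketch below. This is a minimal illustration, not the pipeline from the Summit demo: the `train-model` task, the pipeline name, and the deployment name are hypothetical placeholders, while `git-clone` and `openshift-client` are standard tasks shipped with OpenShift Pipelines.

```yaml
# Hypothetical Tekton Pipeline: retrain an ML model, then redeploy the app.
# Task and resource names are illustrative placeholders.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ml-model-cicd
spec:
  params:
    - name: git-url
      type: string
      description: Repository holding the model training code
  workspaces:
    - name: shared-data            # passes artifacts between tasks
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone            # standard task: clones the repo
      workspaces:
        - name: output
          workspace: shared-data
      params:
        - name: url
          value: $(params.git-url)
    - name: train-model
      runAfter: ["fetch-source"]
      taskRef:
        name: train-model          # hypothetical custom training task
      workspaces:
        - name: source
          workspace: shared-data
    - name: deploy-model
      runAfter: ["train-model"]
      taskRef:
        name: openshift-client     # standard task: runs an `oc` command
      params:
        - name: SCRIPT
          value: oc rollout restart deployment/model-server
```

In a GitOps setup, the final step would instead commit the updated manifest to Git and let Red Hat OpenShift GitOps (Argo CD) sync the change to the cluster, keeping the deployed state declarative and auditable.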
Live event date: Thursday, July 15, 2021 | 1:00 pm ET
On-demand event: Available for one year afterward.