In the newest episode of Technically Speaking, Chris Wright — Red Hat's Chief Technology Officer and SVP of Global Engineering — chats with Kavitha Prasad — Intel's VP and GM of Datacenter, AI and Cloud Execution and Strategy — about DevOps and MLOps, machine learning model drift, and how machine learning is moving to the edge.
But let's begin at the beginning.
What is machine learning?
Machine learning (ML) is a subset of artificial intelligence (AI) that uses data and algorithms to "learn" and improve over time. Machine learning is increasingly important in our lives, forming the foundation of many services we use every day, including search engines, voice assistants and recommendation engines. If you've ever wondered exactly how some of these services seem so magical, it's probably because of machine learning.
What is machine learning model drift?
Machine learning is still a relatively young technology, however, and it is far from perfect. One major issue machine learning models face is "model drift" — the gradual degradation of a model's predictive power caused by normal, continual changes in the real-world or digital environments it operates in. To maintain a model's accuracy and usefulness, drift must be detected and mitigated over time.
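One common way to spot drift in practice is to compare the distribution of a feature at training time against what the model is seeing in production. As a hedged illustration (not a technique discussed in the episode), here is a minimal sketch of the Population Stability Index (PSI), a widely used drift statistic; the sample data and thresholds are illustrative assumptions:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of one feature.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            # clamp the top edge into the last bucket
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # small floor avoids log(0) for empty buckets
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative data: training-time feature values vs. shifted production values
train = [0.1 * i for i in range(100)]      # uniform on [0, 9.9]
prod = [0.1 * i + 4 for i in range(100)]   # same shape, shifted upward

print(psi(train, train) < 0.1)   # identical distributions: no drift -> True
print(psi(train, prod) > 0.25)   # shifted distribution: major drift -> True
```

In an MLOps pipeline a check like this would run on a schedule against fresh production data, and a PSI above the chosen threshold would trigger an alert or a retraining job.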
How MLOps can help
Machine learning operations (MLOps) is similar to DevOps, but focuses on deploying, maintaining and retraining machine learning models rather than versioning and shipping software.
MLOps, like DevOps, increases ML teams' agility by making it possible to quickly and frequently introduce small, incremental changes that help maintain the reliability of machine learning models. Importantly, MLOps also allows teams of IT professionals to handle model deployment and maintenance, freeing data scientists to focus on model development.
Watch the latest episode of Technically Speaking
Wright and Prasad talk about all of this and more in the latest episode of Technically Speaking.