Edge machine learning refers to the process of running machine learning (ML) models on an edge device to collect, process, and recognize patterns within collections of raw data.
To best explain machine learning at the edge, let’s start by breaking down the two components that make it up: machine learning, and edge computing.
- Machine learning is a subset of artificial intelligence (AI) in which algorithms learn patterns from data, enabling them to perform perceptive tasks in a fraction of the time it would take a human.
- Edge computing refers to the act of bringing computing services physically closer to either the user or the source of the data. These computing services exist on what we call edge devices: computers that collect and process raw data in real time, resulting in faster, more reliable analysis.
Machine learning at the edge brings the capability of running machine learning models locally on edge devices, such as Internet of Things (IoT) devices.
As customer expectations rise, so does the demand for fast, secure processing power.
Every interaction between a company and its customers now spans a mix of hybrid technologies and touchpoints, requiring easy access to the devices, data, and applications that power new experiences and create a positive end-to-end user experience.
Traditionally, this processing takes place by transporting datasets to distant clouds over networks that can struggle to operate at full capacity because of the long journey the data must make between destinations. This can result in issues ranging from latency to security breaches.
With edge computing, you can place artificial intelligence/machine learning (AI/ML)-powered applications physically closer to data sources like sensors, cameras, and mobile devices to gather insights faster, identify patterns, then initiate actions without relying on traditional cloud networks.
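To make the idea concrete, here is a minimal sketch of edge-side inference: the model runs locally on the device, raw sensor readings never cross the network, and only the insights worth acting on are passed along. The linear model and its parameters are deliberately simplified, hypothetical stand-ins for a real pre-trained model.

```python
# Hypothetical pre-trained parameters shipped to the edge device.
WEIGHTS = [0.8, -0.5, 1.2]
THRESHOLD = 1.0  # decision boundary chosen during training

def score(reading):
    """Run local inference on one sensor reading (a feature vector)."""
    return sum(w * x for w, x in zip(WEIGHTS, reading))

def process_locally(readings):
    """Process raw data at the edge; return only readings flagged as anomalous.

    Because scoring happens on the device, latency stays low and the raw
    data does not need to travel to a distant cloud.
    """
    return [r for r in readings if score(r) > THRESHOLD]

# Three raw readings stay on the device; two anomalies would be uploaded.
anomalies = process_locally([[1, 1, 1], [0, 0, 0], [2, 0, 1]])
```

The same pattern applies whether the "model" is a hand-rolled scorer like this or a full neural network: the decision happens next to the data source, and only the result moves upstream.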
Edge computing is an important part of an open hybrid cloud vision that allows you to achieve a consistent application and operations experience across your entire architecture through a common, horizontal platform.
While a hybrid cloud strategy allows organizations to run the same workloads in their own datacenters and on public cloud infrastructure (like Amazon Web Services, Microsoft Azure, or Google Cloud), an edge strategy extends even further, allowing cloud environments to reach locations that are too remote to maintain continuous connectivity with the datacenter.
Because edge computing sites often have limited or no IT staffing, a reliable edge computing solution is one that can be managed using the same tools and processes as the centralized infrastructure, yet can operate independently in a disconnected mode.
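The disconnected mode described above is often implemented as a store-and-forward pattern: the edge site keeps operating against a local buffer and syncs results to central infrastructure whenever connectivity returns. This is a hypothetical sketch of that pattern, not any particular product's implementation.

```python
from collections import deque

class EdgeSite:
    """A minimal store-and-forward sketch of disconnected edge operation."""

    def __init__(self):
        self.outbox = deque()  # results buffered while the link is down

    def record(self, result):
        # Local operations continue regardless of connectivity.
        self.outbox.append(result)

    def sync(self, link_up, upload):
        """Drain the buffer to central infrastructure once the link returns."""
        if not link_up:
            return 0  # stay buffered; nothing is lost
        sent = 0
        while self.outbox:
            upload(self.outbox.popleft())
            sent += 1
        return sent

site = EdgeSite()
site.record("inspection-pass")
site.record("inspection-fail")
# While disconnected, results accumulate locally instead of failing.
sent_while_down = site.sync(link_up=False, upload=print)
```

When the link comes back, a later `site.sync(link_up=True, ...)` call drains the buffer, so the central tools see the same stream of results they would have seen with continuous connectivity.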
In general, comprehensive edge computing solutions need to be able to:
- Run a consistent deployment model from the core to the edge.
- Offer flexible architectural options to best meet connectivity and data management needs.
- Automate and manage infrastructure deployments and updates from your core data center to your remote edge sites.
- Provision, update, and maintain software applications across your infrastructure, at scale.
- Continue operations at remote edge sites, even when internet connectivity is not reliable.
- Include a robust software platform that can scale in and out.
- Secure data and infrastructure in security-challenged edge environments.
There’s no single way to build and operationalize ML models, but there is a consistent need to gather and prepare datasets, develop models into intelligent applications, and derive revenue from those applications. Operationalizing these applications with integrated ML capabilities, known as MLOps, and keeping them up to date requires collaboration among data scientists, developers, ML engineers, IT operations, and various DevOps technologies.
By applying DevOps and GitOps principles, organizations automate and simplify the iterative process of integrating ML models into software development processes, production rollout, monitoring, retraining, and redeployment for continued prediction accuracy.
With Red Hat® OpenShift®, this process can essentially be broken down into four steps:
- Train: ML models are trained on Jupyter notebooks on Red Hat OpenShift.
- Automate: Red Hat OpenShift Pipelines is an event-driven, continuous integration capability that helps package ML models as container images by:
  - Saving the models ready for deployment in a model store.
  - Converting the saved models to container images with Red Hat OpenShift build.
  - Testing the containerized model images to ensure they remain functional.
  - Storing the containerized model images in a private, global container image registry like Red Hat Quay, where the images are analyzed to identify potential issues, helping to mitigate security risks, and are geo-replicated.
- Deploy: Declarative configuration managed by Red Hat OpenShift GitOps automates the deployment of ML models at scale, anywhere.
- Monitor: Models are monitored for reliability, speed, scale, and other metrics with tooling from one of our ecosystem partners, and are updated with retraining and redeployment as needed.
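The four stages above can be sketched as explicit hand-offs between functions. This is an illustrative toy, with a trivial "model" and hypothetical names; a real pipeline would run each stage as an OpenShift Pipelines task rather than an in-process call.

```python
import json
import os
import tempfile

def train(data):
    """Stage 1 (Train): fit a trivial model -- here, just the mean label."""
    labels = [y for _, y in data]
    return {"mean": sum(labels) / len(labels)}

def store(model, path):
    """Stage 2a (Automate): save the trained model to a model store."""
    with open(path, "w") as f:
        json.dump(model, f)
    return path

def smoke_test(path):
    """Stage 2c (Automate): verify the stored model still loads and predicts."""
    with open(path) as f:
        model = json.load(f)
    return "mean" in model

def deploy(path):
    """Stage 3 (Deploy): in a GitOps flow, committing a manifest like this
    triggers the rollout; the registry URL is a hypothetical placeholder."""
    return {"image": "registry.example.com/model:v1", "model": path}

# Walk the pipeline end to end on two toy samples.
data = [([1.0], 2.0), ([2.0], 4.0)]
model_path = store(train(data), os.path.join(tempfile.mkdtemp(), "model.json"))
assert smoke_test(model_path)
manifest = deploy(model_path)
```

Stage 4 (Monitor) would then watch the deployed model's metrics and, when accuracy drifts, feed fresh data back into `train` to start the loop again.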
Artificial intelligence and machine learning have rapidly become critical for businesses as they seek to convert their data to business value. Red Hat’s open source edge computing solutions focus on accelerating these business initiatives by providing services that automate and simplify the process of developing intelligent applications in the hybrid cloud.
Red Hat recognizes that as data scientists strive to build their AI/ML models, their efforts are often complicated by a lack of alignment between rapidly evolving tools. This, in turn, can affect productivity and collaboration among their teams, software developers, and IT operations.
To sidestep these potential hurdles, Red Hat OpenShift services are built to support users in designing, deploying, and managing their intelligent applications consistently across cloud environments and datacenters.
Most businesses could be making better use of their data, but are limited by their tools and workflows. Red Hat® OpenShift® Data Science provides a supported, self-service environment that allows data scientists to refine algorithms and experiment with the development, training, and testing of machine learning models.
Edge computing on OpenShift is useful across a variety of industries and can serve as a critical tool for a range of tasks, from fraud detection, to automated insurance quotes, to exploration of natural resources.