Edge computing presents IT challenges—and offers operational benefits
The manufacturing sector relies on technology to remain competitive. Companies understand that implementing modern information technology and operational technology (IT and OT) will help optimize production, streamline core functions, and fuel innovation.
Through edge computing in the hybrid cloud, IT and OT come together to bring processing power closer to the data source on the shop floor. Edge computing with artificial intelligence and machine learning (AI/ML) can:
- Support faster decisions and actions in the plant.
- Proactively discover potential errors at the assembly line.
- Reduce equipment downtime through predictive maintenance.
- Boost product quality.
While the business benefits of edge computing with AI/ML are compelling, the distributed nature of the model presents challenges for IT operations management. An edge deployment can exceed 100,000 endpoints, and the management of each endpoint will vary according to its location and data needs. Managing this environment requires rapid application deployment and updates, as well as maintenance of all clusters at every edge tier.
GitOps and blueprints meet the need for consistency at the edge
For manufacturing, a key benefit of edge architecture is eliminating the tedious and error-prone manual configuration of different systems and applications at scale. One of the top considerations for building the architecture is the ability to scale to hundreds of factories without manual intervention—and the required resources to manage them. Consistency across a wide scale is achieved by using GitOps and blueprints.
GitOps uses Git as the declarative source of truth for the continuous deployment of system updates. Administrators can efficiently manage system complexity by pushing reviewed commits of configuration settings and artifacts to Git; those changes are then automatically applied to the operational systems. The idea is that the current state of complex distributed systems can be reflected in Git as code.
In Kubernetes, GitOps is a powerful abstraction since it mirrors the underlying design goals of that platform. A Kubernetes cluster is always reconciling to a point of truth with respect to what developers and administrators have declared should be the target state of the cluster. An ideal GitOps model for Kubernetes employs a pipeline approach where YAML-based configuration and container image changes can originate as git commits that then trigger activities in the pipeline that result in updates to applications—and the cluster itself.
In manufacturing environments, GitOps is useful for achieving a consistent, declarative approach to managing individual cluster changes and upgrades. At the same time, edge manufacturing environments are heterogeneous in their size and type of hardware resources. To address this heterogeneity, the concept of blueprints was created.
Blueprints define all the components used within an edge reference architecture, such as hardware, software, management tools, and point-of-delivery tooling. Each blueprint is a set of declarative specifications that can be organized in layers, allowing infrastructure settings to be shared across sites while leaving points of customization where required. Because blueprints are version controlled, changes can be audited and rolled back if necessary.
This example shows how GitOps and blueprints can be used to implement an edge solution.
Experience the edge AI/ML demonstration for manufacturing
Red Hat shows how management of an edge computing environment can be streamlined and automated for rapid results. The edge AI/ML demonstration for manufacturing showcases the solution blueprint approach through workflows in an edge-AI/ML environment.
The demonstration combines several Red Hat® solutions:
- Red Hat OpenShift® Container Platform
- Red Hat Quay, a container registry
- Red Hat OpenShift Data Foundation, previously known as Red Hat OpenShift Container Storage
- Red Hat Integration, including:
  - Real-time messaging with Red Hat AMQ
  - Data streaming based on Apache Kafka with Red Hat AMQ
  - Application connectivity and data transformation based on Apache Camel
Key workflows include:
- Continuous integration and continuous deployment to the edge
- AI/ML on the edge
The demonstration focuses on three tiers: the central datacenter, the factory datacenter, and a line data server serving a specific set of operations on the floor. Simulations include access to sensor data, detection of Internet of Things (IoT) anomalies, and continuous deployment through GitOps.
You can navigate the solution blueprint demonstration to view the areas of the IT management environment and the workflows that are relevant to your manufacturing operation.
To schedule a demonstration of the solution blueprint approach for you and your organization, contact our Sales team. After participating in the edge AI/ML demonstration, you can choose to download a blueprint for your own exploration and use.