
5 processor architectures making machine learning a reality for edge computing

New chips offer more options for perceptive edge architectures than ever before, paving the way for edge computing in modern architecture design.
Photo: chip hardware for edge computing devices (by Umberto on Unsplash)

The edge is becoming more important as our ability to link and coordinate smart devices grows, both in critical business settings and out in the wild. Those edge devices benefit from cloud compute and high-bandwidth networks, but they still require local compute for some use cases, including machine learning.

What is edge machine learning?

A whole class of machine-learning algorithms performs perceptive tasks. By perceptive, I mean the act of recognizing something when it is seen or heard. For humans, these tasks usually take less than a second. Machines often perform them in microseconds. Perception is something tied to our senses. It’s what we rely on to navigate the world and know what’s going on. We see an object, and we know whether to move towards it or away.

The logical place for many perceptive ML models is on edge devices because these devices act as the sense organs of a larger system. The closer your perceptive ML is to the raw sensory data, the faster your machines can respond.
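
To make that concrete, here is a minimal sketch of running a perceptive model directly on a device with the TensorFlow Lite runtime, a common choice on edge hardware like the boards below. The model file and the `classify` helper are illustrative assumptions, not part of any specific vendor SDK.

```python
# Minimal on-device inference sketch using the TensorFlow Lite runtime.
# The model path is hypothetical; assume a quantized classifier trained
# in the cloud has already been copied onto the device.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="models/perception.tflite")  # hypothetical file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> int:
    """Run one perceptive inference on a raw sensor frame, locally."""
    tensor = frame.astype(input_details["dtype"]).reshape(input_details["shape"])
    interpreter.set_tensor(input_details["index"], tensor)
    interpreter.invoke()  # no network round trip; latency stays on-device
    scores = interpreter.get_tensor(output_details["index"])
    return int(np.argmax(scores))
```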

Edge computing devices

Edge compute devices—from chipmakers like NVIDIA, Intel, AMD, and Qualcomm, among others—can host more than one neural network performing a perceptive task. Five examples of hardware designed for edge compute should give an idea of what’s possible in the market now:

  • NVIDIA recently announced the Jetson Xavier NX, billed as “the world’s smallest supercomputer for AI at the edge.” It is reported to be 2 to 7 times faster than the Jetson Nano.
  • AMD has a growing range of embedded products for IoT, such as the AMD EPYC™ Embedded 3000 Series processors. AMD’s third-generation EPYC chips showed a 19% increase in raw performance throughput.
  • Intel technologies such as Intel Core processors, Intel Atom processors, Intel Movidius Vision Processing Units (VPUs), and Intel Xeon Scalable processors were designed for the edge and IoT. They help provision and connect devices to the network and take advantage of 5G for high speed with low latency.
  • Qualcomm announced three new AI accelerator chips to power edge computing: the DM.2e offers at least 50 tera operations per second (TOPS) at 15 W, the DM.2 card offers 200 TOPS at 25 W, and the PCIe card offers 400 TOPS at 75 W (the sketch after this list compares their efficiency).
  • ARM offers a new architecture for the edge: ARM-based heterogeneous computers, an alternative to the x86 processors of Intel and AMD. They rely on low-latency, high-bandwidth connections between processors and do not require much intermediate storage. They use SRAM instead of DRAM to consume less power.
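
A quick way to compare accelerators like these is performance per watt rather than raw throughput. A small sketch using the Qualcomm figures quoted above:

```python
# Rough efficiency comparison (TOPS per watt) using the figures above.
cards = {
    "DM.2e": (50, 15),   # (TOPS, watts)
    "DM.2":  (200, 25),
    "PCIe":  (400, 75),
}
for name, (tops, watts) in cards.items():
    print(f"{name}: {tops / watts:.1f} TOPS/W")
# Output: DM.2e: 3.3 TOPS/W, DM.2: 8.0 TOPS/W, PCIe: 5.3 TOPS/W
```

By that measure, the mid-range DM.2 card is the most efficient of the three, which matters when edge devices run on tight power budgets.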

[ Learn how to accelerate machine learning operations (MLOps) with Red Hat OpenShift. ]

The more perceptive compute and related analysis those edge devices perform locally, the more robust they are to network failure, e.g., an autonomous vehicle navigating a warehouse where network coverage is spotty. Local compute is also an advantage when low-latency decisions are required, which is often the case with robots and vehicles in motion whose actions are tightly coordinated and interlocked.
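
One hedged sketch of that pattern: the control loop below acts on local inference immediately and treats the network as a best-effort telemetry channel, so a dropped connection never blocks the vehicle. The camera, actuator, and classifier interfaces and the telemetry endpoint are all hypothetical.

```python
# Sketch: decide locally, report to the cloud on a best-effort basis.
import json
import queue
import threading
import urllib.request

telemetry = queue.Queue(maxsize=1000)

def control_loop(camera, actuator, classify):
    """Act on local inference; never wait on the network."""
    while True:
        frame = camera.read()
        label = classify(frame)        # local, low-latency decision
        actuator.respond(label)        # act immediately
        try:
            telemetry.put_nowait({"label": label})
        except queue.Full:
            pass                       # drop telemetry rather than stall

def uploader():
    """Forward telemetry when the network allows; tolerate failures."""
    while True:
        event = telemetry.get()
        try:
            req = urllib.request.Request(
                "https://example.com/telemetry",  # hypothetical endpoint
                data=json.dumps(event).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req, timeout=2)
        except OSError:
            pass                       # spotty coverage: skip, keep moving

threading.Thread(target=uploader, daemon=True).start()
```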

Conclusion

Ultimately, those devices will depend on network bandwidth to connect with on-premises data centers or cloud compute, where there are more resources for training machine-learning models. Compute-intensive training is necessary to update ML models as the underlying data drifts from its historical distribution. The cloud also plays a crucial role as a control tower, enabling operators to monitor the performance of many devices and ML models simultaneously.
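
One common way to decide when that retraining should happen is a statistical drift check that compares recent edge data against the training distribution. The sketch below uses a two-sample Kolmogorov-Smirnov test; the threshold and the synthetic data are illustrative assumptions, not a prescription.

```python
# Sketch: flag distribution drift to trigger cloud-side retraining.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(reference: np.ndarray, recent: np.ndarray,
                     alpha: float = 0.05) -> bool:
    """Compare historical and recent distributions of one feature."""
    _, p_value = ks_2samp(reference, recent)
    return p_value < alpha  # small p-value: the recent data has drifted

# Illustrative example: historical sensor readings vs. a shifted batch.
rng = np.random.default_rng(0)
historical = rng.normal(0.0, 1.0, size=5000)
recent = rng.normal(0.4, 1.0, size=500)      # mean has drifted
print(needs_retraining(historical, recent))  # True
```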

Another aspect of edge devices and machine learning is the degree to which real-world experience will drive ML progress. Some knowledge can only be acquired by moving through the world, and that is what edge devices enable ML to do, bringing it a step closer to true superintelligence.

In sum, edge devices require their own compute for certain machine learning workloads, even as they send data and insights back to cloud compute clusters where even more learning and wider coordination can occur.

[ Try OpenShift Data Science in our Developer sandbox or in your own cluster. ]


Chris Nicholson

Chris Nicholson is the founder of Pathmind, an AI startup that applies deep reinforcement learning to supply chain and industrial operations. Pathmind optimizes performance in warehouses and on factory floors using cloud and edge compute.
