There is no one edge. An edge architecture that deploys a 5G radio access network (RAN) for a telecommunications service provider is going to look quite different from one used by retailers to capture customer interactions, understand customer sentiments and predict and anticipate unspoken needs.

However, as you plan, build a business case for and start to deploy an edge architecture, there are some general concepts to keep in mind. They won’t always apply and specifics may markedly differ—but think of them as a checklist for any planning exercise.

What’s the problem?

It may seem obvious, but your primary goals and objectives, such as operational efficiency or a better customer experience, will suggest different priorities and approaches. For example, in the retail case, IoT devices like sensors, cameras and RFID tags can capture data on customer behavior, which can be analyzed to deliver personalized experiences and improve customer loyalty. Artificial intelligence and machine learning (AI/ML) technologies at the edge help retailers make data-driven decisions in real time, which can also help create better customer experiences. But maybe the top goal is operational efficiency: optimizing staffing allocation, managing fresh food stock for last-mile delivery, tracking demand and key performance indicators (KPIs), monitoring in-store promotions and identifying trends and patterns in customer data.

It’s fine to have multiple priorities, but it’s also important to know who the primary stakeholder for the project is and what metrics they’re most focused on, and to key in on those, at least at first. Notably, senior supply chain managers are now among the most involved decision makers throughout the edge computing enablement process, and they often look at edge more through a supply chain optimization lens than IT decision makers historically have.

What are the constraints at the edge locations?

Edge operations can’t just replicate a datacenter playbook.

For example, edge clusters may be installed in locations that don’t have IT staff and may even be in places with no permanent human presence at all. You may need to think differently about physical security when anyone who can reach the site can access the hardware. Remote locations might also lead you to a different strategy for dealing with hardware failures than you would follow in a datacenter with 24-hour IT staff coverage.

You’re also potentially dealing with unreliable and throughput-constrained networks. In a datacenter, you can mostly take high-bandwidth, low-latency network connectivity as a given, especially within the datacenter itself. Not so in an edge architecture.

Would you like the edge system to be highly available? Do you want to minimize its cost and its footprint in terms of size and power? What do you want to do if an edge cluster loses its connection? Do you want a way to continue operating even if in degraded mode? How much and how often do you need to communicate with a datacenter?

Answers to questions like these can drive significantly different architectural decisions. (And can change the price tag considerably given that the edge deployments can scale to tens of thousands of nodes—even a small individual cost difference can have a significant impact.)
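To make the disconnected-operation question concrete, one common answer is a store-and-forward pattern: the edge system keeps working locally, buffers what it would have sent, and flushes the backlog when connectivity returns. A minimal sketch, with illustrative names not drawn from any particular product:

```python
import collections
import json
import time

class StoreAndForward:
    """Buffer readings locally; flush to the datacenter when the link is up.

    maxlen bounds local storage so a long outage degrades gracefully
    (oldest readings are dropped) rather than exhausting the device.
    """

    def __init__(self, uplink, maxlen=10_000):
        self.uplink = uplink          # callable: takes a JSON batch, returns True on success
        self.buffer = collections.deque(maxlen=maxlen)

    def record(self, reading):
        self.buffer.append({"ts": time.time(), "value": reading})

    def flush(self):
        """Attempt to send everything buffered; keep whatever fails to send."""
        sent = 0
        while self.buffer:
            batch = [self.buffer.popleft() for _ in range(min(100, len(self.buffer)))]
            if self.uplink(json.dumps(batch)):
                sent += len(batch)
            else:
                # Link still down: put the batch back in order and stop trying
                self.buffer.extendleft(reversed(batch))
                break
        return sent
```

How large to make the buffer, and whether dropping the oldest or the newest data is the right degraded behavior, are exactly the kinds of architectural decisions the questions above should drive.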

Let’s talk more about data

Data has mostly been an implicit part of our story so far. I’d like to raise it up a level, however, because data plays such a starring role with respect to edge computing. If it weren’t for data we mostly wouldn’t have or need edge computing—maybe just some sensors reporting back to a central location.

Data is a source of insights. Edge devices collect and pre-process data from sensors, wearables and other sources, providing raw material for real-time insights and decision making. Edge systems can also filter and pre-process data and send it back to larger centralized systems for further use and analysis. One common architecture is to develop AI/ML models in the datacenter, usually assisted by GPUs and other specialized hardware, and then periodically deploy the trained models back to edge systems.
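That train-centrally, run-at-the-edge loop can be sketched in a few lines. Here, fetch_model is a hypothetical stand-in for whatever registry or transport actually delivers trained models to the edge node:

```python
class EdgeInferenceNode:
    """Run local inference with a model periodically refreshed from the datacenter.

    fetch_model is a stand-in for the real transfer mechanism (object storage,
    an OTA channel, etc.); given the current version, it returns a
    (new_version, new_model) pair or None when nothing newer is available
    or the link is down.
    """

    def __init__(self, fetch_model, initial_version, initial_model):
        self.fetch_model = fetch_model
        self.version = initial_version
        self.model = initial_model

    def maybe_refresh(self):
        """Pull a newer model if one exists; keep serving the old one otherwise."""
        result = self.fetch_model(self.version)
        if result is not None:
            self.version, self.model = result
        return self.version

    def predict(self, features):
        return self.model(features)
```

The key property is that inference keeps working with the last good model even when the datacenter is unreachable; refreshing is opportunistic, not required.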

Data analysis at the edge can also trigger immediate actions, like adjusting industrial machinery settings based on sensor readings or routing traffic based on real-time congestion data. In Red Hat’s Edge Solutions for Retail architecture, you can see how data from edge devices is transmitted to Red Hat AMQ for model development in the core datacenter and live inference in the store. Apache Camel K provides integration, normalizing and routing sensor data to other components. That sensor data is mirrored into a data lake provided by IBM Storage Ceph.

Consider operations at scale

Many of the most pressing challenges for edge computing relate to scale; there may be thousands of network endpoints, or more. You have to standardize ruthlessly, minimize operational surface area and automate even the smallest things.

You should prefer atomic updates so that an edge system can’t end up only partially updated and therefore in an ill-defined state. Instead of pushing updates from a centralized location, consider pulling them from the edge wherever possible; this lets edge devices choose the timing and frequency of updates based on their specific needs and network conditions. However, whatever the exact update process, take care to avoid overloading systems and the network by spacing out update processes.
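Two of these ideas, pull-based scheduling with jitter and atomic application of an update, can be sketched as follows. This is a simplified illustration under stated assumptions, not any particular update tool:

```python
import os
import random
import tempfile

def next_check_delay(base_seconds=3600, jitter_fraction=0.25):
    """Return a jittered interval until the next update check, so thousands
    of edge nodes don't all hit the update server at the same moment."""
    jitter = base_seconds * jitter_fraction
    return base_seconds + random.uniform(-jitter, jitter)

def apply_update_atomically(payload: bytes, target_path: str):
    """Write the new artifact to a temp file on the same filesystem, then
    rename it over the old one. On POSIX systems the rename is atomic, so a
    crash or power loss mid-update leaves either the old file or the new
    one, never a half-written mix."""
    dirname = os.path.dirname(os.path.abspath(target_path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())        # make sure bytes hit the disk first
        os.replace(tmp_path, target_path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp_path)
        raise
```

Image-based operating systems apply the same write-then-switch idea at the whole-system level, which is why atomic updates pair naturally with edge deployments.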

Red Hat Ansible Automation Platform uses containerization and automation to help operations teams standardize configuration and deployment all the way to edge locations. It offers a single, consistent view of an IT environment, so that teams can reliably manage thousands of sites, network devices and clusters.

Developing an automation strategy is a key part of developing an edge architecture.

Be adaptable

Edge computing has evolved and adapted since forms of it got their start in the late 1990s. (Content Delivery Networks are often cited as at least a precursor.) Its evolution has stemmed from a confluence of technologies and needs around distributed computing, latency reduction and processing data closer to the source.

Edge computing architectures have generally grown to be more complex and have more layers.

As part of 5G deployments, we see carriers shifting to a more flexible vRAN approach, whereby the high-level logical RAN components are disaggregated by decoupling hardware and software, and cloud technology is used for automated deployment, scaling and workload placement.

One big change we’re seeing today is that there’s more computing and more storage out on the edge. Historically, decentralized systems often existed more to reduce reliance on network links than to perform tasks that couldn’t practically be done in a central location given reasonable communication links. With AI/ML, however, it’s often no longer practical to ship all the data home and wait for the analysis to come back.

Of course, this makes the job of architects harder. But it also opens up possibilities in what they can help their organizations accomplish.

Red Hat architectures provide insights into how customers have implemented Red Hat products and approaches as part of their own edge strategies. These include use cases as varied as telco service assurance, retail, medical imaging diagnostics and manufacturing modernization. Check these out to see how other customers are successfully putting their edge deployments in place.


About the author

Gordon Haff is a technology evangelist and has been at Red Hat for more than 10 years. Prior to Red Hat, as an IT industry analyst, Gordon wrote hundreds of research notes, was frequently quoted in publications such as The New York Times on a wide range of IT topics, and advised clients on product and marketing strategies.
