Every week seems to bring a new report predicting that edge computing will take over the world. But the question remains: will it live up to those predictions and, if so, how can businesses benefit from it?
In this and future posts, we’ll demystify edge computing, examine its motivations, and explore best practices in creating scalable edge deployments and the role of open source at the edge. We'll also look at 5G and its impact on the telco industry, remote office/branch office, IoT, and other use cases.
What is edge computing?
Depending on the industry or use case, the term edge computing (EC) has been used to describe everything from actions performed by tiny IoT devices to datacenter-like infrastructure. Terms used to denote edge computing include: distributed computing, hybrid edge computing, heterogeneous computing, matrix computing, datacenter-in-a-box, local cloud, network edge, fog computing, and more. Each of these terms carries its own industry-specific connotations.
To add to the confusion, there is not a single edge but a continuum of edge tiers with different properties in terms of distance to users, number of sites, size of sites, ownership, and so on. Where the edge is located is itself subject to interpretation: for service providers, EC can extend from the core to the last mile, whereas for enterprises, EC is located on-premises.
At the conceptual level, edge computing refers to the idea of bringing computing closer to where it's consumed or closer to the sources of data. This concept is not limited to computing services but could also include networking or storage services.
The debate over where computing resources should be located is perhaps as old as computing itself. The pendulum often swings between the efficiencies and economies of scale offered by centralized computing and the flexibility and user control offered by non-centralized computing. Past swings include client/server computing and the PC vs. the mainframe.
The edge computing concept is more than two decades old. One pioneer in the field is Akamai, whose Content Delivery Networks (CDNs) cache frequently accessed content closer to end users. In the present context, the scope of edge computing is much broader, encompassing businesses, consumers, and service providers.
EC use cases vary greatly across industries, and every use case presents its own unique requirements for the edge. The edge for an IoT use case operates differently from that of a remote site such as a windmill or an autonomous vehicle, and differently again from that of a factory or a stadium.
For example, a remote site that has constraints on computing infrastructure and network bandwidth mostly operates in offline mode, whereas a stadium would have a mini datacenter-like infrastructure with broadband connectivity.
Why is computing moving to the edge?
In the past decade, the shift to cloud services has concentrated computing resources in a few large datacenters. Edge computing is a counter-trend that decentralizes cloud services, distributing them across many sites located closer to end users or data sources. It allows applications to deliver a better-quality experience, thereby also enabling new use cases and operational efficiencies. The main reasons for EC fall into four areas: bandwidth, latency, resilience, and security.
Some emerging use cases, like IoT or video surveillance, are expected to generate huge amounts of data (100s of GB/day) and have constrained network connectivity via cellular/satellite (e.g., offshore oil platform, ship at sea). By processing data closer to the data source, EC can help reduce network bandwidth required to move device data to back-end systems. The majority of device data could be redundant information. Think of room temperature data from a thermostat, for example, that could be processed locally with only a small aggregated dataset being sent to back-end systems.
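The thermostat example above can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration (the sensor name, payload shape, and aggregation choices are assumptions, not part of any specific edge framework): rather than shipping a full day of per-minute readings to a back-end system, the edge node sends only a compact summary.

```python
import json
import random
import statistics

def summarize_readings(readings, sensor_id):
    """Aggregate raw readings into a compact summary payload.

    Illustrative only: the payload fields and statistics chosen
    here are assumptions for the sake of the example.
    """
    return {
        "sensor": sensor_id,
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(statistics.mean(readings), 2),
    }

# Simulate one day of per-minute temperature readings (1,440 samples)
# hovering around 21 °C.
random.seed(0)
raw = [21.0 + random.uniform(-0.5, 0.5) for _ in range(24 * 60)]

# Compare the size of the raw payload with the aggregated summary.
raw_bytes = len(json.dumps(raw).encode())
summary = summarize_readings(raw, "thermostat-01")
summary_bytes = len(json.dumps(summary).encode())

print(f"raw payload:     {raw_bytes} bytes")
print(f"summary payload: {summary_bytes} bytes")
```

In practice the aggregation window, statistics, and transport would depend on the use case, but the pattern is the same: process at the source, transmit only what back-end systems actually need.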
For use cases like mobile AR/VR with edge-based rendering, or autonomous driving with real-time decision making, the latency introduced by communicating with a distant centralized site can impact user experience or safety. EC helps reduce latency, making it a key requirement for time-sensitive use cases.
For critical business functions, edge computing provides resilience and service continuity despite intermittent network connectivity (e.g., autonomous vehicles, smart buildings, agriculture). By confining service failures to a smaller area (e.g., with mobile edge computing), it provides greater resilience. EC also supports data sovereignty by keeping sensitive information close to its source for security or regulatory reasons.
It’s not an either/or choice between edge computing and centralized computing. As EC gains greater adoption in the marketplace, the overall solution would encompass a combination of the two. In such a hybrid computing model, centralized computing would be used for compute-intensive workloads, data aggregation and storage, AI/machine learning, coordinating operations across geographies, and traditional back-end processing. Edge computing, on the other hand, could help solve problems at the source, in near real time.
Architects will need to identify use cases that are aligned with edge computing. If a use case doesn’t benefit from reduced latency, real-time monitoring, or other attributes, then edge computing may not be appealing.
Who is using edge computing?
Emerging use cases like IoT, AI/ML, AR/VR, robotics, and telco network functions are often cited as key drivers to move computing to the edge. However, traditional enterprises are also starting to adopt this approach in order to better support their remote/branch offices, retail locations, manufacturing plants, etc. Even cloud service providers have recognized the need for processing data closer to source and are offering edge solutions.
For companies looking for low-latency or disconnected computing, where remote sites can operate without communication with centralized infrastructure, EC can help improve infrastructure resilience and application availability. Edge computing can similarly benefit a large number of use cases, including utilities, transportation, healthcare, industrial, energy, and retail.
For service providers, EC can help improve their customers' quality of experience by moving applications or content towards the edge tiers in the network hierarchy. They can also deploy an entirely new class of services at the edge to take advantage of their proximity to customers. As the network edge represents a majority of an operator's capital and operational expenses, it is also a key area of interest for network modernization efforts.
About the author
Ishu Verma is Technical Evangelist at Red Hat focused on emerging technologies like edge computing, IoT and AI/ML. He and fellow open source hackers work on building solutions with next-gen open source technologies. Before joining Red Hat in 2015, Verma worked at Intel on IoT Gateways and building end-to-end IoT solutions with partners. He has been a speaker and panelist at IoT World Congress, DevConf, Embedded Linux Forum, Red Hat Summit and other on-site and virtual forums.