Edge computing is computing that takes place at or near the physical location of either the user or the source of the data. By placing computing services closer to these locations, users benefit from faster, more reliable services while companies benefit from the flexibility of hybrid cloud computing. Edge computing is one way that a company can use and distribute a common pool of resources across a large number of locations.
Edge is a strategy to extend a uniform environment all the way from the core datacenter to physical locations near users and data. Just as a hybrid cloud strategy allows organizations to run the same workloads both in their own datacenters and on public cloud infrastructure (like Amazon Web Services, Microsoft Azure, or Google Cloud), an edge strategy extends a cloud environment out to many more locations.
Edge computing is in use today across many industries, including telecommunications, manufacturing, transportation, utilities, and many others. The reasons people implement edge computing are as diverse as the organizations they support.
Some common edge use cases
Many edge use cases are rooted in the need to process data locally in real time—situations where transmitting the data to a datacenter for processing causes unacceptable levels of latency.
For an example of edge computing driven by the need for real-time data processing, think of a modern manufacturing plant. On the factory floor, Internet of Things (IoT) sensors generate a steady stream of data that can be used to prevent breakdowns and improve operations. By one estimate, a modern plant with 2,000 pieces of equipment can generate 2,200 terabytes of data a month. It’s faster—and less costly—to process that trove of data close to the equipment, rather than transmit it to a remote datacenter first. But it’s still desirable for the equipment to be linked through a centralized data platform. That way, for example, equipment can receive standardized software updates and share filtered data that can help improve operations in other factory locations.
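The local-first pattern described above can be sketched as a small aggregation loop: raw, high-frequency sensor readings are summarized at the edge, and only a compact summary is forwarded to the central platform. The sensor values and the vibration threshold below are illustrative assumptions, not taken from any particular plant or platform.

```python
from statistics import mean

def summarize_readings(readings, vibration_limit=8.0):
    """Aggregate raw vibration readings at the edge and flag anomalies.

    Only this compact summary travels to the central datacenter,
    not the raw high-frequency stream. The limit is a toy threshold.
    """
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "peak": max(readings),
        "alert": max(readings) > vibration_limit,  # flag for maintenance
    }

# A window of (synthetic) vibration samples from a single machine
samples = [4.1, 4.3, 9.2, 4.0, 4.2]
summary = summarize_readings(samples)
# Forward `summary` (a few bytes) upstream instead of the raw stream
```

The same idea scales to thousands of machines: each edge site filters its own data, while the shared summaries still flow to a centralized platform that can improve operations across factory locations.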
Connected vehicles are another common example of edge computing. Buses and trains carry computers to track passenger flow and service delivery. Delivery drivers can find the most efficient routes with the technology onboard their trucks. When deployed using an edge computing strategy, each vehicle runs the same standardized platform as the rest of the fleet, making services more reliable and ensuring that data is protected uniformly.
A step further is autonomous vehicles—another example of edge computing that involves processing a large amount of real-time data in a situation where connectivity may be inconsistent. Because of the sheer amount of data, autonomous vehicles like self-driving cars process sensor data on board the vehicle in order to reduce latency. But they can still connect to a central location for over-the-air software updates.
Edge computing also helps keep popular internet services running fast. Content delivery networks (CDNs) deploy data servers close to where the users are, allowing busy websites to load quickly, and supporting fast video-streaming services.
Another example of edge computing is happening in a nearby 5G cell tower. Telecom providers increasingly run their networks with network functions virtualization (NFV), using virtual machines running on standard hardware at the network edge. These virtual machines can replace expensive proprietary equipment. An edge computing strategy enables the providers to keep the software at tens of thousands of remote locations all running consistently and with uniform security standards. Applications running close to the end user in a mobile network also reduce latency and allow providers to offer new services.
For users, edge computing means a faster, more consistent experience at a lower cost. For enterprises and service providers, it means low-latency, highly available apps with real-time monitoring.
Edge computing can reduce network costs, avoid bandwidth constraints, reduce transmission delays, limit service failures, and provide better control over the movement of sensitive data. Load times are cut, and online services deployed closer to users enable both dynamic and static caching.
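The caching benefit can be illustrated with a minimal time-to-live (TTL) cache, the kind of logic an edge node might apply before falling back to a remote origin server. The cache class, TTL value, and origin function here are hypothetical, for illustration only.

```python
import time

class EdgeCache:
    """Minimal TTL cache: serve content locally while it is fresh,
    fall back to the origin (a slower, remote fetch) when it is stale."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # path -> (content, fetched_at)

    def get(self, path, fetch_from_origin):
        entry = self.store.get(path)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]                    # cache hit: no round trip
        content = fetch_from_origin(path)      # cache miss: go to the origin
        self.store[path] = (content, time.monotonic())
        return content

calls = []
def origin(path):          # stand-in for a remote origin server
    calls.append(path)
    return f"<html>{path}</html>"

cache = EdgeCache(ttl_seconds=60)
cache.get("/index.html", origin)   # first request: fetched from origin
cache.get("/index.html", origin)   # repeat request: served from the edge
```

After both requests, the origin has been contacted only once; every repeat request within the TTL is served locally, which is what cuts load times and bandwidth costs.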
Applications that depend on fast response times, such as augmented reality and virtual reality, benefit from computing at the edge.
Other benefits of edge computing include the ability to conduct on-site big data analytics and aggregation, which enables near real-time decision making. Edge computing also reduces the risk of exposing sensitive data by keeping computing power local, helping companies enforce security practices and meet regulatory requirements.
Enterprise customers benefit from the resiliency and cost savings associated with edge computing. By keeping computing power local, regional sites can continue to operate independently of a core site, even if something causes the core site to stop operating. The cost of paying for bandwidth to move data back and forth between core and regional sites is also greatly reduced by keeping compute processing power closer to its source.
An edge platform can help deliver consistency of operations and app development. It should support interoperability to account for a greater mix of hardware and software environments than is found in a centralized datacenter. An effective edge strategy also allows products from multiple vendors to work together in an open ecosystem.
One way to view edge computing is as a series of circles radiating out from the core datacenter, each representing a tier that sits closer to the far edge.
- Provider/enterprise core: These are traditional "non-edge" tiers, owned and operated by public cloud providers, telco service providers, or large enterprises.
- Service provider edge: These tiers are located between the core or regional datacenters and the last mile access, commonly owned and operated by a telco or internet service provider and from which this provider serves multiple customers.
- End-user premises edge: Edge tiers on the end-user side of the last mile access can include the enterprise edge (e.g., a retail store, a factory, a train) or the consumer edge (e.g., a residential household, a car).
- Device edge: Standalone (non-clustered) systems that directly connect sensors/actuators via non-internet protocols. This represents the far edge of the network.
Edge computing, with its emphasis on data collection and real-time computation, can contribute to the success of data-intensive intelligent applications. As an example, artificial intelligence/machine learning (AI/ML) tasks, such as image recognition algorithms, can be run more efficiently closer to the source of the data, removing the need to shuttle large amounts of data to a centralized datacenter.
These applications take combinations of many data points and use them to infer higher-value information that can help organizations make better decisions. This functionality can improve a wide range of business interactions such as customer experiences, preemptive maintenance, fraud prevention, clinical decision making, and many others.
By treating each incoming data point as an event, organizations can apply decision management and AI/ML inference techniques to filter, process, qualify, and combine events to deduce higher-order information.
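The filter-qualify-combine pattern above can be sketched as a small event pipeline. The event fields and the toy "overheating" rule are assumptions for illustration; a real deployment would use a decision-management or ML inference engine.

```python
def process_events(events, temp_limit=90):
    """Filter raw events, qualify the interesting ones, and combine
    them into a higher-order signal (a toy overheating inference)."""
    # Filter: drop malformed or irrelevant events
    valid = [e for e in events if "temp_c" in e]
    # Qualify: keep only readings over the limit
    hot = [e for e in valid if e["temp_c"] > temp_limit]
    # Combine: infer a higher-order condition from the qualified events
    return {"overheating": len(hot) >= 2, "hot_events": len(hot)}

# A short stream of incoming data points, each treated as an event
stream = [
    {"temp_c": 85}, {"temp_c": 95}, {"status": "ok"}, {"temp_c": 97},
]
result = process_events(stream)
```

Here two qualified events combine into the higher-value inference that the equipment is overheating, the kind of deduced information an organization can act on directly.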
Data-intensive applications can be broken down into a series of stages, each performed at different parts of the IT landscape. Edge comes into play at the data ingestion stage—when data is gathered, pre-processed, and transported. The data then goes through engineering and analytics stages—typically in a public or private cloud environment—to be stored and transformed, and then used for machine learning model training. Then it’s back to the edge for the runtime inference stage, when those machine learning models are served and monitored.
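The split between central training and edge inference can be sketched as follows: model parameters produced in the cloud are shipped to the edge, where local decisions are made with no round trip. The feature names, weights, and decision rule are hypothetical stand-ins for a real trained model.

```python
# Parameters produced by the training stage (in a cloud environment);
# in practice these would come from a real ML training pipeline.
weights = {"vibration": 0.7, "temp_c": 0.3}
bias = -30.0

def edge_inference(features):
    """Runtime inference stage, run at the edge: apply the centrally
    trained model to local sensor features without a cloud round trip."""
    score = bias + sum(weights[k] * v for k, v in features.items())
    return score > 0   # True -> flag this machine for maintenance

decision = edge_inference({"vibration": 10.0, "temp_c": 95.0})
```

Only the small parameter set travels from cloud to edge, while the raw sensor data used at inference time never has to leave the site.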
An infrastructure and application development platform that is flexible, adaptable, and elastic is required to fulfill these different needs and provide the connection between these various stages.
A hybrid cloud approach, offering a consistent experience across public and private clouds, provides the flexibility to optimally provision the data capture and intelligent inference workloads at the edge of an environment, the resource-intensive data processing and training workloads across cloud environments, and the business events and insight management systems close to business users.
Edge computing is an important part of the hybrid cloud vision that offers a consistent application and operation experience.
Adopting edge computing is a high priority for many telecommunications service providers, as they move workloads and services toward the network’s edge.
Milliseconds count when serving high-demand network applications, like voice and video calls. Because edge computing can greatly reduce the effects of latency on applications, service providers can offer new apps and services and improve the experience of existing ones, especially following advancements in 5G.
But it’s not just about offering new services. Providers are turning to edge strategies to simplify network operations and improve flexibility, availability, efficiency, resilience, and scalability.
What is NFV?
Network functions virtualization (NFV) is a strategy that applies IT virtualization to the use case of network functions. NFV allows standard servers to be used for functions that once required expensive proprietary hardware.
What is vRAN?
Radio access networks (RAN) are connection points between end-user devices and the rest of an operator's network. Just as network functions can be virtualized, so can RANs, giving rise to the virtual radio access network, or vRAN.
What is MEC?
MEC stands for multi-access edge computing, a means for service providers to offer customers an application service environment at the edge of the mobile network, in close proximity to users’ mobile devices.
Advantages of MEC include increased throughput and reduced latency. MEC makes connection points available to app developers and content providers, giving them access to lower-level network functions and information processing as well.
Cloud computing is the act of running workloads within clouds—which are IT environments that abstract, pool, and share scalable resources across a network.
Traditionally, cloud computing has focused on centralizing services in a handful of large datacenters. Centralization allowed resources to be highly scalable and shared more efficiently, while maintaining control and enterprise security.
Edge computing addresses those use cases that cannot be adequately addressed by the centralization approach of cloud computing, often because of networking requirements or other constraints.
Additionally, a cloud strategy of running software in containers complements the edge computing model. Containers make apps portable, allowing businesses to run them wherever they make the most sense. A containerization strategy allows an organization to shift apps from datacenter to edge, or vice versa, with minimal operational impact.
The Internet of Things (IoT) refers to the process of connecting everyday physical objects to the internet—from common household objects like lightbulbs; to healthcare assets like medical devices; to wearables, smart devices, and even smart cities.
IoT devices aren’t necessarily edge devices. But these connected devices are part of many organizations’ edge strategies. Edge computing can bring more compute power to the edges of an IoT-enabled network to reduce the latency of communication between IoT-enabled devices and the central IT networks those devices are connected to.
Simply sending or receiving data marked the advent of IoT. Sending, receiving, and analyzing data together with IoT applications is a more modern approach made possible by edge computing.
What about IIoT?
A related concept, Industrial Internet of Things (IIoT), describes industrial equipment that’s connected to the internet, such as machinery that’s part of a manufacturing plant, agriculture facility, or supply chain.
Fog computing is a term for computing that takes place at distributed physical locations, closer to the users and data sources.
Fog computing is often used as a synonym for edge computing; the two terms describe essentially the same approach and differ mainly in terminology.
Edge computing can simplify a distributed IT environment, but edge infrastructure isn’t always simple to implement and manage.
- Scaling out edge servers to many small sites can be more complicated than adding the equivalent capacity to a single core datacenter. The increased overhead of physical locations can be difficult for smaller companies to manage.
- Edge computing sites are usually remote with limited or no on-site technical expertise. If something fails on site, you need to have an infrastructure in place that can be fixed easily by non-technical local labor and further managed centrally by a small number of experts located elsewhere.
- Site management operations need to be highly reproducible across all edge computing sites to simplify management, allowing for easier troubleshooting. Challenges arise when software is implemented in slightly different ways at each site.
- Physical security of edge sites is often much lower than that of core sites. An edge strategy has to account for a greater risk of malicious or accidental situations.
As data sources and data storage become distributed across many locations, organizations need a common horizontal infrastructure that spans across their entire IT infrastructure, including edge sites. Even for organizations that are used to operating across multiple geographical locations, edge computing presents unique infrastructure challenges. Organizations need edge computing solutions that:
- Can be managed using the same tools and processes as their centralized infrastructure. This includes automated provisioning, management, and orchestration of hundreds, and sometimes tens of thousands, of sites that have minimal (or no) IT staff.
- Address the needs of different edge tiers that have different requirements, including the size of the hardware footprint, challenging environments, and cost.
- Provide the flexibility to use hybrid workloads that consist of virtual machines, containers, and bare-metal nodes running network functions, video streaming, gaming, AI/ML, and business-critical applications.
- Ensure edge sites continue to operate in the event of network failures.
- Are interoperable with components sourced from various vendors. No single vendor can provide an end-to-end solution.
Red Hat’s broad portfolio provides the connectivity, integration, and infrastructure as the basis for the platform, application, and developer services. These powerful building blocks enable customers to solve their most challenging use cases.
A foundation that works
It all starts with Red Hat Enterprise Linux® as our foundation. Red Hat Enterprise Linux provides a large ecosystem of tools, applications, frameworks, and libraries for building and running applications and containers.
For building, deploying, and managing container-based applications across any infrastructure or cloud, including private and public datacenters or edge locations, choose Red Hat® OpenShift®. It’s a container-centric, high-performance, enterprise-grade Kubernetes environment.
Virtual machine and HPC workloads
Red Hat OpenStack® Platform, with distributed compute nodes, supports the most challenging virtual machine workloads, like network functions virtualization (NFV), and high-performance computing (HPC) workloads. It’s a reliable and scalable Infrastructure-as-a-Service (IaaS) solution that includes industry-standard APIs with hard multitenancy. It makes it easier to place compute power closer to the data source, with consistent, centralized management that extends from your core datacenters to the edge.
Storage and data services play an important role in edge computing, where it’s paramount to keep data close to the source. Red Hat OpenShift Data Foundation provides persistent storage for Red Hat OpenShift, both in a converged mode for smaller-footprint deployments, or connecting to external, centralized clusters. Red Hat Ceph Storage provides self-healing and massively scalable block, file, and object storage for modern workloads like storage-as-a-service, data analytics, AI/ML, and backup and restoration systems. Red Hat Ceph Storage combines with Red Hat OpenStack Platform to provide a 3-node hyperconverged configuration through Red Hat Hyperconverged Infrastructure for distributed compute and storage at the edge for telecommunications NFV, financial services industries, and large retail deployments.
Messaging and communication
Red Hat Application Services and developer tools provide cloud-native capabilities to develop fast, lightweight, scalable edge applications with data aggregation, transformation, and connectivity to support edge architectures. In highly distributed environments, communication between services running on edge sites and cloud needs special consideration. The messaging and data streaming capabilities of Red Hat AMQ support different communication patterns needed for edge computing use cases. Messaging, combined with a variety of cloud-native application runtimes (Red Hat Runtimes) and application connectivity (Red Hat Integration), offers a powerful foundation for building edge-native data transport, data aggregation, and integrated edge application services.
Red Hat offers a powerful portfolio of technologies that extends and complements its open hybrid cloud platforms to manage and scale your hybrid cloud environments.