Understanding edge computing
Cloud computing has led many organizations to centralize their services within large datacenters. However, new end-user experiences like the Internet of Things (IoT) require service provisioning closer to the outer "edges" of a network, where the physical devices exist.
What is edge computing?
Edge computing is computing that takes place at or near the physical location of either the user or the source of the data, which reduces latency and saves bandwidth.
In a cloud computing model, compute resources and services are often centralized at large datacenters, which are accessed by end users at the edge of a network. This model has proven cost advantages and more efficient resource sharing capabilities. However, new forms of end-user experiences like IoT need compute power closer to where a physical device or data source actually exists, i.e. at the network’s "edge."
By placing computing services closer to these locations, users benefit from faster, more reliable services with better user experiences, while companies benefit by being better able to process data, support latency-sensitive applications, and use technologies like AI/ML analysis to identify trends and offer better products and services.
What is an edge device?
Edge devices are physical hardware located in remote locations at the edge of the network with enough memory, processing power, and computing resources to collect data, process that data, and execute upon it in almost real-time with limited help from other parts of the network.
Edge devices require some kind of network connectivity to facilitate back-and-forth communication between the device and a database at a centralized location. An edge device is where the data is collected and processed.
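The pattern described above, collecting and processing data on the device and sending only what the central location needs, can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name, threshold, and payload fields are placeholders, not part of any real edge platform):

```python
import statistics

def summarize_readings(readings, threshold=75.0):
    """Process raw sensor readings locally on a hypothetical edge device.

    Instead of forwarding every raw sample to a central database, the
    device sends back a compact summary plus any readings that exceed
    an alert threshold.
    """
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "alerts": [r for r in readings if r > threshold],
    }

# Example: six temperature samples collected at the edge.
payload = summarize_readings([70.1, 71.4, 69.9, 82.3, 70.5, 71.0])
# Only this small summary crosses the network, not the raw stream.
```

The point of the sketch is the shape of the traffic: raw data stays at the edge, and only a small, decision-ready payload travels back to the centralized location.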
Edge computing can complement a hybrid computing model, specifically where centralized computing is used for:
- compute-intensive workloads
- data aggregation and storage
- artificial intelligence/machine learning
- coordinating operations across geographies
- traditional back-end processing
Meanwhile, edge computing powers latency-sensitive use cases such as:
- autonomous vehicles
- augmented reality/virtual reality
- smart cities
Edge computing can also help solve problems at the data source, in near real time. In short, if reduced latency and/or real-time monitoring can support business goals, there is a use case for edge computing.
Why IoT and edge computing need to work together
IoT produces a large amount of data that needs to be processed and analyzed so it can be used. Edge computing moves computing services closer to the end user or the source of the data, such as an IoT device.
Edge computing is a local source of processing and storage for the data and computing needs of IoT devices, which reduces the latency of communication between IoT devices and the central IT networks those devices are connected to.
Edge computing allows you to benefit from the large amount of data created by connected IoT devices. Deploying analytics algorithms and machine learning models to the edge enables data processing to happen locally and be used for rapid decision making.
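Deploying a model to the edge for rapid, local decision making can be illustrated with a simple sketch. The "model" here is a hypothetical baseline-deviation check standing in for a real trained model; the names, baseline values, and cutoff are assumptions for illustration only:

```python
def anomaly_score(sample, baseline_mean, baseline_std):
    """Score how far a sample deviates from the device's learned baseline."""
    if baseline_std == 0:
        return 0.0
    return abs(sample - baseline_mean) / baseline_std

def decide_locally(sample, baseline_mean=50.0, baseline_std=5.0, cutoff=3.0):
    """Act on-device immediately; only anomalies are escalated upstream.

    Because the decision logic runs at the edge, no round trip to a
    central datacenter is needed before acting on the reading.
    """
    score = anomaly_score(sample, baseline_mean, baseline_std)
    return "escalate" if score > cutoff else "handle locally"
```

A normal reading (for example, 52 against a baseline of 50) is handled locally, while an outlier (say, 80) is escalated; in a real deployment the baseline would come from a model trained centrally and pushed out to the edge devices.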
What is IIoT?
IIoT stands for Industrial Internet of Things, a term for connected devices in manufacturing, energy, and other industrial practices. IIoT devices are often deployed in connection with edge computing. IIoT is significant for bringing more automation and self-monitoring to industrial machines, helping improve efficiency.
What is multi-access edge computing?
Multi-access edge computing (MEC) is a type of network architecture that provides cloud computing capabilities and an IT service environment at the edge of the network. The goal of MEC is to reduce latency, ensure highly efficient network operation and service delivery, and improve the customer experience.
Multi-access edge computing is now more broadly defined as an evolution in cloud computing that uses mobility, cloud technologies, and edge computing to move application hosting away from a centralized datacenter to the edge of the network, which results in applications that are closer to end users and computing services that are closer to the data created by applications.
What is 5G?
5G refers to the fifth generation of mobile networks, representing upgrades in bandwidth and latency that enable services that weren’t possible under older networks. 5G networks promise gigabit speeds, with data transmission rates of up to 10 Gbps. 5G service also vastly reduces latency and can expand coverage to remote areas.
5G can be considered a use case for edge computing, and it also enables other edge use cases. Edge computing is a way to meet the performance and low latency requirements of 5G networks and improve the customer experience.
Edge computing for telecommunications
Adopting edge computing is a high priority for many telco service providers as they modernize their networks and seek new sources of revenue. Specifically, many service providers are moving workloads and services out of the core network (in datacenters) toward the network’s edge, to points of presence and central offices.
For telcos, the apps and services their customers want to consume on edge networks are the key to revenue generation, but success depends on building the right ecosystem and coordinating among stakeholders and technology partners alike.
Your open source foundation for edge computing
No single vendor can provide a complete edge computing solution. Instead, you will assemble a solution from multiple components. Open source platforms ensure interoperability across a wide ecosystem, without the vendor lock-in of a proprietary technology stack. And to enable new edge computing use cases, Red Hat is investing in upstream open source communities like Kubernetes, OpenStack, and Fedora IoT.
- A Linux foundation for scaling existing apps and rolling out emerging technologies across edge, bare-metal, virtual, container, and all types of cloud environments.
- A container platform for building, deploying, and managing container-based applications across any infrastructure or cloud, including private and public datacenters, or edge locations.