Understanding edge computing
Cloud computing has led many organizations to centralize their services within large datacenters. However, new end-user experiences like the Internet of Things (IoT) require service provisioning closer to the outer "edges" of a network, where the physical devices exist.
In a cloud computing model, compute resources and services are often centralized at large datacenters, which end users access from the "edge" of a network. This model offers proven cost advantages and more efficient resource sharing. However, new forms of end-user experiences like IoT need compute power closer to where a physical device or data source actually exists, that is, at the network's "edge."
Edge computing responds to this need: it is a model that distributes compute resources out to the "edge" of a network when necessary, while continuing to centralize resources in the cloud when possible. It solves the problem of quickly providing actionable insights from time-sensitive data.
No single vendor can provide a complete edge computing solution. Instead, you will assemble a solution from multiple components. Open source platforms ensure interoperability across a wide ecosystem, without the vendor lock-in of a proprietary technology stack. To enable new edge computing use cases, Red Hat is investing in upstream open source communities like Kubernetes, OpenStack, and Fedora IoT.