Edge architecture is a distributed computing architecture that encompasses all the components active in edge computing—all the devices, sensors, servers, clouds, etc.—wherever data is processed or used at the far reaches of the network.
Edge computing refers to computing done at the location closest to a system’s data or its end user—where information is coming from or going to. Edge architecture allows processing to occur more quickly by reducing latency and lag. Applications and programs running at the edge are able to work more quickly and efficiently, resulting in a better user experience and improved overall performance.
For instance, if you’re a shipping company, your “edge” might be each of the docks where shipments are loaded and unloaded, merchandise is checked before it leaves, and information is gathered, managed, and sent. Your company’s headquarters could be located miles away, housing the main datacenter, but the edge is where the app-processing action is.
Edge computing architecture encompasses an ecosystem of infrastructure components that have been dispersed from the central location of an enterprise’s datacenter or main server outward—across all edge locations—as part of an organization’s holistic deployment.
This includes compute and storage capabilities, applications, devices and sensors, as well as network connectivity back to the central datacenter or cloud, all working in concert with Internet of Things (IoT) devices.
The devices and sensors are where information is collected, processed, or both. They have just enough bandwidth, memory, and computing resources to collect, process, and act on data in real time with little to no help from other parts of the network. Some kind of connectivity with the network enables communication between the device and a database at a centralized location.
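This division of labor can be sketched in a few lines of Python. The `EdgeNode` class, its alert threshold, and its sync behavior below are illustrative assumptions, not any specific product's API: the device processes each reading immediately on-site, buffers results locally, and pushes them to the central store only when connectivity happens to be available.

```python
from collections import deque

class EdgeNode:
    """Sketch of an edge device: process readings locally, buffer
    results, and sync to a central store only when connected."""

    def __init__(self, threshold, buffer_size=100):
        self.threshold = threshold               # alert if a reading exceeds this
        self.buffer = deque(maxlen=buffer_size)  # bounded on-device storage

    def ingest(self, reading):
        # Real-time local decision: no round trip to the datacenter needed.
        alert = reading > self.threshold
        self.buffer.append({"value": reading, "alert": alert})
        return alert

    def sync(self, central_store, connected):
        # Push buffered results upstream only when the link is up;
        # otherwise keep them locally and retry later.
        if not connected:
            return 0
        sent = len(self.buffer)
        central_store.extend(self.buffer)
        self.buffer.clear()
        return sent

# Usage: process at the edge while offline, then flush once the link returns.
node = EdgeNode(threshold=50)
central = []
alerts = [node.ingest(v) for v in (10, 60, 30)]  # handled entirely on-device
node.sync(central, connected=False)              # link down: nothing leaves the edge
node.sync(central, connected=True)               # link up: buffer flushed upstream
```

The point of the sketch is the ordering: the alert decision happens at `ingest` time, on the device, while the upload to the central database is deferred until connectivity allows.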
Scaled-down, on-premises edge servers or datacenters can be easily moved and fit into comparatively small remote locations. Flexibility and scalability are necessities as an enterprise's needs change and grow. Flexible topology options to fit smaller spaces or different environmental requirements, including constraints on power and cooling or intermittent network connectivity, are just part of what Red Hat brings to the table for edge computing.
When it comes to creating your own edge architecture, you’ll need to assemble a collection of components that will serve the unique needs of your enterprise, wherever your edge happens to be and with however many devices, clouds, servers, and sensors you need. That can feel like a lot to take on.
Red Hat’s edge computing solutions make operations simpler through automated provisioning, management, and orchestration, freeing you up to focus on what’s next for your enterprise. We can help you handle the challenges that come with deploying edge devices wherever you need them, anywhere in the world, and our ecosystem of partners can help you build the best edge stack for your enterprise.
Red Hat Enterprise Linux is the consistent and flexible operating system that can help you run enterprise workloads from your datacenter to devices for modeling and analytics at the edge, while Red Hat OpenShift provides the platform you need to build, deploy, and manage container-based applications across any infrastructure or cloud—including private and public datacenters and edge locations.