In the last post, we looked at how Edge Computing (EC) differs from centralized computing and why businesses are adopting EC. The projected growth of EC over the next several years means an ever-growing number of businesses may be adding EC to their IT infrastructure. This growth translates into business and technical requirements that are significantly different from those of non-localized infrastructure. In this post we’re going to look at some of the factors that businesses should consider when adopting EC.

New thinking needed for edge

EC brings the flexibility and simplicity of cloud computing to sites distributed across a large number of locations. Unlike traditional cloud computing with a few large sites, EC is spread across many small sites. EC solutions are as varied as the range of edge use cases, with deployments spanning from a few computing clusters to millions of edge devices. EC infrastructure could include any combination of edge devices, edge gateways, edge servers, mini-clusters or micro-datacenters.

While cloud computing infrastructure is hardware-centric and rather rigid, EC infrastructure is software defined and flexible. There are two areas where EC differs significantly from cloud computing in terms of technology and operations:

1) The “illusion of infinite capacity,” where supply leads demand and users can request more resources on demand, does not hold true for edge deployments, where capacity is provisioned for a smaller set of workloads. This means careful resource planning and management are needed (a minimal sketch of such a capacity check follows this list).

2) EC requires not only providing a compute platform but also managing the whole hardware and software stack, from firmware and hardware to software and services, in a consistent and repeatable manner.
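As an illustration of the first point, the sketch below shows a simple admission check against a fixed capacity budget at an edge site. The site name, resource figures and headroom value are hypothetical; the point is only that edge capacity has to be planned rather than assumed infinite.

```python
# Hypothetical capacity check for a fixed-capacity edge site.
from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    cpu_total: float        # cores provisioned at the site
    mem_total_gb: float
    cpu_used: float = 0.0
    mem_used_gb: float = 0.0

    def can_schedule(self, cpu_req: float, mem_req_gb: float,
                     headroom: float = 0.2) -> bool:
        """Reject workloads that would eat into reserved headroom,
        since there is no 'infinite' pool to burst into at the edge."""
        cpu_ok = self.cpu_used + cpu_req <= self.cpu_total * (1 - headroom)
        mem_ok = self.mem_used_gb + mem_req_gb <= self.mem_total_gb * (1 - headroom)
        return cpu_ok and mem_ok

site = EdgeSite("store-0421", cpu_total=8, mem_total_gb=32)
print(site.can_schedule(cpu_req=2, mem_req_gb=8))   # True: fits within planned capacity
print(site.can_schedule(cpu_req=7, mem_req_gb=30))  # False: would exceed the site's budget
```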

When planning for EC, it’s important to consider how it affects different stakeholders. Let’s look at some of the key requirements from the perspectives of business, operations and developers:

Business

Resiliency: When dealing with critical business functions, edge deployments need to be highly resilient to failure. These edge systems need to continue to operate, even if at reduced capability, e.g. operating in offline mode during a network disruption.
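One common pattern for this kind of resiliency is store-and-forward: buffer data locally while the uplink is down and flush it once connectivity returns. The sketch below is only illustrative; the send() function stands in for whatever uplink the deployment actually uses.

```python
# Illustrative store-and-forward buffer for offline operation.
import queue
import time

local_buffer: "queue.Queue[dict]" = queue.Queue()

def send(event: dict) -> None:
    """Stand-in for the real uplink; here it simulates a network disruption."""
    raise ConnectionError("uplink unavailable")

def record(event: dict) -> None:
    try:
        send(event)
    except ConnectionError:
        local_buffer.put(event)       # keep operating at reduced capability

def flush_when_online() -> None:
    while not local_buffer.empty():
        event = local_buffer.get()
        try:
            send(event)
        except ConnectionError:
            local_buffer.put(event)   # still offline; retry later
            break

record({"sensor": "pump-3", "ts": time.time(), "value": 42})
print(f"{local_buffer.qsize()} event(s) buffered locally")
```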

Hardware: With advances in processor capabilities, it's becoming possible to run complex, compute-intensive workloads, e.g. AI/ML, on edge systems. New hardware form factors will be needed to address the broad range of edge computing requirements. These systems could include a combination of general-purpose processors, GPUs, FPGAs and application-specific processors.

Security: Edge sites often have less physical security, which raises the risk of malicious or accidental disruption. In addition, bringing less capable devices (e.g. industrial microcontrollers, actuators) online without adequate protection is a recipe for disaster. Edge systems act as a firewall, protecting the entire downstream infrastructure from physical or virtual attacks. Edge systems need to be hardened from the ground up, from firmware to OS to memory subsystem to storage to communication channels.

Non-technical: Remote sites may lack technical expertise, so any on-site maintenance will be performed by workers without IT skills. The operation and maintenance of edge infrastructure needs to be simple enough to be performed by non-technical on-site workers.

Environment: Remote locations, e.g. oil rigs and mines, have challenges with reliable power, space, cooling and connectivity. Edge systems need to be designed with these environmental challenges in mind.

Cost: Due to the potential for massive scale, EC is highly cost-sensitive. In small edge deployments, fixed costs and per-site overhead do not amortize as they do in centralized computing. As the number of edge sites increases, even a small change in cost, recurring across hundreds of thousands of sites, can have a big impact on the budget.
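A back-of-the-envelope calculation makes the point; the figures below are invented purely for illustration.

```python
# Hypothetical example: a small recurring per-site cost, multiplied out across sites.
sites = 100_000
delta_per_site_per_month = 5.00            # e.g. a minor extra license or support fee

annual_impact = sites * delta_per_site_per_month * 12
print(f"${annual_impact:,.0f} per year")   # $6,000,000 per year
```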

Open: Monolithic edge solutions that rely on custom tooling, without integrating with the rest of the IT infrastructure and processes, could cause major pain down the road once EC reaches mass deployment. A modular design approach built on open APIs gives businesses the choice to build a solution that meets their current and future needs.

Operations

Remote: A business might have tens of thousands of edge sites that need to be deployed, patched, upgraded or migrated via remote operations from a central location. This requires advanced capabilities to manage these sites remotely.

Deterministic: All site management operations have to be highly reproducible; otherwise, troubleshooting can become a huge issue at scale. EC configurations need to be highly deterministic, with divergences detected and documented centrally.
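One way to make divergence visible is to fingerprint the declared configuration and compare it with what each site reports, recording any mismatch centrally. This is only a sketch; the configuration fields and version strings are made up.

```python
# Illustrative configuration drift check using a deterministic fingerprint.
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    canonical = json.dumps(config, sort_keys=True)   # deterministic serialization
    return hashlib.sha256(canonical.encode()).hexdigest()

declared = {"os": "example-os-9.4", "agent": "1.2.0", "kernel_args": ["quiet"]}
reported = {"os": "example-os-9.4", "agent": "1.1.7", "kernel_args": ["quiet"]}

if config_fingerprint(declared) != config_fingerprint(reported):
    # In a real deployment, this divergence would be recorded centrally.
    print("drift detected: declared", declared, "vs reported", reported)
```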

Automation: Site management operations need to be driven remotely from a centralized location by a small number of experts. This requires fully automated operational capability with minimal to no manual intervention.
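For example, a central controller might roll updates out in waves and halt automatically when a wave exceeds a failure threshold, so people only step in on exceptions. In the sketch below, apply_update() is a placeholder for whatever remote management tooling is actually in use, and the failure rate is simulated.

```python
# Illustrative automated rollout in waves, with an automatic halt on excess failures.
import random

def apply_update(site: str) -> bool:
    """Placeholder: push an update to one site and report success or failure."""
    return random.random() > 0.01          # simulate roughly 1% of sites failing

def rollout(sites: list[str], wave_size: int = 500, max_failures: int = 25) -> None:
    for i in range(0, len(sites), wave_size):
        wave = sites[i:i + wave_size]
        failures = [s for s in wave if not apply_update(s)]
        print(f"wave {i // wave_size}: {len(failures)} failure(s)")
        if len(failures) > max_failures:
            print("halting rollout for review")   # the only point needing human attention
            return

rollout([f"site-{n:05d}" for n in range(2_000)])
```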

Reporting: For central management to be possible, centralized logging and reporting is a key requirement for an EC solution.
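In practice this usually means every site emits structured records tagged with a site identifier, so events from thousands of locations can be correlated in one place. The record fields below are examples only, and the logger stands in for whatever collector actually aggregates them.

```python
# Illustrative structured reporting record tagged with a site identifier.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")

def report(site_id: str, event: str, **fields) -> None:
    record = {"site": site_id, "event": event, "ts": time.time(), **fields}
    logging.info(json.dumps(record))   # in practice, shipped to a central collector

report("store-0421", "upgrade_completed", version="1.2.0", duration_s=84)
```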

Developer

Skillset: Developers should be able to create and deploy applications irrespective of location. This means no special development skills should be needed to create edge applications.

APIs: EC enables businesses to offer new classes of services based on location data. This real-time location data may also need to be made available to partners. Well-defined and open APIs enable the partner ecosystem to exchange data and provide value-added services. APIs also allow data to be accessed programmatically, e.g. front-end developers can access IoT device data without worrying about hardware interfaces or device drivers.
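For instance, a front-end developer might read device telemetry over a plain HTTP API rather than touching hardware interfaces. The endpoint, path and JSON shape below are entirely hypothetical.

```python
# Illustrative programmatic access to IoT device data over a hypothetical HTTP API.
import json
import urllib.request

def latest_reading(device_id: str) -> dict:
    url = f"https://edge-gateway.example.com/api/v1/devices/{device_id}/telemetry"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

# reading = latest_reading("temp-sensor-17")   # would work against a real gateway
# print(reading["value"], reading["unit"])
```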

Application Management: Edge and cloud-native apps are similar in how they are developed, installed, configured and shared across teams. Application management platforms should be able to support a diversity of scenarios, including deploying these apps at various edge tiers.
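One simple way to express "various edge tiers" is to label sites by tier and let the platform resolve deployment targets from those labels. The tier names and sites in this sketch are invented for illustration.

```python
# Illustrative placement of an application by edge tier label.
sites = [
    {"name": "regional-dc-01", "tier": "regional"},
    {"name": "store-0421",     "tier": "far-edge"},
    {"name": "store-0422",     "tier": "far-edge"},
]

def targets(tier: str) -> list[str]:
    """Return the sites an app should be deployed to for a given tier."""
    return [s["name"] for s in sites if s["tier"] == tier]

print(targets("far-edge"))   # ['store-0421', 'store-0422']
```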

Conclusion

EC needs to bring the flexibility and simplicity of cloud computing to the edge. Yet EC differs significantly from cloud computing: the idea of “supposedly” infinite resources does not hold true for edge deployments, so careful resource planning and management are needed. An edge computing platform needs to manage the whole hardware and software stack, from firmware and hardware to software and services, in a consistent and repeatable manner.

When planning for EC, the requirements of different stakeholders should be considered. These include business requirements like resilience, hardware form factors, security, lack of on-site technical skills, adverse environments, cost sensitivity and the flexibility of open source based solutions. Operational considerations include remote capabilities, deterministic configuration, automation and reporting. For developers, factors like requiring no special skills for edge applications, use of open APIs and a flexible application management solution should be considered.

Although EC mass deployments remain a few years away, the design and tooling decisions made today will have a lasting impact on future capabilities. Instead of adopting monolithic cloud-to-edge solutions, companies should be looking at how to leverage their existing tools to manage edge deployments. For example, consider using the tooling already used to manage, provision and configure hybrid cloud for edge systems as well. This provides a consistent approach for managing all systems, including edge systems.

EC can benefit from IT best practices for delivering standards-based solutions that can be securely deployed at scale. Open source is an obvious choice, providing flexibility of choice and future-proofing investments in EC.


About the author

Ishu Verma is Technical Evangelist at Red Hat focused on emerging technologies like edge computing, IoT and AI/ML. He and fellow open source hackers work on building solutions with next-gen open source technologies. Before joining Red Hat in 2015, Verma worked at Intel on IoT Gateways and building end-to-end IoT solutions with partners. He has been a speaker and panelist at IoT World Congress, DevConf, Embedded Linux Forum, Red Hat Summit and other on-site and virtual forums. He lives in the Valley of the Sun, Arizona.
