What is Edge Computing? Much like the early days of “cloud” computing, the answer depends on whom you ask. Similar to the core/edge networking model, edge computing is often part of a core/edge compute model: the core may be hosted in a private datacenter or public cloud, while edge systems are distributed in proximity to remote users or devices.
What does “Edge Computing” Look Like Today?
Until recently, edge computing was primarily used in telecommunications, industrial automation, and remote offices. The historical role of edge gateways was to aggregate data in the remote environment, perform minimal processing, and transfer the majority of the data and processing to the core. Edge gateways (sometimes called “embedded systems”) are typically deployed as stand-alone servers, with all applications installed in a single bare-metal operating system (i.e. no redundancy).
The Internet of Things (IoT) is quickly expanding the use cases for and the requirements on edge gateways. IoT solution stacks are evolving rapidly from the simple sensors of today to advanced systems that analyze sensor data, make AI-driven decisions, and take action in real time. To meet these growing expectations, the edge must become more powerful, intelligent, available, and enabled.
What do the Cloud Vendors Think?
The hyperscale cloud vendors believe IoT sensors in the field will connect locally to very small edge gateway devices that essentially just handle local communications with the devices and bridge their signals onto an Ethernet network for transport to the cloud. In their view, all the intelligence and decision-making happens in the cloud. Not surprisingly, in their minds IoT and edge computing are all about driving the consumption of more core cloud, since that’s what they currently monetize. For instance, if you look at the list of devices certified to run AWS Greengrass, they are Raspberry Pi-like devices, most with under 4 GB of RAM. The supported IoT sensors and edge gateways are as simple and cheap as possible. Since the cloud vendors don’t sell IoT or edge systems, this “Skinny Edge” approach makes business sense: minimize customer spend on anything outside of the core cloud.
What are the Challenges with the Skinny Edge Approach?
- There will be millions of them, so they must be cheap (leaving little room to invest in hardening their security)
- They are remote, on the frontier, not behind biometrically controlled datacenter walls. They are exposed to physical attacks.
- Most of them use wireless communications.
- There will be a bazillion different sensors from manufacturers all over the world.
In the Skinny Edge model, the only thing between a cheap, exposed sensor of unknown origin and the core corporate network is a tiny gateway. Consider that many of the biggest, most costly exfiltrations were initiated through connections to partner networks (which are arguably reasonably secure), then consider the ability of a Raspberry Pi edge gateway to protect against a hacker with a beachhead on an IoT device.
We believe that these kinds of highly distributed endpoints require automation-based Zero Trust with no reliance on humans. NetFoundry provides a private network with Zero Trust from edge-to-cloud by:
- Limiting access to apps with micro-segmented, least privilege access AppWANs integrated into IAM
- Abstracting critical systems away from the Internet using bi-directional and outbound-only dark networks
- Harnessing multi-dimensional security architectures for data-in-motion protection
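The micro-segmented, least-privilege principle in the first bullet can be illustrated with a minimal sketch. NetFoundry AppWANs are a commercial product whose actual API is not shown here; the policy table, identity names, and service names below are entirely hypothetical, and the point is only the default-deny shape of the access check.

```python
# Hypothetical sketch of a micro-segmented, least-privilege access check.
# Identity and service names are illustrative, not NetFoundry's actual API.

# Each identity is granted access only to the specific services it needs;
# anything not explicitly allowed is denied (default-deny).
APPWAN_POLICY = {
    "camera-01": {"video-ingest"},
    "thermostat-07": {"telemetry-ingest"},
    "ops-admin": {"video-ingest", "telemetry-ingest", "gateway-mgmt"},
}

def is_allowed(identity: str, service: str) -> bool:
    """Default-deny: access is granted only if explicitly listed."""
    return service in APPWAN_POLICY.get(identity, set())

print(is_allowed("camera-01", "video-ingest"))   # True
print(is_allowed("camera-01", "gateway-mgmt"))   # False
print(is_allowed("unknown-device", "telemetry-ingest"))  # False
```

Note the contrast with perimeter security: a compromised camera with a beachhead on the network still cannot reach management services, because nothing it wasn’t explicitly granted is reachable.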
Bandwidth: Many edge locations won’t have enough bandwidth to the core to handle the expected increases in sensor data.
Latency: Not just the latency of network transmissions, but also the time between sensor input, analysis, decision, and action.
Availability: As the edge becomes more enabled and responsible for control, it becomes critical infrastructure just like the core. Skinny Edge gateways look like embedded systems of today – standalone bare metal servers. If the gateway fails, the entire edge application is down until a service tech can physically repair it, and given the remote nature of some of these systems, that could be a while.
Flexibility: IoT is evolving rapidly, and so are IoT solution architectures. Developers need the flexibility to run applications in the most appropriate location, whether that is in the core or at the edge. The Skinny Edge approach doesn’t allow much flexibility to run applications at the edge, even when necessary.
Compute power: Current edge gateways don’t have nearly enough processing power to run big data and AI applications in near real time. Sure, you can run multiple edge gateways, each running pieces of the application stack, but without high availability (HA) that just means more single points of failure.
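The “more single points of failure” point can be made concrete with standard back-of-envelope availability arithmetic: chaining components that are all required multiplies their availabilities, while a redundant pair fails only if both members fail. The 99% figure below is an assumed value for illustration, not a measured one.

```python
def series_availability(a: float, n: int) -> float:
    """n independent components, all required: each is a single point of failure."""
    return a ** n

def redundant_pair_availability(a: float) -> float:
    """Two redundant components: the system is down only if both fail."""
    return 1 - (1 - a) ** 2

a = 0.99  # assume each gateway is up 99% of the time

# Four chained gateways, each a single point of failure: ~96.1% uptime.
print(round(series_availability(a, 4), 4))       # 0.9606

# One redundant pair with failover: 99.99% uptime.
print(round(redundant_pair_availability(a), 4))  # 0.9999
```

In other words, splitting the stack across more non-redundant gateways makes availability worse, not better, which motivates the HA requirements discussed below.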
What do we Think “Edge Cloud” Will Look Like?
Edge gateways need to be capable of being the first line of defense against compromised IoT devices, which means they need the compute power to run advanced and layered security applications that integrate with corporate network security tools.
Many experts believe the bandwidth and latency needs of edge/IoT applications will demand that processing move to the edge, in close proximity to the devices. That means edge gateways will need the processing power and performance to run big data and advanced AI applications in near real time.
And if security, processing and control are all being done at the edge, rather than in the core, then availability of the edge becomes critical. Edge gateways will need to be highly available, just like core infrastructure – redundant, self-healing storage, redundant compute with automatic failover, autonomic load balancing, and modular scalability.
Now of course, IoT and edge computing COULD evolve in a totally different direction – applications could be stateless, data could be transient, new security models may emerge – but that would require developing a completely new universe of products, different from those that exist today. Those products would be IoT-specific, and existing off-the-shelf products could not be used. That’s pretty much how the real-time embedded systems market looks today: specialized vendors and products with different or limited functionality that integrate poorly with the rest of core IT.
A more efficient evolution (spoiler alert) is possible if we can provide a scalable, redundant, highly available edge cloud that can run off-the-shelf advanced, layered security applications and AI applications that analyze data and make decisions in near real time, that integrates easily with either private or public cloud-based core infrastructure, and that is still cost-appropriate. In other words: a cost-effective hyperconverged fabric that deploys on top of your infrastructure at the edge.