Blesson Varghese, Associate Professor in computer science at Queen's University Belfast, and Flavio Bonomi, Founder of Nebbiolo Technologies and Board Technology Adviser for LYNX, explore what edge computing means.

They take the idea beyond simply moving memory and computing power to a network's periphery and explore how the edge might help foster a more ethical and fairer internet.

The underlying model adopted by many internet applications relies on services offered by centralised, power-hungry and geographically distant clouds. Typically, a user interacts with a device to generate data that is sent to the cloud for processing and storage.

This model works reasonably well for many applications we rely on daily. However, the rapid expansion of the internet, the need for real-time data processing in futuristic applications and the global shift in attitudes towards a more ethical and sustainable internet have all challenged the current working model.

There are four main arguments that favour a change to the underlying internet model, the ‘four Ps’:

  1. Proliferation. Can the existing network infrastructure cope with the billions of users and their manifold devices that need to be connected?
  2. Proximity. Are users sufficiently close to data processing centres to achieve the real-time responses required by critical applications?
  3. Privacy. Can we control our data and the subsequent choices and actions arising from processing it?
  4. Power. Is it sustainable to rely on power-hungry cloud data centres?

Leveraging compute capabilities placed at the edge of the network to process data addresses all of the above arguments. Firstly, (pre)processing data at the edge would reduce data traffic beyond the edge, thus reducing the ingress bandwidth demand on the core network (a short sketch of this idea follows these four points).

Secondly, with compute located closer to the user, communication latencies can be significantly reduced compared with the cloud, making applications responsive enough for real-time use.

Thirdly, localised processing of data acts as a privacy firewall, ensuring the selective release of user data beyond one's geographic boundary and legal jurisdiction.

Finally, fewer resources will be hosted in each of many more edge locations, offering a more energy-efficient and environmentally friendly alternative to traditional data centres.
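To make the first of these advantages concrete, here is a minimal Python sketch of edge-side (pre)processing. It is illustrative only: the sensor, the window size and the summary fields are assumptions rather than details of any particular deployment, and it uses only the standard library so it can be run as-is.

```python
import json
import random
import statistics

WINDOW_SIZE = 100  # number of raw readings aggregated per upload (illustrative value)


def read_sensor() -> float:
    """Stand-in for a real sensor read on an edge device."""
    return 20.0 + random.gauss(0, 1.5)


def summarise(readings) -> dict:
    """Reduce a window of raw readings to a compact summary."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 3),
        "min": round(min(readings), 3),
        "max": round(max(readings), 3),
    }


if __name__ == "__main__":
    raw = [read_sensor() for _ in range(WINDOW_SIZE)]

    raw_payload = json.dumps(raw).encode()              # what a cloud-only design would upload
    edge_payload = json.dumps(summarise(raw)).encode()  # what the edge gateway uploads instead

    print(f"raw upload: {len(raw_payload)} bytes")
    print(f"edge-summarised upload: {len(edge_payload)} bytes")
```

In a real deployment the summary would be forwarded to a cloud endpoint (for example over MQTT or HTTPS), but the point stands: only a small fraction of the raw data ever leaves the edge, which is what relieves ingress bandwidth on the core network.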

Understanding the edge advantage

Edge computing is generally understood to be the integrated use of resources located in a cloud data centre and along the entire continuum towards the edge of the network, so that internet applications can be distributed to (pre)process data and address the challenges posed above. The edge may refer either to the infrastructure edge or the user edge.

The infrastructure edge refers to edge data centres, such as those deployed on the telecom operator side of the last-mile network. The user edge refers to resources, such as end-user devices, home routers and gateways, located on the user side of the last-mile network.

The origins of edge

The initial ideas relevant to edge computing were envisioned over a decade ago in a 2009 article entitled The Case for VM-Based Cloudlets in Mobile Computing by M. Satyanarayanan et al. Fog computing is an alternative term for the same concept, presented by Cisco in 2012 in an article entitled Fog Computing and its Role in the Internet of Things by F. Bonomi (co-author of this article) et al.

Two categories of applications have been identified as benefiting from edge computing. The first is referred to as edge-enhanced (or edge-accelerated). These applications may be native to the cloud or the user device and achieve a performance or functionality gain when selected services of the application are moved to the edge.

The more important class is edge-native applications: they cannot emerge in the real world without the use of the edge. Illustrative examples include those that:

  • Augment human cognition, for example by providing real-time cognitive assistance to the elderly or those with neurodegenerative conditions using wearables.
  • Perform live video analytics from a large array of cameras, including real-time denaturing for privacy (a sketch of this follows the list).
  • Use machine learning for predictive safety in driverless cars and predictive quality control in manufacturing.
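As a loose illustration of the 'denaturing' idea in the second example above, the following Python sketch blurs detected faces in a camera frame before it leaves the edge device. It assumes the opencv-python package is available and uses the Haar cascade face detector bundled with it; a production video-analytics pipeline would use a far more robust detector and proper stream handling.

```python
import cv2

# Haar cascade face detector bundled with opencv-python (illustrative only;
# a production system would likely use a stronger, learned detector).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def denature(frame):
    """Blur every detected face so the frame can leave the edge safely."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame


if __name__ == "__main__":
    camera = cv2.VideoCapture(0)  # assumes a local camera attached to the edge device
    ok, frame = camera.read()
    if ok:
        cv2.imwrite("denatured_frame.jpg", denature(frame))
    camera.release()
```

Only the denatured frame (or features extracted from it) would then be sent onwards for analytics, so raw footage containing identifiable faces never crosses the last-mile network.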