Peter Hopton explores the physics and the factors that are slowing internet access today and presents a vision of a high-speed, low-power and low-latency future.

You may have heard that computing is moving to ‘the edge’. You have also probably seen the use of the word ‘edge’ in many marketing materials - whether correctly or otherwise. Perhaps you’ve been told that ‘edge’ is the new cloud?

The more enquiring and sceptical will, of course, now be asking: ‘what exactly is edge?’, ‘why is it happening now?’ and, ultimately, ‘what does this mean for me?’ As you read on we’ll answer all these questions and more. We’ll also look at how 5G relates to edge.

The location of creation

Only a few years ago, most of the content we viewed was centralised, and the supporting network was built around that centralised model. It relied on technologies such as ADSL (asymmetric digital subscriber line), CDNs (content delivery networks) and buffering.

With ADSL, the A stood for asymmetric, which meant our download bandwidth massively exceeded our upload. CDNs were locally sited caches of content such as Netflix videos, designed to reduce the strain on the network and to deliver content reliably. Buffering was the network's answer to unreliable latency and to data arriving out of order (known as jitter).

By 2018, with the advent of the internet of things, data was being created at the edge of the network rather than in this previously centralised way, and uploads and downloads had become much more symmetrical.

Look to your house. If you're a tech early-adopter, your heating may be connected to the internet, along with the electricity and gas meters. In many rooms there could be an Amazon Alexa awaiting commands from users. Perhaps your car is connected to WiFi and regularly updates its firmware.

Maybe if the car develops a fault the garage will instruct it to send several gigabytes of information to its manufacturer. Even your fridge might have a sticker left on the front of it, from the showroom, indicating some sort of obscure connectivity!

All of these items are creating data, and are uploading it to the cloud for analysis or control. But among the internet of things in the household, by far the biggest uploaders of data are the 'children units'. Ask your kids what they want to be when they grow up. The majority will say: a YouTuber - a modern-day personality who makes their living from presenting TV-style shows that are broadcast via YouTube. Our kids are trailblazers and are using the internet in ways we could never have imagined a few years ago.

Almost every person under the age of 18 seems to have their own YouTube channel, and likely streams footage of themselves playing their favourite console game. It's common for YouTubers to record and upload over 10 small videos a day - all from their phone and all in 4K. Just imagine the world they - and we - will be inhabiting in five years.

And this is, in part, the problem edge computing is addressing. It's just not possible to pass all this data back to a central server for processing. Our kids are creating and consuming data at the edge of the network.

The cost of transmission

Looking back at the cloud's origins, you could argue that it was enabled or born as a result of cheaper and faster connectivity. In the early 2000s, 56kbps dial-up was quickly replaced with a roll out of ADSL and cable services. These had speeds of up to 2Mbps - an improvement of more than 30x over 56kbps modems.

Looking at mobile, when 3G was born it boosted connectivity speeds from 9.6kbps to 384kbps - a 40x improvement. But do you remember what it did to the battery life of your device? It was terrible!
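
As a quick back-of-the-envelope check of those ratios, here's a minimal sketch using the nominal speeds quoted above (headline rates, not real-world measurements):

```python
# Back-of-the-envelope speed-up ratios for the nominal link speeds quoted above
DIAL_UP_KBPS = 56      # late-1990s dial-up modem
ADSL_KBPS = 2_000      # early ADSL/cable, 'up to 2Mbps'
PRE_3G_KBPS = 9.6      # mobile data before 3G
THREE_G_KBPS = 384     # early 3G

print(f"Dial-up to ADSL: ~{ADSL_KBPS / DIAL_UP_KBPS:.0f}x")    # ~36x
print(f"Pre-3G to 3G:    ~{THREE_G_KBPS / PRE_3G_KBPS:.0f}x")  # 40x
```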

Every 18 months, the amount of data processed per unit of cost and per unit of energy doubles - we know this phenomenon as Moore's law (and, on the energy side, as the less well-known but closely related Koomey's law).
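
To put that 18-month doubling into numbers, here's a short illustrative sketch of the compounding it implies (the trend is idealised; real hardware doesn't improve this smoothly):

```python
# Compounding implied by a doubling every 18 months (Moore's/Koomey's law style trend)
def improvement_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Multiplicative improvement after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for years in (3, 6, 10):
    print(f"After {years:>2} years: ~{improvement_factor(years):.0f}x")
# i.e. ~4x after 3 years, ~16x after 6 years, ~102x after 10 years
```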

Moore’s law is, however, limited by something called the Shannon-Hartley theorem, which is grounded in real laws of physics and sets a minimum on the energy (and therefore cost) required to transmit and receive a unit of data over a given distance by copper or radio. The great leap of bandwidth we saw in the 2000s has taken us to some of the limits of Shannon-Hartley, so now we need to reduce communication distances over copper or radio to save energy.
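
The theorem itself is compact: channel capacity is C = B log2(1 + S/N), where B is the bandwidth in hertz and S/N is the signal-to-noise ratio. Here's a minimal sketch of it in code - the 20MHz channel and the SNR values are illustrative assumptions, not real link parameters - showing why a stronger signal (for instance, from a closer transmitter) lets you push more data through the same spectrum:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

BANDWIDTH_HZ = 20e6              # illustrative 20MHz channel (an assumption)
for snr_db in (0, 10, 20, 30):   # illustrative signal-to-noise ratios in dB
    snr_linear = 10 ** (snr_db / 10)
    capacity_mbps = shannon_capacity_bps(BANDWIDTH_HZ, snr_linear) / 1e6
    print(f"SNR {snr_db:>2}dB -> capacity ~{capacity_mbps:.0f}Mbps")
# A stronger signal (higher SNR) - for example from a nearer tower - means more bits per hertz
```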

You may have seen this trend realised in your home with computer motherboards getting more compact and power supplies getting bigger. You may have also noticed your ISP bringing fibre optics to your street box in order to boost your bandwidth.

For our mobile devices of today, the distance to the transmitter is quite large - usually a tower will cover a radius of a few kilometres. With 5G, bandwidth is increasing, so, unless you want your phone to discharge its battery in an hour, your tower will be much lower power and, within a metropolitan area, it will likely be only 150m away from you.
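
The distance-versus-energy trade-off can be illustrated with the free-space path loss formula, FSPL(dB) = 20 log10(d in km) + 20 log10(f in MHz) + 32.44. A rough sketch follows - the 3.5GHz carrier and the two distances are illustrative assumptions - showing how much less signal is lost when the tower is 150m away rather than 3km:

```python
import math

def free_space_path_loss_db(distance_km: float, frequency_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz) + 32.44

FREQUENCY_MHZ = 3500   # illustrative mid-band 5G carrier (an assumption)
for distance_km in (3.0, 0.15):
    loss_db = free_space_path_loss_db(distance_km, FREQUENCY_MHZ)
    print(f"{distance_km * 1000:>5.0f}m from the tower: ~{loss_db:.0f}dB path loss")
# Dropping from 3km to 150m cuts the path loss by ~26dB, meaning roughly 400x less
# transmit power is needed for the same received signal - kinder to phone batteries.
```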

You can envisage, then, that with 5G we're going to have more devices, transmitting more data. They will no longer talk to a few towers that are kilometres apart, though. There will be tens of thousands of towers across a city and they will be only a few hundred metres apart.

Managing, compressing, deduplicating, caching and processing this data centrally would be an endeavour that would strain the network, and it could be dealt with much more efficiently at the edge of the network. After all, your kid's 144 videos of four-hour-long Fortnite sessions are unlikely to need the same central distribution and global delivery system as Pirates of the Caribbean - it's far more likely that he'll be sharing a few minutes here and there with his schoolmates, locally.

The latency of data

While we’re on the topic of our kids’ Fortnite addiction, go and ask them about pings or latency. They’re unlikely to know exactly what these are, but they will appreciate that a high ‘ms’ is bad and will cause lag.

In a video game, a lag event - effectively where some data has jittered between the console and the server - is a virtual life-and-death issue for eager young players. There's a more serious side to lag, though. In real life, a lag event in the era of autonomy, smart cars and smart cities could very well be a real life-or-death issue.

Think of data as three trucks of building parts setting off from York to London. Latency is the time it takes them to arrive - a function of the number of junctions or traffic lights and the cruising speed of the vehicles. Bear in mind that the internet might send them by different routes too - especially over long distances. Jitter is what happens when they arrive out of order - the truck carrying the windows arrives first, and the concrete last - which means that you can't start your building until the last truck arrives, whereas you could have begun work whilst waiting for the windows if the concrete had arrived first.

Packet loss is what happens when a truck goes missing, requiring you to wait and send a messenger to request a replacement truck. Packet loss and jitter are more likely over longer distances and with larger amounts of data, and they're most problematic for real-time applications (like an online game, where they show up as lag).
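
The truck analogy maps neatly onto how a receiver reassembles numbered packets. Here's a minimal sketch - the sequence numbers and payloads are made up for illustration - of buffering out-of-order arrivals and spotting a missing packet:

```python
# The trucks as numbered packets: handling out-of-order arrivals and a missing one
expected_sequence = [1, 2, 3]              # concrete, steel, windows - sent in this order
arrived = {3: "windows", 1: "concrete"}    # jitter: out of order; packet 2 has been lost

usable = []                                # delivered to the application, strictly in order
for seq in expected_sequence:
    if seq in arrived:
        usable.append((seq, arrived[seq]))
    else:
        print(f"Packet {seq} missing - request a retransmission and wait")
        break                              # everything behind the gap stays stuck in the buffer

print("Usable so far:", usable)            # only (1, 'concrete') - the windows wait for packet 2
```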

The concept of ‘edge’

The concept of edge computing is to place more computing resource closer to the point of data creation - logically, or ideally, at the 5G tower. The purpose is to provide processing for high-bandwidth data and caching for locally popular content, as well as a fast and reliable response for real-time problems - without lag, jitter or packet loss.
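
In code terms, the decision an edge node makes is simple: serve from a nearby cache when you can, and fall back to the distant core only when you must. The sketch below is illustrative - the function names and the in-memory cache are assumptions, not any real edge platform's API:

```python
# Illustrative edge-node logic: serve from a local cache, fall back to the distant origin
edge_cache: dict[str, bytes] = {}   # content held at the 5G tower / edge site

def fetch_from_origin(content_id: str) -> bytes:
    """Stand-in for a slow, long-distance request to a central data centre."""
    return f"<video bytes for {content_id}>".encode()

def serve(content_id: str) -> bytes:
    if content_id in edge_cache:             # locally popular content: low latency, no backhaul
        return edge_cache[content_id]
    data = fetch_from_origin(content_id)     # cache miss: pay the long-distance cost once
    edge_cache[content_id] = data
    return data

serve("fortnite-clip-42")   # first request travels all the way to the core
serve("fortnite-clip-42")   # schoolmates nearby get it straight from the edge
```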

Edge may become essential to ensuring cost-efficient delivery of services on the network over the next 10 years, and it could become a matter of life and death - both virtually and physically.