What makes a city smart? Ahead of this year’s Turing Talk, Professor Julie McCann explores the technologies we’ll need to master to make cities smarter.

The smart city dream is a familiar one: people, goods and resources will move around more efficiently. Our streets will be safer. Public services could be commissioned and deployed with more precision. Smarter cities will be more sustainable, in most of the world’s many definitions, than today’s largely analogue ones.

But, what protocols, processes and thinking will help us move from today’s relatively low-tech cities to places where life is augmented for everyone? Professor Julie McCann from Imperial College London explains what makes a city smart.

Why don’t you introduce yourself?

I'm a professor of computer systems. My work revolves around looking at sensor networks, the internet of things - those kinds of systems.

My particular interest, in terms of research, is how those systems can be made more reliable, safe and secure. How they can better serve their purpose. It’s also about using information and knowledge about how those systems behave in environments.

When did you first become interested in computers?

Turing Talk 2022: a day in the life of a smart city

Monday 21 February 2022, 18:00 - London
Wednesday 23 February 2022, 17:30 - Belfast


When I was in the third year of secondary school, my maths teacher was asked to introduce this new topic called computer science.

So, he put a few programs up on the board and said: ‘Right, this is a program to do this. Now, you do a program to do that!’

I wouldn’t say I was top of the class, but I got the idea in a second.

My answer was right and I started thinking, ‘Well, that's easy...’ And, after a degree and PhD from the University of Ulster, here I am today!

Do you see the smart city concept as being purely tech driven, or is it about people and their needs coming first?

Absolutely - what's the point of any technology if it doesn't have a use? The truth is, it's okay for us academics to play with next-generation technologies - technology without a use, yet. That's good and that’s normal.

But you know, a smart city is a living city. It's not a big university. It has to have a purpose. And if it doesn't have a purpose, a positive purpose that’s more than just making money, it will not happen.

What is a smart city and is the idea really that new?

We've been talking about smart cities for over 20 years now and they haven't emerged the way people thought they would.

If you look back at the predictions we were making about smart cities in the early 2000s, the idea was that cities would have this big, centralised mission control. Somebody sitting and looking up at a big screen, making sure everything was working.

Instead, they have evolved in a more grassroots way, where individuals and small groupings of people have got together, and thought ‘This is useful for people. Let's put this in.’

And there's a reason for that. Mission control - it won't work. It doesn't allow the city to adapt and scale. It doesn't allow for change and it isn’t what people actually want or need.

How does edge fit into the story of what makes a city smart?

Technologically, you can have a local system that makes decisions straight away - about closing something, turning on a light, or indicating there’s an error. We’ve had that for decades. The point of edge processing has been two-fold.

The first is to maintain some notion of privacy. The second is to distribute the processing through the city, so all this data doesn’t need to be shipped back to a big server to be processed.

We are now starting to see indications that doing knowledge processing out on the edge doesn’t provably help with privacy. But reducing compute complexity is something one can do, by essentially spreading the computing load across the city.
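The idea of spreading the load can be sketched very simply: an edge node reduces a window of raw readings to a small summary, and makes its alert decision locally rather than shipping every sample to a central server. This is a minimal illustration, not a real smart-city API - all names here are invented.

```python
# Illustrative sketch: an edge node summarises raw sensor readings locally
# and ships only a compact summary upstream, instead of every sample.
from statistics import mean

def summarise(readings, alert_threshold):
    """Reduce a window of raw samples to a small summary dict.
    The alert flag is decided locally - no round trip to a central server."""
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "alert": max(readings) > alert_threshold,
    }

window = [21.3, 21.7, 22.1, 35.9, 21.5]          # e.g. temperature samples
summary = summarise(window, alert_threshold=30)
print(summary)   # five raw samples collapse into one small message
```

Here five samples become one message: the bandwidth saving is the point, and it grows with the window size.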

So, how many sensors will a smart city need? The number must be huge!

In your phone, you have maybe tens of sensors. If you’re augmented in any way - maybe for hearing, or perhaps wearing headphones - those will have three or four sensors to maximise the sound quality. [Rather than the city itself having more sensors] we’ll have more sensors on us, particularly as we get older. The notion of adding sensors to cities has relatively bottomed out, compared to what we predicted.

So, we’ll be the sensors?

Maybe. That’s a cheap way of doing it. You buy the mobile phone; you buy the hardware. It’s cheaper than digging up a road, which costs tens of thousands of pounds a dig. The upgrade onus is on the individual too.

On the other hand, with critical infrastructure you wouldn’t want to be beholden to a third party.

With all this data being generated, is there a risk existing networks will become completely saturated?

Yes. There are limits on capacity. It’s a bit like E = mc². There’s only so much data the air can hold. That’s why, if you talk to people who do radio communications research, they’re working on ways you can double up on network capacity. There are also regulations that allow us to share networks.

If you look at an IoT network called LoRa (short for long range), you can only send messages for something like thirty-six seconds an hour - a one per cent duty cycle, per node. So, we can share the air.
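The duty-cycle arithmetic is simple enough to sketch. The snippet below is back-of-the-envelope illustration only, not a LoRa stack; the airtime figure of 1.2 seconds per message is an assumed example value.

```python
# Back-of-the-envelope duty-cycle arithmetic (illustrative, not a LoRa stack).
# At a 1% duty cycle, a node may transmit for 1% of each hour: 36 seconds.

def airtime_budget_s(duty_cycle, window_s=3600):
    """Seconds of transmission allowed per window at a given duty cycle."""
    return duty_cycle * window_s

def messages_per_hour(duty_cycle, airtime_per_msg_s):
    """How many messages of a given on-air time fit in one hour's budget."""
    return int(airtime_budget_s(duty_cycle) // airtime_per_msg_s)

print(airtime_budget_s(0.01))          # 36.0 seconds per hour at 1%
print(messages_per_hour(0.01, 1.2))    # 30 messages of 1.2 s airtime each
```

Everything a node doesn't use, its neighbours can - which is exactly the "sharing the air" idea.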

Have architectures changed over time?

When sensor nets were first invented, we looked at hub and spoke topologies. That’s because, if you looked at legacy sensor-based systems, that’s exactly how they worked. We were just looking at smaller versions of that - systems that were completely radio based and battery powered. But we started to think about the limits, things like the central unit being a single point of failure and the distance you can place your sensor network away from that unit. And the cost...

We started to look at more mesh-based networks. Some were completely flat, where the system decides where it sends its data based on the current state of its neighbourhood. [The node] would only be listening to its neighbourhood, not the whole network - just the nodes it can hear in one hop. From there it can infer where its next hop will go, and that trickles through the whole network.

We were able to show that we could get optimal routes through the network, and these were very agile; they could react to changes in the network. In the real world, though, deploying and maintaining such a system is difficult - they weren’t showing the performance we thought we could theoretically show. So people moved back to the hub and spoke model. LoRa, for instance, is hub and spoke - it’s practical and easy to maintain.
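The flat-mesh idea described above - each node hearing only its one-hop neighbours, yet routes trickling through the whole network - can be sketched with a simple hop-count gradient. This is a toy illustration under assumed names, not any deployed protocol.

```python
# Minimal sketch of flat mesh routing: each node knows only its one-hop
# neighbours and forwards to whichever neighbour is closest to the sink.
# Hop counts spread outwards from the sink, gossip-style. Illustrative only.

def build_hop_counts(neighbours, sink):
    """Flood hop counts out from the sink; nodes learn only via neighbours."""
    hops = {sink: 0}
    frontier = [sink]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in neighbours[node]:
                if nb not in hops:            # first time heard = fewest hops
                    hops[nb] = hops[node] + 1
                    nxt.append(nb)
        frontier = nxt
    return hops

def next_hop(node, neighbours, hops):
    """Forward to whichever audible neighbour is nearest the sink."""
    return min(neighbours[node], key=lambda nb: hops.get(nb, float("inf")))

# A toy topology: node -> set of nodes it can hear in one radio hop.
neighbours = {
    "sink": {"a", "b"},
    "a": {"sink", "c"},
    "b": {"sink", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
hops = build_hop_counts(neighbours, "sink")
print(hops["d"])                          # 3 hops from the sink
print(next_hop("d", neighbours, hops))    # "c" - its only way toward the sink
```

Note the agility the interview mentions: if a node dies, re-running the flood rebuilds the gradient from purely local information.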

However, computing goes from distributed to centralised, to distributed to centralised. Since computing started, we have always waved between those two styles of architecture and topology.

You’ve written, researched and talked about how nature might have some of these answers. Can you tell us about that?

Networks can be formed from individual devices just listening to the other devices around them. Now, the individual device itself is relatively unsophisticated. It doesn't have much processing power - maybe like a ZX Spectrum.

These systems run on batteries. You don't want to waste any power, so you minimise the amount of stuff an individual device does. You design the network, or swarm, to be the ‘smart’ bit. The individual node or device is simpler, and that's exactly how nature protects itself.

For example, if you look at swarming systems like flocking birds and shoaling fish, you'll see that they form the network to keep all the individual fish or birds as safe as possible. Each fish or bird has a different job to do. That minimises the amount of food they're using, too.

Swarm, flock, whatever you want to call it - we try to have that sort of thinking in the sensor systems. Though each individual sensor is relatively basic, it’s the position and role in the network that enables all the sensors to survive – or, in this case, not run out of battery! They can still relay the data to where it has to go. It’s that sort of thinking that we do. It’s not borrowing from, but rather reflecting nature.
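The local-rules-only idea behind flocking can be shown in a few lines. The sketch below is a deliberately simplified, one-dimensional take on the classic cohesion rule - each agent reacts only to the neighbours it can "hear" within a radius - and is purely illustrative, not the lab's actual model.

```python
# Toy flocking sketch: each agent reacts only to nearby neighbours, yet the
# group stays together - the network, not the individual, is the smart bit.

def step(positions, radius=5.0, pull=0.1):
    """Each agent drifts toward the average position of the neighbours
    within `radius` - cohesion from purely local information."""
    new = []
    for i, p in enumerate(positions):
        nearby = [q for j, q in enumerate(positions)
                  if j != i and abs(q - p) <= radius]
        if nearby:
            centre = sum(nearby) / len(nearby)
            p += pull * (centre - p)          # drift toward local centre
        new.append(p)
    return new

flock = [0.0, 2.0, 4.0, 20.0]   # 1-D positions; the last agent is out of range
for _ in range(10):
    flock = step(flock)

spread = max(flock[:3]) - min(flock[:3])
print(round(spread, 2))   # the three connected agents have bunched together
```

The isolated agent at position 20 never moves - it hears no one - which mirrors the sensor-network point: a node's position and role in the network, not its own sophistication, is what keeps it part of the whole.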

Smart dust

Smart dust is the fun thing we do in the lab: it’s taking the notion of a sensor network made of tiny devices with a radio a step further and allowing the nodes to scurry around. It’s become a bit robotic. Maybe they can join and become a 3D thing. It’s taking an idea, thinking about the future and thinking about the challenges.

If you know anything about Neal Stephenson’s work - he’s a science fiction writer - he had this stuff called ‘matter’, which clumped together to form, well, matter. I’ve always imagined it to be like a microwave oven. You say: ‘I need a new bow for my dress,’ and it’ll go and compile this essence of smart dust into something. So, we took that concept and thought about how to achieve it and what the challenges would be.

One of the challenges is network capacity. If you had 10,000 ‘smart dusts’ in your hand, they couldn’t communicate with each other simultaneously. If they needed to form a 3D thing or scurry together, you’d have to have tricks in your network protocol to allow for that to happen. Those ideas are fun to research but we can’t really think of applications for this. Sometimes, in research, you do a lot of work to meet challenges that need to be solved right now; sometimes you want to do what-if experiments. This is one of those.