Professor of Quantitative Modelling and former Head of the School of Informatics at the University of Edinburgh, Jane Hillston, was awarded the Lovelace Medal 2023 in recognition of her work developing new approaches to modelling both artificial and natural systems by combining elements of formal languages with mathematical modelling. She discusses her work with Brian Runciman MBCS.
Combining elements of formal languages with mathematical modelling — could you link those two ideas together for us?
Formal languages are the basis for programming languages, but often in computer science we also use them for understanding whether things will behave correctly. Many people will be familiar with the idea of a state machine, where you examine the possible states a process may go through and perform an analysis to ensure that you avoid undesirable states.
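As a flavour of that kind of analysis, a few lines of Python can check that a toy state machine never reaches an unwanted state. The states, transitions and 'bad' state below are invented purely for illustration, a minimal sketch rather than any real verification tool.

```python
# A minimal sketch (illustrative only): checking that a toy state machine
# can never reach an undesirable state. All states here are hypothetical.
from collections import deque

transitions = {
    "idle":       ["requesting"],
    "requesting": ["granted", "denied"],
    "granted":    ["idle"],
    "denied":     ["idle"],
}
bad_states = {"deadlock"}          # states we want to prove unreachable

def reachable(start: str) -> set[str]:
    """Breadth-first exploration of every state reachable from `start`."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

assert reachable("idle").isdisjoint(bad_states)   # the correctness check passes
```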
What I do, beyond simply looking at correctness from that point of view, is think about the resources that a system is going to use. In performance modelling where I started, the resources are going to be the CPU, the disk and perhaps channel bandwidth. It's really important to know how much you're going to use and when. That’s when we get into a richer mathematical representation than something like the state machine; people can do small examples directly in the maths, but we found that modelling modern large distributed systems like that is very difficult.
By bringing in these ideas from formal methods that we use for functional correctness, we can develop small programming languages that then compile down into the mathematics. This helps people express systems more naturally, using the abstractions they're used to in terms of states and transitions, and it generates an underlying mathematical model that can be analysed.
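The compilation step she describes can be sketched very roughly in Python: a handful of rate-labelled transitions is turned into the generator matrix of a continuous-time Markov chain, which is then solved for its long-run behaviour. The states and rates are invented for the example; this is the general shape of the idea, not Hillston's actual tooling.

```python
# A toy high-level description of states and rate-labelled transitions,
# "compiled" into a continuous-time Markov chain generator matrix and analysed.
# All states and rates are assumptions made for this sketch.
import numpy as np

states = ["idle", "busy", "waiting"]                 # hypothetical system states
rates = {                                            # (from, to): rate per unit time
    ("idle", "busy"): 3.0,
    ("busy", "idle"): 2.0,
    ("busy", "waiting"): 1.0,
    ("waiting", "idle"): 4.0,
}

index = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))
for (src, dst), rate in rates.items():               # off-diagonal entries
    Q[index[src], index[dst]] = rate
np.fill_diagonal(Q, -Q.sum(axis=1))                  # generator rows sum to zero

# The steady-state distribution pi solves pi @ Q = 0, with pi summing to 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.append(np.zeros(len(states)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(dict(zip(states, pi.round(3))))                # long-run proportion of time in each state
```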
I know you use the example of biological systems, bacteria and bike sharing systems. Can you explain a little about that?
An area called systems biology developed about 20 years ago. People started regarding cell biology in terms of processes through which cells evolve — systems based on states and interactions — rather than individual elements. They began adopting the kind of modelling languages that we had been developing within computer science, so we modified the language for biochemistry and studied a variety of different biological processes. We could build models that explained the circadian rhythms of plants in terms of the biochemical interactions within the cell, and we even looked at larger biological scales, examining whole organisms like E. coli and how they make decisions in response to their environment. Trying to understand how those systems work used very similar techniques.
Bike sharing came about through my involvement in a larger EU project looking at smart cities. In smart cities, data feedback loops can help make better use of resources.
Research shows that bike-sharing users' satisfaction with a scheme, and their likelihood of using it again, depend on both easily finding a bike and easily parking it. But some cities have a real problem with bike stations becoming full; in Paris, for example, a strong preference for not cycling up hills means stations at the bottom of hills fill up much faster. We did some modelling that investigated both the bike use and the redistribution. Consequently, Paris introduced a discount incentive for people to take their bike further, to an emptier station. This meant that people were doing the rebalancing themselves, instead of vehicles driving bikes around.
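As a rough illustration of the kind of question such a model can answer, the sketch below simulates a single downhill station with and without an incentive that diverts some riders to a less full neighbour. Every parameter and probability is an assumption made for the example, not a figure from the Paris study.

```python
# A minimal, invented simulation: how often do riders find one busy station full,
# with and without a discount that persuades some of them to ride a bit further?
import random

def simulate(accept_prob: float, capacity: int = 20, steps: int = 100_000,
             p_return: float = 0.5, p_hire: float = 0.35, seed: int = 1) -> float:
    """Fraction of returning riders turned away because the station is full."""
    rng = random.Random(seed)
    bikes, turned_away, returns = capacity // 2, 0, 0
    for _ in range(steps):
        if rng.random() < p_return:                   # a rider arrives to park a bike
            returns += 1
            nearly_full = bikes >= 0.8 * capacity
            if nearly_full and rng.random() < accept_prob:
                pass                                  # takes the discount, rides on to another station
            elif bikes == capacity:
                turned_away += 1                      # no diversion and no space left
            else:
                bikes += 1
        if rng.random() < p_hire and bikes > 0:       # a rider hires a bike from the station
            bikes -= 1
    return turned_away / returns

print("no incentive:  ", round(simulate(0.0), 3))
print("with incentive:", round(simulate(0.3), 3))
```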
Did your work start because you saw those kinds of problems and were looking for solutions, or were you looking at the stochastic processes?
I was in Edinburgh working on mathematical models of resource use at the same time that Robin Milner was developing formal languages to model the behavioural correctness of systems, so I came at it from both the mathematical side and through developing a love of those formal languages.
My PhD thesis looked at bringing the two worlds together; the applications came later. To some extent, BCS played a role: my thesis was about the potential of marrying the mathematical models with the formal languages and methods, and winning the BCS/CPHC Distinguished Dissertation Award raised its profile. It was through that exposure that I got the opportunity to work in different application domains, which really helped my career blossom.
Have you found that cross-disciplinary work is where the really interesting things are happening?
We're involved with the Advanced Care Research Centre at Edinburgh University, looking at ways technology can make people's later lives more comfortable and allow them to live independently for longer. We're working with engineers who are developing ambient sensors in houses that can check people's movements etcetera, and the part of the project I'm involved in is using that data to look at people's daily patterns. By having sensors on cupboard doors and appliances you can build a real picture of somebody's ‘normal’. The idea is that through the models we are developing, we would be able to detect changes in routine that indicate it's worth somebody checking that that person is alright.
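A very simplified sketch of that idea, with invented data rather than the project's actual models: learn a baseline for one kind of sensor event and flag days that deviate sharply from it.

```python
# A minimal sketch (invented data, not the Advanced Care Research Centre models):
# learn someone's typical daily count of one sensor event and flag unusual days.
import statistics

# hypothetical daily counts of kitchen-cupboard openings over recent weeks
baseline = [14, 16, 15, 13, 17, 15, 14, 16, 15, 14, 13, 16, 15, 17]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def unusual(todays_count: int, threshold: float = 3.0) -> bool:
    """Flag the day if it is more than `threshold` standard deviations from normal."""
    return abs(todays_count - mean) > threshold * stdev

print(unusual(15))   # False: an ordinary day
print(unusual(4))    # True: a big drop in activity, worth someone checking in
```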
So in that example of somebody being monitored for health reasons, the person's state of mind or how they're feeling on the day might be the random element in that?
Yes. Generally, people are just random; we're not clocked like computer systems and we don't do things in fixed time. As soon as you have human users (and this applies back to computer systems too, even in performance modelling), the systems become stochastic, because the users themselves bring in that randomness.
The phrase ‘stochastic process algebras’ comes up in your work. Can you tell us a little bit about what that means?
Process algebras are formal modelling techniques, of which Robin Milner was an originator. They describe what's happening in terms of agents, which we can think of as processes that undertake various actions. It's similar to a state machine view of the world — except rather than viewing it as one monolith, you can have many agents working concurrently.
In my PhD I augmented models that describe processes and actions; each action has some probability of happening and some duration, and that's where the stochastic element comes in. The original process algebras would correctly say 'this is what happens next', but couldn't tell you when. The languages I designed could add 'here is roughly when'. It's not fixed, it's stochastic: an action is going to have a duration, and there will be other things with their own durations racing with it, competing for resources. Then you have a way of mapping what's most likely to happen next, but it's not definitive.
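That race between competing actions is easy to illustrate. In the sketch below, two hypothetical actions each have an exponentially distributed duration, and whichever samples the shorter duration happens first; the simulated probability matches the analytic ratio of the rates. This is the flavour of the race idea, not the full semantics of a stochastic process algebra.

```python
# A minimal sketch of the 'race' between two enabled actions, each with an
# exponentially distributed duration. Rates are hypothetical.
import random

rng = random.Random(42)
rate_serve, rate_fail = 4.0, 1.0          # events per unit time (assumed values)

wins, trials = 0, 100_000
for _ in range(trials):
    t_serve = rng.expovariate(rate_serve)  # sampled duration of the 'serve' action
    t_fail = rng.expovariate(rate_fail)    # sampled duration of the 'fail' action
    if t_serve < t_fail:                   # the faster action wins the race
        wins += 1

print("simulated P(serve happens first):", wins / trials)
print("analytic  P(serve happens first):", rate_serve / (rate_serve + rate_fail))
```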
Quantum computing is also based on probabilities — is there any potential crossover with your work there?
A colleague, Professor Elham Kashefi, who is Scientific Director at the National Quantum Computing Centre, has often encouraged me to think about quantum. But you start to get ‘double stochasticity’ in a sense because quantum processing itself encompasses all probabilities, progressing all possibilities at once. In my work I take a branching time view; though I may have a choice of different possibilities each time, I assume that just one possibility occurs. Quantum is more of an all-time view, so all the possibilities advance until you pick something to measure. It would be an interesting challenge to try and marry the two. We recently launched the Quantum Software Lab at the University of Edinburgh — so now that we are moving towards real quantum programs, that's a possibility for the future.