Professor Gordon Plotkin FRS FRSE leads this year’s Lovelace lecture and will review progress in both probabilistic and differentiable programming.

He was awarded the 2018 Lovelace Medal for his invention of 'structural operational semantics' and his contributions to a range of other topics.

This lecture is in association with Arm.



Headline Speaker: Professor Gordon Plotkin, University of Edinburgh

Introductory Speaker: Peter O’Hearn, UCL and Facebook

Vote of thanks: Jane Hillston, University of Edinburgh



17:45-18:30 - Registration with tea/coffee

18:30 - Lecture commences

20:00 - End of lecture

20:00-21:00 - Networking buffet and drinks reception (optional)

21:00 - Close



Programming languages are central to computer science. They are of enormous practical importance and they are a source of intriguing theoretical questions. It is particularly exciting when the two come together. There is increasing evidence of this in machine learning, and so, in this talk, we explore a nascent area: languages for learning.

Probabilistic modelling and inference, and deep learning, are two prominent examples of machine learning. In both cases one builds a model and combines it with observations to generate hypotheses.

In the first case, one typically samples from a distribution over hypotheses; in the second, one typically seeks a hypothesis optimising an objective function. As models and observations become larger and more complex, one needs tools to generate and handle them; it then becomes increasingly natural to seek specialised programming languages. Probabilistic programming languages have already appeared. In the area of deep learning we have yet to see them, but there is growing interest in differentiable programming languages (differentiable, because one needs to calculate gradients in order to optimise).
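To make the probabilistic case concrete, here is a toy illustration (not from the lecture, and deliberately naive): a probabilistic program is a program that makes random choices, and inference conditions those choices on observed data. The sketch below infers a coin's unknown bias from five hypothetical flips by rejection sampling; a real probabilistic programming language would automate this inference far more efficiently.

```python
import random

# A toy probabilistic program: sample a hypothesis (the coin's bias)
# from a uniform prior, then simulate the data that hypothesis predicts.
def model():
    bias = random.random()                           # prior over the bias
    flips = [random.random() < bias for _ in range(5)]
    return bias, flips

# Hypothetical observations: four heads, one tail.
observed = [True, True, True, True, False]

random.seed(0)

# Rejection sampling: keep only the hypotheses whose simulated data
# exactly match the observations. The kept samples are draws from the
# posterior distribution over the bias.
accepted = [bias for bias, flips in (model() for _ in range(100_000))
            if flips == observed]

posterior_mean = sum(accepted) / len(accepted)
print(f"posterior mean bias ~ {posterior_mean:.3f}")
```

With a uniform prior and these observations the exact posterior mean is 5/7 (about 0.714), and the sampled estimate lands close to it. The point of a probabilistic language is that the programmer writes only the model; the language supplies the (much cleverer) inference.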

In this talk we review progress in both probabilistic and differentiable programming. In both cases there are fascinating challenges in combining mathematical requirements with programming ambitions: one needs mathematical notions (of probability distributions in one case, of differentiation in the other) general enough to apply to the programs in question. In both cases one wishes to accommodate as broad a range of program types and computational effects as possible.
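The differentiable case can also be illustrated with a minimal sketch (again illustrative, not the lecture's material): forward-mode automatic differentiation via dual numbers, one of the basic mechanisms a differentiable language could build in. Each value carries its derivative alongside it, and arithmetic propagates both.

```python
# Forward-mode automatic differentiation with dual numbers: every value
# carries (value, derivative), and the arithmetic rules propagate both.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def grad(f, x):
    """Derivative of f at x, obtained by running f on a dual number."""
    return f(Dual(x, 1.0)).deriv

# d/dx (3x^2 + 2x + 1) at x = 2 is 6x + 2 = 14
print(grad(lambda x: 3 * x * x + 2 * x + 1, 2.0))  # → 14.0
```

Here an ordinary program is run, unchanged, on an enriched kind of number and yields its own gradient; the semantic question the lecture touches on is how to make such a notion of differentiation apply to as wide a class of programs as possible.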

There are other areas of machine learning and one can ask whether programming notions can also help there; reinforcement learning may provide an example. We also ask if there is commonality to the various forms of learning. After all, they all involve methods of producing hypotheses from models and data. Encouragingly, as we explain, there are semantical notions which seem to embody that commonality. Perhaps, one day, these may be usefully incorporated in programming languages to provide a common framework for languages for learning.



Headline Speaker: Professor Gordon Plotkin

After an undergraduate degree in Mathematics and Physics at Glasgow University, Gordon Plotkin obtained a doctorate in Artificial Intelligence from Edinburgh University in 1972. He then joined the faculty at Edinburgh, becoming a full professor in 1986. He is a Fellow of the Royal Society, a member of Academia Europaea, and a Fellow of the Royal Society of Edinburgh.

He may be best known for his work on the operational semantics of programming languages, in particular for Structural Operational Semantics. He has also contributed to many other areas of the semantics and logic of programming languages; his most recent interests include an algebraic approach to programming language effects and the theory of differentiable programming languages. He is the recipient of both the ACM Programming Languages Achievement Award and the Royal Society Milner Award.

Introductory Speaker: Peter O'Hearn

Peter O'Hearn is a computer scientist who has made major contributions to the science and engineering of program correctness. His research stretches from abstract topics such as mathematical logic through to the automated analysis of industrial software used regularly by billions of people.

Peter is known particularly for separation logic, a theory he developed with John Reynolds which opened up new possibilities for scaling logical reasoning about code. This built upon prior work of Peter and David Pym on logic for resources. After over 20 years as an academic, Peter took a position at Facebook in 2013 with the acquisition of a startup he cofounded, Monoidics Ltd. The Infer program analyzer, developed by Peter's team, has resulted in tens of thousands of issues being fixed by Facebook engineers before they reach production. Infer is also used at Amazon, Spotify, Mozilla and other companies.

Peter has received a number of awards for his work, including the 2011 POPL Influential Paper Award, the 2016 CAV Award and the 2016 Gödel Prize. Peter is a Fellow of the Royal Society, a Fellow of the Royal Academy of Engineering, and he received an honorary doctorate from Dalhousie University in 2018.

Vote of thanks: Jane Hillston

Jane Hillston was appointed Professor of Quantitative Modelling in the School of Informatics at the University of Edinburgh in 2006, having joined the University as a Lecturer in Computer Science in 1995. She is currently the Head of School.

Jane Hillston’s research is concerned with formal approaches to modelling dynamic behaviour, particularly the use of stochastic process algebras for performance modelling and stochastic verification. She has developed high-level modelling languages for application domains ranging from computer systems to biological processes and collective adaptive systems. Her PhD dissertation was awarded the BCS/CPHC Distinguished Dissertation award in 1995 and she was the first recipient of the Roger Needham Award in 2005. She has published over 100 journal and conference papers and held several Research Council and European Commission grants.

BCS Lovelace Lecture 2019 - Languages for learning
Date and time: Monday 4 March, 6:00pm - 9:00pm
Venue: The Royal Society, 6-9 Carlton House Terrace
Tickets: 5.00 - 30.00 GBP