This was the question debated at a Computer Journal Lecture, which began with the following presentation by Robin Milner of Cambridge University, followed by the discussion below.
Pervasive or ubiquitous computing has become an aspiration of the computing community in the last 15 years.
Its vision, pioneered by Mark Weiser, is that populations of computing entities - hardware and software - will become an effective part of our environment, performing tasks that support our broad purposes without our continual direction, thus allowing us to be largely unaware of them. The vision arises because the technology begins to lie within our grasp.
This simply stated aspiration arouses questions ranging across computer science and beyond. Here are a few of them.
The UK Computing Research Committee (UKCRC) has mounted an exercise posing a number of Grand Challenges for computing research over the next two decades. One of these Challenges, Ubiquitous Computing: Experience, Design and Science, proposes to develop ubiquitous computing by tackling these four classes of question in a closely coupled manner.
The questions are already addressed to a varying extent for conventional software, but they are raised to an unprecedented level of difficulty by the following qualities that ubiquitous systems will possess in high degree.
First, since we shall be largely unaware of the activity of such systems, they will continually make decisions hitherto made by humans.
Second, new technologies will allow such systems to be vast - orders of magnitude larger than any we know, including the World Wide Web - and the difficulties of designing and understanding them will surely grow nonlinearly with their size.
Third, the systems must adapt continually to new requirements, retaining reliability and without interrupting service (on whose continuity we shall increasingly depend).
Fourth, though individual systems may be designed independently, their pervasive nature will inevitably lead to unplanned interaction between them.
This tangle of concerns, about future systems of which we have only hazy ideas, will deﬁne a new character for computer science over the next half-century. What sense can we make of the tangle from our present knowledge?
In answering this, it is helpful to expose an (intended) ambiguity in the title of this lecture. The question 'Shall we understand it?' can be interpreted passively ('Will understanding be possible?') or actively ('Do we wish to ensure understanding?').
A recent report contains an admirably detailed analysis of the risks and effects of ubiquitous computing, but declares of ubiquitous systems (p.17) that '... no one can cope with the complexity of such distributed systems, neither mathematically nor legally'.
So the report interprets our question passively, and answers it 'No'. Since the systems in question are to be designed by us, my response is quite the opposite; indeed, our role as computer scientists demands that we interpret the question actively and answer it 'Yes' - we do indeed wish to design in a way that admits understanding.
This understanding must exist not only for scientists, but also to a certain degree for the general public. It may be argued that a user, being largely unaware of the performance of a ubiquitous computing system (UCS), has no need to understand it. But we do not apply this argument to the natural sciences.
For example, most of us are largely unaware of the chemistry of our bodies, yet nowadays public understanding of the natural sciences is increasingly sought. It would be paradoxical not to seek it for the science of our own artefacts.
So, as we aim to create new systems that we do not yet fully understand, we shall need to create models - and modelling tools - that help us to improve our understanding, both as scientists and as users. What kinds of model do we want?
The discussion below included contributions from:
1. Morris Sloman, Department of Computing, Imperial College London
2. Martyn Thomas, Martyn Thomas Associates, UK
3. Karen Spärck Jones, Computer Laboratory, University of Cambridge
4. Jon Crowcroft, Cambridge University
5. Marta Kwiatkowska, University of Birmingham
6. Paul Garner, Head of Pervasive ICT Research Centre, BT Group Chief Technology Office, British Telecommunications
7. Nicholas R Jennings, School of Electronics and Computer Science, University of Southampton
8. Vladimiro Sassone ECS, University of Southampton
9. Eamonn O’Neill, Department of Computer Science, University of Bath
10. Michael Wooldridge, Department of Computer Science, University of Liverpool
11. Carsten Maple, Institute for Research in Applicable Computing, Department of Computing and Information Systems, University of Luton
12. George Coulouris, Digital Technology Group, Computer Laboratory, Cambridge University
13. Dan Chalmers, Department of Informatics, University of Sussex
As a ﬁrst attempt to answer the latter question, I would like to discuss an insight that emerged recently in discussion of our manifesto for the Grand Challenge.
It arises from the fact that humans will be embedded in our ubiquitous systems just as deeply as computational artefacts. What kinds of model do we need, to understand this socio-technical combination?
Consider, for example, an integrated model of a healthcare system; it must embrace not only the technologies of heart-monitoring and patient records, but also the behaviour of patients and ambulance drivers, and the interplay between the two. We may adopt a holistic view, and seek an integrated model of both human and artefact.
On the other hand we may adopt a dualistic view, and focus on modelling the artefactual parts of a system; this allows us to deduce how the artefact will behave under hypothetical human behaviour, and also to mount live experiments with people, feeding our ﬁndings back to modify the design of the artefact.
The view we take inﬂuences the research we do; in particular, research from the holistic viewpoint will give early attention to models of human behaviour, while research from the dualistic viewpoint is likely to proceed - at least initially - without modelling human behaviour. There is clearly a spectrum between these two views. The extremal views are antithetical, but there is no need to settle for one over the other; we need both!
The contrast between them gives researchers a strategy: a dialectic that can help us to clarify both points of view and synthesize them. Thus the holistic and dualistic approaches should be pursued in a closely coupled manner. Later I shall say more about how we hope to embark on this path.
Before that, I want to take a more detailed look at the dualistic approach, which characterizes the software practice and theory developed over the last half-century. To what extent can this practice and theory be extrapolated to the greater challenge of ubiquitous systems?
Ubiquitous systems will involve many varieties of cooperation between hardware and software. But each system, whether for medical care or home management or trafﬁc control, will not stand alone.
As we have already observed, by their very nature they will tend to be interconnected to a greater or lesser extent. Their aggregation will constitute a vast and ever-growing organism.
I have been tempted to call it the global ubiquitous computer; it is likely to become the most complex artefact in human history. To what extent can we scale up the present means of software production and analysis, to build this artefact with full understanding?
Let us use the term software science to refer to the principles, concepts, models and constructions that we use to build and understand software systems, and ask to what extent they will expand to perform the same task for ubiquitous systems.
Considerable success has been achieved over the last half-century, in modelling subtle and varied features of modern software. This science stretches from highly developed theories of sequential computing and databases, to theories that are less developed - but already enjoy fair consensus - for concurrent interactive systems and distributed data.
Here are just some of these contributory theories, roughly in order of discovery: universal machines, automata theory, formal language theory, automation of logics, program semantics, speciﬁcation and veriﬁcation disciplines, type theories, process calculi, temporal and modal logics, calculi for mobile systems, intelligent agents, semi-structured data, game-theoretic models.
This is a substantial science base, and much of it is equipped with automated tools for design and analysis, such as simulation and model-checking.
Nonetheless, its impact on industrial practice has been incomplete and haphazard. Why? A dominant reason is the extraordinary pace of technological advance, and the corresponding pace of change in market expectations.
The science has been aimed at a moving target, attempting to underpin the ever more complex designs made possible by (for example) networking technology. Another reason is that theories have a longer gestation than opportunistic design practices. Three effects can be observed:
In other words, theories have not sufﬁciently informed software design; often they have been merely retroﬁtted to it, revealing weaknesses too late to mend them. In some cases, for example for the year 2000 problem, the appropriate theories were quite modest and already two decades old.
The point of this example is not that a disaster occurred: it didn’t. But enormous amounts of time and money were wasted, since no one knew whether it would occur or not.
We are all well aware of large failings of software systems, for example the brittle quality of legacy software. It is pointless to attribute blame for this phenomenon; better to regard it as the penalty of resounding success in equipping a hungry world market with computing power.
This point is expressed forcefully by G. Robinson, quoted in a recent report, The Challenge of Complex IT Projects, prepared by the Royal Academy of Engineering with BCS:
For another analysis of this question see the Impact Project, an activity under the auspices of SIGSOFT, the ACM special interest group on software engineering. The present paper can be seen as narrowing the focus of the Impact Project to the specific problem of ubiquitous computing.
The pace of technological change and the ferociously competitive nature of the industry ... lead to the triumph of speed over thoughtfulness, of the maverick shortcut over discipline, and the focus on the short term.
The report studies the current practice of software production in depth, with many recommendations for improving it. This is valuable, but creates a risk. The report's recommendations are overwhelmingly about industrial process; virtually no attention is paid to software science as we have deﬁned it here.
The risk is that attention will be restricted to these recommendations; it will thus weaken the link between the software industry and software science, not strengthen it.
Can the current mode of software production be not only placed on a ﬁrmer footing, but also scaled up to serve the purposes of ubiquitous computing - whose huge systems will be less and less easy to dismount for maintenance, repair or replacement? I see no reason to expect that it can; a more scientiﬁc approach is urgently needed.
But is a scientific approach even possible? It is instructive to compare progress in software science - a 'science of the artificial' in the terms of Herbert Simon - with progress in sciences of the natural world.
Whether we think of physics, chemistry or biology, the contrast is the same; these sciences are driven by the urge to understand, and ours by the urge to build. The natural sciences are distinguished by the dignity of their subject matter, which remains undisturbed by our understanding of it. In software, on the other hand, each new technology moves the goalposts.
Furthermore, the artefacts of software are virtual, and remarkably free from physical constraints. These stark features appear to lead to a stark conclusion: we can expect no convergence in software science - it will always trail the latest technological advance, and never stabilize its constructions.
Before we yield (or not) to this depressing conclusion, it is worth tracing the development of programming languages over the last half century.
The seed of early languages is found not in technology but in an amazingly powerful logical notion: the universal computer, which can be programmed to simulate any other.
This notion arises in different models of computing. The most famous model is Turing's 'paper machines', as he called them; but the one that most informed computer design - and hence language design - was the model of register machines pursued by von Neumann, Minsky and others.
The class of register machines also has a universal member; if this were not so, our whole approach to computing would be unimaginably different!
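To make this universality concrete, here is a minimal sketch in Python (the instruction format is invented for illustration, loosely in the style of Minsky's increment/decrement machines, not any historical formulation): the interpreter accepts any register-machine program as data and simulates it, which is precisely the role a universal member of the class plays.

```python
# A minimal Minsky-style register machine (illustrative, not historical).
# A program is a list of instructions, each either
#   ('inc', r, next)            - increment register r, go to 'next'
#   ('dec', r, next, on_zero)   - if r > 0 decrement and go to 'next',
#                                 otherwise go to 'on_zero'
# The interpreter itself is a 'universal' program for this class:
# it takes any such program as data and simulates it.

def run(program, registers):
    """Execute a register-machine program; halt when pc leaves the program."""
    pc = 0
    while 0 <= pc < len(program):
        instr = program[pc]
        if instr[0] == 'inc':
            _, r, nxt = instr
            registers[r] = registers.get(r, 0) + 1
            pc = nxt
        else:
            _, r, nxt, on_zero = instr
            if registers.get(r, 0) > 0:
                registers[r] -= 1
                pc = nxt
            else:
                pc = on_zero
    return registers

# Example program: add register 1 into register 0, i.e. r0 := r0 + r1.
add = [
    ('dec', 1, 1, 2),   # 0: if r1 > 0, decrement it and go to 1; else halt
    ('inc', 0, 0),      # 1: increment r0, loop back to 0
]
print(run(add, {0: 3, 1: 4}))   # {0: 7, 1: 0}
```

The point of the sketch is the separation of program from interpreter: because `run` treats programs as data, the class it interprets contains, in effect, a universal member.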
For early programming languages, model and language were still closely wedded; John Backus, designer of Fortran, was able to boast that its semantics took barely 51 pages (many of them about syntax); the semantics was simply the register-machine model shining through.
What models shine through modern computer languages? After all, these languages mediate between all computing designs and their implementation, and are therefore indicators of the degree to which coherent models inform our design.
To an extent, they do inform it. For example theories of data structures, types and concurrent processes have all inﬂuenced languages. But, as I have always taught students, every language goes too far in combining concepts whose interplay is only partly understood.
In other words, despite the aspiration of logic programming, our languages are not executable parts of understood theories. That is one way to explain the embarrassing phenomenon of inscrutable legacy software; it could have been averted by an understood theory for modular construction of systems whose requirements will vary unpredictably, but we have no such theory.
How can we arrive at such a theory? Ubiquitous computing, because of its size and complexity, cannot do without it. But a universal theory is a distant goal. The very breadth and difﬁculty of ubiquitous computing forces us to approach it incrementally. What will be the increments of this broad theory?
There is hope here: any application can be viewed at different levels of abstraction, and at a particular level there are often only a few relevant concepts. Moreover, a theorization of these concepts can lead to a rigorous model speciﬁc to the application at that level of abstraction.
Such speciﬁc theories, for varying applications and varying levels, will be increments toward an ultimate theory of ubiquitous computing. In this way we may at least approach such a theory, even if it is limitless and therefore cannot be fully attained.
Moreover, in each speciﬁc application domain we can seek to integrate its models with the languages used to express design.
I am involved with two experiments in speciﬁc theories that integrate model and language.
In each case, the aim is for experts in a particular application to work with theorists, in an attempt to identify concepts that underpin the way users, designers and analysts all think about the application domain, and thence to distill a model and language that are coordinated.
The ﬁrst experiment is in business processes; these are not strictly pervasive systems, but they provide an excellent example of how to coordinate model and language.
The second concerns movement in a sentient environment; this admits interesting ubiquitous system designs, and also illustrates that there is a ﬂexible boundary between the human and artefactual contribution to a system.
Example 1: Business processes. Part of this initiative is the aim of the Choreography Working Group of W3C, the World Wide Web Consortium, to provide a standard platform for defining automated business processes.
There has been much exploration of rigorously based methods to describe business processes. At a recent workshop an independent group, drawn from commerce and academia, formed the Process Modelling Group to investigate these methods.
The aim is not to impose some chosen theory and method, but rather for theorists to discuss with practitioners the different purposes that are to be served by rigorous process description: purposes as distinct as writing programs on one hand, and explaining to senior management on the other hand. These different modes of description then become facets of a single theory for business processes.
In the W3C Choreography group this discussion takes the form of designing CDL (Choreography Description Language) with the help of translations between CDL and mathematical process calculi, such as Petri Nets and the Pi calculus [12, 13]. It is already clear that commercial experts and scientists learn more than they expected from each other.
The concepts that arise are all familiar to theoreticians; notions like workﬂow and inter-process (contrast intra-process) communication. They also arise, but differently, in the way that different groups (e.g. managers and implementers) think about a system. Thus the models and languages come together.
Example 2: Mobility in a sentient environment. A common feature of ubiquitous computing will be mobility of people, of hardware (sensors and effectors) and of software. Each application may have its own pattern of mobility.
How does one describe such a pattern, and the purpose to which it is put? The Bigraphical Programming Language (BPL) group at the IT University of Copenhagen is exploring the definition of a programming language, derived from the bigraph model of mobility, which in turn uses ideas from the calculus of mobile ambients.
This language and model will be used to describe and analyse patterns of movement in a sentient environment, such as a building. The design of such systems poses hard problems in handling space, as discussed for example by Hopper, and by Dix et al. Languages and models do not themselves solve these problems, but they provide tools for design and analysis of proposed solutions.
In one such system we may imagine people locating each other, or locating devices, for a variety of purposes. The BPL group will model such a system that is reﬂective; agents will keep track of their own actions, in order to perform tasks more effectively and to preserve a report of their activity.
This example involves only a few concepts and properties - an advantage that we noted earlier. A reﬁnement of the experiment may raise questions of whether certain decisions are taken by humans or by software agents.
For example Alice may wish to locate Bob, but not to interrupt him if he is busy. Can sensors and software deﬁne ‘busy’ clearly enough to decide whether Bob is busy, e.g. on the basis that his door is open/shut, and there are already some/no other people in his ofﬁce? Or will it simply provide Alice with such information and let her make the decision? This is a case for a subtle collaboration between human and artefact.
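The boundary between deciding and informing can be sketched in a few lines of Python (the rules and their thresholds are invented purely for illustration): the system classifies Bob as busy or free only when the sensor evidence is unambiguous, and otherwise hands the raw information to Alice.

```python
# Hypothetical sketch: sensors report Bob's door state and office occupancy.
# The system decides 'busy'/'free' only when the evidence is unambiguous,
# and otherwise defers the judgement to the human (Alice).

def assess_busy(door_open, others_present):
    """Return 'busy', 'free', or 'defer' (i.e. let Alice decide)."""
    if not door_open and others_present:
        return 'busy'       # closed door with people inside: clearly busy
    if door_open and not others_present:
        return 'free'       # open door, empty office: clearly free
    return 'defer'          # mixed signals: show Alice the raw facts

print(assess_busy(door_open=False, others_present=True))   # busy
print(assess_busy(door_open=True, others_present=True))    # defer
```

Even this toy rule makes the design question visible: the 'defer' branch is exactly the point at which responsibility passes from artefact to human, and where it sits is a design decision, not a technical inevitability.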
These examples illustrate features that we can expect to ﬁnd in ubiquitous computing.
First, they make clear the need for experimental research, even in simple applications, in order to deﬁne systems that are humanly acceptable.
Second, each of them shows that a non-trivial model may involve only a few concepts, and hence suggests that a viable research strategy for ubiquitous computing can be based on such case studies. We may call these foothill projects; I shall say more about them later.
Finally, the disparity between our examples leads us to expect a multiplicity of theories to underpin ubiquitous computing. But, as a result of such piecemeal explorations, we may hope to unify the speciﬁc theories into some kind of whole. Let us now speculate on the nature of this whole.
What concepts and properties are relevant to describing a ubiquitous system? Many are relevant precisely because of the degree of autonomy required.
We need only look at familiar disciplines, such as artiﬁcial intelligence and agent technologies, to see that the range of concepts is large and open-ended.
Here are a few: authenticity, belief, communication, compilation, connectivity, continuous space, continuous time, data protection, delegation, duty, encapsulation, failure, game theory, history, knowledge, implementation, intention, latency, locality, mobility, negotiation, obligation, power supply, protocol, provenance, reﬂectivity, security, speciﬁcation, stochasticity, trust, veriﬁcation, workﬂow.
Perhaps the most striking thing about this list is its heterogeneity. Indeed, in many applications - and certainly in exploratory ones - we hope to be concerned with only a few concepts.
For example in modelling interaction in a sentient environment, as outlined above, we are concerned mainly with locality, mobility and reﬂectivity; a more reﬁned model will introduce stochasticity. In general, each model will characterize the relevant concepts either in logical or behavioural terms.
In the case of concepts like belief or trust, as they pertain to software agents, we shall not be concerned with the full human interpretation; instead we expect to use simpler pragmatic notions that are both deﬁnable and implementable.
Moreover, any system must be modelled at higher and lower levels of abstraction. Each level of the model deﬁnes certain concepts rigorously, and employs them to describe the system's admissible behaviour.
Then, crucially, we arrive at a tower of models, cemented by the way in which a lower-level model realizes a higher-level one.
Consider the example of trust. In a higher-level model M, trust would be axiomatized; this is how it could be treated in a system of intelligent agents as described by Wooldridge.
A simple axiom for trust may be transitivity: if A trusts B and B trusts C then A trusts C. (In a more refined model we may admit degrees of trust; then this simple law of transitivity has to be refined.) The model M would characterize admissible behaviour in terms of trust; it may declare that A allocates a resource to B only when A (sufficiently) trusts B. (*)
What would be a lower-level model N that correctly realizes M, but does not itself entertain the notion of trust? N may instead be concerned with histories of communication events. Then the realization of M by N would define 'A (sufficiently) trusts B' to mean that A's history of communications with B has a certain quality. Furthermore, for the realization to be sound, the property (*) must hold in N when trust is interpreted in this way.
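This two-level structure can be made concrete in a short Python sketch (the particular 'quality' predicate on histories is an invented stand-in): the lower-level model N records only communication histories, and the realization defines the higher-level notion of trust as a property of those histories; M's allocation rule is then stated, and must be checked, in those terms.

```python
# Illustrative sketch of a two-level model. The lower-level model N records
# only histories of communication events between agents. The realization
# *defines* the higher-level notion: 'A (sufficiently) trusts B' iff A's
# history with B has a certain quality - here, invented for illustration,
# at least 3 recorded exchanges and none of them failed.

from collections import defaultdict

history = defaultdict(list)   # model N: (A, B) -> list of events

def record(a, b, event):
    history[(a, b)].append(event)

def trusts(a, b, threshold=3):
    """The higher-level 'trust', realized as a property of N's histories."""
    h = history[(a, b)]
    return len(h) >= threshold and 'failed' not in h

def allocate(a, b, resource):
    """M's rule (*): A allocates a resource to B only if A trusts B."""
    if not trusts(a, b):
        raise PermissionError(f'{a} does not (yet) trust {b}')
    return f'{resource} allocated by {a} to {b}'

for _ in range(3):
    record('A', 'B', 'ok')
print(allocate('A', 'B', 'disk'))   # disk allocated by A to B
```

Note that the transitivity axiom of M becomes a proof obligation on the realization: one must show (or refine the definition until) the history-based predicate is transitive, which the invented predicate above is not. That gap is precisely the kind of validation problem the lecture argues will be essential for UCSs.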
Interesting questions arise in a model for agents with intentions. The intentions in a population of agents may be inconsistent; so we have to model the negotiation by which they decide which intentions will be met.
Considerable work - often using game theory - already exists on this topic, for example by Jennings et al. Further, suppose that a composite agent A comprises a family of agents, B1, B2, ...; how are A's intentions related to those of its members? It is likely that A has a better chance of meeting its intentions if, in exercising control over its members, it accommodates their intentions to some degree.
As with the notion of trust, a model for such systems of agents is subtle. It is therefore not trivial to deﬁne - and to validate - how this model is realized by a lower-level model (perhaps consisting of computer programs using certain interaction protocols) in which the concepts of intention and negotiation are absent. Such rigorous validation of complex realizations will be essential for the reliability of UCSs.
We now begin to see a possible structure in the totality of ubiquitous systems, their relevant concepts, and their models related by valid realizations.
There will be many varieties of models: they may be mathematical structures, or specifications of different kinds, or high- or low-level languages.
Accordingly, realization may take the form of interpreting a description in more elementary terms, or correctly implementing a specification by a program, or correctly compiling from a high- to a low-level programming language.
In this framework, software engineering comes to be seen as deﬁning, analysing, adapting and evolving both models and their realization. Indeed, the term ‘software’ itself will come to include the aggregate of models and their realizations; it will no longer be kept arbitrarily distinct from analytical theories.
But it will include even more. As in other branches of engineering we expect engineering design to be intimately linked with science. The search for generic design principles, pertaining to many applications, will guide the search for concepts and models - and in turn be guided by it.
These searches cannot be pursued in a vacuum. The external goals - those visible to our whole society - involve creating previously unknown kinds of human experience.
So, alongside our forays into new science and engineering design, our research must explore elementary new forms of such experience. All three ingredients of the Challenge must closely guide one another. Indeed, the UKCRC has advocated this approach in its response to an enquiry from the Parliamentary Office for Science and Technology (POST) concerning the possibilities and risks inherent in ubiquitous computing.
So we have formulated three interlinked goals for the Challenge, in such a way that after 15 or 20 years our degree of success can at least partly be measured. From the outside inwards, i.e. from society inwards to science, they are as follows:
So far I have said little about the human experience of ubiquitous computing; I have focussed upon the kind of software design and science that appears necessary to realize our vision of this experience.
The ﬁrst of our three goals indeed depends on the vision; it can be seen from the holistic viewpoint. The other two, for design and science, can be seen from the dualistic viewpoint: they focus attention ﬁrst upon the artefacts of ubiquitous computing, consisting of huge populations of largely autonomous computing entities, organized into systems whose purpose and function is to serve humans.
Can we postulate criteria for success in achieving our vision for new human experience, with the same clarity as criteria for the success in the design and science? The vision is substantial, and it seems to indicate a path whose initial direction is somewhat clear; but the later twists and turns on the path can only be determined at later points on the journey.
So the vision describes a journey whose ultimate goals remain hidden. Nonetheless it is ambitious and inspiring; more than that, given our technological power, it has the quality of the inevitable. Thus, to follow the path wherever it leads has the stature of a Grand Challenge.
The initial research strategy for the visionary goal is clear, and is already adopted by, for example, the Equator project; we need to mount exploratory projects that aim to define the kinds of experience that lie at the core of the vision. This requires experiments that create specific socio-technical environments and ask humans to enter them. Only thus will the essence of the vision become progressively defined.
As these exploratory projects proceed, how shall we codify and build upon the experience that they provide? Here we look for synergy between the societal vision on one hand, and the development of scientiﬁc models and engineering principles on the other; i.e. we look for a synthesis of the holistic and the dualistic views.
From what we have discussed, it seems we must mount two closely interwoven initiatives: to develop relevant models and design principles we need the focus of exploratory projects in ubiquitous computing, and these projects in turn must surely beneﬁt from any models and design principles identiﬁed in the process.
With this synergy in mind, the Steering Committee for the Grand Challenge has identiﬁed the notion of a foothill project. (In fact the idea was born at the Grand Challenges conference in Newcastle, 2004.) Such a project should have modest aims, achievable in two or three years, and should involve collaboration between groups that have not previously collaborated.
Ideally a foothill project should cross the triple boundary between an exploratory application, engineering principles and theoretical development; it should certainly contribute to at least two of these. But it should limit its concern to just a few concepts that are relevant to ubiquitous systems.
A foothill project need not be the subject of a current research proposal; indeed it may represent work that is already in progress and which can be afﬁliated with the Grand Challenge exercise.
The Steering Committee wishes to identify a set of topics for foothill projects and would like to keep track of any current or proposed research, worldwide, that is relevant to each topic.
Such topics fulﬁl two functions: they link application, engineering and theory, and they help to identify (and even assemble) an international community of researchers who share the goals of the Challenge, both for its vision and for its understanding.
We have hitherto identiﬁed six foothill topics. They are as follows, with a bare indication of the problem-space of each:
Each of these topics is outlined on the Grand Challenge website. Each one may be the subject of many different projects, which we may call its partner projects. A topic such as ubiquitous healthcare no doubt involves many of the concepts that we identified earlier, and others too; in defining a partner project, it will usually be necessary to focus on a small subset of these concepts.
The topics are of mixed character. The ﬁrst two and last two explore in each case a particular application. In these cases the initial partner projects are likely to be concerned with virtual experiment; in later stages of our attack on the Grand Challenge we would expect experiments in the real-world environment. On the other hand the middle two topics, concerned with model-checking and protocols, lean towards rigorous underpinning for generic forms of ubiquitous behaviour.
These foothill topics are an integral part of our strategy for the Grand Challenge. We hope to extend the list, and the Ubiquitous Computing: Shall we Understand It? website encourages you to contribute further ideas; these could be new foothill topics, or ampliﬁcation (via links) of the existing topics.
What is the purpose of a Grand Challenge in computing? Is it to induce an achievement in the form of a successful application of our discipline?
Or is it to advance the stuff of which the discipline is made? We would like both, of course. But computing is strangely underestimated in the world at large.
The world sees the technical achievements of computing, and admires them (when they work); what it does not see - nor do even our academic colleagues in other disciplines - is the stuff of computer science. The practical achievements, and the debacles, are seen as the outcome of an inscrutable technology.
The world at large knows less about the structure of computing systems, and what good or bad structures are, than it knows about the structures of physics, chemistry or biology. (One exchange I had: 'What do you do?' - 'I'm a computer scientist' - 'Oh! I wish I’d learnt to type ...').
Compare our Grand Challenge in ubiquitous computing with (what might have been called) the grand challenge to build a chess program to defeat an international grandmaster. The latter achievement was successful, and its side effect was no doubt of great value to the programming of chess.
The ubiquitous computing challenge may be more ambiguously deﬁned, but its side effect will be to reveal to the academic and industrial world - and ultimately to the world at large - the essential organizing principles of our science.
This is at least as important as developing applications that startle the world. Fortunately, the imperative of ubiquitous computing is that each of these achievements reinforces the other.
I would like to thank the anonymous referees for suggesting signiﬁcant improvements.
I am also grateful to colleagues Dan Chalmers, Matthew Chalmers, Jon Crowcroft, Paul Garner, Tony Hoare, Nick Jennings, Marta Kwiatkowska, Suzanne Lace, Eamonn O’Neill, Tom Rodden, Vladi Sassone, Morris Sloman, Karen Sparck-Jones, Martyn Thomas and Mike Wooldridge for discussions and comments that have helped me to form my view of ubiquitous computing, and to improve the way I have expressed it.
It remains a personal view; the variety of expertise and interest in UCSs is so large that it would have been unwise to attempt even to survey all views, let alone to reconcile them. The best effect of this paper will be to promote debate among all concerned, leading to a coordinated attack on ubiquitous computing from all angles.
 Weiser, M. (1991) The computer for the 21st century. Sci. Am., 265(3), 94–104.
 The UK Grand Challenges Exercise. Available at http://www.ukcrc.org.uk/grand_challenges/.
 Ubiquitous Computing: Experience, Design and Science (a UKCRC Grand Challenge). Available at http://www-dse.doc.ic.ac.uk/Projects/UbiNet/GC/index.html.
 Hilty, L. et al. (2005) The Precautionary Principle in the Information Society: Effects of Pervasive Computing on Health and Environment. Report of TA-SWISS, Centre for Technology Assessment, Berne. Available at http://www.ta-swiss.ch/www-remain/reports_archive/publications/2005/
 The Impact project. Available at http://www.sigsoft.org/impact/index.htm.
 (2005) The Challenge of Complex IT Projects. Report of a working group from the Royal Academy of Engineering and BCS.
 Simon, H. (1996) The Sciences of the Artiﬁcial, 3rd ed. MIT Press, Cambridge, MA.
 The Choreography Working Group of the Worldwide Web consortium (W3C). Available at http://www.w3.org/2002/ws/chor/.
 The Process Modelling Group. Available at http://www.process-modelling-group.org/.
 CDL, the Web Services Choreography Description Language. Available at http://www.w3.org/TR/2004/WD-ws-cdl-10-20040427/.
 Reisig, W. (1985) Petri Nets: An Introduction. EATCS Monographs in Theoretical Computer Science, Vol. 4. Springer-Verlag, Berlin.
 Milner, R., Parrow, J. and Walker, D. (1992) A calculus of mobile processes. Inform. Comput., 100(1), 1–77.
 Milner, R. (1999) Communicating and Mobile Processes: The Pi-calculus, Cambridge University Press, Cambridge, UK.
 Bigraphical programming languages (BPL). Available at http://www.itu.dk/research/theory/bpl/.
 Milner, R. (2006) Pure bigraphs: structure and dynamics. Inform. Comput., to appear. Available at http://www.cl.cam.ac.uk/~rm135/.
 Cardelli, L. and Gordon, A. (1998) Mobile ambients. LNCS, 1378, pp. 140–155.
 Hopper, A. (2000) Sentient computing. Phil. Trans. Roy. Soc. A, 358(1773), 2349–2358.
 Dix, A. et al. (2000) Exploiting space and location as a design framework for interactive mobile systems. ACM Trans. Comput. Human Interact., 7(3), 285–321.
 Wooldridge, M. (1999) Intelligent agents. In Weiss, G. (ed.), Multi-Agent Systems. MIT Press.
 Dash, R., Parkes, D. and Jennings, N. (2003) Computational mechanism design: a call to arms. IEEE Intell. Syst., 18(6), 40–47.
 Sloman, M. UKCRC Response to POST Enquiry on Ubiquitous Computing. Available at http://www.ukcrc.org.uk/resource/reports/
 Equator (a 6-year Interdisciplinary Research Collaboration funded by the Engineering and Physical Sciences Research Council, UK). Available at http://www.equator.ac.uk/.
Although I agree with most of the points made in the paper, Robin indicates that he is tempted to consider a global ubiquitous computer.
I think this would not work: the scale of such a system would be so large that it would not be practical to design, analyse or understand it.
Software engineering has taught us how to break large systems down into manageable components, and similar concepts need to apply to ubiquitous computing. We advocate considering a Ubiquitous System (US) as multiple, interacting cells which we call self-managed cells (SMCs) .
An SMC could correspond to a body area network which monitors (and in the future controls) the health of a person, as well as providing the means to interact with other people.
At another level an SMC could correspond to a meeting room, in which personal SMCs interact with those of the other people in the meeting, as well as with the local ubiquitous environment: devices such as printers, displays and audio systems supporting the meeting, and devices controlling the air-conditioning, lighting and so on.
The meeting room SMC is part of a building SMC which manages the whole infrastructure of the building to optimize energy usage, environmental conditions, communication and even people movement via lifts. A large-scale service provider could also be considered an SMC, providing communications services to personal, vehicle and building SMCs.
An SMC is more complex than current software engineering concepts such as objects or components in that it is very dynamic. The components of the SMC may join or leave over short time-scales as people and vehicles are mobile.
An SMC provides a scope for designing autonomic functions such as self-conﬁguration, self-healing, self-optimization, self-protection and context aware adaptation which are essential for ubiquitous computing systems. An SMC can provide a scope for applying theory for analysing, modelling and understanding.
A personal SMC will also be the means to support multi-modal interactions with the ubiquitous environment and through the environment with other people’s SMCs. Although a personal SMC may be a comparatively small-scale US, we also have to be able to consider large-scale SMCs such as the communications service provider supporting millions of personal, vehicle and building SMCs.
From the above it can be seen that we have to consider the following forms of interaction between SMCs and ways of combining SMCs.
Peer-to-peer interactions support collaboration between people in a meeting or typical social interactions. However, peer-to-peer interactions also represent typical forms of collaboration between service providers: for routing communications; for providing services to 'foreign' subscribers who cannot reach their own service provider, as happens with mobile phone usage in a foreign country; or for collaboration between emergency services at the scene of an accident.
Hierarchical interactions are needed for supporting the applications which are built up from simpler lower level services, such as a location aware information service using a communication service, location tracking service and accessing various information providers for weather, trafﬁc conditions or local restaurants.
Composition is needed to build a large SMC such as the Building SMC from room SMCs or an intensive care ward to manage the patient monitoring SMCs and supporting the actions of medical staff with their own SMCs.
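These forms of combination can be illustrated with a small sketch. The following is hypothetical only - the class and method names are invented for this example and are not drawn from any real SMC framework - but it shows the essential dynamism: an SMC modelled as a composite whose members join and leave at run time, as people and vehicles do.

```python
# Hypothetical illustration of SMC composition; all names are invented.

class SelfManagedCell:
    def __init__(self, name):
        self.name = name
        self.members = set()   # nested SMCs, changing over short time-scales

    def join(self, member):
        self.members.add(member)

    def leave(self, member):
        self.members.discard(member)

    def census(self):
        """Recursively count this cell and all constituent cells."""
        return 1 + sum(m.census() for m in self.members)

building = SelfManagedCell("building")
meeting_room = SelfManagedCell("meeting-room")
alice = SelfManagedCell("alice-personal")

building.join(meeting_room)     # hierarchical composition
meeting_room.join(alice)        # Alice's personal SMC joins the meeting
assert building.census() == 3

meeting_room.leave(alice)       # ...and leaves when the meeting ends
assert building.census() == 2
```

The point of the sketch is the scoping: the building SMC need not know which personal SMCs are currently present; each cell manages only its own membership.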
In my view the concept of an SMC provides a basis for developing design methods, tools, models and theories for interactions which will cater for the very large scale global USs.
Robin Milner has drawn a visionary picture of ubiquitous computing and sketched the software science that is needed if the vision is to be realized.
The science of ubicomp is a Grand Challenge indeed, and one that may need rather more than 20 years before it is fully realized. The scientiﬁc challenges are exciting, and I hope they will inspire a generation of researchers, but I am deeply concerned about practical issues of engineering.
What will cause an industry to adopt the new science when building real USs?
The answer should be obvious: if these complex systems are built without the solid foundations of sound theory, they will fail: projects will overrun, costs will escalate, the systems will prove unreliable, insecure or unsafe in use and practical deployment will be patchy, falter and perhaps stall completely.
That is a conﬁdent prediction because, as Robin Milner has shown clearly, the ubicomp community is targeting levels of complexity far beyond anything that has ever been built before, and requiring that their systems continue to function dependably whilst reconﬁguring themselves to interact with other systems that had not even been considered in the original design.
That may seem a powerful argument for adopting software science, but it has not proven a compelling one for today's software industry, which continues to build complex and sometimes safety-critical systems without making appropriate use of the software science that exists today.
As a consequence, most new systems cost far more than they should, take longer to develop than necessary and deliver fewer beneﬁts than the customers expected. This problem is not unique to the UK: international surveys  and books [3–5] suggest that it is a widespread (and probably universal) problem.
Robin Milner has generously suggested that this is the inevitable result of a marketplace that is highly competitive and rapidly developing, but I believe the reverse to be the case.
The marketplace for new complex systems is dominated by a small number of very large companies, and when customers increasingly outsource their entire IT, or seek to buy very large systems from single prime contractors, it is impossible for the smaller, innovative companies to compete.
For example, there are a number of European companies whose software development methods, languages and tools are based on mathematically formal methods and sound computer science.
They have shown that they are able to develop systems that consistently have 5% or fewer of the defects  that are routinely delivered by their larger competitors (with no increase in development costs), but these companies each have fewer than 200 staff: it is not possible for them to win projects of the scale of the National Identity Register, or the flight software for a new aircraft. So competition is limited and new science is only adopted slowly.
The result is that industry still uses design notations with shallow semantics, programming languages that are ambiguous and impossible to analyse mechanically to any depth, and components that lack adequate speciﬁcations or warranties.
The problems that were identiﬁed in the NATO software engineering conferences of 1968  and 1969  are still with us, and yet the software science to overcome them has been available for at least 35 years .
The two main international standards  for safety critical systems are currently being revised and, shockingly, neither of these revisions will require that all developers of safety-critical systems specify the required safety properties with mathematical rigour.
This is a major barrier to practical ubicomp. Unless industry can be persuaded or compelled to adopt the relevant software science of the last 40 years, there is no hope that companies will be in a position to adopt the output from Robin Milner's Grand Challenge. And, as the paper shows so clearly, without the science, the applications will remain little more than academic dreams.
Taking USs seriously means thinking about systems whose constituent subsystems can be properly described as autonomous agents, i.e. systems that do not just have functions, but have goals.
However, the presumption seems to be that constituent agent activity is motivated and bounded by the larger US. Subsystem agents are autonomous only in how they achieve their goals.
Robin Milner writes about designing USs in a way that assumes control over the whole enterprise: i.e. as if a US is a single teleological construct that can be understood, because of its 'single-mindedness', sufficiently to support a comprehensive, but meticulously grounded and rigorous, design. (Variant external user interests are negotiated into the design.)
This may seem reasonable. But USs will in general not be greenﬁeld operations. They will be built on top of existing system(s). These existing systems, though viewed as US constituents, will have their own prior functions, i.e. goals.
US builders will need to factor in these existing goals. This will be problematic: ﬁrst, because these goals will be heterogeneous and probably conﬂicting; second, because given the subsystems already exist, perhaps as antique legacy, their goals may be unobvious or inaccessible. However it will not, in general, be sufﬁcient to model the subsystems from their observed behaviour. US designers will have to think, not just about what a subsystem does, but about why it does it.
As an analogy, the bees in a hive are a miniature US. The bees are autonomous agents, with class goals, e.g. get nectar, and individual goals, e.g. for some worker bee, pick up this blob of nectar, ﬂy back to hive, or for a queen, lay eggs here, start off swarm now.
There is separate behaviour, e.g. foraging, and collective behaviour, e.g. swarming, and communication. This is the bee view. The beekeeper, who did not create the bees but provided their hive, has his own goals, e.g. getting honey, making money. There are also users, the honey buyers, with their goals.
In designing his bee US, the beekeeper will not get a system that satisﬁes his goals unless his model factors in the bees’ goals. Of course the bees' goals are inaccessible. But to get a nicely working hive producing honey he needs to think not just about what he can see the bees doing when they’re foraging, but about why they might be doing it the way they are.
For example, they forage as they do so as to minimize journey time to the hive. That will help him locate the hive well. Similarly with swarming. If he does not accommodate the bees’ inferred goals, he'll get badly stung.
But because he can only infer the bees' goals, he’ll get it wrong sometimes, so he needs to wear protective clothing. Even then, he'll get stung on occasion, part of the price to pay for the honey.
The three towers (with apologies to J.R.R. Tolkien)
Robin Milner has presented a view of the challenge of ubiquitous computing from the perspective of software science.
In the process, he has also made a strong plea for the value of the science of software, which I also subscribe to. The manifesto for the challenge incorporates three viewpoints, the other two covering the experiential and technological aspects of the problem space.
I am interested in the engineering principles that will be required to succeed in conquering the technological problems in UbiComp. I'll use two examples from the past to try to illustrate what I think we need to do in this space for the future.
In the manifesto we wrote that the principles will be visibly uncovered when we can see them being taught and applied in Master's courses, and in exemplar projects around 10–15 years from now.
What I would like to add to Robin Milner's discussion is this: we could envisage three towers of models, leaning together to make a tripod, a stable mutually supportive structure that contains experience, science and technology.
I am not the person to address a tower of models for the human factors, interactive systems and experiential side of UbiComp. I will attempt to assert that there will be a tower of engineering design principles or rules, applicable in a similar way to the tower of scientiﬁc models.
As with the debate at the lecture based on Robin Milner's paper, I agree that the word 'tower' is too restrictive - models, or design rules, may of course be applied in some arbitrary graph - but the word tower is there and will stick.
I do not know precisely what the types of design rules and principles will be speciﬁcally for this Grand Challenge (of course, otherwise it would not be a challenge), but I would like to point out that the idea of design principles for computer systems has been successfully applied in at least two areas in the last 25 years.
The design philosophy of the DARPA Internet was documented in the 1988 paper  by Dave Clark of MIT (about 12 years after the migration of the ARPANET from NCP to TCP/IP).
An elegant expression of the set of rules of thumb used to design successful cryptographic protocols was documented by Abadi and Needham .
Other broader questions were discussed by Sally Floyd et al. , and an economic and policy perspective was brought to bear in . These works represent the tip of the iceberg of practices today. They took around one to two decades to emerge as the underlying computing and communications systems were being researched and prototyped.
They incorporate much material that is delivered in Masters programmes today in communications systems, and in security engineering (for example, we reference them in material taught in Cambridge). They are referenced in standard documents today.
They incorporate a web of other disciplines including control theory, optimization, agents and game theory, queueing systems, random and self-stabilizing algorithms, modularization, virtualization, layering, viewpoints transparencies (viz. ODP Affordance) as well as security principles.
However, there are some interesting meta-lessons from the success of both security and inter-networking engineering design. In both of these, I think the meta-lesson is that engineering and science exist in a subtle relationship in which each informs the other (and I am certain that one can make the same argument for the third element of the challenge too, although of course, we know the three-body problem is a tad trickier!).
In the internet, we have succeeded in designing a system that is reliable (extraordinarily so compared to prior systems), despite the unreliability of its components.
Indeed, we are starting to understand the performance and the behaviour of the internet in great detail, and yet the component computers (hosts and routers) are a long way from being built out of scientiﬁc software (the majority of computers run Microsoft Windows, the majority of routers run Cisco IOS, and these contain 20-year-old code written in C, with scarce attention even to software engineering disciplines, let alone software science).
In that same internet, we routinely carry out secure transactions. So secure are these that credit card and banking agencies would like to stop using legacy mechanisms for cost reasons alone, yet are also happy to underwrite the risk because it is lower than in those legacy systems - and, crucially, quantifiably so.
What this is supposed to illustrate is that one can envisage UbiComp meeting some desirable goals, without the necessity to apply a microscopic level of scientiﬁcally correct programming at every level.
By analogy, I would point out that while we have a tower of models in physics (in some senses, all of natural science is a tower of models from quantum mechanics through chemistry, up through biology to ecosystems), we do not need to use it to build systems that are correct.
To spell it out, we do not use quantum mechanics in aeroplane design.
What emerges is that we can, through prototyping, develop empirical results which can lend us assurance that the necessary science could, in principle, be applied, but does not need to be.
In fact, some of the engineering techniques we use allow a more statistical approach to meeting desired behaviour—the correct operation of a very large ensemble of systems, despite faulty (even insecure and undermined) components is something we have developed methods to build (some mentioned in the list above).
Recent theoretic results from the control theory world such as those reported by Doyle from Caltech , have started to explain this success. Thus, I suppose, a suitable challenge for the science of US could be to give us a more predictive model for the combination of multiple such systems. This is not in conﬂict with the tower of models Robin Milner proposes, but is a holistic view of that tower!
Robin Milner's lecture discusses ubiquitous computing and the risks associated with our failure to develop proper scientiﬁc foundation for ubiquity.
We are already witnessing exemplars of systems that use sensor and mobile devices in safety- and business-critical applications such as medical monitoring and banking, but far fewer coordinated attempts to address the generic scientific principles that underpin the development of USs.
As my research interests have been close to Robin's, I strongly agree with his analysis and conclusion. Robin mentions examples of models and theories developed in the past that informed the practice of programming - data structures, types and concurrent processes.
The recently espoused synergy between the pi-calculus and business process languages is indeed a very exciting prospect, with potential to inﬂuence the W3C choreography standard. The role of these models and theories goes far beyond informing the standards, however.
The mathematical formulation of the problem allows for formal reasoning about system correctness, which is capable of handling systems of huge, perhaps even inﬁnite, scale, via decomposition and mechanised theorem proving.
Importantly also, it makes it possible to automate aspects of the processes of system validation and veriﬁcation as embodied, for example, in model checking software tools . Examples include the Static Device Veriﬁer developed as part of the SLAM project  at Microsoft and used for model checking of compliance of device drivers to the speciﬁcation.
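The core idea behind such verification tools can be conveyed with a minimal sketch. The following is an illustrative toy, not the algorithm used by SLAM: an explicit-state reachability check that searches a finite transition system for a state violating a safety property and returns a counterexample path if one exists. The mutual-exclusion model below is invented for this example; its deliberately missing guard on the first process is the kind of bug a model checker finds automatically.

```python
from collections import deque

def check_safety(initial, transitions, bad):
    """Breadth-first explicit-state search: return a counterexample path
    to a 'bad' state, or None if the safety property holds."""
    frontier = deque([(initial, [initial])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if bad(state):
            return path                      # property violated
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                              # all reachable states are safe

# Toy model: two processes, each idle(0) / trying(1) / critical(2).
# Process q waits while p is critical; p (buggy) never waits for q.
def transitions(s):
    p, q = s
    out = []
    if p < 2: out.append((p + 1, q))                     # no guard: the bug
    if q < 2 and not (q == 1 and p == 2): out.append((p, q + 1))
    if p == 2: out.append((0, q))
    if q == 2: out.append((p, 0))
    return out

violation = check_safety((0, 0), transitions, bad=lambda s: s == (2, 2))
# violation is a concrete trace from (0, 0) into the state where both
# processes are critical - mutual exclusion is broken.
```

Real model checkers scale this idea with symbolic representations and abstraction, but the deliverable is the same: a concrete counterexample trace, not merely a verdict.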
The ubiquitous computing scenario, as aptly illustrated by Robin, presents new and daunting challenges for formal reasoning and model checking. We must consider stochastic models of mobility, be able to deal with context and adaptation, and address issues such as quality of service and security. Techniques such as probabilistic model checking [17, 18] can be of assistance, but currently can only handle static, ﬁnite system conﬁgurations.
One issue that I ﬁnd particularly intriguing is the argument for scientiﬁc foundations that are not only descriptive, but also predictive, and therefore more in line with established sciences.
The issue of whether our subject (variously referred to as computer science, computing, computation, informatics) is a science in the conventional sense is hotly disputed right now. Personally, I agree with Alan Bundy  that strong scientiﬁc foundations are necessary and look forward to participating in the exciting future developments of our discipline.
Notwithstanding, I have one disagreement with Robin, and that is that I do not believe the tower of models is feasible or appropriate. Rather, a looser collection of theories, perhaps a graph, perhaps comprising subtowers, is what we should aim for.
Robin's paper discusses the lack of an underlying foundation of fundamental computer science which, if it existed, would allow us to more successfully model behaviour in large-scale computing systems and therefore design more correctly from the outset.
In industrial research we aspire to develop a set of technologies that will allow the underlying components of USs to be assembled in real time on-demand, to provide useful, reliable and safe applications and services.
In an attempt to fill the science gap we have embraced a nature-inspired approach to the creation of solutions to complexity, with many leading software scientists in this area being biologists by training. As set out in the IBM autonomic computing agenda, we aim to create distributed autonomous systems that are self-* [20–22].
In order not to lose control over such fully distributed systems, future engineers will have to be able to make appropriate use of self-organization and 'infer' local rules capable of promoting the desired system behaviour from statistical, and not deterministic, predictions.
Such self-managing solutions will play an important role in making any future ubiquitous environment 'invisible' so that users need not become full-time system administrators to their pervasive home entertainment, or healthcare systems.
In support of this approach a new EU Integrated Project called Cascadas  has been formed to create an 'autonomic handbook' of engineering guidelines and decision support in how to build, understand and tune USs.
It is taking an approach which draws on complexity science and current principles of biological systems and their organizational attributes and behaviours, such as cooperation, coordination and communication.
Account also needs to be taken of future business and economic models as they will be the ultimate drivers of new applications. Detailed models need to be constructed that expose the potential value of USs and which reveal how such value can be successfully delivered.
We must engineer adaptive ambient user interfaces that will engender trust. USs that are adaptive and derive intelligence from our environment must produce services that are understandable and that have predictable behaviour in the eyes of the end user.
These requirements need to be balanced with the desire for invisible technology, to avoid situations where end users are unable to ascribe speciﬁc outcomes to their interaction with USs.
This paper provides an excellent summary of the scientiﬁc challenges we face in trying to understand the underlying principles of viable ubiquitous computing applications for the future.
On computational service economies
The effective design and development of complex computer systems is one of the biggest challenges facing computer scientists.
Such systems are characterized by their open nature (components can come and go during the system’s lifetime), their decentralized control regimes, the high degrees of dynamism and uncertainty that are endemic within them, and the fact that they contain components that represent the aims and interests of multiple distinct stakeholders .
Examples of these types of system include the ubiquitous computing systems discussed in Robin Milner's paper, as well as grid computing systems, peer-to-peer systems, the web, and autonomic systems. In all of these seemingly diverse types of system, however, I believe there is a common computational model that can be used as the underpinning for conceptualization, design and implementation.
This is the service-oriented model. In this, the various components in the system are viewed as providing services to one another. This needs to be done in a ﬂexible and responsive manner to cope with the dynamism and uncertainty that is present in the environment and so I will term the entities that produce and consume the services software agents.
Now, these agents need to interact with one another in order to gain access to the services that others provide. Since the agents are autonomous and represent distinct stakeholders, the de facto form of interaction will be some form of negotiation. If a negotiation is successful it will result in some form of service agreement or contract that specifies the terms and conditions under which the service is provided.
For me, this service-oriented view is the right high-level model for UbiComp systems and so should form the top of Milner's tower of models.
The other, more traditional, computer science models mentioned in his paper are important and useful, but they are too ﬁne-grained to be an effective point of departure for conceptualizing and designing the sorts of complex systems that are being discussed.
Clearly, a mapping needs to be established to the more traditional models, but if we start with these then I believe we are doomed to fail since they are simply too low level a start point.
Given this standpoint, a natural source of concepts and models is provided by game theory. In this way, the complex system can be viewed as a computational service economy in which the various agents cooperate, coordinate and compete with one another in order to achieve their individual and collective aims and objectives.
Game theory is compelling because it provides a series of concepts (such as preferences, equilibria, incentive compatibility) both for analysing what outcomes are likely in the system and for attaining particular desirable characteristics (such as fairness or stability).
However, game theory as it currently stands is not a panacea. Traditionally, game theory has not considered issues associated with computability, and issues associated with dynamism and uncertainty have not been to the fore.
Thus further work is needed to modify and adapt it to computer settings (see  for a more comprehensive discussion). Nevertheless, the successes of a number of projects that have used game-theoretic techniques to analyse and build UbiComp systems (see, for example, [26, 27]) indicate that it has much to offer in this space.
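As a minimal illustration of the equilibrium concepts mentioned above, the sketch below enumerates the pure-strategy Nash equilibria of a small two-player game. The payoff table is the standard prisoner's dilemma; the code is purely illustrative and is not drawn from any of the cited projects.

```python
from itertools import product

def pure_nash(payoffs):
    """Enumerate pure-strategy Nash equilibria of a two-player game.
    payoffs[(i, j)] = (row player's payoff, column player's payoff)."""
    rows = sorted({i for i, _ in payoffs})
    cols = sorted({j for _, j in payoffs})
    equilibria = []
    for i, j in product(rows, cols):
        r, c = payoffs[(i, j)]
        best_row = all(payoffs[(i2, j)][0] <= r for i2 in rows)
        best_col = all(payoffs[(i, j2)][1] <= c for j2 in cols)
        if best_row and best_col:   # neither agent gains by deviating alone
            equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategies C(ooperate), D(efect).
pd = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
assert pure_nash(pd) == [('D', 'D')]
```

The result illustrates the mechanism-design problem exactly: the unique equilibrium, mutual defection, is worse for both agents than mutual cooperation, so a system designer must engineer the payoffs (the incentives) to make the desirable outcome the stable one.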
Perhaps because I have been involved with some of the technical aspects covered in the paper, I was most struck by Robin's 'philosophical' remarks that our science is fundamentally different from the natural sciences, in that they are driven by the desire to understand, while we are driven by the need to build.
I ﬁnd his analysis of where this leads us particularly interesting. Robin's subsequent question, which underlies the entire paper, is shall we ever catch up? Given that advances and new technologies move the goalpost, shall we ever converge?
An observation hidden in Robin's argument is that the universe works and will keep working relatively undisturbed whether or not we understand it; computer systems will not.
We have no comprehensive theory of the universe yet, but the laws of physics are not particularly perturbed by our lack of certainties about dark matter and energy. Not so of course for our ability to build artefacts, and I believe that pointing this out helps put in focus how seriously we should take our relative lack of foundational understanding of the emerging ﬁeld of ubiquitous computing.
What seems important - and to me perhaps not so different between natural sciences and 'sciences of the artiﬁcial' - is that our wish for progress and advances must be underpinned at the same time by adequate technological support, appropriate scientiﬁc understanding and suitable economic/social drive.
To reach the moon we needed the models of Newtonian mechanics (the science), the thrust of suitable propulsion engines (the technology), and the large amount of money and drive of a cold-war competition to get there ﬁrst (the socio-economics).
While for travel in our solar system the science was at least 400 years ahead of the technology, for computing the technological support and the economic drive have a clear lead on the scientific understanding. Is this an intrinsic property of computing, or an 'accident' of history we will catch up with?
Whatever the answer, the important point made by Robin's lecture is that there will not be a single comprehensive, all-explaining theory, even though that will always be our ultimate ambition (as it is for the natural sciences). We will have to have a tower, or perhaps a network, or even a patchwork of theories and models relating to the objects of study from different angles and perspectives.
To develop all necessary levels of abstraction, so that all models interlace together properly is part of the present 'challenge'. It is a task that cannot be compartmentalized, e.g. in theory and engineering, exactly as theoretical speculation over nature cannot proceed isolated from observation and experimentation.
Robin suggests that at any one level of abstraction only a small number of concepts may be relevant. I suspend a ﬁnal judgement on that, but I am happy to proceed with this working hypothesis.
We certainly need to build conﬁdence in the fact that theories can work seamlessly at different abstraction levels, whereby lower levels implement and validate assumptions made at higher levels which, in turn, will have to be feasible and in agreement with the observations.
At an elementary level, Robin's example of a high-level axiomatic of trust can be mapped down to models based on temporal properties of system execution traces, as e.g. .
The forthcoming round of 'foothill projects' will have to build more sophisticated such mappings, and make them signiﬁcant.
Understanding ubiquitous computing: a view from HCI
A substantial body of research approaches ubiquitous computing from a Human–Computer Interaction (HCI) perspective. The goals of HCI as a discipline include concepts, theories, models, design principles, methods, tools and techniques.
We may also agree on these goals for ubiquitous computing across a wide range of disciplines but can attach very different meanings to these terms.
Let us consider modelling. As noted by Wegner [29, 30] and others, interactive systems are very difﬁcult to specify and to model. In HCI, we emphasize interactivity. From time to time, there are attempts to model more or less formally in HCI.
Examples include the syndetic modelling of Barnard et al. . This work, and HCI in general, reﬂects what Milner calls a 'holistic view' that considers both human and artefact. In contrast, Milner adopts a 'dualistic view' that focuses on modelling the artefactual parts of the system.
Milner  presents an example of the dualistic approach to modelling trust: 'A simple axiom for trust may be transitivity: if A trusts B and B trusts C then A trusts C'. Is this simple? Yes. Is it useful? Often it is not - because for people trust is not transitive.
Of course, we can deﬁne trust to be transitive (or anything else that we would like it to be) but that does not help extend our modelling to cover human behaviour and experience.
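A tiny sketch makes the worry concrete (the names and the direct-trust relation are invented): taking the transitive closure of a plausible direct-trust relation immediately licenses trust judgements that none of the people involved ever made.

```python
# Sketch: transitive closure of a direct-trust relation, illustrating
# what "defining trust to be transitive" commits us to.

def transitive_closure(trusts):
    """Smallest transitive relation containing the given pairs."""
    closure = set(trusts)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

direct = {("Alice", "Bob"), ("Bob", "Carol"), ("Carol", "Dave")}
print(sorted(transitive_closure(direct)))
# The closure includes ("Alice", "Dave") - a judgement Alice never made.
```

Three direct judgements become six derived ones; with longer chains the gap between the defined notion and human trust only widens.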
Milner  notes that in the dualistic approach, 'in the case of concepts like ... trust, as they pertain to software agents, we shall not be concerned with the full human interpretation; instead we expect to use simpler pragmatic notions that are both deﬁnable and implementable’.
This simpliﬁcation is of course necessary in order to implement systems. But then people will come along and perform all kinds of unmodelled behaviours which subvert the simple model on which the system design is based.
For example, we have empirical evidence that people will behave based on factors such as convenience, regardless of trust . Whether within a dualistic or a holistic approach, we lack the ability effectively to model the human behaviours in the whole system.
For the Grand Challenge of ubiquitous computing, Milner  identiﬁes three perspectives of ubiquitous computing: human experience (or HCI), design and science. He also proposes three goals that, he claims, map respectively to these three perspectives:
But these goals do not map neatly to the three perspectives. For the HCI researcher, reducing the goal of HCI to the development of methods and techniques (goal 1) is anathema. Milner  asks if we can postulate criteria for success in achieving our vision for new human experience with the same clarity as criteria for success in the design and science perspectives.
The answer is yes, if and only if our methods and techniques in HCI instantiate empirically tested design principles (goal 2) that are founded on the concepts, models and theories of a coherent informatic science (goal 3). Hence, the goals of HCI as an approach to ubiquitous computing include concepts, theories, models, design principles, methods, tools and techniques.
What is correctness in the age of ubiquitous computing?
Robin paints a compelling picture of the future of computing and clearly identiﬁes the key issues that computer science must address in order to realize this vision.
In this response, I want to comment on one particular issue that Robin's vision raises, which relates to one of the main underlying concerns of computer science - and British computer science in particular. This is the issue of correctness.
There is a well-established set of formalisms and associated technologies for investigating the correctness of computer systems of various types, of which model checking is perhaps the best-known and most successful contemporary example .
While these approaches differ in many ways, they all start from the assumption that there is some precise, formal speciﬁcation of the desired properties and behaviour of the system under study; and the purpose of the veriﬁcation exercise is to show that the system does (or does not) satisfy this speciﬁcation.
Typically, the speciﬁcation is expressed in some form of logic, for example, temporal logic in the case of model checking. This formal system speciﬁcation is, in AI terminology, a goal; and a key purpose of the development process is to develop a system that achieves this goal, i.e. is correct with respect to the speciﬁcation.
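In the simplest finite-state case, verification reduces to exhaustively exploring the reachable states and checking an invariant against each. The following toy sketch illustrates this; the transition system and state names are invented, and real model checkers such as those cited handle far richer logics and far larger state spaces, often symbolically:

```python
from collections import deque

# Toy explicit-state check of a safety property ("the error state is
# never reached") over a hand-written transition system.

def check_invariant(initial, transitions, invariant):
    """Breadth-first search over reachable states.

    Returns a violating state (a counterexample) or None if the
    invariant holds of every reachable state.
    """
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

transitions = {
    "idle": ["busy"],
    "busy": ["idle", "error"],
    "error": [],
}
print(check_invariant("idle", transitions, lambda s: s != "error"))
# → "error": this system does not satisfy the specification.
```

The returned counterexample is what makes the exercise useful in practice: it tells the designer not merely that the goal is unmet, but where.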
This model of correctness assumes that the speciﬁer enjoys a privileged position, in the sense that the speciﬁer is able to deﬁne the criteria by which the system is understood to be correct or otherwise.
Of course, the speciﬁcation will often have been derived from consultations with many stakeholders, but nevertheless, ultimately, there is a single overall position from which the correctness of the system is assessed. And of course, the speciﬁer is assumed to have a consistent speciﬁcation - the software cannot be required to satisfy logically inconsistent requirements.
But it is easy to see that this standard model of correctness simply does not map to Robin's ubiquitous computing world. It implies that somebody has unique ownership of the system, and that the system can be designed to satisfy their goals.
But in a globally accessible network of software and hardware components, there can be no privileged position from which a unique standard of correctness is defined. The participants in ubiquitous systems (USs) will have different goals and agendas, and these goals - their specifications - will inevitably be mutually inconsistent.
So, what will replace this classical notion of correctness in a system containing multiple interacting computing elements, each seeking to achieve their mutually inconsistent goals? One approach we have been following is to adapt ideas developed in game theory - the mathematical theory of interacting self-interested agents.
Central to game theory is the notion of an equilibrium: a standard of behaviour that is 'stable', under the assumption that agents within the system act rationally, i.e. in pursuit of their personal goals.
Instead of asking whether the system is correct, we can ask what the equilibria of the system are, and whether those equilibria are desirable. We can also try to engineer a system so that its equilibria are desirable from some point of view (e.g. so that any equilibrium of the system leads to an outcome that is 'fair' to all participants).
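One way to picture this shift from 'is the system correct?' to 'what are its equilibria?' is the following sketch, which enumerates the pure-strategy Nash equilibria of a two-player game. The game, its actions and its payoffs are invented for illustration:

```python
import itertools

# Sketch: pure-strategy Nash equilibria of a two-player game given as
# payoff dictionaries keyed by joint action.

def pure_nash(actions_a, actions_b, payoff_a, payoff_b):
    """Joint actions from which neither agent gains by deviating alone."""
    equilibria = []
    for a, b in itertools.product(actions_a, actions_b):
        a_best = all(payoff_a[(a, b)] >= payoff_a[(a2, b)] for a2 in actions_a)
        b_best = all(payoff_b[(a, b)] >= payoff_b[(a, b2)] for b2 in actions_b)
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

acts = ["cooperate", "defect"]
pa = {("cooperate", "cooperate"): 2, ("cooperate", "defect"): 0,
      ("defect", "cooperate"): 0, ("defect", "defect"): 1}
pb = dict(pa)  # a symmetric coordination game
print(pure_nash(acts, acts, pa, pb))
# → [('cooperate', 'cooperate'), ('defect', 'defect')]
```

Note that this little game has two equilibria, one clearly better than the other: 'engineering the equilibria' means designing payoffs, or a social contract, so that the stable outcomes are the desirable ones.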
Where the system involves a substantial legacy component, the metaphor of a social contract becomes useful: a social contract being an agreed standard of behaviour that agents within a system will abide by in order that the beneﬁts of cooperation are not lost in conﬂicts [34, 35].
With respect to Robin’s hierarchy of models, I believe that at one level in this hierarchy, we must recognize these multiple agents with their potentially conﬂicting goals; the task of linking these micro-level models to macro models characterizing system-level behaviours seems to me to be a key challenge.
The aim of ubiquitous computing research and implementation is the embedding of systems into everyday life in a transparent manner: the user moves between locations and tasks, largely or perhaps completely unaware of the computing infrastructure . Agent-based technologies have been developed to facilitate this transparency.
A user interacts with some (software) agent or set of agents and this agent then acts on behalf of the user to achieve some goal. In achieving this goal, it is likely, certainly in a ubiquitous computing environment, that this agent will have to interact with other agents.
This brings about issues concerning trust both within agents and between the user and the interaction agent. As He et al. note , users will need to become content to let a piece of software make decisions on their behalf. This will obviously require time and will only occur as agents show what they are capable of.
Herein lies a major issue: if a user is to be content to allow some software agent to act on their behalf, then clearly it is better if the user has some (even if quite basic) understanding of how that agent will attempt to achieve that goal. If the user is unaware of the interactions the agent will have to undertake with other agents, they will not be in a position to assess the risk.
Each time that two agents interact, there is a possible trust (and security) implication. A great deal of work exists on how best to formulate models of trust between agents, much of it building upon the work of McKnight et al.
As the ubiquitous environment becomes more complex, in terms of the number of agent-agent interactions, the user (at the point of interaction, and therefore often of authentication) can soon become quite distanced from the point at which the goal is achieved.
As such, the average user may have little real awareness of the power they are potentially devolving to an agent. It is possible that a rogue agent can use information gained from an honest agent to act in some malicious way.
The average user will have no idea of such possibilities: what can be seen as a great time- and effort-saving environment can be used to wreak unforeseen havoc if the appropriate risk assessment has not been undertaken. It is therefore imperative that the engineers of USs consider the distance between points of authentication and goal satisfaction.
Should the maximum distance become too large, the user may feel a loss of control. Standard graph-theoretic techniques can be used to model USs and can provide a measure of distance between interaction and goal.
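As a sketch of such a measure (the system graph below is entirely invented), one can model the US as a directed graph of interaction agents and services, and take the distance to be the shortest-path length from the authentication point to the goal:

```python
from collections import deque

# Sketch: distance between the point of authentication and the point
# at which a goal is achieved, measured as shortest-path length in a
# graph model of the system. The graph is illustrative only.

def distance(graph, source, target):
    """Breadth-first search; returns hop count, or None if unreachable."""
    depth = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return depth[node]
        for nbr in graph.get(node, []):
            if nbr not in depth:
                depth[nbr] = depth[node] + 1
                queue.append(nbr)
    return None

system = {
    "phone_ui": ["phone_agent"],
    "phone_agent": ["bank_agent", "calendar_agent"],
    "bank_agent": ["bank_account"],
    "calendar_agent": [],
}
print(distance(system, "phone_ui", "bank_account"))  # → 3
```

A large distance flags a goal being satisfied far from where the user authenticated, and hence a point at which further authentication might reasonably be demanded.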
If a user has a mobile telephone that is left unlocked, most would be generally aware of the risks associated with someone ﬁnding the telephone and acting maliciously. This is largely due to the fact that goals achieved using the mobile telephone are relatively close to the interaction point in a graph representation of the system.
Now consider a mobile telephone that is configured to allow access to bank account details without the need for further authentication: one interaction agent (the telephone interface) is used to access some other service. This brings about different issues of security.
Because the system described is relatively simple, most users can understand the associated risk, and so would likely ensure that further authentication is needed before the bank account details can be accessed.
If we consider more interconnected devices and services in this ubiquitous environment, can a user be sure what access an attacker has, and what distress, inconvenience and cost malicious acts can result in?
For a small proportion of users, the technologically literate, this might be easily answered; more innocent users - those who do not enable security features or simply leave the default settings - may be unaware of the danger, unable to assess the risks or to prepare for the consequences.
Agent technology holds out the possibility of connecting multiple, seemingly (to the user) unrelated agents. In this scenario new risks appear notwithstanding inter-agent negotiation : anyone who has access to one password or, say, an open phone may now have access to a wealth of personal information.
In a world in which technologists offer up wearable computing, pervasive access control sensors and single sign-on, users stand to gain much in terms of immediate task fulﬁlment, yet may be rendered unable to conduct usability-versus-security risk assessments. Thus, embedding security into USs and at the same time engaging users in understanding and employing security measures remains a signiﬁcant challenge.
My concern about the Ubiquitous Computing Grand Challenge is that it lacks an industrial perspective and has received little input from the potential producers and end-users.
Fifteen years after Weiser's original vision statement, ubiquitous and pervasive computing remain predominantly research programmes with little associated industrial activity.
The need for further research is perfectly understandable: the emergence of new technologies follows no set timescales, and the more complex and generic the goals, the longer the research needed to yield the necessary understanding. That is one of the important messages of Robin Milner's excellent paper. But he also identifies experience-led activities as one of three domains within which understanding must be advanced, and the Grand Challenge manifesto elaborates on that theme.
I take this to refer to the design of systems that enhance human activity and the experience of using information systems. (Weiser expected the ubiquitous computer to be 'invisible', but he presumably also expected an enhanced experience to result from it).
Reﬂection on the history of the major developments in interactive computing - time sharing, interactive graphical systems and the personal computer - shows that these developments took place largely in research environments but were all stimulated and focussed by a day-to-day awareness of needs in industry and more widely.
Although improved methods for the evaluation of HCI have emerged from recent research, it remains the case that the constraints of real-world needs and market forces are an essential context for design. (Eamonn O’Neill points out that the problem may be less prevalent for Japanese and other Asian researchers.)
The idea of user intention and training was raised; however, we should remain aware that pervasive computing will be subject to use by those who are curious, half-asleep, taking drugs, violent, suicidal or otherwise hard to plan for. It may be that there is no 'intention', or that intentions will not be those anticipated by the designers.
The component devices of pervasive computing will be mass-produced and subject to day-to-day life: chewed by babies; modiﬁed for cosmetic effect and to alter their performance (cf. over-clocking of PC CPUs); ﬁtted or worked round by DIY enthusiasts; and taken to harsh environments such as the beach, the garden or the kitchen.
To be viable, without causing an ecological disaster, these multitudes of devices must be self-sufﬁcient: not requiring regular servicing, remaining embedded in, and decaying with, their environment. Any careful calibration will soon be lost, so we must consider that their data about the world will be noisy or wrong and that individual devices will become unreliable and fail.
So, in addition to being wary of assuming we understand our users, we should be wary of assuming we understand the context of operation. In real deployments we are frequently reminded of this, but when forming theories it is easy to start simplifying the world into a clean, precise, logical place. Instead, we should endeavour to embed the idea of uncertainty in our models, as a connecting structure within and between the various layers of Robin's 'tower of models'.
Finally, in specifying correct behaviour the idea of the unknown arises again. We are already building and deploying the ﬁrst components of the vision, based on current practice, varying quality speciﬁcations and a drive to market.
However good our specifications, they cannot incorporate the unanticipated; yet we should expect fluid adaptation, not obsolescence and restriction. We should be able to use models, of computers and of people, to examine from various viewpoints what may happen in emerging situations.
A sound theoretical basis to our understanding will let us adapt our creations, in rather the same way as new uses for old buildings can be analysed; modiﬁcations, limits and anticipated properties speciﬁed; but their use evolved in ways which the original architects never dreamt of.
As pointed out in the lecture, this requires the grand challenge’s combination of experience, engineering and theory coming together to develop both real applications and scientiﬁc principles.
 Dulay, N. et al. (2005) AMUSE: autonomic management of ubiquitous e-health systems. In Proc. UK e-Science All Hands Meeting (AHM 2005), September 19-22 2005. Available at http://www.allhands.org.uk/2005/proceedings/
 Standish Group. The Chaos Reports. Available at http://www.standishgroup.com/sample_research/chaos_1994_1.php and subsequent surveys; see also the 2001 BCS Review, p. 62.
 Ewusi-Mensah, K. (2003) Software Development Failures, MIT Press, Cambridge, MA.
 Smith, J.M. (2001) Troubled IT Projects. IEE, London, UK.
 Yardley, D. (2002) Successful IT Project Delivery: Learning the Lessons of Project Failure. Addison-Wesley, London, UK.
 Pﬂeeger, S. and Hatton, L. (1997) Investigating the inﬂuence of formal methods. IEEE Comput., 30, 33–42.
 Naur, P. and Randell, B. (1969) Software Engineering, NATO, Brussels, Belgium.
 Buxton, J. and Randell, B. (1970) Software Engineering Techniques. NATO, Brussels, Belgium.
 Dijkstra, E.W. (1972) The Humble Programmer, ACM Turing Award Lecture. Commun. ACM, 15, 859–866.
 International standard IEC 61508: Functional safety of electrical/electronic/programmable electronic safety-related systems, International Electrotechnical Commission, Geneva, Switzerland; and Avionics standard DO-178B: Software Considerations in Airborne Systems and Equipment Certiﬁcation, Radio Technical Commission for Aeronautics, Inc, Washington, DC.
 Clark, D. (1988) The design philosophy of the DARPA Internet protocols. In Proc. ACM SIGCOMM '88, pp. 106–114.
 Abadi, M. and Needham, R. (1996) Prudent engineering practice for cryptographic protocols. IEEE Trans. Softw. Eng., 22, 6–15.
 Floyd, S. (ed.) (2002) General Architectural and Policy Considerations Request for Comments: 3426. Network Working Group, Internet Architecture Board. RFC Editor at http://www.rfc-editor.org/
 Clark, D.D., Wroclawski, J., Sollins, K.R. and Braden, R. (2002) Tussle in cyberspace: deﬁning tomorrow’s Internet. In Proc. ACM SIGCOMM ’02, Pittsburgh, PA, August. pp. 347–356. ACM Press, New York.
 Doyle, J. (2005) Highly Optimized Tolerance. Available at http://www.physics.ucsb.edu/~complex.
 Clarke, E., Grumberg, O. and Peled, D. (2000) Model Checking, MIT Press.
 SLAM project website. Available at http://research.microsoft.com/slam/.
 Rutten, J., Kwiatkowska, M., Norman, G. and Parker, D. (2004) Mathematical Techniques for Analyzing Concurrent and Probabilistic Systems. CRM Monograph Series, vol. 23, AMS.
 Bundy, A. The Need for Hypotheses in Informatics, Available at http://homepages.inf.ed.ac.uk/bundy/seminars/
 IBM (2003) IBM and Autonomic Computing: An Architectural Blueprint for Autonomic Computing. IBM Publication. Available at http://www.ibm.com/autonomic/pdfs/ACwpFinal.pdf.
 Chase, N. (2004) An autonomic computing roadmap. IBM DeveloperWorks. Available at http://www-106.ibm.com/developerworks/library/ac-roadmap.
 IBM. IBM’s Autonomic Computing Website. Available at http://www.research.ibm.com/autonomic.
 IP CASCADAS. Component-ware for Autonomic Situation-aware Communications, and Dynamically Adaptable Services. Available at http://www.cascadas-project.org.
 Jennings, N.R. (2001) An agent-based approach for building complex software systems. Commun. ACM, 44, 35–41.
 Dash, R.K., Parkes, D.C. and Jennings, N.R. (2003) Computational mechanism design: a call to arms. IEEE Intell. Syst., 18, 40–47.
 Padhy, P., Dash, R.K., Martinez, K. and Jennings, N.R. (2006) A utility-based sensing and communication model for a glacial sensor network. In Proc. 5th Int. Conf. Autonomous Agents and Multi-Agent Systems, Hakodate, Japan.
 Rogers, A., David, E. and Jennings, N.R. (2005) Self organised routing for wireless micro-sensor networks. IEEE Trans. Syst., Man Cybern. A, 35, 349–359.
 Krukow, K., Nielsen, M. and Sassone, V. (2005) A framework for concrete reputation systems with applications to history-based access control. In Proc. 12th ACM Conf. Computer and Communication Security, CCS 2005, 7–11 November, Alexandria, VA, pp. 260–269. ACM Press.
 Wegner, P. (1997) Why interaction is more powerful than algorithms. Commun. ACM, 40, 80–91.
 Wegner, P. and Eberbach, E. (2004) New models of computation. Comput. J., 47, 4–9.
 Barnard, P., May, J., Duke, D. and Duce, D. (2000) Systems, interactions, and macrotheory. ACM Trans. Comput. Human Interact., 7, 222–262.
 Milner, R. (2006) Ubiquitous computing: shall we understand it? Comput. J., Advance Access published 23 May 2006, 10.1093/comjnl/bxl015.
 Kindberg, T., Sellen, A. and Geelhoed, E. (2004) Security and trust in mobile interactions: a study of users’ perceptions and reasoning. In Proc. Ubicomp, pp. 196–213.
 Binmore, K. (1994) Game Theory and the Social Contract, Volume 1: Playing Fair. MIT Press.
 Wooldridge, M. and van der Hoek, W. (2005) On obligations and normative ability: towards a logical analysis of the social contract. J. Appl. Logic, 3, 396–420.
 He, M., Jennings, N.R. and Leung, H.-F. (2003) On agent-mediated electronic commerce. IEEE Trans. Knowl. Data Eng., 15, 985–1003.
 Klein, D. (1990) Foiling the cracker: a survey of, and improvements to, password security. In Proc. 2nd USENIX Unix Security Workshop, Oakland, CA, August, pp. 5–14.
 McKnight, D.H., Choudhury, V. and Kacmar, C. (2002) Developing and validating trust measures for e-commerce: an integrative typology. Inform. Syst. Res., 13, 334–359.
 Parsons, S. and Jennings, N. (1996) Negotiation through argumentation—a preliminary report. In Proc. Second Int. Conf. Multi-Agent Systems (ICMAS 96), Kyoto, Japan, pp. 267–274.
 Weiser, M. (1991) The computer for the 21st century. Sci. Am., 265, 94–104.
 Amey, P. Available at http://www.sparkada.com/downloads/Mar2002Amey.pdf
This Computer Journal event was held in March 2006.