Dr. Martin Wheatman MBCS suggests that, rather than being an ‘AI-hard’ problem, machine understanding is the sharing of ideas, which plays a key role in the (self-)identification of malware. The implied self-awareness, which underpins ideas of ‘killer robots’, is simply the well-known programming technique of reflection.

You receive a pop-up promising £10,000; is this too good to be true, or can you safely click on this ‘once only’ offer? Before running this, or any other software, we just might want to know what it will do. Niels Bohr once said ‘prediction is difficult, especially about the future’; but a program’s machine instructions already exist in memory, so what it will do is, in principle, already determined. However, if an algorithm could describe what it was going to do, should you trust it?

Human understanding, in essence, is the sharing of meaningful values. One value may be ‘stop the boats’ — but if I mean strafing the English Channel, our understandings might not coincide! This is the metaphysical problem of the unseen meanings behind words, which was addressed in 1878 by polymath Charles Sanders Peirce in his Pragmatic Maxim: ‘Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object.’

In short, hidden human intentions are revealed by their external effects; however, machines have very little external activity, so one may well ask, ‘does it really understand?’ The definition of machine understanding here is: ‘the sharing of ideas, and their subsequent use, to interact with a machine.’ Interaction is the activity, and it includes the creation of ideas in machines. Today, this can be seen as the production of software: the writing of source code, and the compilation and installation of machine code. However, this low-level code is far from readable. The source code remains with the programmer — and even if it were available, for most, it would be no more comprehensible.

The readability of code has been addressed in a previous ITNOW article, which shows speech to be a comprehensible and functional medium, whether that is a program or a recipe for baking a cake. Plain language not only makes a program easier to read; it can also be used to create ideas. Such ‘meta-language’ defines values by saying, ‘I can say…’, and adds meaningful effects with ‘this implies…’.
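
To make this concrete, here is a minimal, hypothetical Java sketch; it is not taken from the Enguage codebase, and the class and method names (Ideas, iCanSay, thisImplies) are invented for illustration. The point is only the shape of the thing: an idea is a sayable phrase stored alongside the statements it implies.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch only: an idea is modelled as a sayable phrase
// plus the list of statements it implies.
public class Ideas {
    private static final Map<String, List<String>> ideas = new LinkedHashMap<>();

    // 'I can say...' -- register a phrase as something understood
    static void iCanSay( String phrase ) {
        ideas.putIfAbsent( phrase, new ArrayList<>() );
    }

    // 'this implies...' -- attach a meaningful effect to that phrase
    static void thisImplies( String phrase, String statement ) {
        ideas.get( phrase ).add( statement );
    }

    public static void main( String[] args ) {
        iCanSay( "when was Alan Turing born" );
        thisImplies( "when was Alan Turing born", "look up Alan Turing on Wikipedia" );
        thisImplies( "when was Alan Turing born", "reply with the 'born' entry" );
        System.out.println( ideas );
    }
}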

However, it is not the ease of reading software that is at issue; it is having access to it at runtime. This shifts understanding from comprehension to self-awareness: meta-language is also used to query a value’s intentions. ‘What does this imply’, or ‘what is implied by…’, reads out the actual statements that are to be invoked.
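
This is where reflection, mentioned at the outset, does the work: a running program can hold part of its own behaviour as a value, read it out, and only then invoke it. The following minimal Java sketch, again an illustration rather than Enguage code, uses the standard java.lang.reflect API to describe a method before calling it.

import java.lang.reflect.Method;

// Minimal illustration of reflection: describe the call about to be
// made, and only then make it.
public class DescribeBeforeDoing {

    // An example action the program could be asked to perform
    public static String lookup( String topic ) {
        return "a pretend encyclopaedia entry for " + topic;
    }

    public static void main( String[] args ) throws Exception {
        // Obtain the method as a value that can be inspected at runtime
        Method action = DescribeBeforeDoing.class.getMethod( "lookup", String.class );

        // 'What does this imply?' -- read out what would be invoked
        System.out.println( "Would invoke: " + action.getName()
                + "( " + action.getParameterTypes()[0].getSimpleName() + " )"
                + ", returning " + action.getReturnType().getSimpleName() );

        // Only then perform the action itself
        System.out.println( action.invoke( null, "Alan Turing" ) );
    }
}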

Most importantly, this is not just theory; the technique is demonstrated by working software. The example below shows the retrieval of facts from Wikipedia, and also how this is achieved: age is displayed in the ‘born’ and ‘died’ entries. Since each implication is itself a further value, the user can drill down to see what it, in turn, implies. Interestingly, this affords people sight of the programming, so a qualitative judgement can also be made.

Assuming git and a Java Development Kit are installed, this example can be demonstrated by doing the following:

git clone git@bitbucket.org:martinwheatman/enguage.git

cd enguage

javac opt/test/UnitTest.java

java opt.test.UnitTest -T implied
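
The first two commands fetch the repository and move into it; the last two compile and run the unit test harness, where the -T switch appears to select the ‘implied’ tests relevant to this article.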

Diagram 1: A composite shot showing interaction with Wikipedia. Since this is live code, it is released under a Creative Commons licence, with no warranty of fitness for purpose.