2015: The limits of algorithms

In January 2014, my post on this blog outlined my belief that 2014’s big-picture issue was trust. I leave it to you to judge whether I was accurate or not. In the same spirit, I offer the limits of algorithms as my theme for 2015.

Back in the 1980s, a poster was fairly ubiquitous around IT departments: ‘To err is human, to really foul things up requires a computer’. If we were to update that poster today, I would suggest replacing ‘computer’ with ‘algorithm’.

Let me illustrate with an email I received this morning from Twitter. It suggested some people for me to follow, ‘based on God’. In a mystically surreal way, when I tried to follow God, Twitter informed me that I was unable to follow God at the moment as I was already following too many others.

Over Christmas, there was much comment about Facebook’s personalised review of the year. Pictures of a deceased parent or child, or of a cheating partner, were ‘highlights’ of 2014 according to the algorithm used. The defence that ‘it worked for most people’ only illustrates a lack of empathy and understanding among those who truly believe that algorithms are the way forward for all our problems.

Fundamentally, it demonstrates a misunderstanding of the nature of information and human insight.

Looking at the 2015 prognostications of some of my favourite futurists, there is a theme emerging across a lot of disciplines:

‘In 2015, developments in big data, algorithms and AI will start to exceed human performance in all walks of life.’

How do you feel about that?

I tried the following on an iPhone: ‘Siri, who invented television?’ Siri responded with Charles Jenkins. I find that interesting. Given the huge volumes of data and software behind Siri, how do you evaluate that answer?

On my last visit to the Smithsonian in Washington, I remember Philo Farnsworth being celebrated. As you drive into Helensburgh in the west of Scotland, you are greeted with a sign: ‘birthplace of John Logie Baird, inventor of television’.

What does it mean to match or exceed human performance in this case? Should Siri give different answers in different countries? What would be the answer in North Korea? Or, should Helensburgh take down the signs now that Siri has, literally, spoken?

It’s not that I am sceptical: there are many complex areas where these maturing technologies can contribute to step changes in our society and economy. But a more nuanced set of claims seems sensible if we are to avoid overpromising and under-delivering.

Let me try to explain my current position. Do I think that it is in principle possible to create an algorithm that could ‘write’ a Beethoven symphony to a standard that would challenge or fool a music expert? My answer would be yes.

However, do I think that an algorithm could be created that would produce a step change in symphonic form, as Beethoven did with the Eroica? I’ve been reading Howard Gardner on the neuroscience of creativity, and I just don’t think we can yet say that this will be possible. Anyway, who would judge that it was a step forward, the algorithm or people? Many breakthroughs in music were dismissed or hated initially.

Similarly, the prospect of algorithms that outperform doctors at diagnosis seems entirely realistic. However, the evidence on which they are built is human expertise and experience. Once we know how to do something we can automate it, and machines can be faster, more consistent and more reliable than people. That does not of itself move the state of the art forward.

Remember our old friend GIGO, Garbage In, Garbage Out? I used to keep copies of two letters from the 1980s sent by local government to citizens as reminders.

One was a change-of-circumstance letter, which gave the reason for the change of circumstance as the addressee now being dead. Helpfully, it outlined the appeal procedure.

The second threatened to send in the bailiffs over an unpaid debt of £0.00. The problem lay in both the data and the process. It seems to me that we are at risk of falling back into the same set of traps again.
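Neither failure needed anything clever to prevent. Here is a minimal sketch, in Python, of the kind of sanity check that would have caught both letters; the field names and rules are entirely hypothetical, invented for illustration rather than taken from anything a real council ran.

```python
def should_send_letter(record):
    """Return (send, reason) for a proposed reminder letter."""
    # Don't chase a debt of nothing.
    if record.get("letter_type") == "debt_reminder" and record.get("amount_due", 0) <= 0:
        return False, "no outstanding balance"
    # Don't write to someone the same system records as deceased.
    if record.get("deceased"):
        return False, "addressee recorded as deceased"
    return True, "ok"

# Hypothetical examples mirroring the two letters above.
letters = [
    {"letter_type": "debt_reminder", "amount_due": 0.00, "deceased": False},
    {"letter_type": "change_of_circumstance", "deceased": True},
]
for letter in letters:
    print(should_send_letter(letter))
```

The checks themselves are trivial. The hard part, then as now, is deciding that such checks matter and keeping the underlying data honest, and that is a human problem before it is an algorithmic one.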

Given the importance of the US West Coast in IT innovation, I find it helpful to see each technology leap as recreating a new hippy era. In 2015 we are ‘In Search of the Lost Algorithm’. This time round, the rallying cry seems to be ‘Make Data, not War’.

So, how smart is a smart algorithm? How can you tell? If I create a new one which is twice as good as the old one, how would you demonstrate that improvement?
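The textbook answer is to score the old and the new against the same held-out benchmark, along the lines of the sketch below; the ‘algorithms’, the data and the metric are all invented purely for illustration. But that only moves the question along: who chose the benchmark, who labelled the data, and does ‘twice as good’ on that metric mean anything outside it?

```python
def accuracy(predict, examples):
    """Fraction of (input, expected_label) pairs the predictor gets right."""
    correct = sum(1 for x, label in examples if predict(x) == label)
    return correct / len(examples)

def old_algorithm(x):
    return x > 5   # invented rule

def new_algorithm(x):
    return x > 3   # invented 'improved' rule

# A hypothetical held-out benchmark: the comparison is only as good as
# this data and the labels behind it.
held_out = [(1, False), (4, True), (6, True), (2, False), (8, True)]
print("old:", accuracy(old_algorithm, held_out))
print("new:", accuracy(new_algorithm, held_out))
```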

Also over Christmas, I read an article suggesting that people’s postings on social media were not always accurate. I confess I nearly fell off the bar stool on my superyacht at that one. Apparently, some people exaggerate and make their lives sound more interesting than they really are. Well, I never!

Let’s make what progress we can from these exciting advances in technology. I’ve blogged before that the technology has developed faster than our fundamental understanding of information.

People are part of the information system, not impartial observers of it. Algorithms that don’t take account of that are as dumb as my earlier examples.

Have a great 2015!

Comments (3)

  • 1
    Brian Jones wrote on 19th Jan 2015

    Elon Musk said “We are summoning the demon with artificial intelligence”, but manufacturers such as ourselves cannot compete without it.


  • 2
    Chris Reynolds wrote on 11th Feb 2015

    The problem with the algorithmic approach is that it assumes that the rules relating to the information being processed are known in advance, and unfortunately the world – and human beings – doesn’t always work in this way. The result of this difference is that there will inevitably be difficulties at the interface between algorithmic systems and their human users. In practice it has proved far easier for humans to learn to live with computers than to design computers which are compatible with the way the human mind works. Part of the problem is that the modern computer is the ultimate black box, and this limits the kinds of machine/human interactions that are feasible when an unpredicted interface situation arises.
    You are almost certainly unaware that in 1968 the British computer pioneers David Caminer and John Pinkerton, of LEO computers fame, decided to fund research into a “white-box computer”. This would have a user-friendly symbolic assembly language – aimed at allowing people and computers to work closely together symbiotically on a wide class of open-ended tasks. I was the project leader and, despite funding and other setbacks, explored the idea for some 20 years – ending up with a series of publications, including three papers in the Computer Journal and a working schools software prototype which attracted very favourable reviews.
    Recently I decided to re-examine the CODIL project in the light of recent developments in information technology and also in the brain related sciences. I have come to the following conclusions:
    (1) Information Technology. The CODIL model and the stored program computer model represent opposite yet complementary ways of viewing information. The conventional approach uses a top-down, rule-based numeric model while CODIL uses a bottom-up, pattern-matching set-processing model. Each model can be considered as a special-case subset of the other and each is strongest in the area where the other is weakest.
    (2) Brain Sciences. I now realise that CODIL can be mapped onto a simple neural network model, and the working programs of the 1970s and 80s can be considered to be crude but logically very powerful models of human short-term memory. The model opens up a possible evolutionary pathway to explain the evolution of human intelligence.
    (3) Supporting Creative Scientific Research. The spark that triggered the CODIL research really needed two or three years of unrushed cross-disciplinary studies to understand the underlying theory and how it related to established scientific models. What actually happened was that the idea was seen as commercially important and it got sucked into the devil-take-the-hindmost rat race that was the computer industry at the time. For instance, the first funding was to implement an experimental simulation to prove the (initially very crude) idea worked – under conditions of total secrecy at least until the patents were taken out … While there were various setbacks resulting from the company merger to form ICL and disruption caused by health-related problems, the project finally collapsed because time had not been spent on creative thinking about the project’s theoretical foundations.
    (4) Further Research. The above three points all clearly need further research. I am fully retired and at the age of 76 I feel all I can now do is to alert the current generations of researchers to the existence of the earlier research, and to ensure that any relevant documentation of the research (including listings of the simulation programs and research applications) is available if needed.
    If you, or any of your colleagues in the British Computer Society, are interested in exploring the limits of algorithms, I would be interested to hear your views on the way this research might be advanced.
    More information is given in an extended reply “The Limits of Algorithms” on my blog “Trapped by the Box” (www.trapped-by-the-box.blogspot.com) and the blog also contains copies of many of the relevant research papers.

    Chris Reynolds, FBCS (retired)


  • 3
    Andrew Fryer wrote on 18th Feb 2015

    You mention that the rules relating to the information have to be known in advance. In humans we have an innate ability to form language even if language is not taught to us, which is how pidgin evolved amongst slave children separated from their parents. It's not as clever as that in computers, but there are algorithms for unsupervised learning, and you can see this sort of thing in Cortana and Siri as well as in services like Google News.
    As for your garbage in, garbage out comment: yes, my father used that term before there was a BCS, but actually AI can help with that – hence spam and clutter filtering, noise reduction, and adaptive optics in the Keck telescope.
    The problem is not AI; it's the humans who are teaching AI to do bad things.
    @deepfat



About the author
Chris Yapp is a technology and policy futurologist who has been in the IT industry since 1980. His roles have spanned Honeywell, ICL, HP, Microsoft and Capgemini. He is a Fellow of the BCS and a Fellow of the RSA.

