Sitting in a traffic jam, watching the lights turn to red, Chris Yapp wonders how much research will be needed before artificial intelligence can drive like a human.

This year, digging below the big picture of AI, robots and work has been a fascinating area of work. The aim is to work out where to target research and thinking, so as to understand which roles can be replaced and which can be augmented by technological advances. Importantly, what are the theoretical barriers to progress?

It’s led me into places I never thought I’d find myself.

Try this thought experiment: I want you to walk into a room and tell me when the last person was in there. It’s tidy and there is no CCTV. On the desk is a one-week-old newspaper. There is a grandfather clock, still working. Opening the front, you can see it has run about half its time until it needs rewinding. That suggests three to four days. Wait, there is a smell of aftershave. That suggests an hour or so rather than days. If the smell were of, say, Chanel No 5, a good working hypothesis would be a mature woman.
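Purely to illustrate the chain of reasoning (not the sensing, which is the hard part), here is a minimal sketch of how such clues might be combined to narrow an estimate. Every clue, name and time bound is invented by me for the example:

```python
# Hypothetical sketch only: narrowing an estimate of "hours since the room
# was last occupied" as each clue arrives. All clues and bounds are invented.

def combine_clues(clues):
    """Intersect the time windows (in hours ago) implied by each clue."""
    lower, upper = 0.0, float("inf")
    for name, (lo, hi) in clues.items():
        lower, upper = max(lower, lo), min(upper, hi)
        print(f"{name:22s} -> last occupied between {lower:g} and {upper:g} hours ago")
    return lower, upper

clues = {
    "newspaper (1 week)":  (0, 7 * 24),  # someone was here within the last week
    "clock half wound":    (0, 4 * 24),  # roughly three to four days at most
    "smell of aftershave": (0, 2),       # fades within an hour or two
}

combine_clues(clues)
```

The arithmetic is trivial; the point is that the decisive clue, the aftershave, only exists if you have a sensor that can supply it in the first place.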

Real-world, human problem solving relies on sensory data and subjective experience (what philosophers call qualia). Without a solution to the sense of smell, no amount of data and clever algorithms can begin to match human skills. A key problem is that the theory of the sense of smell is still controversial. The dominant thinking is that it is based on molecular shape, but there is an alternative theory based on molecular vibration. I have seen some experiments on digital smell, but it is far from a mature field.

I have observed in this blog, as have many others, that modern AI/ML can match and exceed expert medical diagnostics. However, I remember a case of a GP who, on shaking hands with a restaurateur and feeling a “puffy” hand, advised him to seek medical advice. The GP had found evidence of a rare tumour. The sense of touch was central to the diagnosis.

Here are two related examples from the field of autonomous vehicles.

I live in rural England, a few miles north of a town. The local hospital is halfway between my home and the town, on one of a series of roundabouts. I could hear an ambulance but couldn’t see it; I then saw it in my rear-view mirror as I was going around one of the roundabouts. From there it is a one-mile stretch to the hospital, a narrow road with bollards in the middle, so it is difficult to overtake.

Knowing that the ambulance was heading to the hospital, I pulled into a garage forecourt to allow it to pass. In an area I didn’t know, I could not have made that judgement. How would an autonomous vehicle learn to do that?

A similar situation arose later. One of the roundabouts was being enlarged; there were temporary traffic lights and two lanes were coned down to one. I was at the front of the queue and the lights were on red.

I heard, then saw, an ambulance behind me. I took the decision to cross the red light, since the ambulance could not pass me otherwise. I felt I had mitigating circumstances if a traffic violation arrived in the post. What should, and what will, an autonomous vehicle do?
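For what it’s worth, here is a toy sketch of the kind of priority question an autonomous vehicle would face in that situation. The rule, the thresholds and the names are all invented for the sake of argument; the hard part is not writing such code, it is deciding, and defending, which rule should win.

```python
# Toy illustration only: which rule wins when an emergency vehicle is blocked
# behind you at a red light? The policy and fields below are invented.

from dataclasses import dataclass

@dataclass
class Situation:
    light_is_red: bool
    emergency_vehicle_behind: bool
    safe_space_ahead: bool  # e.g. a clear forecourt or box to pull into

def decide(s: Situation) -> str:
    if s.emergency_vehicle_behind and s.light_is_red:
        if s.safe_space_ahead:
            # Cross the stop line cautiously to clear a path,
            # accepting the risk of a camera-issued violation.
            return "edge forward to let the ambulance through"
        return "hold position"  # nowhere safe to go
    if s.light_is_red:
        return "wait at the light"
    return "proceed"

print(decide(Situation(light_is_red=True,
                       emergency_vehicle_behind=True,
                       safe_space_ahead=True)))
```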

So, sight, sound, smell and touch are part of our real-world skills in tackling real problems. In what domains can purely digital data outperform humans?

I remember a cancer doctor explaining how the body language of a patient, or signs of stress in the voice, were important to note before giving bad news.

I am deeply suspicious of the exponential claims made for AI/ML. Do we have a rich enough theory of work in any real domain where sensory data, subjective expertise and experience are not part of the outcome?

Where is the overarching theory and underpinning research to move from artificial intelligence (AI) to artificial general intelligence (AGI)? Who is doing it and are they part of the dialogue with IT professionals?

It looks like a lifetime of fun and challenge for a young researcher. As one colleague used to comment, ‘don’t look at me in that tone of voice, it smells a funny colour’.

I’d love an invite to that conference!!