UCL News

Opinion: What separates humans from AI? It’s doubt

16 April 2021

Computers can drive our cars and beat us at chess. What they lack is our ability to know when we don’t know, says Dr Steve Fleming (UCL Psychology & Language Sciences).

Dr Steve Fleming

Humans are self-aware animals, I was told as an undergraduate psychology student. We not only perceive our surroundings, but can also reflect on the beauty of a sunset — or wonder whether our senses are being fooled by illusions or magic tricks. We not only make high-stakes financial or medical decisions; we consider whether those decisions are good or bad.

It is one thing to be conscious, but to know that we are conscious and to be able to think about our own minds — that’s when my head began to spin.

Now consider robots. Ever since Alan Turing devised blueprints for the first universal computer in the 1930s, the uniqueness of our intelligence has become more precarious. In many arenas, humans have now been comprehensively outclassed — even in traditional tests of intellect and ingenuity such as Go, chess and computer games.

But while the algorithms behind those feats can seem stunningly intelligent, they currently differ from humans in that one crucial respect — they don’t know what they don’t know, an ability psychologists refer to as metacognition.

Metacognition is the capacity to think about our own thinking — to recognise when we might be wrong, for example, or when it would be wise to seek a second opinion.

AI researchers have known for some time that machine-learning technology tends to be overconfident. For instance, imagine I ask an artificial neural network — a piece of computer software inspired by how the brain works, which can learn to perform new tasks — to classify a picture of a dolphin, even though all it has seen are cats and dogs. Unsurprisingly, having never been trained on dolphins, the network cannot issue the answer “dolphin”. But instead of throwing up its hands and admitting defeat, it often gives wrong answers with high confidence.

In fact, as a 2019 paper from Matthias Hein’s group at the University of Tübingen showed, as the test images become more and more different from the training data, the AI’s confidence goes up, not down — exactly the opposite of what it should do.
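
For readers who write code, this failure mode is easy to reproduce. The following is a minimal sketch, not the Tübingen group's actual experiments: a toy network is trained on two invented classes standing in for cats and dogs, then probed with an input far from anything it has seen, standing in for the dolphin. It will typically still hand one of the two classes a probability close to 1.

    # A minimal, illustrative sketch of overconfidence on unfamiliar data.
    # The two Gaussian clusters stand in for "cat" and "dog"; the far-away
    # probe point stands in for the dolphin photograph.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy training data: two classes drawn from nearby Gaussian clusters.
    x_train = torch.cat([torch.randn(200, 2) + torch.tensor([2.0, 0.0]),
                         torch.randn(200, 2) + torch.tensor([-2.0, 0.0])])
    y_train = torch.cat([torch.zeros(200, dtype=torch.long),
                         torch.ones(200, dtype=torch.long)])

    model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(200):
        optimiser.zero_grad()
        loss_fn(model(x_train), y_train).backward()
        optimiser.step()

    # Probe the network far outside the training distribution (the "dolphin").
    with torch.no_grad():
        probs = torch.softmax(model(torch.tensor([[40.0, 40.0]])), dim=1)
    print("class probabilities far from the training data:",
          probs.numpy().round(3))
    # One class typically receives a probability near 1.0: high confidence,
    # even though the input resembles neither training class.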

I now run a research lab at University College London that studies the brain mechanisms that support self-awareness and metacognition. Our research has broad implications for how computer scientists continue to develop AI — and not just for games of chess but for self-driving cars and robots making potentially life-or-death decisions.

It should also make us reflect on quite what it is we’re creating when we build these algorithms. The history of automation suggests that once machines become part of the fabric of our daily lives, humans tend to become complacent about the risks involved. As the philosopher Daniel Dennett points out, “The real danger . . . is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence.”

In my lab, we quantify metacognition by asking people to play simple games and to rate how confident they feel about their performance. Some of our tasks ask people to remember lists of words, answer quiz questions such as “How high is Mont Blanc?” or classify visual stimuli such as letters, faces or dot patterns. If your confidence is higher when you’re right and lower when you’re wrong, then your metacognition is in good shape.
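
For the technically minded, a common way to summarise this correspondence in the research literature is the "type 2" area under the ROC curve, which measures how well trial-by-trial confidence separates correct from incorrect answers. The sketch below is a simplified stand-in with invented data, rather than the exact measures reported in our studies:

    # A minimal sketch of metacognitive sensitivity as a type 2 AUROC:
    # how well does confidence discriminate correct from incorrect trials?
    # The trial data below are invented for illustration.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # 1 = the participant answered that trial correctly, 0 = incorrectly.
    correct = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
    # Confidence ratings on a 1-6 scale for the same ten trials.
    confidence = np.array([6, 2, 5, 4, 3, 6, 5, 1, 4, 2])

    # 0.5 means confidence carries no information about accuracy;
    # 1.0 means confidence perfectly separates right from wrong answers.
    auroc2 = roc_auc_score(correct, confidence)
    print(f"metacognitive sensitivity (type 2 AUROC): {auroc2:.2f}")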

We can track this, too. Thanks to the advent of powerful brain-imaging technologies such as functional MRI, which measures changes in blood flow caused by neural activity, we have discovered that when we engage in metacognition, particular brain networks in the prefrontal and parietal cortices crackle into life. Patterns of activity within these networks signal, in fine detail, how confident we feel about our choices.

In turn, damage or disease affecting these same networks can lead to devastating impairments of self-awareness. For instance, there is growing recognition that dementia attacks not only the brain circuits supporting memory, but also those involved in metacognition. Without metacognition, we can lose the capacity to understand what we have lost, or to recognise when we need a helping hand. The connection between our view of ourselves and the reality of our behaviour weakens.

From this research, we are beginning to discover how brains represent uncertainty — how they know when they don’t know — and to build uncertainty into AI. Ingmar Posner, Yarin Gal and their colleagues at the University of Oxford are creating “introspective” robots that know whether they are likely to be right before they take a decision, rather than after the fact. One promising approach, known as “dropout”, runs a problem through a neural network algorithm multiple times, each time randomly switching off a different subset of the network’s connections. We can then ask to what extent the resulting answers agree — a proxy for how uncertain the network is about its choice.

For instance — and showing how “dropout” tempers the overconfidence of most machine-learning technology — if all the answers to an image classification problem return “dog”, then we can be more certain that the image is actually a dog, while if some of the answers disagree, then we should be less sure.
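
For readers who code, the mechanics of this trick, often called Monte Carlo dropout, fit in a few lines. The sketch below is purely illustrative: the tiny untrained network and its three made-up classes are not Posner and Gal's actual systems.

    # A minimal sketch of uncertainty from dropout: pass the same input
    # through the network many times with dropout left switched on, and
    # treat the spread of the answers as a proxy for uncertainty.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    model = nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.3),
        nn.Linear(64, 3),      # three made-up classes, e.g. cat / dog / other
    )
    model.train()              # keep dropout active at prediction time

    image_features = torch.randn(1, 16)   # stand-in for an image's features

    with torch.no_grad():
        # Each pass drops a different random subset of units, so the
        # answers differ slightly from run to run.
        probs = torch.stack([torch.softmax(model(image_features), dim=1)
                             for _ in range(50)])

    mean_probs = probs.mean(dim=0)   # averaged prediction across passes
    spread = probs.std(dim=0)        # disagreement between passes
    print("mean class probabilities:", mean_probs.numpy().round(3))
    print("spread across passes:    ", spread.numpy().round(3))
    # Low spread (the passes agree): be more confident in the answer.
    # High spread (the passes disagree): slow down, defer or ask for help.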

In Hein’s research, computing with probabilities allows an AI system to realise that it has not encountered a scenario or image before, thus reducing its confidence in unfamiliar situations. This is akin to metacognition.

Let’s imagine what a future might look like in which we are surrounded by metacognitive machines. Self-driving cars could be engineered (both inside and out) to glow gently in different colours, depending on how confident they were that they knew what to do next — perhaps a blue glow for when they were confident and a yellow glow for when they were uncertain. These colours could signal to their human occupants to take over control when needed and would increase trust that our cars know what they are doing at all other times. 

Intriguingly, such confidence signals could also be shared between the cars themselves. If two cars are approaching a junction and one begins to glow yellow, it would be wise for both to slow down — just as human drivers would do if unsure of each other’s intentions. Doctors and their AI assistants could interactively share levels of confidence to come to a better decision about diagnosis or treatment than either would have been able to achieve alone. Manufacturing robots could use similar algorithms to signal when they need a helping hand.

There are limits to the blind pursuit of uncertainty — we would not want our robots to be paralysed by self-doubt, especially where swift action is needed. But our studies of humans show that the benefits of metacognition for decision-making outweigh the downsides. When our metacognition is accurate, we can be open to changing course when we are likely to be wrong, while remaining steadfast when we are right.

When I quoted Dennett earlier, it was with a legitimate worry: we do tend to trust too much in technology that we do not understand. But it may not matter that we don’t understand how AI systems work, as long as they are connected to our natural capacity for metacognition.

Consider that only a small number of biologists understand in detail how the eye works. And yet we can instantly recognise when an image might be out of focus, or when we need a new pair of glasses. Few people understand the complex biomechanics of movement, and yet we know when we have fluffed a tennis serve or swung a golf club poorly. In exactly the same way, the machines of the future may be supervised by our biological machinery for self-awareness, without needing an instruction manual to work out how to do so.

Thanks to the plasticity of neural circuits, we already know that it is possible for the brain to incorporate external devices as if they were new senses or limbs. Since the 1990s, neuroscientists have demonstrated the potential for controlling robot arms via thought alone, by using implants that read out patterns of neural activity. More recently, companies such as Elon Musk’s Neuralink have promised to accelerate the development of such technologies by using surgical robots to integrate the implants with neural tissue.

And while most research on these “brain-computer interfaces” is focused on ways that the brain can control the outside world, there is no principled reason why they could not also be used to monitor the performance of autonomous systems. Just as we are jolted into awareness when we clumsily knock over a glass or put our car into the wrong gear, we might acquire a natural sense for when our AIs are about to malfunction, allowing us to step in and take over control.

A remarkable side benefit of this research into metacognition and its application to technology is that it could increase the trustworthiness and accountability of AI systems. Socrates held that being able to examine what we know and do not know is the hallmark of wisdom, and modern philosophers such as Harry Frankfurt have suggested that being aware of our desires is the bedrock of autonomy. We effortlessly lean on self-awareness to explain to each other what we did and why — a capacity that is at the heart of our legal and educational systems.

As the UK’s late former chief rabbi Jonathan Sacks wrote in his book Morality, “If we seek to preserve our humanity, the answer is not to elevate intelligence . . . [It is] self-consciousness that makes human beings different.” The ability to doubt, to question ourselves, to pursue what we don’t yet know, powers the scientific creativity that created AI in the first place.

By expanding the scope of self-awareness — both biological and artificial — we may be able to create a world in which we not only know our own minds, but also start to know the minds of our machines.

This article originally appeared in The Financial Times on 16 April 2021.
