Opinion: AI in education will help us understand how we think
11 March 2020
Professor Rose Luckin (UCL Knowledge Lab) writes in the Financial Times, saying robot teachers are just the start of an evolving relationship with artificial intelligence and education.
Forget robot teachers, adaptive intelligent tutors and smart essay marking software — these are not the future of artificial intelligence in education but merely a step along the way. The real power that AI brings to education is connecting our learning intelligently to make us smarter in the way we understand ourselves, the world and how we teach and learn. For the first time we will be able to extend, develop and measure the complexity of human intelligence — an intellect that is more sophisticated than any AI. This will revolutionise the way we think about human intelligence.
We take much of our intelligence for granted. For example, when travelling to an unfamiliar country, I recognise a slight anxiety when ordering food in a foreign language and feel pleasure when my meal arrives as requested. It is only when we attempt to automate these kinds of activities that we realise how much intelligence they require. Such a future will not be easy or uncontroversial. We need to confront the possible harm that such a pervasive, connected intelligence infrastructure could permit when misused or abused.
However, if we get the ethics right, the intelligence infrastructure will power our learning needs, both with and without technology. Just as electricity invisibly powers lighting, computers and the internet, so AI will invisibly power education. Imagine, for example, secondary school students explaining to a friend how much they understand about photosynthesis. The way they articulate their explanation can be captured and analysed, and each student offered an immersive augmented reality experience that targets their misconceptions.
The analysis of each student’s performance is available to the teacher, who can encourage them to listen to a recording of their original explanation and identify corrections. Students can then predict how well they are now explaining photosynthesis and the accuracy of their predictions could be used to stimulate conversations between student and teacher. We will be able to tap into, evaluate and galvanise our meta-intelligence: the ability to probe, reflect upon, control and understand our intelligence. We will be able to gauge our ability to deal with complex situations to differentiate our human intelligence from that of AI as we build the social relationships that are the foundation of civil society.
How do we build this intelligence infrastructure for education? Through the integration of big data about human behaviour, deep learning algorithms and our own intelligence to interpret what the algorithms tell us. We must leverage the science that has helped us to understand how humans learn, as well as the science that has helped us build machines that learn. For example, explaining and articulating our developing knowledge makes reflection and metacognition possible so that we can examine and monitor our learning processes. Metacognition in turn helps us to understand things more deeply.
The implications are significant. We can collect and analyse huge amounts of data about how we move, what we say and how we speak, where we look, what problems we can and cannot solve and which questions we can answer. The processing and AI-enabled analysis of multimodal data such as this will reveal far more about our progress than simply how well we understand science, maths, history or foreign languages.
It will show us how well we work with other people, and how resilient, self-aware, motivated and self-efficacious we are. Sound ethical frameworks, regulation and education about AI are essential if we are to minimise the risks and reap the benefits. Embrace today's educational AI systems judiciously. Use them to learn as much as possible about AI. But remember that today's AI is merely the start. The future is the use of AI to build the intelligence infrastructure that will radically reform the way we value our own human intelligence.
Rose Luckin is a UCL professor, co-founder of the Institute for Ethical AI in Education, and author of 'Machine Learning and Human Intelligence: The Future of Education in the 21st Century'.
This article was originally published in the Financial Times on 10 March 2020.