This is groundbreaking research in the area of assessment and Artificial Intelligence (AI), aiming to build an examination model for English language testing.
There has been exponential growth in the development and application of AI over the last ten years, particularly due to the convergence of algorithmic advances, big data proliferation and enormous increases in computing power.
AI is now being introduced within higher education selection and screening procedures, to ensure that students have a fair, valid and reliable assessment for entry into universities across the world.
English language testing is at the forefront of using AI in this environment. This research focuses on the Pearson PTE Academic test of English.
This project started in February 2021 and will end in December 2023.
The project comprises three parts over three years, with the central focus on how we build an understanding of modern examination practice, specifically the PTE global examinations that employ AI technologies to deliver and grade tests of English.
Our current and future vision for research in this area of education will be built on the idea of a ‘Modern Examination’, its characteristics and its place in contemporary educational settings, at both local and global levels.
An examination model for the 21st century
This is a unique opportunity to build, via an alliance of industry and academic partners, a more appropriate examination model for many 21st century functions, as AI is further embedded in globalised higher education provision.
We will work with Assessment MicroAnalytics who can provide particular methodological expertise and analytic skills in order to co-create a novel, global data set.
These are, to a large extent, uncharted waters in the realm of educational assessment, and we view this as an opportunity to establish a body of well-evidenced research that clearly articulates how both test developers and assessment stakeholders understand and work within the expanding parameters of AI in education.
We view the proposal as an exploration of two domains:
- the technical understanding of using AI for language testing, and
- an explanation of the active experience of taking such a test.
We are seeking answers to three questions:
- What is known about the role of the test taker (their perception of planning for and taking the test) in the AI-led language testing setting of PTE Academic?
- How do we characterise the lived experience of test takers in AI-led language test domains?
- How can we include the test takers in the testing cycle for effective user experience feedback?
We propose a largely qualitative methodology, drawing first on the literature and then on data sets to bring ideas and findings together for the purposes of triangulation.
The epistemological basis for the study is that the test takers are viewed as actors within a social world, and ontologically they are seen as understanding reality in complex and unique ways.
We will seek to document and interpret these rich data using a range of conventional social science research tools, adapted for online delivery using conventional digital platforms, although mindful of privacy concerns.
Working with Assessment MicroAnalytics will help us to create ethical and sustainable research tools based on their years of experience. This will enable us to tap into a range of methods, such as the use of video to capture behaviour, gesture and facial expressions, and eye tracking to triangulate and link to larger-scale data sets (i.e. the gist of micro-analytic methodology).
The PTE Academic research team has approached IOE to develop and undertake this research as independent researchers with expertise in testing and AI in education.