New research questions results from the PISA 2015 study
26 January 2018
A new paper, written by UCL Institute of Education (IOE) academic Professor John Jerrim, questions the findings from the latest Programme for International Student Assessment (PISA) study.
The Organisation for Economic Co-operation and Development's (OECD) PISA study is an international survey which aims to evaluate education systems worldwide by testing the skills and knowledge of over half a million 15-year-old students who undertake a two-hour test.
Professor Jerrim's report, published by the Centre for Education Economics (CfEE), questions important changes made to PISA's methodology.
In 2015, PISA was conducted on computer in 58 countries, while 14 others used a standard paper test. This was a significant departure from previous cycles of PISA, when all countries assessed their students using paper-based tests.
Using data from three countries that took part in the PISA 2015 pilot, the paper illustrates how this change could have had a significant impact upon the results, with children tending to perform the equivalent of around six months of schooling worse when taking a computer-based (rather than a paper-based) test.
Professor Jerrim questions whether the methodology the OECD has used to "adjust" for this problem has worked sufficiently well, and whether results from the PISA study remain truly comparable.
"Taking a test on computer is very different to the standard procedure of taking a test using paper and pencil. Yet the OECD has really only provided scant evidence on the impact this is likely to have had upon the PISA 2015 results," said Professor Jerrim.
"Could this have driven some of the more surprising findings from the PISA 2015 study, such as Scotland's plummeting performance in reading and science compared to 2012, or the significant decline in several East Asian countries' mathematics scores? At the moment, I don't think we have enough evidence on this issue to say, but we certainly can't rule such possibilities out."
James Croft, founder and chair of CfEE, said: "It is vital that there is clarity around the methodology of these assessments, as governments clearly rely on them when setting education policy. We hope that, with the publication of this paper today, governments across the world will carefully reflect upon how comparable the 2015 results are, both across countries and with those from previous PISA assessments."
The research is based upon data from more than 3,000 15-year-olds from across three countries. These pupils were randomly assigned to complete either a paper or computer version of the PISA test in a pilot study conducted in 2014. By comparing test scores across these two groups, the research team was able to establish that children who took the computer version found the PISA questions much harder to answer correctly.
The paper explains that there are several reasons why this may have occurred. These include the computer assessment not allowing students to return to questions they may have skipped, along with potential differences in the test environment.
- Read the report 'A digital divide? Randomised evidence on the impact of computer-based assessment in PISA'
- View Professor John Jerrim's research profile
- Centre for Education Economics (CfEE)
- Programme for International Student Assessment (PISA)