
The PISA 2015 Response Time Puzzle

Using the publicly available PISA 2015 data, I plotted the mean score and the mean number of actions against response time, in one-second bins, across all science items.


That is, the mean score for responses given in less than one second, the mean score for responses given in one to two seconds, and so on.
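For readers who want to reproduce the binning, here is a minimal sketch. It assumes a long-format table with one row per student-item response; the file name and the columns resp_time_sec, score, and n_actions are hypothetical placeholders, not the actual PISA variable names.

```python
import pandas as pd

# Hypothetical long-format response file: one row per student-item response,
# with columns 'resp_time_sec' (response time in seconds), 'score', and
# 'n_actions'. These names are placeholders, not the real PISA variables.
df = pd.read_csv("pisa2015_science_responses.csv")

# Bin response time into whole seconds: [0,1), [1,2), [2,3), ...
df["rt_bin"] = df["resp_time_sec"].astype(int)

# Mean score and mean number of actions within each one-second bin.
by_second = (
    df.groupby("rt_bin")[["score", "n_actions"]]
    .mean()
    .rename(columns={"score": "mean_score", "n_actions": "mean_actions"})
)

print(by_second.head())
```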

The results, in the attached image, were puzzling.


Responses given in less than a second had a very high average score and more than 30 actions on average. I looked at the responses item by item and noticed that the phenomenon appears in almost all of the items.


The PISA administration's first response was: “These two plots do not make sense... Please send us some more information.”

Well, yes, that was exactly what I was thinking: they don’t make sense.


Two weeks later, they confirmed that they noticed the problem as well, and after another month they also found the cause: the “total time” variables included in the database were actually measures of the time spent on the last visit (if a student visited an item multiple times). This problem appeared in both PISA 2015 and PISA 2018.


Therefore, the good students, who went over the test again, were recorded with very low response times.

Several working papers and even published papers used this problematic response time data.


On November 6th, a new data file was added: the Cognitive items total time/visits data file. It contains the total time spent on each item, that is, the correct RT measurement, for all students and items. However, the incorrectly reported RT measurements are still in the cognitive data file, so researchers must take great care to use the right variables: the ones ending with TT, not the ones ending with T.
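As a rough illustration of that distinction, here is a minimal sketch that keeps only the corrected total-time variables. The file name is a placeholder, and the suffix filtering simply encodes the naming convention described above (TT for total time, a plain trailing T for the last-visit measurements); it is not taken from the official OECD documentation.

```python
import pandas as pd

# Hypothetical file name; pandas.read_spss requires the pyreadstat package.
cog = pd.read_spss("pisa2015_cognitive_with_total_time.sav")

# Corrected total-time variables end with "TT"; the last-visit-only timing
# variables end with "T" (but not "TT") and should not be used as response time.
total_time_cols = [c for c in cog.columns if c.endswith("TT")]
last_visit_cols = [c for c in cog.columns if c.endswith("T") and not c.endswith("TT")]

# Use only the corrected variables for any response-time analysis.
rt = cog[total_time_cols]
print(f"{len(total_time_cols)} total-time variables, "
      f"{len(last_visit_cols)} last-visit-only variables to avoid")
```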
