News of the testing effect is no longer new.  Thanks to authors like Daniel Willingham and bloggers including David Didau, Joe Kirby and Kris Boulton, the merits of frequent, low-stakes tests appear increasingly well-known (vindicating quite a lot of people who’d been doing it all along).  For a good summary, I recommend this piece by David Didau, which notes the power of testing to aid students’ recall, their future learning and the transfer of knowledge to new contexts.  This week, I visited Goldsmiths and met Simon Katan, a lecturer in Computing – picking up some new angles on the usefulness of testing in universities.

Many of the issues Simon faced were analogous to those teachers face, among them the ‘second year dip’, similar to the Year 8 dip in secondary schools.  It comes about through the leap from the foundations laid in students’ first year: in their second year the marks begin to count, the work becomes more difficult and tutors’ support more limited, in preparation for the independence required in the final year.  Although they usually end the year contented enough, Simon said many students can find much of it dispiriting.  This is particularly problematic because students who become discouraged are free to stop attending (although sanctions exist in theory, they are very limited).  Once they do, not only are they likely to stop learning, but Simon noted they may not even know what the assignments are.


Simon turned to Kahoot to run mini-tests at the end of each lecture.  Kahoot allows teachers to run multiple choice quizzes during lessons; in this case, students respond using their mobile phones, within a given time limit.  Having seen it used, I’d say it’s similar to, and appears as straightforward as, apps like Plickers and QuickKey.  This approach seems to have many merits for a lecturer.  Most obviously, it takes advantage of what we know already about the power of testing to improve recall and learning during the lecture itself.

In this case it has additional benefits too:

  • Each test is worth 1% of a student’s final mark.  These stakes seem low enough to forgive occasional absence and avoid significant stress, but high enough to encourage (and reward) frequent attendance.  It is worth being at every lecture, even if a student doesn’t feel like it or imagines they’ll catch up on what they miss.
  • Students get a score of 50% on each test just for logging on.  Again, this rewards attendance further, but it also means that, in the event of technical problems mid-way, every student still has something (Simon says the platform has been pretty reliable anyway).  A sketch of how this scoring scheme might work appears after this list.
  • Ordinarily, attendance is marked on a paper register, and processing this data and getting it back to him takes a long time.  Students’ marks for these tests act as a proxy for an attendance register, which Simon can access much more swiftly.
  • Students automatically get an email with their mark.  They also receive summaries of how well they are doing during the course, and a clear picture of what they need to improve upon.  (At present, students receive this irregularly, but Simon hopes to offer a dashboard showing all their marks in future.)
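For anyone curious how the numbers work out, here is a minimal sketch of the scoring scheme in Python.  I haven’t seen Simon’s actual setup, so the function names, data format and exact weighting behaviour here are my assumptions, not his implementation.

```python
# A minimal, hypothetical sketch of the scoring scheme described above.
# Assumptions (not from the source): each quiz result arrives as a
# fraction correct (0.0-1.0), or None if the student never logged on;
# each quiz contributes 1% to the final mark; logging on alone earns
# 50% of a quiz's credit, with the rest scaled by performance.

from typing import Optional, Sequence

QUIZ_WEIGHT = 1.0    # each quiz is worth 1% of the final mark
LOGIN_CREDIT = 0.5   # half credit just for logging on

def quiz_mark(fraction_correct: Optional[float]) -> float:
    """Mark for one quiz, as percentage points of the final grade."""
    if fraction_correct is None:   # never logged on: no credit
        return 0.0
    # 50% for logging on, the remainder proportional to performance
    return QUIZ_WEIGHT * (LOGIN_CREDIT + (1 - LOGIN_CREDIT) * fraction_correct)

def course_quiz_total(results: Sequence[Optional[float]]) -> float:
    """Total contribution of all quizzes to the final mark."""
    return sum(quiz_mark(r) for r in results)

def attendance_rate(results: Sequence[Optional[float]]) -> float:
    """Quiz logins double as a proxy attendance register."""
    attended = sum(1 for r in results if r is not None)
    return attended / len(results) if results else 0.0

# Example: attended 3 of 4 lectures, scoring 80%, 60% and 100%
results = [0.8, 0.6, None, 1.0]
print(course_quiz_total(results))  # 2.7 (out of a possible 4.0)
print(attendance_rate(results))    # 0.75
```

Treating ‘never logged on’ differently from ‘logged on and scored zero’ is what lets the same data serve as both a mark and an attendance register, while the 50% login credit cushions students against mid-quiz technical failures.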

Students have responded positively to the app.  Their scores appear to correlate with other indicators of their performance.  The whole arrangement provides additional, useful data to Simon and students alike.

It was interesting to note how similar many of the challenges facing university lecturers are to those faced by school teachers.  I don’t know what school teachers may wish to do with these ideas or this insight – but it seemed interesting enough to be worth sharing, and I’m sure someone can make something of it.

Everything of mine on multiple choice questions can be found here, thoughts on and experiments with the testing effect here.

Image credit – Gwyneth Anne Bronwynne Jones