Let’s say that you want to help a teacher teach differently. Or that you want to teach differently yourself. Perhaps you’re pursuing a more efficient entry routine, more succinct questions, or clearer explanations of key points. What helps a teacher to change?
We might review the evidence, plan change, practise it – and examine a model of the desired practice. But, while looking at models may sound obvious, it’s far from universal: in our systematic review of teacher professional development for the EEF, a third of programmes didn’t offer models.
Getting hold of a good model can be hard. We have to find someone doing whatever it is really well, get permission to film, catch it happening, and edit and perhaps annotate the results. This was a large part of my job for a couple of years. Creating one or two good videos could take one or two days’ work. So how much of a difference will a model really make? Enough to justify this effort? What kind of model are we looking for? And how is this likely to affect the watcher?
Surprisingly, until now, we have lacked robust evidence answering these questions in teacher professional development. In our systematic review, we argued that modelling was one of fourteen mechanisms likely to help teachers improve. We based this on broad evidence across domains that models help people change. The review found that the more of these mechanisms a professional development programme included, the more likely we were to see increased student learning. But the effect of models on teacher learning and practice has not been rigorously tested. That’s what we (a team led by Sam Sims, under the auspices of Ambition Institute) set out to do.
The experiment
This was a (fairly) simple randomised experiment. We recruited 89 primary trainee teachers. First, we asked them to read an evidence summary about questioning for retrieval. This advocated techniques including cold calling, using wait time, correcting partially-correct answers and offering hints when students were stuck.
Then, we asked trainees to use these techniques to teach a simulated group of students. That is, each trainee taught five avatar pupils (as in the image below). The five ‘students’ were voiced and controlled by a simulation specialist. Students could raise their hands, talk to their partners, and show their disappointment if they got something wrong.
We gave the teachers the questions to ask (on the topic of sound) and the desired answers. We gave the simulation specialist scripted responses: when to give the correct answer, when to give a partial answer and when to say “I don’t know.”
After a trainee had read the evidence and asked students the retrieval questions once, we introduced the experimental element. Trainees were randomly assigned to one of three groups:
- The rereading group reread the evidence
- The video group watched a video of these techniques being used by a primary teacher
- The video + theory group watched the same video, but with additional captions explaining why the techniques work
Then we asked trainees to repeat the simulation, asking the retrieval questions again. For each round of practice, we scored the trainees’ application of the skills we’d asked them to use. For example, we recorded whether they cold called or took hands up each time they asked a question. We counted how many seconds passed between giving the question and inviting a student to answer, to check whether trainees were using wait time. We recorded their responses to (planned) errors by students: did they hint, correct, or switch to another student (we had asked them to correct)? This allowed us to see whether the video model made a difference, and if so, whether the captioned version helped more.
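To make the scoring concrete, here is a minimal sketch of how a rubric like this could be tallied, in Python. It is illustrative only: the `QuestionAttempt` fields, the three-second wait-time threshold and the one-point-per-technique scoring are my own assumptions, not the instrument used in the study.

```python
from dataclasses import dataclass

@dataclass
class QuestionAttempt:
    """One retrieval question asked in a simulated round (hypothetical record, not the study's data format)."""
    cold_called: bool          # did the trainee cold call rather than take hands up?
    wait_time_seconds: float   # seconds between posing the question and inviting an answer
    error_response: str        # "hint", "correct", or "switch" after a planned student error


def score_attempt(attempt: QuestionAttempt) -> int:
    """Assumed 0-3 rubric: one point per technique applied as asked."""
    points = 0
    if attempt.cold_called:
        points += 1
    if attempt.wait_time_seconds >= 3.0:      # assumed wait-time threshold, not taken from the paper
        points += 1
    if attempt.error_response == "correct":   # trainees had been asked to correct planned errors
        points += 1
    return points


def score_round(attempts: list[QuestionAttempt]) -> int:
    """Total score for one round of practice: six questions would give a maximum of 18."""
    return sum(score_attempt(a) for a in attempts)


# Example: a trainee who cold calls, waits four seconds, and corrects a planned error scores 3 on that question
print(score_round([QuestionAttempt(cold_called=True, wait_time_seconds=4.0, error_response="correct")]))
```

On this assumed rubric, six questions at three points each would give the 18-point maximum mentioned below; the paper’s actual scoring scheme may well differ.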
How do models make a difference to teacher learning?
1) Watching a model makes a big difference to teacher practice. On the first attempt, all three groups of trainees averaged a score of 6 for their use of these techniques (out of a maximum of 18 points). On the second attempt, the rereading group scored roughly the same. The two groups of trainees who had seen videos almost doubled their scores.
2) An annotated model offers no additional benefits. We had thought that a model with captions explaining why the techniques work might help trainees to perform better. Surprisingly, there was no real difference in the effects of the two different kinds of model. Models are useful; captions add nothing.
I say surprisingly… in a sense it’s surprising, because there’s an argument that just watching good practice isn’t enough to understand why it works. But this finding shouldn’t have surprised me, because it echoes something which had surprised me once before. While writing Responsive Teaching, I spent some time researching the effect of models. I wrote that:
I was even more surprised to learn how little teachers’ explanations add to models. There’s compelling evidence that having students evaluate the merits of contrasting models may be sufficient without additional teacher explanation (Renkl, Hilbert and Schworm, 2008; Wittwer and Renkl, 2010) – another example of the limits of words in conveying what success looks like, and the merits of models.
What matters is showing a good model. If you can do that, people seem to be able to work out what they need to do without you then talking about it as well. Choosing a good model is worth 80% of our effort, picking a way for learners to engage with it 15%, and coming up with captions or a narration about 5%. Show, don’t tell.
3) Models don’t boost declarative knowledge. A week after the experiment, we followed up with a quick knowledge check. We wanted to know whether trainees remembered key principles about retrieval practice, like the benefits of cold calling and why hints can help. Trainees who had watched the captioned video remembered little more than those who had just reread the text.
4) Nor do models boost self-efficacy. We also asked about trainees’ self-efficacy: how confident did they feel about asking questions? Again, surprisingly, all three groups’ confidence grew to the same extent. Even though the video groups asked questions more effectively second time around, the rereading group (who had got no better) gained in confidence too. Presumably, just another round of practice made everything feel smoother and easier, and boosted their confidence. So practice boosts confidence – even when it doesn’t boost effectiveness!
Conclusion
Using models has become increasingly popular in teacher training and professional development over the last few years. So it’s very useful to have this practice – and the findings of our systematic review – affirmed by a study of the value of models specifically. We can say:
- When asking teachers to change their practice, a model will almost certainly help
But this study also tells us:
- Don’t worry too much about narrating the model – worry more about what it shows.
- Don’t rely on video models to convey underlying principles – reading still matters.
- Practice can boost self-efficacy, even if effectiveness remains stuck!
There are nuances not covered in this study. For example, you might introduce models differently with more experienced teachers. But the key point is that in teacher development, just as in student learning, models are magic.
If you enjoyed this, you might like…
The full paper, here (link at the bottom of the page)
This post, on the magic of models.