I’ve been trying to make sense of the evidence on feedback and to share what I’ve learned.  This has proved challenging.  Feedback can improve students’ learning and performance “if delivered correctly”, but the large body of research suffers from “many conflicting findings and no consistent pattern of results” (Shute, 2008, pp. 153–154).  For example, should we give immediate or delayed feedback?  Studies point in apparently contradictory directions, and my initial drafts for the book I’m writing, Responsive Teaching, did not seem to convey what I’d learned clearly.

Could a decision tree help?  I read an article recently showing how doctors can use them to improve their decision-making: the decision tree poses critical questions based on the evidence; doctors combine this with their knowledge and experience to make rapid, accurate decisions (Wegwarth, Gaissmaier and Gigerenzer, 2009).  So I’ve tried conveying some key findings from the evidence around feedback and guiding improvement in this decision tree:

Each judgement demands evidence and explanation, however.  Including this on the decision tree itself would create a monster, not a handy summary; instead, I’ve explained the rationale and evidence for key ideas below.  The literature on feedback is not only contradictory, it’s vast: if you have better evidence on any of these questions, please let me know and I’ll make appropriate amendments.

Should I offer guidance yet?

I’d always thought the literature made no sense on this: I couldn’t reconcile the logical imperative of stepping in early to stop students making mistakes with the evidence that delaying feedback helped students remember more.  The important thing to remember is that feedback is not always helpful: it may make students dependent on feedback, rather than alert to their own errors (Kluger and DeNisi, 1996).  Since it’s easier not to provide feedback than to provide it, identifying times when it is of limited (or no) use seems valuable.  It seems you are likely better off delaying or withholding feedback if:

Students lack knowledge: Although most student errors are ultimately attributable to some lack of knowledge, the point is that students can only use feedback if they know enough to make sense of it: if it addresses faulty interpretations, not a total lack of understanding (Hattie and Timperley, 2007).  Offering students facilitative feedback like ‘What do you think should go here?’ is a waste of time if they have no idea: we should simply reteach explicitly.

The task is relatively easy for students: Students who are able to identify errors and problems themselves, or simply to keep going, should be allowed to do so; for students who are struggling, delaying feedback may cause frustration and waste their time (Shute, 2008).

The task is complex: Immediate feedback is more effective for simpler tasks earlier in the learning process: correcting errors immediately leads to faster acquisition (Hattie and Timperley, 2007) and greater success with procedural skills like programming and maths (Shute, 2008).  Delayed feedback is more effective with more complex tasks and where we want students to transfer learning from one task to another.

Students are not yet fluent in a (relatively simple) task: If students are developing fluency in tasks, immediate error correction can detract and distract from both learning and students’ automaticity (Hattie and Timperley, 2007).  (This is the one I struggle to make sense of most: don’t students risk encoding errors?)

The task provides its own feedback: If the task provides its own feedback, it is better for students to use that than to add external distractions; examples might be a computer program or a conversation in a foreign language (Kluger and DeNisi, 1996).

Students have not yet done their best: Offering feedback on things students know to do but have forgotten (or skipped) is a poor use of our time: we can ask students to check their work, or one another’s, using a checklist.  For students to take this responsibility seriously, we may have to return work to them for further checking if we find they have ‘completed’ the checklist without the care and attention we’d hope for.
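For readers who think procedurally, the conditions above amount to a checklist: if any one of them holds, it is worth considering delaying or withholding feedback.  As a purely illustrative sketch (the function name and its inputs are my own invention, not part of the decision tree itself):

```python
# Illustrative sketch only: encodes the rules of thumb above as a checklist.
# All names here are hypothetical; this is not the decision tree itself.

def consider_delaying_feedback(student_lacks_knowledge: bool,
                               task_easy_for_student: bool,
                               task_complex: bool,
                               building_fluency: bool,
                               task_self_correcting: bool,
                               best_effort_made: bool) -> bool:
    """Return True if any condition suggests delaying or withholding feedback."""
    return any([
        student_lacks_knowledge,  # reteach explicitly instead
        task_easy_for_student,    # let students find errors themselves
        task_complex,             # delayed feedback suits complex tasks and transfer
        building_fluency,         # immediate correction can distract
        task_self_correcting,     # e.g. a program or a foreign-language conversation
        not best_effort_made,     # ask students to check against a checklist first
    ])
```

The point of the sketch is simply that these are disjunctive conditions: any single one is grounds to pause before reaching for the red pen.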

How can I guide improvement?

I’ve called this ‘guiding improvement’ to emphasise that we can help students improve in many ways other than providing individual feedback.  Even if feedback is the best way to help any one individual, it may not be the best way to help the whole class.  There are many ways to guide improvement which may be more efficient and effective than reaching for the red pen and marking student books individually.  This section is grounded in the evidence, where it’s available, and in the most plausible ideas I’ve used and seen, where the evidence runs out.

Guiding improvement during the lesson: Assuming the previous questions have led here (rather than to ‘consider delaying feedback’), it makes sense to guide improvement during the lesson, rather than waiting and allowing errors to stick.  As a rule of thumb, if we are helping individual students and find more than three students facing the same problem, it’s worth stopping the class to offer guidance.

Planning guidance to be used next lesson: If we’re looking over student work after the lesson, we do not have to mark it; we can offer guidance in many ways, on its own or combined with feedback.

Whether we’re guiding improvement during the lesson, or planning guidance for the next, we could:

  • Reteach key points explicitly
  • Revisit the models we have shared with students previously
  • Model ways to improve a ‘C+’ piece of work
  • Provide more practice

I’ve written more about whole-class guidance as alternatives to individual feedback here.

Marking: The evidence on marking is limited and inconclusive: Elliott et al. (2016) found “a striking disparity between the enormous amount of effort invested in marking books, and the very small number of robust studies that have been completed to date.”  If we do want to mark, it’s worth considering how we can do so efficiently and rapidly, for example:

  1. Target marking on specific features of student responses, such as opening paragraphs or the three most challenging problems (we can then reteach these features).
  2. Standardise marking: rather than writing the same thing repeatedly, write a code and tell students what the code means (or just tell the whole class the feedback).
  3. Write less: the more complex our feedback, the less students seem to be able to act on it (Kulhavy et al., 1985, cited in Shute, 2008).

I’ve written about marking more efficiently here, and standardising approaches here.

If we do mark students’ work, we can then tie our comments to guidance for the whole group: for example, we could reteach something, then ask students to find a place where they have made the error we’ve mentioned and change it.

How can I ensure students welcome and act upon feedback?

How students feel about feedback affects how well they respond to it (Shute, 2008).  We can:

  1. Discuss emotional responses with students and encourage them to think about and overcome them (Sarah Donarski has written insightfully about this here).
  2. Convey high standards and a belief that students can meet those standards: ‘I’m giving you this feedback because I know you can get an A on this’ has a dramatic effect on students’ likelihood to redraft and on their grades (Yeager et al., 2014).
  3. Celebrate improvement: helping students to recognise the impact of their responses should encourage them and also be a good way to reinforce their metacognitive awareness.

Feedback that feels controlling tends not to be welcomed, so wherever I’ve suggested reteaching, revisiting models and so on, take this to imply ‘in the same kind, encouraging way you usually use to get students doing the thinking and taking responsibility and pride in their work’.

How do I know students have improved?

A stringent definition of feedback suggests that if it evokes no change in the recipient, it is not feedback.  Ideally, we would like to see that students have responded to the guidance they’ve received – but having carefully avoided unnecessary or excessive marking, we don’t want to reintroduce it now.  We might ask students to:

  • Correct their work
  • Redraft their work (Ron Berger’s work powerfully influenced me to see the importance of this; more here).
  • Respond to a new check for understanding, like a quick hinge question.
  • Practise further.

Conclusion

Any model like this creates almost as many problems as it solves.  This is an attempt to summarise what I’ve learned about feedback so far; I’m sharing it in the hope that it provides evidence around a few tricky questions and promotes viable alternatives to laborious marking.  I make no claim that these findings fit Year 3 Numeracy as well as they do Year 12 Politics.  The decision trees which helped doctors worked because they provided a handy summary of the evidence which doctors combined with their experience and judgement: we should do the same.  So I would be delighted if you:

  1. Share your thoughts on how this can be improved (both in clarifying the tree and improving the evidence base)
  2. Develop any ideas about how this would look different for your subject or phase

* * * * *

Michael Pershan has offered thoughtful and challenging feedback which has led to a series of refinements to the decision tree this week.

What to read next

Guiding improvement without giving individual feedback: ways to plan feedback for the whole class.

Checklists for students: efficiency, autonomy and excellence in the classroom: getting students improving their own work.

What if you marked every book every lesson?  (Not because you should, but because this post exemplifies targeted, brief marking).

Bibliography

Elliott, V., Baird, J., Hopfenbeck, T., Ingram, J., Richardson, J., Coleman, R., Thompson, I., Usher, N. and Zantout, M. (2016). A marked improvement? A review of the evidence on written marking. Education Endowment Foundation.

Hattie, J. and Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), pp.81-112.

Kluger, A. and DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), pp.254-284.

Shute, V. (2008). Focus on Formative Feedback. Review of Educational Research, 78(1), pp.153-189.

Wegwarth, O., Gaissmaier, W. and Gigerenzer, G. (2009). Smart strategies for doctors and doctors-in-training: heuristics in medicine. Medical Education, 43(8), pp.721-728.

Yeager, D., Purdie-Vaughns, V., Garcia, J., Apfel, N., Brzustoski, P., Master, A., Hessert, W., Williams, M. and Cohen, G. (2014). Breaking the cycle of mistrust: Wise interventions to provide critical feedback across the racial divide. Journal of Experimental Psychology: General, 143(2), pp.804-824.