Feedback improves learning by changing students’ knowledge, understanding or behaviour, but only if students act on it. As Royce Sadler put it:
Information about the gap between actual [students’ current] and reference [desired] levels is considered as feedback only when it is used to alter the gap.
Sadler, 1989, p.121
By this tough standard, much of the marking I did as a new teacher wasn’t feedback: students seldom acted upon it or improved as a result. We could argue that Sadler is too demanding, but he highlights an important problem:
There is ample evidence of both anecdotal and scientific nature that a number of students do not use the feedback they receive, and therefore do not realize the potential of feedback for learning.
Jonsson, 2013, p.64
Sadler encourages us to ask ‘What did students learn from feedback?’ rather than ‘What feedback did students get?’ (‘Did they learn it?’ not ‘Did I teach it?’). I discussed ways to ensure students understand, act upon and learn from feedback in Responsive Teaching; this update is based on my subsequent reading. I’ll discuss four aspects of feedback which affect how students respond (following a structure proposed by Winstone et al., 2017) and two revealing small-scale experiments.
How students respond to feedback depends on…
1) The learner: students must be motivated and equipped to act on feedback. They often see feedback as criticism: when teachers told students “I’m giving you these comments because I have very high expectations and I know that you can reach them” (Yeager et al., 2014, p.809), they were more likely to resubmit essays, received higher grades, and trusted the school more. We could also encourage them to respond by:
- Using framing – telling students “Don’t miss out on the chance to improve”
- Highlighting role models – for example, sportspeople who value feedback
- Emphasising social norms – noting that the majority of students act on feedback
Students often lack strategies to act on feedback (Jonsson, 2013): we may want to teach specific approaches to help them monitor and regulate their responses to feedback, like pinpointing areas for improvement, redrafting and comparing their work to models.
2) The sender: students must believe we are a credible source of feedback. Hopefully they do; if we are unsure, we could bolster our authority by explaining why we know this feedback is appropriate:
- “I gave this feedback to a student two years ago; she used it to…”
- “The chief examiner has said that…”
- “When we discussed this in our department meeting, we agreed…”
3) The message: feedback must be specific and comprehensible. Students struggle with feedback which is illegible, includes jargon or is pitched too high: peer feedback may be more comprehensible (Jonsson, 2013; Cho and MacArthur, 2010). The desired improvement should be clear: one study found that clearly identifying the location and possible solutions to a problem made students more likely to respond (Nelson and Schunn, 2008). What students like in a feedback message may not be what’s best for them:
- Students like individualised, detailed feedback… but models and group feedback which show students the standard to be achieved may be more useful (Jonsson, 2013; Huxham, 2007)
- Students like to get a lot of feedback… but this may overshadow key points; additional explanation may make students less likely to respond (Jonsson, 2013; Nelson and Schunn, 2008)
- Students like positive comments… but they are less likely to act on them (Jonsson, 2013)
- Students want grades… but they respond by seeking higher grades (not necessarily genuinely improving), making superficial changes or giving up (Jonsson, 2013): we may help students focus on improvement by giving comments without grades (Butler, 1988).
4) The context: students need the chance to act on feedback. Feedback often comes late in a module: it may be too late to improve the current task and students may not see how it applies to future tasks (Jonsson, 2013). We can offer feedback which links improvements to the current task to more general principles (Hattie and Timperley, 2007; more on this here) and give students the chance to improve the task, helping them to respond by asking them to plan when and how they will do so (Jonsson, 2013; Winstone et al., 2017).
Practical approaches to making feedback more useful
Using models
An intriguing study compared giving first-year biology undergraduates model answers or individual comments (Huxham, 2007). For half the questions, each student received model answers; for the other half, they received individual comments on their answers. Most students preferred individual comments, but in the final exam, they did better on questions for which they’d received model answers, by a small but significant margin. Model answers are not only more efficient, they may be clearer: 30% of those who preferred models in this study did so due to the marker’s handwriting (Jonsson, 2013; Huxham, 2007).
Feedback from multiple peers
Another study examined exactly how a small group of students changed their work in response to feedback (Cho and MacArthur, 2010). Students were randomly assigned to receive feedback from either:
- A single expert
- A single peer
- Multiple peers
Feedback from the expert focused on simple corrections (like adding or deleting words), with occasional suggestions about major changes (like adding a new paragraph). Feedback from peers focused on clarifying the meaning of sentences and paragraphs. Students who received expert feedback made the simple corrections suggested, but this had little effect on their final grades. They tried to make major changes, but struggled to do so successfully. Students who received feedback from multiple peers clarified their arguments substantially, receiving higher grades as a result. This study highlights that:
- The ‘curse of the expert’ means expert feedback may be less clear to students than peer feedback.
- Individual feedback may not be the best way to encourage major changes: students may not understand how to make major revisions based on written comments.
- Feedback from multiple peers may give students a sense of the audience for their writing, helping them to write more clearly.
Conclusion
The reviews discussed here note the limits to improving student responses: the evidence is limited, many approaches are time-consuming and some are unpopular with students (Winstone et al., 2017). Much of the evidence is about undergraduates and most studies focus on students’ preferences; one review found only two articles which had examined students’ actual responses (Jonsson, 2013). However, the studies and reviews which exist suggest four questions which may encourage students to respond to feedback:
- Commitment: how can we increase students’ commitment to improve?
- Credibility: how can we encourage students to value our feedback?
- Clarity: how can we make the desired improvement clearer?
- Chance: how can we offer students the opportunity to improve?
This post is one of a series of updates to Responsive Teaching, marking a year since its publication.
If you found this interesting, you might appreciate
Detailed discussion and exemplification of various aspects of feedback in Responsive Teaching: Cognitive Science and Formative Assessment in Practice, alongside discussion of five other endemic problems in teaching.
Alternatives to marking when offering feedback
Determining what kind of feedback will move students on
References
Cho, K., and MacArthur, C. (2010). Student revision with peer and expert reviewing. Learning and Instruction, 20, pp.328–338.