Say ‘No’ to professional development. That’s the obvious conclusion to be drawn from a series of recent, rigorous, randomised-controlled studies of professional development programmes. Unlike so much professional development, they incorporated all the ingredients suggested by reviews of the evidence (such as Desimone, 2009): expert input, leadership support, a subject knowledge focus and a duration of a year or more. Yet despite thoughtful design, careful implementation and significant investment, these programmes led to no discernible improvements in student learning.
A brief synopsis. Each study was a randomised-controlled trial, providing a comparison group of similar teachers who did not experience the professional development – allowing clear causal inferences to be made about the impact of the training. Garet et al. (2016) found that summer training, video coaching and professional learning communities “improved teachers’ knowledge and some aspects of classroom practice but did not improve student achievement”. Garet et al. (2011) found that summer training, coaching and follow-up seminars over two years changed neither teacher knowledge nor student achievement. And Jacob et al. (2017) found three years of problem solving and examination of student work and misconceptions led teachers to evaluate the programme positively, increased their knowledge slightly but left their teaching unchanged. Before we reject professional development entirely, however, I’d like to suggest six ways these studies could help us to refine the way we design professional development.
1) Focus on the right knowledge
Focusing on the subject to be taught is important, but some aspects of this knowledge are more powerful than others. Programmes tended to focus on common content knowledge – general knowledge about the subject – and specialised content knowledge – such as different ways to solve maths problems (see Ball et al., 2008). They focused less on the specific content and problems teachers were due to teach: one programme, for example, taught maths useful across elementary school, even though the study participants were fourth grade teachers (Garet et al., 2016). Developing teachers’ general subject knowledge is a massive, vague task: that knowledge is valuable, but teachers may well not transfer it to their teaching. Professional development focusing on teacher knowledge should therefore be designed around the curriculum: it may fill gaps in teachers’ general knowledge, but the real gains will derive from understanding different ways students approach problems, their misconceptions and useful models (Ball et al.’s (2008) content knowledge for teaching and content knowledge of students).
2) Get inside the black box
Yet teachers’ practice is far more important than their knowledge. The association between teachers’ knowledge and student learning is so weak that developing it may be a distraction:
To obtain an impact on [student] achievement of 0.1 standard deviations (the equivalent of moving a student’s test score from the 50th percentile to the 54th percentile), for example, would require an improvement in [teachers’ knowledge and] practice by about 1 standard deviation (the equivalent of moving a teacher’s practice from the 50th percentile to the 84th percentile) (NCEE, 2016).
Professional development may influence isolated aspects of teachers’ knowledge and behaviour, but doing so is unlikely to change the existing, complicated patterns of teachers’ classroom behaviour. It is possible to influence teacher behaviour: a striking counter to these unsuccessful programmes is Allen et al.’s (2011) study, which achieved a significant impact on student learning through video coaching about classroom climate.
3) All teacher training should be practice-based
In the same vein, talking about practice is no substitute for practice. Here’s a typical activity from one of the programmes:
Teachers sitting in small groups were given a container of rice, tape, and three 5- by 8-inch index cards. The teachers were asked to make three shapes with the index cards—a circular cylinder, a triangular prism, and a square prism—by taping the two five-inch ends together with no gap or overlap…
They were then asked to predict which shape had the largest volume and test their hypotheses by filling the shapes with rice: “Upon doing so, many were surprised that the circular cylinder actually held more than the other two shapes.” They tried to work out the shapes’ volumes:
Neither group made progress toward a reason for their outcome. The whole-group conversation that followed focused briefly on the mathematical reason for the cylinder’s larger volume (the height was the same in all three shapes, but the areas of the bases differed) and then covered topics such as when to use the task during the school year, how long it should take to enact, and other desirable features of the task.
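For readers curious about the arithmetic the groups never reached: with a fixed perimeter, the circle encloses the largest area, so the cylinder has the largest base and therefore the largest volume. A quick sketch, assuming the cards are rolled so the 8-inch edge forms the perimeter of the base and the 5-inch edge the height:

```python
import math

# Each 5x8 index card is taped along its 5-inch ends, so the
# 8-inch edge becomes the perimeter of the base and the height
# is 5 inches for all three shapes.
perimeter = 8.0
height = 5.0

# Circular cylinder: circumference = 2*pi*r
r = perimeter / (2 * math.pi)
cylinder_base = math.pi * r ** 2                      # about 5.09 sq in

# Square prism: four equal sides
side = perimeter / 4
square_base = side ** 2                               # 4.00 sq in

# Equilateral triangular prism: three equal sides
tri_side = perimeter / 3
triangle_base = (math.sqrt(3) / 4) * tri_side ** 2    # about 3.08 sq in

for name, base in [("circular cylinder", cylinder_base),
                   ("square prism", square_base),
                   ("triangular prism", triangle_base)]:
    print(f"{name}: base {base:.2f} sq in, volume {base * height:.2f} cu in")
```

Same perimeter, same height, yet the cylinder holds roughly a quarter more rice than the square prism – which is why the teachers were surprised.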
Like so much teacher education, these programmes embraced discussion. Teachers can discuss student learning until the cows come home, with insight and erudition, but it’s a waste of time unless they practise behaving differently in the classroom: insight without action is indulgence. (This point was made by just one of these studies (Jacob et al., 2017).) All teacher training should be practice-based.
4) No matter how strong your external training is, culture is stronger
Each programme included a summer institute of a week or so, followed by interventions in schools of four to six days in a year. While external expertise can be a powerful challenge to existing ideas (Teacher Development Trust, 2015), it’s telling that some of these studies randomised at teacher level (within schools) and ruled out spillover: which is to say, they concluded that the training some teachers received would not affect other teachers teaching the same grades in the same school (Jacob et al., 2017). This makes the intervention look particularly shallow: it is in the school or the department that sustained and sustainable improvement needs to occur. Similarly, placing most of the training before the school year is administratively convenient but limits the ability of participants to learn, apply their learning and then return to learn more.
5) Design your CPD around reality in schools
The value of focusing on the school is reaffirmed by the massive turnover of teachers these programmes suffered. One reason they struggled to demonstrate an impact was their ‘intent to treat’ approach: if a school, or teacher, was included in the trial at the start, their results counted at the end – even if a teacher missed every training session. Turnover was dramatic: only 57 of 105 teachers remained in Year 3 in one study (Jacob et al., 2017); only 23 of 45 maths teachers remained in Year 2 in another (Garet et al., 2011). Yet such turnover is not unusual, and while we may wish politicians took this more seriously, we need to design professional development to take account of it: probably by focusing on the department and revisiting key ideas each year. (Jacob et al. (2017) also suggest that planning to ensure leadership support is sustained is similarly important.)
6) Evaluate properly
We know very little about what’s working: a recent review found that only 9 of 1,343 studies of professional development used randomised-controlled trials or quasi-experimental designs (Garet et al., 2011). The startling thing about these studies is that there is a good deal of plausible professional development which isn’t working: we need to investigate this more closely; randomisation should be the rule, not the exception.
Conclusion
Teachers: don’t say ‘No’ to professional development, but do examine what’s being offered very carefully before investing in it.
Teacher educators: incorporate (or improve upon) the points above.
Wow. This is important stuff. I feel that a move towards fortnightly or even weekly sessions run in departments is the way to go. I wonder if we should still distinguish between ITE/ITT and ongoing CPD. Is there evidence that the foundational knowledge gained in ITE is valuable? Does this need to lead to improved test scores to count as being important? Everything I’ve learned supports the idea that internal microcultures are the key element in whether standards rise. That rings very true. This is so interesting. Thanks.
Very useful post with links to follow up later. Do you think CPD focussed on teachers’ motivational development to support conceptual or subject knowledge development programmes would have an impact? The issue of quality data and study design is of real interest too. A real challenge to address this though. I’m also interested in this idea of micro-cultures. Thanks for the post and comments.
While I think it’s important that teachers do continue to develop their subject knowledge, I’m sceptical about focusing on teachers’ motivation. This is partly because the actions are more important than the motivation, at least to the professional development lead: if teachers are motivated to do something, we will see them doing it, so I’d rather focus on seeing them doing it. And partly because the evidence seems pretty clear that our beliefs change more slowly than our actions: it’s easier to act our way into a new way of thinking than think our way into a new form of action.
Really interesting post, Harry, coming at a very interesting time for me as just today, during a LA inspection visit I was challenged to match student data outcomes to PD input from me and my team. (We resisted, because it feels too much like making up the data.) Your post suggests some reasons why it is so tough to find that link – not least of all because it may simply not be there.
I know the arguments for RCTs, but I do wonder if they are too blunt an instrument with which to measure impact of PD on student outcomes.
Guskey is very fair on this and argues that we can collect evidence of the value of our professional development but are very unlikely to be able to prove its role. So, on the one hand, I think it’s fine for a professional development leader to demonstrate valued outcomes which go beyond positive feedback from sessions but stop short of student results. For researchers and those with the scale and resources however, I do think it’s important we keep testing the value of PD, simply because, if it’s having no impact, we might be better placed spending the money and time on other interventions more useful for students (like tutoring, for example).