Take a moment to consider this question: if we’re designing a professional development programme, what should it look like?

I’ve asked many teachers this question. Usually, they mention some or all of the following features: professional development should be sustained and collaborative, subject-specific and practice-based, and should be supported by external expertise and teachers’ buy-in.

This reflects a broad consensus among researchers. But in an article recently published in School Effectiveness and School Improvement, Sam Sims and I question the evidence underpinning this consensus and suggest a more promising way to identify the characteristics of effective professional development.

The consensus on effective professional development

Several reviews have described collaboration, subject-specificity, external expertise and so on as important or necessary features of effective professional development. Most notably, these include a pioneering review of professional development (Timperley et al., 2007), an influential review of reviews (Cordingley et al., 2015) and a ‘meta-synthesis’ (Dunst et al., 2015). The reviews are not unanimous, but they overlap enough that many researchers describe a “consensus.” This consensus has influenced professional development policy, design and evaluation.

The problem: if this is what works, why doesn’t it work?

Our curiosity was aroused by studies of professional development programmes which included these features but did not improve student learning. To give one example, Jacob et al. (2017) evaluated the Maths Solutions professional development programme, selecting it:

“because it meets the criteria… of effective professional development program features”

It was designed by external experts to develop teachers’ mathematical knowledge for teaching: groups of teachers worked together, experiencing “active learning” for forty hours a year over three years. However, after three years, neither teachers’ teaching nor student learning had changed. Studies like this (and others, described here) inspired us to investigate the evidence which had informed their design.

Our approach: working backwards

We worked our way back to a foundational review of teacher professional development, which first introduced many of these features: Timperley et al. (2007). It has had a lasting influence: the 2015 review of reviews (Cordingley et al.) described it as the “only fully consistent and rigorous review” influencing their work. Timperley et al. reviewed the available research on professional development programmes; we examined two aspects of their review process in detail: the studies they included, and the way they identified features of effective professional development from them.

Question 1: Are the suggested features based on robust studies?

The studies included in the review did not seem to warrant strong claims about what works in professional development. Of the twelve included, some reported only qualitative findings. Most did not demonstrate that the group receiving professional development was equivalent to the group which did not, which makes it hard to attribute any improvement in student learning to the professional development teachers received. The only experimental study began with just twenty teachers; this had fallen to sixteen by the end of the programme. We were unconvinced that these studies met current standards of evidence for making claims about what works in professional development.

Reviews which were more demanding in their inclusion criteria were less assured in their conclusions. For example, Yoon et al. (2007) included only experimental trials (which offer more robust evidence that the programme influenced participating teachers). However, Yoon et al. concluded that “discerning any pattern in these characteristics and their effects on student achievement is difficult,” a view echoed by other reviews.

Question 2: Was the inference process appropriate?

We were equally unsure about the process by which reviewers identified the features they argued made professional development effective. Several reviews examined studies of professional development and asked ‘What do these studies have in common?’ For example, Timperley et al. (2007) noticed that the apparently successful studies they included had seen teachers working together, and concluded that teacher collaboration was important. Other studies in which teachers had collaborated did not see the same positive effects: Timperley et al. therefore suggested that collaboration was necessary, but not sufficient, for success.

The problem with this approach is that a feature of a professional development programme – like collaboration – may not contribute to its success. For example, we might design a professional development programme in which teachers work together because it’s a cheap way to offer everyone training, or because we want to increase staff cohesion. If the programme works, that doesn’t mean it works because of collaboration (this may be the case, but our programme does not prove it).

An alternative approach: link evidence of impact with evidence of mechanism

An alternative – and more compelling – approach to identifying the features of effective professional development would combine two kinds of evidence: evidence of impact and evidence of mechanism.

For example, My Teaching Partner is an instructional coaching programme in which teachers receive fortnightly feedback on a video of their classroom which they have chosen. Randomised controlled trials in many different contexts have shown that students whose teachers experience My Teaching Partner seem to learn more (see, for example, Allen et al., 2011). That’s evidence of impact.

Evidence of mechanism comes from psychology or behavioural science. For example, we know that lasting change requires the formation of new habits (a mechanism), and that repeating an action in a consistent context helps people to form habits. My Teaching Partner uses this mechanism: teachers review their teaching every fortnight, which seems likely to create a habit of reflecting on their interactions with students. (This could help explain why My Teaching Partner continues to influence teachers after coaching ends.)

This is just an illustration, and more work would be needed to demonstrate that habit formation is a feature of effective professional development. The point is that combining these two kinds of evidence – evidence that something works, and evidence about the mechanism that makes it work – is more compelling than evidence of impact or evidence of mechanism on its own.

Conclusion

If we’re designing a professional development programme, what should it look like?

We cannot be certain that the features we have come to accept as essential are indeed effective: we do not know for sure that professional development should be collaborative, subject-specific, supported by external expertise, and so on. (We are not arguing that these features are undesirable, only that their value has not been demonstrated.)

The reviews we examined sought to offer teachers useful guidance based on the limited sample of studies available at the time. We’re excited about the potential for drawing more robust conclusions from the increasing number of rigorous studies of professional development.

We believe the best way to draw robust conclusions from them is to combine what we know from effective professional development programmes with what we know, from cognitive and behavioural science, about how people learn and change.

If you found this interesting, you may appreciate:

The full paper (paywalled journal article here; open-access version of the accepted manuscript here).

A summary of our draft paper and a critique of it here.

A (dated) take on effective professional development.

References

Allen, J., Pianta, R., Gregory, A., Mikami, A., & Lun, J. (2011). An interaction-based approach to enhancing secondary school instruction and student achievement. Science, 333(6045), 1034–1037.

Cordingley, P., Higgins, S., Greany, T., Buckler, N., Coles-Jordan, D., Crisp, B., Saunders, L., & Coe, R. (2015). Developing great teaching: Lessons from the international reviews into effective professional development. Teacher Development Trust.

Dunst, C. J., Bruder, M. B., & Hamby, D. W. (2015). Metasynthesis of in-service professional development research: Features associated with positive educator and student outcomes. Educational Research and Reviews, 10(12), 1731–1744.

Jacob, R., Hill, H., & Corey, D. (2017). The impact of a professional development program on teachers’ mathematical knowledge for teaching, instruction, and student achievement. Journal of Research on Educational Effectiveness. DOI: 10.1080/19345747.2016.1273411.

Timperley, H., Wilson, A., Barrar, H., & Fung, I. (2007). Teacher professional learning and development: Best evidence synthesis iteration (BES). Wellington, New Zealand: Ministry of Education.

Yoon, K. S., Duncan, T., Lee, S. W.-Y., Scarloss, B., & Shapley, K. (2007). Reviewing the evidence on how teacher professional development affects student achievement (Issues & Answers Report, REL 2007–No. 033). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southwest.