On pulling the right lever

Some people may have seen these already, but they’re worth watching and thinking about in relation to the Phonics Screening Check:

Here and here

John Seddon’s definition of ‘deliverology’ – i.e. management by targets – which he gives in the second linked video, is:

‘A top-down method by which you distort a system, undermine achievement of purpose, and demoralise people’.

Both videos are well worth watching and thinking about, and Seddon’s view reflects my own (and I’m sure others’) experience of the distorting effects of targets.

I seem to be back to asking questions again. I’ll start with these:

 

The Check is a big lever to pull, so, given all possible levers, is it the right one?

 

If the system is to work at its best, how does the Check contribute to this?

This is a real question, by the way, not an implication that it makes no contribution.

 

Another is:

How do we know when the system is working ‘at its best’?

Another way of putting this would be:

What are the criteria for judging how near or far the current system is from being the best it could be?

 

Given that some schools are failing to reach the levels of literacy attainment at the end of KS1 that are generally agreed to be necessary, what does the Check contribute to solving this problem?

This is where things get quite interesting, because although the most recent evaluation of the Check says that it seems to have led to increased focus on SSP – including non-words – and that this has improved the recent results for the Check itself, it also says this:

Will/has the introduction of the phonics screening check have/had an impact on the standard of reading and writing?

Exploratory analysis of NPD data suggests that the check provides additional information on pupils’ progress as their literacy skills develop from the end of the Early Years Foundation Stage to their outcomes at the end of key stage 1. Scores on the check tend to be consistent with, but not the same as, other measures of literacy development during these first years of school. Most children who achieve level 2 in reading and writing at key stage 1 have previously met the expected standard on the check, but there is a substantial minority who have not. In addition, initial analysis by multilevel modelling revealed that positive attitudes and practices towards the teaching of systematic synthetic phonics and the value of the check are reflected in higher scores on the check for pupils. In contrast to the phonics scores, there were no significant associations with school typology on the results for children at the end of key stage 1. Thus attainment in reading and writing more broadly appears unaffected by the school’s enthusiasm, or not, for systematic synthetic phonics and the check, and by their approach to the teaching of phonics. (Walker et al. 2014)

The last sentence is the most interesting one, in terms of answering a question about the usefulness of the Check in relation to its ultimate purpose:

Attainment in reading and writing more broadly appears unaffected by the school’s enthusiasm, or not, for systematic synthetic phonics and the check, and by their approach to the teaching of phonics.
Now, this seems to me to connect with the study by Duff et al., which concluded:
Although the check fulfils its aims, we argue that resources might be better focused on training and supporting teachers in their ongoing monitoring of phonics.
Their criteria for saying that the Check ‘fulfils its aims’ are clear in this summary of results:
The phonics screening check correlates strongly with teacher judgements of phonic phases (r = .72) and with standardised measures of reading accuracy (nonword reading, single-word reading and prose reading accuracy, r’s = .75–.83) and spelling (r = .72). It also correlates well with phoneme awareness, prose reading rate and comprehension (r’s = .57–.68). In contrast, there are more moderate correlations between the phonics screening check and vocabulary and maths (r’s  = .45), indicating that the check is more specific to the domain of literacy and does not simply measure general abilities. Thus, the phonics screening check shows convergent and discriminant validity.
That is, children who do well at the Check tend to do well in literacy generally, but may or may not do well in other areas such as maths.
This means that, as a screening check, it does have validity, in spite of its various idiosyncrasies (as discussed elsewhere).
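For anyone less familiar with how this kind of validity claim is read: strong correlations with other literacy measures are what’s meant by convergent validity, while noticeably weaker correlations with non-literacy measures such as maths are what’s meant by discriminant validity. The sketch below is purely illustrative – the data, variable names, and numbers are invented for the purpose of the example and are not figures from Duff et al. – but it shows the shape of the comparison being made.

```python
# Illustrative sketch only: invented data, not the Duff et al. analysis.
# Shows how a convergent vs. discriminant validity pattern appears in correlations.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 300  # hypothetical sample of Year 1 pupils

# A hypothetical underlying literacy ability drives the literacy measures...
literacy = rng.normal(size=n)
check_score = literacy + rng.normal(scale=0.6, size=n)    # phonics check score
word_reading = literacy + rng.normal(scale=0.6, size=n)   # another literacy measure
# ...while maths is only weakly related to that literacy ability.
maths = 0.4 * literacy + rng.normal(scale=1.0, size=n)

# Convergent validity: the check should correlate strongly with literacy measures.
r_convergent, _ = pearsonr(check_score, word_reading)
# Discriminant validity: it should correlate more weakly with non-literacy measures.
r_discriminant, _ = pearsonr(check_score, maths)

print(f"check vs word reading: r = {r_convergent:.2f}")   # comes out around .7
print(f"check vs maths:        r = {r_discriminant:.2f}") # comes out noticeably lower
```

Correlations of the first kind in the .7–.8 range alongside much lower correlations of the second kind are what allow Duff et al. to say the Check measures something specific to literacy rather than general ability.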
Various people, however, have questioned what the point of the Check is if there is no follow-up other than doing ‘more phonics’. Where SSP practice is not good, this approach would seem to me to run the risk of pushing some schools into doing more of the wrong thing, rather than changing and improving their practice.
This post was triggered by a conversation this morning with a Year 1 teacher, who expressed the opinion (I hope I’m paraphrasing fairly) that problems with SSP can often be traced to the implementation, especially in relation to the training and professional development of teachers.
Now, in the second video above, John Seddon notes that in the case of social workers, driven out of a demoralised profession by the burden of paperwork, targets, and lack of professional freedom, the ‘deliverology’ solution is ‘more training’ – rather than, for instance, less paperwork or more professional freedom. So it’s important to be careful, if suggesting training as the solution, that lack of training, and not flaws in the system’s targets, really is the problem.
I’m not sure if the two situations are entirely analogous: I do think that non-ideal practice must lie behind some of the SSP horror stories, if only because people such as Elizabeth Nonweiler and Debbie Hepplewhite, who have contributed so helpfully to the comments section of this blog, are adamant that disconnected, isolated teaching of phonics has no part to play in their view of what SSP should be.
The question is, though:
If the system is flawed, in what way is it flawed?
Schools are a social institution, and their effectiveness depends entirely on the behaviour of the people in them. So:
Does behaviour need to change in terms of the teaching of literacy?
Failure to bring children to accepted levels of literacy suggests that behaviour does need to change in those situations. So:
What is the most effective and helpful way to change behaviour in order to ensure that children are brought to an accepted level of literacy?
High-stakes testing, where teachers and schools are punished for failure to achieve targets, has been the preferred tool of behaviour change in the public sector for the past couple of decades. But, as Seddon warns, if you’re not careful targets will twist your outcomes in all the wrong ways.
My own view is that if a check does not upset children, or take teachers and funds away from other literacy work, and if the results of the check are used to spark positive changes in pedagogical/institutional behaviour, then, well, a valid check is as good a way as any to find where the problems are. But those are some quite big ‘if’s – especially since problems show up in KS1 tests anyway. So:
How effective is the Check in flagging bad SSP practice?
I can see that it would show up failure to sound or blend well; but it doesn’t seem to me to have the facility to show where sounding and blending are disconnected from other literacy work: i.e. the sounding & blending may be OK, but the rest well below par – or, actually, the other way around.
I also think that CPD should really be professional development, not the frontline effects of what Seddon rather cheekily terms ‘Mickey-Mouse command-and-control’. I think, regardless of industry or job, most of us will have seen the negative effects and professional disengagement created by that kind of ‘training’.
My correspondent says that there is not enough focus on teaching teachers to teach reading; she describes ‘a couple of sessions’ in four years of study for a BEd, and also says that the introduction of SSP can amount to teachers being handed the materials for the chosen SSP programme and then told to get on with it. Again, I’m paraphrasing but that does seem to be what she meant.
So, more questions, with apologies for my ignorance:
Is there an agreed standard, during teacher training, for teaching teachers to teach children to read? If so, what is it, and how much time do students spend on learning about reading?
How much do teachers currently learn about SSP during training? In the past, were specific methods discussed during training, or was the approach more theoretical/general?
If you are a teacher who has switched to SSP, how much good-quality, helpful CPD accompanied the change? What, in outline, did this entail? What effects do you feel it had on your ability to use SSP well, and integrate it into the rest of the curriculum?
Or do you feel you were just left to try and get on with it?
Is it fair to suggest that CPD (and by association a school’s institutional, behavioural culture) is often the issue?
(NB I’m not talking about behaviour in terms of school discipline, but in the wider sense of the behaviour of the people who make up the institution as a whole.)