[NB One person’s usefully naive question is another person’s irritatingly ignorant one, so apologies if this post falls into the latter category for you. I do just find all this really fascinating, hence the banging on about it….]
I’ve read the responses to my previous post, and some of the other related blog posts as well, and seen some of the discussion on Twitter. I’m still as confused as ever as to where one might draw a distinction between what SSP advocates would consider good practice, and what would be considered good practice by those who dislike the current SSP approach. (NB Near the end of this mammoth post one particular confusion gets partly sorted out by serendipity, but just produces further questions…).
So I’ve put together some more points and questions, some of which may have no simple answer, in the hope that articulating them might clarify what it is that’s confusing and worrying me.
So, firstly, to me it all boils down to the moment a child is ‘decoding’ a word, and the extent to which context and latent vocabulary assists in their recognition of the set of ordered phonemes as having potential to become a ‘word’. So this is what I’m going to focus on here, and then try and articulate how I think that relates to the screening check.
The decoding process
When I’m reading with my children, I’ve observed a process with the following steps:
1. Recognize phonemes (s, oa, ph, etc).
2. Remember sound of phonemes.
3. Sound out phonemes in the order they appear.
4. Listen to the sounds.
5. Chunk the phonemes together.
6. Listen to the sounds.
7. Blend the phonemes closely together.
8. Listen to the sounds.
9. Draw on vocabulary to match blended sounds with potential words.
10. Decide on likely word.
11. Sound the likely word.
12. Use context to make final choice (esp in cases of eg read/read).
Some of these steps may be looped; some of them become a flow and then are almost elided as the children get more fluent with their reading; but the process, in essence, remains the same. When one of the boys has a problem working out a word, they will loop on a particular set of stages once or twice before they ‘get’ it. (I always help them as soon as they want me to, of course.)
There seem to me to be some essential elements to this, the most basic ones being:
Learning/understanding/hearing the relationship between the sounds of the chunked phonemes and the sounds of the blended word.
I’ve made a distinction between ‘sounded phonemes’, ‘chunked phonemes’, the ‘potential word’ and the ‘actual word’ because I think that progression through these stages of blending can stall between any one, or more, of these stages of the blend. Why and how a child moves from one to another is what interests me, especially how it’s possible for a child to go from chunked to potential, or potential to actual, without ever using context or vocab or syntactic memory to help them. (Is this what’s claimed? I’m still not clear…)
Is this all that’s going on?
As experienced readers, our view of the relationship between the chunked phonemes and the blended word is pretty teleological: we know where it’s headed; the chunks stand in easily for us as the conceptual parts of the word. But for a child this is not so clear. The flow of sounds in a word is qualitatively different from those in a set of phonemes chunked together.
I have observed in both of my children a process whereby they learn to ‘tune in’ to the relationships between the chunked phonemes, the blended proto-word, and the actual word itself. They have developed a memory for a conceptual relationship that they did not have before.
The most obvious problems are with extra sounds (eg yod) not represented by the separate phonemes, and with stressing of the wrong part of the word.
1. The word ‘tube’. There’s a great phonics reader called ‘Sue Kangaroo’, much loved, which is all about different ways of representing ‘oo’: ‘ou’ in you, ‘ew’ in Mrs. Drew, ‘ue’ in Sue and glue. But there’s a problem, which the book recognizes tacitly: in many accents, some of these ‘oo’ words are not so simple.
Some accents, especially in the US, would find ‘tube’ simple: it’s ‘t’ + ‘oo’ + ‘b’. But not for me, since my accent adds a ‘y’ sound before ‘ube’: ‘t’ + ‘y’ + ‘oo’ + ‘b’. Where, for the child, does that ‘y’ come from? How do they know ‘tube’ next time around? Similarly with ‘Tuesday’. The learner reader is being asked to do more than recognize, chunk, and blend here. Latent vocabulary, and a more nuanced (is that nooanced or nyooanced?) phonetic awareness, are being brought into play.
Some accents, therefore, will possibly be slightly more advantaged by a phoneme-focussed approach than will others.
2. The word ‘began’. I remember both of my children having trouble with this one. They could sound out the parts of the word – on the surface, it appears phonetically simple – but they simply could not hear the word itself in the chunked phonemes, because in ‘began’, the stress is crucially important to the sound (and in fact to the deep etymological history) of the word itself. ‘BE GAN’ is not the same as ‘beGAN’.
Relevance to the screening check
All this has made me think a lot about the screening check and especially its nonsense words.
If I’ve understood correctly, the idea of the check is to ensure that children have acquired a specific set of technical skills (phoneme recognition, sounding, and blending). It is not about reading for sense.
Oxford Owls says ‘These are words that are phonically decodable but are not actual words with an associated meaning e.g. brip, snorb. Pseudo words are included in the check specifically to assess whether your child can decode a word using phonics skills and not their memory.’
To me this also implies that the use of nonsense words is intended to ensure that the test is not biased in favour of children whose background is rich in English vocabulary. I have seen it discussed in this way, and the use of nonsense words does imply that otherwise some children might get ‘help’ from their vocabulary which others would not, thus skewing the test.
My own experience of observing the recognition-sound-blend process is such that I wonder whether a level playing field can ever be achieved in this way, for a number of reasons to do with cultural capital and what ‘memory’ really means in the context of language.
Cultural capital giving advantage:
1. A child from an English-vocab-rich background will also be from an English-phoneme-rich background. They will have what you might call high ‘phonemic capital’. Therefore, they will know, tacitly, that ‘qu’ is more likely to be followed by ‘-ee’ or ‘-i(e)’ than by ‘uh’, and that ‘quee’ is more likely to be followed by a consonant than another vowel. A child with a poorer or different ‘phonemic capital’ is less likely to have this ‘feel’ for the potential sound of unfamiliar English words.
So the screening check will advantage the already linguistically advantaged children with phonemic capital.
Or it may disadvantage them, if the nonsense words are less sensitive than the child to the language’s more subtle phonetic characteristics (not ‘rules’).
2. There is also the issue of what part the teacher’s accent plays in the process. I have observed that a child often needs to hear and know the phonemes in their own accent in order to hear the word that these sounds are meant to be blended into.
Are children whose teacher’s accent matches their own advantaged by this? How sensitive does the test manager have to be to a child’s accent in order to be sure that the child has sounded something ‘correctly’?
[Edited: section on syllable stress removed because not currently relevant in relation to the screening check – with thanks to Elizabeth Nonweiler for her comments: I’m not a teacher but a parent with a background in English literature/language research, and I’m trying to get my head around how reading is taught, so I’m always very grateful for people pointing out where I’ve got something wrong.]
[Edited again, 20/7/2014: Although EN says that the check does not contain two-syllable words, the government Framework document does in fact allow for this possibility:
…in the phonics screening check, a child working at the minimum expected standard should be able to decode…some items containing 2 syllables.
…The two-syllable words assessed will be real words because of the difficulty of inventing polysyllabic pseudo-words with limited alternative pronunciations that can be scored reliably. This is an issue for two-syllable words because of the effects of stress placement on…
This suggests that even if the check has not so far included such words, it might do in the future, so we should be aware of the pitfalls of syllable stress.]
3. In one list of nonsense words I’ve seen, one of the words (scry) was in fact a real word. Oddly enough, I’ve used the word with my two, because there are scrying spoons in the Ashmolean Museum, which we visit a lot. So the geographical and cultural advantages of being near Oxford and being able to afford the time and travel to get to the Ashmolean, and the fact that I myself know the word ‘scry’ and used it with them, would potentially advantage my children further in a test of ‘nonsense’ words. This is particular to us, but presumably there are other circumstances where this could occur with other words, for instance:
A list of ‘nonsense’ words on flashcards in TES Resources suggests ‘geck’. This is in OED as a real word for a fool. I’ve heard it elsewhere as a variant of ‘gack’, meaning muck. Also suggested are ‘ulf’ (a real Scandi personal name), tox, sug, tren, hain, fress (all also real, if obscure, words). Some potential nonsense words in standard English might have meaning for children with one dialect and not another, eg ‘daps’ would be nonsense to many children but in Somerset is a common word for plimsolls; ‘girt’ in Somerset means big. Suffolk dialect, I learned recently, preserves ‘snew’ as a past tense of ‘to snow’. Other words might exist in a child’s home language, so there is potential for confusion there (and for both advantage or disadvantage).
A child with greater or variant ‘cultural capital’ could be advantaged/disadvantaged in the so-called nonsense words part of the test because credible fully nonsense words are hard to find.
How can a child be tested without using their memory?
Of course, taking the Owls explanation at face value, the idea presumably is that a child is not ‘remembering’ whole words, but is showing that they have the specific transferable technical skill of sounding and blending. I understand the logic of this, know the benefits of these skills, and very much like the way that the sounding/blending-focussed approach helps a child be relatively self-sufficient in working out unfamiliar words. It’s a way that a child can feel ‘ownership’ of their reading at the earliest possible stage. I like that: it seems hugely valuable in encouraging a child’s enthusiasm for reading.
When a child is sounding out phonemes, they are using their memory. When they blend those phonemes into a word, real or not, they are also using their memory. Not just their memory of things learned in phonics lessons, but the kind of deep linguistic memory that comes from their life before and outside school.
All of the points relating to advantage as a result of phonemic and other linguistic ‘capital’ relate to the child’s memory too, since that is where the advantage is stored.
In addition, the child is being asked to use their memory to learn all the separate phonemes, all the ways in which phonemes can shift slightly in sound as they blend into a word, and all the degrees of difference between chunked/blended phonemes and the actual vocabulary word. As their vocabulary increases, they are also increasing their ‘phonemic capital’, that is, their feel for what might come next, also stored in the memory.
So my concerns with the screening check are:
The kind of ‘blind test’ that the check seems to aim for is probably impossible.
The test, and it seems pure SSP, both draw a line between the functions of memory which are acceptable during reading (remembering phonemes, possibly also blend-relationships) and those which are not (syntactic context, sense, whole-word recognition); but neither excludes the use of recall from memory during reading, despite claims to the contrary.
I’m not sure what the full implications are for the validity of the test, or of the purest SSP method, but I do think it’s worth questioning whether the distinctions being made are actually real ones, and if so, whether the lines are being drawn in the most useful place.
While I was writing this, a comment popped up on the previous post in which ‘nemocracy’ talks about:
…deduction from context to support phonic decoding. This is explicitly ‘outlawed’ in SP. In practical terms there is a thin line between using context to decode and using context to understand. I think quite often the mind bounces between the two (using context and using phonics to decode and arrive at understanding – it’s all in the same parcel). SP fans would not argue with the second use but denigrate the first by calling it ‘guessing’.
I like the ‘mind bounces between’ description of what is going on: I’ve answered by describing it as an ‘iterative’ process. Memory and reading are not linear things, and although I tried above to break my children’s activity down into a numbered sequence, I found immediately that there were repeated actions, and also that some of the sub-sequences work on a loop as well. I also found that for words that are not completely simple to sound and blend, the loop includes context.
So I have a question for SSP fans:
Is nemocracy’s characterization of your distinction between ‘using context to decode and using context to understand’ an accurate one as far as you are concerned?
What do you consider to be problematic about a child using context as a prompt to help move from chunked phonemes or potential word to actual word?