A response to this article.
It seems to me that there is something odd about the conclusions drawn from this study, in terms of the usefulness of whole words for learner readers. It states that the brain activity was *different* when subjects were shown (and learned) the words initially, and only *later*, on recognizing the nonsense words, was the brain activity the same as for the real words. The implication is that the real words were also *known* words – though this is not necessarily the same thing.
If the known ‘real’ words had the same effect on the brain as the known ‘nonsense’ words, it seems likely that *unknown* real words would have the same effect on the brain as unknown nonsense words. That is, in both cases readers would be doing something *different* when first encountering a word, compared with what they do when encountering a familiar word.
If the ‘familiar word’ response is some kind of all-at-once processing (which could cover a multitude of complex and parallel processes in practice) and the ‘unfamiliar word’ response is *different* from this, then logically the approach to unfamiliar words must include piecemeal recognition (the most likely being letter-by-letter, or perhaps as clumps of letters, e.g. syllables or graphemes).
This means that the usual approach to unfamiliar words, even for experienced adult readers, is to break the word into constituent parts. This, in turn, makes it rather bizarre to recommend that people who find reading difficult should attempt to *bypass* this step. Surely it makes more sense to support them in finding ways to make this step work for them?
It’s equally likely that the recognition of familiar words is, like the recognition of faces used as an analogy in the article, a matter of recognizing the relationship of the constituent parts *to each other* as well as individually, but at great speed. When we see faces, we register the constituent parts of the whole: if the Mona Lisa’s nose were painted out, we would notice. There’s a good summary of the issues in face recognition here: http://web.mit.edu/bcs/sinha/papers/20Results_2005.pdf – especially the evidence for ‘holistic’ recognition (which does not mean simplistic recognition of the whole face as a single object, but something much more complex and subtle). These issues are very useful to think about in relation to word recognition.
It seems that people who suffer from prosopagnosia, or ‘face blindness’, have to use ‘piecemeal’ recognition strategies, and that the disorder may stem from a deeper inability to perceive the visual relationships between the parts and the whole (many sufferers have problems with landscapes and orientation landmarks, too). Learning each face ‘as a whole’ (and how, really, does one do that without elements of piecemeal and relational processing?) is apparently not an option for sufferers.
When thinking about how best to help young beginner readers, Result 17 from the linked survey is very suggestive – if children of 6 years old are still partially in a ‘piecemeal processing’ phase with faces and are therefore far less accurate at recognizing faces than are 10-year-olds, this must have implications for the processing of words when they are learning to read, if the analogy posited in the article stands. That is, recognizing words as wholes is likely to be *harder* for them than piecemeal processing via sounding out letters, and so the best way to help them would be to engage with this developmental stage.