Welcome to the discussion area of the Dyslexia Positive website. The idea is that anyone interested in dyslexia can join in a discussion based on themes initiated by a member of the Dyslexia Positive team. Please participate by commenting on the articles and feel free to ask any questions!

Posted 11th October, 2017 by Sue Partridge

Back in 2012, I was doing a lot of dyslexia assessments and thought I would share some reflections about reading comprehension.  I used WRAT 4 for word recognition and sentence-level comprehension, TOWRE 2 to get insight into processing visual and auditory patterns at speed, and miscue analysis when I wanted something a little more in depth.

I set a “brain teaser” to stimulate discussion about problem-solving with reading assessment results:
“Mary” came out in the average range for word recognition and comprehension on the WRAT 4.  Her score for nonsense words was also just average on the TOWRE (see an earlier post for my views on non-word tests), though her lower score for real words at speed brought her overall word reading efficiency down below average.  She read extended text at 142 words per minute with 98% accuracy, so miscue analysis was not possible, there being so few errors.

The big surprise came when she could only recall 40% of the detail of what she had read.  Even more intriguingly, this score did not improve when I read her an equivalent-level passage for listening comprehension.

I might have gone along with Kate Cain and said she had a specific problem with comprehension, but on reflection I thought…

Well, why don’t I let you think about it and comment back…?  I posted some discussion points and revealed my analysis, but you might want to think about this too, so only “read more” when you have had a think!

Continue reading this article… »


Posted 30th January, 2017 by Sue Partridge

When I was writing the concluding chapters of my Ed D thesis, attempting to answer my research questions, I pondered over what counts as a tangible improvement in word recognition skills.

My research attempted to guide the practitioner on how to measure improvement on an individual basis and in comparison to others in the adult literacy context.

The obvious starting point was a standardised test of word recognition, since any changes in performance can be compared with established norms for learners in the same age bands.  Adult literacy in the UK has rightly sought to avoid the anomaly of improvements being judged against educational grade designators or reading ages with inappropriate ceiling levels.

Using the WRAT 4 word recognition subtest (Wilkinson and Robertson, 2006), in practice none of my 10 learners made an improvement in score that could be considered beyond test error.

A typical confidence interval spans 12 or 13 standard points (at 90% confidence).  Taking the example of an adult aged 35 years using the green form subtest (see page 215 of the test manual), a change of that size would represent an additional 11 words read for someone falling within the average range, but only 5 extra words for someone reading above the mean score of 100.  For an adult of this age it takes a nine-word improvement simply to get out of the lowest band (the 0.1st percentile), assuming they can read the alphabet (which accounts for the first 15 points).  It would take a massive 35-word improvement to take this learner from that baseline to the lowest point of the “average” band (standard score 84, roughly one standard deviation below the mean and the criterion used by examination boards in the UK to decide whether exam concessions are applicable).
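For readers who like to see where a span of 12 or 13 standard points comes from, here is a minimal sketch in Python.  The standard error of measurement (SEM) of 3.8 standard points is an assumed figure, chosen only because it reproduces the span quoted above; the real value for each form and age band comes from the WRAT 4 manual.

```python
# Illustrative sketch only (not taken from the WRAT 4 manual): it shows how a
# 90% confidence interval around a standard score follows from the standard
# error of measurement (SEM). The SEM of 3.8 is an assumed value, chosen
# because it reproduces the 12-13 standard point span discussed above.

Z_90 = 1.645  # z-value enclosing the central 90% of a normal distribution


def confidence_interval_90(standard_score: float, sem: float = 3.8) -> tuple[float, float]:
    """Return the lower and upper bounds of a 90% confidence interval."""
    half_width = Z_90 * sem
    return standard_score - half_width, standard_score + half_width


low, high = confidence_interval_90(100)           # an exactly average standard score
print(f"90% CI: {low:.1f} to {high:.1f}")         # about 93.7 to 106.3
print(f"Span: {high - low:.1f} standard points")  # about 12.5 points
```

The same calculation is why a retest gain has to exceed roughly a dozen standard points before it can be treated as anything more than test error.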

Given the random nature of the word selection in this test (a mixture of phonetically regular and irregular words), and short of teaching to the test, we are no clearer about how many extra words a reader actually has to learn to recognise in order to show suitable progress, let alone what extra word attack skills they need.

What do you think is reasonable progress in developing a learner’s vocabulary to enable them to recognise more words by sight?  How useful do you find WRAT 4 as an assessment tool?

Partridge, S. E. (2012) Unravelling reading: Evaluating the effectiveness of strategies used to support adults’ reading skills, Ed D thesis, Milton Keynes, The Open University.
Wilkinson, G. & Robertson, G. J. (2006) WRAT 4: Wide Range Achievement Test, Professional Manual, Lutz, Florida, Psychological Assessment Resources, Inc.

Leave a comment or ask a question »

Posted 16th January, 2017 by Sue Partridge

Back in 2011, at a meeting of Dyslexia Positive, we discussed the pros and cons of assessing readers when they read silently and when they read aloud.  Clearly these are two different processes.  The former may be the preferred mode of reading for competent readers, but not always for readers with dyslexia, who may like an auditory feedback loop.  Reading aloud requires the additional skill of articulation on top of the reading itself.

The assessment issue comes when you want to measure reading speed and reading comprehension. Reading silently will almost certainly (though not invariably) be faster than reading aloud.  Reading comprehension depends on so much else, but the extra burden on working memory when articulating words to read aloud may skew the score.

Those of us who use the WRAT 4 sentence comprehension sub-test (with all of its flaws) to get a standardised score for reading comprehension will have observed some candidates reading silently and others aloud, with some readers using a mixed strategy. What bearing does this have on the score and its validity?

In an ideal world we would want to assess the reader with equivalent texts both silently and aloud and make a close comparison between the findings for the two.  Even better would be to throw in a third passage to test listening comprehension and try to build up a full profile of the differences in performance.  Against this is the very real threat of test fatigue.

Jocelyn from Dyslexia Positive observed that some readers think they have to read silently, because they have been taught that it is the best way, even though they might not want to and it might not suit them.  Yvonne liked getting the people she assessed to read silently, if they could, as it told her about their potential for effective study.  Melanie used the Adult Reading Test (ART) for assessment, trying to get a sample of both reading aloud and reading silently, but was really concerned about over-testing (the ART is particularly exhaustive and exhausting!).  Clearly you can’t do miscue analysis unless you hear the learner read aloud…

All of this argues for a more extended period of assessment and observation, so as to build up an extensive profile of reading ability without the dangers of test stress.  With reading, it may be important for each learner to develop different strategies depending on whether they want to speed-read silently, read aloud and recite (to their children, or to hear a particular effect, say when appreciating poetry), or read for any other purpose.

This debate on assessment practice for reading is still relevant, although in 2017 there is more pressure to cram even more assessment tests into a diagnosis, and to explore co-occurring conditions as well as dyslexia. Something has to give!

Leave a comment or ask a question »