
What's in a name?

Updated: 5 days ago



A fundamental part of what neuropsychologists do is convey important information about cognitive function, based on objective test results. The language we use to do that matters, and it needs to be consistent: if someone obtains a low average score on your assessment, the next assessor should know exactly what that means. One complication, however, is that the descriptors have both evolved over time and can differ according to which test you are using. Many of us find we simply have to select one set of descriptors and use it consistently, and for many that is the Wechsler set.

 

In light of the recent concern about this topic (thanks Kelly B on NpinOz!), I took the opportunity to revisit some of my old test manuals to see how things have evolved. One thing that became very obvious (apart from the fact that I have a lot of them!) is that there has been a real change over time in the language used.

 

My oldest manual is the WAIS-R (published 1981), which classifies scores below the 2nd percentile as being in the “mentally retarded” range, and notes that the equivalent classification in the original WAIS was “mentally defective”. It introduced high average and low average, noting that in the original WAIS these were bright normal and dull normal. The WISC-III in 1991 appeared to attempt a change to the lowest classification, but settled on “intellectually deficient”. Happily, we have moved on since then!

 

The WAIS-III, published in 1997, moved to a set of descriptors more familiar to us today, ranging from very superior to extremely low. The WISC-IV, published in 2004, followed suit, and in 2008 the WAIS-IV retained the same descriptors, which many of us are still using.

 

However, in 2014 the WISC-V heralded a change. It replaced very superior with extremely high, superior became very high, borderline became very low, and the rest remained the same. In 2018 the APS expert committee acknowledged that there was now a difference between the adult and paediatric test descriptors, and recommended that the WISC-V descriptors be adopted widely.

 

Then, in 2020, the AACN published its consensus statement on descriptors. It is a great article that captures the issues with variation in descriptor use, and it reminds us that interpreting scores is different to labelling them, and that simplicity of descriptors can enhance communication. It offered a set of descriptors for normally distributed tests (as well as for non-normally distributed tests and for validity tests). Rather than extremely high it uses exceptionally high (and exceptionally low), and it includes both above average and high average (and, similarly, low average and below average). I personally found it very difficult to explain to consumers the difference between these classifications. [The full set: exceptionally high, above average, high average, average, low average, below average, exceptionally low.]

 

In 2025 the WAIS-V brought with it a new set of descriptors, ranging from extremely high to extremely low. Borderline was replaced by very low, low average became below average, high average became above average, and superior became very high. [The full set: extremely high, very high, above average, average, below average, very low, extremely low.]

 

So no problem, we just update our templates and move on - right?

 

Well, the challenge is this: the AACN and WAIS-5 descriptors are not consistent, and there is real potential for confusion if both are in use. For example, both systems use the term “above average”, but it reflects different classification levels (the 75-90th percentile in one system versus the 91-97th in the other). Similarly, “below average” means the 9-24th percentile in one and the 2-8th in the other.
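To make the clash concrete, here is a minimal sketch in Python, assuming (per the AACN consensus tables) that “above average” spans the 91st-97th percentile and “below average” the 2nd-8th under AACN, while under the WAIS-5 the same labels span the 75th-90th and 9th-24th percentiles respectively. The dictionary names and the `descriptor_for` helper are illustrative only, not from any published tool, and only the two conflicting bands are shown.

```python
# Illustrative, partial descriptor-to-percentile mappings for the two terms
# the two systems share. Ranges are (low, high) percentiles, inclusive.
AACN_2020 = {
    "above average": (91, 97),
    "below average": (2, 8),
}
WAIS_5 = {
    "above average": (75, 90),
    "below average": (9, 24),
}

def descriptor_for(percentile, system):
    """Return the descriptor whose band contains the percentile, if any."""
    for label, (lo, hi) in system.items():
        if lo <= percentile <= hi:
            return label
    return None  # falls in a band not listed in this partial table

print(descriptor_for(80, WAIS_5))     # above average
print(descriptor_for(80, AACN_2020))  # None
```

The same 80th-percentile score is labelled “above average” under one system and falls outside that band entirely under the other — exactly the ambiguity a reader comparing two reports would face.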

 

What is the solution? It is hard to know what is best, as there are valid arguments for many options. However, consistency is important, and it is worth bearing in mind that use of these tests extends beyond neuropsychology.

 

What will you do? Feel free to add your comments – ACNpA is developing a response to this issue.


2 Comments


drleoniesimpson
4 days ago

This is a great summary Deb. Several of us have raised this with Pearson Australia (I can tell "several" from the look of resignation when I mentioned it). As colleagues and Pearson reps pointed out, the Technical Manual does say the descriptors are "only suggestions and are not evidence based".


I think it's important that we do raise it with Pearson repeatedly even if it doesn't have an immediate impact (they aren't going to reprint the manual now). It's good to remind them we have an opinion (on most things) and we'd be happy to share it!


I agree with you Evrim that clearly stating your descriptors in relation to percentiles in your report is the best option currently. Nonetheless,…



evrmart
4 days ago

In the report, provide percentiles for tasks, but only after explaining what each descriptor means (e.g. average = 25th-75th percentiles). This will reduce interpretation errors, improve communication between assessors, and increase report-writing and reading efficiency.

