
Examining the range of normal intraindividual variability in neuropsychological test performance

Published online by Cambridge University Press: 27 August 2003

David J. Schretlen*
Affiliation:
Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland
Cynthia A. Munro
Affiliation:
Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland
James C. Anthony
Affiliation:
Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland; Department of Mental Hygiene, Johns Hopkins University School of Public Health, Baltimore, Maryland
Godfrey D. Pearlson
Affiliation:
Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland; Department of Mental Hygiene, Johns Hopkins University School of Public Health, Baltimore, Maryland
*Reprint requests to: David J. Schretlen, Ph.D., Johns Hopkins Hospital, 600 N. Wolfe St., Meyer 218, Baltimore, MD 21287-7218. E-mail: dschret@jhmi.edu

Abstract

Neuropsychologists often diagnose cerebral dysfunction based, in part, on marked variation in an individual's cognitive test performance. However, little is known about what constitutes the normal range of intraindividual variation. In this study, after excluding 54 individuals with significant health problems, we derived 32 z-transformed scores from 15 tests administered to 197 adult participants in a study of normal aging. The difference between each person's highest and lowest scores was computed to assess his or her maximum discrepancy (MD). The resulting MD values ranged from 1.6 to 6.1, meaning that the smallest MD shown by any person was 1.6 standard deviations (SDs) and the largest was 6.1 SDs. Sixty-six percent of participants produced MD values that exceeded 3 SDs. Eliminating each person's highest and lowest test scores decreased their MDs, but 27% of the participants still produced MD values exceeding 3. Although MD values appeared to increase with age, adjusting test scores for age, which is standard in clinical practice, did not correct for this. These data reveal that marked intraindividual variability is very common in normal adults, and underscore the need to base diagnostic inferences on clinically recognizable patterns rather than psychometric variability alone. (JINS, 2003, 9, 864–870.)
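The MD statistic described above is straightforward to compute: for each examinee, take the difference between the highest and lowest of their z-transformed test scores, and optionally a trimmed version that first discards each person's single highest and lowest scores. The sketch below illustrates the computation on simulated data; the sample dimensions (197 people, 32 scores) follow the abstract, but the independent standard-normal scores are a simplifying assumption, since real battery scores are intercorrelated, so the simulated base rates will not match the study's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 197 examinees x 32 z-scores, drawn as independent
# standard normals (a simplification; real test scores are correlated).
z = rng.standard_normal((197, 32))

# Maximum discrepancy (MD): highest minus lowest z-score per person.
md = z.max(axis=1) - z.min(axis=1)

# Trimmed MD: drop each person's single highest and lowest scores,
# then take the range of what remains.
z_sorted = np.sort(z, axis=1)
md_trimmed = z_sorted[:, -2] - z_sorted[:, 1]

# Base rate of MD values exceeding 3 SDs, as reported in the abstract.
frac_over_3 = (md > 3).mean()
print(f"MD range: {md.min():.1f} to {md.max():.1f}; "
      f"fraction > 3 SDs: {frac_over_3:.2f}")
```

Because trimming removes the two most extreme scores, the trimmed MD can never exceed the untrimmed MD for the same person, which mirrors the abstract's finding that trimming reduces but does not eliminate large discrepancies.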

Type: Research Article
Copyright: © The International Neuropsychological Society 2003

