
Exploring What the Austin Maze Measures: A Comparison Across Conventional and Computer Versions

Published online by Cambridge University Press:  19 September 2013

Renerus John Stolwyk*, School of Psychology and Psychiatry, Monash University, Melbourne, Australia

Shuzi Lee, School of Psychology and Psychiatry, Monash University, Melbourne, Australia

Adam McKay, School of Psychology and Psychiatry, Monash University, Melbourne, Australia; Monash-Epworth Rehabilitation Research Centre, Melbourne, Australia; Epworth Rehabilitation, Melbourne, Australia

Jennie Louise Ponsford, School of Psychology and Psychiatry, Monash University, Melbourne, Australia; Monash-Epworth Rehabilitation Research Centre, Melbourne, Australia

*Address for correspondence: Dr Rene Stolwyk, School of Psychology and Psychiatry, Building 17, Clayton Campus, Monash University, Melbourne, Victoria, 3800, Australia. E-mail: rene.stolwyk@monash.edu

Abstract

The Austin Maze is a neuropsychological assessment tool used to measure cognitive function. A computerised version of the tool has recently been developed and shown to be equivalent to the conventional version in terms of performance. However, controversy remains regarding which specific cognitive constructs the conventional and computer versions of the Austin Maze actually measure. The aim of this study was to investigate which cognitive constructs are associated with Austin Maze performance and whether these associations remain equivalent across the conventional and computer versions. Sixty-three healthy people completed both the conventional and computerised versions of the Austin Maze, in addition to a number of established measures of planning, error utilisation, working memory, visuospatial ability and visuospatial memory. Results from a series of regression analyses demonstrated that performance on both versions of the Austin Maze was predominantly associated with visuospatial ability and visuospatial memory. No executive measures, including those of planning, error utilisation or working memory, contributed significantly to performance on either version. This study complements previous research and supports the equivalence of the conventional and computer versions of the Austin Maze.

Type: Articles

Copyright © The Author(s), published by Cambridge University Press on behalf of Australian Academic Press Pty Ltd 2013

