
Seek and Ye Shall Find

Published online by Cambridge University Press: 02 October 2015

Charles E. Lance*
Affiliation:
Organizational Research & Development, Lawrenceville, Georgia, and University of the Western Cape, Republic of South Africa
Duncan J. R. Jackson
Affiliation:
Department of Organizational Psychology, Birkbeck, University of London, and Faculty of Management, University of Johannesburg
*Correspondence concerning this article should be addressed to Charles E. Lance, Organizational Research & Development, LLC, 173 Crystal River Drive, Lawrenceville, GA 30043. E-mail: clancephd@gmail.com

Extract

Being familiar with their earlier work investigating the factor structures of the Armed Services Vocational Aptitude Battery and the Air Force Officer Qualifying Test, we read with interest Ree, Carretta, and Teachout's (2015) proposal to extend the idea of a dominant general factor (DGF) beyond the realm of cognitive abilities to other areas of research and practice in industrial-organizational (I-O) psychology. We found their ideas intriguing and their arguments compelling, but we stumbled on a reference to an article by one of the present authors (Lance, Teachout, & Donnelly, 1992) and Ree et al.'s claim that Lance et al. (1992) had found a DGF that accounted for 59% of the variance in job performance ratings in a military job. They did not. Rather, Lance et al. reported a hierarchical confirmatory factor analysis (CFA) model that parsimoniously explained the correlations among 15 first-order job performance factors in terms of four Job Proficiency and four Measurement Source second-order factors. We reasoned that Ree et al. must have conducted some secondary analysis of the results presented by Lance et al., and indeed we replicated their claim by finding that the first unrotated principal component accounted for 59% of the variance in the correlations among the four Proficiency second-order factors reported in Lance et al.'s Table 6.
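
To make the secondary analysis described above concrete, the following sketch shows how the share of variance attributable to the first unrotated principal component can be computed from a 4 x 4 correlation matrix. It is illustrative only: the correlation values are hypothetical placeholders, not the correlations among the four Proficiency second-order factors published in Lance et al.'s (1992) Table 6, and NumPy is assumed as the computing environment. Substituting the published Table 6 values should reproduce the 59% figure that Ree et al. report.

```python
# Minimal sketch (not the original authors' code): proportion of variance
# captured by the first unrotated principal component of a correlation matrix.
import numpy as np

# Hypothetical 4 x 4 correlation matrix standing in for the correlations among
# the four Proficiency second-order factors (Lance et al., 1992, Table 6).
R = np.array([
    [1.00, 0.55, 0.48, 0.52],
    [0.55, 1.00, 0.50, 0.46],
    [0.48, 0.50, 1.00, 0.44],
    [0.52, 0.46, 0.44, 1.00],
])

# For a correlation matrix, total variance equals its trace (the number of
# variables), and each eigenvalue is the variance of one unrotated principal
# component, so the first component's share is the largest eigenvalue over the trace.
eigenvalues = np.linalg.eigvalsh(R)  # ascending order for a symmetric matrix
first_pc_share = eigenvalues[-1] / eigenvalues.sum()
print(f"First unrotated principal component: {first_pc_share:.1%} of total variance")
```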

Type: Commentaries
Copyright: © Society for Industrial and Organizational Psychology 2015


References

Ackerman, P. L., & Cianciolo, A. T. (2000). Cognitive, perceptual-speed, and psychomotor determinants of individual differences in skill acquisition. Journal of Experimental Psychology: Applied, 6, 259–290.
Ahmetoglu, G., Leutner, F., & Chamorro-Premuzic, T. (2011). EQ-nomics: Understanding the relationship between individual differences in trait emotional intelligence and entrepreneurship. Personality and Individual Differences, 51, 1028–1033.
Bandalos, D. L., & Boehm-Kaufman, M. R. (2009). Four common misconceptions in exploratory factor analysis. In Lance, C. E. & Vandenberg, R. J. (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity and fable in the organizational and social sciences (pp. 61–87). New York, NY: Routledge.
Barbuto, J. E., & Wheeler, D. W. (2006). Scale development and construct clarification of servant leadership. Group & Organization Management, 31, 300–326.
Carretta, T. R., Perry, D. C. Jr., & Ree, M. J. (1996). Prediction of situational awareness in F-15 pilots. International Journal of Aviation Psychology, 6, 21–41.
Carretta, T. R., & Ree, M. J. (1996). Factor structure of the Air Force Officer Qualifying Test: Analysis and comparison. Military Psychology, 8, 29–42.
Carretta, T. R., & Ree, M. J. (1997). Expanding the nexus of cognitive and psychomotor abilities. International Journal of Selection and Assessment, 5, 149–158.
Castro, S. L., Scandura, T. A., & Williams, E. A. (2004). Validity of Scandura and Ragins’ (1993) multidimensional mentoring measure: An evaluation and refinement. Management Faculty and Papers. Paper 7. Retrieved from http://scholarlyrepository.miami.edu/management_articles/7
Chan, K.-Y., & Drasgow, F. (2001). Toward a theory of individual differences and leadership: Understanding the motivation to lead. Journal of Applied Psychology, 86, 481–498.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.
Edwards, B. D., Bell, S. T., Arthur, W., & Decuir, A. D. (2008). Relationships between job satisfaction and task and contextual performance. Applied Psychology: An International Review, 57, 441–465.
Field, A., Miles, J., & Field, Z. (2012). Discovering statistics using R. Thousand Oaks, CA: Sage.
Fiori, M., & Antonakis, J. (2011). The ability model of emotional intelligence: Searching for valid measures. Personality and Individual Differences, 50, 329–334.
Ford, J. K., MacCallum, R. C., & Tait, M. (1986). The application of exploratory factor analysis in applied psychology: A critical review and analysis. Personnel Psychology, 39, 291–314.
Harris, R. J. (1985). A primer of multivariate statistics. Orlando, FL: Academic Press.
Herrnstein, R. J., & Murray, C. (1994). The bell curve. New York, NY: Free Press.
Hoffman, B. J., Kennedy, C., LoPilato, A., Monahan, E., & Lance, C. E. (2015). A review of the content, criterion-related, and construct-related validity of assessment center exercises. Journal of Applied Psychology. Advance online publication. http://dx.doi.org/10.1037/a0038707
Hoffman, B. J., Melchers, K. G., Blair, C. A., Kleinmann, M., & Ladd, R. T. (2011). Exercises and dimensions are the currency of assessment centers. Personnel Psychology, 64, 351–395.
Judge, T. A., Van Vianen, A. E. M., & De Pater, I. E. (2004). Emotional stability, core self-evaluations, and job outcomes: A review of the evidence and an agenda for future research. Human Performance, 17, 325–346.
Krause, D. E., Kersting, M., Heggestad, E. D., & Thornton, G. C. III. (2006). Incremental validity of assessment center ratings over cognitive ability tests: A study at the executive management level. International Journal of Selection and Assessment, 14, 360–371.
Lance, C. E., Teachout, M. S., & Donnelly, T. M. (1992). Specification of the criterion construct space: An application of hierarchical confirmatory factor analysis. Journal of Applied Psychology, 77, 437–452.
Liden, R. C., & Maslyn, J. M. (1998). Multidimensionality of leader–member exchange: An empirical assessment through scale development. Journal of Management, 24, 43–72.
Mardia, K. V., Kent, J. T., & Bibby, J. M. (1979). Multivariate analysis. New York, NY: Academic Press.
Meriac, J. P., Hoffman, B. J., & Woehr, D. J. (2014). A conceptual and empirical review of the structure of assessment center dimensions. Journal of Management, 40, 1269–1296.
Musek, J. (2007). A general factor of personality: Evidence for the big one in the five factor model. Journal of Research in Personality, 41, 1213–1233.
Parker, J., Keefer, K., & Wood, L. (2011). Toward a brief multidimensional assessment of emotional intelligence: Properties of the Emotional Quotient Inventory–Short Form. Psychological Assessment, 23, 762–777.
Putka, D. J., & Hoffman, B. J. (2013). Clarifying the contribution of assessee, dimension, exercise, and assessor-related effects to reliable and unreliable variance in assessment center ratings. Journal of Applied Psychology, 98, 114–133.
Rahim, M. A., & Magner, N. R. (1995). Confirmatory factor analysis of the styles of handling interpersonal conflict: First-order factor model and its invariance across groups. Journal of Applied Psychology, 80, 122–132.
Ree, M. J., Carretta, T. R., & Teachout, M. S. (2015). Pervasiveness of dominant general factors in organizational measurement. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8(3), 409–427.
Ree, M. J., & Earles, J. A. (1991). Predicting training success: Not much more than g. Personnel Psychology, 44, 321–332.
Rushton, J. P., & Irwing, P. (2008). A general factor of personality (GFP) from two meta-analyses of the Big Five: Digman (1997) and Mount, Barrick, Scullen, and Rounds (2005). Personality and Individual Differences, 45, 679–683.
Stauffer, J. M., Ree, M. J., & Carretta, T. R. (1996). Cognitive components tests are not much more than g: An extension of Kyllonen's analyses. Journal of General Psychology, 123, 193–205.
Stevens, J. (2002). Applied multivariate statistics for the social sciences (4th ed.). Mahwah, NJ: Erlbaum.
Vandenberg, R. J., & Grelle, D. M. (2009). Alternative model specifications in structural equation modeling. In Lance, C. E. & Vandenberg, R. J. (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity, and fable in the organizational and social sciences (pp. 165–191). New York, NY: Routledge.
Viswesvaran, C., Schmidt, F. L., & Ones, D. S. (2005). Is there a general factor in ratings of job performance? A meta-analysis framework for disentangling substantive and error influences. Journal of Applied Psychology, 90, 108–131.