Cognitive Ability Testing in the Workplace: Modern Approaches and Methods

A special issue of Journal of Intelligence (ISSN 2079-3200).

Deadline for manuscript submissions: 31 May 2024

Special Issue Editors


Dr. Charles Scherbaum
Guest Editor
Department of Psychology, Baruch College, City University of New York, New York, NY 10010, USA
Interests: employee selection; assessment; diversity

Prof. Dr. Harold Goldstein
Guest Editor
Department of Psychology, Baruch College, City University of New York, New York, NY 10010, USA
Interests: employee selection; assessment; diversity

Dr. Annie Kato
Guest Editor
School of Business, Government & Economics, Seattle Pacific University, Seattle, WA 98119, USA
Interests: employee selection; assessment; diversity

Yuliya Cheban
Guest Editor Assistant
Department of Psychology, Baruch College & Graduate Center, CUNY, 55 Lexington Ave., Box 8-215, New York, NY 10010, USA
Interests: employee selection; assessment; applicant reactions

Special Issue Information

Dear Colleagues, 

Despite the increasing importance of cognitive abilities in the modern world of work and growing dissatisfaction with the status quo of cognitive ability assessment, the ways in which cognitive abilities are conceptualized and measured in workplace applications have changed very little over the past century (Scherbaum et al., 2012). Many other fields (e.g., clinical and cognitive psychology, developmental and educational research, the neurosciences) have made considerable progress in understanding cognitive ability constructs, their role in the modern world, and how they can be measured (Goldstein et al., 2009; Scherbaum et al., 2015). Additionally, advances in technology have created new possibilities for measuring individual differences. However, these innovations have not substantially influenced the conceptualization and measurement of cognitive abilities in the workplace. As a result, an opportunity to better understand, measure, and use cognitive abilities in the workplace is being missed (Ployhart & Holtz, 2008). The goal of this Special Issue is to feature innovative research that applies modern theories of cognitive ability, analytical approaches, and measurement methods to workplace applications, demonstrating the value of modern thinking and approaches for tackling the so-called validity/diversity dilemma.

Goldstein, Harold W., Charles A. Scherbaum, and Kenneth P. Yusko. (2009). Adverse impact and measuring cognitive ability. In Adverse Impact: Implications for Organizational Staffing and High Stakes Testing. Edited by James Outtz. New York: Psychology Press, pp. 95–134.

Ployhart, Robert E., and Brian C. Holtz. (2008). The diversity–validity dilemma: Strategies for reducing racioethnic and sex subgroup differences and adverse impact in selection. Personnel Psychology 61: 153–172.

Scherbaum, Charles A., Harold W. Goldstein, Kenneth P. Yusko, Rachel Ryan, and Paul J. Hanges. (2012). Intelligence 2.0: Reestablishing a research program on g in I-O psychology. Industrial and Organizational Psychology: Perspectives on Science and Practice 5: 128–148.

Scherbaum, Charles, Harold Goldstein, Rachel Ryan, Paul Agnello, Ken Yusko, and Paul Hanges. (2015). New developments in intelligence theory and assessment: Implications for personnel selection. In Employee Recruitment, Selection, and Assessment: Contemporary Issues for Theory and Practice. Edited by Ioannis Nikolaou and Janneke K. Oostrom. London: Psychology Press-Taylor & Francis, pp. 128–148.

Dr. Charles Scherbaum
Prof. Dr. Harold Goldstein
Dr. Annie Kato
Guest Editors
Yuliya Cheban
Guest Editor Assistant

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Intelligence is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cognitive ability
  • validity/diversity trade-off
  • assessment
  • testing
  • measurement
  • selection
  • industrial-organizational psychology

Published Papers (4 papers)


Research

28 pages, 3032 KiB  
Article
AI for Psychometrics: Validating Machine Learning Models in Measuring Emotional Intelligence with Eye-Tracking Techniques
by Wei Wang, Liat Kofler, Chapman Lindgren, Max Lobel, Amanda Murphy, Qiwen Tong and Kemar Pickering
J. Intell. 2023, 11(9), 170; https://doi.org/10.3390/jintelligence11090170 - 22 Aug 2023
Abstract
Artificial intelligence (AI) is the technology of creating algorithms and computer systems that mimic human cognitive abilities to perform tasks. Many industries are undergoing revolutions due to advances in and applications of AI technology. The current study explored a burgeoning field, Psychometric AI, which integrates AI methodologies and psychological measurement not only to improve measurement accuracy, efficiency, and effectiveness but also to help reduce human bias and increase objectivity in measurement. Specifically, by leveraging unobtrusive eye-tracking sensing techniques and performing 1470 runs with seven different machine-learning (ML) classifiers, the current study systematically examined the efficacy of various ML models in measuring different facets and measures of the emotional intelligence (EI) construct. Our results revealed an average accuracy ranging from 50% to 90%, depending largely on the percentile used to dichotomize the EI scores. More importantly, our study found that the AI algorithms were powerful enough to achieve high accuracy with as little as 5 or even 2 s of eye-tracking data. The research also explored the effects of EI facets/measures on ML measurement accuracy and identified the eye-tracking features most predictive of EI scores. Both theoretical and practical implications are discussed.
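The evaluation setup the abstract describes, dichotomizing a continuous EI score at a chosen percentile and then scoring several ML classifiers on eye-tracking features, can be sketched as follows. This is an illustrative reconstruction, not the study's code: the features, the 60th-percentile cut, and the two classifiers are assumptions standing in for the seven classifiers and hundreds of eye-tracking features the paper used.

```python
# Hypothetical sketch: dichotomize continuous EI scores at a percentile,
# then estimate classifier accuracy on (synthetic) eye-tracking features
# via cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))             # stand-in for eye-tracking features
ei = X[:, 0] * 0.8 + rng.normal(size=200)  # synthetic continuous EI score

cut = np.percentile(ei, 60)                # percentile chosen to dichotomize
y = (ei >= cut).astype(int)                # high vs. low EI

classifiers = {
    "logistic": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```

As the abstract notes, accuracy in such a design depends heavily on where the dichotomizing percentile is placed, since the cut determines both base rates and how many borderline cases sit near the threshold.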

12 pages, 505 KiB  
Article
Person-Centered Study of Cognitive Ability Dimensions Using Latent Profile Analysis
by Jeffrey M. Conte and Rebecca K. Harmata
J. Intell. 2023, 11(5), 80; https://doi.org/10.3390/jintelligence11050080 - 26 Apr 2023
Abstract
A number of researchers have called for additional investigations into cognitive ability and intelligence in recent years. This paper utilized a person-centered approach, multiple cognitive ability dimensions, and latent profile analysis to investigate multivariate relationships among cognitive ability dimensions in a sample of 1681 Army recruits. Six cognitive ability dimensions were assessed via the Armed Services Vocational Aptitude Battery. Performance measures were obtained from supervisor ratings of Effort, Discipline, and Peer Leadership. Using latent profile analysis, the results identified five distinct cognitive profiles, or classes, which differed significantly across the three types of supervisor ratings.
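The core of the latent profile analysis workflow above, fitting mixture models with increasing numbers of profiles and comparing them on an information criterion, can be sketched with scikit-learn. This is an analogy only: LPA is typically fit in specialized software (e.g., Mplus or the tidyLPA R package), and a Gaussian mixture with diagonal covariances is the closest scikit-learn equivalent. The data here are synthetic stand-ins, not the ASVAB sample from the study.

```python
# Illustrative LPA-style workflow: fit Gaussian mixtures with 1-4 profiles
# on six synthetic ability dimensions and pick the solution with lowest BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two synthetic "profiles": lower and higher scorers across 6 dimensions
low = rng.normal(loc=-1.0, size=(300, 6))
high = rng.normal(loc=1.0, size=(300, 6))
scores = np.vstack([low, high])

bics = {}
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         random_state=0).fit(scores)
    bics[k] = gm.bic(scores)

best_k = min(bics, key=bics.get)   # lowest BIC = preferred profile count
print("BIC by profile count:", bics, "-> best:", best_k)
```

In an applied study like this one, the retained profiles would then be compared on external criteria (here, the three supervisor-rating dimensions), typically via the BCH or similar auxiliary-variable procedures.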

15 pages, 529 KiB  
Article
Reducing Black–White Racial Differences on Intelligence Tests Used in Hiring for Public Safety Jobs
by Harold W. Goldstein, Kenneth P. Yusko, Charles A. Scherbaum and Elliott C. Larson
J. Intell. 2023, 11(4), 62; https://doi.org/10.3390/jintelligence11040062 - 28 Mar 2023
Abstract
This paper explores whether a diversity and inclusion strategy focused on using modern intelligence tests can assist public safety organizations in hiring a talented, diverse workforce. Doing so may offer strategies for mitigating the issues of systemic racism with which these occupations have historically struggled. Past meta-analytic research shows that traditional forms of intelligence tests, which are often used in this sector, have not consistently demonstrated predictive validity but have negatively impacted Black candidates. As an alternative, we examine a modern intelligence test that consists of novel, unfamiliar cognitive problems that test takers must solve without relying on their prior experience. Across six studies of varying public safety jobs (e.g., police officer, firefighter) in different organizations, we found a pattern of results that supports the criterion-related validity of the modern intelligence test. In addition to consistently predicting job performance and training success, the modern intelligence test also substantially mitigated the observed Black–White group differences. The implications of these findings are discussed in terms of how to alter the legacy of the I/O psychology and human resource fields when it comes to our impact on facilitating employment opportunities for Black citizens, particularly in public safety positions.

21 pages, 752 KiB  
Article
Exploring the Relationship between Cognitive Ability Tilt and Job Performance
by Anne E. Kato and Charles A. Scherbaum
J. Intell. 2023, 11(3), 44; https://doi.org/10.3390/jintelligence11030044 - 23 Feb 2023
Abstract
Most of the work examining the relationship between intelligence and job performance has conceptualized intelligence as g. Recent findings, however, support the claim that more specific factors of intelligence contribute to the prediction of job performance. The present study builds upon prior work on specific cognitive abilities by investigating the relationship between ability tilt, a measure representing the differential strength between two specific abilities, and job performance. It was hypothesized that ability tilt would relate differentially to job performance based on whether the tilt matched the ability requirements of the job, and that ability tilt would provide incremental validity over g and specific abilities for predicting performance when the tilt matched job requirements. Hypotheses were tested using a large sample from the General Aptitude Test Battery (GATB) database. Ability tilt related to job performance in the expected direction for 27 of the 36 tilt–job combinations examined, with a mean effect size of .04 when the tilt matched job requirements. The mean incremental validities for ability tilt were .007 over g and .003 over g and specific abilities, and, on average, tilt explained 7.1% of the total variance in job performance. The results provide limited evidence that ability tilt may be a useful predictor in addition to ability level and contribute to our understanding of the role of specific abilities in the workplace.

Planned Papers

The list below represents planned manuscripts only. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: Measuring Emotional Intelligence Unobtrusively and Objectively: An Eye-Tracking and Machine Learning Approach

Abstract: Emotional intelligence, one of the second-stratum factors of intelligence, plays a critical role in the workplace, from frontline services to executive leadership. It predicts job performance beyond general mental ability and the Five-Factor personality traits. Yet the measurement of emotional intelligence has long been critiqued for serious unresolved issues, including unreliability and bias inherent in its self-report nature. Leveraging psychophysiology and machine learning models, the current study examined a novel approach to measuring emotional intelligence unobtrusively and objectively. Specifically, we exposed 50 participants to images of (1) four emotional faces (neutral, happy, anger, and fear; randomly arranged) and (2) twelve face-crowds with varying ratios of happy to angry faces. We recorded participants' eye movements with a high-end eye-tracker and processed the eye-tracking data with gazepath technology to extract hundreds of eye-movement features, which were then fed into machine learning models to predict emotional intelligence scores. Our results showed that this approach achieved high predictive accuracy. In addition, we found the approach particularly powerful for measuring two facets of emotional intelligence: self-emotion appraisal and other-emotion appraisal. Theoretical and practical implications are discussed.

Title: A meta-analytic investigation on the relationship of individual gaming performance and traditional cognitive ability tests

Abstract: Technological advances have opened up new ways of assessing psychological constructs beyond traditional tests. While game-related assessments (GRAs) appear to offer several advantages for practice and research, the question remains whether the results of GRAs are comparable to those of traditional tests. We present a meta-analysis of the intercorrelation between indicators from existing or specially developed GRAs and paper-and-pencil measures of cognitive ability, and test whether the purpose of the game or the level of aggregation of the variables moderates this relationship. We searched several databases and screened the 9,719 records initially identified against our inclusion criteria (e.g., adult, non-clinical samples). This resulted in the identification of at least 32 eligible studies, including over 9,000 participants. Our (preliminary) results from a series of three-stage mixed-effects meta-analyses support a moderate relationship between indicators from GRAs and traditional measures of cognitive ability (r = 0.40; p < 0.001) and indicate substantial heterogeneity of effect sizes within/between the identified studies (0.022/0.042). Our results support the view that indicators from GRAs are indeed related to cognitive ability as measured by traditional tests. However, the moderate overall relationship may also reflect the fact that the two assessment methods tap different aspects of cognitive ability.
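The pooling step of a meta-analysis like the one described above can be sketched in a few lines. The actual study uses three-stage mixed-effects models; the sketch below shows only the simpler random-effects core (Fisher-z transform with a DerSimonian-Laird between-study variance estimate), and the correlations and sample sizes are invented for illustration.

```python
# Minimal random-effects pooling of study correlations via Fisher z
# with a DerSimonian-Laird estimate of between-study variance (tau^2).
import numpy as np

r = np.array([0.35, 0.42, 0.48, 0.31, 0.44])    # study correlations (synthetic)
n = np.array([120, 250, 90, 300, 180])          # study sample sizes (synthetic)

z = np.arctanh(r)                # Fisher z transform of each correlation
v = 1.0 / (n - 3)                # sampling variance of z
w = 1.0 / v                      # fixed-effect (inverse-variance) weights

z_fixed = np.sum(w * z) / np.sum(w)
q = np.sum(w * (z - z_fixed) ** 2)              # heterogeneity statistic Q
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(r) - 1)) / c)         # DerSimonian-Laird tau^2

w_re = 1.0 / (v + tau2)                         # random-effects weights
z_re = np.sum(w_re * z) / np.sum(w_re)
r_pooled = np.tanh(z_re)                        # back-transform to r
print(f"pooled r = {r_pooled:.3f}, tau^2 = {tau2:.4f}")
```

A three-stage model extends this by additionally modeling dependence among multiple effect sizes nested within the same study, which matters here because one game can yield several indicators.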
