Assessment of Human Intelligence—State of the Art in the 2020s

A special issue of Journal of Intelligence (ISSN 2079-3200).

Deadline for manuscript submissions: closed (15 March 2023) | Viewed by 39806

Special Issue Editors


Dr. Alan S. Kaufman
Guest Editor
Child Study Center, Yale University, School of Medicine, New Haven, CT 06520, USA
Interests: individual differences; changes in cognitive development across the lifespan; toddler and preschool development; neuropsychological assessment; psychometrics and test development; theories of intelligence

Dr. Jacqueline M. Caemmerer
Guest Editor
Neag School of Education, University of Connecticut, Storrs, CT 06269, USA
Interests: psychological assessment validity issues; relations between social variables and academic achievement; application of structural equation modeling and longitudinal analysis

Prof. Dr. Melissa A. Bray
Guest Editor
Educational Psychology Department, Neag School of Education, University of Connecticut, Storrs, CT 06269, USA
Interests: mind-body health; integrated behavioral health care; video self-modeling and virtual reality; stuttering; selective mutism

Dr. Johanna DeLeyer-Tiarks
Guest Editor
Derner School of Psychology, Adelphi University, Garden City, NY 11530, USA
Interests: self-modeling; technology; mind-body health; LGBTQ

Special Issue Information

Dear Colleagues,

Contemporary IQ testing in the United States—a century after Lewis Terman published the Stanford–Binet in 1916—has evolved in ways that even David Wechsler could not have envisioned. Several of the early pioneers in the 1910s, including Terman, were fans of eugenics, and some were decidedly despicable. Henry Goddard of the Vineland Training School (and author of the Goddard–Binet) was a pioneer in the fields of clinical psychology and special education. He was also a strong proponent of eugenics, segregation, and racial inferiority; he argued vehemently that four out of five Jewish, Hungarian, Italian, and Russian immigrants were “feebleminded.” He even coined the term moron to give these immigrants a label!

Furthermore, all of these American pioneers were devotees of Spearman’s theory and had absolute certainty that global intelligence was not only unitary but was “fixed” at birth (Alfred Binet and Théodore Simon, who developed the original Binet in Paris in 1905, by contrast, believed neither in g nor in fixed intelligence). IQ tests were psychometric instruments whose only claim to fame was to yield a single score; the main voice on how to interpret the Stanford–Binet IQ was Terman’s personal statistician, Quinn McNemar.

Wechsler gave IQ testing a new spin when he published the Wechsler–Bellevue in 1939. He offered Verbal, Performance, and Full-Scale IQs, along with reliable scores on nearly a dozen subtests. He believed IQ to be an aspect of personality and transformed psychometric testing into clinical assessment. The Stanford–Binet reigned supreme in the 1940s and 1950s, but the emergence of the burgeoning fields of learning disabilities and neuropsychology in the 1960s put Wechsler’s scales—by then the WISC and WAIS—in the driver’s seat. These new fields needed profiles of cognitive strengths and weaknesses to diagnose and treat specific learning disabilities and neurological dysfunction. Wechsler’s scales filled this need.

Yet, into the 1980s, the thousands of research studies on learning and intelligence failed to make a dent in the field of IQ testing; neither did the dozen or so major theories of learning, cognitive development, or intelligence. The latest innovations in psychometric theory, such as Rasch latent-trait modeling, were immediately put into the IQ test mix, but not the theories that defined the constructs these tests were supposed to measure. The 1980s saw movement in that direction with the publication of the Kaufman Assessment Battery for Children (K-ABC), the Stanford–Binet: Fourth Edition, and the Woodcock–Johnson—Revised (WJ-R). However, it was not until the 1990s, 2000s, and 2010s that the theoretical framework of the underlying constructs began to match the sophistication of the statistical theories.

The growth and acceleration of theory-based test development and interpretation have been geometric during the past three decades and continue their ascent. Diagnoses of specific learning disabilities, neurological disorders, intellectual disabilities, ADHD, autism spectrum disorders, giftedness, and the like—and the instruments used to make these diagnoses for bilingual populations, people of color, individuals with sensory or motor disabilities, and mainstream children and adults—are built upon a strong foundation of theory and empirical research. The goal of every clinical, neuropsychological, and psycho-educational evaluation is to make a difference in the lives of the children and adults referred for evaluation by translating test results and clinical observations into action, often in the form of educational interventions.

This Special Issue is devoted to the assessment of human intelligence in the present day. Terman’s psychometric testing gave way to Wechsler’s clinical assessment in the 1960s. Wechsler’s scales never lost their popularity, but his clinical approach was supplanted in the 1990s by sophisticated theory-based test development and cutting-edge theory-driven test interpretation.

The major aim of this Special Issue is to define the breadth and scope of cognitive assessment in the 2020s, from infancy to adulthood, as a testament to how far the field has advanced in the century since IQ testing was synonymous with g and its pioneers included some bigots. This issue attests to the array of instruments that have joined Wechsler’s and Terman’s scales in the clinician’s toolbox and to the diversity of theories that have shaped cognitive assessment over the past generation and continue to be refined and redefined, including the innovative research that continually sets the direction for future generations. Even the pandemic has changed the way we assess children and adults. Furthermore, contemporary society in the U.S. has had no less of an impact on clinical research and practice than the latest theories or statistical procedures.

Just as society and the scholars in the field have moved on from eugenics, g theory, and fixed intelligence, the landscape of modern assessment has shifted toward a framework that is more equitable and socially just, in concert with a society that increasingly emphasizes the importance of diverse viewpoints. Assessment-related legislative mandates tend to echo this push for diversity. In essence, test theory, test development, empirical research, and clinical practice must be equitable in order to remain strong over time.

We invited a handful of articles from giants in the field, namely the theorists, researchers, and test developers who transformed cognitive assessment and test interpretation into the dynamic, influential field that it is today. We also asked a few rising stars to contribute articles.

We are eager to solicit articles from other professionals who have special interest and expertise in cognitive assessment. These articles should be consistent with the goals and themes of the Special Issue—most notably equitable assessment built on a top-notch, research-based, and theory-driven foundation—and may run the gamut from the clinical to the theoretical, statistical, and practical.

Examples of topics that would be welcome include:

  • non-discriminatory assessment;
  • diagnosis and treatment of learning disabilities;
  • forensic use of cognitive testing in capital punishment cases;
  • tablet-based testing;
  • remote assessment;
  • technological advances in assessment;
  • advances in school neuropsychological assessment;
  • latest research on working memory; 
  • clinical assessment of the elderly; 
  • cognitive referencing relative to the diagnosis and treatment of language disorders;
  • any relevant topic in which you have particular expertise.

Please note that the “Planned Papers” Section on the webpage does not imply that these papers will eventually be accepted; all manuscripts will be subject to the journal’s normal and rigorous peer review process.

Dr. Alan S. Kaufman
Dr. Jacqueline M. Caemmerer
Prof. Dr. Melissa A. Bray
Dr. Johanna DeLeyer-Tiarks
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Intelligence is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • IQ testing
  • cognitive assessment
  • Wechsler scales
  • intelligence theories
  • theory-based test interpretation
  • clinical assessment
  • Cattell–Horn–Carroll (CHC) theory

Published Papers (12 papers)


Research


22 pages, 784 KiB  
Article
Does the Degree of Prematurity Relate to the Bayley-4 Scores Earned by Matched Samples of Infants and Toddlers across the Cognitive, Language, and Motor Domains?
by Emily L. Winter, Jacqueline M. Caemmerer, Sierra M. Trudel, Johanna deLeyer-Tiarks, Melissa A. Bray, Brittany A. Dale and Alan S. Kaufman
J. Intell. 2023, 11(11), 213; https://doi.org/10.3390/jintelligence11110213 - 08 Nov 2023
Viewed by 1628
Abstract
The literature on children born prematurely has consistently shown that full-term babies outperform preterm babies by about 12 IQ points, even when tested as adolescents, and this advantage for full-term infants extends to the language and motor domains as well. The results of comprehensive meta-analyses suggest that the degree of prematurity greatly influences later test performance, but these inferences are based on data from an array of separate studies with no control of potential confounding variables such as age. This study analyzed Bayley-4 data for 66 extremely premature infants and toddlers (<32 weeks), 70 moderately premature children (32–36 weeks), and 133 full-term children. All groups were carefully matched on key background variables by the test publisher during the standardization of the Bayley-4. This investigation analyzed data on the five subtests: cognitive, expressive communication, receptive communication, fine motor, and gross motor. A multivariate analysis of covariance (MANCOVA) assessed for group mean differences across the three subsamples, while controlling for the children’s age. Extremely premature children scored significantly lower than moderately premature children on all subtests, and both preterm groups were significantly outscored by the full-term sample across all domains. In each set of comparisons, the cognitive and motor subtests yielded the largest differences, whereas language development, both expressive and receptive, appeared the least impacted by prematurity. A follow-up MANOVA was conducted to examine full-term versus preterm discrepancies on the five subtests for infants (2–17 months) vs. toddlers (18–42 months). For that analysis, the two preterm groups were combined into a single preterm sample, and a significant interaction between the age level and group (full-term vs. preterm) was found. Premature infants scored lower than premature toddlers on the receptive communication, fine motor, and cognitive subtests. Neither expressive communication nor gross motor produced significant discrepancies between age groups. The findings of this study enrich the preterm literature on the degree of prematurity; the age-based interactions have implications for which abilities are most likely to improve as infants grow into toddlerhood.
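
As a rough illustration of the analytic design described above (a MANCOVA comparing gestational-age groups on five subtest scores while controlling for age), here is a minimal Python sketch using statsmodels; the data are synthetic and the column names hypothetical, not the Bayley-4 dataset or the authors' code.

```python
# Minimal MANCOVA sketch with synthetic data (illustrative only).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 90
df = pd.DataFrame({
    "group": rng.choice(["extreme", "moderate", "full_term"], size=n),
    "age_months": rng.uniform(2, 42, size=n),
})
# Synthetic subtest scores with a small injected group effect.
offset = df["group"].map({"extreme": -6.0, "moderate": -3.0, "full_term": 0.0})
for subtest in ["cognitive", "expressive", "receptive", "fine_motor", "gross_motor"]:
    df[subtest] = 100 + offset + 0.1 * df["age_months"] + rng.normal(0, 15, size=n)

# Including the age covariate alongside the group factor makes this a MANCOVA.
mancova = MANOVA.from_formula(
    "cognitive + expressive + receptive + fine_motor + gross_motor"
    " ~ C(group) + age_months",
    data=df,
)
print(mancova.mv_test())  # Wilks' lambda etc. for the age-adjusted group effect
```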

17 pages, 794 KiB  
Article
The Effect of Secondary Education Teachers’ Metacognitive Knowledge and Professional Development on Their Tacit Knowledge Strategies
by Maria Sofologi, Evaggelia Foutsitzi, Aphrodite Papantoniou, Georgios Kougioumtzis, Harilaos Zaragas, Magdalini Tsolaki, Despina Moraitou and Georgia Papantoniou
J. Intell. 2023, 11(9), 179; https://doi.org/10.3390/jintelligence11090179 - 06 Sep 2023
Cited by 1 | Viewed by 1325
Abstract
The present study investigated the pattern of relations among the tacit knowledge of high school teachers, their professional development, and their metacognitive knowledge concerning their teaching practices. Two hundred and seventy-nine secondary school teachers of both sexes, between the ages of 30 and 59 years, with teaching experience of between 1 and 19 years, participated in the study. Teachers’ tacit knowledge was evaluated through the hypothetical scenarios of the Tacit Knowledge Inventory for High School Teachers (TKI-HS), which has been developed for the estimation of teachers’ practical strategies. For the evaluation of teachers’ metacognitive knowledge and professional development, self-report questionnaires were administered to the participants. Path analysis indicated relationships between teachers’ metacognitive knowledge regarding difficulties in classroom management and in the use of modern methods and technologies on the one hand, and the use of certain tacit knowledge strategies on the other. In addition, teachers’ professional development, especially their ability to interact in socially heterogeneous groups, was also found to have an effect on their tacit knowledge strategies.

24 pages, 1215 KiB  
Article
Do Cognitive–Achievement Relations Vary by General Ability Level?
by Daniel B. Hajovsky, Christopher R. Niileksela, Sunny C. Olsen and Morgan K. Sekula
J. Intell. 2023, 11(9), 177; https://doi.org/10.3390/jintelligence11090177 - 04 Sep 2023
Cited by 1 | Viewed by 1391
Abstract
Cognitive–achievement relations research has been instrumental in understanding the development of academic skills and learning difficulties. Most cognitive–achievement relations research has been conducted with large samples and represents average relations across the ability spectrum. A notable gap in the literature is whether these relations vary by cognitive ability levels (IQ). This study examined cognitive–achievement relations across different general ability levels (Low, Average, and High) to fill this gap. Based on Spearman’s Law of Diminishing Returns, it would be expected that general intelligence would be a stronger predictor of academic skills at lower levels of IQ, and more specific abilities would be stronger predictors of academic skills at higher levels of IQ. To test this, multi-group path analysis and structural equation modeling were used to examine whether integrated models of cognitive–reading relations are differentiated by IQ levels in the Woodcock–Johnson III and Woodcock–Johnson IV standardization samples. Global and broad cognitive abilities were used as predictors of basic reading skills and reading comprehension for elementary and secondary school students. The magnitude of prediction differed across ability groups in some cases, but not all. Importantly, the variance explained in basic reading skills and reading comprehension tended to be larger for the Low group compared to the Average and High groups. When variance accounted for by general intelligence was removed from the broad abilities, the effects of the broad abilities were similar across ability groups, but the indirect effects of g were higher for the Low group. Additionally, g had stronger relative effects on reading in the Low group, and broad abilities had stronger relative effects on reading in the Average and High groups. The implications and limitations of this study are discussed.
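
As a deliberately simplified analogue of the question posed here (do cognitive-achievement relations differ across ability levels, per Spearman's Law of Diminishing Returns?), the sketch below compares the variance in a synthetic reading score explained by g within Low, Average, and High strata. It is not the authors' multi-group path analysis or SEM, and the SLODR-like pattern is injected into the data purely for illustration.

```python
# Toy SLODR check: does g explain less reading variance at higher g levels?
import numpy as np

rng = np.random.default_rng(1)
n = 3000
g = rng.normal(100, 15, n)  # general ability on an IQ-style metric
# Synthetic reading score whose dependence on g weakens as g increases,
# injected here so the sketch has a SLODR-like pattern to detect.
slope = np.where(g < 90, 0.8, np.where(g <= 110, 0.5, 0.3))
reading = 100 + slope * (g - 100) + rng.normal(0, 10, n)

for label, mask in [("Low", g < 90),
                    ("Average", (g >= 90) & (g <= 110)),
                    ("High", g > 110)]:
    r = np.corrcoef(g[mask], reading[mask])[0, 1]
    print(f"{label:8s} n={mask.sum():5d}  R^2 of reading on g = {r**2:.2f}")
```

Note that range restriction within each stratum also attenuates these correlations, which is one reason the actual study relied on multi-group path models rather than raw within-group correlations.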

16 pages, 621 KiB  
Article
Measuring Domain-Specific Knowledge: From Bach to Fibonacci
by Marianna Massimilla Rusche and Matthias Ziegler
J. Intell. 2023, 11(3), 47; https://doi.org/10.3390/jintelligence11030047 - 28 Feb 2023
Cited by 1 | Viewed by 1844
Abstract
Along with crystallized intelligence (Gc), domain-specific knowledge (Gkn) is an important ability within the nomological net of acquired knowledge. Although Gkn has been shown to predict important life outcomes, only a few standardized tests measuring Gkn exist, especially for the adult population. Complicating things, Gkn tests from different cultural circles cannot simply be translated, as they need to be culture-specific. Hence, this study aimed to develop a Gkn test culturally sensitive to a German population and to provide initial evidence for the resulting scores’ psychometric quality. Existing Gkn tests often mirror a school curriculum. We aimed to operationalize Gkn not solely based upon a typical curriculum in order to investigate a research question regarding the curriculum dependence of the resulting Gkn structure. A set of newly developed items from a broad range of knowledge categories was presented online to 1450 participants, divided into a high-Gf (fluid intelligence) subsample (n = 415) and an unselected subsample (n = 1035). Results support the notion of a hierarchical model comparable to the one found for curriculum-based test scores, with one factor at the top and three narrower factors below (Humanities, Science, Civics), each of which can be divided into smaller knowledge facets. Besides this initial evidence regarding structural validity, the scale scores’ reliability estimates are reported, and criterion validity-related evidence based on a known-groups design is provided. Results support the psychometric quality of the scores and are discussed.

33 pages, 2229 KiB  
Article
A Psychometric Network Analysis of CHC Intelligence Measures: Implications for Research, Theory, and Interpretation of Broad CHC Scores “Beyond g”
by Kevin S. McGrew, W. Joel Schneider, Scott L. Decker and Okan Bulut
J. Intell. 2023, 11(1), 19; https://doi.org/10.3390/jintelligence11010019 - 16 Jan 2023
Cited by 13 | Viewed by 6685
Abstract
For over a century, the structure of intelligence has been dominated by factor analytic methods that presume tests are indicators of latent entities (e.g., general intelligence or g). Recently, psychometric network methods and theories (e.g., process overlap theory; dynamic mutualism) have provided alternatives to g-centric factor models. However, few studies have investigated contemporary cognitive measures using network methods. We apply a Gaussian graphical network model to the age 9–19 standardization sample of the Woodcock–Johnson Tests of Cognitive Ability—Fourth Edition. Results support the primary broad abilities from the Cattell–Horn–Carroll (CHC) theory and suggest that the working memory–attentional control complex may be central to understanding a CHC network model of intelligence. Supplementary multidimensional scaling analyses indicate the existence of possible higher-order dimensions (PPIK; triadic theory; System I-II cognitive processing) as well as separate learning and retrieval aspects of long-term memory. Overall, the network approach offers a viable alternative to factor models with a g-centric bias (i.e., bifactor models) that have led to erroneous conclusions regarding the utility of broad CHC scores in test interpretation beyond the full-scale IQ, g.
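
For readers new to psychometric network methods, the sketch below fits a regularized Gaussian graphical model and converts its precision (inverse covariance) matrix into partial correlations, whose nonzero off-diagonal entries are the network's edges. The data and subtests are synthetic, not the WJ IV standardization sample or the authors' analysis pipeline.

```python
# Minimal psychometric-network sketch: graphical lasso on synthetic subtests.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(2)
n, p = 500, 6
# Synthetic scores for six hypothetical subtests sharing some common variance.
common = rng.normal(size=(n, 1))
X = 0.6 * common + rng.normal(size=(n, p))

model = GraphicalLassoCV().fit(X)
prec = model.precision_
# Partial correlation between i and j, given all other variables:
# -prec[i, j] / sqrt(prec[i, i] * prec[j, j])
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
print(np.round(partial_corr, 2))  # nonzero off-diagonals are network edges
```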

19 pages, 354 KiB  
Article
Intelligence Process vs. Content and Academic Performance: A Trip through a House of Mirrors
by Phillip L. Ackerman
J. Intell. 2022, 10(4), 128; https://doi.org/10.3390/jintelligence10040128 - 19 Dec 2022
Cited by 1 | Viewed by 2264
Abstract
The main purpose of modern intelligence tests has been to predict individual differences in academic performance, first of children, then adolescents, and later extending to adults. From the earliest Binet–Simon scales to current times, most one-on-one omnibus intelligence assessments include both process subtests (e.g., memory, reasoning) and content subtests (e.g., vocabulary, information). As somewhat parallel developments, intelligence theorists have argued about the primacy of the process components or the content components reflecting intelligence, with many modern researchers proposing that process constructs like working memory are the fundamental determinant of individual differences in intelligence. To address whether there is an adequate basis for re-configuring intelligence assessments from content or mixed content and process measures to all-process measures, the question to be answered in this paper is whether intellectual process assessments are more or less valid predictors of academic success, in comparison to content measures. A brief review of the history of intelligence assessment is provided with respect to these issues, and a number of problems and limitations of process measures are discussed. In the final analysis, there is insufficient justification for using process-only measures to the exclusion of content measures, and the limited data available point to the idea that content-dominated measures are more highly predictive of academic success than are process measures.

Review


25 pages, 1491 KiB  
Review
The Bilingual Is Not Two Monolinguals of the Same Age: Normative Testing Implications for Multilinguals
by Samuel O. Ortiz and Sarah K. Cehelyk
J. Intell. 2024, 12(1), 3; https://doi.org/10.3390/jintelligence12010003 - 31 Dec 2023
Viewed by 1625
Abstract
A fundamental concept in psychological and intelligence testing involves the assumption of comparability in which performance on a test is compared to a normative standard derived from prior testing on individuals who are comparable to the examinee. When evaluating cognitive abilities, the primary variable used for establishing comparability and, in turn, validity is age, given that intellectual abilities develop largely as a function of general physical growth and neuromaturation. When an individual has been raised only in the language of the test, language development is effectively controlled by age. For example, when measuring vocabulary, a 12-year-old will be compared only to other 12-year-olds, all of whom have been learning the language of the test for approximately 12 years—hence, they remain comparable. The same cannot be said when measuring the same or other abilities in a 12-year-old who has been raised only in a different language or raised partly with a different language and partly with the language of the test. In such cases, a 12-year-old may have been learning the language of the test at some point shortly after birth, or they might have just begun learning the language a week ago. Their respective development in the language of the test thus varies considerably, and it can no longer be assumed that they are comparable in this respect to others simply because they are of the same age. Psychologists noted early on that language differences could affect test performance, but it was viewed mostly as an issue regarding basic comprehension. Early efforts were made to address this issue, which typically involved simplification of the instructions or reliance on mostly nonverbal methods of administration and measurement. Other procedures that followed included working around language via test modifications or alterations (e.g., use of an interpreter), testing in the dominant language, or use of tests translated into other languages. None of these approaches, however, have succeeded in establishing validity and fairness in the testing of multilinguals, primarily because they fail to recognize that language difference is not the same as language development, much like cultural difference is not the same as acquisition of acculturative knowledge. Current research demonstrates that the test performance of multilinguals is moderated primarily by the amount of exposure to and development in the language of the test. Moreover, language development, specifically receptive vocabulary, accounts for more variance in test performance than age or any other variable. There is further evidence that when the influence of differential language development is examined and controlled, historical attributions to race-based performance disappear. Advances in fairness in the testing of multilinguals rest on true peer comparisons that control for differences in language development within and among multilinguals. The BESA and the Ortiz PVAT are the only two examples where norms have been created that control for both age and degree of development in the language(s) of the test. Together, they provide a blueprint for future tests and test construction wherein the creation of true peer norms is possible and, when done correctly, exhibits significant influence in equalizing test performance across diverse groups, irrespective of racial/ethnic background or language development. 
Current research demonstrates convincingly that with deliberate and careful attention to differences that exist, not only between monolinguals and multilinguals of the same age but also among multilinguals themselves, tests can be developed to support claims of validity and fairness for use with individuals who were in fact not raised exclusively in the language or the culture of the test.
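
The logic of true-peer norms can be made concrete with a toy example: standard scores computed within joint age-band by language-exposure strata rather than within age bands alone. The bands, data, and scoring metric below are hypothetical and do not reproduce the actual norming procedures of the BESA or the Ortiz PVAT.

```python
# Toy sketch of "true peer" norming: strata defined by age AND language exposure.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 2000
norm_sample = pd.DataFrame({
    "age_band": rng.choice(["6-8", "9-11", "12-14"], size=n),
    "exposure_band": rng.choice(["0-25%", "26-50%", "51-75%", "76-100%"], size=n),
    "raw": rng.normal(40, 8, size=n),
})

# Per-stratum means and SDs define the comparison group ("true peers").
stats = norm_sample.groupby(["age_band", "exposure_band"])["raw"].agg(["mean", "std"])

def standard_score(raw, age_band, exposure_band):
    m, s = stats.loc[(age_band, exposure_band)]
    return 100 + 15 * (raw - m) / s   # IQ-style standard-score metric

# A 12-year-old with roughly 30% lifetime exposure to the test language is
# compared only with other 12-year-olds at a similar exposure level.
print(round(standard_score(46, "12-14", "26-50%")))
```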

17 pages, 384 KiB  
Review
Remote Assessment: Origins, Benefits, and Concerns
by Christy A. Mulligan and Justin L. Ayoub
J. Intell. 2023, 11(6), 114; https://doi.org/10.3390/jintelligence11060114 - 09 Jun 2023
Cited by 1 | Viewed by 2416
Abstract
Although guidelines surrounding COVID-19 have relaxed and school-aged students are no longer required to wear masks and social distance in schools, we have become, as a nation and as a society, more comfortable working from home, learning online, and using technology as a platform to communicate ubiquitously across ecological environments. In the school psychology community, we have also become more familiar with assessing students virtually, but at what cost? While there is research suggesting score equivalency between virtual and in-person assessment, score equivalency alone is not sufficient to validate a measure or an adaptation thereof. Furthermore, the majority of psychological measures on the market are normed for in-person administration. In this paper, we will not only review the pitfalls of reliability and validity but will also unpack the ethics of remote assessment as an equitable practice.
22 pages, 429 KiB  
Review
The Use of Cognitive Tests in the Assessment of Dyslexia
by Nancy Mather and Deborah Schneider
J. Intell. 2023, 11(5), 79; https://doi.org/10.3390/jintelligence11050079 - 26 Apr 2023
Cited by 5 | Viewed by 7257
Abstract
In this literature review, we address the use of cognitive tests, including intelligence tests, in the assessment and diagnosis of dyslexia, from both historic and present-day perspectives. We discuss the role of cognitive tests in the operationalization of the concepts of specificity and unexpectedness, two constructs considered essential to the characterization of dyslexia since the publication of early case reports in the late nineteenth century. We review the advantages and disadvantages of several approaches to the identification of specific learning disabilities that are used in schools. We also discuss contemporary debates around the use of standardized cognitive testing in dyslexia evaluations, in particular, the arguments of those who favor an approach to diagnosis based on prior history and the results of a comprehensive evaluation and those who favor an approach based on an individual’s response to intervention. We attempt to explain both perspectives by examining clinical observations and research findings. We then provide an argument for how cognitive tests can contribute to an accurate and informed diagnosis of dyslexia.

Other


20 pages, 444 KiB  
Essay
Modern Assessments of Intelligence Must Be Fair and Equitable
by LaTasha R. Holden and Gabriel J. Tanenbaum
J. Intell. 2023, 11(6), 126; https://doi.org/10.3390/jintelligence11060126 - 20 Jun 2023
Cited by 3 | Viewed by 5548
Abstract
Historically, assessments of human intelligence have been virtually synonymous with practices that contributed to forms of inequality and injustice. As such, modern considerations for assessing human intelligence must focus on equity and fairness. First, we highlight the array of diversity, equity, and inclusion concerns in assessment practices and discuss strategies for addressing them. Next, we define a modern, non-g, emergent view of intelligence using the process overlap theory and argue for its use in improving equitable practices. We then review the empirical evidence, focusing on sub-measures of g to highlight the utility of non-g, emergent models in promoting equity and fairness. We conclude with suggestions for researchers and practitioners.
19 pages, 839 KiB  
Essay
Within-Individual Variation in Cognitive Performance Is Not Noise: Why and How Cognitive Assessments Should Examine Within-Person Performance
by Arabella Charlotte Vaughan and Damian Patrick Birney
J. Intell. 2023, 11(6), 110; https://doi.org/10.3390/jintelligence11060110 - 02 Jun 2023
Cited by 1 | Viewed by 1614
Abstract
Despite evidence that it exists, short-term within-individual variability in cognitive performance has largely been ignored as a meaningful component of human cognitive ability. In this article, we build a case for why this within-individual variability should not be viewed as mere measurement error and why it should be construed as a meaningful component of an individual’s cognitive abilities. We argue that in a demanding and rapidly changing modern world, between-individual analysis of single-occasion cognitive test scores does not account for the full range of within-individual cognitive performance variation that is implicated in successful typical cognitive performance. We propose that short-term repeated-measures paradigms (e.g., the experience sampling method (ESM)) be used to develop a process account of why individuals with similar cognitive ability scores differ in their actual performance in typical environments. Finally, we outline considerations for researchers when adapting this paradigm for cognitive assessment and present some initial findings from two studies in our lab that piloted the use of ESM to assess within-individual cognitive performance variation.
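
A minimal sketch of the kind of index a short-term repeated-measures paradigm yields: given repeated brief assessments per person, the within-person standard deviation is treated as a score in its own right rather than as error. The data are synthetic, and the design is far simpler than the ESM studies the authors describe.

```python
# Toy within-person variability index from repeated-measures (ESM-style) data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
people, occasions = 50, 20
rows = []
for pid in range(people):
    trait = rng.normal(100, 15)      # stable between-person level
    volatility = rng.uniform(2, 12)  # person-specific variability (the signal)
    for t in range(occasions):
        rows.append({"person": pid, "occasion": t,
                     "score": trait + rng.normal(0, volatility)})
df = pd.DataFrame(rows)

# Each person gets both a level (mean) and a variability (SD) score.
summary = df.groupby("person")["score"].agg(mean_level="mean", variability="std")
# Two people with the same mean level can differ widely in variability.
print(summary.sort_values("variability").round(1).head())
```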

17 pages, 376 KiB  
Concept Paper
The Intelligent Attitude: What Is Missing from Intelligence Tests
by Robert J. Sternberg
J. Intell. 2022, 10(4), 116; https://doi.org/10.3390/jintelligence10040116 - 01 Dec 2022
Cited by 2 | Viewed by 3455
Abstract
Intelligence, like creativity and wisdom, has an attitudinal component as well as an ability-based one. The attitudinal component is at least as important as the ability-based one. Theories of intelligence, in ignoring the attitudinal component of intelligence, have failed to account fully or accurately for why so many people who have relatively high levels of intelligence as an ability fail fully to deploy their ability, especially toward positive ends. The article reviews the need to view intelligence as comprising an attitude as well as an ability, and surveys reasons why people’s lack of an intelligent attitude hinders their deployment of intelligence. Suggestions are made for how things could change in a positive way.