Neuropsychological Assessment: Past and Future

A special issue of Journal of Clinical Medicine (ISSN 2077-0383). This special issue belongs to the section "Clinical Neurology".

Deadline for manuscript submissions: closed (15 June 2023) | Viewed by 16387

Special Issue Editor


Dr. Grazia Spitoni
Guest Editor
1. Department of Dynamic, Clinical and Health Psychology, Sapienza University of Rome, 00185 Rome, Italy
2. IRCCS Fondazione Santa Lucia, 00179 Rome, Italy
Interests: neuropsychological tests; neuropsychopathology; cognitive neuroscience; cognitive-behavioral psychotherapy

Special Issue Information

Dear Colleagues,

The use of testing in cognitive assessment has historically shaped much of neuropsychology. From the earliest paper-and-pencil tests to recent innovative virtual reality testing, clinical neuropsychology has always sought to standardize measures that best capture the deficits exhibited by neurological and, more recently, neuropsychiatric patients. This monographic issue intends to provide a detailed analysis of the state of the art in neuropsychological assessment, from bedside assessment to the evaluation of cognitively high-performing individuals.

This Special Issue will accept experimental and clinical studies, as well as research on normative populations, with particular attention to measures used in single-case studies. Reviews, systematic reviews, and meta-analyses are also welcome.

Finally, papers on non-traditional assessment settings (e.g., web-based testing and ecological momentary cognitive assessment) and on forensic neuropsychology will be enthusiastically considered.

Dr. Grazia Spitoni
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Clinical Medicine is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cognitive assessment
  • ecological vs. standard tests
  • bedside assessment
  • pencil and paper tests
  • telemedicine and neuropsychological tests
  • ecological momentary assessment (EMA)

Published Papers (5 papers)

Research

16 pages, 822 KiB  
Article
Clinical Assessment of Judgment in Adults and the Elderly: Development and Validation of the Three Domains of Judgment Test—Clinical Version (3DJT-CV)
by Simon-Pierre Bernard-Arevalo, Robert Jr Laforce, Olivier Khayat, Vital Bouchard, Marie-Andrée Bruneau, Sarah Brunelle, Stéphanie Caron, Laury Chamelian, Marise Chénard, Jean-François Côté, Gabrielle Crépeau-Gendron, Marie-Claire Doré, Marie-Pierre Fortin, Nadine Gagnon, Pierre R. Gagnon, Chloé Giroux, Léonie Jean, Geneviève Létourneau, Émilie Marceau, Vincent Moreau, Michèle Morin, Christine Ouellet, Stéphane Poulin, Steve Radermaker, Katerine Rousseau, Catherine Touchette and Alexandre Dumais
J. Clin. Med. 2023, 12(11), 3740; https://doi.org/10.3390/jcm12113740 - 29 May 2023
Viewed by 2147
Abstract
(1) Background: This article discusses the first two phases of development and validation of the Three Domains of Judgment Test (3DJT). This computer-based tool, co-constructed with users and capable of being administered remotely, aims to assess the three main domains of judgment (practical, moral, and social) and learn from the psychometric weaknesses of tests currently used in clinical practice. (2) Method: First, we presented the 3DJT to experts in cognition, who evaluated the tool as a whole as well as the content validity, relevance, and acceptability of 72 scenarios. Second, an improved version was administered to 70 subjects without cognitive impairment to select scenarios with the best psychometric properties in order to build a future clinically short version of the test. (3) Results: Fifty-six scenarios were retained following expert evaluation. Results support the idea that the improved version has good internal consistency, and the concurrent validity primer shows that 3DJT is a good measure of judgment. Furthermore, the improved version was found to have a significant number of scenarios with good psychometric properties to prepare a clinical version of the test. (4) Conclusion: The 3DJT is an interesting alternative tool for assessing judgment. However, more studies are needed for its implementation in a clinical context.

20 pages, 4951 KiB  
Article
Validation of “Neurit.Space”: Three Digital Tests for the Neuropsychological Evaluation of Unilateral Spatial Neglect
by Gemma Massetti, Federica Albini, Carlotta Casati, Carlo Toneatto, Stefano Terruzzi, Roberta Etzi, Luigi Tesio, Alberto Gallace and Giuseppe Vallar
J. Clin. Med. 2023, 12(8), 3042; https://doi.org/10.3390/jcm12083042 - 21 Apr 2023
Cited by 2 | Viewed by 1235
Abstract
Patients suffering from Unilateral Spatial Neglect (USN) fail to pay attention to, respond to, and report sensory events occurring in the contralesional side of space. The traditional neuropsychological assessment of USN is based on paper-and-pencil tests, whose data recording and scoring may be subject to human error. The utilization of technological devices can be expected to improve the assessment of USN. Therefore, we built Neurit.Space, a modified digital version of three paper-and-pencil tests, widely used to detect signs of USN, namely: Bells Cancellation, Line Bisection and Five Elements Drawing Test. Administration and data processing are fully automatic. Twelve right brain-damaged patients (six with and six without USN) and 12 age- and education-balanced healthy participants were enrolled in the study. All participants were administered both the computerized and the paper-and-pencil versions of the tests. The results of this preliminary study showed good sensitivity, specificity, and usability of Neurit.Space, suggesting that these digital tests are a promising tool for the evaluation of USN, both in clinical and in research settings.
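
As a purely illustrative Python sketch of the kind of automatic scoring that digitized neglect tests make possible (this is not the authors' Neurit.Space code; the function names, coordinates, and example values are assumptions), the snippet below scores a hypothetical line-bisection trial and a hypothetical cancellation trial from recorded screen coordinates.

# Illustrative only: generic scoring logic for digitized neglect tests,
# NOT the actual Neurit.Space implementation. Values are hypothetical.

def line_bisection_deviation(mark_x: float, line_left_x: float, line_right_x: float) -> float:
    """Signed deviation of the patient's mark from the true line centre,
    as a percentage of half the line length (positive = rightward shift)."""
    true_centre = (line_left_x + line_right_x) / 2.0
    half_length = (line_right_x - line_left_x) / 2.0
    return 100.0 * (mark_x - true_centre) / half_length

def cancellation_asymmetry(hits_left: int, hits_right: int) -> float:
    """Laterality index for a cancellation task: (right - left) / total hits.
    Values near +1 indicate left-sided omissions, as expected in left USN."""
    total = hits_left + hits_right
    if total == 0:
        return 0.0
    return (hits_right - hits_left) / total

# A mark placed to the right of centre on a line spanning x = 100..700 pixels
print(round(line_bisection_deviation(mark_x=460, line_left_x=100, line_right_x=700), 1))  # 20.0
print(round(cancellation_asymmetry(hits_left=4, hits_right=16), 2))  # 0.6

In a real battery, such trial-level indices would be aggregated across trials and compared against normative cutoffs.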

22 pages, 397 KiB  
Article
Towards the Validation of Executive Functioning Assessments: A Clinical Study
by Daniel Faber, Gerrit M. Grosse, Martin Klietz, Susanne Petri, Philipp Schwenkenbecher, Kurt-Wolfram Sühs and Bruno Kopp
J. Clin. Med. 2022, 11(23), 7138; https://doi.org/10.3390/jcm11237138 - 30 Nov 2022
Cited by 3 | Viewed by 1617
Abstract
Neuropsychological assessment needs a more profound grounding in psychometric theory. Specifically, psychometrically reliable and valid tools are required, both in patient care and in scientific research. The present study examined convergent and discriminant validity of some of the most popular indicators of executive functioning (EF). A sample of 96 neurological inpatients (aged 18–68 years) completed a battery of standardized cognitive tests (Raven's matrices, vocabulary test, Wisconsin Card Sorting Test, verbal fluency test, figural fluency test). Convergent validity was calculated for indicators of intelligence (Raven's matrices, vocabulary test) and for indicators of EF (Wisconsin Card Sorting Test, verbal fluency test, figural fluency test). Discriminant validity of indicators of EF against indicators of intelligence was also calculated. Convergent validity of indicators of intelligence (Raven's matrices, vocabulary test) was good (rxtyt = 0.727; R2 = 0.53). Convergent validity of fluency indicators of EF against executive cognition as indicated by performance on the Wisconsin Card Sorting Test was poor (0.087 ≤ rxtyt ≤ 0.304; 0.008 ≤ R2 ≤ 0.092). Discriminant validity of indicators of EF against indicators of intelligence was good (0.106 ≤ rxtyt ≤ 0.548; 0.011 ≤ R2 ≤ 0.300). Our conclusions from these data are clear-cut: apparently dissimilar indicators of intelligence converge on general intellectual ability. Apparently dissimilar indicators of EF (mental fluency, executive cognition) do not converge on general executive ability. Executive abilities, although non-unitary, can be reasonably well distinguished from intellectual ability. The present data contribute to the hitherto meager evidence base regarding the validity of popular indicators of EF.
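
The statistics labelled rxtyt in the abstract appear to denote correlations between true scores. As a minimal sketch of the standard correction for attenuation that yields such estimates (the study's own analysis pipeline is not reproduced here, and the numbers below are invented), one divides the observed correlation by the square root of the product of the two measures' reliabilities:

import math

def disattenuated_correlation(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation: estimated correlation between
    true scores, given an observed correlation and the two tests' reliabilities."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical values, not those reported in the study:
r_observed = 0.50          # observed correlation between two indicators
rel_a, rel_b = 0.80, 0.85  # assumed reliabilities of the two tests
r_true = disattenuated_correlation(r_observed, rel_a, rel_b)
print(round(r_true, 3), round(r_true ** 2, 3))  # true-score correlation and shared variance

Note that the R2 values in the abstract are simply the squared rxtyt values (for example, 0.727 squared is roughly 0.53), i.e., the proportion of shared true-score variance.
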
15 pages, 1405 KiB  
Article
Regression-Based Normative Data for the Montreal Cognitive Assessment (MoCA) and Its Memory Index Score (MoCA-MIS) for Individuals Aged 18–91
by Roy P. C. Kessels, Nathalie R. de Vent, Carolien J. W. H. Bruijnen, Michelle G. Jansen, Jos F. M. de Jonghe, Boukje A. G. Dijkstra and Joukje M. Oosterman
J. Clin. Med. 2022, 11(14), 4059; https://doi.org/10.3390/jcm11144059 - 13 Jul 2022
Cited by 13 | Viewed by 8565
Abstract
(1) Background: There is a need for a brief assessment of cognitive function, both in patient care and scientific research, for which the Montreal Cognitive Assessment (MoCA) is a psychometrically reliable and valid tool. However, fine-grained normative data allowing for adjustment for age, education, and/or sex are lacking, especially for its Memory Index Score (MIS). (2) Methods: A total of 820 healthy individuals aged 18–91 (366 men) completed the Dutch MoCA (version 7.1), of whom 182 also completed the cued recall and recognition memory subtests enabling calculation of the MIS. Regression-based normative data were computed for the MoCA Total Score and MIS, following the data-handling procedure of the Advanced Neuropsychological Diagnostics Infrastructure (ANDI). (3) Results: Age, education level, and sex were significant predictors of the MoCA Total Score (Conditional R2 = 0.4, Marginal R2 = 0.12, restricted maximum likelihood (REML) criterion at convergence: 3470.1) and MIS (Marginal R2 = 0.14, REML criterion at convergence: 682.8). Percentile distributions are presented that allow for age, education, and sex adjustment of the MoCA Total Score and the MIS. (4) Conclusions: We present normative data covering the full adult life span that can be used to screen for overall cognitive deficits and memory impairment, not only in older people with, or at risk of, neurodegenerative disease, but also in younger individuals with acquired brain injury, neurological disease, or non-neurological medical conditions.
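
To illustrate the general logic of regression-based norming (a deliberately simplified, single-level sketch, not the multilevel ANDI procedure the authors used; the sample, coefficients, and scores below are all hypothetical): a regression fitted on the healthy sample predicts the expected score from age, education, and sex, and an individual's raw score is then converted into a standardized residual and a percentile.

# Simplified, hypothetical sketch of regression-based norming (single-level
# ordinary least squares); NOT the multilevel ANDI procedure used in the study.
import math
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normative sample: age (years), education (years), sex (0/1), total score
n = 500
age = rng.uniform(18, 91, n)
edu = rng.uniform(6, 20, n)
sex = rng.integers(0, 2, n).astype(float)
score = 27.0 - 0.05 * (age - 50) + 0.2 * (edu - 12) + 0.3 * sex + rng.normal(0, 1.5, n)

# Fit the normative regression on the healthy sample
X = np.column_stack([np.ones(n), age, edu, sex])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid_sd = np.std(score - X @ beta, ddof=X.shape[1])

def adjusted_percentile(raw_score, age_i, edu_i, sex_i):
    """Demographically adjusted z-score and percentile for one individual."""
    expected = float(beta @ np.array([1.0, age_i, edu_i, sex_i]))
    z = (raw_score - expected) / resid_sd
    return z, 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z, pct = adjusted_percentile(raw_score=24, age_i=75, edu_i=8, sex_i=0)
print(f"z = {z:.2f}, percentile = {pct:.1f}")

Low percentiles (for example, below the 5th) would then flag performance as unexpectedly poor given the person's demographic background.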

Other

11 pages, 643 KiB  
Systematic Review
Montreal Cognitive Assessment for Evaluating Cognitive Impairment in Subarachnoid Hemorrhage: A Systematic Review
by Amalia Cornea, Mihaela Simu and Elena Cecilia Rosca
J. Clin. Med. 2022, 11(16), 4679; https://doi.org/10.3390/jcm11164679 - 10 Aug 2022
Cited by 5 | Viewed by 1592
Abstract
Subarachnoid hemorrhage (SAH) is a severe condition with high mortality and extensive long-term morbidity. Although research has focused mainly on physical signs and disability for decades, in recent years, it has been increasingly recognized that cognitive and psychological impairments may be present in many patients with SAH, negatively impacting their quality of life. We performed a systematic review aiming to provide a comprehensive report on the diagnostic accuracy of the Montreal Cognitive Assessment (MoCA) test for evaluating the presence of cognitive impairment in patients with SAH. Using appropriate search terms, we searched five databases (PubMed, Scopus, PsycINFO, Web of Science, and Latin American and Caribbean Health Sciences Literature) up to January 2022. Two cross-sectional studies investigated the accuracy of the MoCA in SAH patients in the subacute and chronic phases. We appraised the quality of the included studies using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria. The MoCA test provides information about general cognitive functioning disturbances. However, a lower threshold than the original cutoff might be needed as it improves diagnostic accuracy, lowering the false positive rates. Further research is necessary for an evidence-based decision to use the MoCA in SAH patients.
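
To make the cutoff discussion concrete, here is a hedged sketch of how lowering a screening threshold shifts sensitivity, specificity, and the false positive rate; the scores and impairment labels are invented and are not data from the review.

# Illustrative only: invented scores, not data from the reviewed studies.

def screen_metrics(scores, impaired, cutoff):
    """Classify as 'screen positive' when score < cutoff; return sensitivity,
    specificity, and false positive rate."""
    tp = sum(1 for s, imp in zip(scores, impaired) if imp and s < cutoff)
    fn = sum(1 for s, imp in zip(scores, impaired) if imp and s >= cutoff)
    tn = sum(1 for s, imp in zip(scores, impaired) if not imp and s >= cutoff)
    fp = sum(1 for s, imp in zip(scores, impaired) if not imp and s < cutoff)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, 1.0 - specificity

scores = [20, 22, 23, 24, 25, 25, 26, 27, 28, 29]
impaired = [True, True, True, True, False, False, False, False, False, False]

# The MoCA's commonly used original cutoff treats scores below 26 as impaired;
# a lower threshold reduces false positives at some cost in sensitivity.
for cutoff in (26, 24):
    print(cutoff, [round(v, 2) for v in screen_metrics(scores, impaired, cutoff)])

With these invented numbers, moving the cutoff from 26 to 24 lowers the false positive rate from 0.33 to 0.00 while sensitivity falls from 1.00 to 0.75, which is the trade-off the review discusses.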
