People Recognition through Face, Voice, Name and Their Interactions

A special issue of Brain Sciences (ISSN 2076-3425). This special issue belongs to the section "Neuropsychology".

Deadline for manuscript submissions: closed (20 September 2023) | Viewed by 17290

Special Issue Editor


Guest Editor
Institute of Neurology, Catholic University of the Sacred Heart, 20123 Rome, Italy
Interests: laterality of emotions; neuropsychology of dementia; unilateral spatial neglect; category-specific semantic disorders; verbal and non-verbal semantic representations; familiar people recognition disorders; anosognosia

Special Issue Information

Dear Colleagues,

Faces, voices and names are the main channels through which we recognize people, whether famous or personally known to us. However, these channels differ both in the modality engaged and in the difficulty of the recognition process. Recognition through the face and voice relies on perceptual modalities, whereas recognition through the personal name relies on a linguistic modality. Different involvement of the right and left hemispheres in these recognition modalities has been documented. Moreover, several studies have shown that people recognition is more difficult through the voice than through the face or name. A further source of complexity stems from the fact that, in ecological conditions, our interactions with familiar or unknown people occur through more than one of these modalities. It is therefore likely that the joint activation of these channels of person recognition leads to facilitation (or overshadowing) processes. However, the details of these interactions remain controversial. The aim of this Special Issue, therefore, is to gather both experimental and clinical investigations that could help to clarify this fascinating but complex issue.

Prof. Dr. Guido Gainotti
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Brain Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • person recognition
  • face
  • voice
  • name
  • hemispheric asymmetries
  • face-voice and face-name interactions

Published Papers (13 papers)


Editorial

Jump to: Research, Review

9 pages, 225 KiB  
Editorial
Human Recognition: The Utilization of Face, Voice, Name and Interactions—An Extended Editorial
by Guido Gainotti
Brain Sci. 2024, 14(4), 345; https://doi.org/10.3390/brainsci14040345 - 30 Mar 2024
Viewed by 492
Abstract
The many stimulating contributions to this Special Issue of Brain Sciences focused on some basic issues of particular interest in current research, with emphasis on human recognition using faces, voices, and names [...] Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)

Research

Jump to: Editorial, Review

19 pages, 2367 KiB  
Article
Extensive Visual Training in Adulthood Reduces an Implicit Neural Marker of the Face Inversion Effect
by Simen Hagen, Renaud Laguesse and Bruno Rossion
Brain Sci. 2024, 14(2), 146; https://doi.org/10.3390/brainsci14020146 - 30 Jan 2024
Cited by 1 | Viewed by 903
Abstract
Face identity recognition (FIR) in humans is supported by specialized neural processes whose function is spectacularly impaired when simply turning a face upside-down: the face inversion effect (FIE). While the FIE appears to have a slow developmental course, little is known about the [...] Read more.
Face identity recognition (FIR) in humans is supported by specialized neural processes whose function is spectacularly impaired when simply turning a face upside-down: the face inversion effect (FIE). While the FIE appears to have a slow developmental course, little is known about the plasticity of the neural processes involved in this effect—and in FIR in general—at adulthood. Here, we investigate whether extensive training (2 weeks, ~16 h) in young human adults discriminating a large set of unfamiliar inverted faces can reduce an implicit neural marker of the FIE for a set of entirely novel faces. In all, 28 adult observers were trained to individuate 30 inverted face identities presented under different depth-rotated views. Following training, we replicate previous behavioral reports of a significant reduction (56% relative accuracy rate) in the behavioral FIE as measured with a challenging four-alternative delayed-match-to-sample task for individual faces across depth-rotated views. Most importantly, using EEG together with a validated frequency tagging approach to isolate a neural index of FIR, we observe the same substantial (56%) reduction in the neural FIE at the expected occipito-temporal channels. The reduction in the neural FIE correlates with the reduction in the behavioral FIE at the individual participant level. Overall, we provide novel evidence suggesting a substantial degree of plasticity in processes that are key for face identity recognition in the adult human brain. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)
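The frequency-tagging (FPVS) logic used in this study to isolate a neural index of face identity recognition can be illustrated with a toy simulation: a response locked to a known stimulation frequency appears as a peak at exactly that frequency bin in the EEG amplitude spectrum, which can then be quantified against neighboring bins. The sampling rate, tagging frequency, and signal amplitudes below are illustrative assumptions on synthetic data, not the study's actual recording parameters.

```python
import numpy as np

fs = 250.0       # sampling rate in Hz (assumed for illustration)
tag_freq = 1.2   # tagging frequency in Hz (assumed for illustration)
duration = 20.0  # seconds of "recording" -> 0.05 Hz frequency resolution
t = np.arange(0, duration, 1 / fs)

# Simulated channel: a 1.2 Hz tagged response buried in broadband noise
rng = np.random.default_rng(1)
signal = 2.0 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(scale=1.0, size=t.size)

# Amplitude spectrum; each bin is fs / N = 0.05 Hz wide
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Signal-to-noise ratio: tagged bin vs. mean of surrounding bins,
# skipping the bins immediately adjacent to the peak
tag_bin = int(round(tag_freq * duration))
neighbors = np.r_[spectrum[tag_bin - 5:tag_bin - 1], spectrum[tag_bin + 2:tag_bin + 6]]
snr = spectrum[tag_bin] / neighbors.mean()
print(f"SNR at {tag_freq} Hz: {snr:.1f}")
```

Because the tagged response concentrates in a single, analysis-defined frequency bin, this approach yields an objective response measure without requiring trial-by-trial behavioral responses.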

15 pages, 1191 KiB  
Article
Familiarity Is Key: Exploring the Effect of Familiarity on the Face-Voice Correlation
by Sarah V. Stevenage, Rebecca Edey, Rebecca Keay, Rebecca Morrison and David J. Robertson
Brain Sci. 2024, 14(2), 112; https://doi.org/10.3390/brainsci14020112 - 23 Jan 2024
Cited by 1 | Viewed by 798
Abstract
Recent research has examined the extent to which face and voice processing are associated by virtue of the fact that both tap into a common person perception system. However, existing findings do not yet fully clarify the role of familiarity in this association. [...] Read more.
Recent research has examined the extent to which face and voice processing are associated by virtue of the fact that both tap into a common person perception system. However, existing findings do not yet fully clarify the role of familiarity in this association. Given this, two experiments are presented that examine face-voice correlations for unfamiliar stimuli (Experiment 1) and for familiar stimuli (Experiment 2). With care being taken to use tasks that avoid floor and ceiling effects and that use realistic speech-based voice clips, the results suggested a significant positive but small-sized correlation between face and voice processing when recognizing unfamiliar individuals. In contrast, the correlation when matching familiar individuals was significant and positive, but much larger. The results supported the existing literature suggesting that face and voice processing are aligned as constituents of an overarching person perception system. However, the difference in magnitude of their association here reinforced the view that familiar and unfamiliar stimuli are processed in different ways. This likely reflects the importance of a pre-existing mental representation and cross-talk within the neural architectures when processing familiar faces and voices, and yet the reliance on more superficial stimulus-based and modality-specific analysis when processing unfamiliar faces and voices. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)

12 pages, 2336 KiB  
Article
Familiarity Processing through Faces and Names: Insights from Multivoxel Pattern Analysis
by Ana Maria Castro-Laguardia, Marlis Ontivero-Ortega, Cristina Morato, Ignacio Lucas, Jaime Vila, María Antonieta Bobes León and Pedro Guerra Muñoz
Brain Sci. 2024, 14(1), 39; https://doi.org/10.3390/brainsci14010039 - 30 Dec 2023
Cited by 1 | Viewed by 1152
Abstract
The way our brain processes personal familiarity is still debatable. We used searchlight multivoxel pattern analysis (MVPA) to identify areas where local fMRI patterns could contribute to familiarity detection for both faces and name categories. Significantly, we identified cortical areas in frontal, temporal, [...] Read more.
The way our brain processes personal familiarity is still debatable. We used searchlight multivoxel pattern analysis (MVPA) to identify areas where local fMRI patterns could contribute to familiarity detection for both faces and name categories. Significantly, we identified cortical areas in frontal, temporal, cingulate, and insular areas, where it is possible to accurately cross-classify familiar stimuli from one category using a classifier trained with the stimulus from the other (i.e., abstract familiarity) based on local fMRI patterns. We also discovered several areas in the fusiform gyrus, frontal, and temporal regions—primarily lateralized to the right hemisphere—supporting the classification of familiar faces but failing to do so for names. Also, responses to familiar names (compared to unfamiliar names) consistently showed less activation strength than responses to familiar faces (compared to unfamiliar faces). The results evinced a set of abstract familiarity areas (independent of the stimulus type) and regions specifically related only to face familiarity, contributing to recognizing familiar individuals. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)
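The cross-classification scheme described above (training a classifier on familiar-vs-unfamiliar patterns from one stimulus category and testing it on the other) can be sketched in a few lines. This is a hypothetical illustration on synthetic "voxel patterns" with a planted shared familiarity signal, not the study's pipeline; the data shapes, noise levels, and helper function are all assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels = 50

# A shared "familiarity" direction in voxel space, common to both categories
familiar_axis = rng.normal(size=n_voxels)

def simulate_patterns(n_trials, familiar):
    """Synthetic fMRI-like patterns: familiarity signal plus trial noise."""
    base = familiar_axis if familiar else -familiar_axis
    return base + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Train on FACE trials (familiar = 1, unfamiliar = 0)...
X_train = np.vstack([simulate_patterns(40, True), simulate_patterns(40, False)])
y_train = np.array([1] * 40 + [0] * 40)

# ...then test on NAME trials: above-chance accuracy here indicates a
# category-independent ("abstract") familiarity representation.
X_test = np.vstack([simulate_patterns(20, True), simulate_patterns(20, False)])
y_test = np.array([1] * 20 + [0] * 20)

clf = LinearSVC().fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"cross-classification accuracy: {accuracy:.2f}")
```

In a searchlight analysis this train-on-one-category, test-on-the-other step is repeated within a small sphere centered on each voxel, mapping where in the brain such cross-category decoding succeeds.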

19 pages, 2442 KiB  
Article
Neural Correlates of Voice Learning with Distinctive and Non-Distinctive Faces
by Romi Zäske, Jürgen M. Kaufmann and Stefan R. Schweinberger
Brain Sci. 2023, 13(4), 637; https://doi.org/10.3390/brainsci13040637 - 07 Apr 2023
Cited by 2 | Viewed by 1450
Abstract
Recognizing people from their voices may be facilitated by a voice’s distinctiveness, in a manner similar to that which has been reported for faces. However, little is known about the neural time-course of voice learning and the role of facial information in voice [...] Read more.
Recognizing people from their voices may be facilitated by a voice’s distinctiveness, in a manner similar to that which has been reported for faces. However, little is known about the neural time-course of voice learning and the role of facial information in voice learning. Based on evidence for audiovisual integration in the recognition of familiar people, we studied the behavioral and electrophysiological correlates of voice learning associated with distinctive or non-distinctive faces. We repeated twelve unfamiliar voices uttering short sentences, together with either distinctive or non-distinctive faces (depicted before and during voice presentation) in six learning-test cycles. During learning, distinctive faces increased early visually-evoked (N170, P200, N250) potentials relative to non-distinctive faces, and face distinctiveness modulated voice-elicited slow EEG activity at the occipito–temporal and fronto-central electrodes. At the test, unimodally-presented voices previously learned with distinctive faces were classified more quickly than were voices learned with non-distinctive faces, and also more quickly than novel voices. Moreover, voices previously learned with faces elicited an N250-like component that was similar in topography to that typically observed for facial stimuli. The preliminary source localization of this voice-induced N250 was compatible with a source in the fusiform gyrus. Taken together, our findings provide support for a theory of early interaction between voice and face processing areas during both learning and voice recognition. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)

14 pages, 9693 KiB  
Article
Visual Deprivation Alters Functional Connectivity of Neural Networks for Voice Recognition: A Resting-State fMRI Study
by Wenbin Pang, Wei Zhou, Yufang Ruan, Linjun Zhang, Hua Shu, Yang Zhang and Yumei Zhang
Brain Sci. 2023, 13(4), 636; https://doi.org/10.3390/brainsci13040636 - 07 Apr 2023
Cited by 2 | Viewed by 1417
Abstract
Humans recognize one another by identifying their voices and faces. For sighted people, the integration of voice and face signals in corresponding brain networks plays an important role in facilitating the process. However, individuals with vision loss primarily resort to voice cues to [...] Read more.
Humans recognize one another by identifying their voices and faces. For sighted people, the integration of voice and face signals in corresponding brain networks plays an important role in facilitating the process. However, individuals with vision loss primarily resort to voice cues to recognize a person’s identity. It remains unclear how the neural systems for voice recognition reorganize in the blind. In the present study, we collected behavioral and resting-state fMRI data from 20 early blind (5 females; mean age = 22.6 years) and 22 sighted control (7 females; mean age = 23.7 years) individuals. We aimed to investigate the alterations in the resting-state functional connectivity (FC) among the voice- and face-sensitive areas in blind subjects in comparison with controls. We found that the intranetwork connections among voice-sensitive areas, including amygdala-posterior “temporal voice areas” (TVAp), amygdala-anterior “temporal voice areas” (TVAa), and amygdala-inferior frontal gyrus (IFG) were enhanced in the early blind. The blind group also showed increased FCs of “fusiform face area” (FFA)-IFG and “occipital face area” (OFA)-IFG but decreased FCs between the face-sensitive areas (i.e., FFA and OFA) and TVAa. Moreover, the voice-recognition accuracy was positively related to the strength of TVAp-FFA in the sighted, and the strength of amygdala-FFA in the blind. These findings indicate that visual deprivation shapes functional connectivity by increasing the intranetwork connections among voice-sensitive areas while decreasing the internetwork connections between the voice- and face-sensitive areas. Moreover, the face-sensitive areas are still involved in the voice-recognition process in blind individuals through pathways such as the subcortical-occipital or occipitofrontal connections, which may benefit the visually impaired greatly during voice processing. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)

12 pages, 973 KiB  
Article
Familiarity Facilitates Detection of Angry Expressions
by Vassiki Chauhan, Matteo Visconti di Oleggio Castello, Morgan Taylor and Maria Ida Gobbini
Brain Sci. 2023, 13(3), 509; https://doi.org/10.3390/brainsci13030509 - 18 Mar 2023
Cited by 1 | Viewed by 1154
Abstract
Personal familiarity facilitates rapid and optimized detection of faces. In this study, we investigated whether familiarity associated with faces can also facilitate the detection of facial expressions. Models of face processing propose that face identity and face expression detection are mediated by distinct [...] Read more.
Personal familiarity facilitates rapid and optimized detection of faces. In this study, we investigated whether familiarity associated with faces can also facilitate the detection of facial expressions. Models of face processing propose that face identity and face expression detection are mediated by distinct pathways. We used a visual search paradigm to assess if facial expressions of emotion (anger and happiness) were detected more rapidly when produced by familiar as compared to unfamiliar faces. We found that participants detected an angry expression 11% more accurately and 135 ms faster when produced by familiar as compared to unfamiliar faces while happy expressions were detected with equivalent accuracies and at equivalent speeds for familiar and unfamiliar faces. These results suggest that detectors in the visual system dedicated to processing features of angry expressions are optimized for familiar faces. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)

11 pages, 370 KiB  
Article
The Right Temporal Lobe and the Enhancement of Voice Recognition in Congenitally Blind Subjects
by Stefano Terruzzi, Costanza Papagno and Guido Gainotti
Brain Sci. 2023, 13(3), 431; https://doi.org/10.3390/brainsci13030431 - 02 Mar 2023
Cited by 2 | Viewed by 1080
Abstract
Background: Experimental investigations and clinical observations have shown that not only faces but also voices are predominantly processed by the right hemisphere. Moreover, right brain-damaged patients show more difficulties with voice than with face recognition. Finally, healthy subjects undergoing right temporal anodal stimulation [...] Read more.
Background: Experimental investigations and clinical observations have shown that not only faces but also voices are predominantly processed by the right hemisphere. Moreover, right brain-damaged patients show more difficulties with voice than with face recognition. Finally, healthy subjects undergoing right temporal anodal stimulation improve their voice but not their face recognition. This asymmetry between face and voice recognition in the right hemisphere could be due to the greater complexity of voice processing. Methods: To further investigate this issue, we tested voice and name recognition in twelve congenitally blind people. Results: The results showed a complete overlap between the components of voice recognition impaired in patients with right temporal damage and those improved in congenitally blind people. Congenitally blind subjects, indeed, scored significantly better than control sighted individuals in voice discrimination and produced fewer false alarms on familiarity judgement of famous voices, corresponding to tests selectively impaired in patients with right temporal lesions. Conclusions: We suggest that task difficulty is a factor that impacts the degree of its lateralization. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)

21 pages, 927 KiB  
Article
The Curious Case of Impersonators and Singers: Telling Voices Apart and Telling Voices Together under Naturally Challenging Listening Conditions
by Sarah V. Stevenage, Lucy Singh and Pru Dixey
Brain Sci. 2023, 13(2), 358; https://doi.org/10.3390/brainsci13020358 - 19 Feb 2023
Cited by 2 | Viewed by 1176
Abstract
Vocal identity processing depends on the ability to tell apart two instances of different speakers whilst also being able to tell together two instances of the same speaker. Whilst previous research has examined these voice processing capabilities under relatively common listening conditions, it [...] Read more.
Vocal identity processing depends on the ability to tell apart two instances of different speakers whilst also being able to tell together two instances of the same speaker. Whilst previous research has examined these voice processing capabilities under relatively common listening conditions, it has not yet tested the limits of these capabilities. Here, two studies are presented that employ challenging listening tasks to determine just how good we are at these voice processing tasks. In Experiment 1, 54 university students were asked to distinguish between very similar sounding, yet different speakers (celebrity targets and their impersonators). Participants completed a ‘Same/Different’ task and a ‘Which is the Celebrity?’ task to pairs of speakers, and a ‘Real or Not?’ task to individual speakers. In Experiment 2, a separate group of 40 university students was asked to pair very different sounding instances of the same speakers (speaking and singing). Participants were presented with an array of voice clips and completed a ‘Pairs Task’ as a variant of the more traditional voice sorting task. The results of Experiment 1 suggested that significantly more mistakes were made when distinguishing celebrity targets from their impersonators than when distinguishing the same targets from control voices. Nevertheless, listeners were significantly better than chance in all three tasks despite the challenge. Similarly, the results of Experiment 2 suggested that it was significantly more difficult to pair singing and speaking clips than to pair two speaking clips, particularly when the speakers were unfamiliar. Again, however, the performance was significantly above zero, and was again better than chance in a cautious comparison. Taken together, the results suggest that vocal identity processing is a highly adaptable task, assisted by familiarity with the speaker. 
However, the fact that performance remained above chance in all tasks suggests that we had not reached the limit of our listeners’ capability, despite the considerable listening challenges introduced. We conclude that voice processing is far better than previous research might have presumed. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)

20 pages, 3583 KiB  
Article
Challenging the Classical View: Recognition of Identity and Expression as Integrated Processes
by Emily Schwartz, Kathryn O’Nell, Rebecca Saxe and Stefano Anzellotti
Brain Sci. 2023, 13(2), 296; https://doi.org/10.3390/brainsci13020296 - 10 Feb 2023
Cited by 2 | Viewed by 1857
Abstract
Recent neuroimaging evidence challenges the classical view that face identity and facial expression are processed by segregated neural pathways, showing that information about identity and expression are encoded within common brain regions. This article tests the hypothesis that integrated representations of identity and [...] Read more.
Recent neuroimaging evidence challenges the classical view that face identity and facial expression are processed by segregated neural pathways, showing that information about identity and expression are encoded within common brain regions. This article tests the hypothesis that integrated representations of identity and expression arise spontaneously within deep neural networks. A subset of the CelebA dataset is used to train a deep convolutional neural network (DCNN) to label face identity (chance = 0.06%, accuracy = 26.5%), and the FER2013 dataset is used to train a DCNN to label facial expression (chance = 14.2%, accuracy = 63.5%). The identity-trained and expression-trained networks each successfully transfer to labeling both face identity and facial expression on the Karolinska Directed Emotional Faces dataset. This study demonstrates that DCNNs trained to recognize face identity and DCNNs trained to recognize facial expression spontaneously develop representations of facial expression and face identity, respectively. Furthermore, a congruence coefficient analysis reveals that features distinguishing between identities and features distinguishing between expressions become increasingly orthogonal from layer to layer, suggesting that deep neural networks disentangle representational subspaces corresponding to different sources. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)
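The congruence coefficient analysis mentioned above (Tucker's phi) measures the similarity between two sets of feature loadings; values near 1 indicate near-identical directions, and values near 0 indicate orthogonal ones, which is what increasing disentanglement of identity and expression subspaces would look like. A minimal illustration on toy loading vectors, chosen purely to show the two extremes:

```python
import numpy as np

def congruence_coefficient(x, y):
    """Tucker's congruence coefficient between two loading vectors."""
    x, y = np.ravel(x), np.ravel(y)
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Hypothetical loadings: which features separate identities vs. expressions
identity_features = np.array([1.0, 0.0, 2.0, 0.0])
expression_features = np.array([0.0, 3.0, 0.0, 1.0])  # orthogonal by construction

print(congruence_coefficient(identity_features, identity_features))    # 1.0
print(congruence_coefficient(identity_features, expression_features))  # 0.0
```

Applied layer by layer to a network's identity-discriminating and expression-discriminating features, a coefficient that falls toward 0 in deeper layers supports the paper's claim that the two representational subspaces become increasingly orthogonal.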

11 pages, 1074 KiB  
Article
Effects of Voice and Biographic Data on Face Encoding
by Thilda Karlsson, Heidi Schaefer, Jason J. S. Barton and Sherryse L. Corrow
Brain Sci. 2023, 13(1), 148; https://doi.org/10.3390/brainsci13010148 - 14 Jan 2023
Cited by 2 | Viewed by 1232
Abstract
There are various perceptual and informational cues for recognizing people. How these interact in the recognition process is of interest. Our goal was to determine if the encoding of faces was enhanced by the concurrent presence of a voice, biographic data, or both. [...] Read more.
There are various perceptual and informational cues for recognizing people. How these interact in the recognition process is of interest. Our goal was to determine if the encoding of faces was enhanced by the concurrent presence of a voice, biographic data, or both. Using a between-subject design, four groups of 10 subjects learned the identities of 24 faces seen in video-clips. Half of the faces were seen only with their names, while the other half had additional information. For the first group this was the person’s voice, for the second, it was biographic data, and for the third, both voice and biographic data. In a fourth control group, the additional information was the voice of a generic narrator relating non-biographic information. In the retrieval phase, subjects performed a familiarity task and then a face-to-name identification task with dynamic faces alone. Our results consistently showed no benefit to face encoding with additional information, for either the familiarity or identification task. Tests for equivalency indicated that facilitative effects of a voice or biographic data on face encoding were not likely to exceed 3% in accuracy. We conclude that face encoding is minimally influenced by cross-modal information from voices or biographic data. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)

13 pages, 920 KiB  
Article
Effects of Faces and Voices on the Encoding of Biographic Information
by Sarah Fransson, Sherryse Corrow, Shanna Yeung, Heidi Schaefer and Jason J. S. Barton
Brain Sci. 2022, 12(12), 1716; https://doi.org/10.3390/brainsci12121716 - 15 Dec 2022
Cited by 2 | Viewed by 1042
Abstract
There are multiple forms of knowledge about people. Whether diverse person-related data interact is of interest regarding the more general issue of integration of multi-source information about the world. Our goal was to examine whether perception of a person’s face or voice enhanced [...] Read more.
There are multiple forms of knowledge about people. Whether diverse person-related data interact is of interest regarding the more general issue of integration of multi-source information about the world. Our goal was to examine whether perception of a person’s face or voice enhanced the encoding of their biographic data. We performed three experiments. In the first experiment, subjects learned the biographic data of a character with or without a video clip of their face. In the second experiment, they learned the character’s data with an audio clip of either a generic narrator’s voice or the character’s voice relating the same biographic information. In the third experiment, an audiovisual clip of both the face and voice of either a generic narrator or the character accompanied the learning of biographic data. After learning, a test phase presented biographic data alone, and subjects were tested first for familiarity and second for matching of biographic data to the name. The results showed equivalent learning of biographic data across all three experiments, and none showed evidence that a character’s face or voice enhanced the learning of biographic information. We conclude that the simultaneous processing of perceptual representations of people may not modulate the encoding of biographic data. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)

Review

Jump to: Editorial, Research

48 pages, 14608 KiB  
Review
Intracerebral Electrophysiological Recordings to Understand the Neural Basis of Human Face Recognition
by Bruno Rossion, Corentin Jacques and Jacques Jonas
Brain Sci. 2023, 13(2), 354; https://doi.org/10.3390/brainsci13020354 - 18 Feb 2023
Cited by 3 | Viewed by 2455
Abstract
Understanding how the human brain recognizes faces is a primary scientific goal in cognitive neuroscience. Given the limitations of the monkey model of human face recognition, a key approach in this endeavor is the recording of electrophysiological activity with electrodes implanted inside the [...] Read more.
Understanding how the human brain recognizes faces is a primary scientific goal in cognitive neuroscience. Given the limitations of the monkey model of human face recognition, a key approach in this endeavor is the recording of electrophysiological activity with electrodes implanted inside the brain of human epileptic patients. However, this approach faces a number of challenges that must be overcome for meaningful scientific knowledge to emerge. Here we synthesize a 10 year research program combining the recording of intracerebral activity (StereoElectroEncephaloGraphy, SEEG) in the ventral occipito-temporal cortex (VOTC) of large samples of participants and fast periodic visual stimulation (FPVS), to objectively define, quantify, and characterize the neural basis of human face recognition. These large-scale studies reconcile the wide distribution of neural face recognition activity with its (right) hemispheric and regional specialization and extend face-selectivity to anterior regions of the VOTC, including the ventral anterior temporal lobe (VATL) typically affected by magnetic susceptibility artifacts in functional magnetic resonance imaging (fMRI). Clear spatial dissociations in category-selectivity between faces and other meaningful stimuli such as landmarks (houses, medial VOTC regions) or written words (left lateralized VOTC) are found, confirming and extending neuroimaging observations while supporting the validity of the clinical population tested to inform about normal brain function. The recognition of face identity – arguably the ultimate form of recognition for the human brain – beyond mere differences in physical features is essentially supported by selective populations of neurons in the right inferior occipital gyrus and the lateral portion of the middle and anterior fusiform gyrus. In addition, low-frequency and high-frequency broadband iEEG signals of face recognition appear to be largely concordant in the human association cortex. 
We conclude by outlining the challenges of this research program to understand the neural basis of human face recognition in the next 10 years. Full article
(This article belongs to the Special Issue People Recognition through Face, Voice, Name and Their Interactions)
