Action Understanding and Face Processing Interweavings during Development

A special issue of Brain Sciences (ISSN 2076-3425). This special issue belongs to the section "Developmental Neuroscience".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 14,047

Special Issue Editors


Guest Editor
Dipartimento di Psicologia, Università degli Studi di Milano-Bicocca, Milan, Italy
Interests: action understanding; face recognition; early development; social cognition; perceptual development

Co-Guest Editor
Department of Psychology, University of Nevada, Las Vegas, NV 89154, USA
Interests: developmental psychology; face perception and processing; development of biases and stereotypes

Special Issue Information

Dear Colleagues,

In the last two decades, significant efforts have been made to understand how infants and children become capable of navigating the social world that surrounds them. The domains of face processing and action understanding have been particularly well explored. Indeed, both faces and actions represent fundamental aspects of social communication, and through them, infants learn about their environment and derive expectations about others’ behaviors. Quite surprisingly, though, face and action processing mechanisms have historically been considered separate, and the interconnections between the two domains have remained poorly explored. Yet faces and actions should be considered tightly linked, and examples are countless. Most of the faces we encounter in everyday life are moving, as facial expressions imply an action of the facial muscles. Others’ hand gestures and body postures often elicit particular facial expressions: we may, for instance, react with a fearful facial expression to an angry hand gesture. The identity of a face (e.g., a friend, your mom) may elicit certain gestures and actions. Social cues conveyed by faces and actions may or may not be coherent, thus providing us with a complex picture of what is going on. The aim of this Special Issue is to deepen our understanding of the interdependencies between face and action perception processes throughout development, with particular attention to their neurobiological and neurophysiological underpinnings. Behavioral studies and research carried out with adult participants will also be considered if relevant.

Prof. Dr. Chiara Turati
Guest Editor

Dr. Jennifer L. Rennels
Co-Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Brain Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • face processing
  • action processing
  • neurophysiological measures
  • development

Published Papers (7 papers)


Research

18 pages, 1593 KiB  
Article
Three-Month-Olds’ Preferences for Biological Motion Configuration and Its Subsequent Decline
by Isabel C. Lisboa, Daniel M. Basso, Jorge A. Santos and Alfredo F. Pereira
Brain Sci. 2022, 12(5), 566; https://doi.org/10.3390/brainsci12050566 - 27 Apr 2022
Viewed by 1343
Abstract
To perceive, identify and understand the action of others, it is essential to perceptually organize individual and local moving body parts (such as limbs) into the whole configuration of a human body in action. Configural processing—processing the relations among features or parts of a stimulus—is a fundamental ability in the perception of several important social stimuli, such as faces or biological motion. Despite this, we know very little about how human infants develop the ability to perceive and prefer configural relations in biological motion. We present two preferential looking experiments (one cross-sectional and one longitudinal) measuring infants’ preferential attention between a coherent motion configuration of a person walking vs. a scrambled point-light walker (i.e., a stimulus in which all configural relations were removed, thus, in which the perception of a person is impossible). We found that three-month-old infants prefer a coherent point-light walker in relation to a scrambled display, but both five- and seven-month-old infants do not show any preference. We discuss our findings in terms of the different perceptual, attentional, motor, and brain processes available at each age group, and how they dynamically interact with selective attention toward the coherent and socially relevant motion of a person walking during our first year of life. Full article

15 pages, 1838 KiB  
Article
Facial Expression Time Processing in Typical Development and in Patients with Congenital Facial Palsy
by Mauro Belluardo, Elisa De Stefani, Anna Barbot, Bernardo Bianchi, Cecilia Zannoni, Alberto Ferrari, Holly Rayson, Santo Di Nuovo, Giovanni Belluardo, Paola Sessa and Pier Francesco Ferrari
Brain Sci. 2022, 12(5), 516; https://doi.org/10.3390/brainsci12050516 - 19 Apr 2022
Viewed by 1856
Abstract
Temporal dynamics of behavior, particularly facial expressions, are fundamental for communication between individuals from very early in development. Facial expression processing has been widely demonstrated to involve embodied simulative processes mediated by the motor system. Such processes may be impaired in patients with congenital facial palsy, including those affected by Moebius syndrome (MBS). The aims of this study were to investigate (a) the role of motor mechanisms in the processing of dynamic facial expression timing by testing patients affected by congenital facial palsy and (b) age-dependent effects on such processing. Accordingly, we recruited 38 typically developing individuals and 15 individuals with MBS, ranging in age from childhood to adulthood. We used a time comparison task where participants were asked to identify which one of two dynamic facial expressions was faster. Results showed that MBS individuals performed worse than controls in correctly estimating the duration of facial expressions. Interestingly, we did not find any performance differences in relation to age. These findings provide further evidence for the involvement of the motor system in processing facial expression duration and suggest that a sensorimotor matching mechanism may contribute to such timing perception from childhood. Full article

12 pages, 620 KiB  
Article
The Detection of Face-like Stimuli at the Edge of the Infant Visual Field
by Chiara Capparini, Michelle P. S. To and Vincent M. Reid
Brain Sci. 2022, 12(4), 493; https://doi.org/10.3390/brainsci12040493 - 13 Apr 2022
Cited by 3 | Viewed by 2022
Abstract
Human infants are highly sensitive to social information in their visual world. In laboratory settings, researchers have mainly studied the development of social information processing using faces presented on standard computer displays, in paradigms exploring face-to-face, direct eye contact social interactions. This is a simplification of a richer visual environment in which social information derives from the wider visual field and detection involves navigating the world with eyes, head and body movements. The present study measured 9-month-old infants’ sensitivities to face-like configurations across mid-peripheral visual areas using a detection task. Upright and inverted face-like stimuli appeared at one of three eccentricities (50°, 55° or 60°) in the left and right hemifields. Detection rates at different eccentricities were measured from video recordings. Results indicated that infant performance was heterogeneous and dropped beyond 55°, with a marginal advantage for targets appearing in the left hemifield. Infants’ orienting behaviour was not influenced by the orientation of the target stimulus. These findings are key to understanding how face stimuli are perceived outside foveal regions and are informative for the design of infant paradigms involving stimulus presentation across a wider field of view, in more naturalistic visual environments. Full article

14 pages, 1756 KiB  
Article
The Relationship between Crawling and Emotion Discrimination in 9- to 10-Month-Old Infants
by Gloria Gehb, Michael Vesker, Bianca Jovanovic, Daniela Bahn, Christina Kauschke and Gudrun Schwarzer
Brain Sci. 2022, 12(4), 479; https://doi.org/10.3390/brainsci12040479 - 05 Apr 2022
Cited by 2 | Viewed by 1919
Abstract
The present study examined whether infants’ crawling experience is related to their sensitivity to fearful emotional expressions. Twenty-nine 9- to 10-month-old infants were tested in a preferential looking task, in which they were presented with different pairs of animated faces on a screen displaying a 100% happy facial expression and morphed facial expressions containing varying degrees of fear and happiness. Regardless of their crawling experiences, all infants looked longer at more fearful faces. Additionally, infants with at least 6 weeks of crawling experience needed lower levels of fearfulness in the morphs in order to detect a change from a happy to a fearful face compared to those with less crawling experience. Thus, the crawling experience seems to increase infants’ sensitivity to fearfulness in faces. Full article

13 pages, 1925 KiB  
Article
Isolating Action Prediction from Action Integration in the Perception of Social Interactions
by Ana Pesquita, Ulysses Bernardet, Bethany E. Richards, Ole Jensen and Kimron Shapiro
Brain Sci. 2022, 12(4), 432; https://doi.org/10.3390/brainsci12040432 - 24 Mar 2022
Cited by 2 | Viewed by 1867
Abstract
Previous research suggests that predictive mechanisms are essential in perceiving social interactions. However, these studies did not isolate action prediction (a priori expectations about how partners in an interaction react to one another) from action integration (a posteriori processing of both partners’ actions). This study investigated action prediction during social interactions while controlling for integration confounds. Twenty participants viewed 3D animations depicting an action–reaction interaction between two actors. At the start of each action–reaction interaction, one actor performs a social action. Immediately after, instead of presenting the other actor’s reaction, a black screen covers the animation for a short time (occlusion duration) until a still frame depicting a precise moment of the reaction is shown (reaction frame). The moment shown in the reaction frame is either temporally aligned with the occlusion duration or deviates by 150 ms or 300 ms. Fifty percent of the action–reaction trials were semantically congruent, and the remaining were incongruent, e.g., one actor offers to shake hands, and the other reciprocally shakes their hand (congruent action–reaction) versus one actor offers to shake hands, and the other leans down (incongruent action–reaction). Participants made fast congruency judgments. We hypothesized that judging the congruency of action–reaction sequences is aided by temporal predictions. The findings supported this hypothesis; linear speed–accuracy scores showed that congruency judgments were facilitated by temporally aligned occlusion durations and reaction frames compared to 300 ms deviations, thus suggesting that observers internally simulate the temporal unfolding of an observed social interaction. Furthermore, we explored the link between autistic traits and sensitivity to temporal deviations.
Overall, the study offers new evidence of prediction mechanisms underpinning the perception of social interactions in isolation from action integration confounds. Full article

13 pages, 1523 KiB  
Article
Preschool Children’s Processing of Events during Verb Learning: Is the Focus on People (Faces) or Their Actions (Hands)?
by Jane B. Childers, Emily Warkentin, Blaire M. Porter, Marissa Young, Sneh Lalani and Akila Gopalkrishnan
Brain Sci. 2022, 12(3), 344; https://doi.org/10.3390/brainsci12030344 - 03 Mar 2022
Cited by 1 | Viewed by 1841
Abstract
Verbs are central to the syntactic structure of sentences, and, thus, important for learning one’s native language. This study examined how children visually inspect events as they hear, and do not hear, a new verb. Specifically, children may focus on the agent of the action or may prioritize attention to the action being performed; to date, little direct evidence is available. This study used an eye tracker to track 2-, 3-, and 4-year-olds’ looking to the agent (i.e., face) vs. action (i.e., hands) while viewing events linked to a new verb as well as distractor events. A Tobii X30 eye tracker recorded children’s fixations to AOIs (head/face and hands) as they watched three target events and two distractor events in different orders during the learning phase, and pointed to one of two events in two test trials. This was repeated for a second novel verb. Pointing results show that children in all age groups were able to learn and extend the new verbs to new events at test. Additionally, across age groups, when viewing target events, children increased their looking to the hands (where the action is taking place) as those trials progressed and decreased their looking to the agents’ face, which is less informative for learning a new verb’s meaning. In contrast, when viewing distractor events, children decreased their looking to hands over trials and maintained their attention to the face. In summary, children’s visual attention to agents’ faces and hands differed depending on whether the events co-occurred with the new verb. These results are important as this is the first study to show this pattern of visual attention during verb learning, and, thus, they help reveal underlying attentional strategies children may use when learning verbs. Full article

17 pages, 1981 KiB  
Article
Sensorimotor Activity and Network Connectivity to Dynamic and Static Emotional Faces in 7-Month-Old Infants
by Ermanno Quadrelli, Elisa Roberti, Silvia Polver, Hermann Bulf and Chiara Turati
Brain Sci. 2021, 11(11), 1396; https://doi.org/10.3390/brainsci11111396 - 24 Oct 2021
Cited by 7 | Viewed by 2095
Abstract
The present study investigated whether, as in adults, 7-month-old infants’ sensorimotor brain areas are recruited in response to the observation of emotional facial expressions. Activity of the sensorimotor cortex, as indexed by µ rhythm suppression, was recorded using electroencephalography (EEG) while infants observed neutral, angry, and happy facial expressions either in a static (N = 19) or dynamic (N = 19) condition. Graph theory analysis was used to investigate to which extent neural activity was functionally localized in specific cortical areas. Happy facial expressions elicited greater sensorimotor activation compared to angry faces in the dynamic experimental condition, while no difference was found between the three expressions in the static condition. Results also revealed that happy but neither angry nor neutral expressions elicited a significant right-lateralized activation in the dynamic condition. Furthermore, dynamic emotional faces generated more efficient processing as they elicited higher global efficiency and lower network diameter compared to static faces. Overall, current results suggest that, contrary to neutral and angry faces, happy expressions elicit sensorimotor activity at 7 months and that dynamic emotional faces are more efficiently processed by functional brain networks. Finally, current data provide evidence of the existence of a right-lateralized activity for the processing of happy facial expressions. Full article
