Child–Computer Interaction and Multimodal Child Behavior Analysis

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (25 May 2023)

Special Issue Editors


Dr. Heysem Kaya
Guest Editor
Department of Information and Computing Sciences, Utrecht University, 3584 Utrecht, The Netherlands
Interests: affective computing; computational paralinguistics; speech emotion recognition; explainable machine learning

Dr. Maryam Najafian
Guest Editor
Computer Science and AI Lab (CSAIL), Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
Interests: Natural Language Understanding (NLU); AI fairness, explainability and interpretability

Dr. Saeid Safavi
Guest Editor
Department of Mechanical Engineering Sciences, Connected Autonomous Vehicle Lab (CAV-Lab), University of Surrey, Guildford GU2 7XH, UK
Interests: sound processing; computational paralinguistics; speech and speaker recognition; machine learning; music processing; pattern recognition

Special Issue Information

Dear Colleagues,

Child behavior is a topic of wide scientific interest across many disciplines, including the social and behavioral sciences and artificial intelligence (AI). Yet knowledge from these disciplines is not integrated to its full potential, owing, among other factors, to the dissemination of knowledge across different outlets (journals, conferences) and to differing research practices. In this Special Issue, we aim to connect these fields and bridge the gap between scientific understanding and technological capability, addressing topics such as the use of AI (e.g., audio, visual, and textual signal processing and machine learning) to better understand and model child behavioral and developmental processes; challenges and opportunities in large-scale child behavior analysis; and the implementation of explainable ML/AI on sensitive child data. We also welcome contributions presenting new child-behavior-related multimodal corpora and experimental results on them.

The Special Issue themes include, but are not limited to, the following:

  • Systems
    • Affective behavior analysis and elicitation for children;
    • Longitudinal monitoring of children;
    • Acoustic and linguistic analysis of children’s speech;
  • Applications
    • VR, AR, and wearable interfaces for children;
    • Machine learning for education;
    • Child monitoring in classrooms;
  • Theory
    • Linking child behavior across modalities;
    • Perspectives on parent–child interactive behavior;
  • Focus theme challenges: behavior analysis for children
    • Monitoring children during social interactions;
    • Child speech development delay in different developmental disorders;
    • Detection of abusive and aggressive behaviors, cyberbullying;
    • Privacy and ethics of multimedia access for children;
    • Databases collected from children;
    • Investigations into children’s interaction with multimedia content;
  • Kids, families, and COVID-19 (healthcare concerns, education, mental health risks, missing meals, disproportionate impact concerns)
    • Behavior and well-being related to the COVID-19 pandemic;
    • Pandemic pain points and the urgent need to respond by leveraging technology;
    • Meeting the needs of children and families now and following the pandemic;
    • Relevant data sources.

Dr. Heysem Kaya
Dr. Maryam Najafian
Dr. Saeid Safavi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • child–computer interaction
  • uni/multi-modal child behavior analysis
  • signal processing for child behavior understanding
  • automated child healthcare/well-being monitoring
  • educational courseware/games for children
  • psychological/psychiatric analyses of child behavior

Published Papers (2 papers)


Research

16 pages
Article
Using Open-Source Automatic Speech Recognition Tools for the Annotation of Dutch Infant-Directed Speech
by Anika van der Klis, Frans Adriaans, Mengru Han and René Kager
Multimodal Technol. Interact. 2023, 7(7), 68; https://doi.org/10.3390/mti7070068 - 03 Jul 2023
Cited by 1
Abstract
There is a large interest in the annotation of speech addressed to infants. Infant-directed speech (IDS) has acoustic properties that might pose a challenge to automatic speech recognition (ASR) tools developed for adult-directed speech (ADS). While ASR tools could potentially speed up the annotation process, their effectiveness on this speech register is currently unknown. In this study, we assessed to what extent open-source ASR tools can successfully transcribe IDS. We used speech data from 21 Dutch mothers reading picture books containing target words to their 18- and 24-month-old children (IDS) and the experimenter (ADS). In Experiment 1, we examined how the ASR tool Kaldi-NL performs at annotating target words in IDS vs. ADS. We found that Kaldi-NL only found 55.8% of target words in IDS, while it annotated 66.8% correctly in ADS. In Experiment 2, we aimed to assess the difficulties in annotating IDS more broadly by transcribing all IDS utterances manually and comparing the word error rates (WERs) of two different ASR systems: Kaldi-NL and WhisperX. We found that WhisperX performs significantly better than Kaldi-NL. While there is much room for improvement, the results show that automatic transcriptions provide a promising starting point for researchers who have to transcribe a large amount of speech directed at infants.
(This article belongs to the Special Issue Child–Computer Interaction and Multimodal Child Behavior Analysis)
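
For readers less familiar with the metric this abstract compares systems on, word error rate (WER) is the word-level Levenshtein edit distance between a hypothesis transcript and a reference transcript, normalized by the reference length. The sketch below is illustrative only; it is not the authors' code, and the example sentences are invented:

```python
# Minimal sketch of the word error rate (WER) metric used to compare
# ASR systems such as Kaldi-NL and WhisperX against manual references.
# WER = (substitutions + deletions + insertions) / reference length,
# computed here via dynamic-programming Levenshtein alignment on words.

def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: one substitution in a four-word reference
print(wer("de hond rent snel", "de kat rent snel"))  # 0.25
```
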

21 pages
Article
Modeling “Stag and Hare Hunting” Behaviors Using Interaction Data from an mCSCL Application for Grade 5 Mathematics
by Rex P. Bringula, Ann Joizelle D. Enverzo, Ma. Gracia G. Gonzales and Maria Mercedes T. Rodrigo
Multimodal Technol. Interact. 2023, 7(4), 34; https://doi.org/10.3390/mti7040034 - 27 Mar 2023
Cited by 2
Abstract
This study attempted to model the stag and hare hunting behaviors of students using their interaction data in a mobile computer-supported collaborative learning application for Grade 5 mathematics. Twenty-five male and 12 female Grade 5 students with an average age of 10.5 years participated in this study. Stag hunters are more likely to display personality dimensions characterized by Openness, while hare hunters display personality dimensions characterized by Extraversion and Neuroticism. Students who display personality dimensions characterized by Agreeableness and Conscientiousness may tend to be either hare or stag hunters, depending on the difficulty, the types of arithmetic problems solved, and the amount of time spent solving arithmetic problems. Students who engaged in stag hunting behavior performed poorly in mathematics. Decision tree modeling and lag sequential analysis revealed that stag and hare hunting behaviors could be identified based on personality dimensions, types of arithmetic problems solved, difficulty level of problems solved, time spent solving problems, and problem-solving patterns. Future research directions and practical implications are also discussed.
(This article belongs to the Special Issue Child–Computer Interaction and Multimodal Child Behavior Analysis)
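
As a rough illustration of the kind of decision tree modeling the abstract describes, the sketch below fits a classifier to student-level features of the sort the paper names (personality dimensions, problem difficulty, time spent). This is not the authors' pipeline; every feature, label, and value here is synthetic and hypothetical:

```python
# Hedged sketch: predicting stag vs. hare hunting behavior from
# interaction features with a decision tree. All data below is
# randomly generated stand-in data, NOT the study's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 37  # matches the study's 25 male + 12 female participants

# Hypothetical per-student features: Big Five scores (0-1),
# problem difficulty level (1-3), and seconds spent per problem.
X = np.column_stack([
    rng.random(n),           # openness
    rng.random(n),           # extraversion
    rng.random(n),           # neuroticism
    rng.integers(1, 4, n),   # difficulty level
    rng.normal(60, 15, n),   # time spent (s)
])
y = rng.integers(0, 2, n)    # 0 = hare hunter, 1 = stag hunter (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

A shallow tree (max_depth=3 here) keeps the learned rules inspectable, which matters when the goal is identifying interpretable behavioral patterns rather than maximizing accuracy.
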
