Home Ambient Intelligent System

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 April 2020) | Viewed by 12060

Special Issue Editors

Department of Computer Science and Engineering, University of Westminster, 101 New Cavendish St, Fitzrovia, London W1W 6XH, UK
Interests: deep neural networks; image quality; image systems performance; automotive applications; visual motion modelling and recognition

Special Issue Information

Dear Colleagues,

Over the last few years, there has been increasing interest in the creation of intelligent homes using machine learning and deep learning methods combined with multimodal information processing techniques, embedded hardware, and domestic robots. This Special Issue is designed to provide researchers and developers with the opportunity to publish original, innovative, and state-of-the-art algorithms and architectures for real-time home ambient intelligence applications, with emphasis on the areas of computer vision, image processing, biometrics, virtual and augmented reality, neural networks, intelligent interfaces, and biomimetic object–vision recognition.

This Special Issue provides a platform for academics, developers, and industry-related researchers belonging to the vast communities of Neural Networks, Computational Intelligence, Machine Learning, Deep Learning, Biometrics, Vision Systems, and Robotics. The objective is to integrate interdisciplinary studies and research, applying machine learning and deep learning methods in vision and robotics to benefit society.

The methods and tools applied to vision and robotics include, but are not limited to, the following:

  • Computational intelligence methods
  • Machine learning methods
  • Self-adaptation, self-organisation, and self-supervised learning
  • Robust computer vision algorithms (operation under variable conditions, object tracking, behaviour analysis and learning, scene segmentation)
  • Convolutional Neural Networks (CNN)
  • Recurrent Neural Networks (RNN)
  • Deep Reinforcement Learning (DRL)
  • Hardware implementation and algorithm acceleration (GPUs, FPGAs, etc.)

The fields of application include (but are not limited to) the following:
  • Video and image processing
  • Video tracking
  • 3D scene reconstruction
  • 3D tracking in virtual reality environments
  • 3D volume visualisation
  • Intelligent interfaces (user-friendly man–machine interface)
  • Multimodal human pose recovery and behaviour analysis
  • Gesture and posture analysis and recognition
  • Biometric identification and recognition
  • Extraction of biometric features (fingerprint, iris, face, voice, palm, gait)
  • Surveillance systems
  • Robotic vision
  • Assistive healthcare
  • Autonomous and social robots
  • IoT and cyber-physical systems

Prof. Dr. Jose Garcia-Rodriguez
Dr. Alexandra Psarrou
Mr. Marco Leo
Prof. Miguel Angel Cazorla Quevedo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • ambient intelligence
  • assistive computer vision
  • assistive healthcare
  • assistive robotics
  • machine learning
  • deep learning
  • pattern recognition

Published Papers (2 papers)


Research

25 pages, 2672 KiB  
Article
A Vision-Based System for Monitoring Elderly People at Home
by Marco Buzzelli, Alessio Albé and Gianluigi Ciocca
Appl. Sci. 2020, 10(1), 374; https://doi.org/10.3390/app10010374 - 03 Jan 2020
Cited by 36 | Viewed by 7968
Abstract
Assisted living technologies can be of great importance for taking care of elderly people and helping them to live independently. In this work, we propose a monitoring system designed to be as unobtrusive as possible, by exploiting computer vision techniques and visual sensors such as RGB cameras. We perform a thorough analysis of existing video datasets for action recognition, and show that no single dataset can be considered adequate in terms of classes or cardinality. We subsequently curate a taxonomy of human actions, derived from different sources in the literature, and provide the scientific community with considerations about the mutual exclusivity and commonalities of said actions. This leads us to collecting and publishing an aggregated dataset, called ALMOND (Assisted Living MONitoring Dataset), which we use as the training set for a vision-based monitoring approach. We rigorously evaluate our solution in terms of recognition accuracy using different state-of-the-art architectures, eventually reaching 97% on inference of basic poses, 83% on alerting situations, and 71% on daily life actions. We also provide a general methodology to estimate the maximum allowed distance between camera and monitored subject. Finally, we integrate the defined actions and the trained model into a computer-vision-based application, specifically designed for the objective of monitoring elderly people at their homes.
(This article belongs to the Special Issue Home Ambient Intelligent System)

26 pages, 1395 KiB  
Article
Computational Analysis of Deep Visual Data for Quantifying Facial Expression Production
by Marco Leo, Pierluigi Carcagnì, Cosimo Distante, Pier Luigi Mazzeo, Paolo Spagnolo, Annalisa Levante, Serena Petrocchi and Flavia Lecciso
Appl. Sci. 2019, 9(21), 4542; https://doi.org/10.3390/app9214542 - 25 Oct 2019
Cited by 27 | Viewed by 3389
Abstract
The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and provide quick and objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production, and most of the scientific literature aims at the easier task of recognizing whether a facial expression is present or not. Some attempts to face this challenging task exist, but they do not provide a comprehensive study based on the comparison between human and automatic outcomes in quantifying children's ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning. Finally, they generally focus only on a homogeneous (in terms of cognitive capabilities) group of individuals. To fill this gap, in this paper some advanced computer vision and machine learning strategies are integrated into a framework aimed at computationally analyzing how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) with the aim of monitoring facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model the individual ability to produce facial expressions. Gathered computational outcomes have been correlated with the evaluation provided by psychologists, and evidence has been given that shows how the proposed framework could be effectively exploited to deeply analyze the emotional competence of ASD children to produce facial expressions.
(This article belongs to the Special Issue Home Ambient Intelligent System)
