
3D Human-Computer Interaction Imaging and Sensing

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Sensing and Imaging".

Viewed by 13947

Editors


Dr. Christophoros Nikou
Collection Editor
Department of Computer Science and Engineering, University of Ioannina, 45110 Ioannina, Greece
Interests: image processing and analysis; computer vision; pattern recognition, with emphasis on biomedical applications

Dr. Michalis Vrigkas
Collection Editor
Department of Communication and Digital Media, University of Western Macedonia, 52100 Kastoria, Greece
Interests: image processing; machine learning; computer vision; biomedical imaging; augmented/virtual reality

Topical Collection Information

Dear Colleagues,

In recent years, we have witnessed a rapid (r)evolution in sensors and sensing systems (wearable, wireless, and micro-/nanoscale), computing power (ubiquitous computing), algorithms (real-time modeling, neural computing, and deep learning), and three-dimensional visualization. This trend is driven by an ever-expanding range of applications, such as human–robot interaction, gaming, and sports performance analysis, that current technological advances have made possible.

3D imaging has also been introduced into human–computer interaction systems in recent years to address several challenges, with remarkable results. 3D imaging is thus central to computer vision and is used in a broad range of scientific and consumer domains, including 3D pose estimation, human action recognition, gaming, and video surveillance. The release of 3D vision sensors, along with toolkit extensions that facilitate the integration of full-body control into games and virtual reality applications, is among the most illustrative examples of how 3D human–computer interaction imaging and sensing can support sensor data acquisition for 3D model visualization.

To bring together different facets of natural interfaces and interactions between humans and computers, we invite papers reporting novel imaging methods using sensors or wearable hardware for this Topical Collection. In particular, we invite submissions that research and explore the development of new algorithms for 3D human–computer interaction. Contributions may focus on sensors, wearable hardware, algorithms, or integrated monitoring systems. The application areas include, but are not limited to, the following topics:

  • 3D human–computer interaction;
  • 3D human–robot interaction;
  • 3D pose estimation;
  • Human action recognition;
  • Video surveillance;
  • Scene understanding;
  • Gaming;
  • Sports performance analysis;
  • Proxemic recognition;
  • Estimating the anthropometry of a human from a single image;
  • 3D avatar creation;
  • Understanding the camera wearer’s activity in an egocentric vision scenario;
  • Describing clothes in images;
  • Mixed, augmented, and virtual reality.

Survey papers and reviews are also welcome.

Dr. Christophoros Nikou
Dr. Michalis Vrigkas
Collection Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D human–computer interaction
  • sensors
  • wearable hardware
  • machine learning
  • deep learning
  • 3D imaging
  • mixed, augmented and virtual reality
  • gaming

Published Papers (3 papers)

2023


27 pages, 5091 KiB  
Review
Review of Robot-Assisted HIFU Therapy
by Anthony Gunderman, Rudy Montayre, Ashish Ranjan and Yue Chen
Sensors 2023, 23(7), 3707; https://doi.org/10.3390/s23073707 - 03 Apr 2023
Cited by 2 | Viewed by 2817
Abstract
This paper provides an overview of current robot-assisted high-intensity focused ultrasound (HIFU) systems for image-guided therapies. HIFU is a minimally invasive technique that relies on the thermo-mechanical effects of focused ultrasound waves to perform clinical treatments, such as tumor ablation, mild hyperthermia adjuvant to radiation or chemotherapy, vein occlusion, and many others. HIFU is typically performed under ultrasound (USgHIFU) or magnetic resonance imaging guidance (MRgHIFU), which provide intra-operative monitoring of treatment outcomes. Robot-assisted HIFU probe manipulation provides precise HIFU focal control to avoid damage to surrounding sensitive anatomy, such as blood vessels, nerve bundles, or adjacent organs. These clinical and technical benefits have promoted the rapid adoption of robot-assisted HIFU in the past several decades. This paper aims to present the recent developments of robot-assisted HIFU by summarizing the key features and clinical applications of each system. The paper concludes with a comparison and discussion of future perspectives on robot-assisted HIFU.
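The image-guided focal control described above ultimately reduces to commanding the robot so that the acoustic focus lands on a target identified in the guidance images. The following is a minimal sketch of that one step, assuming a pre-computed image-to-robot calibration; the transform values, tolerances, and function names are illustrative and are not taken from any system surveyed in the review.

```python
import numpy as np

def make_rigid_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def image_to_robot(p_img_mm: np.ndarray, T_robot_from_image: np.ndarray) -> np.ndarray:
    """Map a target point from image coordinates (mm) into the robot base frame."""
    p_h = np.append(p_img_mm, 1.0)            # homogeneous coordinates
    return (T_robot_from_image @ p_h)[:3]

# Hypothetical calibration: robot frame rotated 90 deg about z, offset by (120, 0, 35) mm.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = make_rigid_transform(R, np.array([120.0, 0.0, 35.0]))

target_in_image = np.array([10.0, 25.0, 60.0])  # lesion centroid from MR/US guidance, mm
focus_in_robot = image_to_robot(target_in_image, T)
print(f"Command HIFU focus to (mm): {focus_in_robot}")
```

In a real system this transform would come from hand–eye calibration, and the commanded focus would additionally be checked against no-go regions around sensitive anatomy before motion is executed.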

2022


20 pages, 2288 KiB  
Article
A 3D Hand Attitude Estimation Method for Fixed Hand Posture Based on Dual-View RGB Images
by Peng Ji, Xianjian Wang, Fengying Ma, Jinxiang Feng and Chenglong Li
Sensors 2022, 22(21), 8410; https://doi.org/10.3390/s22218410 - 01 Nov 2022
Cited by 1 | Viewed by 1287
Abstract
This work provides a 3D hand attitude estimation approach for fixed hand postures, based on a CNN and LightGBM applied to dual-view RGB images, to facilitate hand posture teleoperation. First, using dual-view cameras and an IMU sensor, we provide a simple method for building 3D hand posture datasets. This method can quickly acquire dual-view 2D hand image sets and automatically append the appropriate three-axis attitude angle labels. Then, combining ensemble learning, which has strong regression fitting capabilities, with deep learning, which has excellent automatic feature extraction capabilities, we present an integrated hand attitude CNN regression model. This model uses a Bayesian-optimization-based LightGBM in the ensemble learning algorithm to produce 3D hand attitude regression and two CNNs to extract dual-view hand image features. Finally, a mapping from dual-view 2D images to 3D hand attitude angles is established using a training approach for feature integration, and a comparative experiment is run on the test set. The experimental results demonstrate that the suggested method successfully addresses the hand self-occlusion issue and accomplishes 3D hand attitude estimation using only two ordinary RGB cameras.
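A minimal sketch of the CNN-plus-LightGBM pipeline described in this abstract is shown below: two CNN backbones extract features from the two camera views, the features are concatenated, and one LightGBM regressor per attitude angle performs the regression. The ResNet-18 backbones, random stand-in data, and default LightGBM hyperparameters are assumptions for illustration; the paper's actual architectures and Bayesian-optimized hyperparameters are not reproduced here.

```python
import numpy as np
import torch
import torchvision.models as models
from lightgbm import LGBMRegressor

def make_backbone():
    """A CNN used purely as a feature extractor (classifier head removed)."""
    net = models.resnet18(weights=None)
    net.fc = torch.nn.Identity()            # keep the 512-d pooled features
    return net.eval()

cnn_view1, cnn_view2 = make_backbone(), make_backbone()

@torch.no_grad()
def extract_features(imgs1, imgs2):
    """imgs*: (N, 3, 224, 224) tensors, one batch per camera view."""
    f1 = cnn_view1(imgs1)                   # (N, 512) view-1 features
    f2 = cnn_view2(imgs2)                   # (N, 512) view-2 features
    return torch.cat([f1, f2], dim=1).numpy()  # fused dual-view features

# Toy stand-in data: N dual-view image pairs with three-axis attitude-angle
# labels (roll, pitch, yaw), as an IMU would provide during dataset creation.
N = 32
imgs1, imgs2 = torch.randn(N, 3, 224, 224), torch.randn(N, 3, 224, 224)
angles = np.random.uniform(-90.0, 90.0, size=(N, 3))

X = extract_features(imgs1, imgs2)

# One LightGBM regressor per attitude angle; the paper tunes these with
# Bayesian optimization, which is omitted here for brevity.
regressors = [LGBMRegressor(n_estimators=200).fit(X, angles[:, k]) for k in range(3)]

pred = np.stack([r.predict(X[:1]) for r in regressors], axis=1)
print("Predicted (roll, pitch, yaw):", pred[0])
```

Training the CNNs and the LightGBM stage jointly (the paper's feature-integration training) is more involved; the sketch only shows the inference-time data flow from dual-view images to three attitude angles.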

21 pages, 11745 KiB  
Article
In-Cabin Monitoring System for Autonomous Vehicles
by Ashutosh Mishra, Sangho Lee, Dohyun Kim and Shiho Kim
Sensors 2022, 22(12), 4360; https://doi.org/10.3390/s22124360 - 08 Jun 2022
Cited by 14 | Viewed by 8709
Abstract
In this paper, we demonstrate a robust in-cabin monitoring system (IMS) for the safety, security, surveillance, and monitoring of personal and shared autonomous vehicles (AVs), with attention to privacy concerns. It consists of a set of monitoring cameras and an onboard device (OBD) equipped with artificial intelligence (AI); hereafter, this combination of a camera and an OBD is referred to as the AI camera. We investigate the monitoring issues raised by mobility services at higher levels of autonomous driving: what needs to be monitored, how to monitor it, and so on. The proposed IMS is an on-device AI system that inherently improves user privacy. Furthermore, we enumerate the essential actions to be considered in an IMS and develop an appropriate database (DB). Our DB consists of multifaceted scenarios that are important for monitoring the cabins of higher-level AVs. Moreover, we compare popular AI models applied to object and occupant recognition. In addition, our DB is available on request to support research on the seamless in-cabin monitoring of higher-level autonomous driving for the assurance of safety and security.
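To make the on-device AI camera idea concrete, here is a minimal sketch of an IMS-style inference loop, assuming an off-the-shelf torchvision person detector in place of the paper's occupant-recognition models. Frames are processed locally on the OBD, and only derived events (here, an occupant count) would ever leave the device; the camera index, score threshold, and frame budget are illustrative assumptions.

```python
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Generic COCO-pretrained detector standing in for the paper's models;
# running it on the onboard device keeps raw frames local (the privacy point).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
PERSON = 1  # COCO label id for 'person'

@torch.no_grad()
def count_occupants(frame_bgr, score_thresh=0.7):
    """Return the number of detected persons in one in-cabin frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    det = model([x])[0]
    keep = (det["labels"] == PERSON) & (det["scores"] > score_thresh)
    return int(keep.sum())

cap = cv2.VideoCapture(0)      # in-cabin camera index is an assumption
for _ in range(100):           # bounded demo loop
    ok, frame = cap.read()
    if not ok:
        break
    # Only the derived event, never the raw frame, would be reported
    # off-device in a privacy-preserving IMS.
    print(f"occupants: {count_occupants(frame)}")
cap.release()
```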
