Image Processing, Computing, and Learning for Immersive User Interface

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 August 2023) | Viewed by 6743

Special Issue Editors


Guest Editor
Computer Science, Kent State University, Kent, OH 44202, USA
Interests: immersive user interface; haptic feedback; virtual reality

Guest Editor
Department of Embedded Systems Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea
Interests: remote sensing; deep learning; artificial intelligence; image processing; signal processing

Special Issue Information

Dear Colleagues,

Driven by advances in imaging and sensing technologies, imaging has been leading immersive user interface research as a new form of human-computer interface. For instance, virtual reality or touch-feedback-based user interfaces often require large image data as well as image computing and processing algorithms to achieve natural interaction and realism. This trend can be found in numerous applications and fields, such as medical and healthcare systems, sport training assistance, service and social robots, STEM education, driver assistance, sensory-motor rehabilitation, and entertainment. Hence, image processing, computing, and learning technologies can be applied to create new forms of immersive user interface or to advance the state of the art.

This special issue therefore aims to provide a platform for scientists and researchers worldwide to promote, share, and discuss new issues and developments in the area of image processing and immersive user interfaces, from fundamentals to applications.

In this special issue, potential topics include, but are not limited to:

  • Fundamental image processing and algorithms
  • 3D image processing
  • Image processing with deep learning or machine learning
  • Deep learning for improving user interfaces
  • Immersive user interface using virtual reality or haptic feedback
  • Image-based user interface or usability improvement
  • Multimodal user interface using image data
  • Multimodal data fusion and computing
  • Medical education or training systems

Dr. Kwangtaek Kim
Dr. Gwanggil Jeon
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image processing
  • image computing
  • deep learning
  • immersive user interface

Published Papers (2 papers)


Research

19 pages, 5262 KiB  
Article
Bimanual Intravenous Needle Insertion Simulation Using Nonhomogeneous Haptic Device Integrated into Mixed Reality
by Jin Woo Kim, Jeremy Jarzembak and Kwangtaek Kim
Sensors 2023, 23(15), 6697; https://doi.org/10.3390/s23156697 - 26 Jul 2023
Cited by 1 | Viewed by 1257
Abstract
In this study, we developed a new haptic–mixed reality intravenous (HMR-IV) needle insertion simulation system, providing a bimanual haptic interface integrated into a mixed reality system with programmable variabilities considering real clinical environments. The system was designed for nursing students or healthcare professionals to practice IV needle insertion into a virtual arm with unlimited attempts under various changing insertion conditions (e.g., skin: color, texture, stiffness, friction; vein: size, shape, location depth, stiffness, friction). To achieve accurate hand–eye coordination under dynamic mixed reality scenarios, two different haptic devices (Dexmo and Geomagic Touch) and a standalone mixed reality system (HoloLens 2) were integrated and synchronized through multistep calibration for different coordinate systems (real world, virtual world, mixed reality world, haptic interface world, HoloLens camera). In addition, force-profile-based haptic rendering proposed in this study was able to successfully mimic the real tactile feeling of IV needle insertion. Further, a global hand-tracking method, combining two depth sensors (HoloLens and Leap Motion), was developed to accurately track a haptic glove and simulate grasping a virtual hand with force feedback. We conducted an evaluation study with 20 participants (9 experts and 11 novices) to measure the usability of the HMR-IV simulation system with user performance under various insertion conditions. The quantitative results from our own metric and qualitative results from the NASA Task Load Index demonstrate the usability of our system.
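The multistep calibration the abstract mentions amounts to expressing points consistently across several coordinate frames. A common way to do this is to compose homogeneous 4x4 transforms between frames; the sketch below illustrates the idea only. The frame names and matrices are placeholder assumptions, not the authors' actual calibration pipeline, in which each transform would be estimated from measurements.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative calibration chain: haptic device frame -> virtual world -> camera.
# Identity rotations and made-up translations stand in for calibrated values.
T_haptic_to_world = make_transform(np.eye(3), np.array([0.1, 0.0, 0.0]))
T_world_to_camera = make_transform(np.eye(3), np.array([0.0, -0.2, 0.5]))

def to_camera(p_haptic):
    """Map a 3D point from the haptic device frame into the camera frame."""
    p = np.append(p_haptic, 1.0)  # homogeneous coordinates
    return (T_world_to_camera @ T_haptic_to_world @ p)[:3]

print(to_camera(np.array([0.0, 0.0, 0.0])))  # -> [0.1, -0.2, 0.5]
```

Chaining transforms this way keeps every sensor and display consistent: adding a frame (e.g., a Leap Motion frame) only requires estimating one more transform in the chain.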

24 pages, 4258 KiB  
Article
Application of Variational AutoEncoder (VAE) Model and Image Processing Approaches in Game Design
by Hugo Wai Leung Mak, Runze Han and Hoover H. F. Yin
Sensors 2023, 23(7), 3457; https://doi.org/10.3390/s23073457 - 25 Mar 2023
Cited by 11 | Viewed by 4876
Abstract
In recent decades, the Variational AutoEncoder (VAE) model has shown good potential and capability in image generation and dimensionality reduction. The combination of VAE and various machine learning frameworks has also worked effectively in different daily life applications; however, its possible use and effectiveness in modern game design have seldom been explored or assessed. The use of its feature extractor for data clustering has also received little discussion in the literature. This study first explores different mathematical properties of the VAE model, in particular, the theoretical framework of the encoding and decoding processes, the achievable lower bound, and the loss functions of different applications; then applies the established VAE model to generate new game levels based on two well-known game settings; and finally validates the effectiveness of its data clustering mechanism with the aid of the Modified National Institute of Standards and Technology (MNIST) database. Statistical metrics and assessments are also utilized to evaluate the performance of the proposed VAE model in the aforementioned case studies. Based on the statistical and graphical results, several potential deficiencies, for example, difficulties in handling high-dimensional and vast datasets, as well as insufficient clarity of outputs, are discussed; measures for future enhancement, such as tokenization and the combination of VAE and GAN models, are also outlined. Hopefully, this can ultimately maximize the strengths and advantages of VAE for future game design tasks and relevant industrial missions.
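The lower bound and loss functions the abstract refers to are, in the standard VAE formulation, the negative evidence lower bound (ELBO): a reconstruction term plus a KL-divergence regularizer pulling the Gaussian posterior toward the standard-normal prior. The sketch below is the generic textbook objective, not the authors' exact loss.

```python
import numpy as np

def kl_divergence(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    return -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))

def bce(x, x_hat, eps=1e-7):
    """Bernoulli reconstruction loss (binary cross-entropy), summed over pixels."""
    x_hat = np.clip(x_hat, eps, 1 - eps)
    return -np.sum(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat))

def vae_loss(x, x_hat, mu, logvar):
    """Negative ELBO: reconstruction term plus KL regularizer."""
    return bce(x, x_hat) + kl_divergence(mu, logvar)

# A posterior that already matches the prior contributes zero KL.
print(kl_divergence(np.zeros(2), np.zeros(2)))  # -> 0.0
```

Minimizing this loss trades reconstruction fidelity against a smooth, well-structured latent space, which is what makes the latent codes usable for downstream tasks such as level generation and clustering.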
