Real-Time Visual Information Processing in Human-Computer Interface

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (30 March 2023) | Viewed by 3544

Special Issue Editors


Guest Editor
Division of Computer Engineering, Dongseo University, Busan 47011, Republic of Korea
Interests: image processing; computer vision; deep learning

Guest Editor
Division of Computer Engineering, Dongseo University, Busan 47011, Republic of Korea
Interests: computer graphics; human-computer interfaces; computer vision

Special Issue Information

Dear Colleagues,

As worldwide interest in the Metaverse has grown, so has interest in the interfaces that will be used within it. A prerequisite for such interfaces is real-time operation, so that users do not perceive system delay. In addition, these interfaces should be efficient and natural enough to resemble, as far as possible, interactions occurring in the real world. With the development of deep learning, computer vision technologies can now accurately recognize human behavior and poses through cameras. This has made it possible to use human behavior as an input for human–computer interfaces (HCIs) with simple devices.

The goal of this Special Issue is to highlight and invite state-of-the-art research papers related to real-time human–computer interfaces. Topics include but are not limited to:

  • Computer graphics techniques for HCI;
  • Artificial intelligence techniques for HCI;
  • Computer vision techniques for HCI;
  • Visualization techniques for HCI:
    - Information visualization;
    - Scientific visualization;
    - Knowledge visualization;

  • Design techniques for HCI;
  • Interactive art with HCI.

Prof. Dr. Sukho Lee
Prof. Dr. Byung Gook Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • real-time system
  • human-computer interaction
  • visual processing
  • computer vision

Published Papers (2 papers)


Research

14 pages, 3740 KiB  
Article
Precise Identification of Food Smells to Enable Human–Computer Interface for Digital Smells
by Yaonian Li, Zhenyi Ye and Qiliang Li
Electronics 2023, 12(2), 418; https://doi.org/10.3390/electronics12020418 - 13 Jan 2023
Cited by 2 | Viewed by 1785
Abstract
Food safety technologies are important in maintaining physical health for everyone. It is important to digitize the scents of foods to enable an effective human–computer interface for smells. In this work, an intelligent gas-sensing system is designed and integrated to capture the smells of food and convert them into digital scents. Fruit samples are used for testing as they release volatile organic components (VOCs) which can be detected by the gas sensors in the system. Decision tree, principal component analysis (PCA), linear discriminant analysis (LDA), and one-dimensional convolutional neural network (1D-CNN) algorithms were adopted and optimized to analyze and precisely classify the sensor responses. Furthermore, the proposed system and data processing algorithms can be used to precisely identify the digital scents and monitor the decomposition dynamics of different foods. Such a promising technology is important for mutual understanding between humans and computers to enable an interface for digital scents, which is very attractive for food identification and safety monitoring.
(This article belongs to the Special Issue Real-Time Visual Information Processing in Human-Computer Interface)

14 pages, 2982 KiB  
Article
Complex Hand Interaction Authoring Tool for User Selective Media
by Bok Deuk Song, HongKyw Choi and Sung-Hoon Kim
Electronics 2022, 11(18), 2854; https://doi.org/10.3390/electronics11182854 - 9 Sep 2022
Cited by 1 | Viewed by 1098
Abstract
Nowadays, with the advancement of the Internet and personal mobile devices, many interactive media are prevailing, where viewers make their own decisions on the story of the media based on their interactions. The interactions that a user can make are usually pre-programmed by a programmer; therefore, they are limited to programmable areas. In comparison, in this paper, we propose an interactive media authoring tool which can compose diverse two-hand interactions from several one-hand interactive components. The aim is to provide content creators with a tool to produce multiple hand motions so that they can design a variety of user interactions to stimulate the interest of content viewers and increase their sense of immersion. Using the proposed system, the content creator can gain greater freedom to create more diverse and complex interactions than programmable ones. The system is composed of a complex motion editor that edits one-hand motions into complex two-hand motions, a touchless sensor that senses the hand motion, and a metadata manager that handles the metadata, which specify the settings for the interactive functions. To our knowledge, the proposed system is the first web-based authoring tool that can author complex two-hand motions from single-hand motions, and which can also control a touchless motion control device.
(This article belongs to the Special Issue Real-Time Visual Information Processing in Human-Computer Interface)
