Advances in Internet of Things and Computer Vision

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 November 2024 | Viewed by 5824

Special Issue Editors


Prof. Dr. Qiang Niu
Guest Editor
School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China
Interests: CV; IoT; mine intelligent information processing

Dr. Xu Yang
Guest Editor
School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China
Interests: mine big data; mine intelligent information processing

Special Issue Information

Dear Colleagues,

The combination of the Internet of Things (IoT) and computer vision is an emerging technology known as IoT vision. IoT vision applies computer vision techniques to analyze and process multimedia data, such as images and videos, collected by IoT devices, thereby improving the efficiency and intelligence of IoT systems. Specifically, IoT vision can be applied in the following domains:

  • Detection and recognition: IoT devices collect vast quantities of image and video data, and computer vision can detect and recognize objects, faces, vehicles, and more, enabling intelligent monitoring, security, and other functions.
  • Prediction and optimization: By analyzing and processing multimedia data from IoT devices, a wealth of information and patterns can be extracted, enabling the prediction and optimization of production, logistics, supply-chain, and other processes.
  • Intelligent control: IoT devices can perceive and analyze the environment and equipment status via computer vision, enabling the intelligent control of production processes and energy management.

In short, the combination of the IoT and computer vision enables the perception, analysis, and control of the physical world, enhancing production efficiency, reducing costs, and mitigating risks. It therefore has broad application potential.

The aim of this Special Issue is to compile recent advances and emerging trends in IoT and CV research, with a focus on exploring the intersection between these two fields. The integration of IoT and CV has significant implications for a wide range of applications, including smart cities, autonomous vehicles, healthcare, security, and robotics.

Prof. Dr. Qiang Niu
Dr. Xu Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • IoT
  • CV
  • smart city
  • intelligent

Published Papers (6 papers)

Research


24 pages, 13048 KiB  
Article
Analysis of Urban Residents’ Travelling Characteristics and Hotspots Based on Taxi Trajectory Data
by Jiusheng Du, Chengyang Meng and Xingwang Liu
Appl. Sci. 2024, 14(3), 1279; https://doi.org/10.3390/app14031279 - 03 Feb 2024
Cited by 1 | Viewed by 669
Abstract
This study utilizes taxi trajectory data to uncover urban residents' travel patterns, offering critical insights into the spatial and temporal dynamics of urban mobility. A fusion clustering algorithm is introduced, enhancing the clustering accuracy of trajectory data. This approach integrates the hierarchical density-based spatial clustering of applications with noise (HDBSCAN) algorithm, modified to incorporate time factors, with kernel density analysis. The fusion algorithm demonstrates a higher noise point detection rate (15.85%) compared with the DBSCAN algorithm alone (7.31%), thus significantly reducing noise impact in kernel density analysis. Spatial correlation analysis between hotspot areas and paths uncovers distinct travel behaviors: during morning and afternoon peak hours on weekdays, travel times (19–40 min) exceed those on weekends (16–35 min). Morning peak hours see higher taxi utilization in residential areas and transportation hubs, with schools, commercial areas, and government areas as primary destinations. Conversely, afternoon peaks show a trend from these places towards dining and entertainment zones. In the evening rush, residents enjoy a vibrant nightlife, with numerous pick-up and drop-off locations. A chi-square test on weekday travel data yields a p-value of 0.023, indicating a significant correlation between the distribution of travel hotspots and paths.
(This article belongs to the Special Issue Advances in Internet of Things and Computer Vision)
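
To make the clustering step concrete, below is a minimal sketch, not the authors' implementation, of time-aware density clustering of pick-up points using the open-source hdbscan package, plus the chi-square test mentioned in the abstract; the DataFrame column names ('lon', 'lat', 'hour') and the min_cluster_size and time_weight values are illustrative assumptions.

    # A sketch of time-aware density clustering of taxi pick-up points and the
    # chi-square test of independence between hotspots and paths. Column names,
    # min_cluster_size, and time_weight are illustrative assumptions.
    import numpy as np
    import pandas as pd
    import hdbscan
    from scipy.stats import chi2_contingency

    def cluster_pickups(trips: pd.DataFrame, time_weight: float = 0.1):
        # Stack space and (scaled) time so that temporally distant points are
        # less likely to fall into the same cluster.
        features = np.column_stack([
            trips['lon'].to_numpy(),
            trips['lat'].to_numpy(),
            time_weight * trips['hour'].to_numpy(),
        ])
        clusterer = hdbscan.HDBSCAN(min_cluster_size=30)
        labels = clusterer.fit_predict(features)      # label -1 marks noise points
        noise_rate = float(np.mean(labels == -1))
        return labels, noise_rate

    def hotspot_path_test(contingency: np.ndarray) -> float:
        # Contingency table of trip counts (rows: hotspot areas, columns: paths).
        _, p_value, _, _ = chi2_contingency(contingency)
        return p_value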

19 pages, 11822 KiB  
Article
LR-MPIBS: A LoRa-Based Maritime Position-Indicating Beacon System
by Zhengbao Li, Jianfeng Dai, Yuanxin Luan, Nan Sun and Libin Du
Appl. Sci. 2024, 14(3), 1231; https://doi.org/10.3390/app14031231 - 01 Feb 2024
Viewed by 486
Abstract
Human marine activities are becoming increasingly frequent. The adverse marine environment has led to an increase in man-overboard incidents, resulting in significant losses of life and property. After a drowning accident, accurate location information for the drowning victim can improve the success rate of rescue. In this paper, we present a LoRa-based Maritime Position-Indicating Beacon System (LR-MPIBS). A low-power drowning detection circuit is designed in LR-MPIBS to detect drowning accidents promptly after a person falls into the water. Because the instantaneous high current of the LoRa RF module can lower the supply voltage and cause other modules to work abnormally, a fast current transient response circuit is proposed to solve this problem. LR-MPIBS also includes a power ripple suppression circuit that reduces the measurement errors and operational abnormalities caused by power ripple interference. We explore the impedance matching of LoRa RF circuits through simulation experiments to improve the quality of LoRa communication. A data processing algorithm for personnel drift trajectories is proposed to alleviate the challenges posed by raw positioning data with large deviations and high communication cost. The experimental results show that LR-MPIBS can automatically start and actively alarm within 3 s after a person falls into the water. The positioning cold-start time is less than 50 s, the communication distance exceeds 5 km, and the endurance of LR-MPIBS is 25 h (with a 30 s communication cycle).
(This article belongs to the Special Issue Advances in Internet of Things and Computer Vision)
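
The paper's drift-trajectory processing algorithm is not detailed here, so the following is only an illustrative sketch of one way to clean noisy beacon position fixes, rejecting physically implausible jumps and smoothing the remainder; the function name, the local metric coordinate frame, and the max_speed_mps and window values are assumptions.

    # An illustrative way to clean noisy beacon position fixes: reject fixes that
    # imply a physically implausible drift speed, then smooth what remains.
    import numpy as np

    def filter_drift_track(fixes, max_speed_mps=2.0, window=5):
        """fixes: iterable of (t_seconds, x_m, y_m) in a local metric frame."""
        fixes = np.asarray(fixes, dtype=float)
        kept = [fixes[0]]
        for t, x, y in fixes[1:]:
            t0, x0, y0 = kept[-1]
            dist = np.hypot(x - x0, y - y0)
            if dist <= max_speed_mps * max(t - t0, 1e-6):
                kept.append(np.array([t, x, y]))       # plausible movement, keep
        kept = np.asarray(kept)
        # Moving-average smoothing of the surviving positions.
        w = min(window, len(kept))
        kernel = np.ones(w) / w
        xs = np.convolve(kept[:, 1], kernel, mode='same')
        ys = np.convolve(kept[:, 2], kernel, mode='same')
        return np.column_stack([kept[:, 0], xs, ys])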

13 pages, 3845 KiB  
Article
CTDR-Net: Channel-Time Dense Residual Network for Detecting Crew Overboard Behavior
by Zhengbao Li, Jie Gao, Kai Ma, Zewei Wu and Libin Du
Appl. Sci. 2024, 14(3), 986; https://doi.org/10.3390/app14030986 - 24 Jan 2024
Viewed by 484
Abstract
The efficient detection of crew overboard behavior has become an important element in enhancing the ability to respond to marine disasters. It remains challenging because (1) the lack of effective features makes feature extraction difficult and recognition accuracy low, and (2) insufficient computing power results in the poor real-time performance of existing algorithms. In this paper, we propose a Channel-Time Dense Residual Network (CTDR-Net) for detecting crew overboard behavior, comprising a Dense Residual Network (DR-Net) and a Channel-Time Attention Mechanism (CTAM). The DR-Net extracts features, employing a convolutional splitting method to improve the extraction of sparse features and reduce the number of network parameters. The CTAM enhances the representation of channel feature information, further increasing the accuracy of behavior detection. We use the LeakyReLU activation function to improve the nonlinear modeling ability of the network and enhance its generalization. The experiments show that our method achieves an accuracy of 96.9%, striking a good balance between accuracy and real-time performance.
(This article belongs to the Special Issue Advances in Internet of Things and Computer Vision)
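
As a rough illustration of the building blocks named in the abstract (dense residual convolutions, channel attention, LeakyReLU), here is a minimal PyTorch sketch; the layer sizes, reduction ratio, and module structure are illustrative and do not reproduce the published CTDR-Net.

    # A minimal PyTorch sketch of a dense residual block with channel attention
    # and LeakyReLU activations; sizes are illustrative, not the published network.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.LeakyReLU(0.1),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                          # x: (N, C, H, W)
            w = self.fc(x.mean(dim=(2, 3)))            # squeeze spatial dims
            return x * w[:, :, None, None]             # re-weight channels

    class DenseResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv2 = nn.Conv2d(2 * channels, channels, 3, padding=1)
            self.act = nn.LeakyReLU(0.1)
            self.attn = ChannelAttention(channels)

        def forward(self, x):
            y1 = self.act(self.conv1(x))
            y2 = self.act(self.conv2(torch.cat([x, y1], dim=1)))  # dense connection
            return x + self.attn(y2)                              # residual path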

17 pages, 3641 KiB  
Article
Training-Free Acoustic-Based Hand Gesture Tracking on Smart Speakers
by Xiao Xu, Xuehan Zhang, Zhongxu Bao, Xiaojie Yu, Yuqing Yin, Xu Yang and Qiang Niu
Appl. Sci. 2023, 13(21), 11954; https://doi.org/10.3390/app132111954 - 01 Nov 2023
Viewed by 879
Abstract
Hand gesture recognition is an essential Human–Computer Interaction (HCI) mechanism for users to control smart devices. While traditional device-based methods provide acceptable recognition performance, recent advances in wireless sensing could enable device-free hand gesture recognition. However, two severe limitations, serious environmental interference and costly hardware, hamper wide deployment. This paper proposes TaGesture, a novel system that employs an inaudible acoustic signal to realize device-free and training-free hand gesture recognition with a commercial speaker and microphone array. We address unique technical challenges, such as proposing a novel acoustic hand-tracking-smoothing algorithm with an Interacting Multiple Model (IMM) Kalman Filter to resolve localization angle ambiguity, and designing a classification algorithm that realizes acoustic-based hand gesture recognition without training. Comprehensive experiments are conducted to evaluate TaGesture. Results show that it achieves a total accuracy of 97.5% for acoustic-based hand gesture recognition and supports a sensing range of up to 3 m.
(This article belongs to the Special Issue Advances in Internet of Things and Computer Vision)
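
The paper uses an Interacting Multiple Model (IMM) Kalman filter for hand-tracking smoothing; as a simplified illustration of the smoothing idea only, here is a single-model constant-velocity Kalman filter applied to noisy angle estimates, with dt and the noise parameters q and r chosen arbitrarily.

    # A single-model constant-velocity Kalman filter smoothing noisy
    # angle-of-arrival estimates (the paper uses the richer IMM variant).
    import numpy as np

    def kalman_smooth_angles(angles_deg, dt=0.05, q=1.0, r=4.0):
        F = np.array([[1.0, dt], [0.0, 1.0]])          # state: [angle, angular rate]
        H = np.array([[1.0, 0.0]])                     # only the angle is observed
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        R = np.array([[r]])
        x = np.array([[float(angles_deg[0])], [0.0]])
        P = np.eye(2)
        smoothed = []
        for z in angles_deg:
            x, P = F @ x, F @ P @ F.T + Q              # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
            x = x + K @ (np.array([[z]]) - H @ x)      # update with measurement
            P = (np.eye(2) - K @ H) @ P
            smoothed.append(float(x[0, 0]))
        return smoothed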

20 pages, 6894 KiB  
Article
Gait Recognition Algorithm of Coal Mine Personnel Based on LoRa
by Yuqing Yin, Xuehan Zhang, Rixia Lan, Xiaoyu Sun, Keli Wang and Tianbing Ma
Appl. Sci. 2023, 13(12), 7289; https://doi.org/10.3390/app13127289 - 19 Jun 2023
Cited by 2 | Viewed by 900
Abstract
This study proposes a new approach to gait recognition using LoRa signals, taking into account the challenging conditions found in underground coal mines, such as low illumination, high temperature and humidity, high dust concentrations, and limited space. The aim is to address the limitations of existing gait recognition research, which relies on sensors or other wireless signals that are sensitive to environmental factors, costly to deploy, invasive, and require close sensing distances. The proposed method analyzes the received signal waveform and utilizes the amplitude data for gait recognition. To ensure data reliability, outlier removal and signal smoothing are performed using Hampel and S-G filters, respectively. Additionally, high-frequency noise is eliminated through the application of Butterworth filters. To enhance the discriminative power of gait features, the pre-processed data are reconstructed using an autoencoder, which effectively extracts the underlying gait behavior. The trained autoencoder generates encoder features that serve as the input matrix. The Softmax method is then employed to associate these features with individual identities, enabling LoRa-based single-target gait recognition. Experimental results demonstrate significant performance improvements. In indoor environments, the recognition accuracy for groups of 2 to 8 individuals ranges from 99.7% to 96.6%. Notably, in an underground coal mine where the target is located 20 m away from the transceiver, the recognition accuracy for eight individuals reaches 93.3%.
(This article belongs to the Special Issue Advances in Internet of Things and Computer Vision)
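
The pre-processing chain named in the abstract (Hampel outlier removal, Savitzky-Golay smoothing, Butterworth low-pass filtering) can be sketched with SciPy as follows; the sampling rate, window lengths, filter order, and cut-off frequency are illustrative assumptions, not the paper's settings.

    # Pre-processing a 1-D LoRa amplitude sequence: Hampel outlier removal,
    # Savitzky-Golay smoothing, then a Butterworth low-pass filter.
    import numpy as np
    from scipy.signal import savgol_filter, butter, filtfilt

    def hampel(x, window=11, n_sigmas=3.0):
        x = np.asarray(x, dtype=float).copy()
        k = window // 2
        for i in range(len(x)):
            seg = x[max(i - k, 0):i + k + 1]
            med = np.median(seg)
            mad = 1.4826 * np.median(np.abs(seg - med))   # robust spread estimate
            if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
                x[i] = med                                # replace the outlier
        return x

    def preprocess_amplitude(amp, fs=100.0, cutoff_hz=10.0):
        amp = hampel(amp)                                        # remove outliers
        amp = savgol_filter(amp, window_length=31, polyorder=3)  # smooth
        b, a = butter(4, cutoff_hz / (fs / 2), btype='low')      # design low-pass
        return filtfilt(b, a, amp)                               # zero-phase filtering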

Review


32 pages, 1219 KiB  
Review
Review of EEG-Based Biometrics in 5G-IoT: Current Trends and Future Prospects
by Taha Beyrouthy, Nour Mostafa, Ahmed Roshdy, Abdullah S. Karar and Samer Alkork
Appl. Sci. 2024, 14(2), 534; https://doi.org/10.3390/app14020534 - 08 Jan 2024
Cited by 2 | Viewed by 1417
Abstract
The increasing integration of the Internet of Things (IoT) into daily life has led to significant changes in our social interactions. The advent of innovative IoT solutions, combined with the enhanced capabilities and expanded reach of 5G wireless networks, is altering the way humans interact with machines. Notably, the advancement of edge computing, underpinned by 5G networks within IoT frameworks, has markedly extended human sensory perception and interaction. A key biometric within these IoT applications is electroencephalography (EEG), recognized for its sensitivity, cost-effectiveness, and distinctiveness. Traditionally linked to brain–computer interface (BCI) applications, EEG is now finding applications in a wider array of fields, from neuroscience research to the emerging area of neuromarketing. The primary aim of this article is to offer a comprehensive review of the current challenges and future directions in EEG data acquisition, processing, and classification, with a particular focus on the increasing reliance on data-driven methods in the realm of 5G wireless network-supported EEG-enabled IoT solutions. Additionally, the article presents a case study on EEG-based emotion recognition, exemplifying EEG's role as a biometric tool in the IoT domain, propelled by 5G technology.
(This article belongs to the Special Issue Advances in Internet of Things and Computer Vision)
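
As context for the emotion-recognition case study, here is an illustrative sketch of a common first step in EEG-based recognition pipelines, per-channel band-power features computed from a Welch power spectral density estimate; the band boundaries, sampling rate, and window length are conventional defaults, not values taken from the review.

    # Per-channel EEG band-power features from a Welch power spectral density
    # estimate, a common front end for emotion recognition classifiers.
    import numpy as np
    from scipy.signal import welch

    BANDS = {'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30), 'gamma': (30, 45)}

    def band_power_features(eeg, fs=256):
        """eeg: array of shape (n_channels, n_samples)."""
        freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
        feats = []
        for lo, hi in BANDS.values():
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(np.trapz(psd[:, mask], freqs[mask], axis=-1))  # integrate PSD
        return np.column_stack(feats)      # shape: (n_channels, n_bands)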
