Data Analysis and Artificial Intelligence for IoT

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 February 2023) | Viewed by 29778

Special Issue Editors


Prof. Dr. Bhanu Shrestha
Guest Editor
Department of Electronic Engineering, Kwangwoon University, Bima Build. #525, 20 Kwangwoon-ro, Nowon-gu, Seoul 01897, Republic of Korea
Interests: RFIC/MMIC/IPD device and system design; wireless communication; design and fabrication of devices and systems; RF biosensors; ICT convergence

Prof. Dr. Seongsoo Cho
Guest Editor
School of Software, Soongsil University, Seoul 06978, Republic of Korea
Interests: machine learning; image processing; sensor networks; IoT

Prof. Dr. Changho Seo
Guest Editor
Department of Convergence Science, Kongju National University, Gongju 32588, Republic of Korea
Interests: cryptology; applied algebra; system security; network security

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is one of the most exciting fields of computing today. Over the past few decades, it has become an integral part of our daily lives and has been used successfully to solve real-world problems.

Internet of Things (IoT) hardware advances day by day, allowing ever more devices to be connected, while reductions in size and cost make it practical to embed AI in these devices. AI-based IoT systems collect huge amounts of data from diverse sources, and AI-driven big data analytics are performed on these data to discover patterns. Globally adopted, AI-based IoT is the driving force behind many innovative applications, and the number of IoT devices is expected to grow to around 200 billion by the end of 2023.

This Special Issue will focus on the novel contributions that integrate IoT with ML/DL techniques and provide solutions to the big data paradigm.

Potential topics include but are not limited to the following:

  • ML/DL techniques for managing big data in IoT applications;
  • Hybrid and IoT-based networks;
  • Data analysis and deep learning;
  • Software engineering for IoT systems;
  • Artificial intelligence systems in robotics and mechatronics;
  • Innovative communications technologies and protocols for IoT applications;
  • Healthcare data analytics using IoT and ML techniques;
  • Data analysis applications in communications.

Prof. Dr. Bhanu Shrestha
Prof. Dr. Seongsoo Cho
Prof. Dr. Changho Seo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (10 papers)


Research

16 pages, 3810 KiB  
Article
Large-Scale Road Network Traffic Congestion Prediction Based on Recurrent High-Resolution Network
by Sachin Ranjan, Yeong-Chan Kim, Navin Ranjan, Sovit Bhandari and Hoon Kim
Appl. Sci. 2023, 13(9), 5512; https://doi.org/10.3390/app13095512 - 28 Apr 2023
Cited by 1 | Viewed by 1266
Abstract
Traffic congestion is a significant problem that adversely affects the economy, environment, and public health in urban areas worldwide. One promising solution is to forecast road-level congestion in the short and long term, enabling commuters to avoid congested areas and allowing traffic agencies to take appropriate action. In this study, we propose a hybrid deep neural network based on a High-Resolution Network (HRNet) and a ConvLSTM decoder for 10-, 30-, and 60-min traffic congestion prediction. Our model utilizes HRNet’s multi-scale feature extraction capability to capture rich spatial features from a sequence of past traffic input images. The ConvLSTM module learns temporal information from each HRNet multi-scale output and aggregates all feature maps to generate accurate traffic forecasts. Our experiments demonstrate that the proposed model efficiently and effectively learns both spatial and temporal relationships for traffic congestion and outperforms four other state-of-the-art architectures (PredNet, UNet, ConvLSTM, and Autoencoder) in terms of accuracy, precision, and recall. A case study was conducted on a dataset from Seoul, South Korea.
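To make the encoder-plus-ConvLSTM pattern concrete, here is a minimal editorial sketch (not the authors' code): a small convolutional encoder stands in for the HRNet backbone, and all names and sizes are illustrative assumptions.

```python
# Minimal sketch: per-frame CNN features -> ConvLSTM over time -> congestion map.
# The two-layer encoder below is a stand-in for the paper's HRNet backbone.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """One ConvLSTM cell: the LSTM gates are convolutions, so the hidden
    state h and cell state c remain spatial feature maps."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = f.sigmoid() * c + i.sigmoid() * g.tanh()
        h = o.sigmoid() * c.tanh()
        return h, c

class CongestionForecaster(nn.Module):
    """Encode each frame, roll a ConvLSTM over the sequence, then emit a
    per-pixel congestion map from the final hidden state."""
    def __init__(self, hid_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, hid_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hid_ch, hid_ch, 3, padding=1), nn.ReLU())
        self.cell = ConvLSTMCell(hid_ch, hid_ch)
        self.head = nn.Conv2d(hid_ch, 1, 1)

    def forward(self, frames):  # frames: (batch, time, 3, H, W)
        b, t, _, hgt, wdt = frames.shape
        h = frames.new_zeros(b, self.cell.hid_ch, hgt, wdt)
        c = torch.zeros_like(h)
        for step in range(t):
            h, c = self.cell(self.encoder(frames[:, step]), (h, c))
        return self.head(h)

pred = CongestionForecaster()(torch.randn(2, 8, 3, 64, 64))  # -> (2, 1, 64, 64)
```

The key design point is that convolutional gates keep the hidden state spatial, so the temporal model can output a forecast for every location in the road-network image at once.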

25 pages, 524 KiB  
Article
Application of Machine Learning Algorithms for the Validation of a New CoAP-IoT Anomaly Detection Dataset
by Laura Vigoya, Alberto Pardal, Diego Fernandez and Victor Carneiro
Appl. Sci. 2023, 13(7), 4482; https://doi.org/10.3390/app13074482 - 01 Apr 2023
Cited by 5 | Viewed by 1727
Abstract
With the rise in smart devices, the Internet of Things (IoT) has been established as one of the preferred emerging platforms to fulfil their need for simple interconnections. The use of specific protocols such as the constrained application protocol (CoAP) has demonstrated improvements in network performance. However, power-, bandwidth-, and memory-constrained sensing devices constitute a weakness in the security of the system. One way to mitigate these security problems is through anomaly-based intrusion detection systems, which aim to estimate the behaviour of the systems based on their “normal” nature. Thus, to develop anomaly-based intrusion detection systems, it is necessary to have a suitable dataset that allows for their analysis. Due to the lack of a public dataset in the CoAP-IoT environment, this work presents a complete and labelled CoAP-IoT anomaly detection dataset (CIDAD) based on real-world traffic, with a sufficient trace size and diverse anomalous scenarios. The modelled data were implemented in a virtual sensor environment, including three types of anomalies in the CoAP data. The dataset was validated using five shallow machine learning techniques: logistic regression, naive Bayes, random forest, AdaBoost, and support vector machine. Detailed analyses of the dataset, data conditioning, feature engineering, and hyperparameter tuning are presented. The evaluation metrics used in the performance comparison are accuracy, precision, recall, F1 score, and kappa score. The system achieved 99.9% accuracy for the decision-tree-based models. Random forest established itself as the best model, obtaining 99.9% precision and F1 score, 100% recall, and a Cohen’s kappa statistic of 0.99.
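As a pointer to how such a validation is typically run, here is a brief editorial sketch (not the authors' pipeline) using scikit-learn; the synthetic arrays stand in for CIDAD features and labels, which are the paper's actual contribution.

```python
# Sketch: train one of the shallow models on labelled traffic features and
# report the same five metrics the paper uses. Data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score)
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 12)               # stand-in CoAP flow features
y = np.random.randint(0, 2, 1000)          # 1 = anomalous traffic
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
for name, fn in [('accuracy', accuracy_score), ('precision', precision_score),
                 ('recall', recall_score), ('f1', f1_score),
                 ('kappa', cohen_kappa_score)]:
    print(f'{name}: {fn(y_te, pred):.3f}')
```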

14 pages, 11857 KiB  
Article
An Effective Motion-Tracking Scheme for Machine-Learning Applications in Noisy Videos
by HaeHwan Kim, Ho-Woong Lee, JinSung Lee, Okhwan Bae and Chung-Pyo Hong
Appl. Sci. 2023, 13(5), 3338; https://doi.org/10.3390/app13053338 - 06 Mar 2023
Cited by 2 | Viewed by 1548
Abstract
Detecting and tracking objects of interest in videos is a technology that can be used in various applications. For example, identifying cell movements or mutations through videos obtained in real time can provide useful information for decision making in the medical field. However, depending on the situation, the quality of the video may be below the expected level, making it difficult to extract the necessary information. To overcome this problem, we propose a technique to effectively track objects by modifying the simplest color balance (SCB) technique. An optimal object detection method was devised by combining the modified SCB algorithm with a binarization technique. We present a method of displaying object labels on a per-frame basis to track object movements in a video. Detecting objects and tagging labels through this method can be used to generate object motion-based training data for machine learning. That is, based on the generated training data, it is possible to implement an artificial intelligence model for an expert system based on various object motion measurements. As a result, the main object detection accuracy in noisy videos was more than 95%, and the tracking loss rate was reduced to less than 10%.
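The simplest color balance step is compact enough to sketch. The following editorial illustration shows the standard (unmodified) SCB followed by a fixed-threshold binarization; the percentile and threshold values are assumptions, not the paper's tuned variant.

```python
# Sketch: simplest color balance (clip extremes per channel, stretch the
# rest to [0, 255]), then a global-threshold binarization for detection.
import numpy as np

def simplest_color_balance(img, sat=1.0):
    """Clip the darkest/brightest `sat` percent of each channel, then
    stretch the remaining range linearly to [0, 255]."""
    out = np.empty(img.shape, dtype=np.float64)
    for ch in range(img.shape[2]):
        lo, hi = np.percentile(img[..., ch], (sat, 100.0 - sat))
        out[..., ch] = np.clip((img[..., ch] - lo) / max(hi - lo, 1e-6), 0, 1) * 255
    return out.astype(np.uint8)

def binarize(gray, thresh=128):
    """Simple global threshold; returns a 0/255 mask."""
    return np.where(gray >= thresh, 255, 0).astype(np.uint8)

frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)  # stand-in frame
mask = binarize(simplest_color_balance(frame).mean(axis=2))
```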

12 pages, 3379 KiB  
Article
A Study on Pine Larva Detection System Using Swin Transformer and Cascade R-CNN Hybrid Model
by Sang-Hyun Lee and Gao Gao
Appl. Sci. 2023, 13(3), 1330; https://doi.org/10.3390/app13031330 - 19 Jan 2023
Cited by 2 | Viewed by 1281
Abstract
Pine trees are more vulnerable to diseases and pests than other trees, so prevention and management are necessary in advance. In this paper, two deep learning models were combined to rapidly detect pine pests, and the result was compared against other models. To select the best-performing model, recall and average precision (AP at Intersection over Union (IoU) = 0.5) were compared across four models: You Only Look Once (YOLOv5s)_Focus+C3, Cascade Region-Based Convolutional Neural Network (Cascade R-CNN)_ResNet50, Faster Region-Based Convolutional Neural Network (Faster R-CNN)_ResNet50, and the Swin Transformer_Cascade R-CNN hybrid proposed in this paper. The recall of the YOLOv5s_Focus+C3 model was 66.8%, that of the Faster R-CNN_ResNet50 model was 91.1%, and that of the Cascade R-CNN_ResNet50 model was 92.9%. The proposed Cascade R-CNN_Swin Transformer hybrid achieved a recall of 93.5%. Therefore, of the four models compared for detecting pine pests, the Cascade R-CNN_Swin Transformer hybrid proposed in this paper showed the highest performance.
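For readers who want to reproduce a similar hybrid, detection frameworks such as MMDetection let a Cascade R-CNN head be paired with a Swin Transformer backbone through a config override. The snippet below is an editorial illustration with assumed base-config paths and Swin-T hyperparameters, not the authors' configuration.

```python
# Illustrative MMDetection-style config: swap the Cascade R-CNN baseline's
# ResNet-50 backbone for a Swin Transformer. Paths/values are assumptions.
_base_ = [
    'configs/_base_/models/cascade_rcnn_r50_fpn.py',
    'configs/_base_/datasets/coco_detection.py',
    'configs/_base_/schedules/schedule_1x.py',
    'configs/_base_/default_runtime.py',
]

model = dict(
    backbone=dict(
        _delete_=True,                  # drop the inherited ResNet-50 backbone
        type='SwinTransformer',         # Swin-T-sized settings below
        embed_dims=96,
        depths=[2, 2, 6, 2],
        num_heads=[3, 6, 12, 24],
        window_size=7,
        out_indices=(0, 1, 2, 3)),
    neck=dict(in_channels=[96, 192, 384, 768]))  # Swin stage widths into the FPN
```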

12 pages, 3313 KiB  
Article
Parcel Classification and Positioning of Intelligent Parcel Storage System Based on YOLOv5
by Mirye Kim and Youngmin Kim
Appl. Sci. 2023, 13(1), 437; https://doi.org/10.3390/app13010437 - 29 Dec 2022
Cited by 7 | Viewed by 2210
Abstract
Parcel storage provides last-mile delivery services as part of the logistics process. In order to build an intelligent system for parcel storage, we conducted a study on parcel box recognition using deep learning. Box detection and location estimation were carried out using the YOLOv5 model, which is currently applied in many studies because of its excellent object recognition and its speed relative to previous models. YOLOv5 comes in small, medium, large, and xlarge variants according to model size and performance. In this study, these four variants were compared and analyzed to determine which offers the best parcel box recognition performance. The experiments showed that the precision, recall, and F1 of the YOLOv5large model were 0.966, 0.899, and 0.932, respectively, outperforming the other variants. Additionally, YOLOv5large is half the size of YOLOv5xlarge while showing optimal performance in recognizing parcel boxes. The parcel object recognition experiments conducted in this study therefore lay the groundwork for an intelligent parcel storage system that runs efficiently in real time using the YOLOv5large model.
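Such a variant comparison can be scripted directly against the published Ultralytics checkpoints. A hedged editorial sketch (the test image path is illustrative) is:

```python
# Sketch: load each YOLOv5 variant via torch.hub and compare parameter
# count and per-image inference time on the same input.
import time
import torch

for variant in ('yolov5s', 'yolov5m', 'yolov5l', 'yolov5x'):
    model = torch.hub.load('ultralytics/yolov5', variant, pretrained=True)
    start = time.perf_counter()
    model('parcel_box.jpg')                       # illustrative test image
    elapsed = (time.perf_counter() - start) * 1000
    n_params = sum(p.numel() for p in model.parameters())
    print(f'{variant}: {n_params / 1e6:.1f}M params, {elapsed:.0f} ms/image')
```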

20 pages, 3110 KiB  
Article
Design and Analysis of Dual-Band High-Gain THz Antenna Array for THz Space Applications
by Waleed Shihzad, Sadiq Ullah, Ashfaq Ahmad, Nisar Ahmad Abbasi and Dong-you Choi
Appl. Sci. 2022, 12(18), 9231; https://doi.org/10.3390/app12189231 - 14 Sep 2022
Cited by 9 | Viewed by 2358
Abstract
In this paper, a high-gain THz antenna array is presented. The array uses a polyimide substrate with a thickness of 10 μm, a relative permittivity of 3.5, and an overall volume of 2920 μm × 1055 μm × 10 μm, and can be employed for THz-band space communication and other applications. The dual-band single-element antenna is designed in four steps, operating at 0.714 and 0.7412 THz with −10 dB bandwidths of 4.71 and 3.13 GHz and gains of 5.14 and 5 dB, respectively. To achieve higher gain, larger antenna arrays are designed, namely a 2 × 1 array and a 4 × 1 array, named type B and type C, respectively. The gain and directivity of the proposed type C THz antenna array are 12.5 and 11.23 dB, and 12.532 and 11.625 dBi, at 0.714 and 0.7412 THz, with 99.76% and 96.6% radiation efficiency, respectively. For validation, the type B antenna is simulated in two tools, CST Microwave Studio (CST MWS) and the Advanced Design System (ADS), and its performance is compared with an equivalent circuit model on the basis of return loss, showing strong agreement. Furthermore, a parametric analysis of the type C antenna is performed by varying the separation between the radiating elements in the range of 513 to 553 μm. A 64 × 1 antenna array achieves gains of 23.8 and 24.1 dB and directivities of 24.2 and 24.5 dBi with efficiencies of about 91.66% and 90.35% at 0.7085 and 0.75225 THz, respectively, while a 128 × 1 antenna array provides gains of 26.8 and 27.2 dB and directivities of 27.2 and 27.7 dBi with efficiencies of 91.66% and 90.35% at the same frequencies. These results indicate that the proposed design is a feasible candidate for high-speed, free-space wireless communication systems.
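A quick editorial sanity check on these figures, assuming ideal lossless array-factor scaling (an approximation, not a result from the paper), uses:

```latex
% Ideal gain scaling for an N-element array (dB)
G_{\text{array}} \approx G_{\text{element}} + 10\log_{10} N
```

With the single-element gain of 5.14 dB, N = 64 gives roughly 5.14 + 18.1 ≈ 23.2 dB, consistent with the reported 23.8 dB, and doubling to N = 128 should add about 3 dB more, matching the reported 26.8 dB.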

20 pages, 14993 KiB  
Article
Design and SAR Analysis of a Dual Band Wearable Antenna for WLAN Applications
by Ashfaq Ahmad, Farooq Faisal, Sadiq Ullah and Dong-You Choi
Appl. Sci. 2022, 12(18), 9218; https://doi.org/10.3390/app12189218 - 14 Sep 2022
Cited by 20 | Viewed by 2485
Abstract
This paper presents the design of three types of dual-band (2.5 and 5.2 GHz) wearable microstrip patch antennas. The first is based on a conventional ground plane, whereas the other two are based on two different types of two-dimensional electromagnetic band gap (EBG) structures. The design of these two dual-band EBG structures on wearable substrates incorporates several factors that improve the performance of the proposed conventional-ground-plane (dual-band) wearable antenna. The second EBG, with plus-shaped slots, is about 22.7% more compact than the designed mushroom-like EBG. We demonstrate that the mushroom-like EBG and the EBG with plus-shaped slots improve the bandwidth by 5.2 MHz and 7.9 MHz at the lower resonance frequency and by 33.6 MHz and 16.7 MHz at the higher resonance frequency, respectively. Furthermore, gain improvements of 4.33% and 16.5% at 2.5 GHz and of 30.43% and 4.57% at 5.2 GHz are achieved using the mushroom-like EBG and the EBG with plus-shaped slots, respectively. The operation of the conventional-ground-plane antenna is investigated under different bending conditions, such as when wrapped around rounded body parts. The proposed antenna is placed over three-layered (flat body phantom, chest) and four-layered (rounded body parts) tissue models, and a thorough SAR analysis is performed. It is concluded that the proposed antenna keeps SAR effects below 2 W/kg on the human body, thereby making it useful for numerous critical wearable applications.
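For context (an editorial note, not restated from the paper): the quantity compared against the 2 W/kg limit is the specific absorption rate, defined pointwise from the local field and tissue properties as

```latex
% sigma: tissue conductivity (S/m); E: RMS electric field in tissue (V/m);
% rho: tissue mass density (kg/m^3)
\mathrm{SAR} = \frac{\sigma\,\lvert E \rvert^{2}}{\rho} \quad [\mathrm{W/kg}]
```

Regulatory assessments average this quantity over 1 g or 10 g of tissue; 2 W/kg corresponds to the ICNIRP localized limit for the general public.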

30 pages, 5457 KiB  
Article
IoT Intrusion Detection Using Machine Learning with a Novel High Performing Feature Selection Method
by Khalid Albulayhi, Qasem Abu Al-Haija, Suliman A. Alsuhibany, Ananth A. Jillepalli, Mohammad Ashrafuzzaman and Frederick T. Sheldon
Appl. Sci. 2022, 12(10), 5015; https://doi.org/10.3390/app12105015 - 16 May 2022
Cited by 76 | Viewed by 6608
Abstract
The Internet of Things (IoT) ecosystem has experienced significant growth in data traffic and, consequently, high dimensionality. Intrusion Detection Systems (IDSs) are essential self-protective tools against various cyber-attacks. However, IoT IDSs face significant challenges due to functional and physical diversity. These IoT characteristics make exploiting all features and attributes for IDS self-protection difficult and unrealistic. This paper proposes and implements a novel feature selection and extraction approach for anomaly-based IDS. The approach begins with two entropy-based methods (information gain (IG) and gain ratio (GR)) to select and extract relevant features in various ratios. Then, mathematical set theory (union and intersection) is used to extract the best features. The model framework is trained and tested on the IoT intrusion dataset 2020 (IoTID20) and the NSL-KDD dataset using four machine learning algorithms: Bagging, Multilayer Perceptron, J48, and IBk. Our approach yielded 11 and 28 relevant features (out of 86) using the intersection and union, respectively, on IoTID20, and 15 and 25 relevant features (out of 41) using the intersection and union, respectively, on NSL-KDD. We further compared our approach with other state-of-the-art studies. The comparison reveals that our model is superior and competitive, scoring a very high 99.98% classification accuracy.
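The rank-then-combine idea is easy to sketch. In this editorial illustration, scikit-learn's mutual information stands in for information gain, and the gain-ratio scorer is a simplified proxy rather than the paper's exact formulation; all sizes are assumptions.

```python
# Sketch: score features with two entropy-based criteria, keep the top
# fraction from each ranking, then combine by intersection (small,
# high-confidence set) or union (broader set).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def select(X, y, ratio=0.3):
    ig = mutual_info_classif(X, y)               # information-gain proxy
    split_info = np.array([                      # entropy of a 10-bin split
        entropy(np.digitize(x, np.histogram_bin_edges(x, 10)[1:-1]))
        for x in X.T])
    gr = ig / np.maximum(split_info, 1e-9)       # gain-ratio proxy
    k = max(1, int(ratio * X.shape[1]))
    top_ig = set(np.argsort(ig)[::-1][:k])
    top_gr = set(np.argsort(gr)[::-1][:k])
    return top_ig & top_gr, top_ig | top_gr      # intersection, union

X, y = np.random.rand(200, 20), np.random.randint(0, 2, 200)
inter, union = select(X, y)
print(sorted(inter), sorted(union))
```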

19 pages, 2045 KiB  
Article
A Deep Learning Framework Performance Evaluation to Use YOLO in Nvidia Jetson Platform
by Dong-Jin Shin and Jeong-Joon Kim
Appl. Sci. 2022, 12(8), 3734; https://doi.org/10.3390/app12083734 - 07 Apr 2022
Cited by 24 | Viewed by 5603
Abstract
Deep learning-based object detection can efficiently infer results by utilizing graphics processing units (GPUs). However, when general deep learning frameworks are used in embedded systems and mobile devices, processing capability is limited. For this reason, deep learning frameworks such as TensorFlow-Lite (TF-Lite) and TensorRT (TRT) are optimized for specific hardware. This paper therefore introduces a performance measurement method that combines the Jetson monitoring tool with TensorFlow and TRT source code on the Nvidia Jetson AGX Xavier platform. Central processing unit (CPU) utilization, GPU utilization, object detection accuracy, latency, and power consumption of each deep learning framework were compared and analyzed. The model is You Only Look Once version 4 (YOLOv4), and the datasets are Common Objects in Context (COCO) and PASCAL Visual Object Classes (VOC). We confirmed that using TensorFlow results in high latency, and that TensorFlow-TensorRT (TF-TRT) and TRT, which use Tensor Cores, are the most efficient. TF-Lite showed the lowest performance because its GPU support is limited to mobile devices. We expect these measurement results to support efficient development of deep learning-based object detection services and research on the Nvidia Jetson platform or in desktop environments.
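The TF-TRT conversion step the paper benchmarks is a one-shot rewrite of a SavedModel. The editorial sketch below uses TensorFlow's TrtGraphConverterV2; the paths are illustrative, and the exact keyword arguments vary across TensorFlow versions (shown here in the newer TF 2.x style).

```python
# Sketch: convert a TensorFlow SavedModel so supported subgraphs run
# through TensorRT, using FP16 to engage Tensor Cores on Jetson GPUs.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='yolov4_saved_model',   # illustrative input path
    precision_mode='FP16')                        # Tensor-Core-friendly mode
converter.convert()                               # build the TRT-fused graph
converter.save('yolov4_tftrt_fp16')               # illustrative output path
```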

12 pages, 4713 KiB  
Article
Image Classification of Parcel Boxes under the Underground Logistics System Using CNN MobileNet
by Mirye Kim, Yongjang Kwon, Joouk Kim and Youngmin Kim
Appl. Sci. 2022, 12(7), 3337; https://doi.org/10.3390/app12073337 - 25 Mar 2022
Cited by 11 | Viewed by 2776
Abstract
Despite various economic crises around the world, the courier and delivery service market continues to grow. The parcel shipping volume in Korea is currently 3.37 billion parcels, a growth of about 140% compared to 2012, and 70% of parcels originate from metropolitan areas. Given these statistics, this paper focuses on the development of an underground logistics system (ULS) to handle freight volume in a more eco-friendly manner in the centers of metropolitan areas. We first analyzed the points at which parcel boxes are damaged in a ULS. After collecting image data of the parcel boxes, damaged parcel boxes were detected and classified using a convolutional neural network (CNN), MobileNet. Google Colaboratory notebooks were used for image classification, and 4882 images were collected for the experiment. On this dataset, the classification accuracy, recall, and specificity for the testing set were 84.6%, 82%, and 88.54%, respectively. To validate the usefulness of the MobileNet algorithm, additional experiments were performed under the same conditions using other algorithms, VGG16 and ResNet50. The results show that MobileNet is superior to the other image classification models in terms of test time. Thus, MobileNet has the potential to be used for identifying damaged boxes and could help ensure the reliability and safety of parcel boxes in a ULS.
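A transfer-learning setup of this kind is a few lines in Keras. The following editorial sketch (not the authors' code) freezes an ImageNet-pretrained MobileNet base and trains a small binary head; the input size and damaged-vs-intact framing are assumptions.

```python
# Sketch: MobileNet feature extractor + small head for damaged/intact
# parcel box classification. Only the new head is trained.
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base.trainable = False                            # freeze the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),   # damaged vs. intact
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy', tf.keras.metrics.Recall()])
model.summary()
```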
