Survey in Deep Learning for IoT Applications

A special issue of Computers (ISSN 2073-431X). This special issue belongs to the section "Internet of Things (IoT) and Industrial IoT".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 40987

Special Issue Editors


Guest Editor
Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
Interests: disease diagnostics using artificial intelligence methods

Special Issue Information

Dear Colleagues,

In recent years, Internet of Things (IoT) methods and dedicated communication channels have been developed to detect and collect all kinds of information and deliver a variety of advanced services and applications, generating huge amounts of data constantly received from millions of IoT sensors deployed around the world. The techniques behind deep learning now play an important role in desktop and mobile applications and are entering the resource-constrained IoT sector, enabling more advanced IoT applications, with proven results already in a variety of areas, including image recognition, medical data analysis, information retrieval, speech recognition, natural language processing, indoor localization, autonomous vehicles, smart cities, sustainability, pollution monitoring, and the bioeconomy.

This Special Issue focuses on research and applications of the Internet of Things, with emphasis on multimodal signal processing, sensor feature extraction, data visualization and understanding, and other related topics. It addresses questions such as: which deep neural network structures can efficiently process and integrate multimodal sensor input data for various IoT applications; how to adapt current designs and develop new ones that reduce the resource cost of running deep learning models, enabling efficient deployment on IoT devices; how to correctly calculate reliability measures for deep learning predictions in IoT applications under limited and constrained computation budgets; and how to reduce the need for labeled IoT signal data for learning, considering operational limitations and other key constraints.

Dr. Rytis Maskeliunas
Prof. Dr. Robertas Damaševičius
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things
  • Deep learning
  • Data fusion
  • Multimodal signal processing
  • Data processing and visualization

Published Papers (5 papers)


Research


34 pages, 5475 KiB  
Article
IoT-Enabled Soil Nutrient Analysis and Crop Recommendation Model for Precision Agriculture
by Murali Krishna Senapaty, Abhishek Ray and Neelamadhab Padhy
Computers 2023, 12(3), 61; https://doi.org/10.3390/computers12030061 - 12 Mar 2023
Cited by 15 | Viewed by 11195
Abstract
Healthy and sufficient crop and food production are essential for everyone as the global population increases, and crop production affects a country's economy to a great extent. In agriculture, it is important to observe the soil, weather, and water availability and, based on these factors, to select an appropriate crop, find the availability of seeds, analyse crop demand in the market, and have knowledge of crop cultivation. Many advancements have been made in recent times, from crop selection to crop cutting. In particular, the Internet of Things, cloud computing, and machine learning tools help a farmer analyse and make better decisions in each stage of cultivation. Once suitable crop seeds are chosen, the farmer proceeds with seeding, monitoring crop growth, disease detection, finding the ripening stage of the crop, and then crop cutting. The main objective is to provide a continuous support system so that the farmer can obtain regular inputs about the field and crop and make proper decisions at each stage of farming. Artificial intelligence, machine learning, the cloud, sensors, and other automated devices are included in the decision support system so that it provides the right information within a short time span. Using the support system, a farmer can take decisive measures without fully depending on the local agriculture offices. We have proposed an IoT-enabled soil nutrient classification and crop recommendation (IoTSNA-CR) model to recommend crops. The model helps to minimise the use of fertilisers in soil so as to maximise productivity.
The proposed model consists of phases such as data collection using IoT sensors from cultivation lands, storing these real-time data in cloud memory services, accessing the cloud data using an Android application, and then pre-processing and periodically analysing them using different learning techniques. A cost-optimised sensory system was prepared that contains a soil temperature sensor, a soil moisture sensor, a water level indicator, a pH sensor, a GPS sensor, and a colour sensor, along with an Arduino UNO board. This sensory system allowed us to collect moisture, temperature, water level, soil NPK colour values, date, time, longitude, and latitude. The studies revealed that Agrinex NPK soil testing tablets should be applied to a soil sample, and the soil colour can then be sensed using an LDR colour sensor to predict the phosphorus (P), nitrogen (N), and potassium (K) values. The collected data were stored in Firebase cloud storage, and an Android application was developed so that a farmer can fetch and analyse the data from the Firebase cloud service from time to time. In this study, a novel approach was identified via the hybridisation of algorithms: a multi-class support vector machine with a directed acyclic graph, optimised using the fruit fly optimisation method (MSVM-DAG-FFO). The highest accuracy rate of this algorithm is 0.973, compared to 0.932 for SVM, 0.922 for SVM kernel, and 0.914 for decision tree. The overall performance of the proposed algorithm in terms of accuracy, recall, precision, and F-score is high compared to other methods. The IoTSNA-CR device allows the farmer to easily maintain field soil information in the cloud service using a mobile phone with minimum knowledge, reduces the expenditure needed to balance soil minerals, and increases productivity.
(This article belongs to the Special Issue Survey in Deep Learning for IoT Applications)
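As a hedged illustration of the DAG-structured multi-class SVM described in the abstract above, the sketch below arranges pairwise binary SVMs into a decision DAG that eliminates one candidate class per test. It is a minimal reconstruction under stated assumptions, not the authors' MSVM-DAG-FFO implementation: the fruit fly optimisation of hyperparameters is omitted, and the class name `DAGSVM` and its methods are hypothetical.

```python
# Minimal DAG-structured multi-class SVM (DAGSVM) sketch.
# Assumes generic numeric feature vectors; hyperparameter tuning
# (done via fruit fly optimisation in the paper) is omitted.
from itertools import combinations

import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification


class DAGSVM:
    def fit(self, X, y):
        # One binary SVM per unordered class pair: N(N-1)/2 classifiers.
        self.classes_ = sorted(set(y))
        self.pair_clf = {}
        for a, b in combinations(self.classes_, 2):
            mask = (y == a) | (y == b)
            self.pair_clf[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])
        return self

    def predict_one(self, x):
        # Walk the decision DAG: test first vs last remaining class,
        # drop the loser, repeat until one class survives.
        remaining = list(self.classes_)
        while len(remaining) > 1:
            a, b = remaining[0], remaining[-1]
            winner = self.pair_clf[(a, b)].predict([x])[0]
            remaining.remove(b if winner == a else a)
        return remaining[0]


# Usage on synthetic data (stand-in for the soil-nutrient features):
X, y = make_classification(n_samples=150, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
model = DAGSVM().fit(X, y)
preds = [model.predict_one(x) for x in X[:10]]
```

The DAG layout keeps prediction cost linear in the number of classes (N-1 binary tests per sample) even though N(N-1)/2 classifiers are trained, which is the practical appeal of DAGSVM over plain one-vs-one voting.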

26 pages, 1686 KiB  
Article
Foot-to-Ground Phases Detection: A Comparison of Data Representation Formatting Methods with Respect to Adaption of Deep Learning Architectures
by Youness El Marhraoui, Hamdi Amroun, Mehdi Boukallel, Margarita Anastassova, Sylvie Lamy, Stéphane Bouilland and Mehdi Ammi
Computers 2022, 11(5), 58; https://doi.org/10.3390/computers11050058 - 20 Apr 2022
Cited by 1 | Viewed by 2276
Abstract
Identifying the foot stance and foot swing phases, also known as foot-to-ground (FTG) detection, is a branch of Human Activity Recognition (HAR). Our study aims to detect two main phases of the gait (i.e., foot-off and foot-contact) corresponding to the moments when each foot is in contact with the ground or not. This will allow the medical professionals to characterize and identify the different phases of the human gait and their respective patterns. This detection process is paramount for extracting gait features (e.g., step width, stride width, gait speed, cadence, etc.) used by medical experts to highlight gait anomalies, stance issues, or any other walking irregularities. It will be used to assist health practitioners with patient monitoring, in addition to developing a full pipeline for FTG detection that would help compute gait indicators. In this paper, a comparison of different training configurations, including model architectures, data formatting, and pre-processing, was conducted to select the parameters leading to the highest detection accuracy. This binary classification provides a label for each timestamp informing whether the foot is in contact with the ground or not. Models such as CNN, LSTM, and ConvLSTM were the best fits for this study. Yet, we did not exclude DNNs and Machine Learning models, such as Random Forest and XGBoost from our work in order to have a wide range of possible comparisons. As a result of our experiments, which included 27 senior participants who had a stroke in the past wearing IMU sensors on their ankles, the ConvLSTM model achieved a high accuracy of 97.01% for raw windowed data with a size of 3 frames per window, and each window was formatted to have two superimposed channels (accelerometer and gyroscope channels). The model was trained to have the best detection without any knowledge of the participants’ personal information including age, gender, health condition, the type of activity, or the used foot. 
In other words, the model's input data only originated from IMU sensors. Overall, in terms of FTG detection, the combination of the ConvLSTM model and the chosen data representation was decisive in outperforming other state-of-the-art configurations; in addition, the compromise between the model's complexity and its accuracy is a major asset for deploying this model and developing real-time solutions.
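The best-performing input format in the abstract above, raw 3-frame windows with two superimposed channels (accelerometer and gyroscope), can be sketched as a simple formatting step. This is an assumed reconstruction of the data layout, not the authors' code; the function name `make_windows` and the choice of labelling each window by its last frame are hypothetical.

```python
# Sketch: format raw IMU streams into fixed-size windows with two
# superimposed channels, as described in the abstract (window size 3).
import numpy as np


def make_windows(acc, gyr, labels, win=3):
    """Stack accelerometer and gyroscope streams as two channels and
    slice them into non-overlapping windows of `win` frames, with one
    binary foot-contact label per window (here: the last frame's label,
    an assumption; the paper labels every timestamp)."""
    n = (len(acc) // win) * win          # trim to a multiple of win
    acc, gyr, labels = acc[:n], gyr[:n], labels[:n]
    frames = np.stack([acc, gyr], axis=-1)   # (n, 3, 2): xyz axes x channels
    windows = frames.reshape(-1, win, 3, 2)  # (n/win, win, 3, 2)
    y = labels.reshape(-1, win)[:, -1]       # one label per window
    return windows, y


# Usage with synthetic ankle-IMU data:
acc = np.random.randn(10, 3)   # accelerometer x/y/z per frame
gyr = np.random.randn(10, 3)   # gyroscope x/y/z per frame
labels = np.array([0, 1] * 5)  # 1 = foot in contact with ground
windows, y = make_windows(acc, gyr, labels, win=3)
```

A ConvLSTM-style classifier would then consume `windows` directly, convolving over the spatial axes within each frame while the recurrent part tracks the 3-frame temporal context.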

24 pages, 37226 KiB  
Article
An IoT System Using Deep Learning to Classify Camera Trap Images on the Edge
by Imran Zualkernan, Salam Dhou, Jacky Judas, Ali Reza Sajun, Brylle Ryan Gomez and Lana Alhaj Hussain
Computers 2022, 11(1), 13; https://doi.org/10.3390/computers11010013 - 13 Jan 2022
Cited by 24 | Viewed by 8520
Abstract
Camera traps deployed in remote locations provide an effective method for ecologists to monitor and study wildlife in a non-invasive way. However, current camera traps suffer from two problems. First, the images are manually classified and counted, which is expensive. Second, due to manual coding, the results are often stale by the time they get to the ecologists. Using the Internet of Things (IoT) combined with deep learning represents a good solution for both these problems, as the images can be classified automatically, and the results immediately made available to ecologists. This paper proposes an IoT architecture that uses deep learning on edge devices to convey animal classification results to a mobile app using the LoRaWAN low-power, wide-area network. The primary goal of the proposed approach is to reduce the cost of the wildlife monitoring process for ecologists, and to provide real-time animal sightings data from the camera traps in the field. Camera trap image data consisting of 66,400 images were used to train the InceptionV3, MobileNetV2, ResNet18, EfficientNetB1, DenseNet121, and Xception neural network models. While performance of the trained models was statistically different (Kruskal–Wallis: Accuracy H(5) = 22.34, p < 0.05; F1-score H(5) = 13.82, p = 0.0168), there was only a 3% difference in the F1-score between the worst (MobileNetV2) and the best model (Xception). Moreover, the models made similar errors (Adjusted Rand Index (ARI) > 0.88 and Adjusted Mutual Information (AMI) > 0.82). Subsequently, the best model, Xception (accuracy = 96.1%; F1-score = 0.87; F1-score = 0.97 with oversampling), was optimized and deployed on the Raspberry Pi, Google Coral, and Nvidia Jetson edge devices using both TensorFlow Lite and TensorRT frameworks. Optimizing the models to run on edge devices reduced the average macro F1-score to 0.7 and adversely affected the minority classes, reducing their F1-score to as low as 0.18.
Upon stress testing by processing 1000 images consecutively, the Jetson Nano, running a TensorRT model, outperformed the others with a latency of 0.276 s/image (s.d. = 0.002) while consuming an average current of 1665.21 mA. The Raspberry Pi consumed the least average current (838.99 mA) but with roughly ten times the latency, at 2.83 s/image (s.d. = 0.036). The Nano was the only reasonable option as an edge device because it could capture most animals whose maximum speeds were below 80 km/h, including goats, lions, ostriches, etc. While the proposed architecture is viable, unbalanced data remain a challenge, and the results can potentially be improved by using object detection to reduce imbalances and by exploring semi-supervised learning.
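The consecutive-image stress test reported above (mean and standard deviation of per-image latency) can be sketched as a small timing harness. This is a generic illustration of the measurement, not the authors' benchmark code; the function name `stress_test` and the dummy inference callable are hypothetical.

```python
# Sketch: measure per-image inference latency over a batch of images,
# mirroring the 1000-image consecutive stress test described above.
import time
import statistics


def stress_test(infer, images):
    """Run `infer` once per image and return (mean, s.d.) latency in
    seconds. `infer` stands in for an edge-deployed model's predict
    call (e.g. a TensorRT or TensorFlow Lite invocation)."""
    latencies = []
    for img in images:
        t0 = time.perf_counter()
        infer(img)
        latencies.append(time.perf_counter() - t0)
    return statistics.mean(latencies), statistics.stdev(latencies)


# Usage with a dummy "model" (summing pixels) over 5 fake images:
fake_images = [[0.1] * 1000 for _ in range(5)]
mean_lat, sd_lat = stress_test(lambda img: sum(img), fake_images)
```

On real hardware the same loop would wrap the device's inference call, with current draw logged separately by an external power monitor, since latency alone does not capture the energy trade-off the paper highlights.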

Review


19 pages, 3268 KiB  
Review
Vertical Farming Perspectives in Support of Precision Agriculture Using Artificial Intelligence: A Review
by Riki Ruli A. Siregar, Kudang Boro Seminar, Sri Wahjuni and Edi Santosa
Computers 2022, 11(9), 135; https://doi.org/10.3390/computers11090135 - 8 Sep 2022
Cited by 24 | Viewed by 9816
Abstract
Vertical farming is a new agricultural system which aims to make the most of limited access to land, especially in big cities. Vertical agriculture is an answer to the challenges posed by land and water shortages, including urban agriculture with limited access to land and water. This study uses the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement as its literature-review approach; PRISMA provides a structured way to select and check the validity of the articles included in this systematic review. One of the aims of this study is to survey the scientific literature related to vertical farming published in the last six years, examining how artificial intelligence, with machine learning and deep learning, and the Internet of Things (IoT) have been utilized to support precision agriculture, especially as applied to vertical farming. The results of this study provide information on the challenges and technological trends in the area of vertical agriculture, as well as exploring future opportunities.

Other


32 pages, 59599 KiB  
Systematic Review
Deep Learning (CNN, RNN) Applications for Smart Homes: A Systematic Review
by Jiyeon Yu, Angelica de Antonio and Elena Villalba-Mora
Computers 2022, 11(2), 26; https://doi.org/10.3390/computers11020026 - 16 Feb 2022
Cited by 21 | Viewed by 8083
Abstract
In recent years, research on convolutional neural networks (CNN) and recurrent neural networks (RNN) in deep learning has been actively conducted. In order to provide more personalized and advanced functions in smart home services, studies on deep learning applications are becoming more frequent, and deep learning is acknowledged as an efficient method for recognizing the voices and activities of users. In this context, this study aims to systematically review the smart home studies that apply CNN and RNN/LSTM as their main solution. Of the 632 studies retrieved from the Web of Science, Scopus, IEEE Xplore, and PubMed databases, 43 studies were selected and analyzed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. In this paper, we examine which smart home applications CNN and RNN/LSTM are applied to and compare how they were implemented and evaluated. The selected studies covered a total of 15 application areas for smart homes, of which activity recognition was the most common. This study provides essential data for researchers who want to apply deep learning to smart homes, identifies the main trends, and can help guide design and evaluation decisions for particular smart home services.
