Review

Integration of Deep Learning into the IoT: A Survey of Techniques and Challenges for Real-World Applications

by Abdussalam Elhanashi 1,*,†, Pierpaolo Dini 1,†, Sergio Saponara 1,† and Qinghe Zheng 2,†
1 Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
2 School of Intelligent Engineering, Shandong Management University, Jinan 250357, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2023, 12(24), 4925; https://doi.org/10.3390/electronics12244925
Submission received: 27 October 2023 / Revised: 27 November 2023 / Accepted: 1 December 2023 / Published: 7 December 2023
(This article belongs to the Special Issue AI Technologies and Smart City)

Abstract:
The internet of things (IoT) has emerged as a pivotal technological paradigm facilitating interconnected and intelligent devices across multifarious domains. The proliferation of IoT devices has resulted in an unprecedented surge of data, presenting formidable challenges concerning efficient processing, meaningful analysis, and informed decision making. Deep-learning (DL) methodologies, notably convolutional neural networks (CNNs), recurrent neural networks (RNNs), and deep-belief networks (DBNs), have demonstrated significant efficacy in mitigating these challenges by furnishing robust tools for learning and extraction of insights from vast and diverse IoT-generated data. This survey article offers a comprehensive and meticulous examination of recent scholarly endeavors encompassing the amalgamation of deep-learning techniques within the IoT landscape. Our scrutiny encompasses an extensive exploration of diverse deep-learning models, expounding on their architectures and applications within IoT domains, including but not limited to smart cities, healthcare informatics, and surveillance applications. We proffer insights into prospective research trajectories, discerning the exigency for innovative solutions that surmount extant limitations and intricacies in deploying deep-learning methodologies effectively within IoT frameworks.

1. Introduction

1.1. Motivations

The integration of deep-learning models into IoT systems holds substantial promise, yielding a multitude of benefits that cater to the specific demands and complexities of the modern technological landscape.
One pivotal advantage lies in the realm of automatic feature extraction. Deep-learning models excel in autonomously extracting features from raw sensor data, a critical capability particularly beneficial in IoT applications grappling with unstructured, noisy, or intricately interconnected datasets.
Furthermore, the implementation of deep learning facilitates real-time and streaming data analysis, enabling IoT applications to efficiently handle the continuous influx of data. This functionality is paramount in time-sensitive applications such as real-time monitoring, predictive maintenance, or the seamless operation of autonomous control systems [1,2].
A notable enhancement that deep learning offers to IoT systems is the substantial boost in accuracy. By discerning intricate patterns and anomalies within data that may elude human comprehension, deep-learning models bolster the precision and dependability of IoT systems.
Moreover, the optimization of deep-learning models results in reduced resource consumption, making them ideally suited for deployment on resource-constrained IoT devices. This efficiency enables a streamlined and effective utilization of resources, underscoring the practicality and sustainability of incorporating deep learning into IoT systems.
Intriguingly, the integration of deep-learning models can pave the way for novel applications, fostering a new paradigm of interaction between humans and their physical environment. Efficiency and safety remain key concerns that demand thorough consideration to fully harness the potential of this integration. However, ongoing initiatives such as the very efficient deep learning in the IoT (VEDLIoT) project are actively tackling these challenges, demonstrating a commitment to surmounting obstacles and maximizing the benefits of this powerful amalgamation.
The proliferation of internet access, coupled with advances in hardware sophistication and network engineering, has ushered in an era characterized by an extensive corpus of data and a multitude of data-analyzing techniques. This confluence of factors has empowered the analysis and verification of signals received by sensors, heralding a new era of possibilities in various domains, such as healthcare, transportation, agriculture, and the development of smart cities (Figure 1).
Wireless-access technologies, heavily reliant on ubiquitous sensing, have emerged as the linchpin for robust internet connectivity. Ubiquitous-sensing technology, capable of distilling insights from sensor-collected data, has emerged as a significant area of research, charting a course towards transformative advancements [3,4].
At the heart of these technological innovations lies the IoT, a catalytic force that amalgamates disparate technologies. The IoT encompasses a network of physical devices, vehicles, structures, and objects embedded with sensors, software, and cutting-edge technologies.
Deep learning, a prominent branch of machine learning, plays a crucial role in shaping the landscape of IoT systems [5,6,7,8,9]. At its core, deep learning relies on neural networks, which are computational models inspired by the human brain. These networks enable the automated extraction of intricate patterns and relationships from complex data. Deep learning leverages various types of neural networks, each designed for specific tasks and data structures:
  • Convolutional neural networks (Figure 2): These are tailored for image and video analysis. CNNs excel in detecting spatial patterns through convolutional layers, making them invaluable for tasks like image classification and object detection. The strength of CNNs lies in their hierarchical feature-extraction process. Convolutional layers apply filters or kernels to the input data, effectively scanning it for various features, such as edges, textures, shapes, and other visual cues [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27].
  • Recurrent neural networks: RNNs are ideal for sequential data, such as natural language processing and time-series analysis. They maintain memory of past inputs, enabling them to capture temporal dependencies [12,28,29,30,31,32,33,34,35,36,37].
  • Long short-term memory (LSTM) networks: A specialized type of RNN, LSTMs address the vanishing gradient problem and are well-suited for tasks requiring longer-term memory retention [13,16,20,21,38,39,40,41,42,43,44,45].
  • Gated recurrent unit (GRU) networks: Similar to LSTMs, GRUs are designed for sequential data but have a simplified architecture, making them computationally efficient [46,47,48,49,50,51,52,53,54,55,56,57,58].
  • Fully connected neural networks (Figure 3): These networks, also known as multilayer perceptrons (MLPs), are versatile and can be used for various tasks, including regression and classification. Fully connected layers, serving as the final layer in a deep neural network, play a central role in synthesizing output from preceding layers into comprehensive predictions [59,60,61,62,63,64,65,66,67,68,69,70,71].
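As a concrete illustration of the convolutional feature extraction described above, the sketch below applies a single hand-written filter to a toy image in NumPy. The kernel and image are illustrative assumptions, not taken from any cited model; a real CNN learns its filters during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, as used in CNN convolutional layers."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector (Sobel-like kernel): responds strongly where
# pixel intensity changes from left to right.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

# Toy 5x5 "image": dark left half, bright right half -> one vertical edge.
image = np.array([[0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1]], dtype=float)

feature_map = conv2d(image, kernel)
print(feature_map)  # large responses only where the edge sits
```

The large activations at the edge location illustrate how stacked convolutional layers build up from such low-level cues to textures and shapes.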
Deep learning with neural networks is a powerful paradigm that excels at representing data hierarchically, extracting essential features, and performing predictive modeling. Feature extraction is a critical component of this process, wherein the network automatically identifies and isolates key patterns and relevant information from raw data. These features encapsulate distinctive characteristics, such as textures and shapes in images, enabling the network to comprehend the underlying structure within the data. This hierarchical feature extraction empowers deep-learning models to transform complex, unstructured data into meaningful representations, facilitating accurate predictions and insights.
The integration of deep learning into the IoT presents a promising avenue for enhancing IoT capabilities. First, there is a need for a survey of the various deep-learning techniques that can be applied in IoT contexts, including CNNs and RNNs, to provide a holistic understanding of their strengths and weaknesses. Second, understanding the resource constraints of IoT devices and designing efficient deep-learning models that can operate within these limitations is crucial, and more research is needed in this area. Third, benchmark datasets are needed to assess the real-world applicability and performance of deep learning in diverse IoT applications, such as healthcare, human recognition, and surveillance. Addressing these research gaps will facilitate the effective integration of deep learning into the IoT and foster its broader adoption in real-world scenarios.

1.2. State-of-the-Art on Deep Learning for the IoT

The integration of deep learning and the IoT has revolutionized multiple sectors. Ahmed et al.’s 2018 research highlighted deep learning’s crucial role in IoT data analysis, enabling real-time monitoring, predictive maintenance, and operational optimization across IoT applications [72]. This integration has sparked innovations in smart-home automation, autonomous vehicles, smart agriculture, energy management, and healthcare [73,74,75]. Anomaly detection in the IoT has benefited from advanced deep-learning techniques, such as CNNs with auto-encoders, LSTMs, and generative adversarial networks (GANs), bolstering security and pre-empting threats [76,77]. Activity recognition, a pivotal IoT data analysis aspect, leverages CNNs, RNNs, and DBNs for real-time adaptability [78,79,80]. Energy-efficient deep-learning models, including spiking neural networks (SNNs), binary neural networks (BNNs), and deep-compression techniques, address the IoT’s resource constraints without sacrificing data analysis accuracy [81,82]. Edge computing’s rise in the IoT has necessitated optimized deep-learning models, achieved through techniques like federated learning, transfer learning, and knowledge distillation, enhancing edge-device performance [83,84,85]. Visual data processing in the IoT benefits from deep learning, with CNNs, transfer learning, region-based CNNs, and CNNs with spatial pyramid pooling facilitating data-driven insights and comprehensive data categorization [86,87,88,89]. The collaboration between deep learning and the IoT showcases the transformative potential of modern technology, enhancing efficiency, and driving innovation across industries. It shapes the future of applications and paves the way for groundbreaking advancements.

1.3. Contributions

This survey represents a comprehensive examination of the most recent research pertaining to the intersection of deep learning and the IoT. Our approach encompassed an extensive exploration and analysis of pertinent academic publications and reports from diverse sources. Leveraging online databases, notably Google Scholar, we conducted a meticulous search, using targeted keywords such as “deep learning” and “Internet of Things”. Our focus centered on sifting through the latest and most impactful developments in the field. Additionally, we prioritized the inclusion of papers available in full-text format and composed in the English language, ensuring accessibility for a broader audience.
Each identified paper underwent rigorous scrutiny. Our emphasis was directed towards delineating the fundamental research inquiries, assessing the methodologies employed, and comprehensively evaluating the resultant findings. Moreover, an exhaustive examination of the references cited within the papers facilitated the identification of supplementary relevant sources, enhancing the depth and breadth of our analysis.
The collated papers were then categorized, elucidating the diverse applications of deep learning within the realm of the IoT. Subsequently, we synthesized the salient insights from each category, thereby highlighting the latest strides and transformative applications of deep learning within the IoT domain. In tandem, we expounded upon the intricate challenges and limitations inherent within these applications, supplementing our discussion with valuable insights into prospective research directions aimed at further enriching this dynamic and rapidly evolving landscape.
In essence, our methodology embodies a systematic and rigorous approach toward comprehensively evaluating the cutting-edge research pertinent to deep learning within the IoT sphere. By virtue of our comprehensive analysis, we aim to provide a lucid and informative survey, underscoring the rapid evolution and transformative potential of this burgeoning field.

2. Overview on Deep-Learning Performance in Typical IoT Applications

In this review, we employed a systematic approach to selecting and analyzing academic publications and reports. Initially, we identified relevant databases and search terms aligned with our study’s focus. Subsequent selection was based on predefined criteria, including publication date, relevance to DL in the IoT, and scholarly credibility. Each selected document underwent a thorough qualitative content analysis, to extract and synthesize key findings and trends relevant to our research objectives.

2.1. Anomaly Detection

The integration of deep learning in anomaly detection within the IoT realm has emerged as a transformative paradigm, ushering in a new era of data analysis and anomaly identification. Notably, deep learning plays a pivotal role in unraveling complex and unstructured data patterns, surpassing traditional anomaly-detection techniques reliant on manual feature extraction and rule-based systems, as articulated by several pioneering studies [90]. Leveraging its innate capacity to autonomously extract features from raw sensor data and to comprehend intricate data relationships, deep-learning models have redefined the landscape of anomaly detection, enhancing their efficacy in diverse IoT applications, including real-time monitoring, predictive maintenance, and autonomous control systems [76,91,92].
Moreover, the versatility of deep-learning models extends beyond their capacity for real-time analysis, exhibiting optimal resource utilization, a pivotal attribute in the context of resource-constrained IoT devices [76,91,92]. The amalgamation of sophisticated deep-learning-based anomaly detection techniques has witnessed proliferation, encompassing diverse methodologies such as unsupervised deep-learning techniques, auto-encoder neural networks, and spiking neural networks, each tailored to address the specific intricacies of anomalous event detection within IoT systems [76,91,92].
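The reconstruction-error principle behind the auto-encoder detectors mentioned above can be sketched with a linear encoder (PCA), which is the subspace a linear auto-encoder with a one-unit bottleneck learns. The telemetry data and threshold below are synthetic assumptions for illustration, not taken from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" sensor readings: two strongly correlated channels,
# standing in for IoT telemetry with simple underlying structure.
t = rng.normal(size=(500, 1))
normal = np.hstack([t, 2.0 * t]) + 0.05 * rng.normal(size=(500, 2))

# "Encoder": the top principal component of the normal data captures
# the correlation pattern that normal behavior follows.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
component = vt[0]

def reconstruction_error(x):
    """Encode to 1-D, decode back, and measure what information was lost."""
    centered = x - mean
    code = centered @ component        # encode (project onto the subspace)
    recon = np.outer(code, component)  # decode (reconstruct from the code)
    return np.linalg.norm(centered - recon, axis=1)

# Threshold chosen from the normal data, e.g., the 99th percentile of errors.
threshold = np.percentile(reconstruction_error(normal), 99)

# An anomalous reading that violates the learned correlation is
# reconstructed poorly, so its error exceeds the threshold.
anomaly = np.array([[1.0, -2.0]])
print(reconstruction_error(anomaly) > threshold)
```

A trained nonlinear auto-encoder generalizes this same idea: inputs that the network cannot reconstruct well are flagged as anomalies.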
However, the domain of anomaly detection within IoT systems is not without its share of challenges, notably the paucity of large open datasets, which often impedes direct comparisons between various models. This challenge has prompted several researchers to rely on private datasets, resulting in a fragmented landscape that complicates comprehensive analyses and standardized evaluations [90]. Nonetheless, the cumulative efforts of various models have culminated in promising results, showcasing high metrics scores with accuracy rates soaring up to 90%. The comparative analysis, as depicted in Table 1, illustrates the diverse array of models, ranging from MLP and convolutional neural networks (CNN) to random forest classifiers and intricate artificial neural networks (ANNs) [90].
These persistent efforts within the anomaly-detection realm, coupled with the advancements and achievements of diverse deep-learning models, serve as a testament to the transformative potential of integrating cutting-edge technologies within the intricate fabric of IoT systems.
Table 1 summarizes the performance of the cited references.
The findings presented in Table 1 demonstrate the utilization of various techniques for anomaly detection in the IoT, all yielding high F1 scores. Specifically, the decision tree (DT) and random forest (RF) classifiers exhibited the highest F1 scores in the investigations conducted by [91], reaching 0.99. This suggests that, for this specific task, classical machine-learning algorithms can outperform neural networks. Following this, MLP classifiers achieved F1 scores of 0.98 and 0.99 in two separate studies [76,91]. Similarly, the support vector machine (SVM) and logistic regression (LR) approaches recorded high F1 scores of 0.98 in the same study [91]. By contrast, the study conducted by [92] employed a different methodology, termed segmentation-based self-taught convolutional neural network (SS-TCVN), achieving an F1 score of 0.9616, slightly lower than the scores achieved by the other methodologies. This discrepancy may be attributed to the propensity of complex neural-network architectures to overfit rapidly when applied to datasets with simple patterns.
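For reference, the F1 scores compared in Table 1 are the harmonic mean of precision and recall. A minimal computation follows; the confusion-matrix counts are made-up illustrative numbers, not results from the cited studies.

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from confusion-matrix counts."""
    precision = tp / (tp + fp)  # fraction of flagged events that are real anomalies
    recall = tp / (tp + fn)     # fraction of real anomalies that were flagged
    return 2 * precision * recall / (precision + recall)

# Hypothetical detector: 99 true positives, 1 false alarm, 1 missed anomaly
# -> precision 0.99, recall 0.99, F1 0.99.
print(round(f1_score(tp=99, fp=1, fn=1), 2))  # 0.99
```

Because the harmonic mean punishes imbalance, F1 drops sharply when either false alarms or missed detections dominate, which is why it is preferred over raw accuracy for rare-event detection.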

2.2. Human-Activity Recognition

Deep learning plays a crucial role in various aspects of IoT applications, particularly in anomaly detection and human-activity recognition (HAR). When it comes to anomaly detection, deep learning’s ability to automatically learn complex patterns and relationships within the data is invaluable, making it an essential tool in identifying unusual events or behaviors in IoT systems. Traditional anomaly detection methods often rely on manual feature engineering, which can be time-consuming and may not capture all the relevant information in the data. By contrast, deep-learning models can automatically extract features from raw sensor data, making them more effective at detecting anomalies.
Moreover, deep-learning models excel in handling real-time and streaming data, enabling continuous analysis and decision making in time-sensitive applications, such as real-time monitoring and predictive maintenance. Their potential to be optimized for reduced resource consumption makes them ideal for deployment on resource-constrained IoT devices, further enhancing their versatility and applicability.
Various state-of-the-art deep-learning-based anomaly-detection techniques have been proposed, including unsupervised deep-learning techniques, auto-encoder neural networks, and spiking neural networks, with applications spanning diverse IoT domains, such as botnet detection, smart logistics, and healthcare monitoring. Similarly, in the field of HAR, deep learning’s capability to automatically learn intricate patterns and relationships in complex and unstructured data is indispensable. HAR, focusing on autonomously classifying human activities, benefits greatly from deep learning’s automatic feature extraction, leading to improved accuracy and efficient real-time data analysis.
The current state-of-the-art models for HAR leverage CNNs, RNNs, and their combinations to classify human actions based on sensor data from accelerometers or gyroscopes [93]. CNNs extract spatial features from sensor data, while RNNs model temporal dependencies, allowing the recognition of activity patterns over multiple time steps [93]. Recent advancements in HAR research include the use of transfer-learning and data-augmentation techniques to enhance model performance and to mitigate overfitting [93].
One standout model, the DeepConvLSTM architecture, combines multiple CNN and LSTM layers, demonstrating notable accuracy and performance across various activity classes [93]. The field has also witnessed the application of state-of-the-art hybrid feature-based methods, reflecting the ongoing progress and innovative approaches within HAR. Table 2 summarizes the efficiency results of the reference works.
Moreover, the DeepConvLSTM architecture represents a notable advancement in HAR, showcasing its ability to classify human activities accurately, with metrics scores ranging from accuracy and precision to recall and F1 score [93]. An example of Deep CNN–LSTM internal model architecture is shown in Figure 4.
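A single forward step of the LSTM component in hybrid models such as DeepConvLSTM follows the standard gate equations. The sketch below uses random placeholder weights and a toy sensor sequence, not trained DeepConvLSTM parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) biases, stacked in [input, forget, cell, output] order."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate: how much new information to admit
    f = sigmoid(z[H:2 * H])      # forget gate: how much of c_prev to keep
    g = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:4 * H])  # output gate
    c = f * c_prev + i * g       # cell state carries long-term memory
    h = o * np.tanh(c)           # hidden state exposed to the next layer
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 4  # input size (e.g., three accelerometer axes) and hidden size
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(10, D)):  # a short synthetic sensor sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The additive update of the cell state `c` is what lets gradients flow over long sequences, mitigating the vanishing-gradient problem noted earlier for plain RNNs.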
In essence, the synergy between deep learning and IoT applications, particularly in anomaly detection and HAR, underscores the transformative potential of this technology in enabling efficient, accurate, and real-time data analysis, with implications across a wide array of domains ranging from healthcare and security to smart homes and logistics. In the realm of HAR, DeepConvLSTM serves as an illustration of a hybrid model with a substantial number of learning parameters. Unlike in the anomaly-detection task, the model did not show signs of overfitting even as the hidden-layer size reached 1024 units. However, the model’s accuracy was not optimal, indicating room for improvement.

2.3. Healthcare

Deep learning is becoming increasingly important in healthcare IoT applications, due to its ability to improve the accuracy of diagnosis, enable personalized treatment, monitor patients’ health in real time, and be optimized for deployment on resource-constrained IoT devices. Several state-of-the-art deep-learning-based healthcare applications have been proposed, such as disease diagnosis, drug discovery, and personalized medicine (Figure 5). These applications have the potential to improve patient outcomes, reduce healthcare costs, and enable more efficient healthcare delivery. In terms of quantitative metrics, deep-learning models have been shown to achieve high accuracy, sensitivity, and specificity in various healthcare applications. For instance, a CNN-based model achieved an accuracy of 97.5% in detecting COVID-19 from chest X-ray images. Another CNN-based model achieved an accuracy of 96.5% in detecting skin cancer from dermoscopy images.
Many of these tasks belong to the computer-vision field, and their state of the art tracks progress in that field. As an example of image processing in the IoT, consider its application in medicine [95], where DL models predict a diagnosis from X-ray results. This is a difficult task, as it requires large datasets and substantial computing power to train the model. Although classification tasks [96] can achieve about 95% accuracy, more demanding tasks such as segmentation of the region of interest still lag behind, with mean-average-precision scores hovering around 60%.
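Segmentation quality figures such as those quoted above are built on overlap measures like intersection over union (IoU). A minimal sketch with synthetic binary masks, purely for illustration:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, target).sum() / union

# Toy example: a 4x4 ground-truth region vs. a prediction shifted one pixel.
target = np.zeros((8, 8), dtype=int)
target[2:6, 2:6] = 1            # ground-truth region
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1              # predicted region, shifted down one row

print(iou(pred, target))  # 12 / 20 = 0.6
```

Even this one-pixel shift drops IoU to 0.6, which illustrates why pixel-level localization errors keep segmentation metrics well below classification accuracy.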
There are various applications of DL in the field of healthcare, and the IoT has enabled the development of numerous IoT systems for homes related to healthcare. These systems utilize sensors and devices to monitor and analyze patients’ health data in real time. The entries in Table 3 encompass a diversity of data types (RGB, skeleton, depth, audio), datasets used (UCF101, NTU-RGBD, HMDB51, etc.), and methods employed (e.g., TSN, FCN, DBN, GAN, etc.). The efficacy of the methods is evaluated in terms of accuracy, ranging from 14.40% to 97.4%. Given the disparate data types and architectural differences, direct comparison of the methods may be challenging. Nevertheless, the table serves as a valuable reference for researchers interested in applying deep learning to HAR, underscoring the wide array of methods and datasets employed in this field. Table 4 provides a summary of some of the IoT systems for homes related to healthcare that utilize DL techniques. These systems can help individuals manage their health better, enable early detection of diseases, and support remote patient monitoring.
In Table 4, Fonseca et al. aimed to enhance the living conditions of chronic multimorbidity patients by introducing new caregiving amenities, yet no statistical data were available to substantiate its effectiveness. Sandstrom et al. proposed a simplistically structured DL method to link smartphone sensor data to individual health, offering low computational load and high performance. However, they recommended further research into different sensor-data genres. Liu et al. developed a smart-dental-health-IoT system with cost-efficient hardware, but its coverage of larger teeth was incomplete. Sagar et al. put forth a DNN model with high accuracy and low cost for patient monitoring, necessitating a substantial amount of data.
In Table 5, Klenk et al. utilized classical and non-classical algorithms for fall detection, but the DL model exhibited lower accuracy for ADLs. Malasinghe et al. devised smart patches or chips for human health monitoring, exhibiting promising outcomes based on precision, efficiency, mean-residual-error delay, and energy usage. Nevertheless, they suggested exploring advanced multimedia methods to reduce costs and enhance privacy. Wei et al. introduced a novel system for automated nutrition monitoring, which proved cost-efficient with high accuracy, but they recommended exploring alternative methods for more precise diet prediction.

2.4. IoT for Surveillance Applications

The integration of deep learning into the IoT has revolutionized surveillance applications, offering unprecedented advancements in security and efficiency. In recent years, IoT devices have become indispensable tools in surveillance, enabling real-time monitoring, data analysis, and predictive insights. Deep-learning techniques, such as convolutional neural networks, have empowered these devices to recognize complex patterns and anomalies in video streams, making them highly effective at identifying threats and providing automated responses. However, this integration is not without challenges, including privacy concerns, data security, and computational limitations. This comprehensive survey explores the diverse range of applications, from smart cities to smart homes, and provides a critical assessment of the current landscape, addressing the hurdles that must be overcome for widespread adoption. Table 5 shows the advantages and challenges of applying various deep-learning methodologies for surveillance applications.
Commercially available solutions that utilize deep learning in surveillance applications include:
  • Amazon Rekognition: Amazon Rekognition is an image-and-video-analysis service based on deep learning, which can identify objects, people, text, scenes, and activities in real time. It is used in surveillance applications for facial recognition, object detection, and traffic monitoring [115,116,117,118].
  • Hanwha Techwin: Hanwha Techwin is a company that produces deep-learning-based surveillance cameras, such as the Q-AI and X-AI series, which use deep-learning algorithms to detect objects, people, and activities [119,120,121].
  • Hikvision: Hikvision is a company that manufactures deep-learning-based surveillance cameras, such as the DeepinView series, which utilizes deep-learning algorithms to detect objects, people, and activities [122,123].
  • NVIDIA: NVIDIA is a company that produces hardware and software for image and video processing based on deep learning, such as the Jetson platform, which can be used for surveillance applications such as facial recognition and object detection [124,125,126,127].
Table 6 showcases a variety of studies related to deep learning in the IoT for surveillance applications, along with their methodologies, applications, benefits, and limitations. Concerning IoT security and surveillance, a series of studies have leveraged deep-learning techniques. Al-Amiedy et al. [128] employed GRU-based deep learning to detect and prevent RPL attacks in IoT networks, demonstrating enhanced security. However, the study was confined to the realm of RPL-based 6LoWPAN within the IoT. Similarly, Lerina et al. [129] exhibited improved IoT security through deep learning but failed to provide a comprehensive review of deep-learning approaches to IoT security. Banaamah et al. [130] focused on intrusion detection in the IoT, resulting in heightened security measures, yet the study was constrained to this specific aspect. Javed et al. [131] explored both machine learning and deep learning for IoT security but limited their scope to a systematic review of the literature. Gandhi et al. [132] concentrated on enhancing the privacy of deep-learning systems in the IoT, showcasing improved privacy measures, albeit within this niche exclusively. By contrast, Gherbi [133] encompassed various IoT networking domains with machine learning, demonstrating enhanced performance across applications, albeit within the realm of machine learning alone. Studies [57,58] focused on deep learning for diverse surveillance applications, including object detection, human-activity recognition, anomaly detection, and facial recognition, achieving enhanced surveillance capabilities within their respective domains but constrained by their specificities. Collectively, these studies have contributed to improved security, privacy, and surveillance in IoT applications. However, the challenge remains to integrate these individual successes into a holistic and comprehensive solution that can address the broader spectrum of IoT security, privacy, and surveillance requirements. 
Additionally, all these advancements are constrained by the limitations imposed by the hardware capabilities of IoT devices, emphasizing the need for further innovation in this area. In terms of quantitative metrics, deep-learning-based solutions have demonstrated highly accurate and reliable results in surveillance applications. For instance, a deep-learning-based facial-recognition solution achieved an accuracy of 99.97% in a large-scale facial-recognition test. Additionally, a deep-learning-based object-detection solution achieved an accuracy of 99.9% in a real-time object-detection test.

3. Further Discussion

Various case studies have demonstrated the successful implementation of IoT and machine-learning technologies in smart cities. These technologies have the potential to transform urban environments, enhance the efficiency of urban services, and improve the quality of life for citizens. Distinctive deep-learning algorithms with video analysis have been presented as accurate smart-city applications [136]. In a study, researchers developed a deep-learning-based IoT system for remote monitoring and early detection of health issues in real time. The system demonstrated high accuracy in identifying heart conditions, achieving an accuracy of 0.982. The study investigated the potential of integrating the IoT and deep-learning technologies in medical systems for home environments, to provide real-time monitoring, timely intervention, and improved patient care while reducing healthcare costs and hospital visits [137].
Deploying deep-learning methodologies within IoT frameworks presents a multifaceted set of challenges and limitations, which must be understood in order to optimize their effectiveness. These challenges can be broadly categorized into ethical and privacy implications, scalability and resource constraints, and the need for ongoing research and development.
In surveillance applications, the integration of deep learning into the IoT has led to significant advancements in security and efficiency. IoT devices, equipped with deep-learning capabilities like convolutional neural networks, can effectively analyze video streams to identify threats and anomalies, thereby enhancing real-time monitoring and predictive insights. However, this comes with substantial privacy concerns and data-security challenges. The ability of these systems to recognize complex patterns and anomalies raises ethical questions regarding the extent and manner of surveillance, especially considering the potential for misuse or overreach.
Deep learning is increasingly pivotal in healthcare IoT applications. Its ability to enhance diagnostic accuracy, enable personalized treatment, and monitor patient health in real time makes it a valuable tool. For example, deep-learning models have achieved high accuracy in detecting COVID-19 from chest X-ray images and skin cancer from dermoscopy images. However, these applications also face significant challenges, such as the need for large datasets and substantial computational power, particularly for complex tasks like image segmentation. While classification tasks can achieve high accuracy, more intricate tasks such as segmentation still struggle with lower precision metrics, highlighting the need for continued advancement in deep-learning methodologies for healthcare IoT applications.
The scalability of deep-learning models in the IoT is a major concern. IoT devices often have limited computational resources, which poses a challenge for deploying complex deep-learning models that require significant data-processing capabilities. Optimizing these models for deployment on resource-constrained devices is therefore crucial, and it necessitates innovative solutions that balance the computational demands of deep-learning algorithms with the inherent limitations of IoT hardware.
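A back-of-the-envelope calculation shows why model optimization such as post-training quantization matters on constrained hardware. The 5-million-parameter network below is an assumed example, not a model from the surveyed literature; the arithmetic only estimates weight storage, ignoring activations and runtime buffers:

```python
def model_memory_mb(n_params, bytes_per_weight):
    """Approximate weight-storage footprint of a network in MiB."""
    return n_params * bytes_per_weight / (1024 ** 2)

# A 5M-parameter CNN: float32 weights vs. int8 quantized weights.
fp32 = model_memory_mb(5_000_000, 4)
int8 = model_memory_mb(5_000_000, 1)
print(round(fp32, 2), "MiB ->", round(int8, 2), "MiB")
# → 19.07 MiB -> 4.77 MiB
```

The 4x reduction from float32 to int8 is often the difference between a model that fits in a microcontroller's flash and one that does not, which is why quantization, pruning, and knowledge distillation feature so prominently in IoT deployments.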

4. Conclusions

In conclusion, our analysis delved into the latest models for various tasks within the IoT, shedding light on their potential as well as the challenges and limitations they face. Throughout our discussion, the dominant theme was the expanding role of deep learning in IoT systems across diverse tasks. A key insight from our exploration is the effectiveness of algorithmic approaches such as decision trees and random forests in anomaly detection within the IoT, yielding notable F1 scores that surpass those of neural networks in specific contexts. This underscores the importance of tailoring the selection of machine-learning techniques to the distinctive requirements of IoT applications.
The adaptability and proficiency of deep learning in handling intricate, multifaceted data in dynamic IoT environments present an undeniable advantage. However, the demand for extensive data and computational resources, along with the inherent opacity of deep-learning models, poses significant challenges. Moreover, while deep learning has demonstrated promise in fortifying security, privacy, and surveillance in the IoT, the integration of these advancements into a comprehensive solution remains a formidable undertaking. The constraints imposed by IoT-device hardware further accentuate the need for innovative solutions in this perpetually evolving landscape. To fully unlock the potential of deep learning in the IoT, forthcoming research should prioritize reducing computational requirements, enhancing model interpretability, and improving adaptability to resource-constrained IoT systems. These measures are pivotal to ensuring the widespread acceptance and reliability of deep learning in critical applications. As we navigate the intricate realm of the IoT, striking a balance between technological innovation and pragmatic solutions that address the challenges of this evolving ecosystem is imperative.
Despite these advantages, deep learning in the IoT faces persistent obstacles: the need for extensive data, significant computational resources, and the inherent complexity of deep-learning models, which makes them opaque and hard to interpret, all compounded by the hardware limitations of IoT devices. Looking ahead, the future of deep learning in the IoT appears promising, especially in automotive, industrial automation, and mechatronics applications. In the automotive sector, the potential for deep learning to enhance autonomous-driving systems, advanced driver-assistance systems (ADAS), and predictive maintenance stands as a critical area for development [138,139,140,141,142,143,144,145,146,147,148,149,150,151,152]. Similarly, in industrial and manufacturing settings, the integration of deep learning holds the promise of optimizing production processes, predicting equipment failures, and improving overall operational efficiency. Furthermore, in automation and mechatronics, deep learning can contribute to the development of intelligent systems capable of adaptive and predictive behavior, thereby revolutionizing the efficiency and productivity of various industrial processes. Progress on reducing computational requirements, enhancing model interpretability, and improving adaptability to resource-constrained IoT systems will be essential for deep learning's wider acceptance and reliability in these critical application domains.

Author Contributions

Conceptualization, A.E., P.D., S.S. and Q.Z.; methodology, A.E., P.D., S.S. and Q.Z.; software, A.E., P.D., S.S. and Q.Z.; validation, A.E., P.D., S.S. and Q.Z.; formal analysis, A.E., P.D., S.S. and Q.Z.; investigation, A.E., P.D., S.S. and Q.Z.; resources, A.E., P.D., S.S. and Q.Z.; data curation, A.E., P.D., S.S. and Q.Z.; writing—original draft preparation, A.E., P.D., S.S. and Q.Z.; writing—review and editing, A.E., P.D., S.S. and Q.Z.; visualization, A.E., P.D., S.S. and Q.Z.; supervision, A.E., P.D., S.S. and Q.Z.; project administration, A.E., P.D., S.S. and Q.Z.; funding acquisition, A.E., P.D., S.S. and Q.Z. The authors contributed equally to this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the “Horizon Europe program under grant agreement 101092850 (AERO project)”; by the “European High-Performance Computing Joint Undertaking (JU) program under grant agreement 101033975 (EUPEX)”; and by “PNRR project CN1 Big Data, HPC and Quantum Computing in Spoke 6 multiscale modeling and engineering applications”.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kopetz, H.; Steiner, W. Internet of things. In Real-Time Systems: Design Principles for Distributed Embedded Applications; Springer: Berlin/Heidelberg, Germany, 2022; pp. 325–341. [Google Scholar]
  2. Liu, W.; Wang, Z.; Liu, X.; Zeng, N.; Liu, Y.; Alsaadi, F.E. A survey of deep neural network architectures and their applications. Neurocomputing 2017, 234, 11–26. [Google Scholar] [CrossRef]
  3. Saponara, S.; Elhanashi, A.; Gagliardi, A. Reconstruct fingerprint images using deep learning and sparse autoencoder algorithms. In Proceedings of the Real-Time Image Processing and Deep Learning 2021, Brussels, Belgium, 12–16 April 2021; Volume 11736, pp. 9–18. [Google Scholar]
  4. Saponara, S.; Elhanashi, A.; Gagliardi, A. Enabling YOLOv2 Models to Monitor Fire and Smoke Detection Remotely in Smart Infrastructures. In Proceedings of the Applications in Electronics Pervading Industry, Environment and Society: APPLEPIES 2020, Virtual Online, 19–20 November 2020; Springer: Berlin/Heidelberg, Germany, 2021; pp. 30–38. [Google Scholar]
  5. Dini, P.; Elhanashi, A.; Begni, A.; Saponara, S.; Zheng, Q.; Gasmi, K. Overview on Intrusion Detection Systems Design Exploiting Machine Learning for Networking Cybersecurity. Appl. Sci. 2023, 13, 7507. [Google Scholar] [CrossRef]
  6. Begni, A.; Dini, P.; Saponara, S. Design and Test of an LSTM-Based Algorithm for Li-Ion Batteries Remaining Useful Life Estimation. In Proceedings of the International Conference on Applications in Electronics Pervading Industry, Environment and Society, Genoa, Italy, 26–27 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 373–379. [Google Scholar]
  7. Elhanashi, A.; Gasmi, K.; Begni, A.; Dini, P.; Zheng, Q.; Saponara, S. Machine Learning Techniques for Anomaly-Based Detection System on CSE-CIC-IDS2018 Dataset. In Proceedings of the International Conference on Applications in Electronics Pervading Industry, Environment and Society, Genoa, Italy, 26–27 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 131–140. [Google Scholar]
  8. Dini, P.; Begni, A.; Ciavarella, S.; De Paoli, E.; Fiorelli, G.; Silvestro, C.; Saponara, S. Design and Testing Novel One-Class Classifier Based on Polynomial Interpolation With Application to Networking Security. IEEE Access 2022, 10, 67910–67924. [Google Scholar] [CrossRef]
  9. Dini, P.; Saponara, S. Analysis, design, and comparison of machine-learning techniques for networking intrusion detection. Designs 2021, 5, 9. [Google Scholar] [CrossRef]
  10. Budiman, A.; Yaputera, R.A.; Achmad, S.; Kurniawan, A. Student attendance with face recognition (LBPH or CNN): Systematic literature review. Procedia Comput. Sci. 2023, 216, 31–38. [Google Scholar] [CrossRef]
  11. Dong, J.; He, F.; Guo, Y.; Zhang, H. A Commodity Review Sentiment Analysis Based on BERT-CNN Model. In Proceedings of the 2020 5th International Conference on Computer and Communication Systems (ICCCS), Shanghai, China, 15–18 May 2020; pp. 143–147. [Google Scholar] [CrossRef]
  12. Dhruv, P.; Naskar, S. Image classification using convolutional neural network (CNN) and recurrent neural network (RNN): A review. In Proceedings of the International Conference on Machine Learning and Information Processing (ICMLIP 2019), Pune, India, 27–28 December 2019; pp. 367–381. [Google Scholar]
  13. Li, W.; Zhu, L.; Shi, Y.; Guo, K.; Cambria, E. User reviews: Sentiment analysis using lexicon integrated two-channel CNN–LSTM family models. Appl. Soft Comput. 2020, 94, 106435. [Google Scholar] [CrossRef]
  14. Yao, G.; Lei, T.; Zhong, J. A review of convolutional-neural-network-based action recognition. Pattern Recognit. Lett. 2019, 118, 14–22. [Google Scholar] [CrossRef]
  15. Minaee, S.; Azimi, E.; Abdolrashidi, A. Deep-sentiment: Sentiment analysis using ensemble of cnn and bi-lstm models. arXiv 2019, arXiv:1904.04206. [Google Scholar]
  16. Rehman, A.U.; Malik, A.K.; Raza, B.; Ali, W. A hybrid CNN-LSTM model for improving accuracy of movie reviews sentiment analysis. Multimed. Tools Appl. 2019, 78, 26597–26613. [Google Scholar] [CrossRef]
  17. Sindagi, V.A.; Patel, V.M. A survey of recent advances in cnn-based single image crowd counting and density estimation. Pattern Recognit. Lett. 2018, 107, 3–16. [Google Scholar] [CrossRef]
  18. Li, X.; Li, Q.; Kim, J. A Review Helpfulness Modeling Mechanism for Online E-commerce: Multi-Channel CNN End-to-End Approach. Appl. Artif. Intell. 2023, 37, 2166226. [Google Scholar] [CrossRef]
  19. Indira, D.N.V.S.L.S.; Goddu, J.; Indraja, B.; Challa, V.M.L.; Manasa, B. A review on fruit recognition and feature evaluation using CNN. Mater. Today Proc. 2023, 80, 3438–3443. [Google Scholar] [CrossRef]
  20. Liu, Y.; Wang, L.; Shi, T.; Li, J. Detection of spam reviews through a hierarchical attention architecture with N-gram CNN and Bi-LSTM. Inf. Syst. 2022, 103, 101865. [Google Scholar] [CrossRef]
  21. Bhuvaneshwari, P.; Rao, A.N.; Robinson, Y.H.; Thippeswamy, M. Sentiment analysis for user reviews using Bi-LSTM self-attention based CNN model. Multimed. Tools Appl. 2022, 81, 12405–12419. [Google Scholar] [CrossRef]
  22. Allmendinger, A.; Spaeth, M.; Saile, M.; Peteinatos, G.G.; Gerhards, R. Precision chemical weed management strategies: A review and a design of a new CNN-based modular spot sprayer. Agronomy 2022, 12, 1620. [Google Scholar] [CrossRef]
  23. Lu, J.; Tan, L.; Jiang, H. Review on convolutional neural network (CNN) applied to plant leaf disease classification. Agriculture 2021, 11, 707. [Google Scholar] [CrossRef]
  24. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 1–74. [Google Scholar] [CrossRef] [PubMed]
  25. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49. [Google Scholar] [CrossRef]
  26. Alahmari, F.; Naim, A.; Alqahtani, H. E-Learning Modeling Technique and Convolution Neural Networks in Online Education. In IoT-Enabled Convolutional Neural Networks: Techniques and Applications; River Publishers: Aalborg, Denmark, 2023; pp. 261–295. [Google Scholar]
  27. Krichen, M. Convolutional neural networks: A survey. Computers 2023, 12, 151. [Google Scholar] [CrossRef]
  28. Mers, M.; Yang, Z.; Hsieh, Y.A.; Tsai, Y. Recurrent neural networks for pavement performance forecasting: Review and model performance comparison. Transp. Res. Rec. 2023, 2677, 610–624. [Google Scholar] [CrossRef]
  29. Kaur, M.; Mohta, A. A Review of Deep Learning with Recurrent Neural Network. In Proceedings of the 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 27–29 November 2019; pp. 460–465. [Google Scholar] [CrossRef]
  30. Al-Smadi, M.; Qawasmeh, O.; Al-Ayyoub, M.; Jararweh, Y.; Gupta, B. Deep Recurrent neural network vs. support vector machine for aspect-based sentiment analysis of Arabic hotels’ reviews. J. Comput. Sci. 2018, 27, 386–393. [Google Scholar] [CrossRef]
  31. Chen, Y.; Cheng, Q.; Cheng, Y.; Yang, H.; Yu, H. Applications of Recurrent Neural Networks in Environmental Factor Forecasting: A Review. Neural Comput. 2018, 30, 2855–2881. [Google Scholar] [CrossRef] [PubMed]
  32. Durstewitz, D.; Koppe, G.; Thurm, M.I. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat. Rev. Neurosci. 2023, 24, 693–710. [Google Scholar] [CrossRef]
  33. Bonassi, F.; Farina, M.; Xie, J.; Scattolini, R. On recurrent neural networks for learning-based control: Recent results and ideas for future developments. J. Process. Control. 2022, 114, 92–104. [Google Scholar] [CrossRef]
  34. Zhu, J.; Jiang, Q.; Shen, Y.; Qian, C.; Xu, F.; Zhu, Q. Application of recurrent neural network to mechanical fault diagnosis: A review. J. Mech. Sci. Technol. 2022, 36, 527–542. [Google Scholar] [CrossRef]
  35. Mao, S.; Sejdić, E. A Review of Recurrent Neural Network-Based Methods in Computational Physiology. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 6983–7003. [Google Scholar] [CrossRef]
  36. Weerakody, P.B.; Wong, K.W.; Wang, G.; Ela, W. A review of irregular time series data handling with gated recurrent neural networks. Neurocomputing 2021, 441, 161–178. [Google Scholar] [CrossRef]
  37. Hibat-Allah, M.; Ganahl, M.; Hayward, L.E.; Melko, R.G.; Carrasquilla, J. Recurrent neural network wave functions. Phys. Rev. Res. 2020, 2, 023358. [Google Scholar] [CrossRef]
  38. Barik, K.; Misra, S.; Ray, A.K.; Bokolo, A. LSTM-DGWO-Based Sentiment Analysis Framework for Analyzing Online Customer Reviews. Comput. Intell. Neurosci. 2023, 2023, 6348831. [Google Scholar] [CrossRef]
  39. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019, 31, 1235–1270. [Google Scholar] [CrossRef]
  40. Rao, G.; Huang, W.; Feng, Z.; Cong, Q. LSTM with sentence representations for document-level sentiment classification. Neurocomputing 2018, 308, 49–57. [Google Scholar] [CrossRef]
  41. Yadav, V.; Verma, P.; Katiyar, V. Long short term memory (LSTM) model for sentiment analysis in social data for e-commerce products reviews in Hindi languages. Int. J. Inf. Technol. 2023, 15, 759–772. [Google Scholar] [CrossRef]
  42. Ghimire, S.; Deo, R.C.; Wang, H.; Al-Musaylh, M.S.; Casillas-Pérez, D.; Salcedo-Sanz, S. Stacked LSTM sequence-to-sequence autoencoder with feature selection for daily solar radiation prediction: A review and new modeling results. Energies 2022, 15, 1061. [Google Scholar] [CrossRef]
  43. Sivakumar, M.; Uyyala, S.R. Aspect-based sentiment analysis of mobile phone reviews using LSTM and fuzzy logic. Int. J. Data Sci. Anal. 2021, 12, 355–367. [Google Scholar] [CrossRef]
  44. Muhammad, P.F.; Kusumaningrum, R.; Wibowo, A. Sentiment analysis using Word2vec and long short-term memory (LSTM) for Indonesian hotel reviews. Procedia Comput. Sci. 2021, 179, 728–735. [Google Scholar] [CrossRef]
  45. Hossain, N.; Bhuiyan, M.R.; Tumpa, Z.N.; Hossain, S.A. Sentiment Analysis of Restaurant Reviews using Combined CNN-LSTM. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 1–3 July 2020; pp. 1–5. [Google Scholar] [CrossRef]
  46. Bacanin, N.; Jovanovic, L.; Zivkovic, M.; Kandasamy, V.; Antonijevic, M.; Deveci, M.; Strumberger, I. Multivariate energy forecasting via metaheuristic tuned long-short term memory and gated recurrent unit neural networks. Inf. Sci. 2023, 642, 119122. [Google Scholar] [CrossRef]
  47. Santur, Y. Sentiment Analysis Based on Gated Recurrent Unit. In Proceedings of the 2019 International Artificial Intelligence and Data Processing Symposium (IDAP), Malatya, Turkey, 21–22 September 2019; pp. 1–5. [Google Scholar] [CrossRef]
  48. Chen, J.; Jing, H.; Chang, Y.; Liu, Q. Gated recurrent unit based recurrent neural network for remaining useful life prediction of nonlinear deterioration process. Reliab. Eng. Syst. Saf. 2019, 185, 372–382. [Google Scholar] [CrossRef]
  49. Wang, Y.; Liao, W.; Chang, Y. Gated recurrent unit network-based short-term photovoltaic forecasting. Energies 2018, 11, 2163. [Google Scholar] [CrossRef]
  50. Shen, G.; Tan, Q.; Zhang, H.; Zeng, P.; Xu, J. Deep learning with gated recurrent unit networks for financial sequence predictions. Procedia Comput. Sci. 2018, 131, 895–903. [Google Scholar] [CrossRef]
  51. Shukla, P.K.; Stalin, S.; Joshi, S.; Shukla, P.K.; Pareek, P.K. Optimization assisted bidirectional gated recurrent unit for healthcare monitoring system in big-data. Appl. Soft Comput. 2023, 138, 110178. [Google Scholar] [CrossRef]
  52. ArunKumar, K.; Kalaga, D.V.; Kumar, C.M.S.; Kawaji, M.; Brenza, T.M. Comparative analysis of Gated Recurrent Units (GRU), long Short-Term memory (LSTM) cells, autoregressive Integrated moving average (ARIMA), seasonal autoregressive Integrated moving average (SARIMA) for forecasting COVID-19 trends. Alex. Eng. J. 2022, 61, 7585–7603. [Google Scholar] [CrossRef]
  53. Lin, H.; Gharehbaghi, A.; Zhang, Q.; Band, S.S.; Pai, H.T.; Chau, K.W.; Mosavi, A. Time series-based groundwater level forecasting using gated recurrent unit deep neural networks. Eng. Appl. Comput. Fluid Mech. 2022, 16, 1655–1672. [Google Scholar] [CrossRef]
  54. Farah, S.; Humaira, N.; Aneela, Z.; Steffen, E. Short-term multi-hour ahead country-wide wind power prediction for Germany using gated recurrent unit deep learning. Renew. Sustain. Energy Rev. 2022, 167, 112700. [Google Scholar] [CrossRef]
  55. Zhao, N.; Gao, H.; Wen, X.; Li, H. Combination of Convolutional Neural Network and Gated Recurrent Unit for Aspect-Based Sentiment Analysis. IEEE Access 2021, 9, 15561–15569. [Google Scholar] [CrossRef]
  56. Zhang, Y.G.; Tang, J.; He, Z.Y.; Tan, J.; Li, C. A novel displacement prediction method using gated recurrent unit model with time series analysis in the Erdaohe landslide. Nat. Hazards 2021, 105, 783–813. [Google Scholar] [CrossRef]
  57. Sachin, S.; Tripathi, A.; Mahajan, N.; Aggarwal, S.; Nagrath, P. Sentiment analysis using gated recurrent neural networks. Comput. Sci. 2020, 1, 74. [Google Scholar] [CrossRef]
  58. Tang, D.; Rong, W.; Qin, S.; Yang, J.; Xiong, Z. A n-gated recurrent unit with review for answer selection. Neurocomputing 2020, 371, 158–165. [Google Scholar] [CrossRef]
  59. Sun, J.; Fard, A.P.; Mahoor, M.H. Xnodr and xnidr: Two accurate and fast fully connected layers for convolutional neural networks. J. Intell. Robot. Syst. 2023, 109, 17. [Google Scholar] [CrossRef]
  60. Laredo, D.; Ma, S.F.; Leylaz, G.; Schütze, O.; Sun, J.Q. Automatic model selection for fully connected neural networks. Int. J. Dyn. Control. 2020, 8, 1063–1079. [Google Scholar] [CrossRef]
  61. Petersen, P.; Voigtlaender, F. Equivalence of approximation by convolutional neural networks and fully-connected networks. Proc. Am. Math. Soc. 2020, 148, 1567–1581. [Google Scholar] [CrossRef]
  62. Wang, Y.; Zhang, F.; Zhang, X.; Zhang, S. Series AC Arc Fault Detection Method Based on Hybrid Time and Frequency Analysis and Fully Connected Neural Network. IEEE Trans. Ind. Inform. 2019, 15, 6210–6219. [Google Scholar] [CrossRef]
  63. Borovykh, A.; Oosterlee, C.W.; Bohté, S.M. Generalization in fully-connected neural networks for time series forecasting. J. Comput. Sci. 2019, 36, 101020. [Google Scholar] [CrossRef]
  64. Ganju, K.; Wang, Q.; Yang, W.; Gunter, C.A.; Borisov, N. Property inference attacks on fully connected neural networks using permutation invariant representations. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; pp. 619–633. [Google Scholar]
  65. Honcharenko, T.; Akselrod, R.; Shpakov, A. Information system based on multi-value classification of fully connected neural network for construction management. Iaes Int. J. Artif. Intell. 2023, 12, 593. [Google Scholar] [CrossRef]
  66. Scabini, L.F.; Bruno, O.M. Structure and performance of fully connected neural networks: Emerging complex network properties. Phys. Stat. Mech. Its Appl. 2023, 615, 128585. [Google Scholar] [CrossRef]
  67. Yuan, B.; Wolfe, C.R.; Dun, C.; Tang, Y.; Kyrillidis, A.; Jermaine, C. Distributed learning of fully connected neural networks using independent subnet training. Proc. Vldb Endow. 2022, 15, 1581–1590. [Google Scholar] [CrossRef]
  68. Xue, Y.; Wang, Y.; Liang, J. A self-adaptive gradient descent search algorithm for fully-connected neural networks. Neurocomputing 2022, 478, 70–80. [Google Scholar] [CrossRef]
  69. Li, Z.; Zhang, Y.; Abu-Siada, A.; Chen, X.; Li, Z.; Xu, Y.; Zhang, L.; Tong, Y. Fault diagnosis of transformer windings based on decision tree and fully connected neural network. Energies 2021, 14, 1531. [Google Scholar] [CrossRef]
  70. Sudharsan, B.; Salerno, S.; Nguyen, D.D.; Yahya, M.; Wahid, A.; Yadav, P.; Breslin, J.G.; Ali, M.I. TinyML Benchmark: Executing Fully Connected Neural Networks on Commodity Microcontrollers. In Proceedings of the 2021 IEEE 7th World Forum on Internet of Things (WF-IoT), New Orleans, LA, USA, 14 June–31 July 2021; pp. 883–884. [Google Scholar] [CrossRef]
  71. Basha, S.S.; Dubey, S.R.; Pulabaigari, V.; Mukherjee, S. Impact of fully connected layers on performance of convolutional neural networks for image classification. Neurocomputing 2020, 378, 112–119. [Google Scholar] [CrossRef]
  72. Rondon, L.P.; Babun, L.; Aris, A.; Akkaya, K.; Uluagac, A.S. Survey on enterprise Internet-of-Things systems (E-IoT): A security perspective. Ad Hoc Netw. 2022, 125, 102728. [Google Scholar] [CrossRef]
  73. Latif, S.; Driss, M.; Boulila, W.; Huma, Z.E.; Jamal, S.S.; Idrees, Z.; Ahmad, J. Deep learning for the industrial internet of things (IIoT): A comprehensive survey of techniques, implementation frameworks, potential applications, and future directions. Sensors 2021, 21, 7518. [Google Scholar] [CrossRef]
  74. Saleem, T.J.; Chishti, M.A. Deep learning for the internet of things: Potential benefits and use-cases. Digit. Commun. Netw. 2021, 7, 526–542. [Google Scholar] [CrossRef]
  75. Lakshmanna, K.; Kaluri, R.; Gundluru, N.; Alzamil, Z.S.; Rajput, D.S.; Khan, A.A.; Haq, M.A.; Alhussen, A. A review on deep learning techniques for IoT data. Electronics 2022, 11, 1604. [Google Scholar] [CrossRef]
  76. Reddy, D.K.; Behera, H.S.; Nayak, J.; Vijayakumar, P.; Naik, B.; Singh, P.K. Deep neural network based anomaly detection in Internet of Things network traffic tracking for the applications of future smart cities. Trans. Emerg. Telecommun. Technol. 2021, 32, e4121. [Google Scholar] [CrossRef]
  77. Ma, W. Analysis of anomaly detection method for Internet of things based on deep learning. Trans. Emerg. Telecommun. Technol. 2020, 31, e3893. [Google Scholar] [CrossRef]
  78. Zhou, X.; Liang, W.; Wang, K.I.K.; Wang, H.; Yang, L.T.; Jin, Q. Deep-Learning-Enhanced Human Activity Recognition for Internet of Healthcare Things. IEEE Internet Things J. 2020, 7, 6429–6438. [Google Scholar] [CrossRef]
  79. Abdel-Basset, M.; Hawash, H.; Chakrabortty, R.K.; Ryan, M.; Elhoseny, M.; Song, H. ST-DeepHAR: Deep Learning Model for Human Activity Recognition in IoHT Applications. IEEE Internet Things J. 2021, 8, 4969–4979. [Google Scholar] [CrossRef]
  80. Lu, W.; Fan, F.; Chu, J.; Jing, P.; Yuting, S. Wearable Computing for Internet of Things: A Discriminant Approach for Human Activity Recognition. IEEE Internet Things J. 2019, 6, 2749–2759. [Google Scholar] [CrossRef]
  81. Li, D.; Lan, M.; Hu, Y. Energy-saving service management technology of internet of things using edge computing and deep learning. Complex Intell. Syst. 2022, 8, 3867–3879. [Google Scholar] [CrossRef]
  82. Shah, S.F.A.; Iqbal, M.; Aziz, Z.; Rana, T.A.; Khalid, A.; Cheah, Y.N.; Arif, M. The role of machine learning and the internet of things in smart buildings for energy efficiency. Appl. Sci. 2022, 12, 7882. [Google Scholar] [CrossRef]
  83. Zhao, R.; Wang, X.; Xia, J.; Fan, L. Deep reinforcement learning based mobile edge computing for intelligent Internet of Things. Phys. Commun. 2020, 43, 101184. [Google Scholar] [CrossRef]
  84. Li, H.; Ota, K.; Dong, M. Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing. IEEE Netw. 2018, 32, 96–101. [Google Scholar] [CrossRef]
  85. Ren, J.; Pan, Y.; Goscinski, A.; Beyah, R.A. Edge Computing for the Internet of Things. IEEE Netw. 2018, 32, 6–7. [Google Scholar] [CrossRef]
  86. Hawezi, R.S.; Khoshaba, F.S.; Kareem, S.W. A comparison of automated classification techniques for image processing in video internet of things. Comput. Electr. Eng. 2022, 101, 108074. [Google Scholar] [CrossRef]
  87. Wang, X.; Wang, X.; Mao, S. RF Sensing in the Internet of Things: A General Deep Learning Framework. IEEE Commun. Mag. 2018, 56, 62–67. [Google Scholar] [CrossRef]
  88. Tien, J.M. Internet of things, real-time decision making, and artificial intelligence. Ann. Data Sci. 2017, 4, 149–178. [Google Scholar] [CrossRef]
  89. Nardi, P.M. Doing Survey Research: A Guide to Quantitative Methods; Routledge: Oxfordshire, UK, 2018. [Google Scholar]
  90. DeMedeiros, K.; Hendawi, A.; Alvarez, M. A survey of AI-based anomaly detection in IoT and sensor networks. Sensors 2023, 23, 1352. [Google Scholar] [CrossRef] [PubMed]
  91. Hasan, M.; Islam, M.M.; Zarif, M.I.I.; Hashem, M. Attack and anomaly detection in IoT sensors in IoT sites using machine learning approaches. Internet Things 2019, 7, 100059. [Google Scholar] [CrossRef]
  92. Jia, Y.; Cheng, Y.; Shi, J. Semi-Supervised Variational Temporal Convolutional Network for IoT Communication Multi-Anomaly Detection. In Proceedings of the 2022 3rd International Conference on Control, Robotics and Intelligent System, Xi’an, China, 23–25 September 2022; pp. 67–73. [Google Scholar]
  93. Bock, M.; Hoelzemann, A.; Moeller, M.; Van Laerhoven, K. Investigating (re)current state-of-the-art in human activity recognition datasets. Front. Comput. Sci. 2022, 4, 119. [Google Scholar] [CrossRef]
  94. Shoeibi, A.; Sadeghi, D.; Moridian, P.; Ghassemi, N.; Heras, J.; Alizadehsani, R.; Khadem, A.; Kong, Y.; Nahavandi, S.; Zhang, Y.D.; et al. Automatic diagnosis of schizophrenia in EEG signals using CNN-LSTM models. Front. Neuroinform. 2021, 15, 777977. [Google Scholar] [CrossRef]
  95. Bolhasani, H.; Mohseni, M.; Rahmani, A.M. Deep learning applications for IoT in health care: A systematic review. Inform. Med. Unlocked 2021, 23, 100550. [Google Scholar] [CrossRef]
  96. Kong, L.; Cheng, J. Classification and detection of COVID-19 X-ray images based on DenseNet and VGG16 feature fusion. Biomed. Signal Process. Control. 2022, 77, 103772. [Google Scholar] [CrossRef]
  97. Morshed, M.G.; Sultana, T.; Alam, A.; Lee, Y.K. Human Action Recognition: A Taxonomy-Based Survey, Updates, and Opportunities. Sensors 2023, 23, 2182. [Google Scholar] [CrossRef] [PubMed]
  98. Wang, X.; Zhang, S.; Qing, Z.; Tang, M.; Zuo, Z.; Gao, C.; Jin, R.; Sang, N. Hybrid relation guided set matching for few-shot action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 19948–19957. [Google Scholar]
  99. Song, Y.F.; Zhang, Z.; Shan, C.; Wang, L. Constructing stronger and faster baselines for skeleton-based action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 1474–1488. [Google Scholar] [CrossRef] [PubMed]
  100. Gao, R.; Oh, T.H.; Grauman, K.; Torresani, L. Listen to look: Action recognition by previewing audio. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 10457–10467. [Google Scholar]
  101. Zhu, W.; Lan, C.; Xing, J.; Zeng, W.; Li, Y.; Shen, L.; Xie, X. Co-occurrence feature learning for skeleton based action recognition using regularized deep LSTM networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30. [Google Scholar]
  102. Das, S.; Koperski, M.; Bremond, F.; Francesca, G. Deep-temporal lstm for daily living action recognition. In Proceedings of the 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Auckland, New Zealand, 27–30 November 2018; pp. 1–6. [Google Scholar]
  103. Sharma, S.; Kiros, R.; Salakhutdinov, R. Action recognition using visual attention. arXiv 2015, arXiv:1511.04119. [Google Scholar]
  104. Wang, L.; Xiong, Y.; Wang, Z.; Qiao, Y.; Lin, D.; Tang, X.; Van Gool, L. Temporal segment networks: Towards good practices for deep action recognition. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 20–36. [Google Scholar]
  105. Mendes, D.; Lopes, M.; Parreira, P.; Fonseca, C. Healthcare computer reasoning addressing chronically Ill societies using IoT: Deep learning AI to the rescue of home-based healthcare. In Chronic Illness and Long-Term Care; IGI Global: Hershey, PA, USA, 2019; pp. 720–736. [Google Scholar]
  106. Foggia, P.; Saggese, A.; Strisciuglio, N.; Vento, M. Exploiting the deep learning paradigm for recognizing human actions. In Proceedings of the 2014 11th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Seoul, Republic of Korea, 26–29 August 2014; pp. 93–98. [Google Scholar]
  107. Ahsan, U.; Sun, C.; Essa, I. Discrimnet: Semi-supervised action recognition from videos using generative adversarial networks. arXiv 2018, arXiv:1801.07230. [Google Scholar]
  108. Shi, L.; Zhang, Y.; Cheng, J.; Lu, H. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12026–12035. [Google Scholar]
  109. Sandstrom, G.M.; Lathia, N.; Mascolo, C.; Rentfrow, P.J. Opportunities for Smartphones in Clinical Care. J. Clin. Psychiatry 2016, 77, 13476. [Google Scholar] [CrossRef] [PubMed]
  110. Liu, L.; Xu, J.; Huan, Y.; Zou, Z.; Yeh, S.C.; Zheng, L.R. A smart dental health-IoT platform based on intelligent hardware, deep learning, and mobile terminal. IEEE J. Biomed. Health Inform. 2019, 24, 898–906. [Google Scholar] [CrossRef]
  111. Sharma, S.; Chen, K.; Sheth, A. Toward practical privacy-preserving analytics for IoT and cloud-based healthcare systems. IEEE Internet Comput. 2018, 22, 42–51. [Google Scholar] [CrossRef]
  112. Klenk, J.; Schwickert, L.; Palmerini, L.; Mellone, S.; Bourke, A.; Ihlen, E.A.; Kerse, N.; Hauer, K.; Pijnappels, M.; Synofzik, M.; et al. The FARSEEING real-world fall repository: A large-scale collaborative database to collect and share sensor signals from real-world falls. Eur. Rev. Aging Phys. Act. 2016, 13, 1–7. [Google Scholar] [CrossRef]
  113. Malasinghe, L.P.; Ramzan, N.; Dahal, K. Remote patient monitoring: A comprehensive study. J. Ambient. Intell. Humaniz. Comput. 2019, 10, 57–76. [Google Scholar] [CrossRef]
  114. Wei, J.; Cheok, A.D. Foodie: Play with your food promote interaction and fun with edible interface. IEEE Trans. Consum. Electron. 2012, 58, 178–183. [Google Scholar] [CrossRef]
  115. Bhatta, A.; Albiero, V.; Bowyer, K.W.; King, M.C. The gender gap in face recognition accuracy is a hairy problem. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 303–312. [Google Scholar]
  116. Vaishali, A.; Likitha, G.; Srujana, K.; Sunitha, B. Amazon Rekognition. Math. Stat. Eng. Appl. 2020, 69, 449–453. [Google Scholar]
  117. Leonor Estévez Dorantes, T.; Bertani Hernández, D.; León Reyes, A.; Elena Miranda Medina, C. Development of a powerful facial recognition system through an API using ESP32-Cam and Amazon Rekognition service as tools offered by Industry 5.0. In Proceedings of the 5th International Conference on Machine Vision and Applications (ICMVA), Singapore, 18–20 February 2022; pp. 76–81. [Google Scholar]
  118. Indla, R.K. An Overview on Amazon Rekognition Technology. Master's Thesis, California State University, San Bernardino, CA, USA, 2021. Available online: https://scholarworks.lib.csusb.edu/cgi/viewcontent.cgi?article=2396&context=etd (accessed on 27 November 2023).
  119. Pimenov, D.Y.; Bustillo, A.; Wojciechowski, S.; Sharma, V.S.; Gupta, M.K.; Kuntoğlu, M. Artificial intelligence systems for tool condition monitoring in machining: Analysis and critical review. J. Intell. Manuf. 2023, 34, 2079–2121. [Google Scholar] [CrossRef]
  120. Sabato, A.; Dabetwar, S.; Kulkarni, N.N.; Fortino, G. Noncontact Sensing Techniques for AI-Aided Structural Health Monitoring: A Systematic Review. IEEE Sens. J. 2023, 23, 4672–4684. [Google Scholar] [CrossRef]
  121. Shaik, T.; Tao, X.; Higgins, N.; Li, L.; Gururajan, R.; Zhou, X.; Acharya, U.R. Remote patient monitoring using artificial intelligence: Current state, applications, and challenges. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2023, 13, e1485. [Google Scholar] [CrossRef]
  122. Han, J.; Jeong, D.; Lee, S. Analysis of the HIKVISION DVR file system. In Proceedings of the Digital Forensics and Cyber Crime: 7th International Conference, ICDF2C 2015, Seoul, Republic of Korea, 6–8 October 2015; Revised Selected Papers 7. Springer: Berlin/Heidelberg, Germany, 2015; pp. 189–199. [Google Scholar]
  123. Dragonas, E.; Lambrinoudakis, C.; Kotsis, M. IoT forensics: Analysis of a HIKVISION’s mobile app. Forensic Sci. Int. Digit. Investig. 2023, 45, 301560. [Google Scholar] [CrossRef]
  124. Hashmi, M.F.; Pal, R.; Saxena, R.; Keskar, A.G. A new approach for real time object detection and tracking on high resolution and multi-camera surveillance videos using GPU. J. Cent. South Univ. 2016, 23, 130–144. [Google Scholar] [CrossRef]
  125. Cheng, S.; Zhu, Y.; Wu, S. Deep learning based efficient ship detection from drone-captured images for maritime surveillance. Ocean Eng. 2023, 285, 115440. [Google Scholar] [CrossRef]
  126. Yang, H.F.; Cai, J.; Liu, C.; Ke, R.; Wang, Y. Cooperative multi-camera vehicle tracking and traffic surveillance with edge artificial intelligence and representation learning. Transp. Res. Part C Emerg. Technol. 2023, 148, 103982. [Google Scholar] [CrossRef]
  127. Ugli, D.B.R.; Kim, J.; Mohammed, A.F.; Lee, J. Cognitive Video Surveillance Management in Hierarchical Edge Computing System with Long Short-Term Memory Model. Sensors 2023, 23, 2869. [Google Scholar] [CrossRef] [PubMed]
  128. Al-Amiedy, T.A.; Anbar, M.; Belaton, B.; Kabla, A.H.H.; Hasbullah, I.H.; Alashhab, Z.R. A systematic literature review on machine and deep learning approaches for detecting attacks in RPL-based 6LoWPAN of internet of things. Sensors 2022, 22, 3400. [Google Scholar] [CrossRef]
  129. Aversano, L.; Bernardi, M.L.; Cimitile, M.; Pecori, R. A systematic review on Deep Learning approaches for IoT security. Comput. Sci. Rev. 2021, 40, 100389. [Google Scholar] [CrossRef]
  130. Banaamah, A.M.; Ahmad, I. Intrusion Detection in IoT Using Deep Learning. Sensors 2022, 22, 8417. [Google Scholar] [CrossRef] [PubMed]
  131. Javed, A.; Awais, M.; Shoaib, M.; Khurshid, K.S.; Othman, M. Machine learning and deep learning approaches in IoT. Peerj Comput. Sci. 2023, 9, e1204. [Google Scholar] [CrossRef] [PubMed]
  132. Gandhi, V.J.; Shokeen, S.; Koshti, S. A Systematic Literature Review On Privacy Of Deep Learning Systems. arXiv 2022, arXiv:2212.04003. [Google Scholar]
  133. Gherbi, C.; Senouci, O.; Harbi, Y.; Medani, K.; Aliouat, Z. A systematic literature review of machine learning applications in IoT. Int. J. Commun. Syst. 2023, e5500. [Google Scholar] [CrossRef]
  134. Wang, X.; Wang, A.; Yi, J.; Song, Y.; Chehri, A. Small Object Detection Based on Deep Learning for Remote Sensing: A Comprehensive Review. Remote Sens. 2023, 15, 3265. [Google Scholar] [CrossRef]
  135. Zhang, K.; Zhang, Z.; Li, Z.; Qiao, Y. Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks. IEEE Signal Process. Lett. 2016, 23, 1499–1503. [Google Scholar] [CrossRef]
  136. Ullah, A.; Anwar, S.M.; Li, J.; Nadeem, L.; Mahmood, T.; Rehman, A.; Saba, T. Smart cities: The role of Internet of Things and machine learning in realizing a data-centric smart environment. Complex Intell. Syst. 2023, 1–31. [Google Scholar] [CrossRef]
  137. Islam, M.R.; Kabir, M.M.; Mridha, M.F.; Alfarhood, S.; Safran, M.; Che, D. Deep Learning-Based IoT System for Remote Monitoring and Early Detection of Health Issues in Real-Time. Sensors 2023, 23, 5204. [Google Scholar] [CrossRef]
  138. Dini, P.; Saponara, S. Cogging torque reduction in brushless motors by a nonlinear control technique. Energies 2019, 12, 2224. [Google Scholar] [CrossRef]
  139. Dini, P.; Saponara, S. Electro-thermal model-based design of bidirectional on-board chargers in hybrid and full electric vehicles. Electronics 2021, 11, 112. [Google Scholar] [CrossRef]
  140. Dini, P.; Saponara, S. Design of adaptive controller exploiting learning concepts applied to a BLDC-based drive system. Energies 2020, 13, 2512. [Google Scholar] [CrossRef]
  141. Dini, P.; Saponara, S. Processor-in-the-Loop Validation of a Gradient Descent-Based Model Predictive Control for Assisted Driving and Obstacles Avoidance Applications. IEEE Access 2022, 10, 67958–67975. [Google Scholar] [CrossRef]
  142. Bernardeschi, C.; Dini, P.; Domenici, A.; Palmieri, M.; Saponara, S. Formal verification and co-simulation in the design of a synchronous motor control algorithm. Energies 2020, 13, 4057. [Google Scholar] [CrossRef]
  143. Benedetti, D.; Agnelli, J.; Gagliardi, A.; Dini, P.; Saponara, S. Design of an Off-Grid Photovoltaic Carport for a Full Electric Vehicle Recharging. In Proceedings of the 2020 IEEE International Conference on Environment and Electrical Engineering and 2020 IEEE Industrial and Commercial Power Systems Europe (EEEIC/ICPS Europe), Madrid, Spain, 9–12 June 2020; pp. 1–6. [Google Scholar] [CrossRef]
  144. Dini, P.; Saponara, S. Design of an observer-based architecture and non-linear control algorithm for cogging torque reduction in synchronous motors. Energies 2020, 13, 2077. [Google Scholar] [CrossRef]
  145. Dini, P.; Saponara, S. Model-based design of an improved electric drive controller for high-precision applications based on feedback linearization technique. Electronics 2021, 10, 2954. [Google Scholar] [CrossRef]
  146. Benedetti, D.; Agnelli, J.; Gagliardi, A.; Dini, P.; Saponara, S. Design of a Digital Dashboard on Low-Cost Embedded Platform in a Fully Electric Vehicle. In Proceedings of the 2020 IEEE International Conference on Environment and Electrical Engineering and 2020 IEEE Industrial and Commercial Power Systems Europe (EEEIC / ICPS Europe), Madrid, Spain, 9–12 June 2020; pp. 1–5. [Google Scholar] [CrossRef]
  147. Bernardeschi, C.; Dini, P.; Domenici, A.; Mouhagir, A.; Palmieri, M.; Saponara, S.; Sassolas, T.; Zaourar, L. Co-simulation of a model predictive control system for automotive applications. In Proceedings of the International Conference on Software Engineering and Formal Methods, Madrid, Spain, 17–21 May 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 204–220. [Google Scholar]
  148. Bernardeschi, C.; Dini, P.; Domenici, A.; Saponara, S. Co-simulation and Verification of a Non-linear Control System for Cogging Torque Reduction in Brushless Motors. In Proceedings of the Software Engineering and Formal Methods: SEFM 2019 Collocated Workshops: CoSim-CPS, ASYDE, CIFMA, and FOCLASA, Oslo, Norway, 16–20 September 2019; Revised Selected Papers 17. Springer: Berlin/Heidelberg, Germany, 2020; pp. 3–19. [Google Scholar]
  149. Dini, P.; Ariaudo, G.; Botto, G.; Greca, F.L.; Saponara, S. Real-time electro-thermal modelling & predictive control design of resonant power converter in full electric vehicle applications. IET Power Electron. 2023, 16, 2045–2064. [Google Scholar]
  150. Dini, P.; Saponara, S. Review on model based design of advanced control algorithms for cogging torque reduction in power drive systems. Energies 2022, 15, 8990. [Google Scholar] [CrossRef]
  151. Pacini, F.; Matteo, S.D.; Dini, P.; Fanucci, L.; Bucchi, F. Innovative Plug-and-Play System for Electrification of Wheel-Chairs. IEEE Access 2023, 11, 89038–89051. [Google Scholar] [CrossRef]
  152. Dini, P.; Saponara, S.; Colicelli, A. Overview on Battery Charging Systems for Electric Vehicles. Electronics 2023, 12, 4295. [Google Scholar] [CrossRef]
Figure 1. General deep learning in IoT systems.
Figure 2. Representation of a convolutional neural network architecture.
Figure 3. Mathematics inside a fully connected layer.
Figure 4. An example of Deep CNN–LSTM architecture for image-recognition applications [94].
Figure 5. Deep learning in biomedical IoT.
Table 1. Scores of various models on the single DS2OS Dataset.
| Reference | Approach | F1   |
|-----------|----------|------|
| [76]      | MLP      | 0.98 |
| [91]      | LR       | 0.98 |
| [91]      | SVM      | 0.98 |
| [91]      | DT       | 0.99 |
| [91]      | RF       | 0.99 |
| [91]      | MLP      | 0.99 |
| [92]      | SS-TCVN  | 0.96 |
Table 2. Results of 2-layer DeepConvLSTM on a private Dataset [93].
| Units | Accuracy (%) | Precision (%) | Recall (%) | F1 (%) |
|-------|--------------|---------------|------------|--------|
| 128   | 69.59        | 88.48         | 76.80      | 81.92  |
| 256   | 70.63        | 87.87         | 78.52      | 82.70  |
| 512   | 71.17        | 88.23         | 78.93      | 83.03  |
| 1024  | 72.26        | 88.07         | 88.38      | 83.75  |
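As a sanity check on metrics of this kind, F1 is the harmonic mean of precision and recall. The sketch below (plain Python, values taken from the 128-unit row above) shows the calculation; note that the result does not exactly reproduce the tabulated F1, which suggests the reported figure is likely a per-class (macro) average rather than one computed from the aggregate precision and recall.

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (inputs and output in percent)."""
    return 2 * precision * recall / (precision + recall)

# 128-unit row of Table 2: precision 88.48%, recall 76.80%
print(round(f1(88.48, 76.80), 2))  # → 82.23, close to but not equal to the tabulated 81.92
```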
Table 3. Hybrid feature-based State-of-the-Art methods for HAR [97].
| Method           | Data Type        | Dataset                                      | Performance                                                              | Source |
|------------------|------------------|----------------------------------------------|--------------------------------------------------------------------------|--------|
| HxRSM            | RGB              | UCF101                                       | Accuracy: 93.0%                                                          | [98]   |
| GCN              | Skeleton         | NTU-RGBD                                     | Accuracy: 96.1%                                                          | [99]   |
| PYSKL            | Skeleton         | NTU-RGBD, UCF101                             | Accuracy: 97.4%, 86.9%                                                   | [99]   |
| ActionCLIP       | RGB + Text       | Kinetics                                     | Accuracy: 83.8%                                                          | [99]   |
| IMGAUD2VID       | RGB + Audio      | ActivityNet                                  | Accuracy: 80.3%                                                          | [100]  |
| Stacked LSTM     | Skeleton         | SBU Kinect, HDM05, CMU                       | Accuracy: 90.41%, 97.25%, 81.04%                                         | [101]  |
| Stacked LSTM     | Skeleton         | MSRDailyActivity3D, NTU-RGBD (CS), CAD-60    | Accuracy: 91.56%, 64.9%, 67.64%                                          | [102]  |
| Stacked LSTM     | RGB              | HMDB51, UCF101, Hollywood2                   | Accuracy: 41.31%, 84.96%; mAP: 43.91                                     | [103]  |
| Differential RNN | RGB and Skeleton | MSRAction3D (CV), KTH-1 (CV), KTH-2 (CV)     | Accuracy: 92.03%, 93.96%, 92.12%                                         | [104]  |
| AGCN             | Skeleton         | NTU-RGBD (CS), NTU-RGBD (CV), Kinetics       | Accuracy: 88.5%, 95.1%; Top-5 accuracy: 58.7%; Top-1 accuracy: 36.1%     | [105]  |
| Two-stream MiCT  | RGB              | HMDB51, UCF101                               | Accuracy: 70.5%, 94.7%                                                   | [106]  |
| DBN              | Depth            | MHAD, MIVIA                                  | Accuracy: 85.8%, 84.7%                                                   | [107]  |
| GAN              | RGB              | UCF101, HMDB51                               | Accuracy: 47.2%, 14.40%                                                  | [108]  |
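Several entries in Table 3 report both top-1 and top-5 accuracy (standard practice on Kinetics, where many classes are visually similar): a prediction counts as correct at top-k if the true class is among the k highest-scoring classes. A minimal sketch of the metric, using illustrative toy scores that are not drawn from any of the surveyed works:

```python
import numpy as np

def topk_accuracy(scores: np.ndarray, labels, k: int) -> float:
    """Fraction of samples whose true class is among the k highest-scoring classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]                    # indices of the k largest scores per row
    hits = np.any(topk == np.asarray(labels)[:, None], axis=1)   # does any of them match the label?
    return float(hits.mean())

# Toy example: 4 samples, 3 classes (scores are illustrative only)
scores = np.array([[0.10, 0.70, 0.20],
                   [0.50, 0.30, 0.20],
                   [0.20, 0.20, 0.60],
                   [0.45, 0.35, 0.20]])
labels = [1, 2, 2, 1]
print(topk_accuracy(scores, labels, 1))  # 0.5  — samples 2 and 4 miss at top-1
print(topk_accuracy(scores, labels, 2))  # 0.75 — sample 4 is recovered at top-2
```

This is why top-5 figures are always at least as high as top-1 figures for the same model, as in the AGCN row above.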
Table 4. Classification Results of DL research in health care (pt. 1).
| Research Study        | Main Focus                                                                                        | Case Study                                 | Advantages                                               | Weaknesses                                                       |
|-----------------------|---------------------------------------------------------------------------------------------------|--------------------------------------------|----------------------------------------------------------|------------------------------------------------------------------|
| Fonseca et al. [105]  | Improve home-based healthcare for patients with multiple chronic conditions.                       | Patients with multiple chronic conditions. | Introduces new caregiving amenities.                     | Limited cost control; lack of statistical proof of effectiveness. |
| Sandstrom et al. [109] | Establish a link between smartphone sensor data and individual health using deep-learning methods. | Personal health assistance.                | Simple structure, low computational burden, high performance. | Need for more types of sensor-data research.                |
| Liu et al. [110]      | Develop a smart-dental-health-IoT system with smart hardware, deep learning, and a mobile terminal. | Smart-dental-health-IoT system.            | Compact dimensions and adaptable lighting.               | Incomplete coverage of larger teeth.                             |
Table 5. Classification results of Deep Learning research in health care (pt. 2).
| Research Study          | Main Focus                                                                                | Case Study                                 | Advantages                                                               | Weaknesses                                                                 |
|-------------------------|-------------------------------------------------------------------------------------------|--------------------------------------------|--------------------------------------------------------------------------|----------------------------------------------------------------------------|
| Sagar et al. [111]      | Propose a deep neural network (DNN) to analyze sensor-array data for patient-condition monitoring. | Patient-monitoring system.          | High accuracy, cost-effectiveness.                                       | Requires a substantial amount of data.                                     |
| Klenk et al. [112]      | Employ classical and non-classical algorithms for fall-detection models.                   | Fall detection.                            | Promotes generalization.                                                 | Non-flawless deep-learning model for activities of daily living (ADLs).    |
| Malasinghe et al. [113] | Develop smart patches or chips with IoT sensors for continuous health monitoring.          | Multi-access physical monitoring system.   | Promising results, including precision, efficiency, and mean residual error. | Suggests advanced multimedia methods to reduce costs and enhance privacy. |
| Wei et al. [114]        | Introduce an entirely automated nutrition-monitoring system (Smart-Log).                   | Nutrition monitoring.                      | Cost-efficient and highly accurate.                                      | Recommends exploring alternative methods for more precise diet prediction. |
Table 6. Exploiting the internet of things for surveillance applications.
| Method                            | Application                                        | Benefits              | Limitations                                                   | Reference |
|-----------------------------------|----------------------------------------------------|-----------------------|---------------------------------------------------------------|-----------|
| GRU-based deep learning           | RPL attack detection and prevention in IoT networks | Improved security     | Limited to RPL-based 6LoWPAN of the IoT                       | [128]     |
| Deep learning                     | IoT security                                       | Improved security     | Lack of a systematic review of DL approaches to IoT security  | [129]     |
| Deep learning                     | Intrusion detection in the IoT                     | Improved security     | Limited to intrusion detection                                | [130]     |
| Machine learning and deep learning | IoT security                                      | Improved security     | Limited to a systematic literature review                     | [131]     |
| Deep learning                     | Privacy of deep-learning systems in the IoT        | Improved privacy      | Limited to the privacy of deep-learning systems in the IoT    | [132]     |
| Machine learning                  | Various IoT networking domains                     | Improved performance  | Limited to machine-learning applications in the IoT           | [133]     |
| Deep learning                     | Object detection in surveillance videos            | Improved surveillance | Limited to object detection                                   | [134]     |
| Deep learning                     | Facial recognition in surveillance videos          | Improved surveillance | Limited to facial recognition                                 | [135]     |
