Blending Artificial Intelligence and Machine Learning with the Internet of Things: Emerging Trends, Issues and Challenges

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 10 July 2024 | Viewed by 23335

Special Issue Editors


Guest Editor
1. Polytechnic Institute of Castelo Branco, Av. Pedro Álvares Cabral No 12, 6000-084 Castelo Branco, Portugal
2. Instituto de Telecomunicações, Rua Marquês d’Ávila e Bolama, 6201-001 Covilhã, Portugal
Interests: vehicular networks; delay/disruption-tolerant networks; Internet of Things; smart cities; smart farming

Guest Editor
1. Polytechnic Institute of Castelo Branco, Av. Pedro Álvares Cabral No 12, 6000-084 Castelo Branco, Portugal
2. Instituto de Telecomunicações, Rua Marquês d’Ávila e Bolama, 6201-001 Covilhã, Portugal
Interests: mobility support for wireless sensor networks; Internet of Things; smart cities; smart farming

Guest Editor
Computer Science Department, State University of Londrina (UEL), Londrina 86057-970, Brazil
Interests: security analytics; intrusion detection; Internet of Things

Guest Editor
Department of Computer and Telematic Systems Engineering, School of Technology, University of Extremadura, Avda. de la Universidad s/n, 10003 Cáceres, Spain
Interests: software-defined networking; unmanned aerial vehicles; 5G; edge–fog computing; network function virtualization

Special Issue Information

Dear Colleagues,

The Internet of Things (IoT) continues to revolutionize the world. However, the sheer volume of data flowing from billions of connected IoT devices makes that data difficult to gather, process, and analyze. Artificial intelligence can play a vital role here, as it can extract insights from these data. Machine learning can detect patterns and anomalies in data obtained from IoT devices. As a result, networks and devices can learn from previous decisions, predict future activity, and continuously enhance their performance and decision-making capabilities.

This Special Issue aims to bring together researchers and scientists to present the latest experiences, findings, and developments regarding integrating artificial intelligence and machine learning with the Internet of Things. The topics of this Special Issue include, but are not limited to, the following:

  • Artificial intelligence and IoT;
  • Machine learning and IoT;
  • IoT recent trends;
  • IoT applications and services;
  • IoT networks;
  • IoT architectures;
  • IoT prototypes, testbeds, and case studies.

Prof. Dr. Vasco N. G. J. Soares
Prof. Dr. João M. L. P. Caldeira
Dr. Bruno Bogaz Zarpelão
Dr. Jaime Galán-Jiménez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 

Keywords

  • Internet of Things
  • artificial intelligence
  • machine learning
  • trends
  • issues
  • challenges

Published Papers (11 papers)


Research


30 pages, 4934 KiB  
Article
A Survey of AI Techniques in IoT Applications with Use Case Investigations in the Smart Environmental Monitoring and Analytics in Real-Time IoT Platform
by Yohanes Yohanie Fridelin Panduman, Nobuo Funabiki, Evianita Dewi Fajrianti, Shihao Fang and Sritrusta Sukaridhoto
Information 2024, 15(3), 153; https://doi.org/10.3390/info15030153 - 9 Mar 2024
Cited by 1 | Viewed by 2051
Abstract
In this paper, we have developed the SEMAR (Smart Environmental Monitoring and Analytics in Real-Time) IoT application server platform for fast deployments of IoT application systems. It provides various integration capabilities for the collection, display, and analysis of sensor data on a single platform. Recently, Artificial Intelligence (AI) has become very popular and is widely used in various applications, including IoT. To support this growth, it is essential to integrate AI into SEMAR after identifying the current trends in applicable AI technologies for IoT applications. In this paper, we first provide a comprehensive review of IoT applications using AI techniques in the literature. They cover predictive analytics, image classification, object detection, text spotting, auditory perception, Natural Language Processing (NLP), and collaborative AI. Next, we identify the characteristics of each technique by considering key parameters, such as software requirements, input/output (I/O) data types, processing methods, and computations. Third, we design the integration of AI techniques into SEMAR based on these findings. Finally, we discuss use cases of SEMAR for IoT applications with AI techniques. The implementation of the proposed design in SEMAR and its use in IoT applications are left for future work.

30 pages, 2463 KiB  
Article
IoT-Assisted Automatic Driver Drowsiness Detection through Facial Movement Analysis Using Deep Learning and a U-Net-Based Architecture
by Shiplu Das, Sanjoy Pratihar, Buddhadeb Pradhan, Rutvij H. Jhaveri and Francesco Benedetto
Information 2024, 15(1), 30; https://doi.org/10.3390/info15010030 - 2 Jan 2024
Cited by 2 | Viewed by 2371
Abstract
The main purpose of a detection system is to ascertain the state of an individual’s eyes, whether they are open and alert or closed, and then alert the driver to their level of fatigue so that an accident can be avoided. It would also be advantageous to promptly alert people in real time before the occurrence of any calamitous events affecting multiple people. The implementation of Internet-of-Things (IoT) technology in driver action recognition has become imperative due to the ongoing advancements in Artificial Intelligence (AI) and deep learning (DL) within Advanced Driver Assistance Systems (ADAS), which are significantly transforming the driving experience. This work presents a deep learning model that utilizes a CNN–Long Short-Term Memory network to detect driver sleepiness. We employ different algorithms on the datasets, namely EM-CNN, VGG-16, GoogLeNet, AlexNet, ResNet50, and CNN-LSTM. These algorithms are used for classification, and the CNN-LSTM algorithm exhibits superior accuracy compared to the alternative deep learning algorithms. The model is provided with video clips of a certain period, and it classifies each clip by analyzing the sequence of motions exhibited by the driver. The key objective of this work is to promote road safety by notifying drivers when they exhibit signs of drowsiness, minimizing the probability of accidents caused by fatigue-related disorders, while achieving high efficacy and maintaining a non-intrusive nature. It would help in developing an ADAS that is capable of detecting and addressing driver tiredness proactively. By employing facial movement analysis with CNN-LSTM and a U-Net-based architecture, this work offers a non-intrusive solution that may be seamlessly integrated into current automobiles, enhancing accessibility to a broader spectrum of drivers.

32 pages, 8511 KiB  
Article
The PolitiFact-Oslo Corpus: A New Dataset for Fake News Analysis and Detection
by Nele Põldvere, Zia Uddin and Aleena Thomas
Information 2023, 14(12), 627; https://doi.org/10.3390/info14120627 - 23 Nov 2023
Cited by 1 | Viewed by 3248
Abstract
This study presents a new dataset for fake news analysis and detection, namely, the PolitiFact-Oslo Corpus. The corpus contains samples of both fake and real news in English, collected from the fact-checking website PolitiFact.com. It grew out of a need for a more controlled and effective dataset for fake news analysis and detection model development based on recent events. Three features make it uniquely placed for this: (i) the texts have been individually labelled for veracity by experts, (ii) they are complete texts that strictly correspond to the claims in question, and (iii) they are accompanied by important metadata such as text type (e.g., social media, news, and blog). In relation to this, we present a pipeline for collecting quality data from major fact-checking websites, a procedure which can be replicated in future corpus-building efforts. An exploratory analysis based on sentiment and part-of-speech information reveals interesting differences between fake and real news as well as between text types, thus highlighting the importance of adding contextual information to fake news corpora. Since the main application of the PolitiFact-Oslo Corpus is in automatic fake news detection, we critically examine the applicability of the corpus and another PolitiFact dataset built on less strict criteria for various deep learning approaches, such as Bidirectional Long Short-Term Memory (Bi-LSTM), LSTM, and fine-tuned transformers such as Bidirectional Encoder Representations from Transformers (BERT), RoBERTa, and XLNet.

18 pages, 6385 KiB  
Article
Predicting Abnormal Respiratory Patterns in Older Adults Using Supervised Machine Learning on Internet of Medical Things Respiratory Frequency Data
by Pedro C. Santana-Mancilla, Oscar E. Castrejón-Mejía, Silvia B. Fajardo-Flores and Luis E. Anido-Rifón
Information 2023, 14(12), 625; https://doi.org/10.3390/info14120625 - 21 Nov 2023
Viewed by 1801
Abstract
Wearable Internet of Medical Things (IoMT) technology, designed for non-invasive respiratory monitoring, has demonstrated considerable promise in the early detection of severe diseases. This paper introduces the application of supervised machine learning techniques to predict respiratory abnormalities through frequency data analysis. The principal aim is to identify respiratory-related health risks in older adults using data collected from non-invasive wearable devices. This article presents the development, assessment, and comparison of three machine learning models, underscoring their potential for accurately predicting respiratory-related health issues in older adults. The convergence of wearable IoMT technology and machine learning holds immense potential for proactive and personalized healthcare among older adults, ultimately enhancing their quality of life.

23 pages, 5243 KiB  
Article
Generative Adversarial Networks (GANs) for Audio-Visual Speech Recognition in Artificial Intelligence IoT
by Yibo He, Kah Phooi Seng and Li Minn Ang
Information 2023, 14(10), 575; https://doi.org/10.3390/info14100575 - 19 Oct 2023
Cited by 5 | Viewed by 2613
Abstract
This paper proposes a novel multimodal generative adversarial network AVSR (multimodal AVSR GAN) architecture to improve both the energy efficiency and the AVSR classification accuracy of artificial intelligence Internet of Things (IoT) applications. Audio-visual speech recognition (AVSR) is a classical multimodal modality, commonly used in IoT and embedded systems. Examples of suitable IoT applications include in-cabin speech recognition systems for driving, AVSR in augmented reality environments, and interactive applications such as virtual aquariums. The application of multimodal sensor data to IoT applications requires efficient information processing to meet the hardware constraints of IoT devices. The proposed multimodal AVSR GAN architecture is composed of a discriminator and a generator, each of which is a two-stream network corresponding to the audio stream information and the visual stream information, respectively. To validate this approach, we used augmented data from well-known datasets (LRS2-Lip Reading Sentences 2 and LRS3) in the training process, and testing was performed using the original data. The experimental results showed that the proposed multimodal AVSR GAN architecture improved the AVSR classification accuracy. Furthermore, we discuss the domain of GANs and provide a concise summary of the proposed GAN.

16 pages, 5064 KiB  
Article
Particle Swarm Optimization-Based Control for Maximum Power Point Tracking Implemented in a Real Time Photovoltaic System
by Asier del Rio, Oscar Barambones, Jokin Uralde, Eneko Artetxe and Isidro Calvo
Information 2023, 14(10), 556; https://doi.org/10.3390/info14100556 - 11 Oct 2023
Cited by 3 | Viewed by 1407
Abstract
Photovoltaic panels present an economical and environmentally friendly renewable energy solution, with advantages such as emission-free operation, low maintenance, and noiseless performance. However, their nonlinear power-voltage curves necessitate efficient operation at the Maximum Power Point (MPP). Various techniques, including Hill Climb algorithms, are commonly employed in the industry due to their simplicity and ease of implementation. Nonetheless, intelligent approaches like Particle Swarm Optimization (PSO) offer enhanced tracking accuracy with reduced oscillations. The PSO algorithm, inspired by collective intelligence and animal swarm behavior, stands out as a promising solution due to its efficiency and ease of integration: it relies only on the standard current and voltage sensors commonly found in these systems, unlike most intelligent techniques, which require additional modeling or sensing that significantly increases the cost of the installation. The primary contribution of this study lies in the implementation and validation of an advanced control system based on the PSO algorithm for real-time Maximum Power Point Tracking (MPPT) in a commercial photovoltaic system, assessing its viability by testing it against the industry-standard Perturbation and Observation (P&O) controller to highlight its advantages and limitations. Through rigorous experiments and comparisons with other methods, the proposed PSO-based control system’s performance and feasibility have been thoroughly evaluated. A sensitivity analysis of the algorithm’s search dynamics parameters has been conducted to identify the most effective combination for optimal real-time tracking. Notably, experimental comparisons with the P&O algorithm have revealed the PSO algorithm’s ability to reduce settling time by up to threefold under similar conditions, resulting in a substantial decrease in energy losses during transient states, from 31.96% with P&O to 9.72% with PSO.
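The paper's actual controller and panel model are not reproduced here, but the PSO search over the operating voltage that the abstract describes can be sketched as follows. This is a minimal illustration: the toy power-voltage curve `pv_power`, the function names, and all parameter values are assumptions, not the authors' implementation.

```python
import numpy as np

def pv_power(v):
    """Toy unimodal power-voltage curve (illustrative stand-in for a real panel)."""
    return v * (5.0 - 0.05 * v**2)

def pso_mppt(n_particles=20, iters=50, v_min=0.0, v_max=10.0,
             w=0.6, c1=1.5, c2=1.5, seed=0):
    """Search for the voltage that maximizes measured power using PSO."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(v_min, v_max, n_particles)  # candidate operating voltages
    vel = np.zeros(n_particles)
    pbest = pos.copy()                            # each particle's best voltage so far
    pbest_val = pv_power(pbest)
    gbest = pbest[np.argmax(pbest_val)]           # swarm-wide best voltage
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # Velocity update: inertia + pull toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, v_min, v_max)
        val = pv_power(pos)
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest, pv_power(gbest)
```

In a real MPPT loop, `pv_power` would be replaced by a measurement from the current and voltage sensors the abstract mentions, taken after moving the converter to each candidate operating point.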

16 pages, 2179 KiB  
Article
A Service-Efficient Proxy Mobile IPv6 Extension for IoT Domain
by Habib Ullah Khan, Anwar Hussain, Shah Nazir, Farhad Ali, Muhammad Zubair Khan and Inam Ullah
Information 2023, 14(8), 459; https://doi.org/10.3390/info14080459 - 14 Aug 2023
Cited by 6 | Viewed by 1429
Abstract
The upcoming generation of communications can provide richer mobility, high data rates, reliable security, better quality of service, and support for mobility requirements in the Internet of Things (IoT) environment. Integrating modern communication with the IoT demands more secure, scalable, and resource-efficient mobility solutions for better business opportunities. In a massive 6G-enabled IoT environment, modern mobility solutions such as Proxy Mobile IPv6 (PMIPv6) have the potential to provide enhanced mobility and resource efficiency. To support richer mobility, a cost-effective and resource-efficient mobility solution is required in a massive 6G-enabled IoT environment. The main objective of the presented study is to provide a resource-friendly mobility solution that supports the effective integration of future communication in the massive IoT domain. In that context, a location-based, resource-efficient PMIPv6 extension protocol is proposed to provide resource efficiency in terms of required signaling, packet loss, and handover latency. To compare and analyze the proposed model’s effectiveness, mathematical equations are derived and implemented for both the existing and the proposed solutions. The comparison shows that the proposed location-based, service-oriented Proxy Mobile IPv6 extension is resource efficient for supporting mobility in 6G-enabled IoT.

24 pages, 3759 KiB  
Article
A Novel Approach of Resource Allocation for Distributed Digital Twin Shop-Floor
by Haijun Zhang, Qiong Yan, Yan Qin, Shengwei Chen and Guohui Zhang
Information 2023, 14(8), 458; https://doi.org/10.3390/info14080458 - 13 Aug 2023
Viewed by 1412
Abstract
Facing global market competition and supply chain risks, many production companies are leaning towards distributed manufacturing because of its ability to utilize a network of manufacturing resources located around the world. Deriving from information and communication technologies and artificial intelligence, the digital twin shop-floor (DTS) has received great attention from academia and industry. A DTS is a virtual shop-floor that is almost identical to the physical shop-floor. Therefore, multiple physical shop-floors located in different places can easily be interconnected to realize a distributed digital twin shop-floor (D2TS). However, some challenges still hinder effective and efficient resource allocation among D2TSs. In an attempt to address these issues, firstly, this paper proposes an information architecture for D2TSs based on cloud–fog computing; secondly, a novel mechanism of D2TS resource allocation (D2TSRA) is designed. The proposed mechanism makes full use of the digital twin to support dynamic allocation of geographically distributed resources while avoiding centralized digital twin solutions, which place a heavy burden on network bandwidth; thirdly, the optimization problem in D2TSRA is solved by a BP neural network algorithm and an improved genetic algorithm; fourthly, a case study on distributed collaborative manufacturing of an aero-engine casing is employed to validate the effectiveness and efficiency of the proposed method of resource allocation for D2TS; finally, the paper is summarized and relevant future research directions are outlined.

17 pages, 4899 KiB  
Article
Combining Classifiers for Deep Learning Mask Face Recognition
by Wen-Chang Cheng, Hung-Chou Hsiao, Yung-Fa Huang and Li-Hua Li
Information 2023, 14(7), 421; https://doi.org/10.3390/info14070421 - 21 Jul 2023
Viewed by 1360
Abstract
This research proposes a single network model architecture for mask face recognition using the FaceNet training method. Three pre-trained convolutional neural networks of different sizes are combined, namely InceptionResNetV2, InceptionV3, and MobileNetV2. The models are augmented by connecting a fully connected network with a SoftMax output layer. We combine triplet loss and categorical cross-entropy loss to optimize the training process. In addition, the learning rate of the optimizer is dynamically updated using the cosine annealing mechanism, which improves the convergence of the model during training. Mask face recognition (MFR) experimental results on a custom MASK600 dataset show that, with annealing, the proposed InceptionResNetV2 and InceptionV3 models need only 20 training epochs, and MobileNetV2 only 50, to achieve more than 93% accuracy, exceeding previous MFR works. In addition to reaching a practical accuracy level, this saves model training time and effectively reduces energy costs.
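Two ingredients the abstract names, the cosine annealing learning-rate schedule and a combined triplet-plus-cross-entropy loss, can be illustrated with a minimal NumPy sketch. The function names, the margin, and the weighting factor `alpha` are assumptions for illustration, not the authors' code.

```python
import numpy as np

def cosine_annealing_lr(epoch, total_epochs, lr_max=1e-3, lr_min=1e-5):
    """Cosine-annealed learning rate: decays smoothly from lr_max to lr_min."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + np.cos(np.pi * epoch / total_epochs))

def combined_loss(anchor, positive, negative, probs, label, margin=0.2, alpha=1.0):
    """Triplet loss on embeddings plus categorical cross-entropy on softmax output."""
    d_ap = np.sum((anchor - positive) ** 2)   # anchor-positive squared distance
    d_an = np.sum((anchor - negative) ** 2)   # anchor-negative squared distance
    triplet = max(d_ap - d_an + margin, 0.0)  # hinge: push negatives away by margin
    cross_entropy = -np.log(probs[label] + 1e-12)
    return triplet + alpha * cross_entropy
```

With a well-separated triplet and a confident correct prediction, both terms vanish; during training, the schedule above would be fed to the optimizer each epoch.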

15 pages, 6561 KiB  
Article
Enhancing CSI-Based Human Activity Recognition by Edge Detection Techniques
by Hossein Shahverdi, Mohammad Nabati, Parisa Fard Moshiri, Reza Asvadi and Seyed Ali Ghorashi
Information 2023, 14(7), 404; https://doi.org/10.3390/info14070404 - 14 Jul 2023
Cited by 5 | Viewed by 1832
Abstract
Human Activity Recognition (HAR) has been a popular area of research in the Internet of Things (IoT) and Human–Computer Interaction (HCI) over the past decade. The objective of this field is to detect human activities through numeric or visual representations, and its applications include smart homes and buildings, action prediction, crowd counting, patient rehabilitation, and elderly monitoring. Traditionally, HAR has been performed through vision-based, sensor-based, or radar-based approaches. However, vision-based and sensor-based methods can be intrusive and raise privacy concerns, while radar-based methods require special hardware, making them more expensive. WiFi-based HAR is a cost-effective alternative, where WiFi access points serve as transmitters and users’ smartphones serve as receivers. HAR in this method is mainly performed using two wireless-channel metrics: Received Signal Strength Indicator (RSSI) and Channel State Information (CSI). CSI provides more stable and comprehensive information about the channel than RSSI. In this research, we used a convolutional neural network (CNN) as a classifier and applied edge-detection techniques as a preprocessing phase to improve the quality of activity detection. We used CSI data converted into RGB images and tested our methodology on three available CSI datasets. The results showed that the proposed method achieved better accuracy and faster training times than with simple RGB-represented data. To justify the effectiveness of our approach, we repeated the experiment by applying raw CSI data to long short-term memory (LSTM) and Bidirectional LSTM classifiers.
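The paper's specific edge-detection preprocessing is not reproduced here, but a common choice for such a phase is a Sobel filter over the CSI-derived image. The following NumPy sketch is illustrative only; the function name and kernel choice are assumptions.

```python
import numpy as np

def sobel_edges(img):
    """Sobel edge magnitude, a typical preprocessing step before a CNN classifier."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                  # vertical-gradient kernel
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))             # valid convolution, no padding
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)
            gy = np.sum(patch * ky)
            out[i, j] = np.hypot(gx, gy)       # gradient magnitude
    return out
```

Applied channel-wise to a CSI-derived RGB image, this would emphasize the transitions in the signal map that distinguish one activity from another.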

Review


31 pages, 5121 KiB  
Review
Bibliometric Analysis of IoT Lightweight Cryptography
by Zenith Dewamuni, Bharanidharan Shanmugam, Sami Azam and Suresh Thennadil
Information 2023, 14(12), 635; https://doi.org/10.3390/info14120635 - 28 Nov 2023
Viewed by 1783
Abstract
In the rapidly developing world of the Internet of Things (IoT), data security has become increasingly important since massive personal data are collected. IoT devices have resource constraints, which makes traditional cryptographic algorithms ineffective for securing IoT devices. To overcome resource limitations, lightweight cryptographic algorithms are needed. To identify research trends and patterns in IoT security, it is crucial to analyze existing works, keywords, authors, journals, and citations. We conducted a bibliometric analysis using performance mapping, science mapping, and enrichment techniques to collect the necessary information. Our analysis included 979 Scopus articles, 214 WOS articles, and 144 IEEE Xplore articles published during 2015–2023, and duplicates were removed. We analyzed and visualized the bibliometric data using R version 4.3.1, VOSviewer version 1.6.19, and the bibliometrix library. We discovered that India is the leading country for this type of research. Archarya and Bansod are the most relevant authors; lightweight cryptography and cryptography are the most relevant terms; and IEEE Access is the most significant journal. Research on lightweight cryptographic algorithms for IoT devices (Raspberry Pi) has been identified as an important area for future research.

Planned Papers

The below list represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

Title: Uncovering the Unexpected: A Review of Anomaly Detection Techniques in IoT using Machine Learning
Authors: Rameez Asif
Affiliation: --
Abstract: With the rapid proliferation of the Internet of Things (IoT), the amount of data generated by these devices has grown exponentially, making it increasingly challenging to identify anomalies and potential security threats. In this review paper, we provide an overview of various anomaly detection techniques for IoT using machine learning algorithms. We first discuss the different types of anomalies that can occur in IoT systems and their potential impact. We then review the state-of-the-art machine learning algorithms used for anomaly detection in IoT, including unsupervised and supervised learning methods, deep learning techniques, and ensemble methods. We also examine the challenges and limitations of using machine learning for anomaly detection in IoT, including data quality issues, privacy concerns, and the need for interpretability. Our findings suggest that machine learning-based anomaly detection can be an effective tool for enhancing security and trust in IoT systems, but requires careful consideration of the data characteristics and the specific application context.
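As a simple illustration of the kind of unsupervised baseline such a review typically covers, a z-score detector flags sensor readings that deviate strongly from the mean. The function name and the 3-sigma threshold below are illustrative assumptions, not taken from the planned paper.

```python
import numpy as np

def zscore_anomalies(readings, threshold=3.0):
    """Flag readings whose absolute z-score exceeds a threshold."""
    readings = np.asarray(readings, dtype=float)
    mu, sigma = np.mean(readings), np.std(readings)
    if sigma == 0:
        # Constant signal: nothing deviates, so nothing is anomalous.
        return np.zeros(len(readings), dtype=bool)
    z = np.abs((readings - mu) / sigma)
    return z > threshold
```

Real IoT deployments would apply this per sensor over a sliding window, and the review's deep learning and ensemble methods address the cases where such a univariate baseline fails.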

Title: Energy efficient lifetime enhancement for WSN using network trust and swarm intelligence optimization
Authors: Sung Won Kim
Affiliation: Department of Information and Communication Engineering, Yeungnam University, Gyeungsan, Gyeungbuk 38541, Korea
Abstract: For specialized telecommunication applications, recent developments in technology and manufacturing have made it possible to develop compact, powerful, energy-efficient, cost-effective sensor nodes that are “smart” enough to be capable of adaptability, self-awareness, and self-organization. Sensor network technologies that improve social advancement and quality of life while having little to no negative impact on the environment or the planet’s natural resources are examined under sensor networks for sustainable development. Wireless sensor networks (WSNs) are advantageous in a wide range of applications, including military, healthcare, traffic monitoring, and remote sensing of images. Different levels of security are needed for these critical applications, and it becomes difficult to use conventional algorithms due to the limitations of sensor networks. Sensor networks are also considered the foundation of the IoT and smart cities, where security has emerged as one of the biggest issues with IoT and smart city applications. WSNs involve complex issues such as energy consumption, effective methods for choosing cluster heads, routing algorithms, network strength, packet loss, and energy loss. With the recent introduction of WSNs, it has become more difficult to supply trustworthy and reliable data because of the distinctive properties and limitations of nodes. Through the insertion of fake and malicious data as well as the launch of internal attacks, hostile nodes can easily compromise the integrity of the network. Using trust-based security to detect rogue nodes provides an efficient and portable defence. Trust evaluation models are a crucial security enhancement tool for increasing dependability (cooperation) among sensor nodes in wireless sensor networks. To meet the security needs of WSNs, this study proposes the novel trust algorithm DFA U-Trust.

Title: Digital Twins for Smart Manufacturing: Opportunities and Challenges
Authors: Nader Mohamed and Jameela Al-Jaroodi
Affiliation: --
Abstract: Smart manufacturing represents modern digital industrial innovation. This vision is realized by utilizing advances in technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning (ML) to enable various advanced smart applications. Smart manufacturing/industrial applications enable fast demand-based production and better optimization of the supply chain, in addition to reliable, efficient, and cost-effective production. One key technology with significant potential to improve the capabilities and outcomes of smart manufacturing is the digital twin. This paper investigates the opportunities and challenges of utilizing digital twins for smart manufacturing. It also discusses current research trends in the field and future prospects.

 
