
Massive Learning and Computing for the Reliable Internet of Everything

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (20 July 2023) | Viewed by 15842

Special Issue Editors


Guest Editor
School of Electrical Engineering, Korea University, Seoul 02841, Korea
Interests: unmanned aircraft systems; UTM; Internet of Things; wireless networks; cyber-physical systems; ubiquitous systems

Guest Editor
Assistant Professor, Division of IT Convergence Engineering, Hansung University, Seoul, Korea
Interests: artificial intelligence; Internet of Things; cyber-physical systems

Guest Editor
Platform Laboratory, KT R&D Center, Seoul, Korea
Interests: network-based multi-UAV applications; cooperative localization; cyber-physical systems

Guest Editor
Associate Professor, Department of Information and Communication Engineering, Dongguk University, Seoul, Korea
Interests: wireless networks; Internet of Things; MAC protocols; resource allocation; quality of service

Special Issue Information

Dear Colleagues,

The Internet of Everything (IoE) is continuously expanding its impact on our daily lives, aided by the growing number of connected devices and their ever-increasing intelligence. As device performance has advanced significantly, applications also continue to expand and diversify (e.g., collaborative data collection, smart cities, smart factories, cooperating robots, or drone swarm performances). To perform their given missions more successfully, devices must improve their collective intelligence by collaborating with each other. To do so, they need to share collected data, learning methods, environmental information, mission objectives, and mission progress. In addition, to increase overall performance and broaden the application scope, massive IoE systems require high-efficiency, high-performance, and stable networks. Moreover, to increase the reliability of learning and computing results, the process of delivering data over the network must be evaluated and its trustworthiness guaranteed.

This Special Issue focuses on the requirements, challenges, constraints, theoretical issues, innovative applications, and experimental results associated with massive learning and computing for the Internet of Everything.

Topics of interest include but are not limited to:

  • Massive learning and computing methods, algorithms, and systems for the IoE;
  • Collaborative/federated/distributed learning in the IoE;
  • Dependable design and implementation for the IoE from the perspective of reliability, availability, and survivability;
  • Security and privacy schemes for massive learning and computing in the IoE;
  • Trusted devices, networks, and computing resource sharing and management for the IoE, such as blockchain-based management for devices, networks, and computing;
  • Low latency and highly reliable communications for collaborative computing in the IoE;
  • Collective intelligence for the IoE in 5G-and-beyond cellular communication systems;
  • Trusted and collaborative framework for deep learning in the IoE;
  • Various applications supported by collaborative learning and computing in IoE systems;
  • Data-driven analysis and modeling of massive learning and computing in the IoE.

Prof. Dr. Hwangnam Kim
Dr. Woonghee Lee
Dr. Seungho Yoo
Dr. Eun-Chan Park
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • massive learning
  • Internet of Everything
  • collaborative computing
  • cyber-physical systems
  • federated learning
  • pervasive computing
  • reliability
  • blockchain

Published Papers (8 papers)


Research

18 pages, 23085 KiB  
Article
Reinforcement Learning Based Topology Control for UAV Networks
by Taehoon Yoo, Sangmin Lee, Kyeonghyun Yoo and Hwangnam Kim
Sensors 2023, 23(2), 921; https://doi.org/10.3390/s23020921 - 13 Jan 2023
Cited by 7 | Viewed by 2200
Abstract
The recent development of unmanned aerial vehicle (UAV) technology has shown the possibility of using UAVs in many research and industrial fields. One of them is using UAVs moving in swarms to provide wireless networks in environments with no network infrastructure. Although this method has the advantage of providing a network quickly and at low cost, it may cause scalability problems in multi-hop connectivity and UAV control when covering a large area. Therefore, as more UAVs are used to form drone networks, the problem of efficiently controlling the network topology must be solved. To solve this problem, we propose a topology control system for drone networks, which analyzes relative positions among UAVs within a swarm, optimizes connectivity among them from the perspective of both interference and energy consumption, and finally reshapes the logical structure of the drone network by choosing neighbors per UAV and mapping data flows over them. The most important function in the scheme is the connectivity optimization, because it must be conducted adaptively according to dynamically changing, complex network conditions, which include network characteristics such as user density and UAV characteristics such as power consumption. Since neither a simple mathematical framework nor a network simulation tool can solve this optimization, we resort to reinforcement learning, specifically the deep deterministic policy gradient (DDPG) algorithm, with which each UAV can adjust its connectivity to other drones. In addition, the proposed system minimizes the learning time by flexibly changing the number of steps used for parameter learning according to the deployment of new UAVs. The performance of the proposed system was verified through simulation experiments and theoretical analysis on various topologies consisting of multiple UAVs.
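To make the interference/energy trade-off concrete, here is a minimal Python sketch. The reward shape and the greedy k-nearest-neighbor topology are illustrative assumptions standing in for the DDPG agent the paper trains; none of the names below come from the paper.

```python
import numpy as np

def link_cost(positions, adjacency, alpha=0.5):
    """Toy objective: penalize interference (crowded neighborhoods) and
    transmit energy (distance squared). `alpha` trades off the two terms;
    both the cost shape and the greedy search below are stand-ins for the
    learned DDPG policy described in the paper."""
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    energy = (adjacency * dists**2).sum()               # tx energy grows with range
    interference = (adjacency.sum(axis=1) ** 2).sum()   # high degree -> more interference
    return alpha * interference + (1 - alpha) * energy

def greedy_topology(positions, k=2):
    """Connect each UAV to its k nearest neighbors, then report the cost."""
    n = len(positions)
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    adjacency = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in np.argsort(dists[i])[:k]:
            adjacency[i, j] = adjacency[j, i] = 1
    return adjacency, link_cost(positions, adjacency)

positions = np.random.default_rng(0).uniform(0, 100, size=(8, 2))  # 8 UAVs in a 100x100 area
adj, cost = greedy_topology(positions)
print(cost)
```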

19 pages, 6237 KiB  
Article
Distributed Raman Spectrum Data Augmentation System Using Federated Learning with Deep Generative Models
by Yaeran Kim and Woonghee Lee
Sensors 2022, 22(24), 9900; https://doi.org/10.3390/s22249900 - 16 Dec 2022
Cited by 8 | Viewed by 2175
Abstract
Chemical agents are one of the major threats to soldiers in modern warfare, so it is important to detect chemical agents rapidly and accurately on battlefields. Raman spectroscopy-based detectors are widely used but have many limitations. The Raman spectrum changes unpredictably due to various environmental factors, and it is hard for detectors to make appropriate judgments about new chemical substances without prior information. Thus, existing detectors, which rely on inflexible, rule-based techniques, cannot deal with such problems flexibly and reactively. Artificial intelligence (AI)-based detection techniques can be good alternatives to the existing techniques for chemical agent detection. Building AI-based detection systems requires sufficient training data, but it is not easy to produce and handle fatal chemical agents, which makes it difficult to secure data in advance. To overcome these limitations, in this paper, we propose a distributed Raman spectrum data augmentation system that leverages federated learning (FL) with deep generative models, such as generative adversarial networks (GANs) and autoencoders. Furthermore, the proposed system combines various additional techniques to generate large amounts of realistic and diverse Raman spectrum data. We implemented the proposed system and conducted diverse experiments to evaluate it. The evaluation results validated that the proposed system trains the models more quickly through cooperation among decentralized troops without exchanging raw data and generates realistic Raman spectrum data. Moreover, we confirmed that the classification model on the proposed system learned much faster and outperformed the existing systems.
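The aggregation step at the heart of this design can be sketched with federated averaging. A minimal Python/NumPy sketch: the per-layer weighting by local dataset size is the standard FedAvg rule, while the paper's actual local models are GANs and autoencoders trained on each troop's spectra.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: aggregate model parameters without sharing raw data.

    `client_weights` is a list of per-client parameter lists (one ndarray per
    layer); `client_sizes` weights each client by its local dataset size.
    Only the aggregation step is sketched here."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (size / total)
            for w, size in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Example: three "troops", each holding a two-layer model and differing data sizes.
rng = np.random.default_rng(1)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
global_weights = fed_avg(clients, client_sizes=[100, 50, 25])
print(global_weights[0].shape)
```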

15 pages, 5711 KiB  
Article
Efficient Massive Computing for Deformable Volume Data Using Revised Parallel Resampling
by Chailim Park and Heewon Kye
Sensors 2022, 22(16), 6276; https://doi.org/10.3390/s22166276 - 20 Aug 2022
Cited by 1 | Viewed by 1302
Abstract
In this paper, we propose an improved parallel resampling technique. Parallel resampling is a deformable object generation method based on volume data, applied to medical simulations. Existing parallel resampling is not suitable for massive computing because the number of sampling operations is high and floating-point precision problems may occur. This study addresses these problems to improve user latency in medical simulations. Specifically, instead of interpolating values after volume sampling, efficiency is improved by performing volume sampling after coordinate interpolation. Next, the floating-point error in the calculation of the sampling position is described, and the advantage of barycentric interpolation using a reference point is discussed. The experimental results showed a significant improvement over the existing method: volume data comprising more than 600 images used in clinical practice were deformed and rendered at interactive speed. In an Internet of Everything environment, medical imaging systems are an important application, and simulation image generation is valuable in the overall system; through the proposed method, the performance of the whole system can be improved.
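The reordering at the core of the revision, interpolating coordinates first and sampling the volume once afterwards, can be sketched as follows. A minimal Python sketch under simplifying assumptions: a single tetrahedral cell and nearest-neighbor volume lookup; all function names are illustrative, not the paper's.

```python
import numpy as np

def barycentric_coords(p, tet):
    """Barycentric weights of point p inside a tetrahedron (4x3 vertex array)."""
    t = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    w = np.linalg.solve(t, p - tet[0])
    return np.array([1.0 - w.sum(), *w])

def sample_deformed(volume, rest_tet, deformed_tet, p_deformed):
    """Sample the undeformed volume at a point given in the deformed mesh.

    Coordinates are interpolated first (barycentric weights in the deformed
    cell map the point back to rest-pose coordinates), and the volume is
    sampled once afterwards, matching the reordering the paper advocates.
    Nearest-neighbor lookup keeps the sketch short."""
    w = barycentric_coords(p_deformed, deformed_tet)
    p_rest = w @ rest_tet                      # map back to rest-pose coordinates
    i, j, k = np.clip(np.rint(p_rest).astype(int), 0, np.array(volume.shape) - 1)
    return volume[i, j, k]

volume = np.random.default_rng(5).random((32, 32, 32))
rest = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
deformed = rest + np.array([2.0, 0.0, 0.0])    # rigid shift as a toy deformation
print(sample_deformed(volume, rest, deformed, np.array([3.0, 2.0, 2.0])))
```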

22 pages, 2787 KiB  
Article
Time-Constrained Adversarial Defense in IoT Edge Devices through Kernel Tensor Decomposition and Multi-DNN Scheduling
by Myungsun Kim and Sanghyun Joo
Sensors 2022, 22(15), 5896; https://doi.org/10.3390/s22155896 - 07 Aug 2022
Viewed by 1361
Abstract
The development of deep learning technology has resulted in great contributions to many artificial intelligence services, but adversarial attack techniques on deep learning models are also becoming more diverse and sophisticated. IoT edge devices adopt cloud-independent, on-device DNN (deep neural network) processing to achieve fast response times. However, if the computational complexity of the denoizer for adversarial noise is high, or if a single embedded GPU is shared by multiple DNN models, on-device adversarial defense is bound to incur long latency. To solve this problem, eDenoizer is proposed in this paper. First, it applies Tucker decomposition to reduce the computation required for the convolutional kernel tensors in the denoizer. Second, eDenoizer effectively orchestrates the denoizer and the model it defends simultaneously. In addition, CPU-side priorities can be projected onto the otherwise completely priority-agnostic GPU, so that delay is minimized when the denoizer and the defended model are assigned high priority. Extensive experiments confirmed that the reduction in classification accuracy was marginal, at most 1.78%, while inference speed with adversarial defense improved by up to 51.72%.
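Tucker decomposition of a convolutional kernel can be illustrated with a truncated higher-order SVD (HOSVD), a standard way to compute it. A generic Python/NumPy sketch, not eDenoizer's exact procedure; the kernel shape and target ranks below are illustrative.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker_hosvd(kernel, ranks):
    """Truncated HOSVD: per-mode SVD gives the factor matrices, then the
    core tensor is the kernel contracted with each factor's transpose."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(kernel, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = kernel
    for mode, u in enumerate(factors):
        # Mode-n product with u.T: contract along axis `mode`.
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

kernel = np.random.default_rng(2).normal(size=(64, 32, 3, 3))  # (out, in, kh, kw)
core, factors = tucker_hosvd(kernel, ranks=(16, 8, 3, 3))
print(core.shape)  # (16, 8, 3, 3): far fewer multiply-accumulates per convolution
```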

21 pages, 879 KiB  
Article
Forecasting Obsolescence of Components by Using a Clustering-Based Hybrid Machine-Learning Algorithm
by Kyoung-Sook Moon, Hee Won Lee, Hee Jean Kim, Hongjoong Kim, Jeehoon Kang and Won Chul Paik
Sensors 2022, 22(9), 3244; https://doi.org/10.3390/s22093244 - 23 Apr 2022
Cited by 3 | Viewed by 1509
Abstract
Product obsolescence occurs in every production line in the industry as better-performing or more cost-effective products become available. A proactive strategy for obsolescence allows firms to prepare for such events and reduces manufacturing losses, which eventually leads to higher customer satisfaction. We propose a machine learning-based algorithm to forecast the obsolescence date of electronic diodes, for which the amount of available data is limited. The proposed algorithm overcomes this limitation in two ways. First, an unsupervised clustering algorithm is applied to group the data based on similarity and to build independent machine-learning models specialized for each group. Second, a hybrid method combining several reliable techniques is constructed to improve prediction accuracy despite the lack of data. It is empirically confirmed that the prediction accuracy of the obsolescence date for the electrical component data is improved through the proposed clustering-based hybrid method.
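The cluster-then-specialize structure can be sketched briefly. A hedged Python sketch using scikit-learn: k-means stands in for the paper's clustering step and a random-forest regressor for the per-cluster hybrid model; both choices, and all data here, are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

def fit_clustered_models(X, y, n_clusters=3, seed=0):
    """Cluster components by feature similarity, then fit one regressor per
    cluster so each model specializes in its own group."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    models = {}
    for c in range(n_clusters):
        mask = km.labels_ == c
        models[c] = RandomForestRegressor(random_state=seed).fit(X[mask], y[mask])
    return km, models

def predict_obsolescence(km, models, X_new):
    """Route each new component to its cluster's specialized model."""
    labels = km.predict(X_new)
    return np.array([models[c].predict(x[None])[0] for c, x in zip(labels, X_new)])

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))         # synthetic component features
y = rng.uniform(1, 10, size=120)      # synthetic years-to-obsolescence targets
km, models = fit_clustered_models(X, y)
print(predict_obsolescence(km, models, X[:3]))
```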

19 pages, 13222 KiB  
Article
Federated Reinforcement Learning Based AANs with LEO Satellites and UAVs
by Seungho Yoo and Woonghee Lee
Sensors 2021, 21(23), 8111; https://doi.org/10.3390/s21238111 - 04 Dec 2021
Cited by 5 | Viewed by 2440
Abstract
Supported by advances in rocket technology, companies such as SpaceX and Amazon have competitively entered the satellite Internet business, claiming that their communication resources can provide sufficient Internet service to users. However, Internet service might not be available in densely populated areas, as satellite coverage is broad but resource capacity is limited. To offload the traffic of densely populated areas, we present an adaptable aerial access network (AAN) composed of low-Earth orbit (LEO) satellites and federated reinforcement learning (FRL)-enabled unmanned aerial vehicles (UAVs). Using the proposed system, UAVs can operate with fewer computational resources than centralized coverage management systems. Furthermore, by utilizing FRL, the system can continuously learn from various environments and perform better as operation time grows. Based on our proposed design, we implemented FRL, constructed a UAV-aided AAN simulator, and evaluated the proposed system. The evaluation results validated that the FRL-enabled UAV-aided AAN operates efficiently in densely populated areas where satellites alone cannot provide sufficient Internet service, improving network performance. In the evaluations, our proposed AAN system provided about 3.25 times more communication resources and had 5.1% lower latency than the satellite-only AAN.
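Federated reinforcement learning can be sketched at its simplest with tabular Q-learning agents whose tables are periodically averaged. A Python sketch under that simplifying assumption; the paper's agents are full RL policies, and the random transitions below are placeholders for each UAV's local experience.

```python
import numpy as np

def local_q_update(q, transitions, alpha=0.1, gamma=0.9):
    """One round of tabular Q-learning on a UAV's local experience."""
    for s, a, r, s_next in transitions:
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
    return q

def federated_round(q_tables):
    """Aggregate per-UAV Q-tables into a shared policy (FedAvg-style):
    only the averaged table is exchanged, never raw experience."""
    global_q = np.mean(q_tables, axis=0)
    return [global_q.copy() for _ in q_tables]  # broadcast back to every UAV

n_states, n_actions, n_uavs = 10, 4, 3
q_tables = [np.zeros((n_states, n_actions)) for _ in range(n_uavs)]
rng = np.random.default_rng(3)
for _ in range(5):  # five federation rounds
    for q in q_tables:
        transitions = [(rng.integers(n_states), rng.integers(n_actions),
                        rng.normal(), rng.integers(n_states)) for _ in range(50)]
        local_q_update(q, transitions)
    q_tables = federated_round(q_tables)
print(q_tables[0].round(2))
```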

10 pages, 2214 KiB  
Communication
Low-Complexity Transmit Power Control for Secure Communications in Wireless-Powered Cognitive Radio Networks
by Kisong Lee
Sensors 2021, 21(23), 7837; https://doi.org/10.3390/s21237837 - 25 Nov 2021
Cited by 1 | Viewed by 1126
Abstract
In this study, wireless-powered cognitive radio networks (WPCRNs) are considered, in which N sets of transmitters, receivers, and energy-harvesting (EH) nodes in secondary networks share the same spectrum with primary users (PUs); the EH nodes are not allowed to decode information but can harvest energy from the signals. Given that the EH nodes are untrusted from the point of view of information transfer, secret information can be leaked if they decide to eavesdrop instead of harvesting energy from the signals transmitted by secondary users (SUs). For secure communications in WPCRNs, we aim to find the optimal transmit powers of SUs that maximize the average secrecy rate of SUs while keeping the interference to PUs below an allowable level and guaranteeing the minimum EH requirement of each EH node. First, we derive an analytical expression for the transmit power via dual decomposition and propose a suboptimal transmit power control algorithm, which is implemented iteratively with low complexity. The simulation results confirm that the proposed scheme outperforms conventional distributed schemes by more than 10% in terms of average secrecy rate and outage probability, and can also considerably reduce computation time compared with the optimal scheme.
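Dual decomposition for power control can be illustrated on a simplified problem. The Python sketch below maximizes a plain sum rate subject to an aggregate interference cap, not the paper's secrecy-rate objective with per-node EH constraints: the per-link power takes a water-filling form from the KKT conditions, and the dual variable is updated by subgradient ascent.

```python
import numpy as np

def dual_power_control(g, h, i_max, p_max, lam=1.0, step=0.01, iters=500):
    """Maximize sum_n log(1 + g_n p_n) subject to sum_n h_n p_n <= i_max
    and 0 <= p_n <= p_max via dual decomposition.

    Stationarity of the Lagrangian gives p_n = 1/(lam*h_n) - 1/g_n,
    clipped to the box constraint; `lam` follows the constraint violation."""
    for _ in range(iters):
        p = np.clip(1.0 / (lam * h) - 1.0 / g, 0.0, p_max)  # KKT water-filling form
        lam = max(lam + step * (p @ h - i_max), 1e-9)        # dual subgradient ascent
    return p, lam

rng = np.random.default_rng(4)
g = rng.uniform(0.5, 2.0, 8)   # illustrative direct-link channel gains
h = rng.uniform(0.1, 1.0, 8)   # illustrative interference-link gains to the PU
p, lam = dual_power_control(g, h, i_max=2.0, p_max=1.0)
print(p.round(3), (p @ h).round(3))  # powers and resulting aggregate interference
```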

16 pages, 840 KiB  
Article
Continuous Productivity Improvement Using IoE Data for Fault Monitoring: An Automotive Parts Production Line Case Study
by Yuchang Won, Seunghyeon Kim, Kyung-Joon Park and Yongsoon Eun
Sensors 2021, 21(21), 7366; https://doi.org/10.3390/s21217366 - 05 Nov 2021
Cited by 5 | Viewed by 2220
Abstract
This paper presents a case study of continuous productivity improvement of an automotive parts production line using Internet of Everything (IoE) data for fault monitoring. Continuous productivity improvement denotes an iterative process of analyzing and updating the production line configuration, based on measured data, to improve productivity. Analysis for continuous improvement of a production system requires a set of data (machine uptime, downtime, and cycle time) that a conventional fault monitoring system does not typically record. Although productivity improvement is critical for a manufacturing site, few production systems are equipped with a dedicated data recording system for continuous improvement. In this paper, we study how to derive the dataset required for continuous improvement from the measurements of a conventional fault monitoring system, and we provide a case study of an automotive parts production line. Based on the data measured by the existing fault monitoring system, we model the production system and derive the dataset required for continuous improvement. Our approach numerically provides the expected improvement to operations managers, helping them decide whether to modify the line configuration.
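Deriving the improvement dataset from raw fault events can be sketched with a small log-processing routine. A Python/pandas sketch; the event schema and column names are illustrative assumptions, not the paper's actual monitoring format.

```python
import pandas as pd

# Fault-monitoring systems typically record only fault start/clear events.
# This sketch derives per-machine downtime from such a log; uptime follows
# as the shift length minus downtime. Schema and values are illustrative.
events = pd.DataFrame({
    "machine": ["M1", "M1", "M1", "M2", "M2"],
    "event":   ["fault", "clear", "fault", "fault", "clear"],
    "time":    pd.to_datetime(["2021-01-01 08:00", "2021-01-01 08:12",
                               "2021-01-01 10:30", "2021-01-01 09:00",
                               "2021-01-01 09:05"]),
})

def downtime_per_machine(df, shift_end):
    """Pair each fault with the next clear (or the shift end) and sum the gaps."""
    out = {}
    for machine, grp in df.sort_values("time").groupby("machine"):
        down, start = pd.Timedelta(0), None
        for _, row in grp.iterrows():
            if row["event"] == "fault" and start is None:
                start = row["time"]
            elif row["event"] == "clear" and start is not None:
                down += row["time"] - start
                start = None
        if start is not None:               # fault still open at shift end
            down += shift_end - start
        out[machine] = down
    return out

print(downtime_per_machine(events, pd.Timestamp("2021-01-01 16:00")))
```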
