State-of-the-Art Future Internet Technology in USA 2022–2023

A special issue of Future Internet (ISSN 1999-5903).

Deadline for manuscript submissions: closed (31 December 2023)

Special Issue Editor


Dr. Eirini Eleni Tsiropoulou
Guest Editor
Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM 87131, USA
Interests: network modeling and optimization; IoT; cyber–physical systems; smart grid systems; network economics; wireless networks; social networks; cybersecurity; resource management; reinforcement learning; human behavior modeling; concentrated solar power systems

Special Issue Information

Dear Colleagues,

This Special Issue aims to provide a comprehensive overview of the current state of the art in Future Internet technology in the USA. We invite research articles that will consolidate our understanding in this area.

The Special Issue will publish full research papers and reviews. Potential topics include, but are not limited to, the following research areas:

  • Advanced communications network infrastructures;
  • Internet of Things;
  • Centralized and distributed data centers;
  • Industrial internet;
  • Embedded computing;
  • 5G/6G networking;
  • IoT platforms, integration, and services;
  • Software-defined network functions and network virtualization;
  • Quality of service in wireless and mobile networks;
  • Vehicular cloud networks;
  • Cloudlet and fog computing;
  • Cyber–physical systems;
  • Smart energy systems;
  • Smart healthcare systems;
  • Smart manufacturing lines;
  • Smart cities;
  • Human–computer interaction and usability;
  • Smart learning systems;
  • Artificial and augmented intelligence;
  • Cybersecurity compliance.

Dr. Eirini Eleni Tsiropoulou
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, use the submission form to submit a manuscript. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (8 papers)


Research


18 pages, 6477 KiB  
Article
The Microverse: A Task-Oriented Edge-Scale Metaverse
by Qian Qu, Mohsen Hatami, Ronghua Xu, Deeraj Nagothu, Yu Chen, Xiaohua Li, Erik Blasch, Erika Ardiles-Cruz and Genshe Chen
Future Internet 2024, 16(2), 60; https://doi.org/10.3390/fi16020060 - 13 Feb 2024
Abstract
Over the past decade, there has been a remarkable acceleration in the evolution of smart cities and intelligent spaces, driven by breakthroughs in technologies such as the Internet of Things (IoT), edge–fog–cloud computing, and machine learning (ML)/artificial intelligence (AI). As society begins to harness the full potential of these smart environments, the horizon brightens with the promise of an immersive, interconnected 3D world. The forthcoming paradigm shift in how we live, work, and interact owes much to groundbreaking innovations in augmented reality (AR), virtual reality (VR), extended reality (XR), blockchain, and digital twins (DTs). However, realizing the expansive digital vista in our daily lives is challenging. Current limitations include an incomplete integration of pivotal techniques, daunting bandwidth requirements, and the critical need for near-instantaneous data transmission, all impeding the digital VR metaverse from fully manifesting as envisioned by its proponents. This paper seeks to delve deeply into the intricacies of the immersive, interconnected 3D realm, particularly in applications demanding high levels of intelligence. Specifically, this paper introduces the microverse, a task-oriented, edge-scale, pragmatic solution for smart cities. Unlike all-encompassing metaverses, each microverse instance serves a specific task as a manageable digital twin of an individual network slice. Each microverse enables on-site/near-site data processing, information fusion, and real-time decision-making within the edge–fog–cloud computing framework. The microverse concept is verified using smart public safety surveillance (SPSS) for smart communities as a case study, demonstrating its feasibility in practical smart city applications. The aim is to stimulate discussions and inspire fresh ideas in our community, guiding us as we navigate the evolving digital landscape of smart cities to embrace the potential of the metaverse.
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)

16 pages, 463 KiB  
Article
CROWDMATCH: Optimizing Crowdsourcing Matching through the Integration of Matching Theory and Coalition Games
by Adedamola Adesokan, Rowan Kinney and Eirini Eleni Tsiropoulou
Future Internet 2024, 16(2), 58; https://doi.org/10.3390/fi16020058 - 11 Feb 2024
Abstract
This paper tackles the challenges inherent in crowdsourcing dynamics by introducing the CROWDMATCH mechanism. Aimed at enabling crowdworkers to strategically select suitable crowdsourcers while contributing information to crowdsourcing tasks, CROWDMATCH considers incentives, information availability and cost, and the decisions of fellow crowdworkers to model the utility functions of both the crowdworkers and the crowdsourcers. Specifically, the paper presents an initial Approximate CROWDMATCH mechanism grounded in matching theory principles, eliminating externalities from crowdworkers’ decisions and enabling each entity to maximize its utility. Subsequently, the Accurate CROWDMATCH mechanism is introduced: initialized with the outcome of the Approximate CROWDMATCH mechanism, it employs coalition game-theoretic principles to refine the matching process by accounting for externalities. The paper’s contributions include the CROWDMATCH system model, the development of both the Approximate and Accurate CROWDMATCH mechanisms, and a demonstration of their superior performance through comprehensive simulation results. The mechanisms’ scalability in large-scale crowdsourcing systems and their operational advantages distinguish them from existing methods and underline their efficacy in empowering crowdworkers in crowdsourcer selection.
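The matching-theory principles the abstract refers to can be illustrated with a classic deferred-acceptance (Gale–Shapley) routine between crowdworkers and crowdsourcers. This is a generic, hypothetical sketch of one-to-one matching under complete preference lists, not the CROWDMATCH utility functions or mechanisms themselves:

```python
# Deferred acceptance (Gale-Shapley), worker-proposing variant.
# Assumes every worker ranks every crowdsourcer and vice versa.
def deferred_acceptance(worker_prefs, sourcer_prefs):
    # worker_prefs: {worker: [sourcers, best first]}; sourcer_prefs likewise.
    match = {}                        # sourcer -> tentatively held worker
    free = list(worker_prefs)         # workers with no tentative match
    next_pick = {w: 0 for w in worker_prefs}
    while free:
        w = free.pop(0)
        if next_pick[w] >= len(worker_prefs[w]):
            continue                  # w has exhausted its preference list
        s = worker_prefs[w][next_pick[w]]
        next_pick[w] += 1
        if s not in match:
            match[s] = w              # s holds its first proposer
        else:
            rank = sourcer_prefs[s].index
            if rank(w) < rank(match[s]):   # s prefers w to its current match
                free.append(match[s])
                match[s] = w
            else:
                free.append(w)        # s rejects w; w proposes again later
    return match
```

Each worker proposes to crowdsourcers in preference order, and each crowdsourcer tentatively holds its best proposer; the result is a stable matching. The paper's mechanisms replace bare preference lists with utility functions and then refine the outcome via coalition games.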

23 pages, 933 KiB  
Article
Clustering on the Chicago Array of Things: Spotting Anomalies in the Internet of Things Records
by Kyle DeMedeiros, Chan Young Koh and Abdeltawab Hendawi
Future Internet 2024, 16(1), 28; https://doi.org/10.3390/fi16010028 - 16 Jan 2024
Abstract
The Chicago Array of Things (AoT) is a robust dataset collected from over 100 nodes over four years. Each node contains over a dozen sensors. The array consists of Internet of Things (IoT) devices with multiple heterogeneous sensors connected to a processing and storage backbone to collect data from across Chicago, IL, USA. The data collected include meteorological measurements such as temperature, humidity, and heat, as well as chemical and environmental measurements such as CO2 concentration, PM2.5, and light intensity. The AoT sensor network is one of the largest open IoT systems available to researchers. Anomaly detection (AD) in IoT and sensor networks is an important tool for protecting the ever-growing IoT ecosystem from faulty data and sensors, as well as from attacking threats. Interestingly, in-depth analyses of the Chicago AoT for anomaly detection are rare. Here, we study the viability of the Chicago AoT dataset for anomaly detection using clustering techniques. We applied K-Means, DBSCAN, and Hierarchical DBSCAN (HDBSCAN) to determine the viability of labeling an unlabeled dataset at the sensor level. The results show that the clustering algorithm best suited for this task varies with the density of the anomalous readings and the variability of the data points being clustered. At the sensor level, however, the K-Means algorithm, though simple, is better suited than the more complex DBSCAN and HDBSCAN algorithms for determining specific, at-a-glance anomalies, though it comes with drawbacks.
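As a rough illustration of what sensor-level, K-Means-based anomaly flagging looks like (a hypothetical sketch in the spirit of the approach described above, with made-up data and thresholds, not the paper's actual pipeline):

```python
# Toy 1-D K-Means (Lloyd's algorithm, k >= 2) plus a distance-based
# anomaly flag: fit centroids on historical readings, then mark new
# readings that are far from every centroid.
def kmeans_1d(values, k=2, iters=20):
    # Initialize centroids at evenly spaced positions in the sorted data.
    s = sorted(values)
    centroids = [s[int(i * (len(s) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda j: abs(v - centroids[j]))].append(v)
        # Recompute each centroid as its cluster mean; keep old value if empty.
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids

def flag_anomalies(train, readings, threshold):
    # A reading is anomalous if it lies farther than `threshold`
    # from every centroid learned on the training window.
    centroids = kmeans_1d(train)
    return [v for v in readings if min(abs(v - c) for c in centroids) > threshold]
```

In practice one would use a library implementation (e.g. scikit-learn's KMeans) over multivariate sensor vectors; the sketch only shows the "distance to nearest centroid" criterion that makes K-Means attractive for at-a-glance anomaly labeling.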

17 pages, 888 KiB  
Article
Addressing the Gaps of IoU Loss in 3D Object Detection with IIoU
by Niranjan Ravi and Mohamed El-Sharkawy
Future Internet 2023, 15(12), 399; https://doi.org/10.3390/fi15120399 - 11 Dec 2023
Cited by 1
Abstract
Three-dimensional object detection involves estimating the dimensions, orientations, and locations of 3D bounding boxes. Intersection over Union (IoU) loss measures the overlap between predicted and ground-truth 3D bounding boxes. The localization task uses smooth-L1 loss with IoU to estimate the object’s location, and the classification task identifies the object/class category inside each 3D bounding box. Localization suffers a performance gap when the predicted and ground-truth boxes overlap only slightly or not at all, indicating the boxes are far apart, and when one box encloses the other. Existing axis-aligned IoU losses also suffer a performance drop for rotated 3D bounding boxes. This research addresses these shortcomings in the bounding box regression problem of 3D object detection by introducing an Improved Intersection over Union (IIoU) loss. The proposed loss function’s performance is evaluated on LiDAR-based and camera–LiDAR fusion methods using the KITTI dataset.
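The gap described above is visible in the plain axis-aligned 3D IoU computation: once the boxes are disjoint, the intersection term is zero regardless of how far apart they are, so the loss carries no information about their separation. A minimal sketch of axis-aligned 3D IoU (not the paper's IIoU formulation):

```python
# Axis-aligned 3D IoU for boxes given as
# (x_min, y_min, z_min, x_max, y_max, z_max).
def iou_3d(a, b):
    # Overlap along each axis; clamped to zero where the boxes do not meet.
    dx = max(0.0, min(a[3], b[3]) - max(a[0], b[0]))
    dy = max(0.0, min(a[4], b[4]) - max(a[1], b[1]))
    dz = max(0.0, min(a[5], b[5]) - max(a[2], b[2]))
    inter = dx * dy * dz
    vol_a = (a[3] - a[0]) * (a[4] - a[1]) * (a[5] - a[2])
    vol_b = (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])
    union = vol_a + vol_b - inter
    return inter / union if union > 0 else 0.0
```

Every pair of disjoint boxes returns exactly 0.0 here, which is why IoU variants (GIoU, DIoU, and losses like the IIoU proposed in the paper) add terms that remain informative when boxes are far apart, enclosed, or rotated.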

21 pages, 730 KiB  
Article
Neural Network Exploration for Keyword Spotting on Edge Devices
by Jacob Bushur and Chao Chen
Future Internet 2023, 15(6), 219; https://doi.org/10.3390/fi15060219 - 20 Jun 2023
Abstract
The introduction of artificial neural networks to speech recognition applications has sparked the rapid development and popularization of digital assistants. These digital assistants constantly monitor the audio captured by a microphone for a small set of keywords. Upon recognizing a keyword, a larger audio recording is saved and processed by a separate, more complex neural network. Deep neural networks have become an effective tool for keyword spotting. Their implementation in low-cost edge devices, however, is still challenging due to limited resources on board. This research demonstrates the process of implementing, modifying, and training neural network architectures for keyword spotting. The trained models are also subjected to post-training quantization to evaluate its effect on model performance. The models are evaluated using metrics relevant to deployment on resource-constrained systems, such as model size, memory consumption, and inference latency, in addition to the standard comparisons of accuracy and parameter count. The process of deploying the trained and quantized models is also explored through configuring the microcontroller or FPGA onboard the edge devices. By selecting multiple architectures, training a collection of models, and comparing the models using the techniques demonstrated in this research, a developer can find the best-performing neural network for keyword spotting given the constraints of a target embedded system.
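Post-training quantization, as evaluated in the study, maps float32 parameters to 8-bit integers, shrinking model size roughly fourfold at some cost in precision. A minimal, framework-free sketch of affine int8 quantization (real deployments would use a toolchain such as TensorFlow Lite; the helper names here are illustrative):

```python
# Affine (asymmetric) quantization of a float weight list to int8.
# Each weight w maps to q = round(w / scale) + zero_point, clamped to [-128, 127].
def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0          # guard against a constant tensor
    zero_point = round(-lo / scale) - 128     # int8 value representing 0.0-ish
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Reconstruct approximate float weights for inference-time comparison.
    return [(qi - zero_point) * scale for qi in q]
```

The round trip bounds reconstruction error by about one quantization step (`scale`), which is the precision loss the study's accuracy comparisons measure against the savings in model size and latency.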

13 pages, 1481 KiB  
Article
Feature Construction Using Persistence Landscapes for Clustering Noisy IoT Time Series
by Renjie Chen and Nalini Ravishanker
Future Internet 2023, 15(6), 195; https://doi.org/10.3390/fi15060195 - 28 May 2023
Cited by 1
Abstract
With the advancement of IoT technologies, a large amount of data is available from wireless sensor networks (WSNs), particularly for studying climate change. Clustering long and noisy time series has become an important research area for analyzing these data. This paper proposes a feature-based clustering approach using topological data analysis, a set of methods for finding topological structure in data. Persistence diagrams and landscapes are popular topological summaries that can be used to cluster time series. This paper presents a framework for selecting an optimal number of persistence landscapes and using them as features in an unsupervised learning algorithm. This approach reduces computational cost while maintaining accuracy. The clustering approach was demonstrated to be accurate on simulated data, using only four, three, and three selected features in Scenarios 1, 2, and 3, respectively. On real data, consisting of multiple long temperature streams from various US locations, our optimal feature selection method achieved approximately a 13-fold speed-up in computation.
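A persistence landscape summarizes a persistence diagram as a sequence of functions λ_k, where λ_k(t) is the k-th largest value of min(t − b, d − t), floored at zero, over the diagram's (birth, death) pairs. A minimal sketch of evaluating these functions (the paper's full pipeline, including diagram computation and landscape selection, is more involved, and libraries such as GUDHI provide optimized implementations):

```python
# Evaluate the k-th persistence landscape function at point t,
# given a persistence diagram as a list of (birth, death) pairs.
def landscape(diagram, k, t):
    # Each (b, d) pair contributes a "tent" peaking at height (d - b) / 2
    # over the midpoint (b + d) / 2; take the k-th largest tent value at t.
    tents = sorted((max(0.0, min(t - b, d - t)) for b, d in diagram),
                   reverse=True)
    return tents[k - 1] if k <= len(tents) else 0.0
```

Sampling λ_1, λ_2, … on a grid of t values turns a diagram into a fixed-length feature vector, which is what makes landscapes usable as inputs to standard clustering algorithms.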

26 pages, 1872 KiB  
Article
SmartGroup: A Tool for Small-Group Learning Activities
by Haining Zhu, Na Li, Nitish Kumar Rai and John M. Carroll
Future Internet 2023, 15(1), 7; https://doi.org/10.3390/fi15010007 - 26 Dec 2022
Cited by 1
Abstract
Small-group learning activities (SGLAs) offer varied active learning opportunities and student benefits, but higher education instructors do not universally adopt SGLAs, in part owing to management burdens. We designed and deployed the SmartGroup system, a tool-based approach to minimize instructor burdens while facilitating SGLAs and associated benefits by managing peer group formation and peer group work assessment. SmartGroup was deployed in one course over 10 weeks; iterations of SmartGroup were provided continuously to meet the instructor’s needs. After deployment, the instructor and teaching assistant were interviewed, and 20 anonymous post-study survey responses were collected. The system exposed students to new perspectives, fostered meta-cognitive opportunities, and improved weaker students’ performances while being predominantly well-received in terms of usability and satisfaction. Our work contributes to the literature an exploration of tool-assisted peer group work assessment in higher education and how to promote wider SGLA adoption.

Review


60 pages, 14922 KiB  
Review
The Power of Generative AI: A Review of Requirements, Models, Input–Output Formats, Evaluation Metrics, and Challenges
by Ajay Bandi, Pydi Venkata Satya Ramesh Adapa and Yudu Eswar Vinay Pratap Kumar Kuchi
Future Internet 2023, 15(8), 260; https://doi.org/10.3390/fi15080260 - 31 Jul 2023
Cited by 11
Abstract
Generative artificial intelligence (AI) has emerged as a powerful technology with numerous applications across domains. There is a need to identify the requirements and evaluation metrics for generative AI models designed for specific tasks. This research investigates the fundamental aspects of generative AI systems, including their requirements, models, input–output formats, and evaluation metrics. The study addresses key research questions and presents comprehensive insights to guide researchers, developers, and practitioners in the field. First, the requirements necessary for implementing generative AI systems are examined and grouped into three categories: hardware, software, and user experience. The study then explores the types of generative AI models described in the literature, presenting a taxonomy based on architectural characteristics: variational autoencoders (VAEs), generative adversarial networks (GANs), diffusion models, transformers, language models, normalizing flow models, and hybrid models. A comprehensive classification of the input and output formats used in generative AI systems is also provided. Moreover, the research proposes a classification system based on output types and discusses commonly used evaluation metrics in generative AI. The findings enable researchers, developers, and practitioners to effectively implement and evaluate generative AI models for various applications. The significance of the research lies in understanding that generative AI system requirements are crucial for effective planning, design, and optimal performance; a taxonomy of models aids in selecting suitable options and driving advancements; classifying input–output formats enables leveraging diverse formats for customized systems; and evaluation metrics establish standardized methods to assess model quality and performance.
