Information, Volume 14, Issue 11 (November 2023) – 42 articles

Cover Story: This paper proposes an Intrusion Detection System (IDS) for in-vehicle security networks based on the multi-scale histograms of CAN-ID message identifiers of the CAN-bus protocol. The proposed approach uses sequences of two and three CAN-bus messages to create multi-scale dictionaries from windows of in-vehicle network traffic. A preliminary multi-scale histogram model and dictionaries are created using only legitimate traffic. Against this model, the IDS creates and combines the feature spaces for the different scales. The created feature space is given as input to a Convolutional Neural Network (CNN) to identify the traffic windows where the attack is present. The proposed approach has been evaluated on two different public datasets.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
22 pages, 3939 KiB  
Review
What Is Hidden in Clear Sight and How to Find It—A Survey of the Integration of Artificial Intelligence and Eye Tracking
by Maja Kędras and Janusz Sobecki
Information 2023, 14(11), 624; https://doi.org/10.3390/info14110624 - 20 Nov 2023
Viewed by 1814
Abstract
This paper presents an overview of the uses of the combination of eye tracking and artificial intelligence. Several aspects of both eye tracking and the applied AI methods are analyzed: the eye tracking hardware used, the sampling frequency, the number of test participants, additional parameters, feature extraction, the artificial intelligence methods applied, and the methods used to verify the results. Finally, the paper compares the results reported in the analyzed literature and discusses them. Full article
(This article belongs to the Special Issue Recent Advances and Perspectives in Human-Computer Interaction)

17 pages, 948 KiB  
Article
Robust Multiagent Reinforcement Learning for UAV Systems: Countering Byzantine Attacks
by Jishu K. Medhi, Rui Liu, Qianlong Wang and Xuhui Chen
Information 2023, 14(11), 623; https://doi.org/10.3390/info14110623 - 19 Nov 2023
Viewed by 1495
Abstract
Multiple unmanned aerial vehicle (multi-UAV) systems have gained significant attention in applications, such as aerial surveillance and search and rescue missions. With the recent development of state-of-the-art multiagent reinforcement learning (MARL) algorithms, it is possible to train multi-UAV systems in collaborative and competitive environments. However, the inherent vulnerabilities of multiagent systems pose significant privacy and security risks when deploying general and conventional MARL algorithms. The presence of even a single Byzantine adversary within the system can severely degrade the learning performance of UAV agents. This work proposes a Byzantine-resilient MARL algorithm that leverages a combination of geometric median consensus and a robust state update model to mitigate, or even eliminate, the influence of Byzantine attacks. To validate its effectiveness and feasibility, the authors include a multi-UAV threat model, provide a guarantee of robustness, and investigate key attack parameters for multiple UAV navigation scenarios. Results from the experiments show that the average rewards during a Byzantine attack increased by up to 60% for the cooperative navigation scenario compared with conventional MARL techniques. The learning rewards generated by the baseline algorithms could not converge during training under these attacks, while the proposed method effectively converged to an optimal solution, proving its viability and correctness. Full article
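A minimal sketch of the kind of robust aggregation the abstract refers to, a geometric median consensus computed with the classic Weiszfeld iteration, is given below. The agent reports and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def geometric_median(points, max_iter=100, tol=1e-6):
    """Weiszfeld iteration: the geometric median minimises the sum of
    Euclidean distances to the given points and is robust to outliers,
    which is why it resists Byzantine (arbitrarily corrupted) inputs."""
    points = np.asarray(points, dtype=float)
    median = points.mean(axis=0)                       # start from the mean
    for _ in range(max_iter):
        dists = np.linalg.norm(points - median, axis=1)
        dists = np.where(dists < 1e-12, 1e-12, dists)  # avoid division by zero
        weights = 1.0 / dists
        new_median = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_median - median) < tol:
            break
        median = new_median
    return median

# Toy example: 4 honest agents report similar gradient estimates,
# 1 Byzantine agent reports an arbitrary vector.
honest = np.random.normal(loc=1.0, scale=0.1, size=(4, 3))
byzantine = np.array([[100.0, -50.0, 80.0]])
reports = np.vstack([honest, byzantine])

print("mean aggregation: ", reports.mean(axis=0))      # pulled away by the attacker
print("geometric median: ", geometric_median(reports))  # stays near the honest agents
```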

27 pages, 4466 KiB  
Article
Understanding Website Privacy Policies—A Longitudinal Analysis Using Natural Language Processing
by Veronika Belcheva, Tatiana Ermakova and Benjamin Fabian
Information 2023, 14(11), 622; https://doi.org/10.3390/info14110622 - 19 Nov 2023
Cited by 1 | Viewed by 2099
Abstract
Privacy policies are the main method for informing Internet users of how their data are collected and shared. This study aims to analyze the deficiencies of privacy policies in terms of readability, vague statements, and the use of pacifying phrases concerning privacy. It takes a step forward in the literature on this topic through a comprehensive analysis encompassing both time and website coverage. It characterizes trends across website categories, top-level domains, and popularity ranks. Furthermore, studying the development in the context of the General Data Protection Regulation (GDPR) offers insights into the impact of regulations on policy comprehensibility. The findings reveal a concerning trend: privacy policies have grown longer and more ambiguous, making it challenging for users to comprehend them. Notably, there is an increased proportion of vague statements, while clear statements have seen a decrease. Despite this, the study highlights a steady rise in the inclusion of reassuring statements aimed at alleviating readers’ privacy concerns. Full article
(This article belongs to the Special Issue Text Mining: Challenges, Algorithms, Tools and Applications)
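As a rough illustration of the readability scoring such a longitudinal analysis builds on, the sketch below computes the standard Flesch Reading Ease formula for a policy excerpt; the heuristic syllable counter and the excerpt are assumptions for the demo, not the authors' pipeline.

```python
import re

def count_syllables(word):
    """Very rough heuristic: count groups of vowels (good enough for a demo)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Lower scores mean harder-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

policy_excerpt = (
    "We may share your personal information with affiliated third parties "
    "for purposes permitted by applicable law. We value your privacy."
)
print(round(flesch_reading_ease(policy_excerpt), 1))
```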

22 pages, 3482 KiB  
Article
Optimal Radio Propagation Modeling and Parametric Tuning Using Optimization Algorithms
by Joseph Isabona, Agbotiname Lucky Imoize, Oluwasayo Akinloye Akinwumi, Okiemute Roberts Omasheye, Emughedi Oghu, Cheng-Chi Lee and Chun-Ta Li
Information 2023, 14(11), 621; https://doi.org/10.3390/info14110621 - 19 Nov 2023
Viewed by 1273
Abstract
Benchmarking different optimization algorithms is a demanding task, particularly for network-based cellular communication systems. The design and management process of these systems involves many stochastic variables and complex design parameters that demand an unbiased estimation and analysis. Though several optimization algorithms exist for different parametric modeling and tuning, an in-depth evaluation of their functional performance has not been adequately addressed, especially for cellular communication systems. Firstly, in this paper, nine key numerical and global optimization algorithms, comprising Gauss–Newton (GN), gradient descent (GD), Genetic Algorithm (GA), Levenberg–Marquardt (LM), Quasi-Newton (QN), Trust-Region–Dog-Leg (TR), pattern search (PAS), Simulated Annealing (SA), and particle swarm (PS), have been benchmarked against measured data. The experimental data were taken from different radio signal propagation terrains around four eNodeB cells. In order to assist the radio frequency (RF) engineer in selecting the most suitable optimization method for the parametric model tuning, three-fold benchmarking criteria comprising the Accuracy Profile Benchmark (APB), Function Evaluation Benchmark (FEB), and Execution Speed Benchmark (ESB) were employed. The APB and FEB were quantitatively compared against the measured data for fair benchmarking. By leveraging the APB performance criteria, the QN achieved the best results with the preferred values of 98.34, 97.31, 97.44, and 96.65% in locations 1–4. The GD attained the worst performance with the lowest APB values of 98.25, 95.45, 96.10, and 95.70 in the tested locations. In terms of objective function values and their evaluation count, the QN algorithm shows the fewest function counts of 44, 44, 56, and 44, and the lowest objective values of 80.85, 37.77, 54.69, and 41.24, thus attaining the best optimization algorithm results across the study locations. The worst performance was attained by the GD with objective values of 86.45, 39.58, 76.66, and 54.27, respectively. Though the objective values achieved with the global optimization methods PAS, GA, PS, and SA are relatively small compared to the QN, their function evaluation counts are high. The PAS, GA, PS, and SA recorded 1367, 2550, 3450, and 2818 function evaluation counts, which are relatively high. Overall, the QN algorithm achieves the best optimization, and it can serve as a reference for RF engineers in selecting suitable optimization methods for propagation modeling and parametric tuning. Full article
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
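A miniature analogue of this benchmarking exercise, fitting a log-distance path-loss model to synthetic measurements with SciPy's Levenberg-Marquardt, trust-region reflective, and dogbox least-squares solvers and reporting error and function-evaluation counts, is sketched below. The model, data, and starting point are illustrative assumptions, not the authors' measurements.

```python
import numpy as np
from scipy.optimize import least_squares

# Log-distance path-loss model: PL(d) = PL0 + 10*n*log10(d)
def residuals(params, d, pl_measured):
    pl0, n = params
    return pl0 + 10.0 * n * np.log10(d) - pl_measured

rng = np.random.default_rng(0)
d = np.linspace(50, 2000, 200)                       # distances in metres
pl_true = 30.0 + 10 * 3.2 * np.log10(d)              # "true" path loss
pl_measured = pl_true + rng.normal(0, 4.0, d.size)   # shadow-fading noise

for method in ("lm", "trf", "dogbox"):               # LM vs. two trust-region variants
    res = least_squares(residuals, x0=[20.0, 2.0], args=(d, pl_measured), method=method)
    rmse = np.sqrt(np.mean(res.fun ** 2))
    print(f"{method:>7}: params={np.round(res.x, 2)}, RMSE={rmse:.2f} dB, "
          f"function evaluations={res.nfev}")
```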

26 pages, 3768 KiB  
Article
Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning
by Ali Abbasi Tadi, Saroj Dayal, Dima Alhadidi and Noman Mohammed
Information 2023, 14(11), 620; https://doi.org/10.3390/info14110620 - 19 Nov 2023
Cited by 1 | Viewed by 1659
Abstract
The vulnerability of machine learning models to membership inference attacks, which aim to determine whether a specific record belongs to the training dataset, is explored in this paper. Federated learning allows multiple parties to independently train a model without sharing or centralizing their data, offering privacy advantages. However, when private datasets are used in federated learning and model access is granted, the risk of membership inference attacks emerges, potentially compromising sensitive data. To address this, effective defenses in a federated learning environment must be developed without compromising the utility of the target model. This study empirically investigates and compares membership inference attack methodologies in both federated and centralized learning environments, utilizing diverse optimizers and assessing attacks with and without defenses on image and tabular datasets. The findings demonstrate that a combination of knowledge distillation and conventional mitigation techniques (such as Gaussian dropout, Gaussian noise, and activity regularization) significantly mitigates the risk of information leakage in both federated and centralized settings. Full article
(This article belongs to the Special Issue International Database Engineered Applications)
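The attacks studied in the paper are more sophisticated, but the underlying signal, namely that an overfitted model assigns lower loss to its training members than to non-members, can be demonstrated with a simple loss-threshold attack. The dataset, target model, and threshold rule below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a deliberately overfitted target model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(X, y, test_size=0.5, random_state=0)
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_member, y_member)

def per_sample_loss(model, X, y):
    """Cross-entropy loss of the target model on each record."""
    proba = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(proba, 1e-12, None))

loss_member = per_sample_loss(target, X_member, y_member)
loss_nonmember = per_sample_loss(target, X_nonmember, y_nonmember)

# Threshold attack: guess "member" when the loss is below a calibration threshold.
threshold = np.median(np.concatenate([loss_member, loss_nonmember]))
guesses = np.concatenate([loss_member, loss_nonmember]) < threshold
truth = np.concatenate([np.ones_like(loss_member), np.zeros_like(loss_nonmember)]).astype(bool)
print("attack accuracy:", (guesses == truth).mean())
```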

39 pages, 1887 KiB  
Article
Efficient Resource Utilization in IoT and Cloud Computing
by Vivek Kumar Prasad, Debabrata Dansana, Madhuri D. Bhavsar, Biswaranjan Acharya, Vassilis C. Gerogiannis and Andreas Kanavos
Information 2023, 14(11), 619; https://doi.org/10.3390/info14110619 - 19 Nov 2023
Viewed by 2498
Abstract
With the proliferation of IoT devices, there has been exponential growth in data generation, placing substantial demands on both cloud computing (CC) and internet infrastructure. CC, renowned for its scalability and virtual resource provisioning, is of paramount importance in e-commerce applications. However, the dynamic nature of IoT and cloud services introduces unique challenges, notably in the establishment of service-level agreements (SLAs) and the continuous monitoring of compliance. This paper presents a versatile framework for the adaptation of e-commerce applications to IoT and CC environments. It introduces a comprehensive set of metrics designed to support SLAs by enabling periodic resource assessments, ensuring alignment with service-level objectives (SLOs). This policy-driven approach seeks to automate resource management in the era of CC, thereby reducing the dependency on extensive human intervention in e-commerce applications. This paper culminates with a case study that demonstrates the practical utilization of metrics and policies in the management of cloud resources. Furthermore, it provides valuable insights into the resource requisites for deploying e-commerce applications within the realms of the IoT and CC. This holistic approach holds the potential to streamline the monitoring and administration of CC services, ultimately enhancing their efficiency and reliability. Full article
(This article belongs to the Special Issue Systems Engineering and Knowledge Management)

20 pages, 8983 KiB  
Article
An Effective Ensemble Convolutional Learning Model with Fine-Tuning for Medicinal Plant Leaf Identification
by Mohd Asif Hajam, Tasleem Arif, Akib Mohi Ud Din Khanday and Mehdi Neshat
Information 2023, 14(11), 618; https://doi.org/10.3390/info14110618 - 18 Nov 2023
Cited by 3 | Viewed by 2731
Abstract
Accurate and efficient medicinal plant image classification is of utmost importance as these plants produce a wide variety of bioactive compounds that offer therapeutic benefits. With a long history of medicinal plant usage, different parts of plants, such as flowers, leaves, and roots, have been recognized for their medicinal properties and are used for plant identification. However, leaf images are extensively used due to their convenient accessibility and are a major source of information. In recent years, transfer learning and fine-tuning, which use pre-trained deep convolutional networks to extract pertinent features, have emerged as an extremely effective approach for image-identification problems. This study leveraged the power of three deep convolutional neural networks, namely VGG16, VGG19, and DenseNet201, to derive features from the input images of the medicinal plant dataset, containing leaf images of 30 classes. The models were compared and ensembled into four hybrid models to enhance the predictive performance by utilizing the averaging and weighted averaging strategies. Quantitative experiments were carried out to evaluate the models on the Mendeley Medicinal Leaf Dataset. The resultant ensemble of VGG19+DenseNet201 with fine-tuning showcased an enhanced capability in identifying medicinal plant images, with an improvement of 7.43% and 5.8% compared with VGG19 and VGG16. Furthermore, VGG19+DenseNet201 outperforms its standalone counterparts by achieving an accuracy of 99.12% on the test set. A thorough assessment with metrics such as accuracy, recall, precision, and the F1-score firmly established the effectiveness of the ensemble strategy. Full article
(This article belongs to the Special Issue Second Edition of Predictive Analytics and Data Science)
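The averaging and weighted-averaging ensembling described above reduces to a few lines once each backbone's softmax outputs are available; the sketch below uses placeholder probability matrices and arbitrary weights rather than the authors' trained VGG16, VGG19, and DenseNet201 models.

```python
import numpy as np

def ensemble_predict(prob_list, weights=None):
    """Combine per-model class-probability matrices (n_samples x n_classes)
    by plain or weighted averaging, then take the argmax class."""
    probs = np.stack(prob_list)                          # (n_models, n_samples, n_classes)
    if weights is None:
        avg = probs.mean(axis=0)                         # plain averaging
    else:
        w = np.asarray(weights, dtype=float)
        avg = np.tensordot(w / w.sum(), probs, axes=1)   # weighted averaging
    return avg.argmax(axis=1)

# Placeholder softmax outputs for 5 leaf images and 30 medicinal-plant classes.
rng = np.random.default_rng(1)
p_vgg16, p_vgg19, p_densenet = (rng.dirichlet(np.ones(30), size=5) for _ in range(3))

print(ensemble_predict([p_vgg16, p_vgg19, p_densenet]))                            # simple average
print(ensemble_predict([p_vgg16, p_vgg19, p_densenet], weights=[0.2, 0.3, 0.5]))   # weighted
```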

18 pages, 1219 KiB  
Article
Neural Network-Based Solar Irradiance Forecast for Edge Computing Devices
by Georgios Venitourakis, Christoforos Vasilakis, Alexandros Tsagkaropoulos, Tzouma Amrou, Georgios Konstantoulakis, Panagiotis Golemis and Dionysios Reisis
Information 2023, 14(11), 617; https://doi.org/10.3390/info14110617 - 18 Nov 2023
Cited by 1 | Viewed by 1220
Abstract
Aiming at effectively improving photovoltaic (PV) park operation and the stability of the electricity grid, the current paper addresses the design and development of a novel system achieving short-term irradiance forecasting for the PV park area, which is the key factor for controlling the variations in the PV power production. First, it introduces the Xception long short-term memory (XceptionLSTM) cell tailored for recurrent neural networks (RNN). Second, it presents the novel irradiance forecasting model that consists of a sequence-to-sequence image regression NN in the form of a spatio-temporal encoder–decoder, including Xception layers in the spatial encoder, the novel XceptionLSTM in the temporal encoder and decoder, and a multilayer perceptron in the spatial decoder. The proposed model achieves a forecast skill of 16.57% for a horizon of 5 min when compared to the persistence model. Moreover, the proposed model is designed for execution on edge computing devices, and the real-time application of the inference on the Raspberry Pi 4 Model B 8 GB and the Raspberry Pi Zero 2W validates the results. Full article
(This article belongs to the Special Issue Artificial Intelligence on the Edge)
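The forecast skill quoted above is conventionally computed relative to the persistence baseline, which simply repeats the last observed value. A minimal version of that metric, on placeholder irradiance data, is shown below.

```python
import numpy as np

def forecast_skill(y_true, y_model, y_persistence):
    """Skill = 1 - RMSE(model) / RMSE(persistence); positive values mean the
    model beats the trivial 'repeat the last observation' baseline."""
    rmse = lambda y_hat: np.sqrt(np.mean((y_true - y_hat) ** 2))
    return 1.0 - rmse(y_model) / rmse(y_persistence)

# Placeholder 5-minute-ahead irradiance values in W/m^2.
rng = np.random.default_rng(2)
irradiance = 600 + 50 * np.sin(np.linspace(0, 6, 300)) + rng.normal(0, 20, 300)
y_true = irradiance[1:]                              # value 5 minutes ahead
y_persistence = irradiance[:-1]                      # persistence: repeat current value
y_model = y_true + rng.normal(0, 15, y_true.size)    # stand-in for the NN forecast

print(f"forecast skill: {forecast_skill(y_true, y_model, y_persistence):.2%}")
```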

27 pages, 3278 KiB  
Article
Deep Learning Approach for Human Action Recognition Using a Time Saliency Map Based on Motion Features Considering Camera Movement and Shot in Video Image Sequences
by Abdorreza Alavigharahbagh, Vahid Hajihashemi, José J. M. Machado and João Manuel R. S. Tavares
Information 2023, 14(11), 616; https://doi.org/10.3390/info14110616 - 15 Nov 2023
Cited by 1 | Viewed by 1667
Abstract
In this article, a hierarchical method for action recognition based on temporal and spatial features is proposed. In current HAR methods, camera movement, sensor movement, sudden scene changes, and scene movement can increase motion feature errors and decrease accuracy. Another important aspect to take into account in a HAR method is the required computational cost. The proposed method provides a preprocessing step to address these challenges. As a preprocessing step, the method uses optical flow to detect camera movements and shots in input video image sequences. In the temporal processing block, the optical flow technique is combined with the absolute value of frame differences to obtain a time saliency map. The detection of shots, cancellation of camera movement, and the building of a time saliency map minimise movement detection errors. The time saliency map is then passed to the spatial processing block to segment the moving persons and/or objects in the scene. Because the search region for spatial processing is limited based on the temporal processing results, the computations in the spatial domain are drastically reduced. In the spatial processing block, the scene foreground is extracted in three steps: silhouette extraction, active contour segmentation, and colour segmentation. Key points are selected at the borders of the segmented foreground. The final features used are the intensity and angle of the optical flow of the detected key points. Using key point features for action detection reduces the computational cost of the classification step and the required training time. Finally, the features are submitted to a Recurrent Neural Network (RNN) to recognise the involved action. The proposed method was tested using four well-known action datasets: KTH, Weizmann, HMDB51, and UCF101, and its efficiency was evaluated. Since the proposed approach segments salient objects based on motion, edges, and colour features, it can be added as a preprocessing step to most current HAR systems to improve performance. Full article
(This article belongs to the Special Issue Computer Vision for Security Applications)
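The preprocessing described above, blending dense optical-flow magnitude with absolute frame differences into a time saliency map, can be approximated with OpenCV as in the sketch below; the weighting, parameters, and synthetic frames are illustrative choices rather than the authors' exact settings.

```python
import cv2
import numpy as np

def time_saliency_map(prev_gray, curr_gray, flow_weight=0.5):
    """Blend optical-flow magnitude with the absolute frame difference to
    highlight regions that move between two consecutive grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)                      # motion strength per pixel
    magnitude = cv2.normalize(magnitude, None, 0, 1, cv2.NORM_MINMAX)
    frame_diff = cv2.absdiff(curr_gray, prev_gray).astype(np.float32) / 255.0
    saliency = flow_weight * magnitude + (1 - flow_weight) * frame_diff
    return (saliency * 255).astype(np.uint8)

# Example with two synthetic frames: a bright square shifts a few pixels.
prev_frame = np.zeros((120, 160), dtype=np.uint8)
curr_frame = np.zeros((120, 160), dtype=np.uint8)
prev_frame[40:70, 50:80] = 255
curr_frame[42:72, 55:85] = 255

saliency = time_saliency_map(prev_frame, curr_frame)
print("most salient pixel value:", saliency.max())
```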

12 pages, 1332 KiB  
Article
Hierarchical Keyword Generation Method for Low-Resource Social Media Text
by Xinyi Guan and Shun Long
Information 2023, 14(11), 615; https://doi.org/10.3390/info14110615 - 15 Nov 2023
Viewed by 1199
Abstract
The exponential growth of social media text information presents a challenging issue in terms of retrieving valuable information efficiently. Utilizing deep learning models, we can automatically generate keywords that express core content and topics of social media text, thereby facilitating the retrieval of critical information. However, the performance of deep learning models is limited by the labeled text data in the social media domain. To address this problem, this paper presents a hierarchical keyword generation method for low-resource social media text. Specifically, the text segment is introduced as a hierarchical unit of social media text to construct a hierarchical model structure and design a text segment recovery task for self-supervised training of the model, which not only improves the ability of the model to extract features from social media text, but also reduces the dependence of the keyword generation model on the labeled data in the social media domain. Experimental results from publicly available social media datasets demonstrate that the proposed method can effectively improve the keyword generation performance even given limited social media labeled data. Further discussions demonstrate that the self-supervised training stage based on the text segment recovery task indeed benefits the model in adapting to the social media text domain. Full article
(This article belongs to the Section Artificial Intelligence)

16 pages, 5211 KiB  
Article
Developing Integrated Performance Dashboards Visualisations Using Power BI as a Platform
by Célia Talma Gonçalves, Maria José Angélico Gonçalves and Maria Inês Campante
Information 2023, 14(11), 614; https://doi.org/10.3390/info14110614 - 15 Nov 2023
Viewed by 4028
Abstract
The rapid advance of business technologies in recent years has made knowledge an essential and strategic asset that determines the success or failure of an organisation. Access to the right information in real time and with high selectivity can be a competitive advantage in the business environment. Business intelligence systems help corporate executives, business managers, and other operational workers make better and more informed business decisions. This study aimed to assess the impact of using business intelligence tools on the decision-making process in organisations, specifically in sales marketing. The methodology applied to realise the study’s objective was the Vercellis methodology. A set of data available on the sales marketing website SuperDataScience was used to implement a set of pressing KPIs for the business decision-making process in the area. Using these data, a complete business intelligence system solution was implemented. A data warehouse was created using the ETL (extract–transform–load) process, and the data were then explored using a set of dynamic dashboards with a view of the business metrics. The results showed that business intelligence systems allow the integration and transformation of data from various sources stored in data warehouses, where it is possible to implement KPIs and carry out quick, concise, easy-to-interpret graphical analyses. This paper contributes to a better understanding of the importance of data-integrated dashboard visualisation for the decision-making process. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)

27 pages, 11398 KiB  
Article
An Approach for Risk Traceability Using Blockchain Technology for Tracking, Tracing, and Authenticating Food Products
by Urvashi Sugandh, Swati Nigam, Manju Khari and Sanjay Misra
Information 2023, 14(11), 613; https://doi.org/10.3390/info14110613 - 15 Nov 2023
Cited by 1 | Viewed by 1824
Abstract
Regulatory authorities, consumers, and producers alike are alarmed by the issue of food safety, which is a matter of international concern. The conventional approaches utilized in food quality management demonstrate deficiencies in their capacity to sufficiently address issues related to traceability, transparency, and accountability. The emergence of blockchain technology (BCT) has provided a feasible approach to tackle the challenge of regulating food safety. This research paper presents a methodology for implementing blockchain technology to establish risk traceability in the context of monitoring, tracing, and authenticating agricultural products. The proposed system underwent a comprehensive evaluation, which placed significant emphasis on simulation parameters and assessment standards. The aim of the study was to demonstrate the effectiveness of the system through the assessment of various quantitative metrics, including throughput, latency, and resource utilization. Hyperledger Fabric and Hyperledger Caliper were employed in the formulation and assessment of algorithms intended for agricultural supply chain management. The configuration comprising two entities and two peers achieved the highest write throughput (205.87 transactions per second; TPS), thereby demonstrating the network’s effective transaction processing capability. In a two-organization, two-peer system, the mean latency for read operations varied from 0.037 to 0.061 s, contingent upon the transaction rates and accounting for the duration needed for network processing and validation. The results were visually depicted, offering a distinct demonstration of the system’s efficacy under various conditions. This study presents a quantitative analysis that illustrates the efficacy of the blockchain system in enhancing the traceability of agricultural products across the entire supply chain. The results of this research suggest that the implementation of blockchain technology could potentially enhance both the security and efficacy of food supply management. Full article
(This article belongs to the Special Issue Models for Blockchain Systems: Analysis and Simulation)

18 pages, 466 KiB  
Article
Weakly Supervised Learning Approach for Implicit Aspect Extraction
by Aye Aye Mar, Kiyoaki Shirai and Natthawut Kertkeidkachorn
Information 2023, 14(11), 612; https://doi.org/10.3390/info14110612 - 13 Nov 2023
Viewed by 1259
Abstract
Aspect-based sentiment analysis (ABSA) is a process to extract an aspect of a product from a customer review and identify its polarity. Most previous studies of ABSA focused on explicit aspects, but implicit aspects have not yet been the subject of much attention. This paper proposes a novel weakly supervised method for implicit aspect extraction, which is a task to classify a sentence into a pre-defined implicit aspect category. A dataset labeled with implicit aspects is automatically constructed from unlabeled sentences as follows. First, explicit sentences are obtained by extracting explicit aspects from unlabeled sentences, while sentences that do not contain explicit aspects are preserved as candidates of implicit sentences. Second, clustering is performed to merge the explicit and implicit sentences that share the same aspect. Third, the aspect of the explicit sentence is assigned to the implicit sentences in the same cluster as the implicit aspect label. Then, the BERT model is fine-tuned for implicit aspect extraction using the constructed dataset. The results of the experiments show that our method achieves 82% and 84% accuracy for mobile phone and PC reviews, respectively, which are 20 and 21 percentage points higher than the baseline. Full article
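Steps one to three of the labeling procedure can be sketched with TF-IDF features and k-means: sentences containing an explicit aspect word keep that label, and unlabeled sentences in the same cluster inherit the cluster's majority label. The toy sentences, the tiny aspect lexicon, and the cluster count are assumptions for illustration; the paper then fine-tunes BERT on the dataset built this way.

```python
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy review sentences and a tiny explicit-aspect lexicon (assumed for the demo).
sentences = [
    "The battery lasts two full days.",          # explicit: battery
    "It dies before lunch every day.",           # implicit: battery
    "The screen is bright and sharp.",           # explicit: screen
    "Hard to read it outdoors in the sun.",      # implicit: screen
    "Battery charging is also very fast.",       # explicit: battery
]
explicit_lexicon = {"battery": "battery", "screen": "screen"}

def explicit_aspect(sentence):
    for word, aspect in explicit_lexicon.items():
        if word in sentence.lower():
            return aspect
    return None                                  # candidate implicit sentence

labels = [explicit_aspect(s) for s in sentences]

# Cluster all sentences, then propagate the majority explicit label per cluster.
X = TfidfVectorizer().fit_transform(sentences)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for c in set(clusters):
    explicit_in_cluster = [l for l, k in zip(labels, clusters) if k == c and l]
    if explicit_in_cluster:
        majority = Counter(explicit_in_cluster).most_common(1)[0][0]
        labels = [majority if (k == c and l is None) else l for l, k in zip(labels, clusters)]

print(list(zip(sentences, labels)))   # implicit sentences now carry an aspect label
```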

15 pages, 13147 KiB  
Review
Science Mapping of Meta-Analysis in Agricultural Science
by Weiting Ding, Jialu Li, Heyang Ma, Yeru Wu and Hailong He
Information 2023, 14(11), 611; https://doi.org/10.3390/info14110611 - 11 Nov 2023
Cited by 1 | Viewed by 1677
Abstract
As a powerful statistical method, meta-analysis has been applied increasingly in agricultural science with remarkable progress. However, meta-analysis research reports in the agricultural discipline still need to be systematically combed. Scientometrics is often used to quantitatively analyze research on certain themes. In this study, the literature from a 30-year period (1992–2021) was retrieved based on the Web of Science database, and a quantitative analysis was performed using the VOSviewer and CiteSpace visual analysis software packages. The objective of this study was to investigate the current application of meta-analysis in agricultural sciences, the latest research hotspots, and trends, and to identify influential authors, research institutions, countries, articles, and journal sources. Over the past 30 years, the volume of the meta-analysis literature in agriculture has increased rapidly. We identified the top three authors (Sauvant D, Kebreab E, and Huhtanen P), the top three contributing organizations (Chinese Academy of Sciences, National Institute for Agricultural Research, and Northwest A&F University), and top three productive countries (the USA, China, and France). Keyword cluster analysis shows that the meta-analysis research in agricultural sciences falls into four categories: climate change, crop yield, soil, and animal husbandry. Jeffrey (2011) is the most influential and cited research paper, with the highest utilization rate for the Journal of Dairy Science. This paper objectively evaluates the development of meta-analysis in the agricultural sciences using bibliometrics analysis, grasps the development frontier of agricultural research, and provides insights into the future of related research in the agricultural sciences. Full article
(This article belongs to the Section Information Processes)

24 pages, 2604 KiB  
Article
An Integrated Time Series Prediction Model Based on Empirical Mode Decomposition and Two Attention Mechanisms
by Xianchang Wang, Siyu Dong and Rui Zhang
Information 2023, 14(11), 610; https://doi.org/10.3390/info14110610 - 11 Nov 2023
Cited by 1 | Viewed by 1410
Abstract
In the prediction of time series, Empirical Mode Decomposition (EMD) generates subsequences and separates short-term tendencies from long-term ones. However, a single prediction model, even one with an attention mechanism, has varying effects on each subsequence. To accurately capture the regularities of subsequences using an attention mechanism, we propose an integrated model for time series prediction based on signal decomposition and two attention mechanisms. This model combines the results of three networks—LSTM, LSTM-self-attention, and LSTM-temporal attention—all trained using subsequences obtained from EMD. Additionally, since previous research on EMD has been limited to single series analysis, this paper includes multiple series by employing two data pre-processing methods: ‘overall normalization’ and ‘respective normalization’. Experimental results on various datasets demonstrate that, compared to models without attention mechanisms, temporal attention improves the prediction accuracy of short- and medium-term decomposed series by 15–28% and 45–72%, respectively; furthermore, it reduces the overall prediction error by 10–17%. The integrated model with temporal attention achieves a reduction in error of approximately 0.3%, primarily when compared to models utilizing only general forms of attention mechanisms. Moreover, after normalizing multiple series separately, the predictive performance is equivalent to that achieved for individual series. Full article
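The decomposition step can be reproduced with the PyEMD package (distributed as EMD-signal, assumed installed here); the sketch below splits a placeholder series into subsequences and forecasts each one separately. A trivial linear extrapolation stands in for the LSTM variants the paper trains per subsequence, purely to keep the example self-contained.

```python
import numpy as np
from PyEMD import EMD   # provided by the "EMD-signal" package (assumed installed)

# Placeholder series: trend + seasonality + noise.
rng = np.random.default_rng(3)
t = np.arange(500)
series = 0.01 * t + np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.2, t.size)

# Step 1: EMD splits the series into intrinsic mode functions (IMFs) plus residue.
imfs = EMD()(series)
print("number of subsequences:", len(imfs))

# Step 2: forecast each subsequence separately and sum the forecasts.
# A one-step linear extrapolation is used here in place of the per-subsequence
# LSTM models, purely to keep the sketch self-contained.
def naive_one_step(imf):
    return imf[-1] + (imf[-1] - imf[-2])   # linear extrapolation of the last step

forecast = sum(naive_one_step(imf) for imf in imfs)
print("one-step-ahead forecast:", round(float(forecast), 3))
```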

28 pages, 4736 KiB  
Article
Polarizing Topics on Twitter in the 2022 United States Elections
by Josip Katalinić, Ivan Dunđer and Sanja Seljan
Information 2023, 14(11), 609; https://doi.org/10.3390/info14110609 - 10 Nov 2023
Viewed by 1659
Abstract
Politically polarizing issues are a growing concern around the world, creating divisions along ideological lines, which was also confirmed during the 2022 United States midterm elections. The purpose of this study was to explore the relationship between the results of the 2022 U.S. midterm elections and the topics that were covered during the campaign. A dataset consisting of 52,688 tweets was created by collecting tweets posted by senators, representatives, and governors who participated in the elections during the month before the elections began. Using unsupervised machine learning, topic modeling was built on the collected data and visualized to represent topics. Furthermore, supervised machine learning was used to classify tweets to the corresponding political party, whereas sentiment analysis was carried out in order to detect polarity and subjectivity. Tweets from participating politicians, U.S. states, and involved parties were found to correlate with polarizing topics. This study thereby explored the relationship between the topics that were creating a divide between Democrats and Republicans during their campaign and the 2022 U.S. midterm election outcomes. The research found that polarizing topics permeated the Twitter (today known as X) campaign, and that all elections were classified as highly subjective. In the Senate and House elections, the classification analysis showed significant misclassification rates of 21.37% and 24.15%, respectively, indicating that Republican tweets often aligned with traditional Democratic narratives. Full article
(This article belongs to the Special Issue 2nd Edition of Information Retrieval and Social Media Mining)
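The polarity and subjectivity scoring used in the study can be reproduced at small scale with TextBlob, shown below on invented example tweets; TextBlob is named here only as one common implementation of such scoring, not necessarily the authors' tool.

```python
from textblob import TextBlob   # pip install textblob

tweets = [
    "Proud to vote for policies that protect working families!",
    "This administration has completely failed on border security.",
    "Polls open at 7 AM on Tuesday in all counties.",
]

for text in tweets:
    sentiment = TextBlob(text).sentiment
    # polarity in [-1, 1] (negative to positive), subjectivity in [0, 1] (factual to opinionated)
    print(f"polarity={sentiment.polarity:+.2f}  subjectivity={sentiment.subjectivity:.2f}  | {text}")
```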

20 pages, 2842 KiB  
Article
Context-Aware Personalization: A Systems Engineering Framework
by Olurotimi Oguntola and Steven Simske
Information 2023, 14(11), 608; https://doi.org/10.3390/info14110608 - 10 Nov 2023
Viewed by 1478
Abstract
This study proposes a framework for a systems engineering-based approach to context-aware personalization, which is applied to e-commerce through the understanding and modeling of user behavior from their interactions with sales channels and media. The framework is practical and built on systems engineering principles. It combines three conceptual components to produce signals that provide content relevant to the users based on their behavior, thus enhancing their experience. These components are the ‘recognition and knowledge’ of the users and their behavior (persona); the awareness of users’ current contexts; and the comprehension of their situation and projection of their future status (intent prediction). The persona generator is implemented by leveraging an unsupervised machine learning algorithm to assign users into cohorts and learn cohort behavior while preserving their privacy in an ethical framework. The component of the users’ current context is fulfilled as a microservice that adopts novel e-commerce data interpretations. The best result of 97.3% accuracy for the intent prediction component was obtained by tokenizing categorical features with a pre-trained BERT (bidirectional encoder representations from transformers) model and passing these, as the contextual embedding input, to an LSTM (long short-term memory) neural network. Paired cohort-directed prescriptive action is generated from learned behavior as a recommended alternative to users’ shopping steps. The practical implementation of this e-commerce personalization framework is demonstrated in this study through the empirical evaluation of experimental results. Full article

24 pages, 1251 KiB  
Article
Interoperability-Enhanced Knowledge Management in Law Enforcement: An Integrated Data-Driven Forensic Ontological Approach to Crime Scene Analysis
by Alexandros Z. Spyropoulos, Charalampos Bratsas, Georgios C. Makris, Emmanouel Garoufallou and Vassilis Tsiantos
Information 2023, 14(11), 607; https://doi.org/10.3390/info14110607 - 09 Nov 2023
Cited by 3 | Viewed by 2776
Abstract
Nowadays, more and more sciences are involved in strengthening the work of law enforcement authorities. Scientific documentation is evidence highly respected by the courts in administering justice. As the involvement of science in solving crimes increases, so does human subjectivism, which often leads to wrong conclusions and, consequently, to bad judgments. From the above arises the need to create a single information system that will be fed with scientific evidence such as fingerprints, genetic material, digital data, forensic photographs, information from the forensic report, etc., and also investigative data such as information from witnesses’ statements, the statement of the accused, etc., from various crime scenes, and that will be able, through a formal reasoning procedure, to point to possible perpetrators. The present study examines a proposal for developing an information system that can be a basis for creating a forensic ontology—a semantic representation of the crime scene—through description logic in the OWL semantic language. The Interoperability-Enhanced information system to be developed could assist law enforcement authorities in solving crimes. At the same time, it would promote closer cooperation between academia, civil society, and state institutions by fostering a culture of engagement for the common good. Full article
(This article belongs to the Special Issue Semantic Interoperability and Knowledge Building)

22 pages, 496 KiB  
Article
Small and Medium-Sized Enterprises in the Digital Age: Understanding Characteristics and Essential Demands
by Barbara Bradač Hojnik and Ivona Huđek
Information 2023, 14(11), 606; https://doi.org/10.3390/info14110606 - 09 Nov 2023
Cited by 1 | Viewed by 4235
Abstract
The article explores the implementation of digital technology in small and medium-sized Slovenian enterprises (SMEs), with a focus on understanding existing trends, obstacles, and necessary support measures during their digitalization progress. The surveyed companies mainly rely on conventional technologies like websites and teamwork platforms, emphasizing the significance of strong online communication and presence in the modern business world. The adoption of advanced technologies such as blockchain is limited due to the perceived complexity and relevance to specific sectors. This study uses variance analysis to identify potential differences in the digitalization challenges faced by companies of different sizes. The results indicate that small companies face different financial constraints and require more differentiated support mechanisms than their larger counterparts, with a particular focus on improving digital competencies among employees. Despite obtaining enhancements such as elevated operational standards and uninterrupted telecommuting via digitalization, companies still face challenges of differentiation and organizational culture change. The study emphasizes the importance of recognizing and addressing the different challenges and support needs of different-sized companies to promote comprehensive progress in digital transformation. Our findings provide important insights for policymakers, industry stakeholders, and SMEs to formulate comprehensive strategies and policies that effectively address the diverse needs and challenges of the digital transformation landscape. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)

23 pages, 1629 KiB  
Article
In-Vehicle Network Intrusion Detection System Using Convolutional Neural Network and Multi-Scale Histograms
by Gianmarco Baldini
Information 2023, 14(11), 605; https://doi.org/10.3390/info14110605 - 08 Nov 2023
Viewed by 1676
Abstract
Cybersecurity in modern vehicles has received increased attention from the research community in recent years. Intrusion Detection Systems (IDSs) are one of the techniques used to detect and mitigate cybersecurity risks. This paper proposes a novel implementation of an IDS for in-vehicle security networks based on the concept of multi-scale histograms, which capture the frequencies of message identifiers in CAN-bus in-vehicle networks. In comparison to existing approaches in the literature based on a single histogram, the proposed approach widens the informative context used by the IDS for traffic analysis by taking into consideration sequences of two and three CAN-bus messages to create multi-scale dictionaries. The histograms are created from windows of in-vehicle network traffic. A preliminary multi-scale histogram model is created using only legitimate traffic. Against this model, the IDS performs traffic analysis to create a feature space based on the correlation of the histograms. Then, the created feature space is given as input to a Convolutional Neural Network (CNN) for the identification of the windows of traffic where the attack is present. The proposed approach has been evaluated on two different public data sets, achieving very competitive performance in comparison to the literature. Full article
(This article belongs to the Special Issue Wireless IoT Network Protocols II)
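The multi-scale dictionary idea, counting single CAN identifiers together with sequences of two and three consecutive identifiers inside a traffic window, can be sketched as follows. The identifiers and the window are invented, and the correlation and CNN stages of the paper are not reproduced.

```python
from collections import Counter

def multiscale_histograms(can_ids, scales=(1, 2, 3)):
    """Build one histogram per scale: scale 1 counts single CAN identifiers,
    scales 2 and 3 count overlapping sequences (n-grams) of consecutive IDs."""
    histograms = {}
    for n in scales:
        grams = [tuple(can_ids[i:i + n]) for i in range(len(can_ids) - n + 1)]
        histograms[n] = Counter(grams)
    return histograms

# Invented window of CAN-bus message identifiers (hex IDs as strings).
window = ["0x1A0", "0x2B4", "0x1A0", "0x3C1", "0x2B4", "0x1A0", "0x3C1"]
for scale, hist in multiscale_histograms(window).items():
    print(f"scale {scale}: {dict(hist)}")
```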

21 pages, 4397 KiB  
Article
POSS-CNN: An Automatically Generated Convolutional Neural Network with Precision and Operation Separable Structure Aiming at Target Recognition and Detection
by Jia Hou, Jingyu Zhang, Qi Chen, Siwei Xiang, Yishuo Meng, Jianfei Wang, Cimang Lu and Chen Yang
Information 2023, 14(11), 604; https://doi.org/10.3390/info14110604 - 07 Nov 2023
Viewed by 1195
Abstract
Artificial intelligence is changing and influencing our world. As one of the main algorithms in the field of artificial intelligence, convolutional neural networks (CNNs) have developed rapidly in recent years. Especially after the emergence of NASNet, CNNs have gradually pushed the idea of AutoML to the public’s attention, and large numbers of new structures designed by automatic searches are appearing. These networks are usually based on reinforcement learning and evolutionary learning algorithms. However, the blocks of these networks are sometimes complex, and there is no small model for simpler tasks. Therefore, this paper proposes POSS-CNN, aimed at target recognition and detection, which employs a multi-branch CNN structure with PSNC and a method for the automatic parallel selection of hyperparameters based on this multi-branch structure. Moreover, POSS-CNN can be broken up. By choosing a single branch or the combination of two branches as the “benchmark”, as well as the overall POSS-CNN, we can obtain seven models with different precision and operations. The test accuracy of POSS-CNN for a recognition task tested on the CIFAR10 dataset reaches 86.4%, which is equivalent to AlexNet and VggNet, but the operations and parameters of the whole model are only 45.9% and 45.8% of AlexNet, and 29.5% and 29.4% of VggNet. The mAP of POSS-CNN for a detection task tested on the LSVH dataset is 45.8, inferior to the 62.3 of YOLOv3. However, compared with YOLOv3, the operations and parameters of the model are reduced by 57.4% and 15.6%, respectively. After being accelerated by WRA, POSS-CNN for a detection task tested on the LSVH dataset can achieve 27 fps, and the energy efficiency is 0.42 J/f, which is 5 times and 96.6 times better than a GPU 2080Ti in performance and energy efficiency, respectively. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)

19 pages, 693 KiB  
Article
Enhancing Privacy Preservation in Verifiable Computation through Random Permutation Masking to Prevent Leakage
by Yang Yang and Guanghua Song
Information 2023, 14(11), 603; https://doi.org/10.3390/info14110603 - 06 Nov 2023
Viewed by 1142
Abstract
Outsourcing computation has become increasingly popular due to its cost-effectiveness, enabling users with limited resources to conduct large-scale computations on potentially untrusted cloud platforms. In order to safeguard privacy, verifiable computing (VC) has emerged as a secure approach, ensuring that the cloud cannot discern users’ input and output. Random permutation masking (RPM) is a widely adopted technique in VC protocols to provide robust privacy protection. This work presents a precise definition of the privacy-preserving property of RPM by employing indistinguishability experiments. Moreover, an innovative attack exploiting the greatest common divisor and the least common multiple of each row and column in the encrypted matrices is introduced against RPM. Unlike previous density-based attacks, this novel approach offers a significant advantage by allowing the reconstruction of matrix values from the ciphertext based on RPM. A comprehensive demonstration was provided to illustrate the failure of protocols based on RPM in maintaining the privacy-preserving property under this proposed attack. Furthermore, an extensive series of experiments is conducted to thoroughly validate the effectiveness and advantages of the attack against RPM. The findings of this research highlight vulnerabilities in RPM-based VC protocols and underline the pressing need for further enhancements and alternative privacy-preserving mechanisms in outsourcing computation. Full article
(This article belongs to the Section Information Security and Privacy)
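The key observation behind the attack is that reordering rows and columns does not change the greatest common divisor or least common multiple computed over each row or column, so these values survive the masking and leak structure. The toy integer matrix below illustrates the invariance; the paper's full value-reconstruction procedure is not reproduced, and real RPM protocols may include additional masking steps beyond plain permutation.

```python
from math import gcd, lcm   # math.lcm requires Python 3.9+
from functools import reduce
import random

def row_col_fingerprints(matrix):
    """GCD and LCM of every row and column; the multisets of these values are
    unchanged when rows and columns are merely reordered."""
    cols = list(zip(*matrix))
    return {
        "row_gcd": sorted(reduce(gcd, r) for r in matrix),
        "row_lcm": sorted(reduce(lcm, r) for r in matrix),
        "col_gcd": sorted(reduce(gcd, c) for c in cols),
        "col_lcm": sorted(reduce(lcm, c) for c in cols),
    }

def permute_rows_and_columns(matrix, seed=0):
    """Toy stand-in for random permutation masking: reorder rows and columns."""
    rng = random.Random(seed)
    rows, cols = list(range(len(matrix))), list(range(len(matrix[0])))
    rng.shuffle(rows)
    rng.shuffle(cols)
    return [[matrix[i][j] for j in cols] for i in rows]

plain = [[6, 10, 15], [8, 12, 20], [9, 27, 81]]
masked = permute_rows_and_columns(plain)

# Identical fingerprints: the masking hides positions but leaks these values.
print(row_col_fingerprints(plain) == row_col_fingerprints(masked))   # True
```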

15 pages, 3739 KiB  
Article
Trend Analysis of Large Language Models through a Developer Community: A Focus on Stack Overflow
by Jungha Son and Boyoung Kim
Information 2023, 14(11), 602; https://doi.org/10.3390/info14110602 - 06 Nov 2023
Viewed by 2358
Abstract
In the rapidly advancing field of large language model (LLM) research, platforms like Stack Overflow offer invaluable insights into the developer community’s perceptions, challenges, and interactions. This research aims to analyze LLM research and development trends within the professional community. Through the rigorous analysis of Stack Overflow, employing a comprehensive dataset spanning several years, the study identifies the prevailing technologies and frameworks underlining the dominance of models and platforms such as Transformer and Hugging Face. Furthermore, a thematic exploration using Latent Dirichlet Allocation unravels a spectrum of LLM discussion topics. As a result of the analysis, twenty keywords were derived, and a total of five key dimensions, “OpenAI Ecosystem and Challenges”, “LLM Training with Frameworks”, “APIs, File Handling and App Development”, “Programming Constructs and LLM Integration”, and “Data Processing and LLM Functionalities”, were identified through intertopic distance mapping. This research underscores the notable prevalence of specific Tags and technologies within the LLM discourse, particularly highlighting the influential roles of Transformer models and frameworks like Hugging Face. This dominance not only reflects the preferences and inclinations of the developer community but also illuminates the primary tools and technologies they leverage in the continually evolving field of LLMs. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)
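The topic-extraction step, Latent Dirichlet Allocation over post text, can be reproduced in miniature with scikit-learn as below; the four snippets stand in for Stack Overflow posts and the topic count is arbitrary.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "How do I fine-tune a Hugging Face transformer model on my own dataset?",
    "OpenAI API returns a rate limit error when streaming completions.",
    "Tokenizer padding and truncation settings for transformer training.",
    "Best way to handle JSON file uploads in a chatbot app using the OpenAI API.",
]

# Bag-of-words representation, then a 2-topic LDA model.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {topic_idx}: {', '.join(top_terms)}")
```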

15 pages, 2085 KiB  
Article
EACH-COA: An Energy-Aware Cluster Head Selection for the Internet of Things Using the Coati Optimization Algorithm
by Ramasubbareddy Somula, Yongyun Cho and Bhabendu Kumar Mohanta
Information 2023, 14(11), 601; https://doi.org/10.3390/info14110601 - 05 Nov 2023
Cited by 2 | Viewed by 1599
Abstract
In recent years, the Internet of Things (IoT) has transformed human life by improving quality of life and revolutionizing all business sectors. The sensor nodes in IoT are interconnected to ensure data transfer to the sink node over the network. Owing to limited battery power, the energy in the nodes is conserved with the help of the clustering technique in IoT. Cluster head (CH) selection is essential for extending network lifetime and throughput in clustering. In recent years, many existing optimization algorithms have been adapted to select the optimal CH to improve energy usage in network nodes. However, improper CH selection approaches take longer to converge and drain sensor batteries quickly. To solve this problem, this paper proposes an energy-aware cluster head selection method based on the coati optimization algorithm (EACH-COA) to improve network longevity and throughput by evaluating the fitness function over the residual energy (RER) and distance constraints. The proposed EACH-COA simulation was conducted in MATLAB 2019a. The potency of the EACH-COA approach was compared with those of the energy-efficient rabbit optimization algorithm (EECHS-ARO), improved sparrow optimization technique (EECHS-ISSADE), and hybrid sea lion algorithm (PDU-SLno). The proposed EACH-COA improved the network lifetime by 8–15% and throughput by 5–10%. Full article
(This article belongs to the Special Issue Recent Advances in IoT and Cyber/Physical System)
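The abstract states that candidate cluster heads are scored by a fitness function over residual energy and distance; one common weighted formulation is sketched below as a rough illustration. The weights, normalisation, and exact form are assumptions, not the paper's equation.

```python
import numpy as np

def cluster_head_fitness(residual_energy, dist_to_sink, mean_dist_to_members,
                         w_energy=0.5, w_sink=0.3, w_members=0.2):
    """Higher residual energy and shorter distances give a better (higher) score.
    All inputs are normalised to [0, 1] before weighting."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    return (w_energy * norm(residual_energy)
            - w_sink * norm(dist_to_sink)
            - w_members * norm(mean_dist_to_members))

# Toy candidate nodes: residual energy (J), distance to sink (m), mean member distance (m).
energy = [0.48, 0.35, 0.50, 0.20]
to_sink = [80, 40, 120, 60]
to_members = [15, 25, 10, 30]

scores = cluster_head_fitness(energy, to_sink, to_members)
print("selected cluster head: node", int(np.argmax(scores)))
```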

26 pages, 7467 KiB  
Article
Exploring Key Issues in Cybersecurity Data Breaches: Analyzing Data Breach Litigation with ML-Based Text Analytics
by Dominik Molitor, Wullianallur Raghupathi, Aditya Saharia and Viju Raghupathi
Information 2023, 14(11), 600; https://doi.org/10.3390/info14110600 - 05 Nov 2023
Viewed by 3416
Abstract
While data breaches are a frequent and universal phenomenon, the characteristics and dimensions of data breaches are unexplored. In this novel exploratory research, we apply machine learning (ML) and text analytics to a comprehensive collection of data breach litigation cases to extract insights from the narratives contained within these cases. Our analysis shows stakeholders (e.g., litigants) are concerned about major topics related to identity theft, hacker, negligence, FCRA (Fair Credit Reporting Act), cybersecurity, insurance, phone device, TCPA (Telephone Consumer Protection Act), credit card, merchant, privacy, and others. The topics fall into four major clusters: “phone scams”, “cybersecurity”, “identity theft”, and “business data breach”. By utilizing ML, text analytics, and descriptive data visualizations, our study serves as a foundational piece for comprehensively analyzing large textual datasets. The findings hold significant implications for both researchers and practitioners in cybersecurity, especially those grappling with the challenges of data breaches. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Information Systems)
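As a rough illustration of the kind of ML-based text analytics the study applies, the sketch below clusters a handful of toy case narratives with TF-IDF features and k-means. It is not the authors' pipeline; the example sentences, the vectorizer settings, and the choice of four clusters (mirroring the four reported themes) are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for case narratives; the study uses a large corpus of litigation documents.
cases = [
    "plaintiff alleges identity theft after the retailer database was hacked",
    "class action under the TCPA over automated phone calls and phone data misuse",
    "negligence claim for failure to safeguard credit card numbers of merchant customers",
    "FCRA claim after inaccurate credit reporting following a cybersecurity incident",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(cases)

# Four clusters, mirroring the four high-level themes reported in the paper.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Show the most characteristic terms per cluster.
terms = vectorizer.get_feature_names_out()
for c in range(4):
    top = km.cluster_centers_[c].argsort()[::-1][:3]
    print(f"cluster {c}:", [terms[i] for i in top])
```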
23 pages, 19503 KiB  
Article
Combining Software-Defined Radio Learning Modules and Neural Networks for Teaching Communication Systems Courses
by Luis A. Camuñas-Mesa and José M. de la Rosa
Information 2023, 14(11), 599; https://doi.org/10.3390/info14110599 - 04 Nov 2023
Viewed by 1535
Abstract
The Cognitive Radio (CR) paradigm proposes continuous sensing of the electromagnetic spectrum in order to dynamically modify transmission parameters, making intelligent use of the environment with the help of techniques such as neural networks. This paradigm is becoming especially relevant because of the spectrum congestion caused by the growing number of IoT (Internet of Things) devices. Nowadays, many Software-Defined Radio (SDR) platforms provide tools to implement CR systems in a teaching laboratory environment. Within the framework of a ‘Communication Systems’ course, this paper presents a methodology for learning the fundamentals of radio transmitters and receivers in combination with Convolutional Neural Networks (CNNs). Full article
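A minimal sketch of the kind of CNN exercise such a course module might pair with SDR captures is shown below: a small 1-D network that classifies windows of raw I/Q samples into modulation classes. The architecture, layer sizes, and class count are illustrative assumptions, not the authors' teaching material.

```python
import torch
import torch.nn as nn

# A small 1-D CNN over windows of I/Q samples (2 input channels) that predicts
# a modulation class, the kind of model students could feed with SDR captures.
class ModulationCNN(nn.Module):
    def __init__(self, n_classes=4, window=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (window // 16), n_classes)

    def forward(self, x):            # x: (batch, 2, window)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = ModulationCNN()
iq = torch.randn(8, 2, 1024)         # a batch of captured I/Q windows (random here)
print(model(iq).shape)               # -> torch.Size([8, 4])
```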
35 pages, 1713 KiB  
Review
Deep Learning for Time Series Forecasting: Advances and Open Problems
by Angelo Casolaro, Vincenzo Capone, Gennaro Iannuzzo and Francesco Camastra
Information 2023, 14(11), 598; https://doi.org/10.3390/info14110598 - 04 Nov 2023
Cited by 4 | Viewed by 12236
Abstract
A time series is a sequence of time-ordered data, generally used to describe how a phenomenon evolves over time. Time series forecasting, i.e., estimating future values of a time series, enables the implementation of decision-making strategies. Deep learning, currently the leading field of machine learning, can cope with complex and high-dimensional time series that other machine learning techniques usually cannot handle. The aim of this work is to review state-of-the-art deep learning architectures for time series forecasting, to underline recent advances and open problems, and to pay attention to benchmark data sets. Moreover, the work draws a clear distinction between architectures suitable for short-term and for long-term forecasting. Compared with the existing literature, its major advantage is the coverage of the most recent architectures for time series forecasting, such as Graph Neural Networks, Deep Gaussian Processes, Generative Adversarial Networks, Diffusion Models, and Transformers. Full article
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting)
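As a concrete reference point for the window-based framing shared by most deep forecasting models, the sketch below turns a univariate series into supervised (lookback, horizon) pairs and feeds them to a small LSTM, which stands in here for any of the surveyed architectures. The toy sine series and all sizes are assumptions.

```python
import torch
import torch.nn as nn

# Map the last `lookback` observations to the next `horizon` values.
def make_windows(series, lookback=24, horizon=1):
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback:t + lookback + horizon])
    X = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (N, lookback, 1)
    return X, torch.tensor(y, dtype=torch.float32)            # (N, horizon)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32, horizon=1):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                  # x: (batch, lookback, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # forecast from the last hidden state

series = torch.sin(torch.linspace(0, 20, 400)).tolist()
X, y = make_windows(series)
model = LSTMForecaster()
print(float(nn.MSELoss()(model(X), y)))    # untrained loss on the toy series
```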
21 pages, 2278 KiB  
Article
Multi-Agent Reinforcement Learning for Online Food Delivery with Location Privacy Preservation
by Suleiman Abahussein, Dayong Ye, Congcong Zhu, Zishuo Cheng, Umer Siddique and Sheng Shen
Information 2023, 14(11), 597; https://doi.org/10.3390/info14110597 - 03 Nov 2023
Cited by 1 | Viewed by 1410
Abstract
Online food delivery is today considered an essential service that attracts significant attention worldwide, and many companies and individuals are involved in this field as it offers good income and numerous jobs to the community. In this research, we consider the problem of online food delivery services and how to increase the number of orders received by couriers and thereby increase their income. Multi-agent reinforcement learning (MARL) is employed to guide couriers to areas with high demand for food delivery requests. A map of the city is divided into small grid cells, each representing a small area of the city with a different demand for online food delivery orders. The MARL agent learns which grid cell has the highest demand and selects it, so couriers can receive more food delivery orders and thereby increase their long-term income. While increasing the number of received orders is important, protecting customer location is also essential. Therefore, the Protect User Location Method (PULM) is proposed in this research to protect customer location information. PULM injects differential privacy (DP) Laplace noise based on two parameters: the city area size and the customer's frequency of online food delivery orders. We use two datasets, from Shenzhen, China, and Iowa, USA, to demonstrate the results of our experiments. The results show an increase in the number of received orders in the Shenzhen and Iowa City datasets. We also show the similarity and data utility of courier trajectories after applying our obfuscation (PULM) method. Full article
(This article belongs to the Section Artificial Intelligence)
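The sketch below illustrates Laplace-noise location obfuscation in the spirit of the proposed PULM, scaling the noise with the two parameters named in the abstract (city area size and customer order frequency). The exact scaling, the epsilon schedule, and the degree conversion are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def obfuscate_location(lat, lon, area_size_km, order_frequency,
                       epsilon_base=1.0, rng=None):
    """Perturb a customer location with Laplace noise. In this sketch, larger
    city areas and more frequent customers yield a smaller effective epsilon
    (i.e., stronger noise); the coefficients below are illustrative only."""
    rng = rng or np.random.default_rng()
    epsilon = epsilon_base / (1.0 + 0.1 * area_size_km + 0.05 * order_frequency)
    scale_deg = (1.0 / epsilon) * 0.001   # crude conversion of noise scale to degrees
    return (lat + rng.laplace(0.0, scale_deg),
            lon + rng.laplace(0.0, scale_deg))

# Example: a drop-off point in a 4 km-wide area for a customer with 12 past orders.
print(obfuscate_location(22.543, 114.057, area_size_km=4.0, order_frequency=12))
```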
20 pages, 2381 KiB  
Article
Temporal Convolutional Networks and BERT-Based Multi-Label Emotion Analysis for Financial Forecasting
by Charalampos M. Liapis and Sotiris Kotsiantis
Information 2023, 14(11), 596; https://doi.org/10.3390/info14110596 - 03 Nov 2023
Cited by 1 | Viewed by 1453
Abstract
The use of deep learning in conjunction with models that extract emotion-related information from texts to predict financial time series rests on the assumption that what is said about a stock is correlated with the way that stock fluctuates. Accordingly, this work proposes a multivariate forecasting methodology that combines temporal convolutional networks with a BERT-based multi-label emotion classification procedure and correlation-based feature selection. Results are presented from an extensive set of experiments covering predictions over three different time frames and various multivariate ensemble schemes that capture 28 different types of emotion-related information. The proposed methodology consistently leads in aggregate performance across six different metrics, outperforming all compared schemes, including a multitude of individual and ensemble methods, in terms of both aggregate average scores and Friedman rankings. Moreover, the results strongly indicate that the use of emotion-related features has a beneficial effect on the derived forecasts. Full article
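For orientation, the sketch below shows a minimal temporal convolutional forecaster over a multivariate input of one price series plus 28 emotion-score series, as described in the abstract. It is an illustrative stand-in, not the authors' architecture; the channel sizes, dilations, and the non-causal padding shortcut are assumptions.

```python
import torch
import torch.nn as nn

# Stacked dilated 1-D convolutions over (price + 28 emotion features) per time step.
# A full TCN would use causal padding and residual blocks; this sketch only keeps
# the growing receptive field that makes the architecture suited to time series.
class TinyTCN(nn.Module):
    def __init__(self, n_features=29, hidden=32, horizon=1):
        super().__init__()
        layers, in_ch = [], n_features
        for dilation in (1, 2, 4):              # receptive field grows exponentially
            layers += [nn.Conv1d(in_ch, hidden, kernel_size=3,
                                 padding=dilation, dilation=dilation),
                       nn.ReLU()]
            in_ch = hidden
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                        # x: (batch, n_features, lookback)
        h = self.tcn(x)
        return self.head(h[:, :, -1])            # predict from the last time step

x = torch.randn(16, 29, 30)   # 1 price series + 28 emotion scores, 30-day lookback
print(TinyTCN()(x).shape)     # -> torch.Size([16, 1])
```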
23 pages, 11633 KiB  
Article
Enhancing Walking Accessibility in Urban Transportation: A Comprehensive Analysis of Influencing Factors and Mechanisms
by Yong Liu, Xueqi Ding and Yanjie Ji
Information 2023, 14(11), 595; https://doi.org/10.3390/info14110595 - 02 Nov 2023
Viewed by 2077
Abstract
The rise in “urban diseases” such as population density, traffic congestion, and environmental pollution has renewed attention to urban livability. Walkability, a critical measure of pedestrian friendliness, has gained prominence in urban and transportation planning. This research provides a comprehensive analysis of walking accessibility, examining both subjective and objective aspects, and aims to identify the influencing factors and the underlying mechanisms driving walkability within a specific area. Through a questionnaire survey, residents' subjective perceptions were gathered concerning factors such as traffic operations, walking facilities, and the living environment. Structural equation modeling of the collected data revealed that travel experience has the largest impact on perceived accessibility, followed by facility condition, traffic condition, and safety perception. In the objective analysis, various types of POI data served as explanatory variables, the study area was divided into grids using ArcGIS, and the Walk Score® was used as the dependent variable. Comparisons of OLS, GWR, and MGWR demonstrated that MGWR yielded the most accurate fit. Mixed land use, shopping, hotels, residential, government, financial, and medical public services exhibited positive correlations with local walkability, while corporate enterprises and street greening showed negative correlations. These findings were attributed to the level of development, regional functions, population distribution, and the deployment of supporting facilities, which collectively influence the walking accessibility of the area. In conclusion, this research offers insights into enhancing walkability, with implications for urban planning and management, thereby enriching residents' walking experience and promoting sustainable transportation. Finally, the limitations of the study are discussed. Full article
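As a simplified stand-in for the objective analysis, the sketch below regresses a toy grid-level Walk Score on POI counts with ordinary least squares; the spatially varying GWR/MGWR models compared in the paper would instead be fitted with a dedicated package such as Python's mgwr. The synthetic data, category names, and coefficients are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic grid cells standing in for the paper's ArcGIS grids: POI counts per
# category as explanatory variables and Walk Score as the response.
rng = np.random.default_rng(1)
n_cells = 200
df = pd.DataFrame({
    "shopping":    rng.poisson(5, n_cells),
    "residential": rng.poisson(8, n_cells),
    "greening":    rng.uniform(0, 1, n_cells),
})
# Toy response constructed so that greening is negatively associated, echoing the findings.
df["walk_score"] = (60 + 2.0 * df["shopping"] + 1.2 * df["residential"]
                    - 8.0 * df["greening"] + rng.normal(0, 5, n_cells))

X = sm.add_constant(df[["shopping", "residential", "greening"]])
model = sm.OLS(df["walk_score"], X).fit()
print(model.params.round(2))   # the sign of each coefficient gives the direction of association
```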