Information, Volume 14, Issue 4 (April 2023) – 52 articles

Cover Story: Mobile Augmented Reality (AR) is a promising technology for educational purposes as it enables interactive, engaging, and spatially independent learning. While the didactic benefits of AR have been well studied lately, and smartphones with AR capabilities are ubiquitously available, concepts and tools for a scalable deployment of AR are still missing. The open-source TrainAR framework combines an interaction concept, a didactic framework and an authoring tool for procedural AR training applications for smartphones. The contribution of this paper is the open-source visual scripting-based authoring tool of TrainAR and its evaluation. TrainAR is an extension of Unity which allows non-programmer domain experts to create their own procedural AR trainings through visual state machines.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
17 pages, 3979 KiB  
Article
EduARdo—Unity Components for Augmented Reality Environments
by Ilias Logothetis, Myron Sfyrakis and Nikolaos Vidakis
Information 2023, 14(4), 252; https://doi.org/10.3390/info14040252 - 21 Apr 2023
Cited by 1 | Viewed by 1966
Abstract
Contemporary software applications have shifted focus from 2D representations to 3D. Augmented and Virtual Reality (AR/VR) are two technologies that have captured the industry’s interest as they show great potential in many areas. This paper proposes a system that allows developers to create applications in AR and VR with a simple visual process, while also enabling all the powerful features provided by the Unity 3D game engine. The current system comprises two tools, one for the interaction and one for the behavioral configuration of 3D objects within the environment. Participants from different disciplines with a software-engineering background were asked to participate in the evaluation of the system. They were asked to complete two tasks using their mobile phones and then answer a usability questionnaire to reflect on their experience using the system. The results (a) showed that the system is easy to use but still lacks some features, (b) provided insights on what educators seek from digital tools to assist them in the classroom, and (c) revealed that educators often request a more whimsical UI, as they want to use the system together with the learners. Full article
(This article belongs to the Special Issue eXtended Reality for Social Inclusion and Educational Purpose)
17 pages, 486 KiB  
Article
Exploring Neural Dynamics in Source Code Processing Domain
by Martina Saletta and Claudio Ferretti
Information 2023, 14(4), 251; https://doi.org/10.3390/info14040251 - 21 Apr 2023
Viewed by 1130
Abstract
Deep neural networks have proven to be able to learn rich internal representations, including for features that can also be used for different purposes than those the networks are originally developed for. In this paper, we are interested in exploring such ability and, to this aim, we propose a novel approach for investigating the internal behavior of networks trained for source code processing tasks. Using a simple autoencoder trained in the reconstruction of vectors representing programs (i.e., program embeddings), we first analyze the performance of the internal neurons in classifying programs according to different labeling policies inspired by real programming issues, showing that some neurons can actually detect different program properties. We then study the dynamics of the network from an information-theoretic standpoint, namely by considering the neurons as signaling systems and by computing the corresponding entropy. Further, we define a way to distinguish neurons according to their behavior, to consider them as formally associated with different abstract concepts, and through the application of nonparametric statistical tests to pairs of neurons, we look for neurons with unique (or almost unique) associated concepts, showing that the entropy value of a neuron is related to the rareness of its concept. Finally, we discuss how the proposed approaches for ranking the neurons can be generalized to different domains and applied to more sophisticated and specialized networks so as to help the research in the growing field of explainable artificial intelligence. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence)
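As a rough illustration of the entropy-based neuron analysis described in the abstract, the sketch below (not the authors' code; the activation data, threshold, and layer size are invented) binarizes each hidden neuron's activations over a set of program embeddings and ranks neurons by the entropy of the resulting signal:

```python
import numpy as np

def neuron_entropies(activations: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Entropy of each neuron's binarized activation over a dataset.

    activations: array of shape (n_programs, n_neurons), e.g. the hidden
    layer of an autoencoder evaluated on program embeddings.
    """
    fired = activations > threshold                # treat each neuron as a binary signal
    p = fired.mean(axis=0)                         # firing probability per neuron
    p = np.clip(p, 1e-12, 1 - 1e-12)               # avoid log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Toy usage: 1000 "programs", 64 hidden neurons with random activations.
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 64))
ent = neuron_entropies(acts)
ranking = np.argsort(ent)                          # low-entropy neurons ~ rare, specific "concepts"
print("lowest-entropy neurons:", ranking[:5], "entropies:", ent[ranking[:5]])
```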
32 pages, 1269 KiB  
Article
Evaluation of Automatic Legal Text Summarization Techniques for Greek Case Law
by Marios Koniaris, Dimitris Galanis, Eugenia Giannini and Panayiotis Tsanakas
Information 2023, 14(4), 250; https://doi.org/10.3390/info14040250 - 21 Apr 2023
Cited by 3 | Viewed by 2259
Abstract
The increasing amount of legal information available online is overwhelming for both citizens and legal professionals, making it difficult and time-consuming to find relevant information and keep up with the latest legal developments. Automatic text summarization techniques can be highly beneficial as they save time, reduce costs, and lessen the cognitive load of legal professionals. However, applying these techniques to legal documents poses several challenges due to the complexity of legal documents and the lack of needed resources, especially in linguistically under-resourced languages, such as the Greek language. In this paper, we address automatic summarization of Greek legal documents. A major challenge in this area is the lack of suitable datasets in the Greek language. In response, we developed a new metadata-rich dataset consisting of selected judgments from the Supreme Civil and Criminal Court of Greece, alongside their reference summaries and category tags, tailored for the purpose of automated legal document summarization. We also adopted several state-of-the-art methods for abstractive and extractive summarization and conducted a comprehensive evaluation of the methods using both human and automatic metrics. Our results: (i) revealed that, while extractive methods exhibit average performance, abstractive methods generate moderately fluent and coherent text, but they tend to receive low scores in relevance and consistency metrics; (ii) indicated the need for metrics that better capture a legal document summary’s coherence, relevance, and consistency; (iii) demonstrated that fine-tuning BERT models on a specific upstream task can significantly improve the model’s performance. Full article
(This article belongs to the Special Issue Novel Methods and Applications in Natural Language Processing)
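For readers unfamiliar with extractive summarization, a minimal, language-agnostic baseline of the kind such evaluations include can be sketched as follows (an illustration only, not one of the systems evaluated in the paper; the sentences are invented):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extractive_summary(sentences, k=3):
    """Pick the k sentences most similar to the document as a whole."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(sentences)                  # one row per sentence
    doc = np.asarray(X.mean(axis=0))                  # crude document centroid
    scores = cosine_similarity(X, doc).ravel()
    top = sorted(np.argsort(scores)[-k:])             # keep original sentence order
    return [sentences[i] for i in top]

sentences = [
    "The court examined the appeal on procedural grounds.",
    "The appellant argued that the lower court misapplied the statute.",
    "Weather on the day of the hearing was unremarkable.",
    "The court held that the appeal was admissible and well founded.",
]
print(extractive_summary(sentences, k=2))
```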
23 pages, 27370 KiB  
Article
Recognizing the Wadi Fluvial Structure and Stream Network in the Qena Bend of the Nile River, Egypt, on Landsat 8-9 OLI Images
by Polina Lemenkova and Olivier Debeir
Information 2023, 14(4), 249; https://doi.org/10.3390/info14040249 - 20 Apr 2023
Cited by 5 | Viewed by 2007
Abstract
With methods for processing remote sensing data becoming widely available, the ability to quantify changes in spatial data and to evaluate the distribution of diverse landforms across target areas in datasets becomes increasingly important. One way to approach this problem is through satellite image processing. In this paper, we primarily focus on the methods of the unsupervised classification of the Landsat OLI/TIRS images covering the region of the Qena governorate in Upper Egypt. The Qena Bend of the Nile River presents a remarkable morphological feature in Upper Egypt, including a dense drainage network of wadi aquifer systems and plateaus largely dissected by numerous valleys of dry rivers. To identify the fluvial structure and stream network of the Wadi Qena region, this study addresses the problem of interpreting the relevant space-borne data using R, with an aim to visualize the land surface structures corresponding to various land cover types. To this effect, high-resolution 2D and 3D topographic and geologic maps were used for the analysis of the geomorphological setting of the Qena region. The information was extracted from the space-borne data for the comparative analysis of the distribution of wadi streams in the Qena Bend area over several years: 2013, 2015, 2016, 2019, 2022, and 2023. Six images were processed using computer vision methods made available by R libraries. The results of the k-means clustering of each scene retrieved from the multi-temporal images covering the Qena Bend of the Nile River were thus compared to visualize changes in landforms caused by the cumulative effects of geomorphological disasters and climate–environmental processes. The proposed method, tied together through the use of R scripts, runs effectively and performs favorably in computer vision tasks aimed at geospatial image processing and the analysis of remote sensing data. Full article
(This article belongs to the Special Issue Computer Vision for Security Applications)
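The paper performs this step with R libraries; purely as an illustration of the unsupervised classification idea (not the authors' scripts), the same k-means step can be sketched in Python on a synthetic multiband raster:

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_scene(bands: np.ndarray, n_clusters: int = 6) -> np.ndarray:
    """Unsupervised land-cover classification of a multiband image.

    bands: array of shape (rows, cols, n_bands), e.g. Landsat OLI reflectance.
    Returns a (rows, cols) array of cluster labels.
    """
    rows, cols, n_bands = bands.shape
    pixels = bands.reshape(-1, n_bands)               # one row per pixel
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(rows, cols)

# Toy scene: 100x100 pixels, 7 synthetic bands.
scene = np.random.default_rng(1).random((100, 100, 7))
land_cover = classify_scene(scene, n_clusters=6)
print(np.unique(land_cover, return_counts=True))
```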
29 pages, 1605 KiB  
Article
Energy-Efficient Parallel Computing: Challenges to Scaling
by Alexey Lastovetsky and Ravi Reddy Manumachu
Information 2023, 14(4), 248; https://doi.org/10.3390/info14040248 - 20 Apr 2023
Cited by 3 | Viewed by 1707
Abstract
The energy consumption of Information and Communications Technology (ICT) presents a new grand technological challenge. The two main approaches to tackle the challenge include the development of energy-efficient hardware and software. The development of energy-efficient software employing application-level energy optimization techniques has become an important category owing to the paradigm shift in the composition of digital platforms from single-core processors to heterogeneous platforms integrating multicore CPUs and graphics processing units (GPUs). In this work, we present an overview of application-level bi-objective optimization methods for energy and performance that address two fundamental challenges, non-linearity and heterogeneity, inherent in modern high-performance computing (HPC) platforms. Applying the methods requires energy profiles of the application’s computational kernels executing on the different compute devices of the HPC platform. Therefore, we summarize the research innovations in the three mainstream component-level energy measurement methods and present their accuracy and performance tradeoffs. Finally, scaling the optimization methods for energy and performance is crucial to achieving energy efficiency objectives and meeting quality-of-service requirements in modern HPC platforms and cloud computing infrastructures. We introduce the building blocks needed to achieve this scaling and conclude with the challenges to scaling. Briefly, two significant challenges are described, namely fast optimization methods and accurate component-level energy runtime measurements, especially for components running on accelerators. Full article
(This article belongs to the Special Issue Advances in High Performance Computing and Scalable Software)
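Bi-objective optimization for energy and performance ultimately reduces to selecting non-dominated configurations; a generic Pareto-front filter (an illustration with invented (time, energy) measurements, not a method from the surveyed works) looks like this:

```python
def pareto_front(points):
    """Return the non-dominated (time, energy) pairs; lower is better in both."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

# Hypothetical (execution time [s], energy [J]) measurements of one kernel
# under different workload distributions between CPU and GPU.
configs = [(12.0, 900.0), (10.5, 1100.0), (9.8, 1400.0), (12.5, 1200.0), (13.0, 880.0)]
print(pareto_front(configs))   # the (12.5, 1200.0) point is dominated and dropped
```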
15 pages, 389 KiB  
Article
Power Control of Reed–Solomon-Coded OFDM Systems in Rayleigh Fading Channels
by Younggil Kim
Information 2023, 14(4), 247; https://doi.org/10.3390/info14040247 - 20 Apr 2023
Viewed by 755
Abstract
Power control in an RS-coded orthogonal frequency division multiplex (OFDM) system with error-and-erasure correction decoding in Rayleigh fading channels was investigated. The power of each symbol within a codeword was controlled to reduce the codeword error rate (WER). Several RS-coded OFDM systems with power control are proposed. The WERs of the proposed systems as a function of the signal-to-noise ratio per bit were derived. We found that channel inversion at the transmitter in combination with the erasure generation by ordering fading amplitudes at the receiver provided the lowest WER among the considered systems. Furthermore, the erasure generation by ordering fading amplitudes was found to be better than the erasure generation by comparing fading amplitudes with a threshold. Full article
(This article belongs to the Section Wireless Technologies)
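The erasure-generation idea can be illustrated with a small Monte Carlo sketch under simplified assumptions (a crude amplitude-dependent symbol error probability and unit-power Rayleigh fading); it is not the paper's analytical derivation, and all parameters below are illustrative:

```python
import numpy as np
from math import erfc, sqrt

def wer_with_erasures(n=255, k=223, n_erasures=16, snr_db=10.0, trials=2000, seed=0):
    """Monte Carlo word-error-rate sketch for an RS(n, k) code over a Rayleigh channel.

    The n_erasures symbols with the smallest fading amplitude are erased at the
    receiver; the remaining symbols are in error with a simplified
    amplitude-dependent probability. Decoding succeeds when
    2 * errors + erasures <= n - k.
    """
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    failures = 0
    for _ in range(trials):
        a = rng.rayleigh(scale=1 / sqrt(2), size=n)          # fading amplitudes, E[a^2] = 1
        order = np.argsort(a)
        erased = np.zeros(n, dtype=bool)
        erased[order[:n_erasures]] = True                    # erase the weakest symbols
        p_err = np.array([0.5 * erfc(sqrt(snr) * x) for x in a])  # crude symbol error prob.
        errors = np.sum((rng.random(n) < p_err) & ~erased)
        if 2 * errors + n_erasures > n - k:
            failures += 1
    return failures / trials

print("estimated WER:", wer_with_erasures())
```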
19 pages, 2606 KiB  
Article
Target Positioning and Tracking in WSNs Based on AFSA
by Shu-Hung Lee, Chia-Hsin Cheng, Chien-Chih Lin and Yung-Fa Huang
Information 2023, 14(4), 246; https://doi.org/10.3390/info14040246 - 18 Apr 2023
Cited by 3 | Viewed by 1042
Abstract
In wireless sensor networks (WSNs), target positioning and tracking are very important topics. There are many different methods used in target positioning and tracking, for example, angle of arrival (AOA), time of arrival (TOA), time difference of arrival (TDOA), and received signal strength (RSS). This paper uses an artificial fish swarm algorithm (AFSA) and the received signal strength indicator (RSSI) channel model for indoor target positioning and tracking. The performance of eight different method combinations of fixed or adaptive steps, the region segmentation method (RSM), Hybrid Adaptive Vision of Prey (HAVP) method, and a Dynamic AF Selection (DAFS) method proposed in this paper for target positioning and tracking is investigated when the number of artificial fish is 100, 72, 52, 24, and 12. The simulation results show that, using the proposed HAVP, the total average positioning error is reduced by 96.1% and the positioning time is shortened by 26.4% for target positioning. Adopting HAVP, RSM, and DAFS in target tracking, the positioning time can be greatly shortened by 42.47% without degrading the tracking success rate. Full article
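The core ingredients of RSSI-based positioning, a log-distance path-loss model and a fitness function for a swarm optimizer such as AFSA to minimize, can be sketched as follows (illustrative parameters and anchor layout, not the authors' implementation):

```python
import numpy as np

# Assumed log-distance path-loss model: RSSI(d) = A - 10 * n * log10(d)
A, N_EXP = -40.0, 2.5          # RSSI at 1 m and path-loss exponent (illustrative values)

def rssi_to_distance(rssi):
    return 10 ** ((A - rssi) / (10 * N_EXP))

def fitness(candidate, anchors, rssi_measurements):
    """Sum of squared differences between RSSI-derived and candidate distances.

    A swarm algorithm such as AFSA moves artificial fish to minimize this value.
    """
    est = rssi_to_distance(np.asarray(rssi_measurements))
    d = np.linalg.norm(anchors - candidate, axis=1)
    return float(np.sum((d - est) ** 2))

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
rssi = A - 10 * N_EXP * np.log10(np.linalg.norm(anchors - true_pos, axis=1))
print(fitness(np.array([3.0, 4.0]), anchors, rssi))   # ~0 at the true position
print(fitness(np.array([7.0, 7.0]), anchors, rssi))   # larger elsewhere
```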
18 pages, 4114 KiB  
Article
Intelligent Augmented Reality for Learning Geometry
by Aldo Uriarte-Portillo, Ramón Zatarain-Cabada, María Lucía Barrón-Estrada, María Blanca Ibáñez and Lucía-Margarita González-Barrón
Information 2023, 14(4), 245; https://doi.org/10.3390/info14040245 - 17 Apr 2023
Cited by 2 | Viewed by 2194
Abstract
This work describes a learning tool named ARGeoITS that combines augmented reality with an intelligent tutoring system to support geometry learning. The work depicts a study developed in Mexico to measure the impact on the learning and motivation of students using two different learning systems: an intelligent tutoring system with augmented reality (ARGeoITS) and a system with only augmented reality (ARGeo). To study the effect of this type of technology (ARGeoITS, ARGeo) and time of assessment (pre-, post-) on learning gains and motivation, we applied a 2 × 2 factorial design to 106 middle school students. Both pretest and post-test questionnaires were applied to each group to determine the students’ learning gains, as was an IMMS motivational survey to evaluate the students’ motivation. The results show that: (1) students who used the intelligent tutoring system ARGeoITS scored higher in learning gain (7.47) compared with those who used ARGeo (6.83); and (2) both the ARGeoITS and ARGeo learning tools have a positive impact on students’ motivation. The research findings imply that intelligent tutoring systems that integrate augmented reality can be exploited as an effective learning environment to help middle–high school students learn difficult topics such as geometry. Full article
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)
14 pages, 1488 KiB  
Article
Minimizing Energy Depletion Using Extended Lifespan: QoS Satisfied Multiple Learned Rate (ELQSSM-ML) for Increased Lifespan of Mobile Adhoc Networks (MANET)
by Manikandan Rajagopal, Ramkumar Sivasakthivel, Jeyakrishnan Venugopal, Ioannis E. Sarris and Karuppusamy Loganathan
Information 2023, 14(4), 244; https://doi.org/10.3390/info14040244 - 17 Apr 2023
Cited by 2 | Viewed by 1125
Abstract
Mobile Adhoc Networks (MANETs) typically employ new technology to increase Quality-of-Service (QoS) when forwarding at multiple data rates. This kind of network causes high forwarding delays and improper data transfer rates because of the changes in the node’s vicinity. Although an optimized energy-transfer routing technique has been used to lessen the delay and improve the throughput by assigning a proper data rate, it does not consider the objective of minimizing the energy use, which results in a shorter network lifetime. The goal of the proposed work is to minimize the energy depletion in a MANET, which results in an extended lifespan of the network. In this research paper, the Extended Lifespan QoS-Satisfied Multiple Learned Rate (ELQSSM-ML) routing algorithm is proposed, which minimizes energy use and enhances the network lifetime. First, an optimization problem is formulated with the purpose of increasing the network’s lifetime while limiting the energy utilization and preserving the stability of the path along with the residual energy. Second, an adaptive policy is applied for the asymmetric distribution of energy at both origin and intermediate nodes. In order to achieve maximum network lifespan and minimal energy depletion, the optimization problem was framed with power usage as a constraint, allowing the network to make use of the leftover power. An asymmetric energy transmission strategy was also designed for the adaptive allocation of maximum transmission energy at the origin. This extends the network lifespan by reducing the node’s energy use for broadcasting the data from the origin to the target. Moreover, the node’s energy use during packet forwarding is reduced to recover the network lifetime. The overall benefit of the proposed work is that it achieves both minimal energy depletion and a maximal network lifetime. Finally, the simulation findings reveal that the ELQSSM-ML algorithm accomplishes better network performance than the classical algorithms. Full article
20 pages, 5908 KiB  
Article
Hyperparameter-Optimization-Inspired Long Short-Term Memory Network for Air Quality Grade Prediction
by Dushi Wen, Sirui Zheng, Jiazhen Chen, Zhouyi Zheng, Chen Ding and Lei Zhang
Information 2023, 14(4), 243; https://doi.org/10.3390/info14040243 - 17 Apr 2023
Cited by 4 | Viewed by 1327
Abstract
With the continuous development of modern society and the acceleration of urbanization, the problem of air pollution is becoming increasingly salient. Methods for predicting the air quality grade and determining the necessary governance are among the most urgent problems currently waiting to be solved. In recent years, more and more machine-learning-based methods have been used to solve the air quality prediction problem. However, the uncertainty of environmental changes and the difficulty of precisely predicting quantitative values seriously influence prediction results. In this paper, the proposed air pollutant quality grade prediction method based on a hyperparameter-optimization-inspired long short-term memory (LSTM) network provides two advantages. Firstly, the definition of air quality grade is introduced in the air quality prediction task, which turns a fitting problem into a classification problem and makes the complex problem simple; secondly, the hunter–prey optimization algorithm is used to optimize the hyperparameters of the LSTM structure, so that the optimal network structure is adaptively determined from the input data and generalizes better. The experimental results from three real Xi’an air quality datasets demonstrate the effectiveness of the proposed method. Full article
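A minimal sketch of the classification formulation described above, an LSTM over a window of past pollutant readings with a softmax over discrete air-quality grades, is shown below; the layer sizes and learning rate stand in for values the hunter–prey optimizer would tune, and the data are synthetic:

```python
import numpy as np
import tensorflow as tf

TIMESTEPS, N_FEATURES, N_GRADES = 24, 6, 6   # 24 h of 6 pollutant readings -> 6 quality grades

def build_model(units=64, dropout=0.2, lr=1e-3):
    """LSTM classifier; `units`, `dropout` and `lr` are the kind of
    hyperparameters a hunter-prey optimizer would search over."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(TIMESTEPS, N_FEATURES)),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(N_GRADES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Toy data standing in for the Xi'an measurements.
rng = np.random.default_rng(0)
X = rng.random((512, TIMESTEPS, N_FEATURES)).astype("float32")
y = rng.integers(0, N_GRADES, size=512)
build_model().fit(X, y, epochs=1, batch_size=32, verbose=0)
```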
17 pages, 286 KiB  
Review
Transformers in the Real World: A Survey on NLP Applications
by Narendra Patwardhan, Stefano Marrone and Carlo Sansone
Information 2023, 14(4), 242; https://doi.org/10.3390/info14040242 - 17 Apr 2023
Cited by 10 | Viewed by 7588
Abstract
The field of Natural Language Processing (NLP) has undergone a significant transformation with the introduction of Transformers. From the first introduction of this technology in 2017, the use of transformers has become widespread and has had a profound impact on the field of NLP. In this survey, we review the open-access and real-world applications of transformers in NLP, specifically focusing on those where text is the primary modality. Our goal is to provide a comprehensive overview of the current state-of-the-art in the use of transformers in NLP, highlight their strengths and limitations, and identify future directions for research. In this way, we aim to provide valuable insights for both researchers and practitioners in the field of NLP. In addition, we provide a detailed analysis of the various challenges faced in the implementation of transformers in real-world applications, including computational efficiency, interpretability, and ethical considerations. Moreover, we highlight the impact of transformers on the NLP community, including their influence on research and the development of new NLP models. Full article
22 pages, 3457 KiB  
Article
Application of Scenario Forecasting Methods and Fuzzy Multi-Criteria Modeling in Substantiation of Urban Area Development Strategies
by Natalia Sadovnikova, Oksana Savina, Danila Parygin, Alexey Churakov and Alexey Shuklin
Information 2023, 14(4), 241; https://doi.org/10.3390/info14040241 - 14 Apr 2023
Cited by 2 | Viewed by 1182
Abstract
The existing approaches to supporting the tasks of managing urban area development are aimed at choosing an alternative from a set of ready-made solutions. Little attention is paid to the procedure for the formation and analysis of acceptable options for the use of territories. The study’s purpose is to understand how various factors affect the efficiency of using the city’s territory. In addition, we are trying to use this understanding to assess the possible consequences of the implementation of management decisions on the territory transformation. We use a method of structuring knowledge about the study area, taking into account the influence of the external environment. This method supports the formation of a list of significant factors and the assessment of their impact on development. Fuzzy cognitive modeling was used to build scenarios for identifying contradictions in achieving sustainable development goals. The scenario modeling results are necessary for the formation of alternatives, which are then evaluated on the basis of fuzzy multi-criteria optimization. The integration of methods makes it possible to increase the objectivity of the analysis of strategies for urban area development. The Bellman–Zadeh method is used to analyze the selected options based on criteria that determine the feasibility and effectiveness of each project. Full article
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis II)
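In the Bellman–Zadeh approach, the fuzzy decision is the intersection (minimum) of the criteria membership functions, and the best alternative maximizes that minimum; a compact sketch with invented membership values (not the study's data):

```python
import numpy as np

# Rows: development alternatives; columns: membership degrees (0..1) for
# criteria such as feasibility, cost efficiency, and social effect (illustrative values).
membership = np.array([
    [0.7, 0.6, 0.9],   # alternative A
    [0.8, 0.5, 0.7],   # alternative B
    [0.6, 0.9, 0.8],   # alternative C
])

decision = membership.min(axis=1)          # Bellman-Zadeh: intersection of fuzzy criteria
best = int(np.argmax(decision))            # choose the alternative with the largest minimum
print("per-alternative decision degrees:", decision, "-> best alternative index:", best)
```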
17 pages, 6534 KiB  
Article
A Shallow System Prototype for Violent Action Detection in Italian Public Schools
by Erica Perseghin and Gian Luca Foresti
Information 2023, 14(4), 240; https://doi.org/10.3390/info14040240 - 14 Apr 2023
Cited by 1 | Viewed by 1559
Abstract
This paper presents a novel low-cost integrated system prototype, called the School Violence Detection system (SVD), based on a 2D Convolutional Neural Network (CNN). It is used for automatically classifying and identifying violent actions in educational environments on low-cost hardware. Moreover, the paper fills the gap of real datasets in educational environments by proposing a new one, called the Daily School Break dataset (DSB), containing original videos recorded in an Italian high school yard. The proposed CNN has been pre-trained with an ImageNet model and a transfer learning approach. To extend its capabilities, the DSB was enriched with online images representing students in school environments. Experimental results analyze the classification performance of the SVD and investigate how it performs on the proposed DSB dataset. The SVD, which achieves a recognition accuracy of 95%, is considered computationally efficient and low-cost. It could be adapted to other scenarios such as school arenas, gyms, playgrounds, etc. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)
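A minimal transfer-learning sketch in the spirit of the approach described, an ImageNet-pretrained backbone with a small binary violent/non-violent head, is shown below; the backbone choice, layer sizes, and training data names are assumptions, not the SVD system itself:

```python
import tensorflow as tf

def build_violence_classifier(img_size=224):
    """Frozen ImageNet backbone + small head for a binary violent/non-violent label."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=(img_size, img_size, 3), include_top=False, weights="imagenet")
    base.trainable = False                       # transfer learning: keep pretrained features
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_violence_classifier()
model.summary()
# model.fit(train_frames, train_labels, validation_data=(val_frames, val_labels), epochs=5)
# (train_frames / train_labels are hypothetical arrays of video frames and 0/1 labels)
```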
32 pages, 7410 KiB  
Article
Evaluation of a Resilience-Driven Operational Concept to Manage Drone Intrusions in Airports
by Domenico Pascarella, Gabriella Gigante, Angela Vozella, Maurizio Sodano, Marco Ippolito, Pierre Bieber, Thomas Dubot and Edgar Martinavarro
Information 2023, 14(4), 239; https://doi.org/10.3390/info14040239 - 13 Apr 2023
Cited by 1 | Viewed by 1117
Abstract
The drone market’s growth poses a serious threat due to the negligent, illicit, or non-cooperative use of drones, especially in airports and their surroundings. Effective protection of an airport against drone intrusions should guarantee mandatory safety levels but should also rely on a resilience-driven operational concept aimed at managing the intrusions without necessarily implying the closure of the airport. The concept faces both safety-related and security-related threats and is based on the definitions of: (i) new roles and responsibilities; (ii) a set of operational phases, accomplished by means of specific technological building blocks; (iii) a new operational procedure blending smoothly with existing aerodrome procedures in place. The paper investigates the evaluation of such a resilience-driven operational concept tailored to drone-intrusion features, airport features, and current operations. The proposed concept was evaluated by applying it to a concrete case study related to Milan Malpensa Airport. The evaluation was carried out by real-time simulations and event tree analysis, exploiting the implementation of specific simulation tools and the assessment of resilience-oriented metrics. The achieved results show the effectiveness of the proposed operational concept and elicit further requirements for future counter-drone systems in airports. Full article
(This article belongs to the Special Issue Systems Safety and Security—Challenges and Trends)
18 pages, 27646 KiB  
Article
Prediction of Road Traffic Accidents on a Road in Portugal: A Multidisciplinary Approach Using Artificial Intelligence, Statistics, and Geographic Information Systems
by Paulo Infante, Gonçalo Jacinto, Daniel Santos, Pedro Nogueira, Anabela Afonso, Paulo Quaresma, Marcelo Silva, Vitor Nogueira, Leonor Rego, José Saias, Patrícia Góis and Paulo R. Manuel
Information 2023, 14(4), 238; https://doi.org/10.3390/info14040238 - 13 Apr 2023
Viewed by 1953
Abstract
Road Traffic Accidents (RTA) cause human losses and irreparable physical and psychological damage to many of the victims. They also involve a very relevant economic dimension. It is urgent to improve the management of human and material resources for more effective prevention. This work makes an important contribution by presenting a methodology that allowed for achieving a predictive model for the occurrence of RTA on a road with a high RTA rate. The prediction is obtained for each road segment for a given time and day and combines results from statistical methods, spatial analysis, and artificial intelligence models. The performance of three Machine Learning (ML) models (Random Forest, C5.0 and Logistic Regression) is compared using different approaches for imbalanced data (random sampling, directional sampling, and Random Over-Sampling Examples (ROSE)) and using different segment lengths (500 m and 2000 m). This study used RTA data from 2016–2019 (training) and from May 2021–June 2022 (test). The most effective model was an ML logistic regression with the ROSE approach, using a segment length of 500 m (sensitivity = 87%, specificity = 60%, AUC = 0.82). The model was implemented in a digital application, and a Portuguese security force is already using it. Full article
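ROSE is an R technique; an analogous Python sketch of fitting a logistic-regression accident classifier on over-sampled imbalanced segment data is shown below (plain random over-sampling stands in for ROSE's smoothed bootstrap, and the features and data are synthetic):

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic segment features (e.g., hour, weekday, traffic volume) with rare accidents.
rng = np.random.default_rng(0)
X = rng.random((5000, 3))
y = (rng.random(5000) < 0.05).astype(int)          # ~5% positive class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)

clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
# With purely random synthetic data the AUC is ~0.5; the point is the workflow.
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```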
24 pages, 4949 KiB  
Article
Modeling Chronic Pain Experiences from Online Reports Using the Reddit Reports of Chronic Pain Dataset
by Diogo A. P. Nunes, Joana Ferreira-Gomes, Fani Neto and David Martins de Matos
Information 2023, 14(4), 237; https://doi.org/10.3390/info14040237 - 12 Apr 2023
Viewed by 1949
Abstract
Reported experiences of chronic pain may convey qualities relevant to the exploration of this private and subjective experience. We propose this exploration by means of the Reddit Reports of Chronic Pain (RRCP) dataset. We define and validate the RRCP for a set of subreddits related to chronic pain, identify the main concerns discussed in each subreddit, model each subreddit according to their main concerns, and compare subreddit models. The RRCP dataset comprises 86,537 submissions from 12 subreddits related to chronic pain (each related to one pathological background). Each RRCP subreddit was found to have various main concerns. Some of these concerns are shared between multiple subreddits (e.g., the subreddit Sciatica semantically entails the subreddit backpain in their various concerns, but not the other way around), whilst some concerns are exclusive to specific subreddits (e.g., Interstitialcystitis and CrohnsDisease). Our analysis details each of these concerns and their (dis)similarity relations. Although limited by the intrinsic qualities of the Reddit platform, to the best of our knowledge, this is the first research work attempting to model the linguistic expression of various chronic pain-inducing pathologies and comparing these models to identify and quantify the similarities and differences between the corresponding emergent, chronic pain experiences. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2023)
19 pages, 2625 KiB  
Article
Novel Task-Based Unification and Adaptation (TUA) Transfer Learning Approach for Bilingual Emotional Speech Data
by Ismail Shahin, Ali Bou Nassif, Rameena Thomas and Shibani Hamsa
Information 2023, 14(4), 236; https://doi.org/10.3390/info14040236 - 12 Apr 2023
Viewed by 1417
Abstract
Modern developments in machine learning methodology have produced effective approaches to speech emotion recognition. The field of data mining is widely employed in numerous situations where it is possible to predict future outcomes by using the input sequence from previous training data. Since the input feature space and data distribution are the same for both training and testing data in conventional machine learning approaches, they are drawn from the same pool. However, because so many applications require a difference in the distribution of training and testing data, the gathering of training data is becoming more and more expensive. High performance learners that have been trained using similar, already-existing data are needed in these situations. To increase a model’s capacity for learning, transfer learning involves transferring knowledge from one domain to another related domain. To address this scenario, we have extracted ten multi-dimensional features from speech signals using OpenSmile and a transfer learning method to classify the features of various datasets. In this paper, we emphasize the importance of a novel transfer learning system called Task-based Unification and Adaptation (TUA), which bridges the disparity between extensive upstream training and downstream customization. We take advantage of the two components of the TUA, task-challenging unification and task-specific adaptation. Our algorithm is studied using the following speech datasets: the Arabic Emirati-accented speech dataset (ESD), the English Speech Under Simulated and Actual Stress (SUSAS) dataset and the Ryerson Audio-Visual Database of Emotional Speech and Song dataset (RAVDESS). Using the multidimensional features and transfer learning method on the given datasets, we were able to achieve an average speech emotion recognition rate of 91.2% on the ESD, 84.7% on the RAVDESS and 88.5% on the SUSAS datasets, respectively. Full article
16 pages, 3201 KiB  
Article
A Super-Efficient TinyML Processor for the Edge Metaverse
by Arash Khajooei, Mohammad (Behdad) Jamshidi and Shahriar B. Shokouhi
Information 2023, 14(4), 235; https://doi.org/10.3390/info14040235 - 10 Apr 2023
Cited by 3 | Viewed by 2062
Abstract
Although the Metaverse is becoming a popular technology in many aspects of our lives, there are some drawbacks to its implementation on clouds, including long latency, security concerns, and centralized infrastructures. Therefore, designing scalable Metaverse platforms on the edge layer can be a practical solution. Nevertheless, the realization of these edge-powered Metaverse ecosystems without high-performance intelligent edge devices is almost impossible. Neuromorphic engineering, which employs brain-inspired cognitive architectures to implement neuromorphic chips and Tiny Machine Learning (TinyML) technologies, can be an effective tool to enhance edge devices in such emerging ecosystems. Thus, a super-efficient TinyML processor to use in the edge-enabled Metaverse platforms has been designed and evaluated in this research. This processor includes a Winner-Take-All (WTA) circuit that was implemented via a simplified Leaky Integrate and Fire (LIF) neuron on an FPGA. The WTA architecture is a computational principle in a neuromorphic system inspired by the mini-column structure in the human brain. The resource consumption of the WTA architecture is reduced by employing our simplified LIF neuron, making it suitable for the proposed edge devices. The results have indicated that the proposed neuron improves the response speed by almost 39% and reduces resource consumption by 50% compared to recent works. Using our simplified neuron, up to 4200 neurons can be deployed on VIRTEX 6 devices. The maximum operating frequency of the proposed neuron and our spiking WTA is 576.319 MHz and 514.095 MHz, respectively. Full article
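A behavioural sketch of the two ingredients named above, a simplified leaky integrate-and-fire neuron and a winner-take-all selection, is given below in Python; the FPGA design itself is fixed-point hardware, and all parameters here are illustrative:

```python
import numpy as np

def lif_step(v, i_in, leak=0.9, threshold=1.0):
    """One update of simplified leaky integrate-and-fire neurons.

    v: membrane potentials, i_in: input currents. Returns (new_v, spikes).
    """
    v = leak * v + i_in
    spikes = v >= threshold
    v = np.where(spikes, 0.0, v)        # reset neurons that fired
    return v, spikes

def winner_take_all(v):
    """Only the neuron with the largest membrane potential is allowed to win."""
    winner = np.zeros_like(v, dtype=bool)
    winner[np.argmax(v)] = True
    return winner

rng = np.random.default_rng(0)
v = np.zeros(8)                          # a mini-column of 8 competing neurons
for t in range(20):
    v, spikes = lif_step(v, rng.random(8) * 0.3)
winners = winner_take_all(v)
print("final potentials:", np.round(v, 3), "winner index:", int(np.argmax(winners)))
```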
20 pages, 5402 KiB  
Article
FedUA: An Uncertainty-Aware Distillation-Based Federated Learning Scheme for Image Classification
by Shao-Ming Lee and Ja-Ling Wu
Information 2023, 14(4), 234; https://doi.org/10.3390/info14040234 - 10 Apr 2023
Cited by 1 | Viewed by 1850
Abstract
Recently, federated learning (FL) has gradually become an important research topic in machine learning and information theory. FL emphasizes that clients jointly engage in solving learning tasks. In addition to data security issues, fundamental challenges in this type of learning include the imbalance and non-IID among clients’ data and the unreliable connections between devices due to limited communication bandwidths. The above issues are intractable to FL. This study starts from the uncertainty analysis of deep neural networks (DNNs) to evaluate the effectiveness of FL, and proposes a new architecture for model aggregation. Our scheme improves FL’s performance by applying knowledge distillation and the DNN’s uncertainty quantification methods. A series of experiments on the image classification task confirms that our proposed model aggregation scheme can effectively solve the problem of non-IID data, especially when affordable transmission costs are limited. Full article
(This article belongs to the Special Issue Artificial Intelligence and Big Data Applications)
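The general idea of uncertainty-aware aggregation can be sketched generically: weight each client's parameters by a confidence derived from its predictive uncertainty (inverse mean entropy here) before averaging. This illustrates the idea only and is not the FedUA algorithm; all data and weightings are invented:

```python
import numpy as np

def predictive_entropy(probs):
    """Mean entropy of a client's softmax outputs on its local validation data."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

def aggregate(client_weights, client_probs):
    """Average client parameter vectors, weighting low-uncertainty clients more."""
    conf = np.array([1.0 / (1e-6 + predictive_entropy(p)) for p in client_probs])
    conf = conf / conf.sum()
    return sum(c * w for c, w in zip(conf, client_weights))

# Three toy clients: a flat parameter vector each, plus softmax outputs on local data.
rng = np.random.default_rng(0)
weights = [rng.normal(size=10) for _ in range(3)]
probs = [rng.dirichlet(alpha, size=50) for alpha in ([5, 1, 1], [1, 1, 1], [8, 1, 1])]
print("aggregated parameters:", np.round(aggregate(weights, probs), 3))
```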
19 pages, 6223 KiB  
Article
Architecture and Data Knowledge of the Regional Data Center for Intelligent Agriculture
by Emil Doychev, Atanas Terziyski, Stoyan Tenev, Asya Stoyanova-Doycheva, Vanya Ivanova and Pepa Atanasova
Information 2023, 14(4), 233; https://doi.org/10.3390/info14040233 - 09 Apr 2023
Viewed by 1601
Abstract
The main task of the National Research Program “Smart crop production”, supported by the Ministry of Education and Science of Bulgaria and approved by the Council of Ministers, is the development of a regional data center to facilitate the work of farmers. The regional data center is part of the implementation of a smart crop production environment called ZEMEL which provides personal assistants supporting the work of farmers. The environment provides intelligent services for crop analysis and prevention and assists farmers in performing basic tasks related to crop production. The objective of the proposed article is to present the implementation of the architecture, infrastructure, and data architecture of a regional data center in the Plovdiv region. In order to clearly present the results of this work, which are the architectural and physical implementations of a regional data center and the storage of dynamic data and background knowledge, a methodology consisting of several steps is followed: the system infrastructure of the data center and the data architecture are discussed; one of the local pieces of infrastructure, implemented in the Institute of Plant Genetic Resources (IPGR) in the town of Sadovo in the Plovdiv region, is presented in detail, including the different types of sensors and their connection to the data center in wheat cultivation; the data repositories are discussed where dynamic data and background knowledge are stored. The paper pays special attention to background knowledge developed as ontologies for winter wheat cultivation. The results are summarized by drawing some conclusions and recommendations for the design of the local infrastructure of the center and the stored data to improve its performance. Full article
(This article belongs to the Special Issue Virtual-Physical Architectures and Applications)
29 pages, 4153 KiB  
Article
Structure Learning and Hyperparameter Optimization Using an Automated Machine Learning (AutoML) Pipeline
by Konstantinos Filippou, George Aifantis, George A. Papakostas and George E. Tsekouras
Information 2023, 14(4), 232; https://doi.org/10.3390/info14040232 - 09 Apr 2023
Cited by 5 | Viewed by 2453
Abstract
In this paper, we built an automated machine learning (AutoML) pipeline for structure-based learning and hyperparameter optimization purposes. The pipeline consists of three main automated stages. The first carries out the collection and preprocessing of the dataset from the Kaggle database through the Kaggle API. The second utilizes the Keras-Bayesian optimization tuning library to perform hyperparameter optimization. The third focuses on the training process of the machine learning (ML) model using the hyperparameter values estimated in the previous stage, and its evaluation is performed on the testing data by implementing the Neptune AI. The main technologies used to develop a stable and reusable machine learning pipeline are the popular Git version control system, the Google cloud virtual machine, the Jenkins server, the Docker containerization technology, and the Ngrok reverse proxy tool. The latter can securely publish the local Jenkins address as public through the internet. As such, some parts of the proposed pipeline are taken from the thematic area of machine learning operations (MLOps), resulting in a hybrid software scheme. The machine learning model used to evaluate the pipeline is a multilayer perceptron (MLP) that combines typical dense as well as polynomial layers. The simulation results show that the proposed pipeline exhibits a reliable and accurate performance while managing to boost the network’s performance in classification tasks. Full article
(This article belongs to the Special Issue Systems Engineering and Knowledge Management)
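The hyperparameter-optimization stage can be sketched with the Keras Tuner Bayesian optimizer; the dataset, search ranges, and trial count below are placeholders, not the pipeline's actual configuration:

```python
import numpy as np
import tensorflow as tf
import keras_tuner as kt

def build_model(hp):
    """Search space: width of the hidden layer and the learning rate."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(20,)),
        tf.keras.layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
        loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

rng = np.random.default_rng(0)                       # toy stand-in for a Kaggle dataset
X, y = rng.random((600, 20)).astype("float32"), rng.integers(0, 3, 600)

tuner = kt.BayesianOptimization(build_model, objective="val_accuracy",
                                max_trials=5, overwrite=True, directory="tuning")
tuner.search(X, y, validation_split=0.2, epochs=3, verbose=0)
print(tuner.get_best_hyperparameters(1)[0].values)
```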
18 pages, 6347 KiB  
Article
Probabilistic Forecasting of Residential Energy Consumption Based on SWT-QRTCN-ADSC-NLSTM Model
by Ning Jin, Linlin Song, Gabriel Jing Huang and Ke Yan
Information 2023, 14(4), 231; https://doi.org/10.3390/info14040231 - 08 Apr 2023
Cited by 1 | Viewed by 1461
Abstract
Residential electricity consumption forecasting plays a crucial role in the rational allocation of resources, reducing energy waste, and enhancing the grid-connected operation of power systems. Probabilistic forecasting can provide more comprehensive information for the decision-making and dispatching process by quantifying the uncertainty of electricity load. In this study, we propose a method for probabilistic electricity consumption prediction based on stationary wavelet transform (SWT), quantile regression (QR), bidirectional nested long short-term memory (BiNLSTM), and depthwise separable convolution (DSC) combined with an attention mechanism. First, the data sequence is decomposed using SWT to reduce the complexity of the sequence; then, the combined neural network model with attention is used to obtain the prediction values under different quantile conditions. Finally, the probability density curve of electricity consumption is obtained by combining kernel density estimation (KDE). The model was tested using historical demand-side data from five UK households to achieve energy consumption predictions 5 min in advance. It is demonstrated that the model can achieve both reliable probabilistic prediction and accurate deterministic prediction. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2023)
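Two generic building blocks of probabilistic load forecasting mentioned above, the pinball (quantile) loss minimized by quantile-regression models and a kernel density estimate over quantile forecasts, can be sketched as follows (illustrative numbers, not the SWT-QRTCN-ADSC-NLSTM model itself):

```python
import numpy as np
from scipy.stats import gaussian_kde

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss minimized by a quantile-regression forecaster."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

# Hypothetical forecasts of one household's consumption (kWh) at several quantiles.
quantile_forecasts = {0.1: 0.21, 0.25: 0.26, 0.5: 0.30, 0.75: 0.35, 0.9: 0.42}
actual = 0.33
for q, f in quantile_forecasts.items():
    loss = pinball_loss(np.array([actual]), np.array([f]), q)
    print(f"q={q:.2f} forecast={f:.2f} pinball={loss:.4f}")

# Approximate density of consumption from the quantile forecasts via KDE.
density = gaussian_kde(list(quantile_forecasts.values()))
grid = np.linspace(0.1, 0.6, 6)
print(np.round(density(grid), 3))
```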
18 pages, 6736 KiB  
Article
Enhanced Readability of Electrical Network Complex Emergency Modes Provided by Data Compression Methods
by Aleksandr Kulikov, Pavel Ilyushin and Anton Loskutov
Information 2023, 14(4), 230; https://doi.org/10.3390/info14040230 - 08 Apr 2023
Cited by 1 | Viewed by 1072
Abstract
Current microprocessor-based relay protection and automation (RPA) devices supported by IEC 61850 provide access to a large amount of information on the protected or controlled electric power facility in real time. The issue of using such information (Big Data) in order to improve the parameters of technical modification of intelligent electronic devices at digital substations remains unaddressed. Prerequisites arise for designing modern power systems with relay protection devices of a new generation based on new information algorithms. In particular, it is expedient to develop multi-parameter protections using more than one information parameter: modules of current, voltage, derivatives thereof, phase angles, active and reactive resistances, etc. An information approach based on multiple modeling and statistical processing of modeling results is also promising. This article explores the issues of enhanced sensitivity of multi-parameter relay protection using long-range redundancy protection as an example. A transition to “generalized features” is proposed in order to simplify multi-parameter protection and reduce the computational load on the RPA device. Out of a large number of analyzed indicators (currents, voltages, their derivatives, resistances, increments of currents, angles between current and voltage, etc.), we specify the most informative by using the method of “data compression”. The transition to generalized features simplifies the parameterization of settings, and the process of making a decision by the relay protection device is reduced to obtaining a generalized feature and comparing it with a dimensionless setting in relative terms. For the formation of generalized information features, two mathematical methods are studied: the method of principal components and Fisher’s linear discriminant. Full article
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis II)
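The two compression methods named at the end of the abstract are standard; a generic sketch of reducing a vector of protection indicators to a single "generalized feature" with PCA and with Fisher's linear discriminant is shown below (synthetic data, not the study's signals):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic feature vectors (currents, voltages, derivatives, ...) for two classes:
# 0 = normal operation, 1 = fault in the protected zone.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 8))
fault = rng.normal(1.5, 1.0, size=(200, 8))
X = np.vstack([normal, fault])
y = np.array([0] * 200 + [1] * 200)

pca_feature = PCA(n_components=1).fit_transform(X)                            # unsupervised compression
lda_feature = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)  # supervised compression

# A single dimensionless threshold on the generalized feature can now act as the setting.
print("class separation (LDA feature means):",
      lda_feature[y == 0].mean(), lda_feature[y == 1].mean())
```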
22 pages, 473 KiB  
Article
Optimal Load Redistribution in Distribution Systems Using a Mixed-Integer Convex Model Based on Electrical Momentum
by Daniela Patricia Bohórquez-Álvarez, Karen Dayanna Niño-Perdomo and Oscar Danilo Montoya
Information 2023, 14(4), 229; https://doi.org/10.3390/info14040229 - 07 Apr 2023
Cited by 1 | Viewed by 1324
Abstract
This paper addresses the problem concerning the efficient minimization of power losses in asymmetric distribution grids from the perspective of convex optimization. This research’s main objective is to propose an approximation optimization model to reduce the total power losses in a three-phase network using the concept of electrical momentum. To obtain a mixed-integer convex formulation, the voltage variables at each node are relaxed by assuming them to be equal to those at the substation bus. With this assumption, the power balance constraints are reduced to flow restrictions, allowing us to formulate a set of linear rules. The objective function is formulated as a strictly convex objective function by applying the concept of average electrical momentum, by representing the current flows in distribution lines as the active and reactive power variables. To solve the relaxed MIQC model, the GAMS software (Version 28.1.2) and its CPLEX, SBB, and XPRESS solvers are used. In order to validate the effectiveness of load redistribution in power loss minimization, the initial and final grid configurations are tested with the triangular-based power flow method for asymmetric distribution networks. Numerical results show that the proposed mixed-integer model allows for reductions of 24.34%, 18.64%, and 4.14% for the 8-, 15-, and 25-node test feeders, respectively, in comparison with the benchmark case. The sine–cosine algorithm and the black hole optimization method are also used for comparison, demonstrating the efficiency of the MIQC approach in minimizing the expected grid power losses for three-phase unbalanced networks. Full article
(This article belongs to the Special Issue Optimization Algorithms for Engineering Applications)
17 pages, 931 KiB  
Article
Graph Neural Networks and Open-Government Data to Forecast Traffic Flow
by Petros Brimos, Areti Karamanou, Evangelos Kalampokis and Konstantinos Tarabanis
Information 2023, 14(4), 228; https://doi.org/10.3390/info14040228 - 07 Apr 2023
Cited by 2 | Viewed by 2235
Abstract
Traffic forecasting has been an important area of research for several decades, with significant implications for urban traffic planning, management, and control. In recent years, deep-learning models, such as graph neural networks (GNN), have shown great promise in traffic forecasting due to their ability to capture complex spatio–temporal dependencies within traffic networks. Additionally, public authorities around the world have started providing real-time traffic data as open-government data (OGD). This large volume of dynamic and high-value data can open new avenues for creating innovative algorithms, services, and applications. In this paper, we investigate the use of traffic OGD with advanced deep-learning algorithms. Specifically, we deploy two GNN models—the Temporal Graph Convolutional Network and Diffusion Convolutional Recurrent Neural Network—to predict traffic flow based on real-time traffic OGD. Our evaluation of the forecasting models shows that both GNN models outperform the two baseline models—Historical Average and Autoregressive Integrated Moving Average—in terms of prediction performance. We anticipate that the exploitation of OGD in deep-learning scenarios will contribute to the development of more robust and reliable traffic-forecasting algorithms, as well as provide innovative and efficient public services for citizens and businesses. Full article
(This article belongs to the Special Issue Artificial Intelligence and Big Data Applications)
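The two baselines the GNN models are compared against are easy to state concretely; a small sketch on a synthetic 5-minute flow series is shown below (statsmodels' ARIMA as one common implementation; the series and model order are invented, and the GNNs themselves are beyond a few lines):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic 5-minute traffic-flow counts with a daily pattern (288 steps per day).
rng = np.random.default_rng(0)
steps_per_day, days = 288, 7
t = np.arange(steps_per_day * days)
flow = 200 + 80 * np.sin(2 * np.pi * t / steps_per_day) + rng.normal(0, 10, t.size)

train, test = flow[:-steps_per_day], flow[-steps_per_day:]

# Historical Average: predict each time-of-day slot by its mean over the training days.
ha_forecast = train.reshape(days - 1, steps_per_day).mean(axis=0)

# ARIMA baseline on the raw series.
arima_forecast = ARIMA(train, order=(2, 0, 2)).fit().forecast(steps=steps_per_day)

mae = lambda a, b: float(np.mean(np.abs(a - b)))
print("HA MAE:", round(mae(test, ha_forecast), 2),
      "ARIMA MAE:", round(mae(test, arima_forecast), 2))
```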
19 pages, 18994 KiB  
Article
Convolutional Neural Networks Analysis Reveals Three Possible Sources of Bronze Age Writings between Greece and India
by Shruti Daggumati and Peter Z. Revesz
Information 2023, 14(4), 227; https://doi.org/10.3390/info14040227 - 07 Apr 2023
Cited by 3 | Viewed by 1980
Abstract
This paper analyzes the relationships among eight ancient scripts from between Greece and India. We used convolutional neural networks combined with support vector machines to give a numerical rating of the similarity between pairs of signs (one sign from each of two different scripts). Two scripts that had a one-to-one matching of their signs were determined to be related. The result of the analysis is the finding of the following three groups, which are listed in chronological order: (1) Sumerian pictograms, the Indus Valley script, and the proto-Elamite script; (2) Cretan hieroglyphs and Linear B; and (3) the Phoenician, Greek, and Brahmi alphabets. Based on their geographic locations and times of appearance, Group (1) may originate from Mesopotamia in the early Bronze Age, Group (2) may originate from Europe in the middle Bronze Age, and Group (3) may originate from the Sinai Peninsula in the late Bronze Age. Full article
(This article belongs to the Special Issue International Database Engineered Applications)
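One way to picture the pairwise sign comparison described above, as a sketch under assumptions rather than the authors' pipeline: feature vectors extracted from sign images are fed to an SVM trained to discriminate scripts, and the classifier's probability output is read as a similarity rating between an unseen sign and another script. The extract_features function and the toy image data below are hypothetical stand-ins for the paper's convolutional feature extractor and sign datasets.

```python
# Sketch only: SVM over simple image features as a stand-in for the paper's CNN+SVM pipeline.
import numpy as np
from sklearn.svm import SVC

def extract_features(image: np.ndarray) -> np.ndarray:
    """Hypothetical feature extractor: mean-pool a 16x16 sign image into 4x4 blocks."""
    pooled = image.reshape(4, 4, 4, 4).mean(axis=(1, 3))
    return pooled.ravel()

rng = np.random.default_rng(1)
# Toy stand-in data: 40 "signs" per script for two hypothetical scripts.
script_a = rng.normal(0.3, 0.1, size=(40, 16, 16))
script_b = rng.normal(0.7, 0.1, size=(40, 16, 16))
features = np.array([extract_features(img) for img in np.vstack([script_a, script_b])])
labels = np.array([0] * 40 + [1] * 40)

clf = SVC(probability=True).fit(features, labels)

# "Similarity rating" of an unseen sign to script B = classifier's probability for class 1.
test_sign = rng.normal(0.65, 0.1, size=(16, 16))
rating = clf.predict_proba(extract_features(test_sign)[None, :])[0, 1]
print(f"similarity of test sign to script B: {rating:.2f}")
```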
18 pages, 973 KiB  
Article
Four Million Segments and Counting: Building an English-Croatian Parallel Corpus through Crowdsourcing Using a Novel Gamification-Based Platform
by Rafał Jaworski, Sanja Seljan and Ivan Dunđer
Information 2023, 14(4), 226; https://doi.org/10.3390/info14040226 - 06 Apr 2023
Viewed by 1674
Abstract
Parallel corpora have been widely used in the fields of natural language processing and translation as they provide crucial multilingual information. They are used to train machine translation systems, compile dictionaries, or generate inter-language word embeddings. Many corpora are publicly available; however, support for some languages is still limited. In this paper, the authors present a framework for collecting, organizing, and storing corpora. The solution was originally designed to obtain data for less-resourced languages, but it proved to work very well for collecting high-value domain-specific corpora. The scenario is based on the collective work of a group of people who are motivated through gamification. The rules of the game encourage the participants to submit large resources, and a peer-review process ensures quality. More than four million translated segments have been collected so far. Full article
(This article belongs to the Special Issue Machine Translation for Conquering Language Barriers)
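The collection workflow sketched in this abstract (submission, peer review, gamified scoring) can be pictured with a minimal data model; the following is a hypothetical illustration of the kind of records such a platform might keep, not the authors' system.

```python
# Hypothetical sketch of a gamified corpus-collection platform's core records.
from dataclasses import dataclass, field

@dataclass
class Segment:
    source: str            # English segment
    target: str            # Croatian translation
    contributor: str
    reviews: list = field(default_factory=list)   # peer-review verdicts (True = accepted)

    def accepted(self, min_reviews: int = 2) -> bool:
        """A segment enters the corpus once enough peers approve it."""
        return len(self.reviews) >= min_reviews and all(self.reviews)

def leaderboard(contributions: list) -> dict:
    """Gamified scoring: one point per accepted segment."""
    board: dict = {}
    for seg in contributions:
        if seg.accepted():
            board[seg.contributor] = board.get(seg.contributor, 0) + 1
    return board

corpus = [
    Segment("The contract was signed.", "Ugovor je potpisan.", "ana", [True, True]),
    Segment("See appendix A.", "Vidi prilog A.", "ivan", [True, False]),
]
print(leaderboard(corpus))   # {'ana': 1}
```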
22 pages, 316 KiB  
Article
An Exploratory Study of the Use of the Internet and E-Government by Older Adults in the Countryside of Brazil
by Leonardo Filipe da Silva, Emilene Zitkus and André Pimenta Freire
Information 2023, 14(4), 225; https://doi.org/10.3390/info14040225 - 06 Apr 2023
Cited by 1 | Viewed by 2384
Abstract
The ubiquity of the Internet and the increasing aging of the world's population are ever more evident. Older users have different demands and capabilities when using the services offered in the digital environment. As a service provider to its population, the government has sought to optimize the provision of services and access to information through information and communication technology. Older adults are a relevant group of users of public services and have significant demands for some specific public services. To identify the factors that promote use, the perceptions, and the barriers faced by the older population regarding the Internet and government websites, this study drew on a survey previously carried out in the United Kingdom. The study reports on a survey of 143 participants recruited from different geographical regions of the countryside of Brazil. The research showed that although government websites strive to offer quality content and keep older adult users satisfied, a significant number of users still do not use these sites or the Internet, due to low technology skills. Older citizens also reported high Internet penetration and mobile device use. Lower computer literacy in the countryside of Brazil was related to factors such as gender, education level, race, and sociocultural factors. A partial comparison with the United Kingdom study showed a lag in the use of e-government services by older adults in the countryside of Brazil. Full article
16 pages, 8063 KiB  
Article
Using a Machine Learning Approach to Evaluate the NOx Emissions in a Spark-Ignition Optical Engine
by Federico Ricci, Luca Petrucci and Francesco Mariani
Information 2023, 14(4), 224; https://doi.org/10.3390/info14040224 - 06 Apr 2023
Cited by 4 | Viewed by 1503
Abstract
Currently, machine learning (ML) technologies are widely employed in the automotive field for determining physical quantities thanks to their ability to ensure lower computational costs and faster operation than traditional methods. Within this context, the present work reports the outcomes of forecasting activities on the prediction of pollutant emissions from engines using an artificial neural network technique. Tests on an optical-access engine were conducted under lean mixture conditions, the direction in which automotive research is moving to meet ever-stricter regulations on pollutant emissions. A NARX architecture was used to estimate the engine's nitrogen oxide emissions from in-cylinder pressure data and images of the flame front evolution recorded by a high-speed camera and processed with a Mask R-CNN technique. Based on the obtained results, the methodology's applicability to real situations, such as metal engines, was assessed through a sensitivity analysis presented in the second part of the work, which helped identify and quantify the most important input parameters for the nitrogen oxide forecast. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)
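A NARX model, as named in this abstract, predicts the next output from lagged exogenous inputs and lagged outputs. The sketch below shows that structure in open-loop form on hypothetical in-cylinder pressure and flame-area features, using an off-the-shelf regressor; the data and network size are placeholders, not the paper's setup.

```python
# Sketch of the NARX idea only: regress NOx at cycle t on lagged exogenous
# features (in-cylinder pressure, flame area) and lagged NOx values.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_cycles, lags = 300, 3
pressure = rng.normal(30, 5, n_cycles)               # hypothetical cycle-averaged pressure [bar]
flame_area = rng.normal(0.6, 0.1, n_cycles)          # hypothetical flame-front area fraction
nox = 0.8 * pressure - 20 * flame_area + rng.normal(0, 1, n_cycles)  # synthetic target

# Build the NARX regression matrix: [u(t-1..t-L), y(t-1..t-L)] -> y(t).
rows, targets = [], []
for t in range(lags, n_cycles):
    rows.append(np.concatenate([pressure[t - lags:t], flame_area[t - lags:t], nox[t - lags:t]]))
    targets.append(nox[t])
X, y = np.array(rows), np.array(targets)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("one-step-ahead NOx prediction:", model.predict(X[-1:])[0])
```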
34 pages, 1619 KiB  
Article
AutoML with Bayesian Optimizations for Big Data Management
by Aristeidis Karras, Christos Karras, Nikolaos Schizas, Markos Avlonitis and Spyros Sioutas
Information 2023, 14(4), 223; https://doi.org/10.3390/info14040223 - 05 Apr 2023
Cited by 7 | Viewed by 2363
Abstract
The field of automated machine learning (AutoML) has gained significant attention in recent years due to its ability to automate the process of building and optimizing machine learning models. However, the increasing amount of big data being generated has presented new challenges for AutoML systems in terms of big data management. In this paper, we introduce Fabolas and learning curve extrapolation as two methods for accelerating hyperparameter optimization. We also present four methods for accelerating training: Bag of Little Bootstraps, k-means clustering for Support Vector Machines, subsample size selection for gradient descent, and subsampling for logistic regression. Additionally, we discuss the use of Markov Chain Monte Carlo (MCMC) methods and other stochastic optimization techniques to improve the efficiency of AutoML systems in managing big data. These methods enhance various facets of the training process, making it feasible to combine them in diverse ways for further speedups. We review several promising combinations and provide a comprehensive picture of the current state of AutoML and its potential for managing big data in various industries. Furthermore, we highlight the importance of parallel computing and distributed systems for improving the scalability of AutoML systems when working with big data. Full article
(This article belongs to the Special Issue Multidimensional Data Structures and Big Data Management)
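Of the speed-up techniques listed in the abstract, the Bag of Little Bootstraps is the simplest to sketch: each small subset of size roughly n^0.6 is repeatedly reweighted up to the full sample size with multinomial weights, and the resulting estimates are aggregated. The code below is an illustrative sketch for a mean estimator on synthetic data, not the paper's implementation.

```python
# Illustrative Bag of Little Bootstraps (BLB) sketch for a mean estimator.
import numpy as np

def bag_of_little_bootstraps(data, n_subsets=5, n_resamples=50, gamma=0.6, seed=0):
    """Average, over small subsets of size n**gamma, of bootstrap confidence-interval
    widths computed by reweighting each subset up to the full sample size n."""
    rng = np.random.default_rng(seed)
    n = len(data)
    b = int(n ** gamma)                       # subset size, e.g. n^0.6
    widths = []
    for _ in range(n_subsets):
        subset = rng.choice(data, size=b, replace=False)
        estimates = []
        for _ in range(n_resamples):
            # Multinomial weights simulate a size-n bootstrap using only b points.
            weights = rng.multinomial(n, np.full(b, 1.0 / b))
            estimates.append(np.average(subset, weights=weights))
        lo, hi = np.percentile(estimates, [2.5, 97.5])
        widths.append(hi - lo)
    return float(np.mean(widths))

data = np.random.default_rng(3).normal(loc=10.0, scale=2.0, size=10_000)
print("BLB 95% CI width for the mean:", bag_of_little_bootstraps(data))
```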