Big Data Cogn. Comput., Volume 7, Issue 3 (September 2023) – 37 articles

Cover Story (view full-size image): Large Language Models (LLMs) act as psycho-social mirrors, reflecting the views and tendencies prevalent in society, so it is important to understand the biases hidden in them. This study focuses on the global phenomenon of anxiety about math and STEM subjects, applying network science and cognitive psychology to data obtained by probing three LLMs (GPT-3, GPT-3.5, and GPT-4) in a language generation task. The findings indicate that LLMs hold negative perceptions of math and STEM fields, with the bias diminishing in newer models; this suggests that advances in LLM architecture may lead to increasingly less-biased models that could even help reduce stereotypes in society rather than perpetuate them. View this paper
21 pages, 5814 KiB  
Article
Intelligent Method for Classifying the Level of Anthropogenic Disasters
by Khrystyna Lipianina-Honcharenko, Carsten Wolff, Anatoliy Sachenko, Ivan Kit and Diana Zahorodnia
Big Data Cogn. Comput. 2023, 7(3), 157; https://doi.org/10.3390/bdcc7030157 - 21 Sep 2023
Cited by 1 | Viewed by 1560
Abstract
Anthropogenic disasters pose a challenge to management in the modern world, and accurate, timely information is essential for assessing the level of danger and taking appropriate response measures. The purpose of this paper is therefore to develop an effective method for assessing the level of an anthropogenic disaster based on information from witnesses to the event. A conceptual model for assessing the consequences of anthropogenic disasters is proposed, whose main components are the analysis of the collected data, modeling, and assessment of consequences. The main characteristics of the intelligent classification method are considered, in particular exploratory data analysis (EDA), classification of textual data balanced with SMOTE, and classification by an ensemble machine learning method using boosting. The experimental results confirmed that, for textual data, the best classification is achieved at level V and level I, with scores of 0.97 and 0.94, respectively, and an average estimate of 0.68. For quantitative data, the classification accuracy of Potential Accident Level relative to Industry Sector is 77%, and the f1-score is 0.88, indicating fairly high model accuracy. The architecture of a mobile application for classifying the level of anthropogenic disasters has been developed, which reduces the time required to assess the consequences of danger in a region. In addition, the proposed approach supports interaction with dynamic and uncertain environments, making it an effective classification tool. Full article
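The abstract mentions balancing textual training data with SMOTE before classification. As a reminder of the core SMOTE idea only (not the authors' pipeline), a minimal sketch that synthesizes a new minority-class sample by interpolating between a minority point and one of its nearest minority neighbors, assuming plain numeric feature vectors:

```python
import random

def smote_sample(minority, k=2, rng=None):
    """Generate one synthetic sample: pick a minority point and
    interpolate toward one of its k nearest minority neighbors."""
    rng = rng or random.Random(0)
    x = rng.choice(minority)
    # k nearest neighbors by squared Euclidean distance (excluding x itself)
    neighbors = sorted(
        (p for p in minority if p is not x),
        key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
    )[:k]
    n = rng.choice(neighbors)
    gap = rng.random()  # interpolation factor in [0, 1)
    return tuple(a + gap * (b - a) for a, b in zip(x, n))

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
synthetic = smote_sample(minority)
```

Because the synthetic point lies on the segment between two real minority points, it always stays inside the minority class's local region, which is what makes SMOTE safer than naive duplication.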
(This article belongs to the Special Issue Quality and Security of Critical Infrastructure Systems)

18 pages, 1526 KiB  
Article
Big Data Analytics with the Multivariate Adaptive Regression Splines to Analyze Key Factors Influencing Accident Severity in Industrial Zones of Thailand: A Study on Truck and Non-Truck Collisions
by Manlika Seefong, Panuwat Wisutwattanasak, Chamroeun Se, Kestsirin Theerathitichaipa, Sajjakaj Jomnonkwao, Thanapong Champahom, Vatanavongs Ratanavaraha and Rattanaporn Kasemsri
Big Data Cogn. Comput. 2023, 7(3), 156; https://doi.org/10.3390/bdcc7030156 - 21 Sep 2023
Viewed by 1657
Abstract
Machine learning currently holds a vital position in predicting collision severity, and identifying factors associated with heightened risks of injury and fatality aids in enhancing road safety measures and management. Thailand presently faces considerable challenges with road traffic accidents. These challenges are particularly acute in industrial zones, where heavy mixed traffic of trucks and non-trucks significantly amplifies the risk of accidents and contributes to a rise in injuries and fatalities. Discerning the factors that influence the severity of injuries and fatalities is therefore pivotal for formulating effective road safety policies and measures. This study aims to predict the factors contributing to the severity of accidents involving truck and non-truck collisions in industrial zones, considering roadway characteristics, presumed causes, crash characteristics, and weather conditions. Because accident data are big data with specific characteristics and complexity, machine learning in tandem with the Multivariate Adaptive Regression Splines (MARS) technique allows precise predictions that identify the factors influencing collision severity. The analysis demonstrates that several factors increase the severity of truck-involved accidents: darting in front of a vehicle, head-on collisions, and pedestrian collisions. For non-truck collisions, the significant factors heightening severity are tailgating, running signs/signals, angle collisions, head-on collisions, overtaking collisions, pedestrian collisions, obstruction collisions, and collisions during overcast conditions. These findings illuminate the significant factors influencing the severity of truck and non-truck accidents and provide invaluable information for developing targeted road safety measures and policies, thereby contributing to the mitigation of injuries and fatalities. Full article
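MARS models are built from hinge functions of the form max(0, x − t) and max(0, t − x). As an illustration only (the knot values below are arbitrary, not from the study), a toy expansion of one predictor into a MARS-style basis:

```python
def hinge_basis(x, knots):
    """Expand a scalar into MARS-style hinge features:
    for each knot t, add max(0, x - t) and max(0, t - x)."""
    feats = []
    for t in knots:
        feats.append(max(0.0, x - t))
        feats.append(max(0.0, t - x))
    return feats

# A piecewise-linear model is then a weighted sum of these features
# plus an intercept; MARS selects knots and weights automatically.
row = hinge_basis(2.5, knots=[1.0, 3.0])
```

Each knot splits the predictor's range into regions with their own slopes, which is how MARS captures threshold effects such as severity changing sharply above some speed or traffic volume.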
(This article belongs to the Special Issue Sustainable Big Data Analytics and Machine Learning Technologies)

22 pages, 727 KiB  
Article
Semi-Supervised Classification with A*: A Case Study on Electronic Invoicing
by Bernardo Panichi and Alessandro Lazzeri
Big Data Cogn. Comput. 2023, 7(3), 155; https://doi.org/10.3390/bdcc7030155 - 20 Sep 2023
Viewed by 1422
Abstract
This paper addresses the time-intensive task of assigning accurate account labels to invoice entries within corporate bookkeeping. Despite the advent of electronic invoicing, many software solutions still rely on rule-based approaches that fail to address the multifaceted nature of this challenge. While machine learning holds promise for such repetitive tasks, the presence of low-quality training data often poses a hurdle. Frequently, labels pertain to invoice rows at a group level rather than an individual level, leading to the exclusion of numerous records during preprocessing. To enhance the efficiency of an invoice entry classifier within a semi-supervised context, this study proposes an innovative approach that combines the classifier with the A* graph search algorithm. Through experimentation across various classifiers, the results consistently demonstrated a noteworthy increase in accuracy, ranging between 1% and 4%. This improvement is primarily attributed to a marked reduction in the discard rate of data, which decreased from 39% to 14%. This paper contributes to the literature by presenting a method that leverages the synergy of a classifier and A* graph search to overcome challenges posed by limited and group-level label information in the realm of electronic invoicing classification. Full article
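The paper's key component is the A* graph search coupled with the classifier. As a reminder of the search half only (a generic weighted graph with an admissible heuristic, not the paper's invoice-specific graph), a stdlib sketch:

```python
import heapq

def a_star(graph, start, goal, h):
    """A*: expand nodes in order of g (cost so far) + h (heuristic)."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, w in graph.get(node, []):
            ng = g + w
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1), ("d", 5)], "c": [("d", 1)]}
cost, path = a_star(graph, "a", "d", h=lambda n: 0)  # h=0 degrades to Dijkstra
```

With an admissible heuristic (never overestimating remaining cost), A* returns an optimal path while expanding far fewer nodes than uninformed search, which is what makes it attractive for assigning group-level labels to individual invoice rows.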
(This article belongs to the Special Issue Computational Finance and Big Data Analytics)

14 pages, 3084 KiB  
Article
Efficient and Controllable Model Compression through Sequential Knowledge Distillation and Pruning
by Leila Malihi and Gunther Heidemann
Big Data Cogn. Comput. 2023, 7(3), 154; https://doi.org/10.3390/bdcc7030154 - 19 Sep 2023
Viewed by 1529
Abstract
Efficient model deployment is a key focus in deep learning, which has led to the exploration of methods such as knowledge distillation and network pruning to compress models and increase their performance. In this study, we investigate the potential synergy between knowledge distillation and network pruning to achieve optimal model efficiency and improved generalization. We introduce a framework for model compression that combines knowledge distillation, pruning, and fine-tuning to achieve enhanced compression while providing control over the degree of compactness. Our research is conducted on the popular CIFAR-10 and CIFAR-100 datasets, employing diverse model architectures, including ResNet, DenseNet, and EfficientNet. Because the amount of compression can be calibrated, we can produce models with different degrees of compression at comparable or even better accuracy. Notably, we demonstrate the framework's efficacy by producing two compressed variants of ResNet 101: ResNet 50 and ResNet 18. Our results reveal intriguing findings: in most cases, the pruned and distilled student models exhibit comparable or superior accuracy to the distilled-only student models while utilizing significantly fewer parameters. Full article
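At the heart of knowledge distillation is a loss that pulls the student's temperature-softened output distribution toward the teacher's. A pure-Python sketch of that loss in Hinton et al.'s standard formulation (not the paper's exact training objective, which also involves pruning and fine-tuning):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax: higher T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

loss_same = distillation_loss([2.0, 0.5], [2.0, 0.5])  # identical outputs
loss_diff = distillation_loss([2.0, 0.5], [0.5, 2.0])  # disagreeing outputs
```

In practice this term is added to the ordinary cross-entropy on ground-truth labels, so the pruned student learns both the task and the teacher's "dark knowledge" about inter-class similarity.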

17 pages, 3217 KiB  
Article
Implementing a Synchronization Method between a Relational and a Non-Relational Database
by Cornelia A. Győrödi, Tudor Turtureanu, Robert Ş. Győrödi and Doina R. Zmaranda
Big Data Cogn. Comput. 2023, 7(3), 153; https://doi.org/10.3390/bdcc7030153 - 18 Sep 2023
Cited by 1 | Viewed by 2535
Abstract
The accelerating pace of application development requires more frequent database switching, as technological advancements demand agile adaptation. Growth in data volume and, at the same time, in the number of transactions has driven some applications to migrate from one database to another, especially from a relational database to a non-relational (NoSQL) alternative. In this transition phase, the coexistence of both databases becomes necessary; in addition, some users choose to keep both databases permanently updated to exploit the individual strengths of each and streamline operations. Existing solutions focus mainly on replication and fail to adequately address the management of synchronization between a relational and a non-relational (NoSQL) database. This paper proposes a practical approach to this problem and tests its feasibility by developing an application that maintains synchronization between MySQL, as the relational database, and MongoDB, as the non-relational database. The performance and capabilities of the solution are analyzed to ensure data consistency and correctness. In addition, problems that arose during development are highlighted and solutions proposed to solve them. Full article
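One direction of such synchronization can be sketched abstractly: relational rows are mapped to documents and changes are replayed onto the document side. The sketch below is a hypothetical in-memory simulation (the dict stands in for a MongoDB collection, the change log for a MySQL changelog feed), not the paper's implementation:

```python
def row_to_document(row, pk="id"):
    """Map a flat relational row (dict) to a document: the primary
    key becomes the document id, remaining columns become fields."""
    doc = {k: v for k, v in row.items() if k != pk}
    doc["_id"] = row[pk]
    return doc

def apply_changes(doc_store, change_log):
    """Replay inserts/updates/deletes from the relational side onto
    an in-memory document store keyed by _id (upsert semantics)."""
    for op, row in change_log:
        if op in ("insert", "update"):
            d = row_to_document(row)
            doc_store[d["_id"]] = d  # upsert keeps both sides consistent
        elif op == "delete":
            doc_store.pop(row["id"], None)
    return doc_store

store = apply_changes({}, [
    ("insert", {"id": 1, "name": "Ada"}),
    ("update", {"id": 1, "name": "Ada L."}),
    ("insert", {"id": 2, "name": "Alan"}),
    ("delete", {"id": 2}),
])
```

Keying documents by the relational primary key is what makes replay idempotent: re-applying an update cannot create duplicates, which matters when the change feed delivers events more than once.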

27 pages, 3194 KiB  
Article
Predicting Forex Currency Fluctuations Using a Novel Bio-Inspired Modular Neural Network
by Christos Bormpotsis, Mohamed Sedky and Asma Patel
Big Data Cogn. Comput. 2023, 7(3), 152; https://doi.org/10.3390/bdcc7030152 - 15 Sep 2023
Viewed by 4242
Abstract
In the realm of foreign exchange (Forex) market prediction, Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been commonly employed. However, these models often exhibit instability because their monolithic architecture is vulnerable to data perturbations. This study therefore proposes a novel neuroscience-informed modular network that harnesses closing prices and sentiments from the Yahoo Finance and Twitter APIs, aiming to predict price fluctuations in Euro to British Pound Sterling (EUR/GBP) more effectively than monolithic methods. The proposed model offers a unique methodology based on a reinvigorated modular CNN, replacing pooling layers with orthogonal-kernel-initialised RNNs coupled with Monte Carlo Dropout (MCoRNNMCD). It integrates two pivotal modules: a convolutional simple RNN and a convolutional Gated Recurrent Unit (GRU). These modules incorporate orthogonal kernel initialisation and Monte Carlo Dropout to mitigate overfitting and to assess each module's uncertainty. The parallel feature extraction modules feed a three-layer Artificial Neural Network (ANN) decision-making module. Rigorous evaluation based on objective metrics such as the Mean Square Error (MSE) shows that MCoRNNMCD–ANN surpasses single CNNs, LSTMs, GRUs, and the state-of-the-art hybrid BiCuDNNLSTM, CLSTM, CNN–LSTM, and LSTM–GRU models in predicting hourly EUR/GBP closing price fluctuations. Full article
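Monte Carlo Dropout, which the modules use to assess uncertainty, keeps dropout active at inference and repeats stochastic forward passes; the spread of the outputs estimates predictive uncertainty. A toy pure-Python sketch with illustrative weights and inputs (nothing here comes from the paper's model):

```python
import random
import statistics

def mc_dropout_predict(forward, n_passes=100, seed=0):
    """Run n stochastic forward passes with dropout left on;
    return the mean prediction and its standard deviation (uncertainty)."""
    rng = random.Random(seed)
    preds = [forward(rng) for _ in range(n_passes)]
    return statistics.mean(preds), statistics.stdev(preds)

weights = [0.4, 0.3, 0.3]   # hypothetical linear "module"
inputs = [1.0, 2.0, 3.0]

def forward(rng, p_drop=0.5):
    # Bernoulli dropout mask on inputs, rescaled by 1/(1 - p_drop)
    kept = [x / (1 - p_drop) if rng.random() > p_drop else 0.0 for x in inputs]
    return sum(w * x for w, x in zip(weights, kept))

mean, std = mc_dropout_predict(forward)
```

A large standard deviation flags inputs the module is unsure about, which is useful for down-weighting that module's vote in the downstream ANN decision stage.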

14 pages, 379 KiB  
Article
Q8VaxStance: Dataset Labeling System for Stance Detection towards Vaccines in Kuwaiti Dialect
by Hana Alostad, Shoug Dawiek and Hasan Davulcu
Big Data Cogn. Comput. 2023, 7(3), 151; https://doi.org/10.3390/bdcc7030151 - 15 Sep 2023
Cited by 1 | Viewed by 1384
Abstract
The Kuwaiti dialect is a variety of Arabic spoken in Kuwait; it differs significantly from standard Arabic and from the dialects of neighboring countries in the region, and few NLP research papers have focused on it. In this study, we created Kuwaiti dialect language resources using Q8VaxStance, a vaccine stance labeling system, for a large dataset of tweets. This dataset fills that gap, provides a valuable resource for researchers studying vaccine hesitancy in Kuwait, and contributes to Arabic natural language processing by supporting the development and evaluation of machine learning models for stance detection in the Kuwaiti dialect. The proposed labeling system combines the benefits of weak supervised learning and zero-shot learning; we ran 52 experiments on 42,815 unlabeled tweets extracted between December 2020 and July 2022. The results show that using keyword detection in conjunction with zero-shot model labeling functions is significantly better than using either keyword detection or zero-shot labeling functions alone. Furthermore, the difference in the total number of generated labels is statistically significant: using Arabic for both the labels and the prompt, or Arabic labels with an English prompt, generates more labels than using English for both. The best accuracy in terms of Macro-F1 was achieved when keyword and hashtag detection labeling functions were combined with zero-shot model labeling functions, specifically in experiments KHZSLF-EE4 and KHZSLF-EA1, both scoring 0.83; these experiments labeled 42,270 and 42,764 tweets, respectively. Finally, the average annotation agreement between the generated labels and human labels ranges between 0.61 and 0.64, which is considered a good level of agreement. Full article
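In weak supervision, labeling functions vote on each example and may abstain; the votes are then aggregated into a single label. A minimal sketch of that pattern with keyword-style functions and majority voting (the keywords and labels below are illustrative, not the paper's; the real system also aggregates zero-shot model outputs):

```python
ABSTAIN, PRO, ANTI = None, "pro-vaccine", "anti-vaccine"

def lf_keyword_pro(text):
    return PRO if "safe" in text or "protect" in text else ABSTAIN

def lf_keyword_anti(text):
    return ANTI if "refuse" in text or "danger" in text else ABSTAIN

def combine(labeling_functions, text):
    """Majority vote over non-abstaining labeling functions;
    ties or all-abstain leave the tweet unlabeled."""
    votes = [lf(text) for lf in labeling_functions if lf(text) is not ABSTAIN]
    if not votes:
        return ABSTAIN
    top = max(set(votes), key=votes.count)
    return top if votes.count(top) * 2 > len(votes) else ABSTAIN

lfs = [lf_keyword_pro, lf_keyword_anti]
label = combine(lfs, "vaccines are safe and protect everyone")
```

Letting functions abstain is the crucial design choice: it trades coverage (some tweets stay unlabeled) for precision, which is why combining keyword functions with zero-shot functions raises both the label count and the Macro-F1.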

16 pages, 2139 KiB  
Article
Impulsive Aggression Break, Based on Early Recognition Using Spatiotemporal Features
by Manar M. F. Donia, Wessam H. El-Behaidy and Aliaa A. A. Youssif
Big Data Cogn. Comput. 2023, 7(3), 150; https://doi.org/10.3390/bdcc7030150 - 14 Sep 2023
Viewed by 1291
Abstract
The study of human behaviors aims to gain a deeper perception of the stimuli that control decision making. To describe, explain, predict, and control behavior, human behavior can be classified as either non-aggressive or anomalous. Anomalous behavior is any unusual activity, of which impulsive aggressive and violent behaviors are the most harmful; detecting such behaviors at their initial spark is critical for guiding public safety decisions and ensuring security. This paper proposes an automatic aggressive-event recognition method based on effective feature representation and analysis. The proposed approach relies on a spatiotemporal discriminative feature that combines histograms of oriented gradients and dense optical flow features. In addition, principal component analysis (PCA) and linear discriminant analysis (LDA) are used for complexity reduction. The performance of the proposed approach is analyzed on three datasets: Hockey-Fight (HF), Stony Brook University (SBU)-Kinect, and Movie-Fight (MF), with accuracy rates of 96.5%, 97.8%, and 99.6%, respectively. The paper also assesses and contrasts engineered versus learned features for impulsive aggressive event recognition. Experiments show promising results for the proposed method compared to the state of the art, and the implementation of the proposed work is publicly available. Full article
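The core of a histogram-of-oriented-gradients feature is binning gradient orientations weighted by gradient magnitude. A tiny sketch on one grayscale patch (central-difference gradients only; the full descriptor adds cells, blocks, and normalization, and the paper further combines it with dense optical flow):

```python
import math

def orientation_histogram(patch, bins=8):
    """Histogram of oriented gradients for one grayscale patch:
    central-difference gradients, magnitude-weighted angle bins."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            angle = math.atan2(gy, gx) % (2 * math.pi)
            hist[int(angle / (2 * math.pi) * bins) % bins] += mag
    return hist

# A vertical edge: intensity increases left to right, so all gradient
# energy falls in the 0-radian (rightward) orientation bin.
patch = [[0, 0, 10], [0, 0, 10], [0, 0, 10]]
hist = orientation_histogram(patch)
```

Because the histogram summarizes edge directions rather than raw pixels, it is robust to small shifts and lighting changes, which is why it pairs well with optical flow for motion-based aggression cues.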
(This article belongs to the Special Issue Applied Data Science for Social Good)

12 pages, 4099 KiB  
Article
Visual Explanations of Differentiable Greedy Model Predictions on the Influence Maximization Problem
by Mario Michelessa, Christophe Hurter, Brian Y. Lim, Jamie Ng Suat Ling, Bogdan Cautis and Carol Anne Hargreaves
Big Data Cogn. Comput. 2023, 7(3), 149; https://doi.org/10.3390/bdcc7030149 - 05 Sep 2023
Viewed by 1587
Abstract
Social networks have become important objects of study in recent years; social media marketing, for example, has greatly benefited from the vast literature developed in the past two decades. The study of social networks has also taken advantage of recent advances in machine learning to process these immense amounts of data; automatic emotional labeling of social media content, for instance, has been made possible by progress in natural language processing. In this work, we are interested in the influence maximization problem, which consists of finding the most influential nodes in a social network. Node selection is classically evaluated with surrogate performance metrics such as accuracy or recall, which are not the end goal of the influence maximization problem. Our work presents an end-to-end learning model, SGREEDYNN, for selecting the most influential nodes in a social network given a history of information diffusion. In addition, this work proposes data visualization techniques to interpret the performance gains of our method compared to classical training. The results are confirmed by visualizing the final influence of the selected nodes on network instances with edge bundling, a visual aggregation technique that makes patterns emerge and has been shown to be a useful asset for decision making. Using edge bundling, we observe that our method chooses more diverse and higher-degree nodes than classical training does. Full article
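The classical baseline behind differentiable-greedy approaches is greedy seed selection: repeatedly add the node with the largest marginal gain in influence spread. A sketch using reachable-set size as a deterministic stand-in for the stochastic cascade spread used in real influence maximization:

```python
def reachable(graph, seeds):
    """Nodes reachable from the seed set (deterministic spread proxy)."""
    seen, stack = set(seeds), list(seeds)
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def greedy_seeds(graph, k):
    """Pick k seeds, each maximizing the marginal gain in spread."""
    seeds = []
    for _ in range(k):
        best = max(
            (n for n in graph if n not in seeds),
            key=lambda n: len(reachable(graph, seeds + [n])),
        )
        seeds.append(best)
    return seeds

graph = {"a": ["b"], "b": ["c"], "c": [], "d": ["e"], "e": []}
seeds = greedy_seeds(graph, 2)
```

Note how the second pick favors "d" over higher-degree-adjacent nodes already covered by "a": marginal gain, not raw degree, drives the choice, and the submodularity of spread is what gives greedy its classical (1 − 1/e) approximation guarantee.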

15 pages, 2331 KiB  
Article
Crafting a Museum Guide Using ChatGPT4
by Georgios Trichopoulos, Markos Konstantakis, George Caridakis, Akrivi Katifori and Myrto Koukouli
Big Data Cogn. Comput. 2023, 7(3), 148; https://doi.org/10.3390/bdcc7030148 - 04 Sep 2023
Cited by 4 | Viewed by 2590
Abstract
This paper introduces a groundbreaking approach to enriching the museum experience using ChatGPT4, a state-of-the-art language model by OpenAI. By developing a museum guide powered by ChatGPT4, we aimed to address the challenges visitors face in navigating vast collections of artifacts and interpreting their significance. Leveraging the model’s natural-language-understanding and -generation capabilities, our guide offers personalized, informative, and engaging experiences. However, caution must be exercised as the generated information may lack scientific integrity and accuracy. To mitigate this, we propose incorporating human oversight and validation mechanisms. The subsequent sections present our own case study, detailing the design, architecture, and experimental evaluation of the museum guide system, highlighting its practical implementation and insights into the benefits and limitations of employing ChatGPT4 in the cultural heritage context. Full article
(This article belongs to the Special Issue Artificial Intelligence in Digital Humanities)

28 pages, 4173 KiB  
Review
Innovative Robotic Technologies and Artificial Intelligence in Pharmacy and Medicine: Paving the Way for the Future of Health Care—A Review
by Maryna Stasevych and Viktor Zvarych
Big Data Cogn. Comput. 2023, 7(3), 147; https://doi.org/10.3390/bdcc7030147 - 30 Aug 2023
Cited by 8 | Viewed by 5943
Abstract
The future of innovative robotic technologies and artificial intelligence (AI) in pharmacy and medicine is promising, with the potential to revolutionize various aspects of health care. These advances aim to increase efficiency, improve patient outcomes, and reduce costs while addressing pressing challenges such as personalized medicine and the need for more effective therapies. This review examines the major advances in robotics and AI in the pharmaceutical and medical fields, analyzing the advantages, obstacles, and potential implications for future health care. In addition, prominent organizations and research institutions leading these technological advancements are highlighted, showcasing their pioneering efforts in creating and deploying state-of-the-art robotic solutions in pharmacy and medicine. By thoroughly analyzing the current state of robotic technologies in health care and exploring the possibilities for further progress, this work aims to give readers a comprehensive understanding of the transformative power of robotics and AI in the evolution of the healthcare sector. Striking a balance between embracing technology and preserving the human touch, investing in R&D, and establishing regulatory frameworks within ethical guidelines will shape the future of robotics and AI in health care, whose seamless integration stands to benefit patients and healthcare providers alike. Full article

8 pages, 812 KiB  
Communication
Enhancing Speech Emotions Recognition Using Multivariate Functional Data Analysis
by Matthieu Saumard
Big Data Cogn. Comput. 2023, 7(3), 146; https://doi.org/10.3390/bdcc7030146 - 25 Aug 2023
Cited by 1 | Viewed by 1467
Abstract
Speech Emotions Recognition (SER) has gained significant attention in the fields of human–computer interaction and speech processing. In this article, we present a novel approach to improve SER performance by interpreting the Mel Frequency Cepstral Coefficients (MFCC) as a multivariate functional data object, which accelerates learning while maintaining high accuracy. To treat MFCCs as functional data, we preprocess them as images and apply resizing techniques. By representing MFCCs as functional data, we leverage the temporal dynamics of speech, capturing essential emotional cues more effectively. Consequently, this enhancement significantly contributes to the learning process of SER methods without compromising performance. Subsequently, we employ a supervised learning model, specifically a functional Support Vector Machine (SVM), directly on the MFCC represented as functional data. This enables the utilization of the full functional information, allowing for more accurate emotion recognition. The proposed approach is rigorously evaluated on two distinct databases, EMO-DB and IEMOCAP, serving as benchmarks for SER evaluation. Our method demonstrates competitive results in terms of accuracy, showcasing its effectiveness in emotion recognition. Furthermore, our approach significantly reduces the learning time, making it computationally efficient and practical for real-world applications. In conclusion, our novel approach of treating MFCCs as multivariate functional data objects exhibits superior performance in SER tasks, delivering both improved accuracy and substantial time savings during the learning process. This advancement holds great potential for enhancing human–computer interaction and enabling more sophisticated emotion-aware applications. Full article
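Treating MFCCs as functional data requires putting utterances of different lengths onto a common grid, which the abstract describes as a resizing step. A minimal sketch of per-coefficient linear interpolation onto a fixed number of time points (a generic illustration, not the paper's exact resizing technique):

```python
def resample_track(track, n_out):
    """Linearly interpolate one MFCC coefficient track onto n_out points."""
    n_in = len(track)
    out = []
    for i in range(n_out):
        pos = i * (n_in - 1) / (n_out - 1)  # position in original index space
        lo = int(pos)
        hi = min(lo + 1, n_in - 1)
        frac = pos - lo
        out.append(track[lo] * (1 - frac) + track[hi] * frac)
    return out

def resize_mfcc(mfcc, n_out):
    """Resize a (coefficients x frames) MFCC matrix to n_out frames,
    so utterances of different lengths share one functional grid."""
    return [resample_track(row, n_out) for row in mfcc]

resized = resize_mfcc([[0.0, 2.0, 4.0]], n_out=5)
```

Once every utterance lives on the same grid, each coefficient track can be viewed as a smooth function of time, which is what lets a functional SVM exploit the temporal dynamics rather than treating frames independently.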

20 pages, 1188 KiB  
Article
Applied Digital Twin Concepts Contributing to Heat Transition in Building, Campus, Neighborhood, and Urban Scale
by Ekaterina Lesnyak, Tabea Belkot, Johannes Hurka, Jan Philipp Hörding, Lea Kuhlmann, Pavel Paulau, Marvin Schnabel, Patrik Schönfeldt and Jan Middelberg
Big Data Cogn. Comput. 2023, 7(3), 145; https://doi.org/10.3390/bdcc7030145 - 25 Aug 2023
Viewed by 1964
Abstract
The heat transition is a central pillar of the energy transition, aiming to decarbonize and improve the energy efficiency of the heat supply in both the private and industrial sectors. On the one hand, this is achieved by substituting fossil fuels with renewable energy. On the other hand, it involves reducing overall heat consumption and associated transmission and ventilation losses. In addition to refurbishment, digitalization contributes significantly. Despite substantial research on Digital Twins (DTs) for heat transition at different scales, a cross-scale perspective on heat optimization still needs to be developed. In response to this research gap, the present study examines four instances of applied DTs across various scales: building, campus, neighborhood, and urban. The study compares their objectives and conceptual frameworks while also identifying common challenges and potential synergies. The study’s findings indicate that all DT scales face similar data-related challenges, such as gathering, ownership, connectivity, and reliability. Also, hierarchical synergy is identified among the DTs, implying the need for collaboration and exchange. In response to this, the “Wärmewende” data platform, whose objectives and concepts are presented in the paper, promotes research data and knowledge exchange with internal and external stakeholders. Full article
(This article belongs to the Special Issue Digital Twins for Complex Systems)

16 pages, 596 KiB  
Article
Enhancing the Early Detection of Chronic Kidney Disease: A Robust Machine Learning Model
by Muhammad Shoaib Arif, Aiman Mukheimer and Daniyal Asif
Big Data Cogn. Comput. 2023, 7(3), 144; https://doi.org/10.3390/bdcc7030144 - 16 Aug 2023
Cited by 10 | Viewed by 3140
Abstract
Clinical decision-making in chronic disorder prognosis is often hampered by high variance, leading to uncertainty and negative outcomes, especially in cases such as chronic kidney disease (CKD). Machine learning (ML) techniques have emerged as valuable tools for reducing randomness and enhancing clinical decision-making. However, conventional methods for CKD detection often lack accuracy due to their reliance on limited sets of biological attributes. This research proposes a novel ML model for predicting CKD, incorporating various preprocessing steps, feature selection, a hyperparameter optimization technique, and ML algorithms. To address challenges in medical datasets, we employ iterative imputation for missing values and a novel sequential approach for data scaling, combining robust scaling, z-standardization, and min-max scaling. Feature selection is performed using the Boruta algorithm, and the model is developed using ML algorithms. The proposed model was validated on the UCI CKD dataset, achieving outstanding performance with 100% accuracy. Our approach, combining innovative preprocessing steps, the Boruta feature selection, and the k-nearest neighbors algorithm, along with a hyperparameter optimization using grid-search cross-validation (CV), demonstrates its effectiveness in enhancing the early detection of CKD. This research highlights the potential of ML techniques in improving clinical support systems and reducing the impact of uncertainty in chronic disorder prognosis. Full article
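The abstract describes a novel sequential scaling step: robust scaling, then z-standardization, then min-max scaling. A pure-Python sketch of that chain on a single feature column, using the usual definitions of each scaler (the ordering follows the abstract; the data are illustrative):

```python
import statistics

def robust_scale(xs):
    """Center on the median, scale by the interquartile range."""
    med = statistics.median(xs)
    q = statistics.quantiles(xs, n=4)
    iqr = (q[2] - q[0]) or 1.0
    return [(x - med) / iqr for x in xs]

def z_scale(xs):
    """Zero mean, unit (population) standard deviation."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs) or 1.0
    return [(x - mu) / sd for x in xs]

def minmax_scale(xs):
    """Map onto [0, 1]."""
    lo, hi = min(xs), max(xs)
    span = (hi - lo) or 1.0
    return [(x - lo) / span for x in xs]

def sequential_scale(xs):
    """Robust scaling -> z-standardization -> min-max, as described."""
    return minmax_scale(z_scale(robust_scale(xs)))

scaled = sequential_scale([1.0, 2.0, 3.0, 4.0, 100.0])
```

The robust step first tames the outlier (100.0) via median/IQR before standardization, and the final min-max pass guarantees the [0, 1] range that distance-based learners such as k-nearest neighbors prefer.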
(This article belongs to the Special Issue Big Data in Health Care Information Systems)
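The sequential scaling approach described in the abstract above (robust scaling, then z-standardization, then min-max scaling) can be sketched in plain Python. This is a minimal illustration under our own assumptions about the steps, with invented helper names, not the authors' implementation:

```python
import statistics

def robust_scale(xs):
    # Center on the median and divide by the interquartile range,
    # which damps the influence of outliers.
    q = statistics.quantiles(xs, n=4)
    med, iqr = statistics.median(xs), (q[2] - q[0]) or 1.0
    return [(x - med) / iqr for x in xs]

def z_standardize(xs):
    # Shift to zero mean and unit standard deviation.
    mu, sigma = statistics.mean(xs), statistics.pstdev(xs) or 1.0
    return [(x - mu) / sigma for x in xs]

def min_max_scale(xs):
    # Map the values into [0, 1].
    lo, hi = min(xs), max(xs)
    span = (hi - lo) or 1.0
    return [(x - lo) / span for x in xs]

def sequential_scale(xs):
    # Apply the three steps in order, as the abstract describes.
    return min_max_scale(z_standardize(robust_scale(xs)))

scaled = sequential_scale([3.1, 4.7, 5.0, 5.2, 6.0, 40.0])  # 40.0 is an outlier
```

Applying the robust step first damps outliers before standardization, so the final min-max range is not dominated by extreme values.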
24 pages, 9790 KiB  
Review
Ransomware Detection Using Machine Learning: A Survey
by Amjad Alraizza and Abdulmohsen Algarni
Big Data Cogn. Comput. 2023, 7(3), 143; https://doi.org/10.3390/bdcc7030143 - 16 Aug 2023
Cited by 6 | Viewed by 10553
Abstract
Ransomware attacks pose significant security threats to personal and corporate data and information. The owners of computer-based resources suffer from verification and privacy violations, monetary losses, and reputational damage due to successful ransomware assaults. As a result, it is critical to accurately and swiftly identify ransomware. Numerous methods have been proposed for identifying ransomware, each with its own advantages and disadvantages. The main objective of this research is to discuss current trends in and potential future debates on automated ransomware detection. This document includes an overview of ransomware, a timeline of assaults, and details on their background. It also provides comprehensive research on existing methods for identifying, avoiding, minimizing, and recovering from ransomware attacks. An analysis of studies between 2017 and 2022 is another advantage of this research. This provides readers with up-to-date knowledge of the most recent developments in ransomware detection and highlights advancements in methods for combating ransomware attacks. In conclusion, this research highlights unanswered concerns and potential research challenges in ransomware detection. Full article
21 pages, 3854 KiB  
Article
Breast Cancer Classification Using Concatenated Triple Convolutional Neural Networks Model
by Mohammad H. Alshayeji and Jassim Al-Buloushi
Big Data Cogn. Comput. 2023, 7(3), 142; https://doi.org/10.3390/bdcc7030142 - 16 Aug 2023
Viewed by 1607
Abstract
Improved disease prediction accuracy and reliability are the main concerns in the development of models for the medical field. This study examined methods for increasing classification accuracy and proposed a precise and reliable framework for categorizing breast cancers using mammography scans. Concatenated Convolutional Neural Networks (CNNs) were developed based on three models: two built via transfer learning and one trained entirely from scratch. Misclassification of lesions from mammography images can also be reduced using this approach. Bayesian optimization performs hyperparameter tuning of the layers, and data augmentation refines the model by supplying more training samples. Analysis of the model’s accuracy revealed that it can accurately predict disease with 97.26% accuracy in binary cases and 99.13% accuracy in multi-classification cases. Compared with recent studies on the same issue using the same dataset, these findings demonstrate a 16% increase in multi-classification accuracy. In addition, an accuracy improvement of 6.4% was achieved after hyperparameter modification and augmentation. Thus, the model tested in this study was deemed superior to those presented in the extant literature. Hence, concatenating three different CNNs, built from scratch and via transfer learning, allows distinct and significant features to be extracted without omission, enabling the model to make exact diagnoses. Full article
18 pages, 2076 KiB  
Article
Hadiths Classification Using a Novel Author-Based Hadith Classification Dataset (ABCD)
by Ahmed Ramzy, Marwan Torki, Mohamed Abdeen, Omar Saif, Mustafa ElNainay, AbdAllah Alshanqiti and Emad Nabil
Big Data Cogn. Comput. 2023, 7(3), 141; https://doi.org/10.3390/bdcc7030141 - 14 Aug 2023
Cited by 1 | Viewed by 4898
Abstract
Religious studies are a rich land for Natural Language Processing (NLP). The reason is that all religions have their instructions as written texts. In this paper, we apply NLP to Islamic Hadiths, which are the written traditions, sayings, actions, approvals, and discussions of the Prophet Muhammad, his companions, or his followers. A Hadith is composed of two parts: the chain of narrators (Sanad) and the content of the Hadith (Matn). A Hadith is transmitted from its author to a Hadith book author using a chain of narrators. The problem we solve focuses on the classification of Hadiths based on their origin of narration. This is important for several reasons. First, it helps determine the authenticity and reliability of the Hadiths. Second, it helps trace the chain of narration and identify the narrators involved in transmitting Hadiths. Finally, it helps understand the historical and cultural contexts in which Hadiths were transmitted, and the different levels of authority attributed to the narrators. To the best of our knowledge, and based on our literature review, this problem has not previously been solved using machine/deep learning approaches. To solve this classification problem, we created a novel Author-Based Hadith Classification Dataset (ABCD) collected from classical Hadiths’ books. The ABCD comprises 29 K Hadiths and 18 K unique narrators, with all their information. We applied machine learning (ML) and deep learning (DL) approaches. ML was applied to Sanad and Matn separately; then, we did the same with DL. The results revealed that ML performs better than DL using the Matn input data, with a 77% F1-score. DL performed better than ML using the Sanad input data, with a 92% F1-score. We used precision and recall alongside the F1-score; details of the results are explained at the end of the paper. We claim that the ABCD and the reported results will motivate the community to work in this new area. Our dataset and results will represent a baseline for further research on the same problem. Full article
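As a rough illustration of the kind of text classification performed on Sanad and Matn, here is a minimal bag-of-words nearest-centroid classifier. The toy texts, labels, and helper names are invented for illustration; the paper's actual ML/DL models are far more sophisticated:

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term frequencies, a simple stand-in for the richer
    # features one would extract from Sanad/Matn text.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train_centroids(samples):
    # One summed term-frequency "centroid" per class label.
    centroids = {}
    for text, label in samples:
        centroids.setdefault(label, Counter()).update(vectorize(text))
    return centroids

def classify(text, centroids):
    return max(centroids, key=lambda lbl: cosine(vectorize(text), centroids[lbl]))

# Hypothetical training snippets; labels are invented collection names.
train = [
    ("narrated by abu hurairah from the prophet", "collection_a"),
    ("abu hurairah reported the prophet said", "collection_a"),
    ("narrated by aisha the mother of believers", "collection_b"),
    ("aisha reported that the messenger said", "collection_b"),
]
centroids = train_centroids(train)
pred = classify("abu hurairah narrated", centroids)
```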
23 pages, 468 KiB  
Article
An Intelligent Bat Algorithm for Web Service Selection with QoS Uncertainty
by Abdelhak Etchiali, Fethallah Hadjila and Amina Bekkouche
Big Data Cogn. Comput. 2023, 7(3), 140; https://doi.org/10.3390/bdcc7030140 - 10 Aug 2023
Cited by 1 | Viewed by 1022
Abstract
Currently, the selection of web services with an uncertain quality of service (QoS) is gaining much attention in the service-oriented computing paradigm (SOC). In fact, searching for a service composition that fulfills a complex user’s request is known to be NP-complete. The search time is mainly dependent on the number of requested tasks, the size of the available services, and the size of the QoS realizations (i.e., sample size). To handle this problem, we propose a two-stage approach that reduces the search space using heuristics for ranking the task services and a bat algorithm metaheuristic for selecting the final near-optimal compositions. The fitness used by the metaheuristic aims to fulfil all the global constraints of the user. The experimental study showed that the ranking heuristics, termed “fuzzy Pareto dominance” and “Zero-order stochastic dominance”, are highly effective compared to the other heuristics and most of the existing state-of-the-art methods. Full article
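For readers unfamiliar with the metaheuristic, a minimal continuous bat algorithm looks roughly like this. It minimizes a toy sphere function rather than the paper's QoS-aware composition fitness, and all parameter values and names are illustrative:

```python
import math
import random

def bat_algorithm(objective, dim=2, n_bats=20, iters=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, bound=5.0):
    """A minimal continuous bat algorithm for minimization."""
    rng = random.Random(42)
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats   # loudness A_i, decays on acceptance
    pulse = [0.5] * n_bats  # pulse emission rate r_i, grows on acceptance
    best = min(pos, key=objective)[:]
    for t in range(1, iters + 1):
        a_mean = sum(loud) / n_bats
        for i in range(n_bats):
            # Global move: frequency-tuned velocity update toward the best bat.
            freq = f_min + (f_max - f_min) * rng.random()
            vel[i] = [v + (x - b) * freq for v, x, b in zip(vel[i], pos[i], best)]
            cand = [max(-bound, min(bound, x + v)) for x, v in zip(pos[i], vel[i])]
            # Local move: random walk around the current best solution.
            if rng.random() > pulse[i]:
                cand = [b + 0.1 * a_mean * rng.gauss(0, 1) for b in best]
            # Accept improvements with a probability tied to loudness.
            if rng.random() < loud[i] and objective(cand) < objective(pos[i]):
                pos[i] = cand
                loud[i] *= alpha
                pulse[i] = 0.5 * (1.0 - math.exp(-gamma * t))
            if objective(pos[i]) < objective(best):
                best = pos[i][:]
    return best

# Toy objective: the 2-D sphere function, minimized at the origin.
best = bat_algorithm(lambda x: sum(v * v for v in x))
```

In the paper's setting the objective would instead score a candidate service composition against the user's global QoS constraints.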
14 pages, 6577 KiB  
Article
Executable Digital Process Twins: Towards the Enhancement of Process-Driven Systems
by Flavio Corradini, Sara Pettinari, Barbara Re, Lorenzo Rossi and Francesco Tiezzi
Big Data Cogn. Comput. 2023, 7(3), 139; https://doi.org/10.3390/bdcc7030139 - 08 Aug 2023
Cited by 1 | Viewed by 1559
Abstract
The development of process-driven systems and the advancements in digital twins have led to the birth of new ways of monitoring and analyzing systems, i.e., digital process twins. Specifically, a digital process twin can allow the monitoring of system behavior and the analysis of the execution status to improve the whole system. However, the concept of the digital process twin is still theoretical, and process-driven systems cannot really benefit from them. In this regard, this work discusses how to effectively exploit a digital process twin and proposes an implementation that combines the monitoring, refinement, and enactment of system behavior. We demonstrated the proposed solution in a multi-robot scenario. Full article
(This article belongs to the Special Issue Digital Twins for Complex Systems)
16 pages, 1134 KiB  
Article
Cumulative and Rolling Horizon Prediction of Overall Equipment Effectiveness (OEE) with Machine Learning
by Péter Dobra and János Jósvai
Big Data Cogn. Comput. 2023, 7(3), 138; https://doi.org/10.3390/bdcc7030138 - 02 Aug 2023
Viewed by 1695
Abstract
Nowadays, one of the important and indispensable conditions for the effectiveness and competitiveness of industrial companies is the high efficiency of manufacturing and assembly. Based on different methods and tools, these enterprises systematically monitor their efficiency metrics with Key Performance Indicators (KPIs). One of the most frequently used metrics is Overall Equipment Effectiveness (OEE), the product of availability, performance and quality. In addition to monitoring, it is also necessary to predict efficiency, which can be implemented with the support of machine learning techniques. This paper presents and compares several supervised machine learning techniques, including polynomial regression, lasso regression, ridge regression and gradient boost regression. The aim of this article is to determine the best estimation method for a semiautomatic assembly line with large batch sizes. The case study, presented with a real industrial example, answers which of the cumulative or rolling horizon prediction methods is more accurate. Full article
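The OEE definition used above is simply the product of three ratios. A minimal sketch, with illustrative shift counters (the variable names are ours, not the paper's):

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = availability x performance x quality."""
    availability = run_time / planned_time                      # share of planned time actually running
    performance = (ideal_cycle_time * total_count) / run_time   # actual speed vs. ideal rate
    quality = good_count / total_count                          # share of parts with no defects
    return availability * performance * quality

# Example shift: 480 min planned, 400 min running, 0.5 min ideal cycle time,
# 700 parts produced, 665 of them good.
value = oee(480, 400, 0.5, 700, 665)  # ≈ 0.693
```

A prediction model such as the regressions compared in the paper would then forecast this value either cumulatively over the whole period or on a rolling horizon.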
20 pages, 2261 KiB  
Article
Predicting the Price of Bitcoin Using Sentiment-Enriched Time Series Forecasting
by Markus Frohmann, Manuel Karner, Said Khudoyan, Robert Wagner and Markus Schedl
Big Data Cogn. Comput. 2023, 7(3), 137; https://doi.org/10.3390/bdcc7030137 - 31 Jul 2023
Cited by 3 | Viewed by 7220
Abstract
Recently, various methods to predict the future price of financial assets have emerged. One promising approach is to combine the historic price with sentiment scores derived via sentiment analysis techniques. In this article, we focus on predicting the future price of Bitcoin, which is currently the most popular cryptocurrency. More precisely, we propose a hybrid approach, combining time series forecasting and sentiment prediction from microblogs, to predict the intraday price of Bitcoin. Moreover, in addition to standard sentiment analysis methods, we are the first to employ a fine-tuned BERT model for this task. We also introduce a novel weighting scheme in which the weight of the sentiment of each tweet depends on the number of its creator’s followers. For evaluation, we consider periods with strongly varying ranges of Bitcoin prices. This enables us to assess the models w.r.t. robustness and generalization to varied market conditions. Our experiments demonstrate that BERT-based sentiment analysis and the proposed weighting scheme improve upon previous methods. Specifically, our hybrid models that use linear regression as the underlying forecasting algorithm perform best in terms of the mean absolute error (MAE of 2.67) and root mean squared error (RMSE of 3.28). However, more complicated models, particularly long short-term memory networks and temporal convolutional networks, tend to have generalization and overfitting issues, resulting in considerably higher MAE and RMSE scores. Full article
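The follower-based weighting idea can be sketched as a weighted average of per-tweet sentiment scores. The log scaling below is our assumption for illustration, not necessarily the paper's exact scheme:

```python
import math

def weighted_sentiment(tweets):
    """Aggregate per-tweet sentiment scores, weighting each tweet by the
    (log-scaled) follower count of its author."""
    num = den = 0.0
    for score, followers in tweets:
        w = math.log1p(followers)  # damp the influence of very large accounts
        num += w * score
        den += w
    return num / den if den else 0.0

# (sentiment score in [-1, 1], author follower count) — invented examples
tweets = [(0.8, 10), (-0.4, 100_000), (0.2, 500)]
signal = weighted_sentiment(tweets)
```

The aggregated signal would then be fed, alongside historic prices, into the time series forecasting model.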
21 pages, 1535 KiB  
Article
An Approach Based on Recurrent Neural Networks and Interactive Visualization to Improve Explainability in AI Systems
by William Villegas-Ch, Joselin García-Ortiz and Angel Jaramillo-Alcazar
Big Data Cogn. Comput. 2023, 7(3), 136; https://doi.org/10.3390/bdcc7030136 - 31 Jul 2023
Viewed by 1756
Abstract
This paper investigated the importance of explainability in artificial intelligence models and its application in the context of prediction in Formula 1. A step-by-step analysis was carried out, including collecting and preparing data from previous races, training an AI model to make predictions, and applying explainability techniques to the said model. Two approaches were used: the attention technique, which allowed visualizing the most relevant parts of the input data using heat maps, and the permutation importance technique, which evaluated the relative importance of features. The results revealed that feature length and qualifying performance are crucial variables for position predictions in Formula 1. These findings highlight the relevance of explainability in AI models, not only in Formula 1 but also in other fields and sectors, by ensuring fairness, transparency, and accountability in AI-based decision making. The analysis also provides a practical methodology for implementing explainability in Formula 1 and other domains. Full article
(This article belongs to the Special Issue Deep Network Learning and Its Applications)
18 pages, 1352 KiB  
Article
EnviroStream: A Stream Reasoning Benchmark for Environmental and Climate Monitoring
by Elena Mastria, Francesco Pacenza, Jessica Zangari, Francesco Calimeri, Simona Perri and Giorgio Terracina
Big Data Cogn. Comput. 2023, 7(3), 135; https://doi.org/10.3390/bdcc7030135 - 31 Jul 2023
Viewed by 1427
Abstract
Stream Reasoning (SR) focuses on developing advanced approaches for applying inference to dynamic data streams; it has become increasingly relevant in various application scenarios such as IoT, Smart Cities, Emergency Management, and Healthcare, despite being a relatively new field of research. The current lack of standardized formalisms and benchmarks has been hindering the comparison between different SR approaches. We proposed a new benchmark, called EnviroStream, for evaluating SR systems on weather and environmental data. The benchmark includes queries and datasets of different sizes. We adopted I-DLV-sr, a recently released SR system based on Answer Set Programming, as a baseline for query modelling and experimentation. We also showcased continuous online reasoning via a web application. Full article
(This article belongs to the Special Issue Big Data and Cognitive Computing in 2023)
26 pages, 1185 KiB  
Article
Driving Excellence in Official Statistics: Unleashing the Potential of Comprehensive Digital Data Governance
by Hossein Hassani and Steve MacFeely
Big Data Cogn. Comput. 2023, 7(3), 134; https://doi.org/10.3390/bdcc7030134 - 29 Jul 2023
Viewed by 3463
Abstract
With the ubiquitous use of digital technologies and the consequent data deluge, official statistics faces new challenges and opportunities. In this context, strengthening official statistics through effective data governance will be crucial to ensure reliability, quality, and access to data. This paper presents a comprehensive framework for digital data governance for official statistics, addressing key components, such as data collection and management, processing and analysis, data sharing and dissemination, as well as privacy and ethical considerations. The framework integrates principles of data governance into digital statistical processes, enabling statistical organizations to navigate the complexities of the digital environment. Drawing on case studies and best practices, the paper highlights successful implementations of digital data governance in official statistics. The paper concludes by discussing future trends and directions, including emerging technologies and opportunities for advancing digital data governance. Full article
22 pages, 1529 KiB  
Article
Evaluation Method of Electric Vehicle Charging Station Operation Based on Contrastive Learning
by Ze-Yang Tang, Qi-Biao Hu, Yi-Bo Cui, Lei Hu, Yi-Wen Li and Yu-Jie Li
Big Data Cogn. Comput. 2023, 7(3), 133; https://doi.org/10.3390/bdcc7030133 - 24 Jul 2023
Viewed by 1375
Abstract
This paper aims to address the issue of evaluating the operation of electric vehicle charging stations (EVCSs). Previous studies have commonly employed the method of constructing comprehensive evaluation systems, which greatly relies on manual experience for index selection and weight allocation. To overcome this limitation, this paper proposes an evaluation method based on natural language models for assessing the operation of charging stations. By utilizing the proposed SimCSEBERT model, this study analyzes the operational data, user charging data, and basic information of charging stations to predict the operational status and identify influential factors. Additionally, this study compared the evaluation accuracy and impact factor analysis accuracy of the baseline and the proposed model. The experimental results demonstrate that our model achieves a higher evaluation accuracy (operation evaluation accuracy = 0.9464; impact factor analysis accuracy = 0.9492) and effectively assesses the operation of EVCSs. Compared with traditional evaluation methods, this approach exhibits improved universality and a higher level of intelligence. It provides insights into the operation of EVCSs and user demands, allowing for the resolution of supply–demand contradictions that are caused by power supply constraints and the uneven distribution of charging demands. Furthermore, it offers guidance for more efficient and targeted strategies for the operation of charging stations. Full article
16 pages, 3082 KiB  
Article
The Development of a Kazakh Speech Recognition Model Using a Convolutional Neural Network with Fixed Character Level Filters
by Nurgali Kadyrbek, Madina Mansurova, Adai Shomanov and Gaukhar Makharova
Big Data Cogn. Comput. 2023, 7(3), 132; https://doi.org/10.3390/bdcc7030132 - 20 Jul 2023
Viewed by 1786
Abstract
This study is devoted to the transcription of human speech in the Kazakh language in dynamically changing conditions. It discusses key aspects related to the phonetic structure of the Kazakh language, technical considerations in collecting the transcribed audio corpus, and the use of deep neural networks for speech modeling. A high-quality transcribed audio corpus was collected, containing 554 h of data, giving an idea of the frequencies of letters and syllables, as well as demographic parameters such as the gender, age, and region of residence of native speakers. The corpus contains a universal vocabulary and serves as a valuable resource for the development of speech-related modules. Machine learning experiments were conducted using the DeepSpeech2 model, which includes a sequence-to-sequence architecture with an encoder, decoder, and attention mechanism. To increase the reliability of the model, filters initialized with character-level embeddings were introduced to reduce the dependence on accurate positioning on object maps. The training process included the simultaneous preparation of convolutional filters for spectrograms and character objects. The proposed approach, using a combination of supervised and unsupervised learning methods, resulted in a 66.7% reduction in the weight of the model while maintaining relative accuracy. The evaluation on the test sample showed a 7.6% lower character error rate (CER) compared to existing models, demonstrating state-of-the-art performance. The proposed architecture supports deployment on platforms with limited resources. Overall, this study presents a high-quality audio corpus, an improved speech recognition model, and promising results applicable to speech-related applications and languages beyond Kazakh. Full article
(This article belongs to the Special Issue Advances in Natural Language Processing and Text Mining)
19 pages, 5958 KiB  
Article
A Real-Time Vehicle Speed Prediction Method Based on a Lightweight Informer Driven by Big Temporal Data
by Xinyu Tian, Qinghe Zheng, Zhiguo Yu, Mingqiang Yang, Yao Ding, Abdussalam Elhanashi, Sergio Saponara and Kidiyo Kpalma
Big Data Cogn. Comput. 2023, 7(3), 131; https://doi.org/10.3390/bdcc7030131 - 15 Jul 2023
Cited by 2 | Viewed by 2459
Abstract
At present, the design of modern vehicles requires improving driving performance while meeting emission standards, leading to increasingly complex power systems. In autonomous driving systems, accurate, real-time vehicle speed prediction is one of the key factors in achieving automated driving. Accurate prediction and optimal control based on future vehicle speeds are key strategies for dealing with ever-changing and complex actual driving environments. However, predicting driver behavior is uncertain and may be influenced by the surrounding driving environment, such as weather and road conditions. To overcome these limitations, we propose a real-time vehicle speed prediction method based on a lightweight deep learning model driven by big temporal data. Firstly, the temporal data collected by automotive sensors are decomposed into a feature matrix through empirical mode decomposition (EMD). Then, an informer model based on the attention mechanism is designed to extract key information for learning and prediction. During the iterative training process of the informer, redundant parameters are removed through importance measurement criteria to achieve real-time inference. Finally, experimental results demonstrate that the proposed method achieves superior speed prediction performance through comparing it with state-of-the-art statistical modelling methods and deep learning models. Tests on edge computing devices also confirmed that the designed model can meet the requirements of actual tasks. Full article
22 pages, 3764 KiB  
Article
A Guide to Data Collection for Computation and Monitoring of Node Energy Consumption
by Alberto del Rio, Giuseppe Conti, Sandra Castano-Solis, Javier Serrano, David Jimenez and Jesus Fraile-Ardanuy
Big Data Cogn. Comput. 2023, 7(3), 130; https://doi.org/10.3390/bdcc7030130 - 11 Jul 2023
Viewed by 1500
Abstract
The digital transition that drives the new industrial revolution is largely powered by the application of intelligence and data. This boost leads to an increase in energy consumption, much of it associated with computing in data centers. This fact clashes with the growing need to save energy and improve energy efficiency, and requires a more optimized use of resources. The deployment of new services in edge and cloud computing, virtualization, and software-defined networks requires a better understanding of consumption patterns aimed at more efficient and sustainable models and a reduction in carbon footprints. These patterns are suitable to be exploited by machine, deep, and reinforcement learning techniques in pursuit of energy consumption optimization, which can ideally improve the energy efficiency of data centers and big computing servers providing these kinds of services. For the application of these techniques, it is essential to investigate data collection processes to create initial information points. Datasets also need to be created to analyze how to diagnose systems and identify new ways of optimization. This work describes a data collection methodology used to create datasets that collect consumption data from a real-world work environment dedicated to data centers, server farms, or similar architectures. Specifically, it covers the entire process of energy stimuli generation, data extraction, and data preprocessing. The evaluation and reproduction of this method are offered to the scientific community through an online repository created for this work, which hosts all the code, available for download. Full article
17 pages, 4460 KiB  
Article
An End-to-End Online Traffic-Risk Incident Prediction in First-Person Dash Camera Videos
by Hilmil Pradana
Big Data Cogn. Comput. 2023, 7(3), 129; https://doi.org/10.3390/bdcc7030129 - 06 Jul 2023
Cited by 3 | Viewed by 1684
Abstract
Predicting traffic-risk incidents from a first-person perspective helps to ensure that a safe reaction can occur before the incident happens, for a wide range of driving scenarios and conditions. One challenge in building advanced driver assistance systems is to create an early warning system that lets the driver react safely and accurately while perceiving the diversity of traffic-risk predictions in real-world applications. In this paper, we aim to bridge the gap by investigating two key research questions regarding the driver’s current driving status through online videos and the types of other moving objects that lead to dangerous situations. To address these problems, we propose an end-to-end two-stage architecture: in the first stage, unsupervised learning is applied to collect all suspicious events in actual driving; in the second stage, supervised learning is used to classify all suspicious event results from the first stage into a common event type. To enrich the classification types, the metadata from the result of the first stage is sent to the second stage to handle the data limitation while training our classification model. In the online setting, our method runs at 9.60 fps on average, with a standard deviation of 1.44 fps. Our quantitative evaluation shows that our method reaches 81.87% and 73.43% average F1-scores on labeled data of the CST-S3D and real driving datasets, respectively. Furthermore, the proposed method has the potential to assist distribution companies in evaluating the driving performance of their drivers by automatically monitoring near-miss events and analyzing driving patterns for training programs to reduce future accidents. Full article
(This article belongs to the Special Issue Deep Network Learning and Its Applications)
19 pages, 5540 KiB  
Article
Transfer Learning Approach to Seed Taxonomy: A Wild Plant Case Study
by Nehad M. Ibrahim, Dalia G. Gabr, Atta Rahman, Dhiaa Musleh, Dania AlKhulaifi and Mariam AlKharraa
Big Data Cogn. Comput. 2023, 7(3), 128; https://doi.org/10.3390/bdcc7030128 - 04 Jul 2023
Cited by 6 | Viewed by 2064
Abstract
Plant taxonomy is the scientific study of the classification and naming of various plant species. It is a branch of biology that aims to categorize and organize the diverse variety of plant life on earth. Traditionally, plant taxonomy has been performed using morphological and anatomical characteristics, such as leaf shape, flower structure, and seed and fruit characters. Artificial intelligence (AI), machine learning, and especially deep learning can also play an instrumental role in plant taxonomy by automating the process of categorizing plant species based on the available features. This study investigated transfer learning techniques to analyze images of plants and extract features that can be used to cluster the species hierarchically using the k-means clustering algorithm. Several pretrained deep learning models were employed and evaluated. In this regard, two separate datasets were used in the study, comprising seed images of wild plants collected from Egypt. Extensive experiments using the transfer learning method (DenseNet201) demonstrated that the proposed methods achieved superior accuracy compared to traditional methods, with a highest accuracy of 93% and an F1-score and area under the curve (AUC) of 95%. This is a considerable improvement over state-of-the-art approaches in the literature. Full article
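The clustering step described above (k-means over CNN-extracted features) can be sketched with a tiny pure-Python k-means. The two Gaussian blobs below stand in for DenseNet201 embeddings of seed images; all names are illustrative:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on feature vectors; a stand-in for clustering
    CNN-extracted seed features as the abstract describes."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign every point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return [min(range(k), key=lambda c: math.dist(p, centers[c])) for p in points]

# Stand-in "feature vectors": two well-separated blobs instead of real
# DenseNet201 embeddings of seed images.
rng = random.Random(1)
blob_a = [[rng.gauss(0, 0.2), rng.gauss(0, 0.2)] for _ in range(20)]
blob_b = [[rng.gauss(5, 0.2), rng.gauss(5, 0.2)] for _ in range(20)]
labels = kmeans(blob_a + blob_b, k=2)
```

Running k-means at several values of k on the same features is one simple way to obtain the hierarchical grouping of species the study aims at.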