Advanced Machine Learning Applications in Big Data Analytics

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (20 February 2023) | Viewed by 69585

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Guest Editor
School of Economic Information Engineering, Southwestern University of Finance and Economics, Chengdu 610074, China
Interests: machine learning; time series analysis; image processing; artificial intelligence

Guest Editor
School of Economic Information Engineering, Southwestern University of Finance and Economics, Chengdu 610074, China
Interests: machine learning; data mining

Special Issue Information

Dear Colleagues,

With the development of computer and communication technology, various industries have collected large amounts of data in different forms, the so-called big data. Obtaining valuable knowledge from these data is a very challenging task, and machine learning is a direct and effective method for big data analytics. In recent years, a variety of advanced machine learning technologies have emerged, and they continue to play important roles in the era of big data.

This Special Issue is calling for high-quality papers in machine learning algorithms and applications in big data analytics. Topics include but are not limited to the following:

  • Machine learning
  • Supervised learning
  • Unsupervised learning
  • Deep learning
  • Reinforcement learning
  • Lifelong learning
  • Transfer learning
  • Automated machine learning
  • Big data analytics
  • Intelligent algorithm
  • Image and video analysis
  • Text analysis
  • Time series analysis
  • Energy analysis
  • Fault analysis
  • Business analysis
  • Healthcare data analysis

Prof. Dr. Taiyong Li
Prof. Dr. Wu Deng
Prof. Dr. Jiang Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and using the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • data mining
  • data science
  • deep learning
  • classification
  • clustering
  • big data analytics
  • real-world applications

Published Papers (34 papers)


Editorial


7 pages, 196 KiB  
Editorial
Advanced Machine Learning Applications in Big Data Analytics
by Taiyong Li, Wu Deng and Jiang Wu
Electronics 2023, 12(13), 2940; https://doi.org/10.3390/electronics12132940 - 04 Jul 2023
Cited by 1 | Viewed by 1021
Abstract
We are currently living in the era of big data. [...]
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

Research


29 pages, 2218 KiB  
Article
Financial Time Series Forecasting: A Data Stream Mining-Based System
by Zineb Bousbaa, Javier Sanchez-Medina and Omar Bencharef
Electronics 2023, 12(9), 2039; https://doi.org/10.3390/electronics12092039 - 28 Apr 2023
Cited by 3 | Viewed by 2166
Abstract
Data stream mining (DSM) is a promising approach to forecasting financial time series such as exchange rates. Financial historical data generate several types of cyclical patterns that evolve, grow, decrease, and eventually die out; within historical data, we can observe long-term, seasonal, and irregular trends. All these changes make traditional static machine learning models ill-suited to such cases. The statistically unstable evolution of financial market behavior yields a progressive deterioration in any trained static model, as such models lack the characteristics required to evolve continuously and sustain good forecasting performance as the data distribution changes. Online learning without DSM mechanisms can also miss sudden or rapid changes. In this paper, we propose a DSM methodology that copes with this instability through an incremental and adaptive strategy. The proposed algorithm uses online Stochastic Gradient Descent (SGD), whose weights are optimized with the Particle Swarm Optimization (PSO) metaheuristic, to identify repetitive chart patterns in FOREX historical data by forecasting future values of the EUR/USD pair. Trend changes are detected with a statistical technique that tests whether the received time series instances are stationary: the sliding window is shrunk as changes are detected and enlarged as the distribution becomes more stable. The results, though preliminary, show that prediction is better with flexible sliding windows that adapt to the detected distribution changes than with a fixed window size that incorporates no mechanism for detecting and responding to pattern shifts.
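The adaptive-window mechanism described in the abstract can be sketched as follows. This is an illustrative sketch only: the simple mean-shift heuristic, the window bounds, and the step size are assumptions, since the exact statistical test is not specified here.

```python
import numpy as np

def looks_stationary(window, z_thresh=3.0):
    """Crude stand-in for a stationarity test (assumption, not the paper's
    test): flag a trend change when the means of the two window halves
    differ by more than z_thresh pooled standard errors."""
    half = len(window) // 2
    a = np.asarray(window[:half], dtype=float)
    b = np.asarray(window[half:], dtype=float)
    se = np.sqrt(a.var() / len(a) + b.var() / len(b))
    return abs(a.mean() - b.mean()) <= z_thresh * se

def update_window(width, stationary, min_w=50, max_w=500, step=25):
    """Enlarge the sliding window while the distribution looks stable,
    shrink it when a change is detected."""
    return min(width + step, max_w) if stationary else max(width - step, min_w)
```

On each new batch, the forecaster would call `looks_stationary` on the current window and resize it with `update_window` before retraining.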

21 pages, 1749 KiB  
Article
KNN-Based Consensus Algorithm for Better Service Level Agreement in Blockchain as a Service (BaaS) Systems
by Qingxiao Zheng, Lingfeng Wang, Jin He and Taiyong Li
Electronics 2023, 12(6), 1429; https://doi.org/10.3390/electronics12061429 - 16 Mar 2023
Cited by 5 | Viewed by 1939
Abstract
With services in cloud manufacturing expanding, cloud manufacturers increasingly use service level agreements (SLAs) to guarantee business cooperation between cloud service providers (CSPs) and cloud service consumers (CSCs). Although blockchain and smart contract technologies are critical innovations in cloud computing, consensus algorithms in Blockchain as a Service (BaaS) systems often overlook the importance of SLAs, even though SLAs play a crucial role in establishing clear commitments between a service provider and a customer. There are currently no effective consensus algorithms that can monitor the SLA and provide service-level priority. To address this issue, we propose a novel KNN-based consensus algorithm that classifies transactions based on their priority. Any factor that impacts a transaction's priority can be used to compute the distance in the KNN algorithm, including the SLA definition, the smart contract type, the CSC type, and the account type. This paper demonstrates the full functionality of the enhanced consensus algorithm. With this new method, the CSP in a BaaS system can provide improved services to the CSC. Experimental results obtained with the enhanced consensus algorithm show that the SLA is better satisfied in BaaS systems.
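A minimal sketch of the priority-classification idea: a plain majority-vote KNN over transaction feature vectors. The numeric encoding of SLA level, contract type, CSC type, and account type into features is an assumption for illustration, not taken from the paper.

```python
from collections import Counter

def knn_priority(query, labeled, k=3):
    """Majority vote among the k nearest labeled transactions.
    `labeled` is a list of (feature_vector, priority) pairs; the features
    are assumed numeric encodings of SLA level, contract type, etc."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(labeled, key=lambda fv: dist(query, fv[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

A consensus node could then order pending transactions by the predicted priority class before packaging a block.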

15 pages, 3033 KiB  
Article
One-Dimensional Quadratic Chaotic System and Splicing Model for Image Encryption
by Chen Chen, Donglin Zhu, Xiao Wang and Lijun Zeng
Electronics 2023, 12(6), 1325; https://doi.org/10.3390/electronics12061325 - 10 Mar 2023
Cited by 9 | Viewed by 1113
Abstract
Digital image transmission plays a significant role in information transmission, so it is very important to protect the security of transmitted images. Based on an analysis of existing image encryption algorithms, this article proposes a new digital image encryption algorithm based on a splicing model and a 1D quadratic chaotic system. First, the algorithm divides the plain image into four sub-parts using quaternary coding, and these four sub-parts can be coded separately. Only by acquiring all the sub-parts at once can an attacker recover the useful plain image; the algorithm therefore has high security. Additionally, the encryption scheme uses a 1D quadratic chaotic system, which makes the key space large enough to resist exhaustive attacks. The experimental data show that the image encryption algorithm has high security and a good encryption effect.
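The chaotic-keystream idea can be illustrated with the logistic map, a standard 1D quadratic map; the paper's exact chaotic system, key schedule, and the quaternary splitting step are not reproduced here, so treat this as an assumed stand-in.

```python
def chaotic_keystream(n, x0=0.7, r=3.99):
    """Derive n key bytes by iterating the quadratic (logistic) map
    x -> r*x*(1-x); (x0, r) act as the secret key."""
    stream, x = bytearray(), x0
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return bytes(stream)

def xor_cipher(data: bytes, x0=0.7, r=3.99) -> bytes:
    """XOR the data with the keystream; applying the same operation
    twice with the same key recovers the plaintext."""
    ks = chaotic_keystream(len(data), x0, r)
    return bytes(b ^ k for b, k in zip(data, ks))
```

Because the map is highly sensitive to `x0` and `r`, a slightly different key produces a completely different keystream, which is what makes brute-force key search impractical when the key space is large.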

16 pages, 1730 KiB  
Article
Hybrid Graph Neural Network Recommendation Based on Multi-Behavior Interaction and Time Sequence Awareness
by Mingyu Jia, Fang’ai Liu, Xinmeng Li and Xuqiang Zhuang
Electronics 2023, 12(5), 1223; https://doi.org/10.3390/electronics12051223 - 03 Mar 2023
Cited by 4 | Viewed by 2214
Abstract
In recent years, mining user multi-behavior information for prediction has become a hot topic in recommendation systems. Researchers usually only use graph networks to capture the relationship between multiple types of user-interaction information and target items, while ignoring the order of interactions, which leaves multi-behavior information underutilized. In response to this problem, we propose a new hybrid graph network recommendation model called the User Multi-Behavior Graph Network (UMBGN). The model uses a joint learning mechanism to integrate user–item multi-behavior interaction sequences. A user multi-behavior information-aware layer focuses on users' long-term multi-behavior features and learns temporally ordered user–item interaction information through BiGRU and AUGRU units. Furthermore, we define the propagation weights between the user–item interaction graph and the item–item relationship graph according to user behavior preferences to capture more valuable dependencies. Extensive experiments on three public datasets, namely MovieLens, Yelp2018, and Online Mall, show that our model outperforms the best baselines by 2.04%, 3.82%, and 3.23%, respectively.

19 pages, 1262 KiB  
Article
CEEMD-MultiRocket: Integrating CEEMD with Improved MultiRocket for Time Series Classification
by Panjie Wang, Jiang Wu, Yuan Wei and Taiyong Li
Electronics 2023, 12(5), 1188; https://doi.org/10.3390/electronics12051188 - 01 Mar 2023
Cited by 2 | Viewed by 1567
Abstract
Time series classification (TSC) is an important research topic in many real-world application domains. MultiRocket has been shown to be an efficient approach for TSC that adds multiple pooling operators and a first-order difference transformation. To classify time series with higher accuracy, this study proposes CEEMD-MultiRocket, a hybrid ensemble learning algorithm combining Complementary Ensemble Empirical Mode Decomposition (CEEMD) with an improved MultiRocket. First, we use CEEMD to decompose the raw time series into three sub-series: two Intrinsic Mode Functions (IMFs) and one residue. Then, sub-series are selected on the training set by comparing the classification accuracy of each IMF with that of the raw time series under a given threshold. Finally, we optimize the convolution kernels and pooling operators, and apply the improved MultiRocket to the raw time series, the selected decomposed sub-series, and the first-order difference of the raw time series to generate the final classification results. Experiments on 109 datasets from the UCR time series repository show that CEEMD-MultiRocket has the second-best average rank in classification accuracy among a spread of state-of-the-art (SOTA) TSC models. Specifically, CEEMD-MultiRocket is significantly more accurate than MultiRocket, albeit at a longer runtime, and is competitive with the currently most accurate model, HIVE-COTE 2.0, with only 1.4% of the latter's computing load.
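The sub-series selection step can be sketched as follows; the classifier is abstracted behind an `accuracy_of` callable, and the keep-if-within-threshold rule is an assumed reading of the selection criterion, not the paper's exact rule.

```python
def select_subseries(subseries, raw_acc, accuracy_of, threshold=0.05):
    """Keep a decomposed sub-series (IMF or residue) only if its own
    training-set accuracy comes within `threshold` of the accuracy
    obtained on the raw series."""
    return [s for s in subseries if accuracy_of(s) >= raw_acc - threshold]
```

In practice, `accuracy_of` would train and cross-validate the improved MultiRocket classifier on each sub-series over the known training set.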

21 pages, 1432 KiB  
Article
Improved Multi-Strategy Matrix Particle Swarm Optimization for DNA Sequence Design
by Wenyu Zhang, Donglin Zhu, Zuwei Huang and Changjun Zhou
Electronics 2023, 12(3), 547; https://doi.org/10.3390/electronics12030547 - 20 Jan 2023
Cited by 1 | Viewed by 1430
Abstract
The efficiency of DNA computation is closely related to the design of DNA coding sequences. To obtain superior DNA coding sequences, suitable DNA constraints must be chosen to prevent potential conflicting interactions between different DNA sequences and to ensure their reliability. This paper proposes an improved matrix particle swarm optimization algorithm, referred to as IMPSO, to optimize DNA sequence design. The algorithm incorporates centroid opposition-based learning to fully preserve population diversity and develops a dynamic update strategy based on signal-to-noise-ratio distance to search for high-quality solutions in a sufficiently intelligent manner. The results show that the proposed method achieves satisfactory results and higher computational efficiency.
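Centroid opposition-based learning is commonly defined as reflecting each candidate through the population centroid. A minimal sketch, assuming the standard formulation 2c - x with clipping to the search bounds (the paper's exact variant may differ):

```python
import numpy as np

def centroid_opposition(population, lower, upper):
    """For each candidate x, generate the centroid-opposite point
    2*c - x (c = population centroid), clipped to the search bounds.
    Evaluating both x and its opposite widens the explored region."""
    pop = np.asarray(population, dtype=float)
    c = pop.mean(axis=0)
    return np.clip(2.0 * c - pop, lower, upper)
```

A typical use is to evaluate the union of the population and its opposite points, then keep the fitter half, which helps preserve diversity early in the search.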

15 pages, 3742 KiB  
Article
A Multi-Strategy Adaptive Particle Swarm Optimization Algorithm for Solving Optimization Problem
by Yingjie Song, Ying Liu, Huayue Chen and Wu Deng
Electronics 2023, 12(3), 491; https://doi.org/10.3390/electronics12030491 - 17 Jan 2023
Cited by 5 | Viewed by 1530
Abstract
In portfolio optimization, the mean-semivariance (MSV) model is complicated and time-consuming to solve, and return and risk are unbalanced because they conflict with each other. To address these problems, a multi-strategy adaptive particle swarm optimization algorithm, APSO/DU, has been developed to solve the portfolio optimization problem. A constraint factor is introduced to control the velocity weight and reduce blindness in the search process, and a dual-update (DU) strategy with new velocity and position update rules is designed. To test the effectiveness of the APSO/DU algorithm, benchmark test functions and a realistic MSV portfolio optimization problem are selected. The results demonstrate that APSO/DU has better convergence accuracy and speed, finds the least risky stock portfolio for the same level of return, and produces results closer to the global Pareto front (PF). The algorithm can provide valuable advice to investors and has good practical applications.

15 pages, 4273 KiB  
Article
Monitoring Tomato Leaf Disease through Convolutional Neural Networks
by Antonio Guerrero-Ibañez and Angelica Reyes-Muñoz
Electronics 2023, 12(1), 229; https://doi.org/10.3390/electronics12010229 - 02 Jan 2023
Cited by 15 | Viewed by 6596
Abstract
Agriculture plays an essential role in Mexico's economy: the agricultural sector has a 2.5% share of Mexico's gross domestic product, and tomatoes have become the country's most exported agricultural product. There is therefore an increasing need to improve crop yields. One element that can considerably affect crop productivity is disease caused by agents such as bacteria, fungi, and viruses; however, disease identification can be costly and, in many cases, time-consuming. Deep learning techniques have begun to be applied to plant disease identification with promising results. In this paper, we propose a model based on convolutional neural networks to identify and classify tomato leaf diseases using a public dataset complemented with photographs taken in fields across the country. To avoid overfitting, generative adversarial networks were used to generate samples with the same characteristics as the training data. The results show that the proposed model achieves high performance in detecting and classifying diseases in tomato leaves: accuracy greater than 99% on both the training and test datasets.

20 pages, 702 KiB  
Article
Towards Adversarial Attacks for Clinical Document Classification
by Nina Fatehi, Qutaiba Alasad and Mohammed Alawad
Electronics 2023, 12(1), 129; https://doi.org/10.3390/electronics12010129 - 28 Dec 2022
Cited by 3 | Viewed by 1832
Abstract
Despite revolutionary improvements in various domains thanks to recent advances in Deep Learning (DL), recent studies have demonstrated that DL networks are susceptible to adversarial attacks. Such attacks are especially dangerous in sensitive environments where critical, life-changing decisions are made, such as health decision-making. Research on using textual adversaries to attack DL for natural language processing (NLP) has received increasing attention in recent years; among the available textual adversarial studies, Electronic Health Records (EHRs) have received the least attention. This paper investigates the effectiveness of adversarial attacks on clinical document classification and proposes a defense mechanism to develop a robust convolutional neural network (CNN) model that counteracts these attacks. Specifically, we apply various black-box attacks based on concatenation and editing adversaries to unstructured clinical text. We then propose a defense technique based on feature selection and filtering to improve the robustness of the models. Experimental results show that a small perturbation of the unstructured text in clinical documents causes a significant drop in performance, whereas performing the proposed defense mechanism under the same adversarial attacks avoids such a drop and thus enhances the robustness of the CNN model for clinical document classification.

35 pages, 3682 KiB  
Article
An Improved Whale Optimizer with Multiple Strategies for Intelligent Prediction of Talent Stability
by Hong Li, Sicheng Ke, Xili Rao, Caisi Li, Danyan Chen, Fangjun Kuang, Huiling Chen, Guoxi Liang and Lei Liu
Electronics 2022, 11(24), 4224; https://doi.org/10.3390/electronics11244224 - 18 Dec 2022
Cited by 2 | Viewed by 1777
Abstract
Talent resources are a primary resource and an important driving force for economic and social development. Researchers have studied talent introduction, but there is a paucity of work on the stability of introduced talent. This paper presents the first study on talent stability in higher education, designing an intelligent prediction model based on a kernel extreme learning machine (KELM) and proposing a differential evolution crisscross whale optimization algorithm (DECCWOA) to optimize the model parameters. Introducing the crossover operator facilitates the exchange of information between individuals and mitigates the problem of dimensional lag, while differential evolution operations performed periodically perturb the population, using differences between individuals to ensure population diversity. Furthermore, 35 benchmark functions (23 classical baseline functions and the CEC2014 suite) were selected for comparison experiments to demonstrate the optimization performance of the DECCWOA, which achieves high accuracy and fast convergence on both unimodal and multimodal functions. In addition, the DECCWOA is combined with KELM and feature selection (DECCWOA-KELM-FS) to achieve efficient talent-stability prediction for universities and colleges in Wenzhou. The results show that the proposed model outperforms other comparative algorithms. The designed system can be used as a reliable method of predicting talent mobility in higher education.

13 pages, 11555 KiB  
Article
Quantum Dynamic Optimization Algorithm for Neural Architecture Search on Image Classification
by Jin Jin, Qian Zhang, Jia He and Hongnian Yu
Electronics 2022, 11(23), 3969; https://doi.org/10.3390/electronics11233969 - 30 Nov 2022
Cited by 2 | Viewed by 1793
Abstract
Deep neural networks have proven effective in solving computer vision and natural language processing problems. To fully leverage their power, manually designed network templates, i.e., Residual Networks, are introduced to deal with various vision and natural language tasks. These hand-crafted neural networks rely on a large number of parameters, and designing them is both data-dependent and laborious. On the other hand, the space of architectures suitable for specific tasks has grown exponentially in size and topology, which prohibits brute-force search. To address these challenges, this paper proposes Quantum Dynamic Neural Architecture Search (QDNAS), which uses a quantum dynamics optimization algorithm to find the optimal structure for a candidate network. Specifically, the quantum dynamics optimization algorithm searches for meaningful architectures for vision tasks, with dedicated rules to express and explore the search space, and treats the iterative evolution of the optimization over time as a quantum dynamic process. The tunneling effect and potential barrier estimation in quantum mechanics can effectively promote the evolution of the optimization toward the global optimum. Extensive experiments on four benchmarks demonstrate the effectiveness of QDNAS, which is consistently better than all baseline methods on image classification tasks. Furthermore, an in-depth analysis of the searched networks provides inspiration for the design of other image classification networks.

15 pages, 3805 KiB  
Article
Machine Learning-Driven Approach for a COVID-19 Warning System
by Mushtaq Hussain, Akhtarul Islam, Jamshid Ali Turi, Said Nabi, Monia Hamdi, Habib Hamam, Muhammad Ibrahim, Mehmet Akif Cifci and Tayyaba Sehar
Electronics 2022, 11(23), 3875; https://doi.org/10.3390/electronics11233875 - 23 Nov 2022
Cited by 3 | Viewed by 1847
Abstract
The emergence of the pandemic and the absence of treatment motivated researchers in all fields to deal with the pandemic situation. In computer science, major contributions include methods for the diagnosis, detection, and prediction of COVID-19 cases. Data science and machine learning have become the most widely used techniques to detect, diagnose, and predict positive cases of COVID-19. This paper presents the prediction of confirmed COVID-19 cases and the associated mortality rate, and then proposes a COVID-19 warning system based on machine learning time series models. We trained the models on date- and country-wise confirmed, detected, recovered, and death-case features from a COVID-19 dataset. Comparing the performance of time series models on this dataset, we observed that the PROPHET and Auto-Regressive (AR) models predicted COVID-19 positive cases with a low error rate. Moreover, death cases are positively correlated with confirmed detected cases, mainly based on different regions' populations. The proposed forecasting system, driven by machine learning approaches, will help the health departments of underdeveloped countries to monitor deaths and confirmed cases of COVID-19, and to make forward-looking decisions on testing and developing more health facilities, mostly to avoid spreading the disease.
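An AR(p) one-step forecast of the kind compared in the paper can be hand-rolled with ordinary least squares; this is a generic illustration, not the authors' pipeline, and the lag order is an assumption.

```python
import numpy as np

def ar_fit_forecast(series, p=2):
    """Fit x_t ~ c + a1*x_{t-1} + ... + ap*x_{t-p} by least squares
    and return the one-step-ahead forecast."""
    x = np.asarray(series, dtype=float)
    # Design matrix: intercept column plus p lagged columns.
    X = np.column_stack([np.ones(len(x) - p)] +
                        [x[p - i - 1:len(x) - i - 1] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    # Most recent p observations, newest first, matching the column order.
    hist = np.concatenate(([1.0], x[-1:-p - 1:-1]))
    return float(coef @ hist)
```

Applied to a daily confirmed-cases series, the same fit-and-forecast step would be repeated each day as new counts arrive.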

20 pages, 3388 KiB  
Article
A Lightweight Border Patrol Object Detection Network for Edge Devices
by Lei Yue, Haifeng Ling, Jianhu Yuan and Linyuan Bai
Electronics 2022, 11(22), 3828; https://doi.org/10.3390/electronics11223828 - 21 Nov 2022
Cited by 1 | Viewed by 1220
Abstract
Border patrol object detection is an important basis for obtaining information about the border patrol area and for analyzing and determining the mission situation. Border patrol staff are now equipped with medium- to close-range UAVs and portable reconnaissance equipment to carry out their tasks. In this paper, we design a detection algorithm, TP-ODA, for the border patrol object detection task, which is mostly performed on embedded devices with limited computing power; the algorithm also mitigates the detection-frame imbalance problem, and the PDOEM structure is designed in the neck network to optimize the algorithm's feature fusion module. To verify the improvements, we construct the border patrol object dataset BDP. The experiments show that, compared to the baseline model, the TP-ODA algorithm improves mAP by 2.9%, reduces GFLOPs by 65.19%, reduces model volume by 63.83%, and improves FPS by 8.47%. Model comparison experiments combined with the requirements of border patrol tasks indicate that the TP-ODA model is well suited for deployment on UAVs and portable reconnaissance equipment and can better fulfill the border patrol object detection task.

18 pages, 1986 KiB  
Article
A Novel Multistrategy-Based Differential Evolution Algorithm and Its Application
by Jinyin Wang, Shifan Shang, Huanyu Jing, Jiahui Zhu, Yingjie Song, Yuangang Li and Wu Deng
Electronics 2022, 11(21), 3476; https://doi.org/10.3390/electronics11213476 - 26 Oct 2022
Cited by 3 | Viewed by 1176
Abstract
To address the poor searchability, population diversity, and slow convergence speed of the differential evolution (DE) algorithm in solving capacitated vehicle routing problems (CVRP), a new multistrategy-based differential evolution algorithm combining the saving mileage algorithm, sequential encoding, and the gravitational search algorithm, namely SEGDE, is proposed in this paper. Firstly, an optimization model of the CVRP with the shortest total vehicle routing is established. Then, the saving mileage algorithm is employed to initialize the population of the DE to improve the initial solution quality and the search efficiency. The sequential encoding approach is used to adjust the differential mutation strategy to legalize the current solution and ensure its effectiveness. Finally, the gravitational search algorithm is applied to calculate the gravitational relationship between points to effectively adjust the evolutionary search direction and further improve the search efficiency. Four CVRPs are selected to verify the effectiveness of the proposed SEGDE algorithm. The experimental results show that SEGDE can effectively solve CVRPs and obtain ideal vehicle routings, achieving better search speed, global optimization ability, routing length, and stability. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

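The DE backbone that SEGDE extends can be sketched as a standard DE/rand/1/bin loop on a continuous test function. This is only an illustrative baseline, not the SEGDE algorithm itself: the saving mileage initialization, sequential encoding and gravitational-search adjustment described in the abstract are omitted.

```python
import random

def de_rand_1_bin(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Classic DE/rand/1/bin with greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:          # greedy one-to-one replacement
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# minimize the sphere function as a smoke test
x, fx = de_rand_1_bin(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```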
15 pages, 8647 KiB  
Article
A Novel Color Image Encryption Algorithm Using Coupled Map Lattice with Polymorphic Mapping
by Penghe Huang, Dongyan Li, Yu Wang, Huimin Zhao and Wu Deng
Electronics 2022, 11(21), 3436; https://doi.org/10.3390/electronics11213436 - 24 Oct 2022
Cited by 6 | Viewed by 1177
Abstract
Some typical security algorithms, such as SHA, MD4 and MD5, have been cracked in recent years, and these algorithms have other shortcomings as well. Therefore, the traditional one-dimensional-mapping coupled lattice is improved by using the idea of polymorphism in this paper, and a polymorphic mapping–coupled map lattice with information entropy is developed for encrypting color images. Firstly, we extend the diffusion matrix from the original 4 × 4 matrix into an n × n matrix. Then, the Huffman idea is employed to propose a new pixel-level substitution method, which is applied to replace the grey degree values. We employ the idea of polymorphism in selecting f(x) in the spatiotemporal chaotic system, so that the pseudo-random sequence is more diversified and homogenized. Finally, three plaintext color images of 256 × 256 × 3, “Lena”, “Peppers” and “Mandrill”, are selected to prove the effectiveness of the proposed algorithm. The experimental results show that the proposed algorithm has a large key space, better sensitivity to keys and plaintext images, and a better encryption effect. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

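The coupled map lattice underlying such schemes can be sketched in a few lines. This sketch uses a fixed logistic map f only; the paper's contribution is precisely to make f polymorphic, which is not reproduced here, and the diffusion/substitution stages are omitted.

```python
def logistic(x, mu=3.99):
    # chaotic regime of the logistic map
    return mu * x * (1 - x)

def cml_sequence(n_sites=8, steps=100, eps=0.1, seed=0.37):
    """One-dimensional coupled map lattice with periodic boundaries:
    x_{t+1}(i) = (1-eps)*f(x_t(i)) + eps/2*(f(x_t(i-1)) + f(x_t(i+1)))."""
    x = [(seed + 0.618 * i) % 1.0 for i in range(n_sites)]
    out = []
    for _ in range(steps):
        fx = [logistic(v) for v in x]
        x = [(1 - eps) * fx[i] + eps / 2 * (fx[i - 1] + fx[(i + 1) % n_sites])
             for i in range(n_sites)]
        out.append(list(x))
    return out

# quantize lattice states to bytes, usable as an XOR-style pixel keystream
seq = cml_sequence()
keystream = [int(v * 256) % 256 for row in seq for v in row]
```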
18 pages, 2395 KiB  
Article
Spatial and Temporal Normalization for Multi-Variate Time Series Prediction Using Machine Learning Algorithms
by Alimasi Mongo Providence, Chaoyu Yang, Tshinkobo Bukasa Orphe, Anesu Mabaire and George K. Agordzo
Electronics 2022, 11(19), 3167; https://doi.org/10.3390/electronics11193167 - 01 Oct 2022
Cited by 2 | Viewed by 2621
Abstract
Multi-variable time series (MTS) data are a typical type of data in the real world. Every instance of MTS is produced by a hybrid dynamical system whose dynamics are often unknown. The hybrid nature of such a system is the outcome of high-frequency and low-frequency external impacts, as well as global and local spatial impacts. These influences affect the future evolution of an MTS; hence, they must be incorporated into time series forecasts. Two types of normalization modules, temporal and spatial normalization, are recommended to accomplish this. Each distinctly enhances the local and high-frequency components of the original data. In addition, both modules are easily incorporated into well-known deep learning architectures, such as WaveNet and the Transformer. However, existing methodologies have inherent limitations when it comes to isolating the variables produced by each sort of influence from the real data. Consequently, the study encompasses conventional neural networks, such as the multi-layer perceptron (MLP); complex deep learning methods such as LSTM; two recurrent neural networks; support vector machines (SVM) applied to regression; XGBoost; and others. Extensive experimental work on three datasets shows that the effectiveness of canonical frameworks can be greatly improved by adding the normalization components, making them as effective as the best MTS designs currently available. Recurrent models, such as LSTM and RNN, attempt to recognize the temporal variability in the data; however, their effectiveness can decline quickly as a result. Finally, it is observed that training temporal frameworks that rely on recurrence, such as RNN and LSTM approaches, is challenging and expensive, while the MLP network structure outperformed the other models in terms of time series predictive performance. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

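The two normalization directions can be sketched as follows: normalize each variable over time (temporal) or across variables at each time step (spatial). This is a minimal sketch of the idea only, not the paper's exact module formulation.

```python
import numpy as np

def temporal_normalize(x, eps=1e-5):
    """Per-variable normalization over the time axis, isolating local
    deviations from each series' own scale. x has shape (time, variables)."""
    mean = x.mean(axis=0, keepdims=True)
    std = x.std(axis=0, keepdims=True)
    return (x - mean) / (std + eps)

def spatial_normalize(x, eps=1e-5):
    """Per-time-step normalization across variables, isolating
    cross-sectional (spatial) structure."""
    mean = x.mean(axis=1, keepdims=True)
    std = x.std(axis=1, keepdims=True)
    return (x - mean) / (std + eps)

x = np.arange(12, dtype=float).reshape(4, 3)   # 4 time steps, 3 variables
tn, sn = temporal_normalize(x), spatial_normalize(x)
```

Either output can then be fed to a backbone such as WaveNet or a Transformer in place of the raw series.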
18 pages, 3735 KiB  
Article
A Hierarchical Heterogeneous Graph Attention Network for Emotion-Cause Pair Extraction
by Jiaxin Yu, Wenyuan Liu, Yongjun He and Bineng Zhong
Electronics 2022, 11(18), 2884; https://doi.org/10.3390/electronics11182884 - 12 Sep 2022
Cited by 2 | Viewed by 1402
Abstract
Recently, graph neural networks (GNNs), due to their compelling representation learning ability, have been exploited to deal with emotion-cause pair extraction (ECPE). However, current GNN-based ECPE methods mostly concentrate on modeling the local dependency relation between homogeneous nodes at the semantic granularity of clauses or clause pairs, and fail to take full advantage of the rich semantic information in the document. To solve this problem, we propose a novel hierarchical heterogeneous graph attention network to model global semantic relations among nodes. In particular, our method introduces all types of semantic elements involved in ECPE, not just clauses or clause pairs. Specifically, we first model the dependency between clauses and words, in which word nodes also serve as an intermediary for the association between clause nodes. Secondly, a pair-level subgraph is constructed to explore the correlation between pair nodes and their different neighboring nodes. Representation learning of clauses and clause pairs is achieved by two-level heterogeneous graph attention networks. Experiments on the benchmark datasets show that our proposed model achieves a significant improvement over 13 compared methods. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

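The building block of graph attention, scoring a node's neighbors and aggregating them with softmax weights, can be sketched generically. This single-head additive-attention sketch is not the paper's two-level heterogeneous architecture; the attention vector `a` and the toy features are illustrative assumptions.

```python
import numpy as np

def attention_aggregate(h_node, h_neighbors, a):
    """Score each neighbor with an additive attention vector applied to
    the concatenated (node, neighbor) features, softmax the scores, and
    return the weights and the weighted aggregate."""
    scores = np.array([a @ np.concatenate([h_node, h_j]) for h_j in h_neighbors])
    scores = scores - scores.max()          # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights, sum(w * h_j for w, h_j in zip(weights, h_neighbors))

h = np.array([1.0, 0.0])                    # toy clause embedding
neigh = [np.array([0.5, 0.5]), np.array([0.0, 1.0])]  # toy word-node embeddings
a = np.ones(4)                              # illustrative attention parameters
weights, agg = attention_aggregate(h, neigh, a)
```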
17 pages, 2585 KiB  
Article
An Effective Model of Confidentiality Management of Digital Archives in a Cloud Environment
by Jian Xie, Shaolong Xuan, Weijun You, Zongda Wu and Huiling Chen
Electronics 2022, 11(18), 2831; https://doi.org/10.3390/electronics11182831 - 07 Sep 2022
Cited by 2 | Viewed by 1932
Abstract
Aiming at the problem of confidentiality management of digital archives on the cloud, this paper presents an effective solution. The basic idea is to deploy a local server between the cloud and each client of an archive system to run a confidentiality management model of digital archives on the cloud, which includes an archive release model and an archive search model. (1) The archive release model strictly encrypts each archive file and archive data released by an administrator and generates feature data for the archive data, then submits them to the cloud for storage, to ensure the security of archive-sensitive data. (2) The archive search model transforms each query operation defined on the archive data submitted by a searcher so that it can be correctly executed on the feature data on the cloud, to ensure the accuracy and efficiency of archive search. Finally, both theoretical analysis and experimental evaluation demonstrate the good performance of the proposed solution. The results show that, compared with others, our solution has better overall performance in terms of confidentiality, accuracy, efficiency and availability, and can improve the security of archive-sensitive data on an untrusted cloud without compromising the performance of an existing archive management system. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

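The release/search split can be illustrated with a toy searchable index: the cloud stores only opaque keyword digests ("feature data"), and queries are transformed into the same feature space before being evaluated. This is a generic sketch under assumed names (`release`, `search`, a fixed salt), not the paper's concrete scheme; the encryption of the archive files themselves is omitted.

```python
import hashlib

def feature(term, salt=b"archive-salt"):
    """Salted digest used as the searchable feature for a keyword."""
    return hashlib.sha256(salt + term.encode("utf-8")).hexdigest()

def release(doc_id, keywords):
    """Release step: submit only opaque features to the cloud
    (the ciphertext of the archive would be stored alongside)."""
    return {"id": doc_id, "features": {feature(k) for k in keywords}}

def search(index, term):
    """Search step: transform the plaintext query into feature space so
    the cloud can evaluate it without learning the keyword list."""
    f = feature(term)
    return [e["id"] for e in index if f in e["features"]]

index = [release("doc-1", ["budget", "2021"]), release("doc-2", ["personnel"])]
hits = search(index, "budget")
```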
15 pages, 6366 KiB  
Article
Hemerocallis citrina Baroni Maturity Detection Method Integrating Lightweight Neural Network and Dual Attention Mechanism
by Liang Zhang, Ligang Wu and Yaqing Liu
Electronics 2022, 11(17), 2743; https://doi.org/10.3390/electronics11172743 - 31 Aug 2022
Cited by 6 | Viewed by 1492
Abstract
Yunzhou District in Datong, in the north of Shanxi, is the base for the cultivation of Hemerocallis citrina Baroni, which is the main production and marketing product driving the local economy. The picking rules of Hemerocallis citrina Baroni differ from those of other crops: the picking cycle is shorter, the frequency is higher, and the picking conditions are harsh. Therefore, in order to reduce the difficulty and workload of picking Hemerocallis citrina Baroni, this paper proposes the GGSC YOLOv5 algorithm, a Hemerocallis citrina Baroni maturity detection method integrating a lightweight neural network and a dual attention mechanism, based on a deep learning algorithm. First, Ghost Conv is used to decrease the model complexity and reduce the number of network layers, parameters, and FLOPs. Subsequently, the Ghost Bottleneck micro residual module is combined to reduce GPU utilization and compress the model size, achieving feature extraction in a lightweight way. Finally, the dual attention mechanism of Squeeze-and-Excitation (SE) and the Convolutional Block Attention Module (CBAM) is introduced to change the tendency of feature extraction and improve detection precision. The experimental results show that the improved GGSC YOLOv5 algorithm reduced the number of parameters and FLOPs by 63.58% and 68.95%, respectively, and reduced the number of network layers by about 33.12% in terms of model structure. In terms of hardware consumption, GPU utilization is reduced by 44.69%, and the model size is compressed by 63.43%. The detection precision reaches 84.9%, an improvement of about 2.55%, and the real-time detection speed increased from 64.16 FPS to 96.96 FPS, an improvement of about 51.13%. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

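Why Ghost Conv shrinks a model can be seen from a parameter count: a Ghost module computes only a fraction of the output maps with a full convolution and generates the rest with cheap depthwise ops. The kernel sizes and ratio below are illustrative assumptions in the spirit of the GhostNet design the paper draws on, not the paper's exact configuration.

```python
def conv_params(c_in, c_out, k):
    """Weights of an ordinary k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, dw_k=3, s=2):
    """Ghost module: a primary conv producing c_out/s intrinsic maps,
    then depthwise 'cheap operations' generating the remaining maps."""
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k
    cheap = intrinsic * (s - 1) * dw_k * dw_k   # one depthwise filter per map
    return primary + cheap

dense = conv_params(64, 128, 3)   # 73,728 weights
ghost = ghost_params(64, 128, 3)  # roughly half of that
```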
19 pages, 2350 KiB  
Article
An Improved Hierarchical Clustering Algorithm Based on the Idea of Population Reproduction and Fusion
by Lifeng Yin, Menglin Li, Huayue Chen and Wu Deng
Electronics 2022, 11(17), 2735; https://doi.org/10.3390/electronics11172735 - 30 Aug 2022
Cited by 7 | Viewed by 2051
Abstract
Aiming to resolve the problems of the traditional hierarchical clustering algorithm, namely that it cannot find clusters with uneven density, requires a large amount of calculation, and has low efficiency, this paper proposes an improved hierarchical clustering algorithm (referred to as PRI-MFC) based on the idea of population reproduction and fusion. The algorithm is divided into two stages: fuzzy pre-clustering and Jaccard fusion clustering. In the fuzzy pre-clustering stage, it determines the center point, divides the data using the product of the neighborhood radius eps and the dispersion degree fog as the benchmark, determines the similarity of two data points using the Euclidean distance, and records the information of points shared by clusters using the membership grade. In the Jaccard fusion clustering stage, clusters with common points are the candidates for fusion: clusters whose Jaccard similarity coefficient is greater than the fusion parameter jac are fused, while the common points of clusters whose Jaccard similarity coefficient is less than jac are assigned to the cluster with the largest membership grade. Experiments designed from multiple perspectives on artificial and real datasets demonstrate the superiority of the PRI-MFC algorithm in terms of clustering effect, clustering quality, and time consumption. Experiments on Chinese household financial survey data yield clustering results that conform to the actual situation of Chinese households, which shows the practicability of the algorithm. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

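The Jaccard fusion stage can be sketched as a greedy merge of clusters that share points when their Jaccard similarity exceeds the fusion parameter jac. This is a simplified sketch of that one stage: the fuzzy pre-clustering and the membership-grade tie-breaking for unmerged shared points are omitted.

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def fuse_clusters(clusters, jac=0.2):
    """Repeatedly merge any two clusters that share points and whose
    Jaccard similarity coefficient exceeds the fusion parameter jac."""
    clusters = [set(c) for c in clusters]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if clusters[i] & clusters[j] and jaccard(clusters[i], clusters[j]) > jac:
                    clusters[i] |= clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return [sorted(c) for c in clusters]

out = fuse_clusters([[1, 2, 3], [3, 4], [4, 5, 6], [9, 10]], jac=0.2)
```

Here {1,2,3} and {3,4} merge (similarity 0.25 > 0.2), while the merged cluster and {4,5,6} do not (similarity 1/6), so point 4 would fall to the membership-grade rule in the full algorithm.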
15 pages, 1788 KiB  
Article
China Coastal Bulk (Coal) Freight Index Forecasting Based on an Integrated Model Combining ARMA, GM and BP Model Optimized by GA
by Zhaohui Li, Wenjia Piao, Lin Wang, Xiaoqian Wang, Rui Fu and Yan Fang
Electronics 2022, 11(17), 2732; https://doi.org/10.3390/electronics11172732 - 30 Aug 2022
Cited by 5 | Viewed by 1341
Abstract
The China Coastal Bulk Coal Freight Index (CBCFI) is the main indicator tracking coal shipping price volatility in the Chinese market. The index reflects the current status and trends of the coastal coal shipping sector and is critical for the government and shipping companies in formulating timely policies and measures. After investigating the fluctuation patterns of the shipping index and the external factors in light of the forecasting accuracy requirements of the CBCFI, this paper proposes a nonlinear integrated forecasting model combining ARMA (Auto-Regressive Moving Average), GM (Grey System Theory Model) and a BP (Back-Propagation) network optimized by GA (Genetic Algorithms). The integrated model uses the predicted values of ARMA and GM as the input training samples of the neural network. Considering the shortcomings of the BP network, namely slow convergence and a tendency to fall into local optima, it uses a genetic algorithm to optimize the BP network, which better exploits the prediction accuracy of the combined model; the combined ARMA-GM-GABP prediction model is thus established. This work compares the short-term forecasting effects of the above three models on the CBCFI. The results of the forecast fitting and error analysis show that the predicted values of the combined ARMA-GM-GABP model are fully consistent with the change trend of the actual values. The prediction accuracy is improved to a certain extent during the observation period, so the model can better fit the CBCFI historical time series and effectively solve the CBCFI forecasting problem. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

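One ingredient of the combination, the GM(1,1) grey model, can be sketched in closed form: accumulate the series, fit the grey differential equation by least squares, and difference the fitted accumulation back. This is a generic GM(1,1) sketch on made-up data; the ARMA and GA-optimized BP components of the combined model are omitted.

```python
import numpy as np

def gm11_forecast(x0, horizon=1):
    """GM(1,1): fit x0(k) + a*z1(k) = b on the background values z1 of the
    accumulated series x1, then forecast and restore by differencing."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat[-horizon:]

# illustrative index values growing roughly 4% per period
pred = gm11_forecast([100.0, 104.0, 108.2, 112.5, 117.0], horizon=1)
```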
17 pages, 2456 KiB  
Article
An Intelligent Identification Approach Using VMD-CMDE and PSO-DBN for Bearing Faults
by Erbin Yang, Yingchao Wang, Peng Wang, Zheming Guan and Wu Deng
Electronics 2022, 11(16), 2582; https://doi.org/10.3390/electronics11162582 - 18 Aug 2022
Cited by 8 | Viewed by 1373
Abstract
In order to improve the fault diagnosis accuracy of bearings, an intelligent fault diagnosis method based on Variational Mode Decomposition (VMD), Composite Multi-scale Dispersion Entropy (CMDE), and a Deep Belief Network (DBN) with the Particle Swarm Optimization (PSO) algorithm, namely VMD-CMDE-PSO-DBN, is proposed in this paper. The number of modal components decomposed by VMD is determined from the observed center frequencies, the components are reconstructed according to kurtosis, and the composite multi-scale dispersion entropy of the reconstructed signal is calculated to form the training and test samples for pattern recognition. Since manually setting the DBN node parameters cannot achieve the best recognition rate, PSO is used to optimize the parameters of the DBN model, and the optimized DBN model is used to identify faults. Through experimental comparison and analysis, we conclude that the VMD-CMDE-PSO-DBN method has practical value in intelligent fault diagnosis. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

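The PSO step, searching a hyperparameter space instead of setting values by hand, can be sketched with a plain particle swarm on a toy objective. In the paper the objective would be DBN recognition accuracy over its node parameters; here it is the sphere function, and all coefficients are conventional illustrative choices.

```python
import random

def pso(f, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Global-best PSO: velocities blend inertia, the particle's own
    best, and the swarm's best; positions are clamped to the bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso(lambda v: sum(t * t for t in v), [(-5, 5)] * 2)
```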
11 pages, 1544 KiB  
Article
Abnormal Cockpit Pilot Driving Behavior Detection Using YOLOv4 Fused Attention Mechanism
by Nongtian Chen, Yongzheng Man and Youchao Sun
Electronics 2022, 11(16), 2538; https://doi.org/10.3390/electronics11162538 - 13 Aug 2022
Cited by 3 | Viewed by 1776
Abstract
The abnormal behavior of cockpit pilots during the manipulation process is an important threat to flight safety, but the complex cockpit environment limits detection accuracy, with problems such as false detection, missed detection, and insufficient feature extraction capability. This article proposes a method for abnormal pilot driving behavior detection based on the improved YOLOv4 deep learning algorithm, integrating an attention mechanism. Firstly, semantic image features are extracted by the deep neural network structure to complete image and video recognition of pilot driving behavior. Secondly, the CBAM attention mechanism is introduced into the neural network to address the problem of gradient disappearance during training. The CBAM mechanism includes both channel and spatial attention processes, so the feature extraction capability of the network can be improved. Finally, features are extracted through the convolutional neural network to monitor the abnormal driving behavior of pilots, followed by example verification. The experimental results show that the recognition rate of the improved YOLOv4 is significantly higher than that of the unimproved algorithm: the calling phase has a mAP of 87.35%, an accuracy of 75.76%, and a recall of 87.36%, while the smoking phase has a mAP of 87.35%, an accuracy of 85.54%, and a recall of 85.54%. The conclusion is that the deep learning algorithm based on the improved YOLOv4 method is practical and feasible for monitoring the abnormal driving behavior of pilots during the flight maneuvering phase. This method can quickly and accurately identify the abnormal behavior of pilots, providing an important theoretical reference for abnormal behavior detection and risk management. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

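The channel half of CBAM can be sketched in NumPy: squeeze the feature map with average and max pooling, pass both through a shared two-layer MLP, and gate each channel with a sigmoid. This sketch omits CBAM's spatial-attention stage, and the random weights stand in for learned parameters.

```python
import numpy as np

def channel_attention(feat, W1, W2):
    """CBAM-style channel attention for a feature map of shape (C, H, W):
    sigmoid(MLP(avgpool) + MLP(maxpool)) scales each channel."""
    avg = feat.mean(axis=(1, 2))
    mx = feat.max(axis=(1, 2))
    def mlp(v):
        return W2 @ np.maximum(0.0, W1 @ v)   # shared bottleneck MLP
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))
    return feat * gate[:, None, None]

C, r = 8, 2                                   # channels and reduction ratio
rng = np.random.default_rng(0)
feat = rng.normal(size=(C, 4, 4))
W1, W2 = rng.normal(size=(C // r, C)), rng.normal(size=(C, C // r))
out = channel_attention(feat, W1, W2)
```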
18 pages, 5589 KiB  
Article
Design Science Research Framework for Performance Analysis Using Machine Learning Techniques
by Mihaela Muntean and Florin Daniel Militaru
Electronics 2022, 11(16), 2504; https://doi.org/10.3390/electronics11162504 - 11 Aug 2022
Cited by 3 | Viewed by 1924
Abstract
We propose a methodological framework based on design science research for the design and development of data and information artifacts in data analysis projects, particularly managerial performance analysis. Design science research methodology is an artifact-centric creation and evaluation approach in which artifacts are used to solve real-life business problems; these artifacts are key elements of the proposed approach. Starting from the main current approaches to design science research, we propose a framework that contains artifact engineering aspects for a class of problems, namely data analysis using machine learning techniques. Several classification algorithms were applied to datasets previously labelled through clustering. The datasets contain values for eight competencies that define a manager’s profile, obtained through a 360-degree feedback evaluation. A set of metrics for evaluating the performance of the classifiers is introduced, and a general algorithm is described. Our initiative has a predominantly practical relevance but also makes a theoretical contribution to the domain of study. The proposed framework can be applied to any problem involving data analysis using machine learning techniques. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

20 pages, 34792 KiB  
Article
Application of Improved YOLOv5 in Aerial Photographing Infrared Vehicle Detection
by Youchen Fan, Qianlong Qiu, Shunhu Hou, Yuhai Li, Jiaxuan Xie, Mingyu Qin and Feihuang Chu
Electronics 2022, 11(15), 2344; https://doi.org/10.3390/electronics11152344 - 27 Jul 2022
Cited by 13 | Viewed by 2601
Abstract
Aiming to solve the problems of false detection, missed detection, and insufficient detection ability in infrared vehicle images, an infrared vehicle target detection algorithm based on the improved YOLOv5 is proposed. The article analyzes the image characteristics of infrared vehicle detection, and then discusses the improved YOLOv5 algorithm in detail. The algorithm uses the DenseBlock module to increase the ability of shallow feature extraction. The Ghost convolution layer is used to replace the ordinary convolution layer, which increases the redundant feature maps based on linear calculation, improves the network feature extraction ability, and increases the amount of information from the original image. The detection accuracy of the whole network is enhanced by adding a channel attention mechanism and modifying the loss function. Finally, the individual and combined improvements of each module are compared with common algorithms. Experimental results show that the detection accuracy with the DenseBlock and EIOU modules added alone improved by 2.5% and 3%, respectively, compared with the original YOLOv5 algorithm, while adding the Ghost convolution module or the SE module alone does not increase accuracy significantly. Using the EIOU module as the loss function, the three modules of DenseBlock, Ghost convolution and SE Layer are added to the YOLOv5 algorithm for comparative analysis, of which the combination of DenseBlock and Ghost convolution has the best effect. When all three modules are added at the same time, the mAP fluctuation is smaller and reaches 73.1%, which is 4.6% higher than the original YOLOv5 algorithm. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

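The EIoU loss mentioned above can be sketched for axis-aligned boxes: one minus IoU, plus center-distance, width and height penalties normalized by the enclosing box. This follows the published EIoU formulation generically and is not taken from the paper's code.

```python
def eiou_loss(box_a, box_b, eps=1e-7):
    """EIoU loss for boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter + eps)
    # smallest enclosing box dimensions
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    # squared center distance and width/height differences
    d2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    dw2 = ((ax2 - ax1) - (bx2 - bx1)) ** 2
    dh2 = ((ay2 - ay1) - (by2 - by1)) ** 2
    return (1 - iou + d2 / (cw**2 + ch**2 + eps)
            + dw2 / (cw**2 + eps) + dh2 / (ch**2 + eps))

perfect = eiou_loss((0, 0, 2, 2), (0, 0, 2, 2))   # near zero for identical boxes
```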
14 pages, 12926 KiB  
Article
LST-GCN: Long Short-Term Memory Embedded Graph Convolution Network for Traffic Flow Forecasting
by Xu Han and Shicai Gong
Electronics 2022, 11(14), 2230; https://doi.org/10.3390/electronics11142230 - 17 Jul 2022
Cited by 8 | Viewed by 2748
Abstract
Traffic flow prediction is an important part of intelligent transportation systems. Accurate traffic flow prediction is of great significance for strengthening urban management and facilitating people’s travel. In this paper, we propose a model named LST-GCN to improve the accuracy of current traffic flow predictions. We model the spatiotemporal correlations present in traffic flow prediction by optimizing GCN (graph convolutional network) parameters using an LSTM (long short-term memory) network. Specifically, we capture spatial correlations by learning topology through the GCN network and temporal correlations by embedding the LSTM network into the training process of the GCN network. This method improves on the traditional combination of recurrent neural networks and graph neural networks in spatiotemporal traffic flow prediction, so it better captures the spatiotemporal features present in traffic flow. Extensive experiments conducted on the PEMS dataset illustrate the effectiveness of our method and its advantage over other state-of-the-art methods. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

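The GCN building block that LST-GCN stacks with an LSTM can be sketched as one symmetric-normalized propagation step. This is the generic Kipf-Welling-style layer only; the LSTM-driven parameter optimization that is the paper's contribution is omitted, and the toy graph and weights are illustrative.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN step: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)        # three sensors on a road segment
X = np.eye(3)                                 # one-hot node features
W = np.ones((3, 2))                           # toy weights
H = gcn_layer(A, X, W)
```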
17 pages, 802 KiB  
Article
Fine-Grained Classification of Announcement News Events in the Chinese Stock Market
by Feng Miu, Ping Wang, Yuning Xiong, Huading Jia and Wei Liu
Electronics 2022, 11(13), 2058; https://doi.org/10.3390/electronics11132058 - 30 Jun 2022
Cited by 1 | Viewed by 1288
Abstract
Determining the event type is one of the main tasks of event extraction (EE). The announcement news released by listed companies contains a wide range of information, and determining the event types is a challenge. Fine-grained event type frameworks have previously been built from financial news or stock announcement news by domain experts manually, or by clustering, ontology or other methods. However, we believe there are still improvements to be made to the existing results. For example, previous studies created a single legal category, which treats violations of company rules and violations of the law as the same thing; yet the penalties they incur and the expectations they create for investors differ, so it is more reasonable to consider them different types. In order to classify the event types of stock announcement news more finely, this paper proposes a two-step method. First, the candidate event trigger words and co-occurrence words satisfying the support value are extracted and arranged in the order of common expressions by the algorithm. Then, the final event types are determined using three proposed criteria. Based on real data from the Chinese stock market, this paper constructs 54 event types (p = 0.927, f = 0.946), including some reasonable and valuable types not discussed in previous studies. Finally, based on the unilateral trading policy of the Chinese stock market, we screened out event types that may not be valuable to investors. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)

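The first step, extracting trigger-word/co-occurrence pairs that meet a support threshold, can be sketched with simple pair counting over documents. This is a minimal sketch of support counting only; the paper's expression-ordering algorithm and its three type-selection criteria are not reproduced, and the toy documents are invented.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(docs, min_support=2):
    """Count unordered word pairs across documents and keep those whose
    document frequency meets the support threshold."""
    counts = Counter()
    for words in docs:
        for pair in combinations(sorted(set(words)), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

docs = [["penalty", "violation", "law"],
        ["penalty", "violation", "rule"],
        ["dividend", "announce"]]
pairs = frequent_pairs(docs, min_support=2)
```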
12 pages, 1844 KiB  
Article
Improved LS-SVM Method for Flight Data Fitting of Civil Aircraft Flying at High Plateau
by Nongtian Chen, Youchao Sun, Zongpeng Wang and Chong Peng
Electronics 2022, 11(10), 1558; https://doi.org/10.3390/electronics11101558 - 13 May 2022
Cited by 5 | Viewed by 1669
Abstract
High-plateau flight safety is an important research hotspot in the field of civil aviation transportation safety science. Complete and accurate high-plateau flight data are beneficial for effectively assessing and improving the flight status of civil aviation aircraft, and can play an important role in carrying out high-plateau operation safety risk analysis. For various reasons, such as the low temperature and low pressure of the harsh high-plateau flight environment, abnormal or missing quick access recorder (QAR) data affect flight data processing and analysis results to a certain extent. In order to effectively solve this problem, an improved least squares support vector machine (LS-SVM) method is proposed. Firstly, the entropy weight method is used to obtain the index weights. Secondly, the principal component analysis method is used for dimensionality reduction. Finally, the data are fitted and repaired by selecting appropriate eigenvalues through multiple tests based on the LS-SVM. In order to verify the effectiveness of this method, QAR data from multiple real plateau flights are used for testing and comparison. The fitting results show that the mean absolute error measure corresponds to an average accuracy of more than 90%, and the equal coefficient error index reaches 0.99, a high degree of fit, which proves that the improved LS-SVM machine learning model can fit and supplement missing QAR data in plateau areas from historical flight data and effectively meet application needs. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)
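The pipeline this abstract describes (entropy weighting, dimensionality reduction, LS-SVM fitting) can be sketched compactly: an LS-SVM replaces the SVM's quadratic program with a single linear system. The sketch below covers the entropy weight method and the LS-SVM regression step only (PCA can be supplied by, e.g., scikit-learn); the toy data, the `gamma` and `sigma` values, and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: columns with more dispersion get larger weights.
    Assumes strictly positive indicator values."""
    P = X / X.sum(axis=0, keepdims=True)
    P = np.clip(P, 1e-12, None)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # per-column entropy
    d = 1.0 - e                                        # divergence degree
    return d / d.sum()

def rbf(A, B, sigma):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """LS-SVM regression: solve one (n+1)x(n+1) linear system instead of a QP."""
    n = len(X)
    K = rbf(X, X, sigma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)),  K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(Xtr, Xte, b, alpha, sigma=0.5):
    """Evaluate the fitted LS-SVM at new points (e.g., gaps in a QAR series)."""
    return rbf(Xte, Xtr, sigma) @ alpha + b
```

A missing stretch of a flight parameter would then be filled by calling `lssvm_predict` at the timestamps of the gap, with the model trained on the surrounding historical samples.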

22 pages, 4178 KiB  
Article
Short-Term Traffic-Flow Forecasting Based on an Integrated Model Combining Bagging and Stacking Considering Weight Coefficient
by Zhaohui Li, Lin Wang, Deyao Wang, Ming Yin and Yujin Huang
Electronics 2022, 11(9), 1467; https://doi.org/10.3390/electronics11091467 - 03 May 2022
Cited by 4 | Viewed by 1372
Abstract
This work proposes an integrated model combining bagging and stacking with weight coefficients for short-term traffic-flow prediction. The model incorporates vacation and peak-time features, as well as occupancy and speed information, to improve prediction accuracy and mine traffic-flow features more deeply. To address the limitations of single prediction models in traffic forecasting, a stacking model with ridge regression as the meta-learner is first established; the stacking model is then optimized from the perspective of the learners using the bagging model, and the optimized learners are embedded into the stacking model as new base learners to obtain the Ba-Stacking model. Finally, to address the Ba-Stacking model's low utilization of base learners, the information structure of the base learners is modified by weighting their error coefficients while taking the model's external features into account, resulting in a DW-Ba-Stacking model that adjusts the base learners' weights to change the feature distribution and thus improve utilization. Using 76,896 records from the I5NB highway as the empirical study object, the DW-Ba-Stacking model is compared and assessed against traditional models. The empirical results show that the DW-Ba-Stacking model has the highest prediction accuracy, demonstrating that it is effective for predicting short-term traffic flows and can help alleviate traffic-congestion problems. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)
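The Ba-Stacking idea described above (bagged base learners feeding a ridge meta-learner) maps directly onto scikit-learn primitives. The sketch below is a minimal stand-in: the choice of base learners, their hyperparameters, and the synthetic time-of-day data are assumptions, and the paper's dynamic error-coefficient weighting (the "DW" step) is not reproduced.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

# Bagged base learners (the "Ba" step) feeding a ridge meta-learner (the stacking step).
base_learners = [
    ("bag_tree", BaggingRegressor(DecisionTreeRegressor(max_depth=6),
                                  n_estimators=20, random_state=0)),
    ("bag_knn", BaggingRegressor(KNeighborsRegressor(5),
                                 n_estimators=10, random_state=0)),
]
ba_stacking = StackingRegressor(estimators=base_learners,
                                final_estimator=Ridge(alpha=1.0))

# Hypothetical flow data: a daily sinusoidal pattern plus noise stands in for
# real detector counts, with time-of-day as the only feature.
rng = np.random.default_rng(0)
hours = rng.uniform(0, 24, size=(300, 1))
flow = 50 + 30 * np.sin(hours.ravel() * np.pi / 12) + rng.normal(0, 3, 300)
ba_stacking.fit(hours[:250], flow[:250])
```

In the paper's setting, the feature matrix would additionally carry the vacation, peak-time, occupancy, and speed columns mentioned in the abstract.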

28 pages, 3798 KiB  
Article
Random Replacement Crisscross Butterfly Optimization Algorithm for Standard Evaluation of Overseas Chinese Associations
by Hanli Bao, Guoxi Liang, Zhennao Cai and Huiling Chen
Electronics 2022, 11(7), 1080; https://doi.org/10.3390/electronics11071080 - 29 Mar 2022
Cited by 4 | Viewed by 2158
Abstract
The butterfly optimization algorithm (BOA) is a swarm intelligence optimization algorithm proposed in 2019 that simulates the foraging behavior of butterflies. However, the BOA has certain shortcomings, such as a slow convergence speed and low solution accuracy. To cope with these problems, two strategies are introduced to improve its performance. One is the random replacement strategy, which replaces the position of the current solution with that of the optimal solution and is used to increase the convergence speed. The other is the crisscross search strategy, which trades off exploration against exploitation in the BOA to escape local optima whenever possible. On this basis, we propose a novel optimizer named the random replacement crisscross butterfly optimization algorithm (RCCBOA). To evaluate the performance of RCCBOA, comparative experiments are conducted against nine other advanced algorithms on the IEEE CEC2014 function test set. Furthermore, RCCBOA is combined with a support vector machine (SVM) and feature selection (FS) to form RCCBOA-SVM-FS, a standardized construction model of overseas Chinese associations. It is found that the reasonableness of bylaws; the regularity of general meetings; and the right to elect, be elected, and vote are important to the planning and standardization of such associations. Compared with other machine learning methods, the RCCBOA-SVM-FS model achieves up to 95% accuracy on the normative prediction problem of overseas Chinese associations. The constructed model is therefore helpful for guiding the orderly and healthy development of overseas Chinese associations. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)
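A minimal sketch of a butterfly-style optimizer with a random replacement step may help make the strategy concrete. All constants below (fragrance scale, replacement rate, search bounds, population size) are illustrative choices rather than the RCCBOA's exact settings, and the crisscross search strategy is omitted for brevity.

```python
import numpy as np

def boa_random_replacement_sketch(obj, dim=2, n_pop=20, iters=300, p=0.8, seed=0):
    """Toy butterfly-style minimizer with a random replacement step.
    Assumes a nonnegative objective (as in the sphere test function)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (n_pop, dim))
    fit = np.array([obj(x) for x in X])
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    for _ in range(iters):
        # fragrance grows as the objective value shrinks (illustrative formula)
        frag = 0.1 * (1.0 / (1.0 + np.abs(fit))) ** 0.1
        for i in range(n_pop):
            r = rng.random() ** 2
            if rng.random() < p:   # global search: move toward the best butterfly
                X[i] += (r * best - X[i]) * frag[i]
            else:                  # local search: move between two random butterflies
                j, k = rng.integers(n_pop, size=2)
                X[i] += (r * X[j] - X[k]) * frag[i]
            if rng.random() < 0.2:  # random replacement: copy one coordinate of the best
                d = rng.integers(dim)
                X[i, d] = best[d]
        fit = np.array([obj(x) for x in X])
        if fit.min() < best_fit:
            best_fit = fit.min()
            best = X[fit.argmin()].copy()
    return best, best_fit
```

The random replacement branch is the point of the sketch: pulling single coordinates of the current best into other butterflies speeds up convergence, which is the effect the abstract attributes to the strategy.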

16 pages, 27852 KiB  
Article
Innovative Hyperspectral Image Classification Approach Using Optimized CNN and ELM
by Ansheng Ye, Xiangbing Zhou and Fang Miao
Electronics 2022, 11(5), 775; https://doi.org/10.3390/electronics11050775 - 02 Mar 2022
Cited by 10 | Viewed by 2111
Abstract
To effectively extract features and improve classification accuracy for hyperspectral remote sensing images (HRSIs), this paper combines the advantages of an enhanced particle swarm optimization (PSO) algorithm, a convolutional neural network (CNN), and an extreme learning machine (ELM) to propose an innovative HRSI classification method (IPCEHRIC). In the IPCEHRIC, an enhanced PSO algorithm (CWLPSO) is developed by improving the learning factor and inertia weight to strengthen global optimization performance; it is employed to optimize the parameters of the CNN so as to construct an optimized CNN model that effectively extracts deep features from HRSIs. A feature matrix is then constructed, and the ELM, with its strong generalization ability and fast learning, is employed to classify the HRSIs accurately. Pavia University data and actual HRSIs captured after the Jiuzhaigou M7.0 earthquake are used to test and prove the effectiveness of the IPCEHRIC. The experimental results show that the optimized CNN effectively extracts deep features from HRSIs, and the IPCEHRIC accurately classifies the post-earthquake HRSIs into villages, bareland, grassland, trees, water, and rocks. The IPCEHRIC therefore offers stronger generalization, faster learning, and higher classification accuracy. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)
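The ELM component used as the final classifier has a particularly compact core: a fixed random hidden layer whose output weights are solved in closed form by a pseudo-inverse, which is where its fast learning comes from. The sketch below shows only that idea; the CNN feature extraction and PSO tuning from the paper are not included, and the layer sizes, seed, and toy data are assumptions.

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine: random hidden layer, closed-form
    least-squares output weights (illustrative sketch, not the paper's model)."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Hidden weights and biases are drawn once and never trained.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden-layer activations
        Y = np.eye(int(y.max()) + 1)[y]       # one-hot class targets
        self.beta = np.linalg.pinv(H) @ Y     # least-squares output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)
```

In the IPCEHRIC pipeline, `X` would be the deep-feature matrix produced by the optimized CNN rather than raw pixel vectors.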

29 pages, 5329 KiB  
Article
Multi-Population Enhanced Slime Mould Algorithm with Application to Postgraduate Employment Stability Prediction
by Hongxing Gao, Guoxi Liang and Huiling Chen
Electronics 2022, 11(2), 209; https://doi.org/10.3390/electronics11020209 - 10 Jan 2022
Cited by 13 | Viewed by 2018
Abstract
In this study, the authors aimed to develop an effective intelligent method for employment stability prediction in order to provide a reasonable reference for postgraduate employment decisions and for policy formulation in related departments. First, this paper introduces an enhanced slime mould algorithm (MSMA) with a multi-population strategy. It then proposes a prediction model, MSMA-SVM, based on the modified algorithm and the support vector machine (SVM). The multi-population strategy balances the exploitation and exploration abilities of the algorithm and improves its solution accuracy, while the proposed model strengthens the SVM's parameter tuning and its identification of compact feature subsets, yielding more appropriate parameters and feature subsets. The modified slime mould algorithm is then compared against other well-known algorithms on the 30 IEEE CEC2017 benchmark functions, and the experimental results indicate that it performs observably better than the competing algorithms on most functions. Meanwhile, the optimal support vector machine model was compared with several other machine learning methods on predicting employment stability, and the results showed that it has better classification ability and more stable performance. It is therefore reasonable to infer that the optimal support vector machine model is an effective tool for predicting employment stability. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)
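The idea of using a multi-population search to tune SVM hyperparameters can be illustrated with a toy population-based search over log-scaled `C` and `gamma`. This is a simple stand-in for the MSMA, not the paper's algorithm: the population sizes, mutation scale, search ranges, and migration rule are all assumptions, and the feature-selection component is omitted.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def multi_pop_svm_tune(X, y, n_pops=3, pop_size=4, iters=3, seed=0):
    """Toy multi-population search over (log10 C, log10 gamma): each
    sub-population explores on its own, and the global best migrates into
    every population after each round (illustrative stand-in for MSMA-SVM)."""
    rng = np.random.default_rng(seed)
    # Each individual is a point in log10-hyperparameter space.
    pops = rng.uniform([-1.0, -3.0], [3.0, 1.0], size=(n_pops, pop_size, 2))

    def score(p):
        clf = SVC(C=10.0 ** p[0], gamma=10.0 ** p[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    best_p, best_s = None, -np.inf
    for _ in range(iters):
        for k in range(n_pops):
            scores = np.array([score(p) for p in pops[k]])
            i = int(scores.argmax())
            if scores[i] > best_s:
                best_s, best_p = scores[i], pops[k, i].copy()
            # Mutate each sub-population around its local best.
            pops[k] = pops[k, i] + rng.normal(0.0, 0.3, size=pops[k].shape)
        pops[:, 0] = best_p  # migration: seed every population with the global best
    return best_p, best_s
```

The multi-population structure is what the abstract credits with balancing exploration and exploitation: sub-populations search independently, and migration keeps them anchored to the best solution found so far.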

21 pages, 4369 KiB  
Article
Peak Shaving and Frequency Regulation Coordinated Output Optimization Based on Improving Economy of Energy Storage
by Daobing Liu, Zitong Jin, Huayue Chen, Hongji Cao, Ye Yuan, Yu Fan and Yingjie Song
Electronics 2022, 11(1), 29; https://doi.org/10.3390/electronics11010029 - 22 Dec 2021
Cited by 10 | Viewed by 3100
Abstract
In this paper, a peak shaving and frequency regulation coordinated output strategy based on existing energy storage is proposed to improve the economics of energy storage development and increase the economic benefits of energy storage in industrial parks. In the proposed strategy, profit and cost models of peak shaving and frequency regulation are first established. Second, an economic optimization model is established that considers the benefits brought by the energy storage output together with degradation and operation and maintenance costs; it is used to divide the peak shaving and frequency regulation capacity of the energy storage based on coordinated output optimization. Finally, the intra-day model predictive control method is employed for rolling optimization, yielding an intra-day coordinated output optimization strategy for the energy storage. Simulation results show that the coordinated output strategy reduces the whole-day electricity cost by 10.96%. Further comparative analysis and a life-cycle economic analysis show that the profit brought by the proposed strategy is greater than that of separate peak shaving or frequency regulation of energy storage with the same capacity. Full article
(This article belongs to the Special Issue Advanced Machine Learning Applications in Big Data Analytics)
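The dispatch-optimization core of such a strategy can be illustrated as a small linear program: choose battery charge and discharge power each hour to minimize energy cost subject to state-of-charge and power limits. The horizon, prices, loads, and battery parameters below are invented for illustration, and the paper's frequency-regulation profits, degradation costs, and rolling MPC loop are omitted.

```python
import numpy as np
from scipy.optimize import linprog

def peak_shaving_dispatch(load, price, cap=10.0, pmax=3.0, soc0=5.0):
    """Toy one-horizon battery dispatch LP: minimize energy cost subject to
    state-of-charge limits (illustrative, not the paper's full model).
    Decision vector: [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}]."""
    T = len(load)
    cost = np.concatenate([price, -price])   # pay to charge, save by discharging
    L = np.tril(np.ones((T, T)))             # cumulative-sum operator over time
    # State of charge: soc_t = soc0 + cumsum(charge - discharge),
    # bounded by 0 <= soc_t <= cap, expressed as two A_ub blocks.
    A_soc = np.block([[L, -L], [-L, L]])
    b_soc = np.concatenate([np.full(T, cap - soc0), np.full(T, soc0)])
    # Grid import must stay nonnegative: discharge - charge <= load.
    A_grid = np.hstack([-np.eye(T), np.eye(T)])
    res = linprog(cost,
                  A_ub=np.vstack([A_soc, A_grid]),
                  b_ub=np.concatenate([b_soc, load]),
                  bounds=[(0.0, pmax)] * (2 * T))
    charge, discharge = res.x[:T], res.x[T:]
    total_cost = price @ (load + charge - discharge)
    return charge, discharge, total_cost
```

A rolling intra-day scheme, as in the abstract, would re-solve this program each interval over a receding horizon, keeping only the first step of each solution.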
