Recent Analysis and Applications of Algorithms, Programs and Data Based on Artificial Intelligence

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 May 2023) | Viewed by 41783

Special Issue Editors


Guest Editor
College of Information and Computer Engineering, Northeast Forestry University, Harbin 150040, China
Interests: remote sensing; machine learning; artificial intelligence; cloud computing; parallel and distributed computing

Guest Editor
College of Intelligence and Computing, Tianjin University, Tianjin, China
Interests: knowledge graphs; graph databases; big data; distributed processing

Guest Editor
School of Computer Science, Southwest Petroleum University, Chengdu, China
Interests: medical image analysis; medical ultrasound imaging; GPU computation

Guest Editor
College of Computer Science and Technology, Jilin University, Changchun 130012, China
Interests: robots; autonomous driving; computer vision

Special Issue Information

Dear Colleagues,

The development of big data technology has led the world into a new era of intelligence. From daily life to industrial production, massive data and various intelligent algorithms, including machine learning and deep learning, assist in intelligent identification and decision making, bringing great convenience to human life. Cutting-edge research is developing rapidly in areas such as speech and image recognition, recommendation systems, smart cities, smart healthcare, and business intelligence. Moreover, artificial intelligence technology plays a significant role in software development and system design. Expectations for smart systems and applications are growing rapidly; it is thus crucial to analyze and optimize algorithms and data structures to improve their accuracy and efficiency.

This Special Issue aims to disseminate recent research results and advancements related to artificial intelligence and big data, with particular emphasis on their applications in algorithms, model optimization, data analysis, data processing, system development, AI systems, intelligent algorithms, integration, and other new technologies. We invite researchers and practitioners to contribute high-quality original research or review articles on these topics to this Special Issue.

Prof. Dr. Weipeng Jing
Prof. Dr. Xin Wang
Prof. Dr. Bo Peng
Prof. Dr. Gang Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • big data processing
  • intelligent data analysis
  • AI for systems
  • computer vision
  • intelligent signal processing
  • smart healthcare
  • business intelligence
  • hybrid methods
  • optimization algorithm
  • algorithm optimization

Published Papers (22 papers)


Research

13 pages, 601 KiB  
Article
Enhancing Document Image Retrieval in Education: Leveraging Ensemble-Based Document Image Retrieval Systems for Improved Precision
by Yehia Ibrahim Alzoubi, Ahmet Ercan Topcu and Erdem Ozdemir
Appl. Sci. 2024, 14(2), 751; https://doi.org/10.3390/app14020751 - 16 Jan 2024
Cited by 1 | Viewed by 583
Abstract
Document image retrieval (DIR) systems simplify access to digital data within printed documents by capturing images. These systems act as bridges between print and digital realms, with demand in organizations handling both formats. In education, students use DIR to access online materials, clarify topics, and find solutions in printed textbooks by photographing content with their phones. DIR excels in handling complex figures and formulas. We propose using ensembles of DIR systems instead of single-feature models to enhance DIR’s efficacy. We introduce “Vote-Based DIR” and “The Strong Decision-Based DIR”. These ensembles combine various techniques, like optical code reading, spatial analysis, and image features, improving document retrieval. Our study, using a dataset of university exam preparation materials, shows that ensemble DIR systems outperform individual ones, promising better accuracy and efficiency in digitizing printed content, which is especially beneficial in education.
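The vote-based ensemble idea can be illustrated with a minimal majority-vote sketch. The three retrieval back-ends below are hypothetical stand-ins for the optical-code, spatial-analysis, and image-feature systems the abstract names, not the paper's implementations:

```python
from collections import Counter

def vote_based_retrieval(query, retrievers):
    """Return the document ID most retrievers agree on.

    `retrievers` is a list of functions, each mapping a query image to
    its top-ranked document ID."""
    votes = Counter(r(query) for r in retrievers)
    doc_id, _ = votes.most_common(1)[0]
    return doc_id

# Three toy back-ends; two of them agree on the same document.
by_code   = lambda q: "doc-42"   # optical-code-based stand-in
by_layout = lambda q: "doc-42"   # spatial-analysis stand-in
by_visual = lambda q: "doc-17"   # image-feature stand-in

print(vote_based_retrieval("photo.jpg", [by_code, by_layout, by_visual]))  # doc-42
```

The "strong decision" variant would instead weight each back-end's vote by its confidence before tallying.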

28 pages, 1472 KiB  
Article
A Novel Real-Time PV Error Handling Exploiting Evolutionary-Based Optimization
by Asimina Dimara, Alexios Papaioannou, Konstantinos Grigoropoulos, Dimitris Triantafyllidis, Ioannis Tzitzios, Christos-Nikolaos Anagnostopoulos, Stelios Krinidis, Dimosthenis Ioannidis and Dimitrios Tzovaras
Appl. Sci. 2023, 13(23), 12682; https://doi.org/10.3390/app132312682 - 26 Nov 2023
Cited by 1 | Viewed by 609
Abstract
The crucial need for perpetual monitoring of photovoltaic (PV) systems, particularly in remote areas where routine inspections are challenging, is of major importance. This paper introduces an advanced approach to optimizing the maximum power point while ensuring real-time PV error handling. The overarching problem of securing continuous monitoring of photovoltaic systems is highlighted, emphasizing the need for reliable performance, especially in remote and inaccessible locations. The proposed methodology employs an innovative genetic algorithm (GA) designed to optimize the maximum power point of photovoltaic systems. This approach takes into account critical PV parameters and constraints. The single-diode PV modeling process, based on environmental variables like outdoor temperature, illuminance, and irradiance, plays a pivotal role in the optimization process. To specifically address the challenge of perpetual monitoring, the paper introduces a technique for handling PV errors in real time using evolutionary-based optimization. The genetic algorithm is utilized to estimate the maximum power point, with the PV voltage and current calculated on the basis of simulated values. A meticulous comparison between the expected electrical output and the actual photovoltaic data is conducted to identify potential errors in the photovoltaic system. A user interface provides a dynamic display of the PV system’s real-time status, generating alerts when abnormal PV values are detected. Rigorous testing under real-world conditions, incorporating PV-monitored values and outdoor environmental parameters, demonstrates the remarkable accuracy of the genetic algorithm, surpassing 98% in predicting PV current, voltage, and power. This establishes the proposed algorithm as a potent solution for ensuring the perpetual and secure monitoring of PV systems, particularly in remote and challenging environments.
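GA-based maximum-power-point estimation can be sketched with a bare-bones genetic algorithm over a toy P–V curve. The curve and all GA constants below are illustrative assumptions, not the paper's single-diode model or its tuned parameters:

```python
import math
import random

random.seed(0)

def pv_power(v):
    """Toy P-V curve standing in for a single-diode model: current
    collapses exponentially as v nears the open-circuit voltage (40 V)."""
    i = 8.0 - 8.0 * math.exp((v - 40.0) / 3.0)
    return max(v * i, 0.0)

def ga_mpp(fitness, v_min=0.0, v_max=40.0, pop=30, gens=60):
    """Estimate the maximum power point with selection, averaging
    crossover, and Gaussian mutation."""
    population = [random.uniform(v_min, v_max) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop // 2]                 # selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = random.sample(elite, 2)
            child = (a + b) / 2.0                      # crossover
            child += random.gauss(0.0, 0.5)            # mutation
            children.append(min(max(child, v_min), v_max))
        population = elite + children
    best = max(population, key=fitness)
    return best, fitness(best)

v_mpp, p_mpp = ga_mpp(pv_power)  # converges near the curve's true peak
```

Comparing `p_mpp` against measured output is then the error-detection step: a persistent gap flags a faulty panel.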

15 pages, 5926 KiB  
Article
Non-Invasive Intraoral Stand-Alone Tongue Control System Based on RISC-V Edge Computing
by Lijuan Shi, Xiong Peng, Jian Zhao, Zhejun Kuang, Tianbo An and Liu Wang
Appl. Sci. 2023, 13(17), 9490; https://doi.org/10.3390/app13179490 - 22 Aug 2023
Cited by 1 | Viewed by 680
Abstract
The intelligent tongue control system is of great significance for assisting the independent life of patients with a limb disability. In order to more accurately control the assisted living equipment of such patients and solve the power-loss problem of the intelligent tongue control system, this research designs a non-invasive pressure sensor array for tongue touch signal detection in the oral cavity and proposes a tongue control system based on RISC-V edge computing. The system converts the tongue touch pressure data into specific control instructions on the edge of the RISC-V chip and transmits them to the receiver, thus reducing the transmission of data. This study takes control of the wheelchair motor as the test object. In the experiment, the speed response time test, the center click task, and the power consumption experiment are carried out; the results show that the adaptive fuzzy PID control algorithm has good robustness in the system. When the DC motor with a given speed of 750 r/min reaches the steady state, its rise time is 0.108 s and the adjustment time is 0.59 s. The dynamic power consumption of the non-invasive intraoral stand-alone tongue control system proposed in this paper is found to be 3.745 mW, which is 11.5% lower than the total power consumption of the sTD system.
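The motor speed loop the experiments evaluate can be illustrated with a plain fixed-gain discrete PID driving a toy first-order motor model. The gains, time constant, and plant model are assumptions for illustration; the paper's controller additionally adapts its gains with fuzzy rules:

```python
def pid_step(setpoint, measured, state, kp=0.8, ki=2.0, kd=0.05, dt=0.01):
    """One update of a discrete PID controller.
    state = (integral, previous_error); returns (output, new_state)."""
    error = setpoint - measured
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy first-order motor: speed relaxes toward the control input with
# time constant tau. Previous error is seeded with the initial error
# to avoid a derivative kick on the first step.
speed, dt, tau = 0.0, 0.01, 0.2
state = (0.0, 750.0)
for _ in range(500):  # 5 s of simulated time at a 750 r/min setpoint
    u, state = pid_step(750.0, speed, state, dt=dt)
    speed += (u - speed) / tau * dt
```

With these (assumed) gains the loop settles close to 750 r/min well within the simulated window; an adaptive fuzzy layer would retune `kp`, `ki`, `kd` online from the error and its rate of change.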

18 pages, 1183 KiB  
Article
Towards Delay Tolerant Networking for Connectivity Aware Routing Protocol for VANET-WSN Communications
by Linda Mohaisen and Laurie Joiner
Appl. Sci. 2023, 13(6), 4008; https://doi.org/10.3390/app13064008 - 21 Mar 2023
Cited by 1 | Viewed by 1130
Abstract
Vehicular Ad Hoc Networks (VANETs) are increasingly playing a fundamental role in improving driving safety. However, VANETs in a sparse environment may add risk to driving safety. The probability of a low density of vehicles in a rural area at midnight is very high. Consequently, the packet will be lost due to the lack of other vehicles, and the arrival of the following vehicles in the accident area is unavoidable. To overcome this problem, VANET is integrated with Wireless Sensor Network (WSN). The most challenging feature of VANETs is their high mobility. This high mobility causes sensor nodes to consume most of their energy during communication with other nodes, leading to frequent network disconnectivity. With the evolution of VANET and WSN, the Store/Carry-Forward (SCF) paradigm has emerged as an exciting research area in the Delay Tolerant Networks (DTNs) to solve network disconnectivity. This paper proposes the Energy-Mobility-Connectivity aware routing protocol (EMCR) for a hybrid network of VANET-WSN. A comprehensive performance analysis that considers realistic propagation models and real city scenario traffic is performed in NS3. The simulation results show that the SCF mechanism is essential in the EMCR protocol to maximize the delivery ratio and minimize energy consumption and overhead.
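The Store/Carry-Forward idea reduces to a per-hop decision, sketched below with a hypothetical link-quality threshold; the EMCR protocol itself also weighs energy, mobility, and connectivity:

```python
def forward_or_carry(neighbors, min_link_quality=0.5):
    """SCF decision: hand the packet to the best neighbour if one has an
    adequate link, otherwise keep buffering ("carrying") it.
    `neighbors` is a list of dicts with an id and a link-quality score."""
    viable = [n for n in neighbors if n["link"] >= min_link_quality]
    if not viable:
        return "carry"  # sparse network: store the packet and move on
    best = max(viable, key=lambda n: n["link"])
    return f"forward:{best['id']}"

print(forward_or_carry([]))                               # carry
print(forward_or_carry([{"id": "v7", "link": 0.9},
                        {"id": "s2", "link": 0.4}]))      # forward:v7
```

In a DTN this buffering is what lets a packet survive the gaps that would otherwise drop it in a sparse rural scenario.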

24 pages, 1417 KiB  
Article
Calibrated Q-Matrix-Enhanced Deep Knowledge Tracing with Relational Attention Mechanism
by Linqing Li and Zhifeng Wang
Appl. Sci. 2023, 13(4), 2541; https://doi.org/10.3390/app13042541 - 16 Feb 2023
Cited by 6 | Viewed by 1730
Abstract
With the development of online educational platforms, numerous research works have focused on the knowledge tracing task, which relates to the problem of diagnosing the changing knowledge proficiency of learners. Deep-neural-network-based models are used to explore the interaction information between students and their answer logs in the current field of knowledge tracing studies. However, those models ignore the impact of previous interactions, including the exercise relation, forgetting factor, and student behaviors (the slipping factor and the guessing factor). Those models also do not consider the importance of the Q-matrix, which relates exercises to knowledge points. In this paper, we propose a novel relational attention knowledge tracing (RAKT) model to track the students’ knowledge proficiency in exercises. Specifically, the RAKT model incorporates the students’ performance data with corresponding interaction information, such as the context of exercises and the different time intervals between exercises. The RAKT model also takes into account the students’ interaction behaviors, including the slipping factor and the guessing factor. Moreover, the model considers the relationship between exercise sets and knowledge sets and the relationship between different knowledge points in the same exercise. An extension of RAKT, the calibrated Q-matrix relational attention knowledge tracing (QRAKT) model, was developed using a Q-matrix calibration method based on hierarchical knowledge levels. Experiments were conducted on two public educational datasets, ASSISTment2012 and Eedi. The results of the experiments indicated that the RAKT model and the QRAKT model outperformed the four baseline models.
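What a Q-matrix encodes can be shown in a few lines. The matrix and responses below are toy values, and the mastery estimate is a deliberately naive one; the paper calibrates the Q-matrix rather than using it raw:

```python
# Rows = exercises, columns = knowledge points: Q[i][j] = 1 means
# exercise i requires knowledge point j.
Q = [
    [1, 0, 0],  # e0 needs k0
    [1, 1, 0],  # e1 needs k0 and k1
    [0, 1, 1],  # e2 needs k1 and k2
]
responses = [1, 1, 0]  # one student: correct, correct, incorrect

# Naive mastery estimate: of the attempts that involved each knowledge
# point, what fraction were answered correctly?
n_points = len(Q[0])
attempts = [sum(row[j] for row in Q) for j in range(n_points)]
correct = [sum(row[j] * r for row, r in zip(Q, responses)) for j in range(n_points)]
mastery = [c / a for c, a in zip(correct, attempts)]
print(mastery)  # [1.0, 0.5, 0.0]
```

A knowledge tracing model replaces this static ratio with a sequence model, but the exercise-to-knowledge-point mapping it consumes has exactly this shape.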

15 pages, 3242 KiB  
Article
Towards Accurate Children’s Arabic Handwriting Recognition via Deep Learning
by Anfal Bin Durayhim, Amani Al-Ajlan, Isra Al-Turaiki and Najwa Altwaijry
Appl. Sci. 2023, 13(3), 1692; https://doi.org/10.3390/app13031692 - 29 Jan 2023
Cited by 3 | Viewed by 2216
Abstract
Automatic handwriting recognition has received considerable attention over the past three decades. Handwriting recognition systems are useful for a wide range of applications. Much research has been conducted to address the problem in Latin languages. However, less research has focused on the Arabic language, especially concerning recognizing children’s Arabic handwriting. This task is essential as the demand for educational applications to practice writing and spelling Arabic letters is increasing. Thus, the development of Arabic handwriting recognition systems and applications for children is important. In this paper, we propose two deep learning-based models for the recognition of children’s Arabic handwriting. The proposed models, a convolutional neural network (CNN) and a pre-trained CNN (VGG-16) were trained using Hijja, a recent dataset of Arabic children’s handwriting collected in Saudi Arabia. We also train and test our proposed models using the Arabic Handwritten Character Dataset (AHCD). We compare the performance of the proposed models with similar models from the literature. The results indicate that our proposed CNN outperforms the pre-trained CNN (VGG-16) and the other compared models from the literature. Moreover, we developed Mutqin, a prototype to help children practice Arabic handwriting. The prototype was evaluated by target users, and the results are reported.

24 pages, 3181 KiB  
Article
Cooperative Content Caching Framework Using Cuckoo Search Optimization in Vehicular Edge Networks
by Sardar Khaliq uz Zaman, Saad Mustafa, Hajira Abbasi, Tahir Maqsood, Faisal Rehman, Muhammad Amir Khan, Mushtaq Ahmed, Abeer D. Algarni and Hela Elmannai
Appl. Sci. 2023, 13(2), 780; https://doi.org/10.3390/app13020780 - 05 Jan 2023
Cited by 3 | Viewed by 1489
Abstract
Vehicular edge networks (VENs) connect vehicles to share data and infotainment content collaboratively to improve network performance. Due to technological advancements, data growth is accelerating, making it difficult to always connect mobile devices and locations. For vehicle-to-vehicle (V2V) communication, vehicles are equipped with onboard units (OBU) and roadside units (RSU). Through back-haul, all user-uploaded data is cached in the cloud server’s main database. Caching stores and delivers database data on demand. Pre-caching the data on the upcoming predicted server, closest to the user, before receiving the request will improve the system’s performance. OBUs, RSUs, and base stations (BS) cache data in VENs to fulfill user requests rapidly. Pre-caching reduces data retrieval costs and times. Due to storage and computing expenses, complete data cannot be stored on a single device for vehicle caching. We reduce content delivery delays by using the cuckoo search optimization algorithm with cooperative content caching. Cooperation among end users in terms of data sharing with neighbors will positively affect delivery delays. The proposed model considers cooperative content caching based on popularity and accurate vehicle position prediction using K-means clustering. Performance is measured by caching cost, delivery cost, response time, and cache hit ratio. Across these metrics, the proposed algorithm outperforms the alternative.
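The flavor of cuckoo search optimization can be conveyed with a bare-bones one-dimensional version. The delay function and every constant below are illustrative stand-ins, not the paper's caching model:

```python
import math
import random

random.seed(0)

def cuckoo_search(cost, lo, hi, n_nests=15, iters=200, pa=0.25):
    """Minimal cuckoo search: random-nest Levy-style steps (approximated
    here by heavy-tailed Cauchy steps) plus abandonment of a fraction
    `pa` of the worst nests each iteration."""
    clip = lambda x: min(max(x, lo), hi)
    nests = [random.uniform(lo, hi) for _ in range(n_nests)]
    for _ in range(iters):
        # A cuckoo lays an egg: heavy-tailed step from a random nest.
        i = random.randrange(n_nests)
        step = math.tan(math.pi * (random.random() - 0.5))  # Cauchy draw
        candidate = clip(nests[i] + 0.1 * step)
        j = random.randrange(n_nests)
        if cost(candidate) < cost(nests[j]):
            nests[j] = candidate
        # Abandon the worst nests; the best ones survive (elitism).
        nests.sort(key=cost)
        for k in range(int(n_nests * (1 - pa)), n_nests):
            nests[k] = random.uniform(lo, hi)
    return min(nests, key=cost)

# Toy stand-in for content-delivery delay as a function of a cache split.
delay = lambda x: (x - 0.7) ** 2 + 0.1
best = cuckoo_search(delay, 0.0, 1.0)  # settles near the minimum at 0.7
```

In the caching setting the decision variable would instead encode which popular contents each RSU/OBU pre-caches, with the cost combining delivery delay and caching cost.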

19 pages, 672 KiB  
Article
Insider Threat Detection Using Machine Learning Approach
by Bushra Bin Sarhan and Najwa Altwaijry
Appl. Sci. 2023, 13(1), 259; https://doi.org/10.3390/app13010259 - 25 Dec 2022
Cited by 9 | Viewed by 9379
Abstract
Insider threats pose a critical challenge for securing computer networks and systems. They are malicious activities by authorised users that can cause extensive damage, such as intellectual property theft, sabotage, sensitive data exposure, and web application attacks. Organisations are tasked with the duty of keeping their network layers safe and preventing intrusions at any level. Recent advances in modern machine learning algorithms, such as deep learning and ensemble models, facilitate solving many challenging problems by learning latent patterns and modelling data. We used the Deep Feature Synthesis algorithm to derive behavioural features based on historical data. We generated 69,738 features for each user, then used PCA as a dimensionality reduction method and utilised advanced machine learning algorithms, both anomaly detection and classification models, to detect insider threats, achieving an accuracy of 91% for the anomaly detection model. The experimentation utilised a publicly available insider threat dataset called the CERT insider threats dataset. We tested the effect of the SMOTE balancing technique to reduce the effect of the imbalanced dataset, and the results show that it increases recall and accuracy at the expense of precision. The feature extraction process and the SVM model yield outstanding results among all other ML models, achieving an accuracy of 100% for the classification model.
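The SMOTE balancing technique tested here boils down to interpolating synthetic minority samples between nearest neighbours. A bare-bones sketch of that idea on toy 2-D points (not the CERT data, and not the full SMOTE algorithm):

```python
import random

random.seed(1)

def smote_like(minority, n_new, k=2):
    """Generate n_new synthetic minority samples by interpolating between
    each chosen sample and one of its k nearest neighbours."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = random.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: sq_dist(base, p))[:k]
        other = random.choice(neighbours)
        gap = random.random()  # position along the segment base -> other
        synthetic.append(tuple(b + gap * (o - b) for b, o in zip(base, other)))
    return synthetic

# Three minority-class points; four synthetic ones interpolated between them.
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
new_points = smote_like(minority, n_new=4)
```

Because every synthetic point lies on a segment between real minority points, the classifier sees a denser minority class without any fabricated regions of feature space, which matches the recall-over-precision trade-off the abstract reports.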

18 pages, 8022 KiB  
Article
Determination of Coniferous Wood’s Compressive Strength by SE-DenseNet Model Combined with Near-Infrared Spectroscopy
by Chao Li, Xun Chen, Lixin Zhang and Saipeng Wang
Appl. Sci. 2023, 13(1), 152; https://doi.org/10.3390/app13010152 - 22 Dec 2022
Cited by 1 | Viewed by 1357
Abstract
Rapid determination of the mechanical performance of coniferous wood has great importance for wood processing and utilization. Near-infrared spectroscopy (NIRS) is widely used in various production fields because of its high efficiency and non-destructive characteristics. However, traditional NIRS analysis techniques mainly focus on spectral pretreatment and dimension-reduction methods, which make it difficult to fully exploit the effective spectral information and are time-consuming and laborious. Deep learning methods can automatically extract features; data-driven artificial intelligence technology can discover the internal correlations within data and realize many detection tasks in life and production. In this paper, we propose an SE-DenseNet model, which realizes end-to-end prediction without the complex spectral dimension reduction of traditional modeling methods. The experimental results show that the proposed SE-DenseNet model achieved a classification accuracy and F1 value of 88.89% and 0.8831 on the larch test set, respectively. The proposed SE-DenseNet model achieved a correlation coefficient (R) and root mean square error (RMSE) of 0.9144 and 1.2389 MPa on the larch test set, respectively. This study demonstrates that SE-DenseNet can realize automatic extraction of spectral features and the accurate determination of wood mechanical properties.

26 pages, 2863 KiB  
Article
Study on Dynamic Evaluation of Sci-tech Journals Based on Time Series Model
by Yan Ma, Yingkun Han, Mengshi Chen and Yongqiang Che
Appl. Sci. 2022, 12(24), 12864; https://doi.org/10.3390/app122412864 - 14 Dec 2022
Cited by 2 | Viewed by 1054
Abstract
As science and technology continue to advance, sci-tech journals are developing rapidly, and the quality of these journals affects the development and progress of particular subjects. Whether sci-tech journals can be evaluated and predicted comprehensively and dynamically from multiple angles based on the current qualitative and quantitative evaluations of sci-tech journals is related to a rational adjustment of journal resource allocation and development planning. In this study, we propose a time series analysis task for the comprehensive and dynamic evaluation of sci-tech journals, construct a multivariate short-time multi-series time series dataset that contains 18 journal evaluation metrics, and build models based on machine learning and deep learning methods commonly used in the field of time series analysis to carry out training and testing experiments on the dataset. We compare and analyze the experimental results to confirm the generalizability of these methods for the comprehensive dynamic evaluation of journals and find that the LSTM model built on our dataset produced the best performance (MSE: 0.00037, MAE: 0.01238, accuracy based on 80% confidence: 72.442%), laying the foundation for subsequent research on this task. In addition, the dataset constructed in this study can support research on the co-analysis of multiple short time series in the field of time series analysis.
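For readers unfamiliar with the reported error metrics, MSE and MAE are one-liners; the prediction values below are made up for illustration, not the paper's data:

```python
def mse(y, yhat):
    """Mean squared error: average of squared residuals."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def mae(y, yhat):
    """Mean absolute error: average of absolute residuals."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

truth = [0.30, 0.55, 0.70]   # hypothetical normalized journal metrics
pred  = [0.28, 0.60, 0.66]   # hypothetical model predictions
print(round(mse(truth, pred), 5), round(mae(truth, pred), 5))  # ~0.0015 ~0.03667
```

MSE penalizes large misses quadratically, so a model with the very low MSE reported here (0.00037) rarely makes big errors, while MAE reflects the typical miss size.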

18 pages, 3435 KiB  
Article
Social Recommendation Based on Quantified Trust and User’s Primary Preference Space
by Suqi Zhang, Ningjing Zhang, Ningning Li, Zhijian Xie, Junhua Gu and Jianxin Li
Appl. Sci. 2022, 12(23), 12141; https://doi.org/10.3390/app122312141 - 27 Nov 2022
Cited by 1 | Viewed by 1228
Abstract
Social recommendation has received great attention recently, which uses social information to alleviate the data sparsity problem and the cold-start problem of recommendation systems. However, the existing social recommendation methods have two deficiencies. First, the binary trust network used by current social recommendation methods cannot reflect the trust level of different users. Second, current social recommendation methods assume that users only consider the same influential factors when purchasing goods and establishing friendships, which does not match the reality, since users may have different preferences in different scenarios. To address these issues, in this paper, we propose a novel social recommendation framework based on trust and preference, named TPSR, including a trust quantification method based on random walk with restart (TQ_RWR) and a user’s primary preference space model (UPPS). Our experimental results on four public real-world datasets show that TQ_RWR can improve the utilization of trust information and improve recommendation accuracy. In addition, compared with current social recommendation methods, TPSR can achieve a higher performance in different metrics, including root mean square error, precision, recall and F1 value.
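A random walk with restart of the kind TQ_RWR builds on can be sketched in a few lines. The three-user trust graph and uniform transition probabilities are toy assumptions; the paper's method adds its own weighting on top:

```python
def rwr_trust(adj, source, restart=0.15, iters=100):
    """Random walk with restart on a directed trust graph.
    adj[u] lists the users u trusts. Returns steady-state visit
    probabilities, usable as graded trust scores from `source`."""
    n = len(adj)
    r = [1.0 / n] * n
    for _ in range(iters):
        nxt = [restart if v == source else 0.0 for v in range(n)]
        for u, outs in enumerate(adj):
            if outs:
                share = (1.0 - restart) * r[u] / len(outs)
                for v in outs:
                    nxt[v] += share
            else:  # dangling node: the walk restarts at the source
                nxt[source] += (1.0 - restart) * r[u]
        r = nxt
    return r

# User 0 trusts 1 and 2; user 1 trusts 2; user 2 trusts 0.
trust = [[1, 2], [2], [0]]
scores = rwr_trust(trust, source=0)
```

Unlike a binary trust edge, the resulting scores are graded: user 2, reachable from the source along two paths, ends up more trusted than user 1.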

17 pages, 718 KiB  
Article
A Federated Transfer Learning Framework Based on Heterogeneous Domain Adaptation for Students’ Grades Classification
by Bin Xu, Sheng Yan, Shuai Li and Yidi Du
Appl. Sci. 2022, 12(21), 10711; https://doi.org/10.3390/app122110711 - 22 Oct 2022
Cited by 2 | Viewed by 2389
Abstract
In the field of educational data mining, the classification of students’ grades is a subject that receives widespread attention. However, solving this problem with machine learning and deep learning algorithms usually requires large datasets. The privacy problem of educational data platforms also limits the possibility of building an extensive dataset of students’ information and behavior by gathering small datasets and then carrying out federated model training. Therefore, the imbalance of educational data and the inconsistency of feature distributions are critical problems that urgently need to be solved in educational data mining. Federated learning technology enables multiple participants to carry out machine learning and deep learning while protecting data privacy and meeting legal compliance requirements, solving the data-island problem. However, these methods are only applicable to data environments with common features or common samples across the federation, which results in domain shift between nodes. Therefore, in this paper, we propose a framework based on federated transfer learning for student classification with privacy protection. This framework introduces the domain adaptation method and extends domain adaptation to the constraints of federated learning. Through a feature extractor, this method matches the feature distribution of each party in the feature space. Then, labels and domains are classified on each side, the model is trained, and the target model is updated by gradient aggregation. The federated learning framework based on this method can effectively realize federated transfer learning on heterogeneous datasets. We evaluated the performance of the proposed framework for student classification on the datasets of two courses. We simulated four scenarios according to different situations in reality. Then, the results of training on only the source domain, training on only the target domain, and federated transfer training are compared. The experimental results show that the heterogeneous federated transfer framework based on domain adaptation can solve federated learning and knowledge transfer problems when the data source has little data, and can be used for students’ grade classification on small datasets.
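The gradient-aggregation step can be sketched as a FedAvg-style weighted average of each party's parameters. This is a generic sketch of the aggregation pattern; the paper's framework additionally trains a shared feature extractor and domain classifiers:

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of model parameters from several parties,
    weighting each party by its local dataset size, as in the
    aggregation step after each local training round."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two parties with unequal data volumes; only parameters leave each site.
w_a = [0.2, -1.0]   # party A's parameters (1000 local samples)
w_b = [0.6,  1.0]   # party B's parameters (3000 local samples)
global_w = federated_average([w_a, w_b], [1000, 3000])  # ~[0.5, 0.5]
```

The privacy property comes from the fact that only these parameter vectors, never the raw student records, are exchanged with the aggregator.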

21 pages, 1429 KiB  
Article
Application of Extension Engineering in Safety Evaluation of Chemical Enterprises
by Qilong Han, Peng Liu and Zhiqiang Ma
Appl. Sci. 2022, 12(18), 9368; https://doi.org/10.3390/app12189368 - 19 Sep 2022
Viewed by 1359
Abstract
To effectively analyze the safety risk of chemical enterprises and ensure the safety of production and management of enterprises, the contradiction problems in the process of index selection and risk early warning model in practical application are addressed. In this paper, extension engineering [...] Read more.
To effectively analyze the safety risk of chemical enterprises and ensure the safety of production and management of enterprises, the contradiction problems in the process of index selection and risk early warning model in practical application are addressed. In this paper, extension engineering is introduced into the safety-security field of chemical enterprises to extract hidden useful information from the production environment and outdoor environment data and provide decision support for the managers of chemical enterprises. First, based on data preprocessing and extension analysis, the safety-security data of chemical enterprises that meet the quality requirements and can be efficiently mined are searched. Then, the outdoor environment is combined in the paper to conduct the mining of these data in two aspects: (1) comprehensive analysis and evaluation of data quality; (2) key factors affecting factory safety mining, realizing the safety-security evaluation of intelligent factories in chemical enterprises. Based on the proposed chemical factory safety extension prerisk model, the risk assessment of the safety status of a chemical enterprise in Hebei Province is carried out. The research results of this paper provide a theoretical basis for the safety production analysis of such chemical enterprises and put forward practical suggestions for preventing possible accidents in the production process. Full article

19 pages, 620 KiB  
Article
ContextKT: A Context-Based Method for Knowledge Tracing
by Minghe Yu, Fan Li, Hengyu Liu, Tiancheng Zhang and Ge Yu
Appl. Sci. 2022, 12(17), 8822; https://doi.org/10.3390/app12178822 - 02 Sep 2022
Cited by 4 | Viewed by 1635
Abstract
Knowledge tracing, which is used to predict students’ performance based on their previous practices, has attracted many researchers’ attention. Especially in this rising period of intelligent education, many knowledge tracing methods have been developed. However, most of the existing knowledge tracing methods focus on the features of practices and knowledge concepts but ignore the contexts related to the studying process. In this paper, we propose a context-based knowledge tracing model, which combines students’ historical performance and their studying contexts during knowledge mastery. Specifically, we first define five studying contexts for performance prediction. The basic context is the current knowledge state of a student, which is described by their practice sequences. Then, a QR-matrix is defined to represent the relationship among questions, knowledge concepts, and responses, which describes the contexts of questions and knowledge. Furthermore, an improved LSTM model is proposed to capture the context of students’ memory and forgetting, and a multi-head attention mechanism is designed to capture the context of students’ behaviors. Finally, based on the captured contexts, the prediction model ContextKT is established. Our prediction model is evaluated on two real educational datasets. The experimental results show that our model is effective and efficient in student performance prediction and that it outperforms the other existing methods. Full article
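The QR-matrix relating questions, knowledge concepts, and responses can be sketched as a sparse count structure built from practice logs. This is an illustrative stand-in, not the paper's implementation; `build_qr_matrix` is a hypothetical helper name.

```python
from collections import defaultdict

def build_qr_matrix(interactions):
    """interactions: iterable of (question_id, concept_id, response) triples,
    with response in {0, 1}. Returns qr[(q, c)] = [wrong_count, right_count],
    a sparse stand-in for the |Q| x |K| x 2 QR-matrix described in the abstract."""
    qr = defaultdict(lambda: [0, 0])
    for q, c, r in interactions:
        qr[(q, c)][r] += 1
    return dict(qr)
```

A model can then read off, per question–concept pair, how often students answered correctly versus incorrectly.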

15 pages, 1530 KiB  
Article
Knowledge Graph Recommendation Model Based on Feature Space Fusion
by Suqi Zhang, Xinxin Wang, Rui Wang, Junhua Gu and Jianxin Li
Appl. Sci. 2022, 12(17), 8764; https://doi.org/10.3390/app12178764 - 31 Aug 2022
Cited by 2 | Viewed by 1406
Abstract
The existing recommendation models based on a knowledge graph simply integrate the behavior features in a user–item bipartite graph and the content features in a knowledge graph; the difference between the two feature spaces is ignored. To solve this problem, this paper presents a new recommendation model named the knowledge graph recommendation model based on feature space fusion (KGRFSF). Specifically, in the behavioral feature space, the behavioral features of users and items are constructed by extracting behavioral features from the user–item bipartite graph. In the content feature space, the content features related to users and items are extracted through an attention mechanism on the knowledge graph, and the content feature vectors of users and items are then constructed. Finally, through the feature space fusion model, the behavioral features and content features are projected into the same preference feature space, completing the fusion of the two feature spaces so that complete vector representations of users and items can be constructed and their vector similarity can be calculated to predict the user's score for the item. This paper applies the presented model to public datasets in the fields of music and film. The experimental results show that KGRFSF can effectively improve recommendation accuracy compared with existing models. Full article
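The core fusion step, projecting behavior and content features into one preference space and scoring by vector similarity, can be sketched as follows. This is a minimal illustration with hand-written projection weights; in the paper the projections are learned, and all names here are assumptions.

```python
import math

def project(vec, weight):
    """Linear projection of a feature vector into the shared preference space."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weight]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def fused_score(behavior_vec, content_vec, w_b, w_c, item_vec):
    """Project both feature spaces into the preference space, sum them into a
    fused user representation, and score an item by cosine similarity."""
    user = [b + c for b, c in zip(project(behavior_vec, w_b),
                                  project(content_vec, w_c))]
    return cosine(user, item_vec)
```

With identity projections, a user whose behavior and content features both point along an item's embedding scores 1.0, while an orthogonal item scores lower.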

15 pages, 723 KiB  
Article
A Sentence Prediction Approach Incorporating Trial Logic Based on Abductive Learning
by Long Ouyang, Ruizhang Huang, Yanping Chen and Yongbin Qin
Appl. Sci. 2022, 12(16), 7982; https://doi.org/10.3390/app12167982 - 09 Aug 2022
Cited by 1 | Viewed by 1716
Abstract
Sentencing prediction is an important application of artificial intelligence in the judicial field. The purpose is to predict the trial sentence for a case based on the description of the case in the adjudication documents. Traditional methods rely exclusively on neural networks, which are trained on a large amount of data to encode textual information and then directly regress or classify the sentence. Such machine learning methods are effective but extremely dependent on the amount of data. We found that external knowledge, such as laws and regulations, remains unused, and that the prediction of sentences in these methods does not fit well with the trial process. Thus, we propose a sentence prediction method that incorporates trial logic based on abductive learning, called SPITL. The logic of the trial is reflected in two aspects: first, the process of sentence prediction is more in line with the logic of the trial; second, external knowledge, such as legal texts, is utilized in the process of sentence prediction. Specifically, we establish a legal knowledge base for the characteristics of theft cases, translating relevant laws and legal interpretations into first-order logic. At the same time, we designed the process of sentence prediction according to the trial process by dividing it into key circumstance element identification and sentence calculation. We fused the legal knowledge base as weakly supervised information into a neural network through the combination of logical inference and machine learning. Furthermore, a sentencing calculation method that is more consistent with the sentencing rules is proposed with reference to the Sentencing Guidelines. Under the same training-data conditions, the model outperformed state-of-the-art models without domain knowledge in experiments on the adjudication documents of theft cases. The results are not only more accurate as a sentencing aid in the judicial trial process but also more interpretable. Full article
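The two-stage design, identifying circumstance elements and then computing the term, can be sketched as a toy guideline-style sentencing calculator: a base term adjusted cumulatively by signed multipliers for each identified circumstance. The base term, labels, and adjustment values below are invented for illustration and are not the paper's rules.

```python
def predict_sentence_months(base_months, circumstances):
    """circumstances: list of (label, adjustment) pairs, where adjustment is a
    signed fraction applied cumulatively to the running term, in the style of
    sentencing guidelines. Returns the final term rounded to whole months."""
    term = float(base_months)
    for label, adjustment in circumstances:
        term *= (1.0 + adjustment)  # e.g. -0.2 reduces the term by 20%
    return round(term)
```

For instance, a 12-month base term with a hypothetical -20% confession adjustment and +10% repeat-offense adjustment yields round(12 × 0.8 × 1.1) = 11 months.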

11 pages, 431 KiB  
Article
Kalman Filter-Based Differential Privacy Federated Learning Method
by Xiaohui Yang and Zijian Dong
Appl. Sci. 2022, 12(15), 7787; https://doi.org/10.3390/app12157787 - 02 Aug 2022
Cited by 2 | Viewed by 1717
Abstract
The data privacy leakage problem of federated learning has attracted widespread attention. Using differential privacy can protect the data privacy of each node in federated learning, but adding noise to the model parameters reduces the accuracy and convergence efficiency of the model. A Kalman Filter-based Differential Privacy Federated Learning method (KDP-FL) is proposed to solve this problem; it uses Kalman filtering to reduce the impact of the added noise on the model. Furthermore, the effectiveness of the proposed method is verified under both non-IID and IID data distributions. The experiments show that the accuracy of the proposed method is improved by 0.3–4.5% compared to differential privacy federated learning. Full article
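The core idea, treating each noisy DP-perturbed parameter as a measurement and filtering it over rounds, can be sketched with a scalar Kalman filter. This illustrates the general technique, not the paper's exact formulation; the process and measurement variances are assumed values.

```python
def kalman_smooth(noisy_params, process_var=1e-4, noise_var=1.0):
    """Scalar Kalman filter applied to one parameter's noisy sequence across
    federated rounds: predict (inflate uncertainty), then update toward the
    new noisy measurement weighted by the Kalman gain."""
    x, p = noisy_params[0], 1.0      # initial state estimate and variance
    smoothed = [x]
    for z in noisy_params[1:]:
        p += process_var             # predict: parameter drifts slightly
        k = p / (p + noise_var)      # Kalman gain balances estimate vs. measurement
        x += k * (z - x)             # update toward the noisy observation
        p *= (1 - k)                 # shrink the posterior variance
        smoothed.append(x)
    return smoothed
```

On a parameter whose true value is constant, the filtered sequence stays closer to the truth than the raw DP-noised values, which is the mechanism KDP-FL relies on.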

15 pages, 588 KiB  
Article
Neural Graph Similarity Computation with Contrastive Learning
by Shengze Hu, Weixin Zeng, Pengfei Zhang and Jiuyang Tang
Appl. Sci. 2022, 12(15), 7668; https://doi.org/10.3390/app12157668 - 29 Jul 2022
Viewed by 1997
Abstract
Computing the similarity between graphs is a longstanding and challenging problem with many real-world applications. Recent years have witnessed a rapid increase in neural-network-based methods, which project graphs into embedding space and devise end-to-end frameworks to learn to estimate graph similarity. Nevertheless, these solutions usually design complicated networks to capture the fine-grained interactions between graphs, and hence have low efficiency. Additionally, they rely on labeled data for training the neural networks and overlook the useful information hidden in the graphs themselves. To address the aforementioned issues, in this work, we put forward a contrastive neural graph similarity learning framework, Conga. Specifically, we utilize vanilla graph convolutional networks to generate the graph representations and capture the cross-graph interactions via a simple multilayer perceptron. We further devise an unsupervised contrastive loss to discriminate the graph embeddings and guide the training process by learning more expressive entity representations. Extensive experiment results on public datasets validate that our proposal has more robust performance and higher efficiency compared with state-of-the-art methods. Full article
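The unsupervised contrastive objective can be illustrated with an NT-Xent-style loss over graph embeddings: pull an embedding toward its positive (e.g., an augmented view of the same graph) and push it away from negatives. This is a generic sketch of the technique named in the abstract; the temperature and sampling scheme are assumptions.

```python
import math

def nt_xent_loss(anchor, positive, negatives, tau=0.5):
    """Contrastive loss for one anchor embedding: -log of the softmax weight
    assigned to the positive among positive + negatives, with cosine similarity
    scaled by temperature tau."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))
    pos = math.exp(cos(anchor, positive) / tau)
    neg = sum(math.exp(cos(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

A well-aligned positive yields a small loss; a positive that looks like a negative yields a large one, which is the gradient signal that makes the embeddings more discriminative without labels.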

15 pages, 2556 KiB  
Article
Research on the Construction of Malware Variant Datasets and Their Detection Method
by Faming Lu, Zhaoyang Cai, Zedong Lin, Yunxia Bao and Mengfan Tang
Appl. Sci. 2022, 12(15), 7546; https://doi.org/10.3390/app12157546 - 27 Jul 2022
Cited by 3 | Viewed by 1899
Abstract
Malware detection is of great significance for maintaining the security of information systems. Malware obfuscation techniques and malware variants are increasingly emerging, but their samples and API (application programming interface) sequences are difficult to obtain. This poses difficulties for the development of malware variant detection models. To address this issue, in this paper we first generated a malware variant dataset using obfuscation techniques based on the disassembly and decompilation of malware. Then, an API call dataset of these malware variants was constructed through sandboxing. Compared to similar work, the malware variants and their obfuscated API call sequences generated in this paper were all runnable. After that, taking a public API call sequence dataset of obfuscation-free malware as input, a BERT (bidirectional encoder representations from transformers) pretrained model for malware detection was constructed. To enhance the ability of this pretrained model to handle obfuscation and variants, we used adversarial training to improve the robustness and generalization of the detection model under obfuscation. As the experimental results show, the proposed scheme improves the classification performance on malware variants under obfuscation; the accuracy of malware variant classification was close to that of the unobfuscated case. Full article
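The adversarial-training ingredient can be sketched with a Fast Gradient Method (FGM) perturbation on embeddings, a standard way to adversarially train text classifiers. Whether the paper uses FGM specifically is an assumption; the epsilon value and function name are illustrative.

```python
import math

def fgm_perturb(embedding, grad, eps=0.5):
    """FGM: move an embedding by eps along the L2-normalized gradient of the
    loss, producing the adversarial example used for an extra training pass."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm == 0.0:
        return list(embedding)  # zero gradient: nothing to perturb
    return [e + eps * g / norm for e, g in zip(embedding, grad)]
```

During training, the model is optimized on both the clean and the perturbed embeddings, which encourages robustness to the small semantic-preserving changes that obfuscation introduces.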

18 pages, 1936 KiB  
Article
Evolution of the Complex Supply Chain Network Based on Deviation from the Power-Law Distribution
by Xiaodong Qian and Yufan Dai
Appl. Sci. 2022, 12(15), 7483; https://doi.org/10.3390/app12157483 - 26 Jul 2022
Cited by 1 | Viewed by 1256
Abstract
The power-law distribution is an important descriptive characteristic of scale-free complex supply chain networks (SCN). The power-law distribution and deviation phenomena of SCN nodes are explored in combination with complex network theory, making it important to accurately characterize the dynamic characteristics of network evolution on a time scale. Based on an analysis of the topological structure and evolutionary characteristics of the small-world network and the scale-free SCN, the single and double power-law distributions and evolutionary dynamic characteristics of the complex SCN are further analyzed, along with the deviation phenomenon of the power-law distribution. On the premise of setting three parameters of the network evolution process (new nodes, new edges, and node reconnection), a power-law distribution deviation evolution model under a complex network environment is constructed, and the parameters of the SCN evolution model are then analyzed. Combining numerical simulation and model simulation, the evolution of SCN with two kinds of power-law deviation is analyzed. The results show that the deviation of the two-stage power-law distribution is not caused by the process of adding nodes or connecting edges, although these have a certain influence on the change of the power exponent, and that the deviation of the power-law distribution in SCN increases as the evolution time extends. When p1=0, the single power-law distribution of SCN tends toward a δ distribution when the time step is large enough. Full article
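The evolution ingredients named above (new nodes and new edges attached preferentially to well-connected nodes) can be illustrated with a minimal Barabási–Albert-style growth model, whose degree sequence follows an approximate power law. This is a generic sketch only; the paper's model additionally includes node reconnection and the deviation parameters.

```python
import random

def grow_scale_free(n, m=2, seed=7):
    """Grow a network of n nodes by preferential attachment: each new node
    attaches m edges, picking targets with probability proportional to their
    current degree. Returns the degree of every node."""
    random.seed(seed)
    degree = [0] * n
    stubs = []                         # node id repeated once per incident edge
    for v in range(m, n):
        if stubs:
            targets = set()
            while len(targets) < m:    # degree-proportional sampling via stubs
                targets.add(random.choice(stubs))
        else:
            targets = set(range(m))    # bootstrap: connect to the seed nodes
        for t in targets:
            degree[v] += 1
            degree[t] += 1
            stubs.extend([v, t])
    return degree
```

The resulting network develops hubs whose degree far exceeds the average, the signature of a scale-free SCN; deviations from the pure power law can then be studied by perturbing the attachment rule.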

19 pages, 2797 KiB  
Article
Knowledge Graph Recommendation Model Based on Adversarial Training
by Suqi Zhang, Ningjing Zhang, Shuai Fan, Junhua Gu and Jianxin Li
Appl. Sci. 2022, 12(15), 7434; https://doi.org/10.3390/app12157434 - 24 Jul 2022
Cited by 5 | Viewed by 1376
Abstract
The recommendation model based on the knowledge graph (KG) alleviates the problem of data sparsity in the recommendation to a certain extent and further improves the accuracy, diversity, and interpretability of recommendations. Therefore, the knowledge graph recommendation model has become a major research topic, and the question of how to utilize the entity and relation information fully and effectively in KG has become the focus of research. This paper proposes a knowledge graph recommendation model based on adversarial training (ATKGRM), which can dynamically and adaptively adjust the knowledge graph aggregation weight based on adversarial training to learn the features of users and items more reasonably. First, the generator adopts a novel long- and short-term interest model to obtain user features and item features and generates a high-quality set of candidate items. Then, the discriminator discriminates candidate items by comparing the user’s scores of positive items, negative items, and candidate items. Finally, experimental studies on five real-world datasets with multiple knowledge graph recommendation models and multiple adversarial training recommendation models prove the effectiveness of our model. Full article
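The generator's long- and short-term interest combination can be sketched as a weighted mix of the user's full-history embedding average and a recent-window average. This is purely illustrative; the paper's interest model is learned, and `alpha` and `window` are assumed hyperparameters.

```python
def user_interest(history, alpha=0.7, window=3):
    """history: chronological list of item embedding vectors the user interacted
    with. Returns a user vector mixing short-term interest (mean of the last
    `window` items) and long-term interest (mean of the whole history)."""
    dim = len(history[0])
    long_term = [sum(v[i] for v in history) / len(history) for i in range(dim)]
    recent = history[-window:]
    short_term = [sum(v[i] for v in recent) / len(recent) for i in range(dim)]
    return [alpha * s + (1 - alpha) * l for s, l in zip(short_term, long_term)]
```

A discriminator can then score candidate items against this fused vector alongside known positive and negative items, which is the adversarial signal the abstract describes.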

12 pages, 296 KiB  
Article
Public-Key Cryptography Based on Tropical Circular Matrices
by Huawei Huang, Chunhua Li and Lunzhi Deng
Appl. Sci. 2022, 12(15), 7401; https://doi.org/10.3390/app12157401 - 23 Jul 2022
Cited by 5 | Viewed by 1578
Abstract
Some public-key cryptosystems based on the tropical semiring have been proposed in recent years because of their efficiency: tropical multiplication is ordinary addition of numbers, and the tropical semiring involves no ordinary multiplication of numbers. However, most of these tropical cryptosystems have security defects because they adopt a public matrix to construct commutative semirings. This paper proposes new public-key cryptosystems based on tropical circular matrices. The security of the cryptosystems relies on the NP-hard problem of solving tropical nonlinear systems over the integers. Since the commutative semiring of circular matrices used cannot be expressed by a known matrix, the cryptosystems can resist KU attacks. There is no tropical matrix addition operation in the cryptosystems, so they can also resist RM attacks. The new cryptosystems can be considered potential post-quantum cryptosystems. Full article
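The min-plus (tropical) matrix product and the commutativity of circular (circulant) matrices, the property that supplies the commuting key material in such schemes, can be checked with a short sketch. This illustrates the algebra only; the actual key-agreement protocol is defined in the paper.

```python
def trop_mul(A, B):
    """Tropical (min-plus) matrix product:
    (A (x) B)[i][j] = min over k of (A[i][k] + B[k][j])."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def circulant(first_row):
    """Circulant matrix: each row is the previous row cyclically shifted right."""
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]
```

Because a circulant is a tropical polynomial in the cyclic-shift matrix, any two circulants of the same size commute under the tropical product, which is what lets two parties combine private circulant matrices into the same shared value without publishing a commuting matrix.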