Artificial Intelligence and Data Science

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 30 April 2024 | Viewed by 19020

Special Issue Editors

Dr. Shuo Yu
School of Computer Science and Technology, Dalian University of Technology, Dalian 116078, China
Interests: data science; network science; knowledge science; anomaly detection

Dr. Feng Xia
Institute of Innovation, Science and Sustainability, Federation University Australia, Ballarat, VIC 3353, Australia
Interests: data science; artificial intelligence; graph learning; anomaly detection; systems engineering

Special Issue Information

Dear Colleagues,

Data science provides the fundamental theory and methodology of data mining. The emergence of artificial intelligence (AI) technology has broadened and deepened data science, which in turn benefits a variety of applications, including cyber security, fraud detection, healthcare, and transportation. Based on a mixture of analysis, modeling, computation, and learning, hybrid approaches integrating AI technology have been proposed to study the process from data to information, to knowledge, and to decisions. The development of AI technology will help clarify the theoretical boundaries of data science and provide new opportunities for its continued development. At the same time, advances in data science and the emergence of new intelligence paradigms will facilitate the application of AI in many scenarios.

Although big data and computational intelligence technologies have made great progress in many engineering applications, the theoretical basis and technical mechanisms of AI and data science are still at an early stage. A single-point breakthrough in either AI or data science can hardly provide sustainable support for big data-driven intelligent applications. The fundamental issues of AI and data science therefore demand deep and urgent consideration. This Special Issue accordingly aims to enhance or reconstruct the theoretical cornerstones of AI and data science so as to promote the continuous progress and leapfrog development of real-world applications. Specifically, this Special Issue will try to answer the following questions. (1) How can we break the boundaries among disciplines, methodologies, and theories to further advance AI and data science technologies? (2) What will the new paradigm of AI and data science be? (3) How can AI and data science technologies further benefit real-world applications? Topics of interest for this Special Issue address the application of AI and data science methods and include, but are not limited to:

  • Knowledge-driven AI technologies;
  • Advanced deep learning approaches such as fairness learning;
  • Security, trust, and privacy;
  • Few-shot learning, one-shot learning, and zero-shot learning;
  • Data governance strategies and technologies;
  • Intelligent computing such as auto machine learning, lifelong learning, etc.;
  • Urgent applications such as anomaly detection;
  • Complexity theory;
  • High-performance computing;
  • Big data technologies and applications;
  • Data analytics and visualization;
  • Real-world AI and data science applications such as healthcare, transportation, etc.

Dr. Shuo Yu
Dr. Feng Xia
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • data science
  • deep learning
  • big data
  • data mining

Published Papers (16 papers)


Research

19 pages, 2195 KiB  
Article
A Novel Method for Boosting Knowledge Representation Learning in Entity Alignment through Triple Confidence
by Xiaoming Zhang, Tongqing Chen and Huiyong Wang
Mathematics 2024, 12(8), 1214; https://doi.org/10.3390/math12081214 - 18 Apr 2024
Viewed by 293
Abstract
Entity alignment is an important task in knowledge fusion that aims to link entities with the same real-world identity across two knowledge graphs. However, some noise is inevitably introduced during knowledge graph construction, which affects the results of entity alignment. Triple confidence calculation can quantify the correctness of triples and thus reduce the impact of this noise. We therefore designed a method to calculate the confidence of triples and applied it to the knowledge representation learning phase of entity alignment. The method computes triple confidence from the pairing rates of the three angles between entities and relations. Specifically, it uses these pairing rates as features, which are fed into a feedforward neural network trained to output the triple confidence. Moreover, we introduced triple confidence into knowledge representation learning methods to improve their performance in entity alignment. For the graph neural network-based method GCN, we incorporated confidence when calculating the adjacency matrix, and for the translation-based method TransE, we proposed a strategy to dynamically adjust the margin value in the loss function based on confidence. These two methods were then applied to entity alignment, and the experimental results demonstrate that, compared with knowledge representation learning methods that do not integrate confidence, the confidence-based methods achieved superior performance in the entity alignment task. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
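
A minimal sketch of the margin-adjustment idea above: a TransE-style margin ranking loss whose margin is scaled per triple by a confidence score. The linear scaling rule, tensor shapes, and names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def transe_score(h, r, t):
    # TransE plausibility: smaller ||h + r - t|| means a more plausible triple
    return torch.norm(h + r - t, p=2, dim=-1)

def confidence_margin_loss(pos_score, neg_score, confidence, base_margin=1.0):
    # dynamic margin: high-confidence triples demand a larger separation
    # from their corrupted negatives (the scaling rule is an assumption)
    margin = base_margin * confidence
    return F.relu(pos_score - neg_score + margin).mean()

# toy usage: batch of 4 triples with 50-d embeddings
h, r, t, t_neg = (torch.randn(4, 50, requires_grad=True) for _ in range(4))
conf = torch.tensor([0.9, 0.7, 0.4, 0.95])  # e.g., output of the confidence network
loss = confidence_margin_loss(transe_score(h, r, t), transe_score(h, r, t_neg), conf)
loss.backward()
```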

16 pages, 1586 KiB  
Article
Semantic-Enhanced Knowledge Graph Completion
by Xu Yuan, Jiaxi Chen, Yingbo Wang, Anni Chen, Yiou Huang, Wenhong Zhao and Shuo Yu
Mathematics 2024, 12(3), 450; https://doi.org/10.3390/math12030450 - 31 Jan 2024
Viewed by 802
Abstract
Knowledge graphs (KGs) serve as structured representations of knowledge, comprising entities and relations. KGs are inherently incomplete and sparse, and thus in strong need of completion. Although many knowledge graph embedding models have been designed for knowledge graph completion, they predominantly focus on capturing observable correlations between entities. Due to the sparsity of KGs, potential semantic correlations are challenging to capture. To tackle this problem, we propose a model entitled semantic-enhanced knowledge graph completion (SE-KGC). SE-KGC addresses the issue by incorporating predefined semantic patterns, enabling the capture of semantic correlations between entities and enhancing features for representation learning. To implement this approach, we employ a multi-relational graph convolutional network encoder, which effectively encodes the KG, and then utilize a scoring decoder to evaluate triplets. Experimental results demonstrate that our SE-KGC model outperforms other state-of-the-art methods in link-prediction tasks across three datasets; specifically, SE-KGC achieved improvements of 11.7%, 1.05%, and 2.30% in MRR over the baselines on these datasets. Furthermore, we present a comprehensive analysis of the contributions of different semantic patterns and find that entities with higher connectivity play a pivotal role in capturing and characterizing semantic information. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
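
To make the encoder-decoder pipeline concrete, here is a minimal sketch of a scoring decoder ranking candidate tail entities, using a DistMult-style score as a stand-in (the paper's actual decoder, and all dimensions here, are assumptions). The last lines show how a gold entity's rank turns into the MRR contribution reported above.

```python
import torch

def distmult_score(h, r, t):
    # DistMult-style decoder: higher score = more plausible (h, r, t) triple
    return (h * r * t).sum(dim=-1)

# entity/relation embeddings as a multi-relational GCN encoder might produce them
num_entities, num_relations, dim = 100, 7, 64
ent = torch.randn(num_entities, dim)
rel = torch.randn(num_relations, dim)

# rank every candidate tail for a (head, relation) query
head, relation = 3, 2
scores = distmult_score(ent[head], rel[relation], ent)  # broadcasts over all entities

# a gold tail's rank contributes 1/rank to the MRR metric
gold = 42
rank = int((scores > scores[gold]).sum()) + 1
mrr_contribution = 1.0 / rank
```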

21 pages, 640 KiB  
Article
Geometric Matrix Completion via Graph-Based Truncated Norm Regularization for Learning Resource Recommendation
by Yazhi Yang, Jiandong Shi, Siwei Zhou and Shasha Yang
Mathematics 2024, 12(2), 320; https://doi.org/10.3390/math12020320 - 18 Jan 2024
Viewed by 713
Abstract
In the competitive landscape of online learning, developing robust and effective learning resource recommendation systems is paramount, yet the field faces challenges due to high-dimensional, sparse matrices and intricate user–resource interactions. Our study focuses on geometric matrix completion (GMC) and introduces a novel approach, graph-based truncated norm regularization (GBTNR), to address these challenges. GBTNR incorporates truncated Dirichlet norms for both user and item graphs, enhancing the model's ability to handle complex data structures. The method combines the benefits of truncated norm regularization with the analysis of user–user and resource–resource graph relationships, leading to a significant improvement in recommendation performance. By bridging the gap between theoretical robustness and practical applicability, GBTNR offers a substantial step forward in learning resource recommendation. This advancement is particularly relevant to online education, where understanding and adapting to diverse and intricate user–resource interactions is key to developing truly personalized learning experiences. Our work also includes a thorough theoretical analysis, complete with proofs, establishing the convergence of the GMC-GBTNR model and thus reinforcing its reliability in practical applications. Empirical validation through extensive experiments on diverse real-world datasets affirms the model's superior performance over existing methods, advancing personalized education and deepening our understanding of learner–resource interaction dynamics. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
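
As a rough illustration of graph-regularized matrix completion, the sketch below evaluates an objective that couples a data-fit term on observed entries with Dirichlet (graph-smoothness) energies on the user and item graphs. It uses the plain Dirichlet norm rather than the paper's truncated variant, and every matrix here is synthetic.

```python
import numpy as np

def laplacian(A):
    # unnormalized graph Laplacian L = D - A
    return np.diag(A.sum(axis=1)) - A

def dirichlet_energy(X, L):
    # tr(X^T L X): small when the rows of X vary smoothly over the graph
    return np.trace(X.T @ L @ X)

def gmc_objective(X, M, mask, L_user, L_item, alpha=0.1, beta=0.1):
    # fit the observed entries of M while keeping X smooth on both graphs
    fit = np.sum(mask * (X - M) ** 2)
    return fit + alpha * dirichlet_energy(X, L_user) + beta * dirichlet_energy(X.T, L_item)

rng = np.random.default_rng(0)
M = rng.random((20, 15))              # user x resource ratings
mask = rng.random(M.shape) < 0.3      # only 30% of entries observed
A_user = (rng.random((20, 20)) < 0.2).astype(float)  # toy user-user graph
A_item = (rng.random((15, 15)) < 0.2).astype(float)  # toy item-item graph
X = rng.random(M.shape)               # candidate completion
J = gmc_objective(X, M, mask, laplacian(A_user), laplacian(A_item))
```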

20 pages, 3422 KiB  
Article
Progressively Multi-Scale Feature Fusion for Image Inpainting
by Wu Wen, Tianhao Li, Amr Tolba, Ziyi Liu and Kai Shao
Mathematics 2023, 11(24), 4908; https://doi.org/10.3390/math11244908 - 8 Dec 2023
Viewed by 751
Abstract
The rapid advancement of Wise Information Technology of med (WITMED) has made the integration of traditional Chinese medicine tongue diagnosis and computer technology an increasingly significant area of research. Doctors obtain patients' tongue images to make further diagnoses, but a tongue image may be damaged during collection. Due to the extremely complex texture of the tongue and significant individual differences, existing methods fail to obtain sufficient feature information, which results in inaccurately inpainted tongue images. To address this problem, we propose a recurrent tongue image inpainting algorithm based on multi-scale feature fusion, called the Multi-Scale Fusion Module and Recurrent Attention Mechanism Network (MSFM-RAM-Net). We first propose the Multi-Scale Fusion Module (MSFM), which preserves the feature information of tongue images at different scales and enhances structural consistency. To simultaneously accelerate the inpainting process and enhance the quality of the results, a Recurrent Attention Mechanism (RAM) is proposed. RAM focuses the network's attention on important areas and uses known information to gradually inpaint the image, avoiding redundant feature information and the texture confusion caused by large missing areas. Finally, we establish a tongue image dataset and use it to qualitatively and quantitatively evaluate the MSFM-RAM-Net. The results show that the MSFM-RAM-Net inpaints tongue images more effectively, with PSNR and SSIM increasing by 2.1% and 3.3%, respectively. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
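
For readers unfamiliar with multi-scale fusion blocks, here is a minimal PyTorch sketch of the general pattern: parallel convolutions with different receptive fields whose outputs are concatenated and projected back, with a residual connection. The kernel sizes, channel counts, and residual design are illustrative assumptions, not the published MSFM.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Parallel convolutions at several spatial scales, fused by a 1x1 projection."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2)
            for k in (1, 3, 5)                  # three receptive-field sizes
        ])
        self.project = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]  # same H x W per scale
        return self.project(torch.cat(feats, dim=1)) + x  # residual fusion

x = torch.randn(1, 32, 64, 64)   # e.g., a feature map of a masked tongue image
y = MultiScaleFusion(32)(x)      # shape preserved: (1, 32, 64, 64)
```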

20 pages, 1332 KiB  
Article
Aggregation Methods Based on Quality Model Assessment for Federated Learning Applications: Overview and Comparative Analysis
by Iuliana Bejenar, Lavinia Ferariu, Carlos Pascal and Constantin-Florin Caruntu
Mathematics 2023, 11(22), 4610; https://doi.org/10.3390/math11224610 - 10 Nov 2023
Viewed by 699
Abstract
Federated learning (FL) offers the possibility of collaboration between multiple devices while maintaining data confidentiality, as required by the General Data Protection Regulation (GDPR). Though FL can keep local data private, it may encounter problems when dealing with non-independent and identically distributed (non-IID) data, insufficient local training samples, or cyber-attacks. This paper introduces algorithms that provide a reliable aggregation of the global model by investigating the accuracy of the models received from clients, reducing the influence of less confident nodes that were potentially attacked or unable to perform successful training. The analysis includes the proposed FedAcc and FedAccSize algorithms, together with their new extension based on Lasso regression, FedLasso. FedAcc and FedAccSize set the confidence in each client based only on the local models' accuracy, while FedLasso exploits additional details related to predictions, such as predicted class probabilities, to support a refined aggregation. The ability of the proposed algorithms to protect against intruders or underperforming clients is demonstrated experimentally in testing scenarios involving both independent and identically distributed (IID) and non-IID data. The comparison with the established FedAvg and FedAvgM algorithms shows that exploiting the quality of the client models is essential for reliable aggregation, enabling rapid and robust improvement of the global model. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
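
A minimal sketch of the accuracy-weighted aggregation idea follows: client updates are averaged with coefficients proportional to each client's validation accuracy, so weak or compromised clients contribute less. The weighting rule, cutoff, and data layout are assumptions; the published FedAccSize and FedLasso variants additionally use sample counts and Lasso regression.

```python
import numpy as np

def fed_acc_aggregate(client_weights, client_accuracies, min_acc=0.5):
    """Average client weight tensors with coefficients proportional to
    validation accuracy; clients below min_acc are excluded entirely."""
    acc = np.asarray(client_accuracies, dtype=float)
    acc = np.where(acc >= min_acc, acc, 0.0)   # drop low-confidence clients
    coef = acc / acc.sum()
    n_tensors = len(client_weights[0])
    return [sum(c * w[k] for c, w in zip(coef, client_weights))
            for k in range(n_tensors)]

# three clients, two weight tensors each; the 30%-accuracy client is excluded
clients = [[np.full(4, v), np.full(2, v)] for v in (1.0, 2.0, 3.0)]
global_w = fed_acc_aggregate(clients, client_accuracies=[0.90, 0.85, 0.30])
```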

17 pages, 6778 KiB  
Article
Research on Intelligent Control Method of Launch Vehicle Landing Based on Deep Reinforcement Learning
by Shuai Xue, Hongyang Bai, Daxiang Zhao and Junyan Zhou
Mathematics 2023, 11(20), 4276; https://doi.org/10.3390/math11204276 - 13 Oct 2023
Viewed by 1054
Abstract
A launch vehicle needs to adapt to a complex flight environment, and traditional guidance and control algorithms can hardly deal with multi-factor uncertainties due to their high dependency on control models. To solve this problem, this paper designs a new intelligent flight control method for a rocket based on a deep reinforcement learning algorithm driven by knowledge and data. The Markov decision process of the rocket landing phase is established by designing a reward function that combines the terminal-constraint return of the launch vehicle with the cumulative return of the rocket's flight process. Meanwhile, to improve training speed for the landing process and to enhance the generalization ability of the model, the landing guidance strategy network is built as a long short-term memory (LSTM) network combined with a fully connected layer. Proximal policy optimization (PPO) is used to train the reinforcement learning network parameters, combined with behavioral cloning (BC) as an imitation-learning pre-training step. Notably, the rocket-borne environment is transplanted to the Nvidia Jetson TX2 embedded platform for comparative testing and verification of this intelligent model, which is then used to generate real-time control commands guiding the actual flying and landing process of the rocket. Further, the results of convex landing optimization and the proposed method are compared to prove the latter's effectiveness. The simulation results show that the intelligent control method meets the landing accuracy requirements of the launch vehicle with a fast convergence speed of 84 steps, and the decision time is only 2.5 ms. Additionally, it is capable of online autonomous decision making when deployed on the embedded platform. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
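
To illustrate how a terminal constraint and a process return can be combined in a single reward, here is a small sketch of such a shaping function. The state layout, weights, and exponential terminal bonus are illustrative assumptions, not the paper's actual reward design.

```python
import numpy as np

def landing_reward(state, action, done, target=np.zeros(3),
                   w_process=0.01, w_terminal=10.0):
    # per-step process cost: penalize control effort (e.g., thrust magnitude)
    reward = -w_process * float(np.linalg.norm(action))
    if done:
        # terminal-constraint return: reward touchdown close to the target
        position_error = float(np.linalg.norm(state[:3] - target))
        reward += w_terminal * np.exp(-position_error)
    return reward

# one terminal step: near-perfect touchdown with modest thrust
r = landing_reward(state=np.array([0.1, 0.0, 0.05, 0, 0, 0]),
                   action=np.array([0.2, 0.0, 0.9]), done=True)
```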

24 pages, 2647 KiB  
Article
How Do Citizens View Digital Government Services? Study on Digital Government Service Quality Based on Citizen Feedback
by Xin Ye, Xiaoyan Su, Zhijun Yao, Lu-an Dong, Qiang Lin and Shuo Yu
Mathematics 2023, 11(14), 3122; https://doi.org/10.3390/math11143122 - 14 Jul 2023
Cited by 2 | Viewed by 2157
Abstract
Research on government service quality can help ensure the success of digital government services and has been the focus of numerous studies proposing different frameworks and approaches. Most existing studies are based on traditional researcher-led methods, which struggle to capture the needs of citizens. In this paper, a citizen-feedback-based analysis framework is proposed to explore citizen demands and analyze the service quality of digital government. Citizen feedback data are a direct expression of citizens' demands, so the framework can help obtain more targeted management insights and improve citizen satisfaction. Efficient machine learning methods used in the framework make data collection and processing more efficient, especially for large-scale internet data. Using user feedback data crawled from the Q&A e-government portal of Luzhou, Sichuan Province, China, we conducted experiments on the proposed framework to verify its feasibility. From citizens' online feedback on Q&A services, we extracted five service quality factors: efficiency, quality, attitude, compliance, and execution of response. The analysis of these five factors yields management insights that can guide improvements in Q&A services. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

18 pages, 1440 KiB  
Article
Visual Analytics Using Machine Learning for Transparency Requirements
by Samiha Fadloun, Khadidja Bennamane, Souham Meshoul, Mahmood Hosseini and Kheireddine Choutri
Mathematics 2023, 11(14), 3091; https://doi.org/10.3390/math11143091 - 13 Jul 2023
Viewed by 1002
Abstract
Problem-solving applications require users to exercise caution in their data usage practices. Prior to installing these applications, users are encouraged to read and comprehend the terms of service, which address important aspects such as data privacy, processes, and policies (referred to as information elements). However, these terms are often lengthy and complex, making it challenging for users to fully grasp their content. Additionally, existing transparency analytics tools typically rely on manual extraction of information elements, a time-consuming process. To address these challenges, this paper proposes a novel approach that combines information visualization and machine learning to automate the retrieval of information elements. The methodology involves creating and labeling a dataset derived from multiple software terms of use. Machine learning models, including naïve Bayes, BART, and LSTM, are utilized for the classification of information elements and for text summarization. Furthermore, the proposed approach is integrated into our existing visualization tool, TranspVis, to enable the automatic detection and display of software information elements. The system is thoroughly evaluated using a database-connected tool, incorporating various metrics and expert opinions. The results demonstrate the promising potential of our approach as an initial step in this field. Our solution not only addresses the challenge of extracting information elements from complex terms of service but also provides a foundation for future research in this area. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
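
As a toy illustration of classifying terms-of-use clauses into information-element categories, the sketch below trains a TF-IDF plus naïve Bayes pipeline with scikit-learn. The clauses and labels are invented for the example and are not from the paper's dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# invented terms-of-use clauses labeled with information-element categories
clauses = [
    "We may share your personal data with third parties.",
    "Data is retained for twelve months after account deletion.",
    "You may request a copy of your stored information at any time.",
    "Usage statistics are collected to improve the service.",
]
labels = ["sharing", "retention", "access", "collection"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(clauses, labels)
print(clf.predict(["Your information can be disclosed to partners."]))
```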

13 pages, 1167 KiB  
Article
STAB-GCN: A Spatio-Temporal Attention-Based Graph Convolutional Network for Group Activity Recognition
by Fang Liu, Chunhua Tian, Jinzhong Wang, Youwei Jin, Luxiang Cui and Ivan Lee
Mathematics 2023, 11(14), 3074; https://doi.org/10.3390/math11143074 - 12 Jul 2023
Viewed by 919
Abstract
Group activity recognition is a central theme in many domains, such as sports video analysis, CCTV surveillance, sports tactics, and social scenario understanding. However, embedding actors' relations in a multi-person scenario remains challenging due to occlusion, movement, and lighting. Current studies mainly focus on collective and individual local features from the spatial and temporal perspectives, which results in inefficiency, low robustness, and low portability. To this end, a Spatio-Temporal Attention-Based Graph Convolution Network (STAB-GCN) model is proposed to effectively embed deep, complex relations between actors. Specifically, we leverage the attention mechanism to explore spatio-temporal latent relations between actors, capturing spatio-temporal contextual information and improving individual and group embedding. Then, we feed actor relation graphs built from group activity videos into the proposed STAB-GCN for further inference, which selectively attends to the features relevant to the relation extraction task while ignoring irrelevant ones. We performed experiments on three available group activity datasets, achieving better performance than state-of-the-art methods. The results verify the validity of our proposed model and highlight the contribution of spatio-temporal attention-based graph embedding to group activity recognition. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
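
To sketch how attention can define the actor relation graph a GCN consumes, the toy code below builds a soft adjacency matrix from pairwise scaled dot-product attention over actor features and applies one graph-convolution step. The feature sizes and single-layer design are illustrative assumptions, not the published STAB-GCN.

```python
import torch
import torch.nn.functional as F

def attention_adjacency(actor_feats):
    # pairwise scaled dot-product attention over actor features yields a
    # soft (weighted) adjacency matrix for the actor relation graph
    d = actor_feats.shape[-1]
    scores = actor_feats @ actor_feats.T / d ** 0.5
    return F.softmax(scores, dim=-1)

def gcn_layer(A, X, W):
    # one graph-convolution step over the attention-weighted graph
    return torch.relu(A @ X @ W)

actors = torch.randn(6, 32)        # 6 detected actors, 32-d features for one frame
A = attention_adjacency(actors)    # (6, 6) latent relation weights
H = gcn_layer(A, actors, torch.randn(32, 16))   # refined actor embeddings
```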

29 pages, 13241 KiB  
Article
Predicting Popularity of Viral Content in Social Media through a Temporal-Spatial Cascade Convolutional Learning Framework
by Zhixuan Xu and Minghui Qian
Mathematics 2023, 11(14), 3059; https://doi.org/10.3390/math11143059 - 11 Jul 2023
Cited by 1 | Viewed by 1873
Abstract
The viral spread of online content can lead to unexpected consequences such as extreme opinions about a brand or consumers' enthusiasm for a product. This makes predicting viral content's future popularity an important problem, especially for digital marketers as well as for managers of social platforms. It is not surprising that conventional methods, which rely heavily on either hand-crafted features or unrealistic assumptions, are insufficient for this challenging problem. Even state-of-the-art graph-based approaches either scale poorly to large cascades or cannot explain what spread mechanisms the model has learned. This paper presents a temporal-spatial cascade convolutional learning framework called ViralGCN, not only to address the challenges of existing approaches but also to provide some insight into the actual mechanisms of viral spread from the perspective of artificial intelligence. We conduct experiments on a real-world dataset (predicting the retweet popularity of micro-blogs on Weibo). Compared to existing approaches, ViralGCN possesses the following advantages: a flexible input cascade graph size, a coherent method for processing both structural and temporal information, and an intuitive, interpretable deep learning architecture. Moreover, exploring the learned features provides valuable clues for managers to understand the elusive mechanisms of viral spread and to devise appropriate strategies at early stages. Using the visualization method, our approach finds that both broadcast and structural virality contribute to online content going viral; that a cascade with a gradual-descent or ascent-then-descent evolving pattern at the early stage is more likely to gain significant eventual popularity; and that even the timing of users participating in the cascade has an effect on future popularity growth. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

20 pages, 3655 KiB  
Article
Advance Landslide Prediction and Warning Model Based on Stacking Fusion Algorithm
by Zian Lin, Yuanfa Ji and Xiyan Sun
Mathematics 2023, 11(13), 2833; https://doi.org/10.3390/math11132833 - 24 Jun 2023
Viewed by 1016
Abstract
In landslide disaster warning, a variety of monitoring and warning methods are commonly adopted. However, most cannot provide information in advance, and serious losses are often incurred when landslides occur. To extend the warning time before a landslide, an innovative advance landslide prediction and warning model based on a stacking fusion algorithm, using Baishuihe landslide data, is proposed in this paper. The Baishuihe landslide area, located in the Three Gorges region of China, is characterized by unique soil and a subtropical monsoon climate. Based on Baishuihe historical data and real-time monitoring of the landslide state, four warning-level thresholds and trigger conditions for each warning level are established. The model effectively integrates the results of multiple prediction and warning submodels to provide predictions and advance warnings through the fusion of two stacking learning layers. The possibility of using a risk-priority strategy as a substitute for the stacking model is also discussed. Finally, an experimental simulation verifies that the proposed model can not only provide advance landslide warning but also effectively reduce the frequency of false warnings, mitigating the issues of traditional single models. The stacking model can effectively support disaster prevention and reduction and provide a scientific basis for land use management. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
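
For readers unfamiliar with stacking fusion, the sketch below fits a two-layer stacked ensemble with scikit-learn: base learners feed a meta-learner that fuses their outputs. The synthetic features, choice of base models, and four classes (standing in for the four warning levels) are assumptions, not the paper's monitoring data or submodels.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# synthetic stand-in for monitoring features (rainfall, displacement, ...)
# with four classes playing the role of the four warning levels
X, y = make_classification(n_samples=500, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),  # fusion layer
)
warning_levels = stack.fit(X, y).predict(X[:5])
```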

19 pages, 4183 KiB  
Article
SlowFast Multimodality Compensation Fusion Swin Transformer Networks for RGB-D Action Recognition
by Xiongjiang Xiao, Ziliang Ren, Huan Li, Wenhong Wei, Zhiyong Yang and Huaide Yang
Mathematics 2023, 11(9), 2115; https://doi.org/10.3390/math11092115 - 29 Apr 2023
Cited by 1 | Viewed by 1424
Abstract
RGB-D-based technology combines the advantages of RGB and depth sequences, enabling effective recognition of human actions in different environments. However, it is difficult for different modalities to effectively learn spatio-temporal information from each other. To enhance the information exchange between modalities, we introduce a SlowFast multimodality compensation block (SFMCB) designed to extract compensation features. Concretely, the SFMCB fuses features from two independent pathways with different frame rates into a single convolutional neural network to achieve performance gains for the model. Furthermore, we explore two fusion schemes for combining the features of the two pathways. To facilitate learning features from independent multiple pathways, multiple loss functions are utilized for joint optimization. To evaluate the effectiveness of the proposed architecture, we conducted experiments on four challenging datasets: NTU RGB+D 60, NTU RGB+D 120, THU-READ, and PKU-MMD. Experimental results demonstrate the effectiveness of our proposed model, which utilizes the SFMCB mechanism to capture complementary features of multimodal inputs. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

19 pages, 3544 KiB  
Article
A Novel Link Prediction Method for Social Multiplex Networks Based on Deep Learning
by Jiaping Cao, Tianyang Lei, Jichao Li and Jiang Jiang
Mathematics 2023, 11(7), 1705; https://doi.org/10.3390/math11071705 - 2 Apr 2023
Cited by 1 | Viewed by 1199
Abstract
Due to great advances in information technology, an increasing number of social platforms have appeared. Friend recommendation is an important task in social media, but newly built social platforms have insufficient information to predict entity relationships; in this case, platforms with sufficient information can help them. To address this challenge, a model for link prediction in social multiplex networks (LPSMN) is proposed in this work. Specifically, we first extract graph structure features, latent features, and explicit features, and then concatenate these features as link representations. Then, with the assistance of external information from a mature platform, an attention mechanism is employed to construct a multiplex, enhanced forecasting model. We treat link prediction as a binary classification problem and utilize three different kinds of features to improve prediction performance. Finally, we use five synthetic networks with various degree distributions and two real-world social multiplex networks (Weibo–Douban and Facebook–Twitter) to build an experimental scenario for further assessment. The numerical results indicate that the proposed LPSMN model improves prediction accuracy compared with several baseline methods. We also find that the performance of LPSMN increases as network heterogeneity declines. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
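
A toy sketch of the feature-concatenation view of link prediction as binary classification appears below: latent (embedding) and structural (degree, common-neighbor) features for each node pair feed a logistic-regression classifier. All data are synthetic, and the explicit-feature family and attention-based multiplex fusion from the abstract are omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def link_features(u, v, emb, degree, common_neighbors):
    # latent part: element-wise interaction of node embeddings;
    # structural part: node degrees and shared-neighbor count
    return np.concatenate([emb[u] * emb[v],
                           [degree[u], degree[v], common_neighbors[u, v]]])

rng = np.random.default_rng(0)
n = 50
emb = rng.normal(size=(n, 16))           # stand-in latent features
degree = rng.integers(1, 10, size=n)     # stand-in structural features
cn = rng.integers(0, 5, size=(n, n))

pairs = [(i, (i + 1) % n) for i in range(n)]   # candidate node pairs
X = np.stack([link_features(u, v, emb, degree, cn) for u, v in pairs])
y = rng.integers(0, 2, size=n)                 # 1 = link exists
clf = LogisticRegression(max_iter=1000).fit(X, y)
```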

18 pages, 674 KiB  
Article
Efficient and Privacy-Preserving Categorization for Encrypted EMR
by Zhiliang Zhao, Shengke Zeng, Shuai Cheng and Fei Hao
Mathematics 2023, 11(3), 754; https://doi.org/10.3390/math11030754 - 2 Feb 2023
Cited by 1 | Viewed by 996
Abstract
Electronic Health Records (EHRs) must be encrypted for patient privacy; however, encrypted EHRs are challenging for an administrator to categorize. In addition, EHR contents are predictable and can be guessed even when encrypted. In this work, we propose a secure scheme to support the categorization of encrypted EHRs according to keywords. In view of the predictability of EHRs, we consider guessing attacks not only from the storage server but also from the group administrator. The experimental results show that our scheme is efficient and practical. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

28 pages, 557 KiB  
Article
Efficient Associate Rules Mining Based on Topology for Items of Transactional Data
by Bo Li, Zheng Pei, Chao Zhang and Fei Hao
Mathematics 2023, 11(2), 401; https://doi.org/10.3390/math11020401 - 12 Jan 2023
Cited by 1 | Viewed by 844
Abstract
A challenge in association rules’ mining is effectively reducing the time and space complexity in association rules mining with predefined minimum support and confidence thresholds from huge transaction databases. In this paper, we propose an efficient method based on the topology space of [...] Read more.
A challenge in association rules’ mining is effectively reducing the time and space complexity in association rules mining with predefined minimum support and confidence thresholds from huge transaction databases. In this paper, we propose an efficient method based on the topology space of the itemset for mining associate rules from transaction databases. To do so, we deduce a binary relation on itemset, and construct a topology space of itemset based on the binary relation and the quotient lattice of the topology according to transactions of itemsets. Furthermore, we prove that all closed itemsets are included in the quotient lattice of the topology, and generators or minimal generators of every closed itemset can be easily obtained from an element of the quotient lattice. Formally, the topology on itemset represents more general associative relationship among items of transaction databases, the quotient lattice of the topology displays the hierarchical structures on all itemsets, and provide us a method to approximate any template of the itemset. Accordingly, we provide efficient algorithms to generate Min-Max association rules or reduce generalized association rules based on the lower approximation and the upper approximation of a template, respectively. The experiment results demonstrate that the proposed method is an alternative and efficient method to generate or reduce association rules from transaction databases. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
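
Since closed itemsets are central to the construction above, here is a small, deliberately naïve sketch: it computes itemset closures over a toy transaction database, enumerates the closed itemsets, and derives rules passing support and confidence thresholds. It brute-forces the lattice for clarity; the paper's topology and quotient-lattice machinery is precisely what avoids this exhaustive enumeration.

```python
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def closure(itemset):
    # the closure is the set of items shared by every transaction containing
    # the itemset; an itemset is closed iff it equals its own closure
    covering = [t for t in transactions if itemset <= t]
    return frozenset(set.intersection(*covering)) if covering else frozenset(itemset)

items = set().union(*transactions)
closed = {closure(set(c))
          for r in range(1, len(items) + 1)
          for c in combinations(items, r)}

# derive rules lhs -> rhs from each closed itemset, filtered by thresholds
min_sup, min_conf = 0.4, 0.7
for cs in sorted(closed, key=len):
    for size in range(1, len(cs)):
        for lhs in map(frozenset, combinations(cs, size)):
            rhs = cs - lhs
            conf = support(cs) / support(lhs)
            if support(cs) >= min_sup and conf >= min_conf:
                print(f"{sorted(lhs)} -> {sorted(rhs)}  conf={conf:.2f}")
```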

18 pages, 618 KiB  
Article
Hierarchical Quantum Information Splitting of an Arbitrary Two-Qubit State Based on a Decision Tree
by Dongfen Li, Yundan Zheng, Xiaofang Liu, Jie Zhou, Yuqiao Tan, Xiaolong Yang and Mingzhe Liu
Mathematics 2022, 10(23), 4571; https://doi.org/10.3390/math10234571 - 2 Dec 2022
Cited by 2 | Viewed by 1031
Abstract
Quantum informatics is a new subject formed by the intersection of quantum mechanics and informatics. Quantum communication is a new way to transmit quantum states through quantum entanglement, quantum teleportation, and quantum information splitting. Building on research into multi-particle-state quantum information splitting, this paper innovatively combines the decision tree algorithm from machine learning with quantum communication to solve the problem of channel particle allocation, and experiments showed that the algorithm can produce the optimal allocation scheme. Based on this scheme, we propose a hierarchical quantum information splitting scheme for an arbitrary two-qubit state built on multi-particle states. First, Alice measures the Bell states of the particles she owns and tells the result to the receiver through the classical channel. If the receiver is a high-level communicator, he only needs the help of one of the low-level communicators and all the high-level communicators: after performing a single-particle measurement on the z-basis, they send the result to the receiver through the classical channel. When the receiver is a low-level communicator, all communicators need to measure the particles they own and tell the receiver the results. Finally, the receiver performs the corresponding unitary operation according to the received results, completing the hierarchical quantum information splitting operation. On the basis of this theoretical work, we also carried out experimental verification, security analysis, and comparative analysis, which show that our scheme is reliable and has high security and efficiency. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)