Topic Editors

Prof. Dr. Qiang Zhang
School of Computer Science and Technology, Dalian University of Technology, Dalian 116024, China
Prof. Dr. Yifeng Zeng
School of Computing, Engineering and Information Sciences, Northumbria University, NE7 7YT Newcastle, UK

Complex Systems and Artificial Intelligence

Abstract submission deadline
closed (28 February 2023)
Manuscript submission deadline
closed (31 May 2023)
Viewed by
57864

Topic Information

Dear Colleagues,

Owing to the rapid development of algorithms, frameworks, hardware, and networks, together with the growing volume of data, artificial intelligence (AI) methods (e.g., deep learning, neural computing, and biological computing) have been widely employed to solve problems in complex systems. AI-based techniques and emerging computing paradigms can extract sophisticated features and make practical problems arising in complex systems easier to address. For example, DNA computing, an emerging branch of biological computing, uses DNA, biochemistry, and molecular-biology hardware to overcome the limitations of traditional electronic computing architectures in areas such as storage technologies, synthetic controllers, and reaction networks.

This Topic aims to highlight the latest developments in complex systems and artificial intelligence, addressing the challenge of applying advanced AI algorithms, frameworks, and technologies to complex systems. Relevant application fields include industry, transportation, biology, agriculture, economics, the environment, and management, among others.

Contributions should address challenging issues in complex systems and in artificial intelligence technologies, frameworks, architectures, algorithms, and applications. Both theoretical and experimental contributions offering new insights and findings in complex systems and artificial intelligence are welcome, as are novel applications. Review articles detailing the current state of the art are also encouraged. Topics of interest include, but are not limited to, the following:

  • Complex evolutionary and adaptive systems;
  • Self-organizing collective systems;
  • AI-driven problem solving for complex systems;
  • Machine learning for complex systems;
  • Deep learning for complex systems;
  • Neural computing for complex systems;
  • Multi-agent systems for complex systems;
  • Knowledge graphs for complex systems;
  • Data-driven AI for complex systems;
  • Feature extraction and optimization;
  • Multimodal feature fusion for complex systems;
  • Biological computing for complex systems;
  • AI in networks;
  • AI and optimization problems;
  • AI-based sensing, decision-making, and control for complex systems.

Prof. Dr. Qiang Zhang
Prof. Dr. Yifeng Zeng
Topic Editors

Keywords

  • artificial intelligence
  • complex systems
  • computational intelligence
  • data-driven artificial intelligence
  • cognitive computing
  • evolutionary computation
  • bio-inspired algorithms
  • reinforcement learning
  • human–machine shared control
  • multi-modal human–robot interaction
  • Metaverse
  • knowledge acquisition
  • information fusion
  • computational biology
  • molecular biology
  • genomics
  • DNA computing
  • bioinformatics
  • health informatics
  • feature extraction
  • neuroscience and cognitive science
  • multi-objective game theory
  • data-driven AI
  • machine learning
  • complex evolutionary systems
  • brain–computer interface
  • brain–machine interface
  • human–machine interaction
  • human–computer interaction
  • system modelling
  • deep learning
  • signal processing
  • computer vision
  • image processing
  • speech processing
  • video processing
  • audio processing
  • neural computing
  • random systems
  • hybrid AI models
  • collaborative AI systems
  • large-scale systems

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Applied Sciences (applsci) | 2.838 | 3.7 | 2011 | 14.9 days | 2300 CHF
Entropy (entropy) | 2.738 | 4.4 | 1999 | 19.9 days | 2000 CHF
Sensors (sensors) | 3.847 | 6.4 | 2001 | 15 days | 2400 CHF
Genes (genes) | 4.141 | 5.0 | 2010 | 16.7 days | 2400 CHF
Journal of Personalized Medicine (jpm) | 3.508 | 1.8 | 2011 | 20.8 days | 2000 CHF

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (37 papers)

Review
Cross-Industry Principles for Digital Representations of Complex Technical Systems in the Context of the MBSE Approach: A Review
Appl. Sci. 2023, 13(10), 6225; https://doi.org/10.3390/app13106225 - 19 May 2023
Viewed by 360
Abstract
This scientific article discusses the process of digital transformation of enterprises, analyzed as complex technical systems. Digital transformation is essential for businesses to remain competitive in the global marketplace. One of the effective tools for such a transformation is model-based systems engineering (MBSE). However, there is a gap in the practical application of knowledge regarding the uniform principles for the formation of a digital representation of complex technical systems, which limits the realization of the cross-industry potential of digital transformation in the economy. The motivation for this study is to identify common cross-industry principles for the formation of digital representations of complex technical systems that can lead companies to a sustainable and successful digital transformation. The purpose of this work is to identify and formulate these principles through an analysis of publications, using an inductive approach and classifying them by the category of application. As a result of the study, 23 principles were obtained, and the degree of their use in various industries associated with complex technical systems was determined. The results of this study will help to solve the problem of cross-industry integration and guide systemic changes in the organization of enterprises during their digital transformation. Full article

Article
A Grid Search-Based Multilayer Dynamic Ensemble System to Identify DNA N4—Methylcytosine Using Deep Learning Approach
Genes 2023, 14(3), 582; https://doi.org/10.3390/genes14030582 - 25 Feb 2023
Viewed by 752
Abstract
DNA (Deoxyribonucleic Acid) N4-methylcytosine (4mC), a kind of epigenetic modification of DNA, is important for modifying gene functions, such as protein interactions, conformation, and stability in DNA, as well as for the control of gene expression throughout cell development and genomic imprinting. This simply plays a crucial role in the restriction–modification system. To further understand the function and regulation mechanism of 4mC, it is essential to precisely locate the 4mC site and detect its chromosomal distribution. This research aims to design an efficient and high-throughput discriminative intelligent computational system using the natural language processing method “word2vec” and a multi-configured 1D convolution neural network (1D CNN) to predict 4mC sites. In this article, we propose a grid search-based multi-layer dynamic ensemble system (GS-MLDS) that can enhance existing knowledge of each level. Each layer uses a grid search-based weight searching approach to find the optimal accuracy while minimizing computation time and additional layers. We have used eight publicly available benchmark datasets collected from different sources to test the proposed model’s efficiency. Accuracy results in test operations were obtained as follows: 0.978, 0.954, 0.944, 0.961, 0.950, 0.973, 0.948, 0.952, 0.961, and 0.980. The proposed model has also been compared to 16 distinct models, indicating that it can accurately predict 4mC. Full article
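
As background for the pipeline sketched in the abstract (k-mer "words" embedded with word2vec, then classified by a 1D CNN), the following minimal Python sketch shows the general shape of such a model. It is not the authors' GS-MLDS code; the k-mer length, embedding size, layer widths, and placeholder data are illustrative assumptions.

import numpy as np
from gensim.models import Word2Vec
from tensorflow import keras
from tensorflow.keras import layers

def to_kmers(seq, k=3):
    # Split a DNA sequence into overlapping k-mer "words" for word2vec.
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def embed(sequences, w2v, k=3):
    # Map each sequence to an (n_kmers, dim) matrix of k-mer vectors.
    return np.stack([[w2v.wv[m] for m in to_kmers(s, k)] for s in sequences])

sequences = ["ACGTACGTACGTACGTACGTACGTACGTACGT"] * 8   # placeholder data
labels = np.array([1, 0] * 4)

w2v = Word2Vec([to_kmers(s) for s in sequences], vector_size=32,
               window=5, min_count=1, seed=1)
x = embed(sequences, w2v)

cnn = keras.Sequential([
    layers.Conv1D(64, 5, activation="relu", input_shape=x.shape[1:]),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, 3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),      # 4mC site vs. non-site
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(x, labels, epochs=2, verbose=0)

The grid-search layer of GS-MLDS would then tune the ensemble weights over several such base models.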

Article
Data-Centric and Model-Centric AI: Twin Drivers of Compact and Robust Industry 4.0 Solutions
Appl. Sci. 2023, 13(5), 2753; https://doi.org/10.3390/app13052753 - 21 Feb 2023
Cited by 1 | Viewed by 840
Abstract
Despite its dominance over the past three decades, model-centric AI has recently come under heavy criticism in favor of data-centric AI. Indeed, both promise to improve the performance of AI systems, yet with converse points of focus. While the former successively upgrades a devised model (algorithm/code), holding the amount and type of data used in model training fixed, the latter enhances the quality of deployed data continuously, paying less attention to further model upgrades. Rather than favoring either of the two approaches, this paper reconciles data-centric AI with model-centric AI. In so doing, we connect current AI to the field of cybersecurity and natural language inference, and through the phenomena of ‘adversarial samples’ and ‘hypothesis-only biases’, respectively, showcase the limitations of model-centric AI in terms of algorithmic stability and robustness. Further, we argue that overcoming the alleged limitations of model-centric AI may well require paying extra attention to the alternative data-centric approach. However, this should not result in reducing interest in model-centric AI. Our position is supported by the notion that successful ‘problem solving’ requires considering both the way we act upon things (algorithm) as well as harnessing the knowledge derived from data of their states and properties. Full article

Article
Performance Analysis on Trained and Recreational Runners in the Venice Marathon Events from 2007 to 2019
Appl. Sci. 2023, 13(3), 1982; https://doi.org/10.3390/app13031982 - 03 Feb 2023
Viewed by 545
Abstract
The Venice Marathon (VM) has gained fame and prestige over time. It is part of a group of marathons that are recognized worldwide. The aims of this study were to describe the attractiveness of the event over the years according to the gender and age of participants, and to investigate their performances according to gender and age differences in the group of all finishers over 23 years old (AD), along with the best 10% performance (TOP) over a 13-year period. Methods: We conducted a retrospective analysis of VM race data from 2007 to 2019; the data were collected from the free Timing Data Service website and statistically analyzed. Results: In total, 82.3% of participants were male and 17.7% were female. A significant total increase in female participation was observed over the 13 editions of the VM. Linear regression analysis of AD speeds for each category showed a significant decrease in the youngest categories. Among the TOP athletes, the 40-year age category showed increased performance of both males and females. Analyzing the mean speed by age (AD13 and TOP13), there was a breakpoint in the speed decrease in AD13 in the age categories of 50 years in males and 55 years in females, while in TOP13 the breakpoints were in the 55- and 45-year age categories in males and females, respectively. Conclusion: The results obtained confirmed the reduction in running speed with age, as well as the definition of the VM as an example of a recreational marathon in which the participation of runners over 40 years will increase in the future, and for which specific adaptations will be required. Full article

Article
Integral Reinforcement-Learning-Based Optimal Containment Control for Partially Unknown Nonlinear Multiagent Systems
Entropy 2023, 25(2), 221; https://doi.org/10.3390/e25020221 - 23 Jan 2023
Viewed by 684
Abstract
This paper focuses on the optimal containment control problem for the nonlinear multiagent systems with partially unknown dynamics via an integral reinforcement learning algorithm. By employing integral reinforcement learning, the requirement of the drift dynamics is relaxed. The integral reinforcement learning method is proved to be equivalent to the model-based policy iteration, which guarantees the convergence of the proposed control algorithm. For each follower, the Hamilton–Jacobi–Bellman equation is solved by a single critic neural network with a modified updating law which guarantees the weight error dynamic to be asymptotically stable. Through using input–output data, the approximate optimal containment control protocol of each follower is obtained by applying the critic neural network. The closed-loop containment error system is guaranteed to be stable under the proposed optimal containment control scheme. Simulation results demonstrate the effectiveness of the presented control scheme. Full article
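
For readers unfamiliar with the technique, the reason the drift dynamics can be partially unknown is visible in the standard integral reinforcement learning (IRL) Bellman equation, written here in generic single-agent notation (the paper's containment-control version adds the follower–leader coupling terms):

V\big(x(t)\big) = \int_{t}^{t+T} \Big( x^{\top} Q x + u^{\top} R u \Big)\, d\tau + V\big(x(t+T)\big)

Because the value function is related between two measured states x(t) and x(t+T) through an integral of observed costs, the drift term f(x) never has to be evaluated; only input–output data along the trajectory are needed, and policy iteration on this equation is equivalent to the model-based version, as the abstract notes.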

Review
Personalized Brain–Computer Interface and Its Applications
J. Pers. Med. 2023, 13(1), 46; https://doi.org/10.3390/jpm13010046 - 26 Dec 2022
Cited by 4 | Viewed by 1612
Abstract
Brain–computer interfaces (BCIs) are a new technology that subverts traditional human–computer interaction, where the control signal source comes directly from the user’s brain. When a general BCI is used for practical applications, it is difficult for it to meet the needs of different individuals because of the differences among individual users in physiological and mental states, sensations, perceptions, imageries, cognitive thinking activities, and brain structures and functions. For this reason, it is necessary to customize personalized BCIs for specific users. So far, few studies have elaborated on the key scientific and technical issues involved in personalized BCIs. In this study, we will focus on personalized BCIs, give the definition of personalized BCIs, and detail their design, development, evaluation methods and applications. Finally, the challenges and future directions of personalized BCIs are discussed. It is expected that this study will provide some useful ideas for innovative studies and practical applications of personalized BCIs. Full article

Article
SIS Epidemic Propagation on Scale-Free Hypernetwork
Appl. Sci. 2022, 12(21), 10934; https://doi.org/10.3390/app122110934 - 28 Oct 2022
Viewed by 681
Abstract
The hypergraph offers a platform to study structural properties emerging from more complicated and higher-order than pairwise interactions among constituents and dynamical behavior, such as the spread of information or disease. Considering the higher-order interaction between multiple nodes in the system, the mathematical model of infectious diseases spreading on simple scale-free networks is extended to hypernetworks based on hypergraphs. A SIS propagation model based on reaction process strategy in a universal scale-free hypernetwork is constructed, and the theoretical and simulation analysis of the model is carried out. Using mean field theory, the analytical expressions between infection density and hypernetwork structure parameters as well as propagation parameters in steady state are given. Through individual-based simulation, the theoretical results are verified and the infectious disease spread process under the structure of the hypernetwork and simple scale-free network is compared and analyzed. It becomes apparent that infectious diseases are easier to spread on the hypernetworks, showing the clear clustering characteristics of epidemic spread. Furthermore, the influence of the hypernetwork structure and model parameters on the propagation process is studied. The results of this paper are helpful in further studying the propagation dynamics on the hypernetworks. At the same time, it provides a certain theoretical basis for the current COVID-19 prevention and control in China and the prevention of infectious diseases in the future. Full article
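
As background, the heterogeneous mean-field treatment of SIS dynamics on a scale-free network (which the paper generalizes to hyperedges) writes the infection density \rho_k of nodes with (hyper)degree k as

\frac{d\rho_k(t)}{dt} = -\mu \rho_k(t) + \lambda k\,\big[1 - \rho_k(t)\big]\,\Theta(t),
\qquad \Theta(t) = \frac{\sum_k k P(k)\,\rho_k(t)}{\sum_k k P(k)},

where \lambda is the infection rate, \mu the recovery rate, and P(k) the degree distribution. Setting d\rho_k/dt = 0 yields the steady-state infection density as a function of the structural and propagation parameters; the expressions derived in the paper are the hypernetwork generalization of this kind of closed-form relation.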

Article
Entry Point Variation in the Osseous Fixation Pathway of the Anterior Column of the Pelvis—A Three-Dimensional Analysis
J. Pers. Med. 2022, 12(10), 1748; https://doi.org/10.3390/jpm12101748 - 21 Oct 2022
Viewed by 1025
Abstract
Fractures of the superior pubic ramus can be treated with screw insertion into the osseous fixation pathway (OFP) of the anterior column (AC). The entry point determines whether the screw exits the OFP prematurely. This can be harmful when it enters the hip joint or damages soft tissues inside the lesser pelvis. The exact entry point varies between patients and can be difficult to ascertain on fluoroscopy during surgery. The aim of this study was to determine variation in the location of the entry point. A retrospective single-center study was performed at a level 1 trauma center in the Netherlands. Nineteen adult patients were included with an undisplaced fracture of the superior pubic ramus on a computed tomography (CT) scan. Virtual three-dimensional (3D) models of the pelvises were created. Multiple screws were placed per AC and the models were superimposed. A total of 157 screws were placed, of which 109 did not exit the OFP prematurely. A universally reproducible entry point could not be identified. A typical crescent-shaped region of entry points did exist and was located more laterally in females when compared to males. Three-dimensional virtual surgery planning can be helpful to identify the ideal entry points in each case. Full article

Article
Dynamic Network Biomarker Analysis Reveals the Critical Phase Transition of Fruit Ripening in Grapevine
Genes 2022, 13(10), 1851; https://doi.org/10.3390/genes13101851 - 13 Oct 2022
Cited by 1 | Viewed by 825
Abstract
Grapevine (Vitis vinifera L.) fruit ripening is a complex biological process involving a phase transition from immature to mature. Understanding the molecular mechanism of fruit ripening is critical for grapevine fruit storage and quality improvement. However, the regulatory mechanism for the critical phase transition of fruit ripening from immature to mature in grapevine remains poorly understood. In this work, to identify the key molecular events controlling the critical phase transition of grapevine fruit ripening, we performed an integrated dynamic network analysis on time-series transcriptomic data of grapevine berry development and ripening. As a result, we identified the third time point as a critical transition point in grapevine fruit ripening, which is consistent with the onset of veraison reported in previous studies. In addition, we detected 68 genes as being key regulators involved in controlling fruit ripening. The GO (Gene Ontology) analysis showed that some of these genes participate in fruit development and seed development. This study provided dynamic network biomarkers for marking the initial transcriptional events that characterize the transition process of fruit ripening, as well as new insights into fruit development and ripening. Full article

Article
Re_Trans: Combined Retrieval and Transformer Model for Source Code Summarization
Entropy 2022, 24(10), 1372; https://doi.org/10.3390/e24101372 - 27 Sep 2022
Cited by 1 | Viewed by 764
Abstract
Source code summarization (SCS) is a natural language description of source code functionality. It can help developers understand programs and maintain software efficiently. Retrieval-based methods generate SCS by reorganizing terms selected from source code or by reusing the SCS of similar code snippets. Generative methods generate SCS via an attentional encoder–decoder architecture. A generative method can generate SCS for any code, but the accuracy is sometimes still far from expectation (due to the lack of numerous high-quality training sets). A retrieval-based method is considered to have higher accuracy but usually fails to generate SCS for a source code when no similar candidate exists in the database. In order to effectively combine the advantages of retrieval-based and generative methods, we propose a new method: Re_Trans. For a given code, we first utilize the retrieval-based method to obtain its semantically most similar code and the corresponding SCS (S_RM). Then, we input the given code and the similar code into the trained discriminator. If the discriminator outputs one, we take S_RM as the result; otherwise, we utilize the generative model, a transformer, to generate the given code's SCS. In particular, we use AST (Abstract Syntax Tree)-augmented and code sequence-augmented information to make the source code semantic extraction more complete. Furthermore, we build a new SCS retrieval library from the public dataset. We evaluate our method on a dataset of 2.1 million Java code-comment pairs, and experimental results show improvement over the state-of-the-art (SOTA) benchmarks, which demonstrates the effectiveness and efficiency of our method. Full article

Article
Link Prediction in Complex Networks Using Recursive Feature Elimination and Stacking Ensemble Learning
Entropy 2022, 24(8), 1124; https://doi.org/10.3390/e24081124 - 15 Aug 2022
Viewed by 887
Abstract
Link prediction is an important task in the field of network analysis and modeling, and predicts missing links in current networks and new links in future networks. In order to improve the performance of link prediction, we integrate global, local, and quasi-local topological information of networks. Here, a novel stacking ensemble framework is proposed for link prediction in this paper. Our approach employs random forest-based recursive feature elimination to select relevant structural features associated with networks and constructs a two-level stacking ensemble model involving various machine learning methods for link prediction. The lower level is composed of three base classifiers, i.e., logistic regression, gradient boosting decision tree, and XGBoost, and their outputs are then integrated with an XGBoost model in the upper level. Extensive experiments were conducted on six networks. Comparison results show that the proposed method can obtain better prediction results and applicability robustness. Full article
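
The two-level design described above maps almost directly onto scikit-learn primitives. The sketch below is a generic illustration under assumed feature matrices of per-pair topological similarity scores (X_train, y_train), not the authors' implementation; hyperparameters are placeholders.

from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

# Recursive feature elimination driven by random-forest importances.
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=10)

# Lower level: three heterogeneous base classifiers; upper level: XGBoost.
stack = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("gbdt", GradientBoostingClassifier()),
        ("xgb", XGBClassifier(eval_metric="logloss")),
    ],
    final_estimator=XGBClassifier(eval_metric="logloss"),
    cv=5,
)

model = make_pipeline(selector, stack)
# model.fit(X_train, y_train)                       # X_train: per-pair structural features
# scores = model.predict_proba(X_candidates)[:, 1]  # ranked candidate links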

Article
LGCCT: A Light Gated and Crossed Complementation Transformer for Multimodal Speech Emotion Recognition
Entropy 2022, 24(7), 1010; https://doi.org/10.3390/e24071010 - 21 Jul 2022
Cited by 1 | Viewed by 1281
Abstract
Semantic-rich speech emotion recognition has a high degree of popularity in a range of areas. Speech emotion recognition aims to recognize human emotional states from utterances containing both acoustic and linguistic information. Since both textual and audio patterns play essential roles in speech emotion recognition (SER) tasks, various works have proposed novel modality fusing methods to exploit text and audio signals effectively. However, most of the high performance of existing models is dependent on a great number of learnable parameters, and they can only work well on data with fixed length. Therefore, minimizing computational overhead and improving generalization to unseen data with various lengths while maintaining a certain level of recognition accuracy is an urgent application problem. In this paper, we propose LGCCT, a light gated and crossed complementation transformer for multimodal speech emotion recognition. First, our model is capable of fusing modality information efficiently. Specifically, the acoustic features are extracted by CNN-BiLSTM while the textual features are extracted by BiLSTM. The modality-fused representation is then generated by the cross-attention module. We apply the gate-control mechanism to achieve the balanced integration of the original modality representation and the modality-fused representation. Second, the degree of attention focus can be considered, as the uncertainty and the entropy of the same token should converge to the same value independent of the length. To improve the generalization of the model to various testing-sequence lengths, we adopt the length-scaled dot product to calculate the attention score, which can be interpreted from a theoretical view of entropy. The operation of the length-scaled dot product is cheap but effective. Experiments are conducted on the benchmark dataset CMU-MOSEI. Compared to the baseline models, our model achieves an 81.0% F1 score with only 0.432 M parameters, showing an improvement in the balance between performance and the number of parameters. Moreover, the ablation study signifies the effectiveness of our model and its scalability to various input-sequence lengths, wherein the relative improvement is almost 20% of the baseline without a length-scaled dot product. Full article
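
The length-scaled dot product mentioned above can be illustrated in a few lines. A common formulation, used here as an assumption about the general idea rather than the exact constant in LGCCT, multiplies the usual 1/sqrt(d) logits by log n so that the entropy of the attention distribution stays roughly comparable as the sequence length n varies.

import math
import torch
import torch.nn.functional as F

def length_scaled_attention(q, k, v):
    # q, k, v: (batch, n, d). The log(n) factor counteracts the entropy growth
    # of softmax attention with sequence length, which is the property the
    # abstract appeals to for generalization across input lengths.
    n, d = q.shape[-2], q.shape[-1]
    scores = q @ k.transpose(-2, -1) * (math.log(n) / math.sqrt(d))
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 50, 64)
out = length_scaled_attention(q, k, v)   # (2, 50, 64)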

Article
A Workflow for Computer-Aided Evaluation of Keloid Based on Laser Speckle Contrast Imaging and Deep Learning
J. Pers. Med. 2022, 12(6), 981; https://doi.org/10.3390/jpm12060981 - 16 Jun 2022
Viewed by 1082
Abstract
A keloid results from abnormal wound healing, which has different blood perfusion and growth states among patients. Active monitoring and treatment of actively growing keloids at the initial stage can effectively inhibit keloid enlargement and has important medical and aesthetic implications. LSCI (laser speckle contrast imaging) has been developed to obtain the blood perfusion of the keloid and shows a high relationship with the severity and prognosis. However, the LSCI-based method requires manual annotation and evaluation of the keloid, which is time consuming. Although many studies have designed deep-learning networks for the detection and classification of skin lesions, there are still challenges to the assessment of keloid growth status, especially based on small samples. This retrospective study included 150 untreated keloid patients, intensity images, and blood perfusion images obtained from LSCI. A newly proposed workflow based on cascaded vision transformer architecture was proposed, reaching a dice coefficient value of 0.895 for keloid segmentation by 2% improvement, an error of 8.6 ± 5.4 perfusion units, and a relative error of 7.8% ± 6.6% for blood calculation, and an accuracy of 0.927 for growth state prediction by 1.4% improvement than baseline. Full article

Perspective
The Validity of Machine Learning Procedures in Orthodontics: What Is Still Missing?
J. Pers. Med. 2022, 12(6), 957; https://doi.org/10.3390/jpm12060957 - 11 Jun 2022
Cited by 3 | Viewed by 1509
Abstract
Artificial intelligence (AI) models and procedures hold remarkable predictive efficiency in the medical domain through their ability to discover hidden, non-obvious clinical patterns in data. However, due to the sparsity, noise, and time-dependency of medical data, AI procedures are raising unprecedented issues related to the mismatch between doctors’ mental reasoning and the statistical answers provided by algorithms. Electronic systems can reproduce or even amplify noise hidden in the data, especially when the diagnosis of the subjects in the training data set is inaccurate or incomplete. In this paper, we describe the conditions that need to be met for AI instruments to be truly useful in the orthodontic domain. We report some examples of computational procedures that are capable of extracting orthodontic knowledge through ever deeper patient representation. To have confidence in these procedures, orthodontic practitioners should recognize the benefits, shortcomings, and unintended consequences of AI models, as algorithms that learn from human decisions likewise learn mistakes and biases. Full article

Article
WRNFS: Width Residual Neuro Fuzzy System, a Fast-Learning Algorithm with High Interpretability
Appl. Sci. 2022, 12(12), 5810; https://doi.org/10.3390/app12125810 - 08 Jun 2022
Viewed by 894
Abstract
Although the deep neural network has a strong fitting ability, it is difficult to be applied to safety-critical fields because of its poor interpretability. Based on the adaptive neuro-fuzzy inference system (ANFIS) and the concept of residual network, a width residual neuro-fuzzy system (WRNFS) is proposed to improve the interpretability performance in this paper. WRNFS is used to transform a regression problem of high-dimensional data into the sum of several low-dimensional neuro-fuzzy systems. The ANFIS model in the next layer is established based on the low dimensional data and the residual of the ANFIS model in the former layer. The performance of WRNFS is compared with traditional ANFIS on three data sets. The results showed that WRNFS has high interpretability (fewer layers, fewer fuzzy rules, and fewer adjustable parameters) on the premise of satisfying the fitting accuracy. The interpretability, complexity, time efficiency, and robustness of WRNFS are greatly improved when the input number of single low-dimensional systems decreases. Full article
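
Stripped of the fuzzy-inference details, the width-residual idea is a stagewise fit in which each low-dimensional model learns the residual left by the previous layer. The sketch below uses a generic scikit-learn regressor as a stand-in for the per-layer ANFIS blocks; the feature grouping and model choice are illustrative assumptions, not the paper's configuration.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_width_residual(X, y, groups, make_model=lambda: DecisionTreeRegressor(max_depth=3)):
    # Fit one low-dimensional model per feature group on the running residual.
    models, residual = [], y.astype(float).copy()
    for cols in groups:
        m = make_model().fit(X[:, cols], residual)
        residual = residual - m.predict(X[:, cols])
        models.append((cols, m))
    return models

def predict_width_residual(models, X):
    return sum(m.predict(X[:, cols]) for cols, m in models)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 2 * X[:, 0] + np.sin(X[:, 3]) + rng.normal(scale=0.1, size=200)
models = fit_width_residual(X, y, groups=[[0, 1], [2, 3], [4, 5]])
pred = predict_width_residual(models, X)

Because each layer only sees a few inputs, its rule base stays small, which is where the claimed interpretability gain comes from.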

Article
Character Recognition in Endangered Archives: Shui Manuscripts Dataset, Detection and Application Realization
Appl. Sci. 2022, 12(11), 5361; https://doi.org/10.3390/app12115361 - 25 May 2022
Cited by 1 | Viewed by 1093
Abstract
Shui manuscripts provide a historical testimony of the national identity and spirit of the Shui people. In response to the lack of a high-quality Shui manuscripts dataset, we collected Shui manuscript images in the Shui area and used various methods to enhance them. Through our efforts, we created a well-labeled and sizable Shui manuscripts dataset, named Shuishu_T, which is the largest of its kind. Then, we applied target detection technology for Shui manuscript characters recognition. Specifically, we compared the advantages and disadvantages of Faster R-CNN, you only look once (YOLO), and single shot multibox detector (SSD), and subsequently chose Faster R-CNN to detect and recognize Shui manuscript characters. We trained and tested 111 classes of Shui manuscript characters with Faster R-CNN and achieved an average recognition rate of 87.8%. Finally, we designed a WeChat applet that can be used to quickly identify Shui manuscript characters in images obtained by scanning Shui manuscripts with a mobile phone. This work provides a basis for realizing the recognition of characters in Shui manuscripts on mobile terminals. Our research enables the intangible cultural heritage of the Shui people to be preserved, promoted, and shared, which is of great significance for the conservation and inheritance of Shui manuscripts. Full article
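
Fine-tuning an off-the-shelf Faster R-CNN detector for a fixed character vocabulary follows the standard torchvision recipe. The sketch below shows only the model setup; the class count mirrors the 111 character classes reported in the abstract, while the pretrained weights and training loop are assumptions rather than the authors' exact configuration.

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 111 + 1   # 111 Shui character classes + background

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Training then follows the usual torchvision detection loop:
# for images, targets in data_loader:
#     loss_dict = model(images, targets)   # classification + box-regression losses
#     sum(loss_dict.values()).backward()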

Article
Scale-Invariant Localization of Electric Vehicle Charging Port via Semi-Global Matching of Binocular Images
Appl. Sci. 2022, 12(10), 5247; https://doi.org/10.3390/app12105247 - 22 May 2022
Cited by 3 | Viewed by 1006
Abstract
Automatic charging for electric vehicles has broad development prospects for meeting the personalized service experience of users while overcoming the inherent safety hazards. An identification and positioning approach suitable for engineering applications is the key to promoting automatic charging. In this paper, a low-cost, high-precision method to identify and position charging ports based on SIFT and SGBM is proposed. The feature extraction approach based on SIFT is adopted to produce the difference of Gaussian (DOG) for scale space construction, and the feature matching algorithm with nearest-neighbor search, which is a kind of machine learning, is utilized to yield the map set of matching points. In addition, the disparity calculation is conducted with a semi-global matching algorithm to obtain high-precision positioning results for the charging port. In order to verify the feasibility of the method, a complete identification and positioning experiment of charging port was carried out based on OpenCV and MATLAB. Full article
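
Since the abstract states that the experiment was built on OpenCV, the two building blocks can be sketched directly with OpenCV's Python API. File names and matcher/SGBM parameters below are placeholders, not the paper's calibration.

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# SIFT keypoints and nearest-neighbour matching with Lowe's ratio test.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(left, None)
kp2, des2 = sift.detectAndCompute(right, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Semi-global matching to obtain a dense disparity map for 3D positioning.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                             P1=8 * 5 * 5, P2=32 * 5 * 5)
disparity = sgbm.compute(left, right).astype("float32") / 16.0   # SGBM returns 16x fixed-point values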

Article
Calculation of a Climate Change Vulnerability Index for Nakdong Watersheds Considering Non-Point Pollution Sources
Appl. Sci. 2022, 12(9), 4775; https://doi.org/10.3390/app12094775 - 09 May 2022
Viewed by 2068
Abstract
As a response to climate change, South Korea has established its third National Climate Change Adaptation Plan (2021–2025) alongside the local governments’ plans. In this study, proxy variables in 22 sub-watersheds of the Nakdong River, Korea were used to investigate climate exposure, sensitivity, adaptive capacity, and non-point pollution in sub-watersheds, a climate change vulnerability index (CCVI) was established, and the vulnerability of each sub-watershed in the Nakdong River was evaluated. Climate exposure was highest in the Nakdong Estuary sub-watershed (75.5–81.7) and lowest in the Geumhogang sub-watershed (21.1–28.1). Sensitivity was highest (55.7) in the Nakdong Miryang sub-watershed and lowest (19.6) in the Habcheon dam sub-watershed. Adaptive capacity and the resulting CCVI were highest in the Geumhogang sub-watershed (96.2 and 66.2–67.9, respectively) and lowest in the Wicheon sub-watershed (2.61 and 18.5–20.4, respectively), indicating low and high vulnerabilities to climate change, respectively. The study revealed that the high CCVI sensitivity was due to adaptive capacity. These findings can help establish rational climate change response plans for regional water resource management. To assess climate change vulnerability more accurately, regional bias can be prevented by considering various human factors, including resources, budget, and facilities. Full article

Article
Deep Link-Prediction Based on the Local Structure of Bipartite Networks
Entropy 2022, 24(5), 610; https://doi.org/10.3390/e24050610 - 27 Apr 2022
Cited by 3 | Viewed by 1398
Abstract
Link prediction based on bipartite networks can not only mine hidden relationships between different types of nodes, but also reveal the inherent law of network evolution. Existing bipartite network link prediction is mainly based on the global structure that cannot analyze the role of the local structure in link prediction. To tackle this problem, this paper proposes a deep link-prediction (DLP) method by leveraging the local structure of bipartite networks. The method first extracts the local structure between target nodes and observes structural information between nodes from a local perspective. Then, representation learning of the local structure is performed on the basis of the graph neural network to extract latent features between target nodes. Lastly, a deep-link prediction model is trained on the basis of latent features between target nodes to achieve link prediction. Experimental results on five datasets showed that DLP achieved significant improvement over existing state-of-the-art link prediction methods. In addition, this paper analyzes the relationship between local structure and link prediction, confirming the effectiveness of a local structure in link prediction. Full article

Article
A Generic View Planning System Based on Formal Expression of Perception Tasks
Entropy 2022, 24(5), 578; https://doi.org/10.3390/e24050578 - 20 Apr 2022
Viewed by 1038
Abstract
View planning (VP) is a technique that guides the adjustment of the sensor’s postures in multi-view perception tasks. It converts the perception process into active perception, which improves the intelligence and reduces the resource consumption of the robot. We propose a generic VP system for multiple kinds of visual perception. The VP system is built on the basis of the formal description of the visual task, and the next best view is calculated by the system. When dealing with a given visual task, we can simply update its description as the input of the VP system, and obtain the defined best view in real time. Formal description of the perception task includes the task’s status, the objects’ prior information library, the visual representation status and the optimization goal. The task’s status and the visual representation status are updated when data are received at a new view. If the task’s status has not reached its goal, candidate views are sorted based on the updated visual representation status, and the next best view that can minimize the entropy of the model space is chosen as the output of the VP system. Experiments of view planning for 3D recognition and reconstruction tasks are conducted, and the result shows that our algorithm has good performance on different tasks. Full article
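
The selection rule "choose the view that minimizes the entropy of the model space" can be written down generically. The sketch below assumes a discrete candidate-view set and a predicted posterior over model hypotheses per view; it illustrates the criterion only, not the paper's formal task description or system.

import numpy as np

def entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def next_best_view(posteriors_by_view):
    # Pick the candidate view whose predicted posterior over the model space
    # has the lowest entropy, i.e., is expected to be most informative.
    scores = {v: entropy(p) for v, p in posteriors_by_view.items()}
    return min(scores, key=scores.get)

candidates = {"view_a": np.array([0.5, 0.3, 0.2]),
              "view_b": np.array([0.9, 0.05, 0.05])}
print(next_best_view(candidates))   # -> "view_b"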

Article
LightEyes: A Lightweight Fundus Segmentation Network for Mobile Edge Computing
Sensors 2022, 22(9), 3112; https://doi.org/10.3390/s22093112 - 19 Apr 2022
Cited by 3 | Viewed by 1070
Abstract
Fundus is the only structure that can be observed without trauma to the human body. By analyzing color fundus images, the diagnosis basis for various diseases can be obtained. Recently, fundus image segmentation has witnessed vast progress with the development of deep learning. However, the improvement of segmentation accuracy comes with the complexity of deep models. As a result, these models show low inference speeds and high memory usages when deploying to mobile edges. To promote the deployment of deep fundus segmentation models to mobile devices, we aim to design a lightweight fundus segmentation network. Our observation comes from the fact that high-resolution representations could boost the segmentation of tiny fundus structures, and the classification of small fundus structures depends more on local features. To this end, we propose a lightweight segmentation model called LightEyes. We first design a high-resolution backbone network to learn high-resolution representations, so that the spatial relationship between feature maps can be always retained. Meanwhile, considering high-resolution features means high memory usage; for each layer, we use at most 16 convolutional filters to reduce memory usage and decrease training difficulty. LightEyes has been verified on three kinds of fundus segmentation tasks, including the hard exudate, the microaneurysm, and the vessel, on five publicly available datasets. Experimental results show that LightEyes achieves highly competitive segmentation accuracy and segmentation speed compared with state-of-the-art fundus segmentation models, while running at 1.6 images/s Cambricon-1A speed and 51.3 images/s GPU speed with only 36k parameters. Full article

Article
Research on Product Core Component Acquisition Based on Patent Semantic Network
Entropy 2022, 24(4), 549; https://doi.org/10.3390/e24040549 - 14 Apr 2022
Cited by 1 | Viewed by 1301
Abstract
Patent data contain plenty of valuable information. Recently, the lack of innovative ideas has resulted in some enterprises encountering bottlenecks in product research and development (R&D). Some enterprises report that they do not have a sufficient understanding of product components. To improve the efficiency of product R&D, this paper introduces natural-language processing (NLP) technology, which includes part-of-speech (POS) tagging and subject–action–object (SAO) classification. Our strategy first extracts patent keywords from products, then applies a complex network to obtain core components based on structural holes and the eigenvector centrality algorithm. Finally, we use the example of US shower patents to verify the effectiveness and feasibility of the methodology. As a result, this paper examines the acquisition of core components and how they can help enterprises and designers clarify their R&D ideas and design priorities. Full article

Article
Determination of the Severity and Percentage of COVID-19 Infection through a Hierarchical Deep Learning System
J. Pers. Med. 2022, 12(4), 535; https://doi.org/10.3390/jpm12040535 - 28 Mar 2022
Cited by 6 | Viewed by 1609
Abstract
The coronavirus disease 2019 (COVID-19) has caused millions of deaths and one of the greatest health crises of all time. In this disease, one of the most important aspects is the early detection of the infection to avoid the spread. In addition to this, it is essential to know how the disease progresses in patients, to improve patient care. This contribution presents a novel method based on a hierarchical intelligent system, that analyzes the application of deep learning models to detect and classify patients with COVID-19 using both X-ray and chest computed tomography (CT). The methodology was divided into three phases, the first being the detection of whether or not a patient suffers from COVID-19, the second step being the evaluation of the percentage of infection of this disease and the final phase is to classify the patients according to their severity. Stratification of patients suffering from COVID-19 according to their severity using automatic systems based on machine learning on medical images (especially X-ray and CT of the lungs) provides a powerful tool to help medical experts in decision making. In this article, a new contribution is made to a stratification system with three severity levels (mild, moderate and severe) using a novel histogram database (which defines how the infection is in the different CT slices for a patient suffering from COVID-19). The first two phases use CNN Densenet-161 pre-trained models, and the last uses SVM with LDA supervised learning algorithms as classification models. The initial stage detects the presence of COVID-19 through X-ray multi-class (COVID-19 vs. No-Findings vs. Pneumonia) and the results obtained for accuracy, precision, recall, and F1-score values are 88%, 91%, 87%, and 89%, respectively. The following stage manifested the percentage of COVID-19 infection in the slices of the CT-scans for a patient and the results in the metrics evaluation are 0.95 in Pearson Correlation coefficient, 5.14 in MAE and 8.47 in RMSE. The last stage finally classifies a patient in three degrees of severity as a function of global infection of the lungs and the results achieved are 95% accurate. Full article

Article
Indirect Volume Estimation for Acute Ischemic Stroke from Diffusion Weighted Image Using Slice Image Segmentation
J. Pers. Med. 2022, 12(4), 521; https://doi.org/10.3390/jpm12040521 - 24 Mar 2022
Viewed by 1822
Abstract
The accurate estimation of acute ischemic stroke (AIS) using diffusion-weighted imaging (DWI) is crucial for assessing patients and guiding treatment options. This study aimed to propose a method that estimates AIS volume in DWI objectively, quickly, and accurately. We used a dataset of DWI with AIS, including 2159 participants (1179 for internal validation and 980 for external validation) with various types of AIS. We constructed algorithms using 3D segmentation (direct estimation) and 2D segmentation (indirect estimation) and compared their performances with those annotated by neurologists. The proposed pretrained indirect model demonstrated higher segmentation performance than the direct model, with a sensitivity, specificity, F1-score, and Jaccard index of 75.0%, 77.9%, 76.0, and 62.1%, respectively, for internal validation, and 72.8%, 84.3%, 77.2, and 63.8%, respectively, for external validation. Volume estimation was more reliable for the indirect model, with 93.3% volume similarity (VS), 0.797 mean absolute error (MAE) for internal validation, VS of 89.2% and a MAE of 2.5% for external validation. These results suggest that the indirect model using 2D segmentation developed in this study can provide an accurate estimation of volume from DWI of AIS and may serve as a supporting tool to help physicians make crucial clinical decisions. Full article

Article
Quasi-Consensus of Time-Varying Multi-Agent Systems with External Inputs under Deception Attacks
Entropy 2022, 24(4), 447; https://doi.org/10.3390/e24040447 - 23 Mar 2022
Cited by 1 | Viewed by 1256
Abstract
The quasi-consensus of a class of nonlinear time-varying multi-agent systems suffering from both external inputs and deception attacks is studied in this paper. Unlike previous work, the time-varying matrix is not simply assumed to be bounded; instead, further reasonable assumptions are imposed. In addition, impulsive deception attacks modeled with Bernoulli variables are considered. Sufficient conditions to achieve quasi-consensus are given, and the upper bounds of the error state related to the deception attacks are derived. Finally, a numerical simulation example is provided to show the validity of the obtained results. Full article

Article
Deep Hierarchical Ensemble Model for Suicide Detection on Imbalanced Social Media Data
Entropy 2022, 24(4), 442; https://doi.org/10.3390/e24040442 - 23 Mar 2022
Cited by 2 | Viewed by 1447
Abstract
As a serious worldwide problem, suicide often causes huge and irreversible losses to families and society. Therefore, it is necessary to detect and help individuals with suicidal ideation in time. In recent years, the rapid development of social media has provided new perspectives on suicide detection, but related research still faces difficulties such as data imbalance and implicit expression. In this paper, we propose a Deep Hierarchical Ensemble model for Suicide Detection (DHE-SD) based on a hierarchical ensemble strategy, and construct a dataset based on Sina Weibo, which contains more than 550 thousand posts from 4521 users. To verify the effectiveness of the model, we also conduct experiments on a public Weibo dataset containing 7329 users’ posts. The proposed model achieves the best performance on both the constructed dataset and the public dataset. In addition, in order to make the model applicable to a wider population, we use the proposed sentence-level mask mechanism to delete user posts with strong suicidal ideation. Experiments show that the proposed model can still effectively identify social media users with suicidal ideation even when the performance of the baseline models decreases significantly. Full article

Article
Machine Learning Models and Statistical Complexity to Analyze the Effects of Posture on Cerebral Hemodynamics
Entropy 2022, 24(3), 428; https://doi.org/10.3390/e24030428 - 19 Mar 2022
Cited by 1 | Viewed by 1543
Abstract
The mechanism of cerebral blood flow autoregulation can be of great importance in diagnosing and controlling a diversity of cerebrovascular pathologies such as vascular dementia, brain injury, and neurodegenerative diseases. To assess it, there are several methods that use changing postures, such as sit-stand or squat-stand maneuvers. However, the evaluation of the dynamic cerebral blood flow autoregulation (dCA) in these postures has not been adequately studied using more complex models, such as non-linear ones. Moreover, dCA can be considered part of a more complex mechanism called cerebral hemodynamics, where others (CO2 reactivity and neurovascular-coupling) that affect cerebral blood flow (BF) are included. In this work, we analyzed postural influences using non-linear machine learning models of dCA and studied characteristics of cerebral hemodynamics under statistical complexity using eighteen young adult subjects, aged 27 ± 6.29 years, who took the systemic or arterial blood pressure (BP) and cerebral blood flow velocity (BFV) for five minutes in three different postures: stand, sit, and lay. With models of a Support Vector Machine (SVM) through time, we used an AutoRegulatory Index (ARI) to compare the dCA in different postures. Using wavelet entropy, we estimated the statistical complexity of BFV for three postures. Repeated measures ANOVA showed that only the complexity of lay-sit had significant differences. Full article
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Article
A Deep Reinforcement Learning-Based Scheme for Solving Multiple Knapsack Problems
Appl. Sci. 2022, 12(6), 3068; https://doi.org/10.3390/app12063068 - 17 Mar 2022
Cited by 3 | Viewed by 2092
Abstract
A knapsack problem is to select a set of items that maximizes the total profit of the selected items while keeping their total weight no more than the capacity of the knapsack. As a generalized form with multiple knapsacks, the multi-knapsack problem (MKP) is to select a disjoint set of items for each knapsack. To solve the MKP, we propose a deep reinforcement learning (DRL) based approach, which takes as input the available capacities of the knapsacks, the total profits and weights of the selected items, and the normalized profits and weights of the unselected items, and determines the next item to be mapped to the knapsack with the largest available capacity. To expedite the learning process, we adopt the Asynchronous Advantage Actor-Critic (A3C) method for the policy model. The experimental results indicate that the proposed method outperforms the random and greedy methods and achieves performance comparable to an optimal policy in terms of the ratio of the profit of the selected items to the total profit sum, particularly when the profits and weights of items have a non-linear relationship, such as quadratic forms.
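The greedy baseline mentioned above is not specified in the abstract; one plausible version, which pairs a profit-per-weight ordering with the rule of always filling the knapsack with the largest remaining capacity, might look like this (illustrative only, not the paper's exact baseline):

```python
def greedy_mkp(profits, weights, capacities):
    """Greedy baseline for the multi-knapsack problem: take items in decreasing
    profit/weight order and place each one, if it fits, into the knapsack with
    the largest remaining capacity."""
    remaining = list(capacities)
    assignment = [[] for _ in capacities]                 # item indices per knapsack
    for i in sorted(range(len(profits)),
                    key=lambda i: profits[i] / weights[i], reverse=True):
        k = max(range(len(remaining)), key=remaining.__getitem__)
        if weights[i] <= remaining[k]:
            remaining[k] -= weights[i]
            assignment[k].append(i)
    return assignment

# Example: greedy_mkp(profits=[10, 7, 5, 3], weights=[4, 3, 2, 2], capacities=[5, 4])
```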
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Article
Genetic Connection between Hyperglycemia and Carotid Atherosclerosis in Hyperlipidemic Mice
Genes 2022, 13(3), 510; https://doi.org/10.3390/genes13030510 - 14 Mar 2022
Cited by 3 | Viewed by 1774
Abstract
Type 2 diabetes (T2D) is a major risk factor for atherosclerosis and its complications. Apoe-null (Apoe−/−) mouse strains exhibit a wide range of variation in susceptibility to T2D and carotid atherosclerosis, the latter being a major cause of ischemic stroke. To identify genetic connections between T2D and carotid atherosclerosis, 145 male F2 mice were generated from LP/J and BALB/cJ Apoe−/− mice and fed a Western diet for 12 weeks. Atherosclerotic lesions in the carotid arteries and fasting and non-fasting plasma glucose levels were measured, and genotyping was performed using miniMUGA arrays. Two significant QTL (quantitative trait loci) on chromosomes (Chr) 6 and 15 were identified for carotid lesions. The Chr15 QTL coincided precisely with Bglu20, a QTL for fasting and non-fasting glucose levels. Carotid lesion sizes showed a trend toward correlation with fasting and non-fasting glucose levels in the F2 mice. The Chr15 QTL for carotid lesions was suppressed after excluding the influence of fasting or non-fasting glucose. Likely candidate genes for the causal association were Tnfrsf11b, Deptor, and Gsdmc2. These results demonstrate a causative role for hyperglycemia in the development of carotid atherosclerosis in hyperlipidemic mice.
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Article
Machine Learning Model-Based Simple Clinical Information to Predict Decreased Left Atrial Appendage Flow Velocity
J. Pers. Med. 2022, 12(3), 437; https://doi.org/10.3390/jpm12030437 - 10 Mar 2022
Cited by 1 | Viewed by 1431
Abstract
Background: Transesophageal echocardiography (TEE) is the technique of first choice for evaluating the left atrial appendage flow velocity (LAAV) in clinical practice, but it may cause some complications. Clinicians therefore require a simple, applicable method to screen patients with decreased LAAV, and we investigated the feasibility and accuracy of machine learning (ML) models for predicting LAAV. Method: The analysis included patients with atrial fibrillation who visited the general hospital of the PLA and underwent TEE between January 2017 and December 2020. Three machine learning algorithms were used to predict LAAV, and the area under the receiver operating characteristic curve (AUC) was measured to evaluate diagnostic accuracy. Results: Of the 1039 subjects, 125 patients (12%) were determined to have decreased LAAV (LAAV < 25 cm/s). Patients with decreased LAAV were more obese and showed a higher prevalence of persistent AF, heart failure, hypertension, diabetes, and stroke, and the decreased-LAAV group had a larger left atrial diameter and a higher serum level of NT-proBNP than the control group (p < 0.05). Three machine learning models (SVM, RF, and KNN) were developed to predict LAAV. On the test data, the RF model performed best (R = 0.608, AUC = 0.89) among the three models, and a fivefold cross-validation scheme further verified its predictive ability; NT-proBNP was the factor with the strongest impact in the RF model. Conclusions: A machine learning model (random forest) based on simple clinical information showed good performance in predicting LAAV. Such a screening tool for patients with decreased LAAV may be very helpful in the risk classification of patients at high risk of LAA thrombosis.
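As a rough illustration of the reported pipeline (a random forest on simple clinical features, evaluated by AUC and fivefold cross-validation), a scikit-learn sketch with placeholder data might look like the following; the actual features, preprocessing, and hyperparameters are not given in the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1039, 10))      # placeholder clinical features (e.g. NT-proBNP, LA diameter, ...)
y = rng.integers(0, 2, size=1039)    # 1 = decreased LAAV (< 25 cm/s), 0 = control

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print("5-fold CV AUC:", cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean())
# rf.feature_importances_ would surface the strongest predictors (NT-proBNP in the paper).
```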
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Article
Detection of Target Genes for Drug Repurposing to Treat Skeletal Muscle Atrophy in Mice Flown in Spaceflight
Genes 2022, 13(3), 473; https://doi.org/10.3390/genes13030473 - 08 Mar 2022
Cited by 2 | Viewed by 2501
Abstract
Skeletal muscle atrophy is a common condition in aging, diabetes, and long-duration spaceflight due to microgravity. This article investigates multi-modal gene-disease and disease-drug networks via link prediction algorithms to select drugs for repurposing to treat skeletal muscle atrophy. Key target genes that cause muscle atrophy in the left and right extensor digitorum longus, gastrocnemius, quadriceps, and left and right soleus muscles are detected using graph-theoretic network analysis by mining the transcriptomic datasets collected from mice flown in spaceflight and made available by GeneLab. We identified the top muscle atrophy gene regulators using the Pearson correlation and the Bayesian Markov blanket method. The gene-disease knowledge graph was constructed using the scalable precision medicine knowledge engine, and we computed node embeddings and random-walk measures from the networks. Graph convolutional networks, graph neural networks, random forests, and gradient boosting methods were trained on the embeddings and network features to predict links and rank the top gene-disease associations for skeletal muscle atrophy. Drugs were then selected and a disease-drug knowledge graph was constructed. Link prediction methods were applied to the disease-drug networks to identify the top-ranked drugs for therapeutic treatment of skeletal muscle atrophy. The graph convolutional network performs best in link prediction in terms of receiver operating characteristic curves and prediction accuracy. The key genes involved in skeletal muscle atrophy are associated with metabolic and neurodegenerative diseases. The drugs selected for repurposing using the graph convolutional network method were nutrients, corticosteroids, anti-inflammatory medications, and others related to insulin.
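The paper trains graph convolutional networks on knowledge-graph embeddings; as a far simpler illustration of scoring an unobserved gene-disease link in such a network, a classical neighbourhood-based baseline with NetworkX (toy graph, illustrative associations only) is:

```python
import networkx as nx

# Toy gene-disease graph with a couple of gene-gene interaction edges; the study
# instead uses a full precision-medicine knowledge graph, learned embeddings,
# and graph convolutional networks.
G = nx.Graph([
    ("FOXO1", "muscle_atrophy"), ("TRIM63", "muscle_atrophy"), ("FBXO32", "muscle_atrophy"),
    ("FOXO1", "type_2_diabetes"), ("TRIM63", "type_2_diabetes"),
    ("FBXO32", "FOXO1"), ("FBXO32", "TRIM63"),
])

candidates = [("FBXO32", "type_2_diabetes")]      # unobserved pair to score
for u, v, score in nx.jaccard_coefficient(G, candidates):
    print(u, v, round(score, 3))                  # higher score = more plausible link
```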
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Article
A Two-Branch CNN Fusing Temporal and Frequency Features for Motor Imagery EEG Decoding
Entropy 2022, 24(3), 376; https://doi.org/10.3390/e24030376 - 08 Mar 2022
Cited by 5 | Viewed by 2275
Abstract
With the development of technology and the rise of the metaverse concept, the brain-computer interface (BCI) has become a research hotspot, and BCIs based on motor imagery (MI) EEG have received wide attention. However, the performance of MI-EEG decoding models still needs to be improved. At present, most deep learning-based MI-EEG decoding methods cannot make full use of the temporal and frequency features of EEG data, which leads to low decoding accuracy. To address this issue, this paper proposes a two-branch convolutional neural network (TBTF-CNN) that can simultaneously learn the temporal and frequency features of EEG data. The structure of the EEG data is reconstructed to simplify the spatio-temporal convolution process of the CNN, and the continuous wavelet transform is used to express the time-frequency features of the EEG data. TBTF-CNN fuses the features learned from the two branches and then inputs them into the classifier to decode the MI-EEG. Experimental results on the BCI Competition IV 2b dataset show that the proposed model achieves an average classification accuracy of 81.3% and a kappa value of 0.63. Compared with other methods, TBTF-CNN achieves better performance in MI-EEG decoding. The proposed method makes full use of the temporal and frequency features of EEG data and improves the decoding accuracy of MI-EEG.
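The exact layer configuration of TBTF-CNN is not reproduced in the abstract; a minimal two-branch sketch in PyTorch, with one branch convolving the raw temporal signal and the other convolving a continuous-wavelet-transform image before feature fusion (all layer sizes and channel counts here are assumptions), could look like:

```python
import torch
import torch.nn as nn

class TwoBranchEEGNet(nn.Module):
    """Sketch of a two-branch MI-EEG classifier: a temporal branch over the raw
    signal and a spectral branch over a CWT time-frequency image, concatenated
    before a linear classifier. Illustrative only, not the published TBTF-CNN."""
    def __init__(self, n_channels=3, n_classes=2):
        super().__init__()
        self.temporal = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, padding=12),
            nn.BatchNorm1d(16), nn.ELU(),
            nn.AdaptiveAvgPool1d(32), nn.Flatten())
        self.spectral = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16), nn.ELU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten())
        self.classifier = nn.Linear(16 * 32 + 16 * 8 * 8, n_classes)

    def forward(self, x_raw, x_cwt):
        # x_raw: (batch, channels, time); x_cwt: (batch, channels, freq, time)
        z = torch.cat([self.temporal(x_raw), self.spectral(x_cwt)], dim=1)
        return self.classifier(z)

# logits = TwoBranchEEGNet()(torch.randn(8, 3, 1000), torch.randn(8, 3, 64, 1000))
```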
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Article
A Novel Epidemic Model Based on Pulse Charging in Wireless Rechargeable Sensor Networks
Entropy 2022, 24(2), 302; https://doi.org/10.3390/e24020302 - 21 Feb 2022
Cited by 3 | Viewed by 1001
Abstract
As wireless rechargeable sensor networks (WRSNs) gradually become widely accepted and recognized, their security issues have also become a focus of research. In existing WRSN research, few studies have introduced the idea of pulse charging. Taking into account the utilization rate of nodes' energy, this paper proposes a novel pulse infectious disease model (SIALS-P), composed of susceptible, infected, anti-malware, and low-energy susceptible states under pulse charging, to deal with the security issues of WRSNs. At each periodic pulse point, part of the low-energy states (LS nodes, LI nodes) is converted into the normal-energy states (S nodes, I nodes) to control the numbers of susceptible and infected nodes. This paper first analyzes the local stability of the SIALS-P model using Floquet theory. Then, a suitable comparison system is constructed via the comparison theorem to analyze the stability of the malware-free T-periodic solution and the persistence of malware transmission. Additionally, the optimal control of the proposed model is analyzed. Finally, a comparative simulation analysis of the proposed model, the non-charging model, and the continuous charging model is given, and the effects of parameters on the basic reproduction numbers of the three models are shown. Meanwhile, the sensitivity of each parameter and the optimal control theory are further verified.
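The abstract describes the pulse mechanism but not the governing equations; purely as an illustration of how a periodic pulse converts low-energy nodes back to normal-energy states between stretches of continuous dynamics (the spread, recovery, and depletion rates below are invented for the sketch and are not the SIALS-P equations), one could simulate:

```python
def simulate_pulse_model(T=10.0, pulses=20, dt=0.01,
                         beta=0.3, recover=0.1, deplete=0.05, q=0.6):
    """Toy pulse-charging simulation: SIS-like malware spread plus energy
    depletion between pulses; at each pulse, a fraction q of low-energy nodes
    (LS, LI) is recharged back to S and I."""
    S, I, LS, LI = 0.9, 0.1, 0.0, 0.0
    for _ in range(pulses):
        for _ in range(int(T / dt)):                 # continuous phase (Euler steps)
            new_inf = beta * S * I
            S += dt * (-new_inf + recover * I - deplete * S)
            I += dt * (new_inf - recover * I - deplete * I)
            LS += dt * deplete * S
            LI += dt * deplete * I
        S, LS = S + q * LS, (1 - q) * LS             # pulse charging: LS -> S
        I, LI = I + q * LI, (1 - q) * LI             # pulse charging: LI -> I
    return S, I, LS, LI

print(simulate_pulse_model())
```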
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Article
Matching-Updating Mechanism: A Solution for the Stable Marriage Problem with Dynamic Preferences
Entropy 2022, 24(2), 263; https://doi.org/10.3390/e24020263 - 11 Feb 2022
Cited by 4 | Viewed by 1287
Abstract
We studied the stable marriage problem (SMP) with dynamic preferences. The dynamic preference model allows an agent to change its preferences at any time, which may cause instability in a matching. However, a preference change in an SMP instance does not necessarily break all pairs of an existing match. Sometimes, only a few couples want to change their partners while the others stay with their current partners; this motivates us to find a stable matching for the new instance by updating an existing match rather than restarting the matching process from scratch. By using the updating mechanism, we aim to minimize the revision cost when rematching occurs. The challenge when updating a matching is that a cyclic process may occur, in which a stable matching is never reached. Our proposed mechanism can update a match while avoiding the cyclic process.
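For reference, the restart-from-scratch baseline that the updating mechanism tries to avoid is the classic deferred-acceptance (Gale-Shapley) procedure; a compact sketch (the paper's matching-updating mechanism itself is not reproduced here) is:

```python
def gale_shapley(proposer_prefs, receiver_prefs):
    """Deferred-acceptance matching: proposers propose in preference order,
    receivers keep the best proposal seen so far."""
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    next_idx = {p: 0 for p in proposer_prefs}
    engaged = {}                                    # receiver -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:      # r prefers the new proposer
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)
    return engaged

# gale_shapley({"a": ["x", "y"], "b": ["y", "x"]}, {"x": ["a", "b"], "y": ["b", "a"]})
```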
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Article
Design of a Low-Power Embedded System Based on a SoC-FPGA and the Honeybee Search Algorithm for Real-Time Video Tracking
Sensors 2022, 22(3), 1280; https://doi.org/10.3390/s22031280 - 08 Feb 2022
Cited by 1 | Viewed by 2233
Abstract
Video tracking involves detecting previously designated objects of interest within a sequence of image frames. It can be applied in robotics, unmanned vehicles, and automation, among other fields of interest. Video tracking is still regarded as an open problem due to a number of obstacles that remain to be overcome, including the need for high precision and real-time results, as well as portability and low power demands. This work presents the design, implementation, and assessment of a low-power embedded system based on an SoC-FPGA platform and the honeybee search algorithm (HSA) for real-time video tracking. HSA is a meta-heuristic that combines evolutionary computing and swarm intelligence techniques. Our findings demonstrate that the combination of the SoC-FPGA and HSA reduced the consumption of computational resources, allowing real-time multiprocessing without a reduction in precision and with the advantage of lower power consumption, which enables portability. The starkest difference was observed when measuring power consumption: the proposed SoC-FPGA system consumed about 5 watts, whereas the CPU-GPU system required more than 200 watts. A general recommendation from this research is to prefer SoC-FPGA over CPU-GPU platforms when working with meta-heuristics in computer vision applications that require an embedded solution.
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Article
Multi-Label Active Learning-Based Machine Learning Model for Heart Disease Prediction
Sensors 2022, 22(3), 1184; https://doi.org/10.3390/s22031184 - 04 Feb 2022
Cited by 26 | Viewed by 4891
Abstract
The rapid growth and adaptation of medical information to identify significant health trends and help with timely preventive care have been recent hallmarks of the modern healthcare data system. Heart disease is the deadliest condition in the developed world. Cardiovascular disease and its complications, including dementia, can be averted with early detection, and further research in this area is needed to prevent strokes and heart attacks. An optimal machine learning model can help achieve this goal with the wealth of available healthcare data on heart disease, since heart disease can be predicted and diagnosed using machine-learning-based systems. Active learning (AL) methods improve classification quality by incorporating user-expert feedback with sparsely labelled data. In this paper, five selection strategies for multi-label active learning (MMC, Random, Adaptive, QUIRE, and AUDI) were applied to reduce labelling costs by iteratively selecting the most relevant data to query their labels. Each selection strategy was combined with a label ranking classifier whose hyperparameters were optimized by a grid search to implement predictive modelling for the heart disease dataset. The experimental evaluation reports accuracy and F-score with and without hyperparameter optimization. The results show that, for the optimized label ranking model, one selection strategy generalizes beyond the existing data better than the others in terms of accuracy, while a different strategy stands out with regard to the F-score under the optimized settings.
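The five query strategies named above are not detailed in the abstract; as a generic stand-in for pool-based active learning, a simple uncertainty-sampling loop with scikit-learn (toy data, not the heart disease dataset, and a plain binary classifier rather than a label ranker) looks like this:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labelled = list(range(10))                 # small labelled seed set
pool = [i for i in range(len(y)) if i not in labelled]

for _ in range(5):                         # five query rounds
    clf = LogisticRegression(max_iter=1000).fit(X[labelled], y[labelled])
    proba = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]   # most uncertain instance
    labelled.append(query)                 # an oracle would provide its label
    pool.remove(query)

print("labelled set size after querying:", len(labelled))
```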
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Article
An Enhanced Full-Form Model-Free Adaptive Controller for SISO Discrete-Time Nonlinear Systems
Entropy 2022, 24(2), 163; https://doi.org/10.3390/e24020163 - 21 Jan 2022
Viewed by 1588
Abstract
This study focuses on the full-form model-free adaptive controller (FFMFAC) for SISO discrete-time nonlinear systems and proposes an enhanced FFMFAC (EFFMFAC). The proposed design incorporates long short-term memory neural networks (LSTMs) and fuzzy neural networks (FNNs). To be more precise, LSTMs are utilized to adjust vital parameters of the FFMFAC online, and, owing to the strong nonlinear approximation capability of FNNs, the pseudo-gradient (PG) values of the controller are estimated online. EFFMFAC is characterized by using the measured I/O data for the online training of all introduced neural networks and does not involve offline training or a specific model of the controlled system. Finally, its rationality and superiority are verified by two simulations and a supporting ablation analysis. Five individual performance indices are given, and the experimental findings show that EFFMFAC outperforms all other methods; in particular, compared with the FFMFAC, EFFMFAC reduces the RMSE by 21.69% and 11.21% in the two simulations, respectively, proving it to be applicable to SISO discrete-time nonlinear systems.
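For orientation, the compact-form model-free adaptive control law that the full-form controller generalizes, i.e., an online pseudo-gradient estimate followed by a control-input update, can be sketched as below; the gains are illustrative, and the paper's LSTM/FNN tuning and full-form (multi-lag) structure are not included.

```python
def cfdl_mfac_step(u_prev, du_prev, y, dy, y_ref, phi,
                   eta=0.5, mu=1.0, rho=0.6, lam=1.0):
    """One step of compact-form model-free adaptive control:
    1) update the pseudo-gradient estimate phi from the last input/output increments,
    2) update the control input toward the reference y_ref using phi."""
    phi = phi + eta * du_prev / (mu + du_prev ** 2) * (dy - phi * du_prev)
    u = u_prev + rho * phi / (lam + phi ** 2) * (y_ref - y)
    return u, phi

# Typical use: call once per sampling instant with the measured output y, the
# previous input u_prev, and the increments du_prev = u(k-1)-u(k-2), dy = y(k)-y(k-1).
```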
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: KTAT: A Complex Embedding Model of Knowledge Graph Integrating Type Information and Attention Mechanism
Authors: Ying Liu; Peng Wang
Affiliation: School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China (both authors)
Abstract: Knowledge graph embedding learning aims to represent the entities and relations of real-world knowledge as low-dimensional dense vectors. Existing knowledge representation learning methods mostly aggregate only the internal information of triples and the graph structure information. Recent research has shown that multi-source information about entities is conducive to more accurate knowledge embedding. In this paper, we propose KTAT, a model based on an attention mechanism that integrates entity type information. It learns different representations for an entity under different types by mapping entities to type-specific hyperplanes, and it integrates textual description information to supplement the embeddings and improve model performance. Experimental results show that KTAT outperforms previous state-of-the-art methods and that combining type information effectively improves link prediction performance.
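The abstract mentions mapping entities to type-specific hyperplanes; a minimal TransH-style sketch of such a projection and the translational score it feeds (dimensions and variable names are assumptions, not the KTAT model itself) is:

```python
import numpy as np

def project(e, w):
    """Project entity embedding e onto the hyperplane with normal vector w
    (assumed here to be type-specific, as the abstract describes)."""
    w = w / np.linalg.norm(w)
    return e - np.dot(e, w) * w

def triple_score(h, r, t, w):
    """Translation-based plausibility: smaller distance -> more plausible (h, r, t),
    so the negated norm is higher for more plausible triples."""
    return -np.linalg.norm(project(h, w) + r - project(t, w))

d = 50
h, r, t, w = (np.random.randn(d) for _ in range(4))
print(triple_score(h, r, t, w))
```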
