Topic Editors

Prof. Dr. Phivos Mylonas, Department of Informatics, Ionian University, 491 32 Corfu, Greece
Dr. Katia Lida Kermanidis, Department of Informatics, Ionian University, 491 32 Corfu, Greece
Prof. Dr. Manolis Maragoudakis, Department of Informatics, Ionian University, 491 32 Corfu, Greece

Artificial Intelligence Models, Tools and Applications

Abstract submission deadline: 31 May 2024
Manuscript submission deadline: 31 August 2024
Viewed by 74221

Topic Information

Dear Colleagues,

During the difficult years since the start of the COVID-19 pandemic, the need for efficient artificial intelligence models, tools, and applications has been more evident than ever. Machine learning and data science, together with the huge volumes of data they produce and consume, form a clear new source of valuable information, and new and innovative approaches are required to tackle the research challenges that arise in this area. In this context, artificial intelligence may be described as one of the most important research areas of our time. For the research community, it also poses substantial challenges in data management and draws on emerging disciplines in information processing and the related tools and applications.

This Topic aims to bring together interdisciplinary approaches focusing on innovative applications of new and existing artificial intelligence methodologies. Since the data involved are typically heterogeneous and dynamic in nature, computer science researchers are encouraged to develop new, or adapt existing, artificial intelligence models, tools, and applications to solve such problems effectively. The Topic is therefore open to anyone who wishes to submit a relevant research manuscript.

In addition to the open call for papers, extended versions of articles presented at SETN 2022 are invited for submission to this Topic. In this case, the conference paper should be cited and noted on the first page of the submitted paper; authors are asked to disclose in their cover letter that it is a conference paper and to include a statement of what has been changed relative to the original. Each such submission should contain at least 50% new material, e.g., in the form of technical extensions, more in-depth evaluations, or additional use cases.

Prof. Dr. Phivos Mylonas
Dr. Katia Lida Kermanidis
Prof. Dr. Manolis Maragoudakis
Topic Editors

Keywords

  • artificial intelligence
  • machine learning
  • smart tools and applications
  • computational logic
  • multi-agent systems
  • cross-disciplinary AI applications

Participating Journals

Journal Name                  Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Applied Sciences (applsci)    2.7             4.5         2011            15.8 days                 CHF 2300
Computers (computers)         2.8             4.7         2012            17.9 days                 CHF 1600
Digital (digital)             -               -           2021            24.1 days                 CHF 1000
Electronics (electronics)     2.9             4.7         2012            15.8 days                 CHF 2200
Smart Cities (smartcities)    6.4             8.5         2018            16.5 days                 CHF 1400

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details, please visit https://www.preprints.org.

Published Papers (53 papers)

Article
Recommendation Method of Power Knowledge Retrieval Based on Graph Neural Network
Electronics 2023, 12(18), 3922; https://doi.org/10.3390/electronics12183922 - 18 Sep 2023
Viewed by 255
Abstract
With the digital and intelligent transformation of the power grid, its structure and its operation and maintenance technology are constantly being updated, which leads to problems such as difficulties in information acquisition and screening. Therefore, we propose a recommendation method for power knowledge retrieval based on a graph neural network (RPKR-GNN). The method first uses a graph neural network to learn the structure of the power fault knowledge graph and realize deep semantic embedding of power entities and relations. It then fuses knowledge graph paths to mine potential entity relationships and completes the power fault knowledge graph through knowledge inference. At the same time, user retrieval behavior features are combined for knowledge aggregation to form a personal subgraph, and the user retrieval subgraph is analyzed by matching the similarity of retrieval keyword features. Finally, a fusion subgraph is formed based on the subgraph topology, and its entities are reordered to generate a recommendation list that predicts the target user's retrieval intention. Experimental comparison with various classical models shows that the model has a certain generalization ability in knowledge inference. The method performs well in terms of the MR and Hit@10 indexes on each dataset, and the F1 value reaches 87.3 for retrieval recommendation, which effectively enhances the automated operation and maintenance capability of the power system. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
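
The full RPKR-GNN pipeline is not reproduced here, but the core idea of propagating entity features over a knowledge graph and then ranking entities against a retrieval query can be sketched in a few lines. Everything below (the toy adjacency matrix, the embedding sizes, and the query vector) is an illustrative assumption, not data or code from the paper:

```python
import numpy as np

# Toy knowledge-graph sketch: 5 power-grid entities with 4-dimensional features.
# A, H, W, and the query are illustrative placeholders, not data from the paper.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [1, 0, 0, 1, 1],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(5, 4))   # initial entity embeddings
W = np.random.default_rng(1).normal(size=(4, 4))   # projection weight (fixed here)

# One GCN-style propagation step: average neighbour features, project, apply ReLU.
A_hat = A + np.eye(5)                               # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
H1 = np.maximum(D_inv @ A_hat @ H @ W, 0.0)

# Rank entities against a retrieval query by cosine similarity of embeddings.
query = H1[0]                                       # pretend entity 0 matches the query keywords
scores = H1 @ query / (np.linalg.norm(H1, axis=1) * np.linalg.norm(query) + 1e-9)
print("recommended entity order:", np.argsort(-scores))
```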

Article
Web-Based Malware Detection System Using Convolutional Neural Network
Digital 2023, 3(3), 273-285; https://doi.org/10.3390/digital3030017 - 12 Sep 2023
Viewed by 445
Abstract
In this article, we introduce a web-based malware detection system that leverages a deep-learning approach. Our primary objective is the development of a robust deep-learning model designed for classifying malware in executable files. In contrast to conventional malware detection systems, our approach relies on static detection techniques to unveil the true nature of files as either malicious or benign. Our method makes use of a one-dimensional convolutional neural network (1D-CNN) due to the nature of the portable executable file. Significantly, static analysis aligns perfectly with our objectives, allowing us to uncover static features within the portable executable header. This choice holds particular significance given the potential risks associated with dynamic detection, often necessitating the setup of controlled environments, such as virtual machines, to mitigate dangers. Moreover, we seamlessly integrate this effective deep-learning method into a web-based system, rendering it accessible and user-friendly via a web interface. Empirical evidence showcases the efficiency of our proposed methods, as demonstrated in extensive comparisons with state-of-the-art models across three diverse datasets. Our results affirm the superiority of our approach, delivering a practical, dependable, and rapid mechanism for identifying malware within executable files. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
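
As a rough illustration of the kind of model the authors describe, the following is a minimal 1D-CNN binary classifier over a flat vector of portable-executable header features, written with Keras; the header length, layer sizes, and training settings are assumptions, not the paper's architecture:

```python
import tensorflow as tf

HEADER_LEN = 256  # assumed number of PE-header features per sample (not from the paper)

# Small 1D-CNN binary classifier over a flat vector of PE-header features.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(HEADER_LEN, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # malicious vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```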

Article
Joint Location–Allocation Model for Multi-Level Maintenance Service Network in Agriculture
Appl. Sci. 2023, 13(18), 10167; https://doi.org/10.3390/app131810167 - 09 Sep 2023
Viewed by 363
Abstract
The maintenance service network is always designed as a multi-level service network to provide timely maintenance service for failed machinery, and is rarely studied in agriculture. Thus, this paper focuses on a three-level maintenance service network location–allocation problem in agriculture, which contains several spare part centres, service stations, and service units. This research aims to obtain the optimal location of spare part centres and service stations while determining service vehicle allocation results for service stations, and the problem can be called a multi-level facility location and allocation problem (MLFLAP). Considering contiguity constraints and hierarchical relationships, the proposed MLFLAP is formulated as a mixed-integer linear programming (MILP) model integrating with P-region and set covering location problems to minimize total service costs, including spare part centre construction costs, service vehicle usage costs, and service mileage costs of service stations. The Benders decomposition-based solution method with several improvements is then applied to decompose the original MLFLAP into master problem and subproblems to find the optimal solutions effectively. Finally, a real-world case in China is proposed to evaluate the performance of the model and algorithm in agriculture, and sensitivity analysis is also conducted to demonstrate the impact of several parameters. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
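
The three-level MLFLAP itself is considerably richer (hierarchical levels, contiguity constraints, Benders decomposition), but a toy single-level location-allocation MILP conveys the modelling pattern. The sites, units, and costs below are invented for illustration:

```python
import pulp

# Toy single-level location-allocation sketch (the paper's MLFLAP adds levels,
# contiguity constraints, and Benders decomposition on top of this idea).
stations = ["s1", "s2", "s3"]            # candidate service-station sites
units = ["u1", "u2", "u3", "u4"]         # service units (demand points)
open_cost = {"s1": 50, "s2": 60, "s3": 40}                     # illustrative costs
travel = {(s, u): c for (s, u), c in zip(
    [(s, u) for s in stations for u in units],
    [4, 7, 3, 6, 2, 5, 8, 4, 9, 3, 6, 2])}

prob = pulp.LpProblem("facility_location", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", stations, cat="Binary")
x = pulp.LpVariable.dicts("assign", [(s, u) for s in stations for u in units], cat="Binary")

prob += pulp.lpSum(open_cost[s] * y[s] for s in stations) + \
        pulp.lpSum(travel[s, u] * x[s, u] for s in stations for u in units)
for u in units:                                   # every unit is served exactly once
    prob += pulp.lpSum(x[s, u] for s in stations) == 1
for s in stations:                                # only open stations may serve
    for u in units:
        prob += x[s, u] <= y[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("opened stations:", [s for s in stations if y[s].value() == 1])
```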

Article
A New Linear Model for the Calculation of Routing Metrics in 802.11s Using ns-3 and RStudio
Computers 2023, 12(9), 172; https://doi.org/10.3390/computers12090172 - 28 Aug 2023
Viewed by 310
Abstract
Wireless mesh networks (WMNs) offer a pragmatic, cost-effective solution for provisioning ubiquitous broadband internet access and diverse telecommunication systems. The conceptual underpinning of mesh networks finds application not only in IEEE networks but also in 3GPP networks like LTE and the low-power wide area network (LPWAN) tailored for the burgeoning Internet of Things (IoT) landscape. IEEE 802.11s is the de facto standard for WMNs; it defines the hybrid wireless mesh protocol (HWMP) as a layer-2 routing protocol and the airtime link metric (ALM) as its routing metric. In this intricate landscape, artificial intelligence (AI) plays a prominent role in industry, particularly within the technology and telecommunication realms. This study presents a novel methodology for the computation of routing metrics, specifically the ALM. The methodology uses the network simulator ns-3 and RStudio as a statistical computing environment for data analysis. The former enables the creation of scripts that generate a variety of WMN scenarios, from which information is gathered and stored in databases. The latter (RStudio) takes this information and supports two linear predictions: the first uses linear models (lm) and the second employs generalized linear models (glm). To conclude this process, statistical tests are applied to the original model as well as to the newly suggested ones. This work contributes in two ways: first, through a methodological tool for the metric calculation of the HWMP protocol of the IEEE 802.11s standard, using lm and glm for the selection and validation of the model regressors; at this stage, the ANOVA and stepwise tools of RStudio are used. The second contribution is a linear predictor that improves the WMN's performance as an a priori mechanism before the use of the ns-3 simulator; the ANCOVA tool of RStudio is employed here. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
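
The paper's workflow is carried out in R; an analogous sketch in Python with statsmodels shows the shape of the lm/glm comparison step. The regressors and the synthetic data are placeholders, not the ns-3 scenario variables used by the authors:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative stand-in for the paper's R workflow: predict the airtime link
# metric (ALM) from hypothetical scenario variables gathered from ns-3 runs.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "nodes": rng.integers(5, 50, size=200),
    "distance": rng.uniform(10, 300, size=200),
    "packet_rate": rng.uniform(1, 100, size=200),
})
df["alm"] = 0.8 * df["distance"] + 2.5 * df["packet_rate"] + rng.normal(0, 20, 200)

lm_fit = smf.ols("alm ~ nodes + distance + packet_rate", data=df).fit()   # lm analogue
glm_fit = smf.glm("alm ~ distance + packet_rate", data=df).fit()          # glm analogue
print(lm_fit.summary().tables[1])
print(glm_fit.aic)   # compare candidate models, e.g. during stepwise selection
```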

Article
Efficient On-Chip Learning of Multi-Layer Perceptron Based on Neuron Multiplexing Method
Electronics 2023, 12(17), 3607; https://doi.org/10.3390/electronics12173607 - 26 Aug 2023
Viewed by 297
Abstract
An efficient on-chip learning method based on neuron multiplexing is proposed in this paper to address the limitations of traditional on-chip learning methods, including low resource utilization and non-tunable parallelism. The proposed method utilizes a configurable neuron calculation unit (NCU) to calculate neural networks in different degrees of parallelism through multiplexing NCUs at different levels, and resource utilization can be increased by reducing the number of NCUs since the resource consumption is predominantly determined by the number of NCUs and the data bit-width, which are decoupled from the specific topology. To better support the proposed method and minimize RAM block usage, a weight segmentation and recombination method is introduced, accompanied by a detailed explanation of the access order. Moreover, a performance model is developed to facilitate parameter selection process. Experimental results conducted on an FPGA development board demonstrate that the proposed method has lower resource consumption, higher resource utilization, and greater generality compared to other methods. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
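
The essence of neuron multiplexing is that a small pool of NCUs is reused over several time slots to compute a larger layer. A toy software emulation (hypothetical layer size and NCU count) illustrates the scheduling and shows that the result matches a direct layer evaluation:

```python
import numpy as np

# Toy emulation of neuron multiplexing: a layer of 8 neurons is computed by
# only NUM_NCU hardware units, each reused over several time slots.
NUM_NCU = 2                      # assumed number of neuron calculation units
rng = np.random.default_rng(0)
x = rng.normal(size=16)          # layer input
W = rng.normal(size=(8, 16))     # weights for 8 output neurons
b = rng.normal(size=8)

out = np.zeros(8)
for start in range(0, 8, NUM_NCU):           # one "time slot" per group of neurons
    for ncu in range(NUM_NCU):
        neuron = start + ncu                 # neuron currently mapped onto this NCU
        out[neuron] = np.maximum(W[neuron] @ x + b[neuron], 0.0)

# Fewer NCUs -> more time slots but less hardware; the result matches a direct layer.
assert np.allclose(out, np.maximum(W @ x + b, 0.0))
```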

Article
Pm2.5 Time Series Imputation with Deep Learning and Interpolation
Computers 2023, 12(8), 165; https://doi.org/10.3390/computers12080165 - 16 Aug 2023
Viewed by 559
Abstract
Commonly, regression for time series imputation has been implemented directly through regression models, statistical, machine learning, and deep learning techniques. In this work, a novel approach is proposed based on a classification model that determines the NA value class, and from this, two types of interpolations are implemented: polynomial or flipped polynomial. An hourly pm2.5 time series from Ilo City in southern Peru was chosen as a study case. The results obtained show that for gaps of one NA value, the proposal in most cases presents superior results to techniques such as ARIMA, LSTM, BiLSTM, GRU, and BiGRU; thus, on average, in terms of R2, the proposal exceeds implemented benchmark models by between 2.4341% and 19.96%. Finally, supported by the results, it can be stated that the proposal constitutes a good alternative for short-gaps imputation in pm2.5 time series. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
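
A minimal sketch of the interpolation step, filling a single missing hourly value with a local polynomial fit to its neighbours, is shown below; the window size, polynomial degree, and data are illustrative choices rather than the paper's exact configuration (which also classifies the gap and may apply a flipped polynomial):

```python
import numpy as np

# Fill a single missing hourly PM2.5 value with a local polynomial fit to its
# neighbours (window size and polynomial degree are illustrative choices).
series = np.array([31.0, 34.0, 38.0, np.nan, 41.0, 39.0, 36.0])
gap = int(np.where(np.isnan(series))[0][0])

window = 3                                   # neighbours taken on each side
idx = [i for i in range(gap - window, gap + window + 1)
       if 0 <= i < len(series) and not np.isnan(series[i])]
coeffs = np.polyfit(idx, series[idx], deg=2)
series[gap] = np.polyval(coeffs, gap)
print(f"imputed value at t={gap}: {series[gap]:.2f}")
```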

Article
A Mobile Solution for Enhancing Tourist Safety in Warm and Humid Destinations
Appl. Sci. 2023, 13(15), 9027; https://doi.org/10.3390/app13159027 - 07 Aug 2023
Viewed by 1233
Abstract
This research introduces a mobile application specifically designed to enhance tourist safety in warm and humid destinations. The proposed solution integrates advanced functionalities, including a comprehensive warning system, health recommendations, and a life rescue system. The study showcases the exceptional effectiveness of the implemented system, consistently providing tourists with precise and timely weather and safety information. Notably, the system achieves an impressive average accuracy rate of 100%, coupled with an astonishingly rapid response time of just 0.001 s. Furthermore, the research explores the correlation between the System Usability Scale (SUS) score and tourist engagement and loyalty. The findings reveal a positive relationship between the SUS score and the level of tourist engagement and loyalty. The proposed mobile solution holds significant potential for enhancing the safety and comfort of tourists in hot and humid climates, thereby making a noteworthy contribution to the advancement of the tourism business in smart cities. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Toward Improved Machine Learning-Based Intrusion Detection for Internet of Things Traffic
Computers 2023, 12(8), 148; https://doi.org/10.3390/computers12080148 - 27 Jul 2023
Viewed by 674
Abstract
The rapid development of Internet of Things (IoT) networks has revealed multiple security issues. On the other hand, machine learning (ML) has proven its efficiency in building intrusion detection systems (IDSs) intended to reinforce the security of IoT networks. In fact, the successful design and implementation of such techniques requires the use of effective methods in terms of data and model quality. This paper presents an empirical analysis of the impact of the latter in the context of a multi-class classification scenario. A series of experiments were conducted using six ML models and four benchmark datasets, namely UNSW-NB15, BOT-IoT, ToN-IoT, and Edge-IIoT. The proposed framework investigates the marginal benefit of employing data pre-processing and model configurations considering IoT limitations. The empirical findings indicate that the accuracy of ML-based IDS detection rapidly increases when methods that improve data and model quality are deployed. Specifically, data cleaning, transformation, normalization, and dimensionality reduction, along with model parameter tuning, show significant potential to minimize computational complexity and yield better performance. In addition, MLP- and clustering-based algorithms outperformed the remaining models, with accuracy reaching up to 99.97%. Note that the performance of the challenger models was assessed on similar test sets and compared with the results reported in the relevant literature. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
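
A compact scikit-learn pipeline illustrates the kind of data-quality and model-quality steps the paper credits (normalization, dimensionality reduction, parameter tuning); the synthetic data and the chosen hyperparameter grid are stand-ins, not the authors' setup:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an IoT traffic dataset (the paper uses UNSW-NB15 etc.).
X, y = make_classification(n_samples=2000, n_features=40, n_informative=15,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),          # normalization
    ("reduce", PCA(n_components=20)),     # dimensionality reduction
    ("clf", MLPClassifier(max_iter=500, random_state=0)),
])
search = GridSearchCV(pipe, {"clf__hidden_layer_sizes": [(64,), (64, 32)]}, cv=3)
search.fit(X_tr, y_tr)
print("test accuracy:", search.score(X_te, y_te))
```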

Article
An Incident Detection Model Using Random Forest Classifier
Smart Cities 2023, 6(4), 1786-1813; https://doi.org/10.3390/smartcities6040083 - 17 Jul 2023
Viewed by 583
Abstract
Traffic incidents have adverse effects on traffic operations, safety, and the economy. Efficient Automatic Incident Detection (AID) systems are crucial for timely and accurate incident detection. This paper develops a realistic AID model using the Random Forest (RF), which is a machine learning technique. The model is trained and tested on simulated data from VISSIM traffic simulation software. The model considers the variations in four critical factors: congestion levels, incident severity, incident location, and detector distance. Comparative evaluation with existing AID models, in the literature, demonstrates the superiority of the developed model, exhibiting higher Detection Rate (DR), lower Mean Time to Detect (MTTD), and lower False Alarm Rate (FAR). During training, the RF model achieved a DR of 96.97%, MTTD of 1.05 min, and FAR of 0.62%. During testing, it achieved a DR of 100%, MTTD of 1.17 min, and FAR of 0.862%. Findings indicate that detecting minor incidents during low traffic volumes is challenging. FAR decreases with the increase in Demand to Capacity ratio (D/C), while MTTD increases with D/C. Higher incident severity leads to lower MTTD values, while greater distance between an incident and upstream detector has the opposite effect. The FAR is inversely proportional to the incident’s location from the upstream detector, while being directly proportional to the distance between detectors. Larger detector spacings result in longer detection times. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
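
A short sketch of the approach, training a random forest on the four factors and reporting DR and FAR from the confusion matrix, is given below; the simulated samples and the labelling rule are invented placeholders for the VISSIM data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in for VISSIM-derived samples: four factors per observation,
# label 1 = incident present, 0 = normal traffic (values are illustrative).
rng = np.random.default_rng(7)
X = np.column_stack([
    rng.uniform(0.3, 1.0, 5000),    # demand-to-capacity ratio (congestion level)
    rng.integers(1, 4, 5000),       # incident severity class
    rng.uniform(0, 1, 5000),        # relative incident location
    rng.uniform(200, 800, 5000),    # detector spacing in metres
])
y = (X[:, 0] * X[:, 1] + rng.normal(0, 0.4, 5000) > 1.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"detection rate DR = {tp / (tp + fn):.3f}, false alarm rate FAR = {fp / (fp + tn):.3f}")
```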

Perspective
We Are Also Metabolites: Towards Understanding the Composition of Sweat on Fingertips via Hyperspectral Imaging
Digital 2023, 3(2), 137-145; https://doi.org/10.3390/digital3020010 - 19 Jun 2023
Viewed by 680
Abstract
AI-empowered sweat metabolite analysis is an emerging and open research area with great potential to add a third category to biometrics: chemical. Current biometrics use two types of information to identify humans: physical (e.g., face, eyes) and behavioral (e.g., gait, typing). Sweat offers a promising solution for enriching human identity with more discerning characteristics to overcome the limitations of current technologies (e.g., demographic differential and vulnerability to spoof attacks). The analysis of a biometric trait's chemical properties holds potential for providing a meticulous perspective on an individual. This not only changes the taxonomy for biometrics, but also lays a foundation for more accurate and secure next-generation biometric systems. This paper discusses existing evidence about the potential held by sweat components in representing the identity of a person. We also highlight emerging methodologies and applications pertaining to sweat analysis and guide the scientific community towards transformative future research directions to design AI-empowered systems of the next generation. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
Article
A Nonintrusive Load Identification Method Based on Improved Gramian Angular Field and ResNet18
Electronics 2023, 12(11), 2540; https://doi.org/10.3390/electronics12112540 - 05 Jun 2023
Viewed by 704
Abstract
Image classification methods based on deep learning have been widely used in the study of nonintrusive load identification. However, when encoding load electrical signals into images, fully retaining the features of the raw data, and thus increasing the recognizability of loads with very similar current signals, remains challenging, and the loss of load features causes the overall accuracy of load identification to decrease. To deal with this problem, this paper proposes a nonintrusive load identification method based on the improved Gramian angular field (iGAF) and ResNet18. In the proposed method, the fast Fourier transform is used to calculate the amplitude spectrum and the phase spectrum, which are used to reconstruct the pixel matrices of the B, G, and R channels of the generated GAF images, so that the color image fused from the three channels contains more information. This improvement to the GAF method enables the generated images to retain the amplitude and phase features of the raw data that are usually missing from standard GAF images. ResNet18 is trained with iGAF images for nonintrusive load identification. Experiments are conducted on two private datasets, ESEAD and EMCAD, and two public datasets, PLAID and WHITED. Experimental results suggest that the proposed method performs well on both private and public datasets, achieving overall identification accuracies of 99.545%, 99.375%, 98.964%, and 100% on the four datasets, respectively. In particular, the method demonstrates significant identification improvements for loads with similar current waveforms in the private datasets. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
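
The exact iGAF construction is specific to the paper, but the gist, a Gramian angular field plus FFT amplitude and phase channels stacked into a three-channel image, can be sketched as follows; the channel layout and normalization details are assumptions:

```python
import numpy as np

# Sketch of an iGAF-style encoding for one current-waveform cycle: the standard
# Gramian angular field plus FFT amplitude/phase channels (channel layout here
# is an illustrative assumption, not the paper's exact construction).
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 2 * np.pi, 64)) + 0.1 * rng.normal(size=64)

x_norm = 2 * (x - x.min()) / (x.max() - x.min()) - 1        # rescale to [-1, 1]
phi = np.arccos(np.clip(x_norm, -1, 1))
gaf = np.cos(phi[:, None] + phi[None, :])                   # Gramian angular summation field

spectrum = np.fft.fft(x)
amp = np.abs(spectrum) / np.abs(spectrum).max()             # amplitude spectrum
phase = np.angle(spectrum) / np.pi                          # phase spectrum
amp_img = np.tile(amp, (64, 1))                             # broadcast to image planes
phase_img = np.tile(phase, (64, 1))

image = np.stack([gaf, amp_img, phase_img], axis=-1)        # 3-channel input for ResNet18
print(image.shape)                                          # (64, 64, 3)
```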

Article
Assisting Heart Valve Diseases Diagnosis via Transformer-Based Classification of Heart Sound Signals
Electronics 2023, 12(10), 2221; https://doi.org/10.3390/electronics12102221 - 13 May 2023
Cited by 1 | Viewed by 625
Abstract
Background: In computer-aided medical diagnosis or prognosis, the automatic classification of heart valve diseases based on heart sound signals is of great importance, since the heart sound signal contains a wealth of information that can reflect the heart status. Traditional binary classification algorithms (normal and abnormal) currently cannot comprehensively assess heart valve diseases based on analyzing various heart sounds. The differences between heart sound signals are relatively subtle, but the reflected heart conditions differ significantly. Consequently, from a clinical point of view, it is of utmost importance to assist in the diagnosis of heart valve disease through the multi-class classification of heart sound signals. Methods: We utilized a Transformer model for the multi-class classification of heart sound signals, distinguishing four types of abnormal heart sounds from the normal type. Results: Under both 5-fold and 10-fold cross-validation strategies, the method performed strongly; in 5-fold cross-validation, it achieved a highest accuracy of 98.74% and a mean AUC of 0.99. Furthermore, the classification accuracy for Aortic Stenosis, Mitral Regurgitation, Mitral Stenosis, Mitral Valve Prolapse, and normal heart sound signals is 98.72%, 98.50%, 98.30%, 98.56%, and 99.61%, respectively. In 10-fold cross-validation, our model obtained the highest accuracy, sensitivity, specificity, precision, and F1 score, all at 100%. Conclusion: The results indicate that the framework can precisely classify five classes of heart sound signals. Our method provides an effective tool for the ancillary detection of heart valve diseases in the clinical setting. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Review
A Review of Plant Disease Detection Systems for Farming Applications
Appl. Sci. 2023, 13(10), 5982; https://doi.org/10.3390/app13105982 - 12 May 2023
Viewed by 1509
Abstract
The globe and more particularly the economically developed regions of the world are currently in the era of the Fourth Industrial Revolution (4IR). Conversely, the economically developing regions in the world (and more particularly the African continent) have not yet even fully passed through the Third Industrial Revolution (3IR) wave, and Africa’s economy is still heavily dependent on the agricultural field. On the other hand, the state of global food insecurity is worsening on an annual basis thanks to the exponential growth in the global human population, which continuously heightens the food demand in both quantity and quality. This justifies the significance of the focus on digitizing agricultural practices to improve the farm yield to meet the steep food demand and stabilize the economies of the African continent and countries such as India that are dependent on the agricultural sector to some extent. Technological advances in precision agriculture are already improving farm yields, although several opportunities for further improvement still exist. This study evaluated plant disease detection models (in particular, those over the past two decades) while aiming to gauge the status of the research in this area and identify the opportunities for further research. This study realized that little literature has discussed the real-time monitoring of the onset signs of diseases before they spread throughout the whole plant. There was also substantially less focus on real-time mitigation measures such as actuation operations, spraying pesticides, spraying fertilizers, etc., once a disease was identified. Very little research has focused on the combination of monitoring and phenotyping functions into one model capable of multiple tasks. Hence, this study highlighted a few opportunities for further focus. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Chinese News Text Classification Method via Key Feature Enhancement
Appl. Sci. 2023, 13(9), 5399; https://doi.org/10.3390/app13095399 - 26 Apr 2023
Viewed by 686
Abstract
(1) Background: Chinese news text is a popular form of media communication, which can be seen everywhere in China. Chinese news text classification is an important direction in natural language processing (NLP). How to use high-quality text classification technology to help humans efficiently organize and manage the massive amount of web news is an urgent problem to be solved. Existing deep learning methods rely on a large-scale tagged corpus for news text classification, and the resulting models are large and poorly interpretable. (2) Methods: To solve the above problems, this paper proposes a Chinese news text classification method based on key feature enhancement named KFE-CNN. It expands the semantic information of key features to enhance the sample data, transforms the text features into zero-one binary vectors, and inputs them into a CNN model for training, thereby improving the interpretability of the model and effectively compressing its size. (3) Results: The experimental results show that our method can significantly improve the overall performance of the model; the average accuracy and F1-score on the THUCNews subset of the public dataset reached 97.84% and 98%, respectively. (4) Conclusions: These results confirm the effectiveness of the KFE-CNN method for the Chinese news text classification task and demonstrate that key feature enhancement can improve classification performance. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Health Status Evaluation of Welding Robots Based on the Evidential Reasoning Rule
Electronics 2023, 12(8), 1755; https://doi.org/10.3390/electronics12081755 - 07 Apr 2023
Viewed by 656
Abstract
Monitoring the health status of welding robots is extremely important for the safe and stable operation of a body-in-white (BIW) welding production line. In actual production, robots degrade slowly, and the large amount of monitoring data obtained contains few effective data that reflect a degradation state, which makes health status evaluation difficult. To realize accurate evaluation of the health status of welding robots, this paper proposes an evaluation method based on the evidential reasoning (ER) rule, which reflects the health status of welding robots using the running-state data monitored in actual engineering together with the qualitative knowledge of experts, thereby making up for the lack of effective data. In the ER rule evaluation model, the covariance matrix adaptation evolution strategy (CMA-ES) algorithm is used to optimize the initial parameters of the evaluation model, which improves the accuracy of health status evaluation. Finally, a BIW welding robot is taken as an example for verification. The results show that the proposed model is able to accurately estimate the health status of the welding robot from the monitored degradation data. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Clustering of Monolingual Embedding Spaces
Digital 2023, 3(1), 48-66; https://doi.org/10.3390/digital3010004 - 23 Feb 2023
Viewed by 905
Abstract
Suboptimal performance of cross-lingual word embeddings for distant and low-resource languages calls into question the isomorphic assumption integral to the mapping-based methods of obtaining such embeddings. This paper investigates the comparative impact of typological relationship and corpus size on the isomorphism between monolingual embedding spaces. To that end, two clustering algorithms were applied to three sets of pairwise degrees of isomorphisms. It is also the goal of the paper to determine the combination of the isomorphism measure and clustering algorithm that best captures the typological relationship among the chosen set of languages. Of the three measures investigated, Relational Similarity seemed to capture best the typological information of the languages encoded in their respective embedding spaces. These language clusters can help us identify, without any pre-existing knowledge about the real-world linguistic relationships shared among a group of languages, the related higher-resource languages of low-resource languages. The presence of such languages in the cross-lingual embedding space can help improve the performance of low-resource languages in a cross-lingual embedding space. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
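
One simple way to realize the paper's pipeline is to compute a pairwise isomorphism score between embedding spaces and feed the resulting distance matrix to a clustering algorithm. The sketch below uses a correlation-of-similarity-matrices proxy for relational similarity and toy embedding spaces; it is not the authors' implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def relational_similarity(E1, E2):
    """Correlation between the two spaces' within-space cosine-similarity values
    over aligned word samples (a simple proxy for the paper's measure)."""
    def sims(E):
        E = E / np.linalg.norm(E, axis=1, keepdims=True)
        S = E @ E.T
        return S[np.triu_indices_from(S, k=1)]
    return np.corrcoef(sims(E1), sims(E2))[0, 1]

# Toy monolingual spaces for four hypothetical languages (100 aligned words, dim 50).
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 50))
spaces = {"lang_a": base + 0.1 * rng.normal(size=base.shape),
          "lang_b": base + 0.2 * rng.normal(size=base.shape),
          "lang_c": rng.normal(size=base.shape),
          "lang_d": rng.normal(size=base.shape)}

names = list(spaces)
dist = np.zeros((4, 4))
for i in range(4):
    for j in range(i + 1, 4):
        d = 1 - relational_similarity(spaces[names[i]], spaces[names[j]])
        dist[i, j] = dist[j, i] = d

labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(dict(zip(names, labels)))   # languages grouped by embedding-space isomorphism
```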

Article
Learning and Compressing: Low-Rank Matrix Factorization for Deep Neural Network Compression
Appl. Sci. 2023, 13(4), 2704; https://doi.org/10.3390/app13042704 - 20 Feb 2023
Viewed by 1682
Abstract
Recently, the deep neural network (DNN) has become one of the most advanced and powerful methods used in classification tasks. However, the cost of DNN models is sometimes considerable due to the huge sets of parameters. Therefore, it is necessary to compress these models in order to reduce the parameters in weight matrices and decrease computational consumption, while maintaining the same level of accuracy. In this paper, in order to deal with the compression problem, we first combine the loss function and the compression cost function into a joint function, and optimize it as an optimization framework. Then we combine the CUR decomposition method with this joint optimization framework to obtain the low-rank approximation matrices. Finally, we narrow the gap between the weight matrices and the low-rank approximations to compress the DNN models on the image classification task. In this algorithm, we not only solve the optimal ranks by enumeration, but also obtain the compression result with low-rank characteristics iteratively. Experiments were carried out on three public datasets under classification tasks. Comparisons with baselines and current state-of-the-art results can conclude that our proposed low-rank joint optimization compression algorithm can achieve higher accuracy and compression ratios. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
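
The paper couples CUR decomposition with a joint loss, but the parameter-count arithmetic behind low-rank compression is the same as for a plain truncated SVD of a dense layer, sketched here with an arbitrary weight matrix and rank:

```python
import numpy as np

# Compress one dense layer W (out x in) into two thin factors using a rank-r
# truncated SVD; the paper's method instead uses CUR inside a joint loss, but
# the parameter-count arithmetic is the same.
rng = np.random.default_rng(0)
W = rng.normal(size=(512, 1024))
r = 64

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]          # (512, r)
B = Vt[:r, :]                 # (r, 1024)

x = rng.normal(size=1024)
err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
orig, comp = W.size, A.size + B.size
print(f"relative output error: {err:.3f}")
print(f"parameters: {orig} -> {comp} ({comp / orig:.1%} of original)")
```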

Article
A Multi-Channel Contrastive Learning Network Based Intrusion Detection Method
Electronics 2023, 12(4), 949; https://doi.org/10.3390/electronics12040949 - 14 Feb 2023
Cited by 1 | Viewed by 937
Abstract
Network intrusion data are characterized by high feature dimensionality, extreme category imbalance, and complex nonlinear relationships between features and categories. The actual detection accuracy of existing supervised intrusion-detection models performs poorly. To address this problem, this paper proposes a multi-channel contrastive learning network-based intrusion-detection method (MCLDM), which combines feature learning in the multi-channel supervised contrastive learning stage and feature extraction in the multi-channel unsupervised contrastive learning stage to train an effective intrusion-detection model. The objective is to research whether feature enrichment and the use of contrastive learning for specific classes of network intrusion data can improve the accuracy of the model. The model is based on an autoencoder to achieve feature reconstruction with supervised contrastive learning and for implementing multi-channel data reconstruction. In the next stage of unsupervised contrastive learning, the extraction of features is implemented using triplet convolutional neural networks (TCNN) to achieve the classification of intrusion data. Through experimental analysis, the multichannel contrastive learning network-based intrusion-detection method achieves 98.43% accuracy in dataset CICIDS17 and 93.94% accuracy in dataset KDDCUP99. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
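
The unsupervised stage relies on triplet-style contrastive training; a minimal numpy version of the standard triplet margin loss, with toy embeddings standing in for TCNN outputs, looks like this:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors (batch x dim)."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

# Toy embeddings standing in for TCNN outputs on intrusion records: anchors and
# positives share a traffic class, negatives come from a different class.
rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
positive = anchor + 0.1 * rng.normal(size=(8, 16))     # same-class neighbours
negative = rng.normal(size=(8, 16)) + 3.0              # different class, far away

print(f"loss on easy triplets: {triplet_loss(anchor, positive, negative):.3f}")
print(f"loss on hard triplets: {triplet_loss(anchor, positive, anchor + 0.2):.3f}")
```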

Article
Symbiotic Combination of a Bayesian Network and Fuzzy Logic to Quantify the QoS in a VANET: Application in Logistic 4.0
Computers 2023, 12(2), 40; https://doi.org/10.3390/computers12020040 - 14 Feb 2023
Cited by 2 | Viewed by 1030
Abstract
Intelligent transportation systems use new technologies to improve road safety. In these systems, vehicles are equipped with wireless communication systems called on-board units (OBUs) so that they can communicate with each other; this type of wireless network is known as a vehicular ad hoc network (VANET). The primary concern in a VANET is quality of service (QoS), because even a small problem in the services can cause severe harm to both human lives and the economy. From this perspective, this article makes a contribution within the framework of a new conceptual project called the Smart Digital Logistic Services Provider (Smart DLSP), which is intended to give freight vehicles more intelligence in the service of logistics on a global scale. The article proposes a model that combines two approaches, a Bayesian network and fuzzy logic, for calculating the QoS in a VANET as a function of multiple criteria, and provides a database that helps determine the origin of the risk of degrading the QoS in the network. The outcome of this approach was employed in an event tree analysis to assess the impact of the system's security mechanisms. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
An Improved Algorithm for Insulator and Defect Detection Based on YOLOv4
Electronics 2023, 12(4), 933; https://doi.org/10.3390/electronics12040933 - 13 Feb 2023
Cited by 1 | Viewed by 920
Abstract
To further improve the accuracy and speed of UAV inspection of transmission line insulator defects, this paper proposes an insulator detection and defect identification algorithm based on YOLOv4, which is called DSMH-YOLOv4. In the feature extraction network of the YOLOv4 model, the improved algorithm improves the residual edges of the residual structure based on feature reuse and designs the backbone network D-CSPDarknet53, which greatly reduces the number of parameters and computation of the model. The SA-Net (Shuffle Attention Neural Networks) attention model is embedded in the feature fusion network to strengthen the attention of target features and improve the weight of the target. Multi-head output is added to the output layer to improve the ability of the model to recognize the small target of insulator damage. The experimental results show that the number of parameters of the improved algorithm model is only 25.98% of that of the original model, and the mAP (mean Average Precision) of the insulator and defect is increased from 92.44% to 96.14%, which provides an effective way for the implementation of edge end algorithm deployment. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Single Image Reflection Removal Based on Residual Attention Mechanism
Appl. Sci. 2023, 13(3), 1618; https://doi.org/10.3390/app13031618 - 27 Jan 2023
Cited by 1 | Viewed by 1394
Abstract
Affected by shooting angle and light intensity, shooting through transparent media may cause light reflections in an image and influence picture quality, which has a negative effect on the research of computer vision tasks. In this paper, we propose a Residual Attention Based Reflection Removal Network (RABRRN) to tackle the issue of single image reflection removal. We hold that reflection removal is essentially an image separation problem sensitive to both spatial and channel features. Therefore, we integrate spatial attention and channel attention into the model to enhance spatial and channel feature representation. For a more feasible solution to solve the problem of gradient disappearance in the iterative training of deep neural networks, the attention module is combined with a residual network to design a residual attention module so that the performance of reflection removal can be ameliorated. In addition, we establish a reflection image dataset named the SCAU Reflection Image Dataset (SCAU-RID), providing sufficient real training data. The experimental results show that the proposed method achieves a PSNR of 23.787 dB and an SSIM value of 0.885 from four benchmark datasets. Compared with the other most advanced methods, our method has only 18.524M parameters, but it obtains the best results from test datasets. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Research on Wickerwork Patterns Creative Design and Development Based on Style Transfer Technology
Appl. Sci. 2023, 13(3), 1553; https://doi.org/10.3390/app13031553 - 25 Jan 2023
Viewed by 919
Abstract
Traditional craftsmanship and culture are facing a transformation driven by modern science and technology, and the cultural industry is gradually stepping into the digital era, in which the sustainable development of intangible cultural heritage can be realized with the help of digital technology. To innovatively generate wickerwork pattern design schemes that meet users' preferences, this study proposes a design method for wickerwork patterns based on a style transfer algorithm. First, an image recognition experiment using a residual network (ResNet) based on the convolutional neural network is applied to the Funan wickerwork patterns to establish an image recognition model. The experimental results show that ResNet50 achieves an optimal recognition rate of 93.37% on the entire dataset of pattern design images, where the recognition rate is 89.47% for modern patterns, 97.14% for traditional patterns, 95.95% for wickerwork patterns, and 90.91% for personality patterns. Second, Cycle-Consistent Adversarial Networks (CycleGAN) are used to build design scheme generation models for the Funan wickerwork patterns; CycleGAN can automatically and innovatively generate pattern design schemes that meet certain style characteristics. Finally, the designer uses the creative images as a source of inspiration and participates in the detailed adjustment of the generated images to design wickerwork patterns with various stylistic features. The proposed method explores the application of AI technology in wickerwork pattern development and provides more comprehensive and rich new material for the creation of wickerwork patterns, thus contributing to the sustainable development and innovation of traditional Funan wickerwork culture. More broadly, this digital technology can empower the inheritance and development of other intangible cultural heritage. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Trunk Borer Identification Based on Convolutional Neural Networks
Appl. Sci. 2023, 13(2), 863; https://doi.org/10.3390/app13020863 - 08 Jan 2023
Cited by 1 | Viewed by 890
Abstract
The trunk borer is a great danger to forests because of its strong concealment, long lag and great destructiveness. In order to improve the early monitoring ability of trunk borers, the representative Agrilus planipennis Fairmaire was selected as the research object. The convolutional neural network named TrunkNet was designed to identify the activity sounds of Agrilus planipennis Fairmaire larvae. The activity sounds were recorded as vibration signals in audio form. The detector was used to collect the activity sounds of Agrilus planipennis Fairmaire larvae in the wood segments and some typical outdoor noise. The vibration signal pulse duration is short, random and high energy. TrunkNet was designed to train and identify vibration signals of Agrilus planipennis Fairmaire. Over the course of the experiment, the test accuracy of TrunkNet was 96.89%, while MobileNet_V2, ResNet18 and VGGish showed 84.27%, 79.37% and 70.85% accuracy, respectively. TrunkNet based on the convolutional neural network can provide technical support for the automatic monitoring and early warning of the stealthy tree trunk borers. The work of this study is limited to a single pest. The experiment will further focus on the applicability of the network to other pests in the future. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
An IoT-Based Deep Learning Framework for Real-Time Detection of COVID-19 through Chest X-ray Images
Computers 2023, 12(1), 8; https://doi.org/10.3390/computers12010008 - 28 Dec 2022
Viewed by 1225
Abstract
Over the next decade, the Internet of Things (IoT) and the high-speed 5G network will be crucial in enabling remote access to the healthcare system for easy and fast diagnosis. In this paper, an IoT-based deep learning computer-aided diagnosis (CAD) framework is proposed for online and real-time COVID-19 identification. The proposed work first fine-tunes five state-of-the-art deep CNN models, namely Xception, ResNet50, DenseNet201, MobileNet, and VGG19, and then combines them into a majority-voting deep ensemble CNN (DECNN) model in order to detect COVID-19 accurately. The findings demonstrate that the suggested framework, with a test accuracy of 98%, outperforms other relevant state-of-the-art methodologies in terms of overall performance. The proposed CAD framework has the potential to serve as a decision support system for general clinicians and rural health workers in order to diagnose COVID-19 at an early stage. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
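
The ensemble step itself is simple hard voting over the five fine-tuned networks; a small sketch with placeholder per-model predictions shows the mechanics:

```python
import numpy as np

# Hard majority voting over per-model class predictions (0 = normal, 1 = COVID-19).
# The arrays stand in for outputs of the fine-tuned Xception, ResNet50, DenseNet201,
# MobileNet, and VGG19 models; the values here are illustrative only.
votes = np.array([
    [1, 0, 1, 1, 0],   # Xception
    [1, 0, 0, 1, 0],   # ResNet50
    [1, 1, 1, 1, 0],   # DenseNet201
    [0, 0, 1, 1, 0],   # MobileNet
    [1, 0, 1, 0, 0],   # VGG19
])

# For each image (column), pick the label predicted by the most models.
ensemble = np.array([np.bincount(col, minlength=2).argmax() for col in votes.T])
print("ensemble predictions:", ensemble)   # -> [1 0 1 1 0]
```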

Article
Framework of Meta-Heuristic Variable Length Searching for Feature Selection in High-Dimensional Data
Computers 2023, 12(1), 7; https://doi.org/10.3390/computers12010007 - 27 Dec 2022
Cited by 1 | Viewed by 1220
Abstract
Feature selection in high-dimensional space is a combinatorial optimization problem with an NP-hard nature. Meta-heuristic searching with information theory-based criteria embedded in the fitness function for selecting the relevant features is widely used in current feature selection algorithms. However, the increase in the dimension of the solution space leads to a high computational cost and convergence risk. In addition, sub-optimality might occur due to the assumption of a certain length for the optimal number of features. Alternatively, variable-length searching enables searching within a variable-length solution space, which leads to better optimality and a lower computational load. The literature contains various meta-heuristic algorithms with variable-length searching, all of which enable searching in high-dimensional problems; however, uncertainty about their relative performance exists. In order to fill this gap, this article proposes a novel framework for comparing various variants of variable-length-searching meta-heuristic algorithms in the application of feature selection. For this purpose, we implemented four types of variable-length meta-heuristic searching algorithms, namely VLBHO-Fitness, VLBHO-Position, variable-length particle swarm optimization (VLPSO) and genetic variable length (GAVL), and we compared them in terms of classification metrics. The evaluation showed the overall superiority of VLBHO over the other algorithms in terms of accomplishing lower fitness values when optimizing mathematical functions of the variable-length type. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Multi-Objective Antenna Design Based on BP Neural Network Surrogate Model Optimized by Improved Sparrow Search Algorithm
Appl. Sci. 2022, 12(24), 12543; https://doi.org/10.3390/app122412543 - 07 Dec 2022
Cited by 2 | Viewed by 1014
Abstract
To solve the time-consuming, laborious, and inefficient problems of traditional methods using classical optimization algorithms combined with electromagnetic simulation software to design antennas, an efficient design method of the multi-objective antenna is proposed based on the multi-strategy improved sparrow search algorithm (MISSA) to optimize a BP neural network. Three strategies, namely Bernoulli chaotic mapping, inertial weights, and t-distribution, are introduced into the sparrow search algorithm to improve its convergent speed and accuracy. Using the Bernoulli chaotic map to process the population of sparrows to enhance its population richness, the weight is introduced into the updated position of the sparrow to improve its search ability. The adaptive t-distribution is used to interfere and mutate some individual sparrows to make the algorithm reach the optimal solution more quickly. The initial parameters of the BP neural network were optimized using the improved sparrow search algorithm to obtain the optimized MISSA-BP antenna surrogate model. This model is combined with multi-objective particle swarm optimization (MOPSO) to solve the design problem of the multi-objective antenna and verified by a triple-frequency antenna. The simulated results show that this method can predict the performance of the antennas more accurately and can also design the multi-objective antenna that meets the requirements. The practicality of the method is further verified by producing a real antenna. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
From Ranking Search Results to Managing Investment Portfolios: Exploring Rank-Based Approaches for Portfolio Stock Selection
Electronics 2022, 11(23), 4019; https://doi.org/10.3390/electronics11234019 - 04 Dec 2022
Viewed by 1135
Abstract
The task of investing in financial markets to make profits and grow one’s wealth is not a straightforward task. Typically, financial domain experts, such as investment advisers and financial analysts, conduct extensive research on a target financial market to decide which stock symbols are worthy of investment. The research process used by those experts generally involves collecting a large volume of data (e.g., financial reports, announcements, news, etc.), performing several analytics tasks, and making inferences to reach investment decisions. The rapid increase in the volume of data generated for stock market companies makes performing thorough analytics tasks impractical given the limited time available. Fortunately, recent advancements in computational intelligence methods have been adopted in various sectors, providing opportunities to exploit such methods to address investment tasks efficiently and effectively. This paper aims to explore rank-based approaches, mainly machine-learning based, to address the task of selecting stock symbols to construct long-term investment portfolios. Relying on these approaches, we propose a feature set that contains various statistics indicating the performance of stock market companies that can be used to train several ranking models. For evaluation purposes, we selected four years of Saudi Stock Exchange data and applied our proposed framework to them in a simulated investment setting. Our results show that rank-based approaches have the potential to be adopted to construct investment portfolios, generating substantial returns and outperforming the gains produced by the Saudi Stock Market index for the tested period. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Exploration of the Impact of Cybersecurity Awareness on Small and Medium Enterprises (SMEs) in Wales Using Intelligent Software to Combat Cybercrime
Computers 2022, 11(12), 174; https://doi.org/10.3390/computers11120174 - 03 Dec 2022
Cited by 2 | Viewed by 2748
Abstract
Intelligent software packages have grown rapidly in popularity among large businesses in both developed and developing countries, owing to their capability to detect and prevent cybercrime. However, small and medium enterprises (SMEs) show prominent gaps in this adoption because of their limited awareness and knowledge of cyber security and the security mindset, and because running their businesses takes priority over adopting the right technology to protect their data. This study explored how SMEs in Wales handle cybercrime and manage their daily online activities to keep their data safe and tackle cyber threats. The sample consisted of 122 Welsh SME respondents whose data were collected through a survey questionnaire. The results showed large gaps in awareness and knowledge of intelligent software, in particular the use of machine learning integrated within their technology to track and combat complex cybercrime that standard cyber security packages might miss. The findings showed that only 30% of the sampled SMEs understood the terminology of cyber security, and awareness of machine learning and its algorithms within their cyber security software packages was also limited. The study further highlighted that Welsh SMEs were unaware of what this software could do to protect their data. The findings also showed that factors such as education and SME size influenced the choice of software packages, whereas age, gender, role, and being a decision maker had no impact on these choices. Finally, the study reports on various SME strategies for understanding risks and planning future contingencies to keep data safe and secure. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
An Improved Binary Owl Feature Selection in the Context of Android Malware Detection
Computers 2022, 11(12), 173; https://doi.org/10.3390/computers11120173 - 30 Nov 2022
Cited by 1 | Viewed by 1191
Abstract
Recently, the proliferation of smartphones, tablets, and smartwatches has raised security concerns among researchers. Android is the dominant mobile operating system, and the open-source nature of the platform makes it an attractive target for malware attacks that result in both data exfiltration and property loss. To handle the security issues of mobile malware attacks, researchers have proposed novel algorithms and detection approaches, but there is no standard dataset that allows a fair evaluation: most research datasets were collected from the Play Store or drawn randomly from public datasets such as DREBIN. In this paper, a wrapper-based approach for Android malware detection is proposed. The wrapper consists of a newly modified binary Owl optimizer and a random forest classifier, and it was evaluated using the standard data splits of the DREBIN dataset in terms of accuracy, precision, recall, false-positive rate, and F1-score. The proposed approach reaches 98.84% accuracy and 86.34% F1-score, and it outperforms several related approaches from the literature in terms of accuracy, precision, and recall. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
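A wrapper-based selector of this kind can be sketched generically: continuous search positions are binarized with a sigmoid transfer function, and each candidate feature mask is scored by a random forest on held-out data. The modified binary Owl update rules are not reproduced here; the toy data, transfer function and fitness weighting below are assumptions for illustration.

```python
# Generic sketch of a wrapper evaluation: continuous search positions are
# binarised with a sigmoid transfer function and each feature mask is scored by
# a random forest. The modified binary Owl operators are not reproduced; the
# toy data and the fitness weighting are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def binarise(position):
    """Sigmoid transfer function: continuous position -> 0/1 feature mask."""
    prob = 1.0 / (1.0 + np.exp(-position))
    return (rng.random(position.shape) < prob).astype(int)

def wrapper_fitness(mask, X_tr, X_te, y_tr, y_te, alpha=0.99):
    """Weighted sum of classification error and selected-feature ratio."""
    if mask.sum() == 0:
        return 1.0                                   # worst possible fitness
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr[:, mask == 1], y_tr)
    err = 1.0 - f1_score(y_te, clf.predict(X_te[:, mask == 1]), average="macro")
    return alpha * err + (1 - alpha) * mask.mean()

# Toy binary features standing in for DREBIN-style permission/API indicators.
X = rng.integers(0, 2, size=(500, 40)).astype(float)
y = (X[:, 0] + X[:, 3] + rng.random(500) > 1.5).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

positions = rng.normal(size=(15, X.shape[1]))        # 15 search agents
masks = [binarise(p) for p in positions]
scores = [wrapper_fitness(m, X_tr, X_te, y_tr, y_te) for m in masks]
best = masks[int(np.argmin(scores))]
print("selected", int(best.sum()), "of", X.shape[1], "features")
```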

Article
Learning-Based Matched Representation System for Job Recommendation
Computers 2022, 11(11), 161; https://doi.org/10.3390/computers11110161 - 14 Nov 2022
Cited by 4 | Viewed by 2063
Abstract
Job recommender systems (JRS) are a subclass of information filtering systems that aim to help job seekers identify openings matching their skills and experience, preventing them from getting lost in the vast amount of information available on job boards that aggregate postings from many sources, such as LinkedIn or Indeed. A variety of strategies have been implemented as part of JRS, but most fail to recommend vacancies that properly fit job seekers' profiles when dealing with more than one job offer, because they treat skills as passive entities attached to the job description that merely need to be matched to find the best recommendation. This paper provides a recommender system that assists job seekers in finding suitable jobs based on their resumes. The proposed system recommends the top-n jobs by analyzing and measuring the similarity between the job seeker's skills and the explicit features of job listings using content-based filtering. First-hand information was gathered by scraping job descriptions from Indeed for major cities in Saudi Arabia (Dammam, Jeddah, and Riyadh). The top skills required in job offers were then analyzed, and recommendations were made by matching skills from resumes to posted jobs. To quantify recommendation success and error rates, we compared the system's results to reality using decision support measures. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
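Content-based matching between resume skills and job descriptions, as described above, can be illustrated with TF-IDF vectors and cosine similarity; the job titles, skill strings and top-n ranking below are illustrative stand-ins rather than the paper's scraped Indeed data.

```python
# Minimal sketch of content-based matching between a resume's skills and job
# descriptions using TF-IDF and cosine similarity (the paper's full pipeline,
# including the Indeed scraping step, is not reproduced).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

jobs = {
    "Data Engineer (Riyadh)": "python sql spark airflow etl data pipelines",
    "ML Engineer (Jeddah)":   "python pytorch machine learning deployment docker",
    "Frontend Dev (Dammam)":  "javascript react css html accessibility",
}
resume_skills = "python machine learning sql model deployment"

vectoriser = TfidfVectorizer()
job_matrix = vectoriser.fit_transform(jobs.values())
resume_vec = vectoriser.transform([resume_skills])

scores = cosine_similarity(resume_vec, job_matrix).ravel()
top_n = sorted(zip(jobs.keys(), scores), key=lambda kv: kv[1], reverse=True)
for title, score in top_n:
    print(f"{score:.2f}  {title}")
```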

Article
FFSCN: Frame Fusion Spectrum Center Net for Carrier Signal Detection
Electronics 2022, 11(20), 3349; https://doi.org/10.3390/electronics11203349 - 17 Oct 2022
Viewed by 905
Abstract
Carrier signal detection is a complicated and essential task in many domains because it demands a quick response to the presence of several carriers in the wideband while also precisely predicting each carrier signal's frequency center and bandwidth, for both single-carrier and multi-carrier modulation signals. Multi-carrier modulation signals, such as FSK and OFDM, can be incorrectly recognized as several single-carrier signals by the spectrum center net (SCN) or FCN-based methods. This paper designs a deep convolutional neural network (CNN) framework for multi-carrier signal detection, called frame fusion spectrum center net (FFSCN), which fuses the features of multiple consecutive frames of the broadband power spectrum and estimates the parameters of each single-carrier or multi-carrier modulation signal in the broadband; it includes the variants FFSCN-R, FFSCN-MN, and FFSCN-FMN. FFSCN comprises three basic parts: a deep CNN backbone, a feature pyramid network (FPN) neck, and a regression network (RegNet) head. FFSCN-R and FFSCN-MN fuse the FPN output features and use residual (ResNet) and MobileNetV3 backbones, respectively, with FFSCN-MN requiring less inference time. To further reduce the complexity of FFSCN-MN, FFSCN-FMN modifies the MobileNet blocks and fuses features at each block of the backbone. The multiple consecutive frames of the broadband power spectrum not only preserve the high frequency resolution of the broadband but also add features of signal changes in the time dimension. Extensive experimental results demonstrate that the proposed FFSCN effectively detects multi-carrier and single-carrier modulation signals in the broadband power spectrum and outperforms SCN in accuracy and efficiency. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
User Analytics in Online Social Networks: Evolving from Social Instances to Social Individuals
Computers 2022, 11(10), 149; https://doi.org/10.3390/computers11100149 - 07 Oct 2022
Viewed by 1286
Abstract
In our era of big data and information overload, content consumers utilise a variety of sources to meet their data and informational needs for the purpose of acquiring an in-depth perspective on a subject, as each source is focused on specific aspects. The same principle applies to the online social networks (OSNs), as usually, the end-users maintain accounts in multiple OSNs so as to acquire a complete social networking experience, since each OSN has a different philosophy in terms of its services, content, and interaction. Contrary to the current literature, we examine the users’ behavioural and disseminated content patterns under the assumption that accounts maintained by users in multiple OSNs are not regarded as distinct accounts, but rather as the same individual with multiple social instances. Our social analysis, enriched with information about the users’ social influences, revealed behavioural patterns depending on the examined OSN, its social entities, and the users’ exerted influence. Finally, we ranked the examined OSNs based on three types of social characteristics, revealing correlations between the users’ behavioural and content patterns, social influences, social entities, and the OSNs themselves. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
A Fine-Grained Modeling Approach for Systolic Array-Based Accelerator
Electronics 2022, 11(18), 2928; https://doi.org/10.3390/electronics11182928 - 15 Sep 2022
Cited by 1 | Viewed by 1029
Abstract
The systolic array provides extremely high efficiency for matrix multiplication and is one of the mainstream architectures of today's deep learning accelerators. To develop efficient accelerators, designers usually employ simulators to make design trade-offs. However, current simulators suffer from coarse-grained modeling methods and idealized assumptions, which limit their ability to describe the structural characteristics of systolic arrays, and they do not support exploration of the microarchitecture. This paper presents FG-SIM, a fine-grained modeling approach for evaluating systolic array accelerators using an event-driven method. FG-SIM obtains accurate results and provides the best mapping scheme for different workloads thanks to its fine-grained modeling technique and its rejection of idealized assumptions. Experimental results show that FG-SIM plays a significant role in design trade-offs and outperforms state-of-the-art simulators, with an accuracy of more than 95%. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
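To illustrate the kind of fine-grained, cycle-level behaviour such a simulator has to capture, the sketch below models an output-stationary systolic array computing C = A·B, where the operands for PE(i, j) arrive after a skew of i + j cycles. FG-SIM's event-driven model is far more detailed; this is only a toy illustration.

```python
# Toy cycle-level model of an output-stationary systolic array computing
# C = A @ B; FG-SIM's event-driven model is far more detailed, so this only
# illustrates the kind of fine-grained timing behaviour a simulator captures.
import numpy as np

def systolic_matmul(A, B):
    """PE(i, j) performs one multiply-accumulate per cycle once its operand
    pair A[i, k], B[k, j] arrives after a skew of i + j cycles."""
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N))
    total_cycles = M + N + K - 2
    for t in range(total_cycles):
        for i in range(M):
            for j in range(N):
                k = t - i - j                  # operand index arriving now
                if 0 <= k < K:
                    C[i, j] += A[i, k] * B[k, j]
    return C, total_cycles

rng = np.random.default_rng(3)
A, B = rng.random((4, 6)), rng.random((6, 5))
C, cycles = systolic_matmul(A, B)
assert np.allclose(C, A @ B)
print("result correct, total cycles:", cycles)     # 4 + 5 + 6 - 2 = 13
```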

Review
Impact of the Internet of Things on Psychology: A Survey
Smart Cities 2022, 5(3), 1193-1207; https://doi.org/10.3390/smartcities5030060 - 14 Sep 2022
Cited by 2 | Viewed by 1958
Abstract
The Internet of things (IoT) continues to “smartify” human life while influencing areas such as industry, education, economy, business, medicine, and psychology. The introduction of the IoT in psychology has resulted in various intelligent systems that aim to help people—particularly those with special needs, such as the elderly, disabled, and children. This paper proposes a framework to investigate the role and impact of the IoT in psychology from two perspectives: (1) the goals of using the IoT in this area, and (2) the computational technologies used towards this purpose. To this end, existing studies are reviewed from these viewpoints. The results show that the goals of using the IoT can be identified as morale improvement, diagnosis, and monitoring. Moreover, the main technical contributions of the related papers are system design, data mining, or hardware invention and signal processing. Subsequently, unique features of state-of-the-art research in this area are discussed, including the type and diversity of sensors, crowdsourcing, context awareness, fog and cloud platforms, and inference. Our concluding remarks indicate that this area is in its infancy and, consequently, the next steps of this research are discussed. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Superpixel Image Classification with Graph Convolutional Neural Networks Based on Learnable Positional Embedding
Appl. Sci. 2022, 12(18), 9176; https://doi.org/10.3390/app12189176 - 13 Sep 2022
Cited by 4 | Viewed by 1635
Abstract
Graph convolutional neural networks (GCNNs) have been successfully applied to a wide range of problems, including low-dimensional Euclidean structural domains representing images, videos, and speech, and high-dimensional non-Euclidean domains such as social networks and chemical molecular structures. However, in computer vision, existing GCNNs are not provided with positional information to distinguish between graphs of new structures, so their performance on image classification tasks represented by arbitrary graphs is significantly poor. In this work, we show how to initialize positional information through a random walk algorithm and continuously learn additional position-embedded information for the various graph structures built over superpixel images, which we choose for efficiency. We call this method the graph convolutional network with learnable positional embedding applied on images (IMGCN-LPE). We apply IMGCN-LPE to three graph convolutional models (the Chebyshev graph convolutional network, graph convolutional network, and graph attention network) to validate its performance on various benchmark image datasets. As a result, although not as impressive as convolutional neural networks, the proposed method outperforms various other conventional convolutional methods and demonstrates its effectiveness on the same tasks in the field of GCNNs. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
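The random-walk initialization of positional information mentioned above can be sketched as follows: each node's encoding collects its k-step return probabilities taken from powers of the random-walk transition matrix. The learnable refinement of these embeddings in IMGCN-LPE is omitted; the path-graph adjacency and number of steps below are illustrative assumptions.

```python
# Sketch of random-walk positional encodings for graph nodes: node i's encoding
# collects the return probabilities diag(RW^k) for k = 1..K, where RW = D^-1 A
# is the random-walk transition matrix. The learnable part of IMGCN-LPE
# (updating these embeddings during training) is omitted here.
import numpy as np

def random_walk_positional_encoding(A, K=8):
    deg = A.sum(axis=1)
    deg[deg == 0] = 1.0                       # avoid division by zero
    RW = A / deg[:, None]                     # row-normalised transition matrix
    pe = np.zeros((A.shape[0], K))
    P = np.eye(A.shape[0])
    for k in range(K):
        P = P @ RW
        pe[:, k] = np.diag(P)                 # k-step return probability
    return pe

# Tiny stand-in for a superpixel adjacency: a 5-node path graph.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
print(random_walk_positional_encoding(A, K=4))
```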

Article
Improved Twin Delayed Deep Deterministic Policy Gradient Algorithm Based Real-Time Trajectory Planning for Parafoil under Complicated Constraints
Appl. Sci. 2022, 12(16), 8189; https://doi.org/10.3390/app12168189 - 16 Aug 2022
Cited by 2 | Viewed by 1207
Abstract
Parafoil delivery systems have been widely used in recent years for military and civilian airdrop supply and for aircraft recovery. However, since the altitude of an unpowered parafoil decreases monotonically, the system is limited by its initial flight altitude. Combined with multiple constraints, such as ground obstacle avoidance and flight time, this places stringent demands on the real-time performance of trajectory planning for the parafoil delivery system. To enhance real-time performance, we propose a new parafoil trajectory planning method based on an improved twin delayed deep deterministic policy gradient. In this method, the noise scale is selected dynamically by pre-evaluating the value of each action, which improves the globality and randomness of exploration, especially for actions with low value. Furthermore, unlike traditional numerical algorithms, the deep reinforcement learning method builds the planning model in advance and therefore does not need to recompute the optimal flight trajectory when the parafoil delivery system is launched from different initial positions, which greatly improves real-time performance. Finally, several groups of simulation data show that the proposed trajectory planning approach is feasible and correct. Compared with the traditional twin delayed deep deterministic policy gradient and deep deterministic policy gradient, the landing accuracy and success rate of the proposed method are greatly improved. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Feature Augmentation Based on Pixel-Wise Attention for Rail Defect Detection
Appl. Sci. 2022, 12(16), 8006; https://doi.org/10.3390/app12168006 - 10 Aug 2022
Viewed by 1045
Abstract
Image-based rail defect detection can be conceptually defined as an object detection task in computer vision. However, unlike academic object detection tasks, this practical industrial application suffers from two unique challenges: object ambiguity and insufficient annotations. To overcome these challenges, we introduce a pixel-wise attention mechanism to fully exploit the features of annotated defects, and we develop a feature augmentation framework to tackle the defect detection problem. Pixel-wise attention is computed through a learnable pixel-level similarity between input and support features to obtain augmented features. These augmented features contain co-existing information from input images and multi-class support defects. The final output features are augmented and refined by support features, enabling the model to distinguish between ambiguous defect patterns despite insufficient annotated samples. Experiments on the rail defect dataset demonstrate that feature augmentation helps balance the sensitivity and robustness of the model. On our collected dataset with eight defect classes, our algorithm achieves 11.32% higher mAP@.5 than the original YOLOv5 and 4.27% higher mAP@.5 than Faster R-CNN. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
FocusedDropout for Convolutional Neural Network
Appl. Sci. 2022, 12(15), 7682; https://doi.org/10.3390/app12157682 - 30 Jul 2022
Cited by 4 | Viewed by 972
Abstract
In a convolutional neural network (CNN), dropout does not work well because the dropped information is not entirely obscured in convolutional layers, where features are spatially correlated. Besides randomly discarding regions or channels, many approaches try to overcome this defect by dropping influential units. In this paper, we propose a non-random dropout method named FocusedDropout, which aims to make the network focus more on the target. FocusedDropout uses a simple but effective method to search for target-related features, retains these features, and discards the others, which is the opposite of existing methods. We find that this method can improve network performance by making the network more target-focused. Additionally, increasing the weight decay while using FocusedDropout avoids overfitting and increases accuracy. Experimental results show that, at a slight cost, applying FocusedDropout to 10% of batches produces a clear performance boost over the baselines on multiple classification datasets, including CIFAR10, CIFAR100 and Tiny ImageNet, and that the method generalizes well across different CNN models. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
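A rough sketch of a target-focused dropout of this kind is shown below for PyTorch feature maps: the most active channel per sample is used to locate target-related positions, and features elsewhere are zeroed. The exact FocusedDropout criteria, the 10%-of-batches schedule and the weight-decay coupling are not reproduced; thresholding on the most active channel is an assumption made for illustration.

```python
# Hedged PyTorch sketch of a "target-focused" dropout: the most active channel
# per sample locates target-related spatial positions and features elsewhere
# are zeroed. FocusedDropout's exact criteria, its 10%-of-batches schedule and
# the weight-decay coupling are not reproduced; the threshold is an assumption.
import torch

def focused_dropout_like(x: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """x: CNN feature maps of shape (N, C, H, W)."""
    if not torch.is_grad_enabled():                 # rough stand-in for eval mode
        return x
    n = x.shape[0]
    channel_mean = x.mean(dim=(2, 3))               # (N, C) average activations
    key = channel_mean.argmax(dim=1)                # most active channel per sample
    key_maps = x[torch.arange(n), key]              # (N, H, W)
    max_val = key_maps.amax(dim=(1, 2), keepdim=True)
    mask = (key_maps >= threshold * max_val).float()  # keep the "focused" area
    return x * mask.unsqueeze(1)                    # broadcast over all channels

feats = torch.rand(2, 8, 16, 16, requires_grad=True)
print(focused_dropout_like(feats).shape)            # torch.Size([2, 8, 16, 16])
```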

Article
A Robust Bayesian Optimization Framework for Microwave Circuit Design under Uncertainty
Electronics 2022, 11(14), 2267; https://doi.org/10.3390/electronics11142267 - 20 Jul 2022
Cited by 1 | Viewed by 1302
Abstract
In modern electronics, there are many inevitable uncertainties and variations of design parameters that have a profound effect on the performance of a device. These are, among others, induced by manufacturing tolerances, assembling inaccuracies, material diversities, machining errors, etc. This prompts wide interests in enhanced optimization algorithms that take the effect of these uncertainty sources into account and that are able to find robust designs, i.e., designs that are insensitive to the uncertainties early in the design cycle. In this work, a novel machine learning-based optimization framework that accounts for uncertainty of the design parameters is presented. This is achieved by using a modified version of the expected improvement criterion. Moreover, a data-efficient Bayesian Optimization framework is leveraged to limit the number of simulations required to find a robust design solution. Two suitable application examples validate that the robustness is significantly improved compared to standard design methods. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
IRSDet: Infrared Small-Object Detection Network Based on Sparse-Skip Connection and Guide Maps
Electronics 2022, 11(14), 2154; https://doi.org/10.3390/electronics11142154 - 09 Jul 2022
Cited by 4 | Viewed by 1177
Abstract
Detecting small objects in infrared images remains a challenge because most of them lack shape and texture. In this study, we proposed an infrared small-object detection method to improve the capacity for detecting thermal objects in complex scenarios. First, a sparse-skip connection block is proposed to enhance the response of small infrared objects and suppress the background response. This block is used to construct the detection model backbone. Second, a region attention module is designed to emphasize the features of infrared small objects and suppress background regions. Finally, a batch-averaged biased classification loss function is designed to improve the accuracy of the detection model. The experimental results show that the proposed small-object detection framework significantly increases precision, recall, and F1-score, showing that, compared with the current advanced detection models for small-object detection, the proposed detection framework has better performance in infrared small-object detection under complex backgrounds. The insights gained from this study may provide new ideas for infrared small object detection and tracking. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Comparison of On-Policy Deep Reinforcement Learning A2C with Off-Policy DQN in Irrigation Optimization: A Case Study at a Site in Portugal
Computers 2022, 11(7), 104; https://doi.org/10.3390/computers11070104 - 24 Jun 2022
Cited by 9 | Viewed by 2742
Abstract
Precision irrigation and optimization of water use have become essential factors in agriculture because water is critical for crop growth. Proper management of an irrigation system should enable the farmer to use water efficiently to increase productivity, reduce production costs, and maximize the return on investment. Efficient water application techniques are essential prerequisites for sustainable agricultural development based on the conservation of water resources and preservation of the environment. In a previous work, an off-policy deep reinforcement learning model, Deep Q-Network, was implemented to optimize irrigation, and its performance was tested for a tomato crop at a site in Portugal. In this paper, an on-policy model, Advantage Actor–Critic, is implemented to compare irrigation scheduling with Deep Q-Network for the same tomato crop. The results show that the on-policy Advantage Actor–Critic model reduced water consumption by 20% compared to Deep Q-Network with only a slight change in the net reward. These models can be further developed and applied to other crops with high production in Portugal, such as fruit, cereals, and wine grapes, which also have large water requirements. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Robust Fingerprint Minutiae Extraction and Matching Based on Improved SIFT Features
Appl. Sci. 2022, 12(12), 6122; https://doi.org/10.3390/app12126122 - 16 Jun 2022
Cited by 2 | Viewed by 5402
Abstract
Minutiae feature extraction and matching are two crucial tasks in fingerprint identification and are core components of automated fingerprint recognition (AFR) systems. Such systems first identify and describe the salient minutiae points that give each fingerprint its individuality and differentiate it from others, and then match the relative placement of those minutiae between a candidate fingerprint and previously stored fingerprint templates. In this paper, an automated minutiae extraction and matching framework is presented for identification and verification purposes, in which an adaptive scale-invariant feature transform (SIFT) detector is applied to high-contrast fingerprints preprocessed by denoising, binarization, thinning, dilation, and enhancement to improve the quality of latent fingerprints. As a result, an optimized set of highly reliable salient points discriminating fingerprint minutiae is identified and described accurately and quickly. The SIFT descriptors of the local keypoints in a given fingerprint are then matched with those of the stored templates using a brute-force algorithm, assigning a score to each match based on the Euclidean distance between the SIFT descriptors of the two matched keypoints. Finally, a postprocessing dual-threshold filter is adaptively applied, which can eliminate almost all false matches while discarding very few correct matches (less than 4%). Experimental evaluations on the publicly available low-quality FVC2004 fingerprint datasets demonstrate that the proposed framework delivers comparable or superior performance to several state-of-the-art methods, achieving an average equal error rate (EER) of 2.01%. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
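The matching stage described above (brute-force matching on Euclidean distance followed by a dual-threshold filter) can be sketched with OpenCV as below; the ratio and distance thresholds are illustrative stand-ins for the paper's adaptively chosen values, and the preprocessing pipeline is omitted.

```python
# Sketch of the matching stage with OpenCV: brute-force SIFT matching on
# Euclidean distance followed by a simple dual threshold (Lowe ratio plus an
# absolute distance cap). The paper's adaptive preprocessing and adaptive
# thresholds are not reproduced; the threshold values are assumptions.
import cv2
import numpy as np

def match_fingerprints(img_a: np.ndarray, img_b: np.ndarray,
                       ratio: float = 0.75, max_dist: float = 250.0):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance and m.distance < max_dist:
            good.append(m)                          # passes both thresholds
    return good

# Usage (file paths are placeholders):
# probe = cv2.imread("probe.png", cv2.IMREAD_GRAYSCALE)
# template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
# match_score = len(match_fingerprints(probe, template))
```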

Article
Deep Learning-Based End-to-End Carrier Signal Detection in Broadband Power Spectrum
Electronics 2022, 11(12), 1896; https://doi.org/10.3390/electronics11121896 - 16 Jun 2022
Cited by 1 | Viewed by 1657
Abstract
This paper presents an end-to-end deep convolutional neural network (CNN) model for carrier signal detection in the broadband power spectrum, so-called spectrum center net (SCN). By regarding the broadband power spectrum sequence as a one-dimensional (1D) image and each subcarrier on the broadband as the target object, we can transform the carrier signal detection problem into a semantic segmentation problem on a 1D image. Here, the core task of the carrier signal detection problem turns into the frequency center (FC) and bandwidth (BW) regression. We design the SCN to classify the broadband power spectrum as inputs and extract the features of different length scales by the ResNet backbone. Then, the feature pyramid network (FPN) neck fuses the features and outputs the fusion features. Next, the RegNet head regresses the power spectrum distribution (PSD) prediction for FC and the corresponding BW prediction. Finally, we can achieve the subcarrier targets by applying non-maximum suppressions (NMS). Moreover, we train the SCN on a simulation dataset and validate it on a real satellite broadband power spectrum set. As an improvement of the fully convolutional network-based (FCN-based) method, the proposed method directly outputs the detection results without post-processing. Extensive experimental results demonstrate that the proposed method can effectively detect the subcarrier signal in the broadband power spectrum as well as achieve higher and more robust performance than the deep FCN- and threshold-based methods. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
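The segmentation view taken by SCN, treating the broadband power spectrum as a 1D image and predicting per-bin carrier evidence, can be sketched with a small PyTorch model as below; the ResNet backbone, FPN neck, RegNet head and NMS step of the actual SCN are not reproduced, and the layer sizes are assumptions.

```python
# Minimal PyTorch sketch of the segmentation view: the broadband power spectrum
# is treated as a 1D "image" and the model predicts a per-bin carrier score.
# SCN's ResNet backbone, FPN neck, RegNet head and NMS step are not reproduced;
# layer sizes are assumptions.
import torch
import torch.nn as nn

class TinySpectrumNet(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=1),   # per-bin carrier logit
        )

    def forward(self, spectrum: torch.Tensor) -> torch.Tensor:
        # spectrum: (batch, n_bins) power spectral density values
        return self.body(spectrum.unsqueeze(1)).squeeze(1)

model = TinySpectrumNet()
psd = torch.randn(4, 4096)                  # four simulated broadband spectra
logits = model(psd)
print(logits.shape)                         # torch.Size([4, 4096])
# Contiguous bins with sigmoid(logit) > 0.5 would then be grouped into carriers
# to recover each signal's frequency centre and bandwidth.
```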

Article
MBHAN: Motif-Based Heterogeneous Graph Attention Network
Appl. Sci. 2022, 12(12), 5931; https://doi.org/10.3390/app12125931 - 10 Jun 2022
Cited by 2 | Viewed by 1867
Abstract
Graph neural networks are graph-based deep learning technologies that have attracted significant attention from researchers because of their powerful performance. Heterogeneous graph-based graph neural networks focus on the heterogeneity of the nodes and links in a graph. This is more effective at preserving semantic knowledge when representing data interactions in real-world graph structures. Unfortunately, most heterogeneous graph neural networks tend to transform heterogeneous graphs into homogeneous graphs when using meta-paths for representation learning. This paper therefore presents a novel motif-based hierarchical heterogeneous graph attention network algorithm, MBHAN, that addresses this problem by incorporating a hierarchical dual attention mechanism at the node-level and motif-level. Node-level attention aims to learn the importance between a node and its neighboring nodes within its corresponding motif. Motif-level attention is capable of learning the importance of different motifs in the heterogeneous graph. In view of the different vector space features of different types of nodes in heterogeneous graphs, MBHAN also aggregates the features of different types of nodes, so that they can jointly participate in downstream tasks after passing through segregated independent shallow neural networks. MBHAN’s superior network representation learning capability has been validated by extensive experiments on two real-world datasets. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Review
A Review of Neural Network-Based Emulation of Guitar Amplifiers
Appl. Sci. 2022, 12(12), 5894; https://doi.org/10.3390/app12125894 - 09 Jun 2022
Cited by 1 | Viewed by 2636
Abstract
Vacuum tube amplifiers exhibit sonic characteristics frequently coveted by musicians, often due to the distinct nonlinearities of their circuits, and accurately modelling such effects can be a challenging task. The recent rise of machine learning methods has led to the ubiquity of neural networks in all fields of study, including virtual analog modelling, and has led to a variety of architectures tailored to this task. This article provides an overview of the current state of research in the neural emulation of analog distortion circuits, first presenting preceding methods in the field and then giving a complete review of the deep learning landscape that has appeared in recent years, detailing each subclass of available architectures, in order to highlight possible future avenues of work in this field. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
An Open Relation Extraction System for Web Text Information
Appl. Sci. 2022, 12(11), 5718; https://doi.org/10.3390/app12115718 - 04 Jun 2022
Viewed by 1211
Abstract
Web texts typically undergo the open-ended growth of new relations. Traditional relation extraction methods lack automatic annotation and perform poorly on new relation extraction tasks. We propose an open-domain relation extraction system (ORES) based on distant supervision and few-shot learning to solve this problem. More specifically, we utilize tBERT to design instance selector 1, implementing automatic labeling in the data mining component. Meanwhile, we design example selector 2 based on K-BERT in the new relation extraction component. The real-time data management component outputs new relational data. Experiments show that ORES can filter out higher quality and diverse instances for better new relation learning. It achieves significant improvement compared to Neural Snowball with fewer seed sentences. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
DeConNet: Deep Neural Network Model to Solve the Multi-Job Assignment Problem in the Multi-Agent System
Appl. Sci. 2022, 12(11), 5454; https://doi.org/10.3390/app12115454 - 27 May 2022
Cited by 2 | Viewed by 1402
Abstract
In a multi-agent system, multi-job assignment is an optimization problem that seeks to minimize the total cost. It can be generalized as a complex problem combining several variations of the vehicle routing problem and is NP-hard. The parameters considered include the number of agents and jobs, the loading capacity, the speed of the agents, and the sequence of consecutive job positions. In this study, a deep neural network (DNN) model was developed to solve the job assignment problem in constant time, regardless of the state of the parameters. To generate a large training dataset for the DNN, the planning domain definition language (PDDL) was used to describe the problem, and the optimal solutions obtained with a PDDL solver were preprocessed into dataset samples. The DNN was constructed by concatenating fully connected layers. The assignment solution obtained via DNN inference increased the average traveling time by at most 13% compared with the ground-truth cost. Whereas computing the ground-truth cost required hundreds of seconds, the DNN execution time was constant at approximately 20 ms regardless of the number of agents and jobs. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
A Hierarchical Representation Model Based on Longformer and Transformer for Extractive Summarization
Electronics 2022, 11(11), 1706; https://doi.org/10.3390/electronics11111706 - 27 May 2022
Cited by 2 | Viewed by 1713
Abstract
Automatic text summarization compresses documents while preserving the main idea of the original text and includes extractive and abstractive approaches. Extractive text summarization extracts important sentences from the original document to serve as the summary, and the document representation method is crucial for the quality of the generated summary. To represent the document effectively, we propose a hierarchical document representation model, Long-Trans-Extr, for extractive summarization, which uses Longformer as the sentence encoder and a Transformer as the document encoder. The advantage of Longformer as the sentence encoder is that the model can take long inputs of up to 4096 tokens while adding relatively little computation. The proposed Long-Trans-Extr model is evaluated on three benchmark datasets: CNN (Cable News Network), DailyMail, and the combined CNN/DailyMail. It achieves 43.78 (ROUGE-1) and 39.71 (ROUGE-L) on CNN/DailyMail and 33.75 (ROUGE-1), 13.11 (ROUGE-2), and 30.44 (ROUGE-L) on the CNN dataset. These are very competitive results and show that our model performs particularly well on long documents, such as those in the CNN corpus. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
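The hierarchical sentence-encoder/document-encoder structure can be sketched as below, with a placeholder mean-pooled token embedding standing in for Longformer and a small Transformer encoder scoring sentences for extraction; all dimensions and the scoring head are illustrative assumptions.

```python
# Hedged sketch of the hierarchical structure: a sentence encoder produces one
# vector per sentence (Longformer in the paper; a placeholder mean-pooled token
# embedding here) and a document-level Transformer scores each sentence for
# extraction. All dimensions and the scoring head are assumptions.
import torch
import torch.nn as nn

class HierarchicalExtractor(nn.Module):
    def __init__(self, vocab_size=30000, dim=256, n_heads=4, n_layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)    # stand-in sentence encoder
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.doc_encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.scorer = nn.Linear(dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, n_sentences, n_tokens)
        sent_vecs = self.tok_emb(token_ids).mean(dim=2)  # (batch, n_sent, dim)
        doc_ctx = self.doc_encoder(sent_vecs)            # sentence-level context
        return self.scorer(doc_ctx).squeeze(-1)          # extraction logits

model = HierarchicalExtractor()
doc = torch.randint(0, 30000, (2, 12, 40))   # 2 docs, 12 sentences, 40 tokens
print(model(doc).shape)                      # torch.Size([2, 12])
```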

Article
Improved Bidirectional GAN-Based Approach for Network Intrusion Detection Using One-Class Classifier
Computers 2022, 11(6), 85; https://doi.org/10.3390/computers11060085 - 26 May 2022
Cited by 7 | Viewed by 2368
Abstract
Existing generative adversarial networks (GANs), primarily used for creating fake image samples from natural images, require a strong dependency between the generators and the discriminators (i.e., their training strategies need to be kept in sync) for the generators to produce fake samples realistic enough to “fool” the discriminators. We argue that this strong dependency required for GAN training on images does not necessarily carry over to GAN models for network intrusion detection tasks, because network intrusion inputs have a simpler feature structure, with relatively low dimensionality, discrete feature values, and smaller input size than the images used in existing GAN-based anomaly detection tasks. To address this issue, we propose a new Bidirectional GAN (Bi-GAN) model that is better suited to network intrusion detection and reduces the overhead of excessive training. In our proposed method, the training iterations of the generator (and accordingly the encoder) are increased separately from the training of the discriminator until a condition on the cross-entropy loss is satisfied. Our empirical results show that this training strategy greatly improves the performance of both the generator and the discriminator, even in the presence of imbalanced classes. In addition, our model offers a new construction of a one-class classifier using the trained encoder–discriminator: it detects anomalous network traffic from binary classification results instead of calculating expensive and complex anomaly scores (or thresholds). Our experimental results illustrate that the proposed method is highly effective for network intrusion detection tasks and outperforms other similar generative methods on two datasets: NSL-KDD and CIC-DDoS2019. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Optimization of Apron Support Vehicle Operation Scheduling Based on Multi-Layer Coding Genetic Algorithm
Appl. Sci. 2022, 12(10), 5279; https://doi.org/10.3390/app12105279 - 23 May 2022
Cited by 5 | Viewed by 1225
Abstract
Operation scheduling of apron support vehicles is an important factor affecting aircraft support capability. However, at present, the traditional support methods have the problems of low utilization rate of support vehicles and low support efficiency in multi-aircraft support. In this paper, a vehicle scheduling model is constructed, and a multi-layer coding genetic algorithm is designed to solve the vehicle scheduling problem. In this paper, the apron support vehicle operation scheduling problem is regarded as a Resource-Constrained Project Scheduling Problem (RCPSP), and the support vehicles and their support procedures are adjusted via the sequential sorting method to achieve the optimization goals of shortening the support time and improving the vehicle utilization rate. Based on a specific example, the job scheduling before and after the optimization of the number of support vehicles is simulated using a multi-layer coding genetic algorithm. The results show that compared with the traditional support scheme, the vehicle scheduling time optimized via the multi-layer coding genetic algorithm is obviously shortened; after the number of vehicles is optimized, the support time is further shortened and the average utilization rate of vehicles is improved. Finally, the optimized apron support vehicle number configuration and the best scheduling scheme are given. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Double Linear Transformer for Background Music Generation from Videos
Appl. Sci. 2022, 12(10), 5050; https://doi.org/10.3390/app12105050 - 17 May 2022
Viewed by 1597
Abstract
Many music generation studies have achieved strong performance, but they rarely combine music with a given video. We propose a model with two linear Transformers that generates background music for a given video. To enhance the melodic quality of the generated music, we first input note-related and rhythm-related music features separately into each Transformer network, paying attention to both the connection and the independence of these features. Then, to generate music that matches the given video, a state-of-the-art cross-modal inference method is used to establish the relationship between the visual and sound modalities. Subjective and objective experiments indicate that the generated background music matches the video well and is also melodious. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Article
Improving Non-Autoregressive Machine Translation Using Sentence-Level Semantic Agreement
Appl. Sci. 2022, 12(10), 5003; https://doi.org/10.3390/app12105003 - 16 May 2022
Viewed by 1239
Abstract
The inference stage can be accelerated significantly using a Non-Autoregressive Transformer (NAT). However, the training objective of the NAT model minimizes the loss between the generated words and the golden words in the reference. Since dependencies between the target words are lacking, this word-level training objective can easily cause semantic inconsistency between the generated and source sentences. To alleviate this issue, we propose a new method, Sentence-Level Semantic Agreement (SLSA), to obtain consistency between the source and generated sentences. Specifically, we utilize contrastive learning to pull the sentence representations of the source and generated sentences closer together. In addition, to strengthen the capability of the encoder, we integrate an agreement module into the encoder to obtain a better representation of the source sentence. Experiments are conducted on three translation datasets: the WMT 2014 EN → DE task, the WMT 2016 EN → RO task, and the IWSLT 2014 DE → EN task, and the improvements in the NAT model's performance show the effectiveness of the proposed method. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
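The sentence-level agreement idea, pulling source and generated sentence representations together with contrastive learning, can be sketched as a symmetric InfoNCE-style loss; SLSA's exact formulation, pooling and agreement module are not reproduced, and the temperature below is an assumed value.

```python
# Hedged sketch of a sentence-level agreement objective: an InfoNCE-style
# contrastive loss that pulls each source sentence representation towards the
# representation of its own generated sentence and away from other sentences
# in the batch. SLSA's exact formulation and the agreement module are assumed.
import torch
import torch.nn.functional as F

def sentence_agreement_loss(src_repr: torch.Tensor,
                            gen_repr: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """src_repr, gen_repr: (batch, dim) pooled sentence representations."""
    src = F.normalize(src_repr, dim=-1)
    gen = F.normalize(gen_repr, dim=-1)
    logits = src @ gen.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(src.size(0), device=src.device)
    # symmetric InfoNCE: match src->gen and gen->src on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

src = torch.randn(8, 512)
gen = torch.randn(8, 512)
print(float(sentence_agreement_loss(src, gen)))
```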

Article
A Multivariate Temporal Convolutional Attention Network for Time-Series Forecasting
Electronics 2022, 11(10), 1516; https://doi.org/10.3390/electronics11101516 - 10 May 2022
Cited by 4 | Viewed by 2750
Abstract
Multivariate time-series forecasting is one of the crucial and persistent challenges in time-series forecasting tasks. As a kind of data with multivariate correlation and volatility, multivariate time series impose highly nonlinear time characteristics on the forecasting model. In this paper, a new multivariate time-series forecasting model, multivariate temporal convolutional attention network (MTCAN), based on a self-attentive mechanism is proposed. MTCAN is based on the Convolution Neural Network (CNN) model, using 1D dilated convolution as the basic unit to construct asymmetric blocks, and then, the feature extraction is performed by the self-attention mechanism to finally obtain the prediction results. The input and output lengths of this network can be determined flexibly. The validation of the method is carried out with three different multivariate time-series datasets. The reliability and accuracy of the prediction results are compared with Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Long Short-Term Memory (ConvLSTM), and Temporal Convolutional Network (TCN). The prediction results show that the model proposed in this paper has significantly improved prediction accuracy and generalization. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
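The basic ingredients named in the abstract, 1D dilated convolution followed by self-attention over the time axis, can be sketched in PyTorch as below; the asymmetric block layout, depths and dimensions of the actual MTCAN are assumptions here.

```python
# Hedged PyTorch sketch of the ingredients named above: a causal 1D dilated
# convolution over a multivariate window followed by self-attention over the
# time axis. MTCAN's asymmetric block layout, depths and dimensions are not
# reproduced; everything below is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedConvAttentionBlock(nn.Module):
    def __init__(self, n_vars: int, hidden: int = 64, dilation: int = 2):
        super().__init__()
        self.pad = (3 - 1) * dilation                    # left pad for causality
        self.conv = nn.Conv1d(n_vars, hidden, kernel_size=3, dilation=dilation)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, n_vars)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_vars) multivariate series window
        h = F.pad(x.transpose(1, 2), (self.pad, 0))      # (batch, n_vars, time+pad)
        h = torch.relu(self.conv(h)).transpose(1, 2)     # (batch, time, hidden)
        h, _ = self.attn(h, h, h)                        # self-attention over time
        return self.head(h[:, -1])                       # forecast the next step

model = DilatedConvAttentionBlock(n_vars=7)
window = torch.randn(16, 96, 7)           # 16 windows, 96 steps, 7 variables
print(model(window).shape)                # torch.Size([16, 7])
```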