Advances in Artificial Intelligence Engineering

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 October 2024 | Viewed by 7479

Special Issue Editors


Guest Editor
1. Entwicklungsprofessur (equivalent to Junior Professor), Faculty of Engineering and Computer Science, University of Applied Sciences Osnabrueck, Osnabrueck, Germany
2. Functional Safety Engineer, Innotec GmbH, Erlenweg 12, 49324 Melle, Germany
Interests: software engineering and quality assurance (with focus on AI); functional safety software engineering; model-based software development; embedded software engineering

Guest Editor
Institute for Computer Science, Software Engineering Research Group, Wachsbleiche 27, 49090 Osnabrück, Germany
Interests: quality assurance; automation in software development; embedded software engineering

Special Issue Information

Dear Colleagues,

Over the last decades, Software Engineering (SE) has had a profound impact on many fields: the classic software industry, the transformation towards digitization and Industry 4.0, IT-heavy service sectors such as banks, insurance companies, and telecom providers, specialized fields such as embedded software, and industrial and scientific research departments and institutes. At the same time, Artificial Intelligence (AI) and its sub-area Machine Learning (ML) are beginning to have a transformative impact on almost every major industry. With recent advances in ML, there is therefore widespread interest in integrating AI capabilities with software engineering. The convergence of the two fields can give rise to collaboration in two main ways: (a) AI-guided SE and (b) SE for AI.

SE can benefit from the integration of AI-related technologies (such as reasoning, problem solving, planning, and learning) to increase its power, flexibility, user experience, and quality. For instance, even simple AI/ML methods can help remove many inefficiencies in the day-to-day work of software developers. It is therefore intuitive that AI-powered SE should significantly increase the benefits and reduce the costs of adopting SE artefacts. Conversely, in the case of SE for AI, AI development can benefit from integrating established concepts and ideas from SE.

This Special Issue is aimed at addressing the opportunities and challenges arising from the integration of AI and SE in both directions, (a) and (b). Topics include (but are not limited to):

  • AI planning applied to the SE development process;
  • Self-adapting code generators;
  • AI-based code analyzers for detecting code smells and anti-patterns;
  • Machine learning of models, meta-models, and model transformations through search-based approaches in model-based software engineering (MBSE);
  • AI-based assistants such as bots for SE tools;
  • AI assistants for human-in-the-loop modeling, such as conversational virtual assistants for dialog-based optimization of SE tasks;
  • AI support for various stages of the SE development process;
  • AI-based and automated natural language processing (NLP) (e.g., applied to various stages of the SE development process and model-based development);
  • Application of AI in semantic reasoning platforms;
  • Code recommendation engines;
  • ML-based automated code review and assessing the risk of a code change;
  • AI techniques for data, process and model mining and categorization;
  • Challenges in choice, evaluation, and adaptation of AI techniques to SE, such that they provide a compelling improvement to current systems during the entire software development process;
  • Automated frameworks and supporting environments for ML workflows and ML processes;
  • Model-driven processes for AI systems development and testing;
  • Automatic code generators for AI libraries;
  • Domain-specific modeling for ML;
  • Case studies of applications of AI/ML in SE and vice versa.

Dr. Padma Iyenghar
Prof. Dr. Elke Pulvermüller
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • software engineering
  • application of AI in software engineering
  • SE for AI
  • AI-based assistants
  • chatbots
  • AI assistant

Published Papers (7 papers)


Research

18 pages, 1359 KiB  
Article
Utilizing Latent Diffusion Model to Accelerate Sampling Speed and Enhance Text Generation Quality
by Chenyang Li, Long Zhang and Qiusheng Zheng
Electronics 2024, 13(6), 1093; https://doi.org/10.3390/electronics13061093 - 15 Mar 2024
Viewed by 589
Abstract
Diffusion models have achieved tremendous success in modeling continuous data modalities, such as images, audio, and video, yet their application in discrete data domains (e.g., natural language) has been limited. Existing methods primarily represent discrete text in a continuous diffusion space, incurring significant computational overhead during training and resulting in slow sampling speeds. This paper introduces LaDiffuSeq, a latent diffusion-based text generation model incorporating an encoder–decoder structure. Specifically, it first employs a pretrained encoder to map sequences composed of attributes and corresponding text into a low-dimensional latent vector space. Then, without the guidance of a classifier, it performs the diffusion process in the sequence's corresponding latent space. Finally, a pretrained decoder is used to decode the newly generated latent vectors, producing target texts that are relevant to themes and possess multiple emotional granularities. Compared to the benchmark model, DiffuSeq, this model achieves BERTScore improvements of 0.105 and 0.009 on two public real-world datasets (ChnSentiCorp and a debate dataset), respectively; perplexity falls by 3.333 and 4.562; and it effectively quadruples the text generation sampling speed.
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)

19 pages, 2039 KiB  
Article
PatchRLNet: A Framework Combining a Vision Transformer and Reinforcement Learning for the Separation of a PTFE Emulsion and Paraffin
by Xinxin Wang, Lei Wu, Bingyu Hu, Xinduoji Yang, Xianghui Fan, Meng Liu, Kai Cheng, Song Wang, Jianqiang Miao and Haigang Gong
Electronics 2024, 13(2), 339; https://doi.org/10.3390/electronics13020339 - 12 Jan 2024
Viewed by 617
Abstract
During the production of a polytetrafluoroethylene (PTFE) emulsion, it is crucial to detect the separation between the PTFE emulsion and liquid paraffin in order to purify the PTFE emulsion and facilitate subsequent polymerization. However, the current practice heavily relies on visual inspections conducted by on-site personnel, resulting not only in low efficiency and accuracy but also in potential threats to personnel safety. The incorporation of artificial intelligence for the automated detection of paraffin separation holds the promise of significantly improving detection accuracy and mitigating potential risks to personnel. Thus, we propose an automated detection framework named PatchRLNet, which leverages a combination of a vision transformer and reinforcement learning. Reinforcement learning is integrated into the embedding layer of the vision transformer in PatchRLNet, providing attention scores for each patch. This strategic integration compels the model to allocate greater attention to the essential features of the target, effectively filtering out ambient environmental factors and background noise. Building upon this foundation, we introduce a multimodal integration mechanism to further enhance the prediction accuracy of the model. To validate the efficacy of our proposed framework, we conducted performance testing using authentic data from China's largest PTFE material production base. The results are compelling, demonstrating that the framework achieved an accuracy rate of over 99% on the test set, underscoring its significant practical application value. To the best of our knowledge, this represents the first instance of automated detection applied to the separation of the PTFE emulsion and paraffin.
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)

15 pages, 2160 KiB  
Article
Safe and Trustful AI for Closed-Loop Control Systems
by Julius Schöning and Hans-Jürgen Pfisterer
Electronics 2023, 12(16), 3489; https://doi.org/10.3390/electronics12163489 - 17 Aug 2023
Cited by 1 | Viewed by 1549
Abstract
In modern times, closed-loop control systems (CLCSs) play a prominent role in a wide application range, from production machinery via automated vehicles to robots. CLCSs actively manipulate the actual values of a process to match predetermined setpoints, typically in real time and with remarkable precision. However, the development, modeling, tuning, and optimization of CLCSs barely exploit the potential of artificial intelligence (AI). This paper explores novel opportunities and research directions in CLCS engineering, presenting potential designs and methodologies incorporating AI. Combining these opportunities and directions makes it evident that employing AI in developing and implementing CLCSs is indeed feasible. Integrating AI into CLCS development, or AI directly within CLCSs, can lead to a significant improvement in stakeholder confidence. It also raises the question: how can AI in CLCSs be trusted so that its promising capabilities can be used safely? AI in CLCSs is not yet trusted because of its opaque nature, caused by an extensive set of trainable parameters that defies complete testing. Consequently, developers working on AI-based CLCSs must be able to rate the impact of the trainable parameters on the system accurately. Following this path, this paper highlights two key aspects as essential research directions towards safe AI-based CLCSs: (I) the identification and elimination of unproductive layers in artificial neural networks (ANNs) to reduce the number of trainable parameters without influencing the overall outcome, and (II) the utilization of the solution space of an ANN to define the safety-critical scenarios of an AI-based CLCS.
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)
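The first research direction named in the abstract, identifying unproductive layers, can be illustrated with a deliberately simplified sketch. This is not the paper's algorithm: one naive criterion is to flag square layers whose affine map is close to the identity, since such a layer (ignoring its activation function, which would only preserve this for nonnegative inputs) merely copies its input and could be removed without changing the outcome. All function names and the tolerance are illustrative.

```python
def identity_distance(weights, bias):
    """Frobenius-style distance of the affine map (weights, bias) from the identity."""
    n = len(weights)
    dist = sum(
        (weights[i][j] - (1.0 if i == j else 0.0)) ** 2
        for i in range(n) for j in range(n)
    )
    dist += sum(b * b for b in bias)
    return dist ** 0.5

def flag_unproductive_layers(layers, tol=1e-2):
    """Return indices of square layers whose affine map is near the identity.

    `layers` is a list of (weight_matrix, bias_vector) pairs.
    """
    return [
        idx for idx, (w, b) in enumerate(layers)
        if len(w) == len(w[0]) and identity_distance(w, b) < tol
    ]
```

A real criterion would also have to account for activation functions and for layers that are unproductive only jointly, which is part of what makes the research direction non-trivial.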

28 pages, 1230 KiB  
Article
Robust Optimization with Interval Uncertainties Using Hybrid State Transition Algorithm
by Haochuan Zhang, Jie Han, Xiaojun Zhou and Yuxuan Zheng
Electronics 2023, 12(14), 3035; https://doi.org/10.3390/electronics12143035 - 11 Jul 2023
Cited by 1 | Viewed by 881
Abstract
Robust optimization is concerned with finding an optimal solution that is insensitive to uncertainties and has been widely used in solving real-world optimization problems. However, most robust optimization methods suffer from high computational costs and poor convergence. To alleviate the above problems, an improved robust optimization algorithm is proposed. First, to reduce the computational cost, the second-order Taylor series surrogate model is used to approximate the robustness indices. Second, to strengthen the convergence, the state transition algorithm is studied to explore the whole search space for candidate solutions, while sequential quadratic programming is adopted to exploit the local area. Third, to balance the robustness and optimality of candidate solutions, a preference-based selection mechanism is investigated which effectively determines the promising solution. The proposed robust optimization method is applied to obtain the optimal solutions of seven examples that are subject to decision variables and parameter uncertainties. Comparative studies with other robust optimization algorithms (robust genetic algorithm, Kriging metamodel-assisted robust optimization method, etc.) show that the proposed method can obtain accurate and robust solutions with less computational cost.
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)
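The first ingredient mentioned in the abstract, a second-order Taylor surrogate for a robustness index, can be sketched as follows. This is a hypothetical illustration rather than the authors' code: it estimates the worst case of a function over an interval box x ± radius by building a finite-difference quadratic model around x and evaluating it at the box vertices, which captures the surrogate's maximum when the quadratic is convex. The function names and step size are illustrative.

```python
import itertools

def taylor_worst_case(f, x, radius, h=1e-4):
    """Estimate max of f over the box [x_i - radius_i, x_i + radius_i]
    using a second-order Taylor surrogate built around x."""
    n = len(x)
    fx = f(x)

    def shifted(i, d):
        y = list(x)
        y[i] += d
        return y

    # Central finite-difference gradient and Hessian of f at x.
    grad = [(f(shifted(i, h)) - f(shifted(i, -h))) / (2 * h) for i in range(n)]
    hess = [[0.0] * n for _ in range(n)]
    for i in range(n):
        hess[i][i] = (f(shifted(i, h)) - 2 * fx + f(shifted(i, -h))) / h ** 2
        for j in range(i + 1, n):
            vals = []
            for si, sj in ((h, h), (h, -h), (-h, h), (-h, -h)):
                y = list(x)
                y[i] += si
                y[j] += sj
                vals.append(f(y))
            hess[i][j] = hess[j][i] = (vals[0] - vals[1] - vals[2] + vals[3]) / (4 * h ** 2)

    # Evaluate the quadratic surrogate at every vertex of the box; for a
    # convex surrogate the maximum over a box lies at a vertex, and 2^n
    # vertices are cheap for low-dimensional design vectors.
    best = fx
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        d = [s * r for s, r in zip(signs, radius)]
        q = fx + sum(g * di for g, di in zip(grad, d)) + 0.5 * sum(
            d[i] * hess[i][j] * d[j] for i in range(n) for j in range(n)
        )
        best = max(best, q)
    return best
```

The appeal of the surrogate is that, once the gradient and Hessian are in hand, the robustness index is evaluated without any further calls to the (possibly expensive) objective.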

17 pages, 2280 KiB  
Article
An Enhanced Method on Transformer-Based Model for ONE2SEQ Keyphrase Generation
by Lingyun Shen and Xiaoqiu Le
Electronics 2023, 12(13), 2968; https://doi.org/10.3390/electronics12132968 - 05 Jul 2023
Viewed by 888
Abstract
Keyphrase generation is a long-standing task in scientific literature retrieval, and Transformer-based models outperform other baseline models in this challenge dramatically. In cross-domain keyphrase generation research, topic information plays a guiding role during generation, while in keyphrase generation for an individual text, the title can take over the topic's role and convey more semantic information. We therefore propose an enhanced model architecture named TAtrans. In this research, we investigate the advantages of title attention, and of sequence codes representing phrase order in the keyphrase sequence, for improving Transformer-based keyphrase generation. We conduct experiments on five widely used English datasets specifically designed for keyphrase generation. Our method achieves an F1 score on the top five predictions (F1@5) that surpasses the Transformer-based model by 3.2% on KP20k. The results demonstrate that the proposed method outperforms all previous models in predicting present keyphrases. To evaluate the proposed model on Chinese data, we construct a new Chinese abstract dataset called CNKIL, which contains a total of 54,546 records. The F1@5 score for predicting present keyphrases on the CNKIL dataset exceeds that of the Transformer-based model by 2.2%. However, there is no significant improvement in the model's performance in predicting absent keyphrases.
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)

18 pages, 7072 KiB  
Article
An Intelligent Detection Method for Obstacles in Agricultural Soil with FDTD Modeling and MSVMs
by Yuanhong Li, Congyue Wang, Chaofeng Wang, Yangfan Luo and Yubin Lan
Electronics 2023, 12(11), 2447; https://doi.org/10.3390/electronics12112447 - 29 May 2023
Viewed by 895
Abstract
Unknown objects in agricultural soil can be important because they may impact the health and productivity of the soil and the crops that grow in it. Challenges in collecting soil samples present opportunities to utilize Ground Penetrating Radar (GPR) image processing and artificial intelligence techniques to identify and locate unidentified objects in agricultural soil, which are important for agriculture. In this study, we used finite-difference time-domain (FDTD) simulated models to gather training data and predict actual soil conditions. Additionally, we propose a multi-class support vector machine (MSVM) that employs a semi-supervised algorithm to classify buried object materials and locate their position in soil. Then, we extract echo signals from the electromagnetic features of the FDTD simulation model, including soil type, parabolic shape, location, and energy magnitude changes. Lastly, we compare the performance of various MSVM models with different kernel functions (linear, polynomial, and radial basis function). The results indicate that the FDTD-Yee method enhances the accuracy of simulating real agricultural soils. The average recognition rate of the hyperbola position formed by the GPR echo signal is 91.13%, which can be utilized to detect the position and material of unknown and underground objects. For material identification, the directed acyclic graph support vector machine (DAG-SVM) model attains the highest classification accuracy among all soil layers when using an RBF kernel. Overall, our study demonstrates that an artificial intelligence model trained with the FDTD forward simulation model can effectively detect objects in farmland soil.
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)
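The DAG-SVM evaluation strategy the abstract refers to can be sketched generically: a directed acyclic graph of pairwise classifiers eliminates one candidate class per node, so a k-class decision needs only k − 1 pairwise evaluations. The pairwise decision functions below are hypothetical threshold stand-ins for trained SVMs, not the paper's models.

```python
def dagsvm_predict(classes, pairwise, x):
    """Classify x by walking a DAG of pairwise classifiers.

    pairwise[(a, b)] is a function returning whichever of a, b wins on x.
    Each comparison eliminates one candidate class, so len(classes) - 1
    evaluations suffice regardless of the number of classes.
    """
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        key = (a, b) if (a, b) in pairwise else (b, a)
        winner = pairwise[key](x)
        if winner == a:
            remaining.pop()       # b is eliminated
        else:
            remaining.pop(0)      # a is eliminated
    return remaining[0]
```

For example, with three classes and simple 1-D threshold classifiers in place of trained SVMs, `dagsvm_predict([0, 1, 2], pairwise, x)` walks two nodes of the DAG and returns the surviving class.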

18 pages, 2725 KiB  
Article
Multi-Strategy Fusion of Sine Cosine and Arithmetic Hybrid Optimization Algorithm
by Lisang Liu, Hui Xu, Bin Wang and Chengyang Ke
Electronics 2023, 12(9), 1961; https://doi.org/10.3390/electronics12091961 - 23 Apr 2023
Cited by 2 | Viewed by 1048
Abstract
The goal was to address the problems of slow convergence, low solution accuracy, and insufficient performance on complex functions in the search process of the arithmetic optimization algorithm (AOA). A multi-strategy improved arithmetic optimization algorithm (SSCAAOA) is suggested in this study. By enhancing the population's initial distribution, optimizing the control parameters, integrating the sine cosine algorithm with improved parameters, and adding inertia weight coefficients and a population history information-sharing mechanism from the PSO algorithm, the optimization accuracy and convergence speed of the AOA are improved. This increases the algorithm's ability to perform a global search and prevents it from getting stuck in a local optimum. Comparisons of SSCAAOA with other optimization algorithms on benchmark test functions and engineering challenges are used to examine its efficacy. The analysis of the experimental data reveals that, compared to the other algorithms, the improved algorithm presented in this paper converges orders of magnitude faster and more accurately on unimodal functions and performs significantly better on multimodal functions. Practical engineering tests also demonstrate that the revised approach performs better.
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)
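For readers unfamiliar with the sine cosine component that SSCAAOA builds on, a minimal, unimproved sine cosine algorithm can be sketched as follows. This is not the paper's SSCAAOA; the parameters, seed, and test function are illustrative. Each agent oscillates around the best solution found so far, with an exploration radius that decays linearly over the iterations.

```python
import math
import random

def sca_minimize(f, lb, ub, n_agents=20, iters=200, a=2.0, seed=0):
    """Minimize f over the box [lb, ub] with a basic sine cosine algorithm."""
    rng = random.Random(seed)
    dim = len(lb)
    X = [[rng.uniform(lb[d], ub[d]) for d in range(dim)] for _ in range(n_agents)]
    best = min(X, key=f)[:]
    for t in range(iters):
        r1 = a - t * (a / iters)  # exploration radius decays linearly to 0
        for x in X:
            for d in range(dim):
                r2 = rng.uniform(0.0, 2.0 * math.pi)
                r3 = rng.uniform(0.0, 2.0)
                # Oscillate towards/around the best solution via sine or cosine.
                step = r1 * (math.sin(r2) if rng.random() < 0.5 else math.cos(r2))
                x[d] += step * abs(r3 * best[d] - x[d])
                x[d] = min(max(x[d], lb[d]), ub[d])  # keep agents within bounds
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best, f(best)
```

The improvements the abstract describes (better initial distribution, tuned control parameters, inertia weights, and history sharing borrowed from PSO) all modify pieces of exactly this loop.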
