AI Test

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 January 2024)

Special Issue Editors

School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford OX33 1HX, UK
Interests: software engineering; software development methodology; automated and intelligent software development tools
Department of Information Science, University of North Texas, Denton, TX 76203, USA
Interests: machine learning; software engineering; legal intelligence; biomedical computation
Grenoble INP, University Grenoble Alpes, LCIS, 26000 Valence, France
Interests: security and trust in embedded and distributed, autonomous applications and systems; test and security of mobile applications; system-level online fault diagnosis (hardware components, middleware, software components)

Special Issue Information

Dear Colleagues,

With the rapid growth of AI applications, especially machine learning applications, software engineering is confronted with a grave challenge in testing and ensuring the quality of such applications. Many traditional software testing techniques, tools and methodologies cannot simply be applied to these applications, or become less effective and efficient, because of the fundamental differences between machine learning models and traditional coded programs. On the other hand, AI techniques offer new approaches to solving software testing problems. The scope of this Special Issue covers both “testing for AI” (i.e., testing and quality assurance for AI applications) and “testing by AI” (i.e., testing and quality assurance of software applications by employing AI techniques).

It has been four years since the successful launch of the IEEE International Conference on Artificial Intelligence Testing (IEEE AITest) in April 2019; the AITest 2023 conference will celebrate its fifth edition in July 2023 in Athens, Greece. In this period, both “testing for AI” and “testing by AI” have developed rapidly. More importantly, a new direction of research on “testing x AI”, the interplay between “testing for AI” and “testing by AI”, has started to emerge; this interplay was one of the key motivations of the IEEE AITest conferences. A rapidly increasing number of research works have been published on the subject, yet grave challenges remain in practice as well as in the research community.

This Special Issue aims to set a milestone in this rapidly growing subject area, with archival articles that reflect the current state of the art in research and current practice, together with survey, review and visionary research papers that summarize the results so far, analyse the challenges ahead and set a roadmap for future directions. The authors of the best papers of AITest 2023 will be invited to submit revised and extended versions containing at least 30% new material beyond the conference paper. The Special Issue will also combine invited papers from authors who have made significant contributions to the subject area with an open call for papers for new contributions. All submissions will be rigorously, fairly and scientifically reviewed to the standard of internationally top-ranked journals, according to the following criteria: (a) scientific and technological soundness, (b) maturity of the research work, (c) relevance to the theme of the Special Issue, (d) timeliness of the work, (e) significance of the contribution, and (f) presentation quality.

Topics of interest include, but are not limited to:

  • Quality models, quality attributes and metrics for AI applications, such as robustness, fairness, reliability and performance.
  • Testing methods, techniques and tools for various aspects and activities of testing and quality assurance, especially test case generation, test oracles, and test adequacy measurement.
  • Test automation environments and platforms for AI applications and for testing by employing AI techniques.
  • Domain-specific testing techniques and methods for various special domains of AI applications, such as natural language processing (e.g., ChatGPT), image recognition, time series and Internet of Things data analysis, medicine and healthcare, robotics, software code generation/debugging/design, etc.
  • Testing and quality assurance for various specific AI techniques, such as clustering algorithms and classifiers, regression machine learning models, deep neural networks, large models, recurrent neural networks, etc.

Prof. Dr. Hong Zhu
Prof. Dr. Junhua Ding
Prof. Dr. Aktouf Oum-El-Kheir
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • software testing
  • machine learning
  • software engineering

Published Papers (1 paper)


Research

20 pages, 4090 KiB  
Article
Uncertainty Quantification of Machine Learning Model Performance via Anomaly-Based Dataset Dissimilarity Measures
by Gabriele Incorvaia, Darryl Hond and Hamid Asgari
Electronics 2024, 13(5), 939; https://doi.org/10.3390/electronics13050939 - 29 Feb 2024
Abstract
The use of Machine Learning (ML) models as predictive tools has increased dramatically in recent years. However, data-driven systems (such as ML models) exhibit a degree of uncertainty in their predictions. In other words, they could produce unexpectedly erroneous predictions if the uncertainty stemming from the data, choice of model and model parameters is not taken into account. In this paper, we introduce a novel method for quantifying the uncertainty of the performance levels attained by ML classifiers. In particular, we investigate and characterize the uncertainty of model accuracy when classifying out-of-distribution data that are statistically dissimilar from the data employed during training. A main element of this novel Uncertainty Quantification (UQ) method is a measure of the dissimilarity between two datasets. We introduce an innovative family of data dissimilarity measures based on anomaly detection algorithms, namely the Anomaly-based Dataset Dissimilarity (ADD) measures. These dissimilarity measures process feature representations that are derived from the activation values of neural networks when supplied with dataset items. The proposed UQ method for classification performance employs these dissimilarity measures to estimate the classifier accuracy for unseen, out-of-distribution datasets, and to give an uncertainty band for those estimates. A numerical analysis of the efficacy of the UQ method is conducted using standard Artificial Neural Network (ANN) classifiers and public domain datasets. The results obtained generally demonstrate that the amplitude of the uncertainty band associated with the estimated accuracy values tends to increase as the data dissimilarity measure increases. Overall, this research contributes to the verification and run-time performance prediction of systems composed of ML-based elements. Full article
(This article belongs to the Special Issue AI Test)
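
The following short Python sketch illustrates, very roughly, the kind of anomaly-based dataset dissimilarity measure described in the abstract: activation features extracted from a trained classifier are fed to an anomaly detector fitted on the training data, and the average anomaly score over a new dataset serves as the dissimilarity value. The detector (scikit-learn's IsolationForest), the function names and the placeholder data are illustrative assumptions, not the authors' implementation.

    # Rough sketch of an anomaly-based dataset dissimilarity (ADD-style) measure.
    # Assumptions: activation features are already extracted as 2-D arrays, and
    # IsolationForest stands in for the paper's anomaly detection algorithms.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    def dataset_dissimilarity(train_features, test_features, random_state=0):
        # Fit the anomaly detector on activations of the training dataset.
        detector = IsolationForest(random_state=random_state)
        detector.fit(train_features)
        # score_samples is higher for in-distribution points, so negate and
        # average over the new dataset to obtain a dissimilarity score.
        return float(-detector.score_samples(test_features).mean())

    # Usage sketch with synthetic activations: the shifted data should yield a
    # larger dissimilarity, which the paper relates to estimated accuracy and
    # an uncertainty band for unseen, out-of-distribution datasets.
    train_acts = np.random.randn(1000, 64)
    ood_acts = np.random.randn(200, 64) + 3.0
    print(dataset_dissimilarity(train_acts, ood_acts))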
