Software Reliability and Fault Injection

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Systems".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 4503

Special Issue Editor


Guest Editor
Department of Informatics Engineering, Polytechnic Institute of Coimbra, 3030-199 Coimbra, Portugal
Interests: software reliability; testing; fault injection

Special Issue Information

Dear Colleagues,

The MDPI journal Information invites submissions to a Special Issue on “Software Reliability and Fault Injection”.

Modern society is increasingly dependent on computer-based systems and the software that controls them. This dependence spans all aspects of life, from basic consumer services and financial transactions to medical care and power distribution. The pervasive nature of software systems makes assuring software quality a necessity. Given the increasing complexity of modern software and the consequent difficulty of assuring its quality, methodologies and techniques for testing and measuring software reliability are highly relevant to both academia and industry.

Fault injection is a methodology used in many critical application scenarios to evaluate system robustness, risk, and worst-case behavior. Its fundamental idea is to inject artificial faults, representative of realistic ones, into one or more specific modules of a system in order to evaluate the overall system behavior and the efficacy of fault-tolerance mechanisms. Fault injection can help developers and integrators identify weak aspects of a system during development and measure the risk of using third-party components, allowing for the deployment of preventive mitigation actions, such as wrappers or further development.
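The wrapper-based flavor of this idea can be sketched in a few lines of Python. This is a minimal illustration only, not any specific fault-injection tool: the names `inject_faults` and `robust_fetch`, the injected `TimeoutError`, and the 50% fault rate are all hypothetical choices made for the example.

```python
import random

def inject_faults(func, fault_rate=0.3, fault=TimeoutError("injected fault"), seed=None):
    """Wrap a component so that some calls raise an artificial fault.

    fault_rate is the probability that a call is replaced by the injected
    fault; a fixed seed makes an injection campaign reproducible.
    """
    rng = random.Random(seed)
    def wrapper(*args, **kwargs):
        if rng.random() < fault_rate:
            raise fault  # artificial fault representative of a real failure mode
        return func(*args, **kwargs)
    return wrapper

# Component under evaluation: a toy service call.
def fetch_record(key):
    return {"key": key, "value": key * 2}

# Fault-tolerance mechanism under evaluation: retry, then degrade gracefully.
def robust_fetch(key, fetch, retries=3):
    for _ in range(retries):
        try:
            return fetch(key)
        except TimeoutError:
            continue
    return {"key": key, "value": None}  # degraded but safe fallback

# Injection campaign: exercise the system through the faulty component and
# observe how often the fault-tolerance mechanism had to fall back.
faulty_fetch = inject_faults(fetch_record, fault_rate=0.5, seed=42)
results = [robust_fetch(k, faulty_fetch) for k in range(100)]
fallbacks = sum(1 for r in results if r["value"] is None)
print(f"degraded responses: {fallbacks}/100")
```

The point of the campaign is that the system never crashes: every call returns either a correct or an explicitly degraded result, and the fallback count quantifies how the retry policy copes with the chosen fault rate.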

Given the increasing complexity and size of software and its role in modern society, assuring software reliability is a highly relevant topic. Fault injection is a time-proven technique for evaluating the reliability of systems. Given the complexity of current software systems, fault injection tools and techniques must deal with hard problems, including reachability, observability, controllability, and fault definition. Despite the very large body of existing work, contributions concerning tools, models, and methodologies for software reliability and fault injection remain very relevant.

Prof. Dr. Joao Duraes
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • software reliability
  • fault injection
  • software testing
  • robustness testing
  • software quality

Published Papers (2 papers)


Research

26 pages, 494 KiB  
Article
A Notional Understanding of the Relationship between Code Readability and Software Complexity
by Yahya Tashtoush, Noor Abu-El-Rub, Omar Darwish, Shorouq Al-Eidi, Dirar Darweesh and Ola Karajeh
Information 2023, 14(2), 81; https://doi.org/10.3390/info14020081 - 31 Jan 2023
Viewed by 1918
Abstract
Code readability and software complexity are considered essential components of software quality. They significantly impact software metrics such as reusability and maintainability. Maintenance consumes a high percentage of the software lifecycle cost, making it a very costly phase that deserves particular focus and attention. For this reason, this paper addresses code readability and software complexity as they relate to the most time-consuming component of software maintenance activities. It empirically studies the relationship between code readability and software complexity using various readability and complexity metrics and machine learning algorithms. The results are derived from a dataset containing roughly 12,180 Java files, 25 readability features, and several complexity metric variables. The study empirically shows how these two attributes affect each other: code readability predicts software complexity with 90.15% effectiveness using a decision tree classifier, and, conversely, software complexity predicts code readability with 90.01% accuracy using the same classifier.
(This article belongs to the Special Issue Software Reliability and Fault Injection)

20 pages, 6941 KiB  
Article
Tool Support for Improving Software Quality in Machine Learning Programs
by Kwok Sun Cheng, Pei-Chi Huang, Tae-Hyuk Ahn and Myoungkyu Song
Information 2023, 14(1), 53; https://doi.org/10.3390/info14010053 - 16 Jan 2023
Cited by 1 | Viewed by 2094
Abstract
Machine learning (ML) techniques discover knowledge from large amounts of data, and ML modeling is becoming essential to software systems in practice. ML research communities have focused on the accuracy and efficiency of ML models, while less attention has been paid to validating the quality of those models. Validating ML applications is a challenging and time-consuming process for developers, since prediction accuracy heavily relies on the generated models. ML applications are written in a relatively data-driven programming style on top of black-box ML frameworks, so the datasets and the ML application must each be investigated individually; validation therefore takes considerable time and effort. To address this limitation, we present MLVal, a novel quality validation technique that increases the reliability of ML models and applications. Our approach helps developers inspect the training data and the features generated for the ML model. Data validation is important and beneficial to software quality, since the quality of the input data affects the speed and accuracy of training and inference. Inspired by software debugging/validation for reproducing reported bugs, MLVal takes as input an ML application and its training datasets to build the ML models, helping ML application developers easily reproduce and understand anomalies in the application. We have implemented an Eclipse plugin for MLVal that allows developers to validate the prediction behavior of their ML applications, the ML model, and the training data in the Eclipse IDE. In our evaluation, we used 23,500 documents from the bioengineering research domain. We assessed the ability of the MLVal validation technique to effectively help ML application developers (1) investigate the connection between the produced features and the labels in the training model, and (2) detect errors early to secure model quality through better data. Our approach reduces the cost of the engineering effort needed to validate problems, improving data-centric workflows of ML application development.
(This article belongs to the Special Issue Software Reliability and Fault Injection)
