Reliability and Risk of Complex Systems: Modelling, Analysis and Optimization

A special issue of Axioms (ISSN 2075-1680). This special issue belongs to the section "Mathematical Analysis".

Deadline for manuscript submissions: 31 December 2024

Special Issue Editors


Dr. Lechang Yang
Guest Editor
School of Mechanical Engineering, University of Science and Technology Beijing, Beijing, China
Interests: reliability modelling and analysis; Bayesian inference; uncertainty quantification; prognostics and health management

Dr. Qingqing Zhai
Guest Editor
School of Management, Shanghai University, Shanghai, China
Interests: industrial statistics; reliability engineering; degradation modeling

Dr. Rui Peng
Guest Editor
School of Economics and Management, Beijing University of Technology, Beijing, China
Interests: reliability; maintenance; risk; energy system; optimization

Dr. Aibo Zhang
Guest Editor
School of Mechanical Engineering, University of Science and Technology Beijing, Beijing, China
Interests: reliability, availability, maintainability, and safety (RAMS) analysis; system engineering; diagnostics and prognostics; maintenance optimization; asset management

Special Issue Information

Dear Colleagues, 

Modern industries are moving towards highly integrated systems of overwhelming complexity, which bring benefits but also potential risks with catastrophic consequences. Reliability engineering, as an engineering discipline, is developing rapidly across the whole lifecycle of industrial systems, covering system analysis, system design, operation and maintenance, and beyond.

This Special Issue aims to report cutting-edge methods and techniques in reliability-related fields by highlighting research on the key issues listed below.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following: 

  • Reliability modelling and analysis of complex systems;
  • Reliability-based design optimization (RBDO) methods;
  • Risk analysis and reliability assessment for large-scale complex systems;
  • Predictive maintenance scheme and decision-making optimization;
  • Statistical methods for degradation modelling;
  • Physical-based/data-driven prognostics and health management (PHM) techniques;
  • Machine learning techniques and applications in reliability engineering;
  • Uncertainty quantification and analysis for safety-critical systems;
  • Bayesian methods for reliability analysis. 

We look forward to receiving your contributions. 

Dr. Lechang Yang
Dr. Qingqing Zhai
Dr. Rui Peng
Dr. Aibo Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Axioms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • reliability modelling
  • optimization
  • risk analysis
  • statistical methods
  • degradation modelling
  • prognostics and health management
  • machine learning
  • uncertainty quantification
  • Bayesian methods

Published Papers (12 papers)

Research

24 pages, 2639 KiB  
Article
Estimation and Optimal Censoring Plan for a New Unit Log-Log Model via Improved Adaptive Progressively Censored Data
by Refah Alotaibi, Mazen Nassar and Ahmed Elshahhat
Axioms 2024, 13(3), 152; https://doi.org/10.3390/axioms13030152 - 26 Feb 2024
Abstract
A newly improved adaptive Type-II progressive censoring technique has been proposed to get around the difficulty of gathering enough data from studies that run for an extended duration, and it extends several well-known multi-stage censoring plans. Taking this scheme into account, this work focuses on conventional and Bayesian estimation tasks for the parameter and reliability indicators, where the unit log-log model acts as the base distribution. Point and interval estimation of the various parameters is considered from a classical standpoint. In addition to the conventional approach, the Bayesian methodology is examined to derive Bayes point estimates and credible intervals by leveraging the squared error loss function and the Markov chain Monte Carlo technique. Under varied settings, a simulation study is carried out to distinguish between the standard and Bayesian estimates. To implement the proposed procedures, two actual data sets are analyzed. Finally, multiple precision standards are considered to pick the optimal progressive censoring scheme. Full article
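
Editorial note: the censoring mechanism itself can be mimicked by direct bookkeeping. The sketch below simulates an ordinary progressive Type-II censored sample, not the improved adaptive scheme of the paper; exponential lifetimes stand in for the unit log-log model, and the removal pattern R and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def progressive_type2_sample(lifetimes, removals):
    """Simulate ordinary progressive Type-II censoring by direct bookkeeping:
    at the i-th observed failure, removals[i] surviving units are withdrawn at random."""
    alive = np.sort(np.asarray(lifetimes, dtype=float))
    observed = []
    for r in removals:
        observed.append(alive[0])                             # record the next failure
        alive = alive[1:]                                     # the failed unit leaves the test
        drop = rng.choice(alive.size, size=r, replace=False)  # withdraw r survivors at random
        alive = np.delete(alive, drop)
    return np.array(observed)

n, m = 30, 10
R = [2] * m                                     # removal pattern with m + sum(R) = n
lifetimes = rng.exponential(scale=1.0, size=n)  # exponential stand-in for the unit log-log model
print(progressive_type2_sample(lifetimes, R))
```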

23 pages, 832 KiB  
Article
Stress–Strength Reliability Analysis for Different Distributions Using Progressive Type-II Censoring with Binomial Removal
by Ibrahim Elbatal, Amal S. Hassan, L. S. Diab, Anis Ben Ghorbal, Mohammed Elgarhy and Ahmed R. El-Saeed
Axioms 2023, 12(11), 1054; https://doi.org/10.3390/axioms12111054 - 15 Nov 2023
Abstract
In the statistical literature, one of the most important and commonly used subjects is stress–strength reliability, defined as δ = P(W < V), where V and W are the strength and stress random variables, respectively, and δ is the reliability parameter. Type-II progressive censoring with binomial removal is used in this study to examine the inference of δ = P(W < V) for a component with strength V that is subjected to stress W. We suppose that V and W are independent random variables taken from the Burr XII distribution and the Burr III distribution, respectively, with a common shape parameter. The maximum likelihood estimator of δ is derived. The Bayes estimator of δ under the assumption of independent gamma priors is derived. To determine the Bayes estimates for the squared error and linear exponential loss functions in the absence of explicit forms, the Metropolis–Hastings method is provided. Utilizing comprehensive simulations and two metrics (average of estimates and root mean squared errors), we compare these estimators. Further, an analysis is performed on two actual data sets based on breakdown times for insulating fluid between electrodes recorded under varying voltages. Full article
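
Editorial note: for intuition about δ = P(W < V), a crude Monte Carlo check is easy to write. SciPy's burr12 and burr implement the Burr XII and Burr III families; the shape values below are illustrative assumptions, and this sketch is not a substitute for the maximum likelihood and Bayes estimators derived in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

c, d_strength, d_stress = 2.0, 3.0, 1.5   # illustrative shapes; c is the common shape parameter

V = stats.burr12(c=c, d=d_strength).rvs(size=100_000, random_state=rng)  # strength ~ Burr XII
W = stats.burr(c=c, d=d_stress).rvs(size=100_000, random_state=rng)      # stress   ~ Burr III

print("Monte Carlo estimate of delta = P(W < V):", np.mean(W < V).round(3))
```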

24 pages, 7230 KiB  
Article
Personalized Transfer Learning Framework for Remaining Useful Life Prediction Using Adaptive Deconstruction and Dynamic Weight Informer
by Xue Liu, Jian Ma and Dengwei Song
Axioms 2023, 12(10), 963; https://doi.org/10.3390/axioms12100963 - 12 Oct 2023
Abstract
The precise remaining useful life (RUL) prediction of turbofan engines benefits maintenance decisions. The quantity and quality of training data are crucial for effective prediction modeling and accuracy improvement. However, the performance degradation process of the same type of turbofan engine usually exhibits different trajectories because of differences in the engines’ degradation degrees, degradation rates, and initial health states. In addition, the initial part of the trajectory is a stationary health stage, which contains very little information on degradation and is not helpful for modeling. Considering the differential degradation characteristics and the requirement for accurate prediction modeling of the same type of turbofan engines with individual differences, we propose a personalized transfer learning framework for RUL prediction by answering three key questions: when, what, and how to transfer in prediction modeling. The framework tries to maximally utilize multi-source-domain data (samples of the same type of engines that run to failure) to improve the training data quantity and quality. Firstly, a transfer time identification method based on a dual-baseline performance assessment and the Wasserstein distance is designed to eliminate the worthless part of a trajectory for transfer and prediction modeling. Then, the transferability of each sample in the multi-source domain is measured by an approach named the time-lag ensemble distance measurement, and the source domain is ranked and adaptively deconstructed into two parts according to transferability. Ultimately, a new training loss function considering the transferability of the weighted multi-source-domain data and a two-stage transfer learning scheme are introduced into an informer-based RUL prediction model, which has a great advantage for long-time-series prediction. Simulation data from 100 turbofan engines of the same type with individual differences and five comparison experiments validate the effectiveness and accuracy of the proposed method. Full article
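
Editorial note: to give a flavour of the transfer-time idea, SciPy's one-dimensional Wasserstein distance can flag when a sliding window of a health indicator starts to drift away from an early healthy baseline. This is only a loose illustration on synthetic data with an ad hoc cut-off, not the dual-baseline assessment designed in the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Synthetic health indicator: a flat (healthy) phase followed by a slow drift.
signal = np.concatenate([rng.normal(0.0, 0.05, 150),
                         rng.normal(0.0, 0.05, 150) + np.linspace(0.0, 1.0, 150)])

window = 50
baseline = signal[:window]                       # assumed healthy reference segment

# 1-D Wasserstein distance between each sliding window and the baseline.
dist = np.array([wasserstein_distance(signal[t:t + window], baseline)
                 for t in range(signal.size - window)])

cutoff = dist[:80].mean() + 3 * dist[:80].std()  # simple cut-off from early, healthy windows
onset = int(np.argmax(dist > cutoff))            # first window exceeding the cut-off
print("suggested transfer/onset window index:", onset)
```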

12 pages, 289 KiB  
Article
Parameter Estimation of the Dirichlet Distribution Based on Entropy
by Büşra Şahin, Atıf Ahmet Evren, Elif Tuna, Zehra Zeynep Şahinbaşoğlu and Erhan Ustaoğlu
Axioms 2023, 12(10), 947; https://doi.org/10.3390/axioms12100947 - 5 Oct 2023
Abstract
The Dirichlet distribution as a multivariate generalization of the beta distribution is especially important for modeling categorical distributions. Hence, its applications vary within a wide range from modeling cell probabilities of contingency tables to modeling income inequalities. Thus, it is commonly used as the conjugate prior of the multinomial distribution in Bayesian statistics. In this study, the parameters of a bivariate Dirichlet distribution are estimated by entropy formalism. As an alternative to maximum likelihood and the method of moments, two methods based on the principle of maximum entropy are used, namely the ordinary entropy method and the parameter space expansion method. It is shown that in estimating the parameters of the bivariate Dirichlet distribution, the ordinary entropy method and the parameter space expansion method give the same results as the method of maximum likelihood. Thus, we emphasize that these two methods can be used alternatively in modeling bivariate and multinomial Dirichlet distributions. Full article
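
Editorial note: since the paper benchmarks the entropy-based estimators against maximum likelihood and the method of moments, a minimal moment-matching sketch for Dirichlet data may help fix ideas. The parameter values are illustrative assumptions, and this is not the authors' entropy formalism.

```python
import numpy as np

rng = np.random.default_rng(42)

alpha_true = np.array([2.0, 3.0, 5.0])    # illustrative Dirichlet parameters
X = rng.dirichlet(alpha_true, size=5000)  # simulated compositional data

m = X.mean(axis=0)                        # sample mean of each component
v1 = X[:, 0].var(ddof=1)                  # sample variance of the first component

# Moment matching: Var(X1) = m1 * (1 - m1) / (alpha0 + 1)  =>  solve for alpha0, then alpha_i = alpha0 * m_i.
alpha0_hat = m[0] * (1.0 - m[0]) / v1 - 1.0
alpha_hat = alpha0_hat * m
print("moment estimates:", np.round(alpha_hat, 2), " true:", alpha_true)
```
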
13 pages, 311 KiB  
Article
Cumulative Entropy of Past Lifetime for Coherent Systems at the System Level
by Mansour Shrahili and Mohamed Kayid
Axioms 2023, 12(9), 899; https://doi.org/10.3390/axioms12090899 - 21 Sep 2023
Abstract
This paper explores the cumulative entropy of the lifetime of an n-component coherent system, given the precondition that all system components have experienced failure at time t. This investigation utilizes the system signature to compute the cumulative entropy of the system’s lifetime, shedding light on a crucial facet of a system’s predictability. In the course of this research, we unearth a series of noteworthy discoveries. These include formulating expressions, defining bounds, and identifying orderings related to this measure. Further, we propose a technique to identify a preferred system on the basis of cumulative Kullback–Leibler discriminating information, which exhibits a strong relation with the parallel system. These findings contribute significantly to our understanding of the predictability of a coherent system’s lifetime, underscoring the importance of this field of study. The outcomes offer potential benefits for a wide range of applications where system predictability is paramount, and where the comparative evaluation of different systems on the basis of discriminating information is needed. Full article
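
Editorial note: the underlying measure can be evaluated numerically for a single lifetime distribution. The sketch below computes the cumulative (past) entropy, -∫ F(x) log F(x) dx, and checks it against the closed form 1/4 for a uniform(0, 1) lifetime; the system-signature machinery of the paper is not reproduced here.

```python
import numpy as np
from scipy import integrate, stats

def cumulative_entropy(cdf, lower, upper):
    """Cumulative (past) entropy: -integral of F(x) * log(F(x)) dx over the support."""
    def integrand(x):
        F = cdf(x)
        return -F * np.log(F) if 0.0 < F < 1.0 else 0.0
    value, _ = integrate.quad(integrand, lower, upper)
    return value

print(cumulative_entropy(stats.uniform(0, 1).cdf, 0.0, 1.0))      # ~0.25, matching the closed form 1/4
print(cumulative_entropy(stats.expon(scale=1.0).cdf, 0.0, 50.0))  # ~0.645 (= pi^2/6 - 1) for exponential(1)
```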

12 pages, 324 KiB  
Article
Bayesian Estimation of Variance-Based Information Measures and Their Application to Testing Uniformity
by Luai Al-Labadi, Mohammed Hamlili and Anna Ly
Axioms 2023, 12(9), 887; https://doi.org/10.3390/axioms12090887 - 17 Sep 2023
Abstract
Entropy and extropy are emerging concepts in machine learning and computer science. Within the past decade, statisticians have created estimators for these measures. However, associated variability metrics, specifically varentropy and varextropy, have received comparably less attention. This paper presents a novel methodology for computing varentropy and varextropy, drawing inspiration from Bayesian nonparametric methods. We implement this approach using a computational algorithm in R and demonstrate its effectiveness across various examples. Furthermore, these new estimators are applied to test uniformity in data. Full article
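
Editorial note: for orientation, plug-in versions of these variability measures are straightforward when the density is known. The sketch below estimates the varentropy Var[-log f(X)] and the varextropy Var[-f(X)/2] for a standard normal sample by Monte Carlo; the Gaussian varentropy equals 1/2, which the simulation should roughly reproduce. The paper's Bayesian nonparametric estimator is a different, data-driven construction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = stats.norm.rvs(size=200_000, random_state=rng)  # sample from a known (standard normal) density

varentropy = np.var(-stats.norm.logpdf(x))   # Var[-log f(X)]; equals 1/2 for N(0, 1)
varextropy = np.var(-stats.norm.pdf(x) / 2)  # Var[-f(X)/2], the extropy counterpart
print(varentropy, varextropy)
```
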
19 pages, 548 KiB  
Article
Extropy Based on Concomitants of Order Statistics in Farlie-Gumbel-Morgenstern Family for Random Variables Representing Past Life
by Muhammed Rasheed Irshad, Krishnakumar Archana, Amer Ibrahim Al-Omari, Radhakumari Maya and Ghadah Alomani
Axioms 2023, 12(8), 792; https://doi.org/10.3390/axioms12080792 - 16 Aug 2023
Cited by 2
Abstract
In this paper, we refine the concept of the past extropy measure for concomitants of order statistics from the Farlie-Gumbel-Morgenstern family. In addition, the cumulative past extropy measure and the dynamic cumulative past extropy measure for the concomitant of the rth order statistic are also discussed, and their properties are studied. The problem of estimating the cumulative past extropy is investigated using an empirical technique. The validity of the proposed estimator is demonstrated using a simulation study. Full article
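
Editorial note: as a reference point for the extropy-based quantities studied here, the base extropy J(X) = -(1/2) ∫ f(x)² dx of a single density can be evaluated numerically; the two cases below have simple closed forms. The concomitant and FGM-family constructions of the paper are not reproduced.

```python
from scipy import integrate, stats

# Extropy J(X) = -(1/2) * integral of f(x)^2 over the support of X.
def extropy(pdf, lower, upper):
    return -0.5 * integrate.quad(lambda x: pdf(x) ** 2, lower, upper)[0]

print(extropy(stats.uniform(0, 1).pdf, 0, 1))  # exactly -0.5 for uniform(0, 1)
print(extropy(stats.norm.pdf, -10, 10))        # -1/(4*sqrt(pi)) ~ -0.141 for N(0, 1)
```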

18 pages, 341 KiB  
Article
Stochastic Ordering Results on Implied Lifetime Distributions under a Specific Degradation Model
by Mohamed Kayid, Lolwa Alshagrawi and Mansour Shrahili
Axioms 2023, 12(8), 786; https://doi.org/10.3390/axioms12080786 - 13 Aug 2023
Cited by 1
Abstract
In this paper, a novel strategy is employed in which a degradation model affects the implied distribution of lifetimes differently compared to the traditional method. It is recognized that an existing link between the degradation measurements and failure time constructs the underlying time-to-failure model. We assume in this paper that the conditional survival function of a device under degradation is a piecewise linear function for a given level of degradation. The multiplicative degradation model is used as the underlying degradation model, which is often the case in many practical situations. It is found that the implied lifetime distribution is a classical mixture model. In this mixture model, the time to failure lies with some probabilities between two first passage times of the degradation process to reach two specified values. Stochastic comparisons in the model are investigated when the probabilities are changed. To illustrate the applicability of the results, several examples are given in cases when typical degradation models are candidates. Full article
12 pages, 318 KiB  
Article
Generalized Bayes Prediction Study Based on Joint Type-II Censoring
by Yahia Abdel-Aty, Mohamed Kayid and Ghadah Alomani
Axioms 2023, 12(7), 716; https://doi.org/10.3390/axioms12070716 - 23 Jul 2023
Cited by 1
Abstract
In this paper, the problem of predicting future failure times based on a jointly Type-II censored sample from k exponential populations is considered, and Bayesian prediction intervals and point predictors are obtained. Generalized Bayes is a Bayesian analysis based on a learning rate parameter, and this study investigates the effects of the learning rate parameter on the prediction results. Point predictors are derived under the squared error, Linex, and general entropy loss functions. Monte Carlo simulations are performed to show the effectiveness of the learning rate parameter in improving the prediction intervals and point predictors. Full article
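
Editorial note: the learning-rate idea is easy to see in a conjugate toy case, where the likelihood is raised to a power η before being combined with the prior. For exponential lifetimes with rate λ and a Gamma(a, b) prior, the tempered posterior is Gamma(a + ηn, b + ηΣx). The sketch below uses complete (uncensored) synthetic data and illustrative hyperparameters, not the joint Type-II censoring setting of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
data = rng.exponential(scale=2.0, size=25)  # synthetic exponential lifetimes (true rate 0.5)
n, S = data.size, data.sum()
a, b = 1.0, 1.0                             # Gamma(a, b) prior on the rate (rate parameterisation)

for eta in (0.25, 0.5, 1.0):                # learning-rate parameter; eta = 1 recovers ordinary Bayes
    post = stats.gamma(a + eta * n, scale=1.0 / (b + eta * S))  # tempered conjugate posterior
    print(f"eta={eta:4.2f}  mean={post.mean():.3f}  "
          f"95% CrI=({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
```
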
24 pages, 5231 KiB  
Article
Optimal Decision for Repairable Products Sale and Warranty under Two-Dimensional Deterioration with Consideration of Production Capacity and Customers’ Heterogeneity
by Ming-Nan Chen and Chih-Chiang Fang
Axioms 2023, 12(7), 701; https://doi.org/10.3390/axioms12070701 - 19 Jul 2023
Abstract
An effective warranty policy is not only an obligation for the manufacturer or vendor, but it also enhances the willingness of customers to purchase from them in the future. To earn more customers and increase sales, manufacturers or vendors are inclined to prolong the service life of their products. Nevertheless, manufacturers or vendors will not provide a boundless warranty in order to dominate the market, since the related warranty costs will eventually exceed the profits. Therefore, it is a question of weighing the advantage of extending the warranty term in order to earn the trust of new customers against the investment. In addition, since deterioration depends on both time and usage, the deterioration estimation for durable products may be incorrect when considering only one factor. For such problems, a two-dimensional deterioration model is suitable, and the failure times are drawn from a non-homogeneous Poisson process (NHPP). Moreover, customers’ heterogeneity, manufacturers’ production capacity, and preventive maintenance services are also considered in this study. A mathematical model with the corresponding solution algorithm is proposed to assist manufacturers in making systematic decisions about pricing, production, and warranty. Finally, managerial implications are also provided for refining related decision-making. Full article
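
Editorial note: because failures in the model follow an NHPP, the expected number of warranty repairs is the integral of the intensity over the warranty period. A one-dimensional sketch with a power-law intensity λ(t) = αβt^(β-1) is shown below; the parameter values are illustrative assumptions, and the paper's model is two-dimensional in time and usage.

```python
from scipy import integrate

alpha, beta = 0.8, 1.4                                   # illustrative power-law intensity parameters
intensity = lambda t: alpha * beta * t ** (beta - 1.0)   # lambda(t) of the NHPP

w = 2.0                                                  # warranty length (e.g., in years)
expected_failures, _ = integrate.quad(intensity, 0.0, w)
print(expected_failures, alpha * w ** beta)              # numerical and closed-form E[N(w)] agree
```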

26 pages, 21096 KiB  
Article
Reliability Analysis and Applications of Generalized Type-II Progressively Hybrid Maxwell–Boltzmann Censored Data
by Ahmed Elshahhat, Osama E. Abo-Kasem and Heba S. Mohammed
Axioms 2023, 12(7), 618; https://doi.org/10.3390/axioms12070618 - 21 Jun 2023
Cited by 2
Abstract
Today, the reliability or quality practitioner always aims to shorten testing duration and reduce testing costs without neglecting efficient statistical inference. So, a generalized progressively Type-II hybrid censored mechanism has been developed in which the experimenter prepays for usage of the testing facility for T units of time. This paper investigates the issue of estimating the model parameter, reliability, and hazard rate functions of the Maxwell–Boltzmann distribution in the presence of generalized progressive Type-II hybrid censored data by making use of the likelihood and Bayesian inferential methods. Using an inverse gamma prior distribution, the Bayes estimators of the same unknown parameters with respect to the most commonly squared-error loss are derived. Since the joint likelihood function is produced in complex form, following the Monte-Carlo Markov-chain idea, the Bayes’ point estimators as well as the Bayes credible and highest posterior density intervals cannot be derived analytically, but they may be examined numerically. Via the normal approximation of the acquired maximum likelihood and log-maximum-likelihood estimators, the approximate confidence interval bounds of the unknown quantities are derived. Via comprehensive numerical comparisons, with regard to simulated root mean squared-error, mean relative absolute bias, average confidence length, and coverage probability, the actual behavior of the proposed estimation methodologies is examined. To illustrate how the offered methodologies may be used in real circumstances, two different applications, representing the failure time points of aircraft windscreens as well as the daily average wind speed in Cairo during 2009, are explored. Numerical evaluations recommend utilizing a Bayes model via the Metropolis-Hastings technique to produce samples from the posterior distribution to estimate any parameter of the Maxwell–Boltzmann distribution when collecting data from a generalized progressively Type-II hybrid censored mechanism. Full article
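
Editorial note: to give a flavour of the sampling step, the sketch below runs a random-walk Metropolis-Hastings chain for the scale parameter of the Maxwell-Boltzmann distribution under a weak inverse-gamma prior, using a complete (uncensored) synthetic sample. The generalized progressive Type-II hybrid censored likelihood of the paper would replace the complete-data likelihood, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = stats.maxwell.rvs(scale=1.5, size=60, random_state=rng)  # complete synthetic sample, true scale 1.5

def log_post(scale):
    """Log posterior: Maxwell-Boltzmann likelihood plus a weak inverse-gamma log-prior on the scale."""
    if scale <= 0:
        return -np.inf
    return stats.maxwell.logpdf(data, scale=scale).sum() + stats.invgamma.logpdf(scale, a=2.0, scale=2.0)

draws, current, step = [], 1.0, 0.2
for _ in range(5000):
    proposal = current + step * rng.normal()   # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(proposal) - log_post(current):
        current = proposal
    draws.append(current)

posterior = np.array(draws[1000:])             # discard burn-in
print("posterior mean of the Maxwell-Boltzmann scale:", round(posterior.mean(), 3))
```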

Review

13 pages, 2800 KiB  
Review
Modeling Bland–Altman Limits of Agreement with Fractional Polynomials—An Example with the Agatston Score for Coronary Calcification
by Oke Gerke and Sören Möller
Axioms 2023, 12(9), 884; https://doi.org/10.3390/axioms12090884 - 15 Sep 2023
Abstract
Bland–Altman limits of agreement are very popular in method comparison studies on quantitative outcomes. However, a straightforward application of Bland–Altman analysis requires roughly normally distributed differences, a constant bias, and variance homogeneity across the measurement range. If one or more assumptions are violated, a variance-stabilizing transformation (e.g., natural logarithm, square root) may be sufficient before Bland–Altman analysis can be performed. Sometimes, fractional polynomial regression has been used when the choice of variance-stabilizing transformation was unclear and increasing variability in the differences was observed with increasing mean values. In this case, regressing the absolute differences on a function of the average and applying fractional polynomial regression to this end were previously proposed. This review revisits a previous inter-rater agreement analysis on the Agatston score for coronary calcification. We show the inappropriateness of a straightforward Bland–Altman analysis and briefly describe the nonparametric limits of agreement of the original investigation. We demonstrate the application of fractional polynomials, use the Stata packages fp and fp_select, and discuss the use of degree-2 (the default setting) and degree-3 fractional polynomials. Finally, we discuss conditions for evaluating the appropriateness of nonstandard limits of agreement. Full article
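
Editorial note: for readers new to the method, the classical limits of agreement and the regression-of-absolute-differences device discussed in the review can be sketched on synthetic data as follows; the fractional-polynomial step itself is done in the paper with Stata's fp and fp_select, and the data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

truth = rng.uniform(10, 400, size=120)          # underlying quantity (synthetic)
rater1 = truth + rng.normal(0.0, 0.05 * truth)  # measurement error growing with the mean
rater2 = truth + rng.normal(0.0, 0.05 * truth)

mean = (rater1 + rater2) / 2
diff = rater1 - rater2

# Classical Bland-Altman limits of agreement (assume roughly normal, homoscedastic differences).
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print("bias:", round(bias, 2), " classical LoA:", (round(bias - half_width, 2), round(bias + half_width, 2)))

# With variability increasing over the measurement range, regress |difference| on the mean;
# 1.96 * sqrt(pi / 2) times the fitted |difference| gives mean-dependent limits around the bias.
slope, intercept = np.polyfit(mean, np.abs(diff), 1)
fitted_abs_diff = intercept + slope * mean
mean_dependent_half_width = 1.96 * np.sqrt(np.pi / 2) * fitted_abs_diff
print("half-width at the median mean value:", round(float(np.median(mean_dependent_half_width)), 2))
```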