Article

Reliability Sensitivity Analysis by the Axis Orthogonal Importance Sampling Method Based on the Box-Muller Transformation

Wei Zhao, Yeting Wu, Yangyang Chen and Yanjun Ou
1 MOE Key Laboratory of Disaster Forecast and Control in Engineering, School of Mechanics and Construction Engineering, Jinan University, Guangzhou 510632, China
2 Earthquake Engineering Research and Test Center, Guangzhou University, Guangzhou 510405, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9860; https://doi.org/10.3390/app12199860
Submission received: 13 August 2022 / Revised: 25 September 2022 / Accepted: 27 September 2022 / Published: 30 September 2022
(This article belongs to the Section Civil Engineering)

Abstract

The axis orthogonal importance sampling method is an efficient variant of importance sampling because quasi-Monte Carlo simulation is its basic ingredient, and it is now common practice to transform low-discrepancy sequences from the uniform distribution to the normal distribution by the well-known inverse transformation. As a valid transformation method for low-discrepancy sequences, the Box-Muller transformation is introduced into the axis orthogonal importance sampling method in this paper and compared with the inverse transformation for structural reliability sensitivity analysis. Three representative low-discrepancy quasi-random sequences are used to generate samples following the target distribution and to explore their interaction with the transformation method; the resulting samples serve as the sampling plan along the tangent plane at the most probable failure point in the axis orthogonal importance sampling for structural reliability analysis and reliability sensitivity analysis. The numerical experiments show that the Box-Muller transformation is a good alternative to the inverse transformation for mapping low-discrepancy sequences to the normal distribution in reliability sensitivity analysis. In particular, the scheme of the Box-Muller transformation combined with the Sobol sequence needs fewer samples for higher accuracy and is more applicable to solving reliability sensitivity analysis in various nonlinear problems.

1. Introduction

Uncertainties in engineering structure problems are often associated with material properties, loads and manufacturing errors. Reliability analysis and its sensitivity analysis play an important role in the design of engineering structures: reliability analysis provides the probability of failure, and sensitivity analysis identifies the contributions of the random inputs to that probability. Compared with traditional analysis methods, structural reliability analysis is helpful in improving the safety of a structure. In particular, the sensitivity analysis of structural reliability is of great value in identifying the factors that have an important influence on structural safety and in improving them [1]. Various reliability and sensitivity analysis methods have been proposed, including the first-order reliability method [2,3,4,5,6,7,8], the moment method [9,10,11,12], the importance sampling method [13,14,15,16,17,18,19,20], etc. In some circumstances, however, highly nonlinear performance functions, especially those involving non-normal random variables or extremely small failure probabilities, may result in a poor estimation of reliability and its sensitivity. The importance sampling method, as an alternative for reliability and sensitivity analysis with lower computational cost and good accuracy, has drawn much attention from scholars for a long time. For instance, an algorithm using a modified importance sampling method with a continuously varying standard deviation of the sampling density function was developed to locate the design point [21]. The von Mises-Fisher mixture model was selected as the sampling density to generate direction vectors on the unit hypersphere, improving the computational efficiency over the original directional sampling and other cross-entropy-based importance sampling methods [22]. The relevance vector machine surrogate model was applied in importance sampling methods to decrease the number of expensive model simulations [23,24]. The sensitivity of the failure probability estimator to uncertainties was quantified by a Gaussian process and the sampling strategy [25]. An artificial neural network based second-order reliability method and importance sampling were combined and applied to failure probability and sensitivity estimates of variable stiffness composite laminate plates [26]. The active Kriging-Monte Carlo simulation and adaptive linked importance sampling were employed to estimate a small failure probability and the corresponding reliability sensitivity indices efficiently [27]. The importance sampling function has a significant effect on the accuracy of the results. The regions that contribute most to the failure probability and to its sensitivity often do not overlap in nonlinear problems [28]; in other words, samples generated by the same importance sampling function may not simultaneously yield high accuracy in both reliability and sensitivity analysis.
The importance sampling function is often established around the most probable failure point obtained by the first-order reliability method (FORM). The quasi-Monte Carlo method significantly reduces the sample size, provides low-discrepancy samples, and improves the accuracy and stability of structural reliability analysis [29]. The axis orthogonal importance sampling method samples on the tangent plane at the most probable failure point of the limit state surface, so that the reliability analysis becomes a simple problem of solving the distances from the sample points to a straight line. As a consequence, it reduces the sampling dimension by one and improves the sampling efficiency, which suggests increasingly wide application in structural reliability analysis [17,30].
There exist many quasi-Monte Carlo sampling methods for generating a uniform sequence, including Latin hypercube sampling, the Halton sequence, the Sobol sequence and other quasi-random sequences. Algorithms using the Sobol sequence may show superior convergence in tasks such as the approximation of integrals in higher dimensions and global optimization, since its sample space is more uniformly distributed [31]; further applications include the seismic reliability assessment of power systems [32], the reliability assessment of structural dynamic systems [33], global sensitivity analysis [34], etc. The inverse transformation method is widely employed to convert uniform sequences into random sequences following the normal distribution in structural reliability analyses [2,4,5,14,15]. The Box-Muller transformation turns out to be a valid method for low-discrepancy sequences [35] and is a good alternative to the inverse transformation; unfortunately, it is rarely reported in the literature on reliability analysis and reliability sensitivity analysis. For this purpose, the Box-Muller transformation is introduced into the axis orthogonal importance sampling method in this paper and combined with different quasi-random sequences. In other words, the combinations of the abovementioned quasi-random sequences and transformation methods are respectively applied in the axis orthogonal importance sampling method, and their effects on the accuracy of reliability and its sensitivity are thoroughly explored. Several numerical examples are compared, and the combination of quasi-random sequence and transformation method that achieves the best accuracy with the smallest sample size is identified.
The remainder of this paper is organized as follows. Section 2 gives the basic theory of the axis orthogonal importance sampling method. Section 3 presents the equations for the structural failure probability and its sensitivity. Section 4 reviews the basic theory of the quasi-Monte Carlo method. Section 5 presents the reliability sensitivity analysis using the axis orthogonal importance sampling method based on the Box-Muller transformation. Several numerical examples are given in Section 6. Section 7 concludes with a summary of the present procedure.

2. Axis Orthogonal Importance Sampling Method

As is well known, the importance sampling function has a significant impact on the accuracy of the estimated probability of structural failure in the importance sampling method. The probability of structural failure is calculated as:
$$p_f = \int_{-\infty}^{+\infty} I[g_X(\boldsymbol{v})]\,\frac{f_X(\boldsymbol{v})}{p_V(\boldsymbol{v})}\,p_V(\boldsymbol{v})\,\mathrm{d}\boldsymbol{v} = E\left\{ I[g_X(\boldsymbol{v})]\,\frac{f_X(\boldsymbol{v})}{p_V(\boldsymbol{v})} \right\}$$ (1)
where $p_V(\boldsymbol{v})$ and $g_X(\boldsymbol{v})$ are the sampling probability density function (PDF) and the performance function of the structure at $\boldsymbol{v}$ in the standard normal space $V$, $I(\cdot)$ is the indicator function, and $f_X(\boldsymbol{v})$ is the joint PDF in the normal space $V$ transformed from the original space $X$. The importance sampling PDF is commonly defined based on the most probable failure point (MPFP), which is obtained by the first-order reliability method. The unbiased estimate $\hat{p}_f$ of the probability of structural failure $p_f$ is obtained as follows from the $N$ samples $\boldsymbol{v}_i = (v_{i1}, v_{i2}, \dots, v_{in})^T$, $i = 1, 2, \dots, N$, generated from the importance sampling PDF:
$$\hat{p}_f = \frac{1}{N}\sum_{i=1}^{N} I[g_X(\boldsymbol{v}_i)]\,\frac{f_X(\boldsymbol{v}_i)}{p_V(\boldsymbol{v}_i)}$$ (2)
where $N$ is the sample size and $n$ is the number of random variables.
The variance of $\hat{p}_f$ is calculated by:
$$\sigma_{\hat{p}_f}^2 = \frac{1}{N}\,E\left\{ I[g_X(\boldsymbol{v})]\,\frac{f_X^2(\boldsymbol{v})}{p_V^2(\boldsymbol{v})} \right\} - \frac{1}{N}\hat{p}_f^2 = \frac{1}{N}\int_{-\infty}^{+\infty} I[g_X(\boldsymbol{v})]\,\frac{f_X^2(\boldsymbol{v})}{p_V(\boldsymbol{v})}\,\mathrm{d}\boldsymbol{v} - \frac{1}{N}\hat{p}_f^2$$ (3)
A sampling plan of the axis orthogonal importance sampling in the transformed normal space is established along the approximate tangent hyperplane, as shown in Figure 1, where $g_X(\boldsymbol{v}) = 0$ is the limit state surface in the standard normal space and $\beta$ is the reliability index. The axis orthogonal importance sampling method takes the MPFP as the sampling center and samples in the tangent plane passing through the MPFP. The $N$ samples in the local coordinate system of the tangent plane form an $N \times (n-1)$ matrix $\boldsymbol{S}$; sampling is thus carried out in only $n-1$ dimensions, which makes the axis orthogonal importance sampling method more efficient. These samples are projected onto the limit state surface along the direction $\boldsymbol{m}$ normal to the tangent plane at the MPFP. The calculation of the reliability sensitivity is then equivalent to summing the reliability sensitivities of a group of performance functions defining hyperplanes (parallel lines (PL), $PL_i$, $i = 1, \dots, N$) parallel to the tangent plane of the limit state surface and passing through the abovementioned projection points on the limit state surface. Suppose $\boldsymbol{S}_i$ is the $i$-th realization of $\boldsymbol{S}$ in the local coordinate system; the coordinates of the projection point in the local coordinate system $v_1'O'v_2'$ are $[\,b_i\ \ \boldsymbol{S}_i\,]$. In order to determine $b_i$, the projection point $[\,b_i\ \ \boldsymbol{S}_i\,]$ is first transformed to $\boldsymbol{S}_i^*$ in the global coordinate system $v_1Ov_2$ by a coordinate transformation matrix $\boldsymbol{A}$, and then substituted into $g(\boldsymbol{v}) = 0$ to solve the resulting equation.
The MPFP is denoted by $\boldsymbol{m}$, and the unit vector of $\boldsymbol{m}$, $\boldsymbol{a}$, reads as [17]:
$$\boldsymbol{a} = \frac{\boldsymbol{m}}{\|\boldsymbol{m}\|}$$ (4)
The transformation matrix $\boldsymbol{A}$ between the local and global coordinate systems can be defined as:
$$\boldsymbol{A} = \left(\boldsymbol{a},\ \boldsymbol{a}_2,\ \dots,\ \boldsymbol{a}_n\right)$$ (5)
Except for the first column $\boldsymbol{a}$, the columns $\boldsymbol{a}_2, \dots, \boldsymbol{a}_n$ of $\boldsymbol{A}$ are all orthogonal to $\boldsymbol{a}$, with the property $\boldsymbol{A}^T\boldsymbol{A} = \boldsymbol{I}$, where $\boldsymbol{I}$ is the identity matrix; thus $\boldsymbol{A}$ can be obtained by the Gram-Schmidt or another orthogonalization method. The global coordinates $\boldsymbol{S}_i^*$ of the projection point with local coordinates $[\,b_i\ \ \boldsymbol{S}_i\,]$ are given as follows [17]:
$$\boldsymbol{S}_i^* = [\,b_i\ \ \boldsymbol{S}_i\,]\,\boldsymbol{A}$$ (6)
$b_i$ is computed by finding the root of [17]:
$$g(\boldsymbol{S}_i^*) = g\left([\,b_i\ \ \boldsymbol{S}_i\,]\,\boldsymbol{A}\right) = 0$$ (7)
where $b_i$ is the reliability index obtained when the $i$-th parallel hyperplane $PL_i$, $i = 1, \dots, N$, is taken as the limit state surface. The unbiased estimate $\hat{p}_f$ of the failure probability is the average of the failure probabilities of these limit state surfaces, calculated as [17]:
$$\hat{p}_f = \frac{1}{N}\sum_{i=1}^{N}\Phi_{(0,1)}(-b_i)$$ (8)
and the standard deviation is
$$\sigma(\hat{p}_f) = \sqrt{\frac{1}{N-1}\left(\frac{1}{N}\sum_{i=1}^{N}\Phi_{(0,1)}^2(-b_i) - \hat{p}_f^2\right)}$$ (9)
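To make the estimation step concrete, the following minimal Python sketch implements Equations (7)-(9) under stated assumptions: a performance function g already expressed in the standard normal space, an orthonormal matrix A whose first column is the unit MPFP vector a (so that a point with local coordinates [b_i, S_i] is mapped to the global space as A times the local column vector, the column-vector counterpart of Equation (6)), and an N x (n-1) array S of tangent-plane samples. The function names, the root bracket and the SciPy root finder are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def aois_failure_probability(g, A, S, b_lo=0.0, b_hi=10.0):
    """Axis orthogonal importance sampling estimate of p_f (Equations (7)-(9)).

    g : performance function in the standard normal space, g(v) -> float
    A : n x n orthonormal matrix whose first column is the unit MPFP vector a
    S : N x (n-1) array of samples in the tangent-plane (local) coordinates
    """
    N = S.shape[0]
    b = np.empty(N)
    for i, s in enumerate(S):
        # Equation (7): substitute the global point A @ [b, S_i] into g(v) = 0
        # and solve for b; the bracket [b_lo, b_hi] is assumed to contain the root.
        b[i] = brentq(lambda bi: g(A @ np.concatenate(([bi], s))), b_lo, b_hi)
    phi = norm.cdf(-b)                                      # failure probability of each hyperplane
    p_f = phi.mean()                                        # Equation (8)
    sigma = np.sqrt((np.mean(phi**2) - p_f**2) / (N - 1))   # Equation (9)
    return p_f, sigma, b
```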

3. Reliability Sensitivity Calculation

The expression of the $i$-th hyperplane passing through the projection point $\boldsymbol{S}_i^*$ and parallel to the tangent plane of the limit state surface at the MPFP can be written as follows [28]:
$$g_i(\boldsymbol{v}) = \boldsymbol{a}\left(\boldsymbol{S}_i^* - \boldsymbol{v}\right) = \sum_{j=1}^{n} a_j\left(S_{ij}^* - v_j\right) = 0$$ (10)
The reliability index can be easily evaluated as [28]:
$$b_i = \frac{\mu_{g_i(\boldsymbol{v})}}{\sigma_{g_i(\boldsymbol{v})}} = \frac{\sum_{j=1}^{n} a_j\left(S_{ij}^* - \mu_{v_j}\right)}{\left(\sum_{j=1}^{n} a_j^2\,\sigma_{v_j}^2\right)^{1/2}}$$ (11)
where $\mu_{v_j}$ and $\sigma_{v_j}$ are the mean and standard deviation of the standard normal variable $v_j$, respectively. Considering that $\sum_{j=1}^{n} a_j^2\,\sigma_{v_j}^2 = 1$ and $\partial\Phi_{(0,1)}(b_i)/\partial b_i = e^{-b_i^2/2}/\sqrt{2\pi}$, the reliability sensitivity of $\hat{p}_f$ with respect to the distribution parameter $\theta_{v_j}$, here the mean $\mu_{v_j}$, can be calculated as [28]:
$$\frac{\partial\hat{p}_f}{\partial\mu_{v_j}} = \frac{1}{N\sqrt{2\pi}}\sum_{i=1}^{N} e^{-b_i^2/2}\,a_j$$ (12)
The corresponding standard deviations are [28]:
$$\sigma\!\left(\frac{\partial\hat{p}_f}{\partial\mu_{v_j}}\right) = \sqrt{\frac{1}{N-1}\left[E\!\left(\frac{\partial\hat{p}_f}{\partial\mu_{v_j}}\right)^{2} - \left(E\!\left(\frac{\partial\hat{p}_f}{\partial\mu_{v_j}}\right)\right)^{2}\right]} = \sqrt{\frac{1}{N-1}\left\{\frac{1}{N}\sum_{i=1}^{N}\left[\frac{a_j}{\sqrt{2\pi}}\,e^{-b_i^{2}/2}\right]^{2} - \left(\frac{\partial\hat{p}_f}{\partial\mu_{v_j}}\right)^{2}\right\}}$$ (13)
$$\sigma\!\left(\frac{\partial\hat{p}_f}{\partial\sigma_{v_j}}\right) = \sqrt{\frac{1}{N-1}\left[E\!\left(\frac{\partial\hat{p}_f}{\partial\sigma_{v_j}}\right)^{2} - \left(E\!\left(\frac{\partial\hat{p}_f}{\partial\sigma_{v_j}}\right)\right)^{2}\right]} = \sqrt{\frac{1}{N-1}\left\{\frac{1}{N}\sum_{i=1}^{N}\left[\frac{a_j^{2}}{\sqrt{2\pi}}\,b_i\,e^{-b_i^{2}/2}\right]^{2} - \left(\frac{\partial\hat{p}_f}{\partial\sigma_{v_j}}\right)^{2}\right\}}$$ (14)
respectively.
If the random variables are all normally distributed, they can be transformed into the standard normal space by [28]:
$$v_j = \frac{X_j - \mu_{X_j}}{\sigma_{X_j}}$$ (15)
Reliability sensitivities in the original space X are thus written as [28]:
$$\frac{\partial\hat{p}_f}{\partial\mu_{X_j}} = \frac{\partial\hat{p}_f}{\partial\mu_{v_j}}\cdot\frac{\partial\mu_{v_j}}{\partial\mu_{X_j}} = \frac{1}{\sigma_j}\,\frac{1}{N\sqrt{2\pi}}\sum_{i=1}^{N} e^{-b_i^2/2}\,a_j$$ (16)
$$\frac{\partial\hat{p}_f}{\partial\sigma_{X_j}} = \frac{\partial\hat{p}_f}{\partial\sigma_{v_j}}\cdot\frac{\partial\sigma_{v_j}}{\partial\sigma_{X_j}} = \frac{1}{\sigma_j}\,\frac{1}{N\sqrt{2\pi}}\sum_{i=1}^{N} e^{-b_i^2/2}\,a_j^2\,b_i$$ (17)
together with the corresponding standard deviations [28]:
$$\sigma\!\left(\frac{\partial\hat{p}_f}{\partial\mu_{X_j}}\right) = \frac{1}{\sigma_j}\sqrt{\frac{1}{N-1}\left\{\frac{1}{N}\sum_{i=1}^{N}\left[\frac{a_j}{\sqrt{2\pi}}\,e^{-b_i^2/2}\right]^2 - \left(\frac{\partial\hat{p}_f}{\partial\mu_{v_j}}\right)^2\right\}}$$ (18)
$$\sigma\!\left(\frac{\partial\hat{p}_f}{\partial\sigma_{X_j}}\right) = \frac{1}{\sigma_j}\sqrt{\frac{1}{N-1}\left\{\frac{1}{N}\sum_{i=1}^{N}\left[\frac{a_j^2}{\sqrt{2\pi}}\,b_i\,e^{-b_i^2/2}\right]^2 - \left(\frac{\partial\hat{p}_f}{\partial\sigma_{v_j}}\right)^2\right\}}$$ (19)
For problems involving non-normal random variables, the transformation between the original space and the standard normal space is nonlinear, and $\partial\mu_{v_j}/\partial\mu_{X_j}$ is not easy to compute. Moreover, the region that has the greatest influence on the accuracy of the reliability sensitivity often does not overlap with the region that contributes most to the failure probability, which means that different sampling PDFs would have to be chosen to compute the failure probability and the reliability sensitivity accurately. To get around this, the reliability sensitivity can be calculated by the finite difference method, that is [28]:
$$\frac{\partial\hat{p}_f}{\partial\theta} \approx \frac{\hat{p}_f|_{\theta+\Delta\theta} - \hat{p}_f|_{\theta}}{\Delta\theta} = \frac{1}{N\,\Delta\theta}\sum_{i=1}^{N}\left[\Phi_{(0,1)}\!\left(-b_i|_{\theta+\Delta\theta}\right) - \Phi_{(0,1)}\!\left(-b_i|_{\theta}\right)\right]$$ (20)
where $\theta$ is the mean or standard deviation of a random variable, and the corresponding standard deviation of the estimate is obtained as [28]:
$$\sigma\!\left(\frac{\partial\hat{p}_f}{\partial\theta}\right) \approx \frac{\sigma\!\left(\hat{p}_f|_{\theta+\Delta\theta} - \hat{p}_f|_{\theta}\right)}{\Delta\theta} = \sqrt{\frac{E\!\left[\left(\hat{p}_f|_{\theta+\Delta\theta} - \hat{p}_f|_{\theta}\right)^2\right] - \left(E\!\left[\hat{p}_f|_{\theta+\Delta\theta} - \hat{p}_f|_{\theta}\right]\right)^2}{(N-1)(\Delta\theta)^2}} = \frac{1}{\Delta\theta}\sqrt{\frac{1}{N-1}\left\{\frac{1}{N}\sum_{i=1}^{N}\left[\Phi_{(0,1)}\!\left(-b_i|_{\theta+\Delta\theta}\right) - \Phi_{(0,1)}\!\left(-b_i|_{\theta}\right)\right]^2 - \left(\hat{p}_f|_{\theta+\Delta\theta} - \hat{p}_f|_{\theta}\right)^2\right\}}$$ (21)
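The finite-difference estimator of Equations (20) and (21) only needs the hyperplane reliability indices evaluated under the nominal and the perturbed parameter value. A hedged sketch is given below; b_of_theta is a hypothetical helper (not from the paper) that re-runs the projection of Equation (7) for a given value of the distribution parameter and returns the N values of b_i.

```python
import numpy as np
from scipy.stats import norm

def fd_reliability_sensitivity(b_of_theta, theta, delta):
    """Finite-difference sensitivity of p_f with respect to a distribution parameter theta."""
    b0 = np.asarray(b_of_theta(theta))            # b_i evaluated at theta
    b1 = np.asarray(b_of_theta(theta + delta))    # b_i evaluated at theta + delta
    N = b0.size
    d = norm.cdf(-b1) - norm.cdf(-b0)             # per-sample increment of Phi(-b_i)
    dp = d.mean() / delta                                                  # Equation (20)
    sigma = np.sqrt((np.mean(d**2) - d.mean()**2) / (N - 1)) / abs(delta)  # Equation (21)
    return dp, sigma
```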

4. Quasi-Monte Carlo Method

The quasi-Monte Carlo method overcomes shortcomings of the Monte Carlo approximation by offering deterministic and smaller error bounds: it produces deterministic, uniformly distributed samples with low discrepancy that fill the space as fully as possible via Latin hypercube sampling and quasi-random sequence sampling (including the Halton sequence, Sobol sequence, Hammersley sequence, Faure sequence, etc.) [36,37,38,39]. These deterministic samples imply a deterministic error bound, with the result that the quasi-Monte Carlo method in general has a faster convergence rate than the Monte Carlo method. Thus, the quasi-Monte Carlo method can greatly improve the computational efficiency of the Monte Carlo method as well as that of structural reliability analysis [28,40].

4.1. Latin Hypercube Sampling (LHS)

LHS, proposed by McKay et al. [41], is popularly employed for its efficient stratification properties and relatively low computational cost, while preserving the desirable probabilistic features of simple random sampling with a relatively small sample size [28].
A basic sampling plan generated from LHS, denoted as an N × n matrix H , can be written as
$$\boldsymbol{H} = \frac{1}{N}\left(\boldsymbol{P} - \boldsymbol{R}\right)$$ (22)
where $\boldsymbol{P}$ is an $N \times n$ matrix in which each column is a random permutation of the integers $1$ to $N$, and $\boldsymbol{R}$ is an $N \times n$ matrix of independent random numbers uniformly distributed in $[0, 1]$. Each row of $\boldsymbol{H}$ is a realization of a random sample.
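Equation (22) translates directly into a few lines of NumPy; the sketch below is a plain implementation of the basic LHS plan (names are illustrative).

```python
import numpy as np

def latin_hypercube(N, n, seed=None):
    """Basic LHS plan H = (P - R) / N of Equation (22); rows are samples in [0, 1)^n."""
    rng = np.random.default_rng(seed)
    P = np.column_stack([rng.permutation(N) + 1 for _ in range(n)])  # permutations of 1..N
    R = rng.random((N, n))                                           # uniform on [0, 1)
    return (P - R) / N

# Example: a 100 x 2 plan, one sample point per row
# H = latin_hypercube(100, 2, seed=1)
```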

4.2. Quasi-Random Sequence

In many cases, using quasi-random sequences instead of pseudo-random numbers improves the performance of Monte Carlo simulations, requiring less computation for higher accuracy. Quasi-random sequences are deterministic, uniformly distributed sequences with low discrepancy. Popular quasi-random sequences such as the Sobol sequence, Halton sequence, Hammersley sequence and Faure sequence are defined through a radical inverse operation, which can be described as:
$$k = \sum_{l=0}^{M-1} a_l(k)\,b^{l}$$ (23)
$$\Phi_{b,\boldsymbol{C}}(k) = \left(b^{-1},\ b^{-2},\ \dots,\ b^{-M}\right)\left[\boldsymbol{C}\left(a_0(k),\ a_1(k),\ \dots,\ a_{M-1}(k)\right)^T\right]$$ (24)
The integer $k$ is first represented by its digit sequence $\left(a_0(k), a_1(k), \dots, a_{M-1}(k)\right)^T$ in base $b$ (binary when $b = 2$), which is multiplied by a generator matrix $\boldsymbol{C}$ to obtain a new digit vector; this vector is then mirrored about the decimal point to give a number in base $b$. If $\boldsymbol{C}$ is the identity matrix, the Van der Corput sequence is obtained:
$$\Phi_b(k) = \left(b^{-1},\ b^{-2},\ \dots,\ b^{-M}\right)\left(a_0(k),\ a_1(k),\ \dots,\ a_{M-1}(k)\right)^T$$ (25)
The Sobol sequence is defined in base 2 ($b = 2$) with a different generator matrix $\boldsymbol{C}$ in each dimension of the radical inverse operation; it can be implemented directly by bitwise operations with good efficiency.
The Halton sequence is generated from Van der Corput sequences in different bases $b_1, b_2, \dots, b_n$, where $b_1, b_2, \dots, b_n$ are distinct prime numbers; its $k$-th point can be written as follows:
$$\boldsymbol{V}_k = \left(\Phi_{b_1}(k),\ \Phi_{b_2}(k),\ \dots,\ \Phi_{b_n}(k)\right)$$ (26)
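The radical inverse and the Halton construction can be written compactly as below; this is a plain-Python sketch of Equations (25) and (26) for illustration only (in practice, library generators such as those in scipy.stats.qmc would normally be used).

```python
def radical_inverse(k, b):
    """Van der Corput radical inverse Phi_b(k): mirror the base-b digits of k (Equation (25))."""
    inv, f = 0.0, 1.0 / b
    while k > 0:
        k, digit = divmod(k, b)
        inv += digit * f
        f /= b
    return inv

def halton_point(k, bases=(2, 3)):
    """k-th Halton point built from Van der Corput sequences in coprime bases (Equation (26))."""
    return tuple(radical_inverse(k, b) for b in bases)

# Example: the first five two-dimensional Halton points
# pts = [halton_point(k) for k in range(1, 6)]
```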
For illustration, Figure 2 shows two-dimensional scatter plots with the sample size $N = 100$ for LHS and for the Halton, Sobol and random sequences. The quasi-random sequences are more uniformly distributed than the random sequence; they are designed to have good uniformity over the integration domain and thereby have better theoretical distribution properties than random sequences [40].

5. Quasi-Random Sequence and Box-Muller Transformation for Reliability Sensitivity Analysis Based on the Axis Orthogonal Importance Sampling Method

5.1. Random Sequence following a Target Distribution

Since the normal distribution occurs frequently in structural reliability analysis and its sensitivity analysis, one often needs a method to transform quasi-random sequences from the uniform distribution to the normal distribution. Two well-known methods for this task are the Box-Muller transformation and the inverse transformation. The Box-Muller transformation is often regarded as slightly inferior and is rarely applied in structural reliability analysis, mainly because of two perceived imperfections: one is that it might provide extremely poor coverage of the space, and the other is that it requires the computation of trigonometric and logarithmic functions. In fact, this view is only partly justified: the Box-Muller transformation is a valid method in the context of low-discrepancy sequences and has even been shown to perform better than the inverse transformation when applied to the normal distribution [35].

5.1.1. Inverse Transformation

Suppose $u$ is a random number following the uniform distribution in $[0, 1]$. If $u = F_X(x)$ and the inverse function $F_X^{-1}(u)$ exists, then $x = F_X^{-1}(u)$ is a random number following the target distribution $F_X(\cdot)$. If $X$ follows the standard normal distribution, $x = \Phi_{(0,1)}^{-1}(u)$.
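As a brief illustration (assuming SciPy's quasi-Monte Carlo module scipy.stats.qmc is available), the inverse transformation amounts to applying the inverse standard normal CDF element-wise to a uniform low-discrepancy sample:

```python
from scipy.stats import norm, qmc

U = qmc.Halton(d=2, scramble=True, seed=0).random(8)  # uniform low-discrepancy points in [0, 1)^2
V = norm.ppf(U)                                       # x = Phi^{-1}(u): standard normal points
```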

5.1.2. Box-Muller Transformation

Let $\boldsymbol{u} \in [0, 1]^d$, where $u^{(j)}$ is the $j$-th component of $\boldsymbol{u}$ and $d$ is the dimension of $\boldsymbol{u}$. The random sequence of standard normal variables $\boldsymbol{v} = (v^{(1)}, \dots, v^{(d)})^T$ can be obtained by the Box-Muller transformation, described by Equations (27) and (28), where $j = 1, \dots, d/2$:
$$v^{(2j-1)} = \sqrt{-2\ln u^{(2j-1)}}\,\cos\!\left(2\pi u^{(2j)}\right)$$ (27)
$$v^{(2j)} = \sqrt{-2\ln u^{(2j-1)}}\,\sin\!\left(2\pi u^{(2j)}\right)$$ (28)
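A minimal NumPy sketch of Equations (27) and (28) is shown below: consecutive column pairs of a uniform matrix U (with an even number of columns and entries in the open interval (0, 1), since u = 0 must be excluded before taking the logarithm) are mapped to standard normal columns. The function name is illustrative.

```python
import numpy as np

def box_muller(U):
    """Map an N x d uniform array (d even, entries in (0, 1)) to standard normal samples."""
    U = np.asarray(U, dtype=float)
    r = np.sqrt(-2.0 * np.log(U[:, 0::2]))   # radius from u^(2j-1)
    t = 2.0 * np.pi * U[:, 1::2]             # angle from u^(2j)
    V = np.empty_like(U)
    V[:, 0::2] = r * np.cos(t)               # Equation (27)
    V[:, 1::2] = r * np.sin(t)               # Equation (28)
    return V
```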

5.2. Quasi-Random Sequence and Box-Muller Transformation for Reliability Sensitivity Analysis

The axis orthogonal importance sampling method for reliability sensitivity analysis based on a quasi-random sequence and the Box-Muller transformation (termed QBM-AOIS) can be simply described as a deterministic version of the axis orthogonal importance sampling method. Considering the unbiased estimate of the failure probability in Equation (8) and its standard deviation in Equation (9), the QBM-AOIS method approximates them by
$$\hat{p}_f = \frac{1}{N}\sum_{i=1}^{N}\Phi_{(0,1)}\!\left(-b(\Theta_i)\right)$$ (29)
$$\sigma(\hat{p}_f) = \sqrt{\frac{1}{N-1}\left(\frac{1}{N}\sum_{i=1}^{N}\Phi_{(0,1)}^2\!\left(-b(\Theta_i)\right) - \hat{p}_f^2\right)}$$ (30)
where $\Theta_i$ is the $i$-th sample of the quasi-random sequence and $b(\Theta_i)$ is the reliability index of the corresponding hyperplane. In contrast to Equations (8) and (9), one can clearly see that the only difference between the two methods is that the random samples in the axis orthogonal importance sampling method are replaced by well-chosen deterministic sequences stemming from the quasi-random sequence and the Box-Muller transformation. In the same way, if the random variables are all normally distributed, the reliability sensitivities in the original space $X$ are rewritten as:
$$\frac{\partial\hat{p}_f}{\partial\mu_{X_j}} = \frac{\partial\hat{p}_f}{\partial\mu_{v_j}}\cdot\frac{\partial\mu_{v_j}}{\partial\mu_{X_j}} = \frac{1}{\sigma_j}\,\frac{1}{N\sqrt{2\pi}}\sum_{i=1}^{N} e^{-b^2(\Theta_i)/2}\,a_j$$ (31)
$$\frac{\partial\hat{p}_f}{\partial\sigma_{X_j}} = \frac{\partial\hat{p}_f}{\partial\sigma_{v_j}}\cdot\frac{\partial\sigma_{v_j}}{\partial\sigma_{X_j}} = \frac{1}{\sigma_j}\,\frac{1}{N\sqrt{2\pi}}\sum_{i=1}^{N} e^{-b^2(\Theta_i)/2}\,a_j^2\,b(\Theta_i)$$ (32)
The corresponding standard deviations are re-expressed as:
$$\sigma\!\left(\frac{\partial\hat{p}_f}{\partial\mu_{X_j}}\right) = \frac{1}{\sigma_j}\sqrt{\frac{1}{N-1}\left\{\frac{1}{N}\sum_{i=1}^{N}\left[\frac{a_j}{\sqrt{2\pi}}\,e^{-b^2(\Theta_i)/2}\right]^2 - \left(\frac{\partial\hat{p}_f}{\partial\mu_{v_j}}\right)^2\right\}}$$ (33)
$$\sigma\!\left(\frac{\partial\hat{p}_f}{\partial\sigma_{X_j}}\right) = \frac{1}{\sigma_j}\sqrt{\frac{1}{N-1}\left\{\frac{1}{N}\sum_{i=1}^{N}\left[\frac{a_j^2}{\sqrt{2\pi}}\,b(\Theta_i)\,e^{-b^2(\Theta_i)/2}\right]^2 - \left(\frac{\partial\hat{p}_f}{\partial\sigma_{v_j}}\right)^2\right\}}$$ (34)
If non-normal random variables are involved, the reliability sensitivity and its standard deviation are respectively calculated as:
$$\frac{\partial\hat{p}_f}{\partial\theta} \approx \frac{\hat{p}_f|_{\theta+\Delta\theta} - \hat{p}_f|_{\theta}}{\Delta\theta} = \frac{1}{N\,\Delta\theta}\sum_{i=1}^{N}\left[\Phi_{(0,1)}\!\left(-b(\Theta_i)|_{\theta+\Delta\theta}\right) - \Phi_{(0,1)}\!\left(-b(\Theta_i)|_{\theta}\right)\right]$$ (35)
$$\sigma\!\left(\frac{\partial\hat{p}_f}{\partial\theta}\right) \approx \frac{\sigma\!\left(\hat{p}_f|_{\theta+\Delta\theta} - \hat{p}_f|_{\theta}\right)}{\Delta\theta} = \frac{1}{\Delta\theta}\sqrt{\frac{1}{N-1}\left\{\frac{1}{N}\sum_{i=1}^{N}\left[\Phi_{(0,1)}\!\left(-b(\Theta_i)|_{\theta+\Delta\theta}\right) - \Phi_{(0,1)}\!\left(-b(\Theta_i)|_{\theta}\right)\right]^2 - \left(\hat{p}_f|_{\theta+\Delta\theta} - \hat{p}_f|_{\theta}\right)^2\right\}}$$ (36)
The procedure of the axis orthogonal importance sampling method for structural reliability and its sensitivity analysis in this paper is described in detail as follows (shown in Figure 3):
  • Step 1. The performance function $g(\boldsymbol{x})$ is transformed into the standard normal space as $g(\boldsymbol{v})$ by the Rosenblatt transformation, and the MPFP $\boldsymbol{v}^*$ is obtained by the first-order reliability method.
  • Step 2. An $N \times (n-1)$ matrix $\boldsymbol{H}$ is generated from a quasi-random sequence (LHS, Sobol or Halton) following the uniform distribution in $[0, 1]$ and is transformed by the Box-Muller transformation into a matrix $\boldsymbol{\Theta}$ whose columns follow the standard normal distribution. Each row of $\boldsymbol{\Theta}$ is used as a sample point $\Theta_i$, $i = 1, \dots, N$, in the tangent plane of the limit state surface at the importance sampling center, namely the MPFP $\boldsymbol{v}^*$ (see the sketch after this list).
  • Step 3. For the projection point $\boldsymbol{S}_i^*$ of each sample point onto the limit state surface, defined in Equation (6), Equation (7) is solved and a distance $b(\Theta_i)$, $i = 1, 2, \dots, N$, is obtained.
  • Step 4. The structural failure probability $p_f$ and its standard deviation $\sigma(p_f)$ are estimated according to Equations (29) and (30), respectively.
  • Step 5. For cases not involving non-normal random variables, the sensitivities of the structural failure probability and their standard deviations with respect to the mean and standard deviation of a random variable are calculated by Equations (31)–(34); otherwise, they are evaluated by Equations (35) and (36), respectively.
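The sketch below illustrates Steps 1-2 under stated assumptions: scipy.stats.qmc (SciPy 1.7 or later) supplies the Sobol, Halton or LHS uniform design, the Box-Muller mapping of Section 5.1.2 turns it into the standard normal matrix Θ, and the transformation matrix A is obtained as one possible orthonormal completion of the unit MPFP vector a via a QR factorization. All names are illustrative, and the FORM step that supplies the MPFP v* is assumed to have been carried out already.

```python
import numpy as np
from scipy.stats import qmc

def box_muller(U):  # Equations (27)-(28); U has an even number of columns, entries in (0, 1)
    r = np.sqrt(-2.0 * np.log(U[:, 0::2]))
    t = 2.0 * np.pi * U[:, 1::2]
    V = np.empty_like(U)
    V[:, 0::2], V[:, 1::2] = r * np.cos(t), r * np.sin(t)
    return V

def qbm_tangent_plane_samples(v_star, N, sequence="sobol", seed=None):
    """Steps 1-2: direction a, transformation matrix A and an N x (n-1) normal sample matrix Theta."""
    v_star = np.asarray(v_star, dtype=float)
    n = v_star.size
    a = v_star / np.linalg.norm(v_star)                     # Equation (4)
    # One way to realize A = (a, ...): orthonormal completion of a by a QR factorization.
    Q, _ = np.linalg.qr(np.column_stack([a, np.eye(n)[:, 1:]]))
    A = Q * np.sign(a @ Q[:, 0])                            # make the first column equal to a
    d = n - 1
    gens = {"sobol": qmc.Sobol, "halton": qmc.Halton, "lhs": qmc.LatinHypercube}
    U = gens[sequence](d=d + d % 2, seed=seed).random(N)    # default randomization keeps u away from 0
    Theta = box_muller(U)[:, :d]                            # drop the padding column if d is odd
    return a, A, Theta
```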

6. Numerical Examples

In this section, four examples are presented to demonstrate the computational efficiency and accuracy of the method. The LHS, Halton and Sobol quasi-random sequences and the inverse and Box-Muller transformations are respectively applied and compared with each other for the estimation of the failure probability and its sensitivity. HaltonIT denotes the Halton quasi-random sequence employed to generate the uniform sequence together with the inverse transformation to obtain the target distribution, LHSBM represents LHS together with the Box-Muller transformation, and so on. SMC refers to the standard Monte Carlo method.
Example 1.
Consider the following performance function
$$g(\boldsymbol{X}) = X_1 X_2 - X_3 = 0$$
where $X_1$, $X_2$, $X_3$ are independent random variables with normal distributions. The basic random variables and their distribution parameters are listed in Table 1. The results for the reliability, its sensitivity and their variances using different methods with the sample size $N = 15$ are compared in Table 2 with the results obtained by the HaltonBM method at $N = 10^5$, which are taken as the accurate answer. The maximum relative errors of the sensitivity of the failure probability and of the failure probability yielded by the different methods at the sample sizes $N = 15$ and $100$ are shown in Figure 4. From these, we can conclude that the results of the quasi-Monte Carlo method with the sample size $15$ are in agreement with those of the HaltonBM method with the sample size $N = 10^5$. The maximum relative error comes from the Sobol sequence: the combinations SobolIT and SobolBM produce 2.124% and 2.795% errors in the failure probability estimation and 1.925% and 2.528% errors in the sensitivity estimation of the failure probability, respectively. The minimum relative error stems from the Halton sequence: the combinations HaltonIT and HaltonBM produce 0.068% and 1.368% errors in the failure probability estimation and 0.096% and 1.247% errors in the sensitivity estimation of the failure probability, respectively. As the sample size increases to $N = 100$, the Sobol sequence performs better and the relative errors are near zero: the combinations SobolIT and SobolBM produce 0.051% and 0.128% errors in the failure probability estimation and 0.066% and 0.091% errors in the sensitivity estimation of the failure probability, respectively. LHSBM has the largest relative errors, 0.535% in the failure probability estimation and 0.476% in the sensitivity estimation of the failure probability. In contrast, the SMC method with the sample size $N = 10^7$ shows larger errors.
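As a quick plausibility check for this example only (not part of the paper's procedure), a crude Monte Carlo run with the normal variables of Table 1 can be used to confirm the order of magnitude of the failure probability of about 1.2 × 10^-5 reported in Table 2; the seed and sample size below are arbitrary, and the exact value will fluctuate with the seed.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000_000
X1 = rng.normal(40.0, 5.0, N)
X2 = rng.normal(50.0, 2.5, N)
X3 = rng.normal(1000.0, 200.0, N)
p_f = np.mean(X1 * X2 - X3 < 0)   # fraction of samples with g(X) < 0
```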
Example 2.
Consider the highly non-linear performance function
$$g(\boldsymbol{X}) = X_1 X_2 X_3 X_4 - \frac{X_5 X_6^2}{8} = 0$$
which involves both normal and non-normal basic random variables; their distribution parameters are given in Table 3. The results for the reliability, its sensitivity and their variances using different methods with the sample size $N = 100$ are compared in Table 4 with the results obtained by the HaltonBM method at $N = 10^6$, which are taken as the accurate answer. The maximum relative errors of the sensitivity of the failure probability and of the failure probability yielded by the different methods at the sample sizes $N = 100$ and $1000$ are shown in Figure 5. It can be found that, similar to Example 1, compared to the results of SMC with $N = 10^7$, the results of the quasi-Monte Carlo method with $N = 100$ are much closer to those of the HaltonBM method with $N = 10^6$. Moreover, when $N = 100$ the relative error of the failure probability is less than 5%, while the sensitivity error of the failure probability is more than 10%. The maximum and minimum relative errors in the failure probability estimation come from the combinations HaltonIT and LHSBM, producing 4.423% and 0.864% errors, respectively. The relative errors of SobolBM are 0.911% and 34.305% in the failure probability and its sensitivity estimation, respectively. Nevertheless, HaltonIT and LHSBM produce 47.020% and 39.761% errors in the sensitivity estimation of the failure probability, respectively; even the minimum relative error exceeds 15%, reaching 16.816% for LHSIT. As the sample size increases to 1000, the accuracy of the sensitivity is greatly enhanced, while the relative error of the LHS is still more than 15%: 15.338% and 23.155% for LHSIT and LHSBM, respectively. HaltonIT produces a 7.555% error in the sensitivity estimation of the failure probability. The relative errors of SobolBM are 0.064% and 2.371% in the failure probability and its sensitivity estimation, respectively, and those of SobolIT are 0.365% and 1.798%, respectively. Taking the accuracy of both the reliability and its sensitivity into account, the SobolBM and SobolIT combinations achieve a satisfactory accuracy with a relatively small number of samples.
Example 3.
As shown in Figure 6, a roof truss is considered whose top chords and compression bars are made of steel-reinforced concrete and whose remaining members are made of steel. The uniformly distributed load $q$ applied on the roof truss is equivalent to the nodal load $P = ql/4$. The limit state function is written as:
$$g = 0.03 - \Delta_C = 0.03 - \frac{q l^2}{2}\left(\frac{3.81}{A_C E_C} + \frac{1.13}{A_S E_S}\right)$$
where $l$ is the span length of the truss, $A_S$ and $A_C$ are the cross-sectional areas of the steel and reinforced concrete bars, respectively, and $E_S$ and $E_C$ are the corresponding elastic moduli. The six independent normal random variables and their means and standard deviations are given in Table 5.
The results for the failure probability, its sensitivity and their standard deviations using different quasi-random sequences with $N = 50$ and $1000$ are compared in Table 6 and Table 7, respectively, with the results obtained by LHSIT with correlation elimination at $N = 10^6$, which are taken as the accurate answer. The maximum relative errors of the sensitivity of the failure probability and of the failure probability yielded by the different quasi-random sequences and transformations at the sample sizes $N = 50$ and $1000$ are shown in Figure 7. Similarly, compared to the results of SMC with $N = 5 \times 10^7$, the results of the quasi-Monte Carlo method are much closer to the accurate answer, especially in the sensitivity estimation of the failure probability; the SMC produces more than 11% relative error in $\partial p_f/\partial\mu_q$, $\partial p_f/\partial\mu_l$, etc. When $N = 50$, the maximum and minimum relative errors in the failure probability estimation come from the combinations SobolBM and HaltonBM, producing 4.576% and 0.701% errors, respectively; in the meantime, SobolBM and HaltonBM produce 3.797% and 0.399% errors in the sensitivity estimation of the failure probability, respectively. As the sample size increases to 1000, the accuracy of the sensitivity is further enhanced, especially for SobolBM, which has the minimum relative error in both the failure probability and its sensitivity estimation, reduced to 0.048% and 0.080%, respectively. LHSIT has the maximum relative errors in the failure probability and its sensitivity estimation, with 0.923% and 0.782%, respectively. From these results, we can see that SobolBM converges and its accuracy keeps improving as the sample size increases to a certain level, e.g., $N = 1000$. Even for a small sample size, e.g., $N = 50$, SobolBM gives an answer with a satisfactory accuracy, with less than 5% relative error.
Example 4.
A more complex 23-bar truss with 30 independent normal random variables is considered (shown in Figure 8), including the cross-sectional areas of the members, the applied loads and the modulus of elasticity, whose distribution parameters are listed in Table 8. The limit state function is based on the vertical displacement $D(\boldsymbol{X})$ of the middle node, as follows:
$$G(\boldsymbol{X}) = 0.14 - D(\boldsymbol{X})$$
where $D(\boldsymbol{X})$ is the vertical displacement of the middle node 7. The failure probability and reliability index calculated by the FORM are $1.44 \times 10^{-4}$ and $3.6266$, respectively.
The failure probability and reliability index obtained by the SMC are $1.8709 \times 10^{-4}$ and $3.5684$ with 1,000,000 samples, respectively, and those calculated by the importance LHS are $1.82 \times 10^{-4}$ and $3.5577$ with 1,000,000 samples. Ref. [28] shows that these methods are consistent with each other in terms of the failure probability, while the SMC obtains sensitivity estimates with large errors and the sensitivities with respect to the standard deviations obtained by the common importance LHS are inaccurate. The trend of the maximum relative error of the sensitivity with the sample size $N$ is shown in Figure 9. When $N = 50$, the maximum and minimum relative errors in the sensitivity estimation of the failure probability result from the combinations HaltonIT and LHSIT, producing 44.465% and 0.374% errors, respectively; in the meantime, SobolBM, SobolIT and HaltonBM produce 1.956%, 1.326% and 20.915% errors, respectively. The accuracy of HaltonIT and HaltonBM is unsatisfactory, with more than 20% relative error. As the sample size increases, the accuracy of all the combinations improves. When $N = 100$, HaltonIT and HaltonBM still have more than 10% relative errors, down to 18.841% and 13.282%, respectively. SobolBM attains higher accuracy, with 0.032%, 0.016%, 0.192% and 0.235% relative errors when $N = 100$, 200, 500 and 1000, respectively. HaltonBM has the largest relative errors, with 7.237%, 5.372% and 3.704% when $N = 200$, 500 and 1000, respectively. LHSIT and LHSBM also give quite small relative errors.

7. Conclusions

Different quasi-random sequences together with two transformation approaches have been combined for reliability sensitivity analysis in the axis orthogonal importance sampling method. Table 9 ranks the combinations according to the relative errors of the failure probability sensitivity in the four representative, highly nonlinear numerical examples; the relative error thresholds considered are 1%, 5%, 1% and 1% in the four examples, respectively (indicated in parentheses in the table). From the results, we can conclude that:
(1) The common LHS, importance LHS and SMC give answers of lower accuracy than the axis orthogonal importance sampling method based on quasi-random sequences, and may even give wrong answers when estimating the sensitivity of the failure probability, even if the sample size is increased to a considerable number. The quasi-random sequences, including the Halton, LHS and Sobol sequences, used in the axis orthogonal importance sampling method are beneficial for the accuracy of the failure probability estimation and are generally in line with each other once the sample size is large enough, while they show large differences in estimating the sensitivity of the failure probability at the same sample size, especially for small sample sizes.
(2) For small sample sizes, the Sobol sequence is usually the most likely to obtain a satisfactory sensitivity estimation of the failure probability with good accuracy in the axis orthogonal importance sampling method, especially when associated with the Box-Muller transformation to generate the normal sequence. The scheme of the Box-Muller transformation combined with the Sobol sequence needs fewer samples for a satisfactory accuracy, which is demonstrated by the four typical nonlinear examples in this paper (including higher-order polynomial problems, problems involving non-normal variables, and nonlinear structural responses).
In summary, an appropriate combination of quasi-random sequence and transformation to the normal space affects the effectiveness of reliability sensitivity analysis in the axis orthogonal importance sampling method. The scheme of the Box-Muller transformation combined with the Sobol sequence is a good alternative that can be applied efficiently in the axis orthogonal importance sampling method with good accuracy. The ongoing development of the axis orthogonal importance sampling method focuses on other quasi-random sequences and on applications to other complicated problems, which will be part of future studies.

Author Contributions

Funding acquisition, W.Z.; investigation, Y.C.; methodology, W.Z.; project administration, Y.O.; resources, Y.O.; supervision, Y.O.; visualization, Y.W.; writing—original draft, Y.W.; writing—review & editing, W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC), grant number 12072130.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the first author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, H.; Chen, W.; Sudjianto, A. Probabilistic Sensitivity Analysis Methods for Design Under Uncertainty. In Proceedings of the 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Albany, NY, USA, 30 August–1 September 2004; American Institute of Aeronautics and Astronautics: Albany, NY, USA, 2004. [Google Scholar]
  2. Keshtegar, B.; Meng, Z. A Hybrid Relaxed First-Order Reliability Method for Efficient Structural Reliability Analysis. Struct. Saf. 2017, 66, 84–93. [Google Scholar] [CrossRef]
  3. Meng, Z.; Li, G.; Yang, D.; Zhan, L. A New Directional Stability Transformation Method of Chaos Control for First Order Reliability Analysis. Struct. Multidiscip. Optim. 2017, 55, 601–612. [Google Scholar] [CrossRef]
  4. Yang, D.; Li, G.; Cheng, G. Convergence Analysis of First Order Reliability Method Using Chaos Theory. Comput. Struct. 2006, 84, 563–571. [Google Scholar] [CrossRef]
  5. Dudzik, A.; Potrzeszcz-Sut, B. Hybrid Approach to the First Order Reliability Method in the Reliability Analysis of a Spatial Structure. Appl. Sci. 2021, 11, 648. [Google Scholar] [CrossRef]
  6. Hu, Z.; Du, X. First Order Reliability Method for Time-Variant Problems Using Series Expansions. Struct. Multidiscip. Optim. 2015, 51, 1–21. [Google Scholar] [CrossRef]
  7. Li, G.; Li, B.; Hu, H. A Novel First-Order Reliability Method Based on Performance Measure Approach for Highly Nonlinear Problems. Struct. Multidiscip. Optim. 2018, 57, 1593–1610. [Google Scholar] [CrossRef]
  8. Du, X.; Zhang, J. Second-Order Reliability Method With First-Order Efficiency. In Proceedings of the Volume 1: 36th Design Automation Conference, Parts A and B, New Orleans, LA, USA, 15–18 August 2010; ASMEDC: Montreal, QC, Canada, 2010; pp. 973–984. [Google Scholar]
  9. Fan, W.; Liu, R.; Ang, A.H.-S.; Li, Z. A New Point Estimation Method for Statistical Moments Based on Dimension-Reduction Method and Direct Numerical Integration. Appl. Math. Model. 2018, 62, 664–679. [Google Scholar] [CrossRef]
  10. Zhao, Y.-G.; Ono, T. Moment Methods for Structural Reliability. Struct. Saf. 2001, 23, 47–75. [Google Scholar] [CrossRef]
  11. Zhao, Y.-G.; Zhang, X.-Y.; Lu, Z.-H. A Flexible Distribution and Its Application in Reliability Engineering. Reliab. Eng. Syst. Saf. 2018, 176, 1–12. [Google Scholar] [CrossRef]
  12. Lu, Z.; Song, J.; Song, S.; Yue, Z.; Wang, J. Reliability Sensitivity by Method of Moments. Appl. Math. Model. 2010, 34, 2860–2871. [Google Scholar] [CrossRef]
  13. Balesdent, M.; Morio, J.; Marzat, J. Kriging-Based Adaptive Importance Sampling Algorithms for Rare Event Estimation. Struct. Saf. 2013, 44, 1–10. [Google Scholar] [CrossRef]
  14. Dai, H.; Zhang, H.; Wang, W. A New Maximum Entropy-Based Importance Sampling for Reliability Analysis. Struct. Saf. 2016, 63, 71–80. [Google Scholar] [CrossRef]
  15. Melchers, R.E.; Li, C.Q. A Benchmark Study on Importance Sampling Techniques in Structural Reliability. Struct. Saf. 1994, 14, 299–302. [Google Scholar] [CrossRef]
  16. Melchers, R.E. Search-Based Importance Sampling. Struct. Saf. 1990, 9, 117–128. [Google Scholar] [CrossRef]
  17. Olsson, A.; Sandberg, G.; Dahlblom, O. On Latin Hypercube Sampling for Structural Reliability Analysis. Struct. Saf. 2003, 25, 47–68. [Google Scholar] [CrossRef]
  18. Ibrahim, Y. Observations on Applications of Importance Sampling in Structural Reliability Analysis. Struct. Saf. 1991, 9, 269–281. [Google Scholar] [CrossRef]
  19. Shayanfar, M.A.; Barkhordari, M.A.; Barkhori, M.; Barkhori, M. An Adaptive Directional Importance Sampling Method for Structural Reliability Analysis. Struct. Saf. 2018, 70, 14–20. [Google Scholar] [CrossRef]
  20. Au, S.K.; Beck, J.L. Important Sampling in High Dimensions. Struct. Saf. 2003, 25, 139–163. [Google Scholar] [CrossRef]
  21. Malakzadeh, K.; Daei, M. Finding Design Point Base on a Quasi-Importance Sampling Method in Structural Reliability Analysis. Structures 2022, 43, 271–284. [Google Scholar] [CrossRef]
  22. Zhang, X.; Lu, Z.; Cheng, K. Cross-Entropy-Based Directional Importance Sampling with von Mises-Fisher Mixture Model for Reliability Analysis. Reliab. Eng. Syst. Saf. 2022, 220, 108306. [Google Scholar] [CrossRef]
  23. Wang, Y.; Xie, B.; E, S. Adaptive Relevance Vector Machine Combined with Markov-Chain-Based Importance Sampling for Reliability Analysis. Reliab. Eng. Syst. Saf. 2022, 220, 108287. [Google Scholar] [CrossRef]
  24. Xie, B.; Peng, C.; Wang, Y. Combined Relevance Vector Machine Technique and Subset Simulation Importance Sampling for Structural Reliability. Appl. Math. Model. 2022, 113, 129–143. [Google Scholar] [CrossRef]
  25. Menz, M.; Dubreuil, S.; Morio, J.; Gogu, C.; Bartoli, N.; Chiron, M. Variance Based Sensitivity Analysis for Monte Carlo and Importance Sampling Reliability Assessment with Gaussian Processes. Struct. Saf. 2021, 93, 102116. [Google Scholar] [CrossRef]
  26. Mathew, T.V.; Prajith, P.; Ruiz, R.O.; Atroshchenko, E.; Natarajan, S. Adaptive Importance Sampling Based Neural Network Framework for Reliability and Sensitivity Prediction for Variable Stiffness Composite Laminates with Hybrid Uncertainties. Compos. Struct. 2020, 245, 112344. [Google Scholar] [CrossRef]
  27. Liu, F.; Wei, P.; Zhou, C.; Yue, Z. Reliability and Reliability Sensitivity Analysis of Structure by Combining Adaptive Linked Importance Sampling and Kriging Reliability Method. Chin. J. Aeronaut. 2020, 33, 1218–1227. [Google Scholar] [CrossRef]
  28. Zhao, W.; Chen, Y.; Liu, J. Reliability Sensitivity Analysis Using Axis Orthogonal Importance Latin Hypercube Sampling Method. Adv. Mech. Eng. 2019, 11, 168781401982641. [Google Scholar] [CrossRef]
  29. Juang, C.H.; Gong, W.; Martin, J.R. Subdomain Sampling Methods—Efficient Algorithm for Estimating Failure Probability. Struct. Saf. 2017, 66, 62–73. [Google Scholar] [CrossRef]
  30. Liu, P. Structural Reliability Analysis Based on Improved Latin Hypercube Important Sampling. Master’s Thesis, Jinan University, Guangzhou, China, 2016. [Google Scholar]
  31. Atanassov, E.; Ivanovska, S. On the Use of Sobol’ Sequence for High Dimensional Simulation. In Proceedings of the Computational Science—ICCS 2022; Groen, D., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 646–652. [Google Scholar]
  32. Liu, X.; Zheng, S.; Wu, X.; Chen, D.; He, J. Research on a Seismic Connectivity Reliability Model of Power Systems Based on the Quasi-Monte Carlo Method. Reliab. Eng. Syst. Saf. 2021, 215, 107888. [Google Scholar] [CrossRef]
  33. Xu, J.; Zhang, W.; Sun, R. Efficient Reliability Assessment of Structural Dynamic Systems with Unequal Weighted Quasi-Monte Carlo Simulation. Comput. Struct. 2016, 175, 37–51. [Google Scholar] [CrossRef]
  34. Ökten, G.; Liu, Y. Randomized Quasi-Monte Carlo Methods in Global Sensitivity Analysis. Reliab. Eng. Syst. Saf. 2021, 210, 107520. [Google Scholar] [CrossRef]
  35. Ökten, G.; Göncü, A. Generating Low-Discrepancy Sequences from the Normal Distribution: Box-Muller or Inverse Transform? Math. Comput. Model. 2011, 53, 1268–1281. [Google Scholar] [CrossRef]
  36. Niederreiter, H. Error Bounds for Quasi-Monte Carlo Integration with Uniform Point Sets. J. Comput. Appl. Math. 2003, 150, 283–292. [Google Scholar] [CrossRef]
  37. Ökten, G.; Eastman, W. Randomized Quasi-Monte Carlo Methods in Pricing Securities. J. Econ. Dyn. Control 2004, 28, 2399–2426. [Google Scholar] [CrossRef]
  38. Papageorgiou, A. Fast Convergence of Quasi-Monte Carlo for a Class of Isotropic Integrals. Math. Comput. 2000, 70, 297–306. [Google Scholar] [CrossRef]
  39. Papageorgiou, A. The Brownian Bridge Does Not Offer a Consistent Advantage in Quasi-Monte Carlo Integration. J. Complex. 2002, 18, 171–186. [Google Scholar] [CrossRef]
  40. Dai, H.; Wang, W. Application of Low-Discrepancy Sampling Method in Structural Reliability Analysis. Struct. Saf. 2009, 31, 55–64. [Google Scholar] [CrossRef]
  41. McKay, M.D.; Beckman, R.J.; Conover, W.J. A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code. Technometrics 1979, 21, 239. [Google Scholar] [CrossRef]
Figure 1. Axis orthogonal importance Latin hypercube sampling method.
Figure 2. Two-dimensional scatter plots of different sequences. (a) LHS sequence; (b) Halton sequence; (c) Sobol sequence; (d) Random sequence.
Figure 3. Procedure of structural reliability and its sensitivity analysis.
Figure 4. Maximum relative errors of sensitivity and failure probability when N = 15 and 100 in Example 1. (a) Maximum relative errors of sensitivity (%); (b) Maximum relative errors of failure probability (%).
Figure 5. Maximum relative errors of sensitivity and failure probability when N = 100 and 1000 in Example 2. (a) Maximum relative errors of sensitivity (%); (b) Maximum relative errors of failure probability (%).
Figure 6. Roof truss structure in Example 3.
Figure 7. Maximum relative errors of sensitivity and failure probability when N = 50 and 1000 in Example 3. (a) Maximum relative errors of sensitivity (%); (b) Maximum relative errors of failure probability (%).
Figure 8. Schematic diagram of truss structure in Example 4.
Figure 9. (a) Trend of maximum relative error of sensitivity with sample size N in Example 4 and (b) its local magnification.
Table 1. Random variables and their parameters in Example 1.
Variable | Mean | Standard Deviation | Distribution
X1 | 40.0 | 5.0 | Normal
X2 | 50.0 | 2.5 | Normal
X3 | 1000.0 | 200.0 | Normal
Table 2. Reliability and sensitivity results in Example 1 when N = 15.
HaltonBM | HaltonIT | SobolBM | SobolIT | LHSBM | LHSIT | HaltonBM (N = 10^5) | SMC (N = 10^7)
p f / u x 1 ( × 10 4 )
μ −5.9807−5.9121−5.758−5.7936−5.8274−5.8116−5.9071−5.95
σ 0.155590.115220.0979770.0982880.105870.138670.0017258-
p f / u x 2 ( × 10 4 )
μ −3.3503−3.3119−3.2256−3.2455−3.2644−3.2556−3.3091−3.48
σ 0.0871570.0645470.0548860.055060.059310.0776810.0009668-
p f / u x 3 ( × 10 5 )
μ 1.22561.21151.17991.18721.19411.19091.21051.13
σ 0.0318830.0236120.0200780.0201420.0216960.0284170.0003537-
p f / σ x 1 ( × 10 3 )
μ 1.37241.35871.32711.33451.34131.33781.35741.31
σ 0.0316360.0233610.0201640.0201860.0217060.028190.0003490-
p f / σ x 2 ( × 10 4 )
μ 2.15342.13192.08232.09382.10462.09922.12992.60
σ 0.0496380.0366560.0316380.0316740.0340580.0442320.0005475-
p f / σ x 3 ( × 10 5 )
μ 2.30532.28222.2922.24152.25312.24732.28012.12
σ 0.053140.0392410.0338690.0339080.036460.0473520.0005862-
p f ( × 10 5 )
μ 1.19301.17781.14421.15191.15931.15611.17691.175
σ 0.0034070.0252890.02120.021350.0230310.0303740.0003798-
β
3.03743.04133.05003.04803.04603.04693.04153.042
μ : the estimated value, σ : the standard deviation of estimate.
Table 3. Random variables and their parameters in Example 2.
Variable | Mean | Standard Deviation | Distribution
X1 | 4.0 | 0.1 | Weibull
X2 | 25000 | 2000 | Lognormal
X3 | 0.875 | 0.1 | Type I extreme value for maxima
X4 | 20.0 | 1.0 | Uniform
X5 | 100.0 | 100.0 | Exponential
X6 | 150.0 | 10.0 | Normal
Table 4. Reliability and sensitivity results of different methods in Example 2 when N = 100.
HaltonBM | HaltonIT | SobolBM | SobolIT | LHSBM | LHSIT | HaltonBM (N = 10^6) | SMC (N = 10^6)
p f / u x 1 ( × 10 3 )
μ −4.4805−4.2795−4.413−4.337−4.4310−4.5150−4.4548−5.20
σ 0.27139140.21372510.22502170.2080.26007370.26092300.0024682-
p f / u x 2 ( × 10 7 )
μ −7.4163−7.0930−7.301−7.203−7.2940−7.4026−7.3439−8.70
σ 0.42675010.34268470.36085650.3320.41104190.40149350.0038877-
p f / u x 3 ( × 10 2 )
μ −2.1632−2.0696−2.135−2.091−2.1589−2.1658−2.1454−2.49
σ 0.14534190.11448160.12130830.1100.14258010.13203120.0013030-
p f / u x 4 ( × 10 4 )
μ −9.0523−8.6619−8.904−8.783−8.9912−9.1386−8.9977−11.1
σ 0.52418100.41979710.43610380.4070.51713870.51279790.0048062-
p f / u x 5 ( × 10 4 )
μ 1.78951.70251.7571.7291.76861.79981.77561.62
σ 0.11155950.08715070.09106510.0850.10656090.10611270.0010073-
p f / u x 6 ( × 10 4 )
μ 2.30322.20542.2662.2272.26762.31852.29042.18
σ 0.14206610.11260450.11640290.1090.13033290.13352430.0012916-
p f / σ x 1 ( × 10 4 )
μ 2.86049.45438.6375.9323.87376.36026.43065.80
σ 5.30806205.15757874.99491244.8855.55450885.83295060.0519940-
p f / σ x 2 ( × 10 7 )
μ 3.23093.53803.4193.5822.74302.54463.01822.86
σ 0.72014430.62412390.64921020.7220.75334070.81380010.0074955-
p f / σ x 3 ( × 10 2 )
μ 1.03961.09041.1231.0121.21070.96011.02261.17
σ 0.22236530.20285530.22559910.2030.23660530.17886390.0021001-
p f / σ x 4 ( × 10 4 )
μ 2.10302.99262.4062.7702.96292.79892.39603.02
σ 1.09702560.86816920.95274310.8970.89662801.01812690.0099550-
p f / σ x 5 ( × 10 4 )
μ 1.78951.70251.7571.7291.76861.79981.77561.62
σ 0.11155950.08715070.09106510.0850.10656090.10611270.0010073-
p f / σ x 6 ( × 10 4 )
μ 1.24120.96921.1481.1741.35881.21721.15701.17
σ 0.25362330.21774630.23254030.2360.26906020.26345660.0025039-
p f ( × 10 3 )
3.46673.27373.3943.3293.45483.47393.42523.54
β
2.70002.71902.7072.7142.70122.69932.70402.69
Table 5. Basic random variables in Example 3.
Variable | Unit | Mean | Standard Deviation
q | N/m | 20,000 | 1400
l | m | 12 | 0.12
A_S | m^2 | 9.82 × 10^-4 | 5.892 × 10^-5
A_C | m^2 | 0.04 | 0.0048
E_S | N/m^2 | 1 × 10^11 | 6 × 10^9
E_C | N/m^2 | 2 × 10^10 | 1.2 × 10^9
Table 6. Reliability and sensitivity results of different methods in Example 3 when N = 50.
HaltonBM | HaltonIT | SobolBM | SobolIT | LHSBM | LHSIT | CLHSIT (N = 10^6) | SMC (N = 5 × 10^7)
p f / u q ( × 10 6 )
μ 9.9189.647710.335410.180110.200910.12069.957611.059
σ 0.384060.398450.516190.660.453540.42740.0032015-
p f / u l ( × 10 2 )
μ 4.3674.2484.55084.48244.49164.45624.384511.059
σ 0.16910.175440.227280.290610.19970.188190.0014096-
p f / u A s ( × 10 2 )
μ −1.847−1.7966−1.9247−1.8958−1.8997−1.8847−1.8543−1.86262
σ 0.071520.07420.0961260.122910.0844590.0795920.0005962-
p f / u A c
μ −2.4147−2.3489−2.5163−2.4785−2.4836−2.464−2.4243−2.1299
σ 0.0935040.0970080.125670.160690.110420.104060.0007795-
p f / u E s ( × 10 12 )
μ −1.7958−1.7469−1.8714−1.8433−1.847−1.8325−1.8030−1.8265
σ 0.0695390.0721450.0934630.11950.082120.0773880.0005797-
p f / u E c ( × 10 12 )
μ −4.3967−4.2769−4.5817−4.5129−4.5221−4.4865−4.4142−3.7592
σ 0.170250.176630.228830.292580.201060.189470.0014192-
p f / σ q ( × 10 5 )
μ 1.29651.26691.3351.31211.32411.31731.29801.6187
σ 0.0408190.04110.0521020.0590320.046310.0426170.0003199-
p f / σ l ( × 10 2 )
μ 2.15462.10532.21852.18042.20032.1892.15691.85
σ 0.067830.06830.0865830.09810.0769570.070820.0005316-
p f / σ A s ( × 10 2 )
μ 1.89231.8491.94841.9151.93251.92251.89442.05932
σ 0.0595760.0599860.076040.086160.0675890.0621990.0004669-
p f / σ A c
μ 2.63492.57472.71312.66652.69092.6772.63782.5381
σ 0.0829560.0835270.105890.119970.0941150.0866090.0006501-
p f / σ E s ( × 10 12 )
μ 1.82171.781.87571.84361.86041.85081.82742.0119
σ 0.0573530.0577480.0732060.0829430.0650680.0598780.014554-
p f / σ E c ( × 10 12 )
μ 2.18392.1342.24872.21022.23032.21892.19081.9986
σ 0.0687580.0692310.0877630.0994360.0780060.0717860.017448-
p f ( × 10 3 )
9.29429.01119.78819.68079.61869.52119.39929.373
β
2.35112.36522.33442.33852.34092.34472.34952.3505
Table 7. Reliability and sensitivity results of different methods in Example 3 when N = 1000.
HaltonBM | HaltonIT | SobolBM | SobolIT | LHSBM | LHSIT | CLHSIT (N = 10^6) | SMC (N = 5 × 10^7)
p f / u q ( × 10 6 )
μ 9.9899.92979.96469.975210.200910.12069.957611.059
σ 0.105080.0976910.0991420.105580.453540.42740.0032015-
p f / u l ( × 10 2 )
μ 4.39834.37224.38754.39224.49164.45624.384511.059
σ 0.046270.0430150.0436540.046490.19970.188190.0014096-
p f / u A s ( × 10 2 )
μ −1.8602−1.8491−1.8556−1.8576−1.8997−1.8847−1.8543−1.86262
σ 0.0195690.0181920.0184630.0196620.0844590.0795920.0005962-
p f / u A c
μ −2.432−2.4175−2.426−2.4286−2.4836−2.464−2.4243−2.1299
σ 0.0255840.0237840.0241380.0257060.110420.104060.0007795-
p f / u E s ( × 10 12 )
μ −1.8086−1.7979−1.8042−1.8062−1.847−1.8325−1.8030−1.8265
σ 0.0190270.0176880.0179510.0191170.082120.0773880.0005797-
p f / u E c ( × 10 12 )
μ −4.4282−4.4019−4.4173−4.422−4.4315−4.4487−4.4142−3.7592
σ 0.0465840.0433070.043950.0468050.04521640.04584430.0014192-
p f / σ q ( × 10 5 )
μ 1.30061.29561.2991.2991.30191.30581.29801.6187
σ 0.0103580.00987960.0100670.0103960.01025190.01032750.0003199-
p f / σ l ( × 10 2 )
μ 2.16142.1532.15862.15872.16352.172.15691.85
σ 0.0172130.0164180.016730.0172760.01703650.01716210.0005316-
p f / σ A s ( × 10 2 )
μ 1.89831.89091.89581.89591.90011.90591.89442.05932
σ 0.0151180.0144190.0146930.0151730.01496260.01507290.0004669-
p f / σ A c
μ 2.64322.6332.63992.642.64582.65382.63782.5381
σ 0.0210510.0200780.020460.0211280.02083470.02098830.0006501-
p f / σ E s ( × 10 12 )
μ 1.82741.82041.82511.82521.82921.83471.82742.0119
σ 0.0145540.0138810.0141450.0146070.01440430.01451050.014554-
p f / σ E c ( × 10 12 )
μ 2.19082.18242.1882.18822.1932.19962.19081.9986
σ 0.0174480.0166420.0169580.0175110.01726870.0173960.017448-
p f ( × 10 3 )
9.39929.32449.36439.38539.40269.44629.39929.373
β
2.34952.35252.35092.35012.34942.34762.34952.3505
Table 8. Basic random variables in Example 4.
Variable | Unit | Mean | Standard Deviation
A1–A23 | m^2 | 0.0014 | 0.00014
E | GPa | 200 | 20
P1–P6 | kN | 40 | 4
Table 9. Rank of the methods according to the relative errors of failure probability sensitivity.
Example | N | Method (in ascending order of the relative errors)
1 (<1%) | 15 | HaltonIT, HaltonBM, LHSBM, LHSIT, SobolIT, SobolBM
1 (<1%) | 100 | SobolIT, SobolBM, LHSIT, HaltonIT, HaltonBM, LHSBM
2 (<5%) | 100 | LHSIT, SobolIT, SobolBM, LHSBM, HaltonIT, HaltonBM
2 (<5%) | 1000 | HaltonBM, SobolIT, SobolBM, HaltonIT, LHSIT, LHSBM
3 (<1%) | 50 | HaltonBM, LHSIT, LHSBM, SobolIT, HaltonIT, SobolBM
3 (<1%) | 1000 | SobolBM, SobolIT, HaltonIT, HaltonBM, LHSBM, LHSIT
4 (<1%) | 15 | LHSIT, LHSBM, SobolIT, SobolBM, HaltonBM, HaltonIT
4 (<1%) | 100 | SobolBM, LHSIT, LHSBM, SobolIT, HaltonBM, HaltonIT
4 (<1%) | 200 | SobolBM, LHSIT, SobolIT, HaltonIT, LHSBM, HaltonBM
4 (<1%) | 500 | LHSIT, LHSBM, SobolBM, HaltonIT, SobolIT, HaltonBM
4 (<1%) | 1000 | HaltonIT, SobolBM, SobolIT, LHSIT, LHSBM, HaltonBM
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
