Article

Study of Dynamic Solutions for Human–Machine System with Human Error and Common-Cause Failure

Juan Zhao and Ehmet Kasim *
College of Mathematics and Systems Science, Xinjiang University, Urumqi 830046, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(12), 2771; https://doi.org/10.3390/math11122771
Submission received: 21 April 2023 / Revised: 30 May 2023 / Accepted: 16 June 2023 / Published: 19 June 2023

Abstract

This work investigates the dynamic solution of human–machine systems with human error and common-cause failure. By means of functional analysis, and by analyzing the spectral properties of the underlying operator, it is proved that the semigroup generated by the underlying operator converges exponentially to a projection operator, and asymptotic expressions for the system's time-dependent solution are presented. We also provide numerical examples to illustrate the effects of different parameters on the system and the validity of the theoretical analysis.

1. Introduction

As modern technology advances, the study of the reliability of human–machine systems (HMS) has become more significant, developing into an essential part of reliability theory. An HMS is a general term for a system in which a human operator is the subject and various types of machines are controlled. Due to the development of automation and microelectronics, the allocation of human–machine functions is continually shifting to the machine side. The high precision and performance of machines increase the weight of human work responsibility, and there is a risk of major accidents caused by human error (HR). Additionally, machines are affected by external environmental contingencies, such as earthquakes, floods, and fires. When such destructive contingencies occur, numerous components in the overall system fail simultaneously, which is known as common-cause failure (CCF).
By studying the history of reliability theory, it is easy to see that many reliability problems have been solved by establishing reliability models in which the supplementary variable technique (SVT) plays an essential role. The SVT was first proposed by Cox [1] in 1955 and was first introduced into reliability theory by Gaver [2]. Following this research approach, researchers such as Abbas and Kuo [3], Yang and Dhillon [4], Sridharan and Mohanavadivu [5], Asadzadeh and Azadeh [6], and Wang [7] studied various human–machine repairable systems.
In general, the mathematical model established by the SVT is described by a finite or infinite set of partial differential integral equations with integral boundary conditions. Therefore, obtaining exact solutions is quite challenging. Because of this, some papers on reliability models assume that the time-dependent solution (TDS) converges to the steady-state solution (SSS), without addressing whether this assumption is correct. In 2003, Gupur [8] was the first to apply $C_0$-semigroup theory to investigate the TDS of an HMS consisting of an active and a standby component by the SVT. The strong [9] and exponential [10] convergence of the TDS to its SSS were then obtained separately. Wang and Xu [11] examined the well-posedness and asymptotic behavior of the TDS of an HMS with two parallel working components and a standby component. As systems become more complex, these simple systems are no longer adequate for engineering needs. In addition, as equipment becomes more reliable, the contribution of human error to system problems becomes relatively more significant. HR and CCF are vital issues in system reliability. Therefore, Chung [12], Narmada and Jacob [13], Hajeeh [14], Liu et al. [15], Shneiderman [16], and others (see the references therein) have examined the reliability of systems with HR and CCF. Yang and Dhillon [4] established a complex HMS consisting of $n$ ($n \ge 2$) active components and $m$ ($m \ge 2$) standby components with HR and CCF. Xu et al. [17] showed that the above system has a unique TDS and that it converges to its SSS. Other than the above results, there are no further results on this model's dynamic analysis.
In this paper, first of all, we study the spectral properties of the underlying operator and demonstrate that, in a strip region of the left-half complex plane, it has at most finitely many eigenvalues, each with an algebraic multiplicity of one, of which 0 is the strictly dominant eigenvalue. Then, we show that the semigroup generated by the underlying operator of the model is quasi-compact and converges exponentially to a projection operator. By studying the essential growth bound of the operator semigroup, it is shown that 0 is a pole of order one. These results yield an explicit expression for the projection operator via the residue theorem in operator form. Finally, we provide asymptotic expressions for the TDS of the system.

2. Mathematical Model of the System

The assumptions and symbols associated with our mathematical model are as follows.
  • In the system, there are n active components and m standby components.
  • When one of the operating components fails, a standby component switches into operation; when all components fail, the system fails.
  • A CCF or an HR can trigger system failure from any of the system's operable states; $\lambda_{ci}$ represents the constant CCF rate from state $i$ to state $m+n+1$, and $\lambda_{hi}$ denotes the constant critical HR rate from state $i$ to state $m+n+2$, for $i = 0, 1, 2, \ldots, m+n-1$.
  • Common-cause and other failure rates are constant; $r_i$ denotes the constant hardware failure rate of a unit in state $i$, $i = 0, 1, 2, \ldots, m+n-1$.
  • The repair times of the failed system are arbitrarily distributed; $\mu_j(x)$ represents the system's time-dependent repair rate when the system is in state $j$ and satisfies $\mu_j(x) \ge 0$, $\int_0^\infty \mu_j(x)\,dx = \infty$, for $j = m+n, m+n+1, m+n+2$.
  • The repaired component or system is as good as new; $\mu_i$ denotes the constant repair rate of a failed unit in state $i$, where $i = 1, 2, \ldots, m+n-1$.
  • All failures, including HR, are statistically independent, and the switchover mechanism is perfect and instantaneous.
Based on the above assumptions and descriptions, the state transition diagram of the system can be presented as in Figure 1.
According to Yang and Dhillon [4], the following partial differential integral equations describe the mathematical model of the HMS with HR and CCF.
$$\frac{d\Phi_0(t)}{dt} = -a_0\Phi_0(t) + \mu_1\Phi_1(t) + \sum_{j=\varrho}^{\varrho+2}\int_0^\infty \Phi_j(x,t)\mu_j(x)\,dx, \quad (1)$$
$$\frac{d\Phi_i(t)}{dt} = r_{i-1}\Phi_{i-1}(t) - a_i\Phi_i(t) + \mu_{i+1}\Phi_{i+1}(t), \quad i = 1, 2, \ldots, \varrho-2, \quad (2)$$
$$\frac{d\Phi_{\varrho-1}(t)}{dt} = r_{\varrho-2}\Phi_{\varrho-2}(t) - a_{\varrho-1}\Phi_{\varrho-1}(t), \quad (3)$$
$$\frac{\partial\Phi_j(x,t)}{\partial t} + \frac{\partial\Phi_j(x,t)}{\partial x} = -\mu_j(x)\Phi_j(x,t), \quad j = m+n,\, m+n+1,\, m+n+2, \quad (4)$$
$$\Phi_\varrho(0,t) = r_{\varrho-1}\Phi_{\varrho-1}(t), \quad (5)$$
$$\Phi_{\varrho+1}(0,t) = \sum_{i=0}^{\varrho-1}\lambda_{ci}\Phi_i(t), \quad (6)$$
$$\Phi_{\varrho+2}(0,t) = \sum_{i=0}^{\varrho-1}\lambda_{hi}\Phi_i(t), \quad (7)$$
$$\Phi_0(0) = 1, \quad \Phi_i(0) = 0, \; i = 1, 2, \ldots, \varrho-1, \quad \Phi_j(x,0) = 0, \; j = \varrho, \varrho+1, \varrho+2, \quad (8)$$
where $(x,t) \in [0,\infty)\times[0,\infty)$, $a_0 = r_0 + \lambda_{c0} + \lambda_{h0}$, $a_i = r_i + \mu_i + \lambda_{ci} + \lambda_{hi}$, $i = 1, 2, \ldots, \varrho-1$, and $\varrho = m+n$. We use this notation throughout the article.
$\Phi_0(t) = P\{$at time $t$, the $n$ working units and $m$ standby units of the system are in good condition$\}$;
$$\Phi_i(t) = \begin{cases} P\{n \text{ units working and } m-i \text{ units on standby in the system}\}, & i \le m, \\ P\{\varrho-i \text{ units working and no units on standby in the system}\}, & i > m; \end{cases}$$
$\Phi_\varrho(x,t) = P\{$at time $t$, the system has failed due to the failure of all units, and the elapsed repair time is $x\}$; $\Phi_{\varrho+1}(x,t) = P\{$at time $t$, the system has failed due to a CCF, and the elapsed repair time is $x\}$; $\Phi_{\varrho+2}(x,t) = P\{$at time $t$, the system has failed due to an HR, and the elapsed repair time is $x\}$.
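For readers who want to experiment with the model, the following minimal sketch assembles the transition-rate (generator) matrix of the Markovian special case in which the system repair rates $\mu_j(x)$ are constant, so the three distributed repair states collapse to ordinary states. The function name `generator`, the state ordering, and all rate arrays are our own illustrative assumptions rather than anything prescribed by the paper.

```python
import numpy as np

def generator(m, n, r, mu, lam_c, lam_h, mu_f):
    """Generator matrix of the Markovian special case (constant repair rates).
    States 0..rho-1 are operable; rho = all units failed; rho+1 = CCF;
    rho+2 = HR.  Every repair returns the system to state 0 ("as good as new").
    r, lam_c, lam_h: length-rho arrays; mu: length-rho array (mu[0] unused);
    mu_f: the three constant repair rates (mu_rho, mu_rho+1, mu_rho+2)."""
    rho = m + n
    Q = np.zeros((rho + 3, rho + 3))
    for i in range(rho):
        Q[i, i + 1] += r[i]        # hardware failure: state i -> i+1 (state rho = all failed)
        Q[i, rho + 1] += lam_c[i]  # common-cause failure from state i
        Q[i, rho + 2] += lam_h[i]  # critical human error from state i
        if i >= 1:
            Q[i, i - 1] += mu[i]   # repair of one failed unit
    for k in range(3):
        Q[rho + k, 0] = mu_f[k]    # system repair from the three failed states
    Q -= np.diag(Q.sum(axis=1))    # rows of a generator sum to zero
    return Q
```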
In the following, we introduce some notations:
$$\Gamma = \begin{pmatrix} 0 & 0 & \cdots & 0 & r_{\varrho-1} \\ \lambda_{c0} & \lambda_{c1} & \cdots & \lambda_{c(\varrho-2)} & \lambda_{c(\varrho-1)} \\ \lambda_{h0} & \lambda_{h1} & \cdots & \lambda_{h(\varrho-2)} & \lambda_{h(\varrho-1)} \end{pmatrix},$$
$$B_jf_j(x) = -\frac{df_j(x)}{dx} - \mu_j(x)f_j(x), \quad f_j \in W^1[0,\infty); \qquad \phi_jf_j(x) = \int_0^\infty \mu_j(x)f_j(x)\,dx, \quad f_j \in L^1[0,\infty).$$
$$e_j(x) = e^{-\eta x - \int_0^x\mu_j(\tau)d\tau}; \qquad E_jg_j(x) = e^{-\eta x - \int_0^x\mu_j(\tau)d\tau}\int_0^x g_j(\xi)\,e^{\eta\xi + \int_0^\xi\mu_j(\tau)d\tau}\,d\xi, \quad g_j \in L^1[0,\infty).$$
Take the following Banach space X as a state space:
$$X = \left\{\Phi \;\middle|\; \Phi \in \mathbb{R}^\varrho \times L^1[0,\infty) \times L^1[0,\infty) \times L^1[0,\infty),\; \|\Phi\| = \sum_{i=0}^{\varrho-1}|\Phi_i| + \sum_{j=\varrho}^{\varrho+2}\|\Phi_j\|_{L^1[0,\infty)} < \infty\right\}.$$
Define the operators and their domains:
$$D(H) = \left\{\Phi \in X \;\middle|\; \frac{d\Phi_j(x)}{dx} \in L^1[0,\infty),\ \Phi_j(x)\ \text{are absolutely continuous, and}\ \Phi(0) = \Gamma\Phi\right\}$$
(here $\Phi(0) = (\Phi_\varrho(0), \Phi_{\varrho+1}(0), \Phi_{\varrho+2}(0))^\top$ and $\Gamma$ acts on $(\Phi_0, \ldots, \Phi_{\varrho-1})^\top$),
$$H\Phi(x) = \big(-a_0\Phi_0,\; -a_1\Phi_1,\; \ldots,\; -a_{\varrho-1}\Phi_{\varrho-1},\; B_\varrho\Phi_\varrho(x),\; B_{\varrho+1}\Phi_{\varrho+1}(x),\; B_{\varrho+2}\Phi_{\varrho+2}(x)\big)^\top,$$
$$M\Phi(x) = \big(\mu_1\Phi_1,\; r_0\Phi_0 + \mu_2\Phi_2,\; r_1\Phi_1 + \mu_3\Phi_3,\; \ldots,\; r_{\varrho-3}\Phi_{\varrho-3} + \mu_{\varrho-1}\Phi_{\varrho-1},\; r_{\varrho-2}\Phi_{\varrho-2},\; 0,\; 0,\; 0\big)^\top,$$
$$S\Phi(x) = \Big(\sum_{j=\varrho}^{\varrho+2}\phi_j\Phi_j,\; 0,\; \ldots,\; 0\Big)^\top, \qquad D(M) = D(S) = X.$$
Then, (1)–(8) can be written as an abstract Cauchy problem in $X$:
$$\frac{d\Phi(t)}{dt} = (H + M + S)\Phi(t), \quad t \in (0,\infty), \qquad \Phi(0) = (1, 0, 0, \ldots, 0)^\top. \quad (9)$$

3. Well-Posedness of (9)

Theorem 1. 
$H + M + S$ generates a positive contraction $C_0$-semigroup $T(t)$ when $\overline{\mu} = \sup_{x\in[0,\infty)}\mu_j(x) < \infty$.
The proof of Theorem 1 is omitted.
The dual space of X is given by
$$X^* = \left\{Q \;\middle|\; Q \in \mathbb{R}^\varrho \times L^\infty[0,\infty) \times L^\infty[0,\infty) \times L^\infty[0,\infty),\; \|Q\| = \max\Big\{\max_{0\le i\le\varrho-1}|Q_i|,\; \max_{\varrho\le j\le\varrho+2}\|Q_j\|_{L^\infty[0,\infty)}\Big\}\right\}.$$
Obviously, $X^*$ is a Banach space. Let
$$W = \big\{\Phi \in X \mid \Phi(x) = (\Phi_0, \Phi_1, \ldots, \Phi_{\varrho-1}, \Phi_\varrho(x), \Phi_{\varrho+1}(x), \Phi_{\varrho+2}(x)),\; \Phi_i \ge 0,\; 0 \le i \le \varrho-1;\; \Phi_j(x) \ge 0,\; x \in [0,\infty)\big\}.$$
Then, $W \subset X$, and Theorem 1 ensures that $T(t)W \subseteq W$. For $\Phi \in D(H) \cap W$, take $Q(x) = \|\Phi\|(1, 1, \ldots, 1)$; then, $Q \in X^*$, and
$$\begin{aligned}
\langle(H+M+S)\Phi, Q\rangle &= \Big[-a_0\Phi_0 + \mu_1\Phi_1 + \sum_{j=\varrho}^{\varrho+2}\int_0^\infty\Phi_j(x)\mu_j(x)\,dx\Big]\|\Phi\| + \sum_{i=1}^{\varrho-2}\big[r_{i-1}\Phi_{i-1} - a_i\Phi_i + \mu_{i+1}\Phi_{i+1}\big]\|\Phi\| \\
&\quad + \big[r_{\varrho-2}\Phi_{\varrho-2} - a_{\varrho-1}\Phi_{\varrho-1}\big]\|\Phi\| + \sum_{j=\varrho}^{\varrho+2}\int_0^\infty\Big[-\frac{d\Phi_j(x)}{dx} - \mu_j(x)\Phi_j(x)\Big]\|\Phi\|\,dx \\
&= -a_0\Phi_0\|\Phi\| + \mu_1\Phi_1\|\Phi\| + \sum_{j=\varrho}^{\varrho+2}\|\Phi\|\int_0^\infty\Phi_j(x)\mu_j(x)\,dx + \sum_{i=1}^{\varrho-2}\|\Phi\|r_{i-1}\Phi_{i-1} - \sum_{i=1}^{\varrho-2}\|\Phi\|a_i\Phi_i + \sum_{i=1}^{\varrho-2}\|\Phi\|\mu_{i+1}\Phi_{i+1} \\
&\quad + r_{\varrho-2}\Phi_{\varrho-2}\|\Phi\| - a_{\varrho-1}\Phi_{\varrho-1}\|\Phi\| + \|\Phi\|\big[\Phi_\varrho(0) + \Phi_{\varrho+1}(0) + \Phi_{\varrho+2}(0)\big] - \|\Phi\|\sum_{j=\varrho}^{\varrho+2}\int_0^\infty\Phi_j(x)\mu_j(x)\,dx \\
&= -\sum_{i=0}^{\varrho-1}a_i\Phi_i\|\Phi\| + \sum_{i=0}^{\varrho-1}r_i\Phi_i\|\Phi\| + \sum_{i=1}^{\varrho-1}\mu_i\Phi_i\|\Phi\| + \sum_{i=0}^{\varrho-1}\lambda_{ci}\Phi_i\|\Phi\| + \sum_{i=0}^{\varrho-1}\lambda_{hi}\Phi_i\|\Phi\| = 0. \quad (10)
\end{aligned}$$
In (10), we use $\Phi_j \in L^1[0,\infty) \Rightarrow \Phi_j(\infty) = 0$.
Equation (10) implies that $H+M+S$ is a conservative operator with respect to the set $\Xi(\Phi) = \{Q \in X^* \mid \langle\Phi, Q\rangle = \|\Phi\|^2 = \|Q\|^2\}$.
Since $\Phi(0) \in D(H^2) \cap W$, by using the Fattorini theorem [18], we obtain the following result.
Theorem 2. 
$T(t)$ is isometric with respect to $\Phi(0)$, i.e.,
$$\|T(t)\Phi(0)\| = \|\Phi(0)\|, \quad t \in [0,\infty). \quad (11)$$
We can obtain the system’s well-posedness from Theorems 1 and 2.
Theorem 3. 
If $\overline{\mu} = \sup_{x\in[0,\infty)}\mu_j(x) < \infty$, then (9) has a unique positive TDS $\Phi(x,t)$ satisfying
$$\|\Phi(\cdot, t)\| = 1, \quad t \in [0,\infty).$$

4. Spectrum of the Operator H + M + S

Lemma 1. 
If $0 \le \underline{\mu_j} \le \mu_j(x) \le \overline{\mu_j} < \infty$, then $H+M+S$ has at most finitely many eigenvalues in $\{\eta \in \mathbb{C} \mid -\min\{\underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\} < \Re\eta \le 0\}$; the geometric multiplicity of each eigenvalue is one, and 0 is a strictly dominant eigenvalue.
Proof.  
Consider $(H+M+S)\Phi = \eta\Phi$, i.e.,
$$(a_0+\eta)\Phi_0 - \mu_1\Phi_1 = \sum_{j=\varrho}^{\varrho+2}\int_0^\infty\Phi_j(x)\mu_j(x)\,dx, \quad (12)$$
$$(a_i+\eta)\Phi_i = r_{i-1}\Phi_{i-1} + \mu_{i+1}\Phi_{i+1}, \quad i = 1, 2, \ldots, \varrho-2, \quad (13)$$
$$(a_{\varrho-1}+\eta)\Phi_{\varrho-1} = r_{\varrho-2}\Phi_{\varrho-2}, \quad (14)$$
$$\frac{d\Phi_j(x)}{dx} = -(\eta + \mu_j(x))\Phi_j(x), \quad j = \varrho, \varrho+1, \varrho+2, \quad (15)$$
$$\Phi_\varrho(0) = r_{\varrho-1}\Phi_{\varrho-1}, \quad (16)$$
$$\Phi_{\varrho+1}(0) = \sum_{i=0}^{\varrho-1}\lambda_{ci}\Phi_i, \quad (17)$$
$$\Phi_{\varrho+2}(0) = \sum_{i=0}^{\varrho-1}\lambda_{hi}\Phi_i. \quad (18)$$
By solving (15), we have
$$\Phi_j(x) = \Phi_j(0)e_j(x), \quad j = \varrho, \varrho+1, \varrho+2. \quad (19)$$
By substituting (16)–(18) into (19), we deduce
$$\Phi_\varrho(x) = r_{\varrho-1}\Phi_{\varrho-1}e_\varrho(x), \quad (20)$$
$$\Phi_{\varrho+1}(x) = \sum_{i=0}^{\varrho-1}\lambda_{ci}\Phi_i\, e_{\varrho+1}(x), \quad (21)$$
$$\Phi_{\varrho+2}(x) = \sum_{i=0}^{\varrho-1}\lambda_{hi}\Phi_i\, e_{\varrho+2}(x). \quad (22)$$
From (13) and (14), we obtain
$$\begin{pmatrix}
a_1+\eta & -\mu_2 & 0 & \cdots & 0 & 0 \\
-r_1 & a_2+\eta & -\mu_3 & \cdots & 0 & 0 \\
0 & -r_2 & a_3+\eta & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & -r_{\varrho-2} & a_{\varrho-1}+\eta
\end{pmatrix}
\begin{pmatrix}\Phi_1 \\ \Phi_2 \\ \Phi_3 \\ \vdots \\ \Phi_{\varrho-1}\end{pmatrix}
= \begin{pmatrix}r_0\Phi_0 \\ 0 \\ \vdots \\ 0\end{pmatrix}.$$
By Cramer’s rule, we have
$$\Phi_i = \frac{|U_i|}{|U|}\Phi_0, \quad i = 1, 2, \ldots, \varrho-1. \quad (23)$$
Here, $U$ is the coefficient matrix of the above equations, and $U_i$ is the matrix in which the $i$th column of $U$ is replaced by the vector
$$(r_0, 0, 0, \ldots, 0)^\top.$$
By inserting (20)–(23) into (12), we obtain
$$I(\eta)\Phi_0 = 0, \quad (24)$$
where
$$I(\eta) = a_0 + \eta - \mu_1\frac{|U_1|}{|U|} - r_{\varrho-1}\frac{|U_{\varrho-1}|}{|U|}\phi_\varrho e_\varrho(x) - \sum_{i=0}^{\varrho-1}\lambda_{ci}\frac{|U_i|}{|U|}\phi_{\varrho+1}e_{\varrho+1}(x) - \sum_{i=0}^{\varrho-1}\lambda_{hi}\frac{|U_i|}{|U|}\phi_{\varrho+2}e_{\varrho+2}(x), \quad (25)$$
and $U_0 = U$.
If $\Phi_0 = 0$, then (20)–(23) imply $\Phi_i = 0$ $(i = 0, 1, \ldots, \varrho-1)$ and $\Phi_j(x) = 0$ $(j = \varrho, \varrho+1, \varrho+2)$; that is to say, $\Phi(x) = (0, 0, \ldots, 0)$. Thus, $\eta$ is not an eigenvalue of $H+M+S$.
If $\Phi_0 \ne 0$, then (24) gives
$$I(\eta) = 0. \quad (26)$$
That is to say,
$$I(\eta) = 0 \iff \Phi_0 \ne 0.$$
By (20)–(23) and the condition of Lemma 1, we estimate
$$\begin{aligned}
\|\Phi\| &= |\Phi_0| + |\Phi_1| + \cdots + |\Phi_{\varrho-1}| + \sum_{j=\varrho}^{\varrho+2}\|\Phi_j\|_{L^1[0,\infty)} \\
&= \Big(1 + \sum_{i=1}^{\varrho-1}\frac{|U_i|}{|U|}\Big)|\Phi_0| + \int_0^\infty r_{\varrho-1}|\Phi_{\varrho-1}|e_\varrho(x)\,dx + \int_0^\infty\sum_{i=0}^{\varrho-1}\lambda_{ci}|\Phi_i|e_{\varrho+1}(x)\,dx + \int_0^\infty\sum_{i=0}^{\varrho-1}\lambda_{hi}|\Phi_i|e_{\varrho+2}(x)\,dx \\
&\le \Big(1 + \sum_{i=1}^{\varrho-1}\frac{|U_i|}{|U|}\Big)|\Phi_0| + r_{\varrho-1}\frac{|U_{\varrho-1}|}{|U|}|\Phi_0|\int_0^\infty e^{-(\Re\eta+\underline{\mu_\varrho})x}dx + \sum_{i=0}^{\varrho-1}\lambda_{ci}\frac{|U_i|}{|U|}|\Phi_0|\int_0^\infty e^{-(\Re\eta+\underline{\mu_{\varrho+1}})x}dx \\
&\quad + \sum_{i=0}^{\varrho-1}\lambda_{hi}\frac{|U_i|}{|U|}|\Phi_0|\int_0^\infty e^{-(\Re\eta+\underline{\mu_{\varrho+2}})x}dx \\
&= |\Phi_0|\bigg[1 + \sum_{i=1}^{\varrho-1}\frac{|U_i|}{|U|} + \frac{r_{\varrho-1}|U_{\varrho-1}|}{|U|(\Re\eta+\underline{\mu_\varrho})} + \frac{\sum_{i=0}^{\varrho-1}\lambda_{ci}|U_i|}{|U|(\Re\eta+\underline{\mu_{\varrho+1}})} + \frac{\sum_{i=0}^{\varrho-1}\lambda_{hi}|U_i|}{|U|(\Re\eta+\underline{\mu_{\varrho+2}})}\bigg]. \quad (27)
\end{aligned}$$
By (26) and (27), it is not difficult to see that all zeros of $I(\eta)$ in
$$\Delta = \{\eta \in \mathbb{C} \mid -\min\{\underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\} < \Re\eta \le 0\}$$
are eigenvalues of $H+M+S$. Since $I(\eta)$ is analytic in $\Delta$, it follows from the zero-point theorem for analytic functions that $I(\eta)$ has at most countably many isolated zeros in $\Delta$.
In the following, we sharpen this result. Suppose $I(\eta)$ has infinitely many zeros $\eta_l = \alpha_l + i\beta_l \in \Delta$, $\alpha_l \in (-\min\{\underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\}, 0]$, $\beta_l \in \mathbb{R}$. By the Bolzano–Weierstrass theorem, there is a convergent subsequence; without loss of generality, take $\eta_k = \alpha_k + i\beta_k$ such that $\lim_{k\to\infty}\alpha_k = \alpha \in (-\min\{\underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\}, 0]$, $\lim_{k\to\infty}|\beta_k| = \infty$, and $I(\eta_k) = 0$, $k \ge 1$. By inserting $\eta_k = \alpha_k + i\beta_k$ into (25) and using $e^{-i\beta_kx} = \cos(\beta_kx) - i\sin(\beta_kx)$, separating the real and imaginary parts of $I(\eta_k) = 0$ gives
$$\begin{aligned}
a_0 + \alpha_k - \mu_1\frac{|U_1|}{|U|} &- r_{\varrho-1}\frac{|U_{\varrho-1}|}{|U|}\int_0^\infty\mu_\varrho(x)e^{-\alpha_kx-\int_0^x\mu_\varrho(\tau)d\tau}\cos(\beta_kx)\,dx \\
&- \sum_{i=0}^{\varrho-1}\lambda_{ci}\frac{|U_i|}{|U|}\int_0^\infty\mu_{\varrho+1}(x)e^{-\alpha_kx-\int_0^x\mu_{\varrho+1}(\tau)d\tau}\cos(\beta_kx)\,dx \\
&- \sum_{i=0}^{\varrho-1}\lambda_{hi}\frac{|U_i|}{|U|}\int_0^\infty\mu_{\varrho+2}(x)e^{-\alpha_kx-\int_0^x\mu_{\varrho+2}(\tau)d\tau}\cos(\beta_kx)\,dx = 0, \quad (28)
\end{aligned}$$
$$\begin{aligned}
\beta_k &+ r_{\varrho-1}\frac{|U_{\varrho-1}|}{|U|}\int_0^\infty\mu_\varrho(x)e^{-\alpha_kx-\int_0^x\mu_\varrho(\tau)d\tau}\sin(\beta_kx)\,dx + \sum_{i=0}^{\varrho-1}\lambda_{ci}\frac{|U_i|}{|U|}\int_0^\infty\mu_{\varrho+1}(x)e^{-\alpha_kx-\int_0^x\mu_{\varrho+1}(\tau)d\tau}\sin(\beta_kx)\,dx \\
&+ \sum_{i=0}^{\varrho-1}\lambda_{hi}\frac{|U_i|}{|U|}\int_0^\infty\mu_{\varrho+2}(x)e^{-\alpha_kx-\int_0^x\mu_{\varrho+2}(\tau)d\tau}\sin(\beta_kx)\,dx = 0. \quad (29)
\end{aligned}$$
By $\eta_k \in \Delta$ and the Riemann–Lebesgue lemma, we know that
$$\lim_{k\to\infty}\int_0^\infty\mu_j(x)e^{-\alpha_kx-\int_0^x\mu_j(\tau)d\tau}\cos(\beta_kx)\,dx = 0, \quad (30)$$
$$\lim_{k\to\infty}\int_0^\infty\mu_j(x)e^{-\alpha_kx-\int_0^x\mu_j(\tau)d\tau}\sin(\beta_kx)\,dx = 0. \quad (31)$$
From (30) and (31), taking the limit $k\to\infty$ in (29), we obtain $\lim_{k\to\infty}\beta_k = 0$. Obviously, this contradicts $\lim_{k\to\infty}|\beta_k| = \infty$. Therefore, $I(\eta)$ has at most finitely many zeros in $\Delta$; in other words, $H+M+S$ has at most finitely many eigenvalues in $\Delta$. Furthermore, according to (20)–(23), the eigenvectors corresponding to each eigenvalue $\eta$ span a one-dimensional linear space. That is, the geometric multiplicity of each eigenvalue is one. □
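To make the lemma concrete, the sketch below evaluates $I(\eta)$ numerically in the constant-repair-rate case, where $\phi_je_j = \mu_j/(\eta+\mu_j)$, checks that $I(0) \approx 0$, and scans $(-\min\{\mu_\varrho, \mu_{\varrho+1}, \mu_{\varrho+2}\}, 0)$ for the remaining real zeros. All parameter values are our own illustrative assumptions for $m = n = 2$.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative rates for m = n = 2 (rho = 4); all values are assumptions.
r = np.array([0.3, 0.3, 0.4, 0.5]); mu = np.array([0.0, 1.0, 1.0, 1.2])
lam_c = np.full(4, 0.05); lam_h = np.full(4, 0.08)
mu_f = np.array([2.0, 2.5, 2.2])        # constant mu_rho, mu_rho+1, mu_rho+2
a = r + mu + lam_c + lam_h              # a_i; note a_0 has no mu term (mu[0] = 0)
rho = 4

def I(eta):
    """Characteristic function I(eta) of (25) for constant repair rates,
    where phi_j(e_j) = mu_j / (eta + mu_j)."""
    U = np.diag(a[1:] + eta) - np.diag(mu[2:], 1) - np.diag(r[1:-1], -1)
    detU = np.linalg.det(U)
    ratios = np.empty(rho); ratios[0] = 1.0          # |U_0|/|U| = 1
    for i in range(1, rho):
        Ui = U.copy(); Ui[:, i - 1] = 0.0; Ui[0, i - 1] = r[0]
        ratios[i] = np.linalg.det(Ui) / detU          # Cramer's rule, eq. (23)
    phi = mu_f / (eta + mu_f)
    return (a[0] + eta - mu[1] * ratios[1]
            - r[-1] * ratios[-1] * phi[0]
            - (lam_c @ ratios) * phi[1] - (lam_h @ ratios) * phi[2])

print(I(0.0))                                         # ~ 0, confirming I(0) = 0
grid = np.linspace(-min(mu_f) + 1e-6, -1e-6, 2000)    # scan Delta on the real axis
vals = np.array([I(e) for e in grid])
roots = [brentq(I, grid[k], grid[k + 1])
         for k in range(len(grid) - 1) if vals[k] * vals[k + 1] < 0]
print(roots)                                          # real eigenvalues in Delta
```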
Remark 1. 
It is not difficult to prove that $I(0) = 0$. Hence, 0 is an eigenvalue of $H+M+S$ with a geometric multiplicity of one. Because $H+M+S$ has finitely many eigenvalues in Δ and the real parts of all its non-zero eigenvalues are strictly less than 0, 0 is a strictly dominant eigenvalue of $H+M+S$.
Proof. 
When $\eta = 0$, noting that $\phi_je_j(x)\big|_{\eta=0} = \int_0^\infty\mu_j(x)e^{-\int_0^x\mu_j(\tau)d\tau}dx = 1$, we have
$$\begin{aligned}
I(0) &= a_0 - \mu_1\frac{|U_1^0|}{|U^0|} - r_{\varrho-1}\frac{|U_{\varrho-1}^0|}{|U^0|} - \sum_{i=0}^{\varrho-1}\lambda_{ci}\frac{|U_i^0|}{|U^0|} - \sum_{i=0}^{\varrho-1}\lambda_{hi}\frac{|U_i^0|}{|U^0|} \\
&= a_0 - \lambda_{c0} - \lambda_{h0} - \sum_{i=2}^{\varrho-2}(a_i - r_i - \mu_i)\frac{|U_i^0|}{|U^0|} - (a_1 - r_1)\frac{|U_1^0|}{|U^0|} - (a_{\varrho-1} - \mu_{\varrho-1})\frac{|U_{\varrho-1}^0|}{|U^0|}, \quad (32)
\end{aligned}$$
where
$$U^0 = \begin{pmatrix}
a_1 & -\mu_2 & 0 & \cdots & 0 & 0 \\
-r_1 & a_2 & -\mu_3 & \cdots & 0 & 0 \\
0 & -r_2 & a_3 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & -r_{\varrho-2} & a_{\varrho-1}
\end{pmatrix},$$
and $U_i^0$ denotes the matrix in which the $i$th column of $U^0$ is replaced by the vector
$$(r_0, 0, 0, \ldots, 0)^\top.$$
According to the properties of the determinant, we obtain
$$r_{\varrho-2}\frac{|U_{\varrho-2}^0|}{|U^0|} - a_{\varrho-1}\frac{|U_{\varrho-1}^0|}{|U^0|} = 0, \qquad r_{i-1}\frac{|U_{i-1}^0|}{|U^0|} - a_i\frac{|U_i^0|}{|U^0|} + \mu_{i+1}\frac{|U_{i+1}^0|}{|U^0|} = 0, \quad i = 2, 3, \ldots, \varrho-2.$$
Then, (32) becomes
$$I(0) = a_0 - \lambda_{c0} - \lambda_{h0} - \Big(a_1\frac{|U_1^0|}{|U^0|} - \mu_2\frac{|U_2^0|}{|U^0|}\Big) = a_0 - \lambda_{c0} - \lambda_{h0} - r_0 = 0. \;\square$$
Lemma 2. 
$(H+M+S)^*$ is given by
$$(H+M+S)^*Q = (C+N+V)Q, \quad Q \in D(C),$$
where
$$CQ(x) = \Big(-a_0Q_0,\; -a_1Q_1,\; \ldots,\; -a_{\varrho-1}Q_{\varrho-1},\; \big(\tfrac{d}{dx}-\mu_\varrho(x)\big)Q_\varrho(x),\; \big(\tfrac{d}{dx}-\mu_{\varrho+1}(x)\big)Q_{\varrho+1}(x),\; \big(\tfrac{d}{dx}-\mu_{\varrho+2}(x)\big)Q_{\varrho+2}(x)\Big)^\top,$$
$$NQ(x) = \big(r_0Q_1,\; r_1Q_2,\; \ldots,\; r_{\varrho-2}Q_{\varrho-1},\; 0,\; \mu_\varrho(x)Q_0,\; \mu_{\varrho+1}(x)Q_0,\; \mu_{\varrho+2}(x)Q_0\big)^\top,$$
$$VQ(x) = \big(\lambda_{c0}Q_{\varrho+1}(0) + \lambda_{h0}Q_{\varrho+2}(0),\; \mu_1Q_0 + \lambda_{c1}Q_{\varrho+1}(0) + \lambda_{h1}Q_{\varrho+2}(0),\; \ldots,\; \mu_{\varrho-1}Q_{\varrho-2} + r_{\varrho-1}Q_\varrho(0) + \lambda_{c(\varrho-1)}Q_{\varrho+1}(0) + \lambda_{h(\varrho-1)}Q_{\varrho+2}(0),\; 0,\; 0,\; 0\big)^\top,$$
$$D(C) = \left\{Q \in X^* \;\middle|\; \frac{dQ_j(x)}{dx}\ \text{exists and}\ Q_j(\infty) = \epsilon\ \text{(a constant)},\ j = \varrho, \varrho+1, \varrho+2\right\}.$$
Lemma 3. 
If $0 \le \underline{\mu_j} \le \mu_j(x) \le \overline{\mu_j} < \infty$, then $(H+M+S)^*$ has at most finitely many eigenvalues in $\{\eta \in \mathbb{C} \mid -\min\{\underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\} < \Re\eta \le 0\}$, the geometric multiplicity of each eigenvalue is one, and 0 is a strictly dominant eigenvalue.
Proof. 
Consider $(H+M+S)^*Q = \eta Q$, i.e.,
$$(a_0+\eta)Q_0 = \lambda_{c0}Q_{\varrho+1}(0) + \lambda_{h0}Q_{\varrho+2}(0) + r_0Q_1, \quad (33)$$
$$(a_i+\eta)Q_i = r_iQ_{i+1} + \mu_iQ_{i-1} + \lambda_{ci}Q_{\varrho+1}(0) + \lambda_{hi}Q_{\varrho+2}(0), \quad i = 1, 2, \ldots, \varrho-2, \quad (34)$$
$$(a_{\varrho-1}+\eta)Q_{\varrho-1} = \mu_{\varrho-1}Q_{\varrho-2} + r_{\varrho-1}Q_\varrho(0) + \lambda_{c(\varrho-1)}Q_{\varrho+1}(0) + \lambda_{h(\varrho-1)}Q_{\varrho+2}(0), \quad (35)$$
$$\frac{dQ_j(x)}{dx} = (\eta + \mu_j(x))Q_j(x) - \mu_j(x)Q_0, \quad j = \varrho, \varrho+1, \varrho+2, \quad (36)$$
$$Q_j(\infty) = \epsilon. \quad (37)$$
By solving (36), we deduce
$$Q_j(x) = Q_j(0)e^{\eta x + \int_0^x\mu_j(\xi)d\xi} - e^{\eta x + \int_0^x\mu_j(\xi)d\xi}\int_0^x e^{-\eta\tau - \int_0^\tau\mu_j(\xi)d\xi}\mu_j(\tau)Q_0\,d\tau. \quad (38)$$
Multiplying both sides of (38) by $e_j(x)$, taking the limit $x\to\infty$, and using (37), we obtain
$$Q_j(0) = Q_0\int_0^\infty e^{-\eta\tau - \int_0^\tau\mu_j(\xi)d\xi}\mu_j(\tau)\,d\tau = Q_0\,\phi_je_j(\tau). \quad (39)$$
According to (34) and (35), we can obtain
$$(a_1+\eta)Q_1 - r_1Q_2 = \lambda_{c1}Q_{\varrho+1}(0) + \lambda_{h1}Q_{\varrho+2}(0) + \mu_1Q_0, \quad (40)$$
$$-\mu_2Q_1 + (a_2+\eta)Q_2 - r_2Q_3 = \lambda_{c2}Q_{\varrho+1}(0) + \lambda_{h2}Q_{\varrho+2}(0), \quad (41)$$
$$\cdots$$
$$-\mu_{\varrho-2}Q_{\varrho-3} + (a_{\varrho-2}+\eta)Q_{\varrho-2} - r_{\varrho-2}Q_{\varrho-1} = \lambda_{c(\varrho-2)}Q_{\varrho+1}(0) + \lambda_{h(\varrho-2)}Q_{\varrho+2}(0), \quad (42)$$
$$-\mu_{\varrho-1}Q_{\varrho-2} + (a_{\varrho-1}+\eta)Q_{\varrho-1} = r_{\varrho-1}Q_\varrho(0) + \lambda_{c(\varrho-1)}Q_{\varrho+1}(0) + \lambda_{h(\varrho-1)}Q_{\varrho+2}(0). \quad (43)$$
The above equations are written in matrix form as
$$\begin{pmatrix}
a_1+\eta & -r_1 & 0 & \cdots & 0 & 0 \\
-\mu_2 & a_2+\eta & -r_2 & \cdots & 0 & 0 \\
0 & -\mu_3 & a_3+\eta & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & -\mu_{\varrho-1} & a_{\varrho-1}+\eta
\end{pmatrix}
\begin{pmatrix}Q_1 \\ Q_2 \\ Q_3 \\ \vdots \\ Q_{\varrho-1}\end{pmatrix}
= \begin{pmatrix}\kappa_1Q_0 \\ \kappa_2Q_0 \\ \kappa_3Q_0 \\ \vdots \\ \kappa_{\varrho-1}Q_0\end{pmatrix},$$
where
$$\kappa_1 = \lambda_{c1}\phi_{\varrho+1}e_{\varrho+1}(\tau) + \lambda_{h1}\phi_{\varrho+2}e_{\varrho+2}(\tau) + \mu_1, \qquad \kappa_i = \lambda_{ci}\phi_{\varrho+1}e_{\varrho+1}(\tau) + \lambda_{hi}\phi_{\varrho+2}e_{\varrho+2}(\tau), \quad i = 2, 3, \ldots, \varrho-2,$$
$$\kappa_{\varrho-1} = r_{\varrho-1}\phi_\varrho e_\varrho(\tau) + \lambda_{c(\varrho-1)}\phi_{\varrho+1}e_{\varrho+1}(\tau) + \lambda_{h(\varrho-1)}\phi_{\varrho+2}e_{\varrho+2}(\tau).$$
Thus, by Cramer’s rule, we have
$$Q_i = \frac{|D_i|}{|D|}Q_0, \quad i = 1, 2, \ldots, \varrho-1. \quad (44)$$
Here, $D$ is the coefficient matrix of the above equations, and $D_i$ is the matrix in which the $i$th column of $D$ is replaced by the vector
$$(\kappa_1, \kappa_2, \kappa_3, \ldots, \kappa_{\varrho-1})^\top.$$
We can substitute (44) into (33) to obtain
$$(a_0+\eta)Q_0 = \lambda_{c0}Q_0\phi_{\varrho+1}e_{\varrho+1}(\tau) + \lambda_{h0}Q_0\phi_{\varrho+2}e_{\varrho+2}(\tau) + r_0\frac{|D_1|}{|D|}Q_0,$$
i.e.,
$$\Big[a_0 + \eta - r_0\frac{|D_1|}{|D|} - \lambda_{c0}\phi_{\varrho+1}e_{\varrho+1}(\tau) - \lambda_{h0}\phi_{\varrho+2}e_{\varrho+2}(\tau)\Big]Q_0 = 0. \quad (45)$$
If $Q_0 = 0$, then (38) and (44) imply $Q_i = 0$ $(i = 1, 2, \ldots, \varrho-1)$ and $Q_j(x) = 0$ $(j = \varrho, \varrho+1, \varrho+2)$; that is, $Q(x) = (0, 0, \ldots, 0)$, which indicates that $\eta$ is not an eigenvalue of $(H+M+S)^*$.
If $Q_0 \ne 0$, then
$$B(\eta) := a_0 + \eta - r_0\frac{|D_1|}{|D|} - \lambda_{c0}\phi_{\varrho+1}e_{\varrho+1}(\tau) - \lambda_{h0}\phi_{\varrho+2}e_{\varrho+2}(\tau) = 0. \quad (46)$$
Now, we prove that $I(\eta) = B(\eta)$. Because $|U| = |D|$ (the two coefficient matrices are transposes of each other) and
$$B(\eta) = \frac{1}{|D|}\begin{vmatrix}
a_0+\eta-\lambda_{c0}\phi_{\varrho+1}e_{\varrho+1}(\tau)-\lambda_{h0}\phi_{\varrho+2}e_{\varrho+2}(\tau) & r_0 & 0 & \cdots & 0 \\
\kappa_1 & a_1+\eta & -r_1 & \cdots & 0 \\
\kappa_2 & -\mu_2 & a_2+\eta & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
\kappa_{\varrho-1} & 0 & 0 & \cdots & a_{\varrho-1}+\eta
\end{vmatrix},$$
we have
$$\begin{aligned}
I(\eta) &= a_0 + \eta - \lambda_{c0}\phi_{\varrho+1}e_{\varrho+1}(x) - \lambda_{h0}\phi_{\varrho+2}e_{\varrho+2}(x) - \mu_1\frac{|U_1|}{|U|} - r_{\varrho-1}\frac{|U_{\varrho-1}|}{|U|}\phi_\varrho e_\varrho(x) \\
&\quad - \sum_{i=1}^{\varrho-1}\lambda_{ci}\frac{|U_i|}{|U|}\phi_{\varrho+1}e_{\varrho+1}(x) - \sum_{i=1}^{\varrho-1}\lambda_{hi}\frac{|U_i|}{|U|}\phi_{\varrho+2}e_{\varrho+2}(x) \\
&= a_0 + \eta - \lambda_{c0}\phi_{\varrho+1}e_{\varrho+1}(x) - \lambda_{h0}\phi_{\varrho+2}e_{\varrho+2}(x) - \big(\mu_1 + \lambda_{c1}\phi_{\varrho+1}e_{\varrho+1}(x) + \lambda_{h1}\phi_{\varrho+2}e_{\varrho+2}(x)\big)\frac{|U_1|}{|U|} \\
&\quad - \sum_{i=2}^{\varrho-2}\big(\lambda_{ci}\phi_{\varrho+1}e_{\varrho+1}(x) + \lambda_{hi}\phi_{\varrho+2}e_{\varrho+2}(x)\big)\frac{|U_i|}{|U|} - \big(r_{\varrho-1}\phi_\varrho e_\varrho(x) + \lambda_{c(\varrho-1)}\phi_{\varrho+1}e_{\varrho+1}(x) + \lambda_{h(\varrho-1)}\phi_{\varrho+2}e_{\varrho+2}(x)\big)\frac{|U_{\varrho-1}|}{|U|} \\
&= a_0 + \eta - \lambda_{c0}\phi_{\varrho+1}e_{\varrho+1}(x) - \lambda_{h0}\phi_{\varrho+2}e_{\varrho+2}(x) - \frac{\kappa_1|U_1| + \sum_{i=2}^{\varrho-2}\kappa_i|U_i| + \kappa_{\varrho-1}|U_{\varrho-1}|}{|U|} = B(\eta).
\end{aligned}$$
Therefore, (46) is equivalent to $I(\eta) = 0$; that is to say,
$$I(\eta) = 0 \iff Q_0 \ne 0. \quad (47)$$
According to (38) and (39), we can estimate (assuming $\Re\eta + \underline{\mu_j} > 0$)
$$\begin{aligned}
\|Q_j\|_{L^\infty[0,\infty)} &\le \sup_{x\in[0,\infty)}\Big|Q_0\,e^{\eta x + \int_0^x\mu_j(\tau)d\tau}\int_x^\infty\mu_j(\zeta)e^{-\eta\zeta - \int_0^\zeta\mu_j(\tau)d\tau}\,d\zeta\Big| \\
&\le |Q_0|\sup_{x\in[0,\infty)}\int_x^\infty\mu_j(\zeta)e^{-\Re\eta(\zeta-x) - \int_x^\zeta\mu_j(\tau)d\tau}\,d\zeta \\
&\le |Q_0|\sup_{x\in[0,\infty)}\int_x^\infty\overline{\mu_j}\,e^{-(\Re\eta+\underline{\mu_j})(\zeta-x)}\,d\zeta = \frac{|Q_0|\,\overline{\mu_j}}{\Re\eta + \underline{\mu_j}}. \quad (48)
\end{aligned}$$
Combining (48) with (44), we obtain
$$\|Q\| = \max\big\{|Q_0|, |Q_1|, \ldots, |Q_{\varrho-1}|, \|Q_\varrho\|_{L^\infty[0,\infty)}, \|Q_{\varrho+1}\|_{L^\infty[0,\infty)}, \|Q_{\varrho+2}\|_{L^\infty[0,\infty)}\big\} \le |Q_0|\max\Big\{1, \frac{|D_1|}{|D|}, \ldots, \frac{|D_{\varrho-1}|}{|D|}, \frac{\overline{\mu_\varrho}}{\Re\eta+\underline{\mu_\varrho}}, \frac{\overline{\mu_{\varrho+1}}}{\Re\eta+\underline{\mu_{\varrho+1}}}, \frac{\overline{\mu_{\varrho+2}}}{\Re\eta+\underline{\mu_{\varrho+2}}}\Big\}. \quad (49)$$
(47)–(49) imply that all zeros of $B(\eta)$ in
$$\Delta = \{\eta \in \mathbb{C} \mid -\min\{\underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\} < \Re\eta \le 0\}$$
are eigenvalues of $(H+M+S)^*$. Because $B(\eta)$ is analytic in $\Delta$, we know from the zero-point theorem for analytic functions that $B(\eta)$ has at most countably many isolated zeros in $\Delta$. By the same argument as in Lemma 1, $B(\eta)$ has at most finitely many zeros in $\Delta$; in other words, $(H+M+S)^*$ has at most finitely many eigenvalues in $\Delta$. □
Remark 2. 
According to $a_0 = r_0 + \lambda_{c0} + \lambda_{h0}$, we can obtain $B(0) = 0$. Thus, 0 is an eigenvalue of $(H+M+S)^*$ with a geometric multiplicity of one.

5. Asymptotic Behavior of the TDS of (9)

From Section 3, the operator $H$ also generates a positive contraction $C_0$-semigroup $T_0(t)$. In this section, we first prove that $T_0(t)$ is quasi-compact. Since $M$ and $S$ are compact operators, the semigroup $T(t)$ generated by $H+M+S$ is then a quasi-compact $C_0$-semigroup by the perturbation theory of quasi-compact operators. Next, we prove that $T(t)$ converges exponentially to a projection operator and provide a concrete expression for this projection. From this, we obtain that the TDS of (9) converges exponentially to its steady-state solution.
Proposition 1. 
Let
$$\frac{d\Phi(t)}{dt} = H\Phi(t), \quad t \in (0,\infty), \qquad \Phi(0) = u(x) \in D(H). \quad (50)$$
If $\Phi(x,t) = (T_0(t)u)(x)$ is a solution of (50), then
$$\Phi(x,t) = (T_0(t)u)(x) = \begin{cases}
\big(u_0e^{-a_0t},\; u_1e^{-a_1t},\; \ldots,\; u_{\varrho-1}e^{-a_{\varrho-1}t},\; \Phi_\varrho(0,t-x)e^{-\int_0^x\mu_\varrho(\zeta)d\zeta},\\
\quad \Phi_{\varrho+1}(0,t-x)e^{-\int_0^x\mu_{\varrho+1}(\zeta)d\zeta},\; \Phi_{\varrho+2}(0,t-x)e^{-\int_0^x\mu_{\varrho+2}(\zeta)d\zeta}\big), & x < t, \\[1mm]
\big(u_0e^{-a_0t},\; u_1e^{-a_1t},\; \ldots,\; u_{\varrho-1}e^{-a_{\varrho-1}t},\; u_\varrho(x-t)e^{-\int_{x-t}^x\mu_\varrho(\zeta)d\zeta},\\
\quad u_{\varrho+1}(x-t)e^{-\int_{x-t}^x\mu_{\varrho+1}(\zeta)d\zeta},\; u_{\varrho+2}(x-t)e^{-\int_{x-t}^x\mu_{\varrho+2}(\zeta)d\zeta}\big), & x > t, \end{cases} \quad (51)$$
where $\Phi_j(0, t-x)$ is determined by the boundary conditions (5)–(7).
Theorem 4. 
If $\mu_j(x)$ is Lipschitz continuous and satisfies $0 \le \underline{\mu_j} \le \mu_j(x) \le \overline{\mu_j} < \infty$, then $T_0(t)$ is a quasi-compact $C_0$-semigroup on $X$.
Proof. 
First of all, we define two operators for $\psi \in X$:
$$(V(t)\psi)(x) = \begin{cases} 0, & x \in [0,t), \\ (T_0(t)\psi)(x), & x \in [t,\infty), \end{cases} \qquad (W(t)\psi)(x) = \begin{cases} (T_0(t)\psi)(x), & x \in [0,t), \\ 0, & x \in [t,\infty). \end{cases} \quad (52)$$
Obviously,
$$T_0(t)\psi = V(t)\psi + W(t)\psi, \quad \psi \in X.$$
From Theorem 1.35 in [19] and the definition of $W(t)$, we only need to verify Condition 1 of Theorem 1.35 in [19]. For $u \in D(H)$, we set $\Phi(x,t) = (T_0(t)u)(x)$; then $\Phi(x,t)$ is a solution of (50). Therefore, according to Proposition 1, we have, for $x \in [0,t)$, $h \in [0,t)$, $x+h \in [0,t)$,
$$\begin{aligned}
\sum_{j=\varrho}^{\varrho+2}\int_0^t|\Phi_j(x+h,t) - \Phi_j(x,t)|\,dx &= \sum_{j=\varrho}^{\varrho+2}\int_0^t\Big|\Phi_j(0,\varpi)e^{-\int_0^{x+h}\mu_j(\tau)d\tau} - \Phi_j(0,\varsigma)e^{-\int_0^x\mu_j(\tau)d\tau}\Big|\,dx \\
&\le \sum_{j=\varrho}^{\varrho+2}\bigg[\int_0^t|\Phi_j(0,\varpi)|\Big|e^{-\int_0^{x+h}\mu_j(\tau)d\tau} - e^{-\int_0^x\mu_j(\tau)d\tau}\Big|\,dx \\
&\qquad\quad + \int_0^t|\Phi_j(0,\varpi) - \Phi_j(0,\varsigma)|\,e^{-\int_0^x\mu_j(\tau)d\tau}\,dx\bigg], \quad (53)
\end{aligned}$$
where $\varpi = t-x-h$ and $\varsigma = t-x$.
In the following, we estimate each term in (53). According to the properties of the semigroup and the boundary conditions, we have
$$|\Phi_\varrho(0,\varpi)| = r_{\varrho-1}|\Phi_{\varrho-1}(\varpi)| \le r_{\varrho-1}\|\Phi(\cdot,\varpi)\|_X = r_{\varrho-1}\|T_0(\varpi)u(\cdot)\|_X \le r_{\varrho-1}\|u\|_X, \quad (54)$$
$$|\Phi_{\varrho+1}(0,\varpi)| \le \max_{0\le i\le\varrho-1}\{\lambda_{ci}\}\sum_{i=0}^{\varrho-1}|\Phi_i(\varpi)| \le \max_{0\le i\le\varrho-1}\{\lambda_{ci}\}\|u\|_X, \quad (55)$$
$$|\Phi_{\varrho+2}(0,\varpi)| \le \max_{0\le i\le\varrho-1}\{\lambda_{hi}\}\sum_{i=0}^{\varrho-1}|\Phi_i(\varpi)| \le \max_{0\le i\le\varrho-1}\{\lambda_{hi}\}\|u\|_X. \quad (56)$$
From (54)–(56), we can estimate the first, third, and fifth terms of (53):
$$\int_0^t|\Phi_\varrho(0,\varpi)|\Big|e^{-\int_0^{x+h}\mu_\varrho(\tau)d\tau} - e^{-\int_0^x\mu_\varrho(\tau)d\tau}\Big|\,dx \le r_{\varrho-1}\|u\|_X\int_0^t\Big|e^{-\int_0^{x+h}\mu_\varrho(\tau)d\tau} - e^{-\int_0^x\mu_\varrho(\tau)d\tau}\Big|\,dx \to 0, \quad (57)$$
$$\int_0^t|\Phi_{\varrho+1}(0,\varpi)|\Big|e^{-\int_0^{x+h}\mu_{\varrho+1}(\tau)d\tau} - e^{-\int_0^x\mu_{\varrho+1}(\tau)d\tau}\Big|\,dx \le \max_{0\le i\le\varrho-1}\{\lambda_{ci}\}\|u\|_X\int_0^t\Big|e^{-\int_0^{x+h}\mu_{\varrho+1}(\tau)d\tau} - e^{-\int_0^x\mu_{\varrho+1}(\tau)d\tau}\Big|\,dx \to 0, \quad (58)$$
$$\int_0^t|\Phi_{\varrho+2}(0,\varpi)|\Big|e^{-\int_0^{x+h}\mu_{\varrho+2}(\tau)d\tau} - e^{-\int_0^x\mu_{\varrho+2}(\tau)d\tau}\Big|\,dx \le \max_{0\le i\le\varrho-1}\{\lambda_{hi}\}\|u\|_X\int_0^t\Big|e^{-\int_0^{x+h}\mu_{\varrho+2}(\tau)d\tau} - e^{-\int_0^x\mu_{\varrho+2}(\tau)d\tau}\Big|\,dx \to 0, \quad (59)$$
as $|h| \to 0$, uniformly for $u$.
Using the boundary condition and Proposition 1, we obtain
$$|\Phi_\varrho(0,\varpi) - \Phi_\varrho(0,\varsigma)| = r_{\varrho-1}|u_{\varrho-1}|\,\big|e^{-a_{\varrho-1}\varpi} - e^{-a_{\varrho-1}\varsigma}\big| \le r_{\varrho-1}\|u\|_X\big|e^{-a_{\varrho-1}\varpi} - e^{-a_{\varrho-1}\varsigma}\big| \to 0, \quad (60)$$
$$|\Phi_{\varrho+1}(0,\varpi) - \Phi_{\varrho+1}(0,\varsigma)| \le \sum_{i=0}^{\varrho-1}\lambda_{ci}|u_i|\,\big|e^{-a_i\varpi} - e^{-a_i\varsigma}\big| \le \|u\|_X\sum_{i=0}^{\varrho-1}\lambda_{ci}\big|e^{-a_i\varpi} - e^{-a_i\varsigma}\big| \to 0, \quad (61)$$
$$|\Phi_{\varrho+2}(0,\varpi) - \Phi_{\varrho+2}(0,\varsigma)| \le \sum_{i=0}^{\varrho-1}\lambda_{hi}|u_i|\,\big|e^{-a_i\varpi} - e^{-a_i\varsigma}\big| \le \|u\|_X\sum_{i=0}^{\varrho-1}\lambda_{hi}\big|e^{-a_i\varpi} - e^{-a_i\varsigma}\big| \to 0, \quad (62)$$
as $|h| \to 0$, uniformly for $u$.
Combining (57)–(62) with (53), we obtain
$$\sum_{j=\varrho}^{\varrho+2}\int_0^t|\Phi_j(x+h,t) - \Phi_j(x,t)|\,dx \to 0 \quad \text{as } |h| \to 0, \text{ uniformly for } u. \quad (63)$$
If $h \in (-t, 0)$ and $x \in [0, -h)$, then $x+h < 0$ and $\Phi_j(x+h,t) = 0$. Thus,
$$\begin{aligned}
\sum_{j=\varrho}^{\varrho+2}\int_0^t|\Phi_j(x+h,t) - \Phi_j(x,t)|\,dx &= \int_0^{-h}|\Phi_\varrho(x,t)|\,dx + \int_{-h}^t|\Phi_\varrho(x+h,t) - \Phi_\varrho(x,t)|\,dx \\
&\quad + \int_0^{-h}|\Phi_{\varrho+1}(x,t)|\,dx + \int_{-h}^t|\Phi_{\varrho+1}(x+h,t) - \Phi_{\varrho+1}(x,t)|\,dx \\
&\quad + \int_0^{-h}|\Phi_{\varrho+2}(x,t)|\,dx + \int_{-h}^t|\Phi_{\varrho+2}(x+h,t) - \Phi_{\varrho+2}(x,t)|\,dx. \quad (64)
\end{aligned}$$
For $x \in [-h, t)$ (so that $x+h \in [0,t)$), in the same way as (63), the second, fourth, and sixth terms in (64) satisfy
$$\int_{-h}^t|\Phi_j(x+h,t) - \Phi_j(x,t)|\,dx \to 0. \quad (65)$$
In the following, we estimate the other three terms in (64); by using Proposition 1 and (54)–(56), we obtain
$$\int_0^{-h}|\Phi_\varrho(x,t)|\,dx \le r_{\varrho-1}\|u\|_X\int_0^{-h}e^{-\int_0^x\mu_\varrho(\tau)d\tau}\,dx \to 0, \quad (66)$$
$$\int_0^{-h}|\Phi_{\varrho+1}(x,t)|\,dx \le \max_{0\le i\le\varrho-1}\{\lambda_{ci}\}\|u\|_X\int_0^{-h}e^{-\int_0^x\mu_{\varrho+1}(\tau)d\tau}\,dx \to 0, \quad (67)$$
$$\int_0^{-h}|\Phi_{\varrho+2}(x,t)|\,dx \le \max_{0\le i\le\varrho-1}\{\lambda_{hi}\}\|u\|_X\int_0^{-h}e^{-\int_0^x\mu_{\varrho+2}(\tau)d\tau}\,dx \to 0. \quad (68)$$
Hence, when $h \in (-t, 0)$, $x \in [0,t)$, $x+h \in [0,t)$, we also have, by (65)–(68),
$$\sum_{j=\varrho}^{\varrho+2}\int_0^t|\Phi_j(x+h,t) - \Phi_j(x,t)|\,dx \to 0 \quad (69)$$
as $|h| \to 0$, uniformly for $u$. Therefore, (63) and (69) show that $W(t)$ is a compact operator.
Now, by the definition of V ( t ) , we have, for u X ,
$$\begin{aligned}
\|V(t)u(\cdot)\| &\le |u_0|e^{-a_0t} + |u_1|e^{-a_1t} + \cdots + |u_{\varrho-1}|e^{-a_{\varrho-1}t} + \sup_{x\in[t,\infty)}e^{-\int_{x-t}^x\mu_\varrho(\tau)d\tau}\int_t^\infty|u_\varrho(x-t)|\,dx \\
&\quad + \sup_{x\in[t,\infty)}e^{-\int_{x-t}^x\mu_{\varrho+1}(\tau)d\tau}\int_t^\infty|u_{\varrho+1}(x-t)|\,dx + \sup_{x\in[t,\infty)}e^{-\int_{x-t}^x\mu_{\varrho+2}(\tau)d\tau}\int_t^\infty|u_{\varrho+2}(x-t)|\,dx \\
&\le |u_0|e^{-a_0t} + |u_1|e^{-a_1t} + \cdots + |u_{\varrho-1}|e^{-a_{\varrho-1}t} + e^{-\underline{\mu_\varrho}t}\int_t^\infty|u_\varrho(x-t)|\,dx \\
&\quad + e^{-\underline{\mu_{\varrho+1}}t}\int_t^\infty|u_{\varrho+1}(x-t)|\,dx + e^{-\underline{\mu_{\varrho+2}}t}\int_t^\infty|u_{\varrho+2}(x-t)|\,dx \\
&\le e^{-\min\{a_0, a_1, \ldots, a_{\varrho-1}, \underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\}t}\,\|u\|_X.
\end{aligned}$$
From these results, we obtain
$$0 \le \|T_0(t) - W(t)\| = \|V(t)\| \le e^{-\min\{a_0, a_1, \ldots, a_{\varrho-1}, \underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\}t} \to 0, \quad t \to \infty.$$
Combining this with the definition of a quasi-compact operator (see Gupur [19], Definition 1.85), we obtain that $T_0(t)$ is a quasi-compact $C_0$-semigroup on $X$. Obviously, $M: X \to \mathbb{R}^{m+n+3}$ and $S: X \to \mathbb{R}^{m+n+3}$ are compact operators on $X$ (see Gupur [19], Definition 1.7). According to this result and Proposition 2.9 in Nagel [20], we obtain the following corollary. □
Corollary 1. 
If the conditions of Theorem 4 hold, then $T(t)$ is a quasi-compact $C_0$-semigroup on $X$.
By Lemmas 1 and 3, we know that the algebraic multiplicity of 0 is one.
Thus, according to Theorem 1, Lemma 1, Lemma 3, and Corollary 1, together with Theorem 1.90 in [19], the following result is concluded.
Theorem 5. 
If $\mu_j(x)$ is Lipschitz continuous and satisfies $0 < \underline{\mu_j} \le \mu_j(x) \le \overline{\mu_j} < \infty$, then there are a positive projection operator $P$ and appropriate constants $\omega > 0$, $\bar{B} > 0$ such that
$$\|T(t) - P\| \le \bar{B}e^{-\omega t},$$
where $P = \frac{1}{2\pi i}\oint_{\bar\Lambda}(zI - H - M - S)^{-1}\,dz$ and $\bar\Lambda$ is a circle of sufficiently small radius centered at 0.
From Theorem 1, Lemma 1, and Corollary 1, we have
$$\{\eta \in \sigma(H+M+S) \mid \Re\eta = 0\} = \{0\}.$$
This means that 0 is the only spectral point of $H+M+S$ on the imaginary axis.
Next, we calculate the explicit expression of the projection operator $P$.
Theorem 6. 
If the conditions of Theorem 4 hold, then the TDS of (9) converges exponentially to its steady-state solution, i.e.,
$$\|\Phi(\cdot,t) - \Phi(\cdot)\| \le \bar{B}e^{-\omega t}, \quad t \ge 0.$$
Proof. 
Theorems 4 and 5 imply
$$\|T_0(t) - W(t)\| = \|V(t)\| \le e^{-\min\{a_0, a_1, \ldots, a_{\varrho-1}, \underline{\mu_j}\}t} \;\Longrightarrow\; \ln\|T_0(t) - W(t)\| \le -\min\{a_0, a_1, \ldots, a_{\varrho-1}, \underline{\mu_j}\}t \;\Longrightarrow\; \frac{\ln\|T_0(t) - W(t)\|}{t} \le -\min\{a_0, a_1, \ldots, a_{\varrho-1}, \underline{\mu_j}\}.$$
From this, together with Proposition 2.10 of Engel and Nagel [21], we have that $\omega_{ess}(T_0(t))$ (i.e., $\omega_{ess}(H)$), the essential growth bound of $T_0(t)$ (i.e., of $H$), satisfies
$$\omega_{ess}(T_0(t)) \le -\min\{a_0, a_1, \ldots, a_{\varrho-1}, \underline{\mu_j}\}.$$
Since $M: X \to \mathbb{R}^{m+n+3}$ and $S: X \to \mathbb{R}^{m+n+3}$ are compact operators, by Proposition 2.12 in [21], we have
$$\omega_{ess}(H+M+S) = \omega_{ess}(T(t)) = \omega_{ess}(T_0(t)) \le -\min\{a_0, a_1, \ldots, a_{\varrho-1}, \underline{\mu_j}\}.$$
Using this result and combining it with Theorem 4 and Corollary 2.11 of Engel and Nagel [21], we obtain that 0 is a pole of $(\eta I - H - M - S)^{-1}$ of order one. Therefore, from Theorem 5 and the residue theorem, we have
$$Py(x) = \frac{1}{2\pi i}\oint_{\bar\Lambda}(zI - H - M - S)^{-1}y(x)\,dz = \lim_{\eta\to 0}\eta(\eta I - H - M - S)^{-1}y(x).$$
To calculate this limit, we need to give the expression of ( η I H M S ) 1 .
For $y \in X$, consider the equation $(\eta I - H - M - S)\Phi = y$; that is,
$$(\eta + a_0)\Phi_0 - \mu_1\Phi_1 - \sum_{j=\varrho}^{\varrho+2}\int_0^\infty\Phi_j(x)\mu_j(x)\,dx = y_0, \quad (70)$$
$$-r_{i-1}\Phi_{i-1} + (\eta + a_i)\Phi_i - \mu_{i+1}\Phi_{i+1} = y_i, \quad i = 1, 2, \ldots, \varrho-2, \quad (71)$$
$$-r_{\varrho-2}\Phi_{\varrho-2} + (\eta + a_{\varrho-1})\Phi_{\varrho-1} = y_{\varrho-1}, \quad (72)$$
$$\frac{d\Phi_j(x)}{dx} = -(\eta + \mu_j(x))\Phi_j(x) + y_j(x), \quad j = \varrho, \varrho+1, \varrho+2, \quad (73)$$
$$\Phi_\varrho(0) = r_{\varrho-1}\Phi_{\varrho-1}, \quad (74)$$
$$\Phi_{\varrho+1}(0) = \sum_{i=0}^{\varrho-1}\lambda_{ci}\Phi_i, \quad (75)$$
$$\Phi_{\varrho+2}(0) = \sum_{i=0}^{\varrho-1}\lambda_{hi}\Phi_i. \quad (76)$$
By solving (73), we have
$$\Phi_j(x) = \Phi_j(0)e_j(x) + E_jy_j(x). \quad (77)$$
Hence,
$$\phi_j\Phi_j(x) = \int_0^\infty\Phi_j(0)\mu_j(x)e_j(x)\,dx + \int_0^\infty\mu_j(x)E_jy_j(x)\,dx = \Phi_j(0)\,\phi_je_j(x) + \phi_jE_jy_j(x). \quad (78)$$
Notice that
$$\phi_je_j(x) = 1 - \eta\int_0^\infty e_j(x)\,dx, \quad (79)$$
and set
$$\alpha_0 = \int_0^\infty e_\varrho(x)\,dx, \quad \beta_0 = 1 - \eta\alpha_0, \qquad \alpha_1 = \int_0^\infty e_{\varrho+1}(x)\,dx, \quad \beta_1 = 1 - \eta\alpha_1, \qquad \alpha_2 = \int_0^\infty e_{\varrho+2}(x)\,dx, \quad \beta_2 = 1 - \eta\alpha_2.$$
Then, by substituting (79) into (70), we can derive
$$\begin{aligned}
&\big[\eta + (1-\beta_1-\beta_2)a_0 + (\beta_1+\beta_2)r_0 + \beta_1\lambda_{h0} + \beta_2\lambda_{c0}\big]\Phi_0 \\
&\quad + \big[(\beta_1+\beta_2)(r_1 - a_1) + (\beta_1+\beta_2-1)\mu_1 + \beta_1\lambda_{h1} + \beta_2\lambda_{c1}\big]\Phi_1 \\
&\quad + \sum_{i=2}^{\varrho-2}\big[(\beta_1+\beta_2)(r_i + \mu_i - a_i) + \beta_1\lambda_{hi} + \beta_2\lambda_{ci}\big]\Phi_i \\
&\quad + \big[(\beta_1+\beta_2-\beta_0)r_{\varrho-1} + (\beta_1+\beta_2)(\mu_{\varrho-1} - a_{\varrho-1}) + \beta_1\lambda_{h(\varrho-1)} + \beta_2\lambda_{c(\varrho-1)}\big]\Phi_{\varrho-1} \\
&= y_0 + \sum_{j=\varrho}^{\varrho+2}\phi_jE_jy_j(x). \quad (80)
\end{aligned}$$
Furthermore, (80), (71), and (72) can be written in matrix form as
$$\begin{pmatrix}
\vartheta_0 & \vartheta_1 & \vartheta_2 & \cdots & \vartheta_{\varrho-2} & \vartheta_{\varrho-1} \\
-r_0 & \eta+a_1 & -\mu_2 & \cdots & 0 & 0 \\
0 & -r_1 & \eta+a_2 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & \eta+a_{\varrho-2} & -\mu_{\varrho-1} \\
0 & 0 & 0 & \cdots & -r_{\varrho-2} & \eta+a_{\varrho-1}
\end{pmatrix}
\begin{pmatrix}\Phi_0 \\ \Phi_1 \\ \Phi_2 \\ \vdots \\ \Phi_{\varrho-2} \\ \Phi_{\varrho-1}\end{pmatrix}
= \begin{pmatrix}y_0 + \sum_{j=\varrho}^{\varrho+2}\phi_jE_jy_j(x) \\ y_1 \\ y_2 \\ \vdots \\ y_{\varrho-2} \\ y_{\varrho-1}\end{pmatrix}, \quad (81)$$
where
$$\begin{aligned}
\vartheta_0 &= \eta + (1-\beta_1-\beta_2)a_0 + (\beta_1+\beta_2)r_0 + \beta_1\lambda_{h0} + \beta_2\lambda_{c0}, \\
\vartheta_1 &= (\beta_1+\beta_2)(r_1 - a_1) + (\beta_1+\beta_2-1)\mu_1 + \beta_1\lambda_{h1} + \beta_2\lambda_{c1}, \\
\vartheta_i &= (\beta_1+\beta_2)(r_i + \mu_i - a_i) + \beta_1\lambda_{hi} + \beta_2\lambda_{ci}, \quad i = 2, \ldots, \varrho-2, \\
\vartheta_{\varrho-1} &= (\beta_1+\beta_2-\beta_0)r_{\varrho-1} + (\beta_1+\beta_2)(\mu_{\varrho-1} - a_{\varrho-1}) + \beta_1\lambda_{h(\varrho-1)} + \beta_2\lambda_{c(\varrho-1)}.
\end{aligned}$$
According to Cramer’s rule,
$$\Phi_0 = \frac{|F_0(\eta)|}{|F(\eta)|}, \quad \Phi_1 = \frac{|F_1(\eta)|}{|F(\eta)|}, \quad \ldots, \quad \Phi_{\varrho-1} = \frac{|F_{\varrho-1}(\eta)|}{|F(\eta)|}. \quad (82)$$
Here, $F(\eta)$ is the coefficient matrix of (81), and $F_i(\eta)$ is the matrix in which the $(i+1)$th column of $F(\eta)$ is replaced by the vector
$$\Big(y_0 + \sum_{j=\varrho}^{\varrho+2}\phi_jE_jy_j(x),\; y_1,\; \ldots,\; y_{\varrho-2},\; y_{\varrho-1}\Big)^\top.$$
Simplifying F ( η ) yields
$$|F(\eta)| = \eta\begin{vmatrix}
\Theta_0 & \Theta_1 & \Theta_2 & \cdots & \Theta_{\varrho-2} & \Theta_{\varrho-1} \\
-r_0 & \eta+a_1 & -\mu_2 & \cdots & 0 & 0 \\
0 & -r_1 & \eta+a_2 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & \eta+a_{\varrho-2} & -\mu_{\varrho-1} \\
0 & 0 & 0 & \cdots & -r_{\varrho-2} & \eta+a_{\varrho-1}
\end{vmatrix},$$
where
$$\begin{aligned}
\Theta_0 &= 1 + (\alpha_1+\alpha_2)a_0 - \alpha_1\lambda_{h0} - \alpha_2\lambda_{c0}, \\
\Theta_1 &= 1 - (\alpha_1+\alpha_2)(\eta + \mu_1) - \alpha_1\lambda_{h1} - \alpha_2\lambda_{c1}, \\
\Theta_i &= 1 - (\alpha_1+\alpha_2)\eta - \alpha_1\lambda_{hi} - \alpha_2\lambda_{ci}, \quad i = 2, \ldots, \varrho-2, \\
\Theta_{\varrho-1} &= 1 - (\alpha_1+\alpha_2)\eta - (\alpha_1+\alpha_2-\alpha_0)r_{\varrho-1} - \alpha_1\lambda_{h(\varrho-1)} - \alpha_2\lambda_{c(\varrho-1)}.
\end{aligned}$$
Therefore,
$$\Phi_0 = \frac{|F_0(\eta)|}{|F(\eta)|} = \frac{\begin{vmatrix}
d(\eta) & \Theta_1 & \Theta_2 & \cdots & \Theta_{\varrho-1} \\
y_1 & \eta+a_1 & -\mu_2 & \cdots & 0 \\
y_2 & -r_1 & \eta+a_2 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
y_{\varrho-1} & 0 & 0 & \cdots & \eta+a_{\varrho-1}
\end{vmatrix}}{\eta\begin{vmatrix}
\Theta_0 & \Theta_1 & \cdots & \Theta_{\varrho-1} \\
-r_0 & \eta+a_1 & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & \eta+a_{\varrho-1}
\end{vmatrix}}, \quad (83)$$
$$\Phi_1 = \frac{|F_1(\eta)|}{|F(\eta)|} = \frac{\begin{vmatrix}
\Theta_0 & d(\eta) & \Theta_2 & \cdots & \Theta_{\varrho-1} \\
-r_0 & y_1 & -\mu_2 & \cdots & 0 \\
0 & y_2 & \eta+a_2 & \cdots & 0 \\
\vdots & \vdots & & \ddots & \vdots \\
0 & y_{\varrho-1} & 0 & \cdots & \eta+a_{\varrho-1}
\end{vmatrix}}{\eta\begin{vmatrix}
\Theta_0 & \Theta_1 & \cdots & \Theta_{\varrho-1} \\
-r_0 & \eta+a_1 & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & \eta+a_{\varrho-1}
\end{vmatrix}}, \quad \ldots, \quad (84)$$
$$\Phi_{\varrho-1} = \frac{|F_{\varrho-1}(\eta)|}{|F(\eta)|} = \frac{\begin{vmatrix}
\Theta_0 & \Theta_1 & \cdots & \Theta_{\varrho-2} & d(\eta) \\
-r_0 & \eta+a_1 & \cdots & 0 & y_1 \\
0 & -r_1 & \cdots & 0 & y_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & -r_{\varrho-2} & y_{\varrho-1}
\end{vmatrix}}{\eta\begin{vmatrix}
\Theta_0 & \Theta_1 & \cdots & \Theta_{\varrho-1} \\
-r_0 & \eta+a_1 & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & \eta+a_{\varrho-1}
\end{vmatrix}}, \quad (85)$$
where $d(\eta) = y_0 + \sum_{j=\varrho}^{\varrho+2}\phi_jE_jy_j(x)$.
Substituting (83)–(85) into (74)–(76), respectively, and using (77), we derive
$$\Phi_\varrho(x) = \frac{|F_{\varrho-1}(\eta)|}{|F(\eta)|}r_{\varrho-1}e_\varrho(x) + E_\varrho y_\varrho(x), \quad (86)$$
$$\Phi_{\varrho+1}(x) = \sum_{i=0}^{\varrho-1}\frac{|F_i(\eta)|}{|F(\eta)|}\lambda_{ci}\,e_{\varrho+1}(x) + E_{\varrho+1}y_{\varrho+1}(x), \quad (87)$$
$$\Phi_{\varrho+2}(x) = \sum_{i=0}^{\varrho-1}\frac{|F_i(\eta)|}{|F(\eta)|}\lambda_{hi}\,e_{\varrho+2}(x) + E_{\varrho+2}y_{\varrho+2}(x). \quad (88)$$
Summing up, we have
$$Py(x) = \lim_{\eta\to 0}\eta\begin{pmatrix}
\frac{|F_0(\eta)|}{|F(\eta)|} \\ \frac{|F_1(\eta)|}{|F(\eta)|} \\ \vdots \\ \frac{|F_{\varrho-1}(\eta)|}{|F(\eta)|} \\ \frac{|F_{\varrho-1}(\eta)|}{|F(\eta)|}r_{\varrho-1}e_\varrho(x) + E_\varrho y_\varrho(x) \\ \sum_{i=0}^{\varrho-1}\frac{|F_i(\eta)|}{|F(\eta)|}\lambda_{ci}e_{\varrho+1}(x) + E_{\varrho+1}y_{\varrho+1}(x) \\ \sum_{i=0}^{\varrho-1}\frac{|F_i(\eta)|}{|F(\eta)|}\lambda_{hi}e_{\varrho+2}(x) + E_{\varrho+2}y_{\varrho+2}(x)
\end{pmatrix}
= \begin{pmatrix}
\frac{H_0}{H}b \\ \frac{H_1}{H}b \\ \vdots \\ \frac{H_{\varrho-1}}{H}b \\ \frac{H_\varrho}{H}e^{-\int_0^x\mu_\varrho(\xi)d\xi}\,b \\ \frac{H_{\varrho+1}}{H}e^{-\int_0^x\mu_{\varrho+1}(\xi)d\xi}\,b \\ \frac{H_{\varrho+2}}{H}e^{-\int_0^x\mu_{\varrho+2}(\xi)d\xi}\,b
\end{pmatrix}.$$
Here,
$$H = \begin{vmatrix}
\Upsilon_0 & \Upsilon_1 & \Upsilon_2 & \cdots & \Upsilon_{\varrho-2} & \Upsilon_{\varrho-1} \\
-r_0 & a_1 & -\mu_2 & \cdots & 0 & 0 \\
0 & -r_1 & a_2 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & a_{\varrho-2} & -\mu_{\varrho-1} \\
0 & 0 & 0 & \cdots & -r_{\varrho-2} & a_{\varrho-1}
\end{vmatrix}, \qquad
H_0 = \begin{vmatrix}
a_1 & -\mu_2 & \cdots & 0 & 0 \\
-r_1 & a_2 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & a_{\varrho-2} & -\mu_{\varrho-1} \\
0 & 0 & \cdots & -r_{\varrho-2} & a_{\varrho-1}
\end{vmatrix},$$
$$H_i = \begin{vmatrix}
a_{i+1} & -\mu_{i+2} & \cdots & 0 & 0 \\
-r_{i+1} & a_{i+2} & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & a_{\varrho-2} & -\mu_{\varrho-1} \\
0 & 0 & \cdots & -r_{\varrho-2} & a_{\varrho-1}
\end{vmatrix}\times\prod_{k=0}^{i-1}r_k, \quad i = 1, 2, \ldots, \varrho-2, \qquad H_{\varrho-1} = \prod_{k=0}^{\varrho-2}r_k, \qquad H_\varrho = \prod_{k=0}^{\varrho-1}r_k,$$
$$b = \sum_{i=0}^{\varrho-1}y_i + \sum_{j=\varrho}^{\varrho+2}\int_0^\infty y_j(\tau)\,d\tau,$$
$$\Upsilon_0 = 1 + (l_1+l_2)a_0 - l_1\lambda_{h0} - l_2\lambda_{c0}, \quad \Upsilon_1 = 1 - (l_1+l_2)\mu_1 - l_1\lambda_{h1} - l_2\lambda_{c1}, \quad \Upsilon_i = 1 - l_1\lambda_{hi} - l_2\lambda_{ci}, \; i = 2, \ldots, \varrho-2,$$
$$\Upsilon_{\varrho-1} = 1 - (l_1+l_2-l_0)r_{\varrho-1} - l_1\lambda_{h(\varrho-1)} - l_2\lambda_{c(\varrho-1)},$$
$$l_0 = \int_0^\infty e^{-\int_0^x\mu_\varrho(\xi)d\xi}\,dx, \quad l_1 = \int_0^\infty e^{-\int_0^x\mu_{\varrho+1}(\xi)d\xi}\,dx, \quad l_2 = \int_0^\infty e^{-\int_0^x\mu_{\varrho+2}(\xi)d\xi}\,dx,$$
$$H_{\varrho+1} = \begin{vmatrix}
\lambda_{c0} & \lambda_{c1} & \cdots & \lambda_{c(\varrho-2)} & \lambda_{c(\varrho-1)} \\
-r_0 & a_1 & \cdots & 0 & 0 \\
0 & -r_1 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & -r_{\varrho-2} & a_{\varrho-1}
\end{vmatrix}, \qquad
H_{\varrho+2} = \begin{vmatrix}
\lambda_{h0} & \lambda_{h1} & \cdots & \lambda_{h(\varrho-2)} & \lambda_{h(\varrho-1)} \\
-r_0 & a_1 & \cdots & 0 & 0 \\
0 & -r_1 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & -r_{\varrho-2} & a_{\varrho-1}
\end{vmatrix}.$$
In particular, for $\Phi(0) = (1, 0, \ldots, 0)^\top$ (so that $b = 1$), we obtain
$$P\Phi(0) = \begin{pmatrix}
\frac{H_0}{H} \\ \frac{H_1}{H} \\ \vdots \\ \frac{H_{\varrho-1}}{H} \\ \frac{H_\varrho}{H}e^{-\int_0^x\mu_\varrho(\xi)d\xi} \\ \frac{H_{\varrho+1}}{H}e^{-\int_0^x\mu_{\varrho+1}(\xi)d\xi} \\ \frac{H_{\varrho+2}}{H}e^{-\int_0^x\mu_{\varrho+2}(\xi)d\xi}
\end{pmatrix} = \Phi(x). \quad (89)$$
According to Theorem 3, (89), and Theorem 5, we have
$$\|\Phi(\cdot,t) - \Phi(\cdot)\| = \|T(t)\Phi(0) - P\Phi(0)\| \le \|T(t) - P\|\,\|\Phi(0)\| \le \bar{B}e^{-\omega t}\|\Phi(0)\| = \bar{B}e^{-\omega t}, \quad t \ge 0. \;\square$$
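In the Markovian special case, the steady-state solution that the TDS converges to can be cross-checked directly as the normalized null vector of the transposed generator matrix. The sketch below does this, reusing the hypothetical `generator` function and the illustrative rates from the earlier sketches; it is an assumption-laden illustration, not the paper's method.

```python
import numpy as np
from scipy.linalg import null_space

# Q from the generator() sketch in Section 2, with the illustrative rates from
# the I(eta) sketch; constant repair rates collapse Phi_j(x) to scalars.
Q = generator(2, 2, r, mu, lam_c, lam_h, mu_f)
p = null_space(Q.T)[:, 0]          # stationary equation Q^T p = 0
p /= p.sum()                       # normalize to total probability one
print("steady-state availability:", p[:4].sum())   # operable states 0..rho-1
```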

6. Asymptotic Expression of the TDS of (9)

Firstly, we prove that the algebraic multiplicity of each eigenvalue of $H+M+S$ in $\Delta$ is one. Suppose, on the contrary, that some eigenvalue $\eta_k$ of $H+M+S$ has an algebraic multiplicity greater than one [22]. Without loss of generality, suppose that its algebraic multiplicity equals two; then,
$$[\eta_kI - (H+M+S)]\tilde\Phi^{(k)} = \Phi^{(k)} \quad (90)$$
has a solution $\tilde\Phi^{(k)} \in D(H)$, where $\Phi^{(k)}$ is the eigenvector in Lemma 1, namely $[\eta_kI - (H+M+S)]\Phi^{(k)} = 0$. Applying $Q^{(k)}$ to both sides of (90), where $Q^{(k)}$ is the eigenvector in Lemma 3, i.e., $[\eta_kI - (H+M+S)^*]Q^{(k)} = 0$, we deduce
$$\langle[\eta_kI - (H+M+S)]\tilde\Phi^{(k)}, Q^{(k)}\rangle = \langle\Phi^{(k)}, Q^{(k)}\rangle \;\Longrightarrow\; \langle\tilde\Phi^{(k)}, [\eta_kI - (H+M+S)^*]Q^{(k)}\rangle = \langle\Phi^{(k)}, Q^{(k)}\rangle \;\Longrightarrow\; 0 = \langle\Phi^{(k)}, Q^{(k)}\rangle, \quad (91)$$
which contradicts $\langle\Phi^{(k)}, Q^{(k)}\rangle \ne 0$. Therefore, the algebraic multiplicity of each eigenvalue of $H+M+S$ in $\Delta$ is one.
Without loss of generality, suppose that there are $s+1$ real eigenvalues of $H+M+S$ in $\Delta$, namely
$$\eta_k \in \Delta = \{\eta \in \mathbb{C} \mid -\min\{\underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\} < \Re\eta \le 0\}, \quad k = 0, 1, \ldots, s,$$
$$-\min\{\underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\} < \eta_s < \eta_{s-1} < \cdots < \eta_1 < \eta_0 = 0.$$
Thus, combining Theorem 1.89 in [19] (see also Nagel [20]) with Theorem 1, we obtain
$$\Phi(x,t) = T(t)\Phi(0) = \sum_{k=0}^sT_k(t)\Phi(0) + R_s(t)\Phi(0), \quad (92)$$
$$T_k(t)\Phi(0) = e^{\eta_kt}P_k\Phi(0), \quad k = 0, 1, \ldots, s, \quad (93)$$
$$P_k\Phi(0) = \frac{1}{2\pi i}\oint_{\Gamma_k}(\eta I - H - M - S)^{-1}\Phi(0)\,d\eta, \quad k = 0, 1, \ldots, s, \quad (94)$$
$$\|R_s(t)\| \le Be^{-\omega t}, \quad B > 0, \; \omega > 0.$$
Here, $\Gamma_k$ is a circle with a sufficiently small radius centered at $\eta_k$ $(k = 0, 1, \ldots, s)$. Since the algebraic multiplicity of $\eta_k$ is one, $\eta_k$ is a pole of $(\eta I - H - M - S)^{-1}$ of order one. Thus, we know by the residue theorem that
$$P_k\Phi(0) = \lim_{\eta\to\eta_k}(\eta - \eta_k)(\eta I - H - M - S)^{-1}\Phi(0). \quad (95)$$
The expression of $(\eta I - H - M - S)^{-1}$ is given as follows:
$$(\eta I - H - M - S)^{-1}(y_0, y_1, \ldots, y_{\varrho+2})^\top = (\Phi_0, \Phi_1, \ldots, \Phi_{\varrho+2})^\top, \quad y \in X, \quad (96)$$
where $\Phi_i$ $(i = 0, 1, \ldots, \varrho+2)$ are given by (83)–(88). By (95) and (96), we can determine all $P_k\Phi(0)$ $(k = 0, 1, \ldots, s)$. In particular,
$$P_0\Phi(0) = \langle\Phi(0), Q^*\rangle\tilde\Phi(x), \quad (97)$$
where $\tilde\Phi(x)$ and $Q^*$ satisfy $(H+M+S)\tilde\Phi(x) = 0$, $(H+M+S)^*Q^* = 0$, and $\langle\tilde\Phi, Q^*\rangle = 1$.
Finally, we deduce the following main results.
Theorem 7. 
If $0 < \underline{\mu_j} \le \mu_j(x) \le \overline{\mu_j} < \infty$ and the $\mu_j(x)$ are Lipschitz continuous, then the TDS of (9) can be written as
$$\Phi(x,t) = \langle\Phi(0), Q^*\rangle\tilde\Phi(x) + \sum_{k=1}^se^{\eta_kt}\lim_{\eta\to\eta_k}(\eta - \eta_k)(\eta I - H - M - S)^{-1}\Phi(0) + R_s(t)\Phi(0), \quad \|R_s(t)\| \le Be^{-\omega t}, \; B > 0, \; \omega > 0,$$
where the $\eta_k$ $(k = 1, \ldots, s)$ are the non-zero isolated eigenvalues of $H+M+S$ in $\{\eta \in \mathbb{C} \mid -\min\{\underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\} < \Re\eta \le 0\}$.

7. Numerical Results

In this section, we discuss some reliability indices of the system through specific examples, such as the system availability $A(t)$, the reliability $R(t)$, and the $MTTF$, and analyze the impact of changes in the system parameters on these indices. First of all, without loss of generality, let us consider a system with two active units and two standby units, i.e., $m = 2$ and $n = 2$, and assume that the repair time of the system is gamma-distributed; for $\beta = 1$, the repair rate is constant, i.e., $\mu_j(x) = \mu_j$. The influence of parameter changes on the instantaneous reliability indices of the system is discussed below.
In Figure 2, we describe the influence of different values of $\beta$ on the instantaneous availability of the system over time $t$ ($\beta$ is a parameter of the gamma distribution). It is easy to see from Figure 2 that $A(t)$ decreases rapidly as time increases and, after the system runs for a long time, stabilizes at a fixed value.
In the following, we assume $\beta = 1$ (i.e., the repair time of the system is exponentially distributed) and continue to discuss the influence of different $\lambda_{c0}$ and $\lambda_{h0}$ on the instantaneous availability of the system.
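The following sketch reproduces the flavor of this experiment for the exponential case $\beta = 1$, integrating the Kolmogorov forward equations $d\Phi/dt = Q^\top\Phi$ built from the hypothetical `generator` sketch of Section 2; it is an illustration under assumed parameter values, not the paper's computation.

```python
import numpy as np
from scipy.integrate import solve_ivp

Q = generator(2, 2, r, mu, lam_c, lam_h, mu_f)  # m = n = 2, beta = 1 case
p0 = np.zeros(Q.shape[0]); p0[0] = 1.0          # Phi(0) = (1, 0, ..., 0)

sol = solve_ivp(lambda t, p: Q.T @ p, (0.0, 50.0), p0,
                t_eval=np.linspace(0.0, 50.0, 201))
A = sol.y[:4].sum(axis=0)     # A(t) = total probability of the operable states
print(A[0], A[-1])            # starts at 1 and levels off at the steady-state value
```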
Figure 3a,b show that $A(t)$ decreases with increases in $\lambda_{c0}$ and $\lambda_{h0}$. In addition, as time goes to infinity, the instantaneous availability of the system converges to a certain value.
Figure 4 reveals the effect of different values of $\mu_4$ on the instantaneous availability of the system. It is not difficult to find that $A(t)$ increases as $\mu_4$ increases.
Figure 5 indicates the effects of $\lambda_{c0}$ and $\lambda_{h0}$ on the system reliability and the mean time to failure ($MTTF$). We note that $R(t)$ (Figure 5a) decreases as $\lambda_{c0}$ increases, and the $MTTF$ (Figure 5b) decreases as $\lambda_{h0}$ increases. Obviously, the reliability vanishes as time goes to infinity.
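Reliability and $MTTF$ can be sketched in the same setting: making the three failed states absorbing leaves the operable-state sub-generator $T$, for which $R(t) = \alpha^\top e^{Tt}\mathbf{1}$ and $MTTF = -\alpha^\top T^{-1}\mathbf{1}$ (a standard phase-type identity). The restriction to the first four states below assumes the state ordering of our earlier sketches.

```python
import numpy as np
from scipy.linalg import expm

T = Q[:4, :4]                        # sub-generator of the operable states 0..3
alpha = np.zeros(4); alpha[0] = 1.0  # the system starts in state 0

R = lambda t: alpha @ expm(T * t) @ np.ones(4)   # reliability R(t)
mttf = -alpha @ np.linalg.solve(T, np.ones(4))   # MTTF = -alpha^T T^{-1} 1
print(R(1.0), mttf)
```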
Obviously, similar conclusions can be drawn for the system's failure frequency $m_f(t)$ and renewal frequency $m_r(t)$. Therefore, it can be seen from the above figures and discussion that, as time tends to infinity, the instantaneous reliability indices of the system tend to constant values, which verifies the main results obtained in Section 5.

8. Conclusions

In this paper, we studied the dynamic solution of a human–machine system with human error and common-cause failure. Starting from theory, we used the semigroup theory of functional analysis to model the system: the integro-differential equations were transformed into an abstract Cauchy problem in a Banach space. We then proved the well-posedness of (9), studied the asymptotic behavior of its time-dependent solution, showed that the time-dependent solution converges exponentially to its steady-state solution, and obtained asymptotic expressions for the time-dependent solution. In addition, the influence of each parameter on the system reliability was analyzed through concrete numerical examples. Engineers can therefore use the results obtained in this paper to design more reliable, safe, and cost-effective systems. To a certain extent, this provides a theoretical basis for system reliability management and optimal scheduling.
If we knew the spectral distribution of $H+M+S$ in $\{\eta \in \mathbb{C} \mid \Re\eta \le -\min\{\underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\}\}$, we could directly estimate $B$ and $\omega$ in Theorem 7, which is important for engineers. Based on our knowledge of this subject, we believe that $H+M+S$ has a continuous spectrum in $\{\eta \in \mathbb{C} \mid \Re\eta \le -\min\{\underline{\mu_\varrho}, \underline{\mu_{\varrho+1}}, \underline{\mu_{\varrho+2}}\}\}$. However, it is necessary to verify this and to investigate further results.

Author Contributions

Conceptualization, J.Z. and E.K.; methodology, J.Z. and E.K.; validation, J.Z.; writing—original draft preparation, J.Z.; writing—review and editing, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Natural Science Foundation of Xinjiang Uygur Autonomous Region, 2022D01C46.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the editor and referees for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HMS	Human–machine system
HR	Human error
CCF	Common-cause failure
SVT	Supplementary variable technique
TDS	Time-dependent solution
SSS	Steady-state solution

References

  1. Cox, D.R. The analysis of non-Markovian stochastic processes by the inclusion of supplementary variables. Math. Proc. Camb. Philos. Soc. 1955, 51, 433–441. [Google Scholar] [CrossRef]
  2. Gaver, D.P. Time to failure and availability of parallel redundant systems with repair. IEEE Trans. Reliab. 1963, 12, 30–38. [Google Scholar] [CrossRef]
  3. Abbas, B.S.; Kuo, W. Stochastic effectiveness models for human-machine systems. IEEE Trans. Syst. Man Cybern. 1990, 20, 826–834. [Google Scholar] [CrossRef]
  4. Yang, N.F.; Dhillon, B.S. Stochastic analysis of a general standby system with constant human error and arbitrary system repair rates. Microelectron. Reliab. 1995, 35, 1037–1045. [Google Scholar] [CrossRef]
  5. Sridharan, V.; Mohanavadivu, P. Some statistical characteristics of a repairable, standby, human and machine system. IEEE Trans. Reliab. 1998, 47, 431–435. [Google Scholar] [CrossRef]
  6. Asadzadeh, S.M.; Azadeh, A. An integrated systemic model for optimization of condition-based maintenance with human error. Reliab. Eng. Syst. Saf. 2014, 124, 117–131. [Google Scholar] [CrossRef]
  7. Wang, J.; Xie, N.; Yang, N. Reliability analysis of a two-dissimilar-unit warm standby repairable system with priority in use. Commun. Stat. Theory Methods 2021, 50, 792–814. [Google Scholar] [CrossRef]
  8. Gupur, G. Well-posedness of the model describing a repairable, standby human & machine system. J. Syst. Sci. Complex. 2003, 16, 483–493. [Google Scholar]
  9. Gupur, G. Asymptotic property of the solution of a repairable, standby, human and machine system. Int. J. Pure Appl. Math. 2006, 8, 35–54. [Google Scholar]
  10. Aili, M.; Gupur, G. Further results on a repairable, standby, human and machine system. Int. J. Pure Appl. Math. 2015, 101, 571–594. [Google Scholar]
  11. Guo, W.H.; Wu, S.L. The Asymptotic Stability of a Solution of a Repairable Standby Human-Machine System. Math. Pract. Theory 2004, 10, 104–109. (In Chinese) [Google Scholar]
  12. Wang, W.L.; Xu, G.Q. The well-posedness and stability of a repairable standby human-machine system. Math. Comput. Model. 2006, 44, 1044–1052. [Google Scholar] [CrossRef]
  13. Narmada, S.; Jacob, M. Reliability analysis of a complex system with a deteriorating standby unit under common-cause failure and critical human error. Microelectron. Reliab. 1996, 36, 1287–1290. [Google Scholar] [CrossRef]
  14. Hajeeh, M.A. Availability of deteriorated system with inspection subject to common-cause failure and human error. Int. J. Oper. Res. 2011, 12, 207–222. [Google Scholar] [CrossRef]
  15. Liu, Z.; Liu, Y.; Cai, B. Dynamic Bayesian network modeling of reliability of subsea blowout preventer stack in presence of common cause failures. J. Loss Prev. Process Ind. 2015, 38, 58–66. [Google Scholar] [CrossRef]
  16. Shneiderman, B. Human-centered artificial intelligence: Reliable, safe & trustworthy. Int. J. Hum.–Comput. Interact. 2020, 36, 495–504. [Google Scholar]
  17. Xu, H.; Guo, W.; Guo, L. Stability of a General Repairable Human-Machine System. In Proceedings of the 2008 IEEE International Conference on Industrial Engineering and Engineering Management, Singapore, 8–11 December 2008; pp. 512–515. [Google Scholar]
  18. Fattorini, H.O. The Cauchy Problem; Cambridge University Press: Cambridge, UK, 1984. [Google Scholar]
  19. Gupur, G. Functional Analysis Methods for Reliability Models; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  20. Nagel, R. One-Parameter Semigroups of Positive Operators; Springer: Berlin/Heidelberg, Germany, 1986. [Google Scholar]
  21. Engel, K.J.; Nagel, R. One-Parameter Semigroups for Linear Evolution Equations; Springer: New York, NY, USA, 2000. [Google Scholar]
  22. Gupur, G. On the asymptotic expression of the time-dependent solution of an M/G/1 queueing model. Partial Differ. Equ. Appl. 2022, 3, 21. [Google Scholar] [CrossRef]
Figure 1. The state transition diagram of the system.
Figure 2. $A(t)$ for gamma-distributed repair time with different $\beta$.
Figure 3. Effect of parameters $\lambda_{c0}$ and $\lambda_{h0}$ on $A(t)$. (a) $A(t)$ for different $\lambda_{c0}$; (b) $A(t)$ for different $\lambda_{h0}$.
Figure 4. $A(t)$ for exponentially distributed repair time with different $\mu_4$.
Figure 5. Reliability and $MTTF$ for exponentially distributed repair time. (a) Reliability for different $\lambda_{c0}$; (b) effect of $\lambda_{h0}$ on $MTTF$.