Article

Image Reconstruction Algorithm Using Weighted Mean of Ordered-Subsets EM and MART for Computed Tomography

by Omar M. Abou Al-Ola, Ryosuke Kasai, Yusaku Yamaguchi, Takeshi Kojima and Tetsuya Yoshinaga

1 Faculty of Science, Tanta University, El-Giesh St., Tanta 31527, Gharbia, Egypt
2 Institute of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto, Tokushima 770-8509, Japan
3 Shikoku Medical Center for Children and Adults, National Hospital Organization, 2-1-1 Senyu, Zentsuji 765-8507, Japan
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4277; https://doi.org/10.3390/math10224277
Submission received: 3 October 2022 / Revised: 12 November 2022 / Accepted: 14 November 2022 / Published: 15 November 2022
(This article belongs to the Special Issue Inverse Problems and Imaging: Theory and Applications)

Abstract: Iterative image reconstruction algorithms have considerable advantages over transform methods for computed tomography, but they each have their own drawbacks. In particular, the maximum-likelihood expectation-maximization (MLEM) algorithm reconstructs high-quality images even with noisy projection data, but it is slow. On the other hand, the simultaneous multiplicative algebraic reconstruction technique (SMART) converges faster at early iterations but is susceptible to noise. Here, we construct a novel algorithm that has the advantages of these different iterative schemes by combining ordered-subsets EM (OS-EM) and MART (OS-MART) with weighted geometric or hybrid means. It is theoretically shown that the objective function decreases with every iteration and the amount of decrease is greater than the mean between the decreases for OS-EM and OS-MART. We conducted image reconstruction experiments on simulated phantoms and deduced that our algorithm outperforms OS-EM and OS-MART alone. Our algorithm would be effective in practice since it incorporates OS-EM, which is currently the most popular technique of iterative image reconstruction from noisy measured projections.

1. Introduction

In computed tomography (CT), image reconstruction, an inverse problem of estimating pixel values of a tomographic image from measured projections, is performed in practice by using transform and iterative methods [1,2,3,4,5,6,7]. Iterative image reconstruction [2,5,6,8] algorithms based on the optimization strategy have considerable advantages over transform methods such as the filtered back-projection procedure. Although the maximum-likelihood expectation-maximization (MLEM) [2] algorithm, a well-known iterative method, reconstructs high-quality images even for noisy projection data, it is slow to converge [7,9,10,11]. On the other hand, while the simultaneous multiplicative algebraic reconstruction technique (SMART) [12,13,14,15] converges quickly at early iterations, it is susceptible to noise [16,17].
In this study, we constructed a novel algorithm that has the advantages of these different iterative schemes by combining ordered-subsets versions of MLEM (OS-EM) [7,10] and MART (OS-MART) [14] with a weighted mean. Our method employs combinations of weighted geometric or hybrid means of iterative points. We theoretically show that the objective function for the geometric mean decreases with every iteration and the amount of decrease is greater than the mean between the decreases for OS-EM and OS-MART when the tomographic inverse problem is consistent. By exploiting the complementary advantages of MLEM and SMART, we have created a fast-converging algorithm for reconstructing high-quality images.
Furthermore, we developed a system of switched differential equations with a piecewise smooth vector field, whose numerical discretization corresponds to a combination of the iterative equation for the OS-EM and OS-MART methods. Here, this approach is considered to be an application of the dynamical method [18,19,20,21,22,23] for solving ill-posed inverse problems. The dynamical systems that are applied to constrained tomographic inverse problems [24,25,26,27,28,29] enable us to prove the stability of the equilibrium corresponding to the true image by using the Lyapunov stability theorem [30] and provide a policy for methodically designing a novel algorithm of iterative reconstruction.
A drawback of the weighted-mean iterative algorithm is that sequentially calculating both of its terms entails a longer computation time per iteration than evaluating the single term of the individual OS-EM or OS-MART equation. We show that this issue can be resolved by constructing a fast sequential algorithm with the above discretization scheme for the dynamical system of differential equations. Moreover, we present a time-varying method of improving performance by replacing the constant weight parameter included in the autonomous system with a function of the iteration number.
We conducted numerical experiments on image reconstruction, and the results indicate that our algorithm outperformed OS-EM and OS-MART.

2. Preliminary

The main problem in image reconstruction is to find pixel values $x \in \mathbb{R}_+^J$ satisfying
$$ y = A x + \delta \qquad (1) $$
where $y \in \mathbb{R}_+^I$, $A \in \mathbb{R}_+^{I \times J}$, and $\delta \in \mathbb{R}^I$ indicate the projection values, the projection operator, and the noise, respectively, with $\mathbb{R}_+$ being the set of nonnegative real numbers. We call the noise-free system in Equation (1) consistent when it has a solution $e \in \mathbb{R}_+^J$. The tomographic inverse problem of finding the unknown $x$ can be recast as the minimization of an objective function, which can be achieved by employing an optimization procedure using either a discrete-time (iterative) or a continuous-time system.
The following ideas are related to the ordered-subsets algorithms. Let $A^m \in \mathbb{R}_+^{I_m \times J}$ be a submatrix consisting of $I_m$ partial rows of $A$, and let $y^m \in \mathbb{R}_+^{I_m}$ be the subvector of $y$ with the same corresponding rows as $A^m$, for $m = 1, 2, \ldots, M$, with $M$ indicating the total number of divisions.
With $\mathbb{R}_{++}$ being the set of positive real numbers, we define the OS-EM algorithm for images $u^{(n)} \in \mathbb{R}_{++}^J$ as
$$ u_j^{(n+1)} = u_j^{(n)}\, f_j^m\bigl(u^{(n)}\bigr), \qquad u^{(0)} = z^0 \in \mathbb{R}_{++}^J \qquad (2) $$
and the OS-MART algorithm for images $v^{(n)} \in \mathbb{R}_{++}^J$ as
$$ v_j^{(n+1)} = v_j^{(n)}\, g_j^m\bigl(v^{(n)}\bigr), \qquad v^{(0)} = z^0 \in \mathbb{R}_{++}^J \qquad (3) $$
for the iteration number $n = 0, 1, 2, \ldots, N-1$ and the subset number $m = (n \bmod M) + 1$, where
$$ f_j^m(x) := \frac{1}{\sum_{i=1}^{I_m} A_{ij}^m} \sum_{i=1}^{I_m} A_{ij}^m \frac{y_i^m}{(A^m x)_i} \qquad (4) $$
and
$$ g_j^m(x) := \exp\!\left( \frac{1}{\sum_{i=1}^{I_m} A_{ij}^m} \sum_{i=1}^{I_m} A_{ij}^m \log \frac{y_i^m}{(A^m x)_i} \right) \qquad (5) $$
for $j = 1, 2, \ldots, J$. The iterative solutions $u^{(n)}$ and $v^{(n)}$, $n = 0, 1, 2, \ldots, N-1$, of Equations (2) and (3) remain nonnegative due to the selection of the initial values.
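As an illustration only (the paper's experiments were implemented in MATLAB), a minimal Python/NumPy sketch of one OS-EM and one OS-MART update for a single subset, following Equations (2)–(5), might look as follows; the function names, the dense-matrix representation of $A^m$, and the small eps safeguard against division by zero are assumptions made for this sketch.

```python
import numpy as np

def os_em_step(u, A_m, y_m, eps=1e-12):
    """One OS-EM update (Eqs. (2) and (4)) for subset matrix A_m and data y_m."""
    col_sum = A_m.sum(axis=0)                     # sum_i A^m_ij for each pixel j
    ratio = y_m / (A_m @ u + eps)                 # y^m_i / (A^m u)_i
    return u * (A_m.T @ ratio) / (col_sum + eps)  # u_j * f^m_j(u)

def os_mart_step(v, A_m, y_m, eps=1e-12):
    """One OS-MART update (Eqs. (3) and (5))."""
    col_sum = A_m.sum(axis=0)
    log_ratio = np.log(y_m + eps) - np.log(A_m @ v + eps)
    return v * np.exp((A_m.T @ log_ratio) / (col_sum + eps))  # v_j * g^m_j(v)
```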
As theoretical measures of convergence, we use the generalized Kullback–Leibler (KL) divergence [31,32],
$$ \mathrm{KL}(\alpha, \beta) := \sum_{i} \left( \alpha_i \log\frac{\alpha_i}{\beta_i} + \beta_i - \alpha_i \right) \qquad (6) $$
for nonnegative vectors $\alpha$ and $\beta$ (for scalar arguments the sum reduces to a single term), and the weighted KL-divergence,
$$ \mathrm{WKL}(e, x, \Psi) := \sum_{j=1}^{J} \mathrm{KL}(e_j, x_j) \sum_{k=1}^{K} \Psi_{kj} \qquad (7) $$
for evaluating $x \in \mathbb{R}_+^J$ against $e \in \mathbb{R}_+^J$, where $\Psi \in \mathbb{R}^{K \times J}$ denotes a projection operator with $K$ rows. The KL-divergence is used for evaluating convergence because it can be regarded as an objective function for which a minimizer exists for each of MLEM and SMART [27,32,33,34].
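A direct transcription of Equations (6) and (7) into Python/NumPy is sketched below for reference; the componentwise treatment of the inputs and the eps safeguard are assumptions, not part of the original formulation.

```python
import numpy as np

def gen_kl(a, b, eps=1e-12):
    """Generalized KL divergence of Eq. (6), summed over components."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sum(a * (np.log(a + eps) - np.log(b + eps)) + b - a))

def wkl(e, x, Psi, eps=1e-12):
    """Weighted KL divergence WKL(e, x, Psi) of Eq. (7)."""
    col_sum = Psi.sum(axis=0)  # sum_k Psi_kj
    per_pixel = e * (np.log(e + eps) - np.log(x + eps)) + x - e
    return float(np.sum(per_pixel * col_sum))
```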

3. Results

3.1. Proposed Method

Our method of solving the tomographic inverse problem consists of iterative algorithms and dynamical systems.
  • Weighted geometric mean:
The iterative reconstruction of an image $z$ takes the weighted geometric mean of $u$ and $v$ in Equations (2) and (3) with a weight parameter $\alpha \in [0, 1]$ and a relaxation or scaling parameter $h > 0$:
$$ z_j^{(n+1)} = z_j^{(n)} \bigl( f_j^m(z^{(n)}) \bigr)^{h(1-\alpha)} \bigl( g_j^m(z^{(n)}) \bigr)^{h\alpha} \qquad (8) $$
where $z^{(0)} = z^0 \in \mathbb{R}_{++}^J$ for $j = 1, 2, \ldots, J$, $n = 0, 1, 2, \ldots, N-1$, and $m = (n \bmod M) + 1$. The accompanying system of equations is a continuous analog based on the dynamical method. It is a switched nonlinear system consisting of the following family of $M$ subsystems:
$$ \frac{d x_j(t)}{dt} = x_j(t) \Bigl[ (1-\alpha) \log\bigl( f_j^m(x(t)) \bigr) + \alpha \log\bigl( g_j^m(x(t)) \bigr) \Bigr] \qquad (9) $$
where the time derivative acts on the state variable $x(t)$, regarded as an image, at $t \in [\, t_{m-1} + k\tau,\; t_m + k\tau)$ with $x(0) = z^0$ and a sequence of times $0 = t_0 < t_1 < t_2 < \cdots < t_M = \tau$, where $k = (n - m + 1)/M$ for $j = 1, 2, \ldots, J$, $n = 0, 1, 2, \ldots, N-1$, and $m = (n \bmod M) + 1$. It is easy to attain the iterative formula in Equation (8) by using the multiplicative Euler method [35,36] with a step size of $h$ to discretize Equation (9).
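As a brief check of this statement (the intermediate step below is not displayed in the original), recall that the multiplicative Euler step for a system of the form $dx_j/dt = x_j(t)\, G_j(x(t))$ is $x_j^{(n+1)} = x_j^{(n)} \exp\bigl( h\, G_j(x^{(n)}) \bigr)$. Applying it to Equation (9), where $G_j(x) = (1-\alpha)\log f_j^m(x) + \alpha \log g_j^m(x)$, gives
$$ x_j^{(n+1)} = x_j^{(n)} \exp\Bigl( h \bigl[ (1-\alpha)\log f_j^m(x^{(n)}) + \alpha \log g_j^m(x^{(n)}) \bigr] \Bigr) = x_j^{(n)} \bigl( f_j^m(x^{(n)}) \bigr)^{h(1-\alpha)} \bigl( g_j^m(x^{(n)}) \bigr)^{h\alpha}, $$
which is exactly the update in Equation (8).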
  • Weighted hybrid mean:
A second pair of discrete- and continuous-time systems can be constructed from a hybrid combination of additive and multiplicative calculi, defined by the iteration
$$ z_j^{(n+1)} = z_j^{(n)} \Bigl( 1 + h(1-\alpha)\bigl( f_j^m(z^{(n)}) - 1 \bigr) \Bigr)_+ \bigl( g_j^m(z^{(n)}) \bigr)^{h\alpha} \qquad (10) $$
of images $z^{(n)}$ with $z^{(0)} = z^0$, where, for a real number $c$, $c_+ := \max\{c, 0\}$, and its continuous analog
$$ \frac{d x_j(t)}{dt} = x_j(t) \Bigl[ (1-\alpha)\bigl( f_j^m(x(t)) - 1 \bigr) + \alpha \log\bigl( g_j^m(x(t)) \bigr) \Bigr] \qquad (11) $$
of images $x(t)$ with $x(0) = z^0$ for $j = 1, 2, \ldots, J$. The conditions on $n$, $m$, $t$, $t_\ell$ for $\ell = 0, 1, \ldots, M$, $\tau$, and $k$ are the same as in Equations (8) and (9). When $h(1-\alpha) \le 1$, the right-hand side of Equation (10) does not require the clipping operator $(\cdot)_+$. We can obtain the iterative formula in Equation (10) by discretizing the differential equation in Equation (11). Note that $f_j^m(x) - 1$ and $\log\bigl( g_j^m(x) \bigr)$ are each zero if and only if $x = e$ holds. The vector field is constructed by adding the terms $(1-\alpha)\, x_j \bigl( f_j^m(x) - 1 \bigr)$ and $\alpha\, x_j \log\bigl( g_j^m(x) \bigr)$, to which additive and multiplicative discretizations using a hybrid Euler method [28] are applied, respectively. The discrete formula in Equation (10) can be regarded as an ordered-subsets version of the iterative reconstruction algorithm [28] derived from an optimization of Jeffreys' $\alpha$-skew J-divergence [37,38] between the forward and measured projections; namely, Equation (10) with $M = 1$ and $h = \delta$ coincides with Equation (6) in [28].
Notice that each of the iterative Equations (8) and (10), as combinations of weighted geometric and hybrid means, reduces to the OS-EM algorithm when $\alpha = 0$ and $h = 1$ and to the OS-MART algorithm when $\alpha = 1$ and $h = 1$. A larger value of $h$ in Equations (8) and (10) can accelerate convergence, but the algorithm diverges or oscillates when $h$ is too large. The value of $h$ in our theoretical and experimental results was set to 1, in accordance with the experimental results for the OS-EM and OS-MART algorithms regarded as multiplicative Euler discretizations of their continuous analogs [27,34].
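For concreteness, the following Python/NumPy sketch implements one subset update of each proposed scheme; it is not the authors' MATLAB code, and the function names, the dense matrix representation of $A^m$, and the eps safeguard are assumptions made for illustration only.

```python
import numpy as np

def gm_step(z, A_m, y_m, alpha, h=1.0, eps=1e-12):
    """Weighted geometric mean update, Eq. (8): z_j <- z_j * f^(h(1-alpha)) * g^(h*alpha)."""
    col_sum = A_m.sum(axis=0) + eps
    ratio = y_m / (A_m @ z + eps)
    f = (A_m.T @ ratio) / col_sum
    g = np.exp((A_m.T @ np.log(ratio + eps)) / col_sum)
    return z * f**(h * (1.0 - alpha)) * g**(h * alpha)

def hm_step(z, A_m, y_m, alpha, h=1.0, eps=1e-12):
    """Weighted hybrid mean update, Eq. (10), with the clipping operator (.)_+."""
    col_sum = A_m.sum(axis=0) + eps
    ratio = y_m / (A_m @ z + eps)
    f = (A_m.T @ ratio) / col_sum
    g = np.exp((A_m.T @ np.log(ratio + eps)) / col_sum)
    additive = np.maximum(1.0 + h * (1.0 - alpha) * (f - 1.0), 0.0)
    return z * additive * g**(h * alpha)
```

With alpha = 0 and h = 1 both functions reduce to the OS-EM update, and with alpha = 1 and h = 1 to the OS-MART update, mirroring the reduction noted above.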

3.2. Theoretical Findings

In this section, we present theoretical results on the discrete- and continuous-time systems defined in the previous section, under the assumption that the generalized subset balance [33], described as
$$ \sum_{i=1}^{I_m} A_{ij}^m = \sigma_m \sum_{i=1}^{I} A_{ij} \qquad (12) $$
for some positive constant $\sigma_m$, holds for $m = 1, 2, \ldots, M$.
  • Discrete-time systems:
We consider the discrete-time system in Equation (8) under the assumption $h = 1$ and prove that the iterative sequence
$$ \bigl\{ \mathrm{WKL}\bigl(e, z^{(n)}, A\bigr) \bigr\}_{n=0}^{\infty} \qquad (13) $$
with $z^{(n)} \in \mathbb{R}_+^J$ for $n = 0, 1, 2, \ldots$ is decreasing.
Theorem 1.
Let $y = A e$ for some $e \in \mathbb{R}_{++}^J$. Given a solution $z$ to the system in Equation (8) for $h = 1$, the inequality
$$ \mathrm{WKL}\bigl(e, z^{(0)}, A\bigr) - \mathrm{WKL}\bigl(e, z^{(1)}, A\bigr) \ge \frac{1}{\sigma_m}\, \mathrm{KL}\bigl(y^m, A^m z^{(0)}\bigr) \qquad (14) $$
is satisfied for any $M$ and $m = 1, 2, \ldots, M$.
Note that, in Theorem 1 and its proof below, since a discrete time shift has no effect on the autonomous difference system, we have written iteration numbers 0 and 1 instead of n and n + 1 , respectively, for any given m = 1 , 2 , , M with n = m 1 , m , m + 1 , , in order to facilitate the description.
Proof 
From the assumption of the generalized subset balance in Equation (12), Inequality (14) is equivalent to
$$ \mathrm{WKL}\bigl(e, z^{(0)}, A^m\bigr) - \mathrm{WKL}\bigl(e, z^{(1)}, A^m\bigr) \ge \mathrm{KL}\bigl(y^m, A^m z^{(0)}\bigr). \qquad (15) $$
Let
$$ u_j^{(1)} := u_j^{(0)}\, f_j^m\bigl(u^{(0)}\bigr) \qquad (16) $$
and
$$ v_j^{(1)} := v_j^{(0)}\, g_j^m\bigl(v^{(0)}\bigr) \qquad (17) $$
for $j = 1, 2, \ldots, J$, which are one-step updates of the OS-EM and OS-MART equations on the variables $u$ and $v$ with initial values $u^{(0)}$ and $v^{(0)}$. Then, we have
$$ \mathrm{WKL}\bigl(e, z^0, A^m\bigr) - \mathrm{WKL}\bigl(e, u^{(1)}, A^m\bigr) \ge \mathrm{KL}\bigl(y^m, A^m z^0\bigr) \qquad (18) $$
and
$$ \mathrm{WKL}\bigl(e, z^0, A^m\bigr) - \mathrm{WKL}\bigl(e, v^{(1)}, A^m\bigr) \ge \mathrm{KL}\bigl(y^m, A^m z^0\bigr) \qquad (19) $$
where the initial values $u^{(0)}$ and $v^{(0)}$ are the same and equal to $z^0$ (namely, $u^{(0)} = v^{(0)} = z^0$). The proof of Inequality (18) is given in [39]. Inequality (19) was first obtained by Byrne [33]; it can also be obtained by applying the procedure of direct reduction [39] using the concavity of the log function and Jensen's inequality. By multiplying Inequalities (18) and (19) by $(1-\alpha)$ and $\alpha$, respectively, and adding the resulting two inequalities side by side, we obtain
$$ \mathrm{WKL}\bigl(e, z^0, A^m\bigr) - \Bigl[ (1-\alpha)\,\mathrm{WKL}\bigl(e, u^{(1)}, A^m\bigr) + \alpha\,\mathrm{WKL}\bigl(e, v^{(1)}, A^m\bigr) \Bigr] \ge \mathrm{KL}\bigl(y^m, A^m z^0\bigr). \qquad (20) $$
Then, the one-step update $z^{(1)}$ of the algorithm in Equation (8) with $h = 1$ and the same initial value,
$$ z_j^{(0)} = \bigl(u_j^{(0)}\bigr)^{1-\alpha} \bigl(v_j^{(0)}\bigr)^{\alpha} = z_j^0, \qquad (21) $$
can be written as
$$ z_j^{(1)} = \bigl(u_j^{(1)}\bigr)^{1-\alpha} \bigl(v_j^{(1)}\bigr)^{\alpha} \qquad (22) $$
for $j = 1, 2, \ldots, J$. Therefore, we have
$$ \mathrm{WKL}\bigl(e, z^0, A^m\bigr) - \mathrm{WKL}\bigl(e, z^{(1)}, A^m\bigr) \ge \mathrm{WKL}\bigl(e, z^0, A^m\bigr) - \Bigl[ (1-\alpha)\,\mathrm{WKL}\bigl(e, u^{(1)}, A^m\bigr) + \alpha\,\mathrm{WKL}\bigl(e, v^{(1)}, A^m\bigr) \Bigr] \qquad (23) $$
according to the following inequality:
$$ (1-\alpha)\, u_j^{(1)} + \alpha\, v_j^{(1)} \ge \bigl(u_j^{(1)}\bigr)^{1-\alpha} \bigl(v_j^{(1)}\bigr)^{\alpha}, \qquad (24) $$
which is in turn derived from Jensen's inequality,
$$ \log\!\left( (1-\alpha)\,\frac{u_j^{(1)}}{e_j} + \alpha\,\frac{v_j^{(1)}}{e_j} \right) \ge (1-\alpha) \log\frac{u_j^{(1)}}{e_j} + \alpha \log\frac{v_j^{(1)}}{e_j}. \qquad (25) $$
The theorem follows directly from inequalities (20) and (23). □
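Theorem 1 can also be checked numerically on a small random consistent system. The following self-contained Python/NumPy sketch (the problem sizes, seed, and value of $\alpha$ are arbitrary choices for illustration, not taken from the paper) evaluates both sides of Inequality (14) for a single subset ($M = 1$, so $\sigma_m = 1$).

```python
import numpy as np

rng = np.random.default_rng(0)
I_m, J, alpha, sigma_m = 40, 25, 0.3, 1.0   # single subset, so sigma_m = 1
A_m = rng.random((I_m, J))
e = rng.random(J) + 0.1                      # strictly positive true image
y_m = A_m @ e                                # consistent (noise-free) data
z0 = rng.random(J) + 0.1                     # positive initial image

def wkl(e, x, Psi):
    return np.sum((e * np.log(e / x) + x - e) * Psi.sum(axis=0))

def kl(a, b):
    return np.sum(a * np.log(a / b) + b - a)

ratio = y_m / (A_m @ z0)
col = A_m.sum(axis=0)
f = (A_m.T @ ratio) / col
g = np.exp((A_m.T @ np.log(ratio)) / col)
z1 = z0 * f**(1 - alpha) * g**alpha          # one GM step, Eq. (8) with h = 1

lhs = wkl(e, z0, A_m) - wkl(e, z1, A_m)
rhs = kl(y_m, A_m @ z0) / sigma_m
print(lhs >= rhs)                            # expected: True, by Theorem 1
```

With a consistent projection $y = Ae$ and a positive initial image, the printed result should be True, in line with the theorem.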
  • Continuous-time systems:
When there is no noise and the system in Equation (1) is consistent, we demonstrate that any solution to the continuous analog converges to the desired solution.
Theorem 2.
Assume there exists $e \in \mathbb{R}_{++}^J$ satisfying $y = A e$. Then, $e$ is an equilibrium for each of the continuous-time systems in Equations (9) and (11) and is asymptotically stable.
Proof 
Notice that $e$ is an equilibrium for each $m$th subsystem since $f_j^m(e) = 1$ and $g_j^m(e) = 1$ simultaneously hold for $j = 1, 2, \ldots, J$ and $m = 1, 2, \ldots, M$. The solutions to each subsystem remain in $\mathbb{R}_{++}^J$ because the initial state value at $t_{m-1}$, $m = 1, 2, \ldots, M$, belongs to $\mathbb{R}_{++}^J$ and the flow cannot traverse the invariant subspace $x_j = 0$, $j = 1, 2, \ldots, J$, of the state space. Thus, the following nonnegative function
$$ V(x) := \mathrm{WKL}(e, x, A) = \sum_{j=1}^{J} \left( \int_{e_j}^{x_j} \frac{w - e_j}{w}\, dw \right) \sum_{i=1}^{I} A_{ij} \qquad (26) $$
of $x$ with $x_j > 0$ is a well-defined common Lyapunov function. Then, the derivatives of $V$ along the trajectories of Equations (9) and (11) satisfy
$$
\begin{aligned}
\left.\frac{dV}{dt}(x)\right|_{(9)} &= \sum_{j=1}^{J} \frac{x_j - e_j}{x_j}\,\frac{dx_j}{dt}\,\sum_{i=1}^{I} A_{ij} \\
&= (1-\alpha)\sum_{j=1}^{J} (x_j - e_j)\,\log\!\left(\frac{1}{\sum_{i=1}^{I_m} A_{ij}^m}\sum_{i=1}^{I_m} A_{ij}^m \frac{y_i^m}{(A^m x)_i}\right)\sum_{i=1}^{I} A_{ij} + \frac{\alpha}{\sigma_m}\sum_{j=1}^{J} (x_j - e_j)\sum_{i=1}^{I_m} A_{ij}^m \log\frac{y_i^m}{(A^m x)_i} \\
&\le -\frac{1-\alpha}{\sigma_m}\sum_{i=1}^{I_m} y_i^m \log\frac{y_i^m}{(A^m x)_i} + \frac{1-\alpha}{\sigma_m}\sum_{i=1}^{I_m} (A^m x)_i\!\left(\frac{y_i^m}{(A^m x)_i} - 1\right) - \frac{\alpha}{\sigma_m}\sum_{i=1}^{I_m} \bigl(y_i^m - (A^m x)_i\bigr)\bigl(\log y_i^m - \log (A^m x)_i\bigr) \\
&= -\frac{1-\alpha}{\sigma_m}\,\mathrm{KL}(y^m, A^m x) - \frac{\alpha}{\sigma_m}\Bigl(\mathrm{KL}(y^m, A^m x) + \mathrm{KL}(A^m x, y^m)\Bigr) \\
&= -\frac{1}{\sigma_m}\Bigl(\mathrm{KL}(y^m, A^m x) + \alpha\,\mathrm{KL}(A^m x, y^m)\Bigr) \le 0
\end{aligned}
$$
and
$$
\begin{aligned}
\left.\frac{dV}{dt}(x)\right|_{(11)} &= \sum_{j=1}^{J} \frac{x_j - e_j}{x_j}\,\frac{dx_j}{dt}\,\sum_{i=1}^{I} A_{ij} \\
&= \frac{1-\alpha}{\sigma_m}\sum_{j=1}^{J} (x_j - e_j)\sum_{i=1}^{I_m} A_{ij}^m \left(\frac{y_i^m}{(A^m x)_i} - 1\right) + \frac{\alpha}{\sigma_m}\sum_{j=1}^{J} (x_j - e_j)\sum_{i=1}^{I_m} A_{ij}^m \log\frac{y_i^m}{(A^m x)_i} \\
&= -\frac{1-\alpha}{\sigma_m}\sum_{i=1}^{I_m} \frac{\bigl(y_i^m - (A^m x)_i\bigr)^2}{(A^m x)_i} - \frac{\alpha}{\sigma_m}\sum_{i=1}^{I_m} \bigl(y_i^m - (A^m x)_i\bigr)\bigl(\log y_i^m - \log (A^m x)_i\bigr) \\
&= -\frac{1-\alpha}{\sigma_m}\sum_{i=1}^{I_m} \frac{\bigl(y_i^m - (A^m x)_i\bigr)^2}{(A^m x)_i} - \frac{\alpha}{\sigma_m}\Bigl(\mathrm{KL}(y^m, A^m x) + \mathrm{KL}(A^m x, y^m)\Bigr) \le 0.
\end{aligned}
$$
Therefore, for each system, V is a common Lyapunov function and the equilibrium e is uniformly asymptotically stable. □
The above theorem ensures that the suggested difference system as a first-order approximation to the differential equation has a stable fixed point e when the chosen step size h is small enough to guarantee numerical stability.

3.3. Fast Discretization Algorithm

In Equations (8) and (10), the computational cost of evaluating both terms $f_j^m(z)$ and $g_j^m(z)$ is higher than that of evaluating either term alone. Thus, a sequential calculation of the two terms results in a longer computation time per iteration. However, because the parts
$$ \frac{A_{ij}^m}{\sum_{i=1}^{I_m} A_{ij}^m} \qquad (28) $$
and
$$ \frac{y_i^m}{(A^m z)_i} \qquad (29) $$
for $m = 1, 2, \ldots, M$, $i = 1, 2, \ldots, I$, and $j = 1, 2, \ldots, J$ are common to both terms, an effective coding of the program can reduce the computational cost. On the other hand, if approximately the same computation time as OS-EM or OS-MART is required, another method that discretizes the continuous-time systems with lower accuracy can be used.
Here, we provide a fast sequential calculation algorithm for the case $M = 1$ and $h = 1$; e.g., for the weighted geometric mean:
$$
\begin{aligned}
z_j^{(1)} &= z_j^{(0)}\, p_j^{(0)} && \text{computing } p_j^{(0)} := f_j\bigl(z^{(0)}\bigr) \text{ and using the initial value } z_j^{(0)} = z_j^0,\\
z_j^{(2)} &= z_j^{(1)}\, \bigl(p_j^{(0)}\bigr)^{1-\alpha} \bigl(q_j^{(1)}\bigr)^{\alpha} && \text{computing } q_j^{(1)} := g_j\bigl(z^{(1)}\bigr) \text{ and reusing } p_j^{(0)},\\
z_j^{(3)} &= z_j^{(2)}\, \bigl(p_j^{(2)}\bigr)^{1-\alpha} \bigl(q_j^{(1)}\bigr)^{\alpha} && \text{computing } p_j^{(2)} := f_j\bigl(z^{(2)}\bigr) \text{ and reusing } q_j^{(1)},\\
z_j^{(4)} &= z_j^{(3)}\, \bigl(p_j^{(2)}\bigr)^{1-\alpha} \bigl(q_j^{(3)}\bigr)^{\alpha} && \text{computing } q_j^{(3)} := g_j\bigl(z^{(3)}\bigr) \text{ and reusing } p_j^{(2)},\\
z_j^{(5)} &= z_j^{(4)}\, \bigl(p_j^{(4)}\bigr)^{1-\alpha} \bigl(q_j^{(3)}\bigr)^{\alpha} && \text{computing } p_j^{(4)} := f_j\bigl(z^{(4)}\bigr) \text{ and reusing } q_j^{(3)},\\
&\;\;\vdots
\end{aligned}
$$
for $j = 1, 2, \ldots, J$ at each step. The vectors $p$ and $q$, whose elements are $p_j$ and $q_j$, respectively, for $j = 1, 2, \ldots, J$, are calculated alternately. Namely, to calculate the state variable $z$ at the current step, one of the vectors, $p$ or $q$, is computed from the previous step while the other is reused from two steps earlier. Although using a vector derived from an old state has an effect similar to taking a larger step size and may destabilize the steady state, the computational cost of the sequential algorithm is the same as that of either MLEM or SMART. A fast iterative algorithm for the weighted hybrid mean method can be derived in a similar manner by applying the additive and multiplicative terms in Equation (10).
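A minimal Python/NumPy sketch of this alternating scheme for $M = 1$ and $h = 1$ is given below; the function and variable names are assumptions made for illustration, and each step evaluates only one of the two terms, as in the scheme above.

```python
import numpy as np

def f_term(z, A, y, eps=1e-12):
    """f_j(z): normalized back-projection of the measured/forward ratio (Eq. (4) with M = 1)."""
    return (A.T @ (y / (A @ z + eps))) / (A.sum(axis=0) + eps)

def g_term(z, A, y, eps=1e-12):
    """g_j(z): exponentiated normalized back-projection of the log-ratio (Eq. (5) with M = 1)."""
    return np.exp((A.T @ np.log(y / (A @ z + eps) + eps)) / (A.sum(axis=0) + eps))

def fast_gm(z0, A, y, alpha, n_iter):
    """Fast sequential GM: p and q are refreshed on alternate steps (n_iter >= 2 assumed)."""
    z = z0.copy()
    p = f_term(z, A, y)                  # step 1: compute p^(0) only
    z = z * p                            # z^(1) = z^(0) * p^(0)
    q = g_term(z, A, y)                  # step 2: compute q^(1), reuse p^(0)
    z = z * p**(1 - alpha) * q**alpha    # z^(2)
    for n in range(2, n_iter):
        if n % 2 == 0:                   # even steps refresh p from the previous state
            p = f_term(z, A, y)
        else:                            # odd steps refresh q from the previous state
            q = g_term(z, A, y)
        z = z * p**(1 - alpha) * q**alpha
    return z
```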

3.4. Time Varying System

In the vector fields of the continuous-time dynamical systems in Equations (9) and (11) with a small value of $\alpha$, the second term $\alpha\, x_j \log\bigl( g_j^m(x) \bigr)$ can be considered a regularization with which to optimize the metric for OS-MART. When the constant $\alpha$ is replaced with a discrete time-varying parameter $\alpha_0 \lambda^n$, where the coefficients $\alpha_0$ and $\lambda$ lie between 0 and 1, we have $\lambda^n \to 0$, and the corresponding discretized term $z_j^{(n)} \bigl( g_j^m(z^{(n)}) \bigr)^{\alpha_0 \lambda^n}$ tends to $z_j^{(n)}$ as $n \to \infty$. The iterative process thus has the effect of a time-dependent regularization, which makes it possible to dynamically optimize multiple metrics for OS-EM and OS-MART. The discrete time-varying parameter strategy includes the hybrid-cascaded method [40], a two-phase algorithm that uses the simultaneous algebraic reconstruction technique (SART) as the primary reconstruction to produce an initial estimate for secondary MLEM steps: it corresponds to replacing the constant parameter $\alpha$ with 1 if $n \le L$ and 0 otherwise, for a given nonnegative integer $L$ and iteration numbers $n = 0, 1, 2, \ldots, N-1$, although the primary reconstruction method used in [40] is SART rather than SMART.
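For illustration, the two parameter schedules discussed here can be written as small Python helpers; the default values $\alpha_0 = 0.05$ and $\lambda = 0.95$ are those used later in Section 4.2, and the function names are assumptions.

```python
def alpha_continuous(n, alpha0=0.05, lam=0.95):
    """Exponentially decaying weight alpha_n = alpha0 * lam**n (continuous time-varying scheme)."""
    return alpha0 * lam**n

def alpha_switched(n, L=0):
    """Discontinuous schedule mimicking the hybrid-cascaded method [40]:
    SMART-type weighting (alpha = 1) while n <= L, MLEM-type (alpha = 0) afterwards."""
    return 1.0 if n <= L else 0.0
```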

4. Experiments and Discussion

We will demonstrate the above-mentioned theory and the efficiency of the suggested method by conducting numerical CT experiments. Image reconstructions using combinations of weighted geometric and hybrid means (referred to as GM and HM, respectively) were compared with those of MLEM and SMART. The experiments were conducted using a 3.5 GHz Intel Xeon processor with 96 GB of memory and computing tools provided by MATLAB (MathWorks, Natick, MA, USA), without multi-threading or parallel processing.
We used a modified Shepp–Logan phantom image with pixel values $e \in [0, 1]^J$ and $256 \times 256$ pixels ($J = 65{,}536$), as shown in Figure 1, and a projection $y \in \mathbb{R}_+^I$ created by setting the numbers of projections and detectors to 360 and 365, respectively ($I = 131{,}400$), with 180-degree sampling. The forward projection was calculated using a highly optimized library for matrix–vector multiplication in MATLAB.
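A comparable setup can be approximated outside MATLAB as follows; this sketch uses scikit-image (an assumption, not the authors' toolchain), and its default Shepp–Logan phantom and radon geometry only roughly match the 256 × 256 image, 360 views, and 365 detector bins described above.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, resize

# Phantom with pixel values in [0, 1], resized to 256 x 256 (J = 65,536).
phantom = resize(shepp_logan_phantom(), (256, 256), anti_aliasing=True)

# 360 projection angles sampled over 180 degrees.
theta = np.linspace(0.0, 180.0, 360, endpoint=False)

# Forward projection (sinogram); the detector count is determined by radon,
# so it will differ slightly from the paper's 365 bins.
sinogram = radon(phantom, theta=theta)
print(phantom.shape, sinogram.shape)
```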

4.1. Verification of Theory

We performed a reconstruction using the weighted geometric mean defined in Equation (8) with $h = 1$, $M = 30$, $\alpha = 0.01$, and a noise-free projection $y = A e \in \mathbb{R}_+^I$. An example indicating the validity of Inequality (15) is shown in Figure 2, which plots $\mathrm{KL}(y^m, A^m z^{(0)})$ versus $\mathrm{WKL}(e, z^{(0)}, A^m) - \mathrm{WKL}(e, z^{(1)}, A^m)$, where the one-step iterate $z^{(1)}$ is computed from a given initial state $z^{(0)}$ with random elements for $m = 1, 2, \ldots, 30$. It can be seen that all the points lie above the line of equality (identity line), as stated in Theorem 1. Another experiment with different values of $\alpha$ yielded similar results.

4.2. Evaluation of Reconstructed Images

The noisy projection $y \in \mathbb{R}_+^I$ obtained from the phantom image was simulated using Equation (1), with $\delta$ standing for white Gaussian noise, so that the signal-to-noise ratio (SNR) was 30 dB unless otherwise specified. We also set a fixed initial value $z_j^0$ for $j = 1, 2, \ldots, J$ and $h = 1$ in Equations (2), (3), (8), and (10). Although setting $h > 1$ would hasten convergence, varying $h$ is outside the scope of this paper.
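One common way to generate $\delta$ for a prescribed SNR (defined here as 10 log10 of the signal-to-noise power ratio; the paper does not spell out its exact definition) is sketched below in Python/NumPy.

```python
import numpy as np

def add_white_gaussian_noise(y, snr_db, rng=None):
    """Return y + delta with white Gaussian delta scaled so that
    20*log10(||y|| / ||delta||) equals snr_db."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(y.shape)
    scale = np.linalg.norm(y) / (np.linalg.norm(noise) * 10.0 ** (snr_db / 20.0))
    return y + scale * noise

# Example: y_noisy = add_white_gaussian_noise(y, snr_db=30.0)
```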
To compare a reconstructed image $x$ with the true image $e$, we defined the evaluation functions
$$ d_j(e, x) := | e_j - x_j | \qquad (30) $$
for $j = 1, 2, \ldots, J$ and
$$ D(e, x) := \| e - x \|_2 = \left( \sum_{j=1}^{J} d_j(e, x)^2 \right)^{1/2}, \qquad (31) $$
which can evaluate the quality of images more directly than $\mathrm{WKL}(e, x, A)$ with its weighted coefficients.
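These two measures translate directly into code; a trivial Python/NumPy version is given below for completeness (the names are illustrative only).

```python
import numpy as np

def subtraction_image(e, x):
    """Per-pixel absolute difference d_j(e, x) of Eq. (30), as used for the subtraction images."""
    return np.abs(e - x)

def evaluation_D(e, x):
    """Euclidean evaluation function D(e, x) of Eq. (31)."""
    return float(np.sqrt(np.sum((e - x) ** 2)))
```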
Figure 3 displays the evaluation functions D ( e , z ( n ) ) defined in accordance with Equation (31) for the case of M = 1 , α = 0.01 , and noisy projection data for n = 0 , 1 , 2 , , 50 . Here, we can observe the following. In the early iterations, the SMART algorithm decreases the value of the evaluation function. While the time course of SMART does not exhibit a monotonic decrease, the evaluation function for MLEM monotonically decreases as the number of iterations increases in this range of iterations. It can be seen that the proposed algorithm reduces the evaluation function more than MLEM or SMART.
Figure 4 graphs the evaluation functions D ( e , z ( n ) ) versus the real computation time s ( n ) (in seconds) as a function of the iteration number n for n = 0 , 1 , 2 , , 50 under the same settings as in Figure 3. At each iteration of the one-step algorithm, the coefficient in Equation (28) was pre-calculated for all algorithms, and the ratio between the measured and forward projections defined in Equation (29) for each of GM and HM was calculated at every i = 1 , 2 , , I . The reconstruction time of the proposed algorithm is approximately 22 s, whereas MLEM and SMART take approximately 20 s in total. Although the proposed algorithm takes 10% longer than the conventional ones, it provides a smaller evaluation function for almost the same computation time ( s ( 43 ) for GM or HM) as MLEM or SMART at the 50th iteration. Figure 5 shows images reconstructed by MLEM, SMART, and GM, and the corresponding subtraction images at every pixel, defined as in Equation (30) with x = u ( 50 ) , v ( 50 ) , and z ( 43 ) . The density value of d is on a common scale. By comparing, e.g., the edges of the high-density outer rims in the images, it can be seen that GM generates high-quality reconstructions; this is quantitatively confirmed by the small evaluation function between the phantom and reconstructed images.
To examine the effectiveness of the sequential discretization algorithm discussed in Section 3.3, we performed reconstructions under the same settings. The time courses of the evaluation functions for MLEM, SMART, GM, and F-GM (short for the fast sequential algorithm using the weighted geometric mean) are plotted in Figure 6 as functions of the computation time. We can see that F-GM has both the same level of performance as GM and the same computational time as MLEM or SMART alone. The oscillatory phenomenon in the iterations that affects F-GM is due to the low accuracy of discretizing the continuous-time system using the state two steps ago. The oscillation can be reduced by choosing the step size h to be less than one in exchange for an increase in computation time, though it is not necessary in this case.
The proper selection of the parameter $\alpha$ is important. As shown in Figure 7, which plots the evaluation functions for GM with $\alpha = 0.005$, $0.01$, and $0.05$, the value $\alpha = 0.01$ provides the best performance. Figure 8 then shows the effect of changing the SNR of the projections from 30 to 20 dB. Compared with the nonmonotonic decrease in the evaluation function when the SNR is 20 dB and $\alpha$ is 0.05 (Figure 8), the time-varying system defined in Section 3.4 yields an improvement; for example, by using the exponential function $\alpha_0 \lambda^n$ with coefficients $\alpha_0 = 0.05$ and $\lambda = 0.95$, as plotted in Figure 9, the values of the evaluation function decrease monotonically up to the 40th iteration. On the other hand, a discontinuous time-varying approach mimicking the hybrid-cascaded method, obtained by using SMART in the initial step ($L = 0$) and MLEM in the remaining steps, is inferior to the continuous time-varying system (Figure 9). The reason why the iterations of the discontinuous time-varying scheme (which is MLEM after the second iteration in this case) and of pure MLEM converge to steady states with the same evaluation function value is conjectured to be that there is a global nonnegative minimizer of the objective function for MLEM.
The purpose of the next example is to verify the effectiveness of using divided subsets for fixed $\alpha = 0.01$. Projections in 180 directions were divided into $M$ nonoverlapping subsets, and the ordering of the $M$ subsets was determined using a random projection permutation [41,42]. We can see in Figure 10 that OS-GM with $M = 8$ provides better convergence performance, with respect to both speed and the evaluation function value, than not only OS-EM and OS-MART with the same $M$ but also GM and OS-GM with $M = 2$. We can also see that, for $M = 8$, OS-GM is capable of suppressing the oscillatory phenomenon that appears over successive $M$ iterations in the results of OS-EM. Figure 11 shows images reconstructed by OS-EM with $M = 8$ (at the 20th iteration), GM (at the 48th iteration), and OS-GM with $M = 8$ (at the 20th iteration), together with the corresponding subtraction images evaluated as the difference from the true image. We can see that OS-GM with $M = 8$ provides the best image quality in terms of contrast and artifacts.

5. Conclusions

We presented a novel iterative algorithm that combines OS-EM and OS-MART by using weighted geometric or hybrid means. The theoretical results support the convergence of the iterates in the sense that the objective function decreases as the iteration number increases. Numerical experiments illustrate that the suggested algorithm and its fast version using a sequential calculation have advantages over MLEM in terms of the evaluation function after sufficient iterations, as well as over SMART in early iterations. Moreover, our experimental results indicate that the iterative algorithms with a time-varying parameter and the ordered-subsets scheme perform well. In the future, we will use methodologies such as machine learning to select the best applicable parameters as functions of the numbers of pixels and projections, the noise level of the projections, and so on.

Author Contributions

Conceptualization, T.Y.; data curation, O.M.A.A.-O. and T.Y.; formal analysis, O.M.A.A.-O., R.K., Y.Y., T.K. and T.Y.; methodology, O.M.A.A.-O., R.K., Y.Y., T.K. and T.Y.; software, R.K., Y.Y. and T.Y.; supervision, T.Y.; validation, O.M.A.A.-O. and T.Y.; writing—original draft, O.M.A.A.-O. and T.Y.; and writing—review and editing, O.M.A.A.-O., R.K., Y.Y., T.K. and T.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by JSPS KAKENHI, Grant Number 21K04080.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ramachandran, G.N.; Lakshminarayanan, A.V. Three-dimensional reconstruction from radiographs and electron micrographs: Application of convolutions instead of Fourier transforms. Proc. Natl. Acad. Sci. USA 1971, 68, 2236–2240.
  2. Shepp, L.A.; Vardi, Y. Maximum likelihood reconstruction for emission tomography. IEEE Trans. Med. Imaging 1982, 1, 113–122.
  3. Lewitt, R.M. Reconstruction algorithms: Transform methods. Proc. IEEE 1983, 71, 390–408.
  4. Natterer, F. Computerized tomography. In The Mathematics of Computerized Tomography; Springer: Berlin/Heidelberg, Germany, 1986; pp. 1–8.
  5. Kak, A.C.; Slaney, M. Principles of Computerized Tomographic Imaging; IEEE Press: Piscataway, NJ, USA, 1988.
  6. Stark, H. Image Recovery: Theory and Application; Academic Press: Washington, DC, USA, 1987.
  7. Hudson, H.M.; Larkin, R.S. Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans. Med. Imaging 1994, 13, 601–609.
  8. Gordon, R.; Bender, R.; Herman, G.T. Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography. J. Theor. Biol. 1970, 29, 471–481.
  9. Fessler, J.A.; Hero, A.O. Penalized maximum-likelihood image reconstruction using space-alternating generalized EM algorithms. IEEE Trans. Image Process. 1995, 4, 1417–1429.
  10. Byrne, C.L. Accelerating the EMML algorithm and related iterative algorithms by rescaled block-iterative methods. IEEE Trans. Image Process. 1998, 7, 100–109.
  11. DoSik, H.; Gengsheng, Z.L. Convergence study of an accelerated ML-EM algorithm using bigger step size. Phys. Med. Biol. 2006, 51, 237–252.
  12. Darroch, J.; Ratcliff, D. Generalized iterative scaling for log-linear models. Ann. Math. Stat. 1972, 43, 1470–1480.
  13. Schmidlin, P. Iterative separation of sections in tomographic scintigrams. J. Nucl. Med. 1972, 11, 1–16.
  14. Badea, C.; Gordon, R. Experiments with the nonlinear and chaotic behaviour of the multiplicative algebraic reconstruction technique (MART) algorithm for computed tomography. Phys. Med. Biol. 2004, 49, 1455–1474.
  15. Byrne, C.L. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
  16. Gustavsson, B. Tomographic inversion for ALIS noise and resolution. J. Geophys. Res. Space Phys. 1998, 103, 26621–26632.
  17. Jiang, W.; Zhang, X. Relaxation Factor Optimization for Common Iterative Algorithms in Optical Computed Tomography. Math. Probl. Eng. 2017, 2017, 4850317.
  18. Schropp, J. Using dynamical systems methods to solve minimization problems. Appl. Numer. Math. 1995, 18, 321–335.
  19. Airapetyan, R.G.; Ramm, A.G.; Smirnova, A.B. Continuous analog of Gauss–Newton method. Math. Model. Methods Appl. Sci. 1999, 9, 463–474.
  20. Airapetyan, R.G.; Ramm, A.G. Dynamical systems and discrete methods for solving nonlinear ill-posed problems. In Applied Mathematics Reviews; Anastassiou, G.A., Ed.; World Scientific Publishing Company: Singapore, 2000; Volume 1, pp. 491–536.
  21. Airapetyan, R.G.; Ramm, A.G.; Smirnova, A.B. Continuous methods for solving nonlinear ill-posed problems. In Operator Theory and Its Applications; Ramm, A.G., Shivakumar, P.N., Vilgelmovich Strauss, A., Eds.; American Mathematical Society: Providence, RI, USA, 2000; Volume 25, pp. 111–136.
  22. Ramm, A.G. Dynamical systems method for solving operator equations. Commun. Nonlinear Sci. Numer. Simul. 2004, 9, 383–402.
  23. Li, L.; Han, B. A dynamical system method for solving nonlinear ill-posed problems. Appl. Math. Comput. 2008, 197, 399–406.
  24. Fujimoto, K.; Abou Al-Ola, O.M.; Yoshinaga, T. Continuous-time image reconstruction using differential equations for computed tomography. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 1648–1654.
  25. Abou Al-Ola, O.M.; Fujimoto, K.; Yoshinaga, T. Common Lyapunov function based on Kullback–Leibler divergence for a switched nonlinear system. Math. Probl. Eng. 2011, 2011, 723509.
  26. Yamaguchi, Y.; Fujimoto, K.; Abou Al-Ola, O.M.; Yoshinaga, T. Continuous-time image reconstruction for binary tomography. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 2081–2087.
  27. Tateishi, K.; Yamaguchi, Y.; Abou Al-Ola, O.M.; Yoshinaga, T. Continuous Analog of Accelerated OS-EM Algorithm for Computed Tomography. Math. Probl. Eng. 2017, 2017, 1564123.
  28. Kasai, R.; Yamaguchi, Y.; Kojima, T.; Yoshinaga, T. Tomographic Image Reconstruction Based on Minimization of Symmetrized Kullback–Leibler Divergence. Math. Probl. Eng. 2018, 2018, 8973131.
  29. Kasai, R.; Yamaguchi, Y.; Kojima, T.; Abou Al-Ola, O.M.; Yoshinaga, T. Noise-Robust Image Reconstruction Based on Minimizing Extended Class of Power-Divergence Measures. Entropy 2021, 23, 1005.
  30. Lyapunov, A.M. Stability of Motion; Academic Press: New York, NY, USA, 1966.
  31. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
  32. Byrne, C.L. Iterative image reconstruction algorithms based on cross-entropy minimization. IEEE Trans. Image Process. 1993, 2, 96–103.
  33. Byrne, C.L. Block-iterative algorithms. Int. Trans. Oper. Res. 2009, 16, 427–463.
  34. Tateishi, K.; Yamaguchi, Y.; Abou Al-Ola, O.M.; Kojima, T.; Yoshinaga, T. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography. In Proceedings of the SPIE Medical Imaging 2016, San Diego, CA, USA, 2016; Volume 9783, 97834Q.
  35. Aniszewska, D. Multiplicative Runge–Kutta methods. Nonlinear Dyn. 2007, 50, 265–272.
  36. Bashirov, A.E.; Kurpinar, E.M.; Oezyapici, A. Multiplicative calculus and its applications. J. Math. Anal. Appl. 2008, 337, 36–48.
  37. Jeffreys, H. Theory of Probability; Clarendon Press: Oxford, UK, 1939.
  38. Jeffreys, H. An invariant form for the prior probability in estimation problems. Proc. R. Soc. Lond. A 1946, 186, 453–461.
  39. Ishikawa, K.; Yamaguchi, Y.; Abou Al-Ola, O.M.; Kojima, T.; Yoshinaga, T. Block-Iterative Reconstruction from Dynamically Selected Sparse Projection Views Using Extended Power-Divergence Measure. Entropy 2022, 24, 740.
  40. Tiwari, S.; Srivastava, R. A Hybrid-Cascaded Iterative Framework for Positron Emission Tomography and Single-Photon Emission Computed Tomography Image Reconstruction. J. Med. Imaging Health Inform. 2016, 6, 1001–1012.
  41. Kazantsev, I.G.; Matej, S.; Lewitt, R.M. Optimal Ordering of Projections using Permutation Matrices and Angles between Projection Subspaces. Electron. Notes Discret. Math. 2005, 20, 205–216.
  42. Van Dijke, M.C. Iterative Methods in Image Reconstruction. Ph.D. Thesis, Rijksuniversiteit Utrecht, Utrecht, The Netherlands, 1992.
Figure 1. Image of phantom.
Figure 2. Scatter plot with the line of equality (red) for OS-GM in Equation (8) with m = 1, 2, …, 30.
Figure 3. Evaluation functions for proposed and conventional algorithms at each iteration. The values of the plotted points for GM and HM are almost identical.
Figure 4. Evaluation functions of proposed and conventional algorithms versus computation time for n = 0, 1, 2, …, 50. The values of the plotted points for GM and HM are almost identical.
Figure 5. Reconstructed images (top plate) and subtraction images (bottom plate) for MLEM and SMART at 50th iteration and for GM at 43rd iteration.
Figure 6. Evaluation functions of proposed and conventional algorithms versus computation time for n = 0, 1, 2, …, 50.
Figure 7. Evaluation functions of GM for different values of α versus computation time for n = 0, 1, 2, …, 50.
Figure 8. Evaluation functions of GM using projections with SNR of 20 dB versus computation time for n = 0, 1, 2, …, 50.
Figure 9. Evaluation functions of proposed and conventional algorithms using projections with SNR of 20 dB at each iteration. U denotes a unit step function.
Figure 10. Evaluation functions for OS-EM, OS-MART, GM, and OS-GM versus computation time for n = 0, 1, 2, …, N, along with N equal 60, 45, and 20 for M being 1, 2, and 8, respectively.
Figure 11. Reconstructed images (top plate) and images of subtraction (bottom plate) for GM at 48th iteration and for OS-EM and OS-GM with M = 8 at 20th iteration.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

