Article

Fast Compressed Sensing of 3D Radial T1 Mapping with Different Sparse and Low-Rank Models

Department of Technical Physics, University of Eastern Finland, 70211 Kuopio, Finland
* Author to whom correspondence should be addressed.
J. Imaging 2023, 9(8), 151; https://doi.org/10.3390/jimaging9080151
Submission received: 16 June 2023 / Revised: 14 July 2023 / Accepted: 21 July 2023 / Published: 26 July 2023
(This article belongs to the Section Medical Imaging)

Abstract

Knowledge of the relative performance of the well-known sparse and low-rank compressed sensing models with 3D radial quantitative magnetic resonance imaging acquisitions is limited. We use 3D radial T1 relaxation time mapping data to compare the total variation, low-rank, and Huber penalty function approaches to regularization to provide insights into the relative performance of these image reconstruction models. Simulation and ex vivo specimen data were used to determine the best compressed sensing model as measured by normalized root mean squared error and structural similarity index. The large-scale compressed sensing models were solved with a GPU implementation of a preconditioned primal-dual proximal splitting algorithm, providing high-quality T1 maps within a feasible computation time. The model combining spatial total variation and locally low-rank regularization yielded the best performance, followed closely by the model combining spatial and contrast dimension total variation. Computation times ranged from 2 to 113 min, with the low-rank approaches taking the most time. The differences between the compressed sensing models are not necessarily large, but the overall performance is heavily dependent on the imaged object.

1. Introduction

Compressed sensing (CS) is one way to speed up magnetic resonance imaging (MRI) by collecting less data while simultaneously retaining high image quality. Essentially, CS theory states that only a small subset of k-space samples is required for accurate image reconstruction [1,2]. In MRI, CS can be realized by undersampling the k-space incoherently, which produces noise-like artifacts in the image domain. Then, by utilizing sparsifying transforms, compressibility of the MRI images is enforced in the image reconstruction, which minimizes image noise and artifacts, allowing the original image to be largely recovered [3,4].
One of the MRI methods most in need of fast acquisition strategies is the 3D quantitative MRI (qMRI) approach. qMRI methods offer extra information that can be valuable in certain situations, but they are also multiple times slower than the standard contrast-weighted MRI. The benefits of 3D imaging include more comprehensive coverage of the anatomy of interest, which in most cases is a welcome addition, but at a further cost to acquisition times. Thus, combining these two time-consuming methods is problematic and necessitates an acceleration of the acquisition methods. Coincidentally, 3D qMRI is a good candidate for CS—undersampling in three dimensions spreads the arising artifacts better when compared to 2D, and the added parametric dimension can be exploited in various ways for better image quality and more aggressive undersampling [5,6,7,8,9,10].
For all CS methods, significant attention should be paid to the undersampling scheme of the acquisition, as it heavily influences how the artifacts manifest in the images and, thus, how well the original image can be recovered [3,11]. Practical considerations and scanner hardware limit the possible trajectories. Among the standard k-space sampling schemes, non-cartesian radial trajectories are well suited for accelerated 3D imaging. For radial acquisitions, even with uniform undersampling, the undersampling artifacts are inherently spread out in all three dimensions, and the k-space center is densely sampled, making the acquisition resilient to motion and undersampling.
The downside of the non-cartesian trajectories is that the image reconstruction is computationally much more demanding since the standard fast Fourier transform needs to be replaced by a non-uniform fast Fourier transform (NUFFT) [12]. In addition, usually iterative reconstruction methods are utilized for non-cartesian acquisitions, further increasing the computational cost. Since the large-scale optimization problem of the 3D qMRI CS is usually solved with first-order optimization methods, which require many iterations, the computational aspect of 3D radial qMRI CS needs to be considered. Recently, preconditioned primal-dual proximal splitting was demonstrated to significantly speed up the convergence of CS image reconstructions without the increased noise amplification of the traditional density correction preconditioner approach [13]. The novel preconditioner combined with algorithm implementation on a graphics processing unit (GPU) seems to offer feasible reconstruction times for 3D qMRI CS, negating a large part of the downsides of the non-cartesian CS.
In this study, we approach the topic of CS qMRI by demonstrating the performance of multiple CS models with variable flip angle 3D radial T1 mapping. The T1 relaxation time, for example, shows promise as a potential biomarker for lung disease [14] and myocardial fibrosis [15]. Additionally, a specific variable flip angle acquisition shows promise in proton resonance frequency shift thermometry [16]. T1 mapping has also been combined with compressed sensing to, for example, make single-breath-hold multiparametric MRI feasible [17].
The CS models tested include spatial and contrast dimension total variation, spatial and contrast dimension finite differences with the Huber penalty function, and locally low rank and locally low rank combined with spatial total variation. These sparsifying priors are well studied, but their performance with true 3D radial acquisitions is not well defined. The large-scale image reconstruction problems are solved in a feasible time frame with the primal-dual proximal splitting (PDPS) algorithm [18], as it permits preconditioning for faster convergence [13,19] and allows the implementation of the sparsifying priors [20]. Reconstructions are computed with various amounts of data to investigate the undersampling tolerance of the methods. Ex vivo and simulation data are used, as the former provides evidence of real-world performance and the latter gives us the possibility to measure performance with respect to the known ground truth.

2. Theory

The standard MRI measurement model is
m = Fu + e,
where m ∈ ℂ^M is the measured k-space, F is the discrete Fourier transform, u ∈ ℂ^N is the complex-valued image with N = N_x × N_y × N_z, and e ∈ ℂ^M models the complex-valued measurement noise. With radial acquisitions, the discrete Fourier transform is usually approximated with the non-uniform fast Fourier transform (NUFFT) operator. Considering also that an image series is acquired, the measurement model can be written as
m_n = A_n u_n + e_n,
where the subscript n denotes the contrast index and A_n = P_n F C_n is the NUFFT operator, in which P_n is the combined interpolation and sampling matrix and C_n is the scaling matrix. Note that the NUFFT operator changes with the contrast, as the sampling trajectories differ between the contrast images.
The optimization problem for the standard non-regularized radial reconstruction then reads
û = arg min_u Σ_{n=1}^{N_c} ‖A_n u_n − m_n‖₂²,
where u is the 4D image stack containing all N_c of the 3D contrast images. This is the least squares solution to the problem and will be referenced as LS for the rest of the publication.

2.1. Total Variation Model (sTV + cTV)

One of the most popular regularization functions is the total variation (TV). TV regularizes the L1 norm of the spatial or contrast dimension image gradient, promoting piecewise constancy in the respective dimensions. The performance of TV has been well documented in numerous applications. However, it is plagued by the so-called staircasing effect, where originally smooth image areas are reconstructed as piecewise constant, leading to sub-optimal performance. By utilizing TV in the spatial and contrast dimensions separately, the minimization problem is
û = arg min_u Σ_{n=1}^{N_c} (‖A_n u_n − m_n‖₂² + α‖∇_s u_n‖₁) + β‖∇_c u‖₁,
where ∇_s is the discrete spatial gradient operator, ∇_c is the discrete contrast dimension gradient operator, and the parameters α, β > 0 determine the weighting of the spatial and contrast dimension regularization terms, respectively. In our testing, this model is comparable to spatio-temporal TV, where the spatial and contrast image gradients are combined under the same square root, i.e., in the same regularization term.
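As a concrete illustration, the discrete spatial gradient and the anisotropic spatial TV norm can be sketched in a few lines of NumPy. This is a minimal sketch with forward differences and a replicated boundary; the function names are our own and not taken from the article's released codes:

```python
import numpy as np

def spatial_gradient(u):
    """Forward differences of a 3D volume along each axis (replicated boundary)."""
    gx = np.diff(u, axis=0, append=u[-1:, :, :])
    gy = np.diff(u, axis=1, append=u[:, -1:, :])
    gz = np.diff(u, axis=2, append=u[:, :, -1:])
    return np.stack([gx, gy, gz])

def tv_anisotropic(u):
    """Anisotropic spatial TV: L1 norm of the stacked finite differences."""
    return np.abs(spatial_gradient(u)).sum()
```

A flat volume has zero TV, while a sharp step contributes only its edge area, which is why TV regularization favors piecewise constant solutions.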

2.2. Huber Model (sH + cH)

The Huber penalty function is of the form
H_γ(x) = { x²/(2γ),   |x| ≤ γ,
         { |x| − γ/2,  |x| > γ,
where γ is the tunable Huber parameter. Below γ, the Huber regularization is close to the standard L2-norm smoothness regularization, and above γ, it approximates the TV regularization. In the context of MRI, the γ parameter can be interpreted as a threshold where a discontinuous signal change is assumed [21]. The Huber penalty function has been proposed as a remedy to the stair-casing of TV in the denoising context [18], and it has been applied, e.g., as a temporal regularization for DCE data [21].
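The piecewise definition above translates directly into code; a minimal elementwise sketch (our own naming, not from the article's codes):

```python
import numpy as np

def huber(x, gamma):
    """Huber penalty: quadratic below gamma (smoothness-like), linear above (TV-like)."""
    a = np.abs(x)
    return np.where(a <= gamma, a**2 / (2.0 * gamma), a - gamma / 2.0)
```

Note that both branches give γ/2 at |x| = γ, so the penalty is continuous and, unlike the absolute value, differentiable everywhere.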
The image reconstruction problem utilizing the Huber penalty function on the spatial and contrast dimension image gradients is
û = arg min_u Σ_{n=1}^{N_c} (‖A_n u_n − m_n‖₂² + α‖H_{γ₁}(∇_s u_n)‖₁) + β‖H_{γ₂}(∇_c u)‖₁.
The two separate Huber parameters, γ₁ and γ₂, provide a way to spatially alleviate the piecewise constant nature of TV while also limiting excessive contrast dimension smoothness. Contrast dimension smoothness is not a desirable property with a qMRI image series because the resulting temporal blur can lose small nuances of the contrast modulation, and the stair-casing effect is not as significant an issue temporally as the images undergo the signal fitting procedure.

2.3. Locally Low-Rank Model (LLR)

A low-rank model assumes that for the N_x N_y N_z × N_c Casorati matrix, formed from the N_x × N_y × N_z × N_c qMRI image series, the rank of the Casorati matrix is much smaller than N_c, i.e., it is rank deficient. It has been shown that this rank deficiency can be improved if only a small part of the image series that contains mostly similar signal dynamics is considered. This approach is known as the locally low-rank (LLR) model [22]. The superior performance of the LLR model when compared to the (globally) low-rank model has been, for example, demonstrated with DCE MRI [23] and quantitative parameter mapping of T1 and T2 relaxation times [9]. If the low-rank assumption does not hold and low-rank regularization is enforced during image reconstruction, contrast dimension blurring or unsatisfying noise suppression will occur, depending on the strength of the regularization [22].
Rank-constrained optimization problems are usually infeasible in practice, but luckily, convex relaxations that utilize nuclear norm regularization can be used [24]. Thus, the LLR optimization problem can be written as
û = arg min_u Σ_{n=1}^{N_c} ‖A_n u_n − m_n‖₂² + α Σ_{b∈Ω} ‖C_b u‖_*,
where ‖·‖_* denotes the nuclear norm, C_b is an operator that extracts an image block of size b × b × b × N_c from the image series and reshapes it into a Casorati matrix of size b³ × N_c, and Ω denotes the whole set of image blocks. The blocks can be chosen as overlapping or non-overlapping, with the computational complexity increasing as a function of the overlap. Translational invariance can be achieved by randomly spatially shifting the contrast images prior to LLR regularization at each iteration during the reconstruction [25].
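In proximal algorithms, the nuclear norm term is handled by singular value thresholding of each Casorati block. A minimal sketch with non-overlapping blocks follows, assuming the spatial dimensions are divisible by the block size; the names are our own, not the article's released implementation:

```python
import numpy as np

def svt(C, tau):
    """Singular value thresholding: proximal mapping of tau * nuclear norm."""
    U, s, Vh = np.linalg.svd(C, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vh

def llr_prox(u, block=4, tau=0.1):
    """Apply SVT to non-overlapping b x b x b Casorati blocks of a 4D image series."""
    nx, ny, nz, nc = u.shape
    out = np.empty_like(u)
    for i in range(0, nx, block):
        for j in range(0, ny, block):
            for k in range(0, nz, block):
                blk = u[i:i+block, j:j+block, k:k+block, :]
                C = blk.reshape(-1, nc)  # Casorati matrix of size b^3 x Nc
                out[i:i+block, j:j+block, k:k+block, :] = svt(C, tau).reshape(blk.shape)
    return out
```

With tau = 0 each block is reconstructed exactly, while a large tau shrinks all singular values to zero; intermediate values keep only the dominant signal-evolution components of each block.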

2.4. Spatial Total Variation and Locally Low-Rank Model (sTV + LLR)

Adding spatial TV regularization to an LLR model can provide multiple benefits [26,27,28]. If the qMRI series is not exactly low rank, the spatial TV regularization can be used to suppress the left-over noise. Additionally, spatial TV can also be used to minimize the blocking artifacts that the LLR might induce. The joint optimization problem then reads
û = arg min_u Σ_{n=1}^{N_c} (‖A_n u_n − m_n‖₂² + α‖∇_s u_n‖₁) + β Σ_{b∈Ω} ‖C_b u‖_*.

2.5. Primal-Dual Proximal Splitting Algorithm

All the models detailed above can be solved by the PDPS algorithm [18]. The PDPS solves an optimization problem simultaneously with its dual and can be applied to primal minimization problems of the form
min_x F(Kx) + G(x),
where K is a linear transform, and F and G are convex functions. For Equation (9), the PDPS algorithm reads (Algorithm 1) [20].
Algorithm 1. Primal-dual proximal splitting.
Choose θ ∈ [0, 1] and σ, τ such that στ < 1/‖K‖₂²
while not converged do
   y^{k+1} ← prox_σ[F*](y^k + σK x̄^k)
   x^{k+1} ← prox_τ[G](x^k − τKᵀy^{k+1})
   x̄^{k+1} ← x^{k+1} + θ(x^{k+1} − x^k)
end while
Here, prox denotes the proximal mapping, which is used to generate the descent direction for the function argument.
As an example, for Equation (3) and a single-contrast image, the optimization problem can be written as in Equation (9) by setting the primal variable x as the vectorized complex image u, K as the corresponding NUFFT operator A, F(p) = 0.5‖p − m‖₂² with p = Au, and G(x) = 0. The proximal mapping of the convex conjugate of F is prox_σ[F*](y) = (y − σm)/(1 + σ). Additionally, we let the algorithm utilize a diagonal preconditioner P for the dual variable following [13]. With these, the preconditioned PDPS algorithm reads (Algorithm 2).
Algorithm 2. Preconditioned primal-dual proximal splitting for least squares.
Choose θ ∈ [0, 1] and σ, τ such that στ < 1/‖AᵀPA‖₂
while not converged do
   p^{k+1} ← (p^k + Pσ(Aū^k − m)) / (1 + Pσ)
   u^{k+1} ← u^k − τAᵀp^{k+1}
   ū^{k+1} ← u^{k+1} + θ(u^{k+1} − u^k)
end while
The PDPS implementations for Equations (4) and (6)–(8) are detailed in Appendix A (Algorithms A1–A4). For further information about the algorithm, the reader is referred, e.g., to references [18,20].
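To make Algorithm 2 concrete, the following sketch runs the preconditioned PDPS iteration on a small dense least-squares problem, with a dense matrix standing in for the NUFFT operator and the diagonal preconditioner stored as a vector. The step-size rule follows the article (σ = 1, τ ≈ 1/‖AᵀPA‖₂); this is an illustrative stand-in, not the article's released implementation:

```python
import numpy as np

def pdps_ls(A, m, P=None, n_iter=2000, theta=1.0):
    """Preconditioned PDPS for min_u 0.5 * ||A u - m||_2^2 (cf. Algorithm 2).
    A: dense matrix standing in for the NUFFT operator.
    P: diagonal dual preconditioner as a vector of positive weights (identity if None)."""
    M, N = A.shape
    if P is None:
        P = np.ones(M)
    sigma = 1.0
    # step-size rule sigma*tau < 1/||A^T P A||_2, with a small safety margin
    tau = 0.99 / np.linalg.norm(A.T @ (P[:, None] * A), 2)
    u = np.zeros(N)
    u_bar = u.copy()
    p = np.zeros(M)
    for _ in range(n_iter):
        # dual step: preconditioned proximal mapping of F* for F(p) = 0.5*||p - m||^2
        p = (p + P * sigma * (A @ u_bar - m)) / (1.0 + P * sigma)
        # primal step: gradient-type update, since G = 0
        u_new = u - tau * (A.T @ p)
        # over-relaxation of the primal variable
        u_bar = u_new + theta * (u_new - u)
        u = u_new
    return u
```

At a fixed point, p = Au − m and Aᵀp = 0, i.e., the normal equations of the least-squares problem are satisfied.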

2.6. Variable Flip Angle Acquisition

T1 mapping can be performed by a variable flip angle (VFA) acquisition, where multiple images with different flip angles (FA) but with the same repetition time (TR) are acquired from which the T1 value can be solved. It is possible to estimate the T1 from only two flip angles [29]. However, this estimation is accurate only for a very limited range of T1s and, thus, the use of multiple flip angles allows for a more robust estimation [30].
For a SWIFT-type acquisition, where the T2* effects can be neglected and the signal is acquired in a steady state [31], the signal equation is
u_n = S0 (1 − exp(−TR/T1)) sin(FA_n) / (1 − exp(−TR/T1) cos(FA_n)),
where S0 is the proton density and n = 1, …, N_c. The T1 and S0 values can be solved either by linearizing the signal equation [29,32] or by performing non-linear least squares fitting.
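Both fitting routes can be sketched briefly. The linearized route rearranges the signal equation so that u_n/sin(FA_n) is linear in u_n/tan(FA_n), with slope exp(−TR/T1), in the style of DESPOT1-type fitting; the function names below are our own:

```python
import numpy as np

def vfa_signal(S0, T1, TR, fa):
    """Steady-state VFA signal equation; fa in radians."""
    E1 = np.exp(-TR / T1)
    return S0 * (1 - E1) * np.sin(fa) / (1 - E1 * np.cos(fa))

def fit_t1_linear(signal, TR, fa):
    """Linearized T1 fit: regress s/sin(fa) on s/tan(fa).
    The slope is exp(-TR/T1) and the intercept is S0 * (1 - slope)."""
    y = signal / np.sin(fa)
    x = signal / np.tan(fa)
    slope, intercept = np.polyfit(x, y, 1)
    T1 = -TR / np.log(slope)
    S0 = intercept / (1 - slope)
    return T1, S0
```

For noiseless data the linearization is exact, so the fit recovers T1 and S0 to numerical precision; with noise, the non-linear least squares fit is generally more robust, as used for the final maps in this article.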

3. Methods

The image reconstruction is performed with the preconditioned PDPS algorithm detailed in the previous section and in the Appendix A. Each image reconstruction problem, Equations (3), (4), and (6)–(8), is evaluated on two radial datasets: a VFA Multi-Band-Sweep Imaging with Fourier Transform (MB-SWIFT) [33] ex vivo acquisition of a mouse spine and a simulated phantom. The ex vivo specimen allows us to measure real-world performance of the models, and the phantom gives a proper ground truth reference. We detail the two datasets and the image reconstruction specifics below.
In the spirit of open and reproducible research, both datasets, the Python codes for image reconstruction, and the Matlab codes for signal fitting are available from Zenodo (https://doi.org/10.5281/zenodo.8177285).

3.1. Ex Vivo Multi-Band-SWIFT Acquisition

A section of a mouse spine with the surrounding tissues was collected from a mouse that was sacrificed according to ethical permits (for an unrelated study, ESAVI/270/04.10.07/2017 [34]) and subsequently transcardially perfused, fixed, and stored in 4% paraformaldehyde. For imaging, the sample was inserted into a 3D-printed holder and immersed in perfluoropolyether (Galden HS240, Vacuumservice Oy, Helsinki, Finland).
A VFA acquisition with the MB-SWIFT was performed with a small-bore 9.4 T Varian/Agilent MRI scanner (VnmrJ DirectDrive software v.3.2, Varian Associates Inc., Palo Alto, CA, USA) and a 19 mm diameter quadrature radiofrequency volume transceiver (Rapid Biomedical, Rimpar, Germany). The magnet shimming was performed manually. The essential MB-SWIFT parameters were a bandwidth of 385 kHz, an isotropic field of view of 3 cm, 63,488 radial spokes per contrast, an image size of 256³, and flip angles of 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, and 20 degrees. The standard MB-SWIFT data processing pipeline [33] was followed until a radial k-space was obtained, after which all subsequent data processing was performed with the codes developed here and shared in the Zenodo archive.
Our VFA implementation utilized unique gradient trajectories for the acquisition of each contrast. This choice was made to achieve sampling where the image noise and undersampling artifacts would be incoherent, to a degree, between images, thus allowing contrast dimension regularization to effectively eliminate these in the image reconstruction. Specifically, each contrast acquisition utilized 128 unique and complementary gradient trajectories, each consisting of 512 center-out spokes. All these trajectories filled the k-space uniformly, but none of them were identical. Retrospective downsampling, mimicking a sparse acquisition, was performed using a subset of the trajectories in the reconstruction, making the downsampling readily translatable to practice. Imaging time for the whole VFA series was 22 min and 24 s.

3.2. Simulated Phantom

For the simulated phantom, separate S0 and T1 volumes were created [35]. For the phase, a simple linear ramp from zero to zero with a wrap in the middle was used (Figure 1). The gradient trajectories, flip angles, and repetition time for the data simulation were taken from the MB-SWIFT acquisition. The complex image series was generated with Equation (10) and the phase component, and a non-uniform discrete Fourier transform was used to generate the complex radial k-space. Complex Gaussian white noise with a standard deviation of two percent of the mean absolute noiseless k-space was added to the simulated k-space data.

3.3. Error Metrics

The performance of the CS image reconstruction was evaluated with the normalized root mean squared error (nRMSE) and the structural similarity index (SSIM) [36] in a manually segmented region of interest covering the whole specimen. nRMSE is defined as nRMSE(x) = ‖x − x_ref‖₂/‖x_ref‖₂, where x_ref is the ground truth. Obviously, ground truth data are only available for the simulation. Thus, for the ex vivo data, for each model, the reference T1 and S0 maps were formed by performing non-linear least squares fitting on image series reconstructed from the full data with slight regularization. Regularization for the reference reconstructions is important, as even a fully sampled radial reconstruction can suffer from increased noise and undersampling artifacts [37]. Our approach thus visualizes the relative tolerance to undersampling of each method for the ex vivo data. The simulation data then describe each model's absolute tolerance to undersampling.
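The error metric itself is a one-liner; for completeness (the region of interest is applied by passing in only the voxels inside it):

```python
import numpy as np

def nrmse(x, x_ref):
    """Normalized root mean squared error: ||x - x_ref||_2 / ||x_ref||_2."""
    return np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref)
```

By this definition, a perfect reconstruction scores 0, and scaling the estimate by a factor of two scores 1 (100% relative error).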

3.4. Computation

The image reconstruction was performed by solving Equations (3), (4), and (6)–(8) with the preconditioned PDPS algorithm. The data used in the reconstruction are presented with acceleration factor (AF), which is defined as (#full data)/(#data used in the reconstruction). Our definition of full data reconstruction is close to that of a corresponding cartesian acquisition, i.e., our full data reconstruction is already accelerated by about a factor of pi. Since the amount of data acquired during imaging is almost directly proportional to imaging time, AF mirrors the proportional imaging time speed up.
The preconditioner was calculated for each contrast image prior to the PDPS reconstruction. The step-size scheme for the PDPS algorithm was adapted from [13] as we set σ = 1 and τ = 1/‖KᵀPK‖₂, where the operator norm was calculated with the power method. Each reconstruction was continued for a maximum of 500 iterations or until the relative difference of 20 consecutive objective function values was less than 10⁻² for the ex vivo data or 10⁻³ for the phantom data. All the optimal reconstructions, regularization-wise, stopped due to the stop criterion. Different stopping tolerances were used because of the possible slight k-coordinate inaccuracies and the higher noise level of the ex vivo data. The optimal regularization parameters were chosen with a grid search by minimizing the T1 nRMSE against the respective reference reconstruction. For the grid search, linearized T1 fitting was performed for every reconstruction to obtain a quick estimate of the T1. Once the optimal reconstruction was found, its T1 and S0 maps were updated to the non-linear least squares fit estimates.

4. Results

The significantly faster convergence of the preconditioned PDPS algorithm was verified and is visualized in Figure 2, where the objective function value of Equation (3) for the fully sampled simulation data reconstruction is plotted. With the preconditioner, the PDPS algorithm reached the stop criterion in 72 iterations. The non-preconditioned variant, which utilized step sizes of σ = τ = 1/‖K‖₂ [18,20], ran 5000 iterations (500 shown in the image) and did not achieve the same objective function value as the preconditioned version. Similar gains in convergence were achieved for all the reconstruction models. The slight oscillations visible in the objective function value were largely unaffected by the choice of the step sizes.
With the simulation data, we observed good tolerance for undersampling for all the regularized models. T1 maps, with AFs up to 20, remain of high image quality. The sTV + cTV and sTV + LLR models outperform every other method, with the latter achieving the lowest nRMSE scores while being almost identical in performance SSIM-wise. Staircasing is increasingly visible as AF grows, with the sTV + LLR model generally suffering less from it. The sH + cH model clearly leaves more noise in the reconstructions and subsequently performs slightly worse nRMSE-wise and a lot worse SSIM-wise when compared to the TV model. The LLR model demonstrates some residual noise in areas where the blocks overlap multiple image details, e.g., around the outer ring of the phantom, and the reconstructions also look over-regularized with faint block artifacts (Figure 3a and Figure 4).
Generally, for the S0 component, the small details of the phantom are visible for all models at all AFs but with varying amounts of blur, and the nRMSE values are overall larger than with the T1 maps. The sTV + LLR model is again the best model, with the sTV + cTV model a close second, as agreed by both error metrics. The sH + cH model suffers from image blur the most and is the second-worst model here. The LLR model has the most noise and thus the highest nRMSE and the worst SSIM values (Figure 3b and Figure 4).
The ex vivo data do not tolerate undersampling as well as the phantom data. For all regularized models, T1 maps with AFs up to four remain visually similar and under 10% in nRMSE when compared to the respective reference. AFs larger than four suffer from significant image blur, and image details are consequently lost. This is clearly visible in the space between the vertebrae, where the areas with high T1 values are lost. All the regularized models are relatively close in performance, with the sTV + LLR outperforming the others for all AFs tested nRMSE-wise and being the optimal model for AFs up to four SSIM-wise. The LLR model outperforms the sH + cH and sTV + cTV models in nRMSE, but the reconstructions are clearly noisier, and the SSIM suffers greatly. The SSIM favors the Huber model, as it outperforms the LLR and sTV + cTV models for all AFs. Additionally, the Huber model is the optimal one, SSIM-wise, for AFs of 10 and 20. Note that the error metrics are calculated with respect to the corresponding full-data reference reconstructions (Figure 5a and Figure 6).
For AFs of four and below, all the reconstruction models perform similarly and produce low nRMSE and high SSIM values for the S0 component, but further data reduction quickly deteriorates the image quality. The sTV + LLR model is the best below fourfold acceleration by a small margin in both error metrics. However, the LLR model retains the most image detail (and noise) as AF increases, while all the others suffer from significant image blur (Figure 5b and Figure 6).

5. Discussion

We demonstrated the performance of multiple CS image reconstruction models in a qMRI setting and how the preconditioned PDPS algorithm, relying additionally on fast GPU implementation, allows for a computationally fast solution of the chosen models. Without the preconditioner, computation times for the reconstructions would have been prohibitively long, even with the GPU implementation. By nRMSE and SSIM of the T1 maps, generally the best-performing model was the sTV + LLR for both phantom and specimen data. However, the differences between all the tested models are not necessarily large.
The similar performance of the models is not very surprising, as they all promote sparsity in a reasonably similar manner. For the models utilizing contrast dimension TV and Huber, the minimum solution, regularization-wise, would be an image series of constant contrast, as the contrast differences would then be zero. Similarly, for the low-rank model, the minimum solution would be a rank-one matrix, meaning that the matrix can be represented as a product of a contrast dimension vector and a spatial dimension vector (vectorized image). That is, the minimum solution is a matrix where only a specific (though not necessarily constant) signal evolution is allowed in the contrast dimension. For the locally low-rank model, this holds block-wise. These slight fundamental differences seem not to significantly affect the performance of the models.
The best-performing model was the sTV + LLR, with the TV model being a close second, indicating that LLR regularization is slightly better than cTV for the VFA acquisition. The combination of large (from a flip angle of one to two) and small (from a flip angle of seven to eight) signal jumps is known to be non-optimal for TV. Thus, discarding the flip angle of one from the acquisition might close this performance gap.
The Huber model was tuned so that most of the small signal changes were below the Huber parameter value, i.e., flat image areas would be reconstructed piecewise-smooth. However, here the models are compared after a non-linear fitting procedure, and the effect of the Huber penalty function is mostly seen as increased noise in the phantom T1 maps. A positive feature of the Huber model is that it also permits classical gradient-based reconstruction methods (e.g., conjugate gradient) to be used, but in our case, the loss of the preconditioner with the classical methods makes these still slower than the PDPS. However, as the Huber and TV models were close in performance, Huber could still be an interesting choice for cartesian reconstructions where no preconditioner is needed.
With the LLR model, optimal regularization would threshold out the singular vectors corresponding to pure noise, with over-regularization then affecting the signal evolution itself and under-regularization leading to residual noise. Thus, by under-regularizing, noise can be selectively retained without imposing strict a priori information (i.e., piecewise smoothness) on the image estimate, which might explain why the LLR model performs well with the noisy ex vivo specimen. For the phantom data, the LLR minimum nRMSE T1 maps suffer from block artifacts, which could be alleviated with 50% block overlap, with substantial increases in computation time or with an sTV as shown. The block artifacts are also amplified in the signal fitting, as no clear artifacts are visible on the magnitude images (not shown here).
The model combining spatial TV and LLR alleviates the block artifacts and further suppresses the leftover noise of the LLR regularization, making it the best model with the simulation data. With the ex vivo specimen, the sTV + LLR is not a significant improvement over the LLR model, and the LLR reconstructions can retain even more image details, as is particularly visible in the S0 maps. This behavior can probably be attributed to the noise and more complex structure of the reference reconstruction since, with the simulation data, the sTV + LLR clearly outperforms the LLR model.
In this study, we are missing one widely acknowledged regularization model, the total generalized variation (TGV) [38]. TGV was not included because the memory footprint of the model is much larger than that of the TV approach. The larger memory footprint prevented us from performing the reconstruction on the GPU, as was done for all other reconstructions, and the CPU implementation would have been infeasibly slow. As the 3D TV demonstrated some staircasing artifacts, it is unfortunate that no comparison to TGV could be performed, as minimizing the staircasing artifacts is what TGV was originally proposed for.
Ex vivo data were utilized to tie the performance of the tested models to a more realistic case compared to the ideal phantom data. Although the ex vivo data cannot be used to address the possible motion and complications of multiple coils of clinical in vivo data, the ex vivo data at least introduce slight k-coordinate inconsistencies that are inherent in radial acquisitions. Additionally, the noise might also differ slightly from the ideal assumed complex white Gaussian noise, as in ex vivo acquisition, there are multiple possible sources of noise and artifacts, such as eddy currents.
The comparison of the methods between the simulation and ex vivo data is not straightforward because of the lack of real ground truth for the ex vivo specimen. However, the ex vivo specimen can be thought to be the more “correct” comparison, as it represents the image that we are usually trying to match with CS methods. With our ex vivo specimen, the reference happens to be quite noisy, which opens up the topic that the fully sampled reference could also be improved with stronger regularization (suppressing more noise). Doing this would likely increase the performance of the tested models, as the CS methods inherently denoise images. Thus, trying to optimize denoised CS reconstructions for a noisy reference with error metrics that have their own problems does impose some biases. However, as it is unknown which parts of the (reference) image are noise and which are actual details, and further considering that the regularizations impose a priori information on the reconstruction, tuning the optimal reference reconstruction is not clear. Here, we thus chose to use the noisy reference.
Clear differences between the behavior of nRMSE and SSIM error metrics were observed. Especially SSIM seems to greatly punish the presence of any image noise, even if, to the eye of the reader, the slightly noisy reconstruction looks more “natural”. nRMSE, on the other hand, does not discriminate where the error comes from, i.e., whether it is image noise or loss of detail, and just measures the global error. Using the lowest nRMSE of the T1 maps seemed to produce images that matched with the expectation of natural-looking reconstructions; hence, that criterion was used. If the same was completed with SSIM, the individual contrast images and the T1 maps looked over-regularized.
We chose not to report computation time metrics in the results, as absolute performance depends heavily on the hardware used and on implementation details. For our Python implementation on a computational server with an RTX A6000 GPU, the reconstruction times ranged from 2.2 to 13.9 min for sTV + cTV, 2.3 to 14.3 min for sH + cH, 26.4 to 111.5 min for LLR, 28.0 to 113.5 min for sTV + LLR, and 0.9 to 7.7 min for the LS model. The models utilizing LLR regularization are considerably slower than the rest, as multiple singular value decompositions are needed per iteration. Still, these times show that, with the preconditioner, reconstruction is feasible for the large-scale 3D radial qMRI problem. This leaves open the problem of how to set the regularization parameters without a reference reconstruction, which remains an issue for most models. Some methods use automatic or data-driven regularization parameters, for example with wavelet regularization [39] and with TV regularization [40], but to our knowledge, no such methods exist for the models tested here in the qMRI setting.
Lastly, with the chosen VFA acquisition, the ex vivo results are affected by the non-uniformity of the B1+ field, which we did not correct in any way. However, this does not affect the error metrics, since the reference contains the same error.

6. Conclusions

The performance of the tested models varied with the dataset and error metric used, but the differences between models were not necessarily large. By T1 map nRMSE and SSIM, the best-performing model was sTV + LLR, with sTV + cTV generally a close second. The latter model might even be preferred due to its much shorter computation time. The Huber model remedied the staircasing artifact at the cost of increased T1 map noise, and its overall performance was inferior to that of sTV + cTV. The significantly faster convergence of the preconditioned PDPS algorithm with 3D radial data was also verified.

Author Contributions

Conceptualization, A.P., M.H., O.N., V.K. and M.J.N.; methodology, A.P.; software, A.P. and M.H.; validation, A.P.; formal analysis, A.P.; investigation, A.P.; resources, M.J.N.; data curation, N.H. and M.J.N.; writing—original draft preparation, A.P.; writing—review and editing, A.P., M.H., N.H., O.N., V.K. and M.J.N.; visualization, A.P.; supervision, O.N., V.K. and M.J.N.; project administration, O.N., V.K. and M.J.N.; funding acquisition, A.P., M.H., V.K. and M.J.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Academy of Finland (grants #285909 and #325146); Academy of Finland, Finnish Centre of Excellence of Inverse Modelling and Imaging (grants #336791 and #353084); Finnish Cultural Foundation, North Savo Regional Fund (grant #65211960); Instrumentarium Science Foundation (#12-4972-46); Päivikki and Sakari Sohlberg Foundation; Olvi Foundation; Jane and Aatos Erkko Foundation; and Doctoral Programme in Science, Technology and Computing of University of Eastern Finland. Part of the work was carried out with the support of Kuopio Biomedical Imaging Unit, University of Eastern Finland, Kuopio, Finland (part of Finnish Biomedical Imaging Node, EuroBioImaging).

Institutional Review Board Statement

The animal experiment was approved by the Animal Health Welfare and Ethics Committee of the University of Eastern Finland (ESAVI/270/04.10.07/2017).

Informed Consent Statement

Not applicable.

Data Availability Statement

In the spirit of open and reproducible research, both datasets, together with the Python codes for image reconstruction and the Matlab codes for signal fitting, are available from Zenodo (https://doi.org/10.5281/zenodo.8177285).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

The pseudocodes for the reconstruction algorithms are detailed below (Algorithms A1–A4). Here, all the necessary proximal mappings have been written out explicitly. The gradient operator ∇ and its transpose are related by ∇ᵀ = −div, with the subscript denoting the dimensionality of the operation. The operator |x| denotes the pointwise magnitude of a vector-valued image x. The operator bSVT_α(x) applies singular value soft thresholding to the block Casorati matrices extracted from the image series.
Algorithm A1. Preconditioned primal-dual proximal splitting for the TV model of Equation (4).
Choose θ ∈ [0, 1] and σ, τ such that στ < 1/‖KᵀPK‖₂
while not converged do
   p^{k+1} ← (p^k + Pσ(Aū^k − m)) / (1 + Pσ)
   q^{k+1} ← α(q^k + σ∇_s ū^k) / max(α, |q^k + σ∇_s ū^k|)
   r^{k+1} ← β(r^k + σ∇_c ū^k) / max(β, |r^k + σ∇_c ū^k|)
   u^{k+1} ← u^k − τAᵀp^{k+1} + τ div_s q^{k+1} + τ div_c r^{k+1}
   ū^{k+1} ← u^{k+1} + θ(u^{k+1} − u^k)
end while
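The q- and r-updates of Algorithm A1 are pointwise projections of the dual variables onto a ball of radius α (or β). A minimal NumPy sketch of this projection follows; the function and variable names are illustrative, not taken from the released code:

```python
import numpy as np

def prox_dual_tv(q, alpha):
    """Project a dual variable onto the pointwise ball of radius alpha.

    q has shape (n_components, ...): the leading axis holds the
    finite-difference components, so |q| is the pointwise vector
    magnitude. Implements q <- alpha * q / max(alpha, |q|).
    """
    mag = np.sqrt(np.sum(np.abs(q) ** 2, axis=0, keepdims=True))
    return alpha * q / np.maximum(alpha, mag)
```

Where the pointwise magnitude already lies within the ball, the projection leaves the dual variable unchanged, which is why small gradients are not penalized beyond the TV weight.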
Algorithm A2. Preconditioned primal-dual proximal splitting for the Huber model of Equation (6).
Choose θ ∈ [0, 1] and σ, τ such that στ < 1/‖KᵀPK‖₂
while not converged do
   p^{k+1} ← (p^k + Pσ(Aū^k − m)) / (1 + Pσ)
   q^{k+1} ← α((q^k + σ∇_s ū^k)/(1 + σγ₁/α)) / max(α, |(q^k + σ∇_s ū^k)/(1 + σγ₁/α)|)
   r^{k+1} ← β((r^k + σ∇_c ū^k)/(1 + σγ₂/β)) / max(β, |(r^k + σ∇_c ū^k)/(1 + σγ₂/β)|)
   u^{k+1} ← u^k − τAᵀp^{k+1} + τ div_s q^{k+1} + τ div_c r^{k+1}
   ū^{k+1} ← u^{k+1} + θ(u^{k+1} − u^k)
end while
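Compared with the TV model, the only change in the dual updates of Algorithm A2 is the initial division by (1 + σγ/α), which corresponds to the resolvent of the Huber-smoothed TV dual. A sketch under the same illustrative naming assumptions:

```python
import numpy as np

def prox_dual_huber(q, alpha, gamma, sigma):
    """Resolvent of the Huber dual: first scale by 1/(1 + sigma*gamma/alpha),
    then project onto the pointwise ball of radius alpha. With gamma = 0
    this reduces to the plain TV dual projection."""
    qt = q / (1.0 + sigma * gamma / alpha)
    mag = np.sqrt(np.sum(np.abs(qt) ** 2, axis=0, keepdims=True))
    return alpha * qt / np.maximum(alpha, mag)
```

The scaling step is what smooths the corner of the absolute value at the origin, which is the mechanism that alleviates the staircasing artifact discussed above.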
Algorithm A3. Preconditioned primal-dual proximal splitting for the locally low-rank model of Equation (7).
Choose θ ∈ [0, 1] and σ, τ such that στ < 1/‖KᵀPK‖₂
while not converged do
   p^{k+1} ← (p^k + Pσ(Aū^k − m)) / (1 + Pσ)
   u^{k+1} ← bSVT_α(u^k − τAᵀp^{k+1})
   ū^{k+1} ← u^{k+1} + θ(u^{k+1} − u^k)
end while
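The bSVT_α operator of Algorithm A3 can be sketched as follows: extract non-overlapping spatial blocks, form the Casorati matrix (contrast dimension by voxels in the block), and soft-threshold its singular values. The block size and the 2D layout here are illustrative simplifications of the 3D case in the paper:

```python
import numpy as np

def block_svt(u, alpha, block=8):
    """Singular value soft thresholding on block Casorati matrices.

    u: array of shape (n_contrast, nx, ny). Each non-overlapping spatial
    block is reshaped into a Casorati matrix, its singular values are
    shrunk by alpha, and the block is rebuilt from the thresholded factors.
    """
    nc, nx, ny = u.shape
    out = np.empty_like(u)
    for i in range(0, nx, block):
        for j in range(0, ny, block):
            blk = u[:, i:i + block, j:j + block]
            C = blk.reshape(nc, -1)                       # Casorati matrix
            U, s, Vh = np.linalg.svd(C, full_matrices=False)
            s = np.maximum(s - alpha, 0.0)                # soft threshold
            out[:, i:i + block, j:j + block] = ((U * s) @ Vh).reshape(blk.shape)
    return out
```

The per-block SVDs in this loop are also the reason the LLR-based models were the slowest in our timing comparison.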
Algorithm A4. Preconditioned primal-dual proximal splitting for the spatial total variation combined with locally low-rank model of Equation (8).
Choose θ ∈ [0, 1] and σ, τ such that στ < 1/‖KᵀPK‖₂
while not converged do
   p^{k+1} ← (p^k + Pσ(Aū^k − m)) / (1 + Pσ)
   q^{k+1} ← α(q^k + σ∇_s ū^k) / max(α, |q^k + σ∇_s ū^k|)
   u^{k+1} ← bSVT_β(u^k − τAᵀp^{k+1} + τ div_s q^{k+1})
   ū^{k+1} ← u^{k+1} + θ(u^{k+1} − u^k)
end while

References

  1. Donoho, D.L. Compressed Sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  2. Candès, E.J.; Romberg, J.; Tao, T. Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef] [Green Version]
  3. Lustig, M.; Donoho, D.; Pauly, J.M. Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging. Magn. Reson. Med. 2007, 58, 1182–1195. [Google Scholar] [CrossRef] [PubMed]
  4. Lustig, M.; Donoho, D. Compressed Sensing MRI. IEEE Signal Process Mag. 2008, 25, 72–82. [Google Scholar] [CrossRef]
  5. Huang, C.; Graff, C.G.; Clarkson, E.W.; Bilgin, A.; Altbach, M.I. T2 Mapping from Highly Undersampled Data by Reconstruction of Principal Component Coefficient Maps Using Compressed Sensing. Magn. Reson. Med. 2012, 67, 1355–1366. [Google Scholar] [CrossRef] [Green Version]
  6. Doneva, M.; Börnert, P.; Eggers, H.; Stehning, C.; Sénégas, J.; Mertins, A. Compressed Sensing Reconstruction for Magnetic Resonance Parameter Mapping. Magn. Reson. Med. 2010, 64, 1114–1120. [Google Scholar] [CrossRef]
  7. Velikina, J.V.; Alexander, A.L.; Samsonov, A. Accelerating MR Parameter Mapping Using Sparsity-Promoting Regularization in Parametric Dimension. Magn. Reson. Med. 2013, 70, 1263–1273. [Google Scholar] [CrossRef] [Green Version]
  8. Tamir, J.I.; Ong, F.; Anand, S.; Karasan, E.; Wang, K.; Lustig, M. Computational MRI with Physics-Based Constraints: Application to Multicontrast and Quantitative Imaging. IEEE Signal Process Mag. 2020, 37, 94–104. [Google Scholar] [CrossRef]
  9. Zhang, T.; Pauly, J.M.; Levesque, I.R. Accelerating Parameter Mapping with a Locally Low Rank Constraint. Magn. Reson. Med. 2015, 73, 655–661. [Google Scholar] [CrossRef] [Green Version]
  10. Hanhela, M.; Paajanen, A.; Nissi, M.J.; Kolehmainen, V. Embedded Quantitative MRI T1ρ Mapping Using Non-Linear Primal-Dual Proximal Splitting. J. Imaging 2022, 8, 157. [Google Scholar] [CrossRef]
  11. Lazarus, C.; Weiss, P.; El Gueddari, L.; Mauconduit, F.; Massire, A.; Ripart, M.; Vignaud, A.; Ciuciu, P. 3D Variable-Density SPARKLING Trajectories for High-Resolution T2*-Weighted Magnetic Resonance Imaging. NMR Biomed. 2020, 33, e4349. [Google Scholar] [CrossRef]
  12. Fessler, J.A.; Sutton, B.P. Nonuniform Fast Fourier Transforms Using Min-Max Interpolation. IEEE Trans. Signal Process. 2003, 51, 560–574. [Google Scholar] [CrossRef] [Green Version]
  13. Ong, F.; Uecker, M.; Lustig, M. Accelerating Non-Cartesian MRI Reconstruction Convergence Using k-Space Preconditioning. IEEE Trans. Med. Imaging 2020, 39, 1646–1654. [Google Scholar] [CrossRef]
  14. Alamidi, D.F.; Smailagic, A.; Bidar, A.W.; Parker, N.S.; Olsson, M.; Hockings, P.D.; Lagerstrand, K.M.; Olsson, L.E. Variable Flip Angle 3D Ultrashort Echo Time (UTE) T1 Mapping of Mouse Lung: A Repeatability Assessment. J. Magn. Reson. Imaging 2018, 48, 846–852. [Google Scholar] [CrossRef] [PubMed]
  15. Everett, R.J.; Stirrat, C.G.; Semple, S.I.R.; Newby, D.E.; Dweck, M.R.; Mirsadraee, S. Assessment of Myocardial Fibrosis with T1 Mapping MRI. Clin. Radiol. 2016, 71, 768–778. [Google Scholar] [CrossRef] [PubMed]
  16. Zhang, L.; Armstrong, T.; Li, X.; Wu, H.H. A Variable Flip Angle Golden-Angle-Ordered 3D Stack-of-Radial MRI Technique for Simultaneous Proton Resonant Frequency Shift and T1-Based Thermometry. Magn. Reson. Med. 2019, 82, 2062–2076. [Google Scholar] [CrossRef]
  17. Tamada, D.; Wakayama, T.; Onishi, H.; Motosugi, U. Multiparameter Estimation Using Multi-Echo Spoiled Gradient Echo with Variable Flip Angles and Multicontrast Compressed Sensing. Magn. Reson. Med. 2018, 80, 1546–1555. [Google Scholar] [CrossRef]
  18. Chambolle, A.; Pock, T. A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging. J. Math. Imaging Vis. 2011, 40, 120–145. [Google Scholar] [CrossRef] [Green Version]
  19. Pock, T.; Chambolle, A. Diagonal Preconditioning for First Order Primal-Dual Algorithms in Convex Optimization. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 1762–1769. [Google Scholar]
  20. Sidky, E.Y.; Jorgensen, J.H.; Pan, X. Convex Optimization Problem Prototyping for Image Reconstruction in Computed Tomography with the Chambolle-Pock Algorithm. Phys. Med. Biol. 2012, 57, 3065–3091. [Google Scholar] [CrossRef]
  21. Hanhela, M.; Kettunen, M.; Gröhn, O.; Vauhkonen, M.; Kolehmainen, V. Temporal Huber Regularization for DCE-MRI. J. Math. Imaging Vis. 2020, 62, 1334–1346. [Google Scholar] [CrossRef]
  22. Trzasko, J.; Manduca, A. Local versus Global Low-Rank Promotion in Dynamic MRI Series Reconstruction. In Proceedings of the 19th Annual Meeting of ISMRM, Montreal, QC, Canada, 29 April 2011; p. 4371. [Google Scholar]
  23. Zhang, T.; Cheng, J.Y.; Potnick, A.G.; Barth, R.A.; Alley, M.T.; Uecker, M.; Lustig, M.; Pauly, J.M.; Vasanawala, S.S. Fast Pediatric 3D Free-Breathing Abdominal Dynamic Contrast Enhanced MRI with High Spatiotemporal Resolution. J. Magn. Reson. Imaging 2015, 41, 460–473. [Google Scholar] [CrossRef] [Green Version]
  24. Candès, E.J.; Recht, B. Exact Matrix Completion via Convex Optimization. Found. Comput. Math. 2009, 9, 717–772. [Google Scholar] [CrossRef] [Green Version]
  25. Tamir, J.I.; Uecker, M.; Chen, W.; Lai, P.; Alley, M.T.; Vasanawala, S.S.; Lustig, M. T2 Shuffling: Sharp, Multicontrast, Volumetric Fast Spin-Echo Imaging. Magn. Reson. Med. 2017, 77, 180–195. [Google Scholar] [CrossRef] [Green Version]
  26. Lingala, S.G.; Hu, Y.; Dibella, E.; Jacob, M. Accelerated Dynamic MRI Exploiting Sparsity and Low-Rank Structure: K-t SLR. IEEE Trans. Med. Imaging 2011, 30, 1042–1054. [Google Scholar] [CrossRef] [Green Version]
  27. Zhao, B.; Haldar, J.P.; Christodoulou, A.G.; Liang, Z.P. Image Reconstruction from Highly under Sampled (k, t)-Space Data with Joint Partial Separability and Sparsity Constraints. IEEE Trans. Med. Imaging 2012, 31, 1809–1820. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Feng, L. 4D Golden-Angle Radial MRI at Subsecond Temporal Resolution. NMR Biomed. 2023, 36, e4844. [Google Scholar] [CrossRef]
  29. Wang, H.Z.; Riederer, S.J.; Lee, J.N. Optimizing the Precision in T1 Relaxation Estimation Using Limited Flip Angles. Magn. Reson. Med. 1987, 5, 399–416. [Google Scholar] [CrossRef] [PubMed]
  30. Deoni, S.C.L.; Peters, T.M.; Rutt, B.K. Determination of Optimal Angles for Variable Nutation Proton Magnetic Spin-Lattice, T1, and Spin-Spin, T2, Relaxation Times Measurement. Magn. Reson. Med. 2004, 51, 194–199. [Google Scholar] [CrossRef] [PubMed]
  31. Wang, L.; Corum, C.A.; Idiyatullin, D.; Garwood, M.; Zhao, Q. T1 Estimation for Aqueous Iron Oxide Nanoparticle Suspensions Using a Variable Flip Angle SWIFT Sequence. Magn. Reson. Med. 2013, 70, 341–347. [Google Scholar] [CrossRef] [Green Version]
  32. Gupta, R.K. A New Look at the Method of Variable Nutation Angle for the Measurement of Spin-Lattice Relaxation Times Using Fourier Transform NMR. J. Magn. Reson. 1977, 25, 231–235. [Google Scholar] [CrossRef]
  33. Idiyatullin, D.; Corum, C.A.; Garwood, M. Multi-Band-SWIFT. J. Magn. Reson. 2015, 251, 19–25. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Hänninen, N.E.; Liimatainen, T.; Hanni, M.; Gröhn, O.; Nieminen, M.T.; Nissi, M.J. Relaxation Anisotropy of Quantitative MRI Parameters in Biological Tissues. Sci. Rep. 2022, 12, 12155. [Google Scholar] [CrossRef] [PubMed]
  35. Schabel, M. 3D Shepp-Logan Phantom. Available online: https://www.mathworks.com/matlabcentral/fileexchange/9416-3d-shepp-logan-phantom (accessed on 1 October 2022).
  36. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  37. Zibetti, M.V.W.; Sharafi, A.; Otazo, R.; Regatte, R.R. Accelerated Mono- and Biexponential 3D-T1ρ Relaxation Mapping of Knee Cartilage Using Golden Angle Radial Acquisitions and Compressed Sensing. Magn. Reson. Med. 2019, 83, 1291–1309. [Google Scholar] [CrossRef] [PubMed]
  38. Knoll, F.; Bredies, K.; Pock, T.; Stollberger, R. Second Order Total Generalized Variation (TGV) for MRI. Magn. Reson. Med. 2011, 65, 480–491. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Varela-Mattatall, G.; Baron, C.A.; Menon, R.S. Automatic Determination of the Regularization Weighting for Wavelet-Based Compressed Sensing MRI Reconstructions. Magn. Reson. Med. 2021, 86, 1403–1419. [Google Scholar] [CrossRef] [PubMed]
  40. Hanhela, M.; Gröhn, O.; Kettunen, M.; Niinimäki, K.; Vauhkonen, M.; Kolehmainen, V. Data-Driven Regularization Parameter Selection in Dynamic MRI. J. Imaging 2021, 7, 38. [Google Scholar] [CrossRef]
Figure 1. The simulated phantom.
Figure 2. Convergence of Equation (3) with and without utilizing the preconditioner [13] in the reconstruction. The preconditioned PDPS algorithm converges significantly faster.
Figure 3. The estimated T1 (a) and S0 (b) maps for the multiple tested acceleration factors (AF) for the simulated phantom. The lower left corner of each image shows the SSIM (top) and nRMSE (bottom) values of the respective reconstruction.
Figure 4. The T1 and S0 map error metrics as a function of AF for the simulated phantom.
Figure 5. The estimated T1 (a) and S0 (b) maps for the multiple tested acceleration factors (AF) for the ex vivo acquisition. The lower left corner of each image shows the SSIM (top) and nRMSE (bottom) values of the respective reconstruction.
Figure 6. The T1 and S0 map error metrics as a function of AF for the ex vivo acquisition.

Share and Cite

MDPI and ACS Style

Paajanen, A.; Hanhela, M.; Hänninen, N.; Nykänen, O.; Kolehmainen, V.; Nissi, M.J. Fast Compressed Sensing of 3D Radial T1 Mapping with Different Sparse and Low-Rank Models. J. Imaging 2023, 9, 151. https://doi.org/10.3390/jimaging9080151
