Review

Algorithms in Tomography and Related Inverse Problems—A Review

by Styliani Tassiopoulou, Georgia Koukiou and Vassilis Anastassopoulos *
Electronics Laboratory, Physics Department, University of Patras, Rio, 26504 Patras, Greece
* Author to whom correspondence should be addressed.
Algorithms 2024, 17(2), 71; https://doi.org/10.3390/a17020071
Submission received: 9 December 2023 / Revised: 15 January 2024 / Accepted: 30 January 2024 / Published: 5 February 2024
(This article belongs to the Collection Featured Reviews of Algorithms)

Abstract: In the ever-evolving landscape of tomographic imaging algorithms, this literature review explores a diverse array of themes shaping the field’s progress. It encompasses foundational principles, special innovative approaches, tomographic implementation algorithms, and applications of tomography in medicine, natural sciences, remote sensing, and seismology. This selection is intended to showcase both the diversity of tomographic applications and the trends that have emerged in tomography in recent years. Accordingly, the evaluation of backprojection methods for breast tomographic reconstruction is highlighted. After that, multi-slice fusion takes center stage, promising real-time insights into dynamic processes and advanced diagnosis. Computational efficiency, especially in methods for accelerating tomographic reconstruction algorithms on commodity PC graphics hardware, is also presented. In geophysics, a deep learning-based approach to ground-penetrating radar (GPR) data inversion propels us into the future of geological and environmental sciences. We then venture into Earth sciences with global seismic tomography, where advanced inverse problem solutions are used to understand the Earth’s subsurface and push the boundaries of the field. Lastly, optical coherence tomography is reviewed in its basic applications for revealing fine biological tissue structures. This review presents the main categories of applications of tomography, providing deep insight into the methods and algorithms developed so far, so that readers who wish to engage with the subject are fully informed.

1. Introduction

The material in this review has been classified into four general categories covering research in tomography and related inverse problems, as well as their applications in various fields. The respective categories addressed herein are the following: tomographic reconstruction techniques, principles, and new approaches; special topics in tomography; tomographic implementation algorithms; and tomography in natural sciences. The more recent and updated literature is extensively investigated in the following four subsections. In the subsequent sections, selected representative applications are briefly presented.

1.1. Tomographic Reconstruction Techniques, Principles, and New Approaches

In the realm of tomographic reconstruction, the journey begins with a fundamental categorization of algorithms, as detailed by Gordon and Herman [1]. This classification neatly segments the methods for reconstructing objects from their projections into four distinct categories. These categories include the summation technique, the Fourier transform utilization, solving integral equations analytically, and employing series expansion methodologies. This initial categorization not only provides a structured approach to the field but also offers a basis for understanding the diversity of techniques at play.
As we delve into this domain, Colsher [2] provides a pivotal study in which four key algorithms are adapted to directly reconstruct three-dimensional objects from projections. This transformative step showcases the practical application of these methods. Among the selected algorithms, the Algebraic Reconstruction Technique, the Simultaneous Iterative Reconstruction Technique, and the Iterative Least Squares Technique are featured. This comprehensive approach exemplifies the potential for substantial streamlining of calculations and introduces the concept of tomographic projections. Diving further into the finer distinctions of tomographic reconstruction, Clackdoyle and Defrise [3] distinguish between the 2D and 3D reconstruction problems. To comprehend the foundations of computed tomography image reconstruction, Hornegger, Maier, and Kowarschik provide essential insights in their work [4]. These foundational principles are paramount for understanding the core concepts and techniques applied in the field. In the realm of classical tomography, the 2D problem is prevalent, but as we move into 3D, considerations expand to density functions and lines with arbitrary orientations in space. Building upon this knowledge, the research conducted by Khan, Yasin et al. [5] offers an overview of cutting-edge 3D modeling algorithms developed over the past four decades. This rich resource equips researchers and practitioners with the latest advancements in tomographic reconstruction techniques. This differentiation is crucial, as it influences the choice of reconstruction methods in various applications.
The synergy of technological innovation and practical application is exemplified by Goshtasby and Turner [6]. This reference introduces an automated technique capable of converting a series of tomographic image slices into an isotropic volume dataset. By establishing correspondence between points in consecutive slices and employing linear interpolation, this method offers a substantial enhancement in data accessibility. As reported by Fessler [7], the research delves into algorithms tailored for reconstructing attenuation images from transmission scans characterized by a low count of photons per beam. In situations where the average number of photons per beam is modest enough to warrant caution when employing traditional filtered backprojection imaging techniques, these algorithms offer a critical solution. The ongoing evolution underscores the dynamic nature of tomographic image reconstruction. As reported by Yu and Fessler [8], their research marks a critical advancement by integrating nonlocal boundary information into the regularization approach. This incorporation demonstrates the growing importance of statistical methods in tomographic image reconstruction. Beyond mere system modeling, these methods offer statistical models and adherence to physical constraints that surpass the traditional filtered backprojection method. Addressing specific challenges, Chandra et al. [9] introduce a swift and precise approach to tackle circular artifacts that often stem from missing segments of the Discrete Fourier Transform space. This method refines the precision and efficiency of image reconstruction, contributing to the overall quality of results. The approach is rooted in the precise partitioning of Discrete Fourier Transform space under the projective Discrete Radon Transform. To further augment image quality and move beyond the confines of conventional backprojection methods, Zhou, Lu et al. [10] present two innovative backprojection variants: the α-trimmed backprojection and the principal component analysis-based backprojection. These variants offer the promise of superior image quality, which is a critical factor in numerous tomographic applications. As we advance in our exploration, Chetih and Messali [11] adopt a comprehensive methodology by implementing both the Algebraic Reconstruction Technique and Filtered Backprojection methods. The study then rigorously compares the ensuing experimental results using performance metrics across various test cases. Such comparisons are instrumental in assisting researchers in selecting the most suitable method for their specific scenarios.
As we delve into the field of computed tomography, Somigliana, Zonca et al. [12] focus on the correlation between the thickness of acquired computed tomography slices and the accuracy of three-dimensional volume reconstruction. This correlation is pivotal for radiation therapy planning and disease diagnosis. On the theoretical front, Gourion and Noll [13] explore the theoretical framework of emission-computed tomography. The article delves into novel numerical approaches based on regularization methods. Understanding the theoretical underpinnings of the image reconstruction process is instrumental in advancing the field. In a noteworthy departure from conventional approaches, Petersilka, Bruder et al. [14] present a pioneering system concept and design for a computed tomography scanner. This innovative design, featuring two X-ray tubes and two detectors, holds the potential to surmount the limitations of traditional multi-detector row computed tomography. Of particular significance are the enhanced temporal resolution for cardiac imaging and the ability to acquire dual-energy information by operating the two tubes at varying voltages. In the pursuit of innovation, Saha, Tahtali et al. [15] center on an innovative computed tomography acquisition method. This method enables simultaneous projection captures, potentially offering exceptionally rapid scans and reductions in radiation doses. This innovation exemplifies the ongoing efforts to optimize technology in the field. Continuing the pursuit of efficiency, Miqueles, Koshev et al. [16] introduce a novel rapid backprojection operator for processing tomographic data. This algorithm offers a cost-effective solution and is compared against other swift transformation techniques using extensive real and simulated datasets. Peering into the future, as outlined by Willemink and Noël [17], forthcoming advances are anticipated in both hardware and software. Innovations like photon-counting computed tomography and the integration of artificial intelligence are poised to reshape the field. Returning to the practical realm, Wang, Ye et al. [18] emphasize the real-world impact of tomographic reconstruction, particularly in the realm of medical imaging. This reference provides a broader context, shedding light on representative outcomes and the pressing concerns that demand attention in the field.
Beyond algorithmic aspects, the technological side of tomographic imaging is illuminated by Jung [19]. This article presents an assessment of the fundamental physical principles and technical facets of the computed tomography scanner. It encompasses noteworthy advancements in computed tomography technology, positioning the field at the forefront of diagnostic and research applications. In the paper authored by Withers, Bouman et al. [20], the authors delve into the fundamental tenets of computed tomography, offering insights into the methodologies for acquiring computed tomography scans. These methods employ X-ray tubes and synchrotron sources, as well as various feasible contrast modes. Such a thorough understanding of the technology underpinning tomographic imaging is vital for researchers and clinicians alike. Understanding the impact of acquisition parameters on reconstruction quality is essential. The significance of post-processing techniques comes to the forefront in the work by Seletci and Duliu [21], which underscores the importance of employing software tools like Adobe Photoshop, ImageJ, Corel PHOTO-PAINT, and Origin to enhance the quality of images for quantitative analysis. Such post-processing techniques are instrumental in distinguishing between various diseases and disorders. As outlined by Mia, Förster et al. [22], the research introduces the concept of equally inclined tomography, an advanced method for reconstructing three-dimensional objects from multiple two-dimensional projections. This innovative approach supersedes traditional tomography, which relies on equally angled two-dimensional projections. The result is a significant enhancement in three-dimensional reconstruction quality.
Meanwhile, Whiteley, Luk et al. [23] take a pioneering step in the realm of neural network design for positron emission tomography. The DirectPET neural network is proficient in reconstructing multi-domain image volumes from sinograms, underlining the growing role of artificial intelligence in the field. As reported by Lee, Choi et al. [24], the research sets the stage for a deep learning revolution in tomographic imaging. The primary objective of the study is to attain high-quality three-dimensional reconstructed images in the context of sparse sampling conditions. Deep learning methods promise to revolutionize accuracy and efficiency in the field. In a fusion of innovation and network architecture, Zhou, Kevin Zhou et al. [25] introduce a cascaded residual dense spatial-channel attention network. This network aims to reconstruct tomographic images from a limited number of projection views, amplifying the power of deep learning and data fidelity layers. For scenarios with limited data, Luther and Seung [26] present a direct approach for limited-angle tomographic reconstruction, employing convolutional networks. The network training process involves minimizing the mean squared error between the network-generated reconstructions and a ground truth three-dimensional volume. These works guide us from fundamental principles to cutting-edge innovations, all emphasizing the real-world impact in medical, industrial, and scientific applications. All of these underscore the dynamic evolution of the field as it adapts to emerging technologies, harnesses the power of artificial intelligence, and continually strives for higher quality and efficiency. In this ever-advancing discipline, these references serve as beacons, illuminating the path for researchers and practitioners, ensuring that they remain at the forefront of tomographic imaging and delivering high-quality solutions for an array of applications.

1.2. Special Topics in Tomography

In this subsection, we embark on an exploration of the particulars of advanced imaging techniques. This comprehensive overview explains pivotal areas within the field of tomography, uncovering the latest innovations and theoretical considerations. From multi-slice computed tomography to Super-Resolution Reconstruction in magnetic resonance imaging, we delve into cutting-edge technology and novel algorithms. As we navigate this field, we also examine the creation of detailed 3D phantoms for various imaging applications, ultimately emphasizing the relevance of these ‘Special Topics in Tomography’ across the spectrum of medical and scientific research. In the domain of medical imaging, this review explores several key references that collectively highlight significant challenges, advancements, and interconnected issues in the field.
The introduction of multi-slice CT scanners, as detailed by Hu [27], represents a significant leap forward in the world of CT technology. These advanced scanners enable high-resolution imaging of extensive longitudinal volumes while introducing unique challenges. As observed by Dawson and Lees [28], multi-slice systems are placed in the broader context of CT technology, shedding light on their origins and enduring relevance. These works collectively address challenges and innovations in CT scanning, offering insights into the technology’s evolution. As reported by Majee, Balke et al. [29], a pioneering algorithm known as “multi-slice fusion” is introduced. This approach combines various denoising techniques within low-dimensional spaces and finds applications in 4D cone beam X-ray CT reconstruction.
Singh, Kalra et al. [30] conducted a comparative analysis of image quality in abdominal CT images, considering different X-ray tube current–time products and reconstruction techniques. Collectively, these studies point to the need for improved reconstruction techniques and image quality, a need that extends to MRI as well. As examined by Aibinu, Salami et al. [31], the tutorial places significant emphasis on three key aspects related to the utilization of Inverse Fast Fourier Transformation in Magnetic Resonance Image reconstruction. Furthermore, it delivers a succinct introduction to the fundamentals of Magnetic Resonance Image physics, the instrumental perspective of Magnetic Resonance Image systems, K-space signal processing, and the procedures involved in Inverse Direct Fourier Transformation and Inverse Fast Fourier Transformation for one-dimensional (1D) and two-dimensional (2D) data. Super-resolution imaging in MRI is explored in [32,33]. The authors Plenge, Poot et al. [32] introduce an innovative method for Super-Resolution Reconstruction in MRI, leveraging deep learning techniques, specifically a three-dimensional convolutional neural network. This technique harnesses high-resolution content in 2D slices to reconstruct high-resolution 3D images. The field of magnetic resonance imaging (MRI) reconstruction sees noteworthy advancements as well. Zhang, Shinomiya, and Yoshida [33] advocate the use of two-dimensional super-resolution technology to enhance the resolution of MRI, further enhancing the quality of MRI images.
The development of detailed phantoms is also a crucial aspect of medical imaging research. As outlined by Hoffman, Cutler et al., the study in [34] describes the creation of a three-dimensional brain phantom for simulating studies related to cerebral blood flow and metabolism in positron emission tomography. Additionally, Collins, Zijdenbos et al. [35] outline the construction of a digital volumetric phantom of the human brain, offering valuable tools for simulating head tomographic images. Glick and Ikejimba [36] provide an overview of research efforts aimed at developing digital and physical breast phantoms to advance breast imaging studies. All of these collectively address the need for realistic phantoms for various imaging modalities. The study of Klingenbeck-Regn, Schaller et al. [37] delves into the theoretical aspects of multi-slice scanners, with a focus on detector design and strategies for spiral interpolation. Moreover, it validates these theoretical constructs through phantom measurements. The authors Aibinu, Salami et al. [31] emphasize Inverse Fast Fourier Transformation in MRI, highlighting the significance of K-space signal processing. O’Connor, Das et al. [38] focus on the creation of high-resolution models for simulating three-dimensional breast imaging techniques, addressing the need for realistic breast tissue simulations.
In conclusion, the central issues explored across these references include enhancing image resolution, improving image quality, and providing tools for realistic simulations and studies. Together, these references form integral pieces of the puzzle, addressing interconnected issues across various imaging modalities in the field of medical imaging.

1.3. Tomographic Implementation Algorithms

In this subsection, we focus on the topics of image quality improvement and artifact reduction, elucidating techniques and strategies to increase the accuracy and fidelity of tomographic images. Dobbins and Godfrey [39] take us into the realm of tomosynthesis reconstruction algorithms. The discussion of residual blur minimization expands the dialogue that was initiated, emphasizing the practical challenge of improving image quality and accuracy, especially in 3D reconstruction. Goosens, Labate et al. [40] further investigate the challenge of region-of-interest computed tomography in the presence of measurement noise. They introduce a relaxation of data fidelity and consistency requirements, highlighting the complex nature of handling real-world imperfections in imaging processes. Su, Deng et al. [41] lead us to improve image quality and reduce artifacts through the deep learning process in breast tomosynthesis, illustrating the impact of state-of-the-art technology. This is in line with the theme of advancing tomographic reconstruction using modern computational techniques. Additionally, Quillent, Bismuth et al. [42] add deep learning to the discussion for mitigating sparse-view and limited-angle artifacts in digital breast tomosynthesis, highlighting the role of artificial intelligence in improving tomographic image quality. The pioneering approach presented by Lyu, Wu et al. [43] concerning metal artifact reduction emphasizes the critical practical aspect of image artifact reduction, further linking to the general issue of image quality improvement.
Referring to the optimization process, Abreu, Tyndall, and Ludlow [44] investigate the effect of projection geometry on caries detection. This is crucial as it addresses the real issue of optimizing image acquisition for specific diagnostic purposes, highlighting the importance of tailored imaging strategies. The authors Pekel, Lavilla et al. [45] lead us into the field of optimizing X-ray CT trajectories. Customizing imaging paths for specific samples addresses the practical challenge of efficient data acquisition and high-quality image production, underlining the need for precision in tomographic imaging. Moving on to broader and more contemporary approaches, Jin, McCann et al. [46] bridge the gap between iterative methods and deep learning, highlighting the potential of convolutional neural networks in dealing with ill-posed inverse problems. The regression approach discussed by Hou, Alansary et al. [47] demonstrates the integration of deep learning techniques with 3D spatial mapping, contributing to the multidimensional understanding of tomographic images. Finally, Morani and Unay [48] incorporate the current trend of image preprocessing and hyperparameter tuning using convolutional neural networks.
The use of floating-point GPUs for image reconstruction by Fang and Mueller [49] links technology to efficiency, demonstrating the importance of hardware developments in the field of tomography. Similarly, Wang, Zhang et al. [50] shed light on software solutions to streamline image reconstruction, highlighting the need for user-friendly tools that simplify the often-complex reconstruction process. The focus then shifts toward a hybrid gradient descent approach for region-of-interest CT. This methodology bridges the theoretical concepts, as reported by Pham, Yuan et al. [51], with practical applications, aiming to improve the accuracy and efficiency of tomographic reconstruction in specific regions of interest. Lyons, Raj, and Cheney [52] introduce innovative methodologies for linear inverse problems in tomography. These methods resonate with the need to develop robust algorithms for accurate image reconstruction, a common thread among the discussed references. In the realm of electrical impedance tomography, Goharian, Soleimani, and Moran [53] tackle the intricacies of image reconstruction, bringing forth the importance of regularization methods in dealing with ill-posed problems. The concept of the Radon Transformation and its application to electrical impedance tomographic images, as introduced by Hossain, Ambia et al. [54], contributes to our understanding of the theoretical principles underpinning tomographic imaging. With the introduction of innovative techniques to create volumetric models from fire images, as denoted by Ihrke and Magnor [55], we touch upon a unique aspect of tomography, highlighting its diverse applications.
As we delve into optical tomography, highlighted by Arridge [56], we gain insights into both forward and inverse problems. This comprehensive review classifies algorithms and sets the stage for future research directions, strengthening the foundation of optical tomography. The Fourier reconstruction method detailed by Zhang T., Zhang L. et al. [57], specifically tailored for symmetric geometry computed tomography, adds a layer of technical sophistication to our discussion, emphasizing the importance of innovative reconstruction techniques.
In conclusion, the broader picture that emerges is a dynamic field that adapts to emerging technologies and continually strives for higher quality and more efficient solutions. The integration of deep learning techniques underscores the growing role of artificial intelligence in reshaping the landscape of tomography. These references, when woven together, form a comprehensive narrative depicting the multifaceted nature of tomography.

1.4. Tomographic Imaging: From SAR, Geology, to Medical Advances

In this comprehensive subsection, we explore advanced tomographic imaging techniques across various disciplines. The study presented by the authors Reigber and Moreira [58] leads us into the sphere of synthetic aperture radar (SAR) tomography, an innovative achievement that utilizes phase differences for the assessment of terrain topography. It addresses a crucial issue, enhancing our ability to resolve complex layover cases in SAR images, especially in multi-baseline imaging geometries. Fornaro and Serafino [59] expand the understanding of spaceborne SAR tomography, underlining its ability to distinguish the scattering mechanisms within pixels. This progress in spaceborne SAR tomography is aligned with the need for improved clarity and image accuracy. The acquisition of images along a circular track, as analyzed by Oriot and Cantalloube [60], opens up the possibility of optimizing image processing at various azimuth angles. This approach is particularly beneficial in studies such as building extraction, highlighting the practical advantages of SAR data processing.
As we deepen our exploration, the authors Zhu and Bamler [61] focus on the concept of TomoSAR, pushing the limits of 3D imaging. They introduce us to the elevation aperture, a concept that enhances our ability to reconstruct reflectivity functions along the elevation direction. This echoes the theme of advanced imaging techniques. Sportouche, Tupin, and Denise [62] suggest a complete semi-automatic processing chain for the reconstruction of 3D urban buildings, incorporating high-resolution SAR and optical pairs. This integration addresses the practical challenge of reconstructing urban buildings and demonstrates the need for cross-sensor data fusion. The spectral analysis approach described by Zhu and Bamler [63] treats SAR tomographic inversion as a spectral estimation problem, emphasizing the role of super-resolving analysis in the monitoring of urban infrastructure. The ability to distinguish multiple sources of scattering is vital to urban studies, contributing to continuing research in the field. In addition, Zhu and Ge [64] emphasize the importance of integrating SAR data with optical images, exemplifying the power of combining different imaging modalities to create 3D information. This approach opens the doors to more complete and accurate 3D reconstruction.
Synthetic aperture radar data are integrated with optical imagery to generate 3D information using stereogrammetric methods, as described by Bagheri, Schmitt et al. [65]. The exploration of polarimetric SAR tomography (Pol-TomoSAR) performed by Budillon, Johnsy, and Schirinzi [66] demonstrates its potential in urban applications by resolving multiple scatterers within the same analysis cell. This innovative technique directly addresses the need for increased accuracy in complex urban environments. Continuing the review in the field of synthetic aperture radar, the authors Ren, Zhang et al. [67] introduce Aetomo-Net, a neural network that uses multidimensional features for SAR tomography. This network highlights the growing role of artificial intelligence in the reconstruction of the tomographic image. Completing the survey in this field, the work by Devaney [68] expands our exploration beyond SAR, offering insights into seismic exploration applications. It highlights the interdisciplinary nature of tomographic imaging, providing valuable information for seismic studies.
As we continue with an overview of global seismic tomography, Trampert [69] emphasizes the importance of quantitative interpretations in promoting the understanding of geodynamics. He emphasizes the transition from qualitative to quantitative approaches, reflecting the evolution of the field of tomography. Rector and Washbourne [70] introduce us to the utilization of cross-well seismic data, emphasizing the importance of the Fourier projection slice theorem and its role in characterizing the resolution and uniqueness of tomograms. This aligns with the theme of the theoretical foundations of tomographic imaging. Akin and Kovscek [71] discuss the critical role of X-ray computed tomography in the imaging of porosity, permeability, and fluid phase distribution in porous media. The importance of spatial resolution and adaptability in various flow conditions, connecting with the need for versatile imaging tools, is emphasized. The use of multistatic ground-penetrating radar signals, as analyzed by Worthmann, Chambers et al. [72], introduces a novel approach to tomographic imaging, particularly in the context of intensity distributions. This innovative approach highlights the need for adaptive imaging solutions.
Subsequently, in the field of interdisciplinary and theoretical tomography, as mentioned by Patella [73], a new interpretation of self-potential data highlights the need for innovative approaches to the interpretation of tomographic data. This is in line with the primary issue of pushing the boundaries of traditional imaging techniques. The introduction of 3DInvNet by Dai, Lee et al. [74] addresses the challenges of non-linearity and computational cost in 3D reconstruction algorithms. This innovative scheme demonstrates the evolving landscape of tomographic imaging techniques. Delving into the field of medical tomography, Goncharsky and Romanov [75] present efficient methods for ultrasound tomography with attenuation. This expands the horizons of tomographic imaging into the realm of medical diagnostics, emphasizing the role of sound wave attenuation in imaging. The application of ultrasound computed tomography in breast tissue imaging, as discussed by Martiartu, Boehm, and Fichtner [76], highlights the potential for quantitative 3D imaging. The introduction of finite-frequency travel-time tomography underscores the need for precision in medical tomographic applications. The exploration of non-interferometric three-dimensional refractive index tomography by Hauer, Haberfehlner et al. [77] highlights the application of tomographic imaging in the life sciences. This approach focuses on simplicity and robust imaging performance, emphasizing the need for adaptability.
The innovative deep prior diffraction tomography method introduced by Zhou and Horstmeyer [78] offers a high-resolution reconstruction of refractive indices within dense biological samples. It demonstrates the potential of unconventional imaging methods in life sciences. Webber [79] presents a fast method for reconstructing electron density in X-ray scanning applications. This approach aligns with the theme of efficient imaging solutions, particularly in scenarios dominated by Compton scattering. Finally, as highlighted by Yang, Zhang et al. [80], a multi-slice neural network with an optical structure presents the fusion of advanced technology with optical science. This innovative approach highlights the synergy between different disciplines.
Optical Coherence Tomography is a cutting-edge technology used for non-invasive cross-sectional imaging within biological systems. This method utilizes low-coherence interferometry to create a two-dimensional image that reveals the way light scatters from internal tissue microstructures, much like how ultrasound pulse-echo imaging works. Optical Coherence Tomography provides incredibly precise longitudinal and lateral spatial resolutions, down to just a few micrometers, and has the capability to detect extremely faint reflected signals, as minute as approximately one-tenth of a billionth of the incoming optical power [81,82,83,84].
In conclusion, whether in the domain of SAR, seismic exploration, medical diagnostics, or life sciences, these references underscore the dynamic nature of tomography and its ever-evolving role in diverse applications. The integration of advanced algorithms, artificial intelligence, and innovative methodologies reflects the continuous pursuit of higher image quality, precision, and efficiency.
In the rest of the sections, this review briefly presents representative tomographic reconstruction methods. These methods cover most of the disciplines in which current reconstruction approaches have been applied. In detail, the structure of the remaining sections is shown in Figure 1.

2. Evaluation of Backprojection Methods

It is universally accepted that mammography is the most efficacious tool for the early detection of breast cancer. With traditional mammography, the object is projected onto the detector or film to generate the 2D projection image of the breast. Superimposed objects on the projection images, caused by overlapped anatomical structures, bring limitations to mammography [85,86], such as 20% false-negative rates and high recall rates, which may result in unnecessary anxiety to the patients and increase medical costs. Compared to standard mammography, the digital breast tomosynthesis (DBT) technique may overcome the limitations by removing the ambiguities of overlapped tissues and providing 3D localization. Since 3D slice images of the breast can be partially reconstructed based on a few limited-angle projection images, DBT has the potential to help decrease recall rates, improve the accuracy of breast cancer detection, and, therefore, reduce the number of women who die from such cancer [86]. In the process of tomosynthesis, sequences of limited-angle 2D projection images are acquired first and then reconstructed into slice images of the breast. A few image reconstruction algorithms have been investigated by various research groups, including the backprojection (BP) reconstruction algorithm [87], filtered backprojection (FBP) algorithm [88], matrix inversion tomosynthesis (MITS) [89], maximum likelihood expectation maximization (MLEM) [90,91], simultaneous algebraic reconstruction techniques (SART) [92,93], etc.
The work in [10] focuses on the investigation of BP algorithms. Two BP variants, including α-trimmed BP and principal component analysis-based (PCA) BP, were proposed. Their performance in improving the conspicuity of lesions and suppressing noise was studied by computer simulations and phantom experiments. The shift-and-add (SAA) tomosynthesis reconstruction algorithm [87] reconstructs the plane at the specified height by lining up each projection image according to its relative shift amount. Objects at different locations above the detector will be projected onto the detector in positions depending on the relative locations of the objects. In order to reconstruct 3D slices of the breast, each projection image should be shifted by an amount appropriate for the plane of reconstruction. The shift amount can be calculated based on projected positions from the central points of each reconstruction plane. The shifted planes are added together to emphasize structures in the in-focus plane and blur out structures in other planes. Figure 2 shows a parallel tomosynthesis imaging geometry. The reconstructed plane S can be derived by taking the average of all the projection images that have undergone the necessary shifts [39,87].
The SAA algorithm facilitates the acquisition of 3D reconstructed slices. To enhance the reconstruction of a single pixel on a specific plane located at a certain height above the detector, it is imperative to calculate the shift amounts along both the x and y directions for each pixel on the reconstruction plane. This technique is commonly referred to as point-by-point backprojection [87]. Point-by-point backprojection involves the computation of shift values for every individual pixel position within each reconstructed plane, taking into account the 2D projection of the reconstructed objects within those planes. Figure 3 illustrates this process. The pixels resulting from the backprojection process provide estimations of the object’s internal structure. In the conventional BP algorithm, the final pixel value at point A is calculated as the mean of the backprojected pixels derived from all N projection images (where N denotes the number of projection images). To leverage the statistical properties inherent in these N values and thereby enhance image quality, two distinct variants have been introduced.
The α-trimmed BP technique involves the removal of extreme values within the backprojected pixels. This process entails sorting all pixel values present in the backprojection images, eliminating the lowest α/2 values and the highest α/2 values, and then computing the mean value.
The principal component analysis method is a sophisticated multivariate analysis technique rooted in the concept of eigenvectors. It offers a valuable orthogonal linear transformation that shifts from an initial n-dimensional coordinate system to a novel m-dimensional coordinate system (where m < n). In the implementation of PCA-based backprojection, a pivotal step involves computing the first principal eigenvectors. These eigenvectors are derived from a matrix composed of N backprojected pixel values. They serve as the foundational components used to generate the reconstructed image.
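To make the PCA-based fusion idea concrete, the following minimal NumPy sketch illustrates how the first principal eigenvector of the N backprojected values observed at every pixel position could be used to form a single fused image. It is an illustration only, with assumed array shapes and variable names, and is not the implementation used in [10].

import numpy as np

def pca_backprojection(backprojections):
    """Fuse N backprojected images (shape N x H x W) along the first
    principal component of the N values observed at each pixel position."""
    n, h, w = backprojections.shape
    # Each row is one pixel position; each column is one projection view.
    data = backprojections.reshape(n, -1).T            # (H*W, N)
    mean = data.mean(axis=0, keepdims=True)
    centered = data - mean
    # Covariance of the N backprojected values across all pixel positions.
    cov = centered.T @ centered / (data.shape[0] - 1)  # (N, N)
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = eigvecs[:, -1]                         # eigenvector of the largest eigenvalue
    # Project every pixel's N values onto the first principal direction.
    fused = centered @ principal + mean @ principal
    return fused.reshape(h, w)

# Example: fuse 15 simulated backprojected slices of size 64 x 64.
rng = np.random.default_rng(0)
stack = rng.normal(size=(15, 64, 64)) + 5.0
image = pca_backprojection(stack)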
In the following, the SAA tomosynthesis reconstruction algorithm is explained, and two variants of the BP technique are provided for enhancing the reconstruction of 3D slices. A summary of the key steps follows, and a small numerical sketch is given after the list:
Step 1.
SAA Tomosynthesis Reconstruction:
The SAA algorithm reconstructs a plane (Figure 2) at a specified height by aligning each projection image based on its relative shift amount.
The shift amount, $shift_i^S$, is calculated using the relationship $shift_i^S = shift^O = \frac{H}{SID - H} \cdot (R_x - O_x)$, where $H$ is the height, $SID$ is the source-to-image distance, $R_x$ is the pixel position on the detector, and $O_x$ is the central point of the reconstruction plane.
The reconstructed plane S is obtained by averaging all projection images that have undergone the necessary shifts.
Step 2.
Point-by-Point Backprojection:
To enhance the reconstruction of a single pixel on a specific plane, point-by-point backprojection is employed.
The shift amounts along both the x and y directions (Figure 3) for each pixel on the reconstruction plane are calculated using the relationships $B_x = R_x + \frac{R_z}{R_z - A_z} \cdot (A_x - R_x)$ and $B_y = R_y + \frac{R_z}{R_z - A_z} \cdot (A_y - R_y)$.
Backprojection provides estimations of the object’s internal structure.
Step 3.
Backprojection Variants. α-Trimmed BP Technique:
The technique involves removing extreme values within the backprojected pixels by sorting and eliminating the lowest and highest $d/2$ values.
The final pixel value is calculated as the mean of the remaining values: $S = \frac{1}{N - d} \sum_{i = d/2 + 1}^{N - d/2} I(B_i)$.
Parameter $d$ controls the degree of trimming, ranging from 0 to $N - 1$.
PCA-based BP utilizes PCA, a multivariate analysis technique, to transform the coordinate system for enhanced reconstruction. It involves computing the first principal eigenvectors from a matrix of backprojected pixel values.
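As a simple numerical illustration of Steps 1 and 3, the following NumPy sketch evaluates the plane-shift relation and forms an α-trimmed mean from the N backprojected values collected at one point. The geometry numbers, function names, and sample values are assumptions chosen purely for the example and do not come from [10].

import numpy as np

def plane_shift(H, SID, R_x, O_x):
    """Shift amount for the plane at height H above the detector (Step 1):
    shift = H / (SID - H) * (R_x - O_x)."""
    return H / (SID - H) * (R_x - O_x)

def alpha_trimmed_bp(pixel_values, d):
    """Alpha-trimmed backprojection of the N values collected at one point
    (Step 3): drop the d/2 smallest and d/2 largest values, average the rest."""
    values = np.sort(np.asarray(pixel_values, dtype=float))
    n = values.size
    lo, hi = d // 2, n - d // 2
    return values[lo:hi].mean()

# Example: shift for a plane 30 mm above the detector, SID = 650 mm,
# detector pixel position R_x = 40 mm, plane centre O_x = 0 mm.
print(plane_shift(H=30.0, SID=650.0, R_x=40.0, O_x=0.0))

# Example: conventional BP mean vs. alpha-trimmed mean over N = 11 views
# containing one outlier (e.g. an out-of-plane structure).
samples = [10, 11, 9, 10, 12, 11, 10, 9, 10, 11, 40]
print(np.mean(samples), alpha_trimmed_bp(samples, d=2))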
In summary, the 3D reconstructed images produced through the examined reconstruction techniques, including conventional BP, α-trimmed BP, and PCA-based BP, effectively unveiled masses and micro-calcifications. Among these methods, α-trimmed BP showcased superior noise reduction capabilities and proficiently addressed out-of-plane artifacts, thereby enhancing the visibility of in-plane objects. Additionally, it commendably preserved the contours of objects situated near the boundaries. When integrated with FBP as the backprojection step, α-trimmed BP demonstrated the potential to enhance the overall image quality of the reconstructed slices.

3. Multi-Slice Fusion in CT Reconstruction

The complexity of reconstruction problems has evolved beyond the conventional 2D and 3D spatial representations to tackle more intricate 4D and even 5D challenges involving the dimensions of space and time, as well as specific aspects like heart or respiratory phases [94,95,96,97,98,99,100]. This heightened dimensionality in reconstruction offers valuable opportunities to enhance the quality of reconstruction by capitalizing on the inherent patterns within this multidimensional space. In particular, for time-sensitive imaging applications, we can harness the regularity within the images to reconstruct each frame using fewer data points, thus elevating temporal resolution. Take, for example, the domain of 4D CT imaging, where notable contributions from references [95,101,102] have significantly enhanced temporal resolution by leveraging the inherent spatiotemporal regularities of the objects being imaged. These approaches rely on model-based iterative reconstruction (MBIR) techniques [103,104], which enforce regularity in the 4D data by incorporating straightforward spatiotemporal prior models. Furthermore, there has been a proposition to employ deep learning-based post-processing methods for 4D reconstruction, aiming to further enhance the quality of the reconstructed images [105].
Recent developments have demonstrated the significant potential of plug-and-play (PnP) prior frameworks, as documented in references [106,107,108,109], to significantly improve the quality of reconstructions. This is achieved by allowing state-of-the-art denoising techniques to be incorporated as prior models within model-based iterative reconstruction (MBIR). Consequently, PnP methods promise a remarkable improvement in reconstruction quality in the context of 4D CT imaging challenges. However, there is a notable limitation when it comes to applying state-of-the-art denoisers such as deep convolutional neural networks (CNN) and BM4D, as they are primarily designed for 2D and occasionally 3D images. Extending these techniques to higher dimensions, as discussed in references [100,110,111], presents significant computational and memory challenges. Specifically, adapting CNNs to 4D requires computationally intensive 4D convolutions applied to 5D feature tensor structures. Additionally, training PnP denoisers with 4D CNNs requires access to 4D ground truth data, which can be difficult or even impossible to obtain.
The novel 4D X-ray CT reconstruction algorithm introduced in [29] leverages multiple low-dimensional CNN denoisers to generate a highly efficient 4D prior model. The methodology, known as “multi-slice fusion”, seamlessly integrates these different low-dimensional priors using a multi-agent consensus equilibrium (MACE) technique [112]. A visual representation of the basic idea behind this approach is provided in Figure 4. Multi-slice fusion combines three distinct CNN denoisers, each specifically trained to remove additive white Gaussian noise from lower-dimensional slices (hyperplanes) of the 4D object. When the MACE process merges these denoisers, it does so while simultaneously enforcing the constraints imposed by each of them. As a result, the reconstructed images are forced to exhibit smoothness in all four dimensions. This approach produces excellent quality reconstructions and remains practical for training and computation even when dealing with high-dimensional reconstruction tasks. The solution for MACE can be computed using a variety of algorithms, as documented in references [106,107,113,114]. To implement multi-slice fusion, distributed heterogeneous clusters are used, where different agent updates are distributed to various cluster nodes. In particular, the computationally intensive cone beam inversion processes are distributed across multiple CPU nodes, while the CNN denoising calculations are simultaneously distributed across multiple GPU nodes. Experimental results demonstrate that multi-slice fusion is highly effective in significantly reducing artifacts and improving resolution compared to alternative reconstruction methods.
In the realm of 4D X-ray CT imaging, a dynamic object undergoes rotation, and multiple 2D projections (radiographs) of this object are captured at various angles. The core challenge lies in reconstructing the 4D array of X-ray attenuation coefficients using these measurements, where the four dimensions are allocated as follows: three for spatial dimensions and the fourth for time. Let $N_t$ denote the number of time points, $M_n$ the number of measurements at time point $n$, and $N_s$ the number of voxels in the 3D volume at each time point of the 4D volume. For each time point $n$ within the range $\{1, \dots, N_t\}$, $y_n \in \mathbb{R}^{M_n}$ encapsulates the sinogram measurements taken at time $n$, and $x_n \in \mathbb{R}^{N_s}$ serves as the vector representation of the 3D volume containing X-ray attenuation coefficients for that time point.
Combining all measurements yields a comprehensive measurement vector $y = [y_1^T, \dots, y_{N_t}^T]^T \in \mathbb{R}^M$. Here, $M$ represents the total number of measurements, which can be expressed as $M = \sum_{n=1}^{N_t} M_n$. Similarly, the 3D volumes at each time point can be stacked to form a vectorized 4D volume, denoted as $x = [x_1^T, \dots, x_{N_t}^T]^T \in \mathbb{R}^N$, where $N = N_t N_s$ represents the total number of voxels within the 4D volume. Recovering the 4D volume of attenuation coefficients $x$ from the series of sinogram measurements $y$ constitutes the 4D reconstruction problem.
The reconstruction is formulated using a Maximum A Posteriori (MAP) approach, incorporating a data fidelity term and a 4D regularizer.
Here is an outline of the algorithm steps:
Step 1.
Formulate the reconstruction problem using the MAP approach:
$$x^* = \arg\min_x \{ l(x) + \beta h(x) \}$$
Step 2.
Express the data fidelity term $l(x)$ as the weighted sum of squared differences between the sinogram measurements and the forward model, where $A_n$ is the forward projection operator at time point $n$:
$$l(x) = \frac{1}{2} \sum_{n=1}^{N_t} \| y_n - A_n x_n \|_{\Lambda_n}^2$$
Step 3.
Define the weight matrix $\Lambda_n = \mathrm{diag}\{ c\, e^{-y_n} \}$ to address non-uniform noise variance by approximating the underlying Poisson noise through a Gaussian approximation [95,115].
Step 4.
Formulate each $H_k: \mathbb{R}^N \to \mathbb{R}^N$ as the MAP estimator for a Gaussian denoising problem, where $h_k(x)$, $k = 1, \dots, K$, represents a prior model and $\sigma$ is the noise standard deviation:
$$H_k(x) = \arg\min_{z \in \mathbb{R}^N} \left\{ \frac{1}{2\sigma^2} \| x - z \|_2^2 + h_k(z) \right\}$$
Step 5.
Modify the optimization problem to incorporate K different regularizers, resulting in a consensus equilibrium formulation.
$$x^* = \arg\min_x \left\{ l(x) + \frac{\beta}{K} \sum_{k=1}^{K} h_k(x) \right\}$$
Step 6.
Define the proximal map $L(x) = \arg\min_{z \in \mathbb{R}^N} \left\{ l(z) + \frac{1}{2\sigma^2} \| x - z \|_2^2 \right\}$, mapping $\mathbb{R}^N \to \mathbb{R}^N$, together with the denoising operators $H_k: \mathbb{R}^N \to \mathbb{R}^N$ for each term in the optimization problem. Create a stacked operator $F(W)$ that maps from $\mathbb{R}^{(K+1)N}$ to $\mathbb{R}^{(K+1)N}$, where $W \in \mathbb{R}^{(K+1)N}$ represents the stacked variable:
$$F(W) = \begin{bmatrix} L(W_0) \\ H_1(W_1) \\ \vdots \\ H_K(W_K) \end{bmatrix}$$
Step 7.
Formulate the consensus equilibrium equation $F(W^*) = G(W^*)$, where $G$ is an averaging operator.
Step 8.
Derive the fixed-point relationship $(2G - I)(2F - I)W^* = W^*$ for the consensus equilibrium solution $W^*$, which stands as a fixed point of the mapping $T = (2G - I)(2F - I)$.
Step 9.
Implement an iterative fixed-point algorithm (e.g., Mann iteration) to compute the equilibrium solution.
Step 10.
Use a modified update operator $\tilde{L}(W_0; X_0)$ that involves iterative coordinate descent (ICD) for computational efficiency:
$$\tilde{F}(W; X) = \begin{bmatrix} \tilde{L}(W_0; X_0) \\ H_1(W_1) \\ \vdots \\ H_K(W_K) \end{bmatrix}$$
It should be noted that the algorithm involves mathematical concepts and notations related to optimization, proximal maps, consensus equilibrium, and iterative fixed-point methods. Implementation of these steps requires a suitable understanding of numerical optimization and the relevant mathematical frameworks. A small numerical sketch of the consensus equilibrium iteration is given below.
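To make the structure of Steps 6 through 9 concrete, the following minimal NumPy sketch runs the Mann iteration for the MACE fixed point on a toy one-dimensional problem. The data-fidelity proximal map assumes an identity forward model, and two moving-average smoothers stand in for the CNN denoising agents; all of these are deliberately simplified stand-ins chosen for illustration, not the multi-slice fusion implementation of [29].

import numpy as np

rng = np.random.default_rng(0)
N = 64
truth = np.sin(np.linspace(0.0, 4.0 * np.pi, N))
y = truth + 0.3 * rng.normal(size=N)      # noisy measurement, identity forward model
sigma2 = 0.5                              # proximal-map parameter sigma^2

def L(v):
    """Proximal map of the data-fidelity term l(x) = 0.5 * ||x - y||^2 (Step 6)."""
    return (sigma2 * y + v) / (sigma2 + 1.0)

def moving_average(width):
    """A crude smoothing operator standing in for a CNN denoising agent H_k."""
    kernel = np.ones(width) / width
    return lambda v: np.convolve(v, kernel, mode="same")

agents = [L, moving_average(3), moving_average(7)]   # one data agent + K = 2 prior agents

def F(W):
    """Stacked operator F(W) = [L(W_0), H_1(W_1), H_2(W_2)] (Step 6)."""
    return np.array([op(w) for op, w in zip(agents, W)])

def G(W):
    """Averaging operator: every component is replaced by the mean over agents (Step 7)."""
    mean = W.mean(axis=0)
    return np.tile(mean, (W.shape[0], 1))

# Mann iteration for the fixed point of T = (2G - I)(2F - I) (Steps 8 and 9).
W = np.tile(y, (len(agents), 1))
rho = 0.5
for _ in range(100):
    V = 2.0 * F(W) - W                    # (2F - I)(W)
    TW = 2.0 * G(V) - V                   # (2G - I)(2F - I)(W)
    W = (1.0 - rho) * W + rho * TW        # Mann update

x_star = W.mean(axis=0)                   # consensus solution
print(float(np.mean((y - truth) ** 2)), float(np.mean((x_star - truth) ** 2)))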

4. Accelerating Popular Tomographic Reconstruction Algorithms on Commodity PC Graphics Hardware

All algorithms designed for 3D computed tomography (CT) share a common challenge, as highlighted by Fang and Mueller [49], primarily involving a series of backprojection operations that significantly contribute to the computational burden. Moreover, iterative algorithms introduce additional computational overhead through forward projections, which pose similar computational demands. Therefore, to render these operations practical for clinical applications, it is imperative to optimize the efficiency of both backprojections and projections. Each projection and backprojection operation inherently possesses a complexity that scales with the volume dataset’s size, often denoted as $O(N^3)$. In the context under discussion, straightforward projection and backprojection in the spatial domain are considered. In such cases, the primary avenue for reducing the actual computational expense lies in diminishing the constant factor, denoted as $k$, which relates complexity to computational cost and is responsible for the $k \cdot N^3$ term. Typically, in iterative reconstruction, a widely adopted strategy involves precomputing weight matrices, often referred to as look-up tables. While this approach has demonstrated remarkable acceleration in two-dimensional (2D) reconstruction scenarios, its applicability to 3D reconstruction is hindered by the substantial memory requirements involved. Consequently, for 3D reconstruction, various commercial solutions have emerged, often built on custom hardware, to address these challenges [49].
When considering an appropriate platform, it is important to recognize that the projection and backprojection operations are fundamentally voxel and pixel-based tasks with minimal dependencies. Typically, these operations are computed as array processes within extended loops. An ideal platform for handling such calculations includes vector processors or massively parallel architectures [116]. However, it is worth noting that vector processors like the Cray supercomputer family tend to be expensive. A noteworthy recent development in this domain is the emergence of mainstream computing platforms that share many characteristics with vector processors, notably graphics processors (GPUs). By framing the projection, backprojection, and all other CT computations as stream operations, we can harness the capabilities of these affordable mainstream architectures to achieve rapid CT imaging. In consequence, an outline emerges regarding the adaptation of the most frequently utilized CT algorithms, encompassing filtered backprojection, algebraic methods, and EM methods, to GPU architectures. This transition results in significantly enhanced processing speed while upholding accuracy standards.
In a general context, it is evident that previous methods faced a common challenge stemming from the limitations of the graphics hardware they utilized. These hardware systems were confined to integer-arithmetic precision, typically at either 8-bit (PC) or 12-bit (SGI) precision levels. This constraint had a notable impact on their overall accuracy and computational performance. However, a significant stride forward has been achieved with the advent of newer GPU generations. These advanced GPUs introduce a pivotal feature, offering support for floating-point precision at two critical stages within the graphics pipeline. This enhancement carries significant implications. It now allows for the complete reconstruction process to be executed directly within the GPU, performing at the precision levels typically associated with CPUs. Furthermore, the computational tasks handled by the GPU facilitate effortless visualization of the generated data, thus enhancing the overall utility of the technology.
Graphics elements are typically constructed using polygon meshes. To introduce finer surface details, images or textures representing the desired intricacies are often applied or mapped onto these polygons during the rendering phase. This method, known as texture mapping, offers an efficient way to enhance surface detail without necessitating an increase in the object’s polygon count. Importantly, graphics hardware is finely tuned for rapid texture mapping, even when confronted with perspective distortion [117]. The graphics pipeline comprises three fundamental stages, as depicted in Figure 5: the geometry processing stage, the polygon rasterization stage, and the fragment processing stage.
In [49], the work of Lewitt [118] was referenced, where a volume is represented as an assemblage of point samples positioned at grid points. In this model, values at positions between grid points are approximated by interpolation using a specific kernel function. Linear functions are chosen for this purpose, a choice that has also gained extensive popularity in backprojectors and is amenable to efficient implementation in graphics hardware.
The work in [49] discusses the utilization of GPUs for computations within various common CT algorithms. It introduces a standardized notation and then delves into the specifics of imaging modalities, such as transmission and emission X-rays. The text further describes mathematical formulations for the CT process, including projection and backprojection operations. Three reconstruction methods—the Feldkamp algorithm, SART (Simultaneous Algebraic Reconstruction Technique), and OS-EM (Ordered Subsets Expectation Maximization)—are explained using the established notation.
Here is a breakdown of the four main points:
Step 1.
Notation and Imaging Modalities:
  • A volumetric object is defined by its attenuation function $\mu(x, y, z)$.
  • Two imaging modalities are considered: transmission X-ray (external source) and emission X-ray (metabolic sources within the object).
  • Mathematical formulations $C_\varphi^Q(u, v) = Q_0 \cdot e^{-\int_0^L \mu(t)\, dt}$ and $C_\varphi^E(u, v) = \int_0^L E(s) \cdot e^{-\int_0^s \mu(t)\, dt}\, ds$ for recording intensity values on a 2D detector for both modalities are provided.
Step 2.
Vector Processing for CT:
  • Introduction of vector processing for CT using a standardized notation ($C_i^Q = C_\varphi^Q(u, v)$, $C_i^E = C_\varphi^E(u, v)$, $q_i = \sum_{j=0}^{N^3 - 1} \mu_j w_{ij}$, $e_i(s) = \sum_{j=0}^{N^3 - 1} E_j w_{ij}(s)$, etc.).
  • The shift from pixel-centric to voxel-centric representation for transmission X-ray.
  • Formulation of voxel-centric representation for emission X-ray.
Step 3.
Projection and Backprojection Operators:
  • Introduction of projection ($P_\varphi$) and backprojection ($B_\varphi$) operators as matrices.
  • Dynamic computation of elements using interpolators integrated into rasterization hardware.
Step 4.
Reconstruction Methods:
  • Feldkamp algorithm
    Depth correction factor $w_{ij}^d = w_{ij} \cdot \frac{a^2}{\left(a + Y(v_j)\cos\varphi + Z(v_j)\sin\varphi\right)^2}$ applied during backprojection.
    Grid update equation expressed in condensed notation.
  • SART (Simultaneous Algebraic Reconstruction Technique)
    Grid update equation for SART, $V = V + \lambda \cdot \dfrac{B_\varphi\!\left(\frac{I_\varphi - P_\varphi(V)}{P_\varphi(W)}\right)}{B_\varphi(W)}$, involving a relaxation factor ($\lambda$).
  • OS-EM (Ordered Subsets Expectation Maximization)
    Grid update equation for the OS-EM algorithm: $V = \dfrac{V}{\sum_{\varphi \in OS} B_\varphi(W)} \cdot \sum_{\varphi \in OS} B_\varphi\!\left(\dfrac{I_\varphi}{P_\varphi(V)}\right)$.
The text concludes by emphasizing the potential of technological advancements in GPU capabilities for computed tomography. The provided information outlines the mathematical formulations and algorithms used in CT reconstruction, showcasing the significance of leveraging GPU capabilities for efficient computations in these processes.
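As a purely CPU-based illustration of the SART grid update listed in Step 4, and not of the GPU texture-mapping implementation described in [49], the following NumPy sketch performs a few SART sweeps on a tiny 2D phantom using an explicit system matrix. The phantom, the two toy projection views, and the relaxation factor are arbitrary choices made for the example.

import numpy as np

n = 32
phantom = np.zeros((n, n))
phantom[8:24, 8:24] = 1.0                    # simple square object
x_true = phantom.ravel()

def view_matrix(axis):
    """Toy stand-in for the projection operator P_phi of one parallel-beam view:
    rays sum the image along rows (axis=0) or columns (axis=1)."""
    A = np.zeros((n, n * n))
    for r in range(n):
        for c in range(n):
            A[r if axis == 0 else c, r * n + c] = 1.0
    return A

views = [view_matrix(0), view_matrix(1)]
sinograms = [A @ x_true for A in views]      # measured projections I_phi

V = np.zeros(n * n)                          # volume estimate
lam = 0.5                                    # relaxation factor lambda
for _ in range(20):                          # SART sweeps over all views
    for A, I_phi in zip(views, sinograms):
        ray_sums = A @ np.ones(n * n)        # P_phi(W): per-ray weight sums
        pix_sums = A.T @ np.ones(n)          # B_phi(W): per-pixel weight sums
        correction = (I_phi - A @ V) / ray_sums        # normalized residual per ray
        V = V + lam * (A.T @ correction) / pix_sums    # grid update of Step 4

# Absolute error w.r.t. the phantom (two views are not enough for exact recovery).
print(np.abs(V - x_true).max())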

5. A Deep Learning-Based 3D Ground-Penetrating Radar Data Inversion

Ground-penetrating radar (GPR) has found extensive use in geophysical exploration and civil engineering applications owing to its cost-effectiveness and non-destructive properties. The reconstruction of 3D permittivity maps from GPR data is invaluable for extracting crucial information about subsurface objects. These maps offer insights into various aspects, including the shapes, sizes, positions, orientations, and permittivity properties of these subsurface entities.
Numerous conventional algorithms have been developed for reconstructing 3D subsurface images from GPR C-scans. These migration algorithms offer approximations of object positions and shapes but do not provide detailed permittivity maps. The permittivity data are vital for tasks such as object identification and health assessments. To address the challenge of reconstructing subsurface permittivity maps, a full-wave inversion (FWI) algorithm was introduced in [119,120,121,122]. This algorithm enables the reconstruction of subsurface structure permittivity maps from GPR data through a nonlinear least-squared optimization process. However, processing 3D GPR data with the 3D FWI method can be computationally intensive. To enhance efficiency, a modified total variation (MTV) regularization scheme was introduced [123]. It is noteworthy that only two studies have applied 3D FWI to reconstruct subsurface permittivity or conductivity distributions from GPR data. The high computational complexity and limited applicability of FWI algorithms have posed challenges in their use for reconstructing intricate 3D subsurface scenarios.
In recent times, the integration of deep learning methodologies has emerged as a viable solution to tackle challenges in inverse scattering and electromagnetic (EM) imaging domains [124,125,126]. These investigations encompass a spectrum of approaches, ranging from entirely data-driven techniques to those that blend physics-based insights. They have exemplified the remarkable effectiveness of deep learning in addressing a wide array of EM inverse problems. The utility of deep learning-based methods has also extended to GPR applications [127,128], with a particular emphasis on resolving GPR inverse problems, including image classification, signature recognition, subsurface object detection, and the restoration of object properties [129,130,131,132,133]. Within this context, Deep Neural Networks (DNNs) have been harnessed to reconstruct 2D subsurface permittivity maps based on GPR B-scans [134,135,136,137]. However, it is worth noting that the scope of these endeavors has been confined to the restoration of 2D domain permittivity maps. Consequently, they offer insights primarily at the sectional level of subsurface scenarios, leaving room for improvement in capturing critical details such as object orientation and shape, especially when dealing with intricate subsurface structures. Furthermore, it is crucial to recognize that the use of full-wave simulations for 2D inversion may not faithfully replicate the intricacies of EM phenomena in the actual 3D world. In the 2D modeling paradigm, assumptions are made, including the invariance of scattering in one coordinate direction, the treatment of line sources as infinitely long, and the limitation to linear polarization. These assumptions deviate from the reality of complex 3D modeling scenarios, potentially yielding results that diverge significantly from actual 3D modeling outcomes [122]. Therefore, it becomes imperative to explore the potential of deep learning techniques in the reconstruction of 3D subsurface permittivity maps, transcending the limitations of 2D reconstruction and offering a more accurate representation of real-world EM phenomena.
A significant obstacle in the reconstruction of subsurface permittivity maps is the interference posed by various noise patterns. These include direct coupling, reflections from the ground, and environmental noise, which can obscure object reflections in GPR data. In the work by Dai, Lee et al. [74], a novel deep learning framework named 3DInvNet was introduced to address this challenge by reconstructing subsurface 3D permittivity maps from GPR C-scans, with a prior denoising step. The key contributions of this approach, as distinguished from existing deep learning-based GPR detection and 2D reconstruction methods, can be summarized as follows:
  • A dedicated 3D denoising network, referred to as the “Denoiser,” has been meticulously crafted to combat noise interference within GPR C-scans, particularly in the presence of complex and heterogeneous soil environments. This denoiser incorporates a compact 3D convolutional neural network (CNN) architecture, leveraging residual learning principles and a feature attention mechanism to effectively distill the reflection signatures of subsurface objects from noisy C-scans.
  • Following the denoising process, a 3D U-shaped encoder-decoder network, aptly named the “Inverter,” is purposefully designed. Its primary function is to translate the denoised C-scans, as predicted by the denoiser, into comprehensive subsurface 3D permittivity maps. To ensure robust feature extraction across a spectrum of objects with diverse properties, the inverter incorporates multi-scale feature aggregation modules.
  • To achieve optimal performance, a meticulously devised three-step independent learning strategy is employed, facilitating the pre-training and fine-tuning of both the denoiser and inverter components.

DENOISING, Inverter, and Training

The key contributions listed above are described in stages below. The denoiser has three main stages:
  • Initial Feature Extraction:
    • An initial feature extraction module is employed, consisting of a 3 × 3 × 3 convolutional layer with C 1 channels and 1 × 1 × 1 strides.
      This module captures the initial features $F_0 = \delta(K(y))$ from the noisy input C-scans $y \in \mathbb{R}^{D \times H \times W}$.
    • The process involves a 3D convolutional layer $K(\cdot)$ and a Rectified Linear Unit (ReLU) activation function $\delta(\cdot)$.
  • Feature Learning Modules:
    • After initial feature extraction, m feature learning modules are applied, each consisting of two residual blocks and one feature attention block.
    • Residual blocks utilize identity mapping to address gradient explosion concerns.
    • Residual learning is formulated for each block ($F_1 = \delta(K(\delta(K(F_0)))) + F_0$ and $F_2 = \delta(K(\delta(K(F_1)))) + F_1$), and then a feature attention block is introduced to emphasize the significance of features.
    • The attention mechanism involves global average pooling to compute channel-wise statistics and a gating mechanism using fully connected layers and a Sigmoid function.
    • The attended feature map is generated through channel-wise multiplication and added to the original feature map via a residual connection.
  • Reconstruction Module:
    • A reconstruction module featuring a one-channel convolutional layer with residual learning is employed.
    • This module reconstructs the denoised C-scan $y_D = \delta\big(y + K(F_0 + F_{M}^{m})\big)$ using the learned feature representations (a PyTorch-style sketch of one feature-learning module follows this list).
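The following compact PyTorch-style sketch shows one such feature-learning module (two residual blocks followed by a channel-attention block), with 1 × 1 × 1 convolutions standing in for the fully connected gating layers; the channel count, reduction ratio, and layer names are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FeatureLearningModule(nn.Module):
    """Sketch of one denoiser module: two 3D residual blocks followed by feature attention."""

    def __init__(self, channels=32, reduction=8):
        super().__init__()
        conv = lambda: nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.block1 = nn.Sequential(conv(), nn.ReLU(inplace=True), conv(), nn.ReLU(inplace=True))
        self.block2 = nn.Sequential(conv(), nn.ReLU(inplace=True), conv(), nn.ReLU(inplace=True))
        # Feature attention: global average pooling + gating (1x1x1 convs act as FC layers).
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, f0):
        f1 = self.block1(f0) + f0      # F1 = delta(K(delta(K(F0)))) + F0
        f2 = self.block2(f1) + f1      # F2 = delta(K(delta(K(F1)))) + F1
        weights = self.attention(f2)   # channel-wise attention weights in (0, 1)
        return f2 * weights + f2       # attended map added back via a residual connection

x = torch.randn(1, 32, 8, 16, 16)      # (batch, channels, D, H, W)
print(FeatureLearningModule()(x).shape)
```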
In a similar way, the key points about the inverter architecture are as follows:
  • 3D U-Net Structure:
    • The inverter follows the structure of the 3D U-Net architecture, comprising both an encoder and a decoder with skip connections.
  • Multi-Scale Feature Aggregation (MSFA) Mechanism:
    • MSFA is introduced within each encoding and decoding block to capture features at various scales effectively.
    • Each MSFA module includes three 3 × 3 × 3 convolutional layers with 1 × 1 × 1 strides.
    • The increased number of convolutional layers deepens the network, enhancing its nonlinear mapping capabilities and facilitating the extraction of larger-scale features from object reflections.
  • Receptive Field (RF) Calculation:
    • The RF size of the output feature map $F_{r_f}$ generated by the $f$-th convolutional layer in the MSFA module is calculated using the formula $r_f = r_{f-1} + (k_f - 1) \times \prod_{i=1}^{f-1} s_i$.
    • The choice of fixed kernel size ( k = 3 and s = 1) leads to different RF sizes, allowing for the capture of multiple scales.
  • Multi-Scale Feature Map Combination:
    • Feature maps $F_{r_1}, F_{r_2}, F_{r_3} \in \mathbb{R}^{C_2 \times D \times H \times W}$ with different RF sizes are combined in the channel dimension within each encoding and decoding block.
    • The consolidated multi-scale feature map $F_{r_{1\sim 3}} = \mathrm{Concat}(F_{r_1}, F_{r_2}, F_{r_3})$ is obtained by concatenating these feature maps.
  • Efficient Multi-Scale Feature Capture:
    • Unlike approaches that introduce additional parallel convolutional layers, the MSFA module directly integrates feature maps from successive convolutional layers with different receptive fields.
    • This design choice aims to efficiently capture multi-scale features from reflection patterns in GPR C-scans influenced by diverse subsurface object properties.
    • Overall, the MSFA mechanism is introduced to enhance the network’s ability to represent the nonlinear mapping from C-scans to 3D permittivity maps, taking into account multi-scale features in subsurface imaging (a small sketch of the RF growth and channel-wise concatenation follows this list).
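The short sketch below works through the receptive-field arithmetic and the channel-wise concatenation described above; the channel counts are illustrative and the module is a simplified stand-in for the MSFA block.

```python
import torch
import torch.nn as nn

# Receptive-field growth for three stacked 3x3x3 convolutions with unit stride:
# r_f = r_{f-1} + (k_f - 1) * prod(s_i), starting from r_0 = 1.
r, rf_sizes = 1, []
for _ in range(3):
    r = r + (3 - 1) * 1
    rf_sizes.append(r)
print(rf_sizes)                          # [3, 5, 7] -> three scales captured

class MSFA(nn.Module):
    """Simplified multi-scale feature aggregation: concatenate successive conv outputs."""

    def __init__(self, in_ch=1, mid_ch=16):
        super().__init__()
        block = lambda i, o: nn.Sequential(nn.Conv3d(i, o, 3, padding=1), nn.ReLU(inplace=True))
        self.c1 = block(in_ch, mid_ch)
        self.c2 = block(mid_ch, mid_ch)
        self.c3 = block(mid_ch, mid_ch)

    def forward(self, x):
        f1 = self.c1(x)                  # receptive field 3
        f2 = self.c2(f1)                 # receptive field 5
        f3 = self.c3(f2)                 # receptive field 7
        return torch.cat([f1, f2, f3], dim=1)   # F_{r1~3} = Concat(F_r1, F_r2, F_r3)

print(MSFA()(torch.randn(1, 1, 8, 16, 16)).shape)   # channel dimension = 3 * 16
```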
Finally, the following outlines a three-step process for the 3DInvNet, a two-stage scheme designed for denoising GPR C-scans and reconstructing 3D subsurface permittivity maps. The three steps involve denoiser pre-training, inverter pre-training, and fine-tuning the pre-trained networks using transfer learning. Here is a summary of each step:
Step 1:
Denoiser Pre-training
  • Objective: Train the denoiser component using a diverse dataset of noisy and noise-free C-scans.
  • Loss Function: Mean Squared Error (MSE) between the predicted denoised C-scan y D and the corresponding ground truth ( y ^ D ).
  • Loss Function Formula: $L_1(y_D, \hat{y}_D) = \dfrac{1}{D \cdot H \cdot W} \sum_{d=1}^{D} \sum_{h=1}^{H} \sum_{w=1}^{W} \big( y_D(d,h,w) - \hat{y}_D(d,h,w) \big)^2$
  • Optimizer: Adam optimizer.
Step 2:
Inverter Pre-training
  • Objective: Pre-train the inverter using noise-free C-scan ground truth ( y ^ D ) as input data.
  • Loss Function: Mean Absolute Error (MAE) between the predicted permittivity map X and the ground truth X ^ .
  • Loss Function Formula: $L_2(X, \hat{X}) = \dfrac{1}{D \cdot H \cdot W} \sum_{d=1}^{D} \sum_{h=1}^{H} \sum_{w=1}^{W} \big| X(d,h,w) - \hat{X}(d,h,w) \big|$ (a short sketch of both loss functions follows this step)
  • Optimizer: Adam optimizer.
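The two losses are the standard per-voxel mean squared and mean absolute errors; a few-line PyTorch sketch with illustrative tensor shapes:

```python
import torch

def loss_L1(y_d, y_d_hat):
    """Step 1 loss: mean squared error over the D x H x W volume (the paper's L1)."""
    return ((y_d - y_d_hat) ** 2).mean()

def loss_L2(x, x_hat):
    """Step 2 loss: mean absolute error between permittivity maps (the paper's L2)."""
    return (x - x_hat).abs().mean()

y_d, y_d_hat = torch.rand(16, 32, 32), torch.rand(16, 32, 32)   # illustrative D, H, W
print(loss_L1(y_d, y_d_hat).item(), loss_L2(y_d, y_d_hat).item())
```

Here "L1" and "L2" follow the paper's numbering of the two losses, not the usual naming of the L1/L2 norms.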
Step 3:
Fine-tune the Pre-trained Networks (Transfer Learning)
  • Additional Data Creation: Generate a small dataset containing new scenarios.
  • Initial Network States: Utilize the pre-trained networks as the starting point for fine-tuning.
  • Parameter Updates: Further refine the network parameters by minimizing the loss functions $L_1(y_D, \hat{y}_D)$ and $L_2(X, \hat{X})$ on the new training dataset until convergence (a schematic fine-tuning loop is sketched after this list).
  • Enhanced Networks: After fine-tuning, the networks are better suited to handle a broader range of scenarios.
  • Objective: Improve networks’ adaptability and robustness.
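A schematic PyTorch-style fine-tuning loop for this step is sketched below; the joint objective, optimizer settings, and data-loader structure are simplifying assumptions, and the paper's exact schedule for updating the denoiser and inverter may differ.

```python
import torch

def fine_tune(denoiser, inverter, loader, epochs=10, lr=1e-4):
    """Continue training the pre-trained networks on a small dataset of new scenarios."""
    params = list(denoiser.parameters()) + list(inverter.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for y_noisy, y_clean, x_true in loader:   # noisy C-scan, clean C-scan, permittivity map
            y_d = denoiser(y_noisy)
            x = inverter(y_d)
            # Combined objective: denoising loss (MSE) + inversion loss (MAE)
            loss = ((y_d - y_clean) ** 2).mean() + (x - x_true).abs().mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return denoiser, inverter
```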
In Summary:
Denoiser: Captures informative features from subsurface objects and mitigates environmental noise in GPR C-scans.
Inverter: Establishes a relationship between discriminative features extracted from denoised C-scans and corresponding subsurface scenarios.
Comprehensive Testing: Validates the capability of the proposed method to accurately and efficiently reconstruct 3D permittivity maps across various subsurface scenarios using both numerical simulations and real measurement data.

6. Global Seismic Tomography: The Inverse Problem and Beyond

Global seismic tomography has remained an active area of research since its initial systematic exploration in the early 1980s. Seismic waves generated by sufficiently large earthquakes propagate across the globe, and as they traverse the Earth’s interior, they carry valuable information about the medium through which they travel, including their arrival times and waveform characteristics [138,139]. As explained by Trampert [69], the fundamental challenge in seismic tomography is to reconstruct the three-dimensional elastic velocity distribution within the Earth using extensive datasets of arrival times, waveforms from both body and surface waves, and free oscillations. A recent comprehensive assessment by Ritzwoller and Lavely [140], encompassing various research studies, has demonstrated a robust convergence of information regarding the Earth’s structural characteristics, employing diverse datasets and distinct mapping techniques. Regions characterized by lower seismic velocities exhibit correlations with geoid highs and the locations of numerous global hotspots. These three-dimensional velocity models hold promise in establishing a solid foundation for comprehending the driving mechanisms underlying plate tectonics.
Seismic tomography plays a crucial role in mapping the current thermodynamic and compositional characteristics of heterogeneities within the convecting mantle, imposing stringent constraints on potential models of mantle convection [141,142,143]. Key geological features, such as the thickness of continental roots, the depth range of mid-ocean ridge signals, and variations in lithospheric velocity with age [142,144], offer valuable insights into the genesis and evolution of continental and oceanic lithospheric structures. Additionally, the relative changes in P-wave velocities compared to S-wave velocities provide substantial mineralogical constraints on the mantle’s composition [141]. The thermal conditions prevailing in the lower mantle establish critical boundary conditions for potential geodynamo models aiming to explain the Earth’s magnetic field dynamics [139]. Notably, a longstanding correlation between the geoid and seismic models has been acknowledged [138], suggesting that both gravity and seismological data can collectively contribute to our understanding of three-dimensional density fluctuations within the Earth.
Seismic tomography primarily revolves around the reconstruction of the Earth’s three-dimensional velocity field by analyzing surface observations of elastic waves. The forward problem entails the prediction of a seismogram. The tomographic inverse problem is inherently characterized by an uneven distribution of seismic sources (earthquakes) and receivers (limited to seismic stations on continents and oceanic islands). This results in an irregular sampling of the Earth’s interior using elastic waves, where some regions are over-sampled while others remain undersampled. In such circumstances, the inverse problem becomes ill-posed, typically exhibiting a rapidly declining eigenvalue spectrum. This steep decline signifies that minor data errors can lead to substantial variations in the solution, rendering the problem ill-conditioned. Both ill-posedness and ill-conditioning are associated with substantial null spaces, indicating non-unique solutions. The remedy for these challenges involves the application of implicit or explicit regularization techniques. Regularization serves the purpose of either constraining the potential model space or selecting a specific solution from the multitude of possible solutions.
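The consequence of a rapidly declining eigenvalue (singular-value) spectrum can be seen in a few lines of NumPy on a synthetic operator: tiny data errors explode in the naive solution, while truncating the spectrum (one simple form of explicit regularization) keeps the estimate bounded at the cost of losing the null-space components.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic operator with a rapidly decaying singular-value spectrum (ill-conditioned).
U, _ = np.linalg.qr(rng.normal(size=(50, 50)))
V, _ = np.linalg.qr(rng.normal(size=(50, 50)))
s = 10.0 ** (-np.arange(50) / 2.0)
G = U @ np.diag(s) @ V.T

m_true = rng.normal(size=50)
d = G @ m_true + 1e-8 * rng.normal(size=50)        # data with tiny errors

m_naive = np.linalg.solve(G, d)                    # unregularized: errors are amplified
k = 10                                             # keep only well-resolved directions
m_tsvd = V[:, :k] @ ((U[:, :k].T @ d) / s[:k])     # truncated-SVD (regularized) solution

print(np.linalg.norm(m_naive - m_true))            # huge model error
print(np.linalg.norm(m_tsvd - m_true))             # bounded, but the solution is non-unique
```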
The propagation of seismic waves within an elastic medium is primarily governed by the elastodynamic equations. Assuming a known source excitation and instrument response, this leads to a nonlinear relationship between the observed data and the provided Earth model through the underlying physics of wave propagation encapsulated in the forward theory. In summary, global seismic tomography has achieved significant advancements in mapping three-dimensional elastic wave velocity fields. This progress has yielded valuable insights that have sparked interdisciplinary discussions within the field of Earth sciences.
Moreover, seismic full waveform inversion (FWI) for imaging Earth’s interior was introduced in the late 1970s. As highlighted by Tromp [145], its goal is to use all of the information in a seismogram to understand the structure and dynamics of Earth, such as hydrocarbon reservoirs, the nature of hotspots, and the forces behind plate motions and earthquakes [145] (see Figure 6). FWI in seismology starts by choosing seismic sources, ranging from earthquakes to controlled sources like air guns or explosions. It involves comparing observed and simulated seismograms to optimize model parameters, such as wave speeds and densities. However, FWI focuses on specific waveform characteristics, often using time windows where seismograms align [146,147]. Selecting an accurate starting model is crucial but challenging, particularly in exploration seismology. Forward simulations, using various numerical methods, calculate the synthetic seismograms that are compared with observed data. FWI needs a fast solver for wave propagation, with a shift toward frequency-domain approaches [148]. Comparison of observed and simulated seismograms requires a chosen misfit function, measuring differences like phase and amplitude. Inversion aims to minimize this misfit using the adjoint-state method, calculating gradients of model parameters [149,150,151]. Yet, challenges like cycle skipping and data volume remain [152,153,154]. FWI algorithms use optimization techniques like L-BFGS but face issues such as local minima and data quality mismatches. This method distinguishes itself from ray-based methods, as it involves comprehensive 3D numerical simulations and iterative updates of Earth models, allowing for a detailed study of seismic-wave propagation [155,156,157]. FWI finds applications in controlled-source exploration, earthquake studies, and ambient-noise seismology, including hydrocarbon exploration and seismic interferometry. Notable examples demonstrate its diverse use in various seismic contexts worldwide.
In the following, a summary of six key points is provided for seismic tomography, focusing on the inverse problem and the associated challenges and techniques involved in reconstructing the Earth’s three-dimensional velocity field:
A. The Inverse Problem basic stages
Forward Problem: In seismic tomography, the forward problem predicts a seismogram s ( t ,   Δ ) at a distance Δ from the seismic source over time t based on a prescribed velocity field υ ( r ) .
Integral Formulation: The forward problem is represented as an integral equation involving the velocity field $\upsilon(\mathbf{r})$, the spatial location $\mathbf{r}$, and the underlying principles of elastic wave propagation: $s_i(t, \Delta) = \int_{\Omega} \upsilon(\mathbf{r})\, g_i\big(t, \Delta, \upsilon(\mathbf{r})\big)\, d\mathbf{r}$.
Inverse Problem Challenges: The inverse problem involves reconstructing the velocity field υ ( r ) from observed seismograms, and it is inherently ill-posed due to the uneven distribution of seismic sources and receivers.
Regularization: Regularization techniques, whether implicit or explicit, are applied to address ill-posedness and ill-conditioning issues in the inverse problem. These techniques constrain the model space or select a specific solution.
B. Forward Theory
Seismogram Characteristics: A typical seismogram exhibits sequences of P and S body waves, along with dispersed surface waves, providing information about the Earth’s interior structure.
Propagation Equations: The propagation of seismic waves within an elastic medium is governed by elastodynamic equations. A simplified form of seismic tomography relies on the arrival times of body waves.
Ray Theory and Travel-Time Tomography: Travel-time tomography is based on the integration of slowness to determine travel time. Fermat’s principle and Rayleigh’s principle are utilized in the context of ray theory.
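In its simplest discretized form, travel-time tomography writes each travel time as the sum of cell slownesses (reciprocal velocities) weighted by the ray length in each cell, giving a linear system $t = Gm$; the tiny NumPy sketch below uses made-up ray lengths and a damped least-squares solution to illustrate the idea.

```python
import numpy as np

# G[i, j] = length of ray i inside cell j; m[j] = slowness (1/velocity) of cell j.
G = np.array([[1.0, 1.0, 0.0, 0.0],     # ray 1 crosses cells 0 and 1
              [0.0, 0.0, 1.0, 1.0],     # ray 2 crosses cells 2 and 3
              [1.4, 0.0, 0.0, 1.4]])    # a diagonal ray crossing cells 0 and 3
m_true = 1.0 / np.array([5.0, 6.0, 5.5, 4.8])      # cell slownesses in s/km
t_obs = G @ m_true                                  # observed travel times t = G m

# Damped least-squares estimate (explicit regularization of an under-determined system).
lam = 1e-3
m_est = np.linalg.solve(G.T @ G + lam * np.eye(4), G.T @ t_obs)
print(1.0 / m_est)   # recovered cell velocities; 3 rays vs. 4 cells, so not unique
```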
C. Parametrization of the Model
Model Parameters $\delta\upsilon(\mathbf{r})$: the model perturbation to be deduced from the data, assumed to vary continuously with position. The model is expanded in basis functions $B_j(\mathbf{r})$ as $\upsilon(\mathbf{r}) = \sum_{j=1}^{L} m_j B_j(\mathbf{r})$, which transforms the problem into a linear inverse problem for the coefficients $m_j$.
D. Cost Function
Cost Function $C(\lambda) = \Delta_D\big(\delta d, A m\big) + \lambda\, \Delta_M\big(m, m_0\big)$: A cost function is defined to balance the fit to observed data and the size of the model. Regularization enters through the trade-off parameter $\lambda$.
E. Regularization
Implicit and Explicit Regularization: Regularization is applied implicitly through choices such as the upper summation limit $L$ and the type of basis functions. Explicit regularization involves parameters like $\lambda$ and the use of a reference model $m_0$.
F. Inverse Operator
Bayesian Perspective: Global tomography investigations often adopt a Bayesian perspective, seeking the most likely solution at the minimum of the cost function.
Newton Approximation: The Newton approximation is a widely recognized algorithm for minimizing the cost function, $m_{n+1} = m_n + \left(A_n^{T} C_d^{-1} A_n + \lambda C_m^{-1}\right)^{-1} \left(A_n^{T} C_d^{-1}\, \delta d - \lambda C_m^{-1} m_n\right)$, involving the Hessian matrix and gradient vector of $C(\lambda)$ evaluated at $m_n$.
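A compact NumPy sketch of this regularized Newton update on a toy linearized problem follows; the operator, covariances, and damping value are illustrative, and with a linear operator a single step already yields the damped least-squares estimate.

```python
import numpy as np

def newton_step(m_n, A_n, delta_d, C_d_inv, C_m_inv, lam):
    """One regularized Newton update for the cost function C(lambda)."""
    H = A_n.T @ C_d_inv @ A_n + lam * C_m_inv               # approximate Hessian
    g = A_n.T @ C_d_inv @ delta_d - lam * C_m_inv @ m_n     # right-hand side (gradient-related)
    return m_n + np.linalg.solve(H, g)

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))               # linearized forward operator A_n
m_true = rng.normal(size=10)
delta_d = A @ m_true + 0.01 * rng.normal(size=30)           # data residuals with small noise
C_d_inv = np.eye(30)                        # illustrative unit data covariance
C_m_inv = np.eye(10)                        # illustrative unit model covariance

m = newton_step(np.zeros(10), A, delta_d, C_d_inv, C_m_inv, lam=0.1)
print(np.linalg.norm(m - m_true))           # small: the damped least-squares estimate
```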
In conclusion, global seismic tomography over the past two decades has made significant advancements in mapping three-dimensional elastic wave velocity fields, providing valuable insights for interdisciplinary discussions in Earth sciences.

7. Optical Coherence Tomography

Tomographic imaging methods, including X-ray computed tomography [81], magnetic resonance imaging [82], and ultrasound imaging [83], have established extensive utility within the field of medicine. Each of these techniques measures distinct physical properties and offers advantages in terms of resolution and penetration depth for specific medical applications. Optical Coherence Tomography (OCT) enables non-invasive cross-sectional imaging of internal biological tissue structures [158] by assessing the way light reflects within these tissues, as described by Huang, Swanson et al. [159]. A block diagram of a simplified structure of OCT is provided in Figure 7.
Both low-coherence light and ultra-short laser pulses have the capacity to assess the internal structures of biological systems. When optical signals pass through or bounce off biological tissues, they contain time-of-flight data, which subsequently provide insight into the spatial details of tissue microstructures. Time-resolved transmission spectroscopy, for instance, has been utilized to gauge the absorption and scattering characteristics within tissues, offering a non-invasive means of diagnosing hemoglobin oxygenation in the brain [84]. In addition, femtosecond laser pulses have enabled optical ranging measurements of microstructures in the eye and skin. Time gating techniques, both coherent and non-coherent, have been employed to selectively capture directly transmitted light, allowing for the acquisition of transmission images in optically opaque tissue. Furthermore, low-coherence reflectometry has proven useful in various applications, including ranging measurements in optical components, surface contour mapping in integrated circuits, and measuring the distance within the retina and other ocular structures.
Unlike time-domain techniques, low-coherence reflectometry can be conducted using continuous-wave light, eliminating the necessity for ultra-short pulse laser sources. Recent technological progress in low-coherence reflectometry has made it possible to create compact, modular systems that employ diode light sources and fiber optics, resulting in the achievement of micrometer-level spatial resolutions and heightened detection sensitivities [158,159].
OCT’s ability to provide optical sectioning is comparable to confocal microscopy systems. However, while the longitudinal resolution in confocal microscopy relies on the numerical aperture at hand, OCT’s resolution is primarily constrained by the coherence length of the light source. Consequently, OCT can maintain exceptional depth resolution, even when the available aperture is limited. This characteristic is especially advantageous for conducting in vivo assessments of deep tissues, such as in transpupillary imaging of the posterior eye and endoscopic imaging [159].
OCT, being an optical method, offers a versatile range of optical properties that can be harnessed to discern tissue structure and composition. Certain tissues with a defined orientation, such as the elastic lamina of arteries and the retinal nerve fiber layer (RNFL), exhibit birefringence. In OCT, the analysis of reflected light’s polarization can be employed to enhance the differentiation of these birefringent tissue structures. Moreover, OCT systems can function across multiple wavelengths to assess spectral properties. This allows for the detection of various characteristics, including chromophore content, hemoglobin oxygenation, hydration levels, or the dimensions of light-scattering structures. Consequently, OCT emerges as a promising technique suitable for both fundamental research and clinical applications.
Fercher, Hitzenberger et al. [160] present theoretical models that illustrate the enhanced sensitivity of Swept Source and Fourier Domain that OCT techniques present in comparison to the conventional time-domain approach. The alternative method [160] proposed to achieve coherence gating without using a scanning delay line involves collecting the interferometric signal as a function of optical wavenumber by combining sample and reference light at a fixed group delay [161]. Two distinct techniques have been developed based on this spectral discrimination (SD) approach. The first is Fourier domain OCT [161,162,163,164,165,166], which utilizes a broadband light source and employs spectral discrimination with a dispersive spectrometer in the detector arm. The second technique is swept source OCT [161,165,166,167,168], which encodes time with wavenumber by rapidly tuning a narrowband light source across a broad optical bandwidth.
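In spectral-discrimination OCT the depth profile (A-scan) is obtained by Fourier-transforming the interference signal recorded as a function of wavenumber; the minimal NumPy sketch below places a single reflector at an assumed depth and recovers it from a simulated spectrum (all numerical values are illustrative).

```python
import numpy as np

k = np.linspace(7.0e6, 8.0e6, 2048)              # wavenumber samples (rad/m), ~800 nm band
z0 = 150e-6                                       # optical path difference of one reflector (m)
source = np.exp(-((k - k.mean()) / 2.0e5) ** 2)   # Gaussian source spectrum
spectrum = source * (1.0 + 0.5 * np.cos(2.0 * k * z0))   # interference spectrum

# Subtract the reference (source) spectrum to suppress the DC term, then Fourier-transform
# over wavenumber: the cos(2 k z0) fringes map to a peak at depth z0.
a_scan = np.abs(np.fft.ifft(spectrum - source))
dk = k[1] - k[0]
depth_axis = np.fft.fftfreq(k.size, d=dk) * np.pi          # z = pi * f for cos(2 k z) fringes
peak = np.argmax(a_scan[: k.size // 2])
print(f"estimated reflector depth: {depth_axis[peak] * 1e6:.1f} um")   # ~150 um
```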
Dental OCT applications in oral tissue imaging, caries, periodontal disease, and oral cancers have been discussed by Hsieh, Ho et al. [169]. The article also compares OCT with other oral diagnostic methods. Dental OCT enables the qualitative and quantitative assessment of morphological changes in dental hard and soft tissues in vivo. Early detection and treatment can enhance both tooth and patient survival rates. Another advantage of dental OCT is its three-dimensional imaging capability, facilitating more precise and rapid identification of issues in soft and hard tissues.
In the early days of OCT, the primary focus was on investigating dental soft and hard tissue morphology. This was partly due to limitations in the size of the OCT systems and the technology for manufacturing light sources [170,171,172]. However, in recent years, with the advancement of components and technology, this powerful tool has found new applications in advanced diagnostic challenges. Dental OCT is useful for visualizing tissues like the gingival, periodontal structures, and mucosa. With longer center wavelengths, OCT can also be applied to imaging bone-related conditions. Looking ahead, the development of an OCT system with a handheld optical probe and a more streamlined setup holds promise for telemedicine integration, where it could be utilized with Picture Archiving and Communication Systems (PACS). This advancement could prove invaluable for home nursing care plans in our aging society.
The work by Cogliati, Canavesi et al. [173] introduces distortion-free OCT volumetric imaging via a handheld probe equipped with a dual-axis micro-electro-mechanical system (MEMS). In the context of this imaging probe, where optics are positioned between the 2D MEMS scanner and the sample, the work discusses the implementation of pre-shaped open-loop input signals containing customized nonlinear elements on a dedicated control board. Unlike the common use of sinusoidal signals for MEMS scanning, this approach enables real-time distortion-free imaging without the need for post-processing. The MEMS mirror has been successfully integrated into a compact and lightweight handheld probe, achieving a significant 12-fold reduction in volume and a 17-fold reduction in weight compared to a previous dual-mirror galvanometer-based scanner. Experimental results demonstrate distortion-free imaging without post-processing using a Gabor-domain optical coherence microscope (GD-OCM) with exceptional 2 μm axial and lateral resolutions, covering a 1 × 1 mm² field of view. This work demonstrates the use of a MEMS-based scanning device for generating distortion-free images in conjunction with a GD-OCM. A novel aspect introduced in this paper is the concurrent placement of the dual-axis MEMS at the pupil location.
Creating a detailed block diagram for optical coherence tomography algorithms involves representing the key components and stages in the process. Keep in mind that the actual implementation can vary based on the specific OCT system and application. Here is a simplified block diagram for OCT algorithms:
  • Data Acquisition:
    • Light Source: Generates coherent light.
    • Interferometer: Splits the light into sample and reference arms.
    • Sample Arm: Directs light onto the sample.
    • Reference Arm: Sends light to a reference mirror.
    • Interference Detection: Combines sample and reference beams; interference is detected.
  • Signal Processing:
    • Interference Signal Processing: Extracts the interference signal.
    • Fourier Transform: Converts the interference signal from time to frequency domain.
    • A-Scan Generation: Produces an A-scan (depth profile).
  • Image Reconstruction:
    • B-Scan Formation: Combines multiple A-scans to form a B-scan (cross-sectional image).
    • En-face Image Generation: Constructs en-face images at different depths.
  • Image Enhancement and Analysis:
    • Speckle Reduction: Techniques to reduce speckle noise.
    • Contrast Enhancement: Improves visibility of structures.
    • Segmentation: Identifies boundaries and structures in the OCT images.
    • 3D Rendering: Creates three-dimensional representations of the imaged volume.
  • Image Display and Analysis:
    • Visualization: Displays OCT images in real-time.
    • Quantitative Analysis: Extracts numerical information from images.
    • Clinical Decision Support: Provides support for medical diagnoses.
  • Advanced Algorithms:
    • Motion Correction: Compensates for motion artifacts.
    • Doppler OCT: Measures blood flow within tissues.
    • Polarization-Sensitive OCT: Provides additional tissue information based on polarization properties.
    • Machine Learning: Incorporates machine learning techniques for image analysis and pattern recognition.
  • Data Storage and Management:
    • Database: Stores acquired OCT data.
    • Archiving: Manages storage of large datasets for future reference.
  • Integration with Other Modalities:
    • Multimodal Imaging: Integrates OCT with other imaging modalities for comprehensive diagnostics.
  • Clinical Applications:
    • Ophthalmology: Retinal imaging, anterior segment imaging.
    • Dermatology: Skin imaging.
    • Cardiology: Cardiovascular imaging.
    • Endoscopy: Imaging within body cavities.
  • Feedback Loop:
    • System Calibration: Ensures accuracy and reliability.
    • User Feedback: Allows for adjustments based on user input.
    • System Optimization: Continuous improvement based on performance feedback.

8. Conclusions

The introduction of the paper provides an extensive set of references on tomographic methods, techniques, efficient algorithm implementations, and applications to various physical problems (medicine, geophysics, solid-state physics, etc.).
Furthermore, this comprehensive review provides an insightful analysis of six key themes in the domain of tomographic reconstruction. By critically assessing and synthesizing findings from various research papers, we have gained a comprehensive perspective on the evolution and potential future directions of tomographic imaging algorithms. Accordingly, the optimization of breast tomosynthesis image reconstruction is examined, highlighting the importance of refining these methods for more accurate diagnostics and improved patient care. The emergence of multi-slice fusion as an innovative approach promises real-time insights into dynamic physiological processes, pushing the boundaries of medical diagnosis. Shifting our attention to computational efficiency, we witnessed a significant transformation in the acceleration of tomographic reconstruction algorithms using commodity PC graphics hardware. This advancement offers enhanced accessibility to high-speed reconstruction, making it more affordable and accessible for researchers and practitioners. In the realm of geophysics, the 3DInvNet introduced a revolutionary deep learning-based approach to GPR data inversion. This integration of artificial intelligence with traditional sensing methods opens new possibilities for the geological and environmental sciences. In the Earth sciences, advanced inverse-problem solutions in global seismic tomography provide valuable insights into the Earth’s interior and expand our perspectives beyond conventional techniques. Optical coherence tomography was presented with extensive reference to four different papers to showcase the fine detail it captures from biological tissues. Recent topics such as “large-scale image reconstruction” are elaborated on in references [174,175]. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data in order to reduce the imaging dose [176,177].
In summary, this review paper weaves together a tapestry of advancements in tomographic imaging techniques. These six interconnected themes, spanning medical imaging, computational acceleration, and deep learning in geophysics, underscore the versatility and potential of tomography. The multidimensional examination offers a holistic view of how tomographic reconstruction can shape the future of various scientific disciplines and medical diagnostics.

Author Contributions

Conceptualization, S.T., G.K. and V.A.; Methodology, G.K. and V.A.; Resources, S.T.; Writing—original draft preparation, S.T.; Writing—review and editing, G.K. and V.A. All authors have read and agreed to the published version of the manuscript.

Funding

Styliani Tassiopoulou was financially supported by the Andreas Mentzelopoulos Foundation.

Data Availability Statement

Data are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gordon, R.; Herman, G.T. Three-Dimensional Reconstruction from Projections: A Review of Algorithms. Int. Rev. Cytol. 1974, 38, 111–151. [Google Scholar] [CrossRef]
  2. Colsher, J.G. Iterative three-dimensional image reconstruction from tomographic projections. Comput. Graph. Image Process 1977, 6, 513–537. [Google Scholar] [CrossRef]
  3. Clackdoyle, R.; Defrise, M. Tomographic Reconstruction in the 21st Century. IEEE Signal Process. Mag. 2010, 27, 60–80. [Google Scholar] [CrossRef]
  4. Hornegger, J.; Maier, A.; Kowarschik, M. CT Image Reconstruction Basics. 2016 [Source: Radiology Key]. Available online: https://radiologykey.com/ct-image-reconstruction-basics/ (accessed on 15 October 2023).
  5. Khan, U.; Yasin, A.; Abid, M.; Awan, I.S.; Khan, S.A. A Methodological Review of 3D Reconstruction Techniques in Tomographic Imaging. J. Med. Syst. 2018, 42, 190. [Google Scholar] [CrossRef]
  6. Goshtasby, A.; Turner, D.A.; Ackerman, L.V. Matching of tomographic slices for interpolation. IEEE Trans. Med. Imaging 1992, 11, 507–516. [Google Scholar] [CrossRef]
  7. Fessler, J.A. Statistical Image Reconstruction Methods for Transmission Tomography. In Handbook of Medical Imaging; SPIE Press: Bellingham, WA, USA, 2000; Volume 1, pp. 1–70. [Google Scholar] [CrossRef]
  8. Yu, D.F.; Fessler, J.A. Edge-preserving tomographic reconstruction with nonlocal regularization. IEEE Trans. Med. Imaging 2002, 21, 159–173. [Google Scholar] [CrossRef] [PubMed]
  9. Chandra, S.S.; Svalbe, I.D.; Guedon, J.; Kingston, A.M.; Normand, N. Recovering Missing Slices of the Discrete Fourier Transform Using Ghosts. IEEE Trans. Image Process. 2012, 21, 4431–4441. [Google Scholar] [CrossRef] [PubMed]
  10. Zhou, W.; Lu, J.; Zhou, O.; Chen, Y. Evaluation of Back Projection Methods for Breast Tomosynthesis Image Reconstruction. J. Digit. Imaging 2014, 28, 338–345. [Google Scholar] [CrossRef] [PubMed]
  11. Chetih, N.; Messali, Z. Tomographic image reconstruction using filtered back projection (FBP) and algebraic reconstruction technique (ART). In Proceedings of the 3rd International CEIT 2015, Tlemcen, Algeria, 25 May 2015. [Google Scholar] [CrossRef]
  12. Somigliana, A.; Zonca, G.; Loi, G.; Sichirollo, A.E. How Thick Should CT/MR Slices be to Plan Conformal Radiotherapy? A Study on the Accuracy of Three-Dimensional Volume Reconstruction. Tumori J. 1996, 82, 470–472. [Google Scholar] [CrossRef]
  13. Gourion, D.; Noll, D. The inverse problem of emission tomography. IOP Publ. Inverse Probl. 2002, 18, 1435–1460. [Google Scholar] [CrossRef]
  14. Petersilka, M.; Bruder, H.; Krauss, B.; Stierstorfer, K.; Flohr, T.G. Technical principles of dual source CT. Eur. J. Radiol. 2008, 68, 362–368. [Google Scholar] [CrossRef]
  15. Saha, S.K.; Tahtali, M.; Lambert, A.; Pickering, M. CT reconstruction from simultaneous projections: A step towards capturing CT in One Go. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2014, 5, 87–99. [Google Scholar] [CrossRef]
  16. Miqueles, E.; Koshev, N.; Helou, E.S. A Backprojection Slice Theorem for Tomographic Reconstruction. IEEE Trans. Image Process. 2018, 27, 894–906. [Google Scholar] [CrossRef]
  17. Willemink, M.J.; Noël, P.B. The evolution of image reconstruction for CT—From filtered back projection to artificial intelligence. Eur. Radiol. 2019, 29, 2185–2195. [Google Scholar] [CrossRef] [PubMed]
  18. Wang, G.; Ye, J.C.; De Man, B. Deep learning for tomographic image reconstruction. Nat. Mach. Intell. 2020, 2, 737–748. [Google Scholar] [CrossRef]
  19. Jung, H. Basic Physical Principles and Clinical Applications of Computed Tomography. Prog. Med. Phys. 2021, 32, 1–17. [Google Scholar] [CrossRef]
  20. Withers, P.J.; Bouman, C.; Carmignato, S.; Cnudde, V.; Grimaldi, D.; Hagen, C.K.; Stock, S.R. X-ray computed tomography. Nat. Rev. Dis. Primers 2021, 1, 18. [Google Scholar] [CrossRef]
  21. Seletci, E.D.; Duliu, O.G. Image Processing and Data Analysis in Computed Tomography. Rom. J. Phys. 2007, 72, 764–774. [Google Scholar]
  22. Miao, J.; Förster, F.; Levi, O. Equally sloped tomography with oversampling reconstruction. Phys. Rev. B 2005, 72, 052103. [Google Scholar] [CrossRef]
  23. Whiteley, W.; Luk, W.K.; Gregor, J. Direct PET: Full Size Neural Network PET Reconstruction from Sinogram Data. J. Med Imaging 2019, 7, 032503. [Google Scholar] [CrossRef]
  24. Lee, D.; Choi, S.; Kim, H.-J. High quality imaging from sparsely sampled computed tomography data with deep learning and wavelet transform in various domains. J. Med. Phys. 2018, 46, 104–115. [Google Scholar] [CrossRef]
  25. Zhou, B.; Kevin Zhou, S.; Duncan, J.S.; Liu, C. Limited View Tomographic Reconstruction using a Cascaded Residual Dense Spatial-Channel Attention Network with Projection Data Fidelity Layer. IEEE Trans. Med. Imaging 2021, 40, 1792–1804. [Google Scholar] [CrossRef] [PubMed]
  26. Luther, K.; Seung, S. Stretched sinograms for limited-angle tomographic reconstruction with neural networks. arXiv 2023, arXiv:2306.10201. [Google Scholar]
  27. Hu, H. Multi-slice helical CT: Scan and reconstruction. J. Med. Phys. 1999, 26, 5–18. [Google Scholar] [CrossRef] [PubMed]
  28. Dawson, P.; Lees, W.R. Multi-slice Technology in Computed Tomography. Clin. Radiol. 2001, 56, 302–309. [Google Scholar] [CrossRef] [PubMed]
  29. Majee, S.; Balke, T.; Kemp, C.; Buzzard, G.; Bouman, C. Multi-Slice Fusion for Sparse-View and Limited-Angle 4D CT Reconstruction. IEEE Trans. Comput. Imaging 2021, 7, 448–462. [Google Scholar] [CrossRef]
  30. Singh, S.; Kalra, M.K.; Hsieh, J.; Licato, P.E.; Do, S.; Pien, H.H.; Blake, M.A. Abdominal CT: Comparison of Adaptive Statistical Iterative and Filtered Back Projection Reconstruction Techniques. Radiology 2010, 257, 373–383. [Google Scholar] [CrossRef]
  31. Aibinu, A.M.; Salami, M.J.; Shafie, A.A.; Najeeb, A.R. MRI Reconstruction Using Discrete Fourier Transform: A tutorial. WASET 2008, 2, 1852–1858. [Google Scholar] [CrossRef]
  32. Plenge, E.; Poot, D.H.J.; Niessen, W.J.; Meijering, E. Super-resolution reconstruction using cross-scale self-similarity in multi-slice MRI. MICCAI 2013, 16, 123–130. [Google Scholar] [CrossRef]
  33. Zhang, H.; Shinomiya, Y.; Yoshida, S. 3D MRI Reconstruction Based on 2D Generative Adversarial Network Super-Resolution. Sensors 2021, 21, 2978. [Google Scholar] [CrossRef]
  34. Hoffman, E.J.; Cutler, P.D.; Digby, W.M.; Mazziotta, J.C. 3-D phantom to simulate cerebral blood flow and metabolic images for PET. IEEE Trans. Nucl. Sci. 1990, 37, 616–620. [Google Scholar] [CrossRef]
  35. Collins, D.L.; Zijdenbos, A.P.; Kollokian, V.; Sled, J.G.; Kabani, N.J.; Holmes, C.J.; Evans, A.C. Design and construction of a realistic digital brain phantom. IEEE Trans. Med. Imaging 1998, 17, 463–468. [Google Scholar] [CrossRef]
  36. Glick, S.J.; Ikejimba, L.C. Advances in digital and physical anthropomorphic breast phantoms for X-ray imaging. J. Med. Phys. 2018, 45, 870–885. [Google Scholar] [CrossRef]
  37. Klingenbeck-Regn, K.; Schaller, S.; Flohr, T.; Ohnesorge, B.; Kopp, A.F.; Baum, U. Subsecond multi-slice computed tomography: Basics and applications. Eur. J. Radiol. 1999, 31, 110–124. [Google Scholar] [CrossRef]
  38. Michael O’Connor, J.; Das, M.; Dider, C.S.; Mahd, M.; Glick, S.J. Generation of voxelized breast phantoms from surgical mastectomy specimens. J. Med. Phys. 2013, 40, 041915. [Google Scholar] [CrossRef] [PubMed]
  39. Dobbins, J.T.; Godfrey, D.J. Digital X-ray tomosynthesis: Current state of the art and clinical potential. Phys. Med. Biol. 2003, 48, R65–R106. [Google Scholar] [CrossRef] [PubMed]
  40. Goossens, B.; Labate, D.; Bodmann, B.G. Robust and stable region-of-interest tomographic reconstruction using a robust width prior. Inverse Probl. Imaging 2020, 14, 291–316. [Google Scholar] [CrossRef]
  41. Su, T.; Deng, X.; Yang, J.; Wang, Z.; Fang, S.; Zheng, H.; Liang, D.; Ge, Y. DIR-DBTnet: Deep iterative reconstruction network for three-dimensional digital breast tomosynthesis imaging. Med. Phys. 2021, 48, 2289–2300. [Google Scholar] [CrossRef] [PubMed]
  42. Quillent, A.; Bismuth, V.J.; Bloch, I.; Kervazo, C.; Ladjal, S. A deep learning method trained on synthetic data for digital breast tomosynthesis reconstruction. MIDL Poster 2023, 1–13. Available online: https://openreview.net/pdf?id=xcMTcyk2v69 (accessed on 15 October 2023).
  43. Lyu, T.; Wu, Z.; Ma, G.; Jiang, C.; Zhong, X.; Xi, Y.; Chen, Y.; Zhu, W. PDS-MAR: A fine-grained Projection-Domain Segmentation-based Metal Artifact Reduction method for intraoperative CBCT images with guide wires. arXiv 2023, arXiv:2306.11958. [Google Scholar] [CrossRef]
  44. Abreu, M.; Tyndall, D.A.; Ludlow, J.B. Effect of angular disparity of basis images and projection geometry on caries detection using tuned-aperture computed tomography. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2001, 92, 353–360. [Google Scholar] [CrossRef] [PubMed]
  45. Pekel, E.; Lavilla, M.L.; Pfeiffer, F.; Lasser, T. Runtime Optimization of Acquisition Trajectories for X-ray Computed Tomography with a Robotic Sample Holder. arXiv 2023, arXiv:2306.13786. [Google Scholar] [CrossRef]
  46. Jin, K.H.; McCann, M.T.; Froustey, E.; Unser, M. Deep Convolutional Neural Network for Inverse Problems in Imaging. IEEE Trans. Image Process. 2017, 26, 4509–4522. [Google Scholar] [CrossRef]
  47. Hou, B.; Alansary, A.; McDonagh, S.; Davidson, A.; Rutherford, M.; Hajnal, J.V.; Kainz, B. Predicting Slice-to-Volume Transformation in Presence of Arbitrary Subject Motion. MICCAI 2017, 20, 296–304. [Google Scholar] [CrossRef]
  48. Morani, K.; Unay, D. Deep learning-based automated COVID-19 classification from computed tomography images. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2021, 11, 2145–2160. [Google Scholar] [CrossRef]
  49. Fang, X.; Mueller, K. Accelerating popular tomographic reconstruction algorithms on commodity PC graphics hardware. IEEE Trans. Nucl. Sci. 2005, 52, 654–663. [Google Scholar] [CrossRef]
  50. Wang, S.-H.; Zhang, K.; Wang, Z.-L.; Gao, K.; Wu, Z.; Zhu, P.-P.; Wu, Z.-Y. A User-Friendly Nano-CT Image Alignment and 3D Reconstruction Platform Based on LabVIEW. Chin. Phys. C 2015, 39, 018001. [Google Scholar] [CrossRef]
  51. Pham, M.; Yuan, Y.; Rana, A.; Miao, J.; Osher, S. RESIRE: Real space iterative reconstruction engine for Tomography. arXiv 2020, arXiv:2004.10445. [Google Scholar]
  52. Lyons, C.; Raj, R.G.; Cheney, M. A Compound Gaussian Network for Solving Linear Inverse Problems. arXiv 2023, arXiv:2305.11120. [Google Scholar]
  53. Goharian, G.; Soleimani, M.; Moran, G.R. A trust region subproblem for 3D electrical impedance tomography inverse problem using experimental data. Prog. Electromagn. Res. 2009, 94, 19–32. [Google Scholar] [CrossRef]
  54. Hossain, M.A.; Ambia, A.U.; Aktaruzzaman, M.; Khan, M.A. Implementation of Radon Transformation for Electrical Impedance Tomography (EIT). IJIST 2012, 2, 11–22. [Google Scholar] [CrossRef]
  55. Ihrke, I.; Magnor, M. Image-based tomographic reconstruction of flames. In Proceedings of the Eurographics Symposium on Computer, Grenoble, France, 27–29 August 2004; pp. 365–373. [Google Scholar] [CrossRef]
  56. Arridge, S.R. Optical tomography in medical imaging. Inverse Probl. 1999, 15, R41–R93. [Google Scholar] [CrossRef]
  57. Zhang, T.; Zhang, L.; Chen, Z.; Xing, Y.; Gao, H. Fourier Properties of Symmetric-Geometry Computed Tomography and Its Linogram Reconstruction with Neural Network. IEEE Trans. Med. Imaging 2020, 39, 4445–4457. [Google Scholar] [CrossRef] [PubMed]
  58. Reigber, A.; Moreira, A. First demonstration of airborne SAR tomography using multibaseline L-band data. IEEE Geosci. Remote Sens. 2000, 38, 2142–2152. [Google Scholar] [CrossRef]
  59. Fornaro, G.; Serafino, F. Imaging of Single and Double Scatterers in Urban Areas via SAR Tomography. IEEE Geosci. Remote Sens. 2006, 44, 3497–3505. [Google Scholar] [CrossRef]
  60. Oriot, H.; Cantalloube, H. Circular SAR imagery for urban remote sensing. In Proceedings of the 7th EUSAR, Friedrichshafen, Germany, 2–5 June 2008. [Google Scholar]
  61. Zhu, X.X.; Bamler, R. Very High Resolution Spaceborne SAR Tomography in Urban Environment. IEEE Geosci. Remote Sens. 2010, 48, 4296–4308. [Google Scholar] [CrossRef]
  62. Sportouche, H.; Tupin, F.; Denise, L. Extraction and Three-Dimensional Reconstruction of Isolated Buildings in Urban Scenes From High-Resolution Optical and SAR Spaceborne Images. IEEE Geosci. Remote Sens. 2011, 49, 3932–3946. [Google Scholar] [CrossRef]
  63. Zhu, X.X.; Bamler, R. Demonstration of Super-Resolution for Tomographic SAR Imaging in Urban Environment. IEEE Geosci. Remote Sens. 2012, 50, 3150–3157. [Google Scholar] [CrossRef]
  64. Zhu, X.X.; Ge, N.; Shahzad, M. Joint Sparsity in SAR Tomography for Urban Mapping. IEEE J. Sel. Top. Signal Process. 2015, 9, 1498–1509. [Google Scholar] [CrossRef]
  65. Bagheri, H.; Schmitt, M.; d’Angelo, P.; Zhu, X.X. A framework for SAR-optical stereogrammetry over urban areas. ISPRS J. Photogramm. Remote Sens. 2018, 146, 389–408. [Google Scholar] [CrossRef]
  66. Budillon, A.; Johnsy, A.; Schirinzi, G. Urban Tomographic Imaging Using Polarimetric SAR Data. J. Remote Sens. 2019, 11, 132. [Google Scholar] [CrossRef]
  67. Ren, Y.; Zhang, X.; Hu, Y.; Zhan, X. AETomo-Net: A Novel Deep Learning Network for Tomographic SAR Imaging Based on Multi-dimensional Features. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 1–4. [Google Scholar]
  68. Devaney, A.J. Geophysical Diffraction Tomography. IEEE Geosci. Remote Sens. 1984, GE-22, 3–13. [Google Scholar] [CrossRef]
  69. Trampert, J. Global seismic tomography: The inverse problem and beyond. Inverse Probl. 1998, 14, 371–385. [Google Scholar] [CrossRef]
  70. Rector, J.W.; Washbourne, J.K. Characterization of resolution and uniqueness in crosswell direct-arrival traveltime tomography using the Fourier projection slice theorem. J. Geophys. 1994, 59, 1642–1649. [Google Scholar] [CrossRef]
  71. Akin, S.; Kovscek, A.R. Computed tomography in petroleum engineering research. Geol. Soc. Spec. Publ. 2003, 215, 23–38. [Google Scholar] [CrossRef]
  72. Worthmann, B.M.; Chambers, D.H.; Perlmutter, D.S.; Mast, J.E.; Paglieroni, D.W.; Pechard, C.T.; Bond, S.W. Clutter Distributions for Tomographic Image Standardization in Ground-Penetrating Radar. IEEE Geosci. Remote Sens. 2021, 59, 7957–7967. [Google Scholar] [CrossRef]
  73. Patella, D. Introduction to ground surface self-potential tomography. Geophys. Prospect. 1997, 45, 653–681. [Google Scholar] [CrossRef]
  74. Dai, Q.; Lee, Y.H.; Sun, H.-H.; Ow, G.; Mohd Yusof, M.L.; Yucel, A.C. 3DInvNet: A Deep Learning-Based 3D Ground-Penetrating Radar Data Inversion. arXiv 2023, arXiv:2305.05425. [Google Scholar] [CrossRef]
  75. Goncharsky, A.V.; Romanov, S.Y. Inverse problems of ultrasound tomography in models with attenuation. Phys. Med. Biol. 2014, 59, 1979–2004. [Google Scholar] [CrossRef]
  76. Martiartu, N.K.; Boehm, C.; Fichtner, A. 3D Wave-Equation-Based Finite-Frequency Tomography for Ultrasound Computed Tomography. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2019, 67, 1332–1343. [Google Scholar] [CrossRef]
  77. Hauer, R.; Haberfehlner, G.; Kothleitner, G.; Kociak, M.; Hohenester, U. Tomographic Reconstruction of Quasistatic Surface Polariton Fields. ACS Photonics 2022, 10, 185–196. [Google Scholar] [CrossRef]
  78. Zhou, K.C.; Horstmeyer, R. Diffraction tomography with a deep image prior. Opt. Express 2020, 28, 12872–12896. [Google Scholar] [CrossRef] [PubMed]
  79. Webber, J. X-ray Compton scattering tomography. Inverse Probl. Sci. Eng. 2015, 24, 1323–1346. [Google Scholar] [CrossRef]
  80. Yang, D.-C.; Zhang, S.; Hu, Y.; Hao, Q. Refractive Index Tomography with a Physics Based Optical Neural Network. arXiv 2023, arXiv:2306.06558. [Google Scholar] [CrossRef] [PubMed]
  81. Hounsfield, G.N. Computerized transverse axial scanning (tomography): Part 1. Description of system. BJR 1973, 46, 1016–1022. [Google Scholar] [CrossRef] [PubMed]
  82. Damadian, R.; Goldsmith, M.; Minkoff, L. NMR in cancer: XVI. FONAR image of the live human body. Physiol. Chem. Phys. 1977, 9, 97–100. [Google Scholar] [PubMed]
  83. Wild, J.J.; Reid, J.M. Application of Echo-Ranging Techniques to the Determination of Structure of Biological Tissues. Science 1952, 115, 226. [Google Scholar] [CrossRef]
  84. Chance, B.; Leigh, J.S.; Miyake, H.; Smith, D.S.; Nioka, S.; Greenfeld, R.; Young, M. Comparison of time-resolved and -unresolved measurements of deoxyhemoglobin in brain. Proc. Natl. Acad. Sci. USA 1988, 85, 4971–4975. [Google Scholar] [CrossRef]
  85. Niklason, L.T.; Christian, B.T.; Niklason, L.E.; Kopans, D.B.; Castleberry, D.E.; Opsahl-Ong, B.H.; Landberg, C.E.; Slanetz, P.J.; Giardino, A.A.; Moore, R.H.; et al. Digital tomosynthesis in breast imaging. Radiology 1997, 205, 399–406. [Google Scholar] [CrossRef]
  86. Park, J.M.; Franken, E.A.; Garg, M.; Fajardo, L.L.; Niklason, L.T. Breast tomosynthesis: Present considerations and future applications. Radiographics 2007, 27 (Suppl. S1), 231–240. [Google Scholar] [CrossRef]
  87. Chen, Y.; Lo, J.Y.; Dobbins, J.T. III. Importance of point-by-point back projection (BP) correction for isocentric motion in digital breast tomosynthesis: Relevance to morphology of microcalcifications. Med. Phys. 2007, 34, 3885–3892. [Google Scholar] [CrossRef]
  88. Mertelemeier, T.; Orman, J.; Haerer, W.; Dudam, M.K. Optimizing filtered backprojection reconstruction for a breast tomosynthesis prototype device. Proc. SPIE 2006, 6142, 131–142. [Google Scholar]
  89. Chen, Y.; Lo, J.Y.; Dobbins, J.T., III. Matrix Inversion Tomosynthesis (MITS) of the Breast: Preliminary Results. In Proceedings of the RSNA 90th Scientific Assembly, Chicago, IL, USA, 28 November–3 December 2004. [Google Scholar]
  90. Wu, T.; Stewart, A.; Stanton, M.; McCauley, T.; Philips, W.; Kopans, D.B.; Moore, R.H.; Eberhard, J.W.; Opsahl-Ong, B.; Niklason, L.; et al. Tomographic mammography using a limited number of low-dose cone-beam projection images. Med. Phys. 2003, 30, 365–380. [Google Scholar] [CrossRef] [PubMed]
  91. Zhou, W.; Balla, A.; Chen, Y. Tomosynthesis Reconstruction Using an Accelerated Expectation Maximization Algorithm with Novel Data Structure Based on Sparse Matrix Ray-Tracing Method. Int. J. Funct. Inform. Pers. Med. 2008, 1, 355–365. [Google Scholar]
  92. Andersen, A.H. Algebraic reconstruction in CT from limited views. IEEE Trans. Med. Imaging 1989, 8, 50–55. [Google Scholar] [CrossRef]
  93. Zhang, Y.; Chan, H.; Sahiner, B.; Wei, J.; Goodsitt, M.M.; Hadjiiski, L.M.; Ge, J.; Zhou, C. A comparative study of limited-angle cone-beam reconstruction methods for breast tomosynthesis. Med. Phys. 2006, 33, 3781–3795. [Google Scholar] [CrossRef] [PubMed]
  94. Huang, C.; Ackerman, J.L.; Petibon, Y.; Brady, T.J.; El Fakhri, G.; Ouyang, J. MR-based motion correction for PET imaging using wired active MR microcoils in simultaneous PET-MR: Phantom study. Med. Phys. 2014, 41, 041910. [Google Scholar] [CrossRef]
  95. Mohan, K.A.; Venkatakrishnan, S.V.; Gibbs, J.W.; Gulsoy, E.B.; Xiao, X.; De Graef, M.; Voorhees, P.W.; Bouman, C.A. TIMBIR: A method for time-space reconstruction from interlaced views. IEEE Trans. Comput. Imaging 2015, 1, 96–111. [Google Scholar] [CrossRef]
  96. Balke, T.; Majee, S.; Buzzard, G.T.; Poveromo, S.; Howard, P.; Groeber, M.A.; McClure, J.; Bouman, C.A. Separable models for cone-beam MBIR reconstruction. Electron. Imaging 2018, 15, 181. [Google Scholar] [CrossRef]
  97. Majee, S.; Balke, T.; Kemp, C.A.; Buzzard, G.T.; Bouman, C.A. 4D X-ray CT reconstruction using multi-slice fusion. In Proceedings of the 2019 IEEE International Conference on Computational Photography (ICCP), Tokyo, Japan, 15–17 May 2019; pp. 1–8. [Google Scholar]
  98. Nadir, Z.; Brown, M.S.; Comer, M.L.; Bouman, C.A. A model-based iterative reconstruction approach to tunable diode laser absorption tomography. IEEE Trans. Comput. Imaging 2017, 3, 876–890. [Google Scholar] [CrossRef]
  99. Majee, S.; Ye, D.H.; Buzzard, G.T.; Bouman, C.A. A model-based neuron detection approach using sparse location priors. Electron. Imaging 2017, 17, 10–17. [Google Scholar] [CrossRef]
  100. Ziabari, A.; Ye, D.H.; Sauer, K.D.; Thibault, J.; Bouman, C.A. 2.5D deep learning for CT image reconstruction using a multi-GPU implementation. In Proceedings of the 2018 52nd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 28–31 October 2018; pp. 2044–2049. [Google Scholar]
  101. Gibbs, J.W.; Mohan, K.A.; Gulsoy, E.B.; Shahani, A.J.; Xiao, X.; Bouman, C.A.; De Graef, M.; Voorhees, P.W. The three-dimensional morphology of growing dendrites. Sci. Rep. 2015, 5, 11824. [Google Scholar] [CrossRef] [PubMed]
  102. Zang, G.; Idoughi, R.; Tao, R.; Lubineau, G.; Wonka, P.; Heidrich, W. Space-time tomography for continuously deforming objects. ACM Trans. Graph. 2018, 37, 1–14. [Google Scholar] [CrossRef]
  103. Kisner, S.J.; Haneda, E.; Bouman, C.A.; Skatter, S.; Kourinny, M.; Bedford, S. Model-based CT reconstruction from sparse views. In Proceedings of the Second International Conference on Image Formation in X-ray Computed Tomography, Salt Lake City, UT, USA, 24–27 June 2012; pp. 444–447. [Google Scholar]
  104. Sauer, K.; Bouman, C. A local update strategy for iterative reconstruction from projections. IEEE Trans. Signal Process. 1993, 41, 534–548. [Google Scholar] [CrossRef]
  105. Clark, D.; Badea, C. Convolutional regularization methods for 4D, X-ray CT reconstruction. Phys. Med. Imaging 2019, 10948, 574–585. [Google Scholar]
  106. Sreehari, S.; Venkatakrishnan, S.V.; Wohlberg, B.; Buzzard, G.T.; Drummy, L.F.; Simmons, J.P.; Bouman, C.A. Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2016, 2, 408–423. [Google Scholar] [CrossRef]
  107. Venkatakrishnan, S.V.; Bouman, C.A.; Wohlberg, B. Plug-and-play priors for model-based reconstruction. In Proceedings of the 2013 IEEE Global Conference on Signal and Information Processing, Austin, TX, USA, 3–5 December 2013; pp. 945–948. [Google Scholar]
  108. Sun, Y.; Wohlberg, B.; Kamilov, U.S. An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 2019, 5, 395–408. [Google Scholar] [CrossRef]
  109. Kamilov, U.S.; Mansour, H.; Wohlberg, B. A plug-and-play priors approach for solving nonlinear imaging inverse problems. IEEE Signal Process. Lett. 2017, 24, 1872–1876. [Google Scholar] [CrossRef]
  110. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  111. Maggioni, M.; Boracchi, G.; Foi, A.; Egiazarian, K. Video denoising using separable 4D nonlocal spatiotemporal transforms. In Image Processing: Algorithms and Systems IX; SPIE: Bellingham, WA, USA, 2011; p. 787003. [Google Scholar]
  112. Buzzard, G.T.; Chan, S.H.; Sreehari, S.; Bouman, C.A. Plug-and-play unplugged: Optimization-free reconstruction using consensus equilibrium. SIAM J. Imaging Sci. 2018, 11, 2001–2020. [Google Scholar] [CrossRef]
  113. Sun, Y.; Wohlberg, B.; Kamilov, U.S. Plug-in stochastic gradient method. arXiv 2018, arXiv:1811.03659. [Google Scholar]
  114. Sun, Y.; Xu, S.; Li, Y.; Tian, L.; Wohlberg, B.; Kamilov, U.S. Regularized Fourier ptychography using an online plug-and-play algorithm. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 7665–7669. [Google Scholar]
  115. Bouman, C.A.; Sauer, K. A unified approach to statistical tomography using coordinate descent optimization. IEEE Trans. Image Process. 1996, 5, 480–492. [Google Scholar] [CrossRef]
  116. Butler, S.; Miller, M.I. Maximum a posteriori estimation for SPECT using regularization techniques on massively parallel computers. IEEE Trans. Med. Imaging 1993, 12, 84–89. [Google Scholar] [CrossRef]
  117. Foley, J.; van Dam, A.; Feiner, S.; Hughes, J. Computer Graphics: Principles and Practice; Addison-Wesley: New York, NY, USA, 1990. [Google Scholar]
  118. Lewitt, R.M. Alternatives to voxels for image representation in iterative reconstruction algorithms. Phys. Med. Biol. 1992, 37, 705–715. [Google Scholar] [CrossRef]
  119. Feng, D.; Wang, X.; Zhang, B. Improving reconstruction of tunnel lining defects from ground-penetrating radar profiles by multi-scale inversion and bi-parametric full-waveform inversion. Adv. Eng. Inform. 2019, 41, 100931. [Google Scholar] [CrossRef]
  120. Lavoué, F.; Brossier, R.; Métivier, L.; Garambois, S.; Virieux, J. Two-dimensional permittivity and conductivity imaging by full waveform inversion of multioffset GPR data: A frequency-domain quasi-Newton approach. Geophys. J. Int. 2014, 197, 248–268. [Google Scholar] [CrossRef]
  121. Qin, H.; Xie, X.; Vrugt, J.A.; Zeng, K.; Hong, G. Underground structure defect detection and reconstruction using crosshole GPR and Bayesian waveform inversion. Autom. Constr. 2016, 68, 156–169. [Google Scholar] [CrossRef]
  122. Watson, F. Towards 3D full-wave inversion for GPR. In Proceedings of the 2016 IEEE Radar Conference (RadarConf), Philadelphia, PA, USA, 2 May 2016; pp. 1–6. [Google Scholar]
  123. Wang, X.; Feng, D. Multiparameter full-waveform inversion of 3-D on-ground GPR with a modified total variation regularization scheme. IEEE Geosci. Remote Sens. Lett. 2021, 18, 466–470. [Google Scholar] [CrossRef]
  124. Salucci, M.; Arrebola, M.; Shan, T.; Li, M. Artificial intelligence: New frontiers in real-time inverse scattering and electromagnetic imaging. IEEE Trans. Antennas Propag. 2022, 70, 6349–6364. [Google Scholar] [CrossRef]
  125. Chen, X.; Wei, Z.; Li, M.; Rocca, P. A review of deep learning approaches for inverse scattering problems (invited review). Prog. Electromagn. Res. 2020, 167, 67–81. [Google Scholar] [CrossRef]
  126. Massa, A.; Marcantonio, D.; Chen, X.; Li, M.; Salucci, M. DNNs as applied to electromagnetics, antennas, and propagation—A review. IEEE Antennas Wirel. Propag. Lett. 2019, 18, 2225–2229. [Google Scholar] [CrossRef]
  127. Tong, Z.; Gao, J.; Yuan, D. Advances of deep learning applications in ground-penetrating radar: A survey. Constr. Build Mater. 2020, 258, 120371. [Google Scholar] [CrossRef]
  128. Travassos, X.L.; Avila, S.L.; Ida, N. Artificial neural networks and machine learning techniques applied to ground penetrating radar: A review. Appl. Comput. Inform. 2021, 17, 296–308. [Google Scholar] [CrossRef]
  129. Besaw, L.E.; Stimac, P.J. Deep convolutional neural networks for classifying GPR B-scans. Proc. SPIE 2015, 9454, 385–394. [Google Scholar] [CrossRef]
  130. Lei, W.; Zhang, J.; Yang, X.; Li, W.; Zhang, S.; Jia, Y. Automatic hyperbola detection and fitting in GPR B-scan image. Autom. Constr. 2019, 106, 102839. [Google Scholar] [CrossRef]
  131. Bestagini, P.; Lombardi, F.; Lualdi, M.; Picetti, F.; Tubaro, S. Landmine detection using autoencoders on multipolarization GPR volumetric data. IEEE Trans. Geosci. Remote Sens. 2021, 59, 182–195. [Google Scholar] [CrossRef]
  132. Sun, H.-H.; Lee, Y.H.; Li, C.; Ow, G.; Yusof, M.L.M.; Yucel, A.C. The orientation estimation of elongated underground objects via multipolarization aggregation and selection neural network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  133. Sun, H.H.; Lee, Y.H.; Dai, Q.; Li, C.; Ow, G.; Yusof, M.L.M.; Yucel, A.C. Estimating parameters of the tree root in heterogeneous soil environments via mask-guided multi-polarimetric integration neural network. IEEE Trans. Geosci. Remote Sens. 2022, 20, 5108716. [Google Scholar] [CrossRef]
  134. Alvarez, J.K.; Kodagoda, S. Application of deep learning image-to-image transformation networks to GPR radar-grams for sub-surface imaging in infrastructure monitoring. In Proceedings of the 2018 13th IEEE Conference on Industrial Electronics and Applications (ICIEA), Wuhan, China, 31 May–2 June 2018; pp. 611–616. [Google Scholar]
  135. Xie, L.; Zhao, Q.; Ma, C.; Liao, B.; Huo, J. Ü-Net: Deep-learning schemes for ground penetrating radar data inversion. J. Environ. Eng. Geophys. 2021, 25, 287–292. [Google Scholar] [CrossRef]
  136. Liu, B.; Ren, Y.; Liu, H.; Xu, H.; Wang, Z.; Cohn, A.G.; Jiang, P. GPRInvNet: Deep learning-based ground-penetrating radar data inversion for tunnel linings. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8305–8325. [Google Scholar] [CrossRef]
  137. Ji, Y.; Zhang, F.; Wang, J.; Wang, Z.; Jiang, P.; Liu, H.; Sui, Q. Deep neural network-based permittivity inversions for ground penetrating radar data. IEEE Sens. J. 2021, 21, 817. [Google Scholar] [CrossRef]
  138. Hager, B.H.; Clayton, R.W.; Richards, M.A.; Comer, R.P.; Dziewonski, A.M. Lower mantle heterogeneity, dynamic topography, and the geoid. Nature 1985, 313, 541–545. [Google Scholar] [CrossRef]
  139. Olson, P.; Glatzmaier, G.A. Magnetoconvection and thermal coupling of the Earth’s core and mantle. Phil. Trans. R. Soc. 1996, 354, 1413–1424. [Google Scholar]
  140. Ritzwoller, M.H.; Lavely, E.M. Three-dimensional seismic models of the Earth’s mantle. Rev. Geophys. 1995, 33, 1–66. [Google Scholar] [CrossRef]
  141. Robertson, G.S.; Woodhouse, J.H. Constraints on the physical properties of the mantle from seismology and mineral physics. Earth Planet. Sci. Lett. 1996, 143, 197–205. [Google Scholar] [CrossRef]
  142. Su, W.J.; Woodward, R.L.; Dziewonski, A.M. Deep origin of mid-oceanic ridge seismic velocity anomalies. Nature 1992, 360, 149–152. [Google Scholar] [CrossRef]
  143. Tackley, P.J.; Stevenson, D.J.; Glatzmaier, G.A.; Schubert, G. Effects of multiple phase transitions in a 3-D spherical model of convection in the Earth’s mantle. J. Geophys. Res. 1994, 99, 15877–15902. [Google Scholar] [CrossRef]
  144. Woodhouse, J.H.; Trampert, J. New geodynamical constraints from seismic tomography. Earth Planet. Sci. Lett. 1996, 143, 1–15. [Google Scholar]
  145. Tromp, J. Seismic wavefield imaging of Earth’s interior across scales. Nat. Rev. Earth Environ. 2020, 1, 40–53. [Google Scholar] [CrossRef]
  146. Nocedal, J.; Wright, S. Numerical Optimization, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
  147. Biegler, L.; Ghattas, O.; Heinkenschloss, M.; Van Bloemen Waanders, B. (Eds.) Large-Scale PDE Constrained Optimization; Springer: Berlin/Heidelberg, Germany, 2003; Volume 30, pp. 3–13. [Google Scholar]
  148. Igel, H. Chapter 5: The Pseudospectral Method. In Computational Seismology; Oxford University Press: Oxford, UK, 2016; pp. 116–152. [Google Scholar]
  149. Lions, J.L.; Magenes, E. Non-Homogeneous Boundary Value Problems and Applications; Springer: Berlin/Heidelberg, Germany, 1972; Volume 1. [Google Scholar] [CrossRef]
  150. Chavent, G. Identification of Parameter Distributed Systems; Goodson, R.E., Polis, M.P., Eds.; American Society of Mechanical Engineers: New York, NY, USA, 1974; pp. 65–74. [Google Scholar]
  151. Plessix, R.-E. A review of the adjoint-state method for computing the gradient of a functional with geophysical applications. Geophys. J. Int. 2006, 167, 495–503. [Google Scholar] [CrossRef]
  152. Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J. Measuring the misfit between seismograms using an optimal transport distance: Application to full waveform inversion. Geophys. J. Int. 2016, 205, 345–377. [Google Scholar] [CrossRef]
  153. Warner, M.; Guasch, L. Adaptive waveform inversion: Theory. Geophysics 2016, 81, R429–R445. [Google Scholar] [CrossRef]
  154. Ramos-Martínez, J.; Qiu, L.; Valenciano, A.A.; Jiang, X.; Chemingui, N. Long-wavelength FWI updates in the presence of cycle skipping. Lead. Edge 2019, 38, 193–196. [Google Scholar] [CrossRef]
  155. Liu, D.; Nocedal, J. On the limited memory BFGS method for large scale optimization. Math. Program. 1989, 45, 503–528. [Google Scholar] [CrossRef]
  156. Nash, S.; Nocedal, J. A numerical study of the limited memory BFGS method and the truncated Newton method for large scale optimization. SIAM J. Optim. 1991, 1, 358–372. [Google Scholar] [CrossRef]
  157. Zou, X.; Navon, I.M.; Berger, M.; Phua, K.H.; Schlick, T.; Le Dimet, F.X. Numerical experience with limited-memory quasi-Newton and truncated Newton methods. SIAM J. Optim. 1993, 3, 582–608. [Google Scholar] [CrossRef]
  158. Spaide, R.F.; Fujimoto, J.G.; Waheed, N.K.; Sadda, S.R.; Staurenghi, G. Optical coherence tomography angiography. Prog. Retin. Eye Res. 2018, 64, 1–55. [Google Scholar] [CrossRef] [PubMed]
  159. Huang, D.; Swanson, E.A.; Lin, C.P.; Schuman, J.S.; Stinson, W.G.; Chang, W.; Hee, M.R.; Flotte, T.; Gregory, K.; Puliafito, C.A.; et al. Optical coherence tomography. Science 1991, 254, 1178–1181. [Google Scholar] [CrossRef]
  160. Choma, M.A.; Sarunic, M.V.; Yang, C.; Izatt, J.A. Sensitivity advantage of swept-source and Fourier-domain optical coherence tomography. Opt. Express 2003, 11, 2183–2189. [Google Scholar] [CrossRef]
  161. Fercher, A.F.; Hitzenberger, C.K.; Kamp, G.; Elzaiat, S.Y. Measurement of Intraocular Distances by Backscattering Spectral Interferometry. Opt. Commun. 1995, 117, 43–48. [Google Scholar] [CrossRef]
  162. Häusler, G.; Lindner, M.W. “Coherence Radar” and “Spectral Radar”—New Tools for Dermatological Diagnosis. J. Biomed. Opt. 1998, 3, 21–31. [Google Scholar] [CrossRef] [PubMed]
  163. Wojtkowski, M.; Leitgeb, R.; Kowalczyk, A.; Bajraszewski, T.; Fercher, A.F. In vivo human retinal imaging by Fourier domain optical coherence tomography. J. Biomed. Opt. 2002, 7, 457–463. [Google Scholar] [CrossRef] [PubMed]
  164. Wojtkowski, M.; Kowalczyk, A.; Leitgeb, R.; Fercher, A.F. Full range complex spectral optical coherence tomography technique in eye imaging. Opt. Lett. 2002, 27, 1415–1417. [Google Scholar] [CrossRef] [PubMed]
  165. Chinn, S.R.; Swanson, E.A.; Fujimoto, J.G. Optical coherence tomography using a frequency-tunable optical source. Opt. Lett. 1997, 22, 340–342. [Google Scholar] [CrossRef] [PubMed]
  166. Golubovic, B.; Bouma, B.E.; Tearney, G.J.; Fujimoto, J.G. Optical frequency-domain reflectometry using rapid wavelength tuning of a Cr4+:forsterite laser. Opt. Lett. 1997, 22, 1704–1706. [Google Scholar] [CrossRef]
  167. Lexer, F.; Hitzenberger, C.K.; Fercher, A.F.; Kulhavy, M. Wavelength-tuning interferometry of intraocular distances. Appl. Opt. 1997, 36, 6548–6553. [Google Scholar] [CrossRef]
  168. Haberland, U.H.P.; Blazek, V.; Schmitt, H.J. Chirp Optical Coherence Tomography of Layered Scattering Media. J. Biomed. Opt. 1998, 3, 259–266. [Google Scholar] [CrossRef]
  169. Hsieh, Y.-S.; Ho, Y.-C.; Lee, S.-Y.; Chuang, C.-C.; Tsai, J.; Lin, K.-F.; Sun, C.-W. Dental Optical Coherence Tomography. Sensors 2013, 13, 8928–8949. [Google Scholar] [CrossRef]
  170. Feldchtein, F.; Gelikonov, V.; Iksanov, R.; Gelikonov, G.; Kuranov, R.; Sergeev, A.; Gladkova, N.; Ourutina, M.; Reitze, D.; Warren, J. In vivo OCT imaging of hard and soft tissue of the oral cavity. Opt. Express 1998, 3, 239–250. [Google Scholar] [CrossRef]
  171. Wang, X.J.; Milner, T.E.; de Boer, J.F.; Zhang, Y.; Pashley, D.H.; Nelson, J.S. Characterization of dentin and enamel by use of optical coherence tomography. Appl. Opt. 1999, 38, 2092–2096. [Google Scholar] [CrossRef] [PubMed]
  172. Otis, L.L.; Matthew, J.E.; Ujwal, S.S.; Colson, B.W., Jr. Optical coherence tomography: A new imaging technology for dentistry. J. Am. Dent. Assoc. 2000, 131, 511–514. [Google Scholar] [CrossRef] [PubMed]
  173. Cogliati, A.; Canavesi, C.; Hayes, A.; Tankam, P.; Duma, V.-F.; Santhanam, A.; Thompson, K.P.; Rolland, J.P. MEMS-based handheld scanning probe with pre-shaped input signals for distortion-free images in Gabor-Domain Optical Coherence Microscopy. Opt. Express 2016, 24, 13365–13374. [Google Scholar] [CrossRef] [PubMed]
  174. Hong, Y.; Zhang, K.; Gu, J.; Bi, S.; Zhou, Y.; Liu, D.; Liu, F.; Sunkavalli, K.; Bui, T.; Tan, H. LRM: Large Reconstruction Model for Single Image to 3D. arXiv 2023, arXiv:2311.04400. [Google Scholar]
  175. Strecha, C.; Pylvänäinen, T.; Fua, P. Dynamic and scalable large scale image reconstruction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010. [Google Scholar] [CrossRef]
  176. Tian, Z.; Jia, X.; Yuan, K.; Pan, T.; Jiang, S.B. Low-dose CT reconstruction via edge-preserving total variation regularization. Phys. Med. Biol. 2011, 56, 5949–5967. [Google Scholar] [CrossRef] [PubMed]
  177. Wu, D.; Kim, K.; Li, Q. Low-dose CT reconstruction with Noise2Noise network and testing-time fine-tuning. Med. Phys. 2021, 48, 7657–7672. [Google Scholar] [CrossRef]
Figure 1. The organization of the present review paper in sections.
Figure 2. Parallel X-ray breast tomosynthesis imaging geometry.
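To make the geometry of Figure 2 concrete, the following minimal Python sketch implements a basic shift-and-add tomosynthesis reconstruction. It is an illustration only and is not drawn from the cited works; the parallel-beam assumption, the function name shift_and_add, and the convention that structures at height z register after an in-plane shift of z·tan(θ) are simplifications adopted here.

import numpy as np
from scipy.ndimage import shift as nd_shift

def shift_and_add(projections, angles_rad, z_planes, pixel_pitch=1.0):
    # projections : array (n_views, ny, nx) of acquired projection images
    # angles_rad  : tube angle of each view (radians), 0 = vertical incidence
    # z_planes    : heights (same length unit as pixel_pitch) of the slices to reconstruct
    n_views = projections.shape[0]
    slices = np.zeros((len(z_planes),) + projections.shape[1:])
    for k, z in enumerate(z_planes):
        for view, theta in zip(projections, angles_rad):
            # In-plane shift (in pixels) that brings features at height z into register.
            dx = z * np.tan(theta) / pixel_pitch
            slices[k] += nd_shift(view, shift=(0.0, dx), order=1, mode="nearest")
        slices[k] /= n_views  # average the registered views
    return slices

# Example (synthetic data): nine views of a 64 x 64 detector, slices at three heights.
# proj = np.random.rand(9, 64, 64)
# angles = np.deg2rad(np.linspace(-20, 20, 9))
# recon = shift_and_add(proj, angles, z_planes=[5.0, 10.0, 15.0])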
Figure 3. Projection geometry. The X-ray source projects point A onto point B on the detector plane.
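The mapping in Figure 3 is a simple perspective projection and can be written in a few lines. The sketch below assumes a point source above a detector plane at z = 0; the coordinate convention and the function name project_point are choices made for this illustration, not part of any cited implementation.

import numpy as np

def project_point(source, point_a):
    # source  : (sx, sy, sz) position of the X-ray point source, with sz > 0
    # point_a : (ax, ay, az) position of the object point A, with 0 <= az < sz
    # Returns the detector-plane coordinates (bx, by) of the projected point B.
    s = np.asarray(source, dtype=float)
    a = np.asarray(point_a, dtype=float)
    # Parametrize the ray as s + t * (a - s) and intersect it with the plane z = 0.
    t = s[2] / (s[2] - a[2])
    b = s + t * (a - s)
    return b[0], b[1]

# Example: a source 600 units above the detector projects the point (10, 0, 50)
# onto the detector at roughly (10.9, 0).
# print(project_point((0.0, 0.0, 600.0), (10.0, 0.0, 50.0)))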
Figure 4. Overview of the multi-slice fusion approach. Each CNN denoiser operates along the temporal dimension and two spatial dimensions; the denoisers are combined with the measurement model to produce an inherently regularized 4D reconstruction.
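To make the fusion of several CNN denoisers with a measurement model more tangible, the sketch below outlines a consensus-equilibrium (MACE-style) iteration in the spirit of the plug-and-play literature cited above [106,107,112]. It is schematic: forward_prox, denoisers, and rho are hypothetical names, the data-fidelity proximal operator and the trained denoisers are assumed to be supplied by the caller, and each denoiser is assumed to regularize the temporal dimension together with two spatial dimensions of the 4D volume.

import numpy as np

def multi_slice_fusion(x0, forward_prox, denoisers, n_iters=50, rho=0.5):
    # x0           : initial 4D reconstruction (numpy array)
    # forward_prox : data-fidelity proximal operator enforcing consistency with the measurements
    # denoisers    : list of denoising operators, each acting on time plus two spatial dimensions
    agents = [forward_prox] + list(denoisers)
    n = len(agents)
    w = [x0.copy() for _ in range(n)]  # one state vector per agent
    for _ in range(n_iters):
        v = [agents[i](w[i]) for i in range(n)]        # apply each agent: F(w)
        z = [2.0 * v[i] - w[i] for i in range(n)]      # reflected outputs: (2F - I)w
        z_bar = sum(z) / n                             # consensus average
        # Mann update: w <- (1 - rho) w + rho (2G - I)(2F - I) w
        w = [(1.0 - rho) * w[i] + rho * (2.0 * z_bar - z[i]) for i in range(n)]
    # At equilibrium all agents agree; return the averaged agent output as the reconstruction.
    return sum(agents[i](w[i]) for i in range(n)) / n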
Figure 5. Pipeline for rendering graphics using hardware acceleration.
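The key idea behind the hardware-accelerated pipeline of Figure 5 is that backprojection reduces to repeated texture look-ups with additive blending. The CPU-side Python sketch below emulates this data flow for a 2D parallel-beam geometry so that the per-pixel texture-coordinate computation is visible; the centering, linear interpolation, and π/N normalization are conventions assumed for the illustration rather than details of the reviewed GPU implementations.

import numpy as np

def texture_style_backprojection(sinogram, angles_rad, n_pixels):
    # sinogram   : array (n_views, n_det) of parallel-beam projections
    # angles_rad : projection angle of each view
    # n_pixels   : side length of the square reconstruction grid
    n_views, n_det = sinogram.shape
    # Pixel-centred grid coordinates; detector centred at (n_det - 1) / 2.
    xs = np.arange(n_pixels) - (n_pixels - 1) / 2.0
    X, Y = np.meshgrid(xs, xs)
    det_axis = np.arange(n_det)
    image = np.zeros((n_pixels, n_pixels))
    for view, theta in zip(sinogram, angles_rad):
        # "Texture coordinate": detector position seen by every pixel at this angle.
        t = X * np.cos(theta) + Y * np.sin(theta) + (n_det - 1) / 2.0
        # Linear interpolation stands in for the hardware texture fetch;
        # the accumulation corresponds to additive blending of textured slices.
        image += np.interp(t.ravel(), det_axis, view, left=0.0, right=0.0).reshape(t.shape)
    return image * np.pi / n_views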
Figure 6. FWI computes highly detailed, data-driven models of subsurface velocity, absorption (Q), and reflectivity by minimizing the difference between observed and modeled seismic waveforms; the resulting models are used in seismic imaging and interpretation.
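For completeness, the least-squares formulation behind Figure 6 can be stated compactly. The expressions below follow the standard adjoint-state presentation (cf. [151,155–157]); the acoustic velocity parametrization and the exact scaling of the gradient reflect one common convention rather than a unique choice.

J(\mathbf{m}) = \frac{1}{2} \sum_{s,r} \int_0^T \left| u_s(\mathbf{x}_r, t; \mathbf{m}) - d_{s,r}(t) \right|^2 \mathrm{d}t,
\qquad
\frac{\partial J}{\partial c(\mathbf{x})} = \frac{2}{c^3(\mathbf{x})} \sum_s \int_0^T \lambda_s(\mathbf{x}, t)\, \partial_t^2 u_s(\mathbf{x}, t)\, \mathrm{d}t,
\qquad
\mathbf{m}_{k+1} = \mathbf{m}_k - \alpha_k \mathbf{H}_k^{-1} \nabla J(\mathbf{m}_k),

where u_s is the modeled wavefield for source s, d_{s,r} the observed seismogram at receiver r, λ_s the adjoint wavefield excited by the time-reversed data residuals, α_k a line-search step length, and H_k^{-1} a quasi-Newton (e.g., L-BFGS) approximation of the inverse Hessian.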
Figure 7. Simplified OCT block diagram. The output of the superluminescent diode is coupled into a single-mode fiber and split at a 50/50 coupler. The resulting optical signals are directed into the sample and reference arms. The back-reflected light is recombined at the same coupler and detected by a photodiode. The detector output is demodulated to obtain the envelope of the interferometric signal, which is then digitized and stored on a computer. A series of longitudinal (axial) scans is acquired, with the lateral beam position translated after each scan.
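As an illustration of the demodulation step in the Figure 7 caption, the minimal Python sketch below recovers the envelope of a time-domain OCT fringe signal with a Hilbert transform and stacks successive A-scans into a B-scan image. The function names and the assumption that each A-scan arrives as a 1-D array are ours; practical systems additionally perform calibration, dispersion compensation, and logarithmic compression.

import numpy as np
from scipy.signal import hilbert

def demodulate_a_scan(fringe_signal):
    # fringe_signal : 1-D detector output recorded during one longitudinal (A-) scan
    # Returns the demodulated envelope, i.e. the depth reflectivity profile
    # blurred by the system's axial point-spread function.
    ac = fringe_signal - np.mean(fringe_signal)  # remove the DC background
    return np.abs(hilbert(ac))                   # analytic-signal magnitude = envelope

def build_b_scan(a_scans):
    # Stack envelopes of A-scans taken at successive lateral beam positions
    # into a 2-D B-scan image (depth along rows, lateral position along columns).
    return np.stack([demodulate_a_scan(a) for a in a_scans], axis=1)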