Review

Advanced Image Post-Processing Methods for Photoacoustic Tomography: A Review

1 School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
2 Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
3 Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
* Author to whom correspondence should be addressed.
Photonics 2023, 10(7), 707; https://doi.org/10.3390/photonics10070707
Submission received: 12 May 2023 / Revised: 9 June 2023 / Accepted: 16 June 2023 / Published: 21 June 2023
(This article belongs to the Special Issue Advances of Photoacoustic Tomography)

Abstract:

Photoacoustic tomography (PAT) is a promising imaging technique that utilizes the detection of light-induced acoustic waves for both morphological and functional biomedical imaging. However, producing high-quality images using PAT is still challenging and requires further research. Besides improving image reconstruction, which turns the raw photoacoustic signal into a PAT image, an alternative way to address this issue is image post-processing, which enhances and optimizes the reconstructed PAT image. Image post-processing methods have rapidly emerged in PAT and have proven essential for improving image quality in recent research. In this review, we examine the need for image post-processing in PAT imaging and conduct a thorough literature review of the latest PAT image post-processing articles, covering both general and PAT-specific post-processing techniques. In contrast to previous reviews, our analysis focuses specifically on advanced image post-processing rather than image reconstruction methods. By highlighting their potential applications, we hope to encourage further research and development in PAT image post-processing technology.

1. Introduction

Photoacoustic tomography (PAT) is a non-invasive, non-ionizing biomedical imaging technique that enables the reconstruction of the spatial distribution of photoacoustic pressure in the body. The photoacoustic effect [1], which underlies photoacoustic imaging, was first reported by Alexander Graham Bell in 1880. This effect occurs when an energy pulse, such as an optical or radio-frequency pulse, is absorbed by tissue. PAT offers high ultrasonic resolution and strong optical contrast in a single imaging modality, and is capable of providing high-resolution structural and molecular imaging in vivo in optically scattering biological media. By utilizing the photoacoustic effect, PAT overcomes the strong scattering of light within biological tissues. As shown in Figure 1a, short laser pulses are directed at the tissue, generating thermal and acoustic responses. The absorbed light is converted into heat, causing a pressure rise due to the thermoelastic expansion of the irradiated tissues. This pressure rise depends on the locally deposited optical energy, along with other thermal and mechanical characteristics of the tissues, and gives rise to the photoacoustic signal. The signal is recorded by an ultrasonic transducer facing the tissue, then amplified and digitized. A computational algorithm is used to process the PA signal, thus forming a PAT image.
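For reference, the initial pressure rise described above is commonly modeled as p0(r) = Γ(r) μa(r) Φ(r), where Γ is the Grüneisen parameter, μa is the optical absorption coefficient, and Φ is the local light fluence; several of the post-processing problems discussed below, notably light fluence correction (Section 5.1), follow directly from this standard relation.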
The quality of PAT images is limited by the imaging hardware, the signal acquisition and processing operations, and the image reconstruction algorithm. The size and bandwidth of transducer elements are typically limited, and the geometry of the constituent arrays has a significant impact on the imaging results. The inverse model in the reconstruction algorithm is often an approximate and simplified description of the transducer array and medium properties. Imperfect illumination conditions and discrete data acquisition also affect the quality of the collected PA signal, ultimately impacting the fidelity of the resulting images. As a result of these factors, PAT images can be noisy, blurred, or even distorted. Numerous methods have been proposed to deal with these challenges, including improved image reconstruction [2,3,4,5] and image post-processing methods. Specifically, as shown in Figure 1b, image reconstruction turns the raw signal into the desired image, while image post-processing enhances and optimizes the reconstructed images. While the majority of previous research in PAT imaging was dedicated to image reconstruction [6], there has been a recent surge in studies focused on PAT image post-processing methods. As researchers delve deeper into the potential of image post-processing techniques, it is becoming increasingly evident that they play a crucial role in enhancing the accuracy and reliability of PAT.
In this review, we aim to provide a comprehensive understanding of the usability and effectiveness of image post-processing methods in PAT. To achieve this goal, we first identified relevant articles through database screening and manual classification, and then analyzed the sources of demand for post-processing methods to determine future research directions. As shown in Figure 1c, we then classified the methods into two categories: general image processing techniques and PAT-specific processing techniques. General image processing techniques are common image-enhancement methods used to improve the quality of an existing image, such as denoising, deconvolution, and segmentation. In contrast, PAT-specific processing techniques are post-processing methods that are unique to PAT imaging, e.g., light fluence correction and speed-of-sound correction. We showcase the effectiveness of these methods by analyzing their theoretical advancement and experimental results. Finally, we discuss the potential of image post-processing methods in the field of PAT and suggest some future research directions. To the best of our knowledge, this study is the first article to discuss image post-processing as a specific topic for PAT imaging. Overall, this review aims to contribute to the development of image post-processing research on PAT imaging and foster new insights and ideas in this exciting and rapidly evolving field.

2. Materials

We conducted a search in four databases—PubMed, Web of Science, IEEE Xplore, and Google Scholar—for articles published from 1 January 2010 to 1 March 2023. Three sections of keywords were used: (photoacoustic* OR photo-acoustic OR optoacoustic OR photo-acoustics OR opto-acoustics) AND (tomography* OR imaging) AND (post-processing OR image domain OR image processing OR image analysis) NOT (image reconstruction) NOT (endoscopy) NOT (microscopy). A document satisfied the search criteria only if keywords from all three sections appeared at least once in its controlled indexing terms, publication title, or abstract. Articles matching the topic were initially filtered using the advanced search function.
To ensure that we focused on relevant studies, we used specific inclusion criteria to narrow down the list of records. Firstly, we limited our search to articles that discussed photoacoustic tomography as an imaging method, rather than topics related to PAT instrumentation, diagnostic procedures, contrast agents, or other peripheral areas. Secondly, we only included studies where the primary focus was on image post-processing, rather than signal processing or image reconstruction. Using these criteria, we aimed to identify studies that were most relevant to our review topic and provided meaningful insights into the post-processing of photoacoustic tomography images.
To identify relevant literature on image processing techniques for PAT, we first eliminated overlapping articles across the four literature databases. Next, the abstracts of the remaining articles were screened to further refine the list of relevant literature. Upon analyzing the articles, we identified two main categories of PAT post-processing methods: (1) general image post-processing and (2) PAT-specific image post-processing. By organizing the methods in this manner, we could better trace their progress and compare them with similar methods.

3. The Importance of PAT Image Post-Processing

In an ideal PAT imaging scenario, a stationary object is illuminated by a high-power light source in an acoustically homogeneous medium. A transducer array with a wide bandwidth and detection angle is used to detect the PA signal, and the image reconstruction algorithm is robust, accounting for tissue properties while operating in real time. However, real-world conditions do not always satisfy these criteria. The cost and technical limitations of the imaging hardware reduce the accuracy of the acquisition process. In addition, the simplified imaging model used by the reconstruction algorithm only crudely approximates the biological properties of the tissue. These issues lead to a deterioration in the quality of the reconstructed images. Therefore, to compensate for these problems, image enhancement using post-processing methods is required.
In this section, we discuss the various factors that can impact the quality of PAT images, including hardware limitations, tissue heterogeneity, and image reconstruction algorithms. We highlight the specific ways in which each of these factors can degrade image quality, as shown in Table 1, underscoring the importance of image post-processing techniques for improving the final result.

3.1. Hardware

We provide an overview of the hardware limitations in PAT systems, which generally include three essential components: illumination source, ultrasound transducer, and signal acquisition unit.
In PAT imaging, the intensity of illumination is restricted by safety guidelines, necessitating that the imaged tissue be irradiated within a standard range to meet the non-invasive imaging requirements. Excessive optical irradiation can cause tissue damage due to the absorption of excessive heat, whereas inadequate illumination can result in weak signals being obscured by electronic or thermal noise [7].
Although PAT imaging allows the illumination of tissues at a certain depth, the illumination reaching absorbing molecules at depth differs from that at the skin surface. As a result of fluence attenuation, tissues at depth receive less light, generating weaker pressure waves.
The photoacoustic waves produced by excited tissue are broadband, while the bandwidths of the actual transducer probes used for detection are limited. Signals emitted from large-volume targets tend to be at lower frequencies, while those from small-volume targets tend to be at higher frequencies. Given that the area of interest in PAT imaging often contains targets of varying sizes, the limited receiving bandwidth of the transducers is clearly inadequate to capture the entire photoacoustic signal [8]. This results in missing information in the image [9], which may appear as the hollowing out of the central parts of large targets or the loss of finer structures at image boundaries (Figure 2A).
The shape of the transducer elements and the geometry of the array can also significantly affect the received signal. The surface of a transducer element, together with its geometry, determines its angular sensitivity, which limits its ability to respond to signals from all directions [11]. Even when elements are arranged in a circular array to achieve a complete imaging field of view, there may still be blind spots near the edges of the field. Furthermore, a densely spaced array can be costly, which is another important limitation to consider.
The accuracy of raw data in PAT imaging is also significantly affected by the acquisition process. Due to hardware cost constraints, the arrangement of transducer elements is usually sparse, causing insufficient spatial sampling [12]. This can lead to missing information and affect the overall accuracy of the final image [13]. Furthermore, unexpected movements during acquisition, such as breathing or cardiac pulsation, can cause relative displacements, resulting in misalignment and blurred image features (Figure 2B) [10]. This problem arises because the resolution of the hardware is insufficient to track continuous small movements. It is therefore difficult to align or compare images at adjacent locations during sequential acquisition, which may negatively impact the accuracy of the final image.

3.2. Tissue Heterogeneity

Beyond the limitations of the imaging hardware, the imaged object itself also affects the quality of the PAT image. One of the key challenges in PAT imaging is imaging depth, which is primarily limited by light attenuation [14] in tissue caused by absorption and multiple scattering. The laser illuminating the object surface must penetrate biological tissue to reach the desired imaging depth. In this process, however, the light energy is gradually and non-uniformly absorbed by the tissue, resulting in a non-uniform and unknown distribution of light fluence [15].
In addition, the resolution of PAT imaging is affected by the return of the acoustic signal after optical–acoustic conversion. The acoustic signal is subject to attenuation and scattering during propagation, lowering the amplitude and widening the waveform of its high-frequency components. Consequently, the imaging resolution is reduced (Figure 3A) [16], as the high-frequency information of fine structures is lost and cannot be displayed in the image. Moreover, biological tissues are acoustically heterogeneous due to variations in the types and concentrations of their constituent molecules, leading to differences in the speed of sound (SOS) across the medium. At boundaries with high acoustic contrast, incident sound waves are reflected. These reflections overlap with the original image features to form acoustic reflection artifacts that can be mistaken for real structures (Figure 3B) [17], which can greatly impact the accuracy of the reconstructed image.
In addition, when molecules with strong optical absorption lie outside the imaging plane of interest, the pressure waves they generate cannot be ignored. Because the transducer accepts these out-of-plane signals alongside in-plane ones, they form spatial blending or clutter artifacts and can even obscure weak signals from deep tissue [18].

3.3. Image Reconstruction Algorithms

The recorded PA signal needs to undergo a mathematical transformation to form the final PAT image, a process referred to as image reconstruction. Ideally, the reconstruction algorithm should account for the physics of PA wave generation and propagation. However, due to the uncertainty and complexity of this process, the reconstruction algorithm is often simplified and approximated, leading to reconstruction errors and low-quality reconstructed images.
Currently, common PAT image reconstruction algorithms include delay and sum (DAS) [6,19], back projection (BP) [20], and model-based (MB) [21,22] methods. Under ideal conditions, no additional processing steps are required after image reconstruction. However, in real-world PAT imaging scenarios, the algorithms possess distinct characteristics and may produce images with different levels of degradation [9,23,24], as shown in Figure 4. Direct reconstruction algorithms, such as BP and DAS, can suffer from artifacts [13] and noise, resulting in reconstructed images with low contrast and resolution. Model-based algorithms can address some of these limitations by incorporating prior knowledge of the tissue and improving the physical model of acoustic wave propagation; however, they are sensitive to errors in the model matrix and incur a higher computational cost.
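To illustrate the delay-and-sum principle concretely, the following minimal Python sketch (our illustration, not taken from the cited works) forms each pixel by summing the element signals at their geometric one-way delays; it assumes a linear array at z = 0, a uniform SOS, and a hypothetical `signals` array of recorded channel data.
```python
import numpy as np

def das_reconstruct(signals, elem_x, fs, sos, grid_x, grid_z):
    """Naive delay-and-sum PAT reconstruction (illustrative sketch).

    signals : (n_elements, n_samples) recorded PA channel data
    elem_x  : (n_elements,) lateral element positions [m] at z = 0
    fs      : sampling rate [Hz]; sos : assumed speed of sound [m/s]
    grid_x, grid_z : 1-D pixel coordinates [m]
    """
    n_elem, n_samp = signals.shape
    image = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # One-way time of flight from this pixel to each element
            dist = np.sqrt((elem_x - x) ** 2 + z ** 2)
            idx = np.round(dist / sos * fs).astype(int)
            valid = idx < n_samp
            # Sum each element's sample at its own delay
            image[iz, ix] = signals[valid, idx[valid]].sum()
    return image
```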

4. General Image Post-Processing Methods for PAT

Image post-processing can effectively improve the usability of photoacoustic images and is a necessary step in many PAT applications. In this section, we explore common image post-processing methods that can enhance image quality and expand the range of applications of PAT imaging. These methods are summarized in Table 2.

4.1. Artifacts Suppression

PAT imaging artifacts [13,61,62] include negativity artifacts, streak artifacts, splitting artifacts, reflection artifacts, etc. They not only obscure important information but also hinder quantitative image analysis. Image post-processing methods have been proposed to suppress these artifacts.
Regarding artifacts from strong optical absorbers, Nguyen et al. [17] proposed a method to detect and eliminate in-plane reflection artifacts in multi-wavelength PAT imaging, as shown in Figure 5A. The approach involves segmenting features in the clearest image and estimating their spectral responses using the remaining images, identifying grouped features as the real absorbers. Fainter features appearing at greater depths are considered reflection artifacts and are removed by setting the corresponding pixel values to zero. Jaeger et al. [28] proposed a deformation compensation (DC) method to reduce artifacts by applying a moving time average to the PA image sequence, as shown in Figure 5B. The signal from a light absorber located in the image plane persists throughout the PA sequence and is, therefore, unaffected by the averaging, while de-correlated clutter is reduced, improving the contrast-to-clutter ratio (CCR).
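The averaging step of the DC method maps onto a few lines of code; the sketch below is a simplified illustration that assumes the deformation-tracking stage has already produced a co-registered stack of frames (the window length is an arbitrary choice, not taken from [28]).
```python
import numpy as np

def moving_time_average(frames, window=5):
    """Average a deformation-compensated PA image sequence over time.

    frames : (n_frames, H, W) stack of co-registered PA images.
    Persistent in-plane signal survives the averaging, while
    de-correlated clutter is suppressed, improving the CCR.
    """
    kernel = np.ones(window) / window
    # Convolve every pixel's time series with the averaging kernel
    return np.apply_along_axis(
        lambda t: np.convolve(t, kernel, mode="same"), 0, frames)
```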
For motion artifact correction, Erlöv et al. [29] explored the applicability of a regional motion correction algorithm that enables the interpretation of internal tissue motion in handheld 2D PAT. As shown in Figure 5C, the technique leverages intensity phase tracking (IPT) of interleaved ultrasound images that are co-registered with the PAT images to avoid motion artifacts.
For negativity artifacts, Shen et al. [11] investigated their formation mechanisms and evaluated two post-processing approaches to address them. As shown in Figure 5D, the first approach, forced zeroing, involves setting negative values to zero. The second approach, envelope detection, involves reversing the negative components of the image and using the Hilbert transform to extract amplitude profiles.
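Both approaches are straightforward to express in code; a minimal sketch follows (the choice of axis for envelope extraction is an assumption on our part, not a detail from [11]).
```python
import numpy as np
from scipy.signal import hilbert

def forced_zeroing(image):
    """Clip negative pixel values to zero."""
    return np.clip(image, 0, None)

def envelope_detection(image, axis=0):
    """Extract the amplitude envelope along one image axis via the
    Hilbert transform, removing oscillatory negative components."""
    return np.abs(hilbert(image, axis=axis))
```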
Moreover, deep learning (DL) shows promising results for PAT image artifact suppression. Different types of deep convolutional neural network (CNN) architectures have been applied and tested in various studies, such as U-Net [25,26,63], Fully Dense (FD)-UNet [64], simple CNNs [65], and generative adversarial networks (GANs) [66]. One example was provided by Allman et al. [27], who used a CNN to locate and classify source and reflection artifacts in traditional beamformed images. The network was trained on simulated data and transferred to experimental images. Most of these methods use simulated datasets created with k-Wave [67] for network training; only ref. [25] used real PAT images acquired via a dedicated PAT imaging system. In addition, ref. [26] also used an online dataset (the DRIVE database [68]) for simulation.
Due to the complex conditions in real-world PAT imaging, it is not possible to fully identify all of the above-mentioned types of artifacts and remove them completely. A standardized evaluation tool for artifact suppression performance would therefore facilitate objective verification, especially for real PAT images.

4.2. Deconvolution

Due to the signal detection geometry, reconstructed PAT images usually suffer from a loss of resolution and contrast. In addition, signals generated by strong absorbers outside the imaging plane degrade the resolution of PAT images, and the resulting artifacts exhibit anisotropy [69,70]. One of the most effective ways to improve image resolution is image deconvolution, which aims to recover sharp results from degraded images.
PAT imaging can be modeled as the convolution of the point spread function (PSF) of the imaging system with the underlying absorption image. The PSF of an imaging system can be determined through experimental measurements using objects that are significantly smaller than the system's resolution. Qi et al. [31] suggested utilizing measured spatially variant PSFs for PAT image restoration. By acquiring PSF data from a specific PAT system, a high-quality PAT image can be generated using a maximum a posteriori (MAP) framework, which enhances both the resolution and overall image quality, as shown in Figure 6A. In the case of an unknown PSF, Jetzfellner et al. [30] conducted photoacoustic measurements on six-day-old mice using a broadband hydrophone in a circular scanning configuration and near-infrared imaging. The results indicate that the resolution and contrast of the images can be improved through blind deconvolution, which estimates the unknown PSF. However, blind deconvolution depends heavily on the accuracy of the estimated PSF; its performance is therefore usually inferior to deconvolution with a measured PSF.
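As a simple stand-in for the MAP restoration of [31], the sketch below applies Richardson–Lucy deconvolution with a single, spatially invariant measured PSF; handling spatially variant PSFs, as in [31], would require patch-wise processing.
```python
import numpy as np
from skimage.restoration import richardson_lucy  # scikit-image >= 0.19

def deconvolve_pat(image, psf, iterations=30):
    """Deblur a reconstructed PAT image with a measured PSF.

    image : 2-D reconstructed PAT image
    psf   : 2-D PSF measured with a sub-resolution target
    """
    image = np.clip(image, 0, None)        # RL assumes non-negative data
    image = image / (image.max() + 1e-12)  # normalize for stability
    return richardson_lucy(image, psf, num_iter=iterations)
```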
In addition, a model matrix for deconvolution can be used as a step following image reconstruction. As shown in Figure 6B, Nakshatri et al. [33] proposed a two-step model-resolution-matrix-based deconvolution approach to improve reconstructed image quality. The model-resolution matrix was developed in the context of different penalty functions, such as quadratic and Geman–McClure penalties. In addition, Awasthi et al. [32] proposed a basis pursuit deconvolution method, which also includes a model resolution matrix, to improve total variation (TV) regularization results, as shown in Figure 6C. However, model-based deconvolution is usually computationally expensive, and the hyperparameters used during iterative optimization require empirical fine-tuning.
Moreover, image super-resolution techniques can be applied to PAT; for example, multiple PAT images can be integrated into a single high-resolution image after alignment using optical flow estimation [36]. Learning-based methods have also been introduced for PAT image deconvolution. For example, Deep-E, a fully dense neural network, improves elevational resolution using only 2D slices in the axial and elevational plane [35]. For network training, Deep-E uses 2D images generated via k-Wave [67] as input data, with ground-truth vascular images generated using the Insight Segmentation and Registration Toolkit (ITK) [34]. This work reveals the potential of DL-based deconvolution methods in PAT image post-processing.

4.3. Segmentation

Image segmentation is a commonly used image processing technique that separates the area of interest from the background. In PAT imaging, it is particularly useful for images in which the target object and background exhibit weak contrast. It can be used for problems such as light fluence correction, artifact removal, identification of regions of interest, and SOS correction.
Threshold segmentation is a widely used approach [17,37,38,39]. For example, as shown in Figure 7A, Liang et al. [38] extracted a rough body region of the animal through thresholding. However, local threshold segmentation based on pixel amplitude usually has poor precision. To solve this problem, several automatic global threshold segmentation methods have been proposed to segment and delineate the boundaries of biological tissues. Nguyen et al. [17] used automatic segmentation based on the Sobel edge detection algorithm with an applied threshold value to detect image features, as shown in Figure 7B. As shown in Figure 7C, Khodaverdi et al. [39] presented an automatic threshold selection (ATS) algorithm that can accurately distinguish targets from the background in adaptive matched filter (AMF) detection images. In another application, Raumonen et al. [37] proposed a vessel segmentation technique in a probabilistic framework utilizing image voxels and vessel clustering, as shown in Figure 7D. This method achieved more robust and accurate segmentation by calculating the probability of each voxel belonging to a blood vessel.
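For illustration, the following minimal sketch performs global threshold segmentation with Otsu's method and simple morphological cleanup; this is a generic baseline of our own, not the ATS algorithm of [39] or the graph-search method of [38].
```python
from scipy.ndimage import binary_fill_holes, binary_opening
from skimage.filters import threshold_otsu

def segment_body_region(image):
    """Rough body-region mask from a PAT image via global thresholding."""
    thresh = threshold_otsu(image)          # automatic global threshold
    mask = image > thresh
    mask = binary_opening(mask, iterations=2)  # remove isolated noise pixels
    return binary_fill_holes(mask)             # fill interior gaps
```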
In addition, Mandal et al. [43] employed a segmented mask to construct a two-compartment active contour model for boundary segmentation. Liang et al. [38] proposed an automatic 3D segmentation method for PAT images based on an optimal 3-D graph search. These methods model the segmentation target based on its geometric properties and, therefore, achieve good results for specific tissues. More recently, with the advance of DL technology, PAT image segmentation methods based on deep neural networks have received increasing attention [40,41,42]. For example, Chlis et al. [40] proposed the Sparse-UNET (S-UNET) method to obtain a segmentation mask for automatic vascular segmentation using real-world PAT images labeled by a human annotator. Zhang [71] designed segmentation software that identifies six grades of breast cancer, using transfer learning to train the deep classifier. Other authors trained their network models using images generated via the MCXYZ program [72]. Compared to traditional methods, these DL-based approaches have been shown to better handle the low contrast, noise, and complicated artifacts in PAT images.

4.4. Image Post-Processing for Multimodal Imaging

PAT can be integrated with conventional medical imaging modalities, such as magnetic resonance imaging (MRI) and ultrasound (US) imaging. This multimodal imaging strategy can provide complementary information about the imaged tissue. However, different imaging modalities have their own characteristics, and post-processing of the images is often necessary to improve their quality. Various image post-processing techniques can be employed for PAT-based multimodal imaging, such as feature extraction, image registration, and image fusion.
For multimodal PAT-US imaging, many image post-processing methods have been proposed [28,44,45,46,47,48,49]. PA-US imaging enables high-quality multimodal images for a wide range of applications, where the alignment and coupling of the two modalities depend heavily on post-processing techniques. As shown in Figure 8A, Jaeger et al. [28] proposed a deformation compensation method to achieve registration of features between PAT and US images acquired with commercial medical ultrasound equipment. Kim et al. [47] used a dual-modal US-PA contrast agent; the US images are used to construct a masking image containing the location of the target site, which is applied to the PAT image acquired after contrast agent injection. Han et al. [44] proposed a 3D modeling method to calculate the optical fluence distribution based on a dual-modality PA/US system and then used this information to compensate the PA imaging results.
For multimodal PAT-MRI imaging of rigid parts, such as the animal head [50,51,52,53,54,55,56,57,58,59], Ren et al. developed the toolbox "RegOA" [50] and proposed a fully automated registration method for PAT-MRI multimodal brain imaging empowered by deep learning [53]. The datasets for network training were acquired experimentally with a PAT system and an MRI scanner. However, they only applied image registration to rigid areas, such as the animal head, and not to deformable regions such as the abdomen. To improve image registration of both body and tumor contours between PAT and MRI, Gehrung et al. [54] combined a novel MRI animal holder with a landmark-based software co-registration algorithm for deformable tissues, achieving the first co-alignment of soft tissue. However, the custom silicone MRI holder cannot be reused and has relatively low registration accuracy. As shown in Figure 8B, Zhang et al. [56] developed a novel dual-modality animal imaging bed to achieve successive dual-modality data acquisition and co-registration of PAT and MRI data in in vivo imaging applications. Based on this design, they then employed an automated rigid image registration algorithm for PAT and MRI and proposed a PAT image restoration technique that uses MRI information as guidance [59]. They achieved robust whole-body MRI-PAT image registration, and their registration tool is reusable. Finally, an attempt was made to integrate a PAT imaging system into an MRI scanner: Chen et al. [58] developed a parallel hybrid magnetic resonance and optoacoustic tomography (MROT) system to acquire MRI and photoacoustic signals simultaneously. Based on the MROT system, they proposed a tailored data processing pipeline to register MRI image volumes acquired using different sequences onto the corresponding vascular and oxygenation data recorded via PAT.
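As a toy example of the rigid-registration step shared by these pipelines, phase correlation can estimate a translational offset between PAT and MRI slices that have been resampled to a common grid; this sketch is illustrative only, as the cited methods use landmark-based or learning-based transforms.
```python
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def rigid_translate_register(fixed, moving):
    """Estimate and apply a translation aligning `moving` to `fixed`.

    Both inputs are 2-D slices resampled to the same grid and
    intensity-normalized beforehand (assumed preprocessing).
    """
    offset, error, _ = phase_cross_correlation(fixed, moving,
                                               upsample_factor=10)
    # Apply the estimated sub-pixel shift with linear interpolation
    return nd_shift(moving, shift=offset, order=1)
```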
For other PAT-related dual-modality imaging methods [73,74], Ni et al. [73] introduced a multiscale optical molecular imaging approach combining fluorescence microscopy and multispectral PAT. It enabled unique transcranial imaging capacity, single-plaque resolution, and non-invasive real-time visualization of the entire mouse brain. Moreover, based on real-time volumetric PAT and electrophysiological recordings, Gottschalk et al. [75] visualized real-time thalamocortical activity non-invasively.
For multimodal image fusion, Park et al. [60] proposed a fusion imaging technique in which overlapping PA, US, and MR images could be displayed concurrently in real time via co-registration of pre-acquired MR and real-time PA/US images. In this work, an image fusion algorithm was proposed to seek the spatial relationships between the MR volume and the PA/US image by performing rigid transformation.
Multimodal imaging requires the coupling of different imaging modalities, as well as quantitative analysis [76]. To achieve this goal, image post-processing techniques must continue to play an important role in the rapidly developing field of PAT multimodal imaging.

5. PAT-Specific Image Processing Methods

PAT-specific image processing methods are proposed to deal with image degradation problems that are unique to PAT imaging techniques. Accounting for the optical and acoustic properties of the imaging tissue, these methods include light fluence correction, acoustic correction, and spectral unmixing. With these techniques, researchers can better understand the structure and function of biological tissues and improve the accuracy and effectiveness of PAT imaging. Table 3 gives a brief summary of these methods.

5.1. Light Fluence Correction

Light fluence correction refers to the process of correcting for variations in the light fluence deposited in tissue, which arise from absorption and scattering along the light path. Light fluence correction is important for accurately quantifying the distribution of optical absorption properties in tissue, which is a key aspect of PAT imaging.
A number of mathematical and numerical models have been proposed for light fluence correction, including optical propagation models [82,83], fixed-point iterative methods [80,81], logarithmic unmixing of multispectral datasets [79], and measurement-based methods [77,78]. Zhou et al. [82] developed a real-time correction algorithm based on the diffusion dipole model (DDM), which simulates the fluence distribution as the response to a pair of point sources produced by a collimated pencil beam in a semi-infinite turbid medium. Deán-Ben et al. [77] used the switching kinetics of reversibly switchable fluorescent proteins (RSFPs) to correct for the dynamic light fluence distribution deep in scattering media.
In addition, data-driven approaches have been proposed to compensate for the non-linear light fluence distribution. Madasamy et al. [84] trained DL models, such as U-Net, FD U-Net, Y-Net, FD Y-Net, Deep residual U-Net (Deep ResU-Net), and GAN, on blood vessel and numerical breast phantom datasets to evaluate the performance of optical absorption coefficient recovery. They accumulated datasets from different online repositories, such as Kaggle [105], the retinal fundus multi-disease image dataset (RFMID) [106], the optical and acoustic breast phantom database [107], and the 3D acoustic numerical breast phantoms [108]. However, all of the above supervised learning methods may suffer from the lack of ground truth for network training. To address this problem, as shown in Figure 9A, Li et al. [109] proposed quantitative optoacoustic tomography (QOAT)-Net, a dual-path convolutional network, to estimate absorption coefficients after training with data-label pairs generated via unsupervised "simulation-to-experiment" data translation.
Moreover, the automatic extraction and segmentation of PAT images can be crucial for improving image analysis efficiency and enabling advanced PAT-specific image post-processing, such as light fluence correction. As shown in Figure 9B, Liang et al. [38] presented a volumetric method for estimating light fluence in PAT images. The method employs the 3-D optimal graph search (3-D GS) algorithm, which takes into consideration the continuity among image slices for volumetric image segmentation. Using the 3D segmentation of the entire animal body, a simulated volumetric light fluence distribution can be obtained to correct the PAT images. To improve the accuracy of light fluence estimation, Brochu et al. [110] suggested organ-level segmentation for performing light fluence correction on regions with distinct optical properties. However, the poor soft tissue contrast of PAT images hinders accurate organ segmentation. As shown in Figure 9C, Zhang et al. [59] introduced a method for estimating and correcting light fluence in PAT guided by MRI structural information. The method involves segmenting the registered MRI image obtained from a dual-modality imaging approach and using the segmentation result to guide the estimation of the light fluence distribution. Organ-level segmentation is easier on MRI images than on PAT images and, therefore, achieves higher segmentation accuracy, which in turn improves the accuracy of light fluence correction.
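At its core, fluence correction divides the reconstructed image by an estimated fluence map, since the initial pressure is proportional to the product of absorption and fluence (p0 ∝ μa·Φ). A minimal sketch follows; the fluence estimate itself, obtained via simulation or segmentation-guided modeling as in the works above, is assumed to be given, and the noise floor is an arbitrary safeguard of our own.
```python
import numpy as np

def fluence_correct(pat_image, fluence_map, floor_fraction=0.05):
    """Compensate a PAT image for non-uniform light fluence.

    Dividing by the estimated fluence map approximately recovers the
    relative absorption distribution. The floor avoids amplifying
    noise where the estimated fluence is near zero.
    """
    floor = floor_fraction * fluence_map.max()
    return pat_image / np.maximum(fluence_map, floor)
```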

5.2. Acoustic Correction

Tissues are acoustically heterogeneous, meaning that the SOS distribution varies among different tissue types. The acoustic properties of tissue interfaces can also vary significantly, resulting in reflections of acoustic waves [111,112]. However, during image reconstruction, the algorithm typically assumes a constant and uniform SOS distribution, which contradicts the acoustic heterogeneity of tissues. This can lead to artifacts, spatial aliasing, and structural distortion due to the incorrect algorithmic assumptions [113,114].
Image distortion caused by heterogeneous tissues can be suppressed by choosing an optimal SOS. As shown in Figure 10A, Treeby et al. [85] employed an autofocus approach that chooses the SOS used for image reconstruction by visual assessment: the SOS is manually adjusted to maximize the sharpness of prominent image features. This approach is intuitive, though it is prone to noise and artifacts that produce similarly high-intensity image features. Jeon et al. [89] introduced a DL-based end-to-end SOS correction algorithm that uses eight different SOS images as training inputs [4]. Dehner et al. [88] proposed passing the adjusted input SOS value to the trainable network layers to improve the image fidelity of deep-learning-based PAT reconstruction, using the k-Wave toolbox [67] to obtain labelled datasets by employing a diverse collection of publicly available real-world images as the initial pressure distribution. However, data-driven approaches for acoustic correction primarily rely on high-fidelity simulated data and have not yet been successfully applied to in vivo imaging studies.
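The autofocus idea can be sketched as a one-dimensional search over candidate SOS values, scoring each reconstruction with a sharpness metric; here we use normalized gradient energy, and `reconstruct` is a hypothetical callable wrapping any standard algorithm such as DAS (both are our assumptions, not details of [85]).
```python
import numpy as np

def sharpness(image):
    """Normalized gradient energy as a simple focus metric."""
    gy, gx = np.gradient(image)
    return np.sum(gx ** 2 + gy ** 2) / (np.sum(image ** 2) + 1e-12)

def autofocus_sos(signals, reconstruct, sos_candidates):
    """Pick the SOS whose reconstruction maximizes image sharpness.

    reconstruct : callable(signals, sos) -> 2-D image (hypothetical)
    """
    scores = [sharpness(reconstruct(signals, c)) for c in sos_candidates]
    return sos_candidates[int(np.argmax(scores))]

# Example: sos = autofocus_sos(signals, das_recon, np.arange(1450, 1601, 5))
```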
Automatic selection of the SOS [85,86,87] is a straightforward but limited approach, as it assumes a uniform velocity throughout the tissue. In contrast, partitioning different tissue regions helps correct their inhomogeneous SOS distributions [43,115,116,117]. Lafci et al. [115] demonstrated the use of inhomogeneous SOS maps for image quality enhancement; the image was segmented manually into two regions: foreground (the mouse body) and background (the surrounding coupling medium). In general, segmentation-based SOS correction methods require knowledge of the tissue/organ distribution and place certain demands on segmentation accuracy; when these are met, however, they can significantly improve correction performance and image quality.
In addition, joint correction approaches have been proposed to simultaneously extract optical absorption information and the SOS distribution [90,91,92,93,94]. As shown in Figure 10B, Cai et al. [90] proposed a method to jointly reconstruct the SOS and the PAT image: the SOS distribution is updated by maximizing the similarity of images from partial arrays before image reconstruction is performed. The simulation data for network training were generated using k-Wave [67]. Shan et al. [91] proposed a simultaneous reconstruction network (SR-Net) to update the initial pressure and sound speed iteratively. Even though it may suffer from numerical instability, this method still significantly improves PAT image quality. The SOS distribution can also be measured experimentally using additional techniques [118,119,120]. Xia et al. [120] incorporated ultrasonic computed tomography (USCT) into a full-ring array PAT system to measure the SOS distribution within the object. The SOS correction model used in this USCT-enhanced method was obtained through experiments and is, therefore, more accurate than simplified assumptions.

5.3. Spectral Unmixing

In multispectral imaging, as tissues are excited at two or more wavelengths, it is possible to resolve the distribution of various tissue molecules or biomarkers by unmixing the images based on their spectral signature.
Several studies have addressed the quantification of the concentration of individual absorbers. The linear unmixing model is the most widely used [99]. Xia et al. [98] proposed a dynamic method, showing that PA amplitudes measured at different sO2 states cancel the contribution of optical fluence and allow calibration-free quantification of absolute sO2. Tzoumas et al. [97] found that statistical sub-pixel detection methods can detect a unique spectral target with up to five times higher sensitivity than linear unmixing approximations. Tzoumas et al. [96] introduced eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, estimating blood sO2 within deep tissue with substantially enhanced performance compared to previous methods, as shown in Figure 11A. Despite the quantitative advantage offered by eMSOT, both the non-convex nature of the optimization problem and the possible sub-optimality of the constraints may reduce its accuracy. To address this issue, Olefir et al. [95] presented a neural network architecture composed of a combination of recurrent and convolutional layers, which improved the accuracy of sO2 computation by directly regressing from a set of input spectra to the desired fluence values. In their work, the input data for network training were generated via simulation.
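At its simplest, linear unmixing solves a small per-pixel least-squares problem against known chromophore spectra. A minimal sketch with a non-negativity constraint follows; the spectra matrix (e.g., tabulated HbO2/Hb absorption spectra) is assumed to be supplied, and fluence correction is assumed to have been applied beforehand.
```python
import numpy as np
from scipy.optimize import nnls

def linear_unmix(multispectral, spectra):
    """Per-pixel non-negative linear spectral unmixing.

    multispectral : (n_wavelengths, H, W) fluence-corrected PAT images
    spectra       : (n_wavelengths, n_chromophores) known absorption
                    spectra, e.g. columns for HbO2 and Hb
    returns       : (n_chromophores, H, W) relative concentration maps
    """
    n_wl, H, W = multispectral.shape
    pixels = multispectral.reshape(n_wl, -1)
    conc = np.stack([nnls(spectra, pixels[:, i])[0]
                     for i in range(pixels.shape[1])])
    return conc.T.reshape(spectra.shape[1], H, W)
```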
The biological tissue composition and its exact spectral characteristics are not always known during spectral separation, which has led to blind or semi-blind spectral unmixing techniques. As shown in Figure 11B, Glatz et al. [104] investigated principal component analysis (PCA) and independent component analysis (ICA), which require no a priori information, for separating specific absorbers from the background. Compared to spectral fitting schemes, this method was shown to be a promising alternative for separating mixed components. Deán-Ben et al. [103] suggested using vertex component analysis (VCA) for semi-blind unmixing of multispectral optoacoustic data, which includes a priori information on the spectral signatures of absorbers. In the semi-blind case, the standard version of the VCA algorithm can attain sensitivity similar to the PCA-ICA approach with faster performance. Deán-Ben et al. [79] also presented a blind unmixing approach capable of correcting for wavelength-dependent light fluence variations using a logarithmic representation of the images taken at different wavelengths. An et al. [102] applied an approximate fluence adjustment, based on spatially homogeneous optical properties equal to those of the background region, to the PAT images before accurately separating the chromophores via ICA, further reducing the unmixing error. The effect of retaining different numbers of dimensions for ICA was also demonstrated [101]. Among DL-based methods, Durairaj et al. [100] introduced a dual autoencoder neural network architecture designed to estimate the end-member spectra (wavelength-dependent absorption coefficients) and the abundance maps (unmixed images) of the constituent molecules. Their method accounts for the non-linearities present in the mixing models and reduces the dependency on linear unmixing.
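When the spectra are unknown, blind unmixing treats each pixel spectrum as a mixed observation. A minimal FastICA sketch along the lines of the PCA-ICA approach follows; the component count is an assumption, and the sign and scale of ICA components are arbitrary.
```python
import numpy as np
from sklearn.decomposition import FastICA

def blind_unmix_ica(multispectral, n_components=3):
    """Blind source separation of multispectral PAT images with FastICA.

    multispectral : (n_wavelengths, H, W); each pixel spectrum is one
                    observation. n_components must not exceed the
                    number of wavelengths.
    returns       : (n_components, H, W) component images
                    (sign and scale arbitrary).
    """
    n_wl, H, W = multispectral.shape
    X = multispectral.reshape(n_wl, -1).T       # samples = pixels
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(X)              # (n_pixels, n_components)
    return sources.T.reshape(n_components, H, W)
```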
Several factors may affect the effectiveness of spectral unmixing [121,122,123,124]. Dolet et al. [122] took into account both the spatial neighborhood and the spectral behavior of the pixels used for multispectral PAT image clustering, which can significantly improve the accuracy of concentration estimates. Tzoumas et al. [123] studied the impact of the number of excitation wavelengths on sensitivity and accuracy, finding that unmixing sensitivity increases statistically with the number of wavelengths employed. Moreover, negative values may cause significant artifacts and can render fine structures invisible to spectral unmixing techniques. To overcome this issue, Ding et al. [121] imposed non-negative constraints solely on the optical absorbers of interest, thus avoiding negative pressure values. Taruttis et al. [124] proposed a technique that employs stationary wavelet decomposition prior to non-negative spectral unmixing. This method offers a systematic and automated approach for transforming images at multiple excitation wavelengths into multiscale representations of specific chromophores, facilitating the identification of hidden structures and reducing the effects of negative values on imaging results.

6. Discussion

The literature reviewed above demonstrates that image post-processing techniques can significantly enhance the quality of PAT images. Ranging from artifact suppression and image deconvolution to PAT-specific SOS correction and light fluence correction, image post-processing methods for PAT are starting to create a solid foundation for the development of advanced PAT imaging technology, as well as for the extension of pre-clinical and clinical applications.
Regarding the aim of improving PAT image quality, rapid developments in the field of computer vision have provided novel ideas that may be introduced to PAT. For example, deep generative models, such as generative adversarial networks [125], can adapt to different downstream tasks, such as super-resolution [126,127] and denoising [128]. They may be used to perform PAT image enhancement in an unsupervised manner. Moreover, advanced segmentation models, such as the Segment Anything Model [128], can be introduced to perform more accurate PAT-based tissue segmentation. In the absence of sufficient labeled datasets, self-supervised networks [129,130] that are able to learn from the input test image itself can be introduced to PAT image post-processing tasks, such as deconvolution.
Moreover, hardware systems for PAT imaging are constantly evolving, creating new technical challenges for PAT image post-processing. For example, advanced imaging strategies have been proposed for ring-shaped PAT, such as interlaced sparse sampling PAT [131,132], which employs a rotational scanning scheme. There are also an increasing number of studies based on 3D PAT systems, such as spherical array PAT [133,134,135,136,137,138]. In these systems, special care must be taken to efficiently and accurately recover high-quality PAT images.
Finally, the development of image post-processing techniques for PAT is still at an early stage. Current research is usually conducted using private datasets with custom code. Standardizing image processing operations [139] is therefore expected to reduce the differences between studies and facilitate the unification of various PAT-specific processing techniques. Moreover, DL-based methods have emerged as promising PAT image post-processing tools, and the datasets used for network training form the basis of these methods. Several public datasets for PAT imaging have been released and made available for use [108,140,141,142]. Future research on DL-based methods performed on these datasets will facilitate fair performance comparisons.

7. Conclusions

Following the exploration of the importance of image post-processing for PAT imaging, this review summarizes existing works that employ advanced image processing methods to enhance the quality of PAT images. We divide these studies into two major categories—general image processing and PAT-specific processing—to facilitate a comprehensive survey of recent technical advancements. We anticipate that this review will inspire subsequent research on PAT image processing and encourage innovation on this topic to advance PAT imaging technology.

Author Contributions

Formal analysis, investigation, visualization, and writing—original draft preparation, K.T.; Writing—review and editing, S.Z., Z.L., Y.W. and J.G.; Project administration, writing—review and editing, L.Q.; Funding acquisition, W.C. and L.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Guangdong Basic and Applied Basic Research Foundation (2021A1515012542, 2022A1515011748), the Guangdong Pearl River Talented Young Scholar Program (2017GC010282), and the Key-Area Research and Development Program of Guangdong Province (2018B030333001).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Das, D.; Sharma, A.; Rajendran, P.; Pramanik, M. Another decade of photoacoustic imaging. Phys. Med. Biol. 2021, 66, 05TR01.
2. Li, X.; Qi, L.; Zhang, S.; Huang, S.; Wu, J.; Lu, L.; Feng, Y.; Feng, Q.; Chen, W. Model-based optoacoustic tomography image reconstruction with non-local and sparsity regularizations. IEEE Access 2019, 7, 102136–102148.
3. Qi, L.; Huang, S.; Li, X.; Zhang, S.; Lu, L.; Feng, Q.; Chen, W. Cross-sectional photoacoustic tomography image reconstruction with a multi-curve integration model. Comput. Methods Programs Biomed. 2020, 197, 105731.
4. Jeon, S.; Choi, W.; Park, B.; Kim, C. A Deep Learning-Based Model That Reduces Speed of Sound Aberrations for Improved In Vivo Photoacoustic Imaging. IEEE Trans. Image Process. 2021, 30, 8773–8784.
5. Wang, K.; Schoonover, R.W.; Su, R.; Oraevsky, A.; Anastasio, M.A. Discrete imaging models for three-dimensional optoacoustic tomography using radially symmetric expansion functions. IEEE Trans. Med. Imaging 2014, 33, 1180–1193.
6. Ruiz-Veloz, M.; Gutiérrez-Juárez, G.; Polo-Parada, L.; Cortalezzi, F.; Kline, D.D.; Dantzler, H.A.; Cruz-Alvarez, L.; Castro-Beltrán, R.; Hidalgo-Valadez, C. Image reconstruction algorithm for laser-induced ultrasonic imaging: The single sensor scanning synthetic aperture focusing technique. J. Acoust. Soc. Am. 2023, 153, 560–572.
7. Winkler, A.M.; Maslov, K.; Wang, L.V. Noise-equivalent sensitivity of photoacoustics. J. Biomed. Opt. 2013, 18, 097003.
8. Ku, G.; Wang, X.; Stoica, G.; Wang, L. Multiple-bandwidth photoacoustic tomography. Phys. Med. Biol. 2004, 49, 1329–1338.
9. Choi, W.; Oh, D.; Kim, C. Practical photoacoustic tomography: Realistic limitations and technical solutions. J. Appl. Phys. 2020, 127, 230903.
10. Bise, R.; Zheng, Y.; Sato, I.; Toi, M. Vascular Registration in Photoacoustic Imaging by Low-Rank Alignment via Foreground, Background and Complement Decomposition. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016; Springer: Cham, Switzerland, 2016; pp. 326–334.
11. Shen, K.; Liu, S.; Feng, T.; Yuan, J.; Zhu, B.; Tian, C. Negativity artifacts in back-projection based photoacoustic tomography. J. Phys. D-Appl. Phys. 2021, 54, 074001.
12. Cox, B.T.; Arridge, S.R.; Beard, P.C. Photoacoustic tomography with a limited-aperture planar sensor and a reverberant cavity. Inverse Probl. 2007, 23, S95.
13. Tian, C.; Zhang, C.; Zhang, H.; Xie, D.; Jin, Y. Spatial resolution in photoacoustic computed tomography. Rep. Prog. Phys. 2021, 84, 036701.
14. Ammari, H.; Bretin, E.; Jugnon, V.; Wahab, A. Photoacoustic imaging for attenuating acoustic media. Math. Model. Biomed. Imaging II 2012, 2035, 53–80.
15. Bu, S.; Liu, Z.; Shiina, T.; Kondo, K.; Yamakawa, M.; Fukutani, K.; Someda, Y.; Asao, Y. Model-Based Reconstruction Integrated With Fluence Compensation for Photoacoustic Tomography. IEEE Trans. Biomed. Eng. 2012, 59, 1354–1363.
16. Huang, C.; Nie, L.; Schoonover, R.W.; Wang, L.V.; Anastasio, M.A. Photoacoustic computed tomography correcting for heterogeneity and attenuation. J. Biomed. Opt. 2012, 17, 0612111–0612115.
17. Nguyen, H.N.Y.; Hussain, A.; Steenbergen, W. Reflection artifact identification in photoacoustic imaging using multi-wavelength excitation. Biomed. Opt. Express 2018, 9, 4613–4630.
18. Preisser, S.; Held, G.; Akarcay, H.G.; Jaeger, M.; Frenz, M. Study of clutter origin in in-vivo epi-optoacoustic imaging of human forearms. J. Opt. 2016, 18, 094003.
19. Thomenius, K.E. Evolution of ultrasound beamformers. In Proceedings of the 1996 IEEE Ultrasonics Symposium, San Antonio, TX, USA, 3–4 November 1996; pp. 1615–1622.
20. Xu, M.; Wang, L.V. Universal back-projection algorithm for photoacoustic computed tomography. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 2005, 71, 016706.
21. Dean-Ben, X.L.; Buehler, A.; Ntziachristos, V.; Razansky, D. Accurate model-based reconstruction algorithm for three-dimensional optoacoustic tomography. IEEE Trans. Med. Imaging 2012, 31, 1922–1928.
22. Jiang, H.; Iftimia, N.V.; Xu, Y.; Eggert, J.A.; Fajardo, L.L.; Klove, K.L. Near-infrared optical imaging of the breast with model-based reconstruction. Acad. Radiol. 2002, 9, 186–194.
23. Allman, D.; Reiter, A.; Bell, M.A.L. Photoacoustic Source Detection and Reflection Artifact Removal Enabled by Deep Learning. IEEE Trans. Med. Imaging 2018, 37, 1464–1477.
24. Muhammad, M.; Prakash, J.; Liapis, E.; Ntziachristos, V.; Jüstel, D. Weighted model-based optoacoustic reconstruction for partial-view geometries. J. Biophotonics 2022, 15, e202100334.
25. Davoudi, N.; Deán-Ben, X.L.; Razansky, D. Deep learning optoacoustic tomography with sparse data. Nat. Mach. Intell. 2019, 1, 453–460.
26. Farnia, P.; Mohammadi, M.; Najafzadeh, E.; Alimohamadi, M.; Makkiabadi, B.; Ahmadian, A. High-quality photoacoustic image reconstruction based on deep convolutional neural network: Towards intra-operative photoacoustic imaging. Biomed. Phys. Eng. Express 2020, 6, 045019.
27. Allman, D.; Reiter, A.; Bell, M.A.L. A machine learning method to identify and remove reflection artifacts in photoacoustic channel data. In Proceedings of the 2017 IEEE International Ultrasonics Symposium (IUS), Washington, DC, USA, 6–9 September 2017; pp. 1–4.
28. Jaeger, M.; Harris-Birtill, D.; Gertsch, A.; O’Flynn, E.; Bamber, J. Deformation-compensated averaging for clutter reduction in epiphotoacoustic imaging in vivo. J. Biomed. Opt. 2012, 17, 066007.
29. Erlöv, T.; Sheikh, R.; Dahlstrand, U.; Albinsson, J.; Malmsjö, M.; Cinthio, M. Regional motion correction for in vivo photoacoustic imaging in humans using interleaved ultrasound images. Biomed. Opt. Express 2021, 12, 3312–3322.
30. Jetzfellner, T.; Ntziachristos, V. Performance of blind deconvolution in optoacoustic tomography. J. Innov. Opt. Health Sci. 2011, 4, 385–393.
31. Qi, L.; Wu, J.; Li, X.; Zhang, S.; Huang, S.; Feng, Q.; Chen, W. Photoacoustic tomography image restoration with measured spatially variant point spread functions. IEEE Trans. Med. Imaging 2021, 40, 2318–2328.
32. Awasthi, N.; Kalva, S.K.; Pramanik, M.; Yalavarthy, P.K. Image-guided filtering for improving photoacoustic tomographic image reconstruction. J. Biomed. Opt. 2018, 23, 091413–091422.
33. Nakshatri, H.S.; Prakash, J. Model resolution matrix based deconvolution improves over non-quadratic penalization in frequency-domain photoacoustic tomography. J. Acoust. Soc. Am. 2022, 152, 1345.
34. Hamarneh, G.; Jassi, P. VascuSynth: Simulating vascular trees for generating volumetric image data with ground-truth segmentation and tree analysis. Comput. Med. Imaging Graph. 2010, 34, 605–616.
35. Zhang, H.; Bo, W.; Wang, D.; DiSpirito, A.; Huang, C.; Nyayapathi, N.; Zheng, E.; Vu, T.; Gong, Y.; Yao, J. Deep-E: A fully-dense neural network for improving the elevation resolution in linear-array-based photoacoustic tomography. IEEE Trans. Med. Imaging 2021, 41, 1279–1288.
36. He, H.; Mandal, S.; Buehler, A.; Deán-Ben, X.L.; Razansky, D.; Ntziachristos, V. Improving optoacoustic image quality via geometric pixel super-resolution approach. IEEE Trans. Med. Imaging 2015, 35, 812–818.
37. Raumonen, P.; Tarvainen, T. Segmentation of vessel structures from photoacoustic images with reliability assessment. Biomed. Opt. Express 2018, 9, 2887–2904.
38. Liang, Z.; Zhang, S.; Wu, J.; Li, X.; Zhuang, Z.; Feng, Q.; Chen, W.; Qi, L. Automatic 3-D segmentation and volumetric light fluence correction for photoacoustic tomography based on optimal 3-D graph search. Med. Image Anal. 2022, 75, 102275.
39. Khodaverdi, A.; Erlöv, T.; Hult, J.; Reistad, N.; Pekar-Lukacs, A.; Albinsson, J.; Merdasa, A.; Sheikh, R.; Malmsjö, M.; Cinthio, M. Automatic threshold selection algorithm to distinguish a tissue chromophore from the background in photoacoustic imaging. Biomed. Opt. Express 2021, 12, 3836–3850.
40. Chlis, N.-K.; Karlas, A.; Fasoula, N.-A.; Kallmayer, M.; Eckstein, H.-H.; Theis, F.J.; Ntziachristos, V.; Marr, C. A sparse deep learning approach for automatic segmentation of human vasculature in multispectral optoacoustic tomography. Photoacoustics 2020, 20, 100203.
41. Luke, G.; Hoffer-Hawlik, K.; Namen, A.; Shang, R. O-Net: A Convolutional Neural Network for Quantitative Photoacoustic Image Segmentation and Oximetry. arXiv 2019, arXiv:1911.01935.
42. Schellenberg, M.; Dreher, K.K.; Holzwarth, N.; Isensee, F.; Reinke, A.; Schreck, N.; Seitel, A.; Tizabi, M.D.; Maier-Hein, L.; Groehl, J. Semantic segmentation of multispectral photoacoustic images using deep learning. Photoacoustics 2022, 26, 100341.
43. Mandal, S.; Deán-Ben, X.L.; Razansky, D. Visual quality enhancement in optoacoustic tomography using active contour segmentation priors. IEEE Trans. Med. Imaging 2016, 35, 2209–2217.
44. Han, T.; Yang, M.; Yang, F.; Zhao, L.; Jiang, Y.; Li, C. A three-dimensional modeling method for quantitative photoacoustic breast imaging with handheld probe. Photoacoustics 2021, 21, 100222.
45. Singh, M.K.A.; Sato, N.; Ichihashi, F.; Sankai, Y. In vivo demonstration of real-time oxygen saturation imaging using a portable and affordable LED-based multispectral photoacoustic and ultrasound imaging system. In Proceedings of the Conference on Photons Plus Ultrasound—Imaging and Sensing, San Francisco, CA, USA, 3–6 February 2019.
46. Francis, K.J.; Singh, M.K.A.; Steenbergen, W. Tomographic imaging with an LED-based photoacoustic-ultrasound system. In Proceedings of the Conference on Photons Plus Ultrasound—Imaging and Sensing 2020, San Francisco, CA, USA, 2–5 February 2020.
47. Kim, H.; Lee, H.; Kim, H.; Chang, J.H. Elimination of Nontargeted Photoacoustic Signals for Combined Photoacoustic and Ultrasound Imaging. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2021, 68, 1593–1604.
48. Yoon, C.; Lee, C.; Shin, K.; Kim, C. Motion Compensation for 3D Multispectral Handheld Photoacoustic Imaging. Biosensors 2022, 12, 1092.
49. Choi, W.; Park, E.-Y.; Jeon, S.; Yang, Y.; Park, B.; Ahn, J.; Cho, S.; Lee, C.; Seo, D.-K.; Cho, J.-H.; et al. Three-dimensional Multistructural Quantitative Photoacoustic and US Imaging of Human Feet in Vivo. Radiology 2022, 303, 467–473.
50. Ren, W.; Skulason, H.; Schlegel, F.; Rudin, M.; Klohs, J.; Ni, R. Automated registration for optoacoustic tomography and MRI. In Proceedings of the Optical Molecular Probes, Imaging and Drug Delivery, Tucson, AZ, USA, 14–17 April 2019; p. OW2D.4.
51. Ren, W.; Deán-Ben, X.L.; Skachokova, Z.; Augath, M.-A.; Ni, R.; Chen, Z.; Razansky, D. Monitoring mouse brain perfusion with hybrid magnetic resonance optoacoustic tomography. Biomed. Opt. Express 2023, 14, 1192–1204.
52. Chen, Z.; Gezginer, I.; Augath, M.A.; Liu, Y.H.; Ni, R.; Deán-Ben, X.L.; Razansky, D. Simultaneous Functional Magnetic Resonance and Optoacoustic Imaging of Brain-Wide Sensory Responses in Mice. Adv. Sci. 2023, 10, 2205191.
53. Hu, Y.; Lafci, B.; Luzgin, A.; Wang, H.; Klohs, J.; Dean-Ben, X.L.; Ni, R.; Razansky, D.; Ren, W. Deep learning facilitates fully automated brain image registration of optoacoustic tomography and magnetic resonance imaging. Biomed. Opt. Express 2022, 13, 4817–4833.
54. Gehrung, M.; Tomaszewski, M.; McIntyre, D.; Disselhorst, J.; Bohndiek, S. Co-registration of optoacoustic tomography and magnetic resonance imaging data from murine tumour models. Photoacoustics 2020, 18, 100147.
55. Ren, W.; Skulason, H.; Schlegel, F.; Rudin, M.; Klohs, J.; Ni, R. Automated registration of magnetic resonance imaging and optoacoustic tomography data for experimental studies. Neurophotonics 2019, 6, 025001.
  54. Gehrung, M.; Tomaszewski, M.; McIntyre, D.; Disselhorst, J.; Bohndiek, S. Co-registration of optoacoustic tomography and magnetic resonance imaging data from murine tumour models. Photoacoustics 2020, 18, 100147. [Google Scholar] [CrossRef]
  55. Ren, W.; Skulason, H.; Schlegel, F.; Rudin, M.; Klohs, J.; Ni, R. Automated registration of magnetic resonance imaging and optoacoustic tomography data for experimental studies. Neurophotonics 2019, 6, 025001. [Google Scholar] [CrossRef] [Green Version]
  56. Zhang, S.; Qi, L.; Li, X.; Liang, Z.; Sun, X.; Liu, J.; Lu, L.; Feng, Y.; Chen, W. MRI Information-Based Correction and Restoration of Photoacoustic Tomography. IEEE Trans. Med. Imaging 2022, 41, 2543–2555. [Google Scholar] [CrossRef]
  57. Ni, R.; Deán-Ben, X.L.; Treyer, V.; Gietl, A.; Hock, C.; Klohs, J.; Nitsch, R.M.; Razansky, D. Coregistered transcranial optoacoustic and magnetic resonance angiography of the human brain. Opt. Lett. 2023, 48, 648–651. [Google Scholar] [CrossRef]
  58. Chen, Z.; Gezginer, I.; Augath, M.-A.; Ren, W.; Liu, Y.-H.; Ni, R.; Deán-Ben, X.L.; Razansky, D. Hybrid magnetic resonance and optoacoustic tomography (MROT) for preclinical neuroimaging. Light Sci. Appl. 2022, 11, 332. [Google Scholar] [CrossRef]
  59. Zhang, S.; Liang, Z.; Tang, K.; Li, X.; Zhang, X.; Mo, Z.; Wu, J.; Huang, S.; Liu, J.; Zhuang, Z.; et al. In vivo co-registered hybrid-contrast imaging by successive photoacoustic tomography and magnetic resonance imaging. Photoacoustics 2023, 31, 100506. [Google Scholar] [CrossRef]
  60. Park, S.; Jang, J.; Kim, J.; Kim, Y.S.; Kim, C. Real-time Triple-modal Photoacoustic, Ultrasound, and Magnetic Resonance Fusion Imaging of Humans. IEEE Trans. Med. Imaging 2017, 36, 1912–1921. [Google Scholar] [CrossRef]
  61. Rosenthal, A.; Razansky, D.; Ntziachristos, V. Fast semi-analytical model-based acoustic inversion for quantitative optoacoustic tomography. IEEE Trans. Med. Imaging 2010, 29, 1275–1285. [Google Scholar] [CrossRef]
  62. Buehler, A.; Rosenthal, A.; Jetzfellner, T.; Dima, A.; Razansky, D.; Ntziachristos, V. Model-based optoacoustic inversions with incomplete projection data. Med. Phys. 2011, 38, 1694–1704. [Google Scholar] [CrossRef]
  63. Antholzer, S.; Haltmeier, M.; Schwab, J. Deep learning for photoacoustic tomography from sparse data. Inverse Probl. Sci. Eng. 2019, 27, 987–1005. [Google Scholar] [CrossRef] [Green Version]
  64. Guan, S.; Khan, A.A.; Sikdar, S.; Chitnis, P.V. Fully Dense UNet for 2-D Sparse Photoacoustic Tomography Artifact Removal. IEEE J. Biomed. Health Inform. 2020, 24, 568–576. [Google Scholar] [CrossRef] [Green Version]
  65. Zhang, H.J.; Li, H.Y.; Nyayapathi, N.; Wang, D.P.; Le, A.; Ying, L.; Xia, J. A New Deep Learning Network for Mitigating Limited-view and Under-sampling Artifacts in Ring-shaped Photoacoustic Tomography. Comput. Med. Imaging Graph. 2020, 84, 101720. [Google Scholar] [CrossRef]
  66. Vu, T.; Li, M.C.; Humayun, H.; Zhou, Y.; Yao, J.J. A generative adversarial network for artifact removal in photoacoustic computed tomography with a linear-array transducer. Exp. Biol. Med. 2020, 245, 597–605. [Google Scholar] [CrossRef] [Green Version]
  67. Treeby, B.E.; Cox, B.T. k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields. J. Biomed. Opt. 2010, 15, 021314. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef]
  69. Buehler, A.; Dean-Ben, X.L.; Razansky, D.; Ntziachristos, V. Volumetric Optoacoustic Imaging With Multi-Bandwidth Deconvolution. IEEE Trans. Med. Imaging 2014, 33, 814–821. [Google Scholar] [CrossRef] [PubMed]
  70. Queiros, D.; Dean-Ben, X.L.; Buehler, A.; Razansky, D.; Rosenthal, A.; Ntziachristos, V. Modeling the shape of cylindrically focused transducers in three-dimensional optoacoustic tomography. In Proceedings of the Conference on Photons Plus Ultrasound: Imaging and Sensing, San Francisco, CA, USA, 2–5 February 2014. [Google Scholar]
  71. Zhang, J.; Chen, B.; Zhou, M.; Lan, H.; Gao, F. Photoacoustic image classification and segmentation of breast cancer: A feasibility study. IEEE Access 2018, 7, 5457–5466. [Google Scholar] [CrossRef]
  72. Jacques, S.L. Coupling 3D Monte Carlo light transport in optically heterogeneous tissues to photoacoustic signal generation. Photoacoustics 2014, 2, 137–142. [Google Scholar] [CrossRef] [Green Version]
  73. Ni, R.Q.; Chen, Z.Y.; Dean-Ben, X.L.; Voigt, F.F.; Kirschenbaum, D.; Shi, G.; Villois, A.; Zhou, Q.Y.; Crimi, A.; Arosio, P.; et al. Multiscale optical and optoacoustic imaging of amyloid-beta deposits in mice. Nat. Biomed. Eng. 2022, 6, 1031–1034. [Google Scholar] [CrossRef]
  74. Chen, Z.Y.; Zhou, Q.Y.; Dean-Ben, X.L.; Gezginer, I.; Ni, R.Q.; Reiss, M.; Shoham, S.; Razansky, D. Multimodal Noninvasive Functional Neurophotonic Imaging of Murine Brain-Wide Sensory Responses. Adv. Sci. 2022, 9, 2105588. [Google Scholar] [CrossRef]
  75. Gottschalk, S.; Fehm, T.F.; Dean-Ben, X.L.; Tsytsarev, V.; Razansky, D. Correlation between volumetric oxygenation responses and electrophysiology identifies deep thalamocortical activity during epileptic seizures. Neurophotonics 2017, 4, 011007. [Google Scholar] [CrossRef]
  76. Chen, Z.; Deán-Ben, X.L.; Liu, N.; Gujrati, V.; Gottschalk, S.; Ntziachristos, V.; Razansky, D. Concurrent fluorescence and volumetric optoacoustic tomography of nanoagent perfusion and bio-distribution in solid tumors. Biomed. Opt. Express 2019, 10, 5093–5102. [Google Scholar] [CrossRef]
  77. Deán-Ben, X.L.; Stiel, A.C.; Jiang, Y.; Ntziachristos, V.; Westmeyer, G.G.; Razansky, D. Light fluence normalization in turbid tissues via temporally unmixed multispectral optoacoustic tomography. Opt. Lett. 2015, 40, 4691–4694. [Google Scholar] [CrossRef]
  78. Hussain, A.; Daoudi, K.; Hondebrink, E.; Steenbergen, W. Mapping optical fluence variations in highly scattering media by measuring ultrasonically modulated backscattered light. J. Biomed. Opt. 2014, 19, 066002. [Google Scholar] [CrossRef] [Green Version]
  79. Deán-Ben, X.L.; Buehler, A.; Razansky, D.; Ntziachristos, V. Estimation of optoacoustic contrast agent concentration with self-calibration blind logarithmic unmixing. Phys. Med. Biol. 2014, 59, 4785. [Google Scholar] [CrossRef]
  80. Harrison, T.; Shao, P.; Zemp, R.J. A least-squares fixed-point iterative algorithm for multiple illumination photoacoustic tomography. Biomed. Opt. Express 2013, 4, 2224–2230. [Google Scholar] [CrossRef] [Green Version]
  81. Cox, B.T.; Arridge, S.R.; Kostli, K.P.; Beard, P.C. Two-dimensional quantitative photoacoustic image reconstruction of absorption distributions in scattering media by use of a simple iterative method. Appl. Opt. 2006, 45, 1866–1875. [Google Scholar] [CrossRef]
  82. Zhou, X.; Akhlaghi, N.; Wear, K.A.; Garra, B.S.; Pfefer, T.J.; Vogt, W.C. Evaluation of fluence correction algorithms in multispectral photoacoustic imaging. Photoacoustics 2020, 19, 100181. [Google Scholar] [CrossRef]
  83. Schweiger, M.; Arridge, S. The Toast++ software suite for forward and inverse modeling in optical tomography. J. Biomed. Opt. 2014, 19, 040801. [Google Scholar] [CrossRef] [Green Version]
  84. Madasamy, A.; Gujrati, V.; Ntziachristos, V.; Prakash, J. Deep learning methods hold promise for light fluence compensation in three-dimensional optoacoustic imaging. J. Biomed. Opt. 2022, 27, 106004. [Google Scholar] [CrossRef]
  85. Treeby, B.E.; Varslot, T.K.; Zhang, E.Z.; Laufer, J.G.; Beard, P.C. Automatic sound speed selection in photoacoustic image reconstruction using an autofocus approach. J. Biomed. Opt. 2011, 16, 090501–090503. [Google Scholar] [CrossRef] [Green Version]
  86. Yoon, C.; Kang, J.; Han, S.; Yoo, Y.; Song, T.-K.; Chang, J.H. Enhancement of photoacoustic image quality by sound speed correction: Ex vivo evaluation. Opt. Express 2012, 20, 3082–3090. [Google Scholar] [CrossRef]
  87. Mandal, S.; Nasonova, E.; Deán-Ben, X.L.; Razansky, D. Optimal self-calibration of tomographic reconstruction parameters in whole-body small animal optoacoustic imaging. Photoacoustics 2014, 2, 128–136. [Google Scholar] [CrossRef] [Green Version]
  88. Dehner, C.; Zahnd, G.; Ntziachristos, V.; Jüstel, D. DeepMB: Deep neural network for real-time model-based optoacoustic image reconstruction with adjustable speed of sound. arXiv 2022, arXiv:2206.14485. [Google Scholar]
  89. Jeon, S.; Kim, C. Deep learning-based speed of sound aberration correction in photoacoustic images. In Proceedings of the Photons Plus Ultrasound: Imaging and Sensing, San Francisco, CA, USA, 2–5 February 2020; Volume 11240. [Google Scholar]
  90. Cai, C.; Wang, X.; Si, K.; Qian, J.; Luo, J.; Ma, C. Feature coupling photoacoustic computed tomography for joint reconstruction of initial pressure and sound speed in vivo. Biomed. Opt. Express 2019, 10, 3447–3462. [Google Scholar] [CrossRef] [PubMed]
  91. Shan, H.; Wiedeman, C.; Wang, G.; Yang, Y. Simultaneous reconstruction of the initial pressure and sound speed in photoacoustic tomography using a deep-learning approach. In Proceedings of the Novel Optical Systems Methods and Applications XXII, San Diego, CA, USA, 13–14 August 2019; pp. 18–27. [Google Scholar]
  92. Singhvi, A.; Wang, M.L.; Fitzpatrick, A.; Arbabian, A. Multi-task learning for simultaneous speed-of-sound mapping and image reconstruction using non-contact thermoacoustics. In Proceedings of the 2021 IEEE International Ultrasonics Symposium (IUS), Venice, Italy, 11–16 September 2021; pp. 1–5. [Google Scholar]
  93. Jush, F.K.; Biele, M.; Dueppenbecker, P.M.; Maier, A. AutoSpeed: A Linked Autoencoder Approach for Pulse-Echo Speed-of-Sound Imaging for Medical Ultrasound. arXiv 2022, arXiv:2207.02392. [Google Scholar]
  94. Jush, F.K.; Biele, M.; Dueppenbecker, P.M.; Maier, A. Deep Learning for Ultrasound Speed-of-Sound Reconstruction: Impacts of Training Data Diversity on Stability and Robustness. arXiv 2022, arXiv:2202.01208. [Google Scholar]
  95. Olefir, I.; Tzoumas, S.; Restivo, C.; Mohajerani, P.; Xing, L.; Ntziachristos, V. Deep Learning-Based Spectral Unmixing for Optoacoustic Imaging of Tissue Oxygen Saturation. IEEE Trans Med. Imaging 2020, 39, 3643–3654. [Google Scholar] [CrossRef]
  96. Tzoumas, S.; Nunes, A.; Olefir, I.; Stangl, S.; Symvoulidis, P.; Glasl, S.; Bayer, C.; Multhoff, G.; Ntziachristos, V. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues. Nat. Commun. 2016, 7, 12121. [Google Scholar] [CrossRef]
  97. Tzoumas, S.; Deliolanis, N.C.; Morscher, S.; Ntziachristos, V. Unmixing molecular agents from absorbing tissue in multispectral optoacoustic tomography. IEEE Trans. Med. Imaging 2013, 33, 48–60. [Google Scholar] [CrossRef]
  98. Xia, J.; Danielli, A.; Liu, Y.; Wang, L.; Maslov, K.; Wang, L.V. Calibration-free quantification of absolute oxygen saturation based on the dynamics of photoacoustic signals. Opt. Lett. 2013, 38, 2800–2803. [Google Scholar] [CrossRef] [Green Version]
  99. Craig, M.D. Minimum-volume transforms for remotely sensed data. IEEE Trans. Geosci. Remote Sens. 1994, 32, 542–552. [Google Scholar] [CrossRef]
  100. Durairaj, D.A.; Agrawal, S.; Johnstonbaugh, K.; Chen, H.Y.; Karri, P.K.; Kothapalli, S.R. Unsupervised deep learning approach for Photoacoustic spectral unmixing. In Proceedings of the Conference on Photons Plus Ultrasound—Imaging and Sensing 2020, San Francisco, CA, USA, 2–5 February 2020. [Google Scholar]
  101. An, L.; Cox, B.T. Estimating relative chromophore concentrations from multiwavelength photoacoustic images using independent component analysis. J. Biomed. Opt. 2018, 23, 076007. [Google Scholar] [CrossRef] [Green Version]
  102. An, L.; Cox, B. Independent component analysis for unmixing multi-wavelength photoacoustic images. In Proceedings of the Photons Plus Ultrasound: Imaging and Sensing 2016, San Francisco, CA, USA, 14–17 February 2016; Volume 9708, pp. 886–893. [Google Scholar]
  103. Deán-Ben, X.L.; Deliolanis, N.C.; Ntziachristos, V.; Razansky, D. Fast unmixing of multispectral optoacoustic data with vertex component analysis. Opt. Lasers Eng. 2014, 58, 119–125. [Google Scholar] [CrossRef]
  104. Glatz, J.; Deliolanis, N.C.; Buehler, A.; Razansky, D.; Ntziachristos, V. Blind source unmixing in multi-spectral optoacoustic tomography. Opt. Express 2011, 19, 3175–3184. [Google Scholar] [CrossRef]
  105. Cen, L.-P.; Ji, J.; Lin, J.-W.; Ju, S.-T.; Lin, H.-J.; Li, T.-P.; Wang, Y.; Yang, J.-F.; Liu, Y.-F.; Tan, S. Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks. Nat. Commun. 2021, 12, 4828. [Google Scholar] [CrossRef] [PubMed]
  106. Pachade, S.; Porwal, P.; Thulkar, D.; Kokare, M.; Deshmukh, G.; Sahasrabuddhe, V.; Giancardo, L.; Quellec, G.; Mériaudeau, F. Retinal fundus multi-disease image dataset (rfmid): A dataset for multi-disease detection research. Data 2021, 6, 14. [Google Scholar] [CrossRef]
  107. Lou, Y.; Zhou, W.; Matthews, T.P.; Appleton, C.M.; Anastasio, M.A. Generation of anatomically realistic numerical phantoms for photoacoustic and ultrasonic breast imaging. J. Biomed. Opt. 2017, 22, 041015. [Google Scholar] [CrossRef]
  108. Li, F.; Villa, U.; Park, S.; Anastasio, M.A. 3-D stochastic numerical breast phantoms for enabling virtual imaging trials of ultrasound computed tomography. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2021, 69, 135–146. [Google Scholar] [CrossRef]
  109. Li, J.; Wang, C.; Chen, T.; Lu, T.; Li, S.; Sun, B.; Gao, F.; Ntziachristos, V. Deep learning-based quantitative optoacoustic tomography of deep tissues in the absence of labeled experimental data. Optica 2022, 9, 32–41. [Google Scholar] [CrossRef]
  110. Brochu, F.M.; Brunker, J.; Joseph, J.; Tomaszewski, M.R.; Morscher, S.; Bohndiek, S.E. Towards quantitative evaluation of tissue absorption coefficients using light fluence correction in optoacoustic tomography. IEEE Trans. Med. Imaging 2016, 36, 322–331. [Google Scholar] [CrossRef] [Green Version]
  111. Huang, C.; Nie, L.; Schoonover, R.W.; Guo, Z.; Schirra, C.O.; Anastasio, M.A.; Wang, L.V. Aberration correction for transcranial photoacoustic tomography of primates employing adjunct image data. J. Biomed. Opt. 2012, 17, 066016. [Google Scholar] [CrossRef]
  112. Dean-Ben, X.L.; Ma, R.; Razansky, D.; Ntziachristos, V. Statistical approach for optoacoustic image reconstruction in the presence of strong acoustic heterogeneities. IEEE Trans. Med. Imaging 2010, 30, 401–408. [Google Scholar] [CrossRef]
  113. Deán-Ben, X.L.; Ntziachristos, V.; Razansky, D. Effects of small variations of speed of sound in optoacoustic tomographic imaging. Med. Phys. 2014, 41, 073301. [Google Scholar] [CrossRef]
  114. Xu, Y.; Wang, L. Effects of Acoustic Heterogeneity in Breast Thermoacoustic Tomography. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2003, 50, 1134–1146. [Google Scholar]
  115. Lafci, B.; Merčep, E.; Herraiz, J.L.; Deán-Ben, X.L.; Razansky, D. Transmission-reflection optoacoustic ultrasound (TROPUS) imaging of mammary tumors. In Proceedings of the Photons Plus Ultrasound: Imaging and Sensing, Online, 6–11 March 2021; pp. 192–197. [Google Scholar]
  116. Modgil, D.; Anastasio, M.A.; La Rivière, P.J. Image reconstruction in photoacoustic tomography with variable speed of sound using a higher-order geometrical acoustics approximation. J. Biomed. Opt. 2010, 15, 021308. [Google Scholar] [CrossRef] [Green Version]
  117. Pattyn, A.; Mumm, Z.; Alijabbari, N.; Duric, N.; Anastasio, M.A.; Mehrmohammadi, M. Model-based optical and acoustical compensation for photoacoustic tomography of heterogeneous mediums. Photoacoustics 2021, 23, 100275. [Google Scholar] [CrossRef]
  118. Benjamin, A.; Ely, G.; Anthony, B.W. 2D speed of sound mapping using a multilook reflection ultrasound tomography framework. Ultrasonics 2021, 114, 106393. [Google Scholar] [CrossRef]
  119. Jin, X.; Wang, L.V. Thermoacoustic tomography with correction for acoustic speed variations. Phys. Med. Biol. 2006, 51, 6437. [Google Scholar] [CrossRef]
  120. Xia, J.; Huang, C.; Maslov, K.; Anastasio, M.A.; Wang, L.V. Enhancement of photoacoustic tomography by ultrasonic computed tomography based on optical excitation of elements of a full-ring transducer array. Opt. Lett. 2013, 38, 3140–3143. [Google Scholar] [CrossRef] [Green Version]
  121. Ding, L.; Deán-Ben, X.L.; Burton, N.C.; Sobol, R.W.; Ntziachristos, V.; Razansky, D. Constrained inversion and spectral unmixing in multispectral optoacoustic tomography. IEEE Trans. Med. Imaging 2017, 36, 1676–1685. [Google Scholar] [CrossRef] [Green Version]
  122. Dolet, A.; Varray, F.; Mure, S.; Grenier, T.; Liu, Y.; Yuan, Z.; Tortoli, P.; Vray, D. Spatial and spectral regularization for multispectral photoacoustic image clustering. In Proceedings of the 2016 IEEE International Ultrasonics Symposium (IUS), Tours, France, 18–21 September 2016; pp. 1–4. [Google Scholar]
  123. Tzoumas, S.; Nunes, A.; Deliolanis, N.C.; Ntziachristos, V. Effects of multispectral excitation on the sensitivity of molecular optoacoustic imaging. J. Biophotonics 2015, 8, 629–637. [Google Scholar] [CrossRef]
  124. Taruttis, A.; Rosenthal, A.; Kacprowicz, M.; Burton, N.C.; Ntziachristos, V. Multiscale multispectral optoacoustic tomography by a stationary wavelet transform prior to unmixing. IEEE Trans. Med. Imaging 2014, 33, 1194–1202. [Google Scholar] [CrossRef] [PubMed]
  125. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Commun. Acm 2020, 63, 139–144. [Google Scholar] [CrossRef]
  126. Asim, M.; Shamshad, F.; Ahmed, A. Blind Image Deconvolution Using Deep Generative Priors. IEEE Trans. Comput. Imaging 2020, 6, 1493–1506. [Google Scholar] [CrossRef]
  127. Pan, J.S.; Dong, J.X.; Liu, Y.; Zhang, J.W.; Ren, J.M.; Tang, J.H.; Tai, Y.W.; Yang, M.H. Physics-Based Generative Adversarial Models for Image Restoration and Beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 2449–2462. [Google Scholar] [CrossRef] [Green Version]
  128. Chen, Z.; Zeng, Z.; Shen, H.; Zheng, X.; Dai, P.; Ouyang, P. DN-GAN: Denoising generative adversarial networks for speckle noise reduction in optical coherence tomography images. Biomed. Signal Process. Control 2020, 55, 101632. [Google Scholar] [CrossRef]
  129. Ren, D.; Zhang, K.; Wang, Q.; Hu, Q.; Zuo, W. Neural blind deconvolution using deep priors. In Proceedings of the Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 3341–3350. [Google Scholar]
  130. Chen, L.; Bentley, P.; Mori, K.; Misawa, K.; Fujiwara, M.; Rueckert, D. Self-supervised learning for medical image analysis using image context restoration. Med. Image Anal. 2019, 58, 101539. [Google Scholar] [CrossRef]
  131. Li, X.; Zhang, S.; Wu, J.; Huang, S.; Feng, Q.; Qi, L.; Chen, W. Multispectral interlaced sparse sampling photoacoustic tomography. IEEE Trans. Med. Imaging 2020, 39, 3463–3474. [Google Scholar] [CrossRef]
  132. Li, X.; Ge, J.; Zhang, S.; Wu, J.; Qi, L.; Chen, W. Multispectral interlaced sparse sampling photoacoustic tomography based on directional total variation. Comput. Methods Programs Biomed. 2022, 214, 106562. [Google Scholar] [CrossRef]
  133. Ermolayev, V.; Dean-Ben, X.L.; Mandal, S.; Ntziachristos, V.; Razansky, D. Simultaneous visualization of tumour oxygenation, neovascularization and contrast agent perfusion by real-time three-dimensional optoacoustic tomography. Eur. Radiol. 2016, 26, 1843–1851. [Google Scholar] [CrossRef]
  134. Wang, B.; Xiang, L.Z.; Jiang, M.S.; Yang, J.J.; Zhang, Q.Z.; Carney, P.R.; Jiang, H.B. Photoacoustic tomography system for noninvasive real-time three-dimensional imaging of epilepsy. Biomed. Opt. Express 2012, 3, 1427–1432. [Google Scholar] [CrossRef] [Green Version]
  135. Lam, R.B.; Kruger, R.A.; Reinecke, D.R.; DelRio, S.P.; Thornton, M.M.; Picot, P.A.; Morgan, T.G. Dynamic optical angiography of mouse anatomy using radial projections. In Proceedings of the Photons Plus Ultrasound: Imaging and Sensing 2010, San Francisco, CA, USA, 24–26 January 2010; Volume 7564, pp. 38–44. [Google Scholar]
  136. Fehm, T.F.; Deán-Ben, X.L.; Ford, S.J.; Razansky, D. In vivo whole-body optoacoustic scanner with real-time volumetric imaging capacity. Optica 2016, 3, 1153–1159. [Google Scholar] [CrossRef]
  137. Choi, S.; Yang, J.; Lee, S.Y.; Kim, J.; Lee, J.; Kim, W.J.; Lee, S.; Kim, C. Deep Learning Enhances Multiparametric Dynamic Volumetric Photoacoustic Computed Tomography In Vivo (DL-PACT). Adv. Sci. 2023, 10, 2202089. [Google Scholar] [CrossRef]
  138. Lv, J.; Peng, Y.; Li, S.; Guo, Z.; Zhao, Q.; Zhang, X.; Nie, L. Hemispherical photoacoustic imaging of myocardial infarction: In Vivo detection and monitoring. Eur. Radiol. 2018, 28, 2176–2183. [Google Scholar] [CrossRef]
  139. Gröhl, J.; Hacker, L.; Cox, B.T.; Dreher, K.K.; Morscher, S.; Rakotondrainibe, A.; Varray, F.; Yip, L.C.; Vogt, W.C.; Bohndiek, S.E. The IPASC data format: A consensus data format for photoacoustic imaging. Photoacoustics 2022, 26, 100339. [Google Scholar] [CrossRef]
  140. Allman, D.; Reiter, A.; Bell, M.A.L. Photoacoustic Source Detection and Reflection Artifact Deep Learning Dataset, IEEE Dataport. 2018. Available online: https://ieee-dataport.org/open-access/photoacoustic-source-detection-and-reflection-artifact-deep-learning-dataset (accessed on 6 June 2023).
  141. Cho, S.; Baik, J.; Managuli, R.; Kim, C. 3D PHOVIS: 3D Photoacoustic Visualization Studio. Photoacoustics 2020, 18, 100168. [Google Scholar] [CrossRef]
  142. Else, T. Patato: PhotoAcoustic Tomography Analysis TOolkit. 2023. Available online: https://patato.readthedocs.io/en/latest/?badge=latest (accessed on 6 June 2023).
Figure 1. (a) PAT imaging principle. (b) PAT signal and image processing flow. (c) Categories of PAT image post-processing techniques.
Figure 2. (A) Simulated comparison of reconstructed images with limited bandwidth (c) and full bandwidth (b) for target image (a). BW, bandwidth. Reproduced with permission [9]. (B) Examples of vessel averaging images. (a) Ground-truth, (b) synthetic misaligned image. Reproduced with permission [10].
Figure 3. (A) Low resolution image of six optical absorbers due to SOS heterogeneities and acoustic attenuation. Reproduced with permission [16]. (B) Reflection artifact in PAI. (a) A deep reflector leads to reflection of US waves. (b) An acquired PA image of a phantom. Reproduced with permission from [17].
Figure 4. (A) Splitting artifact in PACT. (a) A monkey brain model. White dots encircling skull show locations of a 512-element ring detector. (b) Reconstructed images using BP (back projection) algorithm. (c,d) Corresponding close-ups of images in red dashed box. Reproduced with permission from [23]. (B) Reconstructed cross-sectional PAT images of vessel mimicking targets processed via DAS algorithm under different conditions. Reproduced with permission [9]. (C) Anatomical imaging performance of MB algorithm for in vivo datasets. (a) The spine image. (b) The tumor image. Red arrow indicates spine. White arrows in side insets indicate typical reflection artifacts. Reproduced with permission [24].
Figure 5. (A) Correction result in an in vivo imaging experiment. (a) An acquired PAT image of a finger. (b) Corrected image. Reproduced with permission from [17]. (B) PAT image of the neck, (a) prior to DC (distortion compensation) processing and (b) post-DC. Reproduced with permission from [28]. (C) PAT images without motion correction (left) and with correction (right). Reproduced with permission from [29]. (D) In vivo human finger imaging experiment showing the performance of the envelope detection and forced zeroing methods. (a) Photograph of a human finger with a red dashed line indicating the imaging cross section. (b) Corresponding raw bipolar image reconstructed via BP. (c) Result of (b) processed via envelope detection. (d) Result of (b) processed via forced zeroing. Reproduced with permission from [11].
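For readers unfamiliar with the two negativity-removal strategies compared in Figure 5D, the following minimal Python sketch illustrates both on a reconstructed bipolar image. It assumes image rows correspond to depth; `bipolar_img` is a random placeholder standing in for an actual BP reconstruction, and the Hilbert-transform envelope shown is the standard textbook implementation, not necessarily the exact variant used in [11].

```python
import numpy as np
from scipy.signal import hilbert

# Placeholder bipolar PAT image (depth x lateral); stands in for a BP reconstruction.
rng = np.random.default_rng(0)
bipolar_img = rng.standard_normal((256, 128))

# Envelope detection: magnitude of the analytic signal along the depth axis.
envelope_img = np.abs(hilbert(bipolar_img, axis=0))

# Forced zeroing: clamp the physically implausible negative values to zero.
zeroed_img = np.clip(bipolar_img, 0.0, None)
```

Envelope detection yields a smooth, unipolar amplitude image, whereas forced zeroing preserves the positive lobes unchanged; Figure 5D contrasts exactly these two behaviors.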
Figure 6. (A) Restoration of a PAT image of a cancerous mouse. Original PAT image (left) and its deconvolution result with spatially variant PSFs (right). Reproduced with permission from [31]. (B) Reconstruction (Step 1) and corresponding model resolution matrix-based deconvolution (Step 2) using Quadratic and Geman–McClure penalty functions. Blue and green arrows: artifact reduction. Red arrows: contrast improvement. Reproduced with permission from [33]. (C) Reconstructed PAT image (left) and guided filter result (right) of a numerical blood vessel phantom using TV. Reproduced with permission from [32].
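As a point of reference for the deconvolution results in Figure 6, the sketch below shows a simple spatially invariant Richardson–Lucy deblurring step. It is a toy stand-in, assuming a single known Gaussian PSF and scikit-image >= 0.19, whereas [31] measures and applies spatially variant PSFs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

# Synthetic "vessel" image blurred by a Gaussian PSF (stand-in for system blurring).
img = np.zeros((128, 128))
img[60:68, 20:108] = 1.0
blurred = gaussian_filter(img, sigma=2.0)

# Build the matching 15x15 Gaussian PSF by blurring a delta function.
psf = np.zeros((15, 15))
psf[7, 7] = 1.0
psf = gaussian_filter(psf, sigma=2.0)
psf /= psf.sum()

# Richardson-Lucy deconvolution with the (assumed known) PSF.
restored = richardson_lucy(blurred, psf, num_iter=30)
```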
Figure 7. (A) Representative segmentation results of head, lung, liver, abdomen, and sacrum images of healthy mice using a 3-D graph-based segmentation method. Reproduced with permission from [38]. (B) (a) Original PAT image with blood vessels (blue arrows) and reflection (yellow arrows). (b) Thresholding and peak-processed segmented image. Reproduced with permission from [17]. (C) Comparison of results obtained for four inclusions in a phantom with and without ATS (automatic threshold selection). Reproduced with permission from [39]. (D) Original PAT image (top) and results of the vessel segmentation methodology (bottom). Reproduced with permission from [37].
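To make the threshold-based segmentation in Figure 7B,C concrete, here is a minimal global-threshold example using Otsu's method. It is only a generic stand-in on synthetic data, not the automatic threshold selection (ATS) algorithm of [39], which adapts the threshold to distinguish a tissue chromophore from the background.

```python
import numpy as np
from skimage.filters import threshold_otsu

# Placeholder PAT amplitude image: a bright "chromophore" patch on noisy background.
rng = np.random.default_rng(1)
img = rng.normal(0.1, 0.05, (128, 128))
img[40:60, 40:60] += 0.8

# Global Otsu threshold separating chromophore pixels from background.
mask = img > threshold_otsu(img)
```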
Figure 8. (A) (a) Acquired PAT image. (b) Echo ultrasound image. (c) Registration image. Reproduced with permission from [28]. (B) MRI-guided PAT image restoration results at the neck position. The solid red box regions before and after restoration are shown at the bottom. rMRI: registered MR image; PAT: raw PAT image; rcPAT: image restored via the proposed method. Reproduced with permission from [56].
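Multimodal alignments such as the PAT-US and PAT-MRI registrations in Figure 8 are typically driven by an intensity similarity metric, with mutual information being a common choice for images of different contrast. The sketch below estimates mutual information from a joint histogram; it is a minimal illustration of the metric only, not the registration pipelines of [28] or [56].

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Histogram-based mutual information between two images of equal shape."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px, py = pxy.sum(axis=1), pxy.sum(axis=0) # marginals
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

A registration loop would transform one image over candidate poses and keep the pose maximizing this score.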
Figure 9. (A) Initial pressure (left), recovered absorption coefficients (center) via QOAT-Net, and zoomed images (right) of an ex vivo mouse liver. Reproduced with permission from [109]. (B) Light fluence distribution estimated at four positions (head, chest, abdomen, and sacrum) via 3D fluence simulation. Reproduced with permission from [38]. (C) PAT light fluence correction using MRI information. Prior: manual segmentation based on the MRI image. Reproduced with permission from [59].
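The simplest model-based fluence correction underlying results like Figure 9 divides the reconstructed initial pressure by an estimated fluence map: since p0 = Gamma * mu_a * phi, one obtains mu_a ~ p0 / (Gamma * phi). The sketch below uses a one-dimensional exponential (Beer–Lambert-style) depth decay with assumed, illustrative parameter values; this is far cruder than the 3D simulations in [38] or the MRI-informed correction in [59], and is shown only to convey the principle.

```python
import numpy as np

# Assumed illustrative parameters: effective attenuation (1/m), pixel size (m),
# and Grueneisen parameter (dimensionless).
depth_px, mu_eff, dz, gruneisen = 256, 100.0, 1e-4, 0.2
p0 = np.ones((depth_px, 128))  # placeholder reconstructed initial pressure

# Homogeneous exponential fluence model: phi(z) = exp(-mu_eff * z).
z = np.arange(depth_px) * dz
phi = np.exp(-mu_eff * z)[:, None]

# Fluence-corrected (relative) absorption estimate.
mu_a = p0 / (gruneisen * phi)
```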
Figure 10. (A) (a) Autofocus SOS selection under three focus metrics. (b) Defocused image reconstructed using a sound speed overestimated by 5%. (c) Focused image reconstructed using the optimized sound speed. Reproduced with permission from [85]. (B) In vivo images of a nude mouse trunk. (a,b) are PAT images corresponding to the initial SOS distributions, and (c,d) are PAT images corresponding to the final SOS distributions. (a,c) correspond to one of the liver sections, while (b,d) correspond to one of the kidney sections. (e) presents the segmentation scheme used in finding (d) (I: intermediate tissue, II: kidney, III: bowel). (f) shows zoomed-in views of the subdomains in (a–d), labeled by colors and types of borderlines. Scale bars: 5 mm. AA, abdominal aorta; BM, backbone muscles; IN, intestines; IVC, inferior vena cava; KD, kidneys; LV, lobes of liver; PV, portal vein; SC, spinal cord; SP, spleen; SV, superficial vessels. Reproduced with permission from [90].
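The autofocus idea in Figure 10A can be summarized as: reconstruct the same data at a sweep of candidate sound speeds and keep the speed that maximizes an image sharpness metric. The sketch below shows the search loop with a Brenner-style gradient metric. Here `reconstruct` is a hypothetical stand-in (a blur whose width mimics defocus from SOS error), since a full DAS/BP reconstructor is out of scope; the specific focus metrics evaluated in [85] may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpness(img: np.ndarray) -> float:
    """Brenner-style focus metric: energy of the axial intensity gradient."""
    return float(np.sum((img[2:, :] - img[:-2, :]) ** 2))

def reconstruct(sos: float, true_sos: float = 1500.0) -> np.ndarray:
    """Hypothetical stand-in: SOS mismatch defocuses (blurs) a synthetic target."""
    img = np.zeros((128, 128))
    img[60:68, 60:68] = 1.0
    return gaussian_filter(img, sigma=0.1 + 0.5 * abs(sos - true_sos))

candidates = np.arange(1400.0, 1601.0, 5.0)  # candidate sound speeds in m/s
best_sos = max(candidates, key=lambda c: sharpness(reconstruct(c)))
```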
Figure 11. (A) eMSOT tissue sO2 estimation and linear unmixing result (top). Normalized spectra, spectral fitting, and sO2 values of linear unmixing (upper row) and eMSOT (lower row) for three points (bottom). Reproduced with permission from [96]. (B) Respective source components calculated for ICG and Cy7 inclusions and tissue background with three different unmixing methods. Reproduced with permission from [104].
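Linear spectral unmixing, the baseline against which eMSOT is compared in Figure 11A, models each pixel's multiwavelength signal as a non-negative combination of known chromophore spectra and solves the resulting least-squares problem pixel by pixel. The sketch below uses non-negative least squares with placeholder spectra; the matrix entries are illustrative numbers only, not tabulated extinction coefficients.

```python
import numpy as np
from scipy.optimize import nnls

# Placeholder absorption spectra at 4 wavelengths for 2 chromophores
# (e.g., HbO2 and Hb). Values are illustrative, not real extinction coefficients.
A = np.array([[0.9, 0.3],
              [0.7, 0.5],
              [0.4, 0.8],
              [0.2, 1.0]])

# Per-pixel multiwavelength measurements: shape (wavelengths, pixels).
pixels = np.abs(np.random.default_rng(2).standard_normal((4, 1000)))

# Solve min ||A c - p||^2 subject to c >= 0 for each pixel.
conc = np.stack([nnls(A, pixels[:, i])[0] for i in range(pixels.shape[1])], axis=1)
# so2 = conc[0] / conc.sum(axis=0)  # oxygen saturation if rows are (HbO2, Hb)
```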
Table 1. Main limiting factors of PAT imaging quality. In the full table, a check mark (√) links each limiting factor (row) to the manifestations it produces (columns: structural distortion, spatial aliasing, negative values, reflection artifacts, clutter, and noise). The factors fall into three groups:

Limitations in hardware: poor illumination; limited view; limited bandwidth; sparse sampling; motion.
Limitations in tissue: optical attenuation; out-of-plane absorption; acoustic attenuation; acoustic heterogeneity.
Limitations in algorithms: inappropriate algorithms.
Table 2. A brief summary of general image post-processing methods for PAT.

Methods | Category | Advantages | Major Limitations
Artifact suppression | Identify absorbers and artifacts [11,17,25,26,27,28,29] | Capable of handling artifacts stemming from multiple causes | Lack of real reference images for validation
Deconvolution | Estimation of PSFs [30,31] | Simple and intuitive | PSF estimation error
Deconvolution | Developing the model matrix [32,33] (a guided-filter sketch follows the table) | Can be combined with image reconstruction algorithms | Expensive computation
Deconvolution | Data-driven [34,35,36] | Wide range of applications | Lack of labeled training datasets
Segmentation | Threshold [17,37,38,39] | Simple and efficient | Lacks robustness
Segmentation | Boundary [38,40,41,42,43] | Considers small variations in tissue properties | Limited by the low structural contrast of PAT
Multimodal imaging | PAT-US [28,44,45,46,47,48,49] | High adaptability | Noisy US images
Multimodal imaging | PAT-MRI [50,51,52,53,54,55,56,57,58,59] | High structural contrast provided by MRI | High cost; difficult to integrate
Multimodal imaging | Multimodal image fusion [60] | Abundant information from other imaging modalities | Difficult image registration
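Among the deconvolution-adjacent methods in Table 2, image-guided filtering [32] smooths a PAT reconstruction while transferring edges from a guidance image (for example, a co-registered reconstruction of higher structural fidelity). A minimal numpy implementation of the classic guided image filter is sketched below; the radius and regularization values are arbitrary choices, and [32] builds a more elaborate reconstruction-coupled pipeline around this idea.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide: np.ndarray, src: np.ndarray,
                  radius: int = 4, eps: float = 1e-3) -> np.ndarray:
    """Edge-preserving smoothing of `src`, steered by the edges of `guide`."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)   # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```

Larger `eps` yields stronger smoothing; smaller `eps` preserves more of the guide's edge structure.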
Table 3. A brief summary of PAT-specific image post-processing methods.

Methods | Category | Advantages | Major Limitations
Light fluence correction | Model-based correction [77,78,79,80,81,82,83] | Easy implementation | Sensitive to optical property uncertainty
Light fluence correction | Data-driven correction [84] | Accounts for real-world variation | Requires ground-truth data
Light fluence correction | Segmentation-based correction [38,59,84] | Considers optical properties of heterogeneous tissue | Limited by segmentation accuracy
Acoustic correction | Single SOS selection [85,86,87,88,89] | Easy implementation | Assumes a uniform SOS
Acoustic correction | Heterogeneous SOS correction [90,91,92,93,94] | High accuracy | High computational cost and low algorithm stability
Spectral unmixing | Linear unmixing model [95,96,97,98,99] | Easy implementation | Requires known absorber composition
Spectral unmixing | Blind-source spectral unmixing [100,101,102,103,104] (an ICA sketch follows the table) | Accounts for unknown biological tissue composition and spectral characteristics | Unstable performance
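For the blind-source row of Table 3, independent component analysis [101,102] recovers chromophore maps without known spectra by treating each wavelength image as a mixture of statistically independent sources. Below is a minimal sketch with scikit-learn's FastICA, assuming a multiwavelength image stack of shape (wavelengths, height, width) and an assumed two-component mixture; real data would replace the random placeholder.

```python
import numpy as np
from sklearn.decomposition import FastICA

n_wl, h, w = 8, 64, 64
stack = np.abs(np.random.default_rng(3).standard_normal((n_wl, h, w)))  # placeholder

# Treat each pixel's spectrum as one observation: shape (pixels, wavelengths).
X = stack.reshape(n_wl, -1).T

# Unmix into 2 assumed independent components (e.g., HbO2-like and Hb-like maps).
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(X).T.reshape(2, h, w)
```

Note that ICA returns components in arbitrary order and scale, which is one reason blind unmixing performance can be unstable, as the table indicates.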