Novel Deep Learning Approaches for Photoacoustic Imaging and Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 3057

Special Issue Editors

Dr. Cheng Ma
Guest Editor
Associate Professor, Department of Electronic Engineering, Tsinghua University, Beijing, China
Interests: deep tissue optical imaging; photoacoustic imaging; wavefront shaping

Prof. Dr. Ben Cox
Guest Editor
Department of Medical Physics and Biomedical Engineering, University College London, London, UK
Interests: photoacoustic imaging and biomedical ultrasound

Special Issue Information

Dear Colleagues,

Photoacoustic imaging (PAI) is an emerging imaging modality that combines optical contrast with acoustic depth of penetration, breaking both the depth limit of ballistic optical imaging and the resolution limit of diffuse optical imaging. By using the acoustic waves generated in response to the absorption of pulsed laser light to reconstruct images at high resolution and great depth, PAI is very promising for a range of biomedical applications.

The PAI community has grown steadily. Over the past decade, we have witnessed continued developments in key components, reconstruction algorithms, quantification accuracy, novel contrast agents, and clinical applications of PAI. One of the most exciting research areas is the combination of deep learning (DL) and PAI for improved image quality, higher quantification accuracy, faster imaging speed, and reduced system cost. By leveraging ever-increasing computing power and image data, DL helps to bridge the gap between laboratory demonstrations and real-world applications and promises to accelerate the commercialization of PAI. It is anticipated that PAI will one day establish itself as a routine imaging modality in clinical practice. This Special Issue aims to provide a comprehensive collection of the latest advances in exploiting DL for better PAI performance.

We would like to cordially invite you to submit an article to this Special Issue. We welcome short communications, full research articles, and timely reviews.

Dr. Cheng Ma
Prof. Dr. Ben Cox
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • photoacoustic imaging and sensing
  • photoacoustic computed tomography
  • photoacoustic microscopy
  • deep learning
  • machine learning
  • neural network

Published Papers (2 papers)


Research

13 pages, 4130 KiB  
Article
Efficient Photoacoustic Image Synthesis with Deep Learning
by Tom Rix, Kris K. Dreher, Jan-Hinrich Nölke, Melanie Schellenberg, Minu D. Tizabi, Alexander Seitel and Lena Maier-Hein
Sensors 2023, 23(16), 7085; https://doi.org/10.3390/s23167085 - 10 Aug 2023
Cited by 1 | Viewed by 1285
Abstract
Photoacoustic imaging potentially allows for the real-time visualization of functional human tissue parameters such as oxygenation but is subject to a challenging underlying quantification problem. While in silico studies have revealed the great potential of deep learning (DL) methodology in solving this problem, the inherent lack of an efficient gold standard method for model training and validation remains a grand challenge. This work investigates whether DL can be leveraged to accurately and efficiently simulate photon propagation in biological tissue, enabling photoacoustic image synthesis. Our approach is based on estimating the initial pressure distribution of the photoacoustic waves from the underlying optical properties using a back-propagatable neural network trained on synthetic data. In proof-of-concept studies, we validated the performance of two complementary neural network architectures, namely a conventional U-Net-like model and a Fourier Neural Operator (FNO) network. Our in silico validation on multispectral human forearm images shows that DL methods can speed up image generation by a factor of 100 when compared to Monte Carlo simulations with 5 × 10⁸ photons. While the FNO is slightly more accurate than the U-Net, when compared to Monte Carlo simulations performed with a reduced number of photons (5 × 10⁶), both neural network architectures achieve equivalent accuracy. In contrast to Monte Carlo simulations, the proposed DL models can be used as inherently differentiable surrogate models in the photoacoustic image synthesis pipeline, allowing for back-propagation of the synthesis error and gradient-based optimization over the entire pipeline. Due to their efficiency, they have the potential to enable large-scale training data generation that can expedite the clinical application of photoacoustic imaging.
(This article belongs to the Special Issue Novel Deep Learning Approaches for Photoacoustic Imaging and Sensing)
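
For readers less familiar with differentiable surrogate forward models, the following Python (PyTorch) sketch illustrates the general idea described in the abstract above: a small U-Net-like network maps optical-property maps to an initial pressure estimate, and a synthesis error is back-propagated through it to the inputs. The layer sizes, tensor shapes, and variable names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class TinyUNetSurrogate(nn.Module):
    """Toy U-Net-like forward model mapping optical-property maps to initial pressure p0."""
    def __init__(self, in_ch=2, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(base, base, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = nn.Sequential(nn.Conv2d(base * 3, base, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(base, 1, 1))

    def forward(self, x):
        e1 = self.enc1(x)                       # full-resolution features
        e2 = self.enc2(self.down(e1))           # half-resolution features
        u = self.up(e2)                         # back to full resolution
        return self.dec(torch.cat([u, e1], 1))  # skip connection, then 1-channel p0 map

# Because the surrogate is differentiable, a synthesis error can be back-propagated
# all the way to the optical-property inputs (here: absorption and reduced scattering).
surrogate = TinyUNetSurrogate()
optical_props = torch.rand(1, 2, 64, 64, requires_grad=True)  # illustrative [mu_a, mu_s'] maps
p0_pred = surrogate(optical_props)                            # predicted initial pressure
target_p0 = torch.rand_like(p0_pred)                          # placeholder target image
loss = nn.functional.mse_loss(p0_pred, target_p0)
loss.backward()                                               # gradients reach optical_props
print(optical_props.grad.shape)                               # torch.Size([1, 2, 64, 64])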

13 pages, 2946 KiB  
Article
Mitigating Under-Sampling Artifacts in 3D Photoacoustic Imaging Using Res-UNet Based on Digital Breast Phantom
by Haoming Huo, Handi Deng, Jianpan Gao, Hanqing Duan and Cheng Ma
Sensors 2023, 23(15), 6970; https://doi.org/10.3390/s23156970 - 05 Aug 2023
Cited by 1 | Viewed by 1171
Abstract
In recent years, photoacoustic (PA) imaging has grown rapidly as a non-invasive screening technique for breast cancer detection using three-dimensional (3D) hemispherical arrays, owing to their large field of view. However, the development of breast imaging systems is hindered by a lack of patients and ground truth samples, as well as by under-sampling problems caused by high costs. Most research addressing these problems in the PA field has been based on 2D transducer arrays, or on simple regular-shaped phantoms or images from other modalities for 3D transducer arrays. We therefore demonstrate an effective method, based on a deep neural network (DNN), for removing under-sampling artifacts and reconstructing high-quality PA images using numerical digital breast simulations. We constructed 3D digital breast phantoms based on human anatomical structures and physical properties, which were then subjected to 3D Monte Carlo and k-Wave acoustic simulations to mimic acoustic propagation for hemispherical transducer arrays. Finally, we applied a 3D delay-and-sum reconstruction algorithm and a Res-UNet network to achieve higher resolution on sparsely sampled data. Our results indicate that when a 757 nm laser with uniform intensity distribution illuminates the numerical digital breast, the imaging depth can reach 3 cm with 0.25 mm spatial resolution. In addition, the proposed DNN can significantly enhance image quality by up to 78.4%, as measured by MS-SSIM, and reduce background artifacts by up to 19.0%, as measured by PSNR, even at an under-sampling ratio of 10%. The post-processing time for these improvements is only 0.6 s. This paper proposes a new real-time 3D DNN method addressing the sparse-sampling problem based on numerical digital breast simulations; this approach can also be applied to clinical data and accelerate the development of 3D photoacoustic hemispherical transducer arrays for early breast cancer diagnosis.
(This article belongs to the Special Issue Novel Deep Learning Approaches for Photoacoustic Imaging and Sensing)
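
As an aside for readers new to 3D delay-and-sum reconstruction, the Python sketch below shows a naive beamformer for point-like detectors on a hemispherical array. The array geometry, sampling rate, sound speed, and grid size are illustrative placeholders, not the parameters of the system simulated in the paper, and the paper's Res-UNet post-processing stage is not included.

import numpy as np

def delay_and_sum(channel_data, det_pos, voxels, fs, c=1500.0):
    """channel_data: (n_det, n_samples); det_pos: (n_det, 3) in m; voxels: (n_vox, 3) in m."""
    n_det, n_samples = channel_data.shape
    image = np.zeros(len(voxels))
    for i, r_det in enumerate(det_pos):
        # time of flight from every voxel to this detector, converted to a sample index
        dist = np.linalg.norm(voxels - r_det, axis=1)
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_samples - 1)
        image += channel_data[i, idx]
    return image / n_det

# Toy usage: 128 detectors scattered on a hemisphere and a coarse 32^3 voxel grid.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, np.pi / 2, 128)
phi = rng.uniform(0.0, 2 * np.pi, 128)
R = 0.06  # 6 cm hemisphere radius (placeholder)
det_pos = np.stack([R * np.sin(theta) * np.cos(phi),
                    R * np.sin(theta) * np.sin(phi),
                    -R * np.cos(theta)], axis=1)
grid = np.linspace(-0.02, 0.02, 32)
voxels = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1).reshape(-1, 3)
channel_data = rng.standard_normal((128, 2048))  # placeholder RF data
recon = delay_and_sum(channel_data, det_pos, voxels, fs=20e6).reshape(32, 32, 32)
print(recon.shape)  # (32, 32, 32)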
