J. Imaging, Volume 9, Issue 9 (September 2023) – 27 articles

Cover Story: Colorectal cancer is one of the leading causes of death worldwide, but, fortunately, early detection greatly increases survival rates. Artificial intelligence and deep learning (DL) methods have been applied with great success to improve polyp detection and localization. In this regard, a comparison with clinical experts is required to prove the added value of these methods. This article presents the ClinExpPICCOLO dataset, which comprises 65 unedited endoscopic images that represent the clinical setting. Together with the dataset, an expert clinical performance baseline has been established from the performance of 146 gastroenterologists, and this baseline is compared against four DL models. The experts’ performance can serve as the minimum values a DL method should reach before a clinical trial.
25 pages, 3360 KiB  
Review
Unravelling the Mystery inside Cells by Using Single-Molecule Fluorescence Imaging
by Julian Zalejski, Jiachen Sun and Ashutosh Sharma
J. Imaging 2023, 9(9), 192; https://doi.org/10.3390/jimaging9090192 - 19 Sep 2023
Cited by 1 | Viewed by 1831
Abstract
Live-cell imaging is a powerful technique to study the dynamics and mechanics of various biological molecules like proteins, organelles, DNA, and RNA. With the rapid evolution of optical microscopy, our understanding of how these molecules are implicated in the cells’ most critical physiological roles deepens. In this review, we focus on how spatiotemporal nanoscale live-cell imaging at the single-molecule level enables profound contributions to new discoveries in the life sciences. This review starts by summarizing how single-molecule tracking has been used to analyze membrane dynamics, receptor–ligand interactions, protein–protein interactions, intra- and extracellular transport, gene expression/transcription, and whole-organelle tracking. We then turn to current efforts to improve single-molecule tracking and overcome its limitations through new ways of labeling proteins of interest, multi-channel/color detection, improvements in time-lapse imaging, and new methods and programs for analyzing the colocalization and movement of targets. We later discuss how single-molecule tracking can serve as a beneficial tool for medical diagnosis. Finally, we wrap up with the limitations and future perspectives of single-molecule tracking and total internal reflection microscopy. Full article
(This article belongs to the Special Issue Fluorescence Imaging and Analysis of Cellular System)

15 pages, 2442 KiB  
Article
Limitations of Out-of-Distribution Detection in 3D Medical Image Segmentation
by Anton Vasiliuk, Daria Frolova, Mikhail Belyaev and Boris Shirokikh
J. Imaging 2023, 9(9), 191; https://doi.org/10.3390/jimaging9090191 - 18 Sep 2023
Cited by 1 | Viewed by 1544
Abstract
Deep learning models perform unreliably when the data come from a distribution different from the training one. In critical applications such as medical imaging, out-of-distribution (OOD) detection methods help to identify such data samples, preventing erroneous predictions. In this paper, we further investigate OOD detection effectiveness when applied to 3D medical image segmentation. We designed several OOD challenges representing clinically occurring cases and found that none of the methods achieved acceptable performance. Methods not dedicated to segmentation failed severely in the designed setups; the best mean false-positive rate at a 95% true-positive rate (FPR) was 0.59. Segmentation-dedicated methods still achieved suboptimal performance, with the best mean FPR being 0.31 (lower is better). To demonstrate this suboptimality, we developed a simple method called Intensity Histogram Features (IHF), which performed comparably or better in the same challenges, with a mean FPR of 0.25. Our findings highlight the limitations of the existing OOD detection methods with 3D medical images and present a promising avenue for improving them. To facilitate research in this area, we release the designed challenges as a publicly available benchmark and formulate practical criteria to test the generalization of OOD detection beyond the suggested benchmark. We also propose IHF as a solid baseline against which to contest emerging methods. Full article
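
The abstract does not spell out IHF's scoring rule, so the sketch below is only one plausible reading: fixed-bin intensity histograms as features, scored against in-distribution training volumes with a regularized Mahalanobis distance. The class name, bin count, and distance choice are all assumptions, not the paper's specification.

```python
import numpy as np

def intensity_histogram(volume: np.ndarray, bins: int = 64) -> np.ndarray:
    """L1-normalized intensity histogram of a (pre-scaled) 3D volume."""
    hist, _ = np.histogram(volume, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

class HistogramOODScorer:
    """Fits in-distribution histogram statistics; scores new samples by
    a regularized Mahalanobis distance (larger = more likely OOD)."""
    def fit(self, volumes):
        feats = np.stack([intensity_histogram(v) for v in volumes])
        self.mu = feats.mean(axis=0)
        # Regularize the covariance so it stays invertible for small n.
        cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
        self.precision = np.linalg.inv(cov)
        return self

    def score(self, volume: np.ndarray) -> float:
        d = intensity_histogram(volume) - self.mu
        return float(d @ self.precision @ d)
```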

27 pages, 3661 KiB  
Article
Efficient Extraction of Deep Image Features Using a Convolutional Neural Network (CNN) for Detecting Ventricular Fibrillation and Tachycardia
by Azeddine Mjahad, Mohamed Saban, Hossein Azarmdel and Alfredo Rosado-Muñoz
J. Imaging 2023, 9(9), 190; https://doi.org/10.3390/jimaging9090190 - 18 Sep 2023
Cited by 2 | Viewed by 1427
Abstract
To safely select the proper therapy for ventricular fibrillation (VF), it is essential to distinguish it correctly from ventricular tachycardia (VT) and other rhythms. Given that the required therapy differs, an erroneous detection might lead to serious injuries to the patient or even cause ventricular fibrillation (VF). The primary innovation of this study lies in employing a CNN to create new features. These features exhibit the capacity and precision to detect and classify cardiac arrhythmias, including VF and VT. The electrocardiographic (ECG) signals utilized for this assessment were sourced from the established MIT-BIH and AHA databases. The input data to be classified are time–frequency (t–f) representation images, specifically Pseudo Wigner–Ville (PWV) distributions. Prior to the PWV calculation, preprocessing for denoising, signal alignment, and segmentation is necessary. In order to check the validity of the method independently of the classifier, four different CNNs are used: InceptionV3, MobileNet, VGGNet, and AlexNet. The classification results reveal the following values: for VF detection, there is a sensitivity (Sens) of 98.16%, a specificity (Spe) of 99.07%, and an accuracy (Acc) of 98.91%; for ventricular tachycardia (VT), the sensitivity is 90.45%, the specificity is 99.73%, and the accuracy is 99.09%; for normal sinus rhythms, sensitivity stands at 99.34%, specificity is 98.35%, and accuracy is 98.89%; finally, for other rhythms, the sensitivity is 96.98%, the specificity is 99.68%, and the accuracy is 99.11%. Furthermore, distinguishing between shockable (VF/VT) and non-shockable rhythms yielded a sensitivity of 99.23%, a specificity of 99.74%, and an accuracy of 99.61%. The results show that using t–f representations as images, combined in this case with a CNN classifier, raises the classification performance above the results in previous works. Considering that these results were achieved without the preselection of ECG episodes, it can be concluded that these features may be successfully introduced in Automated External Defibrillation (AED) and Implantable Cardioverter Defibrillation (ICD) therapies, also opening the door to their use in other ECG rhythm detection applications. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
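
The sensitivity, specificity, and accuracy figures above follow directly from one-vs-rest confusion counts. A minimal sketch of those definitions; the counts shown are placeholders, not values from the paper:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity, and accuracy from confusion counts,
    treating one rhythm (e.g., VF) as positive and the rest as negative."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Placeholder counts for a hypothetical VF-vs-rest evaluation:
print(binary_metrics(tp=480, fp=40, tn=4300, fn=9))
```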

21 pages, 9974 KiB  
Article
3D Reconstruction of Fishes Using Coded Structured Light
by Christos Veinidis, Fotis Arnaoutoglou and Dimitrios Syvridis
J. Imaging 2023, 9(9), 189; https://doi.org/10.3390/jimaging9090189 - 18 Sep 2023
Cited by 2 | Viewed by 1038
Abstract
3D reconstruction of fishes provides the capability of extracting geometric measurements, which are valuable in the field of Aquaculture. In this paper, a novel method for 3D reconstruction of fishes using the Coded Structured Light technique is presented. In this framework, a binary image, called a pattern, consisting of white geometric shapes, namely symbols, on a black background is projected onto the surfaces of a number of fishes belonging to different species. A camera captures the resulting images, and the various symbols in these images are decoded to uniquely identify them on the pattern. For this purpose, a number of steps, such as the binarization of the images captured by the camera, symbol classification, and the correction of misclassifications, are realized. The proposed methodology for 3D reconstruction is adapted to the specific geometric and morphological characteristics of the considered fishes with fusiform body shapes, which is implemented here for the first time. Using the centroids of the symbols as feature points, the symbol correspondences immediately result in point correspondences between the pattern and the images captured by the camera. These pairs of corresponding points are exploited for the final 3D reconstructions of the fishes. The extracted 3D reconstructions provide all the geometric information related to the real fishes. The experimentation demonstrates the high efficiency of the techniques adopted in each step of the proposed methodology. As a result, the final 3D reconstructions provide sufficiently accurate approximations of the real fishes. Full article
(This article belongs to the Special Issue Geometry Reconstruction from Images)

16 pages, 2875 KiB  
Article
Using Different Types of Artificial Neural Networks to Classify 2D Matrix Codes and Their Rotations—A Comparative Study
by Ladislav Karrach and Elena Pivarčiová
J. Imaging 2023, 9(9), 188; https://doi.org/10.3390/jimaging9090188 - 18 Sep 2023
Viewed by 1419
Abstract
Artificial neural networks can solve various tasks in computer vision, such as image classification, object detection, and general recognition. Our comparative study deals with four types of artificial neural networks—multilayer perceptrons, probabilistic neural networks, radial basis function neural networks, and convolutional neural networks—and investigates their ability to classify 2D matrix codes (Data Matrix codes, QR codes, and Aztec codes) as well as their rotation. The paper presents the basic building blocks of these artificial neural networks and their architecture and compares the classification accuracy of 2D matrix codes under different configurations of these neural networks. A dataset of 3000 synthetic code samples was used to train and test the neural networks. When the neural networks were trained on the full dataset, the convolutional neural network showed its superiority, followed by the RBF neural network and the multilayer perceptron. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
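
As a rough illustration of the comparison's setup, the sketch below trains a multilayer perceptron with scikit-learn on stand-in data; in the study, the inputs would be the 3000 synthetic code images and the labels the combined code type and rotation. All shapes and class counts here are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in data: in the study, X would hold flattened matrix-code
# images and y the (code type, rotation) class index.
rng = np.random.default_rng(0)
X = rng.random((3000, 32 * 32))
y = rng.integers(0, 12, size=3000)   # e.g., 3 code types x 4 rotations

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=200, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```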

14 pages, 3796 KiB  
Article
A Deep Learning-Based Model for Classifying Osteoporotic Lumbar Vertebral Fractures on Radiographs: A Retrospective Model Development and Validation Study
by Yohei Ono, Nobuaki Suzuki, Ryosuke Sakano, Yasuka Kikuchi, Tasuku Kimura, Kenneth Sutherland and Tamotsu Kamishima
J. Imaging 2023, 9(9), 187; https://doi.org/10.3390/jimaging9090187 - 18 Sep 2023
Cited by 1 | Viewed by 1148
Abstract
Early diagnosis and initiation of treatment for fresh osteoporotic lumbar vertebral fractures (OLVF) are crucial. Magnetic resonance imaging (MRI) is generally performed to differentiate between fresh and old OLVF. However, MRIs can be intolerable for patients with severe back pain. Furthermore, it is difficult to perform in an emergency. MRI should therefore only be performed in appropriately selected patients with a high suspicion of fresh fractures. As radiography is the first-choice imaging examination for the diagnosis of OLVF, improving screening accuracy with radiographs will optimize the decision of whether an MRI is necessary. This study aimed to develop a method to automatically classify lumbar vertebrae (LV) conditions such as normal, old, or fresh OLVF using deep learning methods with radiography. A total of 3481 LV images for training, validation, and testing and 662 LV images for external validation were collected. Visual evaluation by two radiologists determined the ground truth of LV diagnoses. Three convolutional neural networks were ensembled. The accuracy, sensitivity, and specificity were 0.89, 0.83, and 0.92 in the test and 0.84, 0.76, and 0.89 in the external validation, respectively. The results suggest that the proposed method can contribute to the accurate automatic classification of LV conditions on radiography. Full article

11 pages, 9191 KiB  
Article
Assessment of Landsat-8 and Sentinel-2 Water Indices: A Case Study in the Southwest of the Buenos Aires Province (Argentina)
by Guillermina Soledad Santecchia, Gisela Noelia Revollo Sarmiento, Sibila Andrea Genchi, Alejandro José Vitale and Claudio Augusto Delrieux
J. Imaging 2023, 9(9), 186; https://doi.org/10.3390/jimaging9090186 - 18 Sep 2023
Viewed by 1233
Abstract
The accuracy assessment of three different Normalized Difference Water Indices (NDWIs) was performed in La Salada, a typical lake in the Pampean region. Data were gathered during April 2019, a period in which floods occurred in a large area in the Southwest of the Buenos Aires Province (Argentina). The accuracy of the estimations using spaceborne medium-resolution multi-spectral imaging and the reliability of three NDWIs to highlight shallow water features in satellite images were evaluated using high-resolution airborne imagery as ground truth. We show that these indices computed using Landsat-8 and Sentinel-2 imagery are only loosely correlated to the actual flooded area in shallow waters. Indeed, NDWI values vary significantly depending on the satellite mission used and the type of index computed. Full article
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
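
For readers unfamiliar with the indices, the classic McFeeters NDWI is a simple band ratio. The sketch below computes it from green and near-infrared reflectance arrays; the band mapping in the comment and the zero threshold are common conventions, not values taken from the paper, which compares three index variants.

```python
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """McFeeters NDWI = (Green - NIR) / (Green + NIR).
    Water pixels tend toward positive values."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    return (green - nir) / (green + nir + eps)

# Common band mapping: Sentinel-2 green = B3, NIR = B8;
# Landsat-8 green = band 3, NIR = band 5.
# water_mask = ndwi(b3, b8) > 0.0   # the threshold is scene-dependent
```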

12 pages, 2836 KiB  
Article
A Simple Denoising Algorithm for Real-World Noisy Camera Images
by Manfred Hartbauer
J. Imaging 2023, 9(9), 185; https://doi.org/10.3390/jimaging9090185 - 18 Sep 2023
Viewed by 2400
Abstract
The noise statistics of real-world camera images are challenging for any denoising algorithm. Here, I describe a modified version of a bionic algorithm that improves the quality of real-world noisy camera images from a publicly available image dataset. In the first step, an adaptive local averaging filter was executed for each pixel to remove moderate sensor noise while preserving fine image details and object contours. In the second step, image sharpness was enhanced by means of an unsharp mask filter to generate output images that are close to ground-truth images (multiple averages of static camera images). The performance of this denoising algorithm was compared with five popular denoising methods: BM3D, wavelet, non-local means (NL-means), total variation (TV) denoising, and bilateral filtering. Results show that the two-step filter had a performance that was similar to NL-means and TV filtering. BM3D had the best denoising performance but sometimes led to blurry images. This novel two-step filter only depends on a single parameter that can be obtained from global image statistics. To reduce computation time, denoising was restricted to the Y channel of YUV-transformed images and four image segments were simultaneously processed in parallel on a multi-core processor. Full article
(This article belongs to the Topic Bio-Inspired Systems and Signal Processing)
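
The two-step structure (smooth, then sharpen) is easy to prototype. In the sketch below, a plain Gaussian filter stands in for the paper's per-pixel adaptive averaging, so this is a simplified approximation rather than the published algorithm; sigma and the sharpening amount are made-up parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_step_denoise(y_channel: np.ndarray,
                     sigma: float = 1.2,
                     amount: float = 0.8) -> np.ndarray:
    """Step 1: smooth (a plain Gaussian stands in for the paper's
    adaptive local averaging). Step 2: unsharp-mask the result."""
    img = y_channel.astype(np.float64)
    smoothed = gaussian_filter(img, sigma=sigma)
    blurred = gaussian_filter(smoothed, sigma=sigma)
    # Unsharp mask: add back a fraction of the high-pass residual.
    sharpened = smoothed + amount * (smoothed - blurred)
    return np.clip(sharpened, 0.0, 255.0)
```

As in the paper, such a filter would be applied to the Y channel of a YUV-transformed image only.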

16 pages, 1040 KiB  
Article
Thermal Image Processing for Respiratory Estimation from Cubical Data with Expandable Depth
by Maciej Szankin, Alicja Kwasniewska and Jacek Ruminski
J. Imaging 2023, 9(9), 184; https://doi.org/10.3390/jimaging9090184 - 13 Sep 2023
Viewed by 1331
Abstract
As healthcare costs continue to rise, finding affordable and non-invasive ways to monitor vital signs is increasingly important. One of the key metrics for assessing overall health and identifying potential issues early on is respiratory rate (RR). Most of the existing methods require multiple steps that consist of image and signal processing. This might be difficult to deploy on edge devices that often do not have specialized digital signal processors (DSP). Therefore, the goal of this study is to develop a single neural network realizing the entire process of RR estimation in a single forward pass. The proposed solution builds on recent advances in video recognition, capturing both spatial and temporal information in a multi-path network. Both paths process the data at different sampling rates to capture rapid and slow changes that are associated with differences in the temperature of the nostril area during the breathing episodes. The preliminary results show that the introduced end-to-end solution achieves better performance compared to state-of-the-art methods, without requiring additional pre/post-processing steps and signal-processing techniques. In addition, the presented results demonstrate its robustness on low-resolution thermal video sequences that are often used at the embedded edge due to the size and power constraints of such systems. Taking that into account, the proposed approach has the potential for efficient and convenient respiratory rate estimation across various markets in solutions deployed locally, close to end users. Full article
(This article belongs to the Special Issue Data Processing with Artificial Intelligence in Thermal Imagery)

14 pages, 2828 KiB  
Article
Efficient Dehazing with Recursive Gated Convolution in U-Net: A Novel Approach for Image Dehazing
by Zhibo Wang, Jia Jia, Peng Lyu and Jeongik Min
J. Imaging 2023, 9(9), 183; https://doi.org/10.3390/jimaging9090183 - 11 Sep 2023
Cited by 1 | Viewed by 1337
Abstract
Image dehazing, a fundamental problem in computer vision, involves the recovery of clear visual cues from images marred by haze. In recent years, deep learning has spurred significant strides in image dehazing tasks. However, many dehazing networks aim to enhance performance by adopting intricate network architectures, complicating training, inference, and deployment procedures. This study proposes an end-to-end U-Net dehazing network model with recursive gated convolution and attention mechanisms to improve performance while maintaining a lean network structure. In our approach, we leverage an improved recursive gated convolution mechanism to substitute the original U-Net's convolution blocks with residual blocks and apply the SK fusion module to revamp the skip connections. We designate this novel U-Net variant as the Dehaze Recursive Gated U-Net (DRGNet). Comprehensive testing across public datasets demonstrates the DRGNet's superior performance in dehazing quality, detail retrieval, and objective evaluation metrics. Ablation studies further confirm the effectiveness of the key design elements. Full article
(This article belongs to the Topic Computer Vision and Image Processing)

10 pages, 1736 KiB  
Article
Noninvasive Prediction of Sperm Retrieval Using Diffusion Tensor Imaging in Patients with Nonobstructive Azoospermia
by Sikang Gao, Jun Yang, Dong Chen, Xiangde Min, Chanyuan Fan, Peipei Zhang, Qiuxia Wang, Zhen Li and Wei Cai
J. Imaging 2023, 9(9), 182; https://doi.org/10.3390/jimaging9090182 - 08 Sep 2023
Viewed by 1135
Abstract
Microdissection testicular sperm extraction (mTESE) is the first-line treatment plan for nonobstructive azoospermia (NOA). However, studies reported that the overall sperm retrieval rate (SRR) was 43% to 63% among men with NOA, implying that nearly half of the patients fail sperm retrieval. This study aimed to evaluate the diagnostic performance of parameters derived from diffusion tensor imaging (DTI) in predicting SRR in patients with NOA. Seventy patients diagnosed with NOA were enrolled and classified into two groups based on the outcome of sperm retrieval during mTESE: success (29 patients) and failure (41 patients). Scrotal magnetic resonance imaging was performed, and the DTI parameters, including mean diffusivity and fractional anisotropy, were analyzed between groups. The results showed that there was a significant difference in mean diffusivity values between the two groups, and the area under the curve for mean diffusivity was calculated as 0.865, with a sensitivity of 72.2% and a specificity of 97.5%. No statistically significant difference was observed in fractional anisotropy values and sex hormone levels between the two groups. This study demonstrated that the mean diffusivity value might serve as a useful noninvasive imaging marker for predicting the SRR of NOA patients undergoing mTESE. Full article
(This article belongs to the Section Medical Imaging)
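
The reported AUC of 0.865 and the sensitivity/specificity pair correspond to a ROC analysis of the mean-diffusivity values. A sketch of that analysis on synthetic stand-in values; the group means, spreads, and direction of the effect are invented purely for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Illustrative mean-diffusivity (MD) values; not the study's data.
md_failure = rng.normal(1.25, 0.10, 41)
md_success = rng.normal(1.05, 0.08, 29)
scores = np.concatenate([md_failure, md_success])
labels = np.concatenate([np.ones(41), np.zeros(29)])  # 1 = retrieval failure

auc = roc_auc_score(labels, scores)
fpr, tpr, thr = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)  # Youden's J picks an operating cutoff
print(f"AUC = {auc:.3f}, cutoff = {thr[best]:.3f}")
```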

15 pages, 11082 KiB  
Article
Intelligent Performance Evaluation in Rowing Sport Using a Graph-Matching Network
by Chien-Chang Chen, Cheng-Shian Lin, Yen-Ting Chen, Wen-Her Chen, Chien-Hua Chen and I-Cheng Chen
J. Imaging 2023, 9(9), 181; https://doi.org/10.3390/jimaging9090181 - 31 Aug 2023
Cited by 1 | Viewed by 1098
Abstract
Rowing competitions require consistent rowing strokes among crew members to achieve optimal performance. However, existing motion analysis techniques often rely on wearable sensors, which are inconvenient for athletes. The aim of our work is to use a graph-matching network to analyze the similarity of rowers' rowing postures and then pair rowers to improve the performance of their rowing team. This study proposed a novel video-based performance analysis system to analyze paired rowers using a graph-matching network. The proposed system first detected human joint points, as acquired from the OpenPose system, and then the graph embedding model and graph-matching network model were applied to analyze similarities in rowing postures between paired rowers. When analyzing the postures of the paired rowers, the proposed system detected the same starting point of their rowing postures to achieve more accurate pairing results. Finally, variations in the similarities were displayed using the proposed time-period similarity processing. The experimental results show that the proposed time-period similarity processing of the 2D graph-embedding model (GEM) had the best pairing results. Full article

19 pages, 6953 KiB  
Article
Automatic 3D Postoperative Evaluation of Complex Orthopaedic Interventions
by Joëlle Ackermann, Armando Hoch, Jess Gerrit Snedeker, Patrick Oliver Zingg, Hooman Esfandiari and Philipp Fürnstahl
J. Imaging 2023, 9(9), 180; https://doi.org/10.3390/jimaging9090180 - 31 Aug 2023
Viewed by 1101
Abstract
In clinical practice, image-based postoperative evaluation is still performed without state-of-the-art computer methods, as these are not sufficiently automated. In this study, we propose a fully automatic 3D postoperative outcome quantification method for the relevant steps of orthopaedic interventions, using Periacetabular Osteotomy of Ganz (PAO) as an example. A typical orthopaedic intervention involves cutting bone, anatomy manipulation and repositioning, as well as implant placement. Our method includes a segmentation-based deep learning approach for the detection and quantification of the cuts. Furthermore, anatomy repositioning was quantified through a multi-step registration method, which entailed a coarse alignment of the pre- and postoperative CT images followed by a fine fragment alignment of the repositioned anatomy. Implant (i.e., screw) position was identified by a 3D Hough transform for line detection combined with fast voxel traversal based on ray tracing. The feasibility of our approach was investigated on 27 interventions and compared against manually performed 3D outcome evaluations. The results show that our method can accurately assess the quality and accuracy of the surgery. Our evaluation of the fragment repositioning showed a cumulative error for the coarse and fine alignment of 2.1 mm. Our evaluation of screw placement accuracy resulted in a distance error of 1.32 mm for screw head location and an angular deviation of 1.1° for the screw axis. As a next step, we will explore generalisation capabilities by applying the method to different interventions. Full article
(This article belongs to the Section Medical Imaging)

17 pages, 1740 KiB  
Article
Data-Weighted Multivariate Generalized Gaussian Mixture Model: Application to Point Cloud Robust Registration
by Bingwei Ge, Fatma Najar and Nizar Bouguila
J. Imaging 2023, 9(9), 179; https://doi.org/10.3390/jimaging9090179 - 31 Aug 2023
Viewed by 1371
Abstract
In this paper, a weighted multivariate generalized Gaussian mixture model combined with stochastic optimization is proposed for point cloud registration. The mixture model parameters of the target scene and the scene to be registered are updated iteratively by the fixed point method under the framework of the EM algorithm, and the number of components is determined based on the minimum message length criterion (MML). The KL divergence between these two mixture models is utilized as the loss function for stochastic optimization to find the optimal parameters of the transformation model. The self-built point clouds are used to evaluate the performance of the proposed algorithm on rigid registration. Experiments demonstrate that the algorithm dramatically reduces the impact of noise and outliers and effectively extracts the key features of the data-intensive regions. Full article
(This article belongs to the Special Issue Feature Papers in Section AI in Imaging)
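
The KL divergence between two mixtures has no closed form, so it is commonly estimated by Monte Carlo sampling. A sketch under that assumption, using plain Gaussian components for brevity where the paper uses generalized Gaussians:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_logpdf(x, weights, means, covs):
    """Log-density of a Gaussian mixture at points x with shape (n, d)."""
    dens = sum(w * multivariate_normal(m, c).pdf(x)
               for w, m, c in zip(weights, means, covs))
    return np.log(np.maximum(dens, 1e-300))

def mc_kl(p, q, n=5000, seed=0):
    """Monte Carlo estimate of KL(p || q) for mixtures p, q given as
    (weights, means, covs) tuples: sample from p, average log p - log q."""
    weights, means, covs = p
    rng = np.random.default_rng(seed)
    ks = rng.choice(len(weights), size=n, p=weights)
    x = np.stack([rng.multivariate_normal(means[k], covs[k]) for k in ks])
    return float(np.mean(gmm_logpdf(x, *p) - gmm_logpdf(x, *q)))
```

In the paper's pipeline, such an estimate would serve as the loss minimized by the stochastic optimizer over the transformation parameters.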

13 pages, 756 KiB  
Article
2-[18F]FDG-PET/CT in Cancer of Unknown Primary Tumor—A Retrospective Register-Based Cohort Study
by Heidi Rimer, Melina Sofie Jensen, Sara Elisabeth Dahlsgaard-Wallenius, Lise Eckhoff, Peter Thye-Rønn, Charlotte Kristiansen, Malene Grubbe Hildebrandt and Oke Gerke
J. Imaging 2023, 9(9), 178; https://doi.org/10.3390/jimaging9090178 - 31 Aug 2023
Cited by 1 | Viewed by 1110
Abstract
We investigated the impact of 2-[18F]FDG-PET/CT on the detection rate (DR) of the primary tumor and survival in patients with suspected cancer of unknown primary tumor (CUP), comparing it to the conventional diagnostic imaging method, CT. Patients who received a tentative CUP diagnosis at Odense University Hospital from 2014–2017 were included. Patients receiving a 2-[18F]FDG-PET/CT were assigned to the 2-[18F]FDG-PET/CT group and patients receiving a CT only to the CT group. DR was calculated as the proportion of true positive findings of 2-[18F]FDG-PET/CT and CT scans, separately, using biopsy of the primary tumor, autopsy, or clinical decision as the reference standard. Survival analyses included Kaplan–Meier estimates and Cox proportional hazards regression adjusted for age, sex, treatment, and propensity score. We included 193 patients. Of these, 159 were in the 2-[18F]FDG-PET/CT group and 34 were in the CT group. DR was 36.5% in the 2-[18F]FDG-PET/CT group and 17.6% in the CT group (p = 0.012). Median survival was 7.4 (95% CI 0.4–98.7) months in the 2-[18F]FDG-PET/CT group and 3.8 (95% CI 0.2–98.1) in the CT group. Survival analysis showed a crude hazard ratio of 0.63 (p = 0.024) and an adjusted hazard ratio of 0.68 (p = 0.087) for the 2-[18F]FDG-PET/CT group compared with CT. This study found a significantly higher DR of the primary tumor in suspected CUP patients using 2-[18F]FDG-PET/CT compared with patients receiving only CT, which is potentially of immense clinical importance. No significant difference in survival was found, although a possible tendency towards longer survival in the 2-[18F]FDG-PET/CT group was observed. Full article
(This article belongs to the Section Medical Imaging)
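
To make the detection-rate comparison concrete, the sketch below runs a standard two-proportion z-test on counts reconstructed from the reported rates (36.5% of 159 and 17.6% of 34). The paper does not state which test produced its p = 0.012, so this is illustrative only.

```python
from statsmodels.stats.proportion import proportions_ztest

# Approximate counts reconstructed from the reported detection rates.
detections = [58, 6]       # primary tumors detected per group
group_sizes = [159, 34]
stat, p_value = proportions_ztest(detections, group_sizes)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```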

31 pages, 7266 KiB  
Article
Explainable Artificial Intelligence (XAI) for Deep Learning Based Medical Imaging Classification
by Rawan Ghnemat, Sawsan Alodibat and Qasem Abu Al-Haija
J. Imaging 2023, 9(9), 177; https://doi.org/10.3390/jimaging9090177 - 30 Aug 2023
Cited by 4 | Viewed by 4348
Abstract
Recently, deep learning has gained significant attention as a noteworthy division of artificial intelligence (AI) due to its high accuracy and versatile applications. However, one of the major challenges of AI is its lack of interpretability, commonly referred to as the black-box problem. In this study, we introduce an explainable AI model for medical image classification to enhance the interpretability of the decision-making process. Our approach is based on segmenting the images to provide a better understanding of how the AI model arrives at its results. We evaluated our model on five datasets, including the COVID-19 and Pneumonia Chest X-ray dataset, Chest X-ray (COVID-19 and Pneumonia), COVID-19 Image Dataset (COVID-19, Viral Pneumonia, Normal), and COVID-19 Radiography Database. We achieved testing and validation accuracy of 90.6% on a relatively small dataset of 6432 images. Our proposed model improved accuracy and reduced time complexity, making it more practical for medical diagnosis. Our approach offers a more interpretable and transparent AI model that can enhance the accuracy and efficiency of medical diagnosis. Full article
(This article belongs to the Special Issue Explainable AI for Image-Aided Diagnosis)

11 pages, 2013 KiB  
Article
Hybrid Autofluorescence and Optoacoustic Microscopy for the Label-Free, Early and Rapid Detection of Pathogenic Infections in Vegetative Tissues
by George J. Tserevelakis, Andreas Theocharis, Stavroula Spyropoulou, Emmanouil Trantas, Dimitrios Goumas, Filippos Ververidis and Giannis Zacharakis
J. Imaging 2023, 9(9), 176; https://doi.org/10.3390/jimaging9090176 - 29 Aug 2023
Viewed by 1336
Abstract
Agriculture plays a pivotal role in food security, which is challenged by pests and pathogens. Due to these challenges, the yields and quality of agricultural production are reduced and, in response, restrictions on the trade of plant products are applied. Governments have collaborated to establish robust phytosanitary measures, promote disease surveillance, and invest in research and development to mitigate the impact on food security. Classic as well as modernized tools for disease diagnosis and pathogen surveillance do exist, but most of these are time-consuming, laborious, or insufficiently sensitive. To that end, we propose the innovative application of a hybrid imaging approach combining confocal fluorescence and optoacoustic imaging microscopy. This has allowed us to non-destructively detect the physiological changes that occur in plant tissues as a result of a pathogen-induced interaction well before visual symptoms occur. When broccoli leaves were artificially infected with Xanthomonas campestris pv. campestris (Xcc), which eventually causes an economically important bacterial disease, the induced optical absorption alterations could be detected at very early stages of infection. This innovative microscopy approach was therefore successfully utilized to detect the disease caused by a plant pathogen, showing that it could also be employed to detect quarantine pathogens such as Xylella fastidiosa. Full article
(This article belongs to the Special Issue Fluorescence Imaging and Analysis of Cellular System)

15 pages, 1548 KiB  
Article
End-to-End Depth-Guided Relighting Using Lightweight Deep Learning-Based Method
by Sabari Nathan and Priya Kansal
J. Imaging 2023, 9(9), 175; https://doi.org/10.3390/jimaging9090175 - 28 Aug 2023
Viewed by 1190
Abstract
Image relighting, which involves modifying the lighting conditions while preserving the visual content, is fundamental to computer vision. This study introduced a bi-modal lightweight deep learning model for depth-guided relighting. The model utilizes the Res2Net Squeezed block’s ability to capture long-range dependencies and to enhance feature representation for both the input image and its corresponding depth map. The proposed model adopts an encoder–decoder structure with Res2Net Squeezed blocks integrated at each stage of encoding and decoding. The model was trained and evaluated on the VIDIT dataset, which consists of 300 triplets of images. Each triplet contains the input image, its corresponding depth map, and the relit image under diverse lighting conditions, such as different illuminant angles and color temperatures. The enhanced feature representation and improved information flow within the Res2Net Squeezed blocks enable the model to handle complex lighting variations and generate realistic relit images. The experimental results demonstrated the proposed approach’s effectiveness in relighting accuracy, measured by metrics such as the PSNR, SSIM, and visual quality. Full article
(This article belongs to the Section AI in Imaging)

12 pages, 926 KiB  
Protocol
3D Ultrasound and MRI in Assessing Resection Margins during Tongue Cancer Surgery: A Research Protocol for a Clinical Diagnostic Accuracy Study
by Fatemeh Makouei, Tina Klitmøller Agander, Caroline Ewertsen, Morten Bo Søndergaard Svendsen, Rikke Norling, Mikkel Kaltoft, Adam Espe Hansen, Jacob Høygaard Rasmussen, Irene Wessel and Tobias Todsen
J. Imaging 2023, 9(9), 174; https://doi.org/10.3390/jimaging9090174 - 28 Aug 2023
Cited by 1 | Viewed by 1319
Abstract
Surgery is the primary treatment for tongue cancer. The goal is a complete resection of the tumor with an adequate margin of healthy tissue around the tumor. Inadequate margins lead to a high risk of local cancer recurrence and the need for adjuvant therapies. Ex vivo imaging of the resected surgical specimen has been suggested for margin assessment and improved surgical results. Therefore, we have developed a novel three-dimensional (3D) ultrasound imaging technique to improve the assessment of resection margins during surgery. In this research protocol, we describe a study comparing the accuracy of 3D ultrasound, magnetic resonance imaging (MRI), and clinical examination of the surgical specimen to assess the resection margins during cancer surgery. Tumor segmentation and margin measurement will be performed using 3D ultrasound and MRI of the ex vivo specimen. We will determine the accuracy of each method by comparing the margin measurements and the proportion of correctly classified margins (positive, close, and free) obtained by each technique with respect to the gold standard, histopathology. Full article
(This article belongs to the Section Medical Imaging)

16 pages, 3949 KiB  
Article
A Framework for Detecting Thyroid Cancer from Ultrasound and Histopathological Images Using Deep Learning, Meta-Heuristics, and MCDM Algorithms
by Rohit Sharma, Gautam Kumar Mahanti, Ganapati Panda, Adyasha Rath, Sujata Dash, Saurav Mallik and Ruifeng Hu
J. Imaging 2023, 9(9), 173; https://doi.org/10.3390/jimaging9090173 - 27 Aug 2023
Cited by 8 | Viewed by 2538
Abstract
Computer-assisted diagnostic systems have been developed to aid doctors in diagnosing thyroid-related abnormalities. The aim of this research is to improve the diagnosis accuracy of thyroid abnormality detection models, which can be utilized to alleviate undue pressure on healthcare professionals. In this research, we proposed a framework based on deep learning, metaheuristics, and MCDM algorithms to detect thyroid-related abnormalities from ultrasound and histopathological images. The proposed method uses three recently developed deep learning techniques (DeiT, Swin Transformer, and Mixer-MLP) to extract features from the thyroid image datasets. The feature extraction techniques are based on the Image Transformer and MLP models. There are a large number of redundant features that can overfit the classifiers and reduce their generalization capabilities. In order to avoid the overfitting problem, six feature transformation techniques (PCA, TSVD, FastICA, ISOMAP, LLE, and UMP) are analyzed to reduce the dimensionality of the data. Five different classifiers (LR, NB, SVC, KNN, and RF) are evaluated using the 5-fold stratified cross-validation technique on the transformed dataset. Both datasets exhibit large class imbalances; hence, the stratified cross-validation technique is used to evaluate the performance. The MEREC-TOPSIS MCDM technique is used for ranking the evaluated models at different analysis stages. In the first stage, the best feature extraction and classification techniques are chosen, whereas, in the second stage, the best dimensionality reduction method is evaluated in wrapper feature selection mode. The two best-ranked models are further selected for weighted average ensemble learning and feature selection using the recently proposed FOX optimization metaheuristic. The PCA + FOX optimization-based feature selection + random forest model achieved the highest TOPSIS score and performed exceptionally well, with an accuracy of 99.13%, an F2-score of 98.82%, and an AUC-ROC score of 99.13% on the ultrasound dataset. Similarly, the model achieved an accuracy score of 90.65%, an F2-score of 92.01%, and an AUC-ROC score of 95.48% on the histopathological dataset. This study exploits the novel combination of different algorithms to improve thyroid cancer diagnosis capabilities. The proposed framework outperforms the current state-of-the-art diagnostic methods for thyroid-related abnormalities in ultrasound and histopathological datasets and can significantly aid medical professionals by reducing the excessive burden on the medical fraternity. Full article
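
TOPSIS itself is a compact, well-defined ranking procedure. The sketch below implements the standard variant; in the paper the criterion weights come from MEREC, which is not reproduced here, so equal weights and the example matrix are made up for illustration.

```python
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray,
           benefit: np.ndarray) -> np.ndarray:
    """TOPSIS closeness scores for an (alternatives x criteria) matrix;
    benefit[j] is True when criterion j should be maximized."""
    v = matrix / np.linalg.norm(matrix, axis=0) * weights  # normalize, weight
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)                    # higher = better

# Made-up example: two models scored on (accuracy, F2, AUC-ROC).
scores = topsis(np.array([[0.991, 0.988, 0.991],
                          [0.962, 0.955, 0.970]]),
                weights=np.full(3, 1 / 3),
                benefit=np.array([True, True, True]))
print(scores)
```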

15 pages, 13578 KiB  
Article
PP-JPEG: A Privacy-Preserving JPEG Image-Tampering Localization
by Riyanka Jena, Priyanka Singh and Manoranjan Mohanty
J. Imaging 2023, 9(9), 172; https://doi.org/10.3390/jimaging9090172 - 27 Aug 2023
Viewed by 1252
Abstract
The widespread availability of digital image-processing software has given rise to various forms of image manipulation and forgery, which can pose a significant challenge in different fields, such as law enforcement, journalism, etc. It can also lead to privacy concerns. Encrypting images before processing them is vital to maintaining the privacy and confidentiality of sensitive images, especially those used for investigation. To address these challenges, we propose a novel solution that detects image forgeries while preserving privacy: a framework that encrypts images before processing them, making it difficult for unauthorized individuals to access them. The proposed method utilizes a compression quality analysis in the encrypted domain to detect the presence of forgeries in images by determining if the forged portion (dummy image) has a compression quality different from that of the original image (featured image) in the encrypted domain. This approach effectively localizes the tampered portions of the image, even for small pixel blocks of size 10×10 in the encrypted domain. Furthermore, the method identifies the featured image's JPEG quality using the first minimum in the energy graph. Full article
(This article belongs to the Topic Computer Vision and Image Processing)

11 pages, 15835 KiB  
Article
Improving Medical Imaging with Medical Variation Diffusion Model: An Analysis and Evaluation
by Zakaria Rguibi, Abdelmajid Hajami, Dya Zitouni, Amine Elqaraoui, Reda Zourane and Zayd Bouajaj
J. Imaging 2023, 9(9), 171; https://doi.org/10.3390/jimaging9090171 - 25 Aug 2023
Viewed by 1643
Abstract
The Medical VDM is an approach for generating medical images that employs variational diffusion models (VDMs) to smooth images while preserving essential features, including edges. The primary goal of the Medical VDM is to enhance the accuracy and reliability of medical image generation. In this paper, we present a comprehensive description of the Medical VDM approach and its mathematical foundation, as well as experimental findings that showcase its efficacy in generating high-quality medical images that accurately reflect the underlying anatomy and physiology. Our results reveal that the Medical VDM surpasses current VDM methods in terms of generating faithful medical images, with a reconstruction loss of 0.869, a diffusion loss of 0.0008, and a latent loss of 5.740068 × 10⁻⁵. Furthermore, we delve into the potential applications of the Medical VDM in clinical settings, such as its utility in medical education and training and its potential to aid clinicians in diagnosis and treatment planning. Additionally, we address the ethical concerns surrounding the use of generated medical images and propose a set of guidelines for their ethical use. By amalgamating the power of VDMs with clinical expertise, our approach constitutes a significant advancement in the field of medical imaging, poised to enhance medical education, research, and clinical practice, ultimately leading to improved patient outcomes. Full article
(This article belongs to the Section Medical Imaging)

12 pages, 5549 KiB  
Project Report
Bayesian Reconstruction Algorithms for Low-Dose Computed Tomography Are Not Yet Suitable in Clinical Context
by Inga Kniep, Robin Mieling, Moritz Gerling, Alexander Schlaefer, Axel Heinemann and Benjamin Ondruschka
J. Imaging 2023, 9(9), 170; https://doi.org/10.3390/jimaging9090170 - 23 Aug 2023
Viewed by 1147
Abstract
Computed tomography (CT) is a widely used examination technique that usually requires a compromise between image quality and radiation exposure. Reconstruction algorithms aim to reduce radiation exposure while maintaining comparable image quality. Recently, unsupervised deep learning methods have been proposed for this purpose. In this study, a promising sparse-view reconstruction method (posterior temperature optimized Bayesian inverse model; POTOBIM) is tested for its clinical applicability. For this study, 17 whole-body CTs of deceased individuals were performed. In addition to POTOBIM, reconstruction was performed using filtered back projection (FBP). An evaluation was conducted by simulating sinograms and comparing the reconstruction with the original CT slice for each case. A quantitative analysis was performed using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). The quality was assessed visually using a modified Ludewig's scale. In the qualitative evaluation, POTOBIM was rated worse than the reference images in most cases. Partially equivalent image quality could only be achieved with 80 projections per rotation. Quantitatively, POTOBIM does not seem to benefit from more than 60 projections. Although deep learning methods seem suitable for producing better image quality, the investigated algorithm (POTOBIM) is not yet suitable for clinical routine. Full article
(This article belongs to the Section Medical Imaging)
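
Both quantitative metrics used here are standard and available off the shelf. A minimal sketch using scikit-image; the function and variable names are ours, not the paper's:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def slice_quality(reference: np.ndarray, reconstruction: np.ndarray):
    """PSNR and SSIM of a reconstructed slice against the original."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, reconstruction,
                                   data_range=data_range)
    ssim = structural_similarity(reference, reconstruction,
                                 data_range=data_range)
    return psnr, ssim

# Example on synthetic data standing in for a CT slice:
rng = np.random.default_rng(0)
ct = rng.normal(size=(256, 256))
print(slice_quality(ct, ct + 0.05 * rng.normal(size=ct.shape)))
```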

18 pages, 950 KiB  
Article
An Innovative Faster R-CNN-Based Framework for Breast Cancer Detection in MRI
by João Nuno Centeno Raimundo, João Pedro Pereira Fontes, Luís Gonzaga Mendes Magalhães and Miguel Angel Guevara Lopez
J. Imaging 2023, 9(9), 169; https://doi.org/10.3390/jimaging9090169 - 23 Aug 2023
Cited by 1 | Viewed by 1900
Abstract
Replacing lung cancer as the most commonly diagnosed cancer globally, breast cancer (BC) today accounts for 1 in 8 cancer diagnoses and a total of 2.3 million new cases in both sexes combined. An estimated 685,000 women died from BC in 2020, corresponding to 16% or 1 in every 6 cancer deaths in women. BC represented a quarter of all cancer cases in females and was by far the most commonly diagnosed cancer in women in 2020. However, when detected in the early stages of the disease, treatment methods have proven to be very effective in increasing life expectancy and, in many cases, patients fully recover. Several medical imaging modalities, such as X-ray Mammography (MG), Ultrasound (US), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Digital Tomosynthesis (DT), have been explored to support radiologists/physicians in clinical decision-making workflows for the detection and diagnosis of BC. In this work, we propose a novel Faster R-CNN-based framework to automate the detection of BC pathological lesions in MRI. As a main contribution, we have developed and experimentally (statistically) validated an innovative method improving the “breast MRI preprocessing phase” to select the patient's slices (images) and associated bounding boxes representing pathological lesions. In this way, it is possible to create a more robust training (benchmarking) dataset to feed Deep Learning (DL) models, reducing the computation time and the dimension of the dataset, and, more importantly, to identify with high accuracy the specific regions (bounding boxes) for each of the patient's images in which a possible pathological lesion (tumor) has been identified. As a result, in an experimental setting using a fully annotated dataset (released to the public domain) comprising a total of 922 MRI-based BC patient cases, we have achieved, as the most accurate trained model, an accuracy rate of 97.83% and, subsequently, applying a ten-fold cross-validation method, a mean accuracy on the trained models of 94.46% with an associated standard deviation of 2.43%. Full article
(This article belongs to the Section Medical Imaging)
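
The detector scaffold behind such a framework is available in torchvision; the paper's main contribution (the MRI preprocessing phase) is not reproduced here. A minimal sketch, assuming a two-class head (background vs. lesion) and a dummy input slice:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Pretrained detector, re-headed for two classes: background and lesion.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

model.eval()
with torch.no_grad():
    image = torch.rand(3, 512, 512)   # stand-in for an MRI slice
    pred = model([image])[0]          # dict with "boxes", "labels", "scores"
```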

12 pages, 1529 KiB  
Article
Multimodal Approach for Enhancing Biometric Authentication
by Nassim Ammour, Yakoub Bazi and Naif Alajlan
J. Imaging 2023, 9(9), 168; https://doi.org/10.3390/jimaging9090168 - 22 Aug 2023
Cited by 1 | Viewed by 2070
Abstract
Unimodal biometric systems rely on a single source or unique individual biological trait for measurement and examination. Fingerprint-based biometric systems are the most common, but they are vulnerable to presentation attacks or spoofing when a fake fingerprint is presented to the sensor. To address this issue, we propose an enhanced biometric system based on a multimodal approach using two types of biological traits. We propose to combine fingerprint and Electrocardiogram (ECG) signals to mitigate spoofing attacks. Specifically, we design a multimodal deep learning architecture that accepts fingerprints and ECG as inputs and fuses the feature vectors using stacking and channel-wise approaches. The feature extraction backbone of the architecture is based on data-efficient transformers. The experimental results demonstrate the promising capabilities of the proposed approach in enhancing the robustness of the system to presentation attacks. Full article
(This article belongs to the Special Issue Multi-Biometric and Multi-Modal Authentication)
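
The two fusion strategies named above (stacking and channel-wise) are easy to illustrate on precomputed embeddings. In the sketch below, the backbones are assumed to have already produced fingerprint and ECG feature vectors; the channel branch simply averages the two channels, which is a simplification of the paper's channel-wise fusion, and all dimensions are invented.

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Fuses fingerprint and ECG embeddings. 'stack' concatenates the
    two vectors; 'channel' treats them as two channels and mixes them."""
    def __init__(self, dim: int = 256, n_subjects: int = 100,
                 mode: str = "stack"):
        super().__init__()
        self.mode = mode
        self.head = nn.Linear(2 * dim if mode == "stack" else dim, n_subjects)

    def forward(self, f_fp: torch.Tensor, f_ecg: torch.Tensor) -> torch.Tensor:
        if self.mode == "stack":
            fused = torch.cat([f_fp, f_ecg], dim=-1)               # (B, 2*dim)
        else:
            fused = torch.stack([f_fp, f_ecg], dim=1).mean(dim=1)  # (B, dim)
        return self.head(fused)

# Embeddings would come from the transformer backbones:
logits = LateFusion()(torch.randn(4, 256), torch.randn(4, 256))
```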

15 pages, 3888 KiB  
Article
Clinical Validation Benchmark Dataset and Expert Performance Baseline for Colorectal Polyp Localization Methods
by Luisa F. Sánchez-Peralta, Ben Glover, Cristina L. Saratxaga, Juan Francisco Ortega-Morán, Scarlet Nazarian, Artzai Picón, J. Blas Pagador and Francisco M. Sánchez-Margallo
J. Imaging 2023, 9(9), 167; https://doi.org/10.3390/jimaging9090167 - 22 Aug 2023
Viewed by 1399
Abstract
Colorectal cancer is one of the leading causes of death worldwide, but, fortunately, early detection highly increases survival rates, with the adenoma detection rate being one surrogate marker for colonoscopy quality. Artificial intelligence and deep learning methods have been applied with great success to improve polyp detection and localization and, therefore, the adenoma detection rate. In this regard, a comparison with clinical experts is required to prove the added value of the systems. Nevertheless, there is no standardized comparison in a laboratory setting before their clinical validation. The ClinExpPICCOLO dataset comprises 65 unedited endoscopic images that represent the clinical setting. They include white light imaging and narrow band imaging, and one third of the images contain a lesion, but, unlike in other public datasets, the lesion does not appear well-centered in the image. Together with the dataset, an expert clinical performance baseline has been established with the performance of 146 gastroenterologists, who were required to locate the lesions in the selected images. Results show statistically significant differences between experience groups. Expert gastroenterologists' accuracy was 77.74, while sensitivity and specificity were 86.47 and 74.33, respectively. These values can be established as minimum values for a DL method before performing a clinical trial in the hospital setting. Full article
(This article belongs to the Section AI in Imaging)

9 pages, 1322 KiB  
Brief Report
Testicular Evaluation Using Shear Wave Elastography (SWE) in Patients with Varicocele
by Sandra Baleato-Gonzalez, Iria Osorio-Vazquez, Enrique Flores-Ríos, María Isolina Santiago-Pérez, Juan Pablo Laguna-Reyes and Roberto Garcia-Figueiras
J. Imaging 2023, 9(9), 166; https://doi.org/10.3390/jimaging9090166 - 22 Aug 2023
Viewed by 1314
Abstract
Purpose: To assess the possible influence of the presence of varicocele on the quantification of testicular stiffness. Methods: Ultrasound with shear wave elastography (SWE) was performed on 48 consecutive patients (96 testicles) referred following urology consultation for different reasons. A total of 94 testes were studied and distributed into three groups: testes with varicocele (group A, n = 19), contralateral normal testes (group B, n = 13) and a control group (group C, n = 62). Age, testicular volume and testicular parenchymal tissue stiffness values of the three groups were compared using the Kruskal–Wallis test. Results: The mean age of the patients was 42.1 ± 11.1 years. The main reason for consultation was infertility (64.6%). The mean SWE value was 4 ± 0.4 kPa (kilopascals) in group A, 4 ± 0.5 kPa in group B and 4.2 ± 0.7 kPa in group C (control). The testicular volume was 15.8 ± 3.8 mL in group A, 16 ± 4.3 mL in group B and 16.4 ± 5.9 mL in group C. No statistically significant differences were found between the three groups in terms of age, testicular volume and tissue stiffness values. Conclusion: Tissue stiffness values were higher in our control group (healthy testicles) than in patients with varicocele. Full article
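
The three-group comparison relies on the Kruskal–Wallis H test, which is a one-liner in SciPy. The stiffness values below are illustrative stand-ins, not the study's measurements:

```python
from scipy.stats import kruskal

# Illustrative stiffness values (kPa) per group; not the study's data.
group_a = [3.6, 4.1, 4.3, 3.9]   # testes with varicocele
group_b = [3.8, 4.4, 3.7]        # contralateral normal testes
group_c = [4.0, 4.9, 4.2, 4.6]   # controls
stat, p_value = kruskal(group_a, group_b, group_c)
print(f"H = {stat:.2f}, p = {p_value:.3f}")  # p >= 0.05 -> no group difference
```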
