Topic Editors

Department of Mathematics and Computer Science, University of Cagliari, Via Ospedale 72, 09124 Cagliari, Italy
Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
1. Ri.MED Foundation, via Bandiera 11, 90133 Palermo, Italy
2. Research Affiliate Long Term—Laboratory of Computational Computer Vision (LCCV) in the School of Electrical and Computer Engineering at Georgia Institute of Technology, Atlanta, GA, USA
Department of Electrical and Electronic Engineering, University of Cagliari, Piazza d’Armi, 09123 Cagliari, Italy
Department of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy

Medical Image Analysis

Abstract submission deadline
closed (31 December 2022)
Manuscript submission deadline
closed (28 February 2023)
Viewed by
236186

Topic Information

Dear Colleagues,

The broader availability of medical imaging technology, together with the increased demand from patients and physicians, has dramatically increased the use of diagnostic imaging over the past decade. However, the growing amount of available data demands more effort from the physician and increases the cost and time needed to provide a final diagnosis, which in turn leads to long waiting lists and dissatisfied patients. Computer-Aided Diagnosis (CAD) systems, built on appropriate algorithms, can reduce waiting times and financial costs and increase the quality of services by mitigating or eliminating the difficulties of data interpretation.

This Topic aims to present recent advances in the development and application of image processing techniques, as well as the future prospects of this key research area. All interested authors are invited to submit their newest results on biomedical image processing and analysis for possible publication in one of these journals. All papers must present original, previously unpublished work and will be subject to the normal standards and peer-review processes of these journals. Papers are welcome on topics related to image processing techniques for biomedical applications, including: medical image reconstruction; medical image retrieval; medical image segmentation; deep or handcrafted features for biomedical image classification; visualization in biomedical imaging; machine learning and artificial intelligence; image analysis of anatomical structures and lesions; computer-aided detection/diagnosis; multi-modality fusion for diagnosis, image analysis, and image-guided interventions; combination of image analysis with clinical data mining and analytics; applications of big data in imaging; microscopy and histology image analysis; ophthalmic image analysis; applications of computational pathology in the clinic.

Prof. Dr. Cecilia Di Ruberto
Dr. Alessandro Stefano
Dr. Albert Comelli
Dr. Lorenzo Putzu
Dr. Andrea Loddo
Topic Editors

Keywords

  • machine learning
  • deep learning
  • transfer learning
  • ensemble learning
  • image analysis
  • image pre-processing
  • image segmentation
  • feature extraction
  • hand-crafted features
  • deep features
  • statistical methods
  • orthogonal moments
  • shape matching
  • biomedical image analysis
  • biomedical image classification
  • biomedical image retrieval
  • biomedical image processing
  • computer-aided diagnosis
  • decision support system for physicians
  • artificial intelligence
  • neural networks
  • image processing
  • computer vision
  • image retrieval
  • medical image analysis
  • shape analysis and matching
  • image classification
  • pattern recognition
  • COVID-19
  • MR and CT image analysis for COVID-19 diagnosis
  • coronavirus pandemic
  • COVID-19 pandemic
  • COVID-19 epidemic

Participating Journals

Journal (abbreviation): Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Applied Sciences (applsci): 2.7 | 4.5 | 2011 | 15.8 days | CHF 2300
Journal of Imaging (jimaging): 3.2 | 4.4 | 2015 | 21.9 days | CHF 1600
Electronics (electronics): 2.9 | 4.7 | 2012 | 15.8 days | CHF 2200
Diagnostics (diagnostics): 3.6 | 3.6 | 2011 | 18.8 days | CHF 2600
Biomedicines (biomedicines): 4.7 | 3.7 | 2013 | 14.7 days | CHF 2600

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (115 papers)

Article
Predicting the Tumour Response to Radiation by Modelling the Five Rs of Radiotherapy Using PET Images
J. Imaging 2023, 9(6), 124; https://doi.org/10.3390/jimaging9060124 - 20 Jun 2023
Viewed by 948
Abstract
Despite the intensive use of radiotherapy in clinical practice, its effectiveness depends on several factors. Several studies showed that the tumour response to radiation differs from one patient to another. The non-uniform response of the tumour is mainly caused by multiple interactions between the tumour microenvironment and healthy cells. To understand these interactions, five major biologic concepts called the “5 Rs” have emerged. These concepts include reoxygenation, DNA damage repair, cell cycle redistribution, cellular radiosensitivity and cellular repopulation. In this study, we used a multi-scale model, which included the five Rs of radiotherapy, to predict the effects of radiation on tumour growth. In this model, the oxygen level was varied in both time and space. When radiotherapy was given, the sensitivity of cells depending on their location in the cell cycle was taken into account. This model also considered the repair of cells by giving a different probability of survival after radiation for tumour and normal cells. Here, we developed four fractionation protocol schemes. We used simulated images and positron emission tomography (PET) images acquired with the hypoxia tracer 18F-flortanidazole (18F-HX4) as input data for our model. In addition, tumour control probability curves were simulated. The results showed the evolution of tumours and normal cells. The increase in the cell number after radiation was seen in both normal and malignant cells, which proves that repopulation was included in this model. The proposed model predicts the tumour response to radiation and forms the basis for a more patient-specific clinical tool where related biological data will be included. Full article
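
The tumour control probability (TCP) curves mentioned above are typically derived from a cell-survival model. The minimal sketch below uses the standard linear-quadratic survival law with a Poisson TCP, purely to illustrate how fractionation schemes can be compared; the alpha, beta and clonogen-number values are generic placeholders, not the parameters of the paper's multi-scale model.

```python
import numpy as np

def surviving_fraction(dose_per_fraction, n_fractions, alpha=0.3, beta=0.03):
    """Linear-quadratic survival after n equal fractions (standard radiobiology model)."""
    s_single = np.exp(-(alpha * dose_per_fraction + beta * dose_per_fraction ** 2))
    return s_single ** n_fractions

def tumour_control_probability(n_clonogens, dose_per_fraction, n_fractions, **lq):
    """Poisson TCP: probability that no clonogenic cell survives the full schedule."""
    surviving_cells = n_clonogens * surviving_fraction(dose_per_fraction, n_fractions, **lq)
    return np.exp(-surviving_cells)

# Example: compare two hypothetical fractionation schemes for 10^7 clonogens.
for d, n in [(2.0, 30), (3.0, 20)]:  # Gy per fraction, number of fractions
    print(f"{d} Gy x {n}: TCP = {tumour_control_probability(1e7, d, n):.3f}")
```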

Article
A Convolutional Neural Network-Based Connectivity Enhancement Approach for Autism Spectrum Disorder Detection
J. Imaging 2023, 9(6), 110; https://doi.org/10.3390/jimaging9060110 - 31 May 2023
Viewed by 916
Abstract
Autism spectrum disorder (ASD) represents an ongoing obstacle facing many researchers to achieving early diagnosis with high accuracy. To advance developments in ASD detection, the corroboration of findings presented in the existing body of autism-based literature is of high importance. Previous works put forward theories of under- and over-connectivity deficits in the autistic brain. An elimination approach based on methods that are theoretically comparable to the aforementioned theories proved the existence of these deficits. Therefore, in this paper, we propose a framework that takes into account the properties of under- and over-connectivity in the autistic brain using an enhancement approach coupled with deep learning through convolutional neural networks (CNN). In this approach, image-alike connectivity matrices are created, and then connections related to connectivity alterations are enhanced. The overall objective is the facilitation of early diagnosis of this disorder. After conducting tests using information from the large multi-site Autism Brain Imaging Data Exchange (ABIDE I) dataset, the results show that this approach provides an accurate prediction value reaching up to 96%. Full article
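
A central idea in the abstract is turning functional connectivity into an image-like matrix that a CNN can consume. The sketch below builds such a matrix from ROI time series with a plain Pearson correlation; the ROI count, the correlation measure and the subsequent enhancement of altered connections are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def connectivity_matrix(roi_time_series):
    """ROI time series of shape (n_timepoints, n_rois) -> (n_rois, n_rois) Pearson matrix."""
    return np.corrcoef(roi_time_series, rowvar=False)

# Toy example: 200 time points from 116 hypothetical ROIs (e.g., an AAL-like atlas).
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 116))
conn = connectivity_matrix(ts)

# Treat the matrix as a single-channel "image" for a CNN: shape (1, 116, 116).
cnn_input = conn[np.newaxis, :, :].astype(np.float32)
print(cnn_input.shape)  # (1, 116, 116)
```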

Article
Tomosynthesis-Detected Architectural Distortions: Correlations between Imaging Characteristics and Histopathologic Outcomes
J. Imaging 2023, 9(5), 103; https://doi.org/10.3390/jimaging9050103 - 19 May 2023
Viewed by 946
Abstract
Objective: to determine the positive predictive value (PPV) of tomosynthesis (DBT)-detected architectural distortions (ADs) and evaluate correlations between ADs’ imaging characteristics and histopathologic outcomes. Methods: biopsies performed between 2019 and 2021 on ADs were included. Images were interpreted by dedicated breast imaging radiologists. Pathologic results after DBT-vacuum-assisted biopsy (DBT-VAB) and core needle biopsy were compared with ADs detected by DBT, synthetic 2D (synt2D) mammography and ultrasound (US). Results: US was performed to assess a correlation for ADs in all 123 cases and a US correlate was identified in 12/123 (9.7%) cases, which underwent US-guided core needle biopsy (CNB). The remaining 111/123 (90.2%) ADs were biopsied under DBT guidance. Among the 123 ADs included, 33/123 (26.8%) yielded malignant results. The overall PPV for malignancy was 30.1% (37/123). The imaging-specific PPV for malignancy was 19.2% (5/26) for DBT-only ADs, 28.2% (24/85) for ADs visible on DBT and synt2D mammography, and 66.7% (8/12) for ADs with a US correlate, with a statistically significant difference among the three groups (p = 0.01). Conclusions: DBT-only ADs demonstrated a lower PPV for malignancy than ADs also visible on synt2D mammography, but not low enough to avoid biopsy. As the presence of a US correlate was found to be related to malignancy, it should increase the radiologist’s level of suspicion, even when CNB returns a B3 result. Full article

Article
Functional Magnetic Resonance Urography in Children—Tips and Pitfalls
Diagnostics 2023, 13(10), 1786; https://doi.org/10.3390/diagnostics13101786 - 18 May 2023
Viewed by 721
Abstract
MR urography can be an alternative to other imaging methods of the urinary tract in children. However, this examination may present technical problems influencing further results. Special attention must be paid to the parameters of dynamic sequences to obtain valuable data for further functional analysis. This work analyses the methodology for renal function assessment using 3T magnetic resonance in children. A retrospective analysis of MR urography studies was performed in a group of 91 patients. Particular attention was paid to the acquisition parameters of the 3D-Thrive dynamic with contrast medium administration as a basic urography sequence. The authors evaluated the images qualitatively and compared the contrast-to-noise ratio (CNR), curve smoothness, and baseline quality (evaluated via the signal-to-noise ratio) in every dynamic, in each patient, for every protocol used in our institution. The qualitative image assessment showed good agreement (ICC = 0.877, p < 0.001), and there was a statistically significant difference in image quality between protocols (χ2(3) = 20.134, p < 0.001). The results obtained for SNR in the medulla and cortex show that there was a statistically significant difference in SNR in the cortex (χ2(3) = 9.060, p = 0.029). The results also show that, with the newer protocol, we obtain lower values of the standard deviation for TTP in the aorta (in ChopfMRU: first protocol SD = 14.560 vs. fourth protocol SD = 5.599; in IntelliSpace Portal: first protocol SD = 15.241 vs. fourth protocol SD = 5.506). Magnetic resonance urography is a promising technique with a few challenges that need to be overcome. New technical opportunities should be introduced into everyday practice to improve MRU results. Full article
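
The quality metrics quoted above (SNR, CNR) are ROI-based measurements; a minimal sketch of how such values are commonly computed is given below. The ROI placement, the background definition and all numbers are illustrative assumptions, not the authors' measurement protocol.

```python
import numpy as np

def snr(roi_signal, roi_background):
    """Signal-to-noise ratio: mean signal over the standard deviation of a background ROI."""
    return roi_signal.mean() / roi_background.std()

def cnr(roi_a, roi_b, roi_background):
    """Contrast-to-noise ratio between two tissue ROIs (e.g., renal cortex vs. medulla)."""
    return abs(roi_a.mean() - roi_b.mean()) / roi_background.std()

# Toy ROIs sampled from a dynamic MR urography frame (synthetic values).
rng = np.random.default_rng(1)
cortex = rng.normal(300, 20, 500)
medulla = rng.normal(220, 20, 500)
air_background = rng.normal(10, 5, 500)
print(f"SNR(cortex) = {snr(cortex, air_background):.1f}, CNR = {cnr(cortex, medulla, air_background):.1f}")
```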

Review
Mammography Datasets for Neural Networks—Survey
J. Imaging 2023, 9(5), 95; https://doi.org/10.3390/jimaging9050095 - 10 May 2023
Viewed by 2047
Abstract
Deep neural networks have gained popularity in the field of mammography. Data play an integral role in training these models, as training algorithms requires a large amount of data to capture the general relationship between the model’s input and output. Open-access databases are the most accessible source of mammography data for training neural networks. Our work focuses on conducting a comprehensive survey of mammography databases that contain images with defined abnormal areas of interest. The survey includes databases such as INbreast, the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM), the OPTIMAM Medical Image Database (OMI-DB), and The Mammographic Image Analysis Society Digital Mammogram Database (MIAS). Additionally, we surveyed recent studies that have utilized these databases in conjunction with neural networks and the results they have achieved. From these databases, it is possible to obtain at least 3801 unique images with 4125 described findings from approximately 1842 patients. The number of patients with important findings can be increased to approximately 14,474, depending on the type of agreement with the OPTIMAM team. Furthermore, we provide a description of the annotation process for mammography images to enhance the understanding of the information gained from these datasets. Full article

Article
Self-Paced Dual-Axis Attention Fusion Network for Retinal Vessel Segmentation
Electronics 2023, 12(9), 2107; https://doi.org/10.3390/electronics12092107 - 05 May 2023
Viewed by 698
Abstract
The segmentation of retinal vessels plays an essential role in the early recognition of ophthalmic diseases in clinics. Increasingly, approaches based on deep learning have been pushing vessel segmentation performance, yet it is still a challenging problem due to the complex structure of retinal vessels and the lack of precisely labeled samples. In this paper, we propose a self-paced dual-axis attention fusion network (SPDAA-Net). Firstly, a self-paced learning mechanism using a query-by-committee algorithm is designed to guide the model to learn from easy to hard, which makes model training more intelligent. Secondly, during fusing of multi-scale features, a dual-axis attention mechanism composed of height and width attention is developed to perceive the object, which brings in long-range dependencies while reducing computation complexity. Furthermore, CutMix data augmentation is applied to increase the generalization of the model, enhance the recognition ability of global and local features, and ultimately boost accuracy. We implement comprehensive experiments validating that our SPDAA-Net obtains remarkable performance on both the public DRIVE and CHASE-DB1 datasets. Full article
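
As a rough illustration of the height/width ("dual-axis") attention idea, the sketch below restricts a plain dot-product self-attention to a single spatial axis of a (C, H, W) feature map and fuses the two directions by summation. It uses identity Q/K/V projections and NumPy only; the real SPDAA-Net uses learned projections inside a segmentation network, so this is a schematic interpretation, not the authors' module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axis_attention(x, axis):
    """Self-attention restricted to one spatial axis of a (C, H, W) feature map.

    axis=1 attends along the height of each column; axis=2 along the width of each row.
    Identity projections stand in for the learned Q, K, V to keep the sketch short.
    """
    c = x.shape[0]
    x_moved = np.moveaxis(x, axis, -1)                     # (C, other, L) with L = H or W
    q = k = v = x_moved
    attn = softmax(np.einsum('col,com->olm', q, k) / np.sqrt(c), axis=-1)  # (other, L, L)
    out = np.einsum('olm,com->col', attn, v)               # back to (C, other, L)
    return np.moveaxis(out, -1, axis)

x = np.random.rand(16, 32, 32).astype(np.float32)          # (C, H, W)
fused = axis_attention(x, axis=1) + axis_attention(x, axis=2)  # dual-axis fusion by summation
print(fused.shape)  # (16, 32, 32)
```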

Article
A Channel Correction and Spatial Attention Framework for Anterior Cruciate Ligament Tear with Ordinal Loss
Appl. Sci. 2023, 13(8), 5005; https://doi.org/10.3390/app13085005 - 16 Apr 2023
Viewed by 1026
Abstract
The anterior cruciate ligament (ACL) is critical for controlling the motion of the knee joint, but it is prone to injury during sports activities and physical work. If left untreated, ACL injuries can lead to various pathologies such as meniscal damage and osteoarthritis. While previous studies have used deep learning to diagnose ACL tears, there has been a lack of standardization in human unit classification, leading to mismatches between their findings and actual clinical diagnoses. To address this, we perform a triple classification task based on various tear classes using an ordinal loss on the KneeMRI dataset. We utilize a channel correction module to address image distribution issues across multiple patients, along with a spatial attention module, and test its effectiveness with various backbone networks. Our results show that the modules are effective on various backbone networks, achieving an accuracy of 83.3% on ResNet-18, a 6.65% improvement compared to the baseline. Additionally, we carry out an ablation experiment to verify the effectiveness of the three modules and present our findings with figures and tables. Overall, our study demonstrates the potential of deep learning in diagnosing ACL tear and provides insights into improving the accuracy and standardization of such diagnoses. Full article
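
One common way to build an ordinal loss for graded labels (e.g., intact < partial tear < complete tear) is to predict cumulative "grade > k" probabilities and apply binary cross-entropy per threshold. The sketch below shows that formulation; it is an assumption for illustration and not necessarily the exact loss used in the paper.

```python
import numpy as np

def ordinal_targets(labels, n_classes=3):
    """Encode grade k as k leading ones: 0 -> [0,0], 1 -> [1,0], 2 -> [1,1]."""
    return (np.arange(n_classes - 1)[None, :] < np.asarray(labels)[:, None]).astype(np.float32)

def ordinal_bce_loss(logits, labels):
    """Mean binary cross-entropy over the cumulative 'grade > k' outputs."""
    probs = 1.0 / (1.0 + np.exp(-logits))                  # sigmoid per threshold
    targets = ordinal_targets(labels, n_classes=logits.shape[1] + 1)
    eps = 1e-7
    bce = -(targets * np.log(probs + eps) + (1 - targets) * np.log(1 - probs + eps))
    return bce.mean()

logits = np.array([[2.0, -1.0], [3.0, 2.5], [-2.0, -3.0]])  # 3 knees, 2 thresholds
labels = np.array([1, 2, 0])                                # tear grades
print(f"ordinal loss = {ordinal_bce_loss(logits, labels):.3f}")
```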

Review
Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review
J. Imaging 2023, 9(4), 81; https://doi.org/10.3390/jimaging9040081 - 13 Apr 2023
Cited by 5 | Viewed by 4442
Abstract
Deep learning has become a popular tool for medical image analysis, but the limited availability of training data remains a major challenge, particularly in the medical field where data acquisition can be costly and subject to privacy regulations. Data augmentation techniques offer a solution by artificially increasing the number of training samples, but these techniques often produce limited and unconvincing results. To address this issue, a growing number of studies have proposed the use of deep generative models to generate more realistic and diverse data that conform to the true distribution of the data. In this review, we focus on three types of deep generative models for medical image augmentation: variational autoencoders, generative adversarial networks, and diffusion models. We provide an overview of the current state of the art in each of these models and discuss their potential for use in different downstream tasks in medical imaging, including classification, segmentation, and cross-modal translation. We also evaluate the strengths and limitations of each model and suggest directions for future research in this field. Our goal is to provide a comprehensive review about the use of deep generative models for medical image augmentation and to highlight the potential of these models for improving the performance of deep learning algorithms in medical image analysis. Full article

Article
On The Potential of Image Moments for Medical Diagnosis
J. Imaging 2023, 9(3), 70; https://doi.org/10.3390/jimaging9030070 - 17 Mar 2023
Viewed by 1009
Abstract
Medical imaging is widely used for diagnosis and postoperative or post-therapy monitoring. The ever-increasing number of images produced has encouraged the introduction of automated methods to assist doctors or pathologists. In recent years, especially after the advent of convolutional neural networks, many researchers have focused on this approach, considering it to be the only method for diagnosis since it can perform a direct classification of images. However, many diagnostic systems still rely on handcrafted features to improve interpretability and limit resource consumption. In this work, we focused our efforts on orthogonal moments, first by providing an overview and taxonomy of their macrocategories and then by analysing their classification performance on very different medical tasks represented by four public benchmark data sets. The results confirmed that convolutional neural networks achieved excellent performance on all tasks. Despite being composed of much fewer features than those extracted by the networks, orthogonal moments proved to be competitive with them, showing comparable and, in some cases, better performance. In addition, Cartesian and harmonic categories provided a very low standard deviation, proving their robustness in medical diagnostic tasks. We strongly believe that the integration of the studied orthogonal moments can lead to more robust and reliable diagnostic systems, considering the performance obtained and the low variation of the results. Finally, since they have been shown to be effective on both magnetic resonance and computed tomography images, they can be easily extended to other imaging techniques. Full article
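
As one concrete example of the orthogonal moments discussed, the sketch below computes Legendre moments of an image after mapping pixel coordinates to [-1, 1]. The moment order, normalization and the toy image are generic choices made here for illustration, not the specific descriptors or datasets evaluated in the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_moments(image, max_order=4):
    """Legendre moments lambda_pq for p + q <= max_order, coordinates mapped to [-1, 1]."""
    h, w = image.shape
    y = np.linspace(-1, 1, h)
    x = np.linspace(-1, 1, w)
    # Values of the Legendre polynomials P_0..P_max at every coordinate.
    px = np.stack([legendre.Legendre.basis(p)(x) for p in range(max_order + 1)])
    py = np.stack([legendre.Legendre.basis(q)(y) for q in range(max_order + 1)])
    dy_dx = (2.0 / (h - 1)) * (2.0 / (w - 1))
    moments = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            norm = (2 * p + 1) * (2 * q + 1) / 4.0
            moments[(p, q)] = norm * dy_dx * np.einsum('i,j,ij->', py[q], px[p], image)
    return moments

img = np.zeros((64, 64), dtype=float)
img[16:48, 16:48] = 1.0                       # a simple square "lesion"
feats = legendre_moments(img, max_order=3)    # 10 moment features
print(round(feats[(0, 0)], 3))
```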

Article
Using Mean Arterial Pressure in Hypertension Diagnosis versus Using Either Systolic or Diastolic Blood Pressure Measurements
Biomedicines 2023, 11(3), 849; https://doi.org/10.3390/biomedicines11030849 - 10 Mar 2023
Cited by 1 | Viewed by 1835
Abstract
Hypertension is a severe and highly prevalent disease. It is considered a leading contributor to mortality worldwide. Diagnosis guidelines for hypertension use systolic and diastolic blood pressure (BP) together. Mean arterial pressure (MAP), which refers to the average of the arterial blood pressure through a single cardiac cycle, can be an alternative index that may capture the overall exposure of the person to a heightened pressure. A clinical hypothesis, however, suggests that in patients over 50 years old in age, systolic BP may be more predictive of adverse events, while in patients under 50 years old, diastolic BP may be slightly more predictive. In this study, we investigated the correlation between cerebrovascular changes, (impacted by hypertension), and MAP, systolic BP, and diastolic BP separately. Several experiments were conducted using real and synthetic magnetic resonance angiography (MRA) data, along with corresponding BP measurements. Each experiment employs the following methodology: First, MRA data were processed to remove noise, bias, or inhomogeneity. Second, the cerebrovasculature was delineated for MRA subjects using a 3D adaptive region growing connected components algorithm. Third, vascular features (changes in blood vessel’s diameters and tortuosity) that describe cerebrovascular alterations that occur prior to and during the development of hypertension were extracted. Finally, feature vectors were constructed, and data were classified using different classifiers, such as SVM, KNN, linear discriminant, and logistic regression, into either normotensives or hypertensives according to the cerebral vascular alterations and the BP measurements. The initial results showed that MAP would be more beneficial and accurate in identifying the cerebrovascular impact of hypertension (accuracy up to 95.2%) than just using either systolic BP (accuracy up to 89.3%) or diastolic BP (accuracy up to 88.9%). This result emphasizes the pathophysiological significance of MAP and supports prior views that this simple measure may be a superior index for the definition of hypertension and research on hypertension. Full article
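
MAP is usually estimated from the systolic and diastolic readings rather than measured directly; a one-function sketch of the common textbook approximation is shown below. The exact definition used in the study is not stated in the abstract, so this formula is an assumption.

```python
def mean_arterial_pressure(systolic, diastolic):
    """Common approximation: diastolic pressure plus one third of the pulse pressure."""
    return diastolic + (systolic - diastolic) / 3.0

# Example: 120/80 mmHg gives MAP of roughly 93.3 mmHg.
print(round(mean_arterial_pressure(120, 80), 1))
```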

Article
Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy
J. Imaging 2023, 9(3), 59; https://doi.org/10.3390/jimaging9030059 - 01 Mar 2023
Viewed by 2667
Abstract
This paper investigates the impact of the amount of training data and the shape variability on the segmentation provided by the deep learning architecture U-Net. Further, the correctness of ground truth (GT) was also evaluated. The input data consisted of a three-dimensional set of images of HeLa cells observed with an electron microscope with dimensions 8192×8192×517. From there, a smaller region of interest (ROI) of 2000×2000×300 was cropped and manually delineated to obtain the ground truth necessary for a quantitative evaluation. A qualitative evaluation was performed on the 8192×8192 slices due to the lack of ground truth. Pairs of patches of data and labels for the classes nucleus, nuclear envelope, cell and background were generated to train U-Net architectures from scratch. Several training strategies were followed, and the results were compared against a traditional image processing algorithm. The correctness of GT, that is, the inclusion of one or more nuclei within the region of interest was also evaluated. The impact of the extent of training data was evaluated by comparing results from 36,000 pairs of data and label patches extracted from the odd slices in the central region, to 135,000 patches obtained from every other slice in the set. Then, 135,000 patches from several cells from the 8192×8192 slices were generated automatically using the image processing algorithm. Finally, the two sets of 135,000 pairs were combined to train once more with 270,000 pairs. As would be expected, the accuracy and Jaccard similarity index improved as the number of pairs increased for the ROI. This was also observed qualitatively for the 8192×8192 slices. When the 8192×8192 slices were segmented with U-Nets trained with 135,000 pairs, the architecture trained with automatically generated pairs provided better results than the architecture trained with the pairs from the manually segmented ground truths. This suggests that the pairs that were extracted automatically from many cells provided a better representation of the four classes of the various cells in the 8192×8192 slice than those pairs that were manually segmented from a single cell. Finally, the two sets of 135,000 pairs were combined, and the U-Net trained with these provided the best results. Full article

Brief Report
Prostate Ultrasound Image Segmentation Based on DSU-Net
Biomedicines 2023, 11(3), 646; https://doi.org/10.3390/biomedicines11030646 - 21 Feb 2023
Cited by 3 | Viewed by 1287
Abstract
In recent years, the incidence of prostate cancer in the male population has been increasing year by year. Transrectal ultrasound (TRUS) is an important means of prostate cancer diagnosis. The accurate segmentation of the prostate in TRUS images can assist doctors in needle biopsy and surgery and is also the basis for the accurate identification of prostate cancer. Due to the asymmetric shape and blurred boundary line of the prostate in TRUS images, it is difficult to obtain accurate segmentation results with existing segmentation methods. Therefore, a prostate segmentation method called DSU-Net is proposed in this paper. This proposed method replaces the basic convolution in the U-Net model with the improved convolution combining shear transformation and deformable convolution, making the network more sensitive to border features and more suitable for prostate segmentation tasks. Experiments show that DSU-Net has higher accuracy than other existing traditional segmentation methods. Full article

Article
Palatine Tonsil Measurements and Echogenicity during Tonsillitis Using Ultrasonography: A Case–Control Study
Diagnostics 2023, 13(4), 742; https://doi.org/10.3390/diagnostics13040742 - 15 Feb 2023
Cited by 1 | Viewed by 1474
Abstract
This case–control study aimed to assess the size and echogenicity of inflamed tonsils using ultrasonography. It was carried out at different hospitals, nurseries, and primary schools in Khartoum state. About 131 Sudanese volunteers between 1 and 24 years old were recruited. The sample included 79 volunteers with normal tonsils and 52 with tonsillitis according to hematological investigations. The sample was divided into groups according to age—1–5 years old, 6–10 years old, and more than ten years. Measurements in centimeters of height (AP) and width (transverse) of both tonsils (right and left) were taken. Echogenicity was assessed according to normal and abnormal appearances. A data collection sheet containing all the study variables was used. The independent samples test (t-test) showed an insignificant height difference between normal controls and cases with tonsillitis. The transverse diameter increased significantly with inflammation (p-value < 0.05) for both tonsils in all groups. Echogenicity can differentiate between normal and abnormal tonsils (p-value < 0.05 using the chi-square test) for samples from 1–5 years and 6–10 years. The study concluded that measurements and appearance are reliable indicators of tonsillitis, which can be confirmed with the use of ultrasonography, helping physicians to make the correct diagnosis and decisions. Full article

Article
Reverse-Net: Few-Shot Learning with Reverse Teaching for Deformable Medical Image Registration
Appl. Sci. 2023, 13(2), 1040; https://doi.org/10.3390/app13021040 - 12 Jan 2023
Viewed by 1277
Abstract
Multimodal medical image registration has an important role in monitoring tumor growth, radiotherapy, and disease diagnosis. Deep-learning-based methods have made great progress in the past few years. However, its success depends on large training datasets, and the performance of the model decreases due to overfitting and poor generalization when only limited data are available. In this paper, a multimodal medical image registration framework based on few-shot learning is proposed, named reverse-net, which can improve the accuracy and generalization ability of the network by using a few segmentation labels. Firstly, we used the border enhancement network to enhance the ROI (region of interest) boundaries of T1 images to provide high-quality data for the subsequent pixel alignment stage. Secondly, through a coarse registration network, the T1 image and T2 image were roughly aligned. Then, the pixel alignment network generated more smooth deformation fields. Finally, the reverse teaching network used the warped T1 segmentation labels and warped images generated by the deformation field to teach the border enhancement network more structural knowledge. The performance and generalizability of our model have been evaluated on publicly available brain datasets including the MRBrainS13DataNii-Pro, SRI24, CIT168, and OASIS datasets. Compared with VoxelMorph, the reverse-net obtained performance improvements of 4.36% in DSC on the publicly available MRBrainS13DataNii-Pro dataset. On the unseen dataset OASIS, the reverse-net obtained performance improvements of 4.2% in DSC compared with VoxelMorph, which shows that the model can obtain better generalizability. The promising performance on dataset CIT168 indicates that the model is practicable. Full article

Article
The Correlation between the Vascular Calcification Score of the Coronary Artery and the Abdominal Aorta in Patients with Psoriasis
Diagnostics 2023, 13(2), 274; https://doi.org/10.3390/diagnostics13020274 - 11 Jan 2023
Viewed by 1359
Abstract
Psoriasis is known as an independent risk factor for cardiovascular disease due to its chronic inflammation. Studies have been conducted to evaluate the progress of atherosclerotic plaques in psoriasis. However, inadequate efforts have been made to clarify the relationship between atherosclerosis progress in coronary arteries and other important blood vessels. For that reason, we investigated the correlation and development of the coronary artery calcification score (CACS) and the abdominal aortic calcification score (AACS) during a follow-up examination. Eighty-three patients with psoriasis underwent coronary computed tomography angiography (CCTA) for total CACS and abdominal computed tomography (AbCT) for total AACS. PASI score, other clinical features, and blood samples were collected at the same time. The patients’ medical histories were also retrieved for further analysis. Linear regression was used to analyze the CACS and AACS associations. There was a moderate correlation between CACS and AACS, while both calcification scores relatively reflected the coronary plaque number, coronary stenosis number, and stenosis severity observed with CCTA. Both calcification scores were independent of the PASI score. However, a significantly higher CACS was found in psoriatic arthritis, whereas no similar phenomenon was recorded for AACS. To conclude, both CACS and AACS might be potential alternative tests to predict the presence of coronary lesions as confirmed by CCTA. Full article

Article
A Feature Extraction Using Probabilistic Neural Network and BTFSC-Net Model with Deep Learning for Brain Tumor Classification
J. Imaging 2023, 9(1), 10; https://doi.org/10.3390/jimaging9010010 - 31 Dec 2022
Cited by 4 | Viewed by 2062
Abstract
Background and Objectives: Brain Tumor Fusion-based Segments and Classification-Non-enhancing tumor (BTFSC-Net) is a hybrid system for classifying brain tumors that combine medical image fusion, segmentation, feature extraction, and classification procedures. Materials and Methods: to reduce noise from medical images, the hybrid probabilistic wiener filter (HPWF) is first applied as a preprocessing step. Then, to combine robust edge analysis (REA) properties in magnetic resonance imaging (MRI) and computed tomography (CT) medical images, a fusion network based on deep learning convolutional neural networks (DLCNN) is developed. Here, the brain images’ slopes and borders are detected using REA. To separate the sick region from the color image, adaptive fuzzy c-means integrated k-means (HFCMIK) clustering is then implemented. To extract hybrid features from the fused image, low-level features based on the redundant discrete wavelet transform (RDWT), empirical color features, and texture characteristics based on the gray-level cooccurrence matrix (GLCM) are also used. Finally, to distinguish between benign and malignant tumors, a deep learning probabilistic neural network (DLPNN) is deployed. Results: according to the findings, the suggested BTFSC-Net model performed better than more traditional preprocessing, fusion, segmentation, and classification techniques. Additionally, 99.21% segmentation accuracy and 99.46% classification accuracy were reached using the proposed BTFSC-Net model. Conclusions: earlier approaches have not performed as well as our presented method for image fusion, segmentation, feature extraction, classification operations, and brain tumor classification. These results illustrate that the designed approach performed more effectively in terms of enhanced quantitative evaluation with better accuracy as well as visual performance. Full article

Article
Cascaded Hough Transform-Based Hair Mask Generation and Harmonic Inpainting for Automated Hair Removal from Dermoscopy Images
Diagnostics 2022, 12(12), 3040; https://doi.org/10.3390/diagnostics12123040 - 04 Dec 2022
Viewed by 1197
Abstract
Restoring information obstructed by hair is one of the main issues for the accurate analysis and segmentation of skin images. For retrieving pixels obstructed by hair, the proposed system converts dermoscopy images into the L*a*b* color space, then principal component analysis (PCA) is applied to produce grayscale images. Afterward, the contrast-limited adaptive histogram equalization (CLAHE) and the average filter are implemented to enhance the grayscale image. Subsequently, the binary image is generated using the iterative thresholding method. After that, the Hough transform (HT) is applied to each image block to generate the hair mask. Finally, the hair pixels are removed by harmonic inpainting. The performance of the proposed automated hair removal was evaluated by applying the proposed system to the International Skin Imaging Collaboration (ISIC) dermoscopy dataset as well as to clinical images. Six performance evaluation metrics were measured, namely the mean squared error (MSE), the peak signal-to-noise ratio (PSNR), the signal-to-noise ratio (SNR), the structural similarity index (SSIM), the universal quality image index (UQI), and the correlation (C). Using the clinical dataset, the system achieved MSE, PSNR, SNR, SSIM, UQI, and C values of 34.7957, 66.98, 42.39, 0.9813, 0.9801, and 0.9985, respectively. The results demonstrated that the proposed system could satisfy the medical diagnostic requirements and achieve the best performance compared to the state-of-art. Full article
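
A rough OpenCV sketch of the described pipeline is given below. Otsu thresholding stands in for the iterative thresholding step and cv2.inpaint (Navier-Stokes) stands in for harmonic inpainting; all parameter values (CLAHE clip limit, Hough thresholds, line thickness, inpainting radius) are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def remove_hair(bgr_image):
    """Sketch: Lab -> PCA grayscale -> CLAHE + blur -> threshold -> Hough hair mask -> inpaint."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    mean = lab.mean(axis=0)
    _, _, vt = np.linalg.svd(lab - mean, full_matrices=False)
    gray = (lab - mean) @ vt[0]                         # first principal component as grayscale
    gray = cv2.normalize(gray.reshape(bgr_image.shape[:2]), None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.blur(clahe.apply(gray), (3, 3))
    _, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=5)
    mask = np.zeros_like(binary)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(mask, (int(x1), int(y1)), (int(x2), int(y2)), 255, thickness=3)
    return cv2.inpaint(bgr_image, mask, inpaintRadius=5, flags=cv2.INPAINT_NS)
```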

Article
Image Decomposition Technique Based on Near-Infrared Transmission
J. Imaging 2022, 8(12), 322; https://doi.org/10.3390/jimaging8120322 - 03 Dec 2022
Viewed by 1187
Abstract
One way to diagnose a disease is to examine pictures of tissue thought to be affected by the disease. Near-infrared properties are subdivided into nonionizing, noninvasive, and nonradiative properties. Near-infrared also has selectivity properties for the objects it passes through. With this selectivity, the resulting attenuation coefficient value will differ depending on the type of material or wavelength. By measuring the output and input intensity values, as well as the attenuation coefficient, the thickness of a material can be measured. The thickness value can then be used to display a reconstructed image. In this study, the object studied was a phantom consisting of silicon rubber, margarine, and gelatin. The results showed that margarine materials could be decomposed from other ingredients with a wavelength of 980 nm. Full article
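
The thickness recovery described follows the Beer-Lambert attenuation law, I_out = I_in * exp(-mu * d); a minimal worked sketch is shown below. The attenuation coefficient and intensity values are placeholders, not measured properties of the phantom materials at 980 nm.

```python
import numpy as np

def thickness_from_transmission(i_in, i_out, attenuation_coeff):
    """Beer-Lambert law: I_out = I_in * exp(-mu * d)  =>  d = ln(I_in / I_out) / mu."""
    return np.log(i_in / i_out) / attenuation_coeff

# Placeholder numbers: mu in 1/mm, intensities in arbitrary units.
mu = 0.35
print(f"estimated thickness = {thickness_from_transmission(1.0, 0.25, mu):.2f} mm")
```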

Article
How Well Do Self-Supervised Models Transfer to Medical Imaging?
J. Imaging 2022, 8(12), 320; https://doi.org/10.3390/jimaging8120320 - 01 Dec 2022
Viewed by 2612
Abstract
Self-supervised learning approaches have seen success transferring between similar medical imaging datasets, however there has been no large scale attempt to compare the transferability of self-supervised models against each other on medical images. In this study, we compare the generalisability of seven self-supervised models, two of which were trained in-domain, against supervised baselines across eight different medical datasets. We find that ImageNet pretrained self-supervised models are more generalisable than their supervised counterparts, scoring up to 10% better on medical classification tasks. The two in-domain pretrained models outperformed other models by over 20% on in-domain tasks, however they suffered significant loss of accuracy on all other tasks. Our investigation of the feature representations suggests that this trend may be due to the models learning to focus too heavily on specific areas. Full article

Article
MULTforAD: Multimodal MRI Neuroimaging for Alzheimer’s Disease Detection Based on a 3D Convolution Model
Electronics 2022, 11(23), 3893; https://doi.org/10.3390/electronics11233893 - 24 Nov 2022
Cited by 5 | Viewed by 1717
Abstract
Alzheimer’s disease (AD) is a neurological disease that affects numerous people. The condition causes brain atrophy, which leads to memory loss, cognitive impairment, and death. In its early stages, Alzheimer’s disease is tricky to predict. Therefore, treatment provided at an early stage of AD is more effective and causes less damage than treatment at a later stage. Although AD is a common brain condition, it is difficult to recognize, and its classification requires a discriminative feature representation to separate similar brain patterns. Multimodal neuroimage information that combines multiple medical images can classify and diagnose AD more accurately and comprehensively. Magnetic resonance imaging (MRI) has been used for decades to assist physicians in diagnosing Alzheimer’s disease. Deep models have detected AD with high accuracy in computing-assisted imaging and diagnosis by minimizing the need for hand-crafted feature extraction from MRI images. This study proposes a multimodal image fusion method that fuses MRI neuroimages, with a modular set of image preprocessing procedures to automatically convert Alzheimer’s Disease Neuroimaging Initiative (ADNI) data into the BIDS standard, for classifying different MRI data of Alzheimer’s subjects from normal controls. Furthermore, a 3D convolutional neural network is used to learn generic features by capturing AD biomarkers in the fused images, resulting in richer multimodal feature information. Finally, a conventional CNN with three classifiers, including Softmax, SVM, and RF, forecasts and classifies the extracted Alzheimer’s brain multimodal traits from a normal healthy brain. The findings reveal that the proposed method can efficiently predict AD progression by combining high-dimensional MRI characteristics from different public sources with an accuracy range from 88.7% to 99% and outperforming baseline models when applied to MRI-derived voxel features. Full article

Article
Putamen Atrophy Is a Possible Clinical Evaluation Index for Parkinson’s Disease Using Human Brain Magnetic Resonance Imaging
J. Imaging 2022, 8(11), 299; https://doi.org/10.3390/jimaging8110299 - 02 Nov 2022
Viewed by 2298
Abstract
Parkinson’s disease is characterized by motor dysfunction caused by functional deterioration of the substantia nigra. Lower putamen volume (i.e., putamen atrophy) may be an important clinical indicator of motor dysfunction and neurological symptoms, such as autonomic dysfunction, in patients with Parkinson’s disease. We proposed and applied a new evaluation method for putamen volume measurement on 31 high-resolution T2-weighted magnetic resonance images from 16 patients with Parkinson’s disease (age, 80.3 ± 7.30 years; seven men, nine women) and 30 such images from 19 control participants (age, 75.1 ± 7.85 years; eleven men, eight women). Putamen atrophy was expressed using a ratio based on the thalamus. The obtained values were used to assess differences between the groups using the Wilcoxon rank-sum test. The intraclass correlation coefficient showed sufficient intra-rater reliability and validity of this method. The Parkinson’s disease group had a significantly lower mean change ratio in the putamen (0.633) than the control group (0.719), suggesting that putamen atrophy may be identified using two-dimensional images. The evaluation method presented in this study may indicate the appearance of motor dysfunction and cognitive decline and could serve as a clinical evaluation index for Parkinson’s disease. Full article

Article
Iodine-123 β-methyl-P-iodophenyl-pentadecanoic Acid (123I-BMIPP) Myocardial Scintigraphy for Breast Cancer Patients and Possible Early Signs of Cancer-Therapeutics-Related Cardiac Dysfunction (CTRCD)
J. Imaging 2022, 8(11), 296; https://doi.org/10.3390/jimaging8110296 - 29 Oct 2022
Viewed by 1251
Abstract
(1) Background: The mortality of breast cancer has decreased due to the advancement of cancer therapies. However, more patients are suffering from cancer-therapeutics-related cardiac dysfunction (CTRCD). Diagnostic and treatment guidelines for CTRCD have not been fully established yet. Ultrasound cardiogram (UCG) is the gold standard for diagnosis of CTRCD, but many breast cancer patients cannot undergo UCG due to surgical wounds or anatomical reasons. The purpose of the study is to evaluate the usefulness of myocardial scintigraphy using Iodine-123 β-methyl-P-iodophenyl-pentadecanoic acid (123I-BMIPP) in comparison with UCG. (2) Methods: 100 breast cancer patients who received chemotherapy within 3 years underwent Thallium (201Tl) and 123I-BMIPP myocardial perfusion and metabolism scintigraphy. The images were visually evaluated by doctors and radiological technologists, and the grade of uptake reduction was scored by Heart Risk View-S software (Nihon Medi-Physics). The scores were deployed in a 17-segment model of the heart. The distribution of the scores was analyzed. (3) Results: Nine patients (9%) could not undergo UCG. No correlation was found between left ventricular ejection fraction (LVEF) and Heart Risk View-S scores of 201Tl myocardial perfusion scintigraphy nor those of BMIPP myocardial metabolism scintigraphy. In a 17-segment model of the heart, the scores of the middle rings were higher than those of the basal ring. (4) Conclusions: Evaluation by UCG is not possible for some patients. Myocardial scintigraphy cannot serve as a perfect alternative to UCG. However, it will become the preferable second-choice screening test, as it could point out the early stage of CTRCD. Full article

Article
An Intelligent Tongue Diagnosis System via Deep Learning on the Android Platform
Diagnostics 2022, 12(10), 2451; https://doi.org/10.3390/diagnostics12102451 - 10 Oct 2022
Cited by 1 | Viewed by 2347
Abstract
To quickly and accurately identify the pathological features of the tongue, we developed an intelligent tongue diagnosis system that uses deep learning on a mobile terminal. We also propose an efficient and accurate tongue image processing algorithm framework to infer the category of the tongue. First, a software system integrating registration, login, account management, tongue image recognition, and doctor–patient dialogue was developed based on the Android platform. Then, the deep learning models, based on the official benchmark models, were trained by using the tongue image datasets. The tongue diagnosis algorithm framework includes the YOLOv5s6, U-Net, and MobileNetV3 networks, which are employed for tongue recognition, tongue region segmentation, and tongue feature classification (tooth marks, spots, and fissures), respectively. The experimental results demonstrate that the performance of the tongue diagnosis model was satisfying, and the accuracy of the final classification of tooth marks, spots, and fissures was 93.33%, 89.60%, and 97.67%, respectively. The construction of this system has a certain reference value for the objectification and intelligence of tongue diagnosis. Full article

Article
Using an Ultrasound Tissue Phantom Model for Hybrid Training of Deep Learning Models for Shrapnel Detection
J. Imaging 2022, 8(10), 270; https://doi.org/10.3390/jimaging8100270 - 02 Oct 2022
Cited by 2 | Viewed by 1598
Abstract
Tissue phantoms are important for medical research to reduce the use of animal or human tissue when testing or troubleshooting new devices or technology. Development of machine-learning detection tools that rely on large ultrasound imaging data sets can potentially be streamlined with high quality phantoms that closely mimic important features of biological tissue. Here, we demonstrate how an ultrasound-compliant tissue phantom comprised of multiple layers of gelatin to mimic bone, fat, and muscle tissue types can be used for machine-learning training. This tissue phantom has a heterogeneous composition to introduce tissue level complexity and subject variability in the tissue phantom. Various shrapnel types were inserted into the phantom for ultrasound imaging to supplement swine shrapnel image sets captured for applications such as deep learning algorithms. With a previously developed shrapnel detection algorithm, blind swine test image accuracy reached more than 95% accuracy when training was comprised of 75% tissue phantom images, with the rest being swine images. For comparison, a conventional MobileNetv2 deep learning model was trained with the same training image set and achieved over 90% accuracy in swine predictions. Overall, the tissue phantom demonstrated high performance for developing deep learning models for ultrasound image classification. Full article

Article
The Feasibility of Shadowed Image Restoration Using the Synthetic Aperture Focusing Technique
Appl. Sci. 2022, 12(18), 9297; https://doi.org/10.3390/app12189297 - 16 Sep 2022
Cited by 1 | Viewed by 986
Abstract
The phenomenon of acoustic shadowing on ultrasonography is characterized by an echo signal void behind structures that strongly absorb or reflect ultrasonic energy. In medical ultrasonography, once the ultrasound energy is shielded, acoustic shadowing makes it difficult to create an image, leading to misinterpretations and obscure diagnoses. Hence, instead of dealing with the defocused problem encountered in an ultrasound scan (US), this current research focuses on revealing the existence of an acoustically shadowed target (or a potential lesion) using a well-known restoration algorithm, i.e., the synthetic aperture focusing technique (SAFT). To demonstrate the effects of an acoustic shadow on an ultrasound scan (US), a forward model study is carried out. In laboratory manipulations, a purposely designed physical model is created and then scanned using B-mode and pitch/catch arrangements to carry out shadowed and shadow-free scans in a water tank. Thereafter, making use of a delay-and-sum (DAS) operation, the echo signals are processed by the synthetic aperture focusing technique (SAFT) to perform image restoration. The results of the restoration process show that the SAFT algorithm performs well with respect to directional shadowing. Once the target or lesion is positioned in a total anechoic zone, or even in a multi-channel scan, it will fail. Full article
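
The delay-and-sum (DAS) operation at the core of SAFT can be summarized as: for each image pixel, shift every recorded A-scan by the round-trip travel time from the scan position to that pixel and sum the shifted samples. The sketch below implements that directly in NumPy; the array geometry, sampling rate, sound speed and synthetic data are illustrative assumptions, not the experimental setup of the paper.

```python
import numpy as np

def saft_das(rf, element_x, fs, c, image_x, image_z):
    """Delay-and-sum SAFT. rf: (n_positions, n_samples) pulse-echo A-scans."""
    image = np.zeros((len(image_z), len(image_x)))
    n_samples = rf.shape[1]
    for iz, z in enumerate(image_z):
        for ix, x in enumerate(image_x):
            # Round-trip distance from each scan position to the pixel and back.
            dist = 2.0 * np.sqrt((element_x - x) ** 2 + z ** 2)
            idx = np.round(dist / c * fs).astype(int)
            valid = idx < n_samples
            image[iz, ix] = rf[np.flatnonzero(valid), idx[valid]].sum()
    return image

# Synthetic example: 64 scan positions over 32 mm, 40 MHz sampling, water (1480 m/s).
fs, c = 40e6, 1480.0
xs = np.linspace(0, 0.032, 64)
rf = np.random.randn(64, 2000) * 0.01
img = saft_das(rf, xs, fs, c,
               image_x=np.linspace(0, 0.032, 128),
               image_z=np.linspace(0.005, 0.03, 100))
print(img.shape)  # (100, 128)
```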
Article
CLAIRE—Parallelized Diffeomorphic Image Registration for Large-Scale Biomedical Imaging Applications
J. Imaging 2022, 8(9), 251; https://doi.org/10.3390/jimaging8090251 - 16 Sep 2022
Cited by 3 | Viewed by 1485
Abstract
We study the performance of CLAIRE, a diffeomorphic multi-node, multi-GPU image-registration algorithm and software, in large-scale biomedical imaging applications with billions of voxels. At such resolutions, most existing software packages for diffeomorphic image registration are prohibitively expensive. As a result, practitioners first significantly downsample the original images and then register them using existing tools. Our main contribution is an extensive analysis of the impact of downsampling on registration performance. We study this impact by comparing full-resolution registrations obtained with CLAIRE to lower-resolution registrations for synthetic and real-world imaging datasets. Our results suggest that registration at full resolution can yield a superior registration quality, but not always. For example, downsampling a synthetic image from 1024³ to 256³ decreases the Dice coefficient from 92% to 79%. However, the differences are less pronounced for noisy or low-contrast high-resolution images. CLAIRE allows us not only to register images of clinically relevant size in a few seconds but also to register images at unprecedented resolution in reasonable time. The highest resolution considered corresponds to CLARITY images of size 2816×3016×1162 voxels. To the best of our knowledge, this is the first study on image registration quality at such resolutions. Full article
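The downsampling effect reported above can be probed with a standard Dice overlap on label maps; the sketch below uses synthetic labels and a 4× nearest-neighbour resampling as a stand-in, not CLAIRE's actual pipeline.

```python
# Hedged sketch: how label downsampling degrades Dice overlap.
import numpy as np
from scipy.ndimage import zoom

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
full = rng.random((128, 128, 128)) > 0.7          # stand-in full-resolution label map
warped = np.roll(full, shift=2, axis=0)           # stand-in "registered" label map

# nearest-neighbour downsampling by 4x per axis, then upsampling back
low = zoom(full.astype(np.uint8), 0.25, order=0)
restored = zoom(low, 4.0, order=0).astype(bool)

print("Dice at full resolution :", dice(full, warped))
print("Dice after down/upsample:", dice(restored, warped))
```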
Interesting Images
Chronic Headache Attributed to Vertebrobasilar Insufficiency
Diagnostics 2022, 12(9), 2038; https://doi.org/10.3390/diagnostics12092038 - 23 Aug 2022
Cited by 1 | Viewed by 1043
Abstract
Vertebrobasilar insufficiency, a condition characterized by poor blood flow to the posterior portion of the brain, can cause headaches. However, the exact underlying mechanism is not yet fully understood. The patient enrolled in our study reported experiencing intermittent headaches radiating from the left shoulder, similar to chronic tension-type headaches. Following aggravation of his headache, severe left vertebral artery stenosis was detected by brain computed tomography angiography. Stent insertion successfully expanded the patient’s narrowed left vertebral artery orifice. Subsequently, the patient’s headaches improved without recurrence during the one-year follow-up period. In summary, the chronic headaches attributed to vertebrobasilar insufficiency in this study improved after stent insertion to reverse severe left vertebral artery stenosis. Full article
Article
Deep Segmentation Networks for Segmenting Kidneys and Detecting Kidney Stones in Unenhanced Abdominal CT Images
Diagnostics 2022, 12(8), 1788; https://doi.org/10.3390/diagnostics12081788 - 23 Jul 2022
Cited by 9 | Viewed by 5496
Abstract
Recent breakthroughs of deep learning algorithms in medical imaging for the automated detection and segmentation of the kidney in abdominal computed tomography (CT) images have been limited. Radiomics and machine learning analyses of renal diseases rely on the automatic segmentation of kidneys in CT images. Inspired by this, our primary aim is to utilize deep semantic segmentation learning models with a proposed training scheme to achieve precise and accurate segmentation outcomes. Moreover, this work aims to provide the community with an open-source, unenhanced abdominal CT dataset for training and testing deep learning segmentation networks to segment kidneys and detect kidney stones. Five variations of deep segmentation networks are trained and tested both dependently (based on the proposed training scheme) and independently. Upon comparison, the models trained with the proposed training scheme enable highly accurate 2D and 3D segmentation of kidneys and kidney stones. We believe this work is a fundamental step toward AI-driven diagnostic strategies, which can be an essential component of personalized patient care and improved decision-making in treating kidney diseases. Full article
Article
Lung Volume Calculation in Preclinical MicroCT: A Fast Geometrical Approach
J. Imaging 2022, 8(8), 204; https://doi.org/10.3390/jimaging8080204 - 22 Jul 2022
Viewed by 1646
Abstract
In this study, we present a time-efficient protocol for thoracic volume calculation as a proxy for total lung volume. We hypothesize that lung volume can be calculated indirectly from this thoracic volume. We compared the measured thoracic volume with manually segmented and automatically thresholded lung volumes, with manual segmentation as the gold standard. A linear regression formula was obtained and used for calculating the theoretical lung volume. This volume was compared with the gold standard volumes. In healthy animals, the thoracic volume was 887.45 mm³, the manually delineated lung volume 554.33 mm³ and the thresholded aerated lung volume 495.38 mm³ on average. The theoretical lung volume was 554.30 mm³. Finally, the protocol was applied to three animal models of lung pathology (lung metastasis, transgenic primary lung tumor and fungal infection). In confirmed pathologic animals, thoracic volumes were 893.20, 860.12 and 1027.28 mm³. Manually delineated volumes were 640.58, 503.91 and 882.42 mm³, respectively. Thresholded lung volumes were 315.92, 408.72 and 236 mm³, respectively. Theoretical lung volumes were 635.28, 524.30 and 863.10 mm³. No significant differences were observed between volumes. This confirmed the potential use of this protocol for lung volume calculation in pathologic models. Full article
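The indirect estimate boils down to a simple linear regression of segmented lung volume against thoracic volume; the sketch below uses synthetic values, not the study's data, to show the fitting and prediction steps.

```python
# Hedged sketch: fit lung volume ~ thoracic volume, then predict a "theoretical"
# lung volume from a measured thoracic volume. Numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
thoracic = rng.normal(890.0, 60.0, 20)                       # mm^3, synthetic cohort
manual_lung = 0.62 * thoracic + rng.normal(0.0, 15.0, 20)    # mm^3, synthetic gold standard

slope, intercept = np.polyfit(thoracic, manual_lung, deg=1)  # linear regression

def theoretical_lung_volume(thoracic_volume):
    return slope * thoracic_volume + intercept

print("theoretical lung volume for an 887.45 mm^3 thorax:",
      round(theoretical_lung_volume(887.45), 2), "mm^3")
```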
Article
Influence of Prior Imaging Information on Diagnostic Accuracy for Focal Skeletal Processes—A Retrospective Analysis of the Consistency between Biopsy-Verified Imaging Diagnoses
Diagnostics 2022, 12(7), 1735; https://doi.org/10.3390/diagnostics12071735 - 17 Jul 2022
Viewed by 1380
Abstract
Introduction: Comparing imaging examinations with those previously obtained is considered mandatory in imaging guidelines. To our knowledge, no studies are available on either the influence or the sequence of prior imaging and reports on diagnostic accuracy using biopsy as the reference standard. Such data are important to minimize diagnostic errors and to improve the preparation of diagnostic imaging guidelines. The aim of our study was to provide such data. Materials and methods: A retrospective cohort of 216 consecutive skeletal biopsies from patients with at least 2 different imaging modalities (X-ray, CT and MRI) performed within 6 months of biopsy was identified. The diagnostic accuracy of the individual imaging modality was assessed. Finally, the possible influence of the sequence of imaging modalities was investigated. Results: No significant difference in the accuracy of the imaging modalities was shown, whether or not they were preceded by another imaging modality. However, the sequence analyses indicate sequential biases, particularly if MRI was the first imaging modality. Conclusion: The sequence of the imaging modalities seems to influence the diagnostic accuracy against a pathology reference standard. Further studies are needed to establish evidence-based guidelines for the strategy of using previous imaging and reports to improve diagnostic accuracy. Full article
Article
The Necessity of Magnetic Resonance Imaging in Congenital Diaphragmatic Hernia
Diagnostics 2022, 12(7), 1733; https://doi.org/10.3390/diagnostics12071733 - 17 Jul 2022
Viewed by 1199
Abstract
This is a retrospective study investigating the relationship between ultrasound and magnetic resonance imaging (MRI) examinations in congenital diaphragmatic hernia (CDH). CDH is a rare cause of pulmonary hypoplasia that increases the mortality and morbidity of patients. Inclusion criteria were: patients diagnosed with CDH who underwent MRI examination after the second-trimester morphology ultrasound confirmed the presence of CDH. The patients came from three university hospitals in Bucharest, Romania. A total of 22 patients were included in the study after applying the exclusion criteria. By analyzing the total lung volume (TLV) using MRI, and the lung to head ratio (LHR) calculated using MRI and ultrasound, we observed that LHR can severely underestimate the severity of the pulmonary hypoplasia, even showing values close to normal in some cases. This also proves to be statistically relevant if we eliminate certain extreme values. We found significant correlations between the LHR percentage and herniated organs, such as the left and right liver lobes and gallbladder. MRI also provided additional insights, indicating the presence of pericarditis or pleurisy. We wish to underline the necessity of MRI follow-up in all cases of CDH, as the accurate measurement of the TLV is important for future treatment and therapeutic strategy. Full article
Article
Force Estimation during Cell Migration Using Mathematical Modelling
J. Imaging 2022, 8(7), 199; https://doi.org/10.3390/jimaging8070199 - 15 Jul 2022
Cited by 1 | Viewed by 1492
Abstract
Cell migration is essential for physiological, pathological and biomedical processes such as embryogenesis, wound healing, immune response, cancer metastasis, tumour invasion and inflammation. In light of this, quantifying mechanical properties during the process of cell migration is of great interest in experimental sciences, yet few theoretical approaches in this direction have been studied. In this work, we propose a theoretical and computational approach based on the optimal control of geometric partial differential equations to estimate cell membrane forces associated with cell polarisation during migration. Specifically, cell membrane forces are inferred or estimated by fitting a mathematical model to a sequence of images, allowing us to capture the dynamics of the cell migration. Our approach offers a robust and accurate framework to compute geometric mechanical membrane forces associated with cell polarisation during migration and also yields geometric information of independent interest; we illustrate one such example, which involves quantifying cell proliferation levels associated with cell division, cell fusion or cell death. Full article
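In generic terms, this kind of force estimation can be read as a PDE-constrained least-squares fit of a geometric evolution law to the observed membrane sequence. The formulation below is a hedged, schematic sketch; the authors' actual functional, evolution law and regularization may differ.

```latex
% Schematic inverse problem: choose membrane forces F so that the simulated
% interface \Gamma matches the observed interfaces \Gamma_k^{obs}; \alpha is a
% regularization weight. All symbols here are illustrative.
\min_{F}\; J(F) \;=\; \frac{1}{2}\sum_{k=1}^{K}
  \operatorname{dist}\!\bigl(\Gamma(t_k; F),\,\Gamma_k^{\mathrm{obs}}\bigr)^{2}
  \;+\; \frac{\alpha}{2}\,\lVert F\rVert^{2}
\quad\text{subject to}\quad
\partial_t \Gamma \;=\; \mathcal{V}(\Gamma, F).
```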
Article
Clinically Inspired Skin Lesion Classification through the Detection of Dermoscopic Criteria for Basal Cell Carcinoma
J. Imaging 2022, 8(7), 197; https://doi.org/10.3390/jimaging8070197 - 12 Jul 2022
Cited by 3 | Viewed by 1818
Abstract
Background and Objective. Skin cancer is the most common cancer worldwide. One of the most common non-melanoma tumors is basal cell carcinoma (BCC), which accounts for 75% of all skin cancers. There are many benign lesions that can be confused with these types of cancers, leading to unnecessary biopsies. In this paper, a new method to identify the different BCC dermoscopic patterns present in a skin lesion is presented. In addition, this information is applied to classify skin lesions into BCC and non-BCC. Methods. The proposed method combines the information provided by the original dermoscopic image, introduced in a convolutional neural network (CNN), with deep and handcrafted features extracted from color and texture analysis of the image. This color analysis is performed by transforming the image into a uniform color space and into a color appearance model. To demonstrate the validity of the method, a comparison between the classification obtained employing exclusively a CNN with the original image as input and the classification with additional color and texture features is presented. Furthermore, an exhaustive comparison of classification employing different color and texture measures derived from different color spaces is presented. Results. Results show that the classifier with additional color and texture features outperforms a CNN whose input is only the original image. Another important achievement is that a new color co-occurrence matrix, proposed in this paper, improves the results obtained with other texture measures. Finally, a sensitivity of 0.99, a specificity of 0.94 and an accuracy of 0.97 are achieved when lesions are classified into BCC or non-BCC. Conclusions. To the best of our knowledge, this is the first time that a methodology to detect all the possible patterns that can be present in a BCC lesion is proposed. This detection leads to a clinically explainable classification into BCC and non-BCC lesions. In this sense, the classification of the proposed tool is based on the detection of the dermoscopic features that dermatologists employ for their diagnosis. Full article
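Handcrafted texture descriptors of the kind combined with CNN features here are often derived from co-occurrence matrices; the sketch below computes a generic grey-level co-occurrence matrix (GLCM), not the paper's new color co-occurrence matrix, on a stand-in lesion crop.

```python
# Hedged sketch: GLCM texture features that could be concatenated with CNN features.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
lesion = (rng.random((128, 128)) * 255).astype(np.uint8)   # stand-in grayscale lesion crop

glcm = graycomatrix(lesion, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

texture_features = np.hstack([
    graycoprops(glcm, prop).ravel()
    for prop in ("contrast", "homogeneity", "energy", "correlation")
])
print(texture_features.shape)   # 4 properties x 2 distances x 2 angles = 16 values
```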
Article
Artificial Intelligence-Based Multimodal Medical Image Fusion Using Hybrid S2 Optimal CNN
Electronics 2022, 11(14), 2124; https://doi.org/10.3390/electronics11142124 - 06 Jul 2022
Cited by 4 | Viewed by 1853
Abstract
In medical applications, medical image fusion methods are capable of fusing medical images from various morphologies to obtain a reliable medical diagnosis. A single modality image cannot provide sufficient information for an exact diagnosis. Hence, an efficient artificial intelligence model based on multimodal medical image fusion is proposed in this paper. Initially, the multimodal medical images are fused using a modified discrete wavelet transform (MDWT), thereby attaining an image with high visual clarity. Then, the fused images are classified as malignant or benign using the proposed convolutional neural network-based hybrid optimization dynamic algorithm (CNN-HOD). To enhance the weight function and classification accuracy of the CNN, a hybrid optimization dynamic algorithm (HOD) is proposed. The HOD is the integration of the sailfish optimizer algorithm and the seagull optimization algorithm. Here, the seagull optimizer algorithm replaces the migration operation to obtain the optimal location. The experimental analysis achieved a standard deviation of 58%, an average gradient of 88%, and a fusion factor of 73% compared with the other approaches. The experimental results demonstrate that the proposed approach performs better than other approaches and offers high-quality fused images for an accurate diagnosis. Full article
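The general idea of wavelet-domain fusion can be sketched as below: average the approximation bands and keep the stronger detail coefficients. This is a plain DWT fusion rule for illustration, not the paper's modified DWT or CNN-HOD classifier.

```python
# Hedged sketch of discrete-wavelet-transform image fusion of two co-registered slices.
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                       # approximation band: average
    for da, db in zip(ca[1:], cb[1:]):                    # detail bands per level: (H, V, D)
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))      # max-absolute selection rule
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
mri = rng.random((256, 256))    # stand-ins for co-registered MRI and CT slices
ct = rng.random((256, 256))
fused = dwt_fuse(mri, ct)
```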
Article
Pneumonia Detection on Chest X-ray Images Using Ensemble of Deep Convolutional Neural Networks
Appl. Sci. 2022, 12(13), 6448; https://doi.org/10.3390/app12136448 - 25 Jun 2022
Cited by 17 | Viewed by 3917
Abstract
Pneumonia is a life-threatening lung infection resulting from several different viral infections. Identifying and treating pneumonia on chest X-ray images can be difficult due to its similarity to other pulmonary diseases. Thus, the existing methods for predicting pneumonia cannot attain substantial levels of accuracy. This paper presents a computer-aided classification of pneumonia, coined Ensemble Learning (EL), to simplify the diagnosis process on chest X-ray images. Our proposal is based on pretrained Convolutional Neural Network (CNN) models, which have recently been employed to enhance the performance of many medical tasks instead of training CNN models from scratch. We propose to use three well-known models (DenseNet169, MobileNetV2, and Vision Transformer) pretrained using the ImageNet database. These models are trained on the chest X-ray data set using fine-tuning. Finally, the results are obtained by combining the extracted features from these three models during the experimental phase. The proposed EL approach outperforms other existing state-of-the-art methods and obtains an accuracy of 93.91% and an F1-score of 93.88% in the testing phase. Full article
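A simple way to picture the ensembling step is soft voting over the three models' class probabilities; the sketch below is a simplified stand-in (the paper combines extracted features rather than averaging outputs), with hypothetical probability arrays.

```python
# Hedged sketch: soft-voting ensemble over three classifiers' softmax outputs.
import numpy as np

# hypothetical per-model softmax outputs on the same test images, shape (n_images, 2)
p_densenet = np.array([[0.90, 0.10], [0.40, 0.60]])
p_mobilenet = np.array([[0.80, 0.20], [0.30, 0.70]])
p_vit = np.array([[0.85, 0.15], [0.55, 0.45]])

ensemble_probs = (p_densenet + p_mobilenet + p_vit) / 3.0
predictions = ensemble_probs.argmax(axis=1)   # 0 = normal, 1 = pneumonia (assumed labels)
print(predictions)
```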
Article
Combining High-Resolution Hard X-ray Tomography and Histology for Stem Cell-Mediated Distraction Osteogenesis
Appl. Sci. 2022, 12(12), 6286; https://doi.org/10.3390/app12126286 - 20 Jun 2022
Cited by 2 | Viewed by 1351
Abstract
Distraction osteogenesis is a clinically established technique for lengthening, molding and shaping bone by new bone formation. The experimental evaluation of this expensive and time-consuming treatment is of high impact for better understanding of tissue engineering but mainly relies on a limited number of histological slices. These tissue slices contain two-dimensional information comprising only about one percent of the volume of interest. In order to analyze the soft and hard tissues of the entire jaw of a single rat in a multimodal assessment, we combined micro computed tomography (µCT) with histology. The µCT data acquired before and after decalcification were registered to determine the impact of decalcification on local tissue shrinkage. Identification of the location of the H&E-stained specimen within the synchrotron radiation-based µCT data collected after decalcification was achieved via non-rigid slice-to-volume registration. The resulting bi- and tri-variate histograms were divided into clusters related to anatomical features from bone and soft tissues, which allowed for a comparison of the approaches and resulted in the hypothesis that the combination of laboratory-based µCT before decalcification, synchrotron radiation-based µCT after decalcification and histology with hematoxylin-and-eosin staining could be used to discriminate between different types of collagen, key components of new bone formation. Full article
Article
Agreement of the Discrepancy Index Obtained Using Digital and Manual Techniques—A Comparative Study
Appl. Sci. 2022, 12(12), 6105; https://doi.org/10.3390/app12126105 - 16 Jun 2022
Viewed by 1521
Abstract
The discrepancy index evaluates the complexity of the initial orthodontic diagnosis. The objective is to compare whether there is a difference in the final discrepancy index score of the American Board of Orthodontics (ABO) when obtained using digital and manual techniques. Fifty-six initial orthodontic records in a digital and physical format were included (28 each) in 2022 at the Center for Research and Advanced Studies in Dentistry. For the digital measurements, iTero and TRIOS 3 intraoral scanners were used, along with Insignia software and cephalometric tracing with Dolphin Imaging software. Manual measurements were obtained in dental casts using the ruler indicated for the previously mentioned discrepancy index, in addition to conventional cephalometric tracing. Student’s t-test did not show statistically significant differences between the digital and manual techniques, with final discrepancy index scores of 24.61 (13.34) and 24.86 (14.14), respectively (p = 0.769). Cohen’s kappa index showed very good agreement between both categorical measurements (kappa value = 1.00, p = 0.001). The Bland–Altman method demonstrated good agreement between the continuous measurements obtained by both techniques, with a bias of 0.2500 (upper limit of agreement = 9.0092988, lower limit of agreement = −8.5092988). Excellent agreement was observed in obtaining the discrepancy index through the digital technique (intraoral scanning and digital records) and the manual technique (conventional records). Full article
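The two agreement analyses named above follow standard recipes; the sketch below computes a Bland–Altman bias with 95% limits of agreement and Cohen's kappa on synthetic scores (the categorical thresholds are illustrative, not the ABO grading).

```python
# Hedged sketch: Bland-Altman agreement and Cohen's kappa on paired measurements.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
digital = rng.normal(24.6, 13.0, 28)              # synthetic digital scores
manual = digital + rng.normal(0.25, 4.0, 28)      # synthetic manual scores

diff = digital - manual
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.3f}, limits of agreement = [{bias - loa:.3f}, {bias + loa:.3f}]")

# categorical severity grades derived from the scores (thresholds are illustrative)
bins = [0, 10, 20, 30, np.inf]
kappa = cohen_kappa_score(np.digitize(digital, bins), np.digitize(manual, bins))
print(f"Cohen's kappa = {kappa:.2f}")
```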
Article
Mobile-PolypNet: Lightweight Colon Polyp Segmentation Network for Low-Resource Settings
J. Imaging 2022, 8(6), 169; https://doi.org/10.3390/jimaging8060169 - 14 Jun 2022
Cited by 3 | Viewed by 1691
Abstract
Colon polyps, small clumps of cells on the lining of the colon, can lead to colorectal cancer (CRC), one of the leading types of cancer globally. Hence, early automatic detection of these polyps is crucial for the prevention of CRC. The deep learning models proposed for the detection and segmentation of colorectal polyps are resource-consuming. This paper proposes a lightweight deep learning model for colorectal polyp segmentation that achieved state-of-the-art accuracy while significantly reducing the model size and complexity. The proposed deep learning autoencoder model employs a set of state-of-the-art architectural blocks and optimization objective functions to achieve the desired efficiency. The model is trained and tested on five publicly available colorectal polyp segmentation datasets (CVC-ClinicDB, CVC-ColonDB, EndoScene, Kvasir, and ETIS). We also performed ablation testing on the model to test various aspects of the autoencoder architecture. We performed the model evaluation by using most of the common image-segmentation metrics. The backbone model achieved a Dice score of 0.935 on the Kvasir dataset and 0.945 on the CVC-ClinicDB dataset, improving the accuracy by 4.12% and 5.12%, respectively, over the current state-of-the-art network, while using 88 times fewer parameters, 40 times less storage space, and being computationally 17 times more efficient. Our ablation study showed that the addition of ConvSkip in the autoencoder slightly improves the model’s performance, but the difference was not significant (p-value = 0.815). Full article
Article
Low-Dose High-Resolution Photon-Counting CT of the Lung: Radiation Dose and Image Quality in the Clinical Routine
Diagnostics 2022, 12(6), 1441; https://doi.org/10.3390/diagnostics12061441 - 11 Jun 2022
Cited by 15 | Viewed by 2127
Abstract
This study aims to investigate the qualitative and quantitative image quality of low-dose high-resolution (LD-HR) lung CT scans acquired with the first clinically approved photon-counting CT (PCCT) scanner. Furthermore, the radiation dose used by the PCCT is compared to that of a conventional CT scanner with an energy-integrating detector system (EID-CT). Twenty-nine patients who underwent an LD-HR chest CT scan with dual-source PCCT and had previously undergone an LD-HR chest CT with a standard EID-CT scanner were retrospectively included in this study. Images of the whole lung as well as enlarged image sections displaying a specific finding (lesion) were evaluated in terms of overall image quality, image sharpness and image noise by three senior radiologists using a 5-point Likert scale. The PCCT images were reconstructed with and without a quantum iterative reconstruction algorithm (PCCT QIR+/−). Noise and signal-to-noise ratio (SNR) were measured and the effective radiation dose was calculated. Overall, image quality and image sharpness were rated best in PCCT (QIR+) images. A significant difference was seen particularly in image sections of PCCT (QIR+) images compared to EID-CT images (p < 0.005). Image noise of PCCT (QIR+) images was significantly lower compared to EID-CT images in image sections (p = 0.005). In contrast, noise was lowest on EID-CT images (p < 0.001). The PCCT used a significantly lower radiation dose compared to the EID-CT (p < 0.001). In conclusion, LD-HR PCCT scans of the lung provide better image quality while using a significantly lower radiation dose compared to EID-CT scans. Full article
Article
Altered Transmission of Cardiac Cycles to Ductus Venosus Blood Flow in Fetal Growth Restriction: Why Ductus Venosus Reflects Fetal Circulatory Changes More Precisely
Diagnostics 2022, 12(6), 1393; https://doi.org/10.3390/diagnostics12061393 - 04 Jun 2022
Viewed by 1500
Abstract
We aimed to investigate the relation between the time intervals of the flow velocity waveform of ductus venosus (DV-FVW) and cardiac cycles. We defined Delta A as the difference in the time measurements between DV-FVW and cardiac cycles on the assumption that the second peak of ductus venosus (D-wave) starts simultaneously with the opening of the mitral valve (MV). As well, we defined Delta B as the difference of the time measurements between DV-FVW and cardiac cycles on the assumption that the D-wave starts simultaneously with the closure of the aortic valve (AV). We then compared Delta A and Delta B in the control and fetal growth restriction (FGR) groups. In the control group of healthy fetuses, Delta A was strikingly shorter than Delta B. On the other hand, in all FGR cases, no difference was observed. The acceleration of the D-wave is suggested to be generated by the opening of the MV under normal fetal hemodynamics, whereas it precedes the opening of the MV in FGR. Our results indicate that the time interval of DV analysis might be a more informative parameter than the analysis of cardiac cycles. Full article
Article
Low-Cost Probabilistic 3D Denoising with Applications for Ultra-Low-Radiation Computed Tomography
J. Imaging 2022, 8(6), 156; https://doi.org/10.3390/jimaging8060156 - 31 May 2022
Cited by 1 | Viewed by 2025
Abstract
We propose a pipeline for synthetic generation of personalized Computed Tomography (CT) images, with a radiation exposure evaluation and a lifetime attributable risk (LAR) assessment. We perform a patient-specific performance evaluation for a broad range of denoising algorithms (including the most popular deep learning denoising approaches, wavelets-based methods, methods based on Mumford–Shah denoising, etc.), focusing both on assessing the capability to reduce the patient-specific CT-induced LAR and on computational cost scalability. We introduce a parallel Probabilistic Mumford–Shah denoising model (PMS) and show that it markedly outperforms the compared common denoising methods in denoising quality and cost scaling. In particular, we show that it allows an approximately 22-fold robust patient-specific LAR reduction for infants and a 10-fold LAR reduction for adults. Using a normal laptop, the proposed algorithm for PMS allows cheap and robust (with a multiscale structural similarity index > 90%) denoising of very large 2D videos and 3D images (with over 10⁷ voxels) that are subject to ultra-strong noise (Gaussian and non-Gaussian) for signal-to-noise ratios far below 1.0. The code is provided for open access. Full article
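The evaluation loop (denoise a heavily corrupted image, then score it with a structural similarity index) can be prototyped as below; total-variation denoising stands in for the paper's probabilistic Mumford–Shah model, which is not reimplemented here.

```python
# Hedged sketch: denoise a strongly corrupted slice and score it with SSIM.
import numpy as np
from skimage import data
from skimage.restoration import denoise_tv_chambolle
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
clean = data.camera().astype(float) / 255.0
noisy = np.clip(clean + rng.normal(0.0, 0.5, clean.shape), 0.0, 1.0)   # strong Gaussian noise

denoised = denoise_tv_chambolle(noisy, weight=0.2)   # stand-in for the PMS denoiser

print("SSIM noisy   :", round(structural_similarity(clean, noisy, data_range=1.0), 3))
print("SSIM denoised:", round(structural_similarity(clean, denoised, data_range=1.0), 3))
```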
Article
Thyroid Biokinetics for Radioactive I-131 in Twelve Thyroid Cancer Patients via the Refined Nine-Compartmental Model
Appl. Sci. 2022, 12(11), 5538; https://doi.org/10.3390/app12115538 - 30 May 2022
Cited by 1 | Viewed by 1429
Abstract
The thyroid biokinetic model of radioactive I-131 was re-evaluated using a refined nine-compartmental model and applied to twelve thyroid cancer patients. In contrast to the simplified four-compartmental model regulated by the ICRP-56 report, the revised model included nine compartments specified in the ICRP-128 report, namely, oral, stomach, body fluid, thyroid, whole body, liver, kidney, bladder, and remainder (i.e., the whole body minus kidney and bladder). A self-developed MATLAB program was designed to solve the nine first-order simultaneous linear differential equations. The model was realized in standard and simplified versions. The latter neglected two feedback paths (body fluid to oral, i31, and kidney to the whole body, i87) to reduce computations. Accordingly, the biological half-lives for the major compartments (thyroid and body fluid + whole body) were 36.00 ± 15.01, 15.04 ± 5.63, 34.33 ± 15.42, and 14.83 ± 5.91 for the standard and simplified versions, respectively. The correlations between theoretical and empirical data for each patient were quantified by the dimensionless AT (agreement) index, and the ATtot index integrated the individual AT values of the specific organs of one patient. Since small AT values indicated a closer correlation, the obtained range of ATtot (0.048 ± 0.019) proved the standard model’s reliability and high accuracy, while the simplified one yielded a slightly higher ATtot (0.058 ± 0.023). The detailed outcomes among the various compartments of the twelve patients were calculated and compared with other researchers’ work. The correlation results on radioactive I-131 evolution in thyroid cancer patients’ bodies are instrumental from the viewpoint of the radiation protection of patients and radiological personnel. Full article
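A compartmental model of this type is a set of coupled first-order linear ODEs; the sketch below solves a reduced four-compartment chain with purely illustrative transfer rates, only to show the numerical setup, not the ICRP-128 parameters.

```python
# Hedged sketch: a small linear compartmental model solved as first-order ODEs.
import numpy as np
from scipy.integrate import solve_ivp

k_oral_to_stomach = 24.0      # 1/day, assumed
k_stomach_to_fluid = 12.0     # 1/day, assumed
k_fluid_to_thyroid = 0.5      # 1/day, assumed
lam_phys = np.log(2) / 8.02   # physical decay constant of I-131, 1/day

def rhs(t, y):
    oral, stomach, fluid, thyroid = y
    return [
        -(k_oral_to_stomach + lam_phys) * oral,
        k_oral_to_stomach * oral - (k_stomach_to_fluid + lam_phys) * stomach,
        k_stomach_to_fluid * stomach - (k_fluid_to_thyroid + lam_phys) * fluid,
        k_fluid_to_thyroid * fluid - lam_phys * thyroid,
    ]

sol = solve_ivp(rhs, t_span=(0.0, 30.0), y0=[1.0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 30.0, 301))
thyroid_activity = sol.y[3]   # retained fraction in the thyroid compartment over time
```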
Article
Comparison Study of Myocardial Radiomics Feature Properties on Energy-Integrating and Photon-Counting Detector CT
Diagnostics 2022, 12(5), 1294; https://doi.org/10.3390/diagnostics12051294 - 23 May 2022
Cited by 9 | Viewed by 1605
Abstract
The implementation of radiomics-based, quantitative imaging parameters is hampered by a lack of stability and standardization. Photon-counting computed tomography (PCCT), compared to energy-integrating computed tomography (EICT), relies on a novel detector technology, promising better spatial resolution and contrast-to-noise ratio. However, its effect on radiomics feature properties is unknown. This work investigates this topic in myocardial imaging. In this retrospective, single-center, IRB-approved study, the left ventricular myocardium was segmented on CT, and the radiomics features were extracted using pyradiomics. To compare features between scanners, a t-test for unpaired samples and an F-test were performed, with a threshold of 0.05 set as a benchmark for significance. Feature correlations were calculated by the Pearson correlation coefficient, and visualization was performed with heatmaps. A total of 50 patients (56% male, mean age 56) were enrolled in this study, with equal proportions of PCCT and EICT. First-order features were nearly comparable between both groups. However, higher-order features showed a partially significant difference between PCCT and EICT. While first-order radiomics features of the left ventricular myocardium show comparability between PCCT and EICT, the detected differences in higher-order features may indicate a possible impact of improved spatial resolution, better detection of lower-energy photons, and a better signal-to-noise ratio on texture analysis on PCCT. Full article
Article
Repeatability of Contrast-Enhanced Ultrasound to Determine Renal Cortical Perfusion
Diagnostics 2022, 12(5), 1293; https://doi.org/10.3390/diagnostics12051293 - 23 May 2022
Viewed by 1646
Abstract
Alterations in renal perfusion play a major role in the pathogenesis of renal diseases. Renal contrast-enhanced ultrasound (CEUS) is increasingly applied to quantify renal cortical perfusion and to assess its change over time, but comprehensive assessment of the technique’s repeatability is lacking. Ten adults attended two renal CEUS scans within 14 days. In each session, five destruction/reperfusion sequences were captured. One-phase association was performed to derive the following parameters: acoustic index (AI), mean transit time (mTT), perfusion index (PI), and wash-in rate (WiR). Intra-individual and inter-operator (image analysis) repeatability for the perfusion variables were assessed using intra-class correlation (ICC), with the agreement assessed using a Bland–Altman analysis. The 10 adults had a median (IQR) age of 39 years (30–46). Good intra-individual repeatability was found for mTT (ICC: 0.71) and PI (ICC: 0.65). Lower repeatability was found for AI (ICC: 0.50) and WiR (ICC: 0.56). The correlation between the two operators was excellent for all variables: the ICCs were 0.99 for PI, 0.98 for AI, 0.87 for mTT, and 0.83 for WiR. The Bland–Altman analysis showed that the mean biases (± SD) between the two operators were 0.03 ± 0.16 for mTT, 0.005 ± 0.09 for PI, 0.04 ± 0.19 for AI, and −0.02 ± 0.11 for WiR. Full article
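For readers who want to reproduce the curve-fitting step, the sketch below fits a one-phase association model to a synthetic reperfusion time-intensity curve with SciPy; the derived mTT and PI formulas are common conventions and may not match the authors' exact definitions.

```python
# Hedged sketch: one-phase association fit to a CEUS destruction/reperfusion curve.
import numpy as np
from scipy.optimize import curve_fit

def one_phase(t, plateau, rate):
    return plateau * (1.0 - np.exp(-rate * t))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 15.0, 120)                      # seconds after microbubble destruction
signal = one_phase(t, plateau=20.0, rate=0.6) + rng.normal(0.0, 0.5, t.size)

(plateau, rate), _ = curve_fit(one_phase, t, signal, p0=(10.0, 0.5))

mTT = 1.0 / rate            # mean transit time surrogate (assumed definition)
PI = plateau / mTT          # perfusion index as plateau / mTT (assumed definition)
print(f"plateau = {plateau:.2f}, rate = {rate:.2f}/s, mTT = {mTT:.2f}s, PI = {PI:.2f}")
```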
Article
Effects of Different Scan Duration on Brain Effective Connectivity among Default Mode Network Nodes
Diagnostics 2022, 12(5), 1277; https://doi.org/10.3390/diagnostics12051277 - 20 May 2022
Cited by 1 | Viewed by 1479
Abstract
Background: Resting-state functional magnetic resonance imaging (rs-fMRI) can evaluate brain functional connectivity without requiring subjects to perform a specific task. rs-fMRI is very useful in patients with cognitive decline or those unable to respond to tasks. However, long scan durations have been suggested for measuring connectivity between brain areas to produce more reliable results, which is not clinically optimal. Therefore, this study aims to evaluate a shorter scan duration and compare scan durations of 10 and 15 min using the rs-fMRI approach. Methods: Twenty-one healthy male and female participants (seventeen right-handed and four left-handed), with ages ranging between 21 and 60 years, were recruited. All participants underwent both 10 and 15 min rs-fMRI scans. The present study evaluated the default mode network (DMN) areas for both scan durations. The areas involved were the posterior cingulate cortex (PCC), medial prefrontal cortex (mPFC), left inferior parietal cortex (LIPC), and right inferior parietal cortex (RIPC). Fifteen causal models were constructed and inverted using spectral dynamic causal modelling (spDCM). The models were compared using Bayesian Model Selection (BMS) for group studies. Results: The BMS results indicated that the fully connected model was the winning model among the 15 competing models for both the 10 and 15 min scan durations. However, there was no significant difference in effective connectivity among the regions of interest between the 10 and 15 min scans. Conclusion: A scan duration in the range of 10 to 15 min is sufficient to evaluate the effective connectivity within the DMN region. In frail subjects, a shorter scan duration is more favourable. Full article
Article
Micro-Computed Tomography Soft Tissue Biological Specimens Image Data Visualization
Appl. Sci. 2022, 12(10), 4918; https://doi.org/10.3390/app12104918 - 12 May 2022
Cited by 3 | Viewed by 1881
Abstract
Visualization of soft tissues in microCT scanning using X-rays is still a complicated matter. There is no simple tool or methodology for setting up an optimal look-up-table while respecting the type of soft tissue. A partial solution may be the use of a contrast agent. However, this must be accompanied by an appropriate look-up-table setting that respects the relationship between the soft tissue type and the Hounsfield units. The main aim of the study is to determine experimentally derived look-up-tables and relevant values of the Hounsfield units based on statistical correlation analysis. These values were obtained from the liver and kidneys of 24 mice stored in ethanol solutions, as the centroid value of the area under the opacity look-up-table curve. Samples and phantom were scanned by a Bruker SkyScan 1275 micro-CT and Phywe XR 4.0 and processed using CTvox and ORS Dragonfly software. To reconstruct the micro-CT projections, NRecon software was used. The main finding of the study is that there is a statistically significant relationship between the centroid of the area under the look-up-table curve and the number of days for which the animal sample was stored in an ethanol solution. The alternative hypothesis H1 of the first test, i.e., that Spearman’s correlation coefficient does not equal zero (r1 ≠ 0) for this relationship, was confirmed. On the other hand, there is no statistically significant relationship between the centroid of the area under the look-up-table curve and the concentration of the ethanol solution. In this case, the alternative hypothesis H1 of the second test, i.e., that Spearman’s correlation coefficient does not equal zero (r2 ≠ 0) for this relationship, was not confirmed. Spearman’s correlation coefficients were −0.27 for the concentration and −0.87 for the number of days stored in ethanol solution in the case of the livers of 13 mice, and 0.06 for the concentration and 0.94 for the number of days stored in ethanol solution in the case of the kidneys of 11 mice. Full article
Article
Detection of the Lateral Thermal Spread during Bipolar Vessel Sealing in an Ex Vivo Model—Preliminary Results
Diagnostics 2022, 12(5), 1217; https://doi.org/10.3390/diagnostics12051217 - 12 May 2022
Cited by 1 | Viewed by 1801
Abstract
Background: As an unwanted side effect, lateral thermal expansion in bipolar tissue sealing may lead to collateral tissue damage. Materials and Methods: Our investigations were carried out on an ex vivo model of porcine carotid arteries. Lateral thermal expansion was measured and a calculated index, based on thermographic recording and histologic examination, was designed to describe the risk of tissue damage. Results: For instrument 1, the mean extent of the critical zone > 50 °C was 2315 ± 509.2 µm above and 1700 ± 331.3 µm below the branches. The width of the necrosis zone was 412.5 ± 79.0 µm above and 426.7 ± 100.7 µm below the branches. For instrument 2, the mean extent of the zone > 50 °C was 2032 ± 592.4 µm above and 1182 ± 386.9 µm below the branches. The width of the necrosis zone was 642.6 ± 158.2 µm above and 645.3 ± 111.9 µm below the branches. Our risk index indicated a low risk of damage for instrument 1 and a moderate to high risk for instrument 2. Conclusion: Thermography is a suitable method to estimate lateral heat propagation, and a validated risk index may lead to improved surgical handling. Full article
Article
Radiomics Profiling Identifies the Value of CT Features for the Preoperative Evaluation of Lymph Node Metastasis in Papillary Thyroid Carcinoma
Diagnostics 2022, 12(5), 1119; https://doi.org/10.3390/diagnostics12051119 - 29 Apr 2022
Viewed by 1687
Abstract
Background: The aim of this study was to identify the increased value of integrating computed tomography (CT) radiomics analysis with the radiologists’ diagnosis and clinical factors to preoperatively diagnose cervical lymph node metastasis (LNM) in papillary thyroid carcinoma (PTC) patients. Methods: A total of 178 PTC patients were randomly divided into a training (n = 125) and a test cohort (n = 53) with a 7:3 ratio. A total of 2553 radiomic features were extracted from noncontrast, arterial contrast-enhanced and venous contrast-enhanced CT images of each patient. Principal component analysis (PCA) and Pearson’s correlation coefficient (PCC) were used for feature selection. Logistic regression was employed to build clinical–radiological, radiomics and combined models. A nomogram was developed by combining the radiomics features, CT-reported lymph node status and clinical factors. Results: The radiomics model showed a predictive performance similar to that of the clinical–radiological model, with similar areas under the curve (AUC) and accuracy (ACC). The combined model showed an optimal predictive performance in both the training (AUC, 0.868; ACC, 86.83%) and test cohorts (AUC, 0.878; ACC, 83.02%). Decision curve analysis demonstrated that the combined model has good clinical application value. Conclusions: Embedding CT radiomics into the clinical diagnostic process improved the diagnostic accuracy. The developed nomogram provides a potential noninvasive tool for LNM evaluation in PTC patients. Full article
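As a generic illustration of the modeling chain described above (feature reduction, logistic regression, AUC), the following sketch uses synthetic data; the PCA-based reduction here only approximates the PCA/Pearson selection reported in the abstract, and the label construction is purely illustrative.

```python
# Hedged sketch: reduce a radiomics feature matrix, fit logistic regression, report AUC.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((178, 2553))                   # synthetic radiomics features per patient
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(178) > 0).astype(int)   # synthetic LNM label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), PCA(n_components=20),
                      LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC = {auc:.3f}")
```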
Article
Applying Taguchi Methodology to Optimize the Brain Image Quality of 128-Sliced CT: A Feasibility Study
Appl. Sci. 2022, 12(9), 4378; https://doi.org/10.3390/app12094378 - 26 Apr 2022
Cited by 2 | Viewed by 1262
Abstract
Injuries due to traffic accidents have been a significant cause of death in Taiwan, and traffic accidents have become increasingly common in recent years. Brain computed tomography (CT) examinations can improve imaging quality and increase the value of an imaging diagnosis. The image quality of the brain gray/white matter was optimized using the Taguchi design with an indigenous polymethylmethacrylate (PMMA) slit gauge to imitate the adult brain and solid water phantoms. The two gauges, without contrast media coating, were placed at the center of a plate to simulate the brain and scanned to obtain images for further analysis. Five major parameters (CT slice thickness, milliampere-seconds, tube voltage, filter type, and field of view) were optimized. Analysis of variance was used to determine individual interactions among all control parameters. The optimal experimental acquisition settings were: slice thickness 2.5 mm, 300 mAs, 140 kVp, smooth filter, and FOV 200 mm². The signal-to-noise ratio was improved by 106% (p < 0.001) over a routine examination. The effective dose (HE) is approximately 1.33 mSv. Further clinical verification and the image quality of the ACR 464 head phantom are also discussed. Full article
Article
Forrest Classification for Bleeding Peptic Ulcer: A New Look at the Old Endoscopic Classification
Diagnostics 2022, 12(5), 1066; https://doi.org/10.3390/diagnostics12051066 - 24 Apr 2022
Cited by 1 | Viewed by 4029
Abstract
The management of peptic ulcer bleeding is clinically challenging. For decades, the Forrest classification has been used for risk stratification for nonvariceal ulcer bleeding. The perception and interpretation of the Forrest classification vary among different endoscopists. The relationship between the bleeder and ulcer images and the different stages of the Forrest classification has not been studied yet. Endoscopic still images of 276 patients with peptic ulcer bleeding from the past 3 years were retrieved and reviewed. The intra-rater and inter-rater agreements were compared. The obtained endoscopic images were manually annotated to delineate the extent of the ulcer and the bleeding area. The areas of the regions of interest were compared between the different stages of the Forrest classification. A total of 276 images were first classified by two experienced tutor endoscopists. The images were then reviewed by six other endoscopists. A good intra-rater correlation was observed (0.92–0.98). A good inter-rater correlation was observed among the different levels of experience (0.639–0.859). The correlation was higher among tutor and junior endoscopists than among experienced endoscopists. Low-risk Forrest IIC and III lesions show distinct patterns compared to high-risk Forrest I, IIA, or IIB lesions. We found good agreement on the Forrest classification among different endoscopists in a single institution. This is the first study to quantitatively analyze the obtained endoscopy images and explain the distinct patterns of bleeding ulcers. Full article
Article
Evaluating the Cisplatin Dose Dependence of Testicular Dysfunction Using Creatine Chemical Exchange Saturation Transfer Imaging
Diagnostics 2022, 12(5), 1046; https://doi.org/10.3390/diagnostics12051046 - 21 Apr 2022
Cited by 1 | Viewed by 1232
Abstract
Chemical exchange saturation transfer (CEST) imaging is a non-invasive molecular imaging technique for indirectly measuring low-concentration endogenous metabolites. Conventional CEST has low specificity, owing to the effects of spillover, magnetization transfer (MT), and T1 relaxation, thus necessitating an inverse Z-spectrum analysis. We aimed to investigate the usefulness of inverse Z-spectrum analysis in creatine (Cr)-CEST in mice, by conducting preclinical 7T-magnetic resonance imaging (MRI) and comparing the conventional analysis metric magnetization transfer ratio (MTRconv) with the novel metric apparent exchange-dependent relaxation (AREX). We performed Cr-CEST imaging using 7T-MRI on mouse testes, using C57BL/6 mice as the control and a cisplatin-treated model. We prepared different doses of cisplatin to observe its dose dependence effect on testicular function. CEST imaging was obtained using an MT pulse with varying saturation frequencies, ranging from −4.8 ppm to +4.8 ppm. The application of control mouse testes improved the specificity of the CEST effect and image contrast between the testes and testicular epithelium. The cisplatin-treated model revealed impaired testicular function, and the Cr-CEST imaging displayed decreased Cr levels in the testes. There was a significant difference between the low- and high-dose models. The MTR values of Cr-CEST reflected the cisplatin dose dependence of testicular dysfunction. Full article
Article
MRI-Based Radiomics Models to Discriminate Hepatocellular Carcinoma and Non-Hepatocellular Carcinoma in LR-M According to LI-RADS Version 2018
Diagnostics 2022, 12(5), 1043; https://doi.org/10.3390/diagnostics12051043 - 21 Apr 2022
Cited by 5 | Viewed by 1464
Abstract
Differentiating hepatocellular carcinoma (HCC) from other primary liver malignancies in the Liver Imaging Reporting and Data System (LI-RADS) M (LR-M) tumours noninvasively is critical for patient treatment options, but visual evaluation based on medical images is a very challenging task. This study aimed to evaluate whether magnetic resonance imaging (MRI) models based on radiomics features could further improve the ability to classify LR-M tumour subtypes. A total of 102 liver tumours were defined as LR-M by two radiologists based on LI-RADS and were confirmed to be HCC (n = 31) and non-HCC (n = 71) by surgery. A radiomics signature was constructed based on reproducible features using the max-relevance and min-redundancy (mRMR) and least absolute shrinkage and selection operator (LASSO) logistic regression algorithms with tenfold cross-validation. Logistic regression modelling was applied to establish different models based on T2-weighted imaging (T2WI), arterial phase (AP), portal vein phase (PVP), and combined models. These models were verified independently in the validation cohort. The area under the curve (AUC) of the models based on T2WI, AP, PVP, T2WI + AP, T2WI + PVP, AP + PVP, and T2WI + AP + PVP were 0.768, 0.838, 0.778, 0.880, 0.818, 0.832, and 0.884, respectively. The combined model based on T2WI + AP + PVP showed the best performance in the training cohort and validation cohort. The discrimination efficiency of each radiomics model was significantly better than that of junior radiologists’ visual assessment (p < 0.05; Delong). Therefore, the MRI-based radiomics models had a good ability to discriminate between HCC and non-HCC in LR-M tumours, providing more options to improve the accuracy of LI-RADS classification. Full article
Article
Detection of Chronic Blast-Related Mild Traumatic Brain Injury with Diffusion Tensor Imaging and Support Vector Machines
Diagnostics 2022, 12(4), 987; https://doi.org/10.3390/diagnostics12040987 - 14 Apr 2022
Cited by 5 | Viewed by 1825
Abstract
Blast-related mild traumatic brain injury (bmTBI) often leads to long-term sequelae, but diagnostic approaches are lacking owing to insufficient knowledge about the predominant pathophysiology. This study aimed to build a diagnostic model for future verification by applying machine learning-based support vector machine (SVM) modeling to diffusion tensor imaging (DTI) datasets to elucidate white-matter features that distinguish bmTBI from healthy controls (HC). Twenty combat-deployed personnel with subacute/chronic bmTBI and 19 combat-deployed HCs underwent DTI. Clinically relevant features for modeling were selected using tract-based analyses that identified group differences throughout white-matter tracts in five DTI metrics, to elucidate the pathogenesis of injury. These features were then analyzed using SVM modeling with cross-validation. Tract-based analyses revealed abnormally decreased radial diffusivity (RD), increased fractional anisotropy (FA), and an increased axial/radial diffusivity ratio (AD/RD) in the bmTBI group, mostly in anterior tracts (29 features). SVM models showed that the FA of the anterior/superior corona radiata and the AD/RD of the corpus callosum and anterior limbs of the internal capsule (five features) best distinguished bmTBI from HCs, with 89% accuracy. This is the first application of SVM to identify prominent features of bmTBI solely from DTI metrics in well-defined tracts, which, if successfully validated, could promote targeted treatment interventions. Full article
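A minimal sketch of the SVM step, assuming the five selected tract features are already available as a numeric matrix; the data below are random placeholders, and the generic stratified five-fold scheme is an assumption, not necessarily the cross-validation used in the study.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(39, 5))             # 5 tract features (FA, AD/RD) per subject
y = np.r_[np.ones(20), np.zeros(19)]     # 20 bmTBI vs 19 healthy controls

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```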
(This article belongs to the Topic Medical Image Analysis)

Article
Sinus Plain Film Can Predict a Risky Distance from the Lacrimal Sac to the Anterior Skull Base: An Anatomic Study of Dacryocystorhinostomy
Diagnostics 2022, 12(4), 930; https://doi.org/10.3390/diagnostics12040930 - 08 Apr 2022
Viewed by 1494
Abstract
Background: Removal of the surrounding bone during dacryocystorhinostomy may present a higher risk of skull base injury in patients with frontal sinus aplasia. We used sinus plain films to predict cases with a greater risk of a reduced skull base distance in dacryocystorhinostomy. Methods: Sinus plain films and computed tomography data from patients were retrospectively evaluated. The frontal sinus was classified as normal, hypoplastic, or aplastic according to Waters’ view. Correlations of the frontal sinus roof-supraorbital margin (F-O) and frontal sinus roof-nasion (F-N) distances on plain film with the closest lacrimal sac-anterior skull base (LS-ASB) distance measured on computed tomography images were assessed. Results: We evaluated 110 patients. In total, 16 (11.8%) patients had frontal sinus aplasia, of whom 6 (2.7%) had bilateral and 10 (9.1%) had unilateral aplasia. Sides with frontal sinus aplasia on Waters’ view had a shorter median LS-ASB distance than normal or hypoplastic sides. The F-O and F-N distances in Waters’ view were significantly positively correlated with the LS-ASB distance on computed tomography. The F-O and F-N thresholds for predicting an LS-ASB distance of <10 mm, considered a risky distance, were 11.6 and 14.4 mm, respectively, with sensitivities of 100% and 91.7% and specificities of 76% and 82.7%, respectively. Conclusions: The LS-ASB distance is shorter on sides with an aplastic frontal sinus. Waters’ view on plain sinus films provides a fast and inexpensive way to evaluate the skull base distance and sinonasal condition when planning dacryocystorhinostomy. Full article
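To illustrate how such plain-film cutoffs translate into sensitivity and specificity, the sketch below flags sides whose F-O distance falls at or below a threshold and compares the flags with the CT-defined risky condition (LS-ASB distance < 10 mm); the measurements and labels are invented for illustration and are not the study's data.

```python
import numpy as np

def sensitivity_specificity(distances, risky, threshold):
    """Flag a side as 'at risk' when its plain-film distance <= threshold and
    compare against the CT ground truth (risky = LS-ASB distance < 10 mm)."""
    flagged = distances <= threshold
    tp = np.sum(flagged & risky)
    fn = np.sum(~flagged & risky)
    tn = np.sum(~flagged & ~risky)
    fp = np.sum(flagged & ~risky)
    return tp / (tp + fn), tn / (tn + fp)

# Invented example: plain-film F-O distances (mm) and CT-derived risky labels.
f_o = np.array([9.5, 10.8, 12.3, 14.0, 16.5, 18.2])
risky = np.array([True, True, False, False, False, False])
sens, spec = sensitivity_specificity(f_o, risky, threshold=11.6)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```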
(This article belongs to the Topic Medical Image Analysis)

Article
Removal of Specular Reflection Using Angle Adjustment of Linear Polarized Filter in Medical Imaging Diagnosis
Diagnostics 2022, 12(4), 863; https://doi.org/10.3390/diagnostics12040863 - 30 Mar 2022
Cited by 2 | Viewed by 1514
Abstract
The biggest problem in diagnostic medical imaging is the occurrence of light reflection during image acquisition for lesion diagnosis. This reflection obscures the lesion in the diagnostic field of view and interferes with correct diagnosis by the observer. The existing approach has the drawback that diagnosis must be performed while light reflection is suppressed by adjusting the viewing angle of the camera. This paper proposes a method that rotates a linear polarizing filter to remove light reflection in a diagnostic imaging camera. Vertical and horizontal polarization are controlled by rotating the filter, and the polarization is adjusted to horizontal polarization. The rotation angle of the filter for horizontal polarization control will be 90°, and the vertical and horizontal polarization waves