
Biomedical Imaging and Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Chemical Sensors".

Deadline for manuscript submissions: closed (31 December 2019) | Viewed by 53030

Special Issue Editors


Guest Editor
Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Interests: computer vision; medical imaging; artificial intelligence; machine learning; deep learning

Guest Editor
Faculty of Applied Health Sciences, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Interests: cardiovascular physiology; environmental physiology (spaceflight, altitude); vascular aging; cardiopulmonary exercise testing; ultrasound

Guest Editor
Schlegel-University of Waterloo Research Institute for Aging, Waterloo, ON N2J 0E2, Canada
Interests: biomedical optics; biomedical image processing; photoplethysmographic imaging; cardiovascular system; machine learning

Special Issue Information

Dear Colleagues,

Human health monitoring technologies have seen rapid and exciting growth in recent years. Advances in hardware and software systems have enabled new ways of detecting, monitoring, and tracking health-related biomarkers in clinical, pre-clinical, and home environments. This technological shift has led to improved healthcare efficacy, including earlier detection of disease, novel treatment procedures, and quantitative self-monitoring.

Recent growth in computational power and sensors has enabled both new sensing technologies and new analysis techniques for well-established modalities. For example, advances in machine learning and image processing have become widespread across different facets of biomedical imaging and sensing for identifying complex patterns. Likewise, hardware advances have yielded health monitoring technologies for difficult environments, such as ambulatory monitoring. To bring different facets of health monitoring together, papers reporting novel imaging and/or sensing methods with pre-clinical/clinical applications are invited for submission to this Special Issue. The core themes of this topic include, but are not limited to:

  • Advances in biomedical optical imaging/sensing, including but not limited to diffuse optical spectroscopy, near infrared spectroscopy, diffuse correlation spectroscopy, spatial frequency domain imaging, diffuse optical imaging, optical coherence tomography, photoacoustic imaging, microscopy, optical fibers and sensors, etc.
  • Advances in image analysis for disease detection and/or monitoring, including, but not limited to, MRI, X-ray, PET, ultrasound, etc.
  • Advances in multimodal systems for diagnosis, treatment, and/or prevention.
  • Pre-clinical and clinical applications of novel biomedical sensing technologies, including but not limited to cardiovascular and cardiopulmonary, neurovascular, critical care, sepsis, home monitoring, aging, etc.
  • Machine learning methods for biomedical image and signal analysis.

Prof. Alexander Wong
Prof. Richard L. Hughson
Dr. Robert Amelard
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical imaging and sensing
  • human health
  • biomedical image processing
  • machine learning

Published Papers (13 papers)


Research

15 pages, 849 KiB  
Article
Radiomics Driven Diffusion Weighted Imaging Sensing Strategies for Zone-Level Prostate Cancer Sensing
by Chris Dulhanty, Linda Wang, Maria Cheng, Hayden Gunraj, Farzad Khalvati, Masoom A. Haider and Alexander Wong
Sensors 2020, 20(5), 1539; https://doi.org/10.3390/s20051539 - 10 Mar 2020
Cited by 12 | Viewed by 3512
Abstract
Prostate cancer is the most commonly diagnosed cancer in North American men; however, prognosis is relatively good given early diagnosis. This motivates the need for fast and reliable prostate cancer sensing. Diffusion weighted imaging (DWI) has gained traction in recent years as a fast non-invasive approach to cancer sensing. The most commonly used DWI sensing modality currently is apparent diffusion coefficient (ADC) imaging, with the recently introduced computed high-b value diffusion weighted imaging (CHB-DWI) showing considerable promise for cancer sensing. In this study, we investigate the efficacy of ADC and CHB-DWI sensing modalities when applied to zone-level prostate cancer sensing by introducing several radiomics driven zone-level prostate cancer sensing strategies geared around hand-engineered radiomic sequences from DWI sensing (which we term Zone-X sensing strategies). Furthermore, we also propose Zone-DR, a discovery radiomics approach based on zone-level deep radiomic sequencer discovery that discovers radiomic sequences directly for radiomics driven sensing. Experimental results using 12,466 pathology-verified zones obtained through the different DWI sensing modalities of 101 patients showed that: (i) the introduced Zone-X and Zone-DR radiomics driven sensing strategies significantly outperformed the traditional clinical heuristics driven strategy in terms of AUC, (ii) the introduced Zone-DR and Zone-SVM strategies achieved the highest sensitivity and specificity, respectively, for ADC amongst the tested radiomics driven strategies, (iii) the introduced Zone-DR and Zone-LR strategies achieved the highest sensitivities for CHB-DWI amongst the tested radiomics driven strategies, and (iv) the introduced Zone-DR, Zone-LR, and Zone-SVM strategies achieved the highest specificities for CHB-DWI amongst the tested radiomics driven strategies. Furthermore, the results showed that the trade-off between sensitivity and specificity can be optimized for the particular clinical scenario in which radiomics driven DWI prostate cancer sensing strategies are to be employed, such as clinical screening versus surgical planning. Finally, we investigate the critical regions within the sensing data that led to a given radiomic sequence generated by a Zone-DR sequencer using an explainability method, to gain a deeper understanding of the biomarkers important for zone-level cancer sensing. Full article
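The zone-level sensitivity and specificity figures discussed in this abstract follow the standard confusion-matrix definitions; a minimal sketch of those definitions (the function and variable names are illustrative, not taken from the paper's code):

```python
def sensitivity_specificity(y_true, y_pred):
    """Standard definitions over binary labels (1 = positive zone).

    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

A screening scenario would favor the first value (catching every cancerous zone), while surgical planning would favor the second (avoiding false positives), which is the trade-off the abstract refers to.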
(This article belongs to the Special Issue Biomedical Imaging and Sensing)

15 pages, 3077 KiB  
Article
In Situ Characterization of Micro-Vibration in Natural Latex Membrane Resembling Tympanic Membrane Functionally Using Optical Doppler Tomography
by Daewoon Seong, Jaehwan Kwon, Deokmin Jeon, Ruchire Eranga Wijesinghe, Jaeyul Lee, Naresh Kumar Ravichandran, Sangyeob Han, Junsoo Lee, Pilun Kim, Mansik Jeon and Jeehyun Kim
Sensors 2020, 20(1), 64; https://doi.org/10.3390/s20010064 - 20 Dec 2019
Cited by 11 | Viewed by 2910
Abstract
Non-invasive characterization of micro-vibrations in the tympanic membrane (TM) excited by external sound waves is considered a promising and essential diagnostic in modern otolaryngology. To verify the possibility of measuring and discriminating the vibration pattern of the TM, here we describe a micro-vibration measurement method for a latex membrane resembling the TM. The measurements were obtained with externally generated audio stimuli at 2.0, 2.2, 2.8, 3.1, and 3.2 kHz, and the corresponding vibration-based tomographic, volumetric, and quantitative evaluations were acquired using optical Doppler tomography (ODT). The micro-oscillations and structural changes that occurred at the various frequencies were measured with sufficient accuracy using a highly sensitive ODT system employing a phase subtraction method. The results demonstrated the capability of measuring and analyzing the complex, varying micro-vibration of the membrane according to the applied sound frequency. Full article
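In phase-resolved Doppler OCT/ODT generally, axial velocity follows from the phase difference between successive A-scans; a sketch of the standard relation (the parameter values in the comment are illustrative, not the paper's system specifications):

```python
import math

def doppler_velocity(delta_phi, wavelength0, dt, n=1.35):
    """Phase-resolved Doppler relation: v = delta_phi * lambda0 / (4*pi*n*dt).

    delta_phi   phase difference between successive A-scans (rad)
    wavelength0 center wavelength in vacuum (m)
    dt          time between successive A-scans (s)
    n           refractive index of the sample
    """
    return delta_phi * wavelength0 / (4.0 * math.pi * n * dt)

# e.g. delta_phi = pi at lambda0 = 850 nm, dt = 10 us, n = 1.35
# gives an axial velocity of roughly 16 mm/s
```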

10 pages, 3083 KiB  
Article
Heterogeneity Detection Method for Transmission Multispectral Imaging Based on Contour and Spectral Features
by Yanjun Wang, Gang Li, Wenjuan Yan, Guoquan He and Ling Lin
Sensors 2019, 19(24), 5369; https://doi.org/10.3390/s19245369 - 05 Dec 2019
Cited by 8 | Viewed by 2371
Abstract
Transmission multispectral imaging (TMI) has potential value for medical applications, such as early screening for breast cancer. However, because biological tissue has strong scattering and absorption characteristics, the heterogeneity detection capability of TMI is poor. Many techniques, such as frame accumulation and shape function signal modulation/demodulation techniques, can improve detection accuracy. In this work, we develop a heterogeneity detection method by combining the contour features and spectral features of TMI. Firstly, the acquisition experiment for the phantom multispectral images was designed. Secondly, the signal-to-noise ratio (SNR) and grayscale level were improved by combining frame accumulation with shape function signal modulation and demodulation techniques. Then, an image exponential downsampling pyramid and Laplace operator were used to roughly extract and fuse the contours of all heterogeneities in images produced at a variety of wavelengths. Finally, we used the hypothesis of invariant parameters to classify the heterogeneities. Experimental results show that these invariant parameters can effectively distinguish heterogeneities with various thicknesses. Moreover, this method may provide a reference for heterogeneity detection in TMI. Full article

17 pages, 2662 KiB  
Article
Automatic Identification and Intuitive Map Representation of the Epiretinal Membrane Presence in 3D OCT Volumes
by Sergio Baamonde, Joaquim de Moura, Jorge Novo, Pablo Charlón and Marcos Ortega
Sensors 2019, 19(23), 5269; https://doi.org/10.3390/s19235269 - 29 Nov 2019
Cited by 8 | Viewed by 2753
Abstract
Optical Coherence Tomography (OCT) is a medical image modality providing high-resolution cross-sectional visualizations of the retinal tissues without any invasive procedure, commonly used in the analysis of retinal diseases such as diabetic retinopathy or retinal detachment. Early identification of the epiretinal membrane (ERM) facilitates ERM surgical removal operations. Moreover, the presence of the ERM is linked to other retinal pathologies, such as macular edemas, that are among the main causes of vision loss. In this work, we propose an automatic method for the characterization and visualization of the ERM’s presence using 3D OCT volumes. A set of 452 features is refined using the Spatial Uniform ReliefF (SURF) selection strategy to identify the most relevant ones. Afterwards, a set of representative classifiers is trained, selecting the most proficient model to generate a 2D reconstruction of the ERM’s presence. Finally, a post-processing stage using a set of morphological operators is performed to improve the quality of the generated maps. To verify the proposed methodology, we used 20 3D OCT volumes, both with and without the ERM’s presence, totalling 2428 OCT images manually labeled by a specialist. The best-performing classifier in the training stage achieved a mean accuracy of 91.9%. Regarding the post-processing stage, mean specificity values of 91.9% and 99.0% were obtained from volumes with and without the ERM’s presence, respectively. Full article

21 pages, 5706 KiB  
Article
Automatic Identification and Representation of the Cornea–Contact Lens Relationship Using AS-OCT Images
by Pablo Cabaleiro, Joaquim de Moura, Jorge Novo, Pablo Charlón and Marcos Ortega
Sensors 2019, 19(23), 5087; https://doi.org/10.3390/s19235087 - 21 Nov 2019
Cited by 5 | Viewed by 3076
Abstract
The clinical study of the cornea–contact lens relationship is widely used in the process of adapting the scleral contact lens (SCL) to the ocular morphology of patients. In that sense, the measurement of the adjustment between the SCL and the cornea can be used to study the comfort or potential damage that the lens may produce in the eye. The current analysis procedure implies the manual inspection of anterior segment optical coherence tomography (AS-OCT) images by clinical experts. This process presents several limitations, such as the inability to obtain complex metrics, the inaccuracies of the manual measurements, or the tedious and time-consuming nature of the manual process, among others. This work proposes a fully-automatic methodology for the extraction of the areas of interest in the study of the cornea–contact lens relationship and the measurement of representative metrics that allow the clinicians to quantitatively measure the adjustment between the lens and the eye. In particular, three distance metrics are herein proposed: vertical, normal to the tangent of the region of interest, and by the nearest point. Moreover, the images are classified to characterize the analysis as belonging to the central cornea, peripheral cornea, limbus or sclera (regions where the inner layer of the lens has already joined the cornea). Finally, the methodology graphically presents the results of the identified segmentations using an intuitive visualization that facilitates the analysis and diagnosis of the patients by the clinical experts. Full article

22 pages, 2161 KiB  
Article
A Multi-Scale Directional Line Detector for Retinal Vessel Segmentation
by Ahsan Khawaja, Tariq M. Khan, Mohammad A. U. Khan and Syed Junaid Nawaz
Sensors 2019, 19(22), 4949; https://doi.org/10.3390/s19224949 - 13 Nov 2019
Cited by 32 | Viewed by 4439
Abstract
The assessment of transformations in the retinal vascular structure has a strong potential in indicating a wide range of underlying ocular pathologies. Correctly identifying the retinal vessel map is a crucial step in disease identification, severity progression assessment, and appropriate treatment. Marking the vessels manually by a human expert is a tedious and time-consuming task, thereby reinforcing the need for automated algorithms capable of quick segmentation of retinal features and any possible anomalies. Techniques based on unsupervised learning methods utilize vessel morphology to classify vessel pixels. This study proposes a directional multi-scale line detector technique for the segmentation of retinal vessels with the prime focus on the tiny vessels that are most difficult to segment out. Constructing a directional line-detector, and using it on images having only the features oriented along the detector’s direction, significantly improves the detection accuracy of the algorithm. The finishing step involves a binarization operation, which is again directional in nature, helps in achieving further performance improvements in terms of key performance indicators. The proposed method is observed to obtain a sensitivity of 0.8043, 0.8011, and 0.7974 for the Digital Retinal Images for Vessel Extraction (DRIVE), STructured Analysis of the Retina (STARE), and Child Heart And health Study in England (CHASE_DB1) datasets, respectively. These results, along with other performance enhancements demonstrated by the conducted experimental evaluation, establish the validity and applicability of directional multi-scale line detectors as a competitive framework for retinal image segmentation. Full article
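The basic single-scale line-detector response that underlies this family of methods takes, at each pixel, the best oriented line average minus the local window mean; a minimal sketch of that baseline, not the authors' multi-scale directional variant (image values and sizes are illustrative):

```python
import numpy as np

def line_response(img, length=7, n_angles=12):
    """Single-scale line detector: for each pixel, max over orientations of
    (mean intensity along a line of `length` pixels) - (mean of the
    surrounding length x length window). Bright thin lines score high."""
    r = length // 2
    pad = np.pad(img, r, mode="reflect")
    H, W = img.shape
    # mean of the (length x length) window centered on each pixel
    win = np.zeros_like(img, dtype=float)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            win += pad[r + di:r + di + H, r + dj:r + dj + W]
    win /= float(length * length)
    best = np.full(img.shape, -np.inf)
    for a in range(n_angles):
        theta = np.pi * a / n_angles
        line = np.zeros_like(img, dtype=float)
        for k in range(-r, r + 1):
            di = int(round(k * np.sin(theta)))
            dj = int(round(k * np.cos(theta)))
            line += pad[r + di:r + di + H, r + dj:r + dj + W]
        line /= float(length)
        best = np.maximum(best, line - win)
    return best
```

The multi-scale extension in the paper varies `length` and combines responses; the directional idea restricts the input to features oriented along the detector's direction before applying it.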

18 pages, 3673 KiB  
Article
Automatic Visual Acuity Estimation by Means of Computational Vascularity Biomarkers Using OCT Angiographies
by Macarena Díaz, Marta Díez-Sotelo, Francisco Gómez-Ulla, Jorge Novo, Manuel Francisco G. Penedo and Marcos Ortega
Sensors 2019, 19(21), 4732; https://doi.org/10.3390/s19214732 - 31 Oct 2019
Cited by 3 | Viewed by 2822
Abstract
Optical Coherence Tomography Angiography (OCTA) constitutes a new non-invasive ophthalmic image modality that allows the precise visualization of the micro-retinal vascularity, and is commonly used to analyze the foveal region. Given that there are many systemic and eye diseases that affect the eye fundus and its vascularity, the analysis of that region is crucial to diagnose and estimate vision loss. The Visual Acuity (VA) is typically measured manually, implying an exhaustive and time-consuming procedure. In this work, we propose a method that exploits the information of the OCTA images to automatically estimate the VA with an error of 0.1713. Full article

17 pages, 6383 KiB  
Article
Mixed Maximum Loss Design for Optic Disc and Optic Cup Segmentation with Deep Learning from Imbalanced Samples
by Yong-li Xu, Shuai Lu, Han-xiong Li and Rui-rui Li
Sensors 2019, 19(20), 4401; https://doi.org/10.3390/s19204401 - 11 Oct 2019
Cited by 25 | Viewed by 4141
Abstract
Glaucoma is a serious eye disease that can cause permanent blindness and is difficult to diagnose early. The optic disc (OD) and optic cup (OC) play a pivotal role in the screening of glaucoma. Therefore, accurate segmentation of the OD and OC from fundus images is a key task in the automatic screening of glaucoma. In this paper, we designed a U-shaped convolutional neural network with multi-scale input and multi-kernel modules (MSMKU) for OD and OC segmentation. Such a design gives MSMKU a rich receptive field and the ability to effectively represent multi-scale features. In addition, we designed a mixed maximum loss minimization learning strategy (MMLM) for training the proposed MSMKU. This training strategy can adaptively sort the samples by the loss function and re-weight the samples through data enhancement, thereby synchronously improving the prediction performance of all samples. Experiments show that the proposed method obtained state-of-the-art results for OD and OC segmentation on the RIM-ONE-V3 and DRISHTI-GS datasets. At the same time, the proposed method achieved satisfactory glaucoma screening performance on the RIM-ONE-V3 and DRISHTI-GS datasets. On datasets with an imbalanced distribution between typical and rare sample images, the proposed method obtained a higher accuracy than existing deep learning methods. Full article

15 pages, 5012 KiB  
Article
Design, Implementation, and Evaluation of a Head and Neck MRI RF Array Integrated with a 511 keV Transmission Source for Attenuation Correction in PET/MR
by Lucia Isabel Navarro de Lara, Roberta Frass-Kriegl, Andreas Renner, Jürgen Sieg, Michael Pichler, Thomas Bogner, Ewald Moser, Thomas Beyer, Wolfgang Birkfellner, Michael Figl and Elmar Laistler
Sensors 2019, 19(15), 3297; https://doi.org/10.3390/s19153297 - 26 Jul 2019
Cited by 5 | Viewed by 4564
Abstract
The goal of this work is to further improve positron emission tomography (PET) attenuation correction and magnetic resonance (MR) sensitivity for head and neck applications of PET/MR. A dedicated 24-channel receive-only array, fully-integrated with a hydraulic system to move a transmission source helically around the patient and radiofrequency (RF) coil array, is designed, implemented, and evaluated. The device enables the calculation of attenuation coefficients from PET measurements at 511 keV including the RF coil and the particular patient. The RF coil design is PET-optimized by minimizing photon attenuation from coil components and housing. The functionality of the presented device is successfully demonstrated by calculating the attenuation map of a water bottle based on PET transmission measurements; results are in excellent agreement with reference values. It is shown that the device itself has marginal influence on the static magnetic field B0 and the radiofrequency transmit field B1 of the 3T PET/MR system. Furthermore, the developed RF array is shown to outperform a standard commercial 16-channel head and neck coil in terms of signal-to-noise ratio (SNR) and parallel imaging performance. In conclusion, the presented hardware enables accurate calculation of attenuation maps for PET/MR systems while improving the SNR of corresponding MR images in a single device without degrading the B0 and B1 homogeneity of the scanner. Full article
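The attenuation coefficients mentioned above come from transmission measurements via the Beer-Lambert law; a one-line sketch of that relation (the water value in the test comment, roughly 0.096 cm⁻¹ at 511 keV, is the commonly cited reference figure, not a number from this paper):

```python
import math

def mu_from_transmission(i_blank, i_object, path_cm):
    """Beer-Lambert: I = I0 * exp(-mu * d)  =>  mu = ln(I0 / I) / d  [1/cm].

    i_blank   counts along a line of response with nothing in the beam
    i_object  counts along the same line with the object in place
    path_cm   path length through the object (cm)
    """
    return math.log(i_blank / i_object) / path_cm
```

Comparing blank and object transmission scans line by line is what lets the device fold both the RF coil and the individual patient into the attenuation map.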

26 pages, 13199 KiB  
Article
Deep Ensemble Learning Based Objective Grading of Macular Edema by Extracting Clinically Significant Findings from Fused Retinal Imaging Modalities
by Bilal Hassan, Taimur Hassan, Bo Li, Ramsha Ahmed and Omar Hassan
Sensors 2019, 19(13), 2970; https://doi.org/10.3390/s19132970 - 05 Jul 2019
Cited by 30 | Viewed by 3753
Abstract
Macular edema (ME) is a retinal condition in which the central vision of a patient is affected. ME leads to an accumulation of fluid in the surrounding macular region, resulting in a swollen macula. Optical coherence tomography (OCT) and fundus photography are the two widely used retinal examination techniques that can effectively detect ME. Many researchers have utilized retinal fundus and OCT imaging for detecting ME. However, to the best of our knowledge, no work is found in the literature that fuses the findings from both retinal imaging modalities for the effective and more reliable diagnosis of ME. In this paper, we propose an automated framework for the classification of ME and healthy eyes using retinal fundus and OCT scans. The proposed framework is based on deep ensemble learning, where the input fundus and OCT scans are recognized through a deep convolutional neural network (CNN) and are processed accordingly. The processed scans are further passed to the second layer of the deep CNN model, which extracts the required feature descriptors from both images. The extracted descriptors are then concatenated together and passed to a supervised hybrid classifier made through the ensemble of artificial neural networks, support vector machines and naïve Bayes. The proposed framework has been trained on 73,791 retinal scans and is validated on 5100 scans of the publicly available Zhang dataset and Rabbani dataset. The proposed framework achieved an accuracy of 94.33% for diagnosing ME and healthy subjects, and achieved a mean dice coefficient of 0.9019 ± 0.04 for accurately extracting the retinal fluids, 0.7069 ± 0.11 for hard exudates and 0.8203 ± 0.03 for retinal blood vessels against the clinical markings. Full article

15 pages, 4195 KiB  
Article
Laplacian Eigenmaps Network-Based Nonlocal Means Method for MR Image Denoising
by Houqiang Yu, Mingyue Ding and Xuming Zhang
Sensors 2019, 19(13), 2918; https://doi.org/10.3390/s19132918 - 01 Jul 2019
Cited by 22 | Viewed by 3239
Abstract
Magnetic resonance (MR) images are often corrupted by Rician noise which degrades the accuracy of image-based diagnosis tasks. The nonlocal means (NLM) method is a representative filter for denoising MR images due to its competitive denoising performance. However, the existing NLM methods usually exploit the gray-level information or hand-crafted features to evaluate the similarity between image patches, which is disadvantageous for preserving the image details while smoothing out noise. In this paper, an improved nonlocal means method is proposed for removing Rician noise in MR images by using the refined similarity measures. The proposed method firstly extracts the intrinsic features from the pre-denoised image using a shallow convolutional neural network named Laplacian eigenmaps network (LEPNet). Then, the extracted features are used for computing the similarity in the NLM method to produce the denoised image. Finally, the method noise of the denoised image is utilized to further improve the denoising performance. Specifically, the LEPNet model is composed of two cascaded convolutional layers and a nonlinear output layer, in which the Laplacian eigenmaps are employed to learn the filter bank in the convolutional layers and the Leaky Rectified Linear Unit activation function is used in the final output layer to output the nonlinear features. Due to the advantage of LEPNet in recovering the geometric structure of the manifold in the low-dimension space, the features extracted by this network can facilitate characterizing the self-similarity better than the existing NLM methods. Experiments have been performed on the BrainWeb phantom and on real images. Experimental results demonstrate that among several compared denoising methods, the proposed method can provide more effective noise removal and better details preservation in terms of human vision and objective indexes such as peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). Full article
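For readers unfamiliar with the baseline, classic nonlocal means replaces each pixel with a weighted average over a search window, with weights derived from patch similarity; a minimal grayscale sketch of that baseline (not the authors' LEPNet-refined variant, and all parameter values are illustrative):

```python
import numpy as np

def nlm_denoise(img, patch=3, window=7, h=0.1):
    """Classic nonlocal means on a 2D float image.

    For each pixel, every pixel in a `window` x `window` search region
    contributes with weight exp(-d2 / h^2), where d2 is the mean squared
    difference between the two `patch` x `patch` neighborhoods.
    """
    pr, wr = patch // 2, window // 2
    pad = np.pad(img, pr + wr, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + wr, j + pr + wr
            ref = pad[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            num = den = 0.0
            for di in range(-wr, wr + 1):
                for dj in range(-wr, wr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = float(np.mean((ref - cand) ** 2))
                    w = np.exp(-d2 / (h * h))
                    num += w * pad[ni, nj]
                    den += w
            out[i, j] = num / den
    return out
```

The paper's contribution is precisely in replacing the raw gray-level distance `d2` above with distances between learned LEPNet features, so that structurally similar patches keep high weights even under noise.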

13 pages, 8391 KiB  
Article
Paraspinal Muscle Segmentation Based on Deep Neural Network
by Haixing Li, Haibo Luo and Yunpeng Liu
Sensors 2019, 19(12), 2650; https://doi.org/10.3390/s19122650 - 12 Jun 2019
Cited by 24 | Viewed by 4183
Abstract
The accurate segmentation of the paraspinal muscle in Magnetic Resonance (MR) images is a critical step in the automated analysis of lumbar diseases such as chronic low back pain, disc herniation and lumbar spinal stenosis. However, the automatic segmentation of multifidus and erector spinae has not yet been achieved due to three unusual challenges: (1) the muscle boundary is unclear; (2) the gray histogram distribution of the target overlaps with the background; (3) the intra- and inter-patient shape is variable. We propose to tackle the problem of the automatic segmentation of paravertebral muscles using a deformed U-net consisting of two main modules: the residual module and the feature pyramid attention (FPA) module. The residual module can directly return the gradient while preserving the details of the image to make the model easier to train. The FPA module fuses different scales of context information and provides useful salient features for high-level feature maps. In this paper, 120 cases were used for experiments, which were provided and labeled by the spine surgery department of Shengjing Hospital of China Medical University. The experimental results show that the model can achieve higher predictive capability. The dice coefficient of the multifidus is as high as 0.949, and the Hausdorff distance is 4.62 mm. The dice coefficient of the erector spinae is 0.913 and the Hausdorff distance is 7.89 mm. The work of this paper will contribute to the development of an automatic measurement system for paraspinal muscles, which is of great significance for the treatment of spinal diseases. Full article
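The dice coefficient reported above is the standard overlap score between a predicted and a reference segmentation mask; a minimal sketch (function name and epsilon value are illustrative):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) over boolean masks.

    `eps` guards against division by zero when both masks are empty.
    """
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A value of 0.949 for the multifidus therefore means the predicted and expert-labeled masks overlap almost completely, while the Hausdorff distance complements this by bounding the worst-case boundary disagreement.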

19 pages, 6019 KiB  
Article
Deep CT to MR Synthesis Using Paired and Unpaired Data
by Cheng-Bin Jin, Hakil Kim, Mingjie Liu, Wonmo Jung, Seongsu Joo, Eunsik Park, Young Saem Ahn, In Ho Han, Jae Il Lee and Xuenan Cui
Sensors 2019, 19(10), 2361; https://doi.org/10.3390/s19102361 - 22 May 2019
Cited by 133 | Viewed by 9522
Abstract
Magnetic resonance (MR) imaging plays a highly important role in radiotherapy treatment planning for the segmentation of tumor volumes and organs. However, the use of MR is limited, owing to its high cost and the increased use of metal implants for patients. This study is aimed towards patients who are contraindicated owing to claustrophobia and cardiac pacemakers, and many scenarios in which only computed tomography (CT) images are available, such as emergencies, situations lacking an MR scanner, and situations in which the cost of obtaining an MR scan is prohibitive. From medical practice, our approach can be adopted as a screening method by radiologists to observe abnormal anatomical lesions in certain diseases that are difficult to diagnose by CT. The proposed approach can estimate an MR image based on a CT image using paired and unpaired training data. In contrast to existing synthetic methods for medical imaging, which depend on sparse pairwise-aligned data or plentiful unpaired data, the proposed approach alleviates the rigid registration of paired training, and overcomes the context-misalignment problem of unpaired training. A generative adversarial network was trained to transform two-dimensional (2D) brain CT image slices into 2D brain MR image slices, combining the adversarial, dual cycle-consistent, and voxel-wise losses. Qualitative and quantitative comparisons against independent paired and unpaired training methods demonstrated the superiority of our approach. Full article
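The combined objective the abstract names (adversarial, dual cycle-consistent, and voxel-wise losses) can be sketched for the paired-training case; `G`, `F`, `D_mr`, and the loss weights below are illustrative stand-ins, not the paper's architecture or hyperparameters:

```python
import numpy as np

def l1(a, b):
    # mean absolute error between two arrays
    return float(np.mean(np.abs(a - b)))

def generator_loss(ct, mr, G, F, D_mr, lam_cyc=10.0, lam_vox=5.0):
    """Paired-training generator objective combining the three loss terms.

    G maps CT -> MR, F maps MR -> CT, and D_mr scores how realistic an MR
    image looks (1.0 = judged real). Least-squares adversarial form assumed.
    """
    fake_mr = G(ct)
    adv = float(np.mean((D_mr(fake_mr) - 1.0) ** 2))  # fool the discriminator
    cyc = l1(F(fake_mr), ct) + l1(G(F(mr)), mr)       # dual cycle-consistency
    vox = l1(fake_mr, mr)                             # voxel-wise term (paired data only)
    return adv + lam_cyc * cyc + lam_vox * vox
```

For unpaired data the voxel-wise term is dropped, leaving the cycle-consistency terms to constrain the mapping, which is how such approaches use both kinds of training data.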
