Advanced Sensing and Image Processing Techniques for Healthcare Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (30 December 2021) | Viewed by 55140

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor: Dr. Vahid Abolghasemi
School of Computer Science and Electronic Engineering (CSEE), University of Essex, Colchester, UK
Interests: biomedical signal and image processing; compressive sensing; dictionary learning; blind source separation

Guest Editor: Dr. Hossein Anisi
School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK
Interests: wireless sensor and actuator networks; body area networks; internet of things

Guest Editor: Dr. Saideh Ferdowsi
School of Mathematics, Statistics and Actuarial Science, University of Essex, Colchester, UK
Interests: biomedical signal and image processing; data fusion; blind source separation and machine/deep learning; EEG; fMRI; ECG

Special Issue Information

Dear Colleagues,

Developing new technologies for the health and social care sector has always attracted particular attention from researchers. Moreover, the increasing rate of population ageing, particularly in developed countries, has greatly increased the demand for intelligent systems for the elderly. At the same time, rapid advances in science and technology have raised expectations for new monitoring and assistive technologies capable of accurate and delay-sensitive acquisition, processing, transmission, and interpretation of human physiological and behavioural data. A variety of enabling techniques, such as signal and image processing, machine learning, and data compression, can therefore be used to improve such systems and achieve this goal. The ultimate outcome would be an increased quality of life and improved healthcare services for older populations.

This Special Issue aims to attract the latest research and findings on the design, development, and experimental evaluation of healthcare-related technologies. This includes, but is not limited to, the use of novel sensing, imaging, data processing, machine learning, and artificial intelligence devices and algorithms to assist and monitor the elderly, patients, and people with disabilities.

Topics of interest include but are not restricted to:

  • Biomedical signal and image processing
  • Smart monitoring and assisted living systems
  • Deep learning for healthcare data
  • Sensor fusion of biomedical data
  • Compressive sensing of biomedical data
  • Cloud/Edge/Fog computing for healthcare systems
  • Smartphone-based vital sign monitoring
  • Brain–computer interfaces for the disabled
  • Wireless body sensor networks
  • Risk and accident detection for elderly care
  • Activity recognition
  • Big data analysis for healthcare applications
  • IoT applications in healthcare
  • Sensors and actuators in healthcare systems
  • Mental disorder detection
  • Smart breathing activity monitoring
  • Non-invasive glucose monitoring

Dr. Vahid Abolghasemi
Dr. Hossein Anisi
Dr. Saideh Ferdowsi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Biomedical engineering
  • Signal processing
  • Image processing
  • Machine learning
  • Wireless sensor networks
  • Internet of things
  • Body area network
  • Deep neural networks
  • Dictionary learning
  • Compressive sensing
  • Big data
  • Brain computer interface
  • Artificial intelligence
  • Healthcare technology
  • Telemedicine

Published Papers (14 papers)


Research


18 pages, 1932 KiB  
Article
Intraoperative Hypotension Prediction Model Based on Systematic Feature Engineering and Machine Learning
by Subin Lee, Misoon Lee, Sang-Hyun Kim and Jiyoung Woo
Sensors 2022, 22(9), 3108; https://doi.org/10.3390/s22093108 - 19 Apr 2022
Cited by 6 | Viewed by 2059
Abstract
Arterial hypotension is associated with the incidence of postoperative complications, such as myocardial infarction or acute kidney injury. Little research has been conducted on the real-time prediction of hypotension, even though many studies have investigated the factors that affect hypotension events. This forecasting problem is considerably more challenging than diagnosis, which only detects patients currently at high risk, and a forecast that specifies when an event will occur is harder still than one that does not specify the event time. In this work, we address forecasting 5 min in advance. To that end, we aim to build a systematic feature engineering method that is applicable regardless of the type of vital sign, as well as a machine learning model based on these features for real-time prediction 5 min before hypotension. The proposed feature extraction model includes statistical analysis, peak analysis, change analysis, and frequency analysis. After applying feature engineering to invasive blood pressure (IBP), we build a random forest model to differentiate a hypotension event from other normal samples. Our model yields an accuracy of 0.974, a precision of 0.904, and a recall of 0.511 for predicting hypotensive events. Full article
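
The feature-engineering-plus-random-forest pipeline summarized above can be illustrated with a minimal sketch: windowed IBP segments are reduced to statistical, peak, change, and frequency features and fed to a random forest. The window length, feature set, and labelling below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import find_peaks, welch
from sklearn.ensemble import RandomForestClassifier

def ibp_window_features(window, fs=100):
    """Statistical, peak, change, and frequency features for one IBP window."""
    peaks, _ = find_peaks(window, distance=fs // 2)              # roughly one peak per beat
    freqs, psd = welch(window, fs=fs, nperseg=min(256, len(window)))
    return np.array([
        window.mean(), window.std(), window.min(), window.max(),   # statistical analysis
        len(peaks), window[peaks].mean() if len(peaks) else 0.0,    # peak analysis
        np.abs(np.diff(window)).mean(), window[-1] - window[0],     # change analysis
        freqs[np.argmax(psd)], psd.max(),                           # frequency analysis
    ])

def train_hypotension_model(windows, labels, fs=100):
    """windows: (n, window_len) IBP segments; labels: 1 if hypotension occurs 5 min later."""
    X = np.vstack([ibp_window_features(w, fs) for w in windows])
    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
    clf.fit(X, labels)
    return clf
```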

24 pages, 4075 KiB  
Article
Simulation of 3D Body Shapes for Pregnant and Postpartum Women
by Chanjira Sinthanayothin, Piyanut Xuto, Wisarut Bholsithi, Duangrat Gansawat, Nonlapas Wongwaen, Nantaporn Ratisoontorn, Parut Bunporn and Supiya Charoensiriwath
Sensors 2022, 22(5), 2036; https://doi.org/10.3390/s22052036 - 05 Mar 2022
Cited by 1 | Viewed by 19831
Abstract
Several studies have reported that a woman's pre-pregnancy body mass index (BMI) affects weight gain and complications during pregnancy as well as postpartum weight retention. It is therefore important to control the BMI before, during, and after pregnancy. Our objective is to develop a technique that can compute and visualize the 3D body shapes of women during pregnancy and postpartum across various gestational ages, BMIs, and postpartum durations. Body change data from 98 pregnant and 83 postpartum women were collected, tracked for six months, and analyzed to create 3D model shapes. The resulting application allows users to simulate their 3D body shape in real time and online, based on weight, height, and gestational age, using multiple linear regression and morphing techniques. To evaluate the results, precision tests were performed on the simulated 3D shapes of pregnant and postpartum women. Additionally, a satisfaction test of the application was conducted with 149 new mothers. The accuracy of the simulation was tested on 75 pregnant and 74 postpartum volunteers in terms of the relationships between the statistical calculations, the simulated 3D models, and actual tape measurements of the chest, waist, hip, and inseam. Our method can accurately predict the body proportions of pregnant and postpartum women. Full article
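
The two computational ingredients named above, multiple linear regression from weight, height, and gestational age to body measurements, and morphing between template shapes, can be sketched as follows; the input/output layout and the two-template linear morph are assumptions for illustration.

```python
import numpy as np

# X: (n, 3) columns = weight (kg), height (cm), gestational age (weeks)
# Y: (n, 4) measured chest, waist, hip, inseam circumferences (cm)
def fit_body_regression(X, Y):
    A = np.hstack([X, np.ones((X.shape[0], 1))])       # add intercept term
    coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return coeffs                                       # (4, 4) regression coefficients

def predict_measurements(coeffs, weight, height, ga_weeks):
    x = np.array([weight, height, ga_weeks, 1.0])
    return x @ coeffs                                   # predicted chest, waist, hip, inseam

def morph_shape(base_vertices, target_vertices, t):
    """Linearly morph between two template meshes with the same topology, t in [0, 1]."""
    return (1.0 - t) * base_vertices + t * target_vertices
```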

22 pages, 13034 KiB  
Article
Automatic Object Detection Algorithm-Based Braille Image Generation System for the Recognition of Real-Life Obstacles for Visually Impaired People
by Dayeon Lee and Jinsoo Cho
Sensors 2022, 22(4), 1601; https://doi.org/10.3390/s22041601 - 18 Feb 2022
Cited by 4 | Viewed by 3466
Abstract
The global prevalence of visual impairment due to diseases and accidents continues to increase. Visually impaired individuals rely on their auditory and tactile senses to recognize surrounding objects. However, accessible public facilities such as tactile pavements and tactile signs are installed only in limited areas globally, and the assistive devices that visually impaired individuals use, such as canes or guide dogs, have limitations. In particular, visually impaired people are not equipped to face unexpected situations by themselves while walking, and such situations pose a serious threat to their safety. To solve this problem, this study proposes a living assistance system, which integrates object recognition, object extraction, outline generation, and braille conversion algorithms, that is applicable both indoors and outdoors. Smart glasses identify objects in real photographs, and the user can perceive the shape of each object through a braille pad. Moreover, we built a database containing 100 objects on the basis of a survey conducted to select objects frequently used by visually impaired people in real life, in order to construct the system. A performance evaluation, consisting of accuracy and usefulness evaluations, was conducted to assess the system. The former compared the tactile image generated from the braille data with the expected tactile image, while the latter confirmed the object extraction accuracy and conversion rate on images of real-life situations. The proposed living assistance system was found to be efficient and useful, with an average accuracy of 85%, a detection accuracy of 90% or higher, and an average braille conversion time of 6.6 s. Ten visually impaired individuals used the assistance system and were satisfied with its performance. Participants preferred tactile graphics that contained only the outline of the objects over tactile graphics containing the full texture details. Full article
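
As a rough illustration of the outline-generation and braille-conversion steps, the sketch below extracts an object outline from a binary mask, coarsens it onto a dot grid, and maps each 2x4 cell to a Unicode braille character; the grid size and edge detector are assumptions rather than the paper's implementation.

```python
import numpy as np
import cv2

def mask_to_braille(mask, rows=20, cols=20):
    """Convert a binary object mask into a rows x cols block of braille characters."""
    edges = cv2.Canny(mask.astype(np.uint8) * 255, 50, 150)         # object outline
    grid = cv2.resize(edges, (cols * 2, rows * 4), interpolation=cv2.INTER_AREA) > 0
    # Unicode braille: dots 1-8 map to bits 0-7 at cell positions (row, col):
    # (0,0)=1 (1,0)=2 (2,0)=3 (0,1)=4 (1,1)=5 (2,1)=6 (3,0)=7 (3,1)=8
    offsets = [(0, 0, 0), (1, 0, 1), (2, 0, 2), (0, 1, 3),
               (1, 1, 4), (2, 1, 5), (3, 0, 6), (3, 1, 7)]
    lines = []
    for r in range(rows):
        chars = []
        for c in range(cols):
            code = 0
            for dr, dc, bit in offsets:
                if grid[r * 4 + dr, c * 2 + dc]:
                    code |= 1 << bit
            chars.append(chr(0x2800 + code))
        lines.append("".join(chars))
    return "\n".join(lines)
```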

17 pages, 3462 KiB  
Article
Acceleration of Magnetic Resonance Fingerprinting Reconstruction Using Denoising and Self-Attention Pyramidal Convolutional Neural Network
by Jia-Sheng Hong, Ingo Hermann, Frank Gerrit Zöllner, Lothar R. Schad, Shuu-Jiun Wang, Wei-Kai Lee, Yung-Lin Chen, Yu Chang and Yu-Te Wu
Sensors 2022, 22(3), 1260; https://doi.org/10.3390/s22031260 - 07 Feb 2022
Cited by 5 | Viewed by 1933
Abstract
Magnetic resonance fingerprinting (MRF) based on echo-planar imaging (EPI) enables whole-brain imaging to rapidly obtain T1 and T2* relaxation time maps. Reconstructing parametric maps from the MRF scanned baselines by the inner-product method is computationally expensive. We aimed to accelerate the reconstruction of parametric maps for MRF-EPI by using a deep learning model. The proposed approach uses a two-stage model that first eliminates noise and then regresses the parametric maps. Parametric maps obtained by dictionary matching were used as a reference and compared with the prediction results of the two-stage model. MRF-EPI scans were collected from 32 subjects. The signal-to-noise ratio increased significantly after the noise removal by the denoising model. For prediction with scans in the testing dataset, the mean absolute percentage errors between the standard and the final two-stage model were 3.1%, 3.2%, and 1.9% for T1, and 2.6%, 2.3%, and 2.8% for T2* in gray matter, white matter, and lesion locations, respectively. Our proposed two-stage deep learning model can effectively remove noise and accurately reconstruct MRF-EPI parametric maps, increasing the speed of reconstruction and reducing the storage space required by dictionaries. Full article
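
A two-stage design of this kind, a denoising network followed by a regressor that maps the cleaned MRF-EPI baselines to T1 and T2* maps, can be sketched in PyTorch as below; the layer sizes, residual noise formulation, and loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenoiseNet(nn.Module):
    """Stage 1: removes noise from the stack of MRF-EPI baseline images."""
    def __init__(self, n_baselines=35):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_baselines, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_baselines, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)          # residual learning: the network predicts the noise

class MapRegressor(nn.Module):
    """Stage 2: regresses T1 and T2* maps from the denoised baselines."""
    def __init__(self, n_baselines=35):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_baselines, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 1),        # two output channels: T1 and T2*
        )

    def forward(self, x):
        return self.net(x)

# Training sketch: dictionary-matched maps serve as the reference target.
# loss = nn.functional.l1_loss(MapRegressor()(DenoiseNet()(baselines)), reference_maps)
```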

12 pages, 1760 KiB  
Article
Learning a Metric for Multimodal Medical Image Registration without Supervision Based on Cycle Constraints
by Hanna Siebert, Lasse Hansen and Mattias P. Heinrich
Sensors 2022, 22(3), 1107; https://doi.org/10.3390/s22031107 - 01 Feb 2022
Cited by 5 | Viewed by 3015
Abstract
Deep learning based medical image registration remains very difficult and often fails to improve over its classical counterparts where comprehensive supervision is not available, in particular for large transformations—including rigid alignment. The use of unsupervised, metric-based registration networks has become popular, but so far no universally applicable similarity metric is available for multimodal medical registration, requiring a trade-off between local contrast-invariant edge features or more global statistical metrics. In this work, we aim to improve over the use of handcrafted metric-based losses. We propose to use synthetic three-way (triangular) cycles that for each pair of images comprise two multimodal transformations to be estimated and one known synthetic monomodal transform. Additionally, we present a robust method for estimating large rigid transformations that is differentiable in end-to-end learning. By minimising the cycle discrepancy and adapting the synthetic transformation to be close to the real geometric difference of the image pairs during training, we successfully tackle intra-patient abdominal CT-MRI registration and reach performance on par with state-of-the-art metric-supervision and classic methods. Cyclic constraints enable the learning of cross-modality features that excel at accurate anatomical alignment of abdominal CT and MRI scans. Full article
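
The triangular cycle constraint can be expressed directly as a loss on composed rigid transforms: the two estimated multimodal transforms composed with the known synthetic transform should return to the identity. A minimal PyTorch sketch, assuming 4x4 homogeneous matrices in a point-mapping convention, is:

```python
import torch

def cycle_discrepancy(T_a_to_b, S_synthetic, T_bsyn_to_a):
    """Mean-squared deviation of the composed three-way cycle from the identity.

    T_a_to_b     : (B, 4, 4) estimated multimodal transform A -> B
    S_synthetic  : (B, 4, 4) known synthetic monomodal transform B -> B_syn
    T_bsyn_to_a  : (B, 4, 4) estimated multimodal transform B_syn -> A
    """
    composed = T_bsyn_to_a @ S_synthetic @ T_a_to_b        # should be the identity
    identity = torch.eye(4, device=composed.device).expand_as(composed)
    return ((composed - identity) ** 2).mean()
```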

18 pages, 2669 KiB  
Article
Effect of Auditory Discrimination Therapy on Attentional Processes of Tinnitus Patients
by Ingrid G. Rodríguez-León, Luz María Alonso-Valerdi, Ricardo A. Salido-Ruiz, Israel Román-Godínez, David I. Ibarra-Zarate and Sulema Torres-Ramos
Sensors 2022, 22(3), 937; https://doi.org/10.3390/s22030937 - 26 Jan 2022
Cited by 2 | Viewed by 3476
Abstract
Tinnitus is an auditory condition that causes humans to hear a sound anytime, anywhere. Chronic and refractory tinnitus is caused by an over synchronization of neurons. Sound has been applied as an alternative treatment to resynchronize neuronal activity. To date, various acoustic therapies have been proposed to treat tinnitus. However, the effect is not yet well understood. Therefore, the objective of this study is to establish an objective methodology using electroencephalography (EEG) signals to measure changes in attentional processes in patients with tinnitus treated with auditory discrimination therapy (ADT). To this aim, first, event-related (de-) synchronization (ERD/ERS) responses were mapped to extract the levels of synchronization related to the auditory recognition event. Second, the deep representations of the scalograms were extracted using a previously trained Convolutional Neural Network (CNN) architecture (MobileNet v2). Third, the deep spectrum features corresponding to the study datasets were analyzed to investigate performance in terms of attention and memory changes. The results proved strong evidence of the feasibility of ADT to treat tinnitus, which is possibly due to attentional redirection. Full article
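
The ERD/ERS mapping step amounts to the relative change of band power with respect to a pre-stimulus baseline; a minimal numpy/scipy sketch (frequency band, baseline window, and epoch layout are assumptions) is:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def erd_ers(epochs, fs, band=(8, 12), baseline=(0.0, 0.5)):
    """ERD/ERS (%) over time for EEG epochs of shape (n_trials, n_samples).

    Negative values indicate desynchronization (ERD); positive values indicate ERS.
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    filtered = filtfilt(b, a, epochs, axis=-1)
    power = (filtered ** 2).mean(axis=0)                   # average power over trials
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    ref = power[i0:i1].mean()                              # reference (baseline) power
    return 100.0 * (power - ref) / ref
```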

15 pages, 3373 KiB  
Article
Differences in Physiological Signals Due to Age and Exercise Habits of Subjects during Cycling Exercise
by Szu-Yu Lin, Chi-Wen Jao, Po-Shan Wang, Michelle Liou, Jun-Liang Wu, Hsiao Chun, Ching-Ting Tseng and Yu-Te Wu
Sensors 2021, 21(21), 7220; https://doi.org/10.3390/s21217220 - 29 Oct 2021
Cited by 1 | Viewed by 1454
Abstract
Numerous studies have indicated the physical benefits of regular exercise, but the neurophysiological mechanisms of regular exercise in the elderly have been less investigated. We aimed to compare changes in brain activity during exercise between elderly people and young adults with and without regular exercise habits. A total of 36 healthy young adults (M/F: 18/18) and 35 healthy elderly adults (M/F: 20/15) participated in this study. According to exercise habits, each age group was divided into regular and occasional exerciser groups. ECG, EEG, and EMG signals were recorded using a V-AMP amplifier with a 1-kHz sampling rate. The participants were instructed to perform three 5-min bicycle rides with different exercise loads. In a Pearson correlation analysis, the EEG spectral power of the elders who exercised regularly showed the strongest positive correlation with exercise intensity. The results demonstrate that exercise induced significant cortical activation in the elderly participants who exercised regularly, with most p-values less than 0.001. No significant correlation was observed between spectral power and exercise intensity in the elders who exercised occasionally. The young participants who exercised regularly had greater cardiac and neurobiological efficiency. Our results may provide a new exercise therapy reference for adult groups with different exercise habits, especially the elderly. Full article
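
The reported correlation analysis, EEG band power against exercise load, reduces to a few lines; the frequency band and single-channel handling below are assumptions:

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

def band_power(eeg, fs, band=(8, 30)):
    """Mean spectral power of a single-channel EEG segment in the given band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def correlate_with_intensity(segments, loads, fs=1000):
    """segments: list of EEG arrays, one per ride; loads: matching exercise intensities."""
    powers = [band_power(seg, fs) for seg in segments]
    r, p = pearsonr(powers, loads)
    return r, p
```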

14 pages, 1892 KiB  
Article
Safe Hb Concentration Measurement during Bladder Irrigation Using Artificial Intelligence
by Gerd Reis, Xiaoying Tan, Lea Kraft, Mehmet Yilmaz, Dominik Stephan Schoeb and Arkadiusz Miernik
Sensors 2021, 21(17), 5723; https://doi.org/10.3390/s21175723 - 25 Aug 2021
Cited by 2 | Viewed by 2438
Abstract
We have developed a sensor for monitoring the hemoglobin (Hb) concentration in the effluent of a continuous bladder irrigation. The Hb concentration measurement is based on light absorption within a fixed measuring distance. The light frequency used is selected so that both arterial and venous Hb are equally detected. The sensor allows the measurement of the Hb concentration up to a maximum value of 3.2 g/dL (equivalent to ≈20% blood concentration). Since bubble formation in the outflow tract cannot be avoided with current irrigation systems, a neural network is implemented that can robustly detect air bubbles within the measurement section. The network considers both optical and temporal features and is able to effectively safeguard the measurement process. The sensor supports the use of different irrigants (salt and electrolyte-free solutions) as well as measurement through glass shielding. The sensor can be used in a non-invasive way with current irrigation systems. The sensor is positively tested in a clinical study. Full article
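
Absorption over a fixed measuring distance is commonly modelled with the Beer-Lambert law; the paper does not spell out its calibration, so the extinction coefficient and path length in the sketch below are purely illustrative assumptions:

```python
import numpy as np

def hb_concentration(intensity, intensity_ref, epsilon=0.8, path_cm=0.5):
    """Estimate Hb concentration (g/dL) from transmitted light intensity.

    Beer-Lambert: A = log10(I_ref / I) = epsilon * c * d, so c = A / (epsilon * d).
    intensity      : detector reading with effluent in the measuring cell
    intensity_ref  : reference reading with clear irrigation fluid
    epsilon        : effective extinction coefficient (dL/(g*cm)) -- assumed value
    path_cm        : fixed measuring distance in cm -- assumed value
    """
    absorbance = np.log10(intensity_ref / np.asarray(intensity, dtype=float))
    return absorbance / (epsilon * path_cm)

# Example: hb_concentration(420.0, 1000.0) -> ~0.94 g/dL under the assumed calibration
```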

16 pages, 5223 KiB  
Communication
A Cell’s Viscoelasticity Measurement Method Based on the Spheroidization Process of Non-Spherical Shaped Cell
by Yaowei Liu, Yujie Zhang, Maosheng Cui, Xiangfei Zhao, Mingzhu Sun and Xin Zhao
Sensors 2021, 21(16), 5561; https://doi.org/10.3390/s21165561 - 18 Aug 2021
Cited by 2 | Viewed by 1800
Abstract
The mechanical properties of biological cells, especially their elastic modulus and viscosity, have been shown to reflect cell viability and cell state. Existing measurement techniques require additional equipment or specific operating conditions. This paper presents a method for measuring a cell's viscoelasticity based on the spheroidization process of a non-spherically shaped cell, and applies it to porcine fetal fibroblasts. Firstly, we describe the process of recording the spheroidization of a porcine fetal fibroblast. Secondly, we build a viscoelastic model for simulating the cell's spheroidization process. We then simulate the spheroidization of the porcine fetal fibroblast and, by identifying the parameters of the viscoelastic model, obtain its elasticity (500 Pa) and viscosity (10 Pa·s). The magnitudes of the elasticity and viscosity are in agreement with those measured by traditional methods. To verify the accuracy of the proposed method, we imitated the spheroidization process with silicone oil, a viscous and uniform liquid of known viscosity, and simulated this experiment as well; the simulation results also fit the experimental results well. Full article

19 pages, 8869 KiB  
Article
DMAS Beamforming with Complementary Subset Transmit for Ultrasound Coherence-Based Power Doppler Detection in Multi-Angle Plane-Wave Imaging
by Che-Chou Shen and Yen-Chen Chu
Sensors 2021, 21(14), 4856; https://doi.org/10.3390/s21144856 - 16 Jul 2021
Cited by 6 | Viewed by 2225
Abstract
Conventional ultrasonic coherent plane-wave (PW) compounding corresponds to Delay-and-Sum (DAS) beamforming of low-resolution images from distinct PW transmit angles. Nonetheless, the trade-off between the level of clutter artifacts and the number of PW transmit angles may compromise the image quality in ultrafast acquisition. Delay-Multiply-and-Sum (DMAS) beamforming in the dimension of the PW transmit angle is capable of suppressing clutter interference and is readily compatible with the conventional method. In DMAS, a tunable p value is used to modulate the signal coherence estimated from the low-resolution images to produce the final high-resolution output, and it does not require huge memory allocation to record all the received channel data in multi-angle PW imaging. In this study, DMAS beamforming is used to construct a novel coherence-based power Doppler detection together with the complementary subset transmit (CST) technique to further reduce the noise level. For p = 2.0 as an example, simulation results indicate that DMAS beamforming alone can improve the Doppler SNR by 8.2 dB compared to its DAS counterpart. Another 6-dB increase in Doppler SNR can be obtained when the CST technique is combined with DMAS beamforming with sufficient ensemble averaging. The CST technique can also be performed with DAS beamforming, though the improvement in Doppler SNR and CNR is relatively minor. Experimental results also agree with the simulations. Nonetheless, since DMAS beamforming involves a multiplicative operation, clutter filtering in the ensemble direction has to be performed on the low-resolution images before DMAS to remove the stationary tissue without coupling from the flow signal. Full article
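
One common way to implement DMAS with a tunable p in the transmit-angle dimension is to compress each low-resolution sample's magnitude by 1/p (keeping its phase), sum across angles, and re-expand the magnitude of the sum by p. The sketch below follows that generic formulation; the exact processing in the paper may differ.

```python
import numpy as np

def dmas_p_compound(low_res_images, p=2.0):
    """Coherence-weighted compounding of complex baseband low-resolution images.

    low_res_images : (n_angles, nz, nx) complex array, one image per PW transmit angle.
    Incoherent (clutter-like) contributions across transmit angles are suppressed
    relative to plain DAS compounding; p = 1 reduces to ordinary DAS.
    """
    s = np.asarray(low_res_images)
    compressed = np.abs(s) ** (1.0 / p) * np.exp(1j * np.angle(s))   # magnitude^(1/p), keep phase
    summed = compressed.sum(axis=0)                                  # sum over transmit angles
    return np.abs(summed) ** p * np.exp(1j * np.angle(summed))       # restore dimensionality
```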

20 pages, 3357 KiB  
Article
Learning U-Net Based Multi-Scale Features in Encoding-Decoding for MR Image Brain Tissue Segmentation
by Jiao-Song Long, Guang-Zhi Ma, En-Min Song and Ren-Chao Jin
Sensors 2021, 21(9), 3232; https://doi.org/10.3390/s21093232 - 07 May 2021
Cited by 7 | Viewed by 2319
Abstract
Accurate brain tissue segmentation of MRI is vital for aiding diagnosis, treatment planning, and the monitoring of neurologic conditions. As an excellent convolutional neural network (CNN), U-Net is widely used in MR image segmentation because it usually generates high-precision features. However, the performance of U-Net is considerably restricted by the variable shapes of the segmented targets in MRI and the information loss of the down-sampling and up-sampling operations. Therefore, we propose a novel network that introduces multi-scale feature information extractors, operating over the spatial and channel dimensions, into the U-Net encoding-decoding framework. These extractors help to capture rich multi-scale features while highlighting the details of higher-level features in the encoding part, and to recover the corresponding localization at a higher-resolution layer in the decoding part. Concretely, we propose two information extractors: multi-branch pooling (MP) in the encoding part and multi-branch dense prediction (MDP) in the decoding part. Additionally, we designed a new multi-branch output structure with MDP in the decoding part to form more accurate edge-preserving prediction maps by integrating the dense adjacent prediction features at different scales. Finally, the proposed method was tested on the MRbrainS13, IBSR18, and ISeg2017 datasets. The proposed network achieves higher accuracy in segmenting MRI brain tissues and outperforms the leading method of 2018 in the segmentation of GM and CSF. It can therefore be a useful tool for diagnostic applications such as brain MRI segmentation. Full article
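
A multi-branch pooling block of the kind described, parallel pooling at several scales whose outputs are upsampled and fused, might look like the PyTorch sketch below; the kernel sizes and fusion by concatenation are assumptions rather than the paper's exact MP module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiBranchPooling(nn.Module):
    """Parallel average pooling at several scales, upsampled and fused by a 1x1 conv."""
    def __init__(self, channels, pool_sizes=(2, 4, 8)):
        super().__init__()
        self.pool_sizes = pool_sizes
        self.fuse = nn.Conv2d(channels * (len(pool_sizes) + 1), channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        branches = [x]                                   # identity branch keeps fine detail
        for k in self.pool_sizes:
            pooled = F.avg_pool2d(x, kernel_size=k)      # coarser context at scale k
            branches.append(F.interpolate(pooled, size=(h, w), mode="bilinear",
                                          align_corners=False))
        return self.fuse(torch.cat(branches, dim=1))

# y = MultiBranchPooling(64)(torch.randn(1, 64, 96, 96))   # -> (1, 64, 96, 96)
```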

24 pages, 6084 KiB  
Article
ResBCDU-Net: A Deep Learning Framework for Lung CT Image Segmentation
by Yeganeh Jalali, Mansoor Fateh, Mohsen Rezvani, Vahid Abolghasemi and Mohammad Hossein Anisi
Sensors 2021, 21(1), 268; https://doi.org/10.3390/s21010268 - 03 Jan 2021
Cited by 56 | Viewed by 4954
Abstract
Lung CT image segmentation is a key process in many applications, such as lung cancer detection. It is considered a challenging problem due to the similar image densities of the pulmonary structures and the different types of scanners and scanning protocols. Most current semi-automatic segmentation methods rely on human factors and may therefore suffer from a lack of accuracy; another shortcoming is their high false-positive rate. In recent years, several approaches based on deep learning frameworks have been effectively applied to medical image segmentation, and among existing deep neural networks, U-Net has enjoyed great success in this field. In this paper, we propose a deep neural network architecture that performs automatic lung CT image segmentation. In the proposed method, several extensive preprocessing techniques are applied to the raw CT images. Ground truths corresponding to these images are then extracted via morphological operations and manual refinement. Finally, all the prepared images with their corresponding ground truth are fed into a modified U-Net in which the encoder is replaced with a pre-trained ResNet-34 network (referred to as Res BCDU-Net). In this architecture, we employ BConvLSTM (bidirectional convolutional long short-term memory) as an advanced integrator module instead of simple traditional concatenation, merging the extracted feature maps of the corresponding contracting path into the previous expansion of the up-convolutional layer, and a densely connected convolutional layer is utilized for the contracting path. The results of our extensive experiments on lung CT images (LIDC-IDRI database) confirm the effectiveness of the proposed method, with a Dice coefficient index of 97.31%. Full article
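
For reference, the Dice coefficient used to report the 97.31% figure has the standard definition below, shown here for a binary lung mask:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice = 2 * |P ∩ T| / (|P| + |T|) for binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```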

14 pages, 3187 KiB  
Article
Imaging Tremor Quantification for Neurological Disease Diagnosis
by Yuichi Mitsui, Thi Thi Zin, Nobuyuki Ishii and Hitoshi Mochizuki
Sensors 2020, 20(22), 6684; https://doi.org/10.3390/s20226684 - 22 Nov 2020
Cited by 4 | Viewed by 2246
Abstract
In this paper, we introduce a simple method based on image analysis and deep learning that can be used in the objective assessment and measurement of tremors. A tremor is a neurological disorder that causes involuntary and rhythmic movements in a human body part or parts. There are many types of tremors, depending on their amplitude and frequency type. Appropriate treatment is only possible when there is an accurate diagnosis. Thus, a need exists for a technique to analyze tremors. In this paper, we propose a hybrid approach using imaging technology and machine learning techniques for quantification and extraction of the parameters associated with tremors. These extracted parameters are used to classify the tremor for subsequent identification of the disease. In particular, we focus on essential tremor and cerebellar disorders by monitoring the finger–nose–finger test. First of all, test results obtained from both patients and healthy individuals are analyzed using image processing techniques. Next, data were grouped in order to determine classes of typical responses. A machine learning method using a support vector machine is used to perform an unsupervised clustering. Experimental results showed the highest internal evaluation for distribution into three clusters, which could be used to differentiate the responses of healthy subjects, patients with essential tremor and patients with cerebellar disorders. Full article
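
A generic illustration of the kind of parameters extracted from a tracked finger-nose-finger trajectory, amplitude and dominant frequency, is sketched below; the 1-15 Hz band and the 1-D trajectory input are assumptions, not the paper's exact parameter set.

```python
import numpy as np

def tremor_parameters(y, fs):
    """Estimate tremor amplitude and dominant frequency from a 1-D fingertip
    trajectory y (e.g., pixel position over time) sampled at fs Hz."""
    y = y - np.mean(y)                                  # remove the slow pointing motion offset
    amplitude = np.ptp(y) / 2.0                         # half the peak-to-peak excursion
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
    band = (freqs >= 1.0) & (freqs <= 15.0)             # assumed tremor band
    dominant = freqs[band][np.argmax(spectrum[band])]
    return amplitude, dominant
```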

Other


12 pages, 3445 KiB  
Letter
Semantic Segmentation of Intralobular and Extralobular Tissue from Liver Scaffold H&E Images
by Miroslav Jirik, Ivan Gruber, Vladimira Moulisova, Claudia Schindler, Lenka Cervenkova, Richard Palek, Jachym Rosendorf, Janine Arlt, Lukas Bolek, Jiri Dejmek, Uta Dahmen, Milos Zelezny and Vaclav Liska
Sensors 2020, 20(24), 7063; https://doi.org/10.3390/s20247063 - 10 Dec 2020
Cited by 7 | Viewed by 2149
Abstract
Decellularized tissue is an important source for biological tissue engineering. Evaluation of the quality of decellularized tissue is performed using scanned images of hematoxylin-eosin-stained (H&E) tissue sections and is usually dependent on the observer. The first step in creating a tool for assessing the quality of the liver scaffold without observer bias is the automatic segmentation of the whole-slide image into three classes: the background, the intralobular area, and the extralobular area. Such segmentation enables texture analysis to be performed in the intralobular area of the liver scaffold, which is a crucial part of the recellularization procedure. Existing semi-automatic methods for general segmentation (i.e., thresholding, watershed, etc.) do not meet the quality requirements, and no methods are available to solve this task automatically. Given the low amount of training data, we propose a two-stage method. The first stage is based on classification of simple hand-crafted descriptors of the pixels and their neighborhoods and is trained on partially annotated data. Its outputs are used to train the second-stage approach, which is based on a convolutional neural network (CNN). Our architecture, inspired by U-Net, reaches very promising results despite a very low amount of training data. We provide qualitative and quantitative data for both stages. With the best training setup, we reach 90.70% recognition accuracy. Full article
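
The first stage classifies simple hand-crafted descriptors of each pixel and its neighborhood; a minimal version (intensity, local mean, local standard deviation, and gradient magnitude fed to a random forest) could look like this, with the exact descriptor set being an assumption:

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def pixel_descriptors(gray, size=15):
    """Per-pixel features: intensity, local mean, local std, gradient magnitude."""
    gray = gray.astype(float)
    local_mean = ndimage.uniform_filter(gray, size=size)
    local_sq_mean = ndimage.uniform_filter(gray ** 2, size=size)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    grad_mag = ndimage.gaussian_gradient_magnitude(gray, sigma=2)
    return np.stack([gray, local_mean, local_std, grad_mag], axis=-1)

def train_stage_one(gray, labels):
    """labels: 0 background, 1 intralobular, 2 extralobular, -1 for unannotated pixels."""
    feats = pixel_descriptors(gray)
    annotated = labels >= 0
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    clf.fit(feats[annotated], labels[annotated])
    return clf
```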
