Deep Learning for Pathology Detection and Diagnosis in Medical Imaging

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 20 May 2024 | Viewed by 22404

Special Issue Editors


Prof. Dr. Sergiu Nedevschi
Guest Editor
Department of Computer Science, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400027 Cluj-Napoca, Romania
Interests: image processing; pattern recognition; machine/deep learning; stereo vision; perception

Dr. Delia-Alexandrina Mitrea
Guest Editor
Department of Computer Science, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400027 Cluj-Napoca, Romania
Interests: computer vision; pattern classification; machine/deep learning; databases & data analytics; software engineering

Special Issue Information

Dear Colleagues,

Severe pathologies, such as diffuse liver diseases or tumors, can lead to significant degradation of human health and sometimes to lethal outcomes. The most reliable methods for diagnosing these conditions, such as classical biopsy or surgery, are invasive and carry risks. Advanced computerized methods are urgently needed to reduce invasiveness and to extract as much information as possible from medical images by unveiling their subtle aspects, leading to a virtual biopsy. Computer vision and machine learning can be successfully employed to achieve this goal. Advanced image analysis combined with conventional machine learning, as well as deep learning techniques, can lead to a highly accurate automatic diagnosis process. The resulting features, together with classification, segmentation, fusion of multiple image modalities, and 3D reconstruction techniques, can be used to build appropriate 2D and 3D models of the considered conditions, which are helpful in computer-aided diagnosis and surgery.

This Special Issue welcomes scientific articles with original contributions in these directions, as well as surveys of the field. Papers should cover topics that include, but are not limited to, the following:

• Deep learning and conventional methods for severe pathology detection and diagnosis in medical imaging (e.g., US, CEUS, elastography, CT, MRI, radiographs, endoscopy, microscopic images)
• Pathologies of interest (malignant/benign tumors, diffuse liver diseases, skin pathology, brain diseases, heart diseases, etc.)
• Early detection of severe pathologies from medical images through deep learning methods
• Deep learning methods for recognition within medical images
• Segmentation through deep learning techniques within medical images
• Comparisons and combinations of conventional and deep learning methods for recognition and segmentation within medical images
• Fusion of multiple medical image modalities, for improved pathology detection and diagnosis
• 2D and 3D models of severe pathologies, for improved pathology detection and diagnosis
• 3D reconstruction based on medical images, for improved pathology detection and diagnosis

Prof. Dr. Sergiu Nedevschi
Dr. Delia-Alexandrina Mitrea
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • virtual biopsy
  • severe pathology
  • computer vision
  • deep learning
  • radiomics
  • automatic and computer-aided diagnosis
  • medical images

Published Papers (10 papers)


Research


12 pages, 11530 KiB  
Article
Deep Learning-Based Multi-Class Segmentation of the Paranasal Sinuses of Sinusitis Patients Based on Computed Tomographic Images
by Jongwook Whangbo, Juhui Lee, Young Jae Kim, Seon Tae Kim and Kwang Gi Kim
Sensors 2024, 24(6), 1933; https://doi.org/10.3390/s24061933 - 18 Mar 2024
Viewed by 509
Abstract
Accurate paranasal sinus segmentation is essential for reducing surgical complications through surgical guidance systems. This study introduces a multiclass Convolutional Neural Network (CNN) segmentation model by comparing four 3D U-Net variations—normal, residual, dense, and residual-dense. Data normalization and training were conducted on a 40-patient test set (20 normal, 20 abnormal) using 5-fold cross-validation. The normal 3D U-Net demonstrated superior performance with an F1 score of 84.29% on the normal test set and 79.32% on the abnormal set, exhibiting higher true positive rates for the sphenoid and maxillary sinus in both sets. Despite effective segmentation in clear sinuses, limitations were observed in mucosal inflammation. Nevertheless, the algorithm’s enhanced segmentation of abnormal sinuses suggests potential clinical applications, with ongoing refinements expected for broader utility.
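
As an illustration of how such multi-class segmentation results are typically scored, the sketch below computes a per-class Dice/F1 overlap between a predicted and a reference labelled volume. The class indices, the volume size, and the choice to skip the background class are assumptions made for the example, not details taken from the paper.

    # Minimal sketch (not the authors' code): per-class Dice/F1 for a multi-class
    # 3D segmentation volume, of the kind used to compare the U-Net variants above.
    import numpy as np

    def dice_per_class(pred, target, num_classes):
        """pred, target: integer-labelled 3D volumes of identical shape."""
        scores = {}
        for c in range(1, num_classes):          # skip background (class 0)
            p = (pred == c)
            t = (target == c)
            denom = p.sum() + t.sum()
            scores[c] = 2.0 * np.logical_and(p, t).sum() / denom if denom else np.nan
        return scores

    # Example: two random 64^3 volumes with three sinus classes plus background
    rng = np.random.default_rng(0)
    pred = rng.integers(0, 4, size=(64, 64, 64))
    gt = rng.integers(0, 4, size=(64, 64, 64))
    print(dice_per_class(pred, gt, num_classes=4))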

20 pages, 9812 KiB  
Article
Metric Learning in Histopathological Image Classification: Opening the Black Box
by Domenico Amato, Salvatore Calderaro, Giosué Lo Bosco, Riccardo Rizzo and Filippo Vella
Sensors 2023, 23(13), 6003; https://doi.org/10.3390/s23136003 - 28 Jun 2023
Cited by 2 | Viewed by 1299
Abstract
The application of machine learning techniques to histopathology images enables advances in the field, providing valuable tools that can speed up and facilitate the diagnosis process. The classification of these images is a relevant aid for physicians who have to process a large [...] Read more.
The application of machine learning techniques to histopathology images enables advances in the field, providing valuable tools that can speed up and facilitate the diagnosis process. The classification of these images is a relevant aid for physicians who have to process a large number of images in long and repetitive tasks. This work proposes the adoption of metric learning that, beyond the task of classifying images, can provide additional information able to support the decision of the classification system. In particular, triplet networks have been employed to create a representation in the embedding space that gathers together images of the same class while tending to separate images with different labels. The obtained representation shows an evident separation of the classes with the possibility of evaluating the similarity and the dissimilarity among input images according to distance criteria. The model has been tested on the BreakHis dataset, a reference and largely used dataset that collects breast cancer images with eight pathology labels and four magnification levels. Our proposed classification model achieves relevant performance on the patient level, with the advantage of providing interpretable information for the obtained results, which represent a specific feature missed by the all the recent methodologies proposed for the same purpose. Full article
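
The core metric-learning ingredient described above, a triplet objective that pulls same-class embeddings together and pushes different-class embeddings apart, can be sketched as follows; the toy encoder, image size, and batch content are assumptions for illustration, not the authors' architecture.

    # Illustrative sketch only: a triplet-margin objective of the kind used by the
    # metric-learning approach above. Backbone and data pipeline are assumptions.
    import torch
    import torch.nn as nn

    embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))  # toy encoder
    loss_fn = nn.TripletMarginLoss(margin=1.0)

    anchor   = torch.randn(8, 3, 64, 64)   # images of one class
    positive = torch.randn(8, 3, 64, 64)   # same class as the anchor
    negative = torch.randn(8, 3, 64, 64)   # a different class

    loss = loss_fn(embed(anchor), embed(positive), embed(negative))
    loss.backward()
    print(float(loss))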

17 pages, 9173 KiB  
Article
Hexagonal-Grid-Layout Image Segmentation Using Shock Filters: Computational Complexity Case Study for Microarray Image Analysis Related to Machine Learning Approaches
by Aurel Baloi, Carmen Costea, Robert Gutt, Ovidiu Balacescu, Flaviu Turcu and Bogdan Belean
Sensors 2023, 23(5), 2582; https://doi.org/10.3390/s23052582 - 26 Feb 2023
Viewed by 2003
Abstract
Hexagonal grid layouts are advantageous in microarray technology; however, hexagonal grids appear in many fields, especially given the rise of new nanostructures and metamaterials, leading to the need for image analysis on such structures. This work proposes a shock-filter-based approach driven by mathematical morphology for the segmentation of image objects disposed in a hexagonal grid. The original image is decomposed into a pair of rectangular grids, such that their superposition generates the initial image. Within each rectangular grid, the shock-filters are once again used to confine the foreground information for each image object into an area of interest. The proposed methodology was successfully applied for microarray spot segmentation, whereas its character of generality is underlined by the segmentation results obtained for two other types of hexagonal grid layouts. Considering the segmentation accuracy through specific quality measures for microarray images, such as the mean absolute error and the coefficient of variation, high correlations of our computed spot intensity features with the annotated reference values were found, indicating the reliability of the proposed approach. Moreover, taking into account that the shock-filter PDE formalism is targeting the one-dimensional luminance profile function, the computational complexity to determine the grid is minimized. The order of growth for the computational complexity of our approach is at least one order of magnitude lower when compared with state-of-the-art microarray segmentation approaches, ranging from classical to machine learning ones.
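
For readers unfamiliar with shock filtering, the following one-dimensional sketch applies the classical Osher–Rudin update, u_t = -sign(u_xx)·|u_x|, to a luminance profile, sharpening it towards piecewise-constant segments. The discretisation (central differences, fixed time step) is a simplification for illustration and not the paper's implementation.

    # A minimal 1D shock-filter sketch (assumed discretisation, not the authors'
    # code): the signal is sharpened towards piecewise-constant segments, which is
    # how a spot's luminance profile can be confined before segmentation.
    import numpy as np

    def shock_filter_1d(profile, iterations=50, dt=0.25):
        u = profile.astype(float).copy()
        for _ in range(iterations):
            ux = np.gradient(u)                    # first derivative
            uxx = np.gradient(ux)                  # second derivative
            u = u - dt * np.sign(uxx) * np.abs(ux)
        return u

    profile = np.exp(-((np.arange(100) - 50) ** 2) / 200.0)  # smooth blob
    print(np.round(shock_filter_1d(profile)[45:55], 2))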

29 pages, 1063 KiB  
Article
Hepatocellular Carcinoma Recognition from Ultrasound Images Using Combinations of Conventional and Deep Learning Techniques
by Delia-Alexandrina Mitrea, Raluca Brehar, Sergiu Nedevschi, Monica Lupsor-Platon, Mihai Socaciu and Radu Badea
Sensors 2023, 23(5), 2520; https://doi.org/10.3390/s23052520 - 24 Feb 2023
Cited by 8 | Viewed by 1975
Abstract
Hepatocellular Carcinoma (HCC) is the most frequent malignant liver tumor and the third cause of cancer-related deaths worldwide. For many years, the gold standard for HCC diagnosis has been the needle biopsy, which is invasive and carries risks. Computerized methods are expected to achieve a noninvasive, accurate HCC detection process based on medical images. We developed image analysis and recognition methods to perform automatic and computer-aided diagnosis of HCC. Conventional approaches that combined advanced texture analysis, mainly based on Generalized Co-occurrence Matrices (GCM), with traditional classifiers, as well as deep learning approaches based on Convolutional Neural Networks (CNN) and Stacked Denoising Autoencoders (SAE), were involved in our research. The best accuracy of 91% was achieved for B-mode ultrasound images through CNN by our research group. In this work, we combined the classical approaches with CNN techniques on B-mode ultrasound images. The combination was performed at the classifier level. The CNN features obtained at the output of various convolution layers were combined with powerful textural features, and then supervised classifiers were employed. The experiments were conducted on two datasets, acquired with different ultrasound machines. The best performance, above 98%, surpassed our previous results, as well as representative state-of-the-art results.
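
A minimal sketch of classifier-level fusion of the kind described above: per-image CNN activations and texture descriptors are concatenated and passed to a conventional supervised classifier. The feature dimensions, the random stand-in data, and the SVM kernel are assumptions made for illustration only.

    # Sketch of the classifier-level fusion idea (assumed feature extractors): CNN
    # activations and GCM/GLCM-style texture descriptors are concatenated per image
    # and fed to a conventional supervised classifier.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n = 40
    cnn_features = rng.normal(size=(n, 256))      # stand-in for conv-layer activations
    texture_features = rng.normal(size=(n, 30))   # stand-in for textural descriptors
    labels = rng.integers(0, 2, size=n)           # toy labels: HCC vs. parenchyma

    fused = np.concatenate([cnn_features, texture_features], axis=1)
    clf = SVC(kernel="rbf").fit(fused, labels)
    print("training accuracy:", clf.score(fused, labels))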

17 pages, 8720 KiB  
Article
Using Sparse Patch Annotation for Tumor Segmentation in Histopathological Images
by Yiqing Liu, Qiming He, Hufei Duan, Huijuan Shi, Anjia Han and Yonghong He
Sensors 2022, 22(16), 6053; https://doi.org/10.3390/s22166053 - 13 Aug 2022
Cited by 3 | Viewed by 2070
Abstract
Tumor segmentation is a fundamental task in histopathological image analysis. Creating accurate pixel-wise annotations for such segmentation tasks in a fully-supervised training framework requires significant effort. To reduce the burden of manual annotation, we propose a novel weakly supervised segmentation framework based on sparse patch annotation, i.e., only small portions of patches in an image are labeled as ‘tumor’ or ‘normal’. The framework consists of a patch-wise segmentation model called PSeger, and an innovative semi-supervised algorithm. PSeger has two branches for patch classification and image classification, respectively. This two-branch structure enables the model to learn more general features and thus reduce the risk of overfitting when learning sparsely annotated data. We incorporate the idea of consistency learning and self-training into the semi-supervised training strategy to take advantage of the unlabeled images. Trained on the BCSS dataset with only 25% of the images labeled (five patches for each labeled image), our proposed method achieved competitive performance compared to the fully supervised pixel-wise segmentation models. Experiments demonstrate that the proposed solution has the potential to reduce the burden of labeling histopathological images.
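
The consistency-learning idea mentioned above can be summarised by a loss of the following shape: a supervised term on the few labelled patches plus a term that penalises disagreement between predictions on two augmented views of unlabelled patches. The weighting, the MSE consistency term, and the toy tensors are assumptions rather than the exact PSeger training objective.

    # Rough sketch of a semi-supervised objective with a consistency term.
    # Names, weighting, and tensor shapes are assumptions for illustration.
    import torch
    import torch.nn.functional as F

    def semi_supervised_loss(logits_labeled, targets, logits_view1, logits_view2, lam=1.0):
        sup = F.cross_entropy(logits_labeled, targets)                  # labelled patches
        cons = F.mse_loss(F.softmax(logits_view1, dim=1),
                          F.softmax(logits_view2, dim=1))               # unlabelled views
        return sup + lam * cons

    logits_l = torch.randn(4, 2, requires_grad=True)    # labelled patch logits
    y = torch.tensor([0, 1, 1, 0])                      # 'normal' / 'tumor'
    logits_a = torch.randn(16, 2, requires_grad=True)   # unlabelled, augmentation A
    logits_b = logits_a + 0.1 * torch.randn(16, 2)      # unlabelled, augmentation B
    print(float(semi_supervised_loss(logits_l, y, logits_a, logits_b)))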

20 pages, 23984 KiB  
Article
Explainable Transformer-Based Deep Learning Model for the Detection of Malaria Parasites from Blood Cell Images
by Md. Robiul Islam, Md. Nahiduzzaman, Md. Omaer Faruq Goni, Abu Sayeed, Md. Shamim Anower, Mominul Ahsan and Julfikar Haider
Sensors 2022, 22(12), 4358; https://doi.org/10.3390/s22124358 - 08 Jun 2022
Cited by 18 | Viewed by 3225
Abstract
Malaria is a life-threatening disease caused by female Anopheles mosquito bites. Various Plasmodium parasites spread in the victim’s blood cells, putting their life in a critical situation. If not treated at an early stage, malaria can even cause death. Microscopy is a familiar process for diagnosing malaria, collecting the victim’s blood samples and counting the parasites and red blood cells. However, the microscopy process is time-consuming and can produce an erroneous result in some cases. With the recent success of machine learning and deep learning in medical diagnosis, it is quite possible to minimize diagnosis costs and improve overall detection accuracy compared with the traditional microscopy method. This paper proposes a multiheaded attention-based transformer model to diagnose the malaria parasite from blood cell images. To demonstrate the effectiveness of the proposed model, the gradient-weighted class activation map (Grad-CAM) technique was implemented to identify which parts of an image the proposed model paid much more attention to compared with the remaining parts by generating a heatmap image. The proposed model achieved a testing accuracy, precision, recall, F1-score, and AUC score of 96.41%, 96.99%, 95.88%, 96.44%, and 99.11%, respectively, for the original malaria parasite dataset and 99.25%, 99.08%, 99.42%, 99.25%, and 99.99%, respectively, for the modified dataset. Various hyperparameters were also fine-tuned to obtain optimum results, which were also compared with state-of-the-art (SOTA) methods for malaria parasite detection, and the proposed method outperformed the existing methods.
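
Grad-CAM, used above to generate the heatmaps, weights a chosen feature map by the spatially averaged gradients of the class score and keeps the positive part of the weighted sum. The sketch below demonstrates the mechanism on a small generic CNN; it is not the paper's transformer model.

    # Minimal Grad-CAM sketch on a toy CNN (illustrative only): class-score
    # gradients are average-pooled over a feature map to weight its channels,
    # giving a heatmap of the regions the model relied on.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
    head = nn.Linear(8, 2)

    x = torch.randn(1, 3, 64, 64)
    fmap = net(x)                                  # feature maps to explain
    fmap.retain_grad()
    score = head(fmap.mean(dim=(2, 3)))[0, 1]      # logit of the class of interest
    score.backward()

    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)   # GAP of the gradients
    cam = F.relu((weights * fmap).sum(dim=1))            # weighted combination
    print(cam.shape)                                     # (1, 64, 64) heatmap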

31 pages, 12259 KiB  
Article
Hybrid and Deep Learning Approach for Early Diagnosis of Lower Gastrointestinal Diseases
by Suliman Mohamed Fati, Ebrahim Mohammed Senan and Ahmad Taher Azar
Sensors 2022, 22(11), 4079; https://doi.org/10.3390/s22114079 - 27 May 2022
Cited by 28 | Viewed by 2527
Abstract
Every year, nearly two million people die as a result of gastrointestinal (GI) disorders. Lower gastrointestinal tract tumors are one of the leading causes of death worldwide. Thus, early detection of the type of tumor is of great importance in the survival of patients. Additionally, removing benign tumors in their early stages has more risks than benefits. Video endoscopy technology is essential for imaging the GI tract and identifying disorders such as bleeding, ulcers, polyps, and malignant tumors. Videography generates 5000 frames, which require extensive analysis and take a long time to review in full. Artificial intelligence techniques, which have a greater ability to diagnose and to assist physicians in making accurate diagnostic decisions, address these challenges. In this study, multiple methodologies were developed; the work was divided into four proposed systems, each with more than one diagnostic method. The first proposed system utilizes artificial neural networks (ANN) and feed-forward neural networks (FFNN) based on extracting hybrid features with three algorithms: local binary pattern (LBP), gray level co-occurrence matrix (GLCM), and fuzzy color histogram (FCH). The second proposed system uses pre-trained CNN models, GoogLeNet and AlexNet, based on the extraction of deep feature maps and their classification with high accuracy. The third proposed method uses hybrid techniques consisting of two blocks: the first block of CNN models (GoogLeNet and AlexNet) to extract feature maps; the second block is the support vector machine (SVM) algorithm for classifying the deep feature maps. The fourth proposed system uses ANN and FFNN based on hybrid features combining the CNN models (GoogLeNet and AlexNet) with the LBP, GLCM, and FCH algorithms. All the proposed systems achieved superior results in diagnosing endoscopic images for the early detection of lower gastrointestinal diseases. All systems produced promising results; the FFNN classifier based on the hybrid features extracted by GoogLeNet, LBP, GLCM, and FCH achieved an accuracy of 99.3%, precision of 99.2%, sensitivity of 99%, specificity of 100%, and AUC of 99.87%.
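
To make the hand-crafted half of the hybrid features concrete, the sketch below extracts LBP and GLCM descriptors from a grayscale frame and trains a small feed-forward classifier on them. The parameter choices (P, R, GLCM properties, hidden layer size) and the random stand-in data are assumptions, and scikit-image ≥ 0.19 is assumed for the graycomatrix/graycoprops function names.

    # Sketch of one hybrid-feature pipeline (parameters are assumptions):
    # hand-crafted LBP and GLCM descriptors of a frame, concatenated and
    # classified by a small feed-forward network.
    import numpy as np
    from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
    from sklearn.neural_network import MLPClassifier

    def frame_features(gray):                       # gray: 2D uint8 image
        lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, normed=True)
        glcm_feats = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]
        return np.concatenate([lbp_hist, glcm_feats])

    rng = np.random.default_rng(2)
    X = np.stack([frame_features(rng.integers(0, 256, (64, 64), dtype=np.uint8))
                  for _ in range(20)])
    y = rng.integers(0, 4, size=20)                 # e.g. ulcer / polyp / bleeding / normal
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
    print("training accuracy:", clf.score(X, y))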

15 pages, 7459 KiB  
Article
Nuclei-Guided Network for Breast Cancer Grading in HE-Stained Pathological Images
by Rui Yan, Fei Ren, Jintao Li, Xiaosong Rao, Zhilong Lv, Chunhou Zheng and Fa Zhang
Sensors 2022, 22(11), 4061; https://doi.org/10.3390/s22114061 - 27 May 2022
Cited by 9 | Viewed by 2353
Abstract
Breast cancer grading methods based on hematoxylin-eosin (HE) stained pathological images can be summarized into two categories. The first category is to directly extract the pathological image features for breast cancer grading. However, unlike the coarse-grained problem of breast cancer classification, breast cancer grading is a fine-grained classification problem, so general methods cannot achieve satisfactory results. The second category is to apply the three evaluation criteria of the Nottingham Grading System (NGS) separately, and then integrate the results of the three criteria to obtain the final grading result. However, NGS is only a semiquantitative evaluation method, and there may be far more image features related to breast cancer grading. In this paper, we proposed a Nuclei-Guided Network (NGNet) for breast invasive ductal carcinoma (IDC) grading in pathological images. The proposed nuclei-guided attention module plays the role of nucleus attention, so as to learn more nuclei-related feature representations for breast IDC grading. In addition, the proposed nuclei-guided fusion module in the fusion process of different branches can further enable the network to focus on learning nuclei-related features. Overall, under the guidance of nuclei-related features, the entire NGNet can learn more fine-grained features for breast IDC grading. The experimental results show that the performance of the proposed method is better than that of state-of-the-art methods. In addition, we released a well-labeled dataset with 3644 pathological images for breast IDC grading. This dataset is currently the largest publicly available breast IDC grading dataset and can serve as a benchmark to facilitate a broader study of breast IDC grading.
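
Schematically, a nuclei-guided attention module re-weights backbone features with a map derived from nuclei probabilities, as in the hypothetical sketch below; the module name, gating design, and residual form are illustrative assumptions rather than the published NGNet definition.

    # Schematic sketch only: a nuclei-probability map is turned into a spatial
    # attention gate that re-weights backbone features.
    import torch
    import torch.nn as nn

    class NucleiGuidedAttention(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.gate = nn.Sequential(nn.Conv2d(1, channels, kernel_size=1), nn.Sigmoid())

        def forward(self, features, nuclei_map):
            # features: (B, C, H, W); nuclei_map: (B, 1, H, W) nuclei probability
            attn = self.gate(nuclei_map)
            return features * attn + features      # residual re-weighting

    feats = torch.randn(2, 64, 32, 32)
    nuclei = torch.rand(2, 1, 32, 32)
    print(NucleiGuidedAttention(64)(feats, nuclei).shape)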

14 pages, 3960 KiB  
Article
Accuracy Report on a Handheld 3D Ultrasound Scanner Prototype Based on a Standard Ultrasound Machine and a Spatial Pose Reading Sensor
by Radu Chifor, Tiberiu Marita, Tudor Arsenescu, Andrei Santoma, Alexandru Florin Badea, Horatiu Alexandru Colosi, Mindra-Eugenia Badea and Ioana Chifor
Sensors 2022, 22(9), 3358; https://doi.org/10.3390/s22093358 - 27 Apr 2022
Cited by 4 | Viewed by 2146
Abstract
The aim of this study was to develop and evaluate a 3D ultrasound scanning method. The main requirements were the freehand architecture of the scanner and high accuracy of the reconstructions. A quantitative evaluation of a freehand 3D ultrasound scanner prototype was performed, comparing the ultrasonographic reconstructions with the CAD (computer-aided design) model of the scanned object, to determine the accuracy of the result. For six consecutive scans, the 3D ultrasonographic reconstructions were scaled and aligned with the model. The mean distance between the 3D objects ranged between 0.019 and 0.05 mm and the standard deviation between 0.287 mm and 0.565 mm. Despite some inherent limitations of our study, the quantitative evaluation of the 3D ultrasonographic reconstructions showed comparable results to other studies performed on smaller areas of the scanned objects, demonstrating the future potential of the developed prototype.
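
The accuracy figures above are nearest-point distance statistics between the aligned ultrasound reconstruction and the reference CAD model. A toy version of that measurement, with scaling and alignment assumed already done and synthetic points standing in for the real meshes, could look like this:

    # Toy sketch of the accuracy metric (not the authors' pipeline): mean and
    # standard deviation of nearest-point distances between the reconstruction
    # and points sampled from the reference CAD surface.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(3)
    cad_points = rng.normal(size=(5000, 3))                  # sampled CAD surface
    recon_points = cad_points[:2000] + rng.normal(scale=0.05, size=(2000, 3))

    dists, _ = cKDTree(cad_points).query(recon_points)       # point-to-nearest-point
    print(f"mean = {dists.mean():.3f} mm, std = {dists.std():.3f} mm")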

Review


15 pages, 1720 KiB  
Review
Comparison of Diagnostic Test Accuracy of Cone-Beam Breast Computed Tomography and Digital Breast Tomosynthesis for Breast Cancer: A Systematic Review and Meta-Analysis Approach
by Temitope Emmanuel Komolafe, Cheng Zhang, Oluwatosin Atinuke Olagbaju, Gang Yuan, Qiang Du, Ming Li, Jian Zheng and Xiaodong Yang
Sensors 2022, 22(9), 3594; https://doi.org/10.3390/s22093594 - 09 May 2022
Cited by 3 | Viewed by 2560
Abstract
Background: Cone-beam breast computed tomography (CBBCT) and digital breast tomosynthesis (DBT) remain the main 3D modalities for X-ray breast imaging. This study aimed to systematically evaluate and meta-analyze the comparison of diagnostic accuracy of CBBCT and DBT to characterize breast cancers. Methods: Two independent reviewers identified screening and diagnostic studies from 1 January 2015 to 30 December 2021, with at least reported sensitivity and specificity for both CBBCT and DBT. A univariate pooled meta-analysis was performed using the random-effects model to estimate the sensitivity and specificity, while other diagnostic parameters, such as the area under the ROC curve (AUC), positive likelihood ratio (LR+), and negative likelihood ratio (LR−), were estimated using the bivariate model. Results: The pooled sensitivity, specificity, LR+, LR−, and AUC at the 95% confidence interval are 86.7% (80.3–91.2), 87.0% (79.9–91.8), 6.28 (4.40–8.96), 0.17 (0.12–0.25), and 0.925, respectively, for the 17 included studies in the DBT arm, while the corresponding pooled values for the five studies in the CBBCT arm are 83.7% (54.6–95.7), 71.3% (47.5–87.2), 2.71 (1.39–5.29), 0.20 (0.04–1.05), and 0.831. Conclusions: Our study demonstrates that DBT shows improved diagnostic performance over CBBCT regarding all estimated diagnostic parameters, with a statistical improvement in the AUC of DBT over CBBCT. CBBCT might be a useful modality for breast cancer detection; thus, we recommend more prospective studies on CBBCT application.
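
For orientation, the univariate random-effects pooling mentioned in the Methods can be sketched as DerSimonian–Laird pooling of logit-transformed sensitivities. The study counts below are invented for illustration, and the bivariate model used for LR+, LR−, and AUC is not reproduced.

    # Back-of-the-envelope sketch: DerSimonian-Laird random-effects pooling of
    # logit-transformed sensitivities. All study counts here are hypothetical.
    import numpy as np

    tp = np.array([40, 55, 30, 62])          # hypothetical true positives per study
    fn = np.array([8, 10, 4, 9])             # hypothetical false negatives per study

    sens = tp / (tp + fn)
    y = np.log(sens / (1 - sens))            # logit transform
    v = 1 / tp + 1 / fn                      # approximate variance of the logit

    w = 1 / v                                # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1 / (v + tau2)                    # random-effects weights
    pooled_logit = np.sum(w_re * y) / np.sum(w_re)
    pooled_sens = 1 / (1 + np.exp(-pooled_logit))
    print(f"pooled sensitivity ≈ {pooled_sens:.3f}")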

Planned Papers

The below list represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

Title: Deep learning for liver slide segmentation. New approaches and challenges
Authors: C. Vicas (1), I. Rusu (2), N. Al-Hajjar (3), M. Lupşor-Platon (2,3)
Affiliations: (1) Computer Science Department, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, Romania; (2) Prof. Dr. Octavian Fodor Regional Institute of Gastroenterology and Hepatology, Cluj-Napoca, Romania; (3) Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
Abstract: --

Title: Towards improving the hepatocellular carcinoma recognition performance through the combination of classical texture analysis and deep learning techniques, based on ultrasound images
Authors: D. Mitrea (1), R. Brehar (1), S. Nedevschi (1), M. Lupşor-Platon (2,3), M. Socaciu (2,3), R. Badea (2,3)
Affiliations: (1) Computer Science Department, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, Romania; (2) Prof. Dr. Octavian Fodor Regional Institute of Gastroenterology and Hepatology, Cluj-Napoca, Romania; (3) Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
Abstract: --

Title: Polyp Detection in Endoscopic Images Using Deep Learning
Authors: R. R. Slavescu (1), I. Frim (1), K. C. Slavescu (2)
Affiliations: (1) Computer Science Department, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, Romania; (2) Iuliu Hațieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
Abstract: --
