Article

Multi-Convolutional Neural Network-Based Diagnostic Software for the Presumptive Determination of Non-Dermatophyte Molds

by Mina Milanović 1,*, Suzana Otašević 2,3, Marina Ranđelović 2,3, Andrea Grassi 4, Claudia Cafarchia 5, Mihai Mares 6 and Aleksandar Milosavljević 1

1 Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 14, 18000 Niš, Serbia
2 Department of Microbiology and Immunology, Faculty of Medicine, University of Niš, 18000 Niš, Serbia
3 Center of Microbiology and Parasitology, Public Health Institute Niš, 18000 Niš, Serbia
4 Istituto Zooprofilattico Sperimentale della Lombardia e dell’Emilia Romagna, 27100 Pavia, Italy
5 Department of Veterinary Medicine, University of Bari, Valenzano, 70010 Bari, Italy
6 Laboratory of Antimicrobial Chemotherapy, Iasi University of Life Sciences, 700490 Iasi, Romania
* Author to whom correspondence should be addressed.
Electronics 2024, 13(3), 594; https://doi.org/10.3390/electronics13030594
Submission received: 14 December 2023 / Revised: 19 January 2024 / Accepted: 26 January 2024 / Published: 31 January 2024

Abstract

According to literature data, the incidence of superficial and invasive non-dermatophyte mold infections (NDMI) has increased. Many of these infections are undiagnosed or misdiagnosed, leading to inadequate treatment followed by critical conditions or even death of the patients. Accurate diagnosis of these infections requires complex mycological analyses and operator skill, and simpler, faster, and more efficient mycological tests are still needed to overcome the limitations of conventional fungal diagnostic procedures. In this study, software has been developed to provide an efficient mycological diagnosis using trained convolutional neural network (CNN) models as core classifiers. Using the EfficientNet-B2 architecture and permanent slides of NDM isolated from patients’ material (personal archive of Prof. Otašević, Department of Microbiology and Immunology, Medical Faculty, University of Niš, Serbia), a multi-CNN model has been trained and integrated into the diagnostic tool, with the main model reaching 93.73% accuracy. The Grad-CAM visualization method has been used to further validate the model’s pattern recognition. The software, which makes the final diagnosis by majority vote over image parts, has been tested with images provided by different European laboratories, showing almost faultless accuracy on the test images.

1. Introduction

Recently published data show that the incidence and prevalence of superficial [1] and invasive non-dermatophyte mold infections (NDMI) [2] have dramatically increased. Mycological analysis of NDMI implies complex procedures, expert knowledge, and the implementation of methods with optimal sensitivity and specificity. Consequently, mycological diagnostics are not part of most laboratories’ routine work, and a high percentage of superficial and invasive NDMI remain undiagnosed, leaving patients without treatment. Moreover, invasive NDMI have very high morbidity and are life-threatening, and the determination of the causative agent is significant since different mold species do not have equal antifungal susceptibility [2]. There is a need for prompt diagnosis and mold determination, which is crucial for establishing the most effective therapy. Additionally, facilitating accurate diagnosis will yield significant experience and knowledge in the fields of epidemiology and antifungal drug effectiveness and allow better monitoring of NDMI.
The standard mycological procedure includes isolating and identifying fungi based on morphological and biochemical features. Significant progress has been made in diagnostic procedures for infections caused by dermatophytes, yeasts, and Aspergillus species, but the diagnosis of NDMI remains a challenge, primarily because commercial tests for mold differentiation are still lacking. To overcome these problems, numerous techniques have been investigated for the rapid detection and identification of infective agents and for studying the pathogenesis of these infections [3]. Studies primarily aim to design molecular tests and to validate and standardize molecular kits, such as PCR platforms, which significantly improve the diagnosis [4,5].
Additionally, there is satisfactory progress in the MALDI-ToF mass spectrometry technique [6,7]. Recently, for instance, there have been reports of applying a new imaging tool, dynamic full-field optical coherence tomography (D-FF-OCT), that can be used to visualize and differentiate microscopic filamentous fungi [8]. However, many non-dermatophyte molds are not included in these studies.
Considering all the predicaments of the diagnostic process for different NDMI, the main purpose of this research was to design, develop, and validate diagnostic software that can automate, accelerate, and simplify the procedure. This research consisted of obtaining the necessary microscopic images, preprocessing the dataset to make it suitable for training, developing a convolutional neural network model, and validating the solution with test images received from different laboratories so that it could be integrated into a software solution. A prototype of the software tool has also been developed and tested, with the idea of making the software available to different laboratories, providing accurate diagnostic results and bypassing the need to send materials to another laboratory, thus making the process more rapid and efficient.
The development of different modern devices, like smartphones and cameras, has led to a drastically increased number of images produced every day in many different fields [9]. The databases of different images are updated regularly and are used not only for social media but in many other fields where image classification and determination from images can be performed.
The area of research that involves browsing, searching, and fetching images from a database is called image retrieval, and it has applications in various scientific areas [9]. From surveillance cameras, where identification of people and cars should be efficient and quick, to e-commerce, where similar images should be grouped for more accurate recommendations, and electronic circuit systems, where different techniques can be used to enhance and locate fault features in output signals [10], image retrieval is an integral part of many different systems. Earlier solutions for image retrieval were handcrafted, such as keyword-based ones, which are very complicated to update and maintain for larger databases. Other approaches used computer vision and machine learning methods, like color histograms, binary patterns, or other image descriptors, but deep learning methods like convolutional neural networks soon took the lead due to their precision and effectiveness, either as a whole solution or as the part of a system that needs image classification [11].
In recent years, artificial intelligence and machine learning methods have been an important part of CAD (computer-aided diagnostics), providing support to doctors in various fields of medicine, especially for assisted diagnosis [12]. CAD systems have been an integral part of many diagnostics practices and medical image evaluations. These systems can not only extract more information, perform the necessary preprocessing of images, and provide analysis from the data, but they can also speed up the process of diagnosis, making the transmission of data between different laboratories and specialists much faster and more efficient. Many of these systems are based on convolutional neural networks, relying on their accuracy and precision in pattern recognition.
Convolutional neural networks have been successfully applied to various fields of medical imaging with a high percentage of accuracy, and many of the proposed solutions operate on microscopic medical images. Examples of CNN use in medical imaging include object detection tasks in bioimaging [13], the analysis of cells under a microscope to detect possible pathology, such as cells infected by different bacteria [12], and the rapid detection of diseases from blood samples, like those affecting white blood cells [14].
In recent years, a few projects have used machine learning and convolutional neural networks (CNNs) to classify fungi. Many papers have discussed macroscopic or microscopic classification within one fungal genus, like Aspergillus spp. [15,16], or within one specimen type, like fungi in microscopic leucorrhea images [17,18], providing good results using CNNs, but only when the genus is already determined. Some authors focus on food safety and the macroscopic changes that molds create on bread [19]. Some articles touch on the importance of the subject but focus more on the recognition of cells [20], spores [21], or the presence of infection [22] in the provided microscopic fungal samples, without making the diagnosis itself. Researchers have also tried to build a tool that differentiates molds using an advanced optical sensor system [23], which gave excellent results, but with the disadvantage that the equipment used is costly and thus available only to some laboratories.
Since many of the mentioned papers had outstanding results, they provided excellent ground for creating software that can recognize and differentiate different species of fungi, and thus also molds [15,16,17,18,19,20,21,22,23]. Within this research, we noticed that while CAD systems and CNNs are an integral part of many medical systems, not much research has been conducted in the field of mold determination, and the research that has been conducted has focused mainly on specific equipment or on determination within one genus. This points to the lack of software or tools that can help determine the molds that cause NDMI, especially rare types, for which the number of infections has increased in recent years. Herein, the goal was to investigate the possibility of developing a convolutional neural network model that aids the identification of these non-dermatophyte molds and thus accelerates the diagnostic process, and then to use the model to develop a software diagnostic tool.

2. Materials and Methods

2.1. Dataset Description

We collected examples of nine non-dermatophyte mold genera, namely Alternaria spp., Aspergillus spp., Aureobasidium spp., Bipolaris spp., Cladosporium spp., Fusarium spp., the Mucorales group, Penicillium spp., and Scopulariopsis spp. High-resolution images were obtained at the laboratories of the Department of Microbiology and Immunology, Medical Faculty, University of Niš, Serbia (personal archive of Prof. Otašević), taken from microscopic slides of molds isolated from patients’ material.
As a part of this research, we obtained between 100 and 150 images per genus, collecting 920 initial samples in total (Table 1). These samples are high-resolution images that required preprocessing to be suitable for training the convolutional neural network model.

2.2. Determination of Non-Dermatophyte Molds

Fungal growth in the patient samples, in addition to the appropriate cultural characteristics and colony morphology, was determined at the genus level based on microscopic morphological characteristics. The microscopic morphology of the non-dermatophyte genera is as follows [24,25]:
  • Alternaria spp.: characterized by septate, pale to dark-brown hyphae, conidiophores, and formation of large ovoid or ellipsoidal macroconidia which have transverse and longitudinal septations (Figure 1a);
  • Aspergillus spp.: characterized by septate hyphae and unbranched conidiophores which end with swollen vesicles with flask-shaped phialides on which there are chains of conidia (Figure 1b);
  • Aureobasidium spp.: determined by the very characteristic formation of two types of hyphae: hyaline, thin-walled hyphae producing conidia directly from their walls, and dark, dense-walled, closely septate hyphae with single- and multi-celled swollen cells, some of which then convert into melanin-producing chlamydoconidia (Figure 1c);
  • Bipolaris spp.: characterized by dark septate hyphae and conidiophores on which there is sympodial development of pale-brown-pigmented, dense-walled pseudoseptate conidia with three to five septations (Figure 1d);
  • Cladosporium spp.: characterized by septate, dark hyphae, conidiophores, and produced chains of brown, oval, smooth-wall conidia (Figure 1e);
  • Fusarium spp.: characterized by septate hyaline hyphae with the formation of slender sickle multiseptate macroconidia (Figure 1f);
  • Mucorales group: characterized by irregularly shaped, non-septate, broad hyphae with right-angle branching, sporangiophores, and terminally formed spore-filled sporangia (Figure 1g);
  • Penicillium spp.: characterized by septate hyaline hyphae, branched conidiophores, and the presence of branched metula with produced phialides (a brush-like appearance) on which there are chains of conidia (Figure 1h);
  • Scopulariopsis spp.: characterized by septate hyphae with shorter conidiophores bearing cylindrical conidiogenous cells, and larger thick-walled mature conidia with truncate bases that are usually very rough and spiny (Figure 1i).
During our previous work on this research [26], high-resolution sample images (usually 3024 × 4032 pixels) were cut into smaller images suitable for CNN training and then manually divided into those that contain useful mold information and those that do not, since we did not want our neural network to receive invalid examples. This was a very long process, since a single shot yielded between 700 and 1000 cut images, meaning there were more than 100,000 samples in total from the initial images, making them hard to classify. Our initial approach to this problem consisted of adding a ‘Background’ category as the tenth category of the main model, but this soon proved inefficient: each slide usually yields far more useless images than useful ones, making the Background category more than nine times bigger than all the others and confusing the model, which still tried to categorize larger blots on the slides as some mold genus.
Because of this, we developed an input neural network model whose only function is to classify sample images. This input neural network not only solved the problem of classifying input images but also helped exclude crowded parts of slides that do contain mold parts but are too tangled to provide useful information.
The input model, with its high accuracy of 98%, has proven very useful for preparing example data; it chooses only clear samples and drastically improves the process of adding new sample images or classes to the training of the primary model. In Figure 2, green rectangles represent valid instances of the original input image, and red ones are excluded.
With the help of the trained input classifier, we obtained 8138 images suitable for training from the initial 920 samples; examples are presented in Figure 3. Of the chosen samples, around 80% were used for training the model and the remaining 20% for validation (Table 1).
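The tiling-and-filtering step can be illustrated with a short Python/Keras sketch. The tile size, file names, model file, and the index of the “useful” class are assumptions made for this sketch, not the exact values of our implementation:

```python
import numpy as np
from PIL import Image
from tensorflow import keras

TILE = 260  # assumed tile size (EfficientNet-B2's default input resolution)

def cut_into_tiles(path, tile=TILE):
    """Cut a high-resolution slide photograph into non-overlapping square tiles."""
    img = np.asarray(Image.open(path).convert("RGB"))
    h, w, _ = img.shape
    tiles, coords = [], []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(img[y:y + tile, x:x + tile])
            coords.append((y, x))  # remember position for later visualization
    return np.stack(tiles), coords

# Hypothetical file names; the input (gatekeeper) model keeps only tiles that
# show clear, untangled mold structures and rejects background and crowded areas.
input_model = keras.models.load_model("input_classifier.keras")
tiles, coords = cut_into_tiles("slide_3024x4032.jpg")
keep = input_model.predict(tiles).argmax(axis=1) == 1  # assumed: class 1 = "useful"
useful_tiles = tiles[keep]
useful_coords = [c for c, k in zip(coords, keep) if k]
```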
The final testing of the model was carried out with images obtained from different laboratories rather than the ones used for training.

2.3. Training Neural Network for Sample Classification

Convolutional neural networks represent a significant improvement in the machine learning and computer vision fields due to their accuracy and efficiency in recognizing image patterns by extracting filters with image feature maps [27]. CNNs have to learn from accurately categorized examples to be able to classify images into different categories.
For humans, recognizing a specific familiar pattern or object is easy, regardless of the object’s position, rotation, or color. For machines, however, it can be challenging, keeping in mind that they store data in binary form. Because of this, it is necessary to provide training examples for the neural network that cover all such cases.
Data augmentation techniques provide a set of operations to widen the dataset, performing rotation, translation, flipping, and brightness enhancement on existing training samples, thus creating a more informative dataset for the desired model [28]. An example of how those operations were applied to our dataset can be seen in Figure 4.
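As an illustration, such an augmentation pipeline can be expressed with Keras’ ImageDataGenerator; the numeric ranges and directory name below are illustrative assumptions rather than the exact settings used for our model:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rotation, shifts, flips, and brightness jitter, as named in the text;
# the numeric ranges are illustrative assumptions.
augmenter = ImageDataGenerator(
    rotation_range=90,            # random rotation
    width_shift_range=0.1,        # horizontal translation (fraction of width)
    height_shift_range=0.1,       # vertical translation (fraction of height)
    horizontal_flip=True,
    vertical_flip=True,
    brightness_range=(0.8, 1.2),  # brightness enhancement/attenuation
    validation_split=0.2,         # the 80/20 train/validation split
)
train_iter = augmenter.flow_from_directory(
    "dataset",                    # hypothetical folder, one subfolder per genus
    target_size=(260, 260), batch_size=32,
    class_mode="sparse",          # integer labels, matching the sparse loss used later
    subset="training",
)
val_iter = augmenter.flow_from_directory(
    "dataset", target_size=(260, 260), batch_size=32,
    class_mode="sparse", subset="validation",
)
```

The same generator object also carries out the 80/20 train/validation split described in Section 2.2.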
Convolutional neural networks (CNNs) have been commonly used for image classification in various fields of expertise because of their innovative approach. CNNs combine three architectural concepts, local receptive fields, shared weights, and spatial subsampling, making them more robust to distortion and translation.
Since the invention of CNNs, various architectures have been developed for different purposes; for problems similar to ours, the most commonly used are ResNet50 [29] and the EfficientNet family of models (B0 to B7). For our input model, ResNet50 gave the best results in terms of accuracy, while the EfficientNet-B2 model provided the best results for the primary classifier. EfficientNet models, introduced in 2019 by Tan and Le [30], feature an essential innovation in their heuristic approach to compound scaling of the model, improving efficiency and accuracy [31].
Compound scaling (Figure 5e) uniformly scales all dimensions of the model, in contrast to other, more conventional methods (Figure 5b–d). In compound scaling, the scaling coefficients are obtained by performing a grid search, thereby discovering the relationships between the scaling dimensions. The desired target model size is then obtained by applying those coefficients to the baseline network [32].
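Concretely, the compound scaling rule of [30] ties depth, width, and resolution to a single compound coefficient φ:

$$ d = \alpha^{\varphi}, \qquad w = \beta^{\varphi}, \qquad r = \gamma^{\varphi}, \qquad \text{s.t.}\ \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2, \quad \alpha, \beta, \gamma \geq 1, $$

where d, w, and r multiply the baseline network’s depth, width, and input resolution, respectively, and the constraint means that each unit increase of φ roughly doubles the total FLOPS. The constants α, β, and γ come from the grid search on the baseline; Tan and Le report approximately α = 1.2, β = 1.1, and γ = 1.15 for the B0 baseline, from which B1 through B7 are obtained by increasing φ [30].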
The implementation and training of our model were carried out using the Python programming language [33] and the Keras API. The Keras library [34] provides an interface with a set of functions that make working and experimenting with neural networks easier and faster. All aspects of neural network training can be customized by adjusting the parameters of these functions; Keras runs on top of the TensorFlow machine learning platform [35].
Among the architectures available in Keras, the EfficientNet family proved to work best for classifier training [30]. Our model produced the best results with the sparse_categorical_crossentropy loss (from the Keras losses module) and was compiled with the RMSprop algorithm [36] for optimization (Keras optimizers module). Since accuracy is the most important criterion in developing a diagnostic application, it was set as the only metric.
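A minimal sketch of this training setup is given below, assuming an ImageNet-pretrained EfficientNet-B2 backbone with a small classification head; the head layout, dropout rate, and learning rate are our assumptions, and train_iter/val_iter are the generators from the augmentation sketch above:

```python
from tensorflow import keras

# EfficientNet-B2 backbone plus a small classification head for the nine genera.
# Head layout, dropout, and learning rate are assumptions of this sketch.
base = keras.applications.EfficientNetB2(
    include_top=False, weights="imagenet", input_shape=(260, 260, 3)
)
model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(9, activation="softmax"),  # nine mold genera
])
model.compile(
    optimizer=keras.optimizers.RMSprop(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",  # integer class labels
    metrics=["accuracy"],                    # accuracy as the only metric
)
model.fit(train_iter, validation_data=val_iter, epochs=17)  # best run at 17 epochs
```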
During the model’s training, feature vectors are created, which form the core of the solution. These vectors are used to build a classifier that can sort images into the nine mold categories. An overview of the proposed model can be seen in Figure 6, presenting the process from obtaining the images from the laboratories to the software implementation, with the classifier highlighted as the core of the solution. The diagram on the right presents the logic behind the software solution tool.

3. Results and Discussion

3.1. Results Obtained from CNN Training

As mentioned before, our dataset and problem statement have not been addressed in other papers so far, so the model could not be compared with other models. In Table 2, we provide validation results for each fungal genus.
The best model accuracy after training was obtained using the EfficientNet-B2 architecture after 17 epochs, reaching 93.73% (Figure 7). A graph of the training process can be seen in Figure 8.
The validation of the model showed promising results; according to the confusion matrix (Figure 9), only two genera had a significant number of misclassified test images, which is not surprising, because Bipolaris spp. is the genus most similar to Alternaria spp., and the morphology of Scopulariopsis spp. is not characteristic compared to other species. When a high-resolution image is put into the software diagnostic tool, a decision is made for each small sample extracted from it, so even if some predictions for the small samples are incorrect, this should not affect the diagnosis.
Since the final decision of the software is made by majority vote, these results are very good, showing that from a pool of images we would have enough hits in the correct mold category.
There are two main groups of fungi based on their morphological characteristics: unicellular fungi (yeasts), whose basic cell is the blastoconidium, and multicellular fungi (molds), whose basic cell is the hypha. Multicellular fungi are divided into dermatophytes and non-dermatophytes [4]. In this paper, we focused on the determination of non-dermatophyte molds, for which no automated tests or diagnostic tools are available in laboratories.
The convolutional neural network model was trained to be the core of the diagnostic tool. The best model accuracy after training was obtained using the EfficientNet-B2 architecture after 17 epochs, reaching 93.73%; this is not as high as in our previous paper [26] but excellent considering that this paper includes only non-dermatophyte categories, whose morphologies are more similar within the group than in our previous research, where we considered both dermatophyte and non-dermatophyte genera.
Further testing and validation of the trained model is crucial for implementing a reliable diagnostic tool. As mentioned before, convolutional neural networks learn from examples, yet the imaged slides contain more than mold patterns, such as slide edges or blots, which are colored differently. It was therefore vital to check that the patterns the model learned were correct and not based on other attributes.
One way of visualizing CNN decision-making is the Grad-CAM method. Class activation maps can be used to interpret the prediction made by a convolutional neural network, and they have been applied in various systems, such as object detection [37], histopathology segmentation [38], and rotating machinery fault diagnosis [39]. The Grad-CAM technique highlights the parts of the image that represent the recognized pattern by extracting gradients of the target concept (molds in our model) [40]. These gradients flow into the final convolutional layer of the model to highlight regions of interest, producing heat maps.
Using Grad-CAM, we were able to validate our model: upon observing the images produced with this method, it is clear that the mold patterns have been highlighted, indicating that they were recognized (Figure 10).
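A compact Grad-CAM sketch in Keras follows. It assumes a functional model in which the named convolutional layer is directly reachable via get_layer (with a nested backbone, the gradient model would be built from the backbone instead); the layer name "top_conv" (the last convolutional layer in Keras EfficientNet models) is likewise an assumption of the sketch:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def grad_cam(model, image, conv_layer_name="top_conv", class_index=None):
    """Return a [0, 1] heat map of the regions that drove the prediction [40]."""
    grad_model = keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # explain the predicted class
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)           # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))  # per-channel importance
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam)                            # keep only positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```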
Images whose patterns were not recognized as expected can also be examined with the Grad-CAM method for a better understanding of the model and of ways to improve it. For our model, we can conclude that it struggled with crowded images of Scopulariopsis spp., examples of Bipolaris spp. near the slide edge, and Alternaria spp. samples that contain many recognizable shapes that pollute the image (Figure 11).

3.2. Software Implementation

The trained CNN model makes predictions on low-resolution images. Because the input images taken with microscopes are usually high-resolution, the best way to use our model was to design simple interface software that can diagnose from that kind of input.
Our software takes a high-resolution microscope image as input and, as a first step, cuts it into a set of smaller images, similarly to how image samples were prepared for model training. After the smaller images are ready, they are first processed by the input neural network, which decides which of them contain information important for the diagnosis and should be passed to the main model. All small images that do not pass this first selection are colored red (Figure 12).
After this first selection, the main CNN model makes a prediction for each remaining image, where the prediction vector gives the probability of the image belonging to each of the nine classes. According to that vector, the score of the class with the highest prediction is incremented, and the small image is colored in that class’s color (Alternaria—cyan, Aspergillus—yellow, etc.), as represented in Figure 12.
The final decision is made when all the small images have been processed, making the diagnosis from not one but all parts of the high-resolution image. In this way, thanks to majority voting, the precision of the whole software is much higher than the accuracy of the neural network model. The software reports the final score (diagnosis) but also shows which class was determined for each small part of the original image (Figure 12). For example, in the lower image in Figure 12, it can easily be seen which parts of the high-resolution image were excluded from classification by the input neural network, because they are colored red. For the rest of the image, it can be seen that Scopulariopsis has indeed been recognized, since most of the squares are colored white. Misses can also be noticed, most of them yellow (the color of Aspergillus), which corresponds to our results.
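The aggregation itself reduces to a majority vote over the per-tile predictions, roughly as sketched below (class names ordered alphabetically, as in Table 2; useful_tiles comes from the input-model step sketched earlier, and the reporting line is illustrative):

```python
import numpy as np

CLASSES = ["Alternaria", "Aspergillus", "Aureobasidium", "Bipolaris",
           "Cladosporium", "Fusarium", "Mucorales", "Penicillium",
           "Scopulariopsis"]

def diagnose(main_model, useful_tiles):
    """Majority vote over all retained tiles of one high-resolution image."""
    probs = main_model.predict(useful_tiles)              # shape: (n_tiles, 9)
    votes = np.bincount(probs.argmax(axis=1), minlength=len(CLASSES))
    return CLASSES[int(votes.argmax())], votes            # winning genus + tally

genus, votes = diagnose(model, useful_tiles)
print(f"Presumptive determination: {genus} spp. "
      f"({votes.max()} of {votes.sum()} tiles)")
```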
The implemented software was then used with test images that had not been used in the training or testing of the model. These images were courtesy of the Department of Veterinary Medicine, University of Bari, and the Laboratory of Antimicrobial Chemotherapy, Iasi University of Life Sciences, making them perfect samples for testing different microscopes and image-acquisition devices. The identification of molds is usually carried out based on their morphological differences (both macroscopic and microscopic examination), but if a diagnosis cannot be determined with certainty, mass spectrometry and in-depth molecular biology (PCR of the ITS segment or other) are also performed, ensuring that the labels of the test images are accurate. Table 3 shows the results of testing these high-resolution images: the software works for most of the test images, having issues only with images that contain a tiny sample of the mold on the slide.
As presented in Table 3, the test images obtained from the Department of Veterinary Medicine, University of Bari, and the Laboratory of Antimicrobial Chemotherapy, Iasi University of Life Sciences, have shown excellent results with our software, which returned incorrect results for only two images and correctly diagnosed all the others, making the final accuracy close to 100% for all species.

4. Conclusions

During the last decade, traveling has become much more accessible, allowing people to visit many remote areas and explore many different cultures. The accessibility of previously inaccessible parts of the world has, inter alia, increased the spread of some previously rare types of non-dermatophyte mold infections. Those infections, if not treated adequately and in time, can lead to a critical condition or even the death of the patient. Unfortunately, the specialists and laboratories working in the mycology field are limited in number, and sometimes these infections imitate the symptoms of other diseases, leading to slow or wrong diagnoses.
CAD has proven to be an essential part of the support of specialists in various medical fields, providing the needed help in many diagnostics proceedings. Computer systems can not only preprocess and analyze data efficiently and rapidly, but they can also help in the transmission of data among different laboratories and specialists, making various diagnostics processes easier to oversee and maintain.
In this paper, we proposed a method and described a software implementation that can become a handy tool for non-dermatophyte mold diagnostics. The software presented in this paper could drastically improve the identification of rare species of fungi, making the diagnostic process faster and more accurate. One of the most important contributions of this research is precise diagnostics, especially of some rare types of fungi that are often misdiagnosed and have proven very dangerous for patients if mistreated. With an accuracy of 93.73% for the main model and almost flawless accuracy of the software itself, the tool has proven to be a significant part of the diagnostic procedure.
Using a second convolutional neural network has proven effective in the diagnostic process by making the preparation of the dataset easier. The input neural network is also an integral part of the software diagnostic algorithm: by excluding the unnecessary parts of high-resolution images, it allows the main model to make decisions using only the parts of the images that contain vital information, which has been tested and proven with various sample images, as shown in the examples.
The main convolutional neural network model, used for recognizing the morphological characteristics of molds in image patterns, has shown promising results, with an accuracy of 93.73% that improves each time we receive a new batch of images. Moreover, with the majority-vote technique applied over high-resolution images, the accuracy of the solution becomes even higher. Based on the validation of the model and the testing performed through the interface provided and described, the precision of the software is close to perfect, with some exceptional cases that need algorithm improvement. This shows that our tool can be very significant and reliable in the diagnostic process and provide crucial aid for the morphological determination of rare species of molds.
Future development of the application will include enabling the application to run on different platforms and be easily accessible. Once the solution is available for mobile devices, the diagnostics process can be rapid and performed in many more laboratories, even those that do not have a specialist available. This way, the process can be reduced from a few days of sending materials to another laboratory or department to only a few minutes of taking the image with a mobile phone.
In step with these software upgrades, we plan to add more genera of molds significant for diagnostics, for which we are currently acquiring images.

Author Contributions

Conceptualization, M.M. (Mina Milanović), A.M. and S.O.; data curation, M.M. (Mina Milanović), A.M., S.O. and M.R.; formal analysis, M.M. (Mina Milanović), A.M., S.O. and M.R.; investigation, M.M. (Mina Milanović), A.M. and S.O.; methodology, M.M. (Mina Milanović) and A.M.; resources, S.O., M.R., A.G., C.C. and M.M. (Mihai Mares); software, M.M. (Mina Milanović) and A.M.; supervision, A.M. and S.O.; validation M.M. (Mina Milanović), A.M., S.O. and M.R.; visualization, M.M. (Mina Milanović) and A.M.; writing—original draft, M.M. (Mina Milanović); writing—review and editing, A.M., S.O., A.G., C.C. and M.M. (Mihai Mares). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to privacy issues related to the different institutions that provided the materials.

Acknowledgments

We would also like to thank the Department of Veterinary Medicine, University of Bari (Italy), and the Laboratory of Antimicrobial Chemotherapy, Iasi University of Life Sciences (Romania), for all the help and samples provided.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rezaei, N. (Ed.) Encyclopedia of Infection and Immunity; Elsevier: Amsterdam, The Netherlands; Oxford, UK; Cambridge, UK, 2022; Volume 3, pp. 414–432. ISBN 9780128187319. [Google Scholar]
  2. Chen, S.C.-A.; Perfect, J.; Colombo, A.L.; Cornely, O.A.; Groll, A.H.; Seidel, D.; Albus, K.; De Almedia, J.N.; Garcia-Effron, G.; Gilroy, N.; et al. Global Guideline for the Diagnosis and Management of Rare Yeast Infections: An Initiative of the ECMM in Cooperation with ISHAM and ASM. Lancet Infect. Dis. 2021, 21, e375–e386. [Google Scholar] [CrossRef] [PubMed]
  3. Rajković, K.M.; Milošević, N.T.; Otašević, S.; Jeremić, S.; Arsenijević, V.A. Aspergillus Fumigatus Branching Complexity in Vitro: 2D Images and Dynamic Modeling. Comput. Biol. Med. 2019, 104, 215–219. [Google Scholar] [CrossRef] [PubMed]
  4. Cafarchia, C.; Iatta, R.; Latrofa, M.S.; Gräser, Y.; Otranto, D. Molecular Epidemiology, Phylogeny and Evolution of Dermatophytes. Infect. Genet. Evol. 2013, 20, 336–351. [Google Scholar] [CrossRef] [PubMed]
  5. Otašević, S.; Momčilović, S.; Stojanović, N.M.; Skvarč, M.; Rajković, K.; Arsić-Arsenijević, V. Non-Culture Based Assays for the Detection of Fungal Pathogens. J. Mycol. Med. 2018, 28, 236–248. [Google Scholar] [CrossRef] [PubMed]
  6. Singhal, N.; Kumar, M.; Kanaujia, P.K.; Virdi, J.S. MALDI-TOF Mass Spectrometry: An Emerging Technology for Microbial Identification and Diagnosis. Front. Microbiol. 2015, 6, 791. [Google Scholar] [CrossRef] [PubMed]
  7. Florio, W.; Tavanti, A.; Barnini, S.; Ghelardi, E.; Lupetti, A. Recent Advances and Ongoing Challenges in the Diagnosis of Microbial Infections by MALDI-TOF Mass Spectrometry. Front. Microbiol. 2018, 9, 1097. [Google Scholar] [CrossRef]
  8. Maldiney, T.; Chassot, J.-M.; Boccara, C.; Blot, M.; Piroth, L.; Charles, P.-E.; Garcia-Hermoso, D.; Lanternier, F.; Dalle, F.; Sautour, M. Dynamic Full-Field Optical Coherence Tomography as Complementary Tool in Fungal Diagnostics. J. Med. Mycol. 2022, 32, 101303. [Google Scholar] [CrossRef]
  9. Cools, A.; Belarbi, M.A.; Mahmoudi, S.A. A Comparative Study of Reduction Methods Applied on a Convolutional Neural Network. Electronics 2022, 11, 1422. [Google Scholar] [CrossRef]
  10. Dimauro, G.; Deperte, F.; Maglietta, R.; Bove, M.; La Gioia, F.; Renò, V.; Simone, L.; Gelardi, M. A Novel Approach for Biofilm Detection Based on a Convolutional Neural Network. Electronics 2020, 9, 881. [Google Scholar] [CrossRef]
  11. Jia, Z.; Wang, S.; Zhao, K.; Li, Z.; Yang, Q.; Liu, Z. An Efficient Diagnostic Strategy for Intermittent Faults in Electronic Circuit Systems by Enhancing and Locating Local Features of Faults. Meas. Sci. Technol. 2024, 35, 036107. [Google Scholar] [CrossRef]
  12. Liu, Y.; Jiang, H.; Wang, Y.; Wu, Z.; Liu, S. A Conditional Variational Autoencoding Generative Adversarial Networks with Self-Modulation for Rolling Bearing Fault Diagnosis. Measurement 2022, 192, 110888. [Google Scholar] [CrossRef]
  13. Ahn, H.; Lee, M.; Seong, S.; Lee, M.; Na, G.-J.; Chun, I.-G.; Kim, Y.; Hong, C.-H. BioEdge: Accelerating Object Detection in Bioimages with Edge-Based Distributed Inference. Electronics 2023, 12, 4544. [Google Scholar] [CrossRef]
  14. Almurayziq, T.S.; Senan, E.M.; Mohammed, B.A.; Al-Mekhlafi, Z.G.; Alshammari, G.; Alshammari, A.; Alturki, M.; Albaker, A. Deep and Hybrid Learning Techniques for Diagnosing Microscopic Blood Samples for Early Detection of White Blood Cell Diseases. Electronics 2023, 12, 1853. [Google Scholar] [CrossRef]
  15. Billones, R.K.C.; Calilung, E.J.; Dadios, E.P.; Santiago, N. Aspergillus Species Fungi Identification Using Microscopic Scale Images. In Proceedings of the 2020 IEEE 12th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), Manila, Philippines, 3–7 December 2020; IEEE: Manila, Philippines, 2020; pp. 1–5. [Google Scholar]
  16. Billones, R.K.C.; Calilung, E.J.; Dadios, E.P.; Santiago, N. Image Based Macroscopic Classification of Aspergillus Fungi Species using Convolutional Neural Networks. In Proceedings of the 2020 IEEE 12th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), Manila, Philippines, 3–7 December 2020; IEEE: Manila, Philippines, 2020; pp. 1–4. [Google Scholar]
  17. Zhang, J.; Lu, S.; Wang, X.; Du, X.; Ni, G.; Liu, J.; Liu, L.; Liu, Y. Automatic Identification of Fungi in Microscopic Leucorrhea Images. J. Opt. Soc. Am. A 2017, 34, 1484. [Google Scholar] [CrossRef] [PubMed]
  18. Hao, R.; Wang, X.; Zhang, J.; Liu, J.; Du, X.; Liu, L. Automatic Detection of Fungi in Microscopic Leucorrhea Images Based on Convolutional Neural Network and Morphological Method. In Proceedings of the 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chengdu, China, 15–17 March 2019; IEEE: Chengdu, China, 2019; pp. 2491–2494. [Google Scholar]
  19. Ollinger, N.; Malachova, A.; Sulyok, M.; Schütz-Kapl, L.; Wiesinger, N.; Krska, R.; Weghuber, J. Combination of DNA Barcoding, Targeted Metabolite Profiling and Multispectral Imaging to Identify Mold Species and Metabolites in Sliced Bread. Future Foods 2022, 6, 100196. [Google Scholar] [CrossRef]
  20. Du, X.; Liu, L.; Wang, X.; Ni, G.; Zhang, J.; Hao, R.; Liu, J.; Liu, Y. Automatic Classification of Cells in Microscopic Fecal Images Using Convolutional Neural Networks. Biosci. Rep. 2019, 39, BSR20182100. [Google Scholar] [CrossRef] [PubMed]
  21. Tahir, M.W.; Zaidi, N.A.; Rao, A.A.; Blank, R.; Vellekoop, M.J.; Lang, W. A Fungus Spores Dataset and a Convolutional Neural Network Based Approach for Fungus Detection. IEEE Trans. Nanobioscience 2018, 17, 281–290. [Google Scholar] [CrossRef] [PubMed]
  22. Prommakhot, A.; Srinonchat, J. Exploiting Convolutional Neural Network for Automatic Fungus Detection in Microscope Images. In Proceedings of the 2020 8th International Electrical Engineering Congress (iEECON), Chiang Mai, Thailand, 4–6 March 2020; IEEE: Chiang Mai, Thailand, 2020; pp. 1–4. [Google Scholar]
  23. Tahir, M.W.; Zaidi, N.A.; Blank, R.; Vinayaka, P.P.; Vellekoop, M.J.; Lang, W. Detection of Fungus through an Optical Sensor System Using the Histogram of Oriented Gradients. In Proceedings of the 2016 IEEE SENSORS, Orlando, FL, USA, 30 October–3 November 2016; IEEE: Orlando, FL, USA, 2016; pp. 1–3. [Google Scholar]
  24. Larone, D.H. Medically Important Fungi: A Guide to Identification, 3rd ed.; ASM Press: Washington, DC, USA, 1995; ISBN 9781555810917. [Google Scholar]
  25. Larone, D.H.; Walsh, T.J.; Hayden, R.T.; Larone, D.H. Larone’s Medically Important Fungi: A Guide to Identification, 6th ed.; ASM Press: Washington, DC, USA, 2018; ISBN 9781555819873. [Google Scholar]
  26. Milanovic, M.; Milosavljevic, A.; Randjelovic, M. Visualization of Microscopic Morphological Characteristics Used for Determination of Infectious Molds. In Proceedings of the 8th International Conference IcETRAN, Ethno Village Stanišići, Republic of Srpska, 8 September 2021; pp. 528–532. [Google Scholar]
  27. Arbib, M.A. (Ed.) The Handbook of Brain Theory and Neural Networks; A Bradford Book; MIT Press: Cambridge, MA, USA, 1998; ISBN 9780262511025. [Google Scholar]
  28. How to Load Large Datasets From Directories for Deep Learning in Keras by Jason Brownlee. Available online: https://machinelearningmastery.com/how-to-configure-image-data-augmentation-when-training-deep-learning-neural-networks/ (accessed on 20 November 2023).
  29. ResNet50 Architecture. Available online: https://iq.opengenus.org/resnet50-architecture/ (accessed on 25 November 2023).
  30. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning (ICML), Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar] [CrossRef]
  31. Image Classification Efficientnet Fine Tuning. Available online: https://keras.io/examples/vision/image_classification_efficientnet_fine_tuning/ (accessed on 25 November 2023).
  32. EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling. Available online: https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html (accessed on 25 November 2023).
  33. Python. Available online: https://www.python.org/ (accessed on 25 July 2023).
  34. Keras: The Python Deep Learning Library. Available online: https://keras.io (accessed on 25 November 2023).
  35. TensorFlow. Available online: https://www.tensorflow.org/ (accessed on 25 November 2023).
  36. Understanding RMSprop—Faster Neural Network Learning. Available online: https://towardsdatascience.com/understanding-rmsprop-faster-neural-network-learning-62e116fcf29a (accessed on 26 November 2023).
  37. Zhang, S.; Yu, S.; Ding, H.; Hu, J.; Cao, L. CAM R-CNN: End-to-End Object Detection with Class Activation Maps. Neural Process. Lett. 2023, 55, 10483–10499. [Google Scholar] [CrossRef]
  38. Li, Y.; Wang, L.; Huang, X.; Wang, Y.; Dong, L.; Ge, R.; Zhou, H.; Ye, J.; Zhang, Q. Sketch-Supervised Histopathology Tumour Segmentation: Dual CNN-Transformer With Global Normalised CAM. IEEE J. Biomed. Health Inform. 2024, 28, 66–77. [Google Scholar] [CrossRef]
  39. Zhao, K.; Liu, Z.; Zhao, B.; Shao, H. Class-Aware Adversarial Multiwavelet Convolutional Neural Network for Cross-Domain Fault Diagnosis. IEEE Trans. Ind. Inf. 2023, 1–12. [Google Scholar] [CrossRef]
  40. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2020, 128, 336–359. [Google Scholar] [CrossRef]
Figure 1. Microscopical morphological characteristics of molds: (a) Alternaria, (b) Aspergillus, (c) Aureobasidium, (d) Bipolaris, (e) Cladosporium, (f) Fusarium, (g) Mucorales, (h) Penicillium, and (i) Scopulariopsis.
Figure 2. Example of input image sample classification, where green squares represent useful training samples and red squares are excluded (which are empty parts of the slide).
Figure 3. Examples of the low-resolution training images.
Figure 4. Data augmentation of a single sample.
Figure 5. Comparison of different scaling methods [32].
Figure 6. Overview of the proposed model.
Figure 7. Confusion matrix showing accuracy in %.
Figure 8. Training graph through epochs plotting accuracy on the training (blue line) and validation (orange line) set.
Figure 9. Confusion matrix.
Figure 10. Examples of good pattern recognition visualization.
Figure 11. Examples of bad pattern recognition.
Figure 12. Examples of software pattern recognition, where different squares, according to the legend presented, illustrate the classifier’s predictions for that part of the high-resolution image, clearly showing the most hits for Alternaria spp. (top left, cyan) and Scopulariopsis spp. (bottom, white). Red squares represent parts of the images that have been excluded from the classification process.
Table 1. Details of the sample dataset used.

Number of Genera | Number of Samples | Number of Samples per Class | Number of Images after Input Classification | Images Used for Training | Images Used for Validation
9 | 920 | 100–150 | 8138 | 6520 | 1618
Table 2. CNN training results for different fungi genera.

Fungi Genera | Accuracy [%] | Samples Placed Correctly/Samples per Class
Alternaria | 91.8 | 168/183
Aspergillus | 94.2 | 146/156
Aureobasidium | 99.5 | 193/194
Bipolaris | 84.1 | 164/195
Cladosporium | 94.1 | 193/205
Fusarium | 100 | 145/145
Mucorales | 99.4 | 162/163
Penicillium | 98.1 | 159/162
Scopulariopsis | 85.5 | 165/193
Table 3. Software testing results for different fungi genera.

Fungi Genera | University of Bari | Iasi University of Life Sciences | Software Accuracy [%]
Alternaria | - | 1/1 | 100
Aspergillus | 19/20 | 4/4 | 95.8
Aureobasidium | - | - | -
Bipolaris | - | 1/1 | 100
Cladosporium | 4/4 | - | 100
Fusarium | - | 2/2 | 100
Mucorales | - | - | -
Penicillium | 3/3 | 2/3 | 83.3
Scopulariopsis | - | 1/1 | 100
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
