Systematic Review

Machine Learning for Automated Classification of Abnormal Lung Sounds Obtained from Public Databases: A Systematic Review

1 Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
2 Department of Medicine, Division of Pulmonary and Critical Care Medicine, Mayo Clinic, Rochester, MN 55905, USA
3 Department of Cardiac Anesthesiology and Intensive Care, Republican Clinical Medical Center, 223052 Minsk, Belarus
4 Division of Pulmonary Medicine, Mayo Clinic Health Systems, Essentia Health, Duluth, MN 55805, USA
5 Department of Anesthesiology, Taipei Veterans General Hospital, National Yang Ming Chiao Tung University, Taipei 11217, Taiwan
6 Department of Biomedical Sciences and Engineering, National Central University, Taoyuan 320317, Taiwan
7 Mayo Clinic Libraries, Mayo Clinic, Rochester, MN 55905, USA
* Author to whom correspondence should be addressed.
Bioengineering 2023, 10(10), 1155; https://doi.org/10.3390/bioengineering10101155
Submission received: 9 August 2023 / Revised: 15 September 2023 / Accepted: 26 September 2023 / Published: 2 October 2023
(This article belongs to the Special Issue Machine Learning and Signal Processing for Biomedical Applications)

Abstract

Pulmonary auscultation is essential for detecting abnormal lung sounds during physical assessments, but its reliability depends on the operator. Machine learning (ML) models offer an alternative by automatically classifying lung sounds. ML models require substantial data, and public databases aim to address this limitation. This systematic review compares the characteristics, diagnostic accuracy, concerns, and data sources of existing models in the literature. Papers published between 1990 and 2022 were retrieved from five major databases and assessed. Quality assessment was accomplished with a modified QUADAS-2 tool. The review encompassed 62 studies utilizing ML models and public-access databases for lung sound classification. Artificial neural networks (ANN) and support vector machines (SVM) were the most frequently employed ML classifiers. The accuracy ranged from 49.43% to 100% for discriminating abnormal sound types and from 69.40% to 99.62% for disease class classification. Seventeen public databases were identified, with the ICBHI 2017 database being the most used (66%). The majority of studies exhibited a high risk of bias and concerns related to patient selection and reference standards. In summary, ML models can effectively classify abnormal lung sounds using publicly available data sources. Nevertheless, inconsistent reporting and methodologies limit the advancement of the field; public databases should therefore adhere to standardized recording and labeling procedures.

Graphical Abstract

1. Introduction

1.1. Context and Objectives

Respiratory conditions are among the most common diseases associated with substantial morbidity and mortality [1], representing a growing health burden. Rapidly and reliably diagnosing pulmonary diseases is vital for establishing appropriate medical management and preventing further respiratory decompensation. Most conventional diagnostic tools (e.g., chest radiographs) can only be performed intermittently, and the standard physical exam (e.g., visual inspection and percussion) offers limited diagnostic accuracy [2,3,4]. Pulmonary auscultation is a noninvasive, safe, inexpensive, and easy-to-perform way to rapidly evaluate patients with pulmonary symptoms, making it an essential component of the clinical examination [5]. However, auscultation is operator-dependent and subject to inherent interobserver variability [2,3].
Deep learning (DL) is a subfield of machine learning (ML) that has seen increased exploration as computational power and the availability of large databases have grown [6]. In lay terms, ML allows a machine to learn rules and insights from input data and then apply those rules to generate predictions from data in new situations [7]. DL takes advantage of its multilayered architecture by sequentially feeding representations into multiple layers, generating more distinguishable data points. This process allows the machine to learn highly complex functions [6].
ML and DL have shown encouraging results in healthcare when diagnosing diseases, primarily by analyzing images. For instance, radiology and pathology have benefitted from DL in disease diagnosis [8]. By utilizing large databases, classification algorithms have become increasingly accurate for detecting abnormalities in images and classifying them into multiple disease types [9], promising to reduce physician burnout and enhance test interpretations. Similarly, ML and DL can process audio signals and therefore classify sounds, such as those captured by auscultation, offering to aid clinicians in detecting and classifying heart [10] and lung [11] pathologies.
Respiratory sounds (RS) carry relevant diagnostic information for pulmonary diseases [12]. They are heard over the chest wall and originate from the movement of air in and out of the lungs during the respiratory cycle. RS interpretation during auscultation is often used in diagnosing lung pathologies, such as obstructive or restrictive respiratory diseases. These sounds are nonstationary, nonlinear, and prone to noise contamination, making it hard for clinicians to detect abnormalities [13]. The diagnostic value of auscultation in detecting abnormal RSs could be improved if an objective and standardized interpretation approach were implemented [14,15]. This review aims to assess the diagnostic accuracy of ML and DL algorithms in abnormal lung sound detection and classification and to evaluate differences in methodology and reporting in the published literature, identifying common issues that potentially slow the progress of this promising field.

1.2. Process of Automated Abnormal Lung Sounds Classification

DL can recognize lung disorders and abnormalities based on RS analysis. These computer-assisted techniques increase objectivity in detecting and diagnosing adventitious or pathological sounds. Figure 1 illustrates an overview of the automatic abnormal lung sound classification process, which typically includes the following steps: audio recording, file preprocessing, feature extraction, and classification.

1.2.1. Lung Sound Recording

Lung sounds are typically recorded for training healthcare workers and for research analysis; these audio samples can be broken down to objectively describe their duration, waveform, and frequency components [16]. Recordings are obtained in one of two ways: either directly by trained personnel who perform the auscultation with a device designed or adapted (with a microphone) for sound recording, or by attaching sensors to the subject’s chest, which allows prolonged or continuous recording [17]. The most commonly used sensors are piezoelectric microphones, contact microphones, electret microphones, and the more widely distributed electronic stethoscopes [11]. However, this step is subject to variability among study designs due to differences in auscultation points, recording devices, and environmental conditions.

1.2.2. Audio Preprocessing

Preprocessing is an essential step, as it allows the samples to be modified to better fit the intended analysis, reduces the storage burden, and facilitates feature extraction [18]. One component of preprocessing is denoising, which aims to eliminate signals corresponding to interference sources such as background noise, heartbeats, and movement [19] while preserving the valuable information; the resulting signal is cleaner and more suitable for further analysis. The most widespread denoising techniques are the discrete wavelet transform (DWT), singular value decomposition (SVD), and adaptive filtering, which provide robust denoising but can be computationally expensive [20]. Smoothing is another approach, in which various techniques are used to minimize fluctuations in a signal, regardless of noise [21]. Other preprocessing methods include segmentation, to separate breath cycles into their corresponding phases, and amplitude normalization, to reduce amplitude variations attributable to factors such as the gain of the recording device or subject demographics [22]. Adequate preprocessing of the audio files impacts the overall accuracy of the models [20].
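To make these steps concrete, the following is a minimal sketch of DWT-based denoising and amplitude normalization in Python; the wavelet family, decomposition level, and threshold rule are illustrative assumptions rather than settings taken from the reviewed studies.

```python
# Minimal preprocessing sketch: soft wavelet-threshold denoising plus
# amplitude normalization. Wavelet, level, and threshold rule are
# illustrative choices, not prescriptions from the reviewed studies.
import numpy as np
import pywt

def denoise_dwt(signal, wavelet="db8", level=5):
    """Denoise a 1-D lung sound signal via soft wavelet thresholding."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))  # universal threshold
    coeffs[1:] = [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def normalize_amplitude(signal):
    """Scale to [-1, 1] to reduce gain-related variation between recordings."""
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal
```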

1.2.3. Feature Extraction

Feature extraction is the identification of a set of distinctive properties of a signal that will be used for comparison in the classification stage. In this step, a large input signal with many redundant components can be transformed into a smaller set of representative features that accurately describe the original signal, facilitating and expediting the classification step [23]. In general, the features are extracted from the time, frequency, or time–frequency domain [11]. Established techniques for feature extraction include autoregressive models, characterized by their short training times and low variance; mel-frequency cepstral coefficients (MFCCs), which are effective for reducing dimensionality but may not capture all the nuances of complex data; and spectral and wavelet-based features, which offer multiresolution analysis and precise feature localization [11].
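As a concrete example, below is a minimal MFCC extraction sketch using the librosa library; the sampling rate, number of coefficients, and mean/standard-deviation summarization are illustrative assumptions, not a recipe from the reviewed studies.

```python
# Feature extraction sketch: summarize a recording as the mean and
# standard deviation of its MFCCs, yielding a fixed-length vector per
# file. Sampling rate and coefficient count are illustrative.
import librosa
import numpy as np

def extract_mfcc_features(path, sr=4000, n_mfcc=13):
    audio, sr = librosa.load(path, sr=sr)  # resample to a common rate
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    # Collapse the time axis so every recording yields the same dimensionality.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```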

1.2.4. Classification

ML and DL algorithms can classify the preprocessed signals and extracted features based on their characteristics, allowing them to differentiate between normal and abnormal sounds automatically. There are two common ways to feed the data into the model: holdout validation and cross-validation. In holdout validation, the dataset is divided into fixed training, validation, and testing splits. The model uses the training data to learn its parameters; the validation data then allows the algorithm to search for the optimal set of hyperparameters; finally, the test data, which remains hidden throughout model building, is used to assess performance [24]. In the cross-validation approach, multiple partitions of the dataset are generated, allowing each partition to be used multiple times and for different purposes, potentially improving the statistical reliability of the classification results [25]. The goal of classification is to label the sound signals as normal or abnormal [11], and more complex algorithms may go as far as differentiating between types of sounds or even underlying conditions. The performance metrics are derived from the results of this step, and measures such as accuracy or sensitivity can be calculated. Of note, the performance metrics depend not only on the chosen classifier but also on all the preceding steps.
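The two validation schemes can be sketched with scikit-learn as follows; the SVM classifier, split ratio, and fold count are illustrative assumptions, and synthetic data stands in for a real lung sound feature matrix.

```python
# Sketch contrasting holdout validation and cross-validation with
# scikit-learn. Synthetic features stand in for a real feature matrix
# (e.g., MFCC summaries); all settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=26, random_state=0)
clf = SVC(kernel="rbf")

# Holdout: fixed split; the test set stays hidden until the final evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
clf.fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))

# Cross-validation: each partition serves once as the evaluation fold.
print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```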

1.3. Public Lung Sound Databases

The increasing popularity of artificial intelligence (AI) in biosignal classification coexists with significant interest in developing public databases that provide the much-needed clinical data essential for building classification models. Previous reviews have noted that biosignal databases tend to favor electrocardiogram (ECG) data [26]. Nonetheless, publicly available databases have been essential in developing abnormal lung sound [11] and cardiac [10] classification models. Undoubtedly, interest in automatic lung sound detection has resurfaced mainly due to the widespread growth of ML and DL techniques, as well as the emergence of the aforementioned publicly accessible databases [27], which narrow the gap between ML developers and available lung sound audio data. Despite the surge in the use of large lung sound databases for DL algorithm development, no systematic evaluation has yet examined the accuracy and reporting variations in the corresponding papers published in the last ten years.

2. Materials and Methods

2.1. Bibliographic Search

The systematic review was performed following the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [28]. A comprehensive literature search for articles published between January 1990 and December 2022 was carried out by an experienced specialist medical librarian (D.J.G.) on five databases: MEDLINE, Embase, the Cochrane Central Register of Controlled Trials, Web of Science, and Scopus. The full search strategy can be found in the Supplementary Files; it was confirmed independently by two authors (J.G.-M. and A.L.). The final study protocol was registered on the OSF server: https://osf.io/8sf5w.

2.2. Eligibility Criteria

We included studies that (a) proposed an ML classifier for the detection of adventitious and pathological lung sounds in adults; (b) used publicly available (online or CD) lung sound databases; and (c) reported at least one performance metric for adequate classification, such as sensitivity, specificity, or accuracy. Book chapters, review papers, abstracts of communications or meetings, letters to the editor, commentaries on articles, unpublished works, and study protocols were excluded, as were studies focused on the pediatric population or using nonpublic audio recordings. A complementary search using the references in the included papers was also conducted. Table 1 details the eligibility criteria.

2.3. Article Selection

Abstracts were screened by H.-Y.W. and J.G.-M. using the inclusion criteria. Full texts were independently reviewed in duplicate by eight reviewers organized in pairs (H.-Y.W., S.H., Y.P., A.T., J.G.-M., I.A., I.K., and A.L.). Disagreements were resolved during consensus meetings with a third reviewer (V.H.). Covidence software [29] was used for data collection. The studies’ outcomes were reported as the diagnostic accuracy for abnormal sound or pathology detection (sensitivity, specificity, and accuracy, when available). The types of performance measures reported depended on the approach of each study.

2.4. Data Extraction

The study details for the included articles were abstracted by ten independent researchers (H.-Y.W., S.H., Y.P., A.T., K.L., D.V., S.Q., J.G.-M., I.A., and I.K.) using a standardized data extraction form, and each article was assessed by two different researchers. The reviewers resolved discrepancies by consensus or in consultation with a third party, as needed. The data abstracted included the baseline details (year of publication and first author); study design (type of lung sound or pathology evaluated, DL algorithm used, feature extraction techniques, training/validation/test split, and evidence of external validation); dataset characteristics (number of recordings, auscultation points, the sensor used, and reference standard); and the performance metrics (reported as accuracy, sensitivity, and specificity).

2.5. Quality Assessment

We assessed the risk of bias (ROB) and applicability concerns for every included study using a modified QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies-2) instrument [30]. Ten researchers independently assessed the included articles, with each article evaluated by at least two authors. Discrepancies and final adjudication were resolved by consultation with a third author (A.L.). Given the poor standards of quality assessment (QA) reporting for AI-based diagnostic accuracy studies and the lack of validated QA tools [31], we modified the QUADAS-2 instrument to fit the purposes of this review. The four core domains for ROB evaluation were maintained, and new signaling questions tailored to this review were assessed. Because the eligible studies used audio files from publicly available lung sound databases, such data sources were accessed when possible, allowing assessment of the ROB during the creation of each database’s audio recordings. When the corresponding lung sound database was no longer accessible, the signaling question was answered “N/A”, indicating a lack of information. The ROB for each domain was judged low only when the answers to all signaling questions were “yes”; conversely, the ROB was deemed high in the presence of at least one signaling question answered “no”. If at least half of the signaling questions of a domain could not be assessed due to a lack of information, the ROB for the domain was deemed “unclear”. When the reference standard used to determine the ground truth sound classification was interpreted by a human expert, this was listed as a potential source of bias and the corresponding question answered “no”. Applicability concerns were evaluated in the reference standard, index test, and patient selection domains, as recommended by the original QUADAS-2 instrument [32]. Notably, a significant portion of the studies used databases known to contain pediatric patients; these studies were therefore classified as having a “high” risk regarding applicability.

3. Results

A standardized approach was used for this systematic review. The database search identified a total of 3143 records. The removal of 650 duplicates left 2493 articles. Of these, 2311 articles were excluded based on title and abstract screening, leaving 182 full-text articles that were assessed for eligibility. The main reasons for exclusion were not using audio recordings from publicly available databases and not proposing an ML/DL algorithm for abnormal lung sound classification. A few studies developed an algorithm but did not test it with patient data or lacked a report of performance metrics. This selection resulted in a total of 62 articles included in the qualitative synthesis. Figure 2 depicts this process in detail. Supplementary Table S1 presents the characteristics of each included study, namely the classifier and database used, the best obtained performance metrics, and the classification categories.

3.1. Sources of Lung Sound Recordings

As mentioned earlier, this review focuses on studies that used abnormal lung sound recordings from public databases, as opposed to studies that recorded their own audio samples. Creating such databases involves a series of design choices, including the data recording protocol, the recording and storage hardware, the time and place of collection, and the audio file labeling. With so many variable features, these biosignal repositories are prone to heterogeneity in every aspect, as well as inconsistencies, even within the same database. For this reason, the characteristics of the databases were retrieved for quality assessment, as stated in the Methods section.
As AI applications in healthcare continue to expand, the number of available data repositories continues to grow. In this review, 17 different data sources were identified. Forty-nine articles used recordings from a single source, whereas thirteen combined audio files from multiple sources. The most frequently used online databases were the International Conference on Biomedical and Health Informatics (ICBHI) 2017 database [27] (66%) and the Respiration Acoustics Laboratory Environment (R.A.L.E.) Lung Sounds database [33] (23%), whereas other databases, such as the King Abdullah University Hospital (KAUH) database or the Stethographics Lung Sound Samples, were used much less often. Some studies used online databases that are no longer available [34,35] or databases accessible only on CD [36,37,38,39], which prevented the quality assessment of their creation process. It is worth noting that the introduction of databases like the one by Rocha et al. [27] in 2017 led to a surge in the production of articles, as observed in Figure 3, which shows the number of studies per year of publication.

3.2. Features of Lung Sounds Databases

The ICBHI 2017 database contains recordings from 126 individuals, obtained by two groups of researchers using the AKG C417L Microphone (AKGC417L), 3M Littmann Classic II SE Stethoscope (LittC2SE), 3M Littmann 3200 Electronic Stethoscope (Litt3200), and Welch Allyn Meditron Master Elite Electronic Stethoscope (Meditron) at university hospitals in Portugal and Greece [27]. Respiratory experts annotated the lung sounds as “crackles, wheezes, a combination of them, or no adventitious respiratory sounds”, and the patients had conditions such as asthma, bronchiectasis, bronchiolitis, COPD, and upper and lower respiratory tract infections. As mentioned earlier, lung sounds from this database were used by most articles, as it is an open-access, readily available database that covers a wide range of diseases and abnormal sounds. In addition, the database authors suggest calculating a series of standard performance metrics, further facilitating the comparison and validation of new classification models.
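For illustration, a minimal sketch of reading one ICBHI 2017 annotation file is shown below; it assumes the per-cycle, four-column layout (cycle start, cycle end, crackle flag, wheeze flag) of the released database, which should be verified against the downloaded copy.

```python
# Sketch of reading an ICBHI 2017 annotation file. In the released
# database, each .wav is paired with a .txt listing one respiratory
# cycle per line: start time (s), end time (s), crackle flag (0/1),
# wheeze flag (0/1). Verify the column order against your copy.
def read_icbhi_annotation(txt_path):
    cycles = []
    with open(txt_path) as f:
        for line in f:
            start, end, crackle, wheeze = line.split()
            cycles.append({
                "start": float(start),
                "end": float(end),
                "crackle": bool(int(crackle)),
                "wheeze": bool(int(wheeze)),
            })
    return cycles
```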
The other frequently used source was the R.A.L.E. Lung Sounds database [33]. Researchers from Canada used the 3M Littmann 3200 Electronic Stethoscope (Litt3200) and the Welch Allyn Meditron Master Elite Electronic Stethoscope (Meditron) to capture over 50 recordings of lung sounds, including wheezes, rhonchi, crackles, squeaks, squawks, and pleural friction rubs, annotated by respiratory experts. This database is commercially available; a license must be acquired before access. Although this resource has been available for over 20 years, a significantly smaller number of the included studies opted to use it. The license includes access to clinical cases and quizzes related to lung sounds.
Notably, one-quarter of the reported databases are accessible only via physical acquisition of a CD-ROM [40,41,42,43,44], which impairs the quality assessment and the description of characteristics in this review. Finally, seven of the mentioned databases were not accessible when this review was performed, in all cases because of outdated internet sources. Their characteristics could therefore only be derived from the descriptions in the included articles; in studies where combined databases were described as a whole, no distinction between sources was possible, preventing their separate assessment. Further features of all the databases are described in Table 2.

3.3. Types of Sounds Analyzed

All eligible articles in this review targeted pulmonary sounds, but their algorithms classified sounds differently. Thirty-eight studies (61%) created algorithms that classified sounds into normal or adventitious lung sounds, the most common being crackles and wheezes, although some algorithms also identified rhonchi or stridor. Twenty-one studies (34%) classified recordings into different diseases, with chronic obstructive pulmonary disease (COPD), asthma, pneumonia, and bronchiectasis being the most common. Finally, three studies (5%) created separate algorithms to distinguish adventitious lung sounds and lung pathologies.

3.4. Classification Models

Table 3 lists the most used classifiers in this review, a general description of each, and the included references corresponding to each model. As explained earlier, these techniques are the final step in the process: they classify the abnormal sounds into different categories based on the similarities and differences of their features.
Among the included manuscripts, the most used classifiers were artificial neural networks (ANN) and their subtypes, along with support vector machines (SVM). These techniques are examples of supervised learning algorithms, which must be trained with labeled data before classifying unseen data points [52]. Both models can generalize appropriately to unseen data points by minimizing the risk of overfitting, the phenomenon in which a model learns in a way that applies only to the training sample and generalizes poorly to unseen data [53]. Notably, many variations of ANN were tested in the included studies, ranging from the basic multilayer perceptron (MLP), composed of a series of fully connected layers [54], to the more complex recurrent neural networks (RNN) and convolutional neural networks (CNN). Ensemble methods such as Random Forests and boosting algorithms, which combine multiple learning algorithms to improve estimates and classification performance [55], were occasionally used in the manuscripts. A minimal sketch comparing several of these classifiers follows Table 3.
Table 3. The most used machine learning classification techniques.
Name | Description | Refs.
ANN (subtypes: CNN, RNN, DNN, DBN, MLP) | Inspired by networks of neurons, ANN models contain multiple layers of computing nodes that operate as nonlinear summing devices. These nodes communicate with each other through connection lines; the weight of each line is adjusted as the model is trained [56]. | [18,35,36,38,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91]
SVM | This maximal margin classifier aims to find the hyperplane in an N-dimensional space that distinctly classifies the data points [92]. | [14,37,59,63,65,66,78,87,93,94,95,96,97,98,99]
k-NN | This classifier assigns an unlabeled data point to the class that contains the most similar labeled data points [100]. | [14,39,59,63,65,98,99]
DT | This technique classifies data by posing questions regarding the item’s features. Each question is represented in a node, and every node directs to a series of child nodes, one for each possible answer, forming a hierarchical tree [101]. | [59,87,98,102,103,104]
DA | This supervised technique transforms the features of a data point into a lower-dimensional space, thereby maximizing the ratio of the between-class variance to the within-class variance, which results in maximized class separability [105]. | [87,106,107]
RF | Random Forest is a classifier that builds multiple decision trees using random samples of the data points for each tree and random samples of the predictors; the resulting forest provides fitted values more accurate than those of a single tree [108]. | [78,109]
GMM | Mixture models are derived from the idea that any distribution can be expressed as a mixture of distributions of known parameterization (such as Gaussians). An optimization technique (such as expectation maximization) can then be used to estimate the parameters of each component distribution [110]. | [34,35,111]
HMM | The hidden Markov model creates a sequence of GMM models to explain the input data. Its main difference from GMM is that it takes into account the temporal progression of the data, whereas GMM treats each sound as a single entity [112]. | [111,113,114,115]
GB | The main idea behind boosting techniques is to add a series of models into an ensemble sequentially. At each iteration, a new model is trained on the error of the whole ensemble [116]. | [99,117]
LR | Logistic regression describes and tests hypotheses about relationships between a categorical outcome variable and one or more categorical or continuous predictor variables [118]. | [63,119]
NB | This supervised learning algorithm is based on the Bayes theorem and works on probability distributions. The features present in the dataset are used to determine the outcome under the assumption that they are independent of one another [120]. | [39]
Abbreviations: ANN: Artificial Neural Network; CNN: Convolutional Neural Network; RNN: Recurrent Neural Network; DNN: Deep Neural Network; DBN: Deep Belief Network; MLP: Multilayer Perceptron; SVM: Support Vector Machine; k-NN: k-Nearest Neighbors; DT: Decision Tree; DA: Discriminant Analysis; RF: Random Forest; GMM: Gaussian Mixture Model; HMM: Hidden Markov Model; GB: Gradient Boosting; LR: Logistic Regression; NB: Naive Bayes.
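As referenced above, the following minimal sketch compares several of the classifiers in Table 3 on a single feature matrix using identical cross-validation folds; the scikit-learn default hyperparameters are illustrative, and synthetic data stands in for real extracted features.

```python
# Sketch comparing several classifiers from Table 3 on one feature
# matrix with identical folds; synthetic features stand in for real
# ones, and hyperparameters are scikit-learn defaults.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=26, random_state=0)
models = {
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=1000),
    "Random Forest": RandomForestClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```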

3.5. Performance Metrics

Evaluating a model’s ability to classify lung sounds into the appropriate category yields a series of metrics. It is important to remember that the performance of a model depends not only on the ML/DL classifier but also on all the steps that precede it (audio recording, preprocessing, feature selection, and model training). These metrics are helpful when comparing different models that use the same data sources but, understandably, are not a reliable way to compare models across different databases. Some databases, like the ICBHI 2017 Challenge [27], suggest that researchers use specific performance metrics to evaluate their models; nonetheless, for this review, the evaluated performance metrics were accuracy and/or sensitivity and specificity. The accuracy for classification into abnormal sound categories ranged between 49.43% [102] and 100.00% [18]. The sensitivity ranged between 17.80% [90] and 100.00% [18,65], and the specificity between 59.69% [113] and 100.00% [38,64]. For models that classified sounds into disease classes, the lowest and highest accuracies were 69.40% [99] and 99.62% [69]; the sensitivity ranged between 28.00% [77] and 100.00% [63], whereas the specificity ranged between 81.00% [77] and 100.00% [88]. Remarkably, the reported metrics were highly heterogeneous between studies, limiting direct comparisons.
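For a binary normal/abnormal task, the three reported metrics follow directly from the 2 × 2 confusion matrix, as in this minimal sketch (the label convention, 1 = abnormal, is an assumption).

```python
# Metrics sketch: accuracy, sensitivity, and specificity for a binary
# normal (0) / abnormal (1) task, derived from the confusion matrix.
from sklearn.metrics import confusion_matrix

def binary_metrics(y_true, y_pred):
    # Rows are true classes, columns predictions; label 1 = abnormal.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # abnormal recordings correctly flagged
    specificity = tn / (tn + fp)  # normal recordings correctly identified
    return accuracy, sensitivity, specificity

print(binary_metrics([0, 0, 1, 1, 1], [0, 1, 1, 1, 0]))
# -> (0.6, 0.666..., 0.5)
```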

3.6. Quality Assessment

Given the lack of a validated tool for the quality assessment of diagnostic studies that use artificial intelligence, we adapted the QUADAS-2 tool to evaluate the risk of bias and applicability concerns. Using this tool, all the studies were classified as having an overall high ROB, with most concerns over patient selection and the reference standards. The high ROB in these domains relates directly to the use of public databases to obtain audio files: these sources often do not follow a specific sound recording protocol, use multiple devices, and rely on an individual’s interpretation to assign labels to each recording. In addition, the characteristics of each database are rarely available, further hampering the quality assessment process. None of the included studies had applicability concerns in the index test domain, while almost all the studies had serious or unclear concerns in the patient selection and reference standard domains. These concerns arose from the poor description of the patient population in the included papers and/or data sources, which creates a risk of including pediatric patients, for example. Also, using expert annotation as a reference standard limits the reliability of the labels for each study, raising concerns in this domain. Tables S2 and S3 in the Supplementary Files contain the individual assessment results for the risk of bias and applicability concerns, respectively. Figure 4 summarizes the quality assessment findings.

4. Discussion

Our systematic review provides a comprehensive update on the use of contemporary ML and DL models for automated lung sound classification. To the best of our knowledge, this work offers a much-needed update highlighting the advances in automatic lung sound classification during the last six years, focusing on the introduction of large public databases that have encouraged further research in the field. The emergence of large public data sources in recent years has allowed an increasing number of studies to draw on shared lung sound audio samples, ideally facilitating comparisons between models. Nonetheless, a detailed description of the databases and studies is necessary to identify the emerging issues in the field and the progress made so far. Supplementary Table S1 highlights the models identified in our systematic review with the best accuracy, sensitivity, and specificity performance metrics.

4.1. Clinical and Scientific Relevance

Machine learning (ML) and deep learning (DL) techniques are of increasing importance and great functionality in the identification and classification of normal and abnormal lung sounds [121]. Historically, the bedside clinician has been the key decider in identifying and classifying various normal and abnormal lung sounds, such as vesicular lung sounds, crackles, and wheezes, and this information carries varying degrees of diagnostic certainty depending on the clinician’s experience level and skill set. The inability to identify and accurately classify lung sounds could significantly delay diagnosis and downstream management [122]. Güler et al. described initial work utilizing a neural network-genetic algorithm approach to advance the field of lung sound classification [123]. They employed a multilayer perceptron neural network with a backpropagation training algorithm to predict normal or abnormal lung sounds (such as crackles or wheezes), ultimately yielding a model with promising performance, with correct classification rates of up to 93% for all lung sounds. Early studies like this one served as the groundwork for later authors who sought to improve the methodology and capabilities of their models.
Traditional methods of lung sound analysis depend heavily on the expertise of the bedside clinician and carry significant subjectivity. Their results are prone to interobserver variability, and even the same observer could classify the same lung sounds differently. ML and DL algorithms could minimize that variability and provide objectivity, offering several advantages. In addition, ML and DL methods can extract relevant features from lung sound recordings, capturing characteristics not picked up by pulmonary auscultation [124,125], such as the frequency content, temporal patterns, and spectral properties, to name a few. These additional characteristics could further enrich a training dataset’s diversity and variability, enabling accurate classification and identification in future studies.
With the technical advances in computing, machine learning and deep learning models such as support vector machines (SVM), Random Forests, and neural networks have been utilized at an increasing pace to label and classify lung sound data [126]. The increasing fidelity and improving performance of the resulting models could provide accurate diagnostic and predictive enrichment for specific disease states, such as pneumonia, pleural effusions, consolidations, and airway diseases (rhonchi and wheezing), among others.
Deep learning models such as neural networks (NNs) could provide the benefit of real-time monitoring of lung sounds. If developed and validated clinically, these models could be used for real-time lung sound monitoring in acute care settings (such as hospitals) and in remote monitoring environments such as nursing homes, rehabilitation facilities, or even the home [119,127]. Real-time analysis could allow for the early detection of disease states, enabling timely intervention and an overall improvement in healthcare delivery. Anticipated challenges include difficulty in noise reduction, which degrades the signal-to-noise ratio and dilutes the diagnostic information present in the audio signals. With the advent of precision and personalized medicine, machine learning and deep learning models can be trained on high-quality datasets with high signal-to-noise ratios, allowing the design of personalized models that consider individual variations in lung sounds and account for age, sex, body habitus, disease progression, ethnicity, and other factors contributing to patient-to-patient variability [128,129,130].

4.2. Opportunities and Barriers

Utilizing machine learning and deep learning techniques in this realm has several strengths and advantages. ML and DL algorithms enable the automated analysis of lung sounds, relying less on subjective human interpretation; this automation improves efficiency and reduces interobserver variability. ML and DL models also excel at recognizing complex patterns in data that are either unknown or difficult for humans to recognize, a concept that also holds true in lung sound identification [131,132]. As highlighted above, one of the biggest advantages will be the real-time monitoring of patients’ lung sounds, both remotely in a hospital setting and in the community (at home), facilitating the early detection of physiological abnormality and providing an actionable point for timely intervention. Adaptability and continued learning from new data will allow for continuous improvement in performance and fidelity over time. Despite these advantages, ML and DL models have inherent weaknesses. The availability of high-quality, labeled lung sound datasets can be a challenge, as highlighted by many manuscripts included in our systematic review. Heterogeneity in the database creation process inevitably leads to a scenario where comparisons between models are not possible. Stakeholder engagement for creating well-annotated datasets with representative patient populations can be time-consuming and expensive. Databases lacking diversity could limit generalizability and potentially increase healthcare disparities in diagnostics and healthcare delivery. Physiologically, lung sounds can vary significantly with patient factors such as body habitus, body position, patient movement, disease timeline, and recording conditions. This variability in lung sound recording presents hurdles to consistent and accurate classification if not accurately annotated.

4.3. Strengths and Limitations

The strengths of this review include the extensive literature search, as well as the individual evaluations and detailed descriptions of the data sources. Furthermore, we developed a new approach to the quality assessment of the included articles, given the lack of validated assessment tools for diagnostic accuracy studies that use artificial intelligence. Our study was limited by the impossibility of performing a meta-analysis, given the heterogeneity in performance reporting and data sources. Similarly, we could not access a large portion of the older databases, preventing us from evaluating and describing their characteristics. Notably, our review focused on English-language studies that used public databases as their source of audio samples, excluding those published in other languages and those that opted for a different approach, such as collecting their own sounds. Although omitted from our work, such studies may provide valuable contributions to the development of the field.

4.4. Future Work

As noted, while machine learning and deep learning techniques have so far offered valuable strengths in the accurate identification and classification of lung sounds, improved efficiency, and the possibility of real-time remote monitoring, they also face certain limitations. To harness the full potential of these techniques in healthcare, the challenges surrounding data availability, data security, accurate labeling and interpretation, and domain expertise must be overcome. As evidenced by the results of this review, public databases are an essential component of progress in automatic lung sound classification, but researchers interested in developing their own database should aim for a standardized approach to the recording, storage, and sharing processes, which will ultimately lead to more reliable comparisons between models. Utilizing ML and DL techniques for lung sound analysis could raise ethical concerns regarding patient privacy, data security, and other regulatory oversight needs [133]; these concerns should be clearly addressed when developing public databases.

5. Conclusions

In conclusion, we see a rising trend of ML and DL techniques demonstrating promise in the identification and classification of various lung sound characteristics with increasing accuracy. Automating the analysis process and enriching the currently available public databases could offer a precious source of objective and accurate diagnostic utility. With further advancements in computational prowess, these techniques have the potential to provide better-personalized precision medicine and accurate assessments of respiratory conditions, aiding in diagnosis, monitoring, and treatment.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/bioengineering10101155/s1, Table S1: List of the included studies’ main characteristics. Table S2: Individual risk of bias assessments. Table S3: Individual applicability concerns assessments. Database search strategy.

Author Contributions

Conceptualization, V.H., D.D. and B.W.P.; methodology, H.-Y.W., V.H. and S.H.; search strategy development and resources, D.J.G.; data extraction and curation, J.P.G.-M., H.-Y.W., S.H., A.T., Y.P., K.L., D.J.G., I.N.A., I.K. and S.Q.; writing—original draft preparation, J.P.G.-M., H.-Y.W., S.H., A.T., Y.P., K.L., I.N.A., I.K. and S.Q.; writing—review and editing, D.D., V.H., B.W.P. and A.L.; and supervision, V.H. and A.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Labaki, W.W.; Han, M.K. Chronic respiratory diseases: A global view. Lancet Respir. Med. 2020, 8, 531–533. [Google Scholar] [CrossRef] [PubMed]
  2. Wipf, J.E.; Lipsky, B.A.; Hirschmann, J.V.; Boyko, E.J.; Takasugi, J.; Peugeot, R.L.; Davis, C.L. Diagnosing pneumonia by physical examination: Relevant or relic? Arch. Intern. Med. 1999, 159, 1082–1087. [Google Scholar] [CrossRef]
  3. Brooks, D.; Thomas, J. Interrater reliability of auscultation of breath sounds among physical therapists. Phys. Ther. 1995, 75, 1082–1088. [Google Scholar] [CrossRef] [PubMed]
  4. Cardinale, L.; Volpicelli, G.; Lamorte, A.; Martino, J.; Andrea, V. Revisiting signs, strengths and weaknesses of Standard Chest Radiography in patients of Acute Dyspnea in the Emergency Department. J. Thorac. Dis. 2012, 4, 398–407. [Google Scholar] [CrossRef] [PubMed]
  5. Hopkins, R.L. Differential auscultation of the acutely ill patient. Ann. Emerg. Med. 1985, 14, 589–590. [Google Scholar] [CrossRef]
  6. Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Cui, C.; Corrado, G.; Thrun, S.; Dean, J. A guide to deep learning in healthcare. Nat. Med. 2019, 25, 24–29. [Google Scholar] [CrossRef]
  7. Myszczynska, M.A.; Ojamies, P.N.; Lacoste, A.M.; Neil, D.; Saffari, A.; Mead, R.; Hautbergue, G.M.; Holbrook, J.D.; Ferraiuolo, L. Applications of machine learning to diagnosis and treatment of neurodegenerative diseases. Nat. Rev. Neurol. 2020, 16, 440–456. [Google Scholar] [CrossRef]
  8. Hayashi, Y. The right direction needed to develop white-box deep learning in radiology, pathology, and ophthalmology: A short review. Front. Robot. AI 2019, 6, 24. [Google Scholar] [CrossRef]
  9. Kim, M.; Yun, J.; Cho, Y.; Shin, K.; Jang, R.; Bae, H.-j.; Kim, N. Deep learning in medical imaging. Neurospine 2019, 16, 657. [Google Scholar] [CrossRef]
  10. Chen, W.; Sun, Q.; Chen, X.; Xie, G.; Wu, H.; Xu, C. Deep learning methods for heart sounds classification: A systematic review. Entropy 2021, 23, 667. [Google Scholar] [CrossRef]
  11. Palaniappan, R.; Sundaraj, K.; Ahamed, N.U. Machine learning in lung sound analysis: A systematic review. Biocybern. Biomed. Eng. 2013, 33, 129–135. [Google Scholar] [CrossRef]
  12. Reichert, S.; Gass, R.; Brandt, C.; Andrès, E. Analysis of respiratory sounds: State of the art. Clin. Med. Circ. Respirat. Pulm. Med. 2008, 2, 45–58. [Google Scholar] [CrossRef] [PubMed]
  13. Kandaswamy, A.; Kumar, C.S.; Ramanathan, R.P.; Jayaraman, S.; Malmurugan, N. Neural classification of lung sounds using wavelet coefficients. Comput. Biol. Med. 2004, 34, 523–537. [Google Scholar] [CrossRef] [PubMed]
  14. Palaniappan, R.; Sundaraj, K.; Sundaraj, S. A comparative study of the SVM and K-nn machine learning algorithms for the diagnosis of respiratory pathologies using pulmonary acoustic signals. BMC Bioinform. 2014, 15, 223. [Google Scholar] [CrossRef]
  15. Richeldi, L.; Cottin, V.; Würtemberger, G.; Kreuter, M.; Calvello, M.; Sgalla, G. Digital Lung Auscultation: Will Early Diagnosis of Fibrotic Interstitial Lung Disease Become a Reality? Am. J. Respir. Crit. Care Med. 2019, 200, 261–263. [Google Scholar] [CrossRef]
  16. Kraman, S.S.; Wodicka, G.R.; Pressler, G.A.; Pasterkamp, H. Comparison of lung sound transducers using a bioacoustic transducer testing system. J. Appl. Physiol. 2006, 101, 469–476. [Google Scholar] [CrossRef]
  17. Gupta, P.; Wen, H.; Di Francesco, L.; Ayazi, F. Detection of pathological mechano-acoustic signatures using precision accelerometer contact microphones in patients with pulmonary disorders. Sci. Rep. 2021, 11, 13427. [Google Scholar] [CrossRef]
  18. Zulfiqar, R.; Majeed, F.; Irfan, R.; Rauf, H.T.; Benkhelifa, E.; Belkacem, A.N. Abnormal respiratory sounds classification using deep CNN through artificial noise addition. Front. Med. 2021, 8, 714811. [Google Scholar] [CrossRef]
  19. Salman, A.H.; Ahmadi, N.; Mengko, R.; Langi, A.Z.; Mengko, T.L. Performance comparison of denoising methods for heart sound signal. In Proceedings of the 2015 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Bali, Indonesia, 9–12 November 2015; pp. 435–440. [Google Scholar]
  20. Li, S.; Li, F.; Tang, S.; Xiong, W. A review of computer-aided heart sound detection techniques. BioMed Res. Int. 2020, 2020, 5846191. [Google Scholar] [CrossRef]
  21. Barclay, V.; Bonner, R.; Hamilton, I. Application of wavelet transforms to experimental spectra: Smoothing, denoising, and data set compression. Anal. Chem. 1997, 69, 78–90. [Google Scholar] [CrossRef]
  22. Mondal, A.; Banerjee, P.; Tang, H. A novel feature extraction technique for pulmonary sound analysis based on EMD. Comput. Methods Programs Biomed. 2018, 159, 199–209. [Google Scholar] [CrossRef] [PubMed]
  23. Krishnan, S.; Athavale, Y. Trends in biomedical signal feature extraction. Biomed. Signal Process. Control 2018, 43, 41–63. [Google Scholar] [CrossRef]
  24. Maleki, F.; Muthukrishnan, N.; Ovens, K.; Reinhold, C.; Forghani, R. Machine learning algorithm validation: From essentials to advanced applications and implications for regulatory certification and deployment. Neuroimaging Clin. 2020, 30, 433–445. [Google Scholar] [CrossRef]
  25. Ramezan, C.A.; Warner, T.A.; Maxwell, A.E. Evaluation of sampling and cross-validation tuning strategies for regional-scale machine learning classification. Remote Sens. 2019, 11, 185. [Google Scholar] [CrossRef]
  26. Barbosa, L.C.; Moreira, A.H.; Carvalho, V.; Vilaça, J.L.; Morais, P. Biosignal Databases for Training of Artificial Intelligent Systems. In Proceedings of the 9th International Conference on Bioinformatics Research and Applications, Berlin, Germany, 18–20 September 2022; pp. 74–81. [Google Scholar]
  27. Rocha, B.M.; Filos, D.; Mendes, L.; Vogiatzis, I.; Perantoni, E.; Kaimakamis, E.; Maglaveras, N. A respiratory sound database for the development of automated classification. In Precision Medicine Powered by Phealth and Connected Health; Springer: Berlin/Heidelberg, Germany, 2018; pp. 33–37. [Google Scholar] [CrossRef]
  28. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Group, P. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. J. Clin. Epidemiol. 2009, 62, 1006–1012. [Google Scholar] [CrossRef] [PubMed]
  29. Veritas Health Innovation. Covidence Systematic Review Software. Available online: www.covidence.org (accessed on 1 August 2023).
  30. Whiting, P.; Rutjes, A.W.; Reitsma, J.B.; Bossuyt, P.M.; Kleijnen, J. The development of QUADAS: A tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med. Res. Methodol. 2003, 3, 25. [Google Scholar] [CrossRef]
  31. Jayakumar, S.; Sounderajah, V.; Normahani, P.; Harling, L.; Markar, S.R.; Ashrafian, H.; Darzi, A. Quality assessment standards in artificial intelligence diagnostic accuracy systematic reviews: A meta-research study. NPJ Digit. Med. 2022, 5, 11. [Google Scholar] [CrossRef]
  32. Whiting, P.F.; Rutjes, A.W.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.; Sterne, J.A.; Bossuyt, P.M.; QUADAS-2 Group. QUADAS-2: A revised tool for the quality assessment of diagnostic accuracy studies. Ann. Intern. Med. 2011, 155, 529–536. [Google Scholar] [CrossRef]
  33. R.A.L.E. Lung Sounds 3.2. Available online: http://www.rale.ca/LungSounds.htm. (accessed on 1 August 2023).
  34. Lu, X.; Bahoura, M. An integrated automated system for crackles extraction and classification. Biomed. Signal Process. Control 2008, 3, 244–254. [Google Scholar] [CrossRef]
  35. Bahoura, M. Pattern recognition methods applied to respiratory sounds classification into normal and wheeze classes. Comput. Biol. Med. 2009, 39, 824–843. [Google Scholar] [CrossRef]
  36. Tocchetto, M.A.; Bazanella, A.S.; Guimaraes, L.; Fragoso, J.; Parraga, A. An embedded classifier of lung sounds based on the wavelet packet transform and ANN. IFAC Proc. Vol. 2014, 47, 2975–2980. [Google Scholar] [CrossRef]
  37. Datta, S.; Choudhury, A.D.; Deshpande, P.; Bhattacharya, S.; Pal, A. Automated lung sound analysis for detecting pulmonary abnormalities. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Embc), Jeju Island, Republic of Korea, 11–15 July 2017; pp. 4594–4598. [Google Scholar]
  38. Oweis, R.; Abdulhay, E.; Khayal, A.; Awad, A. An alternative respiratory sounds classification system utilizing artificial neural networks. Biomed. J. 2015, 38, 153–161. [Google Scholar] [CrossRef]
  39. Naves, R.; Barbosa, B.H.; Ferreira, D.D. Classification of lung sounds using higher-order statistics: A divide-and-conquer approach. Comput. Methods Programs Biomed. 2016, 129, 12–20. [Google Scholar] [CrossRef]
  40. Racineux, J. L’auscultation à L’écoute du Poumon ASTRA; CD-Phonopneumogrammes: Paris, France, 1994. [Google Scholar]
  41. Coviello, J.S. Auscultation Skills: Breath & Heart Sounds; Lippincott Williams & Wilkins: Philadelphia, PA, USA, 2013. [Google Scholar]
  42. Wilkins, R.; Hodgkin, J.; Lopez, B. Fundamentals of Lung and Heart Sounds, 3/e (Book and CD-ROM); CV Mosby: Maryland Heights, MO, USA, 2004. [Google Scholar]
  43. Wrigley, D. Heart and Lung Sounds Reference Library; PESI HealthCare: Eau Claire, WI, USA, 2011. [Google Scholar]
  44. Lehrer, S. Understanding Lung Sounds; Saunders: Philadelphia, PA, USA, 2018. [Google Scholar]
  45. Fraiwan, M.; Fraiwan, L.; Khassawneh, B.; Ibnian, A. A dataset of lung sounds recorded from the chest wall using an electronic stethoscope. Data Brief 2021, 35, 106913. [Google Scholar] [CrossRef] [PubMed]
  46. Altan, G.; Kutlu, Y. RespiratoryDatabase@ TR (COPD Severity Analysis). 2020. Available online: https://data.mendeley.com/datasets/p9z4h98s6j/1 (accessed on 1 August 2023).
  47. Thinklabs Medical LLC. Thinklabs One Lung Sounds Library. Available online: https://www.thinklabs.com/sound-library (accessed on 1 August 2023).
  48. East Tennessee State University. Pulmonary Breath Sounds. Available online: https://faculty.etsu.edu/arnall/www/public_html/heartlung/breathsounds/contents.html (accessed on 1 August 2023).
  49. Bahoura, M. Analyse des Signaux Acoustiques Respiratoires: Contribution à la Detection Automatique des Sibilants par Paquets D’ondelettes. Ph.D. Thesis, Université de Rouen, Mont-Saint-Aignan, France, 1999. [Google Scholar]
  50. Hsiao, C.-H.; Lin, T.-W.; Lin, C.-W.; Hsu, F.-S.; Lin, F.Y.-S.; Chen, C.-W.; Chung, C.-M. Breathing sound segmentation and detection using transfer learning techniques on an attention-based encoder-decoder architecture. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 754–759. [Google Scholar]
  51. Grinchenko, A.; Makarenkov, V.; Makarenkova, A. Kompjuternaya auskultaciya-novij metod objektivizacii harakterictik zvykov dihaniya [Computer auscultation is a new method of objectifying the lung sounds characteristics]. Klin. Inform. I Telemeditsina 2010, 6, 31–36. [Google Scholar]
  52. Cunningham, P.; Cord, M.; Delany, S.J. Supervised learning. In Machine Learning Techniques for Multimedia: Case Studies on Organization and Retrieval; Springer: Berlin/Heidelberg, Germany, 2008; pp. 21–49. [Google Scholar]
  53. Mutasa, S.; Sun, S.; Ha, R. Understanding artificial intelligence based radiology studies: What is overfitting? Clin. Imaging 2020, 65, 96–99. [Google Scholar] [CrossRef]
  54. Pal, S.K.; Mitra, S. Multilayer perceptron, fuzzy sets, classifiaction. IEEE Trans. Neural Netw. 1992, 3, 683–697. [Google Scholar] [CrossRef]
  55. Bannick, M.S.; McGaughey, M.; Flaxman, A.D. Ensemble modelling in descriptive epidemiology: Burden of disease estimation. Int. J. Epidemiol. 2020, 49, 2065–2073. [Google Scholar] [CrossRef]
  56. Dayhoff, J.E.; DeLeo, J.M. Artificial neural networks: Opening the black box. Cancer Interdiscip. Int. J. Am. Cancer Soc. 2001, 91, 1615–1635. [Google Scholar] [CrossRef]
  57. Acharya, J.; Basu, A. Deep Neural Network for Respiratory Sound Classification in Wearable Devices Enabled by Patient Specific Model Tuning. IEEE Trans. Biomed. Circuits Syst. 2020, 14, 535–544. [Google Scholar] [CrossRef]
  58. Alqudah, A.M.; Qazan, S.; Obeidat, Y.M. Deep learning models for detecting respiratory pathologies from raw lung auscultation sounds. Soft Comput. 2022, 26, 13405–13429. [Google Scholar] [CrossRef] [PubMed]
  59. Altan, G.; Kutlu, Y.; Allahverdi, N. Deep Learning on Computerized Analysis of Chronic Obstructive Pulmonary Disease. IEEE J. Biomed. Health. Inform. 2019. [Google Scholar] [CrossRef] [PubMed]
  60. Bahoura, M. FPGA implementation of an automatic wheezing detection system. Biomed. Signal Process. Control 2018, 46, 76–85. [Google Scholar] [CrossRef]
  61. Bardou, D.; Zhang, K.; Ahmad, S.M. Lung sounds classification using convolutional neural networks. Artif. Intell. Med. 2018, 88, 58–69. [Google Scholar] [CrossRef] [PubMed]
  62. Basu, V.; Rana, S. Respiratory diseases recognition through respiratory sound with the help of deep neural network. Respiratory diseases recognition through respiratory sound with the help of deep neural network. In Proceedings of the 2020 4th International Conference on Computational Intelligence and Networks (CINE), Kolkata, India, 27–29 February 2020; pp. 1–6. [Google Scholar]
  63. Brunese, L.; Mercaldo, F.; Reginelli, A.; Santone, A. A Neural Network-Based Method for Respiratory Sound Analysis and Lung Disease Detection. Appl. Sci. 2022, 12, 3877. [Google Scholar] [CrossRef]
  64. Chen, H.; Yuan, X.; Pei, Z.; Li, M.; Li, J. Triple-Classification of Respiratory Sounds Using Optimized S-Transform and Deep Residual Networks. IEEE Access 2019, 7, 32845–32852. [Google Scholar] [CrossRef]
  65. Chen, H.; Yuan, X.; Li, J.; Pei, Z.; Zheng, X. Automatic multi-level in-exhale segmentation and enhanced generalized S-transform for wheezing detection. Comput. Methods Programs Biomed. 2019, 178, 163–173. [Google Scholar] [CrossRef] [PubMed]
  66. Demir, F.; Sengur, A.; Bajaj, V. Convolutional neural networks based efficient approach for classification of lung diseases. Health Inf. Sci. Syst. 2020, 8, 4. [Google Scholar] [CrossRef]
  67. Demir, F.; Ismael, A.M.; Sengur, A. Classification of Lung Sounds With CNN Model Using Parallel Pooling Structure. IEEE Access 2020, 8, 105376–105383. [Google Scholar] [CrossRef]
  68. Perna, D. Convolutional neural networks learning from respiratory data. In Proceedings of the 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, 3–6 December 2018; pp. 2109–2113. [Google Scholar]
  69. Fraiwan, M.; Fraiwan, L.; Alkhodari, M.; Hassanin, O. Recognition of pulmonary diseases from lung sounds using convolutional neural networks and long short-term memory. J. Ambient. Intell. Humaniz. Comput. 2022, 13, 4759–4771. [Google Scholar] [CrossRef]
  70. Gairola, S.; Tom, F.; Kwatra, N.; Jain, M. RespireNet: A Deep Neural Network for Accurately Detecting Abnormal Lung Sounds in Limited Data Setting. Ann. Int. Conf. IEEE Eng. Med. Biol. Soc. 2021, 2021, 527–530. [Google Scholar] [CrossRef]
  71. Garcia-Ordas, M.T.; Benitez-Andrades, J.A.; Garcia-Rodriguez, I.; Benavides, C.; Alaiz-Moreton, H. Detecting Respiratory Pathologies Using Convolutional Neural Networks and Variational Autoencoders for Unbalancing Data. Sensors 2020, 20, 1214. [Google Scholar] [CrossRef] [PubMed]
  72. Hazra, R.; Majhi, S. Detecting respiratory diseases from recorded lung sounds by 2D CNN. In Proceedings of the 2020 5th International Conference on Computing, Communication and Security (ICCCS), Patna, India, 14–16 October 2020; pp. 1–6. [Google Scholar]
  73. Jung, S.Y.; Liao, C.H.; Wu, Y.S.; Yuan, S.M.; Sun, C.T. Efficiently Classifying Lung Sounds through Depthwise Separable CNN Models with Fused STFT and MFCC Features. Diagnostics 2021, 11, 732. [Google Scholar] [CrossRef]
  74. Kochetov, K.; Putin, E.; Balashov, M.; Filchenkov, A.; Shalyto, A. Noise Masking Recurrent Neural Network for Respiratory Sound Classification. In Artificial Neural Networks and Machine Learning ICANN 2018; Lecture Notes in Computer Science; Springer: New York, NY, USA, 2018; pp. 208–217. [Google Scholar]
  75. Li, J.; Wang, C.; Chen, J.; Zhang, H.; Dai, Y.; Wang, L.; Wang, L.; Nandi, A.K. Explainable CNN With Fuzzy Tree Regularization for Respiratory Sound Analysis. IEEE Trans. Fuzzy Syst. 2022, 30, 1516–1528. [Google Scholar] [CrossRef]
  76. Li, J.; Yuan, J.; Wang, H.; Liu, S.; Guo, Q.; Ma, Y.; Li, Y.; Zhao, L.; Wang, G. LungAttn: Advanced lung sound classification using attention mechanism with dual TQWT and triple STFT spectrogram. Physiol. Meas. 2021, 42, 105006. [Google Scholar] [CrossRef] [PubMed]
  77. Minami, K.; Lu, H.; Kim, H.; Mabu, S.; Hirano, Y.; Kido, S. Automatic classification of large-scale respiratory sound dataset based on convolutional neural network. In Proceedings of the 2019 19th International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 15–18 October 2019; pp. 804–807. [Google Scholar]
  78. Monaco, A.; Amoroso, N.; Bellantuono, L.; Pantaleo, E.; Tangaro, S.; Bellotti, R. Multi-Time-Scale Features for Accurate Respiratory Sound Classification. Appl. Sci. 2020, 10, 8606. [Google Scholar] [CrossRef]
  79. Mukherjee, H.; Sreerama, P.; Dhar, A.; Obaidullah, S.M.; Roy, K.; Mahmud, M.; Santosh, K.C. Automatic Lung Health Screening Using Respiratory Sounds. J. Med. Syst. 2021, 45, 19. [Google Scholar] [CrossRef]
  80. Ngo, D.; Pham, L.; Nguyen, A.; Phan, B.; Tran, K.; Nguyen, T. Deep Learning Framework Applied For Predicting Anomaly of Respiratory Sounds. In Proceedings of the 2021 International Symposium on Electrical and Electronics Engineering (ISEE), Ho Chi Minh, Vietnam, 15–16 April 2021; pp. 42–47. [Google Scholar]
  81. Nguyen, T.; Pernkopf, F. Lung sound classification using snapshot ensemble of convolutional neural networks. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 760–763. [Google Scholar]
  82. Paraschiv, E.-A.; Rotaru, C.-M. Machine learning approaches based on wearable devices for respiratory diseases diagnosis. In Proceedings of the 2020 International Conference on e-Health and Bioengineering (EHB), Iasi, Romania, 29–30 October 2020; pp. 1–4. [Google Scholar]
  83. Petmezas, G.; Cheimariotis, G.A.; Stefanopoulos, L.; Rocha, B.; Paiva, R.P.; Katsaggelos, A.K.; Maglaveras, N. Automated Lung Sound Classification Using a Hybrid CNN-LSTM Network and Focal Loss Function. Sensors 2022, 22, 1232. [Google Scholar] [CrossRef] [PubMed]
  84. Pham, L.; Phan, H.; Palaniappan, R.; Mertins, A.; McLoughlin, I. CNN-MoE Based Framework for Classification of Respiratory Anomalies and Lung Disease Detection. IEEE J. Biomed. Health Inf. 2021, 25, 2938–2947. [Google Scholar] [CrossRef]
  85. Pham, L.; Phan, H.; Schindler, A.; King, R.; Mertins, A.; McLoughlin, I. Inception-Based Network and Multi-Spectrogram Ensemble Applied To Predict Respiratory Anomalies and Lung Diseases. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2021, 2021, 253–256. [Google Scholar] [CrossRef]
  86. Pham Thi Viet, H.; Nguyen Thi Ngoc, H.; Tran Anh, V.; Hoang Quang, H. Classification of lung sounds using scalogram representation of sound segments and convolutional neural network. J. Med. Eng. Technol. 2022, 46, 270–279. [Google Scholar] [CrossRef] [PubMed]
  87. Rocha, B.M.; Pessoa, D.; Marques, A.; Carvalho, P.; Paiva, R.P. Automatic Classification of Adventitious Respiratory Sounds: A (Un)Solved Problem? Sensors 2021, 21, 57. [Google Scholar] [CrossRef] [PubMed]
  88. Shuvo, S.B.; Ali, S.N.; Swapnil, S.I.; Hasan, T.; Bhuiyan, M.I.H. A Lightweight CNN Model for Detecting Respiratory Diseases From Lung Auscultation Sounds Using EMD-CWT-Based Hybrid Scalogram. IEEE J. Biomed. Health Inf. 2021, 25, 2595–2603. [Google Scholar] [CrossRef]
  89. Tariq, Z.; Shah, S.K.; Lee, Y. Lung disease classification using deep convolutional neural network. In Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 18–21 November 2019; pp. 732–735. [Google Scholar]
  90. Yang, Z.; Liu, S.; Song, M.; Parada-Cabaleiro, E.; Schuller, B.W. Adventitious Respiratory Classification Using Attentive Residual Neural Networks. In Proceedings of the Interspeech 2020, Shanghai, China, 25–29 October 2020; pp. 2912–2916. [Google Scholar]
  91. Ma, Y.; Xu, X.; Yu, Q.; Zhang, Y.; Li, Y.; Zhao, J.; Wang, G. Lungbrn: A smart digital stethoscope for detecting respiratory disease using bi-resnet deep learning algorithm. In Proceedings of the 2019 IEEE Biomedical Circuits and Systems Conference (BioCAS), Nara, Japan, 17–19 October 2019; pp. 1–4. [Google Scholar]
  92. Stitson, M.; Weston, J.; Gammerman, A.; Vovk, V.; Vapnik, V. Theory of support vector machines. Univ. Lond. 1996, 117, 188–191. [Google Scholar]
  93. Boujelben, O.; Bahoura, M. Efficient FPGA-based architecture of an automatic wheeze detector using a combination of MFCC and SVM algorithms. J. Syst. Archit. 2018, 88, 54–64. [Google Scholar] [CrossRef]
  94. Sen, I.; Saraclar, M.; Kahya, Y. Computerized Diagnosis of Respiratory Disorders. Methods Inf. Med. 2014, 53, 291–295. [Google Scholar] [PubMed]
  95. Serbes, G.; Ulukaya, S.; Kahya, Y.P. An Automated Lung Sound Preprocessing and Classification System Based on Spectral Analysis Methods. In Precision Medicine Powered by pHealth and Connected Health; Springer: New York, NY, USA, 2018; pp. 45–49. [Google Scholar] [CrossRef]
  96. Stasiakiewicz, P.; Dobrowolski, A.P.; Targowski, T.; Gałązka-Świderek, N.; Sadura-Sieklucka, T.; Majka, K.; Skoczylas, A.; Lejkowski, W.; Olszewski, R. Automatic classification of normal and sick patients with crackles using wavelet packet decomposition and support vector machine. Biomed. Signal Process. Control 2021, 67, 102521. [Google Scholar] [CrossRef]
  97. Romero, E.; Lepore, N.; Sosa, G.D.; Cruz-Roa, A.; González, F.A. Automatic detection of wheezes by evaluation of multiple acoustic feature extraction methods and C-weighted SVM. In Proceedings of the 10th International Symposium on Medical Information Processing and Analysis, Cartagena, Colombia, 14–16 October 2014. [Google Scholar]
  98. Tasar, B.; Yaman, O.; Tuncer, T. Accurate respiratory sound classification model based on piccolo pattern. Appl. Acoust. 2022, 188, 108589. [Google Scholar] [CrossRef]
  99. Vidhya, B.; Nikhil Madhav, M.; Suresh Kumar, M.; Kalanandini, S. AI Based Diagnosis of Pneumonia. Wirel. Pers. Commun. 2022, 126, 3677–3692. [Google Scholar] [CrossRef]
  100. Zhang, Z. Introduction to machine learning: K-nearest neighbors. Ann. Transl. Med. 2016, 4, 218. [Google Scholar] [CrossRef]
  101. Kingsford, C.; Salzberg, S.L. What are decision trees? Nat. Biotechnol. 2008, 26, 1011–1013. [Google Scholar] [CrossRef] [PubMed]
  102. Chambres, G.; Hanna, P.; Desainte-Catherine, M. Automatic detection of patient with respiratory diseases using lung sound analysis. In Proceedings of the 2018 International Conference on Content-Based Multimedia Indexing (CBMI), La Rochelle, France, 4–6 September 2018; pp. 1–6. [Google Scholar]
  103. Kok, X.H.; Imtiaz, S.A.; Rodriguez-Villegas, E. A novel method for automatic identification of respiratory disease from acoustic recordings. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 2589–2592. [Google Scholar]
  104. Oletic, D.; Arsenali, B.; Bilas, V. Low-power wearable respiratory sound sensing. Sensors 2014, 14, 6535–6566. [Google Scholar] [CrossRef] [PubMed]
  105. Tharwat, A.; Gaber, T.; Ibrahim, A.; Hassanien, A.E. Linear discriminant analysis: A detailed tutorial. AI Commun. 2017, 30, 169–190. [Google Scholar] [CrossRef]
  106. Naqvi, S.Z.H.; Choudhry, M.A. An Automated System for Classification of Chronic Obstructive Pulmonary Disease and Pneumonia Patients Using Lung Sound Analysis. Sensors 2020, 20, 6512. [Google Scholar] [CrossRef] [PubMed]
  107. Porieva, H.; Ivanko, K.; Semkiv, C.; Vaityshyn, V. Investigation of lung sounds features for detection of bronchitis and COPD using machine learning methods. Radiotekhnika Radioaparatobuduvannia 2021, 84, 78–87. [Google Scholar] [CrossRef]
  108. Matsuki, K.; Kuperman, V.; Van Dyke, J.A. The Random Forests statistical technique: An examination of its value for the study of reading. Sci. Stud. Read. 2016, 20, 20–33. [Google Scholar] [CrossRef] [PubMed]
  109. Jaber, M.M.; Abd, S.K.; Shakeel, P.M.; Burhanuddin, M.A.; Mohammed, M.A.; Yussof, S. A telemedicine tool framework for lung sounds classification using ensemble classifier algorithms. Measurement 2020, 162, 107883. [Google Scholar] [CrossRef]
  110. Aristophanous, M.; Penney, B.C.; Martel, M.K.; Pelizzari, C.A. A Gaussian mixture model for definition of lung tumor volumes in positron emission tomography. Med. Phys. 2007, 34, 4223–4235. [Google Scholar] [CrossRef]
  111. Ntalampiras, S. Collaborative framework for automatic classification of respiratory sounds. IET Signal Process. 2020, 14, 223–228. [Google Scholar] [CrossRef]
  112. Brown, J.C.; Smaragdis, P. Hidden Markov and Gaussian mixture models for automatic call classification. J. Acoust. Soc. Am. 2009, 125, EL221–EL224. [Google Scholar] [CrossRef]
  113. Jakovljević, N.; Lončar-Turukalo, T. Hidden Markov Model Based Respiratory Sound Classification. In Precision Medicine Powered by pHealth and Connected Health; Springer: New York, NY, USA, 2018; pp. 39–43. [Google Scholar]
  114. Ntalampiras, S.; Potamitis, I. Automatic acoustic identification of respiratory diseases. Evol. Syst. 2020, 12, 69–77. [Google Scholar] [CrossRef]
  115. Oletic, D.; Bilas, V. Asthmatic Wheeze Detection From Compressively Sensed Respiratory Sound Spectra. IEEE J. Biomed. Health Inf. 2018, 22, 1406–1414. [Google Scholar] [CrossRef]
  116. Natekin, A.; Knoll, A. Gradient boosting machines, a tutorial. Front. Neurorobotics 2013, 7, 21. [Google Scholar] [CrossRef]
  117. Tripathy, R.K.; Dash, S.; Rath, A.; Panda, G.; Pachori, R.B. Automated Detection of Pulmonary Diseases From Lung Sound Signals Using Fixed-Boundary-Based Empirical Wavelet Transform. IEEE Sens. Lett. 2022, 6, 1–4. [Google Scholar] [CrossRef]
  118. Peng, C.-Y.J.; Lee, K.L.; Ingersoll, G.M. An introduction to logistic regression analysis and reporting. J. Educ. Res. 2002, 96, 3–14. [Google Scholar] [CrossRef]
  119. Pramono, R.X.A.; Bowyer, S.; Rodriguez-Villegas, E. Automatic adventitious respiratory sound analysis: A systematic review. PLoS ONE 2017, 12, e0177926. [Google Scholar] [CrossRef]
  120. Reddy, E.M.K.; Gurrala, A.; Hasitha, V.B.; Kumar, K.V.R. Introduction to Naive Bayes and a Review on Its Subtypes with Applications. In Bayesian Reasoning and Gaussian Processes for Machine Learning Applications; CRC Press: Boca Raton, FL, USA, 2022; pp. 1–14. [Google Scholar] [CrossRef]
  121. Koning, C.; Lock, A. A systematic review and utilization study of digital stethoscopes for cardiopulmonary assessments. J. Med. Res. Innov. 2021, 5, 4–14. [Google Scholar] [CrossRef]
  122. Arts, L.; Lim, E.H.T.; van de Ven, P.M.; Heunks, L.; Tuinman, P.R. The diagnostic accuracy of lung auscultation in adult patients with acute pulmonary pathologies: A meta-analysis. Sci. Rep. 2020, 10, 7347. [Google Scholar] [CrossRef] [PubMed]
  123. Güler, İ.; Polat, H.; Ergün, U. Combining neural network and genetic algorithm for prediction of lung sounds. J. Med. Syst. 2005, 29, 217–231. [Google Scholar] [CrossRef] [PubMed]
  124. Xia, T.; Han, J.; Mascolo, C. Exploring machine learning for audio-based respiratory condition screening: A concise review of databases, methods, and open issues. Exp. Biol. Med. 2022, 247, 2053–2061. [Google Scholar] [CrossRef] [PubMed]
  125. Heitmann, J.; Glangetas, A.; Doenz, J.; Dervaux, J.; Shama, D.M.; Garcia, D.H.; Benissa, M.R.; Cantais, A.; Perez, A.; Müller, D. DeepBreath—Automated detection of respiratory pathology from lung auscultation in 572 pediatric outpatients across 5 countries. NPJ Digit. Med. 2023, 6, 104. [Google Scholar] [CrossRef]
  126. Tran-Anh, D.; Vu, N.H.; Nguyen-Trong, K.; Pham, C. Multi-task learning neural networks for breath sound detection and classification in pervasive healthcare. Pervasive Mob. Comput. 2022, 86, 101685. [Google Scholar] [CrossRef] [PubMed]
  127. Zhai, Q.; Han, X.; Han, Y.; Yi, J.; Wang, S.; Liu, T. A contactless on-bed radar system for human respiration monitoring. IEEE Trans. Instrum. Meas. 2022, 71, 1–10. [Google Scholar] [CrossRef]
  128. Johnson, K.; Wei, W.; Weeraratne, D.; Frisse, M.; Misulis, K.; Rhee, K.; Zhao, J.; Snowdon, J. Precision medicine, AI, and the future of personalized health care. Clin. Transl. Sci. 2021, 14, 86–93. [Google Scholar] [CrossRef]
  129. Lal, A.; Pinevich, Y.; Gajic, O.; Herasevich, V.; Pickering, B. Artificial intelligence and computer simulation models in critical illness. World J. Crit. Care Med. 2020, 9, 13. [Google Scholar] [CrossRef]
  130. Lal, A.; Li, G.; Cubro, E.; Chalmers, S.; Li, H.; Herasevich, V.; Dong, Y.; Pickering, B.W.; Kilickaya, O.; Gajic, O. Development and verification of a digital twin patient model to predict specific treatment response during the first 24 hours of sepsis. Crit. Care Explor. 2020, 2, e0249. [Google Scholar] [CrossRef] [PubMed]
  131. Davenport, T.; Kalakota, R. The potential for artificial intelligence in healthcare. Future Healthc. J. 2019, 6, 94–98. [Google Scholar] [CrossRef]
  132. Richens, J.G.; Lee, C.M.; Johri, S. Improving the accuracy of medical diagnosis with causal machine learning. Nat. Commun. 2020, 11, 3923. [Google Scholar] [CrossRef]
  133. Lal, A.; Dang, J.; Nabzdyk, C.; Gajic, O.; Herasevich, V. Regulatory oversight and ethical concerns surrounding software as medical device (SaMD) and digital twin technology in healthcare. Ann. Transl. Med. 2022, 10, 950. [Google Scholar] [CrossRef]
Figure 1. Process of automatic lung sound classification.
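To make the pipeline in Figure 1 concrete, the following minimal sketch (Python, assuming the librosa and scikit-learn libraries) chains its typical stages: feature extraction from audio (here, time-averaged MFCC vectors) followed by a supervised classifier (here, an SVM, one of the classifier families most often encountered among the included studies). The synthetic "normal" and "wheeze-like" signals, the sampling rate, and all hyperparameters are illustrative assumptions, not the method of any specific included study.

```python
# Minimal sketch of the Figure 1 pipeline: signal -> features -> classifier.
# All signals, rates, and hyperparameters below are illustrative assumptions.
import numpy as np
import librosa
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

SR = 4000  # assumed sampling rate in Hz

def extract_features(audio: np.ndarray, sr: int = SR, n_mfcc: int = 13) -> np.ndarray:
    """Summarize one recording as its time-averaged MFCC vector."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Synthetic stand-ins for real recordings: white noise for a "normal" class,
# and noise plus a 400 Hz tone as a crude continuous ("wheeze-like") sound.
rng = np.random.default_rng(seed=0)
t = np.arange(5 * SR) / SR  # 5-second recordings
normal = [rng.normal(size=t.size) for _ in range(20)]
wheezy = [rng.normal(size=t.size) + 2.0 * np.sin(2 * np.pi * 400 * t) for _ in range(20)]

X = np.stack([extract_features(a.astype(np.float32)) for a in normal + wheezy])
y = np.array([0] * len(normal) + [1] * len(wheezy))  # 0 = normal, 1 = wheeze-like

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # SVM on the extracted features
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Pipelines in the included studies differ mainly in the feature representation (spectrograms, scalograms, wavelet features) and the classifier (CNNs, LSTMs, ensembles), but they follow the same extract-then-classify structure.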
Figure 2. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram.
Figure 3. Number of included publications by year (absolute values).
Figure 4. Quality assessment summary plots for the risk of bias (top) and applicability concerns (bottom). Presented as the number of articles with high, unclear, or low risk/concerns across each domain of the modified QUADAS-2 tool. (Green: low risk of bias; red: high risk of bias; yellow: unclear risk of bias).
Table 1. Population, Intervention, Comparator, Outcome, and Study Design (PICOS) eligibility criteria for the systematic review.
Population
  Inclusion criteria:
  • Total or majority of adult (age ≥ 17) cohort.
  • Underlying pulmonary disease causing abnormal respiratory sounds.
  • Audio files obtained from a publicly available database.
  • Manuscripts published in English.
  Exclusion criteria:
  • Studies focusing on pediatric cohorts.
  • Focus on cardiac auscultation sounds.
  • Audio files obtained from private databases.
  • Audio files self-collected for the purpose of the study.
  • Manuscripts published in languages other than English.
Intervention
  Inclusion criteria:
  • Use of at least one machine learning algorithm to classify abnormal respiratory sounds.
  Exclusion criteria:
  • No machine learning algorithm used.
Comparator
  Inclusion criteria:
  • Labeling of abnormal sounds provided by the source database.
  Exclusion criteria:
  • No labeling provided by the source database.
Outcomes
  Inclusion criteria:
  • Report of at least one of the following performance metrics: accuracy, sensitivity, or specificity.
  Exclusion criteria:
  • No performance metric reported.
Study Designs
  Inclusion criteria:
  • Machine learning algorithm development, comparison, validation, or hyperparameter tuning.
  Exclusion criteria:
  • Book chapters.
  • Reviews.
  • Abstracts.
  • Letters to the editor.
  • Unpublished work.
  • Study protocols.
Table 2. Abnormal lung sound sources mentioned in the included articles. Some databases are no longer accessible, or their characteristics are not described. Contents are sorted by availability (last column) and country of origin (second column).
Database or Author Name | Country | Participants (Total (M/F); HC) | Abnormal Lung Sounds Labeled | Pathologies Labeled | Availability 1 | Ref.
R.A.L.E. Lung Sounds 3.2 | Canada | 70 (-); 17 | Crackles, Wheezes, Squawk, Stridor, Rhonchi | Asthma, COPD, Bronchiolitis, Laryngeal web, Bronchogenic carcinoma, Lung fibrosis, Cystic fibrosis | Available online | [33]
ICBHI 2017 Challenge Database | Greece, Portugal | 126 (46/79); 26 | Crackles, Wheezes, Crackles + Wheezes | Asthma, Bronchiectasis, Bronchiolitis, COPD, Pneumonia, LRTI, URTI | Available online | [27]
KAUH database | Jordan | 120 (43/69); 35 | Crackles, Wheezes, Crepitations, Bronchial sounds, Crackles + Wheezes, Crackles + Bronchial | Asthma, Pneumonia, COPD, Bronchitis, Heart failure, Lung fibrosis, Pleural effusion | Available online | [45]
RespiratoryDatabase@TR | Turkey | 77 (64/13); 30 | Crackles, Wheezes | Asthma, COPD | Available online | [46]
Thinklabs Lung Sounds Library | United States | - | Crackles, Wheezes, Pleural rub, Rhonchi, Stridor | Asthma, Bronchiolitis, COPD, Laryngomalacia, Pulmonary edema | Available online | [47]
East Tennessee State University Pulmonary Breath Sounds | United States | - | Crackles, Pleural rub, Stridor, Wheezing, Rhonchus | - | Available online | [48]
ASTRA database | France | - | - | - | CD-ROM | [40]
Auscultation Skills: Breath & Heart Sounds | United States | - | - | - | CD-ROM | [41]
Fundamentals of Lung and Heart Sounds | United States | - | - | - | CD-ROM | [42]
Heart and Lung Sounds Reference Library, Wrigley | United States | - | Bronchial, Bronchovesicular, Rhonchi, Pneumonia, Wheezes, Bronchophony, Crackles, Stridor | - | CD-ROM | [43]
Understanding Lung Sounds, Lehrer | United States | - | Crackles, Wheezes | - | CD-ROM | [44]
Bahoura 1999 | France | - | - | - | Undefined | [49]
Hsiao 2020 | Taiwan | 22 (12/10); - | Crackles, Wheezes | - | Undefined | [50]
Bogazici University Lung Acoustics Laboratory | Turkey | - | - | Bronchiectasis, Interstitial lung disease | Undefined | -
CORA database | Ukraine | - | - | Bronchitis, COPD | Undefined | [51]
Stethographics Lung Sound Samples 2 | United States | - | - | - | Undefined | -
3M Littmann Lung Sounds Library | United States | - | - | - | Undefined | -
Mediscuss Respiratory Sounds 2 | - | - | - | - | Undefined | -
Abbreviations: M: Males; F: Females; HC: Healthy Controls; COPD: Chronic Obstructive Pulmonary Disease; LRTI: Lower Respiratory Tract Infection; URTI: Upper Respiratory Tract Infection; ETSU: East Tennessee State University; ICBHI: International Conference on Biomedical and Health Informatics; KAUH: King Abdullah University Hospital; R.A.L.E.: Respiratory Acoustics Laboratory Environment. 1 Availability at the time of submission. 2 This database was mentioned in one of the included articles but could not be found in this review.
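For readers working with the ICBHI 2017 Challenge Database, the most frequently used source in Table 2, a practical note on its format: each recording's .wav file is paired with a plain-text annotation in which every row lists one respiratory cycle's start time, end time, and binary crackle and wheeze flags, yielding the four cycle-level classes (normal, crackles, wheezes, both). The sketch below (Python) is a minimal parser for that format; the example file name is hypothetical and merely mirrors the database's naming convention.

```python
# Minimal sketch: parse an ICBHI 2017 annotation file. Each row is
# "<cycle start (s)> <cycle end (s)> <crackles 0/1> <wheezes 0/1>".
# The example file name at the bottom is hypothetical.
from dataclasses import dataclass

@dataclass
class RespiratoryCycle:
    start_s: float   # cycle start, seconds from recording onset
    end_s: float     # cycle end, seconds
    crackles: bool   # crackles present in this cycle
    wheezes: bool    # wheezes present in this cycle

    @property
    def label(self) -> str:
        """Map the two flags to the four cycle-level classes."""
        if self.crackles and self.wheezes:
            return "crackles + wheezes"
        if self.crackles:
            return "crackles"
        if self.wheezes:
            return "wheezes"
        return "normal"

def read_icbhi_annotation(txt_path: str) -> list[RespiratoryCycle]:
    """Read one annotation file into a list of labeled respiratory cycles."""
    cycles = []
    with open(txt_path) as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            start, end, crackles, wheezes = line.split()
            cycles.append(RespiratoryCycle(float(start), float(end),
                                           crackles == "1", wheezes == "1"))
    return cycles

# Hypothetical usage (the .txt accompanies a .wav with the same base name):
# cycles = read_icbhi_annotation("101_1b1_Al_sc_Meditron.txt")
# print([c.label for c in cycles])
```

Cycle boundaries parsed this way are what most of the included ICBHI studies use to segment recordings before feature extraction.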