Review

Recent Advances in Machine Learning Applied to Ultrasound Imaging

School of Engineering, University of Basilicata, 85100 Potenza, Italy
* Author to whom correspondence should be addressed.
Electronics 2022, 11(11), 1800; https://doi.org/10.3390/electronics11111800
Submission received: 20 March 2022 / Revised: 8 May 2022 / Accepted: 9 May 2022 / Published: 6 June 2022
(This article belongs to the Special Issue Ultrasonic Pattern Recognition by Machine Learning)

Abstract

Machine learning (ML) methods are pervading an increasing number of fields of application because of their capacity to effectively solve a wide variety of challenging problems. The employment of ML techniques in ultrasound imaging applications started several years ago but the scientific interest in this issue has increased exponentially in the last few years. The present work reviews the most recent (2019 onwards) implementations of machine learning techniques for two of the most popular ultrasound imaging fields, medical diagnostics and non-destructive evaluation. The former, which covers the major part of the review, was analyzed by classifying studies according to the human organ investigated and the methodology (e.g., detection, segmentation, and/or classification) adopted, while for the latter, some solutions to the detection/classification of material defects or particular patterns are reported. Finally, the main merits of machine learning that emerged from the study analysis are summarized and discussed.

1. Introduction

In recent years, machine learning (ML) techniques, which are based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention, have had a significant impact on industry and science due to their capacity to solve challenging problems. ML applications are innumerable and include computer vision [1,2,3,4], self-driving cars [5,6], virtual personal assistants [7], speech recognition [8] and even ultrasound imaging, which is the field of application that will be discussed in detail in this review.
Ultrasound imaging is employed in a wide variety of applications, including sonar [9,10,11], non-destructive evaluation (NDE) [12,13,14], indoor positioning systems (IPS) [15,16] and biometric recognition [17,18,19]. However, its best known application is medical diagnostics, where it is particularly attractive compared to other imaging modalities, such as magnetic resonance (MR), X-rays and computed tomography (CT), because it enables the acquisition of organ images at low cost and in a safe and non-invasive way. At present, it is commonly employed for various analyses, such as fetal monitoring, the anatomical study of blood vessels and flow, lung and liver disease screening and even the diagnosis of tumor pathologies.
ML techniques have been employed in ultrasound imaging for several years [20,21,22,23,24], but, very recently, there has been a dramatic growth of scientific interest in this area, where the main fields involved are sonar, NDE and medicine. The present review focuses on the latter two fields, with particular regard to the medical area. Sonar applications are not included here because, due to the very large number of recent studies [25,26,27,28,29,30], they deserve a separate review. Given the very large number of relevant scientific publications, this review is mainly devoted to the analysis of more recent studies, with a particular focus on papers published after 2019 and on journal papers rather than conference papers.
The review is organized as follows: Section 2 is devoted to a classification of the ML algorithms most used in the analyzed articles; Section 3 focuses on the basic principles of ultrasound imaging and the most used ultrasound techniques; Section 4 reviews papers related to applications of ML to ultrasound medical diagnostics, subdivided by organ; Section 5 considers the application of ML algorithms in NDE; Section 6 is devoted to the conclusions, where the benefits and limits of ML applications are highlighted.

2. Machine Learning

Machine learning (ML), one of the most rapidly developing subfields of artificial intelligence (AI) research, is defined as the field of study of computer algorithms capable of learning autonomously to improve the performance of a task on the basis of their own previous experience. The main intention of ML techniques is to allow a system to acquire knowledge from data without explicit programming, and a great deal of scientific activity has been dedicated to proposing methods that enable machines to learn by themselves [31]. ML relies on different algorithms to solve data problems, where the kind of algorithm used depends on a series of factors, including the type of problem, the number of variables, and the most suitable model. ML fundamentally includes two types of approaches: supervised and unsupervised learning. Supervised learning is a machine learning method where an algorithm is trained on input data that have been labeled with a particular output. Supervised algorithms, depending on the output to be obtained, can be categorized as classification algorithms, which identify the category to which an object belongs on the basis of the characteristics of the object itself, and regression algorithms, which estimate the value of a particular feature of an object. The most used algorithms for regression problems are listed below; a brief code sketch follows the list.
  • Linear regression, which establishes a relation between dependent variables (output) and independent variables (input) through a fitting line. Linear regression can be of two types: simple and multiple linear regression, where the first includes one independent variable, while the second includes two or more independent variables.
  • Logistic regression, which is a statistical tool used to model a binomial outcome, i.e., binary problems with one or more explanatory variables. It describes the relationship between a binary dependent variable and one or more independent variables.
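As a minimal illustration of these two algorithms, the following sketch fits both with scikit-learn on synthetic data; all values and shapes are hypothetical placeholders.

```python
# A minimal sketch on synthetic data (all values hypothetical):
# multiple linear regression and binary logistic regression.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                     # two independent variables
y_cont = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=100)
y_bin = (y_cont > 0).astype(int)                  # binomial outcome

lin = LinearRegression().fit(X, y_cont)           # fits the regression line
log = LogisticRegression().fit(X, y_bin)          # models P(y = 1 | X)
print(lin.coef_)                                  # estimated coefficients (~3.0, ~-1.5)
print(log.predict_proba(X[:1]))                   # class probabilities for one sample
```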
The most used algorithms for classification problems are listed below; a code sketch illustrating them follows the list.
  • Naive Bayes is based on an underlying probabilistic model and enables capture of the uncertainty of a model by determining the probabilities of the outcomes. The Bayesian classification can solve predictive problems by providing practical learning algorithms and combining observed data. This classification provides a useful perspective for understanding and evaluating learning algorithms [32].
  • Support vector machine (SVM) is a method for the classification of two groups of data points, which exploits a hyperplane that divides the two categories of data points with the largest margin [33]. Linear SVM is the simplest form of an SVM classifier: examples are represented as points in space and mapped so that the examples belonging to the two categories are divided by a clear gap that is as wide as possible, and the category of a new example is predicted on the basis of the side of the gap on which it falls. SVM can be performed either linearly or non-linearly. Non-linear SVM is useful when the data are not linearly separable. This approach involves the kernel trick [31], in which a non-linear kernel function replaces the scalar product, implicitly mapping the data into a higher-dimensional space. The most used kernels are polynomial and Gaussian.
  • Decision tree, the goal of which is to create a model that predicts the value of a certain variable by learning decision rules obtained from data features. It consists of a tree that classifies instances by sorting them based on feature values; each node of the decision tree represents an instance feature to be classified and each branch corresponds to a value that can be assigned to the node [34].
  • Random forest (RF) combines the output of multiple decision trees to reach a single result. The random forest algorithm is an extension of the bagging method because it utilizes both bagging and feature randomness to create an uncorrelated forest of decision trees. RF is employed for classification, regression, and other tasks; it constructs a multitude of decision trees during training and outputs the class that represents the overall prediction of the single trees. Random forest mitigates the overfitting problem of individual decision trees [35].
  • K-nearest neighbors (K-NN) predicts the class of a new instance from the data points of known classes that surround it. Its operation is based on the similarity of characteristics: the closer an instance is to a data point, the more similar the algorithm considers them. To evaluate similarity, the algorithm uses a distance metric, such as the Euclidean [36], Chebyshev [37], or Minkowski [38] distance. In addition to the distance, K-NN requires a parameter k, chosen arbitrarily, which sets the number of nearest neighbors considered; the class most represented among these neighbors is chosen as the prediction [39].
  • Linear discriminant analysis (LDA) is a commonly used technique for supervised classification problems, which also reduces their dimensionality. It is used for modeling differences between groups, i.e., separating two or more classes, by transferring features from higher- to lower-dimensional spaces. LDA, like SVM, computes optimal hyperplanes with respect to its individual objective. However, LDA hyperplanes are optimal only when the covariance matrices are identical for all of the classes, while SVM computes optimal hyperplanes without making this assumption [40].
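The following hedged sketch trains the classifiers listed above on synthetic data through the uniform scikit-learn interface; the dataset and hyperparameters are arbitrary placeholders, chosen only to show the common fit/score pattern.

```python
# A minimal sketch (hypothetical data): the supervised classifiers
# discussed above, trained and scored with the same interface.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "SVM (Gaussian kernel)": SVC(kernel="rbf"),      # non-linear SVM via kernel trick
    "Decision tree": DecisionTreeClassifier(),
    "Random forest": RandomForestClassifier(n_estimators=100),
    "K-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    print(name, model.fit(X_tr, y_tr).score(X_te, y_te))  # test-set accuracy
```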
Unsupervised learning, by contrast, identifies hidden structures in datasets whose outputs are not labeled. The principal unsupervised learning algorithms, both sketched in the example that follows the list, are:
  • K-means clustering, which initially defines k centroids and then iteratively assigns each data point to the closest centroid and recomputes the centroids [33].
  • Principal component analysis (PCA) reduces the dimensionality of a data set consisting of many variables correlated with each other, either heavily or lightly, while retaining the variation present in the dataset, to the maximum extent. The same operation is performed by converting the original variables to a new set of variables, named principal components, which are orthogonal and are ordered in such a way that the retention of variation present in the original variables decreases by moving down the order [41].
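Both algorithms are available in scikit-learn; a minimal sketch on synthetic (hypothetical) data:

```python
# A minimal sketch (hypothetical data): k-means clustering and
# PCA dimensionality reduction on unlabeled data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(200, 8))   # 200 unlabeled samples

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X_reduced = PCA(n_components=2).fit_transform(X)     # first two principal components
print(clusters[:10], X_reduced.shape)
```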
Deep learning (DL) is a subfield of machine learning that attempts to learn high-level abstractions in data automatically using hierarchical architectures [42]. The most popular DL techniques are based on neural networks, which are inspired by the human nervous system and the structure of the brain. A neural network consists of processing units, or nodes, organized in input, hidden, and output layers, where the nodes in each layer are connected to nodes in the adjacent layers. At each node, the inputs are multiplied by their respective weights and summed; the sum then undergoes a transformation based on an activation function, which, in most cases, is a sigmoid function, tanh, or rectified linear unit (ReLU). The output of the function is then fed as input to the units in the next layer, and the final output represents the solution to the problem. The principal type of neural network is the convolutional neural network (CNN). Fundamentally, a CNN consists of a series of convolutional layers, sub-sampling or pooling layers, fully connected layers, and a normalizing layer. Here, the plain weighted sums described above are replaced by convolution and pooling operations, usually followed by the same kinds of activation function. The series of convolutions performs increasingly refined feature extraction at every layer moving from the input to the output. Pooling layers occur between convolutional layers and reduce the size of the feature maps and of the network, effectively reducing the network's susceptibility to scale changes and image distortion [43]. Finally, the classification over a certain number of categories is performed by the fully connected layers. A minimal code sketch of such an architecture is given below; the most popular CNN types are then reported.
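The following PyTorch sketch assembles the building blocks just described (convolution, ReLU activation, pooling, and a fully connected classification layer); the input size, channel counts, and class count are arbitrary assumptions for illustration.

```python
# A minimal CNN sketch (assumed shapes): convolution, ReLU, pooling,
# and a fully connected classification head, as described above.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # pooling halves the feature maps
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # fully connected head

    def forward(self, x):                         # x: (batch, 1, 64, 64) grayscale
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = SmallCNN()(torch.randn(4, 1, 64, 64))    # e.g., 64x64 ultrasound patches
print(logits.shape)                               # torch.Size([4, 2])
```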
UNet [44] is a CNN developed for biomedical image segmentation. The UNet architecture has a U-shape and is based on two paths, a contraction or encoder path and an expansion or decoder path, where each encoder convolution layer is concatenated to its reciprocal decoder layer. Each concatenation provides localized features specific to the segmented classes. The basic concept behind the UNet deep learning technique is to use convolutional layers and max-pooling architectures to extract identifying features and patterns from a series of images.
AlexNet [45] is a CNN consisting of eight layers, where the first five are convolutional layers and the last three are fully connected layers. Compared to traditional CNNs, AlexNet identifies more features because of its deeper structure and the larger number of parameters in the model.
ResNet [46] is a specific CNN architecture distinguished by its use of skip connections to jump over some layers. Adding skip connections mitigates the degradation problem (accuracy saturation) that occurs when a large number of layers is used, leading to training errors; a minimal sketch of such a residual block is given below. Figure 1 presents a scheme for the algorithms described above.
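The skip connection can be written in a few lines of PyTorch; this is a generic residual block for illustration, not the exact configuration of [46].

```python
# A generic residual block sketch: the block output is the input plus the
# learned residual, so gradients can "jump over" the inner layers.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # skip connection over the layers

y = ResidualBlock(16)(torch.randn(1, 16, 32, 32))  # shape is preserved
```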
Traditional ML assumes that training data and testing data share the same input feature space and the same data distribution. However, there are cases where the training data and testing data follow different distributions, and the performance of the predictive learner can degrade. The solution is transfer learning, which allows a high-performance learner for a target domain to be trained from a related source domain, coping with the difficulty of obtaining training data that match the feature space and distribution characteristics of the test data.
Transfer learning consists mainly of re-employing a model developed for one task as the starting point for a model intended to execute a different task. It is widely employed with deep learning models, where neural networks already trained on a large dataset are retrained for the purpose of classifying images on a large scale. The intuition behind transfer learning, especially in image classification, is that, if a model is trained on a sufficiently large and general dataset, it will effectively act as a generic model of the visual world; the general feature maps it has learned can then be exploited without having to train a new neural network from scratch, avoiding the resources and time required to train on a dataset large enough to return an optimal result.
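In code, the idea can be sketched with torchvision (requires a recent version supporting the weights API); the two-class target task here is a hypothetical example.

```python
# A transfer learning sketch: reuse a ResNet pretrained on a large generic
# dataset, freeze its feature extractor, and retrain only a new head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained source model
for p in model.parameters():
    p.requires_grad = False                    # keep the generic feature maps fixed
model.fc = nn.Linear(model.fc.in_features, 2)  # new head for a hypothetical 2-class task
# ...then train only model.fc on the (typically small) target dataset...
```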

3. Ultrasound Imaging

One of the most popular ultrasound imaging techniques is the pulse-echo method, whose basic modality is A(Amplitude)-mode [47]. In this modality, a single-element transducer is excited by high-intensity short pulses from a signal generator. The waves transmitted by the transducer propagate through the body, and echoes are reflected from the interfaces between organs and tissues. The echoes are detected by the same transducer, amplified to compensate for the attenuation the ultrasound energy undergoes as it penetrates deeper into the body, and then processed and displayed.
A two-dimensional image is usually obtained by using linear or convex phased arrays, which allow electronic scanning of the desired volume and the application of beam-forming techniques (e.g., apodization, steering, focusing). A first kind of image, named the B(Brightness)-mode image, is characterized by grayscale pixels whose value is proportional to the amplitude of the returned echo [48]. Figure 2 shows an example of a B-mode image of the liver and kidney.
Another method for generating 2D images is represented by M(Motion)-Mode. In this case, the transducer is placed in front of a moving target, and echo signals are repeatedly acquired along the same A-line orientation. The obtained image represents the distances to the targets as a function of time.
In cardiovascular analysis, the most employed techniques are based on the Doppler effect, which exploits the capability of ultrasound to measure blood flow, principally to assess the state of blood vessels and the function of an organ. The Doppler effect is the alteration of the frequency of a received wave compared to the transmitted one due to the relative movement of the transmitter and receiver; here, it arises from the backscattering of ultrasonic waves by red blood cells in motion with respect to the probe. Conventionally, ultrasound Doppler flow measurements are based on three main approaches: continuous wave (CW) Doppler, pulsed wave (PW) Doppler, and color Doppler. CW Doppler systems use two transducers, one for transmission and one for reception, and can obtain information on the velocity along a US beam without any information about position [49]. PW Doppler, instead, is based on a single transducer that alternately transmits and receives; it is sensitive to the beam-to-flow angle and enables extraction of the flow velocity at one specific depth [50]. Color Doppler is a technique that displays B-mode and Doppler blood flow data simultaneously, where the Doppler information is visualized as color superimposed onto the B-mode image. In particular, red indicates flow toward the transducer while blue indicates flow away from the transducer [51,52]. A color Doppler image of the carotid artery is shown in Figure 3.
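All three modalities rest on the classical Doppler relation, recalled here for completeness (it is not stated explicitly above): the frequency shift $f_d$ produced by blood moving at velocity $v$ is

$$ f_d = \frac{2 f_0 v \cos\theta}{c}, $$

where $f_0$ is the transmitted frequency, $\theta$ the beam-to-flow angle, and $c$ the speed of sound in tissue; the factor of two accounts for the round trip of the wave. This also makes explicit why PW Doppler is sensitive to the beam-to-flow angle: the measured shift vanishes as $\theta$ approaches 90°.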
Another technique employed in the cardiovascular analysis is intravascular ultrasound (IVUS), a catheter-based technique that provides real-time high-resolution tomographic images of both the lumen and arterial wall of coronary segments [53].
A relatively new imaging technique is elastography. This technique aims to imitate palpation, one of the oldest methods of detecting tumors and other pathologies, using acoustic waves. Basically, a pathologic region is characterized by lower elasticity in comparison to normal tissues; indeed, tissue stiffness is considered an important biomarker for pathological processes. In conventional imaging, the reflected echoes are used to map reflectivity properties and geometry by assuming that the examined organ is stationary or moves only because of internal physiological changes. A different kind of contrast, representing the elastic properties of the tissue, can instead be obtained by applying a known external mechanical load to the tissue [54].
An important evolution of ultrasound imaging is the formation of three-dimensional images, which provide more information than 2D images. 3D images are obtained by acquiring multiple slices; the reconstruction of the 3D image can be performed offline or in real time. In the latter case, one refers to 4D images, i.e., 3D images in real-time motion. 3D ultrasound data are acquired by employing a linear array performing a single mechanical scan or a two-dimensional array performing only electronic scans. Basically, 3D images are displayed through two modalities: a series of multiplanar images orthogonal to one another and/or images showing three-dimensional structures [55].

4. ML in US Medical Diagnostics

In recent years, ML techniques have played a fundamental role in the analysis of US medical images in order to improve the reliability of diagnosis that is often compromised by the relatively poor quality of images due to the presence of noise and acquisition errors. Furthermore, ML techniques reduce operator-dependence, standardize the image interpretation, provide stable results and the capability to make rapid decisions, and relieve the heavy work of radiologists.
The next section is subdivided into several subsections; each subsection is devoted to a particular organ and consists of two parts describing:
  • the general issues related to organ diseases;
  • the most recent papers on innovative ML techniques organized according to the methodology adopted: detection, segmentation and classification.

4.1. Breast

Breast cancer is one of the principal causes of cancer death in women, and its incidence is increasing. The probability that a woman will die from a breast tumor is about 1 in 39. Only 10% of cases are detected at the initial stages. Breast cancer can begin in different parts of the breast, which is made up of lobules, ducts, and stromal tissues. Most breast cancers begin in the cells that line the ducts, while some begin in the cells that line the lobules and a small number begin in the other tissues [56]. Breast cancer manifests itself mainly through a breast nodule or thickening that feels different from the surrounding tissue, lymph node enlargement, nipple discharge, a retracted nipple, or persistent tenderness of the breast.
A successful diagnosis in the early stages of breast cancer makes better treatment possible and increases the probability of survival [57]. Furthermore, the cost of breast cancer treatment is high. For such reasons, in recent years, several breast diagnostic approaches have been investigated, such as mammography, magnetic resonance imaging, computerized tomography, biopsy, and ultrasound imaging. The latter, in the last few years, has become an integral part of the characterization of breast masses because of the advantages previously described. In addition, compared to mammography, ultrasound is the most accessible imaging modality, is age-independent [58] and allows the assessment of breast density, which is often a predictor used in breast cancer risk evaluation and prevention. The breast density percentage is defined as the ratio between the area of the fibroglandular tissue and the total area of the breast, as formalized below. Breast ultrasound is also used to distinguish benign from malignant lesions.
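In symbols, the density definition above reads

$$ \mathrm{BD}\,(\%) = \frac{A_{\text{fibroglandular}}}{A_{\text{breast}}} \times 100, $$

where $A_{\text{fibroglandular}}$ and $A_{\text{breast}}$ denote the segmented fibroglandular area and the total breast area, respectively.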
For the purposes mentioned above, most of the techniques investigated are based on three principal issues, i.e., detection [59,60,61,62,63,64,65], segmentation [66,67,68,69,70,71,72,73,74,75,76,77], and classification [63,65,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94].
Detection is fundamental in ultrasound analysis because it provides support for segmentation and/or classification between malignant and benign tumors. In a recent study, Gao et al. [60] proposed a method for the recognition of breast ultrasound nodules with few labeled data. Nodule detection was achieved by employing a faster region-based CNN. Benign and malignant nodules were classified through a semi-supervised classifier, based on the mean teacher model, trained on a small amount of labeled data. The results demonstrated that semi-supervised learning (SSL) achieved performance comparable to that obtained with supervised learning (SL) trained on a large amount of data.
Segmentation [66] has an important role in the clinical diagnosis of breast cancer due to its capability to discriminate different functional tissues, providing valuable references for image interpretation, tumor localization, and breast cancer diagnosis. A segmentation approach that combines fuzzy logic and deep learning was suggested by Badawy et al. [67] for automatic semantic segmentation of tumors in breast ultrasound images. The proposed scheme is based on two steps: the first consists of preprocessing based on a fuzzy intensification operator and the second of semantic segmentation with CNNs, experimenting with eight well-known models. It is applied using different modes: batch and one-by-one image processing. The results demonstrated that fuzzy preprocessing was able to enhance the automatic semantic segmentation for each evaluated metric, but only in the case of batch processing. Another automatic semantic segmentation approach was proposed by Huang et al. [69]. In this approach, BUS images are first preprocessed using wavelet features; then, the augmented images are segmented through a fuzzy fully convolutional network (FCN); finally, an accurate fine-tuning post-processing step based on breast anatomy constraints, implemented through conditional random fields (CRFs), is performed. The experimental results showed that the fuzzy FCN provided better performance than the non-fuzzy FCN, both in terms of robustness and accuracy; moreover, its performance was better than all the other methods used for comparison and remained strong when small datasets were used. Ilesanmi et al. [70] used contrast-limited adaptive histogram equalization (CLAHE) to improve image quality. Semantic segmentation was performed through a variant of UNet, named VEU-Net, based on a variant enhanced (VE) block, which encoded the preprocessed image, and concatenated convolutions that produced the segmentation mask. The results indicated that the VEU-Net produced better segmentation than the other classic CNN methods tested for comparison. An approach based on the integration of deep learning with visual saliency for breast tumor segmentation was proposed by Vakanski et al. [73]. Attention blocks were introduced into a U-Net architecture, and feature representations that prioritized spatial regions with high saliency levels were learned. The results demonstrated that the accuracy of tumor segmentation was better than for models without salient attention layers. An important merit of this investigation was the use of US images collected from different systems, which demonstrated the robustness of the technique.
Image classification is very important in medical diagnostics because it enables distinguishing benign lesions or tumors from malignant ones and a particular type of tissue from others. Shia et al. [78] presented a method based on a transfer learning algorithm to recognize and classify benign and malignant breast tumors from B-mode images. Specifically, feature extraction was performed by employing a deep residual network model (ResNet-101); the extracted features were classified through a linear SVM with a sequential minimal optimization solver (a rough sketch of this kind of pipeline is given below). The experimental results highlighted that the proposed method was able to improve the quality and efficacy of clinical diagnosis. Chen et al. [89] presented a contrast-enhanced ultrasound (CEUS) video classification model for classifying breast cancer into benign and malignant types. The model was based on a 3D CNN with a temporal attention module (DKG-TAM) incorporating temporal domain knowledge and a channel attention module (DKG-CAM) that included feature-based domain knowledge. It was found that the incorporation of domain knowledge led to improvements in sensitivity. A study aimed at testing the capability of AutoML Vision, a highly automated machine learning model, for breast lesion classification was presented by Wan et al. [91]. The performance of AutoML Vision was compared with traditional ML models using the most common classifiers (RF, KNN, LDA, LR, SVM and NB) and a CNN designed in a TensorFlow environment. The AutoML Vision performances were, on average, comparable to the others, demonstrating its reliability for clinical practice. Finally, Huo et al. [93] experimentally evaluated six machine learning models (LR, RF, extra trees, SVM, multilayer perceptron, and XGBoost) for differentiating between benign and malignant breast lesions using data from different sources. Two examples of ultrasound depictions of malignant breast lesions are shown in Figure 4. The experimental results demonstrated that the LR model exhibited better diagnostic efficiency than the others and was also better than clinician diagnosis (see Table 1).
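A rough sketch of a deep-feature-plus-SVM pipeline of the kind used in [78] follows. It is a simplified illustration, not the authors' implementation: the pretrained ResNet-101 backbone is used as a fixed feature extractor, the input tensors and labels are stand-ins, and scikit-learn's LinearSVC replaces the specific SMO solver.

```python
# Simplified illustration (not the implementation of [78]): deep features
# from a pretrained ResNet-101, classified by a linear SVM.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import LinearSVC

resnet = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
resnet.fc = nn.Identity()                  # drop the 1000-class head, keep features
resnet.eval()

with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)   # stand-in for preprocessed B-mode images
    feats = resnet(images).numpy()         # (4, 2048) deep feature vectors

labels = [0, 1, 0, 1]                      # stand-in benign/malignant labels
svm = LinearSVC().fit(feats, labels)       # linear SVM on the deep features
print(svm.predict(feats))
```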

4.2. Arteries

Another major cause of death in the world is cardiovascular disease (CVD), caused principally by a pathological condition called atherosclerosis, which is characterized by alterations of artery walls that have lost their elasticity because of the accumulation of calcium, cholesterol, or inflammatory cells. It is the principal cause of stroke and infarction. Early detection of plaques in the arteries plays a fundamental role in the prevention of brain strokes. Ultrasound imaging represents a useful method for the analysis of carotid diseases through the visualization and interpretation of carotid plaques, because a correct characterization of this disease is fundamental to identifying vulnerable plaques that require surgery. A reliable and useful indicator of atherosclerosis is the so-called intima-media (IM) thickness, defined as the distance from the lumen-intima (LI) to the media-adventitia (MA) interface. Most studies have been devoted to the improvement of early atherosclerosis diagnosis; in this respect, three main issues are considered: detection [95,102,103,104,105,106,107], segmentation [108,109,110,111,112,113,114,115], and classification [116,117,118,119,120,121,122,123,124,125,126,127,128].
As far as detection is concerned, Bajaj et al. [95] designed a novel deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound (IVUS) images. Near-infrared spectroscopy (NIRS)-IVUS images were collected from 20 coronary arteries and co-registered with the concurrent electrocardiographic (ECG) signal for the identification of end-diastolic frames. A bidirectional gated recurrent unit (Bi-GRU) neural network was trained on segments of 64 frames, each incorporating at least one cardiac cycle, and the test set was then processed to identify the end-diastolic frames. The proposed method demonstrated higher accuracy than expert analysts and conventional image-based (CIB) methodologies.
Two recent segmentation approaches based on DL have been proposed by Blanco et al. [111] and Zhou et al. [112]. The first method [111] employs small datasets for algorithm training. Specifically, plaques in 2D carotid B-mode images are segmented through a UNet++ ensemble algorithm, trained on three small databases, which combines eight individual UNet++ networks with different backbones and encoder architectures. Good segmentation accuracy was achieved for different datasets without retraining. The second method [112] involves the concatenation of a multi-frame convolutional neural network (MFCNN), which exploits the adjacency information present in longitudinally neighboring IVUS frames to deliver a preliminary segmentation, followed by a Gaussian process (GP) regressor that constructs the final lumen and vessel contours by filtering high-dimensional noise. The results obtained with the developed model demonstrated accurate segmentation in terms of image metrics, contour metrics, and clinically relevant variables, potentially enabling its use in clinical routine by reducing the costs involved in the manual management of IVUS datasets. Lo Vercio et al. [128] suggested an automatic detection method fundamentally based on two machine learning algorithms: an SVM employed to detect lumen, media, and surrounding tissues, and an RF employed to detect different morphological structures and to modify the initial layer classification depending on the detected structures. Subsequently, the resulting classification maps are fed into a segmentation method based on deformable contours to detect the LI and MA interfaces. The main steps of LI and MA segmentation are described in Figure 5.
With respect to classification, Saba et al. [119] focused on the classification of plaque tissue by employing four ML systems, one transfer learning system, and one deep learning architecture with different layers. Two types of plaque characterization were used: an AI-based mean feature strength and a bispectrum analysis. The results demonstrated that the proposed method was able to accurately characterize symptomatic carotid plaques, clearly discriminating them from asymptomatic ones. Another study on carotid diseases was published by Luo et al. [120], who proposed an innovative classification approach based on lower extremity arterial Doppler (LEAD) and duplex carotid ultrasound studies. They developed a hierarchical deep learning model for the classification of aortoiliac, femoropopliteal, and trifurcation disease, and an RF algorithm for the classification of the degree of carotid stenosis from duplex carotid ultrasound studies. An automated interpretation of the LEAD and carotid duplex ultrasound studies was then developed through artificial intelligence. Subsequently, a statistical analysis was performed using a confusion matrix, and the reliability of the novel machine learning models in differentiating normal from diseased arterial systems was evaluated. Good accuracy in classifying the extent of vascular disease was demonstrated (see Table 2).

4.3. Heart

Echocardiography is one of the most employed diagnostic tests in cardiology, where heart images are created through Doppler ultrasound. It is routinely employed in the diagnosis, management, and follow-up of patients with any suspected or known heart disease.
The heart is a muscular organ that pumps blood through the body and is fundamentally divided into four chambers: the upper left and right atria and the lower left and right ventricles. Heart activity can be divided into two principal phases: systole and diastole. During systole, the myocardium contracts, ejecting blood to the lungs (right ventricle) and the body (left ventricle). During diastole, the cardiac muscle relaxes, expanding the heart's volume and causing blood to flow in. The heart has four valves, including the mitral valve, which separates the left atrium and the left ventricle and plays a fundamental role in regulating the blood transition from atrium to ventricle: it opens during diastole, while during systole it closes and prevents reflux towards the left atrium. Echocardiography can provide information about different anatomical aspects of the heart, including the position and shape of the atria and ventricles [135], and even other variables such as cardiac output, ejection fraction and diastolic function. In addition, echocardiography enables the detection of a series of heart diseases, including cardiomyopathy, congenital heart disease, aneurysm, and mitral valve disease. However, one of the major issues in echocardiography is the difficulty of automatically classifying and identifying large databases of echocardiogram views in order to provide a diagnosis. The classification task is challenging because of several properties of echocardiograms, including the presence of noise, redundant information, acquisition errors, and the variability of different scans due to different acquisition techniques.
Several studies have been devoted to the automation of algorithms for the detection of anomalies and heart anatomy [96,97,136,137,138], and to the classification of echocardiogram views to provide a full and reliable assessment of cardiac functionality, improving diagnostic accuracy [139,140].
As far as detection is concerned, an advanced method for the evaluation of several biomarkers from echocardiogram videos based on DL has been developed by Hughes et al. [96]. The method, named EchoNet-Labs, is a CNN with residual connections and spatio-temporal convolutions that provides a beat-by-beat estimate for biomarker values. Experimental results have demonstrated high accuracy in detecting abnormal values of hemoglobin, troponin I, and other proteins, and better performance compared to models based on traditional risk factors. A detection method based on radiomics-based texture analysis and supervised learning was proposed by Kagiyama et al. [97], who designed a low-cost texture-based pipeline for the prediction of fibrosis and myocardial tissue remodeling. The first part of the method consists of the extraction of 328 texture-based features of the myocardium from ultrasound images and exploration of the phenotypes of myocardial textures through unsupervised similarity networks. The second part involves the employment of supervised machine learning models (decision trees, RF, logistic regression models, neural network) for the prediction of functional left ventricular remodeling, while, in the third part, supervised models (logistic regression models) for predicting the presence of myocardial fibrosis are employed. Figure 6 shows a comparison of two myocardial fibrosis predictions from ultrasound and magnetic resonance images.
A classification deep learning approach was developed by Vaseli et al. [139]. They defined a method for obtaining a lightweight deep learning model for the classification of 12 standard echocardiography views by employing a large echocardiography dataset. For this purpose, three different teacher networks were implemented, each consisting of a CNN module and a fully connected (FC) module, where the CNN module is based on one of three advanced deep learning architectures, i.e., VGG-16, DenseNet, and ResNet. A dataset of 16,612 echo cines obtained from 3151 unique patients across several ultrasound imaging machines was employed for the development and evaluation of the networks. The proposed models were shown to be lightweight and faster than state-of-the-art large deep models, and to be suitable for point-of-care ultrasound (POCUS) diagnosis.

4.4. Liver

Liver disease is one of the principal causes of death worldwide and comprises a wide range of diseases with varied or unknown origins. In 2017, about 1.32 million deaths worldwide were due to cirrhosis. Furthermore, liver cancer represents the fifth most common cancer and the second leading cause of cancer death according to the World Health Organization (WHO). The studied pathologies can be summarized as:
  • focal liver lesions, solid formations that can be benign or malignant,
  • liver fibrosis, excessive accumulation of extracellular matrix proteins, such as collagen,
  • fatty liver or liver steatosis, conditions based on the accumulation of excess fat in the liver,
  • liver tumors.
A number of studies have sought to develop automated algorithms for the detection [98,141,142,143,144], segmentation [143], and classification [143,145,146,147,148,149,150,151,152,153,154,155,156,157] of the diseases described above.
Yu et al. [98] developed a machine learning system to detect and localize gallstones and to detect acute cholecystitis using still images for preliminary, rapid and low-cost diagnoses. A single-shot multibox detector (SSD) and a feature pyramid network (FPN) were used to classify and localize objects using image features extracted by ResNet-50 for gallstones, while MobileNet V2 was used to classify cholecystitis. The deep learning models were pretrained using public datasets. The experimental results demonstrated the capability of the proposed system to detect cholecystitis and gallstones with acceptable discrimination and speed, and its suitability for POCUS.
A recent study by Cha et al. [129] proposed a deep learning model aimed at automatically quantifying the hepatorenal index (HRI) for the ultrasound evaluation of fatty liver, in order to overcome limitations due to interobserver and intraobserver variability. They developed an organ segmentation method based on a deep convolutional neural network (DCNN) with Gaussian mixture modeling for automated quantification of the HRI from B-mode abdominal ultrasound images. Interobserver agreement for the measured brightness of the liver and kidney and the calculated HRI was analyzed between two board-certified radiologists and the DCNN using intraclass correlation coefficients. The automatic quantification of HRI through the DCNN was found to yield results similar to those obtained by expert radiologists.
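The quantity being automated is simple once the segmentation is available; the following minimal sketch (assumed grayscale image and binary masks, not the pipeline of [129]) computes the index as the ratio of mean liver brightness to mean kidney brightness.

```python
# Minimal sketch (assumed inputs, not the method of [129]): the hepatorenal
# index as the ratio of mean liver to mean kidney brightness within masks.
import numpy as np

def hepatorenal_index(image, liver_mask, kidney_mask):
    """image: 2D grayscale array; masks: boolean arrays of the same shape."""
    return image[liver_mask].mean() / image[kidney_mask].mean()

img = np.random.default_rng(0).uniform(0, 255, size=(64, 64))  # stand-in B-mode image
liver = np.zeros((64, 64), dtype=bool); liver[10:30, 10:30] = True
kidney = np.zeros((64, 64), dtype=bool); kidney[40:60, 40:60] = True
print(hepatorenal_index(img, liver, kidney))   # HRI > 1 suggests a brighter liver
```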
Regarding classification, Wang et al. [146] proposed a method to differentiate malignant from benign focal liver lesions through two-dimensional shear-wave elastography (2D-SWE)-based ultrasomics (ultrasound-based radiomics). The ultrasomics technique was employed to extract features from 2D-SWE images that were used to define an ultrasomics score model, while SWE measurements and ultrasomics features were used to define a combined score model through an SVM algorithm. Good diagnostic accuracy for the combined score in differentiating malignant from benign focal liver lesions was demonstrated. The authors highlighted, however, that, to achieve more reliable results, a higher number of cases would be required to better train the ML model. An alternative approach based on ultrasomics was proposed by Peng et al. [147], who concentrated on the differentiation of infected focal lesions from malignant mimickers. In particular, they defined an ultrasomics model based on machine learning methods with ultrasomics features extracted from grayscale images, with dimensionality reduction methods and classifiers employed to carry out feature selection and predictive modeling. The experimental results demonstrated the usefulness of ultrasomics in differentiating infected focal liver lesions from malignant mimickers. Another approach focusing on ultrasound SWE was proposed by Brattain et al. [149], who developed an automated method for the classification of liver fibrosis stages. This method was based on the integration of three modules for the evaluation of SWE image quality, the selection of a region of interest, and machine learning-based (SVM, RF, CNN and FCNN) multi-image SWE classification for fibrosis stage ≥ F2. The performance of the system was compared with manual methods, showing that the proposed method improved classification accuracy. A study focused on liver steatosis was published by Neogi et al. [155]. They presented a novel set of features that exploited the anisotropy of liver texture. The features were obtained using a gray-level difference histogram, a pair correlation function, probabilistic local directionality statistics, and texture randomness. Three datasets that included anisotropy features were employed for the classification of images using five classifiers: MLP, PNN, LVQ, SVM, and Bayesian. The best results were achieved with PNN and anisotropy features.

4.5. Fetus

Ultrasound imaging was introduced into the field of obstetrics by Donald et al. [158] and has since become the most commonly used imaging modality for investigating several factors related to fetal diagnosis, such as fetal biometric measurements (including head and abdominal circumferences and biparietal diameter) and fetal cardiac activity. Several scientific studies have been devoted to advancing the quality of prenatal diagnoses by focusing on three main issues: detection of anomalies, fetal measurements, scanning planes and heartbeat [99,100,159,160,161,162,163,164]; segmentation of fetal anatomy in ultrasound images and videos [99,130,131,132,164,165,166,167]; and classification of fetal standard planes, congenital anomalies, biometric measures, and fetal facial expressions [99,100,163,165,167,168,169,170,171,172,173].
A detection approach based on DL was proposed by Maraci et al. [99]. They designed a method for point-of-care ultrasound estimation of fetal gestational age (GA) from the trans-cerebellar (TC) diameter. In the first step, TC plane frames are extracted from a short ultrasound video using a standard CNN based on a variation of the AlexNet architecture. Then, an FCN is employed to localize the TC structure and to perform TC diameter estimation. GA is finally obtained through a standard equation. A good agreement was found between the automatic and manual estimation of GA. A recent ML detection method has been published by Wang et al. [100], who focused on the accurate identification of the fetal facial ultrasound standard plane (FFUSP), which has a significant role in facial deformity detection and disease screening, such as cleft lip and palate detection. The authors proposed an LH-SVM texture feature fusion method for the automatic recognition and classification of FFUSP. Texture features were extracted from US images through a local binary pattern (LBP) and a histogram of oriented gradients (HOG); subsequently, the features were fused and an SVM was employed for predictive classification. The performance obtained demonstrated that the proposed method was able to effectively predict and classify FFUSP.
With respect to segmentation, Dozen et al. [130] proposed a novel segmentation method called cropping-segmentation-calibration (CSC) of the ventricular septum in fetal cardiac ultrasound videos. This method was based on time-series information of videos and specific information for U-Net output calibration, obtained by cropping the original frame. The experimental results demonstrated a clear improvement in performance with respect to general segmentation methods, such as DeepLab v3+ and U-net.
A novel model-agnostic DL method (MFCY) was proposed by Shozu et al. [131] in order to improve the segmentation performance of the thoracic wall in ultrasound videos. Three standard segmentation networks (DeepLabV3+), pre-trained with the original video sequences and labels of the thoracic wall (TW), thoracic cavity (TC) and whole thorax (WT), were used to perform a preliminary segmentation of the video sequence. Then a multi-frame method (MF) was used to extract predictions for each labeled target. Finally, a cylinder method (CY) integrated the three prediction labels for the final segmentation. The results showed an improvement in the segmentation performance of the thoracic wall in fetal ultrasound videos without altering the neural network structure.
Perez-Gonzalez et al. [132] presented a method, named probabilistic learning coherent point drift (PL-CPD), for automatic registration of real 3D ultrasound fetal brain volumes with a significant degree of occlusion artifacts, noise, and missing data. Different acquisition planes of the brain were preprocessed to extract confidence maps and texture features, which were used for segmentation purposes and to estimate probabilistic weights by means of random forest classification. Point clouds were finally registered using a variation of the coherent point drift (CPD) method that basically assigns probabilistic weights to the point cloud. The experimental results, although obtained from a relatively small dataset, demonstrated the high suitability of the proposed method for automatic registration of fetal head volumes.
A recent deep learning classification model was developed by Rasheed et al. [165] for the automation of fetal head biometry employing a live ultrasound feed. Initially, the head frames, extracted in this case from the ultrasound videos, were classified through the CNN AlexNet. The classified head frames were then validated through occipitofrontal diameter (OFD) measurement. Subsequently, the classified head frames were segmented by a UNet with masks and annotated images. Then, least-squares ellipse (LSE) fitting was employed to compute the biparietal diameter (BPD) and head circumference (HC), as sketched below. This approach enabled accurate computation of the gestational age with very limited interaction of the sonographer with the system (see Table 3).
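Once an ellipse has been fitted to the segmented head contour, both biometric measures follow from its axes. The sketch below is an illustration under stated assumptions, not the exact formulas of [165]: the BPD is taken as the short axis (a simplification of the clinical measurement), the axis values in mm are hypothetical, and the HC uses Ramanujan's approximation for the ellipse perimeter.

```python
# Minimal sketch (assumed axis values; simplified relative to [165]):
# BPD and HC from the axes of a least-squares-fitted head ellipse.
import math

def head_measurements(semi_major_mm, semi_minor_mm):
    a, b = semi_major_mm, semi_minor_mm
    bpd = 2 * b                              # BPD taken as the short axis (simplification)
    h = ((a - b) / (a + b)) ** 2
    # Ramanujan's approximation for the ellipse perimeter:
    hc = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
    return bpd, hc

print(head_measurements(45.0, 35.0))         # hypothetical semi-axes in mm
```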

4.6. Lungs

Computed tomography (CT) is considered the imaging gold standard for pulmonary disease due to its high reliability. However, CT presents a series of disadvantages: it exposes the patient to radiation and is expensive and non-portable. A valid alternative is lung ultrasound (LU), which is cheap, safe, portable, and capable of generating medical images in real time. LU has been used for many years for the evaluation of several lung diseases, including tumors [175,176], interstitial diseases [177,178], post-extubation distress [179], lung edemas [180], and subpleural lesions [181]. In very recent years, research activity into lung ultrasonography has grown significantly due to the worldwide spread of the COVID-19 pandemic. In particular, in COVID-19 evaluation, the use of AI has assumed an increasingly important role in the analysis of images in order to make rapid decisions and relieve the heavy workload of radiologists.
AI techniques reduce operator-dependence, standardize the interpretation of images and provide stable results; they have focused principally on COVID-19 syndrome detection [101,182,183,184,185,186], segmentation of lung regions [133,182,183,184,185,186], and classification of lung diseases as COVID-19-positive or COVID-19-negative [174,182,183,184,185,186,187,188,189].
With respect to detection, Shang et al. [101] proposed a CAD system that performs feature extraction on LUS images through a residual network (ResNet) to assist radiologists in distinguishing COVID-19 syndrome from healthy lungs and non-COVID pneumonia. The architecture of the ResNet, pre-trained using ImageNet, was modified by adding a global average pooling layer and a fully connected layer for feature extraction and classification. Then, the gradient-weighted class activation mapping (Grad-CAM) method was used to create an activation map that highlights the crucial areas, to aid radiologist visualization. In the experiments carried out, the CAD system proved capable of improving radiologists' performance in COVID-19 diagnosis.
An interesting segmentation method for accurate COVID-19 diagnosis has been proposed by Xue et al. [133]. The method is based on a dual-level supervised multiple instance learning module (DSA-MIL) and can predict patient severity from heterogeneous LUS data of multiple lung zones. An original modality alignment contrastive learning module (MA-CLR) is proposed for the combination of LUS data and clinical information. The nonlinear mapping was trained through a staged representation transfer (SRT) strategy. This method demonstrated great potential in real clinical practice for COVID-19 patients, particularly for pregnant women and children.
A classification deep learning procedure was proposed by Tsai et al. [174], who defined a standardized protocol combined with a deep learning model based on a spatial transformer network for automatic pleural effusion classification. Supervised and weakly supervised approaches, based on frame and video ground truth labels, respectively, were used for training the deep learning models. The method was compared with expert clinical image interpretation, with similar accuracy obtained for both, bringing closer the possibility of automatic, efficient and reliable diagnosis of lung diseases.

4.7. Other Organs

Machine learning applied to ultrasound is also being successfully employed for a number of other organs, including:
  • Prostate [190,191,192]: research activity has mainly focused on prostate segmentation in ultrasound images, which is fundamental in biopsy needle placement and radiotherapy treatment planning and quite challenging due to the relatively low quality of US images. In recent years, segmentation based on deep learning techniques has been widely developed due to several benefits compared to classical techniques, which are difficult to apply in real-time image-guided interventions.
  • Thyroid [88,134,193,194,195,196,197,198,199,200,201,202,203,204,205,206]: the risk of malignancy of thyroid nodules can be evaluated on the basis of nodule ultrasonographic characteristics, such as echogenicity and calcification. Much activity has been devoted to automating thyroid nodule detection through CAD systems, mainly based on CNNs.
  • Kidneys [207,208,209,210,211,212,213,214,215,216,217,218,219,220]: US image-based diagnosis is widely used for the detection of kidney abnormalities, including cysts and tumors. For the early diagnosis of kidney diseases, DNNs and SVMs are very often used as machine learning models for abnormality detection and classification.
Table 1, Table 2 and Table 3 summarize the main features of the analyzed studies subdivided by detection, segmentation, and classification, respectively.
Figure 7 presents a summary histogram reporting, for each analyzed organ, the frequency of application of the different ML techniques in the analyzed period. As can be seen, DL techniques based on CNN are clearly the most popular for almost all organs. In particular, for the breast and liver, which are the most investigated organs, CNN is employed in about 63 and 50 percent of the cases, respectively. Only for arteries is a slight predominance of SVM methods observed.

5. ML in US Non-Destructive Evaluation

The application of ultrasound in the mechanical field is principally focused on material NDE. Ultrasound is used in NDE to obtain information about the location and size of subsurface defects in different materials. In general, NDE techniques are employed to extend component life, reduce manufacturing costs and increase safety. The inspections typically involve evaluating a material's response to a physical stimulus, such as ultrasound. This is often carried out from many angles and positions to build an image of the internal structure of a component and identify the presence of damage. Defect characterization is obtained through the inspection of NDE data by a human operator, which can create consistency issues, especially when the data are very complex. As data volume increases, inspection by a single operator becomes very slow, so more operators working in parallel may be necessary, but the results then become more inconsistent. These problems have prompted a push towards automated methods. Since the task is based on a pattern recognition process, machine learning represents the ideal candidate. Machine learning can use all available information and produce a more accurate result, increasing automation and reducing the possibility of human error. In recent years, several machine learning techniques have been employed in order to improve the reliability of ultrasound non-destructive testing [124,221,222,223,224,225,226,227,228].
Pyle et al. [221] proposed a method based on the use of DL for crack characterization in NDE through ultrasound technology. The principal problem for such methods is the scarcity of real defect data to train on. This problem was solved through an efficient hybrid finite element (FE) and ray-based simulation employed to train a CNN to characterize real defects. The effectiveness of the method was demonstrated by sizing surface-breaking cracks in ultrasonic inline pipe inspection, with the deep learning approach obtaining high characterization accuracy compared to traditional image-based sizing.
Oliveira et al. [222] proposed the application of several novelty detection methods (the identification of novel, or unusual, data within a dataset), combined with non-destructive ultrasound testing, to identify structural problems in wind turbine blades. In particular, ultrasound signals were preprocessed to extract relevant features through the discrete Fourier transform (DFT) and PCA, with wavelet decomposition used to reduce noise. Several novelty detection algorithms were applied to detect the presence of defects in wind turbine blades, including k-means, one-class SVM, and distance-based methods; a minimal one-class SVM sketch is given below. The results of these novelty detection methods were compared with those obtained from multi-class classifiers, such as artificial neural networks, and demonstrated very high discrimination efficiencies.
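The following sketch illustrates the one-class SVM idea in the spirit of [222], on synthetic features (the DFT/PCA feature extraction is assumed to have already been performed; all numbers are placeholders): the model is trained only on defect-free examples and flags deviating signals as novelties.

```python
# Minimal sketch (synthetic features, in the spirit of [222]): a one-class
# SVM trained on healthy signals flags defective ones as novelties.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(200, 5))   # features from defect-free blades
suspect = rng.normal(4.0, 1.0, size=(5, 5))     # features from a possibly damaged blade

detector = OneClassSVM(nu=0.05, kernel="rbf").fit(healthy)
print(detector.predict(suspect))                # -1 marks novelties (possible defects)
```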
Li et al. [223] introduced an alternative method of spectral analysis named quantile-frequency analysis (QFA). QFA is based on trigonometric quantile regression and converts a time series into a bivariate function of quantile level and frequency variables. For time-series classification problems, functional principal component analysis (FPCA) is applied to the quantile periodogram, and the resulting projection coefficients are employed as reduced-dimension features. Various machine learning classifiers were trained and tested by cross-validation using the proposed features. The case study analyzed involved ultrasound signals to be automatically classified for the NDE of the structural integrity of aircraft panels made of bonded aluminum layers. Three classifiers were employed to evaluate the performance of FPCA: LDA, quadratic discriminant analysis (QDA), and SVM. The QFA method was found to be more effective than its ordinary periodogram-based counterpart in delivering higher out-of-sample classification accuracy.
Arbaoui et al. [229] proposed a methodology for automatic crack detection and monitoring in concrete structures. The method involves three stages: first, NDE of the specimen is performed and an ultrasonic signal containing information on the presence of defects is acquired; then, a wavelet-based multiresolution analysis is performed to analyze crack sizes at different scales; finally, scalogram features are extracted by a CNN that classifies the defect as a crack or not. The CNN is composed of four stages, each comprising four layers (convolution, batch normalization, ReLU and pooling), and a final FC stage that performs the classification. The method was tested on a public dataset (SDNET2018) using both AlexNet and ResNet50 architectures, achieving good accuracy.
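As an illustration of the scalogram-plus-CNN idea, the following Python sketch computes a continuous wavelet scalogram of an ultrasonic trace with PyWavelets and feeds it to a ResNet50 with a binary crack/no-crack head; the wavelet, scales, and input handling are assumptions and do not reproduce the exact four-stage network of [229].

```python
# Minimal scalogram + CNN classification sketch (assumed configuration).
import numpy as np
import pywt
import torch
import torch.nn as nn
from torchvision import models

signal = np.random.randn(1024)                 # placeholder ultrasonic trace
scales = np.arange(1, 65)
coeffs, _ = pywt.cwt(signal, scales, "morl")   # (64, 1024) wavelet scalogram

# Normalize and replicate to 3 channels for the ImageNet-style backbone.
scalogram = np.abs(coeffs)
scalogram = (scalogram - scalogram.mean()) / (scalogram.std() + 1e-8)
x = torch.tensor(scalogram, dtype=torch.float32)
x = x.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)  # shape (1, 3, 64, 1024)

net = models.resnet50(weights=None)            # pretrained weights optional
net.fc = nn.Linear(net.fc.in_features, 2)      # crack vs. no-crack head
logits = net(x)
print(logits.shape)                            # torch.Size([1, 2])
```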
In the archeology field, Salazar et al. [230] recently published a study investigating a new pattern recognition application: the ML classification of the provenance of archeological ceramics. The method is based on non-destructive ultrasonic testing and data analysis through advanced pattern recognition techniques, including feature ranking, sample augmentation, semi-supervised active learning, and optimal late fusion. From the measured ultrasonic signal, a material signature is constructed that consists of time, frequency, and statistical variables defined on the basis of a material reflectivity model. More precisely, the proposed method processes the ultrasound features extracted from a set of pieces and classifies them by fusing the results of three different classifiers: LDA, RF, and SVM. Three provenance classification problems were investigated: terra sigillata versus non-terra sigillata ceramic shards from the same archeological sites, Iberian ceramic shards from two cities in Spain, and Roman sigillata ceramic shards from two origins. The experimental results demonstrated that the fusion-based method achieved the best results in comparison with LDA, RF, and SVM taken individually.
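The late-fusion step can be approximated in Python with scikit-learn by soft-voting over the three classifiers named in [230]; note that the study employs an optimized fusion rule and real ultrasonic material signatures, whereas this sketch simply averages class probabilities on synthetic features.

```python
# Minimal late-fusion sketch: LDA + RF + SVM combined by soft voting.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the time/frequency/statistical material signature.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

fusion = VotingClassifier(
    estimators=[
        ("lda", LinearDiscriminantAnalysis()),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
    ],
    voting="soft",  # average the predicted class probabilities
)
print(cross_val_score(fusion, X, y, cv=5).mean())
```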

6. Conclusions

In the present review, the most recent ML techniques applied to ultrasound imaging have been illustrated, with a focus on medical diagnostics and NDE. The review commenced with an overview of the ML and ultrasound imaging techniques most employed in the analyzed papers, and was subsequently devoted to the application of such techniques in the fields of medical diagnostics and NDE.
From the analysis of the examined studies, in which a great variety of ML algorithms were tested, we highlighted the widespread employment of deep learning techniques based on CNNs for almost all organs, especially the breast and liver. The use of SVM methods, particularly for arterial disease diagnosis, was also quite frequent.
The main merits of ML over conventional methods can be summarized as follows. First, ML methods enable enhancement of the quality of US images, which is lower than that of other imaging technologies due to the common presence of artifacts and noise. ML approaches also guarantee a more objective evaluation of the data than traditional methods, which rely on analysis by operators, often based on a heuristic approach, and thus avoid consistency issues. In addition, they reduce the time and cost of evaluation and analysis.
The principal benefits of ML are particularly noticeable in the medical diagnostics field, where ultrasound techniques are extensively used for the diagnosis of several kinds of organ diseases. From the papers analyzed, it is evident that ML enables significant improvements in the accuracy of detection and classification of different tissues and diseases, and in the segmentation of several types of organs. Furthermore, it is able to reduce the rate of missed or incorrect diagnoses, because the algorithms manage to find details that could not be identified by medical operators. The diagnostic results are often better than those provided by clinicians, in terms of both precision and quality. In many cases, the usefulness of ML lies in assisting radiologists by providing a “second opinion” in preliminary clinical examinations, which should improve their diagnostic capacity and reduce the time and effort associated with the manual analysis of ultrasound images. As an important consequence, it is anticipated that the availability of reliable automatic techniques will democratize effective access to point-of-care ultrasound (POCUS) diagnostic tools, extending them also to people living in rural zones or developing countries.
A possible drawback highlighted in many of the studies reviewed is the relatively low number of samples present in the databases, which limits the reliability of the results. In many cases, medical diagnosis databases are generated by a single type of device and/or at a single collection site (e.g., institution or hospital), limiting the generalizability of the ML classification models derived from them. Unfortunately, this issue is a peculiar problem of US images because, in present clinical practice, their quality and information content are highly operator-dependent. Finally, the analysis underlines the almost total absence of ML techniques applied to 3D US images, which are increasingly used in modern US diagnosis methods.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sant’Ana, D.; Carneiro Brito Pache, M.; Martins, J.; Astolfi, G.; Pereira Soares, W.; Neves de Melo, S.; da Silva Heimbach, N.; de Moraes Weber, V.; Gonçalves Mateus, R.; Pistori, H. Computer vision system for superpixel classification and segmentation of sheep. Ecol. Inform. 2022, 68, 101551. [Google Scholar]
  2. Chabot, D.; Stapleton, S.; Francis, C. Using Web images to train a deep neural network to detect sparsely distributed wildlife in large volumes of remotely sensed imagery: A case study of polar bears on sea ice. Ecol. Inform. 2022, 68, 101547. [Google Scholar] [CrossRef]
  3. Ahmed, T.; Rahman, T.; Roy, B.; Uddin, J. Drone Detection by Neural Network Using GLCM and SURF Features. J. Inf. Syst. Telecommun. 2021, 9, 15–23. [Google Scholar] [CrossRef]
  4. Saad, A.; Mohamed, A. An integrated human computer interaction scheme for object detection using deep learning. Comput. Electr. Eng. 2021, 96, 107475. [Google Scholar] [CrossRef]
  5. Thakkar, H.; Desai, A.; Singh, P.; Samhitha, K. ReLearner: A Reinforcement Learning-Based Self Driving Car Model Using Gym Environment. In Communications in Computer and Information Science; 1528 CCIS; Springer: Cham, Switzerland, 2022; pp. 399–409. [Google Scholar]
  6. Gadri, S.; Adouane, N. Efficient Traffic Signs Recognition Based on CNN Model for Self-Driving Cars. Lect. Notes Netw. Syst. 2022, 371, 45–54. [Google Scholar]
  7. Heiyanthuduwa, T.; Nikini Umasha Amarapala, K.; Vinura Budara Gunathilaka, K.; Satheesh Ravindu, K.; Wickramarathne, J.; Kasthurirathna, D. VirtualPT: Virtual reality based home care physiotherapy rehabilitation for elderly. In Proceedings of the ICAC 2020—2nd International Conference on Advancements in Computing, Malabe, Sri Lanka, 10–11 December 2020; pp. 311–316. [Google Scholar]
  8. Saitta, A.; Ntalampiras, S. Language-agnostic speech anger identification. In Proceedings of the 2021 44th International Conference on Telecommunications and Signal Processing (TSP 2021), Brno, Czech Republic, 26–28 July 2021; pp. 249–253. [Google Scholar]
  9. Wang, R.; Müller, R. Bioinspired solution to finding passageways in foliage with sonar. Bioinspir. Biomim. 2021, 16, 066022. [Google Scholar] [CrossRef]
  10. Nadimi, N.; Javidan, R.; Layeghi, K. Efficient detection of underwater natural gas pipeline leak based on synthetic aperture sonar (Sas) systems. J. Mar. Sci. Eng. 2021, 9, 1273. [Google Scholar] [CrossRef]
  11. Sun, T.; Jin, J.; Liu, T.; Zhang, J. Active sonar target classification method based on fisher’s dictionary learning. Appl. Sci. 2021, 11, 10635. [Google Scholar] [CrossRef]
  12. Mazeika, L.; Raišutis, R.; Jankauskas, A.; Rekuvienė, R.; Šliteris, R.; Samaitis, V.; Nageswaran, C.; Budimir, M. High sensitivity ultrasonic NDT technique for detecting creep damage at the early stage in power plant steels. Int. J. Press. Vessel. Pip. 2022, 196, 104613. [Google Scholar] [CrossRef]
  13. Netzelmann, U.; Mross, A.; Waschkies, T.; Weber, D.; Toma, E.; Neurohr, H. Nondestructive Testing of the Integrity of Solid Oxide Fuel Cell Stack Elements by Ultrasound and Thermographic Techniques. Energies 2022, 15, 831. [Google Scholar] [CrossRef]
  14. Zhao, H.; Zhang, C.; He, J.; Li, Y.; Li, B.; Jiang, X.; Ta, D. Nondestructive Evaluation of Special Defects Based on Ultrasound Metasurface. Front. Mater. 2022, 8, 552. [Google Scholar] [CrossRef]
  15. Carotenuto, R.; Merenda, M.; Iero, D.; Della Corte, F. Mobile synchronization recovery for ultrasonic indoor positioning. Sensors 2020, 20, 702. [Google Scholar] [CrossRef] [Green Version]
  16. Carotenuto, R.; Merenda, M.; Iero, D.; Corte, F. Simulating signal aberration and ranging error for ultrasonic indoor positioning. Sensors 2020, 20, 3548. [Google Scholar] [CrossRef]
  17. Iula, A. Ultrasound systems for biometric recognition. Sensors 2019, 19, 2317. [Google Scholar] [CrossRef] [Green Version]
  18. Iula, A.; Micucci, M. Experimental validation of a reliable palmprint recognition system based on 2D ultrasound images. Electronics 2019, 8, 1393. [Google Scholar] [CrossRef] [Green Version]
  19. Nardiello, D.; Iula, A. A new recognition procedure for palmprint features extraction from ultrasound images. Lect. Notes Electr. Eng. 2019, 512, 110–118. [Google Scholar]
  20. Yovel, Y.; Franz, M.; Stilz, P.; Schnitzler, H.U. Plant classification from bat-like echolocation signals. PLoS Comput. Biol. 2008, 4, e1000032. [Google Scholar] [CrossRef]
  21. Pujol, O.; Masip, D. Geometry-based ensembles: Toward a structural characterization of the classification boundary. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 1140–1146. [Google Scholar] [CrossRef] [PubMed]
  22. Koley, C.; Midya, B. 3-D object recognition system using ultrasound. In Proceedings of the 3rd International Conference on Intelligent Sensing and Information Processing (ICISIP 2005), Bangalore, India, 14–17 December 2005; pp. 99–104. [Google Scholar]
  23. Ding, J.; Cheng, H.; Huang, J.; Liu, J.; Zhang, Y. Breast ultrasound image classification based on multiple-instance learning. J. Digit. Imaging 2012, 25, 620–627. [Google Scholar] [CrossRef]
  24. Alberti, M.; Balocco, S.; Gatta, C.; Ciompi, F.; Pujol, O.; Silva, J.; Carrillo, X.; Radeva, P. Automatic bifurcation detection in coronary IVUS sequences. IEEE Trans. Biomed. Eng. 2012, 59, 1022–1031. [Google Scholar] [CrossRef] [PubMed]
  25. Barros, R.; Ebecken, N. Development of a ship classification method based on Convolutional neural network and Cyclostationarity Analysis. Mech. Syst. Signal Process. 2022, 170, 108778. [Google Scholar] [CrossRef]
  26. Zhang, L.; Müller, R. Large-scale recognition of natural landmarks with deep learning based on biomimetic sonar echoes. Bioinspir. Biomim. 2022, 17, 026011. [Google Scholar] [CrossRef]
  27. Polap, D.; Wawrzyniak, N.; Wlodarczyk-Sielicka, M. Side-scan sonar analysis using roi analysis and deep neural networks. IEEE Trans. Geosci. Remote Sens. 2022. [Google Scholar] [CrossRef]
  28. Li, S.; Zhao, J.; Zhang, H.; Qu, S. Sub-Bottom Profiler Sonar Image Missing Area Reconstruction Using Multi-Survey Line Patch Group Deep Learning. IEEE Geosci. Remote Sens. Lett. 2022, 19. [Google Scholar] [CrossRef]
  29. Qin, X.; Luo, X.; Wu, Z.; Shang, J.; Zhao, D. Deep Learning-Based High Accuracy Bottom Tracking on 1-D Side-Scan Sonar Data. IEEE Geosci. Remote Sens. Lett. 2022, 19. [Google Scholar] [CrossRef]
  30. Gerg, I.; Monga, V. Structural Prior Driven Regularized Deep Learning for Sonar Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60. [Google Scholar] [CrossRef]
  31. Mahesh, B. Machine Learning Algorithms—A review. Int. J. Sci. Res. 2018, 9, 381–386. [Google Scholar]
  32. Hoare, Z. Naive Bayes classifier: True and estimated errors for 2-class, 2-features case. In Proceedings of the 2006 3rd International IEEE Conference Intelligent Systems, London, UK, 4–6 September 2006; pp. 566–570. [Google Scholar]
  33. Qingyang, W. A Review of Methods Used in Machine Learning and Data Analysis. In Proceedings of the International Conference on Machine Learning and Computing, Zhuhai, China, 22–24 February 2019. [Google Scholar]
  34. Gareth, J.; Witten, D.; Hastie, T.; Tibshirani, R. Tree-Based Methods; Springer: New York, NY, USA, 2013; pp. 303–335. [Google Scholar]
  35. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  36. Bridges, D. The Euclidean distance construction of order homomorphisms. Math. Soc. Sci. 1988, 15, 179–188. [Google Scholar] [CrossRef]
  37. Mousa, A.; Yusof, Y. An improved Chebyshev distance metric for clustering medical images. In AIP Conference Proceedings; AIP Publishing LLC: New York, NY, USA, 2015; Volume 1691. [Google Scholar]
  38. Du, W. Minkowski-type distance measures for generalized orthopair fuzzy sets. Int. J. Intell. Syst. 2018, 33, 802–817. [Google Scholar] [CrossRef]
  39. Zhang, Z. Introduction to machine learning: K-nearest neighbors. Ann. Transl. Med. 2016, 4, 218. [Google Scholar] [CrossRef] [Green Version]
  40. Gokcen, I.; Peng, J. Comparing linear discriminant analysis and support vector machines. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2002; Volume 2457, pp. 104–113. [Google Scholar]
  41. Rodionova, O.; Kucheryavskiy, S.; Pomerantsev, A. Efficient tools for principal component analysis of complex data—A tutorial. Chemom. Intell. Lab. Syst. 2021, 213, 104304. [Google Scholar] [CrossRef]
  42. Guo, Y.; Liu, Y.; Oerlemans, A.; Lao, S.; Wu, S.; Lew, M. Deep learning for visual understanding: A review. Neurocomputing 2016, 187, 27–48. [Google Scholar] [CrossRef]
  43. Shrestha, A.; Mahmood, A. Review of deep learning algorithms and architectures. IEEE Access 2019, 7, 53040–53065. [Google Scholar] [CrossRef]
  44. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2015; Volume 9351, pp. 234–241. [Google Scholar]
  45. Yuan, Z.W.; Zhang, J. Feature extraction and image retrieval based on AlexNet. In Proceedings of the Eighth International Conference on Digital Image Processing (ICDIP 2016), Chengdu, China, 20–22 May 2016; Volume 10033. [Google Scholar]
  46. Wu, Z.; Shen, C.; van den Hengel, A. Wider or Deeper: Revisiting the ResNet Model for Visual Recognition. Pattern Recognit. 2019, 90, 119–133. [Google Scholar] [CrossRef] [Green Version]
  47. Azhari, H. Basics of Biomedical Ultrasound for Engineers; Wiley: Hoboken, NJ, USA, 2010. [Google Scholar]
  48. Zander, D.; Hüske, S.; Hoffmann, B.; Cui, X.; Dong, Y.; Lim, A.; Jenssen, C.; Löwe, A.; Koch, J.B.H.; Dietrich, C.F. Ultrasound Image Optimization (“Knobology”): B-Mode. Ultrasound Int. Open 2020, 6, E14–E24. [Google Scholar] [CrossRef] [PubMed]
  49. Nishimura, R.; Callahan, M.; Schaff, H.; Ilstrup, D.; Miller, F.; Tajik, A. Noninvasive Measurement of Cardiac Output by Continuous-Wave Doppler Echocardiography: Initial Experience and Review of the Literature. Mayo Clin. Proc. 1984, 59, 484–489. [Google Scholar] [CrossRef]
  50. Bonagura, J.; Miller, M.; Darke, P. Doppler echocardiography. I. Pulsed-wave and continuous-wave examinations. Vet. Clin. N. Am. Small Anim. Pract. 1998, 28, 1325–1359. [Google Scholar] [CrossRef]
  51. Arning, C.; Grzyska, U. Color Doppler imaging of cervicocephalic fibromuscular dysplasia. Cardiovasc. Ultrasound 2004, 2, 7. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Shung, K.K. Diagnostic Ultrasound; CRC Press: Boca Raton, FL, USA, 2006. [Google Scholar]
  53. Shi, C.; Luo, X.; Guo, J.; Najdovski, Z.; Fukuda, T.; Ren, H. Three-dimensional intravascular reconstruction techniques based on intravascular ultrasound: A technical review. IEEE J. Biomed. Health Inform. 2018, 22, 806–817. [Google Scholar] [CrossRef]
  54. Gennisson, J.L.; Deffieux, T.; Fink, M.; Tanter, M. Ultrasound elastography: Principles and techniques. Diagn. Interv. Imaging 2013, 94, 487–495. [Google Scholar] [CrossRef]
  55. Iula, A.; Nardiello, D. 3-D Ultrasound Palmprint Recognition System Based on Principal Lines Extracted at Several under Skin Depths. IEEE Trans. Instrum. Meas. 2019, 68, 4653–4662. [Google Scholar] [CrossRef]
  56. Sharma, G.; Dave, R.; Sanadya, J.; Sharma, P.; Sharma, K. Various types and management of breast cancer: An overview. J. Adv. Pharm. Technol. Res. 2010, 1, 109–126. [Google Scholar]
  57. Santiago-Montero, R.; Sossa, H.; Gutiérrez-Hernández, D.; Zamudio, V.; Hernández-Bautista, I.; Valadez-Godínez, S. Novel mathematical model of breast cancer diagnostics using an associative pattern classification. Diagnostics 2020, 10, 136. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Ara, S.; Alam, F.; Rahman, M.; Akhter, S.; Awwal, R.; Hasan, M. Bimodal multiparameter-based approach for benign-malignant classification of breast tumors. Ultrasound Med. Biol. 2015, 41, 2022–2038. [Google Scholar] [CrossRef] [PubMed]
  59. Cao, Z.; Duan, L.; Yang, G.; Yue, T.; Chen, Q.; Fu, H.; Xu, Y. Breast tumor detection in ultrasound images using deep learning. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); 10530 LNCS; Springer: Berlin/Heidelberg, Germany, 2017; pp. 121–128. [Google Scholar]
  60. Gao, Y.; Liu, B.; Zhu, Y.; Chen, L.; Tan, M.; Xiao, X.; Yu, G.; Guo, Y. Detection and recognition of ultrasound breast nodules based on semi-supervised deep learning: A powerful alternative strategy. Quant. Imaging Med. Surg. 2021, 11, 2265–2278. [Google Scholar] [CrossRef] [PubMed]
  61. Yang, X.; Zhou, D.; Zhou, Y.; Huang, Y.; Liu, H. Towards Zero Re-Training for Long-Term Hand Gesture Recognition via Ultrasound Sensing. IEEE J. Biomed. Health Inform. 2019, 23, 1639–1646. [Google Scholar] [CrossRef] [Green Version]
  62. Zheng, J.; Lin, D.; Gao, Z.; Wang, S.; He, M.; Fan, J. Deep Learning Assisted Efficient AdaBoost Algorithm for Breast Cancer Detection and Early Diagnosis. IEEE Access 2020, 8, 96946–96954. [Google Scholar] [CrossRef]
  63. Cao, Z.; Duan, L.; Yang, G.; Yue, T.; Chen, Q. An experimental study on breast lesion detection and classification from ultrasound images using deep learning architectures. BMC Med. Imaging 2019, 19, 51. [Google Scholar] [CrossRef]
  64. Li, J.; Bu, Y.; Lu, S.; Pang, H.; Luo, C.; Liu, Y.; Qian, L. Development of a Deep Learning–Based Model for Diagnosing Breast Nodules With Ultrasound. J. Ultrasound Med. 2021, 40, 513–520. [Google Scholar] [CrossRef]
  65. Wang, S.; Niu, S.; Qu, E.; Forsberg, F.; Wilkes, A.; Sevrukov, A.; Nam, K.; Mattrey, R.; Ojeda-Fournier, H.; Eisenbrey, J. Characterization of indeterminate breast lesions on B-mode ultrasound using automated machine learning models. J. Med. Imaging 2020, 7, 057002. [Google Scholar] [CrossRef]
  66. Gu, P.; Lee, W.M.; Roubidoux, M.; Yuan, J.; Wang, X.; Carson, P. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation. Ultrasonics 2016, 65, 51–58. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Badawy, S.; Mohamed, A.N.; Hefnawy, A.; Zidan, H.; GadAllah, M.; El-Banby, G. Automatic semantic segmentation of breast tumors in ultrasound images based on combining fuzzy logic and deep learning—A feasibility study. PLoS ONE 2021, 16, e0251899. [Google Scholar] [CrossRef] [PubMed]
  68. Hu, Y.; Guo, Y.; Wang, Y.; Yu, J.; Li, J.; Zhou, S.; Chang, C. Automatic tumor segmentation in breast ultrasound images using a dilated fully convolutional network combined with an active contour model. Med. Phys. 2019, 46, 215–228. [Google Scholar] [CrossRef] [Green Version]
  69. Huang, K.; Zhang, Y.; Cheng, H.; Xing, P.; Zhang, B. Semantic segmentation of breast ultrasound image with fuzzy deep learning network and breast anatomy constraints. Neurocomputing 2021, 450, 319–335. [Google Scholar] [CrossRef]
  70. Ilesanmi, A.; Chaumrattanakul, U.; Makhanov, S. A method for segmentation of tumors in breast ultrasound images using the variant enhanced deep learning. Biocybern. Biomed. Eng. 2021, 41, 802–818. [Google Scholar] [CrossRef]
  71. Liao, W.X.; He, P.; Hao, J.; Wang, X.Y.; Yang, R.L.; An, D.; Cui, L.G. Automatic Identification of Breast Ultrasound Image Based on Supervised Block-Based Region Segmentation Algorithm and Features Combination Migration Deep Learning Model. IEEE J. Biomed. Health Inform. 2020, 24, 984–993. [Google Scholar] [CrossRef]
  72. Pourasad, Y.; Zarouri, E.; Parizi, M.; Mohammed, A. Presentation of novel architecture for diagnosis and identifying breast cancer location based on ultrasound images using machine learning. Diagnostics 2021, 11, 1870. [Google Scholar] [CrossRef]
  73. Vakanski, A.; Xian, M.; Freer, P. Attention-Enriched Deep Learning Model for Breast Tumor Segmentation in Ultrasound Images. Ultrasound Med. Biol. 2020, 46, 2819–2833. [Google Scholar] [CrossRef]
  74. Webb, J.; Adusei, S.; Wang, Y.; Samreen, N.; Adler, K.; Meixner, D.; Fazzio, R.; Fatemi, M.; Alizad, A. Comparing deep learning-based automatic segmentation of breast masses to expert interobserver variability in ultrasound imaging. Comput. Biol. Med. 2021, 139, 104966. [Google Scholar] [CrossRef]
  75. Xu, Y.; Wang, Y.; Yuan, J.; Cheng, Q.; Wang, X.; Carson, P. Medical breast ultrasound image segmentation by machine learning. Ultrasonics 2019, 91, 1–9. [Google Scholar] [CrossRef]
  76. Yap, M.; Goyal, M.; Osman, F.; Martí, R.; Denton, E.; Juette, A.; Zwiggelaar, R. Breast ultrasound lesions recognition: End-to-end deep learning approaches. J. Med. Imaging 2019, 6, 011007. [Google Scholar]
  77. Han, L.; Huang, Y.; Dou, H.; Wang, S.; Ahamad, S.; Luo, H.; Liu, Q.; Fan, J.; Zhang, J. Semi-supervised segmentation of lesion from breast ultrasound images with attentional generative adversarial network. Comput. Methods Programs Biomed. 2020, 189, 105275. [Google Scholar] [CrossRef] [PubMed]
  78. Shia, W.C.; Lin, L.S.; Chen, D.R. Classification of malignant tumours in breast ultrasound using unsupervised machine learning approaches. Sci. Rep. 2021, 11, 1418. [Google Scholar] [CrossRef] [PubMed]
  79. Gonzelez-Luna, F.; Hernandez-Lopez, J.; Gomez-Flores, W. A performance evaluation of machine learning techniques for breast ultrasound classification. In Proceedings of the 2019 16th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE 2019), Mexico City, Mexico, 11–13 September 2019. [Google Scholar]
  80. Fleury, E.; Marcomini, K. Performance of machine learning software to classify breast lesions using BI-RADS radiomic features on ultrasound images. Eur. Radiol. Exp. 2019, 3, 34. [Google Scholar] [CrossRef] [Green Version]
  81. Destrempes, F.; Trop, I.; Allard, L.; Chayer, B.; Khoury, M.; Lalonde, L.; Cloutier, G. BI-RADS assessment of solid breast lesions based on quantitative ultrasound and machine learning. In Proceedings of the IEEE International Ultrasonics Symposium (IUS), Glasgow, UK, 6–9 October 2019; Volume 2019, pp. 1909–1911. [Google Scholar]
  82. Mishra, A.; Roy, P.; Bandyopadhyay, S.; Das, S. Breast ultrasound tumour classification: A Machine Learning—Radiomics based approach. Expert Syst. 2021, 38, e12713. [Google Scholar] [CrossRef]
  83. Romeo, V.; Cuocolo, R.; Apolito, R.; Stanzione, A.; Ventimiglia, A.; Vitale, A.; Verde, F.; Accurso, A.; Amitrano, M.; Insabato, L.; et al. Clinical value of radiomics and machine learning in breast ultrasound: A multicenter study for differential diagnosis of benign and malignant lesions. Eur. Radiol. 2021, 31, 9511–9519. [Google Scholar] [CrossRef]
  84. Zhang, Z.; Li, Y.; Wu, W.; Chen, H.; Cheng, L.; Wang, S. Tumor detection using deep learning method in automated breast ultrasound. Biomed. Signal Process. Control 2021, 68, 102677. [Google Scholar] [CrossRef]
  85. Shin, S.; Lee, S.; Yun, I.; Kim, S.; Lee, K. Joint weakly and semi-supervised deep learning for localization and classification of masses in breast ultrasound images. IEEE Trans. Med. Imaging 2019, 38, 762–774. [Google Scholar] [CrossRef] [Green Version]
  86. Tanaka, H.; Chiu, S.W.; Watanabe, T.; Kaoku, S.; Yamaguchi, T. Computer-aided diagnosis system for breast ultrasound images using deep learning. Phys. Med. Biol. 2019, 64, 235013. [Google Scholar] [CrossRef]
  87. Wu, T.; Sultan, L.; Tian, J.; Cary, T.; Sehgal, C. Machine learning for diagnostic ultrasound of triple-negative breast cancer. Breast Cancer Res. Treat. 2019, 173, 365–373. [Google Scholar] [CrossRef]
  88. Zhu, Y.C.; AlZoubi, A.; Jassim, S.; Jiang, Q.; Zhang, Y.; Wang, Y.B.; Ye, X.D.; DU, H. A generic deep learning framework to classify thyroid and breast lesions in ultrasound images. Ultrasonics 2021, 110, 106300. [Google Scholar] [CrossRef] [PubMed]
  89. Chen, C.; Wang, Y.; Niu, J.; Liu, X.; Li, Q.; Gong, X. Domain Knowledge Powered Deep Learning for Breast Cancer Diagnosis Based on Contrast-Enhanced Ultrasound Videos. IEEE Trans. Med. Imaging 2021, 40, 2439–2451. [Google Scholar] [CrossRef]
  90. Marcon, M.; Ciritsis, A.; Rossi, C.; Becker, A.; Berger, N.; Wurnig, M.; Wagner, M.; Frauenfelder, T.; Boss, A. Diagnostic performance of machine learning applied to texture analysis-derived features for breast lesion characterisation at automated breast ultrasound: A pilot study. Eur. Radiol. Exp. 2019, 3, 44. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  91. Wan, K.; Wong, C.; Ip, H.; Fan, D.; Yuen, P.; Fong, H.; Ying, M. Evaluation of the performance of traditional machine learning algorithms, convolutional neural network and AutoML Vision in ultrasound breast lesions classification: A comparative study. Quant. Imaging Med. Surg. 2021, 11, 1381–1393. [Google Scholar] [CrossRef]
  92. Al-Dhabyani, W.; Fahmy, A.; Gomaa, M.; Khaled, H. Deep learning approaches for data augmentation and classification of breast masses using ultrasound images. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 618–627. [Google Scholar] [CrossRef] [Green Version]
  93. Huo, L.; Tan, Y.; Wang, S.; Geng, C.; Li, Y.; Ma, X.; Wang, B.; He, Y.; Yao, C.; Ouyang, T. Machine learning models to improve the differentiation between benign and malignant breast lesions on ultrasound: A multicenter external validation study. Cancer Manag. Res. 2021, 13, 3367–3379. [Google Scholar] [CrossRef] [PubMed]
  94. Ciritsis, A.; Rossi, C.; Eberhard, M.; Marcon, M.; Becker, A.; Boss, A. Automatic classification of ultrasound breast lesions using a deep convolutional neural network mimicking human decision-making. Eur. Radiol. 2019, 29, 5458–5468. [Google Scholar] [CrossRef]
  95. Bajaj, R.; Huang, X.; Kilic, Y.; Jain, A.; Ramasamy, A.; Torii, R.; Moon, J.; Koh, T.; Crake, T.; Parker, M.; et al. A deep learning methodology for the automated detection of end-diastolic frames in intravascular ultrasound images. Int. J. Cardiovasc. Imaging 2021, 37, 1825–1837. [Google Scholar] [CrossRef]
  96. Hughes, J.; Yuan, N.; He, B.; Ouyang, J.; Ebinger, J.; Botting, P.; Lee, J.; Theurer, J.; Tooley, J.; Nieman, K.; et al. Deep learning evaluation of biomarkers from echocardiogram videos. EBioMedicine 2021, 73, 103613. [Google Scholar] [CrossRef]
  97. Kagiyama, N.; Shrestha, S.; Cho, J.; Khalil, M.; Singh, Y.; Challa, A.; Casaclang-Verzosa, G.; Sengupta, P. A low-cost texture-based pipeline for predicting myocardial tissue remodeling and fibrosis using cardiac ultrasound: Texture-based myocardial tissue characterization using cardiac ultrasound. EBioMedicine 2020, 54, 102726. [Google Scholar] [CrossRef] [PubMed]
  98. Yu, C.J.; Yeh, H.J.; Chang, C.C.; Tang, J.H.; Kao, W.Y.; Chen, W.C.; Huang, Y.J.; Li, C.H.; Chang, W.H.; Lin, Y.T.; et al. Lightweight deep neural networks for cholelithiasis and cholecystitis detection by point-of-care ultrasound. Comput. Methods Programs Biomed. 2021, 211, 106382. [Google Scholar] [CrossRef] [PubMed]
  99. Maraci, M.; Yaqub, M.; Craik, R.; Beriwal, S.; Self, A.; Von Dadelszen, P.; Papageorghiou, A.; Noble, J. Toward point-of-care ultrasound estimation of fetal gestational age from the trans-cerebellar diameter using CNN-based ultrasound image analysis. J. Med. Imaging 2020, 7, 014501. [Google Scholar]
  100. Wang, X.; Liu, Z.; Du, Y.; Diao, Y.; Liu, P.; Lv, G.; Zhang, H. Recognition of Fetal Facial Ultrasound Standard Plane Based on Texture Feature Fusion. Comput. Math. Methods Med. 2021, 2021, 6656942. [Google Scholar] [CrossRef] [PubMed]
  101. Shang, S.; Huang, C.; Yan, W.; Chen, R.; Cao, J.; Zhang, Y.; Guo, Y.; Du, G. Performance of a computer aided diagnosis system for SARS-CoV-2 pneumonia based on ultrasound images. Eur. J. Radiol. 2022, 146, 110066. [Google Scholar] [CrossRef] [PubMed]
  102. Jana, B.; Oswal, K.; Mitra, S.; Saha, G.; Banerjee, S. Detection of peripheral arterial disease using Doppler spectrogram based expert system for Point-of-Care applications. Biomed. Signal Process. Control 2019, 54, 101599. [Google Scholar] [CrossRef]
  103. Sakar, B.; Serbes, G.; Aydin, N. Emboli detection using a wrapper-based feature selection algorithm with multiple classifiers. Biomed. Signal Process. Control 2022, 71, 103080. [Google Scholar] [CrossRef]
  104. Sofian, H.; Than, J.; Mohammad, S.; Noor, N. Calcification detection of coronary artery disease in intravascular ultrasound image: Deep feature learning approach. Int. J. Integr. Eng. 2018, 10, 43–57. [Google Scholar] [CrossRef]
  105. Sofian, H.; Than, J.; Mohamad, S.; Noor, N. Calcification detection for intravascular ultrasound image using direct acyclic graph architecture: Pre-Trained model for 1-channel image. Indones. J. Electr. Eng. Comput. Sci. 2021, 22, 787–794. [Google Scholar] [CrossRef]
  106. Sofian, H.; Ming, J.; Muhammad, S.; Noor, N. Calcification detection using convolutional neural network architectures in intravascular ultrasound images. Indones. J. Electr. Eng. Comput. Sci. 2019, 17, 1313–1321. [Google Scholar] [CrossRef]
  107. Willemink, M.; Varga-Szemes, A.; Schoepf, U.; Codari, M.; Nieman, K.; Fleischmann, D.; Mastrodicasa, D. Emerging methods for the characterization of ischemic heart disease: Ultrafast Doppler angiography, micro-CT, photon-counting CT, novel MRI and PET techniques, and artificial intelligence. Eur. Radiol. Exp. 2021, 5, 12. [Google Scholar] [CrossRef]
  108. Cui, H.; Xia, Y.; Zhang, Y. Supervised machine learning for coronary artery lumen segmentation in intravascular ultrasound images. Int. J. Numer. Methods Biomed. Eng. 2020, 36, e3348. [Google Scholar] [CrossRef] [PubMed]
  109. Zhang, C.; Guo, X.; Guo, X.; Molony, D.; Li, H.; Samady, H.; Giddens, D.; Athanasiou, L.; Tang, D.; Nie, R.; et al. Machine learning model comparison for automatic segmentation of intracoronary optical coherence tomography and plaque cap thickness quantification. CMES—Comput. Model. Eng. Sci. 2020, 123, 631–646. [Google Scholar] [CrossRef]
  110. Bajaj, R.; Huang, X.; Kilic, Y.; Ramasamy, A.; Jain, A.; Ozkor, M.; Tufaro, V.; Safi, H.; Erdogan, E.; Serruys, P.; et al. Advanced deep learning methodology for accurate, real-time segmentation of high-resolution intravascular ultrasound images. Int. J. Cardiol. 2021, 339, 185–191. [Google Scholar] [CrossRef]
  111. Blanco, P.; Ziemer, P.; Bulant, C.; Ueki, Y.; Bass, R.; Räber, L.; Lemos, P.; García-García, H. Fully automated lumen and vessel contour segmentation in intravascular ultrasound datasets. Med. Image Anal. 2022, 75, 102262. [Google Scholar] [CrossRef] [PubMed]
  112. Zhou, R.; Guo, F.; Azarpazhooh, M.; Hashemi, S.; Cheng, X.; Spence, J.; Ding, M.; Fenster, A. Deep Learning-Based Measurement of Total Plaque Area in B-Mode Ultrasound Images. IEEE J. Biomed. Health Inform. 2021, 25, 2967–2977. [Google Scholar] [CrossRef]
  113. Lee, J.G.; Ko, J.; Hae, H.; Kang, S.J.; Kang, D.Y.; Lee, P.; Ahn, J.M.; Park, D.W.; Lee, S.W.; Kim, Y.H.; et al. Intravascular ultrasound-based machine learning for predicting fractional flow reserve in intermediate coronary artery lesions. Atherosclerosis 2020, 292, 171–177. [Google Scholar] [CrossRef] [PubMed]
  114. Guvenir Torun, S.; Torun, H.; Hansen, H.; Gandini, G.; Berselli, I.; Codazzi, V.; de Korte, C.; van der Steen, A.; Migliavacca, F.; Chiastra, C.; et al. Multicomponent Mechanical Characterization of Atherosclerotic Human Coronary Arteries: An Experimental and Computational Hybrid Approach. Front. Physiol. 2021, 12, 1480. [Google Scholar] [CrossRef]
  115. Boyd, C.; Brown, G.; Kleinig, T.; Dawson, J.; McDonnell, M.; Jenkinson, M.; Bezak, E. Machine learning quantitation of cardiovascular and cerebrovascular disease: A systematic review of clinical applications. Diagnostics 2021, 11, 551. [Google Scholar] [CrossRef]
  116. Savaş, S.; Topaloglu, N.; Kazcı, O.; Koşar, P. Classification of Carotid Artery Intima Media Thickness Ultrasound Images with Deep Learning. J. Med. Syst. 2019, 43, 273. [Google Scholar] [PubMed]
  117. Skandha, S.; Gupta, S.; Saba, L.; Koppula, V.; Johri, A.; Khanna, N.; Mavrogeni, S.; Laird, J.; Pareek, G.; Miner, M.; et al. 3-D optimized classification and characterization artificial intelligence paradigm for cardiovascular/stroke risk stratification using carotid ultrasound-based delineated plaque: Atheromatic™ 2.0. Comput. Biol. Med. 2020, 125, 103958. [Google Scholar] [CrossRef] [PubMed]
  118. Hsu, K.C.; Lin, C.H.; Johnson, K.; Liu, C.H.; Chang, T.Y.; Huang, K.L.; Fann, Y.C.; Lee, T.H. Autodetect extracranial and intracranial artery stenosis by machine learning using ultrasound. Comput. Biol. Med. 2020, 116, 103569. [Google Scholar] [CrossRef] [PubMed]
  119. Saba, L.; Sanagala, S.; Gupta, S.; Koppula, V.; Laird, J.; Viswanathan, V.; Sanches, M.; Kitas, G.; Johri, A.; Sharma, N.; et al. A Multicenter Study on Carotid Ultrasound Plaque Tissue Characterization and Classification Using Six Deep Artificial Intelligence Models: A Stroke Application. IEEE Trans. Instrum. Meas. 2021, 70, 1–12. [Google Scholar] [CrossRef]
  120. Luo, X.; Ara, L.; Ding, H.; Rollins, D.; Motaganahalli, R.; Sawchuk, A. Computational methods to automate the initial interpretation of lower extremity arterial Doppler and duplex carotid ultrasound studies. J. Vasc. Surg. 2021, 74, 988–996.e1. [Google Scholar] [CrossRef] [PubMed]
  121. Klingensmith, J.; Haggard, A.; Ralston, J.; Qiang, B.; Fedewa, R.; Elsharkawy, H.; Geoffrey Vince, D. Tissue classification in intercostal and paravertebral ultrasound using spectral analysis of radiofrequency backscatter. J. Med. Imaging 2019, 6, 047001. [Google Scholar] [CrossRef] [PubMed]
  122. Khanna, N.; Jamthikar, A.; Gupta, D.; Piga, M.; Saba, L.; Carcassi, C.; Giannopoulos, A.; Nicolaides, A.; Laird, J.; Suri, H.; et al. Rheumatoid Arthritis: Atherosclerosis Imaging and Cardiovascular Risk Assessment Using Machine and Deep Learning–Based Tissue Characterization. Curr. Atheroscler. Rep. 2019, 21, 7. [Google Scholar] [CrossRef]
  123. Jamthikar, A.; Gupta, D.; Khanna, N.; Saba, L.; Araki, T.; Viskovic, K.; Suri, H.; Gupta, A.; Mavrogeni, S.; Turk, M.; et al. A low-cost machine learning-based cardiovascular/stroke risk assessment system: Integration of conventional factors with image phenotypes. Cardiovasc. Diagn. Ther. 2019, 9, 420–430. [Google Scholar] [CrossRef] [Green Version]
  124. Guo, X.; Maehara, A.; Matsumura, M.; Wang, L.; Zheng, J.; Samady, H.; Mintz, G.; Giddens, D.; Tang, D. Predicting plaque vulnerability change using intravascular ultrasound + optical coherence tomography image-based fluid–structure interaction models and machine learning methods with patient follow-up data: A feasibility study. BioMedical Eng. Online 2021, 20, 34. [Google Scholar] [CrossRef] [PubMed]
  125. Gudigar, A.; Nayak, S.; Samanth, J.; Raghavendra, U.; Ashwal, A.; Barua, P.; Hasan, M.; Ciaccio, E.; Tan, R.S.; Rajendra Acharya, U. Recent trends in artificial intelligence-assisted coronary atherosclerotic plaque characterization. Int. J. Environ. Res. Public Health 2021, 18, 10003. [Google Scholar] [CrossRef]
  126. Golemati, S.; Patelaki, E.; Gastounioti, A.; Andreadis, I.; Liapis, C.; Nikita, K. Motion synchronisation patterns of the carotid atheromatous plaque from B-mode ultrasound. Sci. Rep. 2020, 10, 11221. [Google Scholar] [CrossRef]
  127. Coelewij, L.; Waddington, K.; Robinson, G.; Chocano, E.; McDonnell, T.; Farinha, F.; Peng, J.; Dönnes, P.; Smith, E.; Croca, S.; et al. Serum Metabolomic Signatures Can Predict Subclinical Atherosclerosis in Patients with Systemic Lupus Erythematosus. Arterioscler. Thromb. Vasc. Biol. 2021, 41, 1446–1458. [Google Scholar] [CrossRef] [PubMed]
  128. Lo Vercio, L.; del Fresno, M.; Larrabide, I. Lumen-intima and media-adventitia segmentation in IVUS images using supervised classifications of arterial layers and morphological structures. Comput. Methods Programs Biomed. 2019, 177, 113–121. [Google Scholar] [CrossRef]
  129. Cha, D.; Kang, T.; Min, J.; Joo, I.; Sinn, D.; Ha, S.; Kim, K.; Lee, G.; Yi, J. Deep learning-based automated quantification of the hepatorenal index for evaluation of fatty liver by ultrasonography. Ultrasonography 2021, 40, 565–574. [Google Scholar] [CrossRef]
  130. Dozen, A.; Komatsu, M.; Sakai, A.; Komatsu, R.; Shozu, K.; Machino, H.; Yasutomi, S.; Arakaki, T.; Asada, K.; Kaneko, S.; et al. Image segmentation of the ventricular septum in fetal cardiac ultrasound videos based on deep learning using time-series information. Biomolecules 2020, 10, 1526. [Google Scholar] [CrossRef]
  131. Shozu, K.; Komatsu, M.; Sakai, A.; Komatsu, R.; Dozen, A.; Machino, H.; Yasutomi, S.; Arakaki, T.; Asada, K.; Kaneko, S.; et al. Model-agnostic method for thoracic wall segmentation in fetal ultrasound videos. Biomolecules 2020, 10, 1691. [Google Scholar] [CrossRef] [PubMed]
  132. Perez-Gonzalez, J.; Arámbula Cosío, F.; Huegel, J.; Medina-Bañuelos, V. Probabilistic Learning Coherent Point Drift for 3D Ultrasound Fetal Head Registration. Comput. Math. Methods Med. 2020, 2020, 4271519. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  133. Xue, W.; Cao, C.; Liu, J.; Duan, Y.; Cao, H.; Wang, J.; Tao, X.; Chen, Z.; Wu, M.; Zhang, J.; et al. Modality alignment contrastive learning for severity assessment of COVID-19 from lung ultrasound and clinical information. Med. Image Anal. 2021, 69, 101975. [Google Scholar] [CrossRef]
  134. Ouahabi, A.; Taleb-Ahmed, A. Deep learning for real-time semantic segmentation: Application in ultrasound imaging. Pattern Recognit. Lett. 2021, 144, 27–34. [Google Scholar] [CrossRef]
  135. Penatti, O.; Werneck, R.; de Almeida, W.; Stein, B.; Pazinato, D.; Mendes Júnior, P.; Torres, R.; Rocha, A. Mid-level image representations for real-time heart view plane classification of echocardiograms. Comput. Biol. Med. 2015, 66, 66–81. [Google Scholar] [CrossRef]
  136. Sulas, E.; Urru, M.; Tumbarello, R.; Raffo, L.; Pani, D. Automatic detection of complete and measurable cardiac cycles in antenatal pulsed-wave Doppler signals. Comput. Methods Programs Biomed. 2020, 190, 105336. [Google Scholar] [CrossRef]
  137. Farahani, N.; Enayati, M.; Sundaram, D.; Damani, D.; Kaggal, V.; Zacher, A.; Geske, J.; Kane, G.; Arunachalam, S.; Pasupathy, K.; et al. Application of machine learning for detection of hypertrophic cardiomyopathy patients from echocardiogram measurements. In Proceedings of the 2021 Design of Medical Devices Conference (DMD 2021), Minneapolis, MN, USA, 12–15 April 2021. [Google Scholar]
  138. Hur, D.; Sugeng, L. Non-invasive Multimodality Cardiovascular Imaging of the Right Heart and Pulmonary Circulation in Pulmonary Hypertension. Front. Cardiovasc. Med. 2019, 6, 24. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  139. Vaseli, H.; Liao, Z.; Abdi, A.; Girgis, H.; Behnami, D.; Luong, C.; Dezaki, F.; Dhungel, N.; Rohling, R.; Gin, K.; et al. Designing lightweight deep learning models for echocardiography view classification. In Progress in Biomedical Optics and Imaging; Proceedings of SPIE; SPIE: Philadelphia, PA, USA, 2019; Volume 10951. [Google Scholar]
  140. Puyol-Anton, E.; Ruijsink, B.; Gerber, B.; Amzulescu, M.; Langet, H.; De Craene, M.; Schnabel, J.; Piro, P.; King, A. Regional Multi-View Learning for Cardiac Motion Analysis: Application to Identification of Dilated Cardiomyopathy Patients. IEEE Trans. Biomed. Eng. 2019, 66, 956–966. [Google Scholar] [CrossRef] [Green Version]
  141. Xi, P.; Guan, H.; Shu, C.; Borgeat, L.; Goubran, R. An integrated approach for medical abnormality detection using deep patch convolutional neural networks. Vis. Comput. 2020, 36, 1869–1882. [Google Scholar] [CrossRef]
  142. Mahalingam, D.; Chelis, L.; Nizamuddin, I.; Lee, S.; Kakolyris, S.; Halff, G.; Washburn, K.; Attwood, K.; Fahad, I.; Grigorieva, J.; et al. Detection of hepatocellular carcinoma in a high-risk population by a mass spectrometry-based test. Cancers 2021, 13, 3109. [Google Scholar] [CrossRef]
  143. Brehar, R.; Mitrea, D.A.; Vancea, F.; Marita, T.; Nedevschi, S.; Lupsor-Platon, M.; Rotaru, M.; Badea, R. Comparison of deep-learning and conventional machine-learning methods for the automatic recognition of the hepatocellular carcinoma areas from ultrasound images. Sensors 2020, 20, 3085. [Google Scholar] [CrossRef]
  144. Schmauch, B.; Herent, P.; Jehanno, P.; Dehaene, O.; Saillard, C.; Aubé, C.; Luciani, A.; Lassau, N.; Jégou, S. Diagnosis of focal liver lesions from ultrasound using deep learning. Diagn. Interv. Imaging 2019, 100, 227–233. [Google Scholar] [CrossRef] [PubMed]
  145. Zamanian, H.; Mostaar, A.; Azadeh, P.; Ahmadi, M. Implementation of combinational deep learning algorithm for non-alcoholic fatty liver classification in ultrasound images. J. Biomed. Phys. Eng. 2021, 11, 73–84. [Google Scholar] [CrossRef] [PubMed]
  146. Wang, W.; Zhang, J.C.; Tian, W.S.; Chen, L.D.; Zheng, Q.; Hu, H.T.; Wu, S.S.; Guo, Y.; Xie, X.Y.; Lu, M.D.; et al. Shear wave elastography-based ultrasomics: Differentiating malignant from benign focal liver lesions. Abdom. Radiol. 2021, 46, 237–248. [Google Scholar] [CrossRef]
  147. Peng, J.; Peng, Y.; Lin, P.; Wan, D.; Qin, H.; Li, X.; Wang, X.; He, Y.; Yang, H. Differentiating infected focal liver lesions from malignant mimickers: Value of ultrasound-based radiomics. Clin. Radiol. 2022, 77, 104–113. [Google Scholar] [CrossRef]
  148. Li, W.; Lv, X.Z.; Zheng, X.; Ruan, S.M.; Hu, H.T.; Chen, L.D.; Huang, Y.; Li, X.; Zhang, C.Q.; Xie, X.Y.; et al. Machine Learning-Based Ultrasomics Improves the Diagnostic Performance in Differentiating Focal Nodular Hyperplasia and Atypical Hepatocellular Carcinoma. Front. Oncol. 2021, 11, 863. [Google Scholar] [CrossRef] [PubMed]
  149. Brattain, L.; Ozturk, A.; Telfer, B.; Dhyani, M.; Grajo, J.; Samir, A. Image Processing Pipeline for Liver Fibrosis Classification Using Ultrasound Shear Wave Elastography. Ultrasound Med. Biol. 2020, 46, 2667–2676. [Google Scholar] [CrossRef]
  150. Byra, M.; Styczynski, G.; Szmigielski, C.; Kalinowski, P.; Michalowski, L.; Paluszkiewicz, R.; Ziarkiewicz-Wróblewska, B.; Zieniewicz, K.; Sobieraj, P.; Nowicki, A. Transfer learning with deep convolutional neural network for liver steatosis assessment in ultrasound images. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1895–1903. [Google Scholar] [CrossRef] [Green Version]
  151. Che, H.; Brown, L.; Foran, D.; Nosher, J.; Hacihaliloglu, I. Liver disease classification from ultrasound using multi-scale CNN. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1537–1548. [Google Scholar] [CrossRef]
  152. Chou, T.H.; Yeh, H.J.; Chang, C.C.; Tang, J.H.; Kao, W.Y.; Su, I.C.; Li, C.H.; Chang, W.H.; Huang, C.K.; Sufriyana, H.; et al. Deep learning for abdominal ultrasound: A computer-aided diagnostic system for the severity of fatty liver. J. Chin. Med. Assoc. JCMA 2021, 84, 842–850. [Google Scholar] [CrossRef] [PubMed]
  153. Kim, T.; Lee, D.; Park, E.K.; Choi, S. Deep learning techniques for fatty liver using multi-view ultrasound images scanned by different scanners: Development and validation study. JMIR Med. Inform. 2021, 9, e30066. [Google Scholar] [CrossRef]
  154. Mitrea, D.; Badea, R.; Mitrea, P.; Brad, S.; Nedevschi, S. Hepatocellular carcinoma automatic diagnosis within ceus and b-mode ultrasound images using advanced machine learning methods. Sensors 2021, 21, 2202. [Google Scholar] [CrossRef] [PubMed]
  155. Neogi, N.; Adhikari, A.; Roy, M. Use of a novel set of features based on texture anisotropy for identification of liver steatosis from ultrasound images: A simple method. Multimed. Tools Appl. 2019, 78, 11105–11127. [Google Scholar] [CrossRef]
  156. Zhang, H.; Guo, L.; Wang, D.; Wang, J.; Bao, L.; Ying, S.; Xu, H.; Shi, J. Multi-Source Transfer Learning Via Multi-Kernel Support Vector Machine plus for B-Mode Ultrasound-Based Computer-Aided Diagnosis of Liver Cancers. IEEE J. Biomed. Health Inform. 2021, 25, 3874–3885. [Google Scholar] [CrossRef]
  157. Yang, Q.; Wei, J.; Hao, X.; Kong, D.; Yu, X.; Jiang, T.; Xi, J.; Cai, W.; Luo, Y.; Jing, X.; et al. Improving B-mode ultrasound diagnostic performance for focal liver lesions using deep learning: A multicentre study. EBioMedicine 2020, 56, 102777. [Google Scholar] [CrossRef] [PubMed]
  158. Donald, I.; Macvicar, J.; Brown, T. Investigation of Abdominal Masses by pulsed ultrasound. Lancet 1958, 271, 1188–1195. [Google Scholar] [CrossRef]
  159. Gudigar, A.; Samanth, J.; Raghavendra, U.; Dharmik, C.; Vasudeva, A.; Padmakumar, R.; Tan, R.S.; Ciaccio, E.; Molinari, F.; Rajendra Acharya, U. Local Preserving Class Separation Framework to Identify Gestational Diabetes Mellitus Mother Using Ultrasound Fetal Cardiac Image. IEEE Access 2020, 8, 229043–229051. [Google Scholar] [CrossRef]
  160. Kim, H.; Lee, S.; Kwon, J.Y.; Park, Y.; Kim, K.; Seo, J. Automatic evaluation of fetal head biometry from ultrasound images using machine learning. Physiol. Meas. 2019, 40, 065009. [Google Scholar] [CrossRef] [Green Version]
  161. Liu, S.; Sun, Y.; Luo, N. Doppler Ultrasound Imaging Combined with Fetal Heart Detection in Predicting Fetal Distress in Pregnancy-Induced Hypertension under the Guidance of Artificial Intelligence Algorithm. J. Healthc. Eng. 2021, 2021, 4405189. [Google Scholar] [CrossRef]
  162. Qu, R.; Xu, G.; Ding, C.; Jia, W.; Sun, M. Deep Learning-Based Methodology for Recognition of Fetal Brain Standard Scan Planes in 2D Ultrasound Images. IEEE Access 2020, 8, 44443–44451. [Google Scholar] [CrossRef]
  163. Sahli, H.; Mouelhi, A.; Ben Slama, A.; Sayadi, M.; Rachdi, R. Supervised classification approach of biometric measures for automatic fetal defect screening in head ultrasound images. J. Med. Eng. Technol. 2019, 43, 279–286. [Google Scholar] [CrossRef]
  164. Zhu, F.; Liu, M.; Wang, F.; Qiu, D.; Li, R.; Dai, C. Automatic measurement of fetal femur length in ultrasound images: A comparison of random forest regression model and SegNet. Math. Biosci. Eng. 2021, 18, 7790–7805. [Google Scholar] [CrossRef] [PubMed]
  165. Rasheed, K.; Junejo, F.; Malik, A.; Saqib, M. Automated Fetal Head Classification and Segmentation Using Ultrasound Video. IEEE Access 2021, 9, 160249–160267. [Google Scholar] [CrossRef]
  166. Torrents-Barrena, J.; Monill, N.; Piella, G.; Gratacós, E.; Eixarch, E.; Ceresa, M.; González Ballester, M. Assessment of Radiomics and Deep Learning for the Segmentation of Fetal and Maternal Anatomy in Magnetic Resonance Imaging and Ultrasound. Acad. Radiol. 2021, 28, 173–188. [Google Scholar] [CrossRef] [PubMed]
  167. Xia, T.H.; Tan, M.; Li, J.H.; Wang, J.J.; Wu, Q.Q.; Kong, D.X. Establish a normal fetal lung gestational age grading model and explore the potential value of deep learning algorithms in fetal lung maturity evaluation. Chin. Med. J. 2021, 134, 1828–1837. [Google Scholar] [CrossRef]
  168. Crockart, I.; Brink, L.; du Plessis, C.; Odendaal, H. Classification of intrauterine growth restriction at 34–38 weeks gestation with machine learning models. Inform. Med. Unlocked 2021, 23, 100533. [Google Scholar] [CrossRef]
  169. Feng, M.; Wan, L.; Li, Z.; Qing, L.; Qi, X. Fetal Weight Estimation via Ultrasound Using Machine Learning. IEEE Access 2019, 7, 87783–87791. [Google Scholar] [CrossRef]
  170. Meng, Q.; Matthew, J.; Zimmer, V.; Gomez, A.; Lloyd, D.; Rueckert, D.; Kainz, B. Mutual Information-Based Disentangled Neural Networks for Classifying Unseen Categories in Different Domains: Application to Fetal Ultrasound Imaging. IEEE Trans. Med. Imaging 2021, 40, 722–734. [Google Scholar] [CrossRef]
  171. Miyagi, Y.; Hata, T.; Bouno, S.; Koyanagi, A.; Miyake, T. Recognition of fetal facial expressions using artificial intelligence deep learning. Donald Sch. J. Ultrasound Obstet. Gynecol. 2021, 15, 223–228. [Google Scholar]
  172. Miyagi, Y.; Hata, T.; Bouno, S.; Koyanagi, A.; Miyake, T. Recognition of facial expression of fetuses by artificial intelligence (AI). J. Perinat. Med. 2021, 49, 596–603. [Google Scholar] [CrossRef] [PubMed]
  173. Sridar, P.; Kumar, A.; Quinton, A.; Nanan, R.; Kim, J.; Krishnakumar, R. Decision Fusion-Based Fetal Ultrasound Image Plane Classification Using Convolutional Neural Networks. Ultrasound Med. Biol. 2019, 45, 1259–1273. [Google Scholar] [CrossRef] [PubMed]
  174. Tsai, C.H.; van der Burgt, J.; Vukovic, D.; Kaur, N.; Demi, L.; Canty, D.; Wang, A.; Royse, A.; Royse, C.; Haji, K.; et al. Automatic deep learning-based pleural effusion classification in lung ultrasound images for respiratory pathology diagnosis. Phys. Medica 2021, 83, 38–45. [Google Scholar] [CrossRef]
  175. Chen, C.H.; Lee, Y.W.; Huang, Y.S.; Lan, W.R.; Chang, R.F.; Tu, C.Y.; Chen, C.Y.; Liao, W.C. Computer-aided diagnosis of endobronchial ultrasound images using convolutional neural network. Comput. Methods Programs Biomed. 2019, 177, 175–182. [Google Scholar] [CrossRef] [PubMed]
  176. Chang, Y.; Lafata, K.; Segars, W.; Yin, F.F.; Ren, L. Development of realistic multi-contrast textured XCAT (MT-XCAT) phantoms using a dual-discriminator conditional-generative adversarial network (D-CGAN). Phys. Med. Biol. 2020, 65, 065009. [Google Scholar] [CrossRef]
  177. Zhou, B.; Bartholmai, B.; Kalra, S.; Zhang, X. Predicting lung mass density of patients with interstitial lung disease and healthy subjects using deep neural network and lung ultrasound surface wave elastography. J. Mech. Behav. Biomed. Mater. 2020, 104, 103682. [Google Scholar] [CrossRef]
  178. Tomlinson, G.; Thomas, N.; Chain, B.; Best, K.; Simpson, N.; Hardavella, G.; Brown, J.; Bhowmik, A.; Navani, N.; Janes, S.; et al. Transcriptional profiling of endobronchial ultrasound-guided lymph node samples aids diagnosis of mediastinal lymphadenopathy. Chest 2016, 149, 535–544. [Google Scholar] [CrossRef] [Green Version]
  179. Silva, S.; Ait Aissa, D.; Cocquet, P.; Hoarau, L.; Ruiz, J.; Ferre, F.; Rousset, D.; Mora, M.; Mari, A.; Fourcade, O.; et al. Combined Thoracic Ultrasound Assessment during a Successful Weaning Trial Predicts Postextubation Distress. Anesthesiology 2017, 127, 666–674. [Google Scholar] [CrossRef]
  180. Wang, X.; Burzynski, J.; Hamilton, J.; Rao, P.; Weitzel, W.; Bull, J. Quantifying lung ultrasound comets with a convolutional neural network: Initial clinical results. Comput. Biol. Med. 2019, 107, 39–46. [Google Scholar] [CrossRef] [PubMed]
  181. Xu, Y.; Zhang, Y.; Bi, K.; Ning, Z.; Xu, L.; Shen, M.; Deng, G.; Wang, Y. Boundary Restored Network for Subpleural Pulmonary Lesion Segmentation on Ultrasound Images at Local and Global Scales. J. Digit. Imaging 2020, 33, 1155–1166. [Google Scholar] [CrossRef]
  182. Suri, J.; Agarwal, S.; Gupta, S.; Puvvula, A.; Biswas, M.; Saba, L.; Bit, A.; Tandel, G.; Agarwal, M.; Patrick, A.; et al. A narrative review on characterization of acute respiratory distress syndrome in COVID-19-infected lungs using artificial intelligence. Comput. Biol. Med. 2021, 130, 104210. [Google Scholar] [CrossRef] [PubMed]
  183. Born, J.; Beymer, D.; Rajan, D.; Coy, A.; Mukherjee, V.; Manica, M.; Prasanna, P.; Ballah, D.; Guindy, M.; Shaham, D.; et al. On the role of artificial intelligence in medical imaging of COVID-19. Patterns 2021, 2, 100269. [Google Scholar] [CrossRef]
  184. Alhasan, M.; Hasaneen, M. Digital imaging, technologies and artificial intelligence applications during COVID-19 pandemic. Comput. Med. Imaging Graph. 2021, 91, 101933. [Google Scholar] [CrossRef]
  185. Li, W.; Deng, X.; Shao, H.; Wang, X. Deep learning applications for COVID-19 analysis: A state-of-the-art survey. CMES—Comput. Model. Eng. Sci. 2021, 129, 65–98. [Google Scholar] [CrossRef]
  186. McDermott, C.; Łącki, M.; Sainsbury, B.; Henry, J.; Filippov, M.; Rossa, C. Sonographic Diagnosis of COVID-19: A Review of Image Processing for Lung Ultrasound. Front. Big Data 2021, 4, 612561. [Google Scholar] [CrossRef] [PubMed]
187. Kallel, A.; Rekik, M.; Khemakhem, M. Hybrid-based framework for COVID-19 prediction via federated machine learning models. J. Supercomput. 2021, 78, 7078–7105.
188. Cossio, M.; Gilardino, R. Would the Use of Artificial Intelligence in COVID-19 Patient Management Add Value to the Healthcare System? Front. Med. 2021, 8, 34.
189. Chandra, G.; Challa, M. AE-CNN Based Supervised Image Classification. Commun. Comput. Inf. Sci. 2021, 1378 CCIS, 434–442.
190. Girum, K.; Lalande, A.; Hussain, R.; Créhange, G. A deep learning method for real-time intraoperative US image segmentation in prostate brachytherapy. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1467–1476.
191. Karimi, D.; Zeng, Q.; Mathur, P.; Avinash, A.; Mahdavi, S.; Spadinger, I.; Abolmaesumi, P.; Salcudean, S. Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images. Med. Image Anal. 2019, 57, 186–196.
192. Lei, Y.; Tian, S.; He, X.; Wang, T.; Wang, B.; Patel, P.; Jani, A.; Mao, H.; Curran, W.; Liu, T.; et al. Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net. Med. Phys. 2019, 46, 3194–3206.
193. Poudel, P.; Illanes, A.; Ataide, E.; Esmaeili, N.; Balakrishnan, S.; Friebe, M. Thyroid Ultrasound Texture Classification Using Autoregressive Features in Conjunction with Machine Learning Approaches. IEEE Access 2019, 7, 79354–79365.
194. Daulatabad, R.; Vega, R.; Jaremko, J.; Kapur, J.; Hareendranathan, A.; Punithakumar, K. Integrating User-Input into Deep Convolutional Neural Networks for Thyroid Nodule Segmentation. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Mexico City, Mexico, 1–5 November 2021; pp. 2637–2640.
195. Chen, Y.; Wang, Y.; Cai, Z.; Jiang, M. Predictions for central lymph node metastasis of papillary thyroid carcinoma via CNN-based fusion modeling of ultrasound images. Trait. Du Signal 2021, 38, 629–638.
196. Vadhiraj, V.; Simpkin, A.; O'Connell, J.; Singh Ospina, N.; Maraka, S.; O'Keeffe, D. Ultrasound image classification of thyroid nodules using machine learning techniques. Medicina 2021, 57, 527.
197. Sharifi, Y.; Bakhshali, M.; Dehghani, T.; DanaiAshgzari, M.; Sargolzaei, M.; Eslami, S. Deep learning on ultrasound images of thyroid nodules. Biocybern. Biomed. Eng. 2021, 41, 636–655.
198. Turk, G.; Ozdemir, M.; Zeydan, R.; Turk, Y.; Bilgin, Z.; Zeydan, E. On the identification of thyroid nodules using semi-supervised deep learning. Int. J. Numer. Methods Biomed. Eng. 2021, 37, e3433.
199. Gild, M.; Chan, M.; Gajera, J.; Lurie, B.; Gandomkar, Z.; Clifton-Bligh, R. Risk stratification of indeterminate thyroid nodules using ultrasound and machine learning algorithms. Clin. Endocrinol. 2021, 96, 646–652.
200. Gulame, M.; Dixit, V.; Suresh, M. Thyroid nodules segmentation methods in clinical ultrasound images: A review. Mater. Today Proc. 2021, 45, 2270–2276.
201. Gomes Ataide, E.; Ponugoti, N.; Illanes, A.; Schenke, S.; Kreissl, M.; Friebe, M. Thyroid nodule classification for physician decision support using machine learning-evaluated geometric and morphological features. Sensors 2020, 20, 6110.
202. Zhou, H.; Wang, K.; Tian, J. Online Transfer Learning for Differential Diagnosis of Benign and Malignant Thyroid Nodules with Ultrasound Images. IEEE Trans. Biomed. Eng. 2020, 67, 2773–2780.
203. Sun, C.; Zhang, Y.; Chang, Q.; Liu, T.; Zhang, S.; Wang, X.; Guo, Q.; Yao, J.; Sun, W.; Niu, L. Evaluation of a deep learning-based computer-aided diagnosis system for distinguishing benign from malignant thyroid nodules in ultrasound images. Med. Phys. 2020, 47, 3952–3960.
204. Ma, X.; Xi, B.; Zhang, Y.; Zhu, L.; Sui, X.; Tian, G.; Yang, J. A machine learning-based diagnosis of thyroid cancer using thyroid nodules ultrasound images. Curr. Bioinform. 2020, 15, 349–358.
205. Stib, M.; Pan, I.; Merck, D.; Middleton, W.; Beland, M. Thyroid Nodule Malignancy Risk Stratification Using a Convolutional Neural Network. Ultrasound Q. 2020, 36, 164–172.
206. Yu, X.; Wang, H.; Ma, L. Detection of thyroid nodules with ultrasound images based on deep learning. Curr. Med. Imaging 2020, 16, 174–180.
207. George, M.; Anita, H. Analysis of Kidney Ultrasound Images Using Deep Learning and Machine Learning Techniques: A Review. Lect. Notes Netw. Syst. 2022, 317, 183–199.
208. Ma, L.; Dong, M.; Li, G.; Liu, J.; Wu, J.; Lu, H.; Zou, G.; Zhuo, L.; Mou, S.; Zheng, M. Predicting renal diseases with deep learning model based on shear wave elastography and convolutional neural network. Chin. J. Med. Imaging Technol. 2021, 37, 919–922.
209. Patil, S.; Choudhary, S. Deep convolutional neural network for chronic kidney disease prediction using ultrasound imaging. Bio-Algorithms Med.-Syst. 2021, 17, 137–163.
210. Sudharson, S.; Kokil, P. Computer-aided diagnosis system for the classification of multi-class kidney abnormalities in the noisy ultrasound images. Comput. Methods Programs Biomed. 2021, 205, 106071.
211. De Jesus-Rodriguez, H.; Morgan, M.; Sagreiya, H. Deep Learning in Kidney Ultrasound: Overview, Frontiers, and Challenges. Adv. Chronic Kidney Dis. 2021, 28, 262–269.
212. Herle, H.; Padmaja, K. Machine Learning Based Techniques for Detection of Renal Calculi in Ultrasound Images. In Proceedings of the Communications in Computer and Information Science, Nashik, India, 23–24 April 2021; Springer: Berlin/Heidelberg, Germany, 2021; Volume 1440 CCIS, pp. 452–462.
213. Shi, S. A novel hybrid deep learning architecture for predicting acute kidney injury using patient record data and ultrasound kidney images. Appl. Artif. Intell. 2021, 35, 1329–1345.
214. Alex, D.; Chandy, D. Exploration of a framework for the identification of chronic kidney disease based on 2d ultrasound images: A survey. Curr. Med. Imaging 2021, 17, 464–478.
215. Li, G.; Liu, J.; Wu, J.; Tian, Y.; Ma, L.; Liu, Y.; Zhang, B.; Mou, S.; Zheng, M. Diagnosis of renal diseases based on machine learning methods using ultrasound images. Curr. Med. Imaging 2021, 17, 425–432.
216. Sudharson, S.; Kokil, P. An ensemble of deep neural networks for kidney ultrasound image classification. Comput. Methods Programs Biomed. 2020, 197, 105709.
217. Ma, F.; Sun, T.; Liu, L.; Jing, H. Detection and diagnosis of chronic kidney disease using deep learning-based heterogeneous modified artificial neural network. Future Gener. Comput. Syst. 2020, 111, 17–26.
218. Sagreiya, H.; Akhbardeh, A.; Li, D.; Sigrist, R.; Chung, B.; Sonn, G.; Tian, L.; Rubin, D.; Willmann, J. Point Shear Wave Elastography Using Machine Learning to Differentiate Renal Cell Carcinoma and Angiomyolipoma. Ultrasound Med. Biol. 2019, 45, 1944–1954.
219. Zheng, Q.; Furth, S.; Tasian, G.; Fan, Y. Computer-aided diagnosis of congenital abnormalities of the kidney and urinary tract in children based on ultrasound imaging data by integrating texture image features and deep transfer learning image features. J. Pediatr. Urol. 2019, 15, 75.e1–75.e7.
220. Yin, S.; Peng, Q.; Li, H.; Zhang, Z.; You, X.; Liu, H.; Fischer, K.; Furth, S.; Tasian, G.; Fan, Y. Multi-instance Deep Learning with Graph Convolutional Neural Networks for Diagnosis of Kidney Diseases Using Ultrasound Imaging. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2019; Volume 11840 LNCS, pp. 146–154.
221. Pyle, R.; Bevan, R.; Hughes, R.; Rachev, R.; Ali, A.; Wilcox, P. Deep Learning for Ultrasonic Crack Characterization in NDE. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2021, 68, 1854–1865.
222. Oliveira, M.; Simas Filho, E.; Albuquerque, M.; Santos, Y.; da Silva, I.; Farias, C. Ultrasound-based identification of damage in wind turbine blades using novelty detection. Ultrasonics 2020, 108, 106166.
223. Li, T.H. From zero crossings to quantile-frequency analysis of time series with an application to nondestructive evaluation. Appl. Stoch. Model. Bus. Ind. 2020, 36, 1111–1130.
224. Nasir, V.; Fathi, H.; Kazemirad, S. Combined machine learning—wave propagation approach for monitoring timber mechanical properties under UV aging. Struct. Health Monit. 2021, 20, 2035–2053.
225. Obaton, A.F.; Wang, Y.; Butsch, B.; Huang, Q. A non-destructive resonant acoustic testing and defect classification of additively manufactured lattice structures. Weld. World 2021, 65, 361–371.
226. Rodrigues, L.; Cruz, F.; Oliveira, M.; Simas Filho, E.; Albuquerque, M.; Silva, I.; Farias, C. Carburization level identification in industrial HP pipes using ultrasonic evaluation and machine learning. Ultrasonics 2019, 94, 145–151.
227. Silva, L.; Simas Filho, E.; Albuquerque, M.; Silva, I.; Farias, C. Embedded decision support system for ultrasound nondestructive evaluation based on extreme learning machines. Comput. Electr. Eng. 2021, 90, 106891.
228. Soltani Firouz, M.; Farahmandi, A.; Hosseinpour, S. Early Detection of Freeze Damage in Navel Orange Fruit Using Nondestructive Low Intensity Ultrasound Coupled with Machine Learning. Food Anal. Methods 2021, 14, 1140–1149.
229. Arbaoui, A.; Ouahabi, A.; Jacques, S.; Hamiane, M. Concrete cracks detection and monitoring using deep learning-based multiresolution analysis. Electronics 2021, 10, 1772.
230. Salazar, A.; Safont, G.; Vergara, L.; Vidal, E. Pattern recognition techniques for provenance classification of archaeological ceramics using ultrasounds. Pattern Recognit. Lett. 2020, 135, 441–450.
Figure 1. A possible schematization of machine learning algorithms.
Figure 2. Example of B-mode image of the right liver lobe and right kidney obtained with a convex probe. The kidney is indicated by the arrow [48].
Figure 3. Example of color Doppler image showing high-grade stenosis of internal carotid artery [51].
Figure 4. Ultrasound depictions of malignant breast lesions: (a) lesion characterized by an irregular shape, with calcification indicated by the large arrow and a non-circumscribed margin by the thin arrow; (b) lesion characterized by an oval shape, with circumscribed margins indicated by the thin arrow and posterior enhancement features by the large arrow [93].
Figure 5. Main steps of LI and MA segmentation: (a) B-mode images, (b) edge map, (c) contour segmentation, (d) final segmentation. LI and MA are marked in red and green, respectively [128].
Figure 6. Prognosis of myocardial fibrosis. Three ultrasound renderings and the corresponding myocardial textures [97].
Figure 7. Frequency of application of ML algorithms across all organs.
Table 1. Summary of detection ML algorithms employed in the analyzed studies with respect to the organ investigated, diagnosis objective, dataset used, and main results achieved.

| Ref. | Organ | Objective | Technique | Results | Datasets |
|------|-------|-----------|-----------|---------|----------|
| [60] | Breast | Recognition of breast ultrasound nodules with few labeled images | Faster R-CNN for detection of nodules; SSL for classification | Mean accuracy: 87%; performances of SSL and SL are comparable | Public; 6746 and 2220 nodules |
| [95] | Arteries | Detection of end-diastolic frames in NIRS-IVUS images of coronary arteries | Bi-GRU NN trained on a segment of 64 frames | Mean accuracy: 80%; better accuracy than expert analysts with Doppler criteria | Private; 20 coronary arteries |
| [96] | Heart | Evaluation of biomarkers from echocardiogram videos | CNN with residual connections and spatio-temporal convolutions for estimation of biomarker values | Mean ROC AUC: anemia 80%, BNP 84%, troponin I 75%, BUN 71.5% | Public; 108,521 echocardiogram studies |
| [97] | Heart | Extraction of information associated with myocardial remodeling from still ultrasound images | Texture-based features extracted with unsupervised similarity networks; ML models (DT, RF, LR, NN) for prediction of functional remodeling; LR for predicting presence of fibrosis | ROC AUC: 80%; sensitivity: 86.4%; specificity: 83.3%; prediction of myocardial fibrosis from ultrasound image textures only | Public; 392 subjects |
| [98] | Liver | Detection of gallstones and acute cholecystitis in still images for preliminary diagnosis | SSD and FPN to classify gallstones with features extracted by ResNet-50, and MobileNetV2 to classify cholecystitis | ROC AUC: ResNet-50 92%, MobileNetV2 94%; detection of cholecystitis and gallstones with acceptable discrimination and speed | Public; 89,000 images |
| [99] | Fetus | Automatic estimation of gestational age from TC diameter as a POCUS solution | AlexNet variation for TC frame extraction; FCN for TC localization and measurement | Accuracy: TC plane detection 99%, TC segmentation 97.98%; accurate GA estimation | Private; 5000 TC images |
| [100] | Fetus | Automatic recognition and classification of FFUSP for diagnosis of cardiac conditions | LH-SVM: SVM learning features extracted by LBP and HOG | Accuracy: 94.67%; average precision: 94.25%; average recall: 93.88%; average F1 score: 94.88%; effective prediction and classification of FFUSP | Private; 943 standard planes, 424 nasolabial coronal planes, 50 nonstandard planes |
| [101] | Lungs | Assisting the diagnosis of COVID-19 on LUS images | Pretrained ResNet50 with a fully connected layer for feature extraction and global average pooling for feature classification | Average F1-score: 93.5% (balanced dataset), 95.3% (unbalanced dataset); improves performance of radiologists' diagnosis | Public; 3909 images |
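Many of the detection entries in Table 1 follow the same recipe: take a pretrained region-proposal detector and fine-tune it on annotated ultrasound frames. The following minimal sketch illustrates that recipe with torchvision's Faster R-CNN (torchvision ≥ 0.13 API); the class count, the image size, and the dummy annotation are illustrative assumptions, not details taken from any cited study.

```python
# Illustrative sketch only: fine-tuning a Faster R-CNN for lesion detection
# in B-mode frames, in the spirit of the detection pipelines of Table 1.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + nodule (assumed)

# Pretrained detector; replace its box predictor with one for our classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

model.train()
# One dummy grayscale frame replicated to 3 channels, one ground-truth box (xyxy).
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[120.0, 140.0, 260.0, 300.0]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)    # the detector returns a dict of losses
sum(loss_dict.values()).backward()    # an optimizer step would follow here
```

In practice, the studies above replace the dummy tensors with curated B-mode datasets and add the usual optimizer loop, augmentation, and evaluation on a held-out set.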
Table 2. Summary of segmentation ML algorithms employed in the analyzed studies with respect to the organ investigated, diagnosis objective, dataset used, and main results achieved.

| Ref. | Organ | Objective | Technique | Results | Datasets |
|------|-------|-----------|-----------|---------|----------|
| [67] | Breast | Automatic semantic segmentation of breast tumors | Fuzzy preprocessing; 8 CNN-based SS models | GA: 95.45%; mean IoU: 78.7%; BF: 68.08%; improvements only with batch processing | Public; 1200 images |
| [69] | Breast | Automatic semantic segmentation of breast tumors | BUS images enhanced with wavelet features; fuzzy FCN segmentation; fine-tuning based on anatomy constraints with conditional random fields (CRFs) | TPR: 90.33%; FPR: 90.00%; IoU: 81.29%; fuzzy FCN provides better performances than non-fuzzy FCN | Private; 325 BUS images |
| [70] | Breast | Automatic semantic segmentation of breast tumors | Preprocessing through CLAHE; UNet variant based on a VE block for encoding; concatenated convolutions for segmentation | Mean values: HD 77.6%, JM 80.1%, DM 90.7%; better segmentation results than classic CNN methods | Public; 264 BUS images, 830 BUS images |
| [73] | Breast | Segmentation of tumors by incorporating prior domain-specific knowledge | UNet modified with attention blocks accounting for input saliency maps to generate segmentation | DSC: 90.5%; better accuracy with saliency maps; robustness to images from different US scanners | Private; 510 images |
| [111] | Arteries | Measurement of total carotid plaque area in B-mode images | 8 UNet++ with different backbones and architectures; small datasets for algorithm training | Mean DSC: 87.15%; datasets collected in different institutions | Private; 144 subjects, 497 subjects |
| [112] | Arteries | Automatic segmentation of lumen and vessel contours | MFCNN for preliminary segmentation; GP regressor to construct lumen and vessel contours | Median values: JI lumen 0.913, JI vessel 0.94, HD lumen 0.196 mm, HD vessel 0.163 mm | Public; 160 IVUS pullbacks |
| [128] | Arteries | Fully automatic segmentation of the lumen-intima (LI) and media-adventitia (MA) arterial layers | SVM and RF for classification maps; LI and MA segmentation with a deformable contours method | JM: LI 0.88 ± 0.8 mm, MA 0.84 ± 0.9 mm; good accuracy; modular and open-source | Public; 435 images |
| [129] | Liver | Automatic quantification of the hepatorenal index (HRI) for evaluation of fatty liver | DCNN developed with ICNet for organ segmentation; Gaussian texture for HRI quantification | ICCs: hepatic 91.9%, renal 91.6%, HRI 73.4%; results comparable to those of radiologists | Private; 294 liver images |
| [130] | Fetus | Automatic segmentation of the ventricular septum | CSC: UNet variant that calibrates segmentation with time-series information | MIoU: 0.55; better than DeepLabv3+ and U-Net | Private; 421 fetal cardiac US videos |
| [131] | Fetus | Automatic segmentation of the thoracic wall in ultrasound videos | U-Net/DeepLabv3+ segmentation enhanced with MultiFrame and Cylinder methods | IoU: DeepLabv3+ 0.475, U-Net 0.493; improved performances without altering the NN structure | Private; 38 4VC images in 280 videos |
| [132] | Fetus | Automatic fetal head segmentation from noisy 3D images | CPD variation for point cloud segmentation and estimation of probabilistic weights obtained with RF | Target registration error: 6.38 ± 3.24 mm | Private; 18 fetal brains |
| [133] | Lungs | Assessment of COVID-19 from LUS and clinical information | DSA-MIL to combine multiple LUS data; MA-CLR for combination of LUS data and clinical information | Accuracy: patient severity 75%, binary identification 87.5%; especially suited for pregnant women and children | Public; 233 patients |
| [134] | Thyroid | Efficient and precise semantic segmentation of thyroid nodules | CNN based on a layer that integrates dense connectivity, dilated convolutions and factorized filters | IoU: 79.5%; TPF: 88.5%; FPF: 0.13%; high accuracy/efficiency; real time | Public; 3794 images |
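Most of the architectures in Table 2 are encoder-decoder variants of U-Net scored with overlap metrics such as DSC and IoU. The sketch below is a deliberately tiny stand-in rather than any cited model (the layer widths, single skip connection, and names `TinyUNet`/`dice_and_iou` are our own assumptions); it shows the encoder-decoder pattern and how the two metrics reported in the table are computed.

```python
# Illustrative sketch only: a minimal U-Net-style segmenter with Dice/IoU scoring.
import torch
import torch.nn as nn

def block(cin, cout):
    # Two 3x3 convolutions, as in a classic U-Net stage.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)   # binary mask logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d)

def dice_and_iou(pred_mask, true_mask, eps=1e-6):
    # Boolean masks in, scalar overlap scores out.
    inter = (pred_mask & true_mask).float().sum()
    dice = (2 * inter + eps) / (pred_mask.float().sum() + true_mask.float().sum() + eps)
    iou = (inter + eps) / ((pred_mask | true_mask).float().sum() + eps)
    return dice.item(), iou.item()

x = torch.rand(1, 1, 128, 128)               # dummy B-mode patch
pred = torch.sigmoid(TinyUNet()(x)) > 0.5    # threshold logits to a mask
print(dice_and_iou(pred, pred))              # trivially (1.0, 1.0) against itself
```

The cited studies differ mainly in the encoder depth, the attention or fuzzy modules they insert, and the loss (Dice, cross-entropy, or a combination), but the overall forward/score structure is the one sketched here.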
Table 3. Summary of classification ML algorithms employed in the analyzed studies with respect to the organ investigated, diagnosis objective, dataset used, and main results achieved.

| Ref. | Organ | Objective | Technique | Results | Datasets |
|------|-------|-----------|-----------|---------|----------|
| [78] | Breast | Classification of tumors as benign or malignant from B-mode images | Pretrained ResNet-101 for feature extraction; linear SVM for classification | Sensitivity: 94.34%; specificity: 93.22%; PPV: 92.6%; NPV: 94.8%; more accurate performances than radiologists | Private; 2099 images |
| [89] | Breast | Classification of tumors as benign or malignant from CEUS | 3D CNN for temporal/spatial extraction; DKG-TAM for temporal attention; DKG-CAM for feature concatenation | Sensitivity: 97.2%; accuracy: 86.3%; domain knowledge allows diagnostic improvements | Public; 221 lesions |
| [91] | Breast | Highly automatic classification of tumors from B-mode images | AutoML Vision; for comparison: CNN and ML classifiers (RF, KNN, LDA, LR) | Accuracy: 86%; sensitivity: 84%; specificity: 88%; F1: 0.83; AutoML Vision comparable with other methods | Public; 895 images |
| [93] | Breast | Evaluation of ML methods for breast cancer diagnosis | 6 ML classifiers: LR, RF, Extra Trees, SVM, MLP, XGBoost | LR: ROC AUC 90.6%, Brier score 0.65 | Public; 1345 patient images |
| [119] | Arteries | Characterization and classification of carotid ultrasound plaque tissues | 7 CNNs for data optimization; TL for characterizing carotid plaques; 4 ML models (KNN, SVM, DT, RF) | Mean accuracy: DL 93.55%, TL 94.55%, ML 89% | Public; 346 patients |
| [120] | Arteries | Automation of the initial interpretation of lower extremity arterial Doppler and duplex carotid ultrasound studies | HNN for classification of aortoiliac and trifurcation disease; RF for classification of stenosis | Accuracy: normal 97%, aortoiliac 82%, femoropopliteal 90.1%, trifurcation 90.5%; good performances | Public; 5761 LEAD studies, 18,659 duplex carotid studies |
| [139] | Heart | Lightweight and fast transthoracic echocardiography classification for diagnosis of cardiac conditions | Three teacher networks (VGG-16, DenseNet and ResNet) transfer learned knowledge to lightweight models | Accuracy: 89%; six times faster than huge models | Public; 16,612 echo cines |
| [146] | Liver | Differentiation of malignant from benign focal liver lesions using 2D SWE-based ultrasomics | SVM algorithm for establishing two predictive models: ultrasomics features and SWE measurements | AUC: 0.94; sensitivity: 92.59%; specificity: 87.5%; PPV: 94.59%; NPV: 82.50%; good accuracy of the combined method | Private; 175 focal lesions |
| [147] | Liver | Differentiation of infected focal liver lesions from malignant mimickers in B-mode images | LR and PCA to reduce the dimension of radiomics features; NB, DT, KNN, LR, SVM to obtain predictive models | AUC: HCC 0.836, CC 0.766, cHCC-CC 0.744, liver metastasis 0.808, MH tumor 0.745 | Private; 104 focal liver lesions, 85 hepatic tumors |
| [149] | Liver | Liver fibrosis classification with SWE images | Image processing pipeline: quality assessment, ROI selection, fibrosis classification with SVM, RF, CNN, FCNN | AUC; specificity: 71%; sensitivity: 95%; better accuracy than manual methods | Private; 5526 SWE images |
| [155] | Liver | Identification of liver steatosis from anisotropy features in B-mode images | Five different classifiers (MLP, PNN, SVM, LVQ, Bayesian); three feature sets including anisotropy features | Sensitivity: 99%; accuracy: 100% with anisotropy features and the PNN classifier | Private; 340 images |
| [165] | Fetus | Classification and segmentation of the fetal head from ultrasound videos | CNN AlexNet for classification of head frames; CNN U-Net for segmentation of classified head frames | Accuracy: 96%; almost automatic gestational age estimation | Public; 10,000 labeled images, 1000 ultrasound videos |
| [174] | Lungs | Development of a system for accurate interpretation of pleural effusion in LUS images | Deep learning based on Reg-STN, trained with supervised and weakly supervised methods | Accuracy: frame 92.2%, video 91.1%; good agreement with expert clinicians | Private; 623 videos |
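Several classification studies in Table 3 pair a pretrained CNN feature extractor with a classical classifier, e.g., ResNet features followed by a linear SVM in [78]. The sketch below reproduces that two-stage recipe on placeholder data; `resnet18`, the ROI size, the label split, and the cross-validation setup are illustrative assumptions rather than the configuration of any cited study.

```python
# Illustrative sketch only: "deep features + classical classifier" classification.
import numpy as np
import torch
import torchvision
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Pretrained backbone with its final layer removed, so it emits 512-d features.
backbone = torchvision.models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()
backbone.eval()

with torch.no_grad():
    rois = torch.rand(40, 3, 224, 224)   # dummy grayscale ROIs replicated to RGB
    feats = backbone(rois).numpy()       # (40, 512) feature matrix

labels = np.array([0] * 20 + [1] * 20)   # dummy benign/malignant labels
clf = LinearSVC(C=1.0, max_iter=10_000)
scores = cross_val_score(clf, feats, labels, cv=5)  # per-fold accuracy
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Keeping the backbone frozen and training only the SVM, as here, is what makes this recipe attractive for the small private datasets that dominate the table.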
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
