Review

Diagnostic Strategies for Breast Cancer Detection: From Image Generation to Classification Strategies Using Artificial Intelligence Algorithms

by Jesus A. Basurto-Hurtado 1,2, Irving A. Cruz-Albarran 1,2, Manuel Toledano-Ayala 3, Mario Alberto Ibarra-Manzano 4, Luis A. Morales-Hernandez 1,* and Carlos A. Perez-Ramirez 2,*
1 C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
2 Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
3 División de Investigación y Posgrado de la Facultad de Ingeniería (DIPFI), Universidad Autónoma de Querétaro, Cerro de las Campanas S/N Las Campanas, Santiago de Querétaro 76010, Mexico
4 Laboratorio de Procesamiento Digital de Señales, Departamento de Ingeniería Electrónica, Division de Ingenierias Campus Irapuato-Salamanca (DICIS), Universidad de Guanajuato, Carretera Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico
* Authors to whom correspondence should be addressed.
Cancers 2022, 14(14), 3442; https://doi.org/10.3390/cancers14143442
Submission received: 21 May 2022 / Revised: 2 July 2022 / Accepted: 12 July 2022 / Published: 15 July 2022
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

Simple Summary

With the recent advances in the field of artificial intelligence, it has become possible to develop robust and accurate methodologies that deliver noticeable results in different health-related areas. Oncology is one of the most active of these research areas nowadays, as it is now possible to fuse the information contained in the images with the patient's medical records in order to offer a more accurate diagnosis. In this sense, understanding how an AI-based methodology is developed can offer helpful insight for developing such methodologies. In this review, we comprehensively guide the reader through the steps required to develop such a methodology, from image formation to image processing and interpretation using a wide variety of methods; further, some techniques that can be used in next-generation diagnostic strategies are also presented. We believe this insight will provide students and researchers in the related areas with a deeper comprehension of the advantages and disadvantages of every method.

Abstract

Breast cancer is one of the main causes of death for women worldwide, as it accounts for about 16% of the malignant lesions diagnosed globally. In this sense, it is of paramount importance to diagnose these lesions at the earliest possible stage in order to have the highest chances of survival. While there are several works that cover selected topics in this area, none of them presents a complete panorama, that is, from the image generation to its interpretation. This work presents a comprehensive state-of-the-art review of the image generation and processing techniques used to detect breast cancer, where potential candidates for image generation and processing are presented and discussed. Novel methodologies should consider the adroit integration of artificial intelligence concepts and categorical data to generate modern alternatives that can have the accuracy, precision, and reliability expected to mitigate misclassifications.

1. Introduction

According to the World Health Organization, Breast Cancer (BC) represents around 16% of the malignant tumors diagnosed worldwide [1]. In Mexico, BC is the leading cause of cancer death in the female population [2]. BC develops when a lump begins an angiogenesis process, that is, the process that causes the development of new blood vessels and capillaries from the existing vasculature [3]. Unfortunately, BC has a mortality rate of 69% in developing countries, which is greater than the one observed in developed countries [1]. This difference is explained by the fact that the cancer is detected at a later stage, which also turns the treatment into a financial obstacle, as its cost increases considerably when the disease is detected in an advanced stage [4]. Hence, the development of strategies that can perform an early detection of BC is a priority topic for governments, as an early detection increases the survival chances and lowers the financial burden the disease imposes on families and health systems [4].
A methodology for BC detection can be composed of four steps: (1) image acquisition, (2) preprocessing and segmentation, (3) feature extraction, and (4) classification. An illustration of the abovementioned concepts is shown in Figure 1.
From this figure, it can be seen that the first step uses the different technologies available to acquire the internal tissue dynamics of the breast so they can be expressed in an image; the second step executes algorithms that perform basic tasks on the images (for instance, correcting the color scale) so that the segmentation, which is the detection of regions of interest (ROIs), can be done; then, the third step quantifies the differences between images that contain abnormalities and those that do not; finally, once the differences are quantified, it is necessary to classify them to provide a diagnosis. With the rapid development of novel technologies that can capture the dynamics of the breast tissues more accurately, numerous advances have been made in all the aforementioned fields; even so, detecting all the abnormalities without generating false alarms is still a highly desirable feature for all the proposals [5,6]. Recently, some articles have reviewed proposals regarding feature classification and its interpretation [6,7,8,9]; yet, an article that presents the main technologies used to form the breast image as well as the processing stages required to provide a diagnosis is still missing. This article presents a state-of-the-art review of both the technologies used to create the breast image and the strategies employed to perform the image processing and classification. The article is organized as follows: Section 2 describes the main technologies used for image generation; Section 3 describes the methods used to perform the segmentation, feature extraction, and interpretation; next, Section 4 and Section 5 present some emerging techniques that can be used to improve the image formation and the algorithms used for the interpretation. The article ends with some concluding remarks.
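To make the four-step structure concrete, a minimal Python sketch is given below. It is our illustration, not taken from any of the reviewed works; the function names, the Otsu threshold, the simple descriptors, and the choice of an SVM are all assumptions made only for demonstration.

```python
# Minimal sketch of the four-step pipeline (illustrative names and choices).
import numpy as np
from skimage import filters, io
from sklearn.svm import SVC

def acquire_image(path):
    """Step 1: load a grayscale breast image (mammogram, ultrasound frame, etc.)."""
    return io.imread(path, as_gray=True)

def preprocess_and_segment(image):
    """Step 2: basic preprocessing followed by a crude ROI segmentation."""
    smoothed = filters.gaussian(image, sigma=1)            # simple denoising
    return smoothed > filters.threshold_otsu(smoothed)     # binary ROI mask

def extract_features(image, mask):
    """Step 3: quantify the segmented region with a few global descriptors."""
    roi = image[mask]
    return np.array([roi.mean(), roi.std(), mask.mean()])  # intensity and area features

def classify(feature_matrix, labels):
    """Step 4: train a supervised classifier on the extracted features."""
    return SVC(kernel="rbf").fit(feature_matrix, labels)
```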

2. Technologies Used to Obtain Breast Tissue Images

One of the steps required to develop a diagnosis system is the representation of the breast tissue dynamics. In this sense, there are several technologies that are commonly used to represent the tissue by means of images. This section presents the most widely used ones.

2.1. Mammography

Mammography is a study used to screen the breast tissue in order to detect abnormalities that could indicate the presence of cancer or other breast diseases [10]. This technique has a sensitivity of up to 85% in the recommended population. Essentially, mammography uses low doses of X-rays to form a picture of the internal breast tissues [11]. To form the picture, the breasts are compressed by two plates with the aim of mitigating the dispersion of the rays, making it possible to obtain a better picture without using a high X-ray dose [11], where the tissue changes might appear as white zones over a grey background [11]. On average, the total radiation dose for a typical mammogram with two views of each breast is about 0.4 mSv [11]. Figure 2 illustrates the mammography procedure.
Several works have focused on the processing of digital mammograms to detect the most common signs that could indicate the presence of cancer: calcifications or masses [12]. Traditionally, the specialist looks for zones that have a different appearance (size, shape, contrast, edges, or bright spots) than the normal tissue. With the employment of segmentation algorithms [13,14,15], the automation of this task has been proposed, and some attempts using neural networks have been made [12,16,17], delivering encouraging results.
Recently, the use of breast tomosynthesis (BT) and contrast-enhanced mammography (CEM) [10] has been proposed as an improvement to traditional digital mammography. The former is a 3D breast reconstruction that further improves the image resolution, whereas the latter improves it by injecting a contrast agent; in this way, the anatomic and vascular definition of the abnormalities is exposed. These techniques offer some improvement when dealing with patients with dense breast tissue; yet, the detection of clustered microcalcifications is still an issue [10]. On the other hand, additional screening tests are required to determine whether an abnormality detected by CEM is cancer or not, besides requiring more expensive equipment.

2.2. Ultrasound

Ultrasound is a non-invasive and non-irradiating technique that uses sound waves to create images of organs, in this case the breasts, to detect changes in their form. To create the images, a transducer sends high-frequency sound waves (>20 kHz) and measures the reflected ones [10]; the image is formed from the sound waves reflected by the internal tissues. This procedure is depicted in Figure 3.
Ultrasound is used for three purposes: (1) assessing and determining the abnormality condition, that is, helping doctors determine whether the abnormal mass is solid, which might require further examination, fluid-filled, or a combination of both; (2) as an auxiliary screening tool when the patient has dense breasts and mammography is not reliable enough; or (3) as a guide for performing a biopsy of the suspected abnormality [10]. Several computer-aided diagnosis (CAD) systems that analyze ultrasound images have been proposed [18]. One of the points they note as needing improvement is the resolution of the images, for instance through specifically designed filters [19]. Another proposed modification is the utilization of micro-bubbles that are injected into the abnormalities detected at first sight [20].
It should be noted that a mass tends to stay in its position when compressed, i.e., it does not displace. Elastography is the technique employed to measure the tumor displacement when compressed, using a special transducer [21]. These developments have led to the discovery of masses that usually require a biopsy to determine their nature, which delays the diagnosis confirmation [10,21]; moreover, the image interpretation requires a well-trained specialist, who is not always available to perform all the studies.

2.3. Magnetic Resonance Imaging (MRI)

Breast MRI (BMRI) uses a magnetic field and radio waves to create a detailed image of the breast. Usually, a 1.5 T magnet is used along with a contrast agent, usually gadolinium, to generate the images of both breasts [22]. To acquire the images, the patient is placed in a prone position in order to minimize the respiration movement and to allow the expansion of the breast tissue [10,22]. When the magnet is turned on, the magnetic field temporarily realigns the water molecules; thus, when radio waves are applied, the emitted radiation is captured using specifically designed coils, located at the breast positions, which transform the captured radiation into electrical signals. The coil position must ensure an appropriate field of view from the clavicle to the inframammary fold, including the axilla [10]. An illustration of the patient position is depicted in Figure 4.
The main objective of acquiring the images is to assess breast symmetry and possible changes in the parenchymal tissue, since those changes might indicate the presence of lesions that can be malignant. In general, malignant lesions have irregular margins (or asymmetry), whereas benign ones usually have a round or oval geometrical shape with well-defined margins (symmetry). To deliver the best possible result, it is necessary to remove the homogeneous fat around the breast and parenchyma, since fat can render images uninterpretable, especially for detecting subtle lesions [10,22].
On the other hand, one of the problems of BMRI is the false-positive (specificity) rate, as the technique can detect small masses (lesions whose size is less than 5 mm) that are benign [10,22]. To mitigate this issue, nanomaterials that stick to cancerous masses but not to benign ones [23], as well as new contrast agents [24], have been developed. Recently, a multiparametric approach has been suggested as a strategy to improve the specificity rate [10].

2.4. Other Approaches

Recently, microwave radiation has been employed as an alternative to obtain information about the breast tissue. The microwaves, whose frequency ranges from 1 to 20 GHz, are applied to the breast, and the reflected waves are measured using specifically designed antennas. To obtain the best possible results, some works propose that the tissue must be immersed in a liquid [25]; in this sense, several acquisition systems that deal with this issue have been proposed [26,27,28,29].
When it is necessary to perform a biopsy for confirmation, images of the cells that form the abnormalities are obtained using, among other techniques, fine-needle aspiration cytology (FNAC), core biopsy, or excisional biopsy. Once the cell images are captured, an image processing technique is applied in order to detect the differences between normal and malignant cells, which are classified using modern strategies [30,31,32] such as neural networks, probabilistic-based algorithms, and association rules coupled with neural networks.
It should be pointed out that other imaging alternatives are also employed, such as Computed Tomography (CT) or Positron Emission Tomography (PET). The former employs X-rays to form images of the chest from different angles; using image processing and reconstruction algorithms, a 3D image of the chest (including the breasts) is obtained [33,34]. The latter uses a small amount of tracer, a specifically designed sugar with radioactive properties known as fluorodeoxyglucose-18. The main idea of using this type of sugar is that cancer cells have an increased glucose consumption compared with normal cells; in this sense, the tracer accumulates in the zones where there is an increased glucose consumption [35,36]. It is worth noticing that these techniques are recommended for determining the cancer stage rather than as a first-line diagnosis scheme [10,37]. In this way, they complement the three main techniques by providing more information on the tissues surrounding the breasts [37]. Table 1 summarizes the abovementioned methods.
As seen in Table 1, numerous advances in imaging techniques have been achieved in recent years; still, there is a need to develop strategies that allow obtaining sharp images, even for dense breast tissues. In this sense, the obtained images can be used to perform a focused surveillance on the patients that have a higher risk of developing the disease, allowing cancer detection at the earliest possible stage. On the other hand, these novel imaging techniques should be able to operate without additional requirements, such as specific electrical or mechanical conditions, so they can be easily adopted in hospitals or in ambulatory settings.

3. Image Processing and Classification Strategies

3.1. ROI Estimation

Once the image is acquired, the next step is its interpretation. To this purpose, it is necessary to identify the suspicious regions that might contain masses or calcifications, where model-, region-, or contour-based algorithms for image segmentation are employed [45]. It should be noticed that these approaches often rely on manual entries to refine the segmentation zones, which limits the applicability of the proposals on different datasets [45], making it necessary to develop novel strategies that can automatically detect all the zones of interest. Recently, Sha et al. [46] proposed a convolutional neural network (CNN)-based method for segmentation. The authors developed an optimization scheme to determine the best parameters for the CNN in order to segment the suspicious zones. The presented results show that the proposal has a reasonable sensitivity and specificity (89% and 88%, respectively) for determining whether a mammogram presents cancerous tumors or not. Wang et al. [47] present a CNN-based strategy in which the convolutional layer is modified to increase the detection of multiple suspicious zones. Heidari et al. [48] employ a Gaussian bandpass filter to detect suspicious zones using local properties of the image. On the other hand, Suresh et al. [49] and Sapate et al. [50] employ a fuzzy-based strategy to cluster all the pixels with similar features in order to detect all the zones that present differences. Other strategies involve the utilization of mathematical morphology [51,52,53,54,55], image contrast and intensity [56,57], geometrical features [58,59], correlation and convolution [60,61], non-linear filtering [62,63], texture features [64], deep learning [65,66,67,68,69], and entropy [70,71], among others. It is worth noticing that, despite the diversity of the employed strategies, some of them still require an initial guidance to detect the suspicious zones, either by manually selecting pixels inside the zone or by using the radiologist's notes about the localization. An effective approach for automatic detection should employ a denoising stage in order to remove residual noise generated during the acquisition and equalization, so the pixel intensity disparities associated with the environmental light can be mitigated as much as possible.
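As an illustration of the fully automatic, denoising-first approach argued for above, the following sketch (ours, not from the cited works) chains Gaussian smoothing, Otsu thresholding, and a morphological clean-up with scikit-image; the parameter values (sigma, structuring-element size, minimum area) are assumptions.

```python
# Illustrative sketch of a simple automatic ROI detector: denoise, threshold,
# clean up with morphology, and keep large connected regions as candidates.
from skimage import filters, measure, morphology

def candidate_rois(mammogram, min_area=500):
    smoothed = filters.gaussian(mammogram, sigma=2)              # denoising stage
    mask = smoothed > filters.threshold_otsu(smoothed)           # global threshold
    mask = morphology.binary_opening(mask, morphology.disk(5))   # remove small specks
    labels = measure.label(mask)
    # bounding boxes of the candidate suspicious zones for later analysis
    return [r.bbox for r in measure.regionprops(labels) if r.area >= min_area]
```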

3.2. Feature Extraction

After the suspicious zones are detected and segmented, it is necessary to extract features from them to generate the information required to classify the detected lesions as cancerous or benign. To this purpose, Fourier transform-based methods [48,72], wavelet transform-based strategies [73,74,75,76], geometric features [77,78], information theory algorithms [79], co-occurrence matrix features [47,80,81,82], histogram-based values [46,83,84,85], and morphology [86,87], among others, have been employed. On the other hand, with the increased capabilities (the number of simultaneous operations that can be performed) of the new-generation graphics processing units, it is now possible to execute high-load computational algorithms faster than on a multicore processor [88]; in consequence, novel neural network algorithms that perform the feature extraction and quantification are now being proposed. For instance, Xu et al. [89] use a CNN to extract and classify ultrasound images with suspicious areas into four categories: skin, glandular tissue, masses, and fat. They modify the convolutional filters to speed up the process. Arora et al. [90] also use an ensemble of CNN architectures to extract the suspicious zones directly; they only modify the final layers to speed up the training process. Gao et al. [91] use a deep neural network to generate the features from mammograms, employing a modified architecture where the outputs and inputs of the network are used to update the model parameters during training. Similar approaches are described in [92,93,94,95].
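As a concrete example of one of the hand-crafted descriptors listed above, the sketch below computes a few gray-level co-occurrence matrix (GLCM) texture statistics with scikit-image; the distances, angles, and selected properties are illustrative assumptions.

```python
# Minimal GLCM texture-feature example for an 8-bit grayscale ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_uint8):
    glcm = graycomatrix(roi_uint8, distances=[1, 3], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    # average each property over the chosen distances and angles
    return {p: graycoprops(glcm, p).mean() for p in props}

# demo on a random patch standing in for a segmented ROI
print(glcm_features(np.random.randint(0, 256, (64, 64), dtype=np.uint8)))
```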
It should be pointed out that a reduction of the estimated features is often applied to reduce the amount of computational resources used in the training scheme and to mitigate the overfitting problem, which reduces the algorithm efficacy. This step is known as dimensionality reduction [45], and the most employed algorithms are principal component analysis (PCA) and linear discriminant analysis (LDA). PCA uses eigenvalue-based algorithms to determine the features that are unrelated to each other, that is, those that have the maximum variance, as this indicates the maximum variation of the information contained; LDA, in turn, performs a projection of the samples to find the distance between the class means. In this sense, the greater the distance between the means, the more unrelated the features are [96]. Nevertheless, these algorithms use global properties of the values, which might deliver suboptimal results [96]. For these reasons, hybrid strategies have been proposed, such as neuro-fuzzy algorithms [97,98], diffusion maps [99], deep learning [100,101,102], independent component analysis (ICA) [103], clustering-based approaches [104], and multidimensional scaling [105], among others. It should be pointed out that hybrid approaches, such as the abovementioned ones, are particularly effective when a non-linear relationship between the features exists.
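The following minimal scikit-learn sketch illustrates the two dimensionality-reduction algorithms just described; the synthetic feature matrix stands in for ROI descriptors, and the number of retained components is an arbitrary assumption.

```python
# PCA (unsupervised) and LDA (supervised) dimensionality reduction.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stand-in feature matrix (e.g., texture descriptors per ROI) and binary labels.
X, y = make_classification(n_samples=200, n_features=30, n_informative=8, random_state=0)

pca = PCA(n_components=10)                          # keeps the maximum-variance directions
X_pca = pca.fit_transform(X)

lda = LinearDiscriminantAnalysis(n_components=1)    # at most (n_classes - 1) components
X_lda = lda.fit_transform(X, y)                     # supervised: uses the class labels
```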
To the best of the authors' knowledge, there are no papers that compare the abovementioned techniques on the same database in order to assess their efficacy. This is an interesting research topic, since the results of such a comparison could provide some guidelines about the type of image used (mammogram, ultrasound, or MRI) and the technique that offers the best performance.

3.3. Classifiers

The last step of this stage is the classification of the extracted features to make a diagnosis. Broadly speaking, a classifier uses the input data to find relationships that can be used to determine the class to which the input data belongs. The evaluation of the classifier is done using three basic measurements: accuracy, specificity, and sensitivity [106,107]. Accuracy refers to the percentage of images that are correctly classified into their corresponding classes; sensitivity is the percentage of images classified as malignant that truly are malignant; and specificity is the percentage of images classified as benign that truly are benign. In addition, the area under the curve (AUC) is a parameter that allows choosing the optimal model; it takes a value between 0 and 1, a good classifier being the one with a value close to 1 [108]. In this sense, depending on the training algorithm required by the strategy, classifiers can be divided into unsupervised and supervised [45,106,107].
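These figures of merit can be computed directly from a confusion matrix, as the short example below shows; the predictions and scores are made-up values, and the 1 = malignant / 0 = benign convention is an assumption.

```python
# Accuracy, sensitivity, specificity, and AUC from a set of predictions.
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]   # classifier scores for the AUC

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)        # malignant cases correctly flagged
specificity = tn / (tn + fp)        # benign cases correctly cleared
auc = roc_auc_score(y_true, y_score)
```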

3.3.1. Unsupervised Classifiers

An unsupervised classifier aims to find the underlying structures of the input data without making explicit the class to which the input data belongs [109]. In this sense, input data with similar values are assigned to the same class [109]. Dubey et al. [110] studied the effect that the scheme used to initialize the clusters has on the K-means algorithm; to this purpose, the random and foggy methods were employed. They note that the foggy initialization method and Euclidean-type distances produce the best results, as a 92% accuracy is obtained. K-means and K-nearest neighbor classifiers have also been employed by Singh et al. [58] and Hernandez-Capistran et al. [111]. This family of classifiers is effective when the distance between the clusters is reasonably large; when this is not the case, the accuracy rate is highly degraded. For this reason, Onan [112] introduced fuzzy logic concepts to measure the distance between the set of features used as input and the clusters, where mutual information, an information theory measure, is chosen to quantify this distance. The author reports an accuracy of 99%, and a specificity and sensitivity of 99% and 100%, respectively. Similar results are achieved using the fuzzy c-means algorithm [113,114], a fuzzy-based classifier for time series [115], and fuzzy rule classifiers [116,117], among others. Other clustering-based approaches employed for classification are hierarchical clustering [118] and Unsupervised Test Vector Optimization [119]. It should be pointed out that unsupervised classifiers require a careful selection of the features used to train the algorithm, since an incorrect mix of features will degrade the performance of the classifier.
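A minimal K-means example in the spirit of the works cited above is sketched below; the synthetic feature vectors and the choice of two clusters are assumptions, and mapping the resulting clusters to benign/malignant still requires external validation.

```python
# Cluster ROI feature vectors into two groups without using the labels.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100, centers=2, n_features=5, random_state=0)  # stand-in features
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
cluster_of_each_roi = kmeans.labels_   # cluster indices; the benign/malignant meaning of
                                       # each cluster must be established separately
```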

3.3.2. Supervised Classifiers

Supervised classifiers require knowing a priori the class to which the input data belongs, that is, the input data must be labeled. The Decision Tree (DT) is an algorithm that uses a set of rules to determine the class of the input data. A DT has been employed by Mughal et al. [71], who perform the detection of masses in mammograms using texture features from the region of interest; they obtain an accuracy, specificity, and sensitivity of 89%, 89%, and 88.5%, respectively. Shan et al. [120] employ geometrical features to classify abnormalities detected in ultrasound images. The obtained results show an accuracy, sensitivity, and specificity of 77.7%, 74.0%, and 82.0%, respectively. An improvement of the DT is the Random Forest (RF). During the training stage, an RF uses several DTs, and the ones that have the lowest error are chosen; in this way, the accuracy is enhanced. RFs are considered ensemble classifiers, and several applications have been reported [121,122,123,124], where the reported accuracy, specificity, and sensitivity show an improvement. Another type of ensemble classifier is the Adaptive Boosting (AdaBoost) algorithm. It consists of the utilization of weak classifiers, usually based on features that by themselves can generate a classification accuracy greater than 50%; by combining them in an ensemble, the classifier accuracy is improved. AdaBoost applications have been reported [125,126,127], achieving good results (accuracy, specificity, and sensitivity values greater than 90%); yet, the authors note that extensive investigation is still required to ensure that these results can be obtained with different types of images (mammograms, ultrasound, and MRI).
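For illustration, the short sketch below (with synthetic data and arbitrary parameters) trains the two ensemble classifiers just mentioned, Random Forest and AdaBoost, with scikit-learn.

```python
# Random Forest and AdaBoost on a synthetic feature matrix, with cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=15, random_state=0)
for clf in (RandomForestClassifier(n_estimators=100, random_state=0),
            AdaBoostClassifier(n_estimators=100, random_state=0)):
    scores = cross_val_score(clf, X, y, cv=5)          # 5-fold cross-validated accuracy
    print(type(clf).__name__, "mean CV accuracy:", scores.mean().round(3))
```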
Another classification algorithm widely used for BC detection is the support vector machine (SVM). The SVM finds the hyperplane that divides the zones where the values of the input features are located. In this regard, Liu et al. [52] use morphological and edge features combined with an SVM classifier with a linear kernel to detect benign and malignant masses in ultrasound images. They obtain an accuracy, sensitivity, and specificity of 82.6%, 66.67%, and 93.55%, respectively. It should be noted that most of the reviewed works use the term malignant to describe masses or lesions that are cancerous regardless of the type. To improve the aforementioned results, Sharma and Khanna [128] use the Zernike moments as features and an SVM classifier with a non-linear kernel function; the authors obtain a specificity and sensitivity of 99%. Similar approaches have been reported [87,129,130,131,132,133]. It is worth noticing that if the features have a strong non-linear relationship, other classifiers could deliver better results.
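A hedged sketch of an SVM with a non-linear (RBF) kernel on standardized features, in the spirit of the works discussed above, is shown below; the data, split, and hyperparameters are illustrative assumptions.

```python
# SVM with an RBF kernel on scaled synthetic features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```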

3.4. Artificial Intelligence-Based Classifiers

Artificial Intelligence (AI) is the branch of computer science that develops algorithms to perform complex tasks that were previously solved with human knowledge [134]. Evidently, since classification is a task usually solved by the physician, AI can provide automated solutions. In this sense, Artificial Neural Networks (ANNs) are a type of AI algorithm employed to perform the classification into different classes. ANNs are brain-inspired algorithms that store the knowledge contained in the input data by means of a training process [135]. An ANN consists of a three-layer scheme: input, hidden, and output, as depicted in Figure 5.
The training process takes the information contained in the input variables and adjusts the values of the variables (weights) that connect all the layers in order to match the input with its corresponding class; in this way, the hidden pattern shared by all the inputs of a given class is detected and stored. Consequently, it is necessary to use a sufficiently large database, with representative scenarios, to train the ANN. Beura et al. [136] present a methodology that employs mammograms to detect masses (benign and malignant) using the two-dimensional discrete wavelet transform (2D-DWT) with normalized gray-level co-occurrence matrices (NGLCM). The images are segmented using a cropping-based strategy to obtain the ROIs, which are analyzed with the symmetric biorthogonal 4.4 mother wavelet and a decomposition level of 2. All the frequency bands are processed to obtain the features (NGLCM), and the t-test is used to select the most discriminant features. The obtained results show that the proposal achieves an accuracy, sensitivity, and specificity of 94.2%, 100%, and 90%, respectively, using the ANN classifier, whereas an RF classifier, using the same database, obtains an 82.4% accuracy. Mohammed et al. [137] use fractal dimension values as features to classify ultrasound breast images as benign or malignant. They obtain the ROIs using a cropping-based algorithm and process them to obtain multifractal dimension features, achieving an accuracy, sensitivity, and specificity of 82.04%, 79.4%, and 84.76%, respectively, with an ANN classifier; they point out that the ROI extraction algorithm must be improved. Gallego-Ortiz and Martel [138] classify MRI breast images using graph-based features, the Deep Embedded Clustering algorithm to select the most relevant features, and an ANN classifier. The ROIs are obtained using a graph model, and they obtain an area under the curve of 0.80 (the closer to 1, the better). ANN classifiers have also been used in [139,140,141,142].
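The following sketch shows a small input-hidden-output ANN of the kind depicted in Figure 5, implemented with scikit-learn's MLPClassifier; the synthetic features and the hidden-layer size are assumptions.

```python
# A small feedforward ANN (one hidden layer) trained on synthetic features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, n_features=25, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=1))
ann.fit(X_tr, y_tr)                 # the weights connecting the layers are adjusted here
print("test accuracy:", ann.score(X_te, y_te))
```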
Deep neural networks (DNNs) are a specific type of AI algorithm based on the architecture of an ANN [134]. DNNs resemble how the brain stores, in multiple layers, the acquired knowledge to solve a specific task [8]. The Convolutional Neural Network (CNN) is a DNN that emulates the visual processing cortex to determine the class to which an image belongs [8,134]. A typical CNN scheme is depicted in Figure 6.
From the figure, it is seen that a CNN consists of kernel, pooling, and fully connected layers. The purpose of the kernel layer is to detect and extract the spatial features of the image, which is usually done with the convolution operator. The output of this layer, known as the feature map, might contain negative values that can cause numerical instabilities in the training stage; thus, the map is processed using a function that avoids the negative values. Once the feature map is processed, the pooling layer reduces the amount of information contained in order to eliminate redundant information; finally, the output of the pooling layer goes to the fully connected layer to be classified. In this sense, several works [143,144,145,146,147,148] have employed CNNs to detect benign and malignant tissues in either mammography or MRI images. They note that the depth of the network (i.e., the number of layers), the fine-tuning of some of the kernel or pooling layers, as well as the number of images, affect the classifier performance.
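A minimal PyTorch sketch of such a CNN, with the kernel (convolution), pooling, and fully connected layers described above, is given below; the layer sizes and the assumed 1-channel 64 × 64 input patches are illustrative, not taken from the cited works.

```python
# Small CNN with convolution, pooling, and a fully connected classifier head.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # kernel layer: spatial feature maps
            nn.ReLU(),                                   # removes negative values from the map
            nn.MaxPool2d(2),                             # pooling: discards redundant detail
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # fully connected layer

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = SmallCNN()(torch.randn(4, 1, 64, 64))   # a batch of 4 dummy 64x64 patches
```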
Ribli et al. [149] add an additional layer to implement specifically designed filters for mammograms. The CNN they employ has 16 layers and classifies the detected lesions as benign or malignant, obtaining an area under the curve of 0.85. A similar approach is proposed in [150]; the modification proposed there is that a fully connected layer is placed as the first layer of the CNN, so that when the images are noise-corrupted, the feature extraction process is not degraded. They obtain an accuracy, sensitivity, and specificity of 98.7%, 98.65%, and 99.57% for the detection of benign and malignant lesions in mammograms. Zhang et al. [151] carry out a test to find the best-suited process for the pooling layer; they found that rank-based stochastic pooling is the best-suited algorithm, obtaining an accuracy, sensitivity, and specificity of 94.0%, 93.4%, and 94.6%, respectively, for classifying lesions as normal or abnormal using mammograms. Similar approaches have been proposed [152,153,154,155]. Table 2 presents a summary of the classifiers discussed above. It should be noted that a mix of images from mammograms, ultrasound, and MRI is usually employed; these images usually come from private databases.
From the data shown in Table 2, it can be seen that it is necessary to standardize the minimum requirements regarding the number of images that the databases must have. In this way, the performance metrics that are employed, i.e., accuracy, specificity, and sensitivity, can be compared in a better way. Moreover, even when the presented approaches show interesting results, they point to the necessity of having a considerable database that contains a significant number of labeled images to obtain the best possible results, which in many real-life scenarios is not always possible. For these reasons, algorithms that can work with both labeled and unlabeled images are still a necessity.

4. Recent Image Generation Techniques

Infrared Thermography (IRT) Applied to Breast Cancer

Temperature has been documented as an indicator of health [156]. Specifically speaking of breast cancer, when a tumor exists, it makes use of nutrients for its growth (angiogenesis), resulting in an increase in metabolism; thus, the temperature around the tumor will increase in all directions [157]. To detect these temperature changes, IRT has been used, as it measures the intensity of the thermal radiation (in the form of energy) that bodies emit, converting it into temperature [158]. The emitted energy can be located in the electromagnetic spectrum, as shown in Figure 7, where it is seen that the infrared (IR) band ranges from 0.76 to 1000 μm and is in turn divided into near-IR, mid-IR, and far-IR. The available technology to measure IR allows performing the aforementioned task using non-invasive, contactless, safe, and painless equipment [159,160,161], making it a suitable proposal for developing scanning technologies.
To obtain the best possible images, there are mainly three factors that influence thermographic imaging in humans [162,163]:
  • Individual factors: everything related to the patient's conditions, such as age, sex, height, and medical history, among others, as well as the inclusion and exclusion criteria. An aspect of vital importance is the emissivity of human skin, which is 0.98 [164].
  • Technical factors: everything related to the technology used during the study, such as the thermal imager (considering the distance from the lens to the patient), the protocol, and the processing of the acquired medical thermal images, including feature extraction and subsequent analysis.
  • Environmental factors: the room position (it should be located in an area with the lowest possible incidence of light), the temperature and relative humidity of the space where the thermographic images are to be taken, as well as the patient's acclimation time.
Considering all the above-discussed aspects, a suitable location for developing a controlled scenario to acquire thermographic images focused on breast cancer is depicted in Figure 8.
Once the room is conditioned for obtaining the thermographic images, the acquisition can be done. The reported results make use of the previously discussed image processing and classification algorithms. Table 3 shows a brief summary of the most recently proposed works.
Recently, dynamic infrared thermography (DIT) has been proposed as an alternative to further improve the image quality and sharpness [64]. DIT is a sequence of thermograms captured after stimulating the breasts by means of a cold stressor [176]. The objective of this stressor is to generate a contrast between areas with abnormal vascularity and metabolic activity and areas free of abnormalities; therefore, it is possible to analyze the breast response after removing this stimulus, and in this way the image sharpness is enhanced. Silva et al. [177] proposed a technology that analyzes the information from DIT to identify patients at risk of breast cancer, where they segment the area of interest (breast) and analyze the changes in temperature through the different acquired thermograms. Saniei et al. [178] proposed a system that segments both breasts to obtain the branching points of the vascular network, which represent the pattern of the veins; finally, these patterns are classified to obtain the diagnosis. As can be seen, DIT requires robust systems that allow the analysis of the acquired thermograms over time, which should be considered in order to generate the next generation of equipment that can allow the early detection of the angiogenesis process. By doing this, patients can be properly monitored so that changes in the patterns of the angiogenesis process can be detected.

5. Recent Classification Algorithms

As pointed out in the Classifiers subsection, it is necessary to overcome the lack of large databases of diagnosed images (mammograms, ultrasound, or BMRI) in order to generate robust and efficient classifiers. In this sense, semi-supervised methods can be an attractive choice to explore. They usually combine an unsupervised algorithm that clusters the available images, so a representation of the dataset is obtained, with a supervised classifier that assigns the classes of the images [109,179]. The unsupervised part assumes that the unlabeled images that are close to the labeled ones in the input space share the same labels [109]. Some of the most recent developments that could be applied to breast cancer detection are presented below.

5.1. Autoencoders

An autoencoder is a neural network with one or more hidden layers that is used to reconstruct the input from a compact representation, as the hidden layers have few neurons. The autoencoder is depicted in Figure 9.
From the figure, it is seen that it has two parts: the encoder, which maps the input into its compact representation, and the decoder, which performs the inverse operation, that is, uses the compact representation to recover the original data. The most common training scheme consists of employing a loss function that aims to reduce the error between the original and reconstructed data. For breast cancer detection, autoencoders can be used in feature extraction stages, as the encoder obtains the compact representation (features) of the input image, and can be followed by a supervised classifier. Recently, this approach has been explored [79,94,180,181,182,183], showing promising results for generating robust methodologies, with accuracy values above 95%.
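A minimal PyTorch sketch of this encoder-decoder structure and its reconstruction loss is shown below; the layer sizes, code dimension, and flattened 64 × 64 input are assumptions for illustration.

```python
# Fully connected autoencoder: encoder -> compact code -> decoder -> reconstruction.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_inputs=4096, n_code=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, 256), nn.ReLU(),
                                     nn.Linear(256, n_code))        # compact representation
        self.decoder = nn.Sequential(nn.Linear(n_code, 256), nn.ReLU(),
                                     nn.Linear(256, n_inputs))      # reconstruction

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = AutoEncoder()
x = torch.randn(8, 4096)                      # 8 flattened 64x64 patches (dummy data)
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)       # training minimizes this reconstruction error
# The 'code' vectors can then feed a supervised classifier, as discussed above.
```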

5.2. Deep Belief Networks (DBNs)

Deep belief networks are based on the usage of restricted Boltzmann machines (RBMs). RBMs use only two layers, input and hidden, to represent, as in the case of the autoencoders, the most important features of the input data, but in a stochastic way [99]. This ensures that outliers do not affect the network performance; detailed information can be found in [184,185]. The main idea in employing DBNs is that the image segmentation can be done without external guidance; thus, a totally automated methodology can be proposed. Recent works have explored this idea to perform liver segmentation [186], lung lesion detection [187], and fusion of medical images [188]. Their use could deliver promising results for detecting BC.
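As a hedged illustration, the sketch below trains a single RBM (the building block of a DBN) as an unsupervised feature learner ahead of a simple classifier using scikit-learn; the digits dataset is only a stand-in for medical images, and stacking several RBMs would be needed to obtain an actual DBN.

```python
# One RBM used as an unsupervised feature extractor before logistic regression.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = load_digits(return_X_y=True)             # stand-in image data
model = Pipeline([
    ("scale", MinMaxScaler()),                  # the RBM expects values in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```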

5.3. Ladder Networks

The ladder neural network, proposed by Rasmus et al. [189], uses an autoencoder as the first part of a feedforward network to denoise the inputs; further, by determining the minimum set of features that represent the inputs, the classification can be done using simple algorithms. The network uses a penalization term in the training algorithm to ensure the maximum similarity between the original and reconstructed inputs.

5.4. Deep Neural Network (DNN)-Based Algorithms

Recently, DNN-based classification strategies have been proposed to maximize the accuracy that the classifiers achieve while reducing the computational resources required for training and execution, with the physics-informed neural network and, more recently, the Deep Kronecker neural network [190] being among the most recent proposals. In particular, these networks are designed to take full advantage of adaptive activation functions. Traditional activation functions, such as the unipolar and bipolar sigmoid and the ReLU, might have problems when dealing with low-amplitude features, as the training algorithm fails to reach the lowest point of the error surface, thus generating classifiers prone to generalization issues [190].
In this sense, by introducing a parameter into the activation function equations that can be modified during the training process, it can be avoided that the gradient descent stalls in a local minimum of the error surface [191]; thus, the highest accuracy can be obtained since the global minimum is reached [192]. The results presented in [190,191,192] suggest that the utilization of this type of activation function might increase the classifier accuracy without increasing the computational burden required to train the network, as the geometrical shape that the activation function defines can be adapted during training to the decision boundary zone where classification is required. It should be noted that the proposed Rowdy family of activation functions could be an interesting research topic for designing classification algorithms, as the presented results demonstrate that the lowest error is achieved in a prediction task.
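As a generic illustration of this idea (not the Rowdy functions themselves), the PyTorch sketch below defines an activation with a trainable scale parameter that the optimizer updates together with the network weights.

```python
# Adaptive activation: the scale 'a' is a learnable parameter of the network.
import torch
import torch.nn as nn

class AdaptiveTanh(nn.Module):
    def __init__(self, init_scale=1.0):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(init_scale))   # learned during training

    def forward(self, x):
        return torch.tanh(self.a * x)                      # slope adapts to the data

net = nn.Sequential(nn.Linear(10, 16), AdaptiveTanh(), nn.Linear(16, 2))
# net.parameters() now includes the activation's scale, so any optimizer
# (e.g., torch.optim.Adam(net.parameters())) tunes it together with the weights.
```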

6. Concluding Remarks

This paper presents a state-of-the-art review of the technologies used to acquire images of the breast and the algorithms used to detect BC. To the best of the authors' knowledge, this is the first review article that deals with all the steps required to propose a reliable methodology for BC detection. This is important, as the earliest detection of the disease can save a considerable amount of money in the required treatments and, most importantly, can potentially save numerous lives.
The analyzed papers focus on the processing of images obtained using non-invasive methods (X-ray, ultrasound, or magnetic resonance), as these are the most accessible technologies in hospitals. The strategy used in most of the papers has four steps: image acquisition, ROI estimation, feature extraction, and interpretation. For the ROI estimation, the proposed strategies are based on radiologist annotations or require external help in order to be executed; this is an opportunity area for developing automatic algorithms that can detect the abnormalities. The feature estimation is used to quantify the detected zones into numerical values. In this sense, texture-based and geometry-based features are by far the most employed due to their estimation simplicity; still, frequency or spatial features have recently begun to be explored and may detect minimal changes that might increase the sensitivity required to further improve the classification accuracy. It should be noticed that feature reduction strategies are commonly employed in order to reduce the training time or avoid potential misclassifications, the most popular being LDA and PCA. On the other hand, classification strategies employ either supervised or unsupervised algorithms. The selection of the type of classifier heavily depends on the nature of the extracted features: if they are highly discriminant, an unsupervised classifier is usually selected; when the features present overlapping zones, it is necessary to employ a supervised classifier. It should be noticed that AI-based algorithms, especially those based on deep learning, have the edge in terms of performance, at the expense of being very demanding in terms of the computational resources employed.
Emerging imaging technologies such as microwave imaging and thermography have been explored recently. In particular, the latter has recently attracted the attention of researchers, as it is easy to use and, with a proper cooling protocol, can reach an interesting level of accuracy to detect, at least, suspected masses that might evolve into malignant ones. With the development of semi-supervised strategies, some of the employed stages can be integrated into one, allowing the development of effective feature extraction, selection, and classification strategies that have the same performance as supervised classifiers, with fewer computational resources, even in the presence of limited labeled images, which is a major obstacle to the training of the classifiers.
Modern BC detection strategies should rely on artificial intelligence (AI)-based algorithms that can use both the information of the acquired images and categorical data [193,194,195], i.e., information about the daily life of the patients, with the aim of proposing algorithms that can determine whether the patient has malignant lesions with a higher certainty and the lowest false-alarm rate, at the earliest stage possible, in order to provide an effective treatment that can prevent the disease propagation. To achieve this goal, it is necessary to develop a database that contains the aforementioned features and whose size reflects the main scenarios that can be found in real life. Further, with algorithms that can deal with the aforementioned information, it will be possible to design personalized surveillance and clinical screening strategies that could offer the best health outcome for every patient.

Author Contributions

Conceptualization: I.A.C.-A., L.A.M.-H. and C.A.P.-R.; methodology: J.A.B.-H. and M.A.I.-M.; investigation: C.A.P.-R. and M.T.-A.; writing—original draft preparation, C.A.P.-R., J.A.B.-H. and I.A.C.-A.; writing—review and editing: M.A.I.-M., M.T.-A. and L.A.M.-H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization (WHO). Cáncer de Mama: Prevención y Control. Available online: https://www.who.int/topics/cancer/breastcancer/es/index1.html (accessed on 3 May 2022).
  2. Villa-Guillen, D.E.; Avila-Monteverde, E.; Gonzalez-Zepeda, J.H. Breast cancer risk and residential exposure to envi-ronmental hazards in Hermosillo, Sonora, Mexico [abstract]. In Proceedings of the 2019 San Antonio Breast Cancer Symposium, San Antonio, TX, USA, 10–14 December 2019; AACR: Philadelphia, PA, USA. [Google Scholar]
  3. Keith, B.; Simon, M.C. Tumor Angiogenesis. The Molecular Basis of Cancer, 4th ed.; Mendelsohn, J., Gray, J.W., Howley, P.M., Israel, M.A., Thompson, C.B., Eds.; Elsevier: Philadelphia, PA, USA, 2015; pp. 257–268. [Google Scholar]
  4. Semin, J.N.; Palm, D.; Smith, L.M.; Ruttle, S. Understanding breast cancer survivors’ financial burden and distress after financial assistance. Support. Care Cancer 2020, 28, 4241–4248. [Google Scholar] [CrossRef]
  5. Mann, R.M.; Cho, N.; Moy, L. Breast MRI: State of the Art. Radiology 2019, 292, 520–536. [Google Scholar] [CrossRef] [PubMed]
  6. Vobugari, N.; Raja, V.; Sethi, U.; Gandhi, K.; Raja, K.; Surani, S.R. Advancements in Oncology with Artificial Intelligence—A Review Article. Cancers 2022, 14, 1349. [Google Scholar] [CrossRef]
  7. Chougrad, H.; Zouaki, H.; Alheyane, O. Multi-label transfer learning for the early diagnosis of breast cancer. Neurocomputing 2020, 392, 168–180. [Google Scholar] [CrossRef]
  8. Le, E.P.V.; Wang, Y.; Huang, Y.; Hickman, S.; Gilbert, F. Artificial intelligence in breast imaging. Clin. Radiol. 2019, 74, 357–366. [Google Scholar] [CrossRef] [PubMed]
  9. Yassin, N.I.R.; Omran, S.; Houby, E.M.F.; Allam, H. Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: A systematic review. Comput. Methods Programs Biomed. 2018, 156, 25–45. [Google Scholar] [CrossRef]
  10. Jochelson, M. Advanced Imaging Techniques for the Detection of Breast Cancer; American Society of Clinical Oncology Educational Book: Alexandria, VA, USA, 2012; pp. 65–69. [Google Scholar]
  11. Yaffe, M.J. AAPM tutorial. Physics of mammography: Image recording process. RadioGraphics 1990, 10, 341–363. [Google Scholar] [CrossRef] [Green Version]
  12. Pak, F.; Kanan, H.R.; Alikhassi, A. Breast cancer detection and classification in digital mammography based on Non-Subsampled Contourlet Transform (NSCT) and Super Resolution. Comput. Methods Programs Biomed. 2015, 122, 89–107. [Google Scholar] [CrossRef]
  13. Geweid, G.G.N.; Abdallah, M.A. A Novel Approach for Breast Cancer Investigation and Recognition Using M-Level Set-Based Optimization Functions. IEEE Access 2019, 7, 136343–136357. [Google Scholar] [CrossRef]
  14. Guzmán-Cabrera, R.; Guzmán-Sepúlveda, J.R.; Torres-Cisneros, M.; May-Arrioja, D.A.; Ruiz-Pinales, J.; Ibarra-Manzano, O.G.; Aviña-Cervantes, G.; Parada, A.G. Digital Image Processing Technique for Breast Cancer Detection. Int. J. Thermophys. 2012, 34, 1519–1531. [Google Scholar] [CrossRef]
  15. Avuti, S.K.; Bajaj, V.; Kumar, A.; Singh, G.K. A novel pectoral muscle segmentation from scanned mammograms using EMO algorithm. Biomed. Eng. Lett. 2019, 9, 481–496. [Google Scholar] [CrossRef]
  16. Vijayarajeswari, R.; Parthasarathy, P.; Vivekanandan, S.; Basha, A.A. Classification of mammogram for early detection of breast cancer using SVM classifier and Hough transform. Measurement 2019, 146, 800–805. [Google Scholar] [CrossRef]
  17. Rodríguez-Álvarez, M.X.; Tahoces, P.G.; Cadarso-Suárez, C.; Lado, M.J. Comparative study of ROC regression techniques—Applications for the computer-aided diagnostic system in breast cancer detection. Comput. Stat. Data Anal. 2011, 55, 888–902. [Google Scholar] [CrossRef]
  18. Cheng, H.D.; Shan, J.; Ju, W.; Guo, Y.; Zhang, L. Automated breast cancer detection and classification using ultrasound images: A survey. Pattern Recognit. 2010, 43, 299–317. [Google Scholar] [CrossRef] [Green Version]
  19. Ouyang, Y.; Tsui, P.-H.; Wu, S.; Wu, W.; Zhou, Z. Classification of Benign and Malignant Breast Tumors Using H-Scan Ultrasound Imaging. Diagnostics 2019, 9, 182. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Ouyang, Y.; Tsui, P.-H.; Wu, S.; Wu, W.; Zhou, Z. Breast cancer detection by B7-H3–targeted ultrasound molecular imaging. Cancer Res. 2015, 75, 2501–2509. [Google Scholar] [CrossRef] [Green Version]
  21. Athanasiou, A.; Tardivon, A.; Ollivier, L.; Thibault, F.; El Khoury, C.; Neuenschwander, S. How to optimize breast ultrasound. Eur. J. Radiol. 2009, 69, 6–13. [Google Scholar] [CrossRef]
  22. Mumin MRad, N.A.; Hamid MRad, M.T.R.; Ding Wong, J.H.; Rahmat MRad, K.; Hoong Ng, K. Magnetic Resonance Imaging Phenotypes of Breast Cancer Molecular Subtypes: A Systematic Review. Acad. Radiol. 2022, 29, S89–S106. [Google Scholar] [CrossRef]
  23. Han, C.; Zhang, A.; Kong, Y.; Yu, N.; Xie, T.; Dou, B.; Li, K.; Wang, Y.; Li, J.; Xu, K. Multifunctional iron oxide-carbon hybrid nanoparticles for targeted fluorescent/MR dual-modal imaging and detection of breast cancer cells. Anal. Chim. Acta 2019, 1067, 115–128. [Google Scholar] [CrossRef]
  24. Mango, V.L.; Morris, E.A.; Dershaw, D.D.; Abramson, A.; Fry, C.; Moskowitz, C.S.; Hughes, M.; Kaplan, J.; Jochelson, M.S. Abbreviated protocol for breast MRI: Are multiple sequences needed for cancer detection? Eur. J. Radiol. 2015, 84, 65–70. [Google Scholar] [CrossRef]
  25. Nikolova, N.K. Microwave Imaging for Breast Cancer. IEEE Microw. Mag. 2011, 12, 78–94. [Google Scholar] [CrossRef]
  26. Xu, M.; Thulasiraman, P.; Noghanian, S. Microwave tomography for breast cancer detection on Cell broadband engine processors. J. Parallel Distrib. Comput. 2021, 72, 1106–1116. [Google Scholar] [CrossRef]
  27. Grzegorczyk, T.M.; Meaney, P.M.; Kaufman, P.A.; di Florio-Alexander, R.M.; Paulsen, K.D. Fast 3-D Tomographic Microwave Imaging for Breast Cancer Detection. IEEE Trans. Med. Imaging 2012, 31, 1584–1592. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. AlSawaftah, N.; El-Abed, S.; Dhou, S.; Zakaria, A. Microwave Imaging for Early Breast Cancer Detection: Current State, Challenges, and Future Directions. J. Imaging 2022, 8, 123. [Google Scholar] [CrossRef] [PubMed]
  29. Zerrad, F.-E.; Taouzari, M.; Makroum, E.M.; El Aoufi, J.; Islam, M.T.; Özkaner, V.; Abdulkarim, Y.I.; Karaaslan, M. Multilayered metamaterials array antenna based on artificial magnetic conductor’s structure for the application diagnostic breast cancer detection with microwave imaging. Med. Eng. Phys. 2022, 99, 103737. [Google Scholar] [CrossRef]
  30. Karabatak, M.; Ince, M.C. An expert system for detection of breast cancer based on association rules and neural network. Expert Syst. Appl. 2009, 36, 3465–3469. [Google Scholar] [CrossRef]
  31. Wang, P.; Hu, X.; Li, Y.; Liu, Q.; Zhu, X. Automatic cell nuclei segmentation and classification of breast cancer histopathology images. Signal Process. 2016, 122, 1–13. [Google Scholar] [CrossRef]
  32. Wahab, N.; Khan, A.; Lee, Y.S. Two-phase deep convolutional neural network for reducing class skewness in histopathological images based breast cancer detection. Comput. Biol. Med. 2017, 85, 86–97. [Google Scholar] [CrossRef]
  33. Fan, Y.; Wang, H.; Gemmeke, H.; Hopp, T.; Hesser, J. Model-data-driven image reconstruction with neural networks for ultrasound computed tomography breast imaging. Neurocomputing 2022, 467, 10–21. [Google Scholar] [CrossRef]
  34. Koh, J.; Yoon, Y.; Kim, S.; Han, K.; Kim, E.-K. Deep Learning for the Detection of Breast Cancers on Chest Computed Tomography. Clin. Breast Cancer 2021, 22, 26–31. [Google Scholar] [CrossRef]
  35. Zangheri, B.; Messa, C.; Picchio, M.; Gianolli, L.; Landoni, C.; Fazio, F. PET/CT and breast cancer. Euro. J. Nuclear Med. Mol. Imaging. 2004, 31, S135–S142. [Google Scholar] [CrossRef] [PubMed]
  36. Sollini, M.; Cozzi, L.; Ninatti, G.; Antunovic, L.; Cavinato, L.; Chiti, A.; Kirienko, M. PET/CT radiomics in breast cancer: Mind the step. Methods 2020, 188, 122–132. [Google Scholar] [CrossRef] [PubMed]
  37. Salaün, P.-Y.; Abgral, R.; Malard, O.; Querellou-Lefranc, S.; Quere, G.; Wartski, M.; Coriat, R.; Hindie, E.; Taieb, D.; Tabarin, A.; et al. Good clinical practice recommendations for the use of PET/CT in oncology. Eur. J. Nuclear Med. Mol. Imaging 2020, 47, 28–50. [Google Scholar] [CrossRef] [PubMed]
  38. Yi, A.; Jang, M.-J.; Yim, D.; Kwon, B.R.; Shin, S.U.; Chang, J.M. Addition of Screening Breast US to Digital Mammography and Digital Breast Tomosynthesis for Breast Cancer Screening in Women at Average Risk. Radiology 2021, 298, 568–575. [Google Scholar] [CrossRef]
  39. Spak, D.A.; Le-Petross, H.T. Screening Modalities for Women at Intermediate and High Risk for Breast Cancer. Curr. Breast Cancer Rep. 2019, 11, 111–116. [Google Scholar] [CrossRef]
  40. Lee, T.C.; Reyna, C.; Shaughnessy, E.; Lewis, J.D. Screening of populations at high risk for breast cancer. J. Surg. Oncol. 2019, 120, 820–830. [Google Scholar] [CrossRef]
  41. Shah, T.A.; Guraya, S.S. Breast cancer screening programs: Review of merits, demerits, and recent recommendations practiced across the world. J. Microsc. Ultrastruct. 2017, 5, 59–69. [Google Scholar] [CrossRef]
  42. Nguyen, D.L.; Myers, K.S.; Oluyemi, E.; Mullen, L.A.; Panigrahi, B.; Rossi, J.; Ambinder, E.B. BI-RADS 3 Assessment on MRI: A Lesion-Based Review for Breast Radiologists. J. Breast Imaging 2022, wbac032. [Google Scholar] [CrossRef]
  43. Daimiel Naranjo, I.; Gibbs, P.; Reiner, J.S.; Lo Gullo, R.; Thakur, S.B.; Jochelson, M.S.; Thakur, N.; Baltzer, P.A.T.; Helbich, T.H.; Pinker, K. Breast Lesion Classification with Multiparametric Breast MRI Using Radiomics and Machine Learning: A Comparison with Radiologists’ Performance. Cancers 2022, 14, 1743. [Google Scholar] [CrossRef]
  44. Shimauchi, A.; Jansen, S.A.; Abe, H.; Jaskowiak, N.; Schmidt, R.A.; Newstead, G.M. Breast Cancers Not Detected at MRI: Review of False-Negative Lesions. Am. J. Roentgenol. 2010, 194, 1674–1679. [Google Scholar] [CrossRef]
  45. Tasdemir, S.B.Y.; Tasdemir, K.; Aydin, Z. A review of mammographic region of interest classification. WIREs Data Min. Knowl. Discov. 2020, 10, 1357. [Google Scholar] [CrossRef]
  46. Sha, Z.; Hu, L.; Rouyendegh, B.D. Deep learning and optimization algorithms for automatic breast cancer detection. Int. J. Imaging Syst. Technol. 2020, 30, 495–506. [Google Scholar] [CrossRef]
  47. Wang, C.; Brentnall, A.R.; Mainprize, J.G.; Yaffe, M.; Cuzick, J.; Harvey, J.A. External validation of a mammographic texture marker for breast cancer risk in a case–control study. J. Med. Imaging 2020, 7, 014003. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Heidari, M.; Mirniaharikandehei, S.; Liu, W.; Hollingsworth, A.B.; Liu, H.; Zheng, B. Development and Assessment of a New Global Mammographic Image Feature Analysis Scheme to Predict Likelihood of Malignant Cases. IEEE Trans. Med. Imaging 2020, 39, 1235–1244. [Google Scholar] [CrossRef]
  49. Suresh, R.; Rao, A.N.; Reddy, B.E. Detection and classification of normal and abnormal patterns in mammograms using deep neural network. Concurr. Comput. Pract. Exp. 2018, 31, 5293. [Google Scholar] [CrossRef]
  50. Sapate, S.; Talbar, S.; Mahajan, A.; Sable, N.; Desai, S.; Thakur, M. Breast cancer diagnosis using abnormalities on ipsilateral views of digital mammograms. Biocybern. Biomed. Eng. 2020, 40, 290–305. [Google Scholar] [CrossRef]
  51. Pezeshki, H. Breast tumor segmentation in digital mammograms using spiculated regions. Biomed. Signal Process. Control 2022, 76, 103652. [Google Scholar] [CrossRef]
  52. Liu, Y.; Ren, L.; Cao, X.; Tong, Y. Breast tumors recognition based on edge feature extraction using support vector machine. Biomed. Signal Process. Control 2020, 58, 101825. [Google Scholar] [CrossRef]
  53. Liu, Y.; Ren, L.; Cao, X.; Tong, Y. Diffusion-Weighted MRI of Breast Cancer: Improved Lesion Visibility and Image Quality Using Synthetic b-Values. J. Magn. Reson. Imaging 2019, 50, 1754–1761. [Google Scholar]
  54. Almalki, Y.E.; Soomro, T.A.; Irfan, M.; Alduraibi, S.K.; Ali, A. Impact of Image Enhancement Module for Analysis of Mammogram Images for Diagnostics of Breast Cancer. Sensors 2022, 22, 1868. [Google Scholar] [CrossRef]
  55. Rani, V.M.K.; Dhenakaran, S.S. Classification of ultrasound breast cancer tumor images using neural learning and predicting the tumor growth rate. Multimed. Tools Appl. 2020, 79, 16967–16985. [Google Scholar] [CrossRef]
  56. Bria, A.; Karssemeijer, N.; Tortorella, F. Learning from unbalanced data: A cascade-based approach for detecting clustered microcalcifications. Med. Image Anal. 2014, 18, 241–252. [Google Scholar] [CrossRef] [PubMed]
  57. Shrivastava, N.; Bharti, J. Breast Tumor Detection in Digital Mammogram Based on Efficient Seed Region Growing Segmentation. IETE J. Res. 2020. [Google Scholar] [CrossRef]
  58. Singh, H.; Sharma, V.; Singh, D. Comparative analysis of proficiencies of various textures and geometric features in breast mass classification using k-nearest neighbor. Vis. Comput. Ind. Biomed. Art 2022, 5, 1–19. [Google Scholar] [CrossRef] [PubMed]
  59. Sasaki, M.; Tozaki, M.; Rodríguez-Ruiz, A.; Yotsumoto, D.; Ichiki, Y.; Terawaki, A.; Oosako, S.; Sagara, Y.; Sagara, Y. Artificial intelligence for breast cancer detection in mammography: Experience of use of the ScreenPoint Medical Transpara system in 310 Japanese women. Breast Cancer 2020, 27, 642–651. [Google Scholar] [CrossRef]
  60. Junior, G.B.; da Rocha, S.V.; de Almeida, J.D.S.; de Paiva, A.C.; Silva, A.C.; Gattass, M. Breast cancer detection in mammography using spatial diversity, geostatistics, and concave geometry. Multimed. Tools Appl. 2019, 78, 13005–13031. [Google Scholar] [CrossRef]
  61. Fanizzi, A.; Basile, T.M.A.; Losurdo, L.; Bellotti, R.; Bottigli, U.; Dentamaro, R.; Didonna, V.; Fausto, A.; Massafra, R.; Moschetta, M.; et al. A machine learning approach on multiscale texture analysis for breast microcalcification diagnosis. BMC Bioinform. 2020, 21, 1–11. [Google Scholar] [CrossRef] [Green Version]
  62. Green, C.A.; Goodsitt, M.M.; Lau, J.H.; Brock, K.K.; Davis, C.L.; Carson, P.L. Deformable Mapping Method to Relate Lesions in Dedicated Breast CT Images to Those in Automated Breast Ultrasound and Digital Breast Tomosynthesis Images. Ultrasound Med. Biol. 2020, 46, 750–765. [Google Scholar] [CrossRef]
  63. Padmavathy, T.V.; Vimalkumar, M.N.; Bhargava, D.S. Adaptive clustering based breast cancer detection with ANFIS classifier using mammographic images. Clust. Comput. 2019, 22, 13975–13984. [Google Scholar] [CrossRef]
  64. Raghavendra, U.; Gudigar, A.; Ciaccio, E.J.; Ng, K.H.; Chan, W.Y.; Rahmat, K.; Acharya, U.R. 2DSM vs FFDM: A computer aided diagnosis based comparative study for the early detection of breast cancer. Expert Syst. 2021, 38, e12474. [Google Scholar] [CrossRef]
  65. Wang, Z.; Li, M.; Wang, H.; Jiang, H.; Yao, Y.; Zhang, H.; Xin, J. Breast Cancer Detection Using Extreme Learning Machine Based on Feature Fusion with CNN Deep Features. IEEE Access 2019, 7, 105146–105158. [Google Scholar] [CrossRef]
  66. Yap, M.H.; Pons, G.; Marti, J.; Ganau, S.; Sentis, M.; Zwiggelaar, R.; Davison, A.K.; Marti, R. Automated Breast Ultrasound Lesions Detection Using Convolutional Neural Networks. IEEE J. Biomed. Health Inform. 2018, 22, 1218–1226. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Teare, P.; Fishman, M.; Benzaquen, O.; Toledano, E.; Elnekave, E. Malignancy Detection on Mammography Using Dual Deep Convolutional Neural Networks and Genetically Discovered False Color Input Enhancement. J. Digit. Imaging 2017, 30, 499–505. [Google Scholar] [CrossRef] [PubMed]
  68. Shen, L.; Margolies, L.R.; Rothstein, J.H.; Fluder, E.; McBride, R.; Sieh, W. Deep Learning to Improve Breast Cancer Detection on Screening Mammography. Sci. Rep. 2019, 9, 1–12. [Google Scholar] [CrossRef] [PubMed]
  69. Gamage, T.P.B.; Malcolm, D.T.K.; Talou, G.D.M.; Mîra, A.; Doyle, A.; Nielsen, P.M.F.; Nash, M.P. An automated computational biomechanics workflow for improving breast cancer diagnosis and treatment. Interface Focus 2019, 9, 20190034. [Google Scholar] [CrossRef] [Green Version]
  70. Bouron, C.; Mathie, C.; Seegers, V.; Morel, O.; Jézéquel, P.; Lasla, H.; Guillerminet, C.; Girault, S.; Lacombe, M.; Sher, A.; et al. Prognostic Value of Metabolic, Volumetric and Textural Parameters of Baseline [18F]FDG PET/CT in Early Triple-Negative Breast Cancer. Cancers 2022, 14, 637. [Google Scholar] [CrossRef]
  71. Mughal, B.; Sharif, M.; Muhammad, N. Bi-model processing for early detection of breast tumor in CAD system. Eur. Phys. J. Plus 2017, 132, 266. [Google Scholar] [CrossRef]
  72. Wang, S.; Rao, R.V.; Chen, P.; Zhang, Y.; Liu, A.; Wei, L. Abnormal Breast Detection in Mammogram Images by Feed-forward Neural Network Trained by Jaya Algorithm. Fundam. Inform. 2017, 151, 191–211. [Google Scholar] [CrossRef]
  73. Muduli, D.; Dash, R.; Majhi, B. Automated breast cancer detection in digital mammograms: A moth flame optimization based ELM approach. Biomed. Signal Process. Control 2020, 59, 101912. [Google Scholar] [CrossRef]
  74. Shiji, T.P.; Remya, S.; Lakshmanan, R.; Pratab, T.; Thomas, V. Evolutionary intelligence for breast lesion detection in ultrasound images: A wavelet modulus maxima and SVM based approach. J. Intell. Fuzzy Syst. 2020, 38, 6279–6290. [Google Scholar] [CrossRef]
  75. Chakraborty, J.; Midya, A.; Rabidas, R. Computer-aided detection and diagnosis of mammographic masses using multi-resolution analysis of oriented tissue patterns. Expert Syst. Appl. 2018, 99, 168–179. [Google Scholar] [CrossRef]
  76. Jara-Maldonado, M.; Alarcon-Aquino, V.; Rosas-Romero, R. A new machine learning model based on the broad learning system and wavelets. Eng. Appl. Artif. Intell. 2022, 112, 104886. [Google Scholar] [CrossRef]
  77. Hajiabadi, H.; Babaiyan, V.; Zabihzadeh, D.; Hajiabadi, M. Combination of loss functions for robust breast cancer prediction. Comput. Electr. Eng. 2020, 84, 106624. [Google Scholar] [CrossRef]
  78. Eltrass, A.S.; Salama, M.S. Fully automated scheme for computer-aided detection and breast cancer diagnosis using digitised mammograms. IET Image Process. 2020, 14, 495–505. [Google Scholar] [CrossRef]
  79. Parekh, V.S.; Jacobs, M.A. Multiparametric radiomics methods for breast cancer tissue characterization using radiological imaging. Breast Cancer Res. Treat. 2020, 180, 407–421. [Google Scholar] [CrossRef] [Green Version]
  80. Wang, C.; Brentnall, A.R.; Cuzick, J.; Harkness, E.F.; Evans, D.G.; Astley, S. A novel and fully automated mammographic texture analysis for risk prediction: Results from two case-control studies. Breast Cancer Res. 2017, 19, 114. [Google Scholar] [CrossRef] [Green Version]
  81. Bajaj, V.; Pawar, M.; Meena, V.K.; Kumar, M.; Sengur, A.; Guo, Y. Computer-aided diagnosis of breast cancer using bi-dimensional empirical mode decomposition. Neural Comput. Appl. 2019, 31, 3307–3315. [Google Scholar] [CrossRef]
  82. Li, Z.; Yu, L.; Wang, X.; Yu, H.; Gao, Y.; Ren, Y.; Wang, G.; Zhou, X. Diagnostic Performance of Mammographic Texture Analysis in the Differential Diagnosis of Benign and Malignant Breast Tumors. Clin. Breast Cancer 2018, 18, e621–e627. [Google Scholar] [CrossRef]
  83. Huang, Q.; Huang, Y.; Luo, Y.; Yuan, F.; Li, X. Segmentation of breast ultrasound image with semantic classification of superpixels. Med. Image Anal. 2020, 61, 101657. [Google Scholar] [CrossRef]
  84. Bressan, R.S.; Bugatti, P.H.; Saito, P.T. Breast cancer diagnosis through active learning in content-based image retrieval. Neurocomputing 2019, 357, 1–10. [Google Scholar] [CrossRef]
  85. Suradi, S.H.; Abdullah, K.A.; Isa, N.A.M. Improvement of image enhancement for mammogram images using Fuzzy Anisotropic Diffusion Histogram Equalisation Contrast Adaptive Limited (FADHECAL). Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2022, 10, 67–75. [Google Scholar] [CrossRef]
  86. Al-Antari, M.A.; Al-Masni, M.; Park, S.-U.; Park, J.; Metwally, M.K.; Kadah, Y.M.; Han, S.-M.; Kim, T.-S. An Automatic Computer-Aided Diagnosis System for Breast Cancer in Digital Mammograms via Deep Belief Network. J. Med. Biol. Eng. 2018, 38, 443–456. [Google Scholar] [CrossRef]
  87. Zhang, Q.; Peng, Y.; Liu, W.; Bai, J.; Zheng, J.; Yang, X.; Zhou, L. Radiomics Based on Multimodal MRI for the Differential Diagnosis of Benign and Malignant Breast Lesions. J. Magn. Reson. Imaging 2020, 52, 596–607. [Google Scholar] [CrossRef]
  88. Dhouibi, M.; Ben Salem, A.K.; Saidi, A.; Ben Saoud, S. Accelerating Deep Neural Networks implementation: A survey. IET Comput. Digit. Tech. 2021, 15, 79–96. [Google Scholar] [CrossRef]
  89. Xu, Y.; Wang, Y.; Yuan, J.; Cheng, Q.; Wang, X.; Carson, P.L. Medical breast ultrasound image segmentation by machine learning. Ultrasonics 2019, 91, 1–9. [Google Scholar] [CrossRef] [PubMed]
  90. Arora, R.; Rai, P.K.; Raman, B. Deep feature–based automatic classification of mammograms. Med. Biol. Eng. Comput. 2020, 58, 1199–1211. [Google Scholar] [CrossRef] [PubMed]
  91. Gao, F.; Wu, T.; Li, J.; Zheng, B.; Ruan, L.; Shang, D.; Patel, B. SD-CNN: A shallow-deep CNN for improved breast cancer diagnosis. Comput. Med. Imaging Graph. 2018, 70, 53–62. [Google Scholar] [CrossRef] [Green Version]
  92. Romeo, V.; Clauser, P.; Rasul, S.; Kapetas, P.; Gibbs, P.; Baltzer, P.A.T.; Hacker, M.; Woitek, R.; Helbich, T.H.; Pinker, K. AI-enhanced simultaneous multiparametric 18F-FDG PET/MRI for accurate breast cancer diagnosis. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 596–608. [Google Scholar] [CrossRef]
  93. Tsochatzidis, L.; Koutla, P.; Costaridou, L.; Pratikakis, I. Integrating segmentation information into CNN for breast cancer diagnosis of mammographic masses. Comput. Methods Programs Biomed. 2021, 200, 105913. [Google Scholar] [CrossRef]
  94. Toğaçar, M.; Ergen, B.; Cömert, Z. Application of breast cancer diagnosis based on a combination of convolutional neural networks, ridge regression and linear discriminant analysis using invasive breast cancer images processed with autoencoders. Med. Hypotheses 2020, 135, 109503. [Google Scholar] [CrossRef]
  95. Singh, V.K.; Rashwan, H.A.; Romani, S.; Akram, F.; Pandey, N.; Sarker, M.K.; Saleh, A.; Arenas, M.; Arquez, M.; Puig, D.; et al. Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network. Expert Syst. Appl. 2020, 139, 112855. [Google Scholar] [CrossRef]
  96. Khan, S.; Islam, N.; Jan, Z.; Din, I.U.; Rodrigues, J.J.P.C. A novel deep learning based framework for the detection and classification of breast cancer using transfer learning. Pattern Recognit. Lett. 2019, 125, 1–6. [Google Scholar] [CrossRef]
  97. Li, J.-B.; Yu, Y.; Yang, Z.-M.; Tang, L.-L. Breast Tissue Image Classification Based on Semi-supervised Locality Discriminant Projection with Kernels. J. Med. Syst. 2012, 36, 2779–2786. [Google Scholar] [CrossRef] [PubMed]
  98. Algehyne, E.A.; Jibril, M.L.; Algehainy, N.A.; Alamri, O.A.; Alzahrani, A.K. Fuzzy Neural Network Expert System with an Improved Gini Index Random Forest-Based Feature Importance Measure Algorithm for Early Diagnosis of Breast Cancer in Saudi Arabia. Big Data Cogn. Comput. 2022, 6, 13. [Google Scholar] [CrossRef]
  99. Akhbardeh, A.; Jacobs, M.A. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation. Med. Phys. 2012, 39, 2275–2289. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  100. Ragab, M.; Albukhari, A.; Alyami, J.; Mansour, R.F. Ensemble Deep-Learning-Enabled Clinical Decision Support System for Breast Cancer Diagnosis and Classification on Ultrasound Images. Biology 2022, 11, 439. [Google Scholar] [CrossRef]
  101. Jabeen, K.; Khan, M.A.; Alhaisoni, M.; Tariq, U.; Zhang, Y.-D.; Hamza, A.; Mickus, A.; Damaševičius, R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. Sensors 2022, 22, 807. [Google Scholar] [CrossRef]
  102. Bacha, S.; Taouali, O. A novel machine learning approach for breast cancer diagnosis. Measurement 2021, 187, 110233. [Google Scholar] [CrossRef]
  103. Mert, A.; Kılıç, N.; Akan, A. An improved hybrid feature reduction for increased breast cancer diagnostic performance. Biomed. Eng. Lett. 2015, 4, 285–291. [Google Scholar] [CrossRef]
  104. Zheng, B.; Yoon, S.W.; Lam, S.S. Breast cancer diagnosis based on feature extraction using a hybrid of K-means and support vector machine algorithms. Expert Syst. Appl. 2014, 41, 1476–1482. [Google Scholar] [CrossRef]
  105. Sun, W.; Tseng, T.-L.; Zhang, J.; Qian, W. Enhancing deep convolutional neural network scheme for breast cancer diagnosis with unlabeled data. Comput. Med. Imaging Graph. 2017, 57, 4–9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  106. Sharif, M.I.; Li, J.P.; Naz, J.; Rashid, I. A comprehensive review on multi-organs tumor detection based on machine learning. Pattern Recognit. Lett. 2020, 131, 30–37. [Google Scholar] [CrossRef]
  107. Shehab, M.; Abualigah, L.; Shambour, Q.; Abu-Hashem, M.A.; Shambour, M.K.Y.; Alsalibi, A.I.; Gandomi, A.H. Machine learning in medical applications: A review of state-of-the-art methods. Comput. Biol. Med. 2022, 145, 105458. [Google Scholar] [CrossRef] [PubMed]
  108. Hicks, S.A.; Strümke, I.; Thambawita, V.; Hammou, M.; Riegler, M.A.; Halvorsen, P.; Parasa, S. On evaluation metrics for medical applications of artificial intelligence. Sci. Rep. 2022, 12, 5979. [Google Scholar] [CrossRef]
  109. van Engelen, J.E.; Hoos, H.H. A survey on semi-supervised learning. Mach. Learn. 2019, 109, 373–440. [Google Scholar] [CrossRef] [Green Version]
  110. Dubey, A.K.; Gupta, U.; Jain, S. Analysis of k-means clustering approach on the breast cancer Wisconsin dataset. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 2033–2047. [Google Scholar] [CrossRef] [PubMed]
  111. Hernández-Capistrán, J.; Martínez-Carballido, J.F.; Rosas-Romero, R. False Positive Reduction by an Annular Model as a Set of Few Features for Microcalcification Detection to Assist Early Diagnosis of Breast Cancer. J. Med. Syst. 2018, 42, 134. [Google Scholar] [CrossRef]
  112. Onan, A. A fuzzy-rough nearest neighbor classifier combined with consistency-based subset evaluation and instance selection for automated diagnosis of breast cancer. Expert Syst. Appl. 2015, 42, 6844–6852. [Google Scholar] [CrossRef]
  113. Hosseinpour, M.; Ghaemi, S.; Khanmohammadi, S.; Daneshvar, S. A hybrid high-order type-2 FCM improved random forest classification method for breast cancer risk assessment. Appl. Math. Comput. 2022, 424. [Google Scholar] [CrossRef]
  114. Sadad, T.; Munir, A.; Saba, T.; Hussain, A. Fuzzy C-means and region growing based classification of tumor from mammograms using hybrid texture feature. J. Comput. Sci. 2018, 29, 34–45. [Google Scholar] [CrossRef]
  115. Saberi, H.; Rahai, A.; Hatami, F. A fast and efficient clustering based fuzzy time series algorithm (FEFTS) for regression and classification. Appl. Soft Comput. 2017, 61, 1088–1097. [Google Scholar] [CrossRef]
  116. Thani, I.; Kasbe, T. Expert system based on fuzzy rules for diagnosing breast cancer. Health Technol. 2022, 12, 473–489. [Google Scholar] [CrossRef]
  117. Nguyen, T.-L.; Kavuri, S.; Park, S.-Y.; Lee, M. Attentive Hierarchical ANFIS with interpretability for cancer diagnostic. Expert Syst. Appl. 2022, 201, 117099. [Google Scholar] [CrossRef]
  118. Zhang, Q.; Xiao, Y.; Suo, J.; Shi, J.; Yu, J.; Guo, Y.; Wang, Y.; Zheng, H. Sonoelastomics for Breast Tumor Classification: A Radiomics Approach with Clustering-Based Feature Selection on Sonoelastography. Ultrasound Med. Biol. 2017, 43, 1058–1069. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  119. Indra, P.; Manikandan, M. Multilevel Tetrolet transform based breast cancer classifier and diagnosis system for healthcare applications. J. Ambient Intell. Humaniz. Comput. 2020, 12, 3969–3978. [Google Scholar] [CrossRef]
  120. Shan, J.; Alam, S.K.; Garra, B.; Zhang, Y.; Ahmed, T. Computer-Aided Diagnosis for Breast Ultrasound Using Computerized BI-RADS Features and Machine Learning Methods. Ultrasound Med. Biol. 2016, 42, 980–988. [Google Scholar] [CrossRef]
  121. Abdel-Nasser, M.; Melendez, J.; Moreno, A.; Omer, O.A.; Puig, D. Breast tumor classification in ultrasound images using texture analysis and super-resolution methods. Eng. Appl. Artif. Intell. 2017, 59, 84–92. [Google Scholar] [CrossRef]
  122. Muramatsu, C.; Hara, T.; Endo, T.; Fujita, H. Breast mass classification on mammograms using radial local ternary patterns. Comput. Biol. Med. 2016, 72, 43–53. [Google Scholar] [CrossRef]
  123. Alam, Z.; Rahman, M.S. A Random Forest based predictor for medical data classification using feature ranking. Inform. Med. Unlocked 2019, 15, 100180. [Google Scholar] [CrossRef]
  124. Wu, J.-X.; Chen, P.-Y.; Lin, C.-H.; Chen, S.; Shung, K.K. Breast Benign and Malignant Tumors Rapidly Screening by ARFI-VTI Elastography and Random Decision Forests Based Classifier. IEEE Access 2020, 8, 54019–54034. [Google Scholar] [CrossRef]
  125. Lu, W.; Li, Z.; Chu, J. A novel computer-aided diagnosis system for breast MRI based on feature selection and ensemble learning. Comput. Biol. Med. 2017, 83, 157–165. [Google Scholar] [CrossRef] [PubMed]
  126. Huang, Q.; Chen, Y.; Liu, L.; Tao, D.; Li, X. On Combining Biclustering Mining and AdaBoost for Breast Tumor Classification. IEEE Trans. Knowl. Data Eng. 2019, 32, 728–738. [Google Scholar] [CrossRef]
  127. Vamvakas, A.; Tsivaka, D.; Logothetis, A.; Vassiou, K.; Tsougos, I. Breast Cancer Classification on Multiparametric MRI—Increased Performance of Boosting Ensemble Methods. Technol. Cancer Res. Treat. 2022, 21. [Google Scholar] [CrossRef]
  128. Sharma, S.; Khanna, P. Computer-Aided Diagnosis of Malignant Mammograms using Zernike Moments and SVM. J. Digit. Imaging 2015, 28, 77–90. [Google Scholar] [CrossRef]
  129. Agossou, C.; Atchadé, M.N.; Djibril, A.M.; Kurisheva, S.V. Support Vector Machine, Naive Bayes Classification, and Mathematical Modeling for Public Health Decision-Making: A Case Study of Breast Cancer in Benin. SN Comput. Sci. 2022, 3, 1–19. [Google Scholar] [CrossRef]
  130. Alshutbi, M.; Li, Z.; Alrifaey, M.; Ahmadipour, M.; Murtadha Othman, M. A hybrid classifier based on support vector machine and Jaya algorithm for breast cancer classification. Neural Comput. Appl. 2022, 1–13. [Google Scholar] [CrossRef]
  131. Samma, H.; Lahasan, B. Optimized Two-Stage Ensemble Model for Mammography Mass Recognition. IRBM 2020, 41, 195–204. [Google Scholar] [CrossRef]
  132. Wu, W.; Li, B.; Mercan, E.; Mehta, S.; Bartlett, J.; Weaver, D.L.; Elmore, J.G.; Shapiro, L.G. MLCD: A Unified Software Package for Cancer Diagnosis. JCO Clin. Cancer Inform. 2020, 4, 290–298. [Google Scholar] [CrossRef]
  133. Badr, E.; Almotairi, S.; Salam, M.A.; Ahmed, H. New Sequential and Parallel Support Vector Machine with Grey Wolf Optimizer for Breast Cancer Diagnosis. Alex. Eng. J. 2022, 61, 2520–2534. [Google Scholar] [CrossRef]
  134. Mendelson, E.B. Artificial Intelligence in Breast Imaging: Potentials and Limitations. Am. J. Roentgenol. 2019, 212, 293–299. [Google Scholar] [CrossRef]
  135. Amato, F.; López, A.; Mendez, E.P.; Vanhara, P.; Hampl, A.; Havel, J. Artificial neural networks in medical diagnosis. J. Appl. Biomed. 2013, 11, 47–58. [Google Scholar] [CrossRef]
  136. Beura, S.; Majhi, B.; Dash, R. Mammogram classification using two dimensional discrete wavelet transform and gray-level co-occurrence matrix for detection of breast cancer. Neurocomputing 2015, 154, 1–14. [Google Scholar] [CrossRef]
  137. Mohammed, M.A.; Al-Khateeb, B.; Rashid, A.N.; Ibrahim, D.A.; Ghani, M.K.A.; Mostafa, S.A. Neural network and multi-fractal dimension features for breast cancer classification from ultrasound images. Comput. Electr. Eng. 2018, 70, 871–882. [Google Scholar] [CrossRef]
  138. Gallego-Ortiz, C.; Martel, A.L. A graph-based lesion characterization and deep embedding approach for improved computer-aided diagnosis of nonmass breast MRI lesions. Med. Image Anal. 2019, 51, 116–124. [Google Scholar] [CrossRef] [PubMed]
  139. Danala, G.; Patel, B.; Aghaei, F.; Heidari, M.; Li, J.; Wu, T.; Zheng, B. Classification of Breast Masses Using a Computer-Aided Diagnosis Scheme of Contrast Enhanced Digital Mammograms. Ann. Biomed. Eng. 2018, 46, 1419–1431. [Google Scholar] [CrossRef] [PubMed]
  140. Punitha, S.; Amuthan, A.; Joseph, K.S. Enhanced Monarchy Butterfly Optimization Technique for effective breast cancer diagnosis. J. Med. Syst. 2019, 43, 206. [Google Scholar] [CrossRef]
  141. Alshayeji, M.H.; Ellethy, H.; Abed, S.; Gupta, R. Computer-aided detection of breast cancer on the Wisconsin dataset: An artificial neural networks approach. Biomed. Signal Process. Control 2022, 71, 103141. [Google Scholar] [CrossRef]
  142. Rezaeipanah, A.; Ahmadi, G. Breast Cancer Diagnosis Using Multi-Stage Weight Adjustment In The MLP Neural Network. Comput. J. 2022, 65, 788–804. [Google Scholar] [CrossRef]
  143. Ting, F.F.; Tan, Y.J.; Sim, K.S. Convolutional neural network improvement for breast cancer classification. Expert Syst. Appl. 2019, 120, 103–115. [Google Scholar] [CrossRef]
  144. Yousefi, M.; Krzyżak, A.; Suen, C.Y. Mass detection in digital breast tomosynthesis data using convolutional neural networks and multiple instance learning. Comput. Biol. Med. 2018, 96, 283–293. [Google Scholar] [CrossRef]
  145. Wu, N.; Phang, J.; Park, J.; Shen, Y.; Huang, Z.; Zorin, M.; Jastrzebski, S.; Fevry, T.; Katsnelson, J.; Kim, E.; et al. Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening. IEEE Trans. Med. Imaging 2019, 39, 1184–1194. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  146. AlBalawi, U.; Manimurugan, S.; Varatharajan, R. Classification of breast cancer mammogram images using convolution neural network. Concurr. Comput. Pract. Exp. 2022, 34, e3803. [Google Scholar] [CrossRef]
  147. Inan, M.S.K.; Alam, F.I.; Hasan, R. Deep integrated pipeline of segmentation guided classification of breast cancer from ultrasound images. Biomed. Signal Process. Control 2022, 75, 103553. [Google Scholar] [CrossRef]
  148. Feizi, A. A gated convolutional neural network for classification of breast lesions in ultrasound images. Soft Comput. 2022, 26, 5241–5250. [Google Scholar] [CrossRef]
  149. Ribli, D.; Horváth, A.; Unger, Z.; Pollner, P.; Csabai, I. Detecting and classifying lesions in mammograms with Deep Learning. Sci. Rep. 2018, 8, 4165–4167. [Google Scholar] [CrossRef] [Green Version]
  150. Liu, K.; Kang, G.; Zhang, N.; Hou, B. Breast Cancer Classification Based on Fully-Connected Layer First Convolutional Neural Networks. IEEE Access 2018, 6, 23722–23732. [Google Scholar] [CrossRef]
  151. Zhang, Y.-D.; Pan, C.; Chen, X.; Wang, F. Abnormal breast identification by nine-layer convolutional neural network with parametric rectified linear unit and rank-based stochastic pooling. J. Comput. Sci. 2018, 27, 57–68. [Google Scholar] [CrossRef]
  152. Oyetade, I.S.; Ayeni, J.O.; Ogunde, A.O.; Oguntunde, B.O.; Olowookere, T.A. Hybridized Deep Convolutional Neural Network and Fuzzy Support Vector Machines for Breast Cancer Detection. SN Comput. Sci. 2022, 3, 58. [Google Scholar] [CrossRef]
  153. Takahashi, K.; Fujioka, T.; Oyama, J.; Mori, M.; Yamaga, E.; Yashima, Y.; Imokawa, T.; Hayashi, A.; Kujiraoka, Y.; Tsuchiya, J.; et al. Deep Learning Using Multiple Degrees of Maximum-Intensity Projection for PET/CT Image Classification in Breast Cancer. Tomography 2022, 8, 131–141. [Google Scholar] [CrossRef]
  154. Muduli, D.; Dash, R.; Majhi, B. Automated diagnosis of breast cancer using multi-modal datasets: A deep convolution neural network based approach. Biomed. Signal Process. Control 2021, 71, 102825. [Google Scholar] [CrossRef]
  155. Ayana, G.; Park, J.; Jeong, J.-W.; Choe, S.-W. A Novel Multistage Transfer Learning for Ultrasound Breast Cancer Image Classification. Diagnostics 2022, 12, 135. [Google Scholar] [CrossRef] [PubMed]
  156. Dey, S.; Roychoudhury, R.; Malakar, S.; Sarkar, R. Screening of breast cancer from thermogram images by edge detection aided deep transfer learning model. Multimed. Tools Appl. 2022, 81, 9331–9349. [Google Scholar] [CrossRef] [PubMed]
  157. Ring, E. The historical development of temperature measurement in medicine. Infrared Phys. Technol. 2007, 49, 297–301. [Google Scholar] [CrossRef]
  158. Ng, E.Y.K.; Kee, E.C. Advanced integrated technique in breast cancer thermography. J. Med. Eng. Technol. 2008, 32, 103–114. [Google Scholar] [CrossRef] [PubMed]
  159. Lahiri, B.; Bagavathiappan, S.; Jayakumar, T.; Philip, J. Medical applications of infrared thermography: A review. Infrared Phys. Technol. 2012, 55, 221–235. [Google Scholar] [CrossRef]
  160. Singh, D.; Singh, A.K. Role of image thermography in early breast cancer detection- Past, present and future. Comput. Methods Programs Biomed. 2020, 183, 105074. [Google Scholar] [CrossRef]
  161. Baic, A.; Plaza, D.; Lange, B.; Michalecki, Ł.; Stanek, A.; Kowalczyk, A.; Ślosarek, K.; Cholewka, A. Long-Term Skin Temperature Changes after Breast Cancer Radiotherapy. Int. J. Environ. Res. Public Health 2022, 19, 6891. [Google Scholar] [CrossRef]
  162. Fernández-Cuevas, I.; Marins, J.C.B.; Lastras, J.A.; Carmona, P.M.G.; Cano, S.P.; García-Concepción, M.Á.; Sillero-Quintana, M. Classification of factors influencing the use of infrared thermography in humans: A review. Infrared Phys. Technol. 2015, 71, 28–55. [Google Scholar] [CrossRef]
  163. Ioannou, S.; Gallese, V.; Merla, A. Thermal infrared imaging in psychophysiology: Potentialities and limits. Psychophysiology 2014, 51, 951–963. [Google Scholar] [CrossRef] [Green Version]
  164. Bernard, V.; Staffa, E.; Mornstein, V.; Bourek, A. Infrared camera assessment of skin surface temperature—Effect of emissivity. Phys. Med. 2013, 29, 583–591. [Google Scholar] [CrossRef] [Green Version]
  165. Ekici, S.; Jawzal, H. Breast cancer diagnosis using thermography and convolutional neural networks. Med. Hypotheses 2019, 137, 109542. [Google Scholar] [CrossRef]
  166. AlFayez, F.; El-Soud, M.W.A.; Gaber, T. Thermogram Breast Cancer Detection: A Comparative Study of Two Machine Learning Techniques. Appl. Sci. 2020, 10, 551. [Google Scholar] [CrossRef] [Green Version]
  167. Gogoi, U.R.; Majumdar, G.; Bhowmik, M.K.; Ghosh, A.K. Evaluating the efficiency of infrared breast thermography for early breast cancer risk prediction in asymptomatic population. Infrared Phys. Technol. 2019, 99, 201–211. [Google Scholar] [CrossRef]
  168. Saxena, A.; Ng, E.; Raman, V.; Hamli, M.S.B.M.; Moderhak, M.; Kolacz, S.; Jankau, J. Infrared (IR) thermography-based quantitative parameters to predict the risk of post-operative cancerous breast resection flap necrosis. Infrared Phys. Technol. 2019, 103, 103063. [Google Scholar] [CrossRef]
  169. Tello-Mijares, S.; Woo, F.; Flores, F. Breast Cancer Identification via Thermography Image Segmentation with a Gradient Vector Flow and a Convolutional Neural Network. J. Healthc. Eng. 2019, 2019, 1–13. [Google Scholar] [CrossRef] [Green Version]
  170. Garduño-Ramón, M.A.; Vega-Mancilla, S.G.; Morales-Henández, L.A.; Osornio-Rios, R.A. Supportive Noninvasive Tool for the Diagnosis of Breast Cancer Using a Thermographic Camera as Sensor. Sensors 2017, 17, 497. [Google Scholar] [CrossRef] [Green Version]
  171. Raghavendra, U.; Acharya, U.R.; Ng, E.Y.K.; Tan, J.-H.; Gudigar, A. An integrated index for breast cancer identification using histogram of oriented gradient and kernel locality preserving projection features extracted from thermograms. Quant. Infrared Thermogr. J. 2016, 13, 195–209. [Google Scholar] [CrossRef]
  172. Lashkari, A.; Pak, F.; Firouzmand, M. Full Intelligent Cancer Classification of Thermal Breast Images to Assist Physician in Clinical Diagnostic Applications. J. Med. Signals Sens. 2016, 6, 12–24. [Google Scholar] [CrossRef]
  173. Francis, S.V.; Sasikala, M.; Saranya, S. Detection of Breast Abnormality from Thermograms Using Curvelet Transform Based Feature Extraction. J. Med. Syst. 2014, 38, 1–9. [Google Scholar] [CrossRef]
  174. Milosevic, M.; Jankovic, D.; Peulic, A. Thermography based breast cancer detection using texture features and minimum variance quantization. EXCLI J. 2014, 13, 1204. [Google Scholar] [CrossRef]
  175. Araújo, M.C.; Lima, R.C.; de Souza, R.M. Interval symbolic feature extraction for thermography breast cancer detection. Expert Syst. Appl. 2014, 41, 6728–6737. [Google Scholar] [CrossRef]
  176. Gonzalez-Hernandez, J.-L.; Recinella, A.N.; Kandlikar, S.G.; Dabydeen, D.; Medeiros, L.; Phatak, P. Technology, application and potential of dynamic breast thermography for the detection of breast cancer. Int. J. Heat Mass Transf. 2018, 131, 558–573. [Google Scholar] [CrossRef]
  177. Silva, L.F.; Santos, A.A.S.; Bravo, R.S.; Silva, A.C.; Muchaluat-Saade, D.C.; Conci, A. Hybrid analysis for indicating patients with breast cancer using temperature time series. Comput. Methods Programs Biomed. 2016, 130, 142–153. [Google Scholar] [CrossRef]
  178. Saniei, E.; Setayeshi, S.; Akbari, M.E.; Navid, M. A vascular network matching in dynamic thermography for breast cancer detection. Quant. Infrared Thermogr. J. 2015, 12, 1–13. [Google Scholar] [CrossRef]
  179. Kumar, A.; Bi, L.; Kim, J.; Feng, D.D. Machine learning in medical imaging. In Biomedical Information Technology; Feng, D.D., Ed.; Academic Press: Cambridge, MA, USA, 2020; pp. 167–196. [Google Scholar] [CrossRef]
  180. Nayak, D.R.; Dash, R.; Majhi, B.; Pachori, R.B.; Zhang, Y. A deep stacked random vector functional link network autoencoder for diagnosis of brain abnormalities and breast cancer. Biomed. Signal Process. Control 2020, 58, 101860. [Google Scholar] [CrossRef]
  181. Kadam, V.J.; Jadhav, S.M.; Vijayakumar, K. Breast Cancer Diagnosis Using Feature Ensemble Learning Based on Stacked Sparse Autoencoders and Softmax Regression. J. Med. Syst. 2019, 43, 263. [Google Scholar] [CrossRef]
  182. Zhang, E.; Seiler, S.; Chen, M.; Lu, W.; Gu, X. BIRADS features-oriented semi-supervised deep learning for breast ultrasound computer-aided diagnosis. Phys. Med. Biol. 2020, 65, 125005. [Google Scholar] [CrossRef] [Green Version]
  183. Zhang, H.; Guo, W.; Zhang, S.; Lu, H.; Zhao, X. Unsupervised Deep Anomaly Detection for Medical Images Using an Improved Adversarial Autoencoder. J. Digit. Imaging 2022, 35, 153–161. [Google Scholar] [CrossRef]
  184. Movahedi, F.; Coyle, J.L.; Sejdic, E. Deep Belief Networks for Electroencephalography: A Review of Recent Contributions and Future Outlooks. IEEE J. Biomed. Health Inform. 2017, 22, 642–652. [Google Scholar] [CrossRef]
  185. Le Roux, N.; Bengio, Y. Representational Power of Restricted Boltzmann Machines and Deep Belief Networks. Neural Comput. 2007, 20, 1631–1649. [Google Scholar] [CrossRef]
  186. Ahmad, M.; Ai, D.; Xie, G.; Qadri, S.F.; Song, H.; Huang, Y.; Wang, Y.; Yang, J. Deep Belief Network Modeling for Automatic Liver Segmentation. IEEE Access 2019, 7, 20585–20595. [Google Scholar] [CrossRef]
  187. Kaur, M.; Singh, D. Fusion of medical images using deep belief networks. Clust. Comput. 2019, 23, 1439–1453. [Google Scholar] [CrossRef]
  188. Zhao, Z.; Zhao, J.; Song, K.; Hussain, A.; Du, Q.; Dong, Y.; Liu, J.; Yang, X. Joint DBN and Fuzzy C-Means unsupervised deep clustering for lung cancer patient stratification. Eng. Appl. Artif. Intell. 2020, 91, 103571. [Google Scholar] [CrossRef]
  189. Rasmus, A.; Berglund, M.; Honkala, M.; Valpola, H.; Raiko, T. Semi-supervised learning with ladder networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems—Volume 2 (NIPS’15); MIT Press: Cambridge, MA, USA, 2015; pp. 3546–3554. [Google Scholar]
  190. Zahoor, S.; Shoaib, U.; Lali, I.U. Breast Cancer Mammograms Classification Using Deep Neural Network and Entropy-Controlled Whale Optimization Algorithm. Diagnostics 2022, 12, 557. [Google Scholar] [CrossRef]
  191. Jagtap, A.D.; Kawaguchi, K.; Karniadakis, G.E. Adaptive activation functions accelerate convergence in deep and physics-informed neural networks. J. Comput. Phys. 2020, 404, 109136. [Google Scholar] [CrossRef] [Green Version]
  192. Jagtap, A.D.; Shin, Y.; Kawaguchi, K.; Karniadakis, G.E. Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions. Neurocomputing 2022, 468, 165–180. [Google Scholar] [CrossRef]
  193. Zhang, J.; Wang, G.; Ren, J.; Yang, Z.; Li, D.; Cui, Y.; Yang, X. Multiparametric MRI-based radiomics nomogram for preoperative prediction of lymphovascular invasion and clinical outcomes in patients with breast invasive ductal carcinoma. Eur. Radiol. 2022, 32, 4079–4089. [Google Scholar] [CrossRef] [PubMed]
  194. Schaffter, T.; Buist, D.S.M.; Lee, C.I.; Nikulin, Y.; Ribli, D.; Guan, Y.; Lotter, W.; Jie, Z.; Du, H.; Wang, S.; et al. Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms. JAMA Netw. Open 2020, 3, e200265. [Google Scholar] [CrossRef] [Green Version]
  195. Grimm, L.J.; Mazurowski, M.A. Breast Cancer Radiogenomics: Current Status and Future Directions. Acad. Radiol. 2020, 27, 39–46. [Google Scholar] [CrossRef] [Green Version]
Figure 1. BC detection using image processing strategies.
Figure 2. Mammography procedure.
Figure 3. Ultrasound procedure.
Figure 4. BMRI procedure.
Figure 5. Artificial Neural Network.
Figure 6. Convolutional Neural Network.
Figure 7. Electromagnetic spectrum.
Figure 8. Proposed experimental setup for the breast thermal image acquisition.
Figure 9. Autoencoder structure.
Table 1. Summary of the breast image generation techniques reviewed.

Mammography
Advantages: (1) the equipment is widely available worldwide; (2) methods such as tomosynthesis can improve the specificity and sensitivity of the technique in patients with dense breasts [10].
Disadvantages: (1) the false-positive and false-negative rates increase, since the technique cannot determine whether a mass is benign; (2) the image acquisition procedure can be uncomfortable; (3) it is not indicated for young patients or patients with dense breasts.
Recommended population: women older than 40 years with low-density breasts and an average risk of developing the disease.
Types of cancer detected: ductal carcinoma in situ; invasive breast cancer.
Sensitivity and/or specificity: sensitivity up to 85%.

Ultrasound
Advantages: (1) it can be used in young patients or patients with dense breasts; (2) the equipment is available in most hospitals.
Disadvantages: (1) calcifications may not be detected; (2) the sensitivity depends on the operator's ability to interpret the images; (3) the false-positive rate is an issue.
Recommended population: women with heterogeneously or extremely dense breast tissue [38,39]; women who are pregnant or lactating [40].
Types of cancer detected: ductal carcinoma in situ; invasive ductal carcinoma.
Sensitivity and/or specificity: sensitivity between 40 and 75% in younger high-risk women [40].

Magnetic Resonance Imaging
Advantages: (1) effective for detecting suspicious masses in high-risk populations [10]; (2) breast tissue density is no longer an issue [38,39,40]; (3) multifocal lesions can be detected [10,41].
Disadvantages: (1) the equipment is only available in specialized hospitals; (2) it is expensive; (3) false-positive findings are an important concern [41].
Recommended population: (1) women who may carry mutations in the ATM, BRCA1, BRCA2, CHEK2, PALB2, PTEN, or TP53 genes; (2) women who received radiation therapy to the chest during childhood.
Types of cancer detected: ductal carcinoma in situ; invasive ductal carcinoma; invasive lobular carcinoma; invasive mammary carcinoma with mixed ductal and lobular features [24].
Sensitivity and/or specificity: sensitivity ranging from 83 to 100% [42,43,44].
Table 2. Summary of the image classification algorithms reviewed.

Unsupervised classifiers

K-means
Advantages: easy and fast to implement; low computational cost (only the distances to the centroids need to be computed).
Disadvantages: the initial centroid values influence the performance; the samples must be presented in an organized and normalized way; the distance between class centroids might induce misclassifications.
Number of images: some works have used the Wisconsin Breast Cancer Dataset with 569 instances [110,112].
Performance metrics: accuracy up to 92% [110]; specificity up to 99% [112]; sensitivity up to 100% [112].

Hierarchical Clustering
Advantages: no distance measurement is required; similarity measures can be employed; easy to implement.
Disadvantages: large datasets increase the time required to deliver a result; outliers degrade the classifier performance; normalization of the sample values is required.
Number of images: 117 images are analyzed [118].
Performance metrics: accuracy 88.0%; specificity 89.3%; sensitivity 85.7% [118].

Supervised classifiers

Decision Trees
Advantages: their construction does not impose any probabilistic distribution on the data; they can deal with large datasets; easy to understand.
Disadvantages: they can become too complex if the training data are not carefully chosen; their performance decreases when several classes exist in the data.
Number of images: some works have analyzed from 283 [120] to 722 images [71].
Performance metrics: accuracy up to 89%; specificity up to 89%; sensitivity up to 90% [71,120].

Random Forest
Advantages: non-linear relationships between the features are handled well; outliers do not degrade the classifier performance; noisy measurements do not affect the accuracy.
Disadvantages: the training time increases with the number of trees generated; the classifier complexity increases with the number of trees that must be evaluated.
Number of images: several works have used different numbers of images, from 59 [121] and 283 [120] to 512 [122]; other authors have used ten different datasets, the smallest with 155 images and the largest with 569 images [123].
Performance metrics: accuracy up to 80%; specificity up to 80%; sensitivity up to 90% [120,121,122,123].

AdaBoost
Advantages: the base classifiers only need an accuracy greater than 50%; they can come from different domains (spatial, frequency, among others).
Disadvantages: noise can degrade the classifier performance, as the weight assigned to each weak classifier is increased to reduce the error; sensitive to the base classifiers employed.
Number of images: some works have used from 1062 [126] to 2336 [125] images.
Performance metrics: accuracy up to 90%; specificity up to 90%; sensitivity up to 90% [12,125,126].

Support Vector Machines
Advantages: can deal with high-dimensional data (features); robust against outliers; overfitting is reduced by the training process.
Disadvantages: the accuracy is kernel dependent; large datasets are not handled properly; overlapping classes and noise degrade the accuracy; uncertainty cannot be incorporated.
Number of images: some authors have used different numbers of images, from 207 [87] and 240 [132] to 1187 [131].
Performance metrics: accuracy up to 90%; specificity up to 90%; sensitivity up to 90% [74,87,131,132].

Artificial Neural Networks
Advantages: can deal with highly non-linear relationships; can deal with noisy data; uncertainty can be incorporated; fine-tuning can be performed using different activation functions.
Disadvantages: high-dimensional data might destabilize the training algorithms; prone to overfitting; selecting the number of neurons can be troublesome.
Number of images: other authors have used 111 [139], 184 [137], and 569 [140] images.
Performance metrics: accuracy up to 95%; sensitivity up to 100%; specificity up to 90% [134,137,138,139,140].

Convolutional Neural Networks
Advantages: can process the image without any preprocessing stage; can perform the feature extraction task automatically; moderately noisy images can be handled properly.
Disadvantages: they require a large dataset to avoid overfitting; their training requires a high computational load.
Number of images: some authors have used different numbers of images, from 87 [144] and 221 [143] to 229,426 digital screening mammography exams [145].
Performance metrics: accuracy up to 99%; sensitivity up to 99%; specificity up to 99.6% [7,8,143,144,145,149].
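To make the comparison in Table 2 concrete, the following minimal sketch (not taken from any of the works cited above) trains several of the supervised classifiers on the scikit-learn copy of the Wisconsin Breast Cancer dataset (569 instances) and reports the accuracy, sensitivity, and specificity used throughout the table. The dataset choice, hyperparameters, and 70/30 split are illustrative assumptions, not the settings of the reviewed studies.

```python
# Illustrative sketch only: hyperparameters and the train/test split are arbitrary
# choices, not the configurations reported in the works cited in Table 2.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

data = load_breast_cancer()
X = data.data
y = (data.target == 0).astype(int)  # relabel so that 1 = malignant (positive class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

classifiers = {
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
    "ANN (MLP)": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    sensitivity = tp / (tp + fn)  # proportion of malignant cases correctly detected
    specificity = tn / (tn + fp)  # proportion of benign cases correctly rejected
    print(f"{name}: accuracy={accuracy_score(y_test, y_pred):.3f}, "
          f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```

The resulting figures depend on the dataset and the split and are therefore not directly comparable with the literature values summarized in Table 2; the sketch only illustrates how the three metrics are obtained from a confusion matrix for each classifier family.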
Table 3. Summary of breast lesion detection studies using infrared thermography.

Ekici and Jawzal [165]: 140 patients; FLIR SC-620; features: bio-data, image analysis, and image statistics; classification: CNNs optimized by the Bayes algorithm; accuracy: 98.95%; room temperature: 17–24 °C; acclimation time: 15 min.

AlFayez et al. [166]: public DMR-IR dataset; features: geometrical and textural features; classification: Extreme Learning Machine (ELM) and Multilayer Perceptron (MLP); accuracy: ELM 100%, MLP 82.2%; room temperature and acclimation time: as given in the public DMR-IR dataset.

Rani et al. [167]: 60 patients; FLIR T650SC; features: temperature and intensity; classification: SVM with radial basis function kernel; accuracy: 83.22%; room temperature: 20–24 °C; acclimation time: 15 min.

Saxena et al. [168]: 32 patients; FLIR A320; features: ROI thermal parameters; classification: cut-off value; accuracy: 88%; room temperature: 22 ± 0.5 °C; acclimation time: not specified.

Tello-Mijares et al. [169]: 63 patients; FLIR SC-620; features: shape, colour, texture, and left–right breast relation; classification: CNN; accuracy: 100%; room temperature: 20–22 °C; acclimation time: 15 min.

Garduño-Ramón et al. [170]: 454 patients; FLIR A300; features: temperature; classification: temperature difference; accuracy: 79.60%; room temperature: 18–22 °C; acclimation time: 15 min.

Raghavendra et al. [171]: 50 patients; Thermo TVS200; features: Student's t-test based feature selection; classification: decision tree; accuracy: 98%; room temperature: 20–22 °C; acclimation time: 15 min.

Lashkari et al. [172]: 67 patients; Thermoteknix VisIR 640; features: 23 features, including statistical, morphological, frequency-domain, histogram, and gray-level co-occurrence matrix features; classification: AdaBoost, SVM, kNN, naive Bayes, PNN; accuracy: 85.33% and 87.42%; room temperature: 18–23 °C; acclimation: ice test, 20 min.

Francis et al. [173]: 22 patients; med2000™ IRIS; features: statistical and texture features extracted from thermograms in the curvelet domain; classification: SVM; accuracy: 90.91%; room temperature: 25 °C; acclimation time: 15 min.

Milosevic et al. [174]: 40 images; VARIOSCAN 3021 ST; features: texture measures derived from the gray-level co-occurrence matrix; classification: k-nearest neighbor; accuracy: 92.5%; room temperature: 20–23 °C; acclimation time: a few minutes.

Araujo et al. [175]: 50 patients; FLIR S45; features: thermal interval for each breast; classification: linear discriminant classifier, minimum distance classifier, and Parzen window; accuracy: not reported; room temperature: 24–28 °C; acclimation time: at least 10 min.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
