Review

Application of Deep Learning in Histopathology Images of Breast Cancer: A Review

1 College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
2 Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
3 Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
* Author to whom correspondence should be addressed.
Micromachines 2022, 13(12), 2197; https://doi.org/10.3390/mi13122197
Submission received: 1 November 2022 / Revised: 4 December 2022 / Accepted: 9 December 2022 / Published: 11 December 2022
(This article belongs to the Special Issue Intelligent Biomedical Devices and Systems)

Abstract: With the development of artificial intelligence technology and computing hardware, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study used statistical methods to analyze studies related to the detection, segmentation, and classification of breast cancer in pathological images. After analyzing 107 articles on the application of deep learning to pathological images of breast cancer, this study divides them into three directions based on the types of results they report: detection, segmentation, and classification. We introduce and analyze models that performed well in these three directions and summarize related work from recent years. Based on the results obtained, the significant potential of deep learning for breast cancer pathological images can be recognized. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of the development of research on breast cancer pathological imaging and offers reliable recommendations for the structure of deep learning network models in different application scenarios.

1. Introduction

Cancer is a major public health problem worldwide. Among cancer types, breast cancer (BC) is the most common cancer in women [1]. Since the late 1970s, the number of breast cancer patients worldwide has increased, and breast cancer has become one of the cancers with the highest incidence and mortality rates in the world. Based on statistics from the World Health Organization, 8.8 million people died of cancer in 2020, of which 684,996 died of breast cancer [2]. Histopathological examination is the “gold standard” for breast cancer diagnosis [3]. Pathologists can distinguish between normal tissue, non-malignant (benign) tissue, and malignant lesions by observing the microscopic structure and organization of biopsy samples in histological images.
A traditional pathological diagnosis carries high authority in medical diagnosis. The pathologist observes tissue slices through a microscope and makes the corresponding cancer diagnosis based on the tissue structure and cytopathic characteristics of the slices. The staining density and flatness of the slice, as well as the collection and storage of pathological slice images, may affect the integrity of the final pathological slice image. The inherent complexity and diversity of breast histological images make the diagnostic work of pathologists tedious and time-consuming. Additionally, differences in experience and the subjectivity of pathological diagnostic criteria often lead to inconsistent and non-reproducible diagnostic results [4]. The development of digital pathology has reduced these effects and is helpful for obtaining high-resolution images [5]. Compared to traditional pathology, digital pathology uses digital pathology systems to digitize and network pathological resources. With the application of big data technology in the medical field, data collection, visualization, long-term storage, and synchronous browsing become possible, and the processing of collected pathological resources is no longer restricted by time and space. Therefore, digital pathology has become widely used in related fields of pathology [6].
In recent years, the continuous development of artificial intelligence technology has achieved remarkable results in various fields. With the continuous improvement of medical equipment and data recording systems, the medical field now has several large-scale datasets, such as Camelyon16. As a subcategory of artificial intelligence, deep learning has also benefited greatly and achieved remarkable results [7]. Srinidhi et al. presented a comprehensive review of state-of-the-art deep learning approaches used in histopathological image analysis [8]. Wang et al. summarized recent deep learning approaches relevant to precision oncology, reviewing more than 150 articles from the past six years [9]. We searched original articles published from 2007 to 2022 using “breast cancer”, “pathology”, and “deep learning” as keywords in Web of Science and performed a statistical analysis. By analyzing the studies from 2007 to 2022, we found that articles on deep learning have gradually increased since 2007 (Figure 1A). Robertson et al. introduced the development from image processing technology to artificial intelligence in breast pathology [10]. Gao et al. introduced medical image analysis technology based on convolutional neural networks in computer-aided diagnosis (CAD) research [11]. Biswas et al. introduced the development of deep learning and certain applications in medical imaging [12]. Figure 1B summarizes the themes of review-type articles on the application of deep learning to breast cancer pathological images. Although some articles summarizing the application of deep learning to breast cancer pathology have been published [13,14,15], with the rapid development of deep learning algorithms, it is still necessary to systematically and comprehensively summarize the research results in this field.
In this review, we summarize 202 articles from Web of Science from 2007 to the present on the application of deep learning in breast cancer pathology. This review analyzes the application of deep learning methods in the detection, segmentation, and classification of breast pathological images. In addition, relevant articles are introduced and organized, and common public breast cancer pathological image datasets are summarized.
Section 2 of this paper comprehensively summarizes the public dataset related to pathological images of breast cancer. Section 3 explores the application of the deep learning method in breast cancer pathological images from the three perspectives of detection, segmentation, and classification. Section 4 summarizes the above methods and looks forward to the future development of this field.

2. Datasets

Most deep learning networks used to process pathological images are supervised algorithms. To improve network accuracy, a large amount of labeled data is required for training. The acquisition and labeling of datasets are time-consuming and complex [16]. In this section, we summarize the data and information contained in the existing public breast cancer datasets and the number and types of pathological images they contain. Table 1 summarizes the commonly used public datasets. Breast cancer histopathological image classification (BreakHis) consists of 9109 microscopic images of breast tumor tissue from 82 patients using different magnification scales (40×, 100×, 200×, and 400×). The Cancer Imaging Archive (TCIA) hosts a vast archive of medical images of cancer for public download and includes typical patient images related to common diseases, imaging modalities or types (Magnetic Resonance Imaging (MRI), Computed Tomography (CT), digital histopathology, etc.), or research priorities. The primary file format for TCIA is Digital Imaging and Communications in Medicine (DICOM). The Genomic Data Commons (GDC) is a research initiative of the National Cancer Institute (NCI). The mission of the GDC is to provide the cancer research community with a unified data repository to share data across cancer genome research to support precision medicine. The sklearn datasets are classic datasets that can be directly loaded by the machine learning library scikit-learn in Python; these include Boston house price data, Wisconsin breast cancer data, diabetes data, a handwritten digit dataset, Fisher’s iris data, and wine data. BACH (International Conference on Image Analysis and Recognition (ICIAR) 2018 Grand Challenge) refers to the ICIAR Grand Challenge on BreAst Cancer Histology images.
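Because datasets such as BreakHis contain many images per patient, splits for supervised training are usually made at the patient level so that no patient contributes images to both the training and test sets. A minimal sketch of such a split (the record format and function name are illustrative assumptions, not part of any dataset's API):

```python
import random

def patient_level_split(image_records, test_fraction=0.3, seed=0):
    """Split (patient_id, image_path) records so that all images from a
    given patient fall entirely into either the train or the test set,
    avoiding patient-level leakage between splits."""
    patients = sorted({pid for pid, _ in image_records})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_fraction))
    test_patients = set(patients[:n_test])
    train = [r for r in image_records if r[0] not in test_patients]
    test = [r for r in image_records if r[0] in test_patients]
    return train, test

# Toy example: 4 patients, 2 images each (paths are placeholders).
records = [(p, f"slide_{p}_{i}.png") for p in ["A", "B", "C", "D"] for i in (1, 2)]
train, test = patient_level_split(records, test_fraction=0.25)
```

Splitting at the image level instead would let nearly identical fields of view from one tumor appear on both sides of the split and inflate reported accuracy.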
The challenge provides a dataset consisting of H&E-stained histological microscopy images of the breast and whole-slide images. Camelyon16 refers to the dataset provided by the Camelyon16 Challenge. The goal of this challenge is to evaluate new and existing algorithms for automatically detecting metastases in whole-slide images of hematoxylin and eosin (H&E)-stained lymph node sections. The challenge data include a total of 400 whole-slide images (WSIs) of sentinel lymph nodes from two separate datasets collected at Radboud University Medical Center (Nijmegen, Netherlands) and the University Medical Center Utrecht (Netherlands). CAMELYON17 is the dataset provided by the CAMELYON17 Challenge. The goal of this challenge is to evaluate new and existing algorithms for the automatic detection and classification of breast cancer metastases in whole-slide images of histological lymph node sections. The CAMELYON17 dataset comes from five medical centers in the Netherlands. The WSIs are available as TIFF images, and lesion-level annotations are provided as XML files; 100 patients are provided for training and another 100 for testing.

3. Methodology

In 2006, the concept of deep learning again attracted the attention of researchers [17]. From 2007, deep learning began to be applied to breast pathology image data. Subsequently, through the efforts of many researchers, stochastic gradient descent (SGD), Dropout, and other network optimization strategies were successively proposed; in particular, GPU parallel computing solved the problem of the long time needed to optimize the many parameters of deep networks, setting off a worldwide upsurge in deep learning that continues to this day. Over the past 10 years, many classical deep learning architectures have been proposed, such as AlexNet [18], Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) [19], the Generative Adversarial Network (GAN) [20], and the transformer [21]. In this section, we investigate the application of deep learning to histopathological sections of breast cancer, including the most advanced and effective models available today, and we provide a summary of related work. The literature can be divided into three primary categories based on the direction of research reported in each article: breast cancer detection, breast cancer segmentation, and breast cancer classification [22].
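The SGD update mentioned above is simply theta ← theta − lr·∇f(theta), applied per mini-batch. A minimal illustration on a one-dimensional quadratic (the function and learning rate are chosen purely for demonstration):

```python
def sgd_step(params, grads, lr=0.1):
    """One stochastic-gradient-descent update: theta <- theta - lr * grad."""
    return [p - lr * g for p, g in zip(params, grads)]

# Minimize f(x) = (x - 3)^2; its gradient is 2 * (x - 3).
x = [0.0]
for _ in range(100):
    x = sgd_step(x, [2 * (x[0] - 3)], lr=0.1)
# x[0] converges toward the minimizer x = 3.
```

In deep networks the gradient comes from backpropagation over a mini-batch rather than an analytic formula, but the parameter update is the same.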
Figure 2 shows common basic neural network structures, where yellow squares represent convolution layers, orange squares represent pooling layers, purple squares represent fully connected layers, and blue squares represent deconvolution layers. Figure 2A is a simplified convolutional neural network composed of only two convolution layers; deeper convolutional neural networks can be composed by stacking the structure of Figure 2A. LeNet (Figure 2B) [23] is an early convolutional neural network proposed by Yann LeCun et al. in 1990. AlexNet (Figure 2C) is a deep convolutional neural network proposed by Hinton et al. in 2012, which won the ImageNet challenge that year. Figure 2D is a fully convolutional network (FCN), used for semantic segmentation in the early stage [24]; it was also widely used in early research on breast cancer pathological image segmentation. The appearance of U-Net [25] significantly improved the performance of FCNs in medical image segmentation tasks. The Holistically-Nested Edge Detection (HED) network [26] achieved better results than traditional edge detection algorithms in edge detection tasks, and this method has also been used to improve the performance of breast cancer segmentation tasks [27].

3.1. Detection of Breast Lesions

In medical image analysis, detection aims to locate areas of interest in tissue slices [28,29,30,31,32,33,34,35,36,37,38,39,40]. The detection system provides strong support for object segmentation, the distinction between malignant and benign tumors, and the detection of tumors or lesions. For example, nucleus and mitosis detection have important implications for cancer screening: cell spatial distribution analysis and mitotic counts provide support for differentiation. Automatic cell/nucleus detection is a prerequisite for a series of subsequent tasks, such as cell/nucleus instance segmentation, tracking, and morphological measurements [41]. In recent years, many deep learning studies have been performed in this field. Among existing deep learning detection algorithms, CNN-based networks achieve better detection accuracy than other network structures [42]. In certain areas, CNN-based networks have achieved diagnostic standards that surpass pathologists [43]. Next, we introduce some typical models that perform particularly well in accuracy and performance. Table 2 shows the application of deep learning algorithms in the histopathological detection of breast cancer, classified according to model type, with the strategies summarized.
George et al. [44] proposed a low-complexity breast cancer detection convolutional neural network (CNN) called NucDeep, which uses a low-complexity CNN for feature extraction from non-overlapping nuclear patches and converts local nuclear features into compact image-level features to improve classifier performance (Figure 3A). Chen et al. [45] proposed a novel deep cascaded neural network model (CasNN), which greatly increased the speed of mitosis detection (Figure 3B). Liu et al. [43] used InceptionV3 to automatically detect and localize cancers in high-resolution images (Figure 3C); data augmentation and balancing were applied to improve model accuracy. Bardou et al. [46] proposed a method for the automatic classification of breast cancer histological images based on a convolutional neural network (Figure 3D). Rectified linear unit (ReLU) layers follow the convolution and fully connected layers to accelerate convergence and introduce nonlinearity, and the network weights are regularized to prevent overfitting.
Some classical detection algorithms [47,48,49] from the field of natural images have also been applied to problems related to pathological images of breast cancer. Lu et al. [50] proposed a model based on the YOLOv4 architecture that can quickly and accurately locate lesion areas in high-resolution breast cancer pathological slices; its ROI recognition accuracy is 0.936 and its F1 score is 0.787, which is of great significance for improving the diagnostic efficiency and accuracy of pathologists on breast pathological images. Huang et al. [51] proposed an algorithm for nucleus detection in breast cancer pathological images based on Mask R-CNN; this method effectively combines a feature pyramid network (FPN), ResNet, and other modules to achieve more accurate detection. Harrison et al. [52] proposed a tumor detection algorithm for breast pathological images based on Faster R-CNN and found that patching the images significantly improves the sensitivity of the model, from 1% to 60%, while the performance improvement brought by stain normalization is limited. Yamaguchi et al. [53] proposed an automatic breast cancer detection algorithm based on the Single Shot MultiBox Detector (SSD) and achieved 88.3% and 90.5% diagnostic accuracy in two-class (benign or malignant) and three-class (benign, non-invasive carcinoma, or invasive carcinoma) tasks, respectively. The mitotic cell count is an important biomarker for the grading and prognosis of breast cancer and a common application of intelligent pathological analysis of breast cancer. Zorgani et al. [54] designed a method to detect mitotic cells in breast cancer based on the YOLO architecture and obtained an F1 score of 0.839 on the ICPR2012 dataset.
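The F1 scores quoted above (0.787, 0.839) are the harmonic mean of precision and recall computed from true-positive, false-positive, and false-negative detection counts. A small sketch with hypothetical counts (the numbers below are illustrative, not from any cited study):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, the metric used to
    report detection performance such as mitosis counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical mitosis-detection counts, for illustration only:
# 80 correct detections, 20 spurious ones, 15 missed mitoses.
score = f1_score(tp=80, fp=20, fn=15)
```

Because mitoses are rare relative to background tissue, F1 is preferred over raw accuracy, which would be dominated by the overwhelming number of true negatives.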
Through reading and analyzing these articles, we found that the model proposed in [44] is a low-complexity model with classification results comparable to existing technologies, using nucleus patches alone rather than random patches. The method of [45] can obtain multi-level and multi-scale features from breast cancer histopathological images, providing competitive performance in the classification of complex breast cancer histopathological images; however, the collected dataset is relatively small and contains only two types of images, and it should be extended to multiclass classification problems. The authors of [43] applied data augmentation to negative samples to address the large gap between positive and negative samples and optimized the sampling process to remove background patches. After the probability map is obtained, tumor regions are iteratively predicted from the current maximum value using non-maximum suppression. In [46], the hand-crafted-feature approach, which encodes local descriptors to construct image representations, performs worse than the CNN, and multiclass classification performs worse than binary classification. Classical object detection algorithms can also achieve remarkable results in the field of pathological images, especially the Faster R-CNN family of algorithms.
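Non-maximum suppression, as used in [43] and in the detectors above, greedily keeps the highest-scoring candidate and discards overlapping lower-scoring ones. A minimal sketch with toy boxes and scores (coordinates and thresholds are illustrative assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop candidates overlapping it,
    then repeat on the remainder (greedy NMS)."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep

# Two heavily overlapping candidates and one distant one (toy coordinates).
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
kept = non_max_suppression(boxes, scores=[0.9, 0.8, 0.7])
```

For a whole-slide probability map, the same idea applies with the map's local maxima as candidates: take the current maximum as a predicted tumor center, suppress its neighborhood, and iterate.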
From the above results, it can be seen that deep learning models can learn features directly from breast cancer pathological images, which greatly reduces manual effort and the inter-observer variability caused by manual reading. Their higher accuracy also supports the development of precision medicine.
Table 2. Summary of the application of deep learning algorithms in breast cancer histopathology for detection.
| Model | Strategy | Advantages | Publication |
| --- | --- | --- | --- |
| RNN | Development of decision support systems for pathology | RNN allows neurons in the hidden layer to communicate with each other, storing the previous output as information in the hidden layer | [55] |
| RNN | Propose a SmallMitosis framework for the detection of mitotic cells from hematoxylin and eosin (H&E)-stained breast histological images | | [56] |
| Inception | Histologic identification of tumor cells in lymph nodes | Inception increases the width of the network by pooling each layer with different convolutions to extract features from the previous layer, and adds a 1×1 convolution before the 3×3 and 5×5 convolutions, which effectively reduces parameters and computation | [57] |
| Inception | Improve the computer-aided diagnosis method based on deep learning | | [58] |
| ResNet | Detection of invasive ductal carcinoma in breast histological images and the classification of lymphoma subtypes | The main feature of ResNet is the residual block, which preserves the features from before the current layer and passes them into the subsequent layers together with the trained output | [59] |
| ResNet | Diagnose breast cancer whole-slide tissue images | | [60] |
| ResNet | Propose an automatic detection method for invasive ductal carcinoma (IDC) based on deep transfer learning | | [61] |
| ResNet | Propose Mask R-CNN, a multi-task deep learning framework for object detection and instance segmentation, to automatically detect mitosis | | [62] |
| DCNN | Propose an accurate method for detecting mitotic cells from histopathological slides using a multi-stage deep learning framework | | [63] |
| DCNN | Present an SSAE for efficient nuclei detection on high-resolution histopathological images of breast cancer | | [64] |
| DCNN | Introduce deep learning as a technique to improve the objectivity and efficiency of histopathologic slide analysis | | [65,66,67,68,69,70] |
| Semi-supervised learning | Present a semi-supervised deep learning strategy for breast cancer diagnosis | Semi-supervised learning trains the classifier with a large number of unlabeled samples and a small number of labeled samples, alleviating the shortage of labeled data | [71,72] |
| YOLO | A fast lesion detection method based on YOLO | Simple structure and fast speed | [50] |
| Faster R-CNN | A fast detection method of breast tumors based on Faster R-CNN | Faster R-CNN achieves high-accuracy object detection through a two-stage network and a Region Proposal Network | [52] |
| SSD | An automatic detection method of breast cancer lesions based on the Single Shot MultiBox Detector (SSD) | One-stage; good at detecting small objects | [53] |

3.2. Segmentation Method of Breast Pathological Image

Segmentation refers to dividing the input image into specific areas with unique properties and extracting them, separating the content of a region of interest (ROI) from the image background. The ROI in a pathological image of breast cancer is part of a lesion. When using deep learning methods, it is generally necessary to analyze and extract the characteristics of tumor lesions in the ROI so as to detect and classify pathological images. Pathological image segmentation plays an important role in pathological image processing and analysis and helps provide a reliable basis for clinical auxiliary diagnosis and treatment. Despite the high complexity of pathological images and the lack of simple linear features, pathological image segmentation has made significant progress thanks to the effective application of deep learning algorithms, which have achieved remarkable results in this task. Most pathological image segmentation uses supervised deep learning algorithms, such as FCN, RNN, and U-Net. Next, we introduce some typical models that perform particularly well in accuracy and performance. Table 3 shows the application of deep learning algorithms in the histopathological segmentation of breast cancer, classified according to model type, with the strategies summarized.
Mehta et al. [73] introduced a method to generate distinguishable tissue-level segmentation masks for breast cancer diagnosis (Figure 4A). Their Y-Net network expands and generalizes U-Net, adds a parallel branch for generating discriminative maps, and supports modular convolution blocks. Guo et al. [74] proposed v3-DCNN, a fast cancer region segmentation framework (Figure 4B): the classification model Inception-V3 preselects the tumor area, and then the semantic segmentation model DCNN segments 1280 × 1280 patches to reduce computation time and improve accuracy. Pan et al. [75] proposed an automatic nucleus segmentation method for H&E-stained histopathological images of breast cancer: a sparse reconstruction method roughly removes the background to emphasize the nuclei, and then a deep convolutional network (DCN) formed by cascading multilayer convolutional networks effectively segments the nuclei from the background (Figure 4C). Priego-Torres et al. [76] presented a processing pipeline for the automatic segmentation of breast cancer images containing different types of histopathological patterns (Figure 4D): a deep convolutional neural network (DCNN) with an encoder–decoder and separable convolutions segments each patch, and the local segmentation results are merged using an efficient fully connected conditional random field (CRF) to avoid discontinuity and inconsistency.
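Patch-based pipelines like v3-DCNN first tile the gigapixel WSI into fixed-size patches (1280 × 1280 above), segment each patch, and merge the results. A minimal sketch of the tiling step (the function and the shift-inward policy for the last row/column are illustrative assumptions, not the cited authors' exact scheme):

```python
def tile_coordinates(width, height, patch, stride=None):
    """Top-left coordinates for tiling a WSI region into fixed-size
    patches; the last row/column is shifted inward so that every patch
    stays fully inside the image."""
    stride = stride or patch
    xs = list(range(0, width - patch + 1, stride))
    ys = list(range(0, height - patch + 1, stride))
    if xs[-1] != width - patch:
        xs.append(width - patch)   # cover the right edge
    if ys[-1] != height - patch:
        ys.append(height - patch)  # cover the bottom edge
    return [(x, y) for y in ys for x in xs]

# A 3000x3000 region tiled into 1280x1280 patches -> a 3x3 grid,
# with the last row/column overlapping its neighbors.
coords = tile_coordinates(3000, 3000, patch=1280)
```

Using a stride smaller than the patch size produces overlapping patches, whose predictions can then be averaged (or merged with a CRF, as in [76]) to smooth seams at patch borders.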
Transformer-based methods are also widely used in medical image segmentation [21]. However, there are few studies on the segmentation of breast pathological images. Therefore, we surveyed transformer-based segmentation methods in fields related to breast pathological image segmentation, aiming to promote the development of transformer-based methods in this field. Cam et al. [77] quantitatively evaluated the segmentation performance of six popular transformer-based segmentation networks on pathological images using the PAIP liver histopathology dataset and compared them with classical CNN-based segmentation networks. The results show that transformer-based segmentation networks are generally better than CNN-based models, demonstrating the effectiveness of the transformer architecture on pathological image segmentation tasks. Li et al. [78] proposed a vision-language medical image segmentation model, LViT (Language meets Vision Transformer), to address the insufficient annotation of medical images, and verified its cell segmentation performance on the MoNuSeg dataset. Diao et al. [79] introduced the transformer into the classic U-Net architecture to extract and encode global context information and achieved SOTA performance on a nasopharyngeal carcinoma pathological image dataset.
Semi-automatic segmentation algorithms have also attracted much attention in breast cancer image analysis, mainly applied to X-ray [80], ultrasound [81], MRI [82], and other images; there is less research on pathological images of breast cancer. In recent related research, Lai et al. [83] combined semi-supervised and active learning to propose a segmentation algorithm for brain tissue pathological images and achieved IoU scores competitive with fully supervised learning.
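The IoU scores cited here, along with the Dice coefficient, are the standard overlap metrics for comparing a predicted segmentation mask against ground truth. A minimal NumPy sketch on toy binary masks:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice coefficient and IoU between two binary masks, the usual
    overlap metrics for pathological-image segmentation."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    dice = 2 * inter / total if total else 1.0
    iou = inter / union if union else 1.0
    return float(dice), float(iou)

# Toy 4x4 masks: the prediction covers the left half, the ground truth
# the top half, so they agree only on the top-left quadrant.
pred = np.zeros((4, 4), dtype=int); pred[:, :2] = 1
gt = np.zeros((4, 4), dtype=int); gt[:2, :] = 1
dice, iou = dice_and_iou(pred, gt)
```

Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), so papers reporting either can be compared, but the two numbers differ for any imperfect overlap.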
Through research and analysis, we found that the features generated by the discriminative segmentation masks used in [73] achieve the same segmentation accuracy as the most advanced methods while learning fewer parameters; however, that work only studied breast biopsy images and did not extend to other medical imaging tasks. The method proposed in [74], based on the v3-DCNN model, achieved a FROC score of 83.5%, higher than the Camelyon16 champion method (80.7%), and further achieved automatic heat map generation for WSIs; however, the model lacks validation on additional datasets and should be tested on more breast histopathological images. In [75], the k-SVD and Batch-OMP algorithms were used for sparse reconstruction to enhance the nuclear region; in the segmentation stage, a DCN trained with structural labels was used to obtain the exact pixels of the nuclei, and morphological operations and some prior knowledge were introduced to improve segmentation performance and reduce errors. The proposed algorithm is a general method that can be applied to many pathological applications; however, the dataset is too small, and the number of background pixels far exceeds the number of nuclear pixels, causing a class imbalance. The segmentation model proposed in [76] performed well on standard success rate and similarity segmentation metrics, especially considering that the dataset included WSIs with high tumor variability; web-based viewers and annotation tools were developed to allow collaboration with pathologists and technologists to establish a way to create datasets. However, all images used in that study were stained in the same laboratory and digitized using the same scanner, and images from other sources should be used to increase the heterogeneity of the training set.
Transformer-based methods have achieved remarkable results in computer vision and have become popular in medical image analysis tasks [21]. The transformer encodes context features well, which most researchers exploit to build global features of breast cancer pathological images and thereby improve model performance; a large number of experimental results show the effectiveness of the transformer architecture in this field [77,79,80]. Semi-automatic segmentation algorithms are still relatively rarely used in breast cancer pathological image segmentation; so far, this direction remains worth exploring.
Table 3. Summary of the application of deep learning algorithms in breast cancer histopathology for segmentation.
Model | Strategy | Advantages | Publication
ResNet | Propose segmentation of limited data using rough image-level tags, with performance comparable to fully labeled datasets | The main feature of ResNet is the residual block, whose purpose is to preserve the features learned before the current layer and pass them on to subsequent layers together with the newly trained data | [84]
FCN | Propose a fast segmentation method for breast cancer metastases in pathological images | The FCN replaces the fully connected layers behind a traditional CNN with convolutional layers, so that the network outputs a heat map rather than a category; upsampling is then used to recover the image size reduced by convolution and pooling | [85]
FCN | Propose an automatic method for detecting mitosis | | [86]
FCN | Describe a method to automatically segment nuclei from hematoxylin and eosin (H&E)-stained histopathology data with fully convolutional networks | | [87]
FCN | Use annotated datasets to create accurate models | | [60]
FCN | Propose a histopathological tissue analysis framework based on deep learning and verify its universality and model generalization under different data distributions | | [88]
U-Net | Use H&E-stained histopathological images of biopsy samples for the diagnosis and segmentation of breast cancer | U-Net networks are able to use labeled data effectively from a very small number of training images by relying on data augmentation | [89]
U-Net | Address the task of tissue-level segmentation at intermediate resolution of histopathological breast cancer images | | [90]
U-Net | Propose a deep learning framework consisting of high-resolution encoder paths, an atrous spatial pyramid pooling bottleneck module, and decoders | | [91]
U-Net | Investigate whether the patch-level performance of the classifier model can be further improved by integrating multiple extracted histological features into the input image | | [92]
CNN | Improve the current Simple Linear Iterative Clustering (SLIC) algorithm to achieve superpixel segmentation of high-dimensional features | | [93]
CNN | Use a pretrained convolutional neural network (CNN) for segmentation and then another hybrid CNN for classification of mitoses | | [94]
CNN | Identify a useful cell segmentation approach for histopathological images that uses prominent deep learning algorithms and spatial relationships | | [95]
CNN | Propose a framework (kide-Segnet) that combines an attention-based encoder–decoder architecture with atrous spatial pyramid pooling and efficient convolutions | | [96]
CNN | Propose a deep learning model for automatic segmentation of complex nuclei in tissue images via an encoder–decoder structure | | [97,98]
Transformer | Transformer-encoded global features improve U-Net segmentation performance | The transformer can encode the global features of pathological images and improves the performance of current algorithms in many fields | [79]
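To make the residual-block idea in the table above concrete, here is a minimal numpy sketch of the forward pass (illustrative only: real ResNet blocks use convolutions, batch normalization, and trained weights rather than the random dense weights shown here):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """y = ReLU(F(x) + x): the shortcut adds the input back after the
    learned transform F, so the block only has to learn a residual."""
    out = relu(x @ w1)    # first layer of the residual branch
    out = out @ w2        # second layer (activation comes after the add)
    return relu(out + x)  # identity shortcut

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
# Near-zero weights: the block then behaves almost like the identity map,
# which is why very deep ResNets remain easy to optimize at initialization.
w1 = 0.01 * rng.standard_normal((8, 8))
w2 = 0.01 * rng.standard_normal((8, 8))
y = residual_block(x, w1, w2)
```

With small weights the output stays close to `relu(x)`, illustrating how the shortcut preserves earlier features alongside the trained branch.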

3.3. Disease Classification Based on Breast Pathological Images

Classification of medical images distinguishes anatomical structures or tissue types according to their anatomical or pathological features. Classification tasks have many applications in determining the presence of disease, including the identification of tumor types. Deep learning is often used with medical images to classify target lesions into two or more categories. Binary classification refers to distinguishing breast cancer tissue slices from normal breast tissue slices in the pathological tissue slice dataset used. Multiclass classification divides pathological tissue slices of the breast into multiple categories using deep learning algorithms as required; common categories are normal, benign, in situ carcinoma, and invasive carcinoma. In general, the accuracy of binary classification tasks is higher than that of multiclass classification tasks. Among existing deep learning classification algorithms, classification accuracy has reached or even exceeded that of pathologists [43]. Next, we introduce some typical models that perform particularly well in accuracy and performance. Table 4 summarizes the application of deep learning algorithms to the histopathological classification of breast cancer, grouped by model type, together with the strategies employed.
Convolutional neural networks are the most widely used method for breast cancer pathological image classification tasks [46,99,100,101,102,103,104,105,106,107,108,109]. Roy et al. [110] developed a patch-based classifier (PBC) using a convolutional neural network (CNN) for automatic classification of breast cancer histopathological images (Figure 5A). They used two methods: one patch in one decision (OPOD) and all patches in one decision (APOD). Gandomkar et al. [111] proposed MuDeRN (multicategory classification of breast histopathological images using deep residual networks), a framework in which a 152-layer ResNet performs the classification task, enabling optimization of a deeper network for higher model accuracy (Figure 5B). Vesal et al. [112] proposed a transfer learning-based method to divide histological images of breast cancer into four subtypes: normal, benign, carcinoma in situ, and invasive carcinoma (Figure 5C). Transfer learning transfers ImageNet-pretrained parameters into Inception-V3 and ResNet50 and removes the last five layers of the network model to obtain global information. Alom et al. [113] proposed a method for breast cancer classification using the Inception recurrent residual convolutional neural network (IRRCNN) model, a combination of the Inception network (Inception-v4), the residual network (ResNet), and the recurrent convolutional neural network (RCNN), which provides advantages over DCNN models that exhibit superior performance in object recognition tasks (Figure 5D). Sudharshan et al. [109] proposed a weakly supervised learning framework based on multi-instance learning for the classification of breast pathological images. This method does not require a label for each instance, so it significantly alleviates the difficulty of labeling pathological images.
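A schematic reading of the OPOD/APOD scheme of [110]: OPOD assigns a class label to every patch, and APOD combines those patch decisions into a single image-level label. The aggregation rule below is a simple majority vote, which is an illustrative assumption; the paper's exact decision rule may differ.

```python
import numpy as np

def opod(patch_probs):
    """One Patch in One Decision: a class label for every patch."""
    return patch_probs.argmax(axis=1)

def apod(patch_labels, n_classes=4):
    """All Patches in One Decision: image-level label from patch labels
    (majority vote here, an illustrative assumption)."""
    return int(np.bincount(patch_labels, minlength=n_classes).argmax())

# 5 patches x 4 classes (normal, benign, in situ, invasive)
patch_probs = np.array([
    [0.1, 0.1, 0.1, 0.7],
    [0.2, 0.1, 0.1, 0.6],
    [0.6, 0.2, 0.1, 0.1],
    [0.1, 0.1, 0.2, 0.6],
    [0.1, 0.2, 0.1, 0.6],
])
labels = opod(patch_probs)   # per-patch decisions
image_label = apod(labels)   # -> 3 (invasive), voted by 4 of 5 patches
```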
In the field of medical image analysis, transformer-based methods were first used for disease classification tasks and produced significant results [114,115,116,117,118,119,120,121,122]. There have also been many significant advances in the classification of pathological images of breast cancer. Alotaibi et al. [119] designed an ensemble model based on ViT and DeiT to classify pathological images of breast cancer tissue and achieved an accuracy of 98.17% on the public BreakHis dataset. However, this method requires pretraining on large-scale datasets and model fine-tuning to alleviate the data hunger of the transformer, which obviously increases the training cost of the model and limits its scope of use. Shao et al. [120] used the global modeling capacity of the transformer to build relationships between instances, capturing contextual information to improve the performance of multi-instance learning on breast whole-slide images; they achieved a binary classification AUC of 93.09% on the CAMELYON16 dataset. Chen et al. [121] proposed the Multimodal Co-Attention Transformer (MCAT) architecture, which builds the relationship between WSIs and genomic features and applies it to survival analysis tasks. This method proved effective on different cancer datasets, including breast cancer datasets. Chen et al. [122] proposed a multi-scale vision transformer model (GasHisTransformer) for gastric cancer tissue image classification and also verified the effectiveness of this method on a breast pathology image dataset. He et al. [123] proposed the Deconv-Transformer (DecT), which incorporates color deconvolution in the form of convolution layers and uses a self-attention mechanism matched to the independent properties of the HED channel information obtained by the color deconvolution.
In [124], DCET-Net (based on two backbone streams, a CNN and a transformer) was proposed: the CNN stream focuses on local deep feature extraction from histopathological images, while the transformer stream enhances the global information representation of the images.
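The instance-to-bag aggregation that these transformer-based multi-instance methods rely on can be illustrated with a single attention-pooling step. This is a simplification of full self-attention, and the parameter vector `w` below stands in for learned attention weights:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_pool(H, w):
    """Attention-based pooling over instances: each patch embedding in H
    (n_patches x d) receives a scalar weight from w (d,), and the
    slide-level representation is the weighted sum of embeddings."""
    a = softmax(H @ w)   # nonnegative weights summing to 1
    return a @ H, a      # bag representation and the patch weight map

rng = np.random.default_rng(2)
H = rng.standard_normal((6, 16))  # 6 patch embeddings of dimension 16
w = rng.standard_normal(16)       # hypothetical learned attention vector
bag, weights = attention_pool(H, w)
```

The weight map `weights` is also what methods like [181] visualize to highlight the tiles that contribute most to the slide-level decision.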
The capsule network proposed by Geoffrey Hinton has also seen some valuable exploration in this field. Anupama et al. [125] applied a capsule network to preprocessed histology images, demonstrating that preprocessing data and tuning parameters can improve the performance of conventional architectures. Wang et al. [126] used FE-BkCapsNet, based on deep feature fusion and enhanced routing, which combines the advantages of CNNs and CapsNet; its classification performance is better than that of BkNet and CapsNet. However, classifying based on capsule features and convolutional features extracted in two parallel channels is very time-consuming. Iesmantas et al. [127] developed a convolutional capsule network for classifying H&E-stained breast tissue biopsy images into four types, but regularization was not taken into consideration.
Table 4. Summary of the application of deep learning algorithms in breast cancer histopathology for disease classification.
Model | Strategy | Advantages | Publication
Deep Belief Network (DBN) | Propose a new patch-based deep learning method, PA-DBN-BC, for breast cancer detection and classification in histopathological images | The DBN is a probabilistic generative model of important value in early applications of deep learning methods | [128]
Deep Neural Network (DNN) | Propose a new feature extractor, a deep manifold-preserving autoencoder, for automatic classification of breast cancer histopathological images | | [129]
Generative Adversarial Network (GAN) | Explore whether a deep learning algorithm can learn objective histologic H&E features | GANs can be used for data augmentation to alleviate the shortage of breast cancer pathological image data | [103]
Visual Geometry Group Network (VGG) | Discuss and compare automatic magnification-based multiclass classification in breast cancer detection | VGG is a deep neural network proposed in 2014 that provided rich deep features for early breast pathological image research | [130]
Recurrent Neural Network (RNN) | Present a deep learning model to classify hematoxylin–eosin-stained breast biopsy images into four classes | RNNs can model the context between pathological image features and be used to predict slide-level diagnostic results | [131]
Recurrent Neural Network (RNN) | Propose a second-order multi-instance learning approach that stacks adaptive aggregators built from attention mechanisms and recurrent neural networks (RNNs) for histopathological image classification | | [132]
Inception | Compare different machine learning methods for the classification and evaluation of breast cancer tumors | | [133]
Inception | Propose a computer-aided, transfer learning-based deep model as a binary classifier for breast cancer detection | | [134]
Inception | Propose a method for diagnosing breast cancer as benign or malignant in magnification-specific binary (MSB) classification | | [135]
Dynamic Convolution Neural Network (DCNN) | Propose an efficient deep convolutional neural network classification model with fast backpropagation learning | The DCNN can adaptively adjust convolution kernel parameters according to the input data, enhancing the feature expression ability of the model for tasks related to breast pathological images | [136]
Dynamic Convolution Neural Network (DCNN) | Develop a deep learning model, the biopsy microscopic image cancer network (BMIC_Net), for multiclass classification of BC | | [137]
Dynamic Convolution Neural Network (DCNN) | Propose two efficient models based on deep transfer learning to improve binary and multiclass classification systems | | [138,139]
Convolution Neural Network (CNN) | Propose a new self-ensembling deep architecture to leverage semantic information from annotated images and explore information hidden in unlabeled data | | [140]
Convolution Neural Network (CNN) | Propose an analysis and synthesis model learning method with novel algorithms and search strategies to classify images more effectively | | [141,142,143,144,145,146,147,148,149,150]
Convolution Neural Network (CNN) | Propose a set of training techniques and use image processing techniques to improve the performance of CNN-based models in breast cancer classification | | [143,151,152,153,154,155,156,157]
Deep residual network (ResNet) | Present a deep neural network that performs representation learning and cell nuclei recognition in an end-to-end manner | | [158]
Deep residual network (ResNet) | Propose an automatic multiclass classification method for breast cancer histopathological images based on transfer learning | | [159]
Deep residual network (ResNet) | Present a method that employs a convolutional neural network to detect tumors on whole-slide images | | [59,130,136,160]
Deep residual network (ResNet) | Propose a breast cancer multiclass classification method using a proposed deep learning model | | [106,113,137,161,162,163,164,165,166,167,168]
Thus, through research and analysis, we found that the classifier proposed in [110] first predicts the class label of each input patch with the OPOD technique and then predicts the whole-image label with the APOD technique. At the same time, the number of filters and the kernel size of each layer are adjusted so that the number of trainable parameters is smaller than the number of samples, preventing overfitting. The framework proposed in [111], MuDeRN, first trains a deep residual network (ResNet) to classify patches as benign or malignant. Images classified as malignant are then subdivided into four malignant subtypes, and images classified as benign into four benign subtypes. MuDeRN classified patients as benign or cancerous with 98.77% accuracy and achieved 96.25% patient-level accuracy across the eight categories. However, for subtypes with too few cases, MuDeRN's performance should be investigated on a larger database. In [112], color differences are handled with a normalization scheme that differs from the conventional approach of normalizing means; it shows a good effect, but this effect still requires verification. The authors do not hold out a test set during training, and results produced from a single partition of the training data are hardly convincing. At the same time, the authors changed the structure at the end of the network without experimentally proving the correctness of the modification. The IRRCNN model proposed in [113] successfully performed binary and multiclass breast cancer classification at fixed magnification factors. Image-level and patient-level results were evaluated at different magnifications on publicly available breast cancer histopathology datasets. Compared with existing breast cancer classification algorithms, it shows superior performance.
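MuDeRN's two-stage routing (coarse benign/malignant decision, then a fine-grained subtype within the chosen branch) can be sketched as below. The stage classifiers here are trivial stand-ins — in MuDeRN each stage is a trained ResNet — and the eight subtype names are those of the BreakHis dataset, assumed here for illustration:

```python
import numpy as np

BENIGN = ["adenosis", "fibroadenoma", "phyllodes tumor", "tubular adenoma"]
MALIGNANT = ["ductal", "lobular", "mucinous", "papillary"]

def stage1(features):
    """Placeholder benign/malignant classifier (a trained ResNet in MuDeRN)."""
    return "malignant" if features.mean() > 0 else "benign"

def stage2(features, branch):
    """Placeholder subtype classifier for the branch chosen by stage 1."""
    subtypes = MALIGNANT if branch == "malignant" else BENIGN
    return subtypes[int(abs(features.sum() * 10)) % len(subtypes)]

def classify(features):
    branch = stage1(features)                # coarse decision first
    return branch, stage2(features, branch)  # then routed subtype

branch, subtype = classify(np.array([1.0, -1.0, 2.0]))
```

The point of the cascade is that each stage solves an easier problem than a single eight-way classifier would.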
Recently, there have also been studies based on attention mechanisms [169] and on combining deep semantic features with image texture features [170] for breast cancer classification. They all achieved good results and provide references and directions for future research.
From the above results, it can be seen that deep learning has already achieved remarkable results in the field of pathological image classification of breast cancer. How to further improve the performance of deep learning-assisted diagnosis and better provide treatment recommendations based on existing results in pathological image analysis will be the focus of subsequent research. Eventually, we hope to realize a deep learning model that can integrate multimodal data (including medical images, gene sequences, diagnostic reports, drug molecular structures, and other related information) and truly realize the value of deep learning in clinical applications.

3.4. Genetic Prediction Based on Deep Learning

WSIs are widely used in digital pathology to predict gene mutations, molecular subtypes, and clinical outcomes. They are usually divided into patches for training neural networks and prediction models; however, because patch-level labels are usually unavailable, we cannot directly classify each patch. In the past few decades, aided by the rapid development of high-throughput microarray and gene expression analysis technologies, many studies have used gene expression patterns to understand the molecular characteristics of breast cancer. Van de Vijver [171] conducted a preliminary study that effectively predicted the prognosis of breast cancer through gene expression profiles, clustering gene expression profile data and correlating the clusters with prognostic values. Integrating gene expression profile data and clinical data may improve the accuracy of prognostic and diagnostic prediction models [172]. In fact, microarray data are high-dimensional: each patient contains about 25,000 genes, and there may be latent relationships between different genes that could improve the accuracy of breast cancer prognosis prediction [173]. Many genes related to breast cancer have been identified, and mutation and abnormal amplification of oncogenes and tumor suppressor genes play a key role in the occurrence and development of tumors. For example, BRCA1 and BRCA2 are two well-known breast cancer susceptibility genes, and human epidermal growth factor receptor 2 (also known as c-erbB-2) is an important oncogene in breast cancer. Table 5 shows the application of deep learning algorithms to genetic prediction in breast cancer.
Khademi et al. proposed a probabilistic graphical model (PGM) [172] that predicts and diagnoses breast cancer by integrating two independent sources: microarray data and clinical data. They first applied principal component analysis (PCA) to reduce the dimensionality of the microarray data and built a deep belief network to extract feature representations of the data; they also applied structure learning algorithms to the clinical data. Today, inspired by the successful application of deep learning methods in the computer vision field and the large contribution of multidimensional data to cancer prognosis prediction, much work provides slide-level predictions directly through deep learning, and digital whole-slide imaging (WSI) may provide a computationally efficient way to quantitatively characterize the cell-level heterogeneity of cancer specimens. Pathologists usually use WSIs to identify nuclear features, diagnose cancer status, and measure the histopathological grade of cancer tissues. Preliminary evidence shows that deep learning methods can automatically predict cancer subtypes for various cancers [174], predict mutations in lung cancer [175] and liver cancer [176], classify mesotheliomas [172], detect DNA methylation patterns [177], estimate human epidermal growth factor receptor status in breast cancer [178], and predict pan-cancer prognosis [179]. However, pan-cancer research [180] cannot provide an in-depth description of breast cancer histopathology, mutations, and pathway activity levels. At present, CNN-based deep learning can predict gene mutation status in H&E-stained WSIs, and it has the potential to improve cancer prognosis and treatment by exploiting biomarkers that are currently undetectable by clinicians.
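The PCA step used in [172] to compress high-dimensional microarray profiles before feeding a downstream model can be sketched as follows (toy data sizes; real profiles contain roughly 25,000 genes per patient):

```python
import numpy as np

def pca_reduce(X, k):
    """Project samples onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                 # center each gene's expression
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                    # k-dimensional representation

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 500))          # 30 patients x 500 genes (toy scale)
Z = pca_reduce(X, 10)                       # compact features for a classifier
```

The reduced matrix `Z` preserves the directions of greatest variance, ordered from largest to smallest.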
Although artificial intelligence cannot completely replace human beings in practice, gene mutation prediction can be used as a prescreening step to improve cost efficiency before next-generation sequencing, thus improving the performance of precision medicine.
However, there is still a lack of deep learning research linking breast cancer WSIs with genes [181]. After a careful search, we found the following. Wang et al. [182] developed a CNN-based deep learning system to predict a molecular marker of BC (gBRCA mutation) through tumor histomorphology analysis, studying whether gBRCA mutation affects the tumor cell pattern in H&E-stained BC WSIs. Qu et al. [181] used a ResNet followed by a fully connected layer with self-attention and maximum pooling to display a weight map over the tumor tiles, so as to understand the classifier's decision and highlight the regions contributing most to the final prediction; they showed that key gene mutation status and biological pathway activities of breast cancer can be predicted by a deep learning classifier applied to whole-slide images. He et al. [183] used whole-slide images with hematoxylin and eosin (H&E) staining, trained deep neural networks on a large dataset of paired H&E and IHC-labeled images, and demonstrated accurate estimation of ER receptor status from H&E staining (Figure 6B).
In addition, considering the limitations of methods based on a single information source, such as nonuniversality, nonuniqueness, and noisy data, multimodal learning has been proposed to solve these problems and reach a final decision by combining relevant information from multiple sources [184,185]. As a form of multimodal learning, the multimodal deep neural network integrating multidimensional data (MDNNMD) [186] was proposed for prognosis prediction of human breast cancer. MDNNMD is an effective method for integrating multidimensional data, including gene expression profiles, copy number alteration (CNA) profiles, and clinical data, fusing them at the score level to produce the final prediction. The method accounts for the heterogeneity between different data types and makes full use of the abstract high-level representation of each data source (Figure 6A).
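MDNNMD's score-level fusion means each modality-specific sub-network emits its own risk score and the final prediction is a weighted combination of those scores. A minimal sketch, in which the scores, weights, and decision threshold are all illustrative assumptions (the paper's actual values are learned or tuned):

```python
import numpy as np

# Hypothetical risk scores for 4 patients from three modality-specific networks
expr_scores = np.array([0.80, 0.30, 0.55, 0.10])  # gene expression profile
cna_scores  = np.array([0.70, 0.40, 0.60, 0.20])  # copy number alteration
clin_scores = np.array([0.90, 0.20, 0.50, 0.15])  # clinical data

weights = np.array([0.4, 0.2, 0.4])               # illustrative fusion weights
fused = (weights[0] * expr_scores
         + weights[1] * cna_scores
         + weights[2] * clin_scores)              # score-level fusion
prediction = (fused >= 0.5).astype(int)           # threshold is an assumption
```

Late fusion of this kind lets each sub-network be designed for its own data type, with heterogeneity handled only at the final combination step.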
Petkov et al. [187] accurately predicted the prognosis of IDC, which helps determine individualized adjuvant treatment for breast cancer patients. Lin et al. [188] proposed and tested WSI preprocessing and feature extraction methods; combining CAF genes, WSI features, and lymph node status, they established a multi-omics model to predict the prognosis of IDC breast cancer patients (Figure 6C).
In the field of digital pathology, unsupervised clustering has been widely used to reduce the dimensionality of patches to facilitate multi-instance learning (for example, so that the patches from a WSI can fit on a graphics processing unit (GPU) at once) [189]. This approach is also used to derive additional cluster-based features and identify rare events. Dooley et al. [190] and Zhu et al. [191] clustered patches and used the frequency of patches in each cluster as a new feature to predict heart transplant rejection; see also Abbet et al. [189]. Although various unsupervised clustering applications have been developed in digital pathology, few studies have evaluated the use of unsupervised clustering to identify image patches related to gene mutations. Chen et al. [192] proposed a multi-instance learning method based on unsupervised clustering and developed a deep learning model using WSIs of three common cancer types obtained from The Cancer Genome Atlas (TCGA) to optimize the prediction of gene mutations.
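The cluster-frequency features described above can be sketched with a minimal k-means in numpy. The toy two-blob data stands in for CNN patch embeddings, and the deterministic initialization (first and last samples) is a simplification of the usual random init:

```python
import numpy as np

def kmeans(X, k, iters=25):
    """Minimal Lloyd's k-means; returns a cluster label per patch feature."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()  # simple init
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                 # assign to nearest center
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)  # recompute centers
    return labels

def cluster_frequencies(labels, k):
    """Slide-level feature vector: fraction of patches in each cluster."""
    counts = np.bincount(labels, minlength=k)
    return counts / counts.sum()

rng = np.random.default_rng(3)
# 40 patch embeddings drawn from two separated blobs (toy CNN features)
patches = np.vstack([rng.normal(0, 0.2, (25, 8)), rng.normal(3, 0.2, (15, 8))])
labels = kmeans(patches, k=2)
freq = cluster_frequencies(labels, k=2)   # slide descriptor, e.g. for rejection/mutation prediction
```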
Figure 6. Typical application of deep learning in genetic prediction task. (a) MDNNMD model uses multidimensional data to predict the prognosis of breast cancer [186]. (b) Estrogen receptor status (ERS) was predicted from the whole-slide image of H&E staining [183]. (c) A multi-omics signature to predict the prognosis of invasive ductal carcinoma of the breast [188].
Table 5. Summary of the application of deep learning algorithms in breast cancer histopathology for genetic prediction.
Model | Strategy | Advantages | Publication
Attention mechanism | Weight different pathological image regions based on an attention mechanism to improve prediction results | The attention mechanism simulates human visual behavior by applying different weights to an image, highlighting key and non-key areas in pathological images and thereby improving the model's predictions | [181]
Attention mechanism | Propose a method based on ResNet and an attention mechanism for predicting pathological gene subtypes of breast cancer | | [183]
KNN and k-means | Use an unsupervised clustering method to reduce the manual labeling workload of pathologists | Unsupervised clustering can group image patches without patch-level labels, e.g., to identify patches related to gene mutations | [192]

4. Conclusions and Perspective

Deep learning is widely used for the detection, segmentation, and classification of breast cancer pathology and has achieved remarkable results. Common networks for detection include the CNN and RNN, which have reached the level of pathologists in recognition accuracy in some areas. Among these network structures, CNN-based architectures perform excellently in recognition accuracy. On the CAMELYON16 challenge test set, the CNN-based network NCRF achieved an average FROC score of 0.8096, higher than the previous champion of the challenge; for comparison, the score of professional pathologists is 0.7240. Comparing the CNN and RNN methods further, the RNN, with its memory function, can describe outputs over temporally continuous states, while the CNN handles static outputs. Although the RNN can solve problems that the CNN cannot handle, it is not as effective as the CNN. We can integrate the time-series processing of the RNN into the CNN so as to combine temporally continuous outputs and obtain better image recognition results.
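The average FROC score quoted above is the mean sensitivity at six predefined false-positive rates per slide (the CAMELYON16 evaluation points). A small numpy sketch; the detector operating points below are made up for illustration:

```python
import numpy as np

def avg_froc(fps_per_slide, sensitivities,
             eval_points=(0.25, 0.5, 1, 2, 4, 8)):
    """CAMELYON16-style score: mean sensitivity at six false-positive
    rates per slide, interpolated from the FROC curve."""
    return float(np.mean(np.interp(eval_points, fps_per_slide, sensitivities)))

# Hypothetical operating points of a detector (monotone FROC curve;
# fps must be increasing for np.interp)
fps = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
sens = np.array([0.55, 0.65, 0.72, 0.78, 0.84, 0.88, 0.91])
score = avg_froc(fps, sens)
```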
There are several common networks for segmentation, including the FCN, U-Net, RNN, and GAN. These models achieve good results in pathological image segmentation. Pathological image segmentation methods based on an FCN usually use manually segmented samples at the pixel level as the training dataset and then learn by calculating a per-pixel loss. The network structure is affected by subsampling, which makes it difficult to retain meaningful spatial information in the upsampled feature map. In addition to improving the network structure and training methods, we can also address pathological segmentation by defining different loss functions. However, this still relies on the mechanism of comparing per-pixel differences, so its ability to constrain spatial geometric information is very limited. Pathological image segmentation based on U-Net is one of the most widely used techniques. The U-Net network can effectively solve the segmentation of complex structures by capturing global features in the contracting path and achieving accurate localization in the expanding path. However, the local dependence between pixels is not fully considered, which makes it susceptible to the influence of the external characteristics of the target. We can evaluate the importance of features at different positions by combining U-Net with an attention mechanism that assigns weights, and then model the context dependence of the local features. Comparing the FCN and RNN methods further, the network structure and training method adopted by RNN-based pathological image segmentation fully consider the long-range and global dependencies between similar pixels, enhancing the ability to capture the spatial and appearance consistency of segmentation labels.
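As one example of "defining different loss functions" for segmentation, the soft Dice loss is a common choice because it measures region overlap and is less sensitive to class imbalance than per-pixel cross-entropy. This is an illustrative option, not a loss prescribed by the reviewed papers:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|); 0 means perfect overlap."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Predicted foreground probabilities vs. a binary ground-truth mask
pred = np.array([[0.9, 0.8],
                 [0.1, 0.0]])
target = np.array([[1.0, 1.0],
                   [0.0, 0.0]])
loss = dice_loss(pred, target)   # small: prediction nearly matches the mask
```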
GAN-based pathological image segmentation methods generally do not need to be modeled in advance; generators and discriminators can choose any structure of the neural network. A GAN model usually leads to poor controllability in the training process and the insufficient stability of the model. Therefore, when using a GAN model to learn the distribution of large-scale pathological data, it is necessary to enhance the stability of the model and its training process. The improvement in the segmentation accuracy of the pathological images of breast cancer will improve the model accuracy of recognition and classification tasks.
In a classification task, the accuracy of multiclass classification is usually lower than that of binary classification. Among the deep learning models for classification tasks, those based on the Inception-V3 series yield better accuracy in both binary and multiclass classification. Comparing the Inception and ResNet methods further, Inception requires fewer parameters than ResNet. In addition, we can introduce an attention mechanism into a deep learning network for analyzing pathological images: the weights corresponding to different scales are learned through the network framework, and the features of different scales are then fused with the attention mechanism to obtain richer pathological image features and thereby achieve accurate classification.
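The multi-scale attention fusion described above can be sketched as follows. In practice the per-scale scores would come from a learned sub-network; here they are fixed illustrative values, and the one-hot "features" are toy stand-ins:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_multiscale(features, scale_scores):
    """Fuse per-scale feature vectors with softmax attention weights:
    scales that score higher contribute more to the fused descriptor."""
    a = softmax(np.asarray(scale_scores, dtype=float))
    return sum(w * f for w, f in zip(a, features)), a

# Three hypothetical feature vectors extracted at different magnifications
f40x = np.array([1.0, 0.0, 0.0])
f20x = np.array([0.0, 1.0, 0.0])
f10x = np.array([0.0, 0.0, 1.0])
fused, a = fuse_multiscale([f40x, f20x, f10x], [2.0, 1.0, 0.5])
```

The fused descriptor would then feed the final classification head in place of a single-scale feature.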
The successful application of deep learning in breast cancer pathology has provided pathologists with auxiliary diagnostic methods, which significantly improve the accuracy and efficiency of breast cancer diagnoses. For supervised deep learning algorithms, many labeled training samples are required to improve the models' predictive performance. There are few labeled data in the existing public datasets, and the high resolution of pathological sections also makes manual labeling extremely cumbersome. To develop better-performing models, obtaining more labeled training data via dataset sharing, or using unsupervised deep learning algorithms, is an effective way forward. In addition, the lack of interpretability of deep learning algorithms has hindered their application in medical diagnosis. It is difficult to understand the features or decision logic of a neural network at the semantic level due to a lack of mathematical tools to diagnose and evaluate the network's capacity for feature expression (for example, the generalization ability and convergence speed of the deep model). It is also difficult to explain the information processing of different neural network models. When sample data are input into a neural network, we can hardly explain the reasons for the predicted results, and optimizing the network is difficult. To describe the interpretability of deep learning algorithms, we can start from model-based internal interpretability, understanding the internal operation of the model while obtaining the output result, or from result-based interpretability, inferring the operation of the model from its outputs.
With the development of deep learning, AI systems can be built to provide pathologists with auxiliary diagnosis methods. The assisted diagnosis based on deep learning is beneficial to provide more objective and reasonable diagnosis results for patients. Further, AI and healthcare are combined to promote intelligent health management. Intelligent health management is a specific scenario in which artificial intelligence technology is applied to health management for risk identification, virtual nursing, mental health consultation, online consultation, health interventions, and health management based on precision medicine. In the future, pathology image AI needs to further improve its interpretability, such as developing more rational visualization algorithms and adding causal inference to deep learning algorithms.
The medical image analysis method based on deep learning still has many limitations and challenges. At present, the mainstream deep learning method is still the data-driven supervised learning method. Large-scale datasets and fine-grained manual annotation are the key factors for such methods to achieve an excellent performance. This is contrary to the actual clinical environment. The development of deep learning in the field of medical image analysis is limited by the long tail of diseases, the heterogeneity of medical images, and the professionalism of fine-grained labeling. Therefore, how to make full use of large-scale unlabeled datasets, how to mitigate the heterogeneity of multicenter data, and how to make full use of only coarse-grained labels have become the key issues in this field. Unsupervised learning, few-shot learning, image denoising, and other methods will have important value in the future medical image analysis field.
This paper has some limitations. First, this paper focuses on the application of deep learning in breast cancer pathology images, with less description of the innovation of the algorithm and the details of the model. Second, the included articles are mainly based on the prediction of images and lack a focus on multimodal models.

Author Contributions

Conceptualization, Y.Z. and X.C.; methodology, Y.Z.; validation, J.Z., H.Q., and Y.Z.; formal analysis, J.Z. and D.H.; investigation, X.C. and Y.T.; resources, Y.Z.; data curation, J.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, X.C.; visualization, H.Q. and D.H.; supervision, X.C.; project administration, Y.Z.; funding acquisition, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities (N2219001 and N2224001-10) and the Ningbo Science and Technology Bureau (Grant No. 2021Z027).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BC	Breast Cancer
CAD	Computer-Aided Diagnosis
ReLU	Rectified Linear Unit
GDC	Genomic Data Commons
NCI	The National Cancer Institute
H&E	Hematoxylin and Eosin
WSIs	Whole-Slide Images
CNN	Convolutional Neural Network
IDC	Invasive Ductal Carcinoma
Faster R-CNN	Faster Region-based Convolutional Neural Network
SGE	Stacked Generalized Ensemble
ROI	Region of Interest
DCN	Deep Convolutional Network
DCNN	Deep Convolutional Neural Network
CRF	Conditional Random Field
SLIC	Simple Linear Iterative Clustering
PBC	Patch-Based Classifier
OPOD	One Patch in One Decision
APOD	All Patches in One Decision
MuDeRN	Multicategory Classification of Breast Histopathological Images Using Deep Residual Networks
IRRCNN	Inception Recurrent Residual Convolutional Neural Network
ResNet	Residual Network
MSB	Magnification-Specific Binary
BMIC_Net	Biopsy Microscopic Image Cancer Network

References

  1. Hoon Tan, P.; Ellis, I.; Allison, K.; Brogi, E.; Fox, S.B.; Lakhani, S.; Lazar, A.J.; Morris, E.A.; Sahin, A.; Salgado, R.; et al. The 2019 World Health Organization classification of tumours of the breast. Histopathology 2020, 77, 181–185. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Ferlay, J.; Colombet, M.; Soerjomataram, I.; Parkin, D.M.; Piñeros, M.; Znaor, A.; Bray, F. Cancer statistics for the year 2020: An overview. Int. J. Cancer 2021, 149, 778–789. [Google Scholar] [CrossRef] [PubMed]
  3. Abels, E.; Pantanowitz, L.; Aeffner, F.; Zarella, M.D.; Kozlowski, C. Computational Pathology Definitions, Best Practices, and Recommendations for Regulatory Guidance: A White Paper from the Digital Pathology Association. J. Pathol. 2019, 249, 286–294. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Veta, M.; Pluim, J.P.; Van Diest, P.J.; Viergever, M.A. Breast cancer histopathology image analysis: A review. IEEE Trans. Biomed. Eng. 2014, 61, 1400–1411. [Google Scholar] [CrossRef] [PubMed]
  5. Niazi, M.K.K.; Parwani, A.V.; Gurcan, M.N. Digital pathology and artificial intelligence. Lancet Oncol. 2019, 20, e253–e261. [Google Scholar] [CrossRef]
  6. Yaffe, M.J. Emergence of “Big Data” and Its Potential and Current Limitations in Medical Imaging. Semin. Nucl. Med. 2019, 49, 94–104. [Google Scholar] [CrossRef]
  7. Jang, H.-J.; Cho, K.-O. Applications of deep learning for the analysis of medical data. Arch. Pharmacal Res. 2019, 42, 492–504. [Google Scholar] [CrossRef]
  8. Srinidhi, C.L.; Ciga, O.; Martel, A.L. Deep neural network models for computational histopathology: A survey. Med. Image Anal. 2019, 67, 101813. [Google Scholar] [CrossRef]
  9. Wang, C.W.; Khalil, M.A.; Firdi, N.P. A Survey on Deep Learning for Precision Oncology. Diagnostics 2022, 12, 1489. [Google Scholar] [CrossRef] [PubMed]
  10. Robertson, S.; Azizpour, H.; Smith, K.; Hartman, J. Digital image analysis in breast pathology-from image processing techniques to artificial intelligence. Transl. Res. 2018, 194, 19–35. [Google Scholar] [CrossRef]
  11. Gao, J.; Jiang, Q.; Zhou, B.; Chen, D. Convolutional neural networks for computer-aided detection or diagnosis in medical image analysis: An overview. Math. Biosci. Eng. 2019, 16, 6536–6561. [Google Scholar] [CrossRef] [PubMed]
  12. Suri, J.S.; Biswas, M.; Kuppili, V.; Saba, L.; Edla, D.R.; Suri, H.S.; Cuadrado-Godia, E.; Laird, J.R.; Marinhoe, R.T.; Sanches, J.M.; et al. State-of-the-art review on deep learning in medical imaging. Front.-Biosci.-Landmark 2019, 24, 392–426. [Google Scholar] [CrossRef] [PubMed]
  13. Krithiga, R.; Geetha, P. Breast Cancer Detection, Segmentation and Classification on Histopathology Images Analysis: A Systematic Review. Arch. Comput. Methods Eng. 2020, 24, 392–426. [Google Scholar] [CrossRef]
  14. Debelee, T.G.; Schwenker, F.; Ibenthal, A.; Yohannes, D. Survey of deep learning in breast cancer image analysis. Evolving Systems 2020, 11, 143–163. [Google Scholar] [CrossRef]
  15. Jannesari, M.; Habibzadeh, M.; Aboulkheyr, H.; Khosravi, P.; Elemento, O.; Totonchi, M.; Hajirasouliha, I. Breast Cancer Histopathological Image Classification: A Deep Learning Approach. In Proceedings of the 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, 3–6 December 2018; pp. 2405–2412. [Google Scholar]
  16. Wang, X.; Chen, H.; Gan, C.; Lin, H.; Dou, Q.; Tsougenis, E.; Huang, Q.; Cai, M.; Heng, P.-A. Weakly Supervised Deep Learning for Whole Slide Lung Cancer Image Analysis. IEEE Trans. Cybern. 2020, 50, 3950–3962. [Google Scholar] [CrossRef]
  17. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [Green Version]
  18. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
  19. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef] [Green Version]
  20. Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative adversarial networks: An overview. IEEE Signal Process. Mag. 2018, 35, 53–65. [Google Scholar] [CrossRef] [Green Version]
  21. Shamshad, F.; Khan, S.; Zamir, S.W.; Khan, M.H.; Hayat, M.; Khan, F.S.; Fu, H. Transformers in medical imaging: A survey. arXiv 2022, arXiv:2201.09873. [Google Scholar]
  22. Dimitriou, N.; Arandjelović, O.; Caie, P.D. Deep Learning for Whole Slide Image Analysis: An Overview. Front. Med. 2019, 6, 264. [Google Scholar] [CrossRef] [PubMed]
  23. LeCun, Y.; Boser, B.; Denker, J.; Henderson, D.; Howard, R.; Hubbard, W.; Jackel, L. Handwritten digit recognition with a back-propagation network. Adv. Neural Inf. Process. Syst. 1989, 2. Available online: https://proceedings.neurips.cc/paper/1989/file/53c3bce66e43be4f209556518c2fcb54-Paper.pdf (accessed on 1 December 2022).
  24. AlEisa, H.N.; Touiti, W.; Ali ALHussan, A.; Ben Aoun, N.; Ejbali, R.; Zaied, M.; Saadia, A. Breast Cancer Classification Using FCN and Beta Wavelet Autoencoder. Comput. Intell. Neurosci. 2022, 2022, 8044887. [Google Scholar] [CrossRef] [PubMed]
  25. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  26. Xie, S.; Tu, Z. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1395–1403. [Google Scholar]
  27. Rampun, A.; López-Linares, K.; Morrow, P.J.; Scotney, B.W.; Wang, H.; Ocaña, I.G.; Maclair, G.; Zwiggelaar, R.; Ballester, M.A.G.; Macía, I. Breast pectoral muscle segmentation in mammograms using a modified holistically-nested edge detection network. Med. Image Anal. 2019, 57, 1–17. [Google Scholar] [CrossRef]
  28. Alom, M.Z.; Aspiras, T.; Taha, T.M.; Bowen, T.; Asari, V.K. MitosisNet: End-to-End Mitotic Cell Detection by Multi-Task Learning. IEEE Access 2020, 99, 1. [Google Scholar] [CrossRef]
  29. Toğaçar, M.; Özkurt, K.B.; Ergen, B.; Cömert, Z. BreastNet: A novel convolutional neural network model through histopathological images for the diagnosis of breast cancer. Phys. A Stat. Mech. Its Appl. 2019, 545, 123592. [Google Scholar]
  30. Cui, Z.; Su, F.; Li, Y.; Yang, D. Circulating tumour cells as prognosis predictive markers of neoadjuvant chemotherapy-treated breast cancer patients. J. Chemother. 2020, 32, 304–309. [Google Scholar] [CrossRef]
  31. Khosravi, P.; Kazemi, E.; Imielinski, M.; Elemento, O.; Hajirasouliha, I. Deep Convolutional Neural Networks Enable Discrimination of Heterogeneous Digital Pathology Images. Ebiomedicine 2018, 27, 317–328. [Google Scholar] [CrossRef] [Green Version]
  32. Feng, M.; Deng, Y.; Yang, L.; Jing, Q.; Bu, H. Automated quantitative analysis of Ki-67 staining and HE images recognition and registration based on whole tissue sections in breast carcinoma. Diagn. Pathol. 2020, 15, 65. [Google Scholar] [CrossRef]
  33. Akbar, S.; Peikari, M.; Salama, S.; Panah, A.Y.; Nofech-Mozes, S.; Martel, A.L. Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment. Sci. Rep. 2019, 9, 14099. [Google Scholar] [CrossRef] [Green Version]
  34. Lin, H.; Chen, H.; Graham, S.; Dou, Q.; Rajpoot, N.; Heng, P.A. Fast ScanNet: Fast and Dense Analysis of Multi-Gigapixel Whole-Slide Images for Cancer Metastasis Detection. IEEE Trans. Med. Imaging 2019, 38, 1948–1958. [Google Scholar] [CrossRef] [Green Version]
  35. Jimenez, G.; Racoceanu, D. Deep Learning for Semantic Segmentation vs. Classification in Computational Pathology: Application to Mitosis Analysis in Breast Cancer Grading. Front. Bioeng. Biotechnol. 2019, 7, 145. [Google Scholar] [CrossRef] [PubMed]
  36. Wahab, N.; Khan, A.; Lee, Y.S. Two-phase deep convolutional neural network for reducing class skewness in histopathological images based breast cancer detection. Comput. Biol. Med. 2017, 85, 86–97. [Google Scholar] [CrossRef] [PubMed]
  37. Mahmood, T.; Arsalan, M.; Owais, M.; Lee, M.B.; Park, K.R. Artificial Intelligence-Based Mitosis Detection in Breast Cancer Histopathology Images Using Faster R-CNN and Deep CNNs. J. Clin. Med. 2020, 9, 749. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Kumar, D.; Batra, U. Classification of Invasive Ductal Carcinoma from histopathology breast cancer images using Stacked Generalized Ensemble. J. Intell. Fuzzy Syst. 2021, 40, 4919–4934. [Google Scholar] [CrossRef]
  39. Sigirci, I.O.; Albayrak, A.; Bilgin, G. Detection of mitotic cells in breast cancer histopathological images using deep versus handcrafted features. Multimed. Tools Appl. 2021, 81, 13179–13202. [Google Scholar] [CrossRef]
  40. Zeiser, F.A.; da Costa, C.A.; de Oliveira Ramos, G.; Bohn, H.C.; Santos, I.; Roehe, A.V. DeepBatch: A hybrid deep learning model for interpretable diagnosis of breast cancer in whole-slide images. Expert Syst. Appl. 2021, 185, 115586. [Google Scholar] [CrossRef]
  41. Krithiga, R.; Geetha, P. Deep learning based breast cancer detection and classification using fuzzy merging techniques. Mach. Vis. Appl. 2020, 31, 63. [Google Scholar] [CrossRef]
  42. Zhao, Z.Q.; Zheng, P.; Xu, S.T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef] [Green Version]
  43. Liu, Y.; Gadepalli, K.; Norouzi, M.; Dahl, G.E.; Kohlberger, T.; Boyko, A.; Venugopalan, S.; Timofeev, A.; Nelson, P.Q.; Corrado, G.S.; et al. Detecting cancer metastases on gigapixel pathology images. arXiv 2017, arXiv:1703.02442. [Google Scholar]
  44. George, K.; Sankaran, P.; Joseph, P.K. Computer assisted recognition of breast cancer in biopsy images via fusion of nucleus-guided deep convolutional features. Comput. Methods Programs Biomed. 2020, 194, 105531. [Google Scholar] [CrossRef] [PubMed]
  45. Chen, H.; Dou, Q.; Wang, X.; Qin, J.; Heng, P. Mitosis detection in breast cancer histology images via deep cascaded networks. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016. [Google Scholar]
  46. Bardou, D.; Zhang, K.; Ahmad, S.M. Classification of Breast Cancer Based on Histology Images Using Convolutional Neural Networks. IEEE Access 2018, 6, 24680–24693. [Google Scholar] [CrossRef]
  47. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  48. Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  49. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. Eur. Conf. Comput. Vis. 2016, 9905, 21–37. [Google Scholar]
  50. Lu, Y.; Zhang, J.; Liu, X.; Zhang, Z.; Li, W.; Zhou, X.; Li, R. Prediction of breast cancer metastasis by deep learning pathology. IET Image Process. 2022. [Google Scholar] [CrossRef]
  51. Huang, H.; Feng, X.; Jiang, J.; Chen, P.; Zhou, S. Mask RCNN algorithm for nuclei detection on breast cancer histopathological images. Int. J. Imaging Syst. Technol. 2022, 32, 209–217. [Google Scholar] [CrossRef]
  52. Harrison, P.; Park, K. Tumor Detection In Breast Histopathological Images Using Faster R-CNN. In Proceedings of the 2021 International Symposium on Medical Robotics (ISMR), Atlanta, GA, USA, 17–19 November 2021; pp. 1–7. [Google Scholar]
  53. Yamaguchi, M.; Sasaki, T.; Uemura, K.; Tajima, Y.; Kato, S.; Takagi, K.; Yamazaki, Y.; Saito-Koyama, R.; Inoue, C.; Kawaguchi, K.; et al. Automatic breast carcinoma detection in histopathological micrographs based on Single Shot Multibox Detector. J. Pathol. Inform. 2022, 13, 100147. [Google Scholar] [CrossRef]
  54. Zorgani, A.; Mohamed, M.; Mehmood, I.; Ugail, H. Deep yolo-based detection of breast cancer mitotic-cells in histopathological images. In Proceedings of the International Conference on Medical Imaging and Computer-Aided Diagnosis, Birmingham, UK, 25–26 March 2021; pp. 335–342. [Google Scholar]
  55. Campanella, G.; Hanna, M.G.; Geneslaw, L.; Miraflor, A.; Silva, V.W.K.; Busam, K.J.; Brogi, E.; Reuter, V.E.; Klimstra, D.S.; Fuchs, T.J. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat. Med. 2019, 25, 1301–1309. [Google Scholar] [CrossRef]
  56. Kausar, T.; Wang, M.; Ashraf, M.A.; Kausar, A. SmallMitosis: Small Size Mitotic Cells Detection in Breast Histopathology Images. IEEE Access 2021, 9, 905–922. [Google Scholar] [CrossRef]
  57. Liu, Y.; Kohlberger, T.; Norouzi, M.; Dahl, G.E.; Smith, J.L.; Mohtashamian, A.; Olson, N.; Peng, L.H.; Hipp, J.D.; Stumpe, M.C. Artificial Intelligence-Based Breast Cancer Nodal Metastasis Detection. Arch. Pathol. Lab. Med. 2019, 143, 859–868. [Google Scholar] [CrossRef] [Green Version]
  58. Ma, X.; Liu, H.; Niu, Y.; Zhang, C.; Liu, D. Improvement of Whole-Slide Pathological Image Recognition Method Based on Deep Learning. Int. Symp. Comput. Intell. Des. 2018, 2, 269–272. [Google Scholar]
  59. Brancati, N.; De Pietro, G.; Frucci, M.; Riccio, D. A Deep Learning Approach for Breast Invasive Ductal Carcinoma Detection and Lymphoma Multi-Classification in Histological Images. IEEE Access 2019, 7, 44709–44720. [Google Scholar] [CrossRef]
  60. Amgad, M.; Elfandy, H.; Hussein, H.; A Atteya, L.; Elsebaie, M.A.T.; Elnasr, L.S.A.; A Sakr, R.; E Salem, H.S.; Ismail, A.F.; Saad, A.; et al. Structured crowdsourcing enables convolutional segmentation of histology images. Bioinformatics 2019, 35, 3461–3467. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  61. Celik, Y.; Talo, M.; Yildirim, O.; Karabatak, M.; Acharya, U.R. Automated invasive ductal carcinoma detection based using deep transfer learning with whole-slide images. Pattern Recognit. Lett. 2020, 133, 232–239. [Google Scholar] [CrossRef]
  62. Sebai, M.; Wang, X.; Wang, T. MaskMitosis: A deep learning framework for fully supervised, weakly supervised, and unsupervised mitosis detection in histopathology images. Med. Biol. Eng. Comput. 2020, 58, 1603–1623. [Google Scholar] [CrossRef] [PubMed]
  63. Li, C.; Wang, X.; Liu, W.; Latecki, L.J. DeepMitosis: Mitosis detection via deep detection, verification and segmentation networks. Med. Image Anal. 2018, 45, 121–133. [Google Scholar] [CrossRef]
  64. Xu, J.; Xiang, L.; Liu, Q.; Gilmore, H.; Wu, J.; Tang, J.; Madabhushi, A. Stacked Sparse Autoencoder (SSAE) for Nuclei Detection on Breast Cancer Histopathology Images. IEEE Trans. Med. Imaging 2016, 35, 119–130. [Google Scholar] [CrossRef]
  65. Couture, H.D.; Williams, L.A.; Geradts, J.; Nyante, S.J.; Butler, E.N.; Marron, J.S.; Perou, C.M.; Troester, M.A.; Niethammer, M. Image analysis with deep learning to predict breast cancer grade, ER status, histologic subtype, and intrinsic subtype. NPJ Breast Cancer 2018, 4, 30. [Google Scholar] [CrossRef] [Green Version]
  66. Bejnordi, B.E.; Mullooly, M.; Pfeiffer, R.M.; Fan, S.; Vacek, P.M.; Weaver, D.L.; Herschorn, S.; Brinton, L.A.; Van Ginneken, B.; Karssemeijer, N.; et al. Using deep convolutional neural networks to identify and classify tumor-associated stroma in diagnostic breast biopsies. Mod. Pathol. 2018, 31, 1502–1512. [Google Scholar] [CrossRef]
  67. Litjens, G.; Sánchez, C.I.; Timofeeva, N.; Hermsen, M.; Nagtegaal, I.; Kovacs, I.; Hulsbergen-van de Kaa, C.; Bult, P.; Van Ginneken, B.; Van Der Laak, J. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci. Rep. 2016, 6, 26286. [Google Scholar] [CrossRef] [Green Version]
  68. Stanitsas, P.; Cherian, A.; Li, X.; Truskinovsky, A.; Morellas, V.; Papanikolopoulos, N. Evaluation of feature descriptors for cancerous tissue recognition. In Proceedings of the International Conference on Pattern Recognition, Cancun, Mexico, 4–8 December 2016. [Google Scholar]
  69. BenTaieb, A.; Hamarneh, G. Predicting Cancer with a Recurrent Visual Attention Model for Histopathology Images. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Lecture Notes in Computer Science; Volume 11071, pp. 129–137. [Google Scholar]
  70. Saha, M.; Chakraborty, C.; Racoceanu, D. Efficient deep learning model for mitosis detection using breast histopathology images. Comput. Med. Imaging Graph. 2018, 64, 29–40. [Google Scholar] [CrossRef] [PubMed]
  71. Sun, W.; Tseng, T.-L.; Zhang, J.; Qian, W. Enhancing deep convolutional neural network scheme for breast cancer diagnosis with unlabeled data. Comput. Med. Imaging Graph. 2017, 57, 4–9. [Google Scholar] [CrossRef] [Green Version]
  72. Xiao, Y.; Wu, J.; Lin, Z.; Zhao, X. A semi-supervised deep learning method based on stacked sparse auto-encoder for cancer prediction using RNA-seq data. Comput. Methods Programs Biomed. 2018, 166, 99–105. [Google Scholar] [CrossRef] [PubMed]
  73. Mehta, S.; Mercan, E.; Bartlett, J.; Weaver, D.; Elmore, J.G.; Shapiro, L. Y-Net: Joint Segmentation and Classification for Diagnosis of Breast Biopsy Images. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2018, Pt II; Lecture Notes in Computer Science; Granada, Spain, 16–20 September 2018; Volume 11071, pp. 893–901. [Google Scholar]
  74. Guo, Z.; Liu, H.; Ni, H.; Wang, X.; Qian, Y. Publisher Correction: A Fast and Refined Cancer Regions Segmentation Framework in Whole-slide Breast Pathological Images. Sci. Rep. 2020, 10, 8591. [Google Scholar] [CrossRef] [PubMed]
  75. Pan, X.; Li, L.; Yang, H.; Liu, Z.; Yang, J.; Zhao, L.; Fan, Y. Accurate segmentation of nuclei in pathological images via sparse reconstruction and deep convolutional networks. Neurocomputing 2017, 229, 88–99. [Google Scholar] [CrossRef]
  76. Priego-Torres, B.M.; Sanchez-Morillo, D.; Fernandez-Granero, M.A.; Garcia-Rojo, M. Automatic segmentation of whole-slide H&E stained breast histopathology images using a deep convolutional neural network architecture. Expert Syst. Appl. 2020, 151, 113387. [Google Scholar]
  77. Nguyen, C.; Asad, Z.; Deng, R.; Huo, Y. Evaluating transformer-based semantic segmentation networks for pathological image segmentation. Med. Imaging 2022 Image Process. 2022, 12032, 942–947. [Google Scholar]
  78. Li, Z.; Li, Y.; Li, Q.; Zhang, Y.; Wang, P.; Guo, D.; Lu, L.; Jin, D.; Hong, Q. LViT: Language meets vision transformer in medical image segmentation. arXiv 2022, arXiv:2206.14718. [Google Scholar]
  79. Diao, S.; Tang, L.; He, J.; Zhao, H.; Luo, W.; Xie, Y.; Qin, W. Automatic Computer-Aided Histopathologic Segmentation for Nasopharyngeal Carcinoma Using Transformer Framework. In Proceedings of the International Workshop on Computational Mathematics Modeling in Cancer Analysis, Singapore, 18 September 2022; pp. 141–149. [Google Scholar]
  80. Saleck, M.M.; El Moutaouakkil, A.; Rmili, M. Semi-automatic segmentation of breast masses in mammogram images. In Proceedings of the International Conference on Pattern Recognition and Artificial Intelligence, Beijing, China, 20–24 August 2018; pp. 59–62. [Google Scholar]
  81. Zhai, D.; Hu, B.; Gong, X.; Zou, H.; Luo, J. ASS-GAN: Asymmetric semi-supervised GAN for breast ultrasound image segmentation. Neurocomputing 2022, 493, 204–216. [Google Scholar] [CrossRef]
  82. Veeraraghavan, H.; Dashevsky, B.Z.; Onishi, N.; Sadinski, M.; Morris, E.; Deasy, J.; Sutton, E.J. Appearance constrained semi-automatic segmentation from DCE-MRI is reproducible and feasible for breast cancer radiomics: A feasibility study. Sci. Rep. 2018, 8, 4838. [Google Scholar] [CrossRef]
  83. Lai, Z.; Wang, C.; Oliveira, L.C.; Dugger, B.N.; Cheung, S.-C.; Chuah, C.-N. Joint Semi-supervised and Active Learning for Segmentation of Gigapixel Pathology Images with Cost-Effective Labeling. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 591–600. [Google Scholar]
  84. Ciga, O.; Martel, A.L. Learning to segment images with classification labels. Med. Image Anal. 2021, 68, 101912. [Google Scholar] [CrossRef] [PubMed]
  85. Khalil, M.-A.; Lee, Y.-C.; Lien, H.-C.; Jeng, Y.-M.; Wang, C.-W. Fast Segmentation of Metastatic Foci in H&E Whole-Slide Images for Breast Cancer Diagnosis. Diagnostics 2022, 12, 990. [Google Scholar] [PubMed]
  86. Li, C.; Wang, X.; Liu, W.; Latecki, L.J.; Wang, B.; Huang, J. Weakly Supervised Mitosis Detection in Breast Histopathology Images using Concentric Loss. Med. Image Anal. 2019, 53, 165–178. [Google Scholar] [CrossRef] [PubMed]
  87. Naylor, P.; Laé, M.; Reyal, F.; Walter, T. Nuclei segmentation in histopathology images using deep neural networks. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017. [Google Scholar]
  88. Khened, M.; Kori, A.; Rajkumar, H.; Krishnamurthi, G.; Srinivasan, B. A generalized deep learning framework for whole-slide image segmentation and analysis. Sci. Rep. 2021, 11, 11579. [Google Scholar] [CrossRef] [PubMed]
  89. Naylor, P.; Lae, M.; Reyal, F.; Walter, T. Segmentation of Nuclei in Histopathology Images by Deep Regression of the Distance Map. IEEE Trans. Med. Imaging 2018, 38, 448–459. [Google Scholar] [CrossRef]
  90. Mejbri, S.; Franchet, C.; Ismat-Ara, R.; Mothe, J.; Brousset, P.; Faure, E. Deep Analysis of CNN Settings for New Cancer Whole-slide Histological Images Segmentation: The Case of Small Training Sets. In Proceedings of the 6th International Conference on Bioimaging, Prague, Czech Republic, 22–24 February 2019. [Google Scholar]
  91. Chanchal, A.K.; Lal, S.; Kini, J. High-resolution deep transferred ASPPU-Net for nuclei segmentation of histopathology images. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 2159–2175. [Google Scholar] [CrossRef]
  92. Jin, Y.W.; Jia, S.; Ashraf, A.B.; Hu, P. Integrative Data Augmentation with U-Net Segmentation Masks Improves Detection of Lymph Node Metastases in Breast Cancer Patients. Cancers 2020, 12, 2934. [Google Scholar] [CrossRef]
  93. Zhou, J.; Ruan, J.; Wu, C.; Ye, G.; Zhu, Z.; Yue, J.; Zhang, Y. Superpixel Segmentation of Breast Cancer Pathology Images Based on Features Extracted from the Autoencoder. In Proceedings of the 2019 IEEE 11th International Conference on Communication Software and Networks, Chongqing, China, 12–15 June 2019; pp. 366–370. [Google Scholar]
  94. Wahab, N.; Khan, A.; Lee, Y.S. Transfer learning based deep CNN for segmentation and detection of mitoses in breast cancer histopathological images. Microscopy 2019, 68, 216–233. [Google Scholar]
  95. Hatipoglu, N.; Bilgin, G. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships. Med. Biol. Eng. Comput. 2017, 55, 1829–1848. [Google Scholar] [CrossRef]
  96. Aatresh, A.A.; Yatgiri, R.P.; Chanchal, A.K.; Kumar, A.; Ravi, A.; Das, D.; Bs, R.; Lal, S.; Kini, J. Efficient deep learning architecture with dimension-wise pyramid pooling for nuclei segmentation of histopathology images. Comput. Med. Imaging Graph. 2021, 93, 101975. [Google Scholar] [CrossRef]
  97. Chanchal, A.K.; Kumar, A.; Lal, S.; Kini, J. Efficient and robust deep learning architecture for segmentation of kidney and breast histopathology images. Comput. Electr. Eng. 2021, 92, 107177. [Google Scholar] [CrossRef]
  98. van Rijthoven, M.; Balkenhol, M.; Silina, K.; van der Laak, J.; Ciompi, F. HookNet: Multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images. Med. Image Anal. 2021, 68, 101890. [Google Scholar] [CrossRef] [PubMed]
  99. Ghanem, N.M.; Attallah, O.; Anwar, F.; Ismail, M.A. AUTO-BREAST: A fully automated pipeline for breast cancer diagnosis using AI technology. Artif. Intell. Cancer Diagn. Progn. 2022, 6, 1–24. [Google Scholar]
  100. Karthiga, R.; Narasimhan, K. Automated diagnosis of breast cancer using wavelet based entropy features. In Proceedings of the 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 29–31 March 2018; pp. 274–279. [Google Scholar]
  101. Anwar, F.; Attallah, O.; Ghanem, N.; Ismail, M.A. Automatic breast cancer classification from histopathological images. In Proceedings of the 2019 International Conference on Advances in the Emerging Computing Technologies (AECT), Al Madinah Al Munawwarah, Saudi Arabia, 10 February 2020; pp. 1–6. [Google Scholar]
  102. Spanhol, F.A.; Oliveira, L.S.; Petitjean, C.; Heutte, L. Breast cancer histopathological image classification using convolutional neural networks. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 2560–2567. [Google Scholar]
  103. Han, Z.; Wei, B.; Zheng, Y.; Yin, Y.; Li, K.; Li, S. Breast cancer multi-classification from histopathological images with structured deep learning model. Sci. Rep. 2017, 7, 4172. [Google Scholar] [CrossRef] [PubMed]
  104. Kahya, M.A.; Al-Hayani, W.; Algamal, Z.Y. Classification of breast cancer histopathology images based on adaptive sparse support vector machine. J. Appl. Math. Bioinform. 2017, 7, 49. [Google Scholar]
  105. Spanhol, F.A.; Oliveira, L.S.; Cavalin, P.R.; Petitjean, C.; Heutte, L. Deep features for breast cancer histopathological image classification. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; Volume 71, pp. 1868–1873. [Google Scholar]
  106. Bayramoglu, N.; Kannala, J.; Heikkila, J. Deep learning for magnification independent breast cancer histopathology image classification. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 2440–2445. [Google Scholar]
  107. Attallah, O.; Anwar, F.; Ghanem, N.M.; Ismail, M.A. Histo-CADx: Duo cascaded fusion stages for breast cancer diagnosis from histopathological images. PeerJ Comput. Sci. 2021, 7, e493. [Google Scholar] [CrossRef]
  108. Nahid, A.A.; Kong, Y. Histopathological breast-image classification using local and frequency domains by convolutional neural network. Information 2018, 9, 19. [Google Scholar] [CrossRef] [Green Version]
  109. Sudharshan, P.J.; Petitjean, C.; Spanhol, F.; Oliveira, L.E.; Heutte, L.; Honeine, P. Multiple instance learning for histopathological breast cancer image classification. Expert Syst. Appl. 2019, 117, 103–111. [Google Scholar] [CrossRef]
  110. Roy, K.; Banik, D.; Bhattacharjee, D.; Nasipuri, M. Patch-based system for Classification of Breast Histology images using deep learning. Comput. Med. Imaging Graph. 2019, 71, 90–103. [Google Scholar] [CrossRef]
  111. Gandomkar, Z.; Brennan, P.C.; Mello-Thoms, C. MuDeRN: Multi-category classification of breast histopathological image using deep residual networks. Artif. Intell. Med. 2018, 88, 14–24. [Google Scholar] [CrossRef]
  112. Vesal, S.; Ravikumar, N.; Davari, A.; Ellmann, S.; Maier, A. Classification of Breast Cancer Histology Images Using Transfer Learning. In International Conference Image Analysis and Recognition; Springer: Cham, Switzerland, 2018; pp. 812–819. [Google Scholar]
  113. Alom, M.Z.; Yakopcic, C.; Nasrin, M.S.; Taha, T.M.; Asari, V.K. Breast Cancer Classification from Histopathological Images with Inception Recurrent Residual Convolutional Neural Network. J. Digit. Imaging 2019, 32, 605–617. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  114. Dai, Y.; Gao, Y.; Liu, F. Transmed: Transformers advance multi-modal medical image classification. Diagnostics 2021, 11, 1384. [Google Scholar] [CrossRef] [PubMed]
  115. Almalik, F.; Yaqub, M.; Nandakumar, K. Self-Ensembling Vision Transformer (SEViT) for Robust Medical Image Classification. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; pp. 376–386. [Google Scholar]
  116. Karimi, D.; Vasylechko, S.D.; Gholipour, A. Convolution-free medical image segmentation using transformers. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 27 September–1 October 2021; pp. 78–88. [Google Scholar]
  117. Chen, J.; He, Y.; Frey, E.C.; Li, Y.; Du, Y. Vit-v-net: Vision transformer for unsupervised volumetric medical image registration. arXiv 2021, arXiv:2104.06468. [Google Scholar]
  118. Yu, S.; Ma, K.; Bi, Q.; Bian, C.; Ning, M.; He, N.; Li, Y.; Liu, H.; Zheng, Y. Mil-vt: Multiple instance learning enhanced vision transformer for fundus image classification. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 27 September–1 October 2021; pp. 45–54. [Google Scholar]
  119. Alotaibi, A.; Alafif, T.; Alkhilaiwi, F.; Alatawi, Y.; Althobaiti, H.; Alrefaei, A.; Hawsawi, Y.M.; Nguyen, T. ViT-DeiT: An Ensemble Model for Breast Cancer Histopathological Images Classification. arXiv 2022, arXiv:2211.00749. [Google Scholar]
  120. Shao, Z.; Bian, H.; Chen, Y.; Wang, Y.; Zhang, J.; Ji, X. Transmil: Transformer based correlated multiple instance learning for whole slide image classification. Adv. Neural Inf. Process. Syst. 2021, 34, 2136–2147. [Google Scholar]
  121. Chen, R.J.; Lu, M.Y.; Weng, W.H.; Chen, T.Y.; Williamson, D.F.; Manz, T.; Shady, M.; Mahmood, F. Multimodal co-attention transformer for survival prediction in gigapixel whole slide images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 4015–4025. [Google Scholar]
  122. Chen, H.; Li, C.; Wang, G.; Li, X.; Rahaman, M.M.; Sun, H.; Hu, W.; Li, Y.; Liu, W.; Sun, C.; et al. GasHis-Transformer: A multi-scale visual transformer approach for gastric histopathological image detection. Pattern Recognit. 2022, 130, 108827. [Google Scholar] [CrossRef]
  123. He, Z.; Lin, M.; Xu, Z.; Yao, Z.; Chen, H.; Alhudhaif, A.; Alenezi, F. Deconv-transformer (DecT): A histopathological image classification model for breast cancer based on color deconvolution and transformer architecture. Inf. Sci. 2022, 608, 1093–1112. [Google Scholar] [CrossRef]
  124. Zou, Y.; Chen, S.; Sun, Q.; Liu, B.; Zhang, J. DCET-Net: Dual-Stream Convolution Expanded Transformer for Breast Cancer Histopathological Image Classification. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Houston, TX, USA, 9–12 December 2021; pp. 1235–1240. [Google Scholar]
  125. Anupama, M.A.; Sowmya, V.; Soman, K.P. Breast cancer classification using capsule network with preprocessed histology images. In Proceedings of the International conference on communication and signal processing (ICCSP), Melmaruvathur, Tamil Nadu, India, 4–6 April 2019; pp. 143–147. [Google Scholar]
  126. Wang, P.; Wang, J.; Li, Y.; Li, P.; Li, L.; Jiang, M. Automatic classification of breast cancer histopathological images based on deep feature fusion and enhanced routing. Biomed. Signal Process. Control 2021, 65, 102341. [Google Scholar] [CrossRef]
127. Iesmantas, T.; Alzbutas, R. Convolutional capsule network for classification of breast cancer histology images. In Proceedings of the International Conference Image Analysis and Recognition, Waterloo, ON, Canada, 27–29 August 2018; pp. 853–860. [Google Scholar]
  128. Hirra, I.; Ahmad, M.; Hussain, A.; Ashraf, M.U.; Saeed, I.A.; Qadri, S.F.; Alghamdi, A.M.; Alfakeeh, A.S. Breast Cancer Classification From Histopathological Images Using Patch-Based Deep Learning Modeling. IEEE Access 2021, 9, 24273–24287. [Google Scholar] [CrossRef]
129. Feng, Y.; Zhang, L.; Mo, J. Deep Manifold Preserving Autoencoder for Classifying Breast Cancer Histopathological Images. IEEE/ACM Trans. Comput. Biol. Bioinform. 2020, 17, 91–101. [Google Scholar] [CrossRef]
  130. Sharma, S.; Mehra, R. Conventional Machine Learning and Deep Learning Approach for Multi-Classification of Breast Cancer Histopathology Images-a Comparative Insight. J. Digit. Imaging 2020, 33, 632–654. [Google Scholar] [CrossRef] [PubMed]
  131. Yao, H.; Zhang, X.; Zhou, X.; Liu, S. Parallel Structure Deep Neural Network Using CNN and RNN with an Attention Mechanism for Breast Cancer Histology Image Classification. Cancers 2019, 11, 1901. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  132. Wang, Q.; Zou, Y.; Zhang, J.; Liu, B. Second-order multi-instance learning model for whole slide image classification. Phys. Med. Biol. 2021, 66, 145006. [Google Scholar] [CrossRef] [PubMed]
  133. Yadavendra; Chand, S. A comparative study of breast cancer tumor classification by classical machine learning methods and deep learning method. Mach. Vis. Appl. 2020, 31, 46. [Google Scholar] [CrossRef]
  134. Tembhurne, J.V.; Hazarika, A.; Diwan, T. BrC-MCDLM: Breast Cancer detection using Multi-Channel deep learning model. Multimed. Tools Appl. 2021, 80, 31647–31670. [Google Scholar] [CrossRef]
  135. Alkassar, S.; Jebur, B.A.; Abdullah, M.A.M.; Al-Khalidy, J.H.; Chambers, J.A. Going deeper: Magnification-invariant approach for breast cancer classification using histopathological images. IET Comput. Vis. 2021, 15, 151–164. [Google Scholar] [CrossRef]
  136. Burcak, K.C.; Baykan, O.K.; Uguz, H. A new deep convolutional neural network model for classifying breast cancer histopathological images and the hyperparameter optimisation of the proposed model. J. Supercomput. 2021, 77, 973–989. [Google Scholar] [CrossRef]
  137. Murtaza, G.; Shuib, L.; Mujtaba, G.; Raza, G. Breast Cancer Multi-classification through Deep Neural Network and Hierarchical Classification Approach. Multimed. Tools Appl. 2020, 79, 15481–15511. [Google Scholar] [CrossRef]
  138. Yari, Y.; Nguyen, T.V.; Nguyen, H.T. Deep Learning Applied for Histological Diagnosis of Breast Cancer. IEEE Access 2020, 8, 162432–162448. [Google Scholar] [CrossRef]
  139. Elmannai, H.; Hamdi, M.; AlGarni, A. Deep Learning Models Combining for Breast Cancer Histopathology Image Classification. Int. J. Comput. Intell. Syst. 2021, 14, 1003–1013. [Google Scholar] [CrossRef]
  140. Shi, X.; Su, H.; Xing, F.; Liang, Y.; Qu, G.; Yang, L. Graph temporal ensembling based semi-supervised convolutional neural network with noisy labels for histopathology image analysis. Med. Image Anal. 2020, 60, 101624. [Google Scholar] [CrossRef]
  141. Oyelade, O.N.; Ezugwu, A.E. A bioinspired neural architecture search based convolutional neural network for breast cancer detection using histopathology images. Sci. Rep. 2021, 11, 19940. [Google Scholar] [CrossRef] [PubMed]
  142. Rana, P.; Gupta, P.K.; Sharma, V. A Novel Deep Learning-based Whale Optimization Algorithm for Prediction of Breast Cancer. Braz. Arch. Biol. Technol. 2021, 64, 1–16. [Google Scholar] [CrossRef]
  143. Li, X.; Monga, V.; Rao, U.K.A. Analysis-Synthesis Learning With Shared Features: Algorithms for Histology Image Classification. IEEE Trans. Biomed. Eng. 2020, 67, 1061–1073. [Google Scholar] [CrossRef] [PubMed]
  144. George, K.; Faziludeen, S.; Sankaran, P.; Joseph, P.K. Breast cancer detection from biopsy images using nucleus guided transfer learning and belief based fusion. Comput. Biol. Med. 2020, 124, 103954. [Google Scholar] [CrossRef] [PubMed]
  145. Liu, W.; Juhas, M.; Zhang, Y. Fine-Grained Breast Cancer Classification With Bilinear Convolutional Neural Networks (BCNNs). Front. Genet. 2020, 11, 547327. [Google Scholar] [CrossRef]
  146. Lin, C.-J.; Jeng, S.-Y.; Lee, C.-L. Hyperparameter Optimization of Deep Learning Networks for Classification of Breast Histopathology Images. Sensors Mater. 2021, 33, 315–325. [Google Scholar] [CrossRef]
  147. George Melekoodappattu, J.; Sahaya Dhas, A.; Kumar, B.K.; Adarsh, K.S. Malignancy detection on mammograms by integrating modified convolutional neural network classifier and texture features. Int. J. Imaging Syst. Technol. 2021, 32, 564–574. [Google Scholar] [CrossRef]
  148. Sohail, A.; Khan, A.; Nisar, H.; Tabassum, S.; Zameer, A. Mitotic nuclei analysis in breast cancer histopathology images using deep ensemble classifier. Med. Image Anal. 2021, 72, 102121. [Google Scholar] [CrossRef]
  149. Arya, N.; Saha, S. Multi-modal advanced deep learning architectures for breast cancer survival prediction. Knowl.-Based Syst. 2021, 221, 106965. [Google Scholar] [CrossRef]
  150. Lin, C.-J.; Jeng, S.-Y. Optimization of Deep Learning Network Parameters Using Uniform Experimental Design for Breast Cancer Histopathological Image Classification. Diagnostics 2020, 10, 662. [Google Scholar] [CrossRef] [PubMed]
151. Yamlome, P.; Akwaboah, A.D.; Marz, A.; Deo, M. Convolutional Neural Network Based Breast Cancer Histopathology Image Classification. In Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020. [Google Scholar]
  152. Mercan, C.; Aygunes, B.; Aksoy, S.; Mercan, E.; Shapiro, L.G.; Weaver, D.L.; Elmore, J.G. Deep Feature Representations for Variable-Sized Regions of Interest in Breast Histopathology. IEEE J. Biomed. Health Inform. 2021, 25, 2041–2049. [Google Scholar] [CrossRef] [PubMed]
  153. Pattarone, G.; Acion, L.; Simian, M.; Iarussi, E. Learning deep features for dead and living breast cancer cell classification without staining. Sci. Rep. 2021, 11, 1–10. [Google Scholar]
  154. Li, G.; Li, C.; Wu, G.; Ji, D.; Zhang, H. Multi-View Attention-Guided Multiple Instance Detection Network for Interpretable Breast Cancer Histopathological Image Diagnosis. IEEE Access 2021, 9, 79671–79684. [Google Scholar] [CrossRef]
  155. Zormpas-Petridis, K.; Noguera, R.; Ivankovic, D.K.; Roxanis, I.; Jamin, Y.; Yuan, Y. SuperHistopath: A Deep Learning Pipeline for Mapping Tumor Heterogeneity on Low-Resolution Whole-Slide Digital Histopathology Images. Front. Oncol. 2021, 10, 586292. [Google Scholar] [CrossRef]
  156. Aresta, G.; Araújo, T.; Kwok, S.; Chennamsetty, S.S.; Safwan, M.; Alex, V.; Marami, B.; Prastawa, M.; Chan, M.; Donovan, M.; et al. BACH: Grand Challenge on Breast Cancer Histology Images. Med. Image Anal. 2018, 56, 122–139. [Google Scholar] [CrossRef]
157. Gecer, B.; Aksoy, S.; Mercan, E.; Shapiro, L.G.; Weaver, D.L.; Elmore, J.G. Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks. Pattern Recognit. 2018, 84, 345–356. [Google Scholar]
  158. Feng, Y.; Zhang, L.; Yi, Z. Breast cancer cell nuclei classification in histopathology images using deep neural networks. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 179–191. [Google Scholar] [CrossRef]
  159. Boumaraf, S.; Liu, X.; Zheng, Z.; Ma, X.; Ferkous, C. A new transfer learning based approach to magnification dependent and independent classification of breast cancer in histopathological images. Biomed. Signal Process. Control 2021, 63, 102192. [Google Scholar] [CrossRef]
  160. Rawat, R.R.; Ortega, I.; Roy, P.; Sha, F.; Shibata, D.; Ruderman, D.; Agus, D.B. Deep learned tissue “fingerprints” classify breast cancers by ER/PR/Her2 status from H&E images. Sci. Rep. 2020, 10, 7275. [Google Scholar]
  161. Kate, V.; Shukla, P. Multiple Classifier Framework System for Fast Sequential Prediction of Breast Cancer using Deep Learning Models. In Proceedings of the 2019 IEEE 16th India Council International Conference (INDICON), Rajkot, Gujarat, 13–15 December 2019. [Google Scholar]
  162. Man, Y.; Yao, H. Automatic Breast Cancer Grading of Histological Images using Dilated Residual Network. In Proceedings of the 2019 11th International Conference on Bioinformatics and Biomedical Technology, Stockholm, Sweden, 29–31 May 2019. [Google Scholar]
163. Li, Y.; Xie, X.; Shen, L.; Liu, S. Reversed Active Learning based Atrous DenseNet for Pathological Image Classification. BMC Bioinform. 2019, 20, 445. [Google Scholar]
164. Qi, Q.; Li, Y.; Wang, J.; Zheng, H.; Huang, Y.; Ding, X.; Rohde, G.K. Label-Efficient Breast Cancer Histopathological Image Classification. IEEE J. Biomed. Health Inform. 2019, 23, 2108–2116. [Google Scholar] [CrossRef] [PubMed]
  165. Kang, J.H.; Krause, S.; Tobin, H.; Mammoto, A.; Kanapathipillai, M.; Ingber, D.E. A combined micromagnetic-microfluidic device for rapid capture and culture of rare circulating tumor cells. Lab Chip 2012, 12, 2175–2181. [Google Scholar] [CrossRef] [PubMed]
  166. Cruz-Roa, A.; Basavanhally, A.; González, F.; Gilmore, H.; Feldman, M.; Ganesan, S.; Shih, N.; Tomaszewski, J.; Madabhushi, A. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks. Proc. SPIE–Int. Soc. Opt. Eng. 2014, 9041, 139–144. [Google Scholar]
  167. Araújo, T.; Aresta, G.; Castro, E.; Rouco, J.; Aguiar, P.; Eloy, C.; Polónia, A.; Campilho, A. Classification of breast cancer histology images using Convolutional Neural Networks. PLoS ONE 2017, 12, e0177544. [Google Scholar] [CrossRef]
  168. Cruz-Roa, A.; Gilmore, H.; Basavanhally, A.; Feldman, M.; Ganesan, S.; Shih, N.N.; Tomaszewski, J.; González, F.A.; Madabhushi, A. Accurate and reproducible invasive breast cancer detection in whole-slide images: A Deep Learning approach for quantifying tumor extent. Sci. Rep. 2017, 7, 46450. [Google Scholar] [CrossRef] [Green Version]
  169. Karthik, R.; Menaka, R.; Siddharth, M.V. Classification of breast cancer from histopathology images using an ensemble of deep multiscale networks. Biocybern. Biomed. Eng. 2022, 42, 963–976. [Google Scholar] [CrossRef]
  170. Hao, Y.; Zhang, L.; Qiao, S.; Bai, Y.; Cheng, R.; Xue, H.; Hou, Y.; Zhang, W.; Zhang, G. Breast cancer histopathological images classification based on deep semantic features and gray level co-occurrence matrix. PLoS ONE 2022, 17, e0267955. [Google Scholar] [CrossRef]
  171. Van De Vijver, M.J.; He, Y.D.; Van’t Veer, L.J.; Dai, H.; Hart, A.A.; Voskuil, D.W.; Schreiber, G.J.; Peterse, J.L.; Roberts, C.; Marton, M.J.; et al. A gene-expression signature as a predictor of survival in breast cancer. N. Engl. J. Med. 2002, 347, 1999–2009. [Google Scholar] [CrossRef] [Green Version]
  172. Khademi, M.; Nedialkov, N.S. Probabilistic graphical models and deep belief networks for prognosis of breast cancer. In Proceedings of the 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 9–11 December 2015; pp. 727–732. [Google Scholar]
  173. Lee, E.S.; Son, D.S.; Kim, S.H.; Lee, J.; Jo, J.; Han, J.; Kim, H.; Lee, H.J.; Choi, H.Y.; Jung, Y.; et al. Prediction of recurrence-free survival in postoperative non–small cell lung cancer patients by using an integrated model of clinical information and gene expression. Clin. Cancer Res. 2008, 14, 7397–7404. [Google Scholar] [CrossRef] [Green Version]
  174. Stone, P.C.; Lund, S. Predicting prognosis in patients with advanced cancer. Ann. Oncol. 2007, 18, 971–976. [Google Scholar] [CrossRef] [PubMed]
  175. Martin, L.R.; Williams, S.L.; Haskard, K.B.; DiMatteo, M.R. The challenge of patient adherence. Ther. Clin. Risk Manag. 2005, 1, 189. [Google Scholar] [PubMed]
  176. Wang, Y.; Klijn, J.G.; Zhang, Y.; Sieuwerts, A.M.; Look, M.P.; Yang, F.; Talantov, D.; Timmermans, M.; Meijer-van Gelder, M.E.; Yu, J.; et al. Gene-expression profiles to predict distant metastasis of lymph-node-negative primary breast cancer. Lancet 2005, 365, 671–679. [Google Scholar] [CrossRef] [PubMed]
  177. Sun, Y.; Goodison, S.; Li, J.; Liu, L.; Farmerie, W. Improved breast cancer prognosis through the combination of clinical and genetic markers. Bioinformatics 2007, 23, 30–37. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  178. Gevaert, O.; Smet, F.D.; Timmerman, D.; Moreau, Y.; Moor, B.D. Predicting the prognosis of breast cancer by integrating clinical and microarray data with Bayesian networks. Bioinformatics 2006, 22, e184–e190. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  179. Xu, X.; Zhang, Y.; Zou, L.; Wang, M.; Li, A. A gene signature for breast cancer prognosis using support vector machine. In Proceedings of the 2012 5th International Conference on Biomedical Engineering and Informatics, Chongqing, China, 16–18 October 2012; pp. 928–931. [Google Scholar]
  180. Nguyen, C.; Wang, Y.; Nguyen, H.N. Random forest classifier combined with feature selection for breast cancer diagnosis and prognostic. J. Biomed. Sci. Eng. 2013, 6, 31887. [Google Scholar] [CrossRef]
  181. Qu, H.; Zhou, M.; Yan, Z.; Wang, H.; Rustgi, V.K.; Zhang, S.; Gevaert, O.; Metaxas, D.N. Genetic mutation and biological pathway prediction based on whole slide images in breast carcinoma using deep learning. NPJ Precis. Oncol. 2021, 5, 87. [Google Scholar] [CrossRef]
  182. Wang, X.; Zou, C.; Zhang, Y.; Li, X.; Wang, C.; Ke, F.; Chen, J.; Wang, W.; Wang, D.; Xu, X.; et al. Prediction of BRCA gene mutation in breast cancer based on deep learning and histopathology images. Front. Genet. 2021, 12, 1147. [Google Scholar] [CrossRef]
  183. He, B.; Bergenstråhle, L.; Stenbeck, L.; Abid, A.; Andersson, A.; Borg, Å.; Maaskola, J.; Lundeberg, J.; Zou, J. Integrating spatial gene expression and breast tumour morphology via deep learning. Nat. Biomed. Eng. 2020, 4, 827–834. [Google Scholar] [CrossRef]
  184. Wang, F.; Han, J. Multimodal biometric authentication based on score level fusion using support vector machine. Opto-Electron. Rev. 2009, 17, 59–64. [Google Scholar] [CrossRef]
  185. Jain, A.K.; Ross, A. Multibiometric systems. Commun. ACM 2004, 47, 30–40. [Google Scholar] [CrossRef]
  186. Sun, D.; Wang, M.; Li, A. A multimodal deep neural network for human breast cancer prognosis prediction by integrating multi-dimensional data. IEEE/ACM Trans. Comput. Biol. Bioinform. 2018, 16, 841–850. [Google Scholar] [CrossRef] [PubMed]
  187. Petkov, V.I.; Miller, D.P.; Howlader, N.; Gliner, N.; Howe, W.; Schussler, N.; Cronin, K.; Baehner, F.L.; Cress, R.; Deapen, D.; et al. Breast-cancer-specific mortality in patients treated based on the 21-gene assay: A SEER population-based study. NPJ Breast Cancer 2016, 2, 1–9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
188. Lin, Z.; He, Y.; Qiu, C.; Yu, Q.; Huang, H.; Zhang, Y.; Li, W.; Qiu, T.; Li, X. A multi-omics signature to predict the prognosis of invasive ductal carcinoma of the breast. Comput. Biol. Med. 2022, 151, 106291. [Google Scholar] [CrossRef] [PubMed]
  189. Abbet, C.; Zlobec, I.; Bozorgtabar, B.; Thiran, J.P. Divide-and-rule: Self-supervised learning for survival analysis in colorectal cancer. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 480–489. [Google Scholar]
  190. Dooley, A.E.; Tong, L.; Deshpande, S.R.; Wang, M.D. Prediction of heart transplant rejection using histopathological whole-slide imaging. In Proceedings of the 2018 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Las Vegas, NV, USA, 4–7 March 2018; pp. 251–254. [Google Scholar]
  191. Zhu, Y.; Tong, L.; Deshpande, S.R.; Wang, M.D. Improved prediction on heart transplant rejection using convolutional autoencoder and multiple instance learning on whole-slide imaging. In Proceedings of the 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Chicago, IL, USA, 19–22 May 2019; pp. 1–4. [Google Scholar]
  192. Chen, Z.; Li, X.; Yang, M.; Zhang, H.; Xu, X.S. Optimization of deep learning models for the prediction of gene mutations using unsupervised clustering. J. Pathol. Clin. Res. 2022, 9, 3–17. [Google Scholar] [CrossRef]
Figure 1. Publications on deep learning for pathological images of breast cancer. (A) Years of publication of articles on deep learning in breast cancer pathological images. (B) Summary of the themes of review-type articles on the application of deep learning to breast cancer pathological images.
Figure 2. Neural network architectures commonly used in breast pathological image analysis. These models have been widely applied and perform well in deep learning pathological image classification, segmentation, and recognition tasks. The input size of each layer is shown in the figure. (A) Convolutional neural network (CNN). (B) LeNet. (C) AlexNet. (D) Fully convolutional network for semantic segmentation (FCN). (E) UNet. (F) Holistically Nested Network (HED).
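The architectures in Figure 2 all build on the same three primitives: convolution, a nonlinearity, and pooling. As an illustrative sketch only (not any specific model from the figure), the following NumPy code runs a toy patch through one conv → ReLU → max-pool stage; the function names and the 8×8 input are invented for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise rectified linear nonlinearity."""
    return np.maximum(x, 0)

def max_pool2d(x, size=2):
    """Non-overlapping max pooling; trims edges that do not fit."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A toy 8x8 "patch" run through one conv -> ReLU -> pool stage:
# 8x8 -> (valid 3x3 conv) 6x6 -> (2x2 pool) 3x3.
rng = np.random.default_rng(0)
patch = rng.random((8, 8))
kernel = rng.random((3, 3)) - 0.5
feature_map = max_pool2d(relu(conv2d(patch, kernel)))
print(feature_map.shape)  # (3, 3)
```

Stacking several such stages, followed by fully connected layers, yields LeNet/AlexNet-style classifiers; replacing the fully connected head with upsampling layers gives FCN/UNet-style segmentation networks.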
Figure 3. Some typical models of breast lesion detection methods. (A) The network structure and model reported by George et al. (B) The network reported by Chen et al. A deep cascaded convolutional neural network was used to achieve high accuracy while greatly increasing the speed of analysis. (Reproduced with permission from [Hao Chen], [Thirtieth AAAI Conference on Artificial Intelligence]; published by [PKP Publishing Services Network], [2016].) (C) The automatic detection and positioning framework reported by Liu et al. (Reproduced with permission from [Yun Liu], [arXiv]; published by [arXiv], [2017].) (D) The network for automatic classification of breast cancer histological images reported by Bardou et al. (Reproduced with permission from [Dalal Bardou], [IEEE Access]; published by [IEEE], [2018].)
Figure 4. Some typical models of breast pathological image segmentation. (A) Overview of methods for detecting breast cancer reported by Mehta et al. (B) The fast and refined cancer region segmentation framework v3_DCNN reported by Guo et al. (C) The method reported by Pan et al. (D) The deep neural network-based pipeline segmentation method reported by Maria Priego-Torres et al.
Figure 5. Some typical models of disease classification based on breast pathological images. (A) The block diagram of patchwise classification and the block diagram of the CNN architecture reported by Roy et al. (B) The steps of MuDeRN reported by Gandomkar et al. (C) The workflow of breast histology image classification reported by Vesal et al. (D) The implementation diagram of the IRRCNN model for identifying breast cancer reported by Alom et al.
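Several of the classification pipelines in Figure 5 (e.g. the patchwise scheme of Roy et al.) first tile the image into fixed-size patches, score each patch with a CNN, and then aggregate the per-patch predictions into an image-level label. A minimal NumPy sketch of the tiling and aggregation steps, with hypothetical helper names and a blank stand-in image:

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a fixed-size window over an image and collect the patches.
    In a real pipeline each patch would then be scored by a trained CNN."""
    patches = []
    h, w = image.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def majority_vote(patch_labels):
    """Aggregate per-patch predictions into a single image-level label."""
    values, counts = np.unique(patch_labels, return_counts=True)
    return values[np.argmax(counts)]

image = np.zeros((512, 512))  # stand-in for an H&E image tile
patches = extract_patches(image, patch_size=128, stride=128)
print(patches.shape)  # (16, 128, 128)
print(majority_vote(np.array([1, 0, 1, 1])))  # 1
```

With a stride smaller than the patch size the windows overlap, which is common in practice to avoid splitting nuclei or glands across patch borders.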
Table 1. Common breast cancer pathological image public datasets. All URLs accessed on 1 December 2022.

BreakHis (https://web.inf.ufpr.br/vri/databases/breast-cancer-histopathological-database-breakhis/): benign 2480; malignant 5429
TCIA (https://www.cancerimagingarchive.net/): malignant 549
GDC Data Portal (https://gdc.cancer.gov/access-data/gdc-data-portal): malignant 9114
Sklearn.datasets (https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html): benign 357; malignant 212
BACH (ICIAR 2018 Grand Challenge) (https://iciar2018-challenge.grand-challenge.org/): normal 100; benign 100; malignant 200
Camelyon16 (https://camelyon16.grand-challenge.org/): normal 160; malignant 240
Camelyon17 (https://camelyon17.grand-challenge.org/): normal 160; malignant 1240
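Of the datasets in Table 1, the Sklearn.datasets entry is the only non-image one: it is the tabular Wisconsin diagnostic breast cancer dataset bundled with scikit-learn (`load_breast_cancer` is the library's actual API; scikit-learn must be installed). Loading it reproduces the benign/malignant counts listed in the table:

```python
# The Sklearn.datasets entry in Table 1 contains 30 precomputed features per
# sample, not histopathology images, so it is mainly useful for quick
# classifier prototyping rather than for the deep learning pipelines reviewed here.
import numpy as np
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
# target encoding: 0 = malignant (212 samples), 1 = benign (357 samples)
counts = np.bincount(data.target)
print(counts)  # [212 357]
print(data.data.shape)  # (569, 30)
```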


MDPI and ACS Style

Zhao, Y.; Zhang, J.; Hu, D.; Qu, H.; Tian, Y.; Cui, X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. Micromachines 2022, 13, 2197. https://doi.org/10.3390/mi13122197


