Review

Overview of Artificial Intelligence in Breast Cancer Medical Imaging

Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University and Collaborative Innovation Center, Chengdu 610041, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
J. Clin. Med. 2023, 12(2), 419; https://doi.org/10.3390/jcm12020419
Submission received: 28 November 2022 / Revised: 26 December 2022 / Accepted: 30 December 2022 / Published: 4 January 2023

Abstract

The heavy global burden and mortality of breast cancer emphasize the importance of early diagnosis and treatment. Imaging is one of the main tools used in clinical practice for screening, diagnosis, and treatment efficacy evaluation, and can visualize changes in tumor size and texture before and after treatment. The overwhelming number of images, which leads to a heavy workload for radiologists and sluggish reporting, underscores the need for computer-aided detection techniques and platforms. In addition, complex and changeable image features, heterogeneous image quality, and inconsistent interpretation by different radiologists and medical institutions constitute the primary difficulties in breast cancer screening and imaging diagnosis. The advancement of imaging-based artificial intelligence (AI)-assisted tumor diagnosis is an ideal strategy for improving the efficiency and accuracy of imaging diagnosis. By learning from image data and constructing algorithmic models, AI is able to recognize, segment, and diagnose tumor lesions automatically, showing promising application prospects. Furthermore, the rapid advancement of “omics” promotes a deeper and more comprehensive understanding of the nature of cancer. The fascinating relationship between tumor images and molecular characteristics has drawn attention to radiomics and radiogenomics, which allow analysis and detection at the molecular level without invasive procedures. In this review, we integrate the current developments in AI-assisted imaging diagnosis and discuss the advances of AI-based precise diagnosis of breast cancer from a clinical point of view. Although AI-assisted imaging-based breast cancer screening and detection is an emerging field that draws much attention, the clinical application of AI in tumor lesion recognition, segmentation, and diagnosis is still limited to research settings or small patient cohorts, and randomized clinical trials based on large, high-quality cohorts are lacking. This review aims to describe for clinicians the progress of imaging-based AI applications in breast cancer screening and diagnosis.

1. Introduction

In 2020, breast cancer surpassed lung cancer as the most common cancer worldwide, with the highest incidence and the second highest mortality rate [1]. With the continuous development of new drugs with better efficacy and lower toxicity, breast cancer has become a malignancy with a relatively favorable prognosis, and the concept of breast cancer prevention and treatment has gradually shifted toward managing it as a chronic disease, with a series of management strategies spanning screening, treatment, and post-diagnosis follow-up. Therefore, early detection has become the key link in the overall management of breast cancer, helping to improve the cure rate and significantly reduce mortality [2,3].
Imaging modalities including mammography, ultrasound, magnetic resonance imaging (MRI), and positron emission tomography (PET) are the most essential imaging tools for auxiliary breast cancer diagnosis [4]. Mammography is mainly used for breast cancer screening in asymptomatic women because of its good performance in detecting small tumors [5,6,7] with a low dose of X-rays [8]. It is the only imaging tool proven to decrease breast cancer-related mortality, but it is accompanied by a significantly higher proportion of over-diagnosis [9]. For diagnosed or suspected primary breast cancer lesions, ultrasound is the most common imaging examination for tumor staging and biopsy guidance in clinical routine. Compared with mammography, ultrasound is relatively more versatile, portable, and cost-effective [10], but it is highly dependent on a well-trained operator. MRI has higher sensitivity than the above two techniques in breast cancer detection, although it is the most expensive and has low specificity [11,12]. In addition, conflicting evidence obscures the value of MRI in breast cancer [10], limiting its use to high-risk disease or indiscernible lesions that are difficult to detect with conventional imaging tools such as mammography and ultrasound [12,13]. As another widely used imaging examination, PET can depict the metabolic characteristics of breast cancer rather than only its anatomic appearance. Moreover, several hybrid imaging techniques have been introduced into routine clinical practice, such as PET/MRI and PET/computed tomography (PET/CT), facilitating breast cancer diagnosis and staging.
To date, imaging examinations have been widely applied in the early detection and clinical staging of breast cancer; however, several difficult issues have become increasingly prominent in clinical practice. On the one hand, the large amount of imaging data generated during the diagnosis of breast cancer places a heavy workload on radiologists. On the other hand, images of low quality or with ambiguous features limit the diagnostic accuracy of radiologists, and subtle or complex disease manifestations may require both imaging and clinical information for a comprehensive judgement [14]. Computer-aided diagnosis (CAD) provides efficient automated image recognition, lesion segmentation, and diagnosis, potentially reducing the workload of radiologists and improving diagnostic accuracy. With advances in CAD, more flexible and versatile analyses are constantly emerging, especially image-based artificial intelligence (AI) techniques, significantly improving the clinical value of CAD in breast cancer. To improve and ensure diagnostic accuracy, a reliable CAD method built on high-performance computing techniques is essential, as it directly affects interpretation accuracy [15]. Therefore, optimizing the performance of AI-based breast cancer screening and diagnosis is of great importance in better assisting the work of radiologists. In this review, we discuss the application of AI-based imaging modalities including mammography, ultrasound, MRI, and PET in breast cancer screening and diagnosis.

2. Materials and Methods

In this review, we collected studies and reviews concerning the computer-aided diagnosis of breast cancer published from the beginning of 2000 to the present in the PubMed, medRxiv, Google Scholar, and Scopus databases. The keywords used for the literature search were as follows: “breast cancer”, “artificial intelligence”, “deep learning”, “machine learning”, “imaging”, “mammogram or mammography”, “ultrasound”, “MRI”, “PET”, “radiomics”, and “radiogenomics”. The search was performed using Boolean “AND” and “OR” operators between the main term and keywords. Articles and reviews were manually rejected if they were irrelevant to primary breast cancer diagnosis or specific to computer-aided diagnosis based on pathological information. Research articles or reviews published in languages other than English were excluded. According to the imaging method, the literature was categorized into mammography-based, ultrasound-based, MRI-based, and PET-based studies. We did not discuss the role of computed tomography in breast cancer diagnosis because it is less commonly used for this purpose and there is little relevant research.
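For illustration, the keyword combinations took the following general form (an illustrative reconstruction of the query logic, not the exact string submitted to each database):
("breast cancer") AND ("artificial intelligence" OR "deep learning" OR "machine learning") AND ("imaging" OR "mammogram" OR "mammography" OR "ultrasound" OR "MRI" OR "PET" OR "radiomics" OR "radiogenomics")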
In Section 3.1, we introduce the basic concepts relevant to AI, machine learning (ML), and deep learning (DL). In Section 3.2, we discuss the characteristics of different imaging modalities and their applications in breast cancer diagnosis.

3. Results

A total of 551 studies were identified, and all qualified studies were classified according to their main content. Specifically, studies were grouped into mammography-based, ultrasound-based, MRI-based, and PET-based AI applications and radiomics/radiogenomics applications in breast cancer, and each of these themes is discussed in Sections 3.2.1–3.2.5. The literature retrieval workflow is presented in Figure 1.

3.1. Artificial Intelligence: From Machine Learning to Deep Learning

AI is defined as the ability of a computer to learn algorithms to reason and perform tasks including reading, writing, interacting, problem-solving, and decision-making. ML, a subfield of AI, is mainly used to extract features from training data and build mathematical models for the prediction and analysis of unknown data. ML models can be divided into predictive models and explanatory models, which aim, respectively, to classify unknown data and to describe and explain the features of unknown data, according to the purpose for which the model is built. When classified by algorithm, ML can be grouped into supervised, unsupervised, and reinforcement learning [16]. Supervised learning uses labeled data and performs classification tasks (e.g., decision trees, K-nearest neighbors, and support vector machines, SVM) and regression tasks (e.g., linear/non-linear regression, local regression, and ordinary least squares regression), whereas unsupervised learning takes data in the absence of labels and performs cluster analysis (e.g., hierarchical clustering and K-means clustering) and dimension reduction (e.g., linear discriminant analysis and principal component analysis) [17]. Reinforcement learning allows the computer to self-train based on the consequences of its interactions with the environment or the success of its decisions, and to optimize outcomes by continuously adjusting the algorithm parameters.
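As a minimal illustration of the distinction between supervised and unsupervised learning described above, the following Python sketch (assuming scikit-learn; the feature vectors and labels are synthetic and not taken from any study in this review) trains a supervised SVM classifier on labeled data and an unsupervised K-means model on the same data without labels:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

# Synthetic "image feature" vectors with binary labels (0 = benign, 1 = malignant).
X, y = make_classification(n_samples=300, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Supervised learning: the SVM sees labeled examples and learns a decision boundary.
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("SVM test accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Unsupervised learning: K-means sees only the feature vectors (no labels)
# and groups them into two clusters.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))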
Like ML, DL depends on input data to acquire decision-making ability, but it does not rely on handcrafted features, and its way of learning is closer to the human learning approach. Inspired by the biological nervous system, DL depends on numerous highly interconnected computing units that mimic the neurons of the human brain. These units are organized into layers, and the layers are connected to form artificial neural networks. DL algorithms for image recognition and analysis mainly include convolutional neural networks (CNN), deep convolutional neural networks (DCNN), fully convolutional networks (FCN), recurrent neural networks (RNN), and generative adversarial networks (GAN) [18]. In DL, the final goal is decomposed into a series of simple nested mappings (i.e., the concept of layers), and a multitude of logical decision-like tasks are performed layer by layer to obtain the answer to the final question. The input image data are presented in the “visible layer”, while the intermediate convolutional layers perform multiple feature-extraction operations on the image. Each convolutional layer contains a large number of convolutional kernels that extract image features at different locations of the input image for subsequent analysis; these features are hidden from the visible data. The dimension of the image features from the convolutional layers is reduced in the pooling layer, which outputs low-resolution but highly representative features of the image.
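To make this layer structure concrete, the following sketch (assuming PyTorch; the network size, input resolution, and class count are illustrative only and do not reproduce any architecture from the cited studies) stacks convolutional, nonlinear, pooling, and fully connected layers in the way described above:

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy network: convolution extracts local features, pooling reduces their dimension,
    and a fully connected layer maps the pooled features to two output classes."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer (feature extraction)
            nn.ReLU(),                                   # nonlinear layer
            nn.MaxPool2d(2),                             # pooling layer (dimension reduction)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # fully connected layer

    def forward(self, x):                    # x: (batch, 1, 64, 64) grayscale patches
        h = self.features(x)
        return self.classifier(h.flatten(1))

logits = TinyCNN()(torch.randn(4, 1, 64, 64))  # dummy batch; output shape (4, 2)
print(logits.shape)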
ML predictive models are better at explaining their predictions than DL models because they are built on well-labeled training datasets. However, in many areas where AI is currently applied, a number of tasks cannot be generalized into interpretable mathematical models, such as tumor imaging or pathological tissue characterization, where the radiologist makes a diagnosis based on knowledge and experience. DL, which is more similar to the human mindset, can make such judgments or predictions “end-to-end” through neural-network decision methods that are not readily interpretable. With the parallel high-speed development of computer hardware and data storage, AI has excelled at recognizing and learning from high-throughput data and information, a characteristic that extends its applicability to all aspects of cancer research and medicine, such as the automatic and accurate detection of cancer from stained tumor slides or radiology images, thereby reducing duplicated work for radiologists and pathologists. In the breast cancer field, AI covers a wide range of applications, from tumor screening, diagnosis, staging, treatment, and follow-up to drug development [19] (Figure 2).

3.2. Application of AI in Breast Cancer Imaging Diagnosis

Breast cancer prevention and control strategies currently focus on secondary prevention, i.e., enhanced screening of high-risk populations, and early diagnosis is a key component of breast cancer control strategies [20,21]. The greatest value of AI in breast cancer screening may lie in the efficient search for tumor lesions among the huge number of images from healthy people, which greatly reduces the workload of imaging physicians [22]. The development of AI-assisted breast imaging diagnosis is based on computer-aided detection/diagnosis (CADe/CADx) systems. As a primary form of ML, CAD integrates mathematics, statistics, and computerized image processing and analysis to help radiologists identify tiny tumor lesions that might otherwise be missed. However, the high false positive rate (FPR) and biopsy rate resulting from CAD identification limit its usage in clinical practice [23,24,25]. To improve the performance of CAD, the input image data are used as a training set to build a model, and end-to-end learning is achieved through clinical dataset acquisition, dataset normalization by the neural network, ML classification algorithm selection, and overall system performance evaluation [26]. Building a DL-based AI tool for breast cancer diagnosis requires a large number of high-quality breast examination images as a training dataset, as well as a DL algorithm that is consistent across people, devices, and modalities [27]. Tumor images can be annotated by outlining the lesion and labeling the lesion features either manually or with DL. Manual annotation relies on the knowledge and experience of the imaging specialist, and manually outlined lesions serve as reference standards for automatic segmentation. However, lesions with small volumes or obscure features are difficult to distinguish from the surrounding normal breast tissue. Moreover, Asian women, especially premenopausal women or patients receiving estrogen-replacement therapy, may have dense breasts [28], making tumor segmentation more difficult. Therefore, expert-defined lesions do not fully and accurately represent the true lesion area and extent, and their repeatability and consistency are weak [29,30,31].
Automatic image segmentation methods generally include thresholding-based, region-based, edge-based, clustering-based, and other approaches, while no accepted gold standard for image segmentation has been established. DL algorithms usually refer to neural network algorithms, and the CNN is one of the most developed DL algorithms, with convolutional, nonlinear, pooling, and fully connected layers along the computing path (Figure 3). AI applied to breast cancer screening can help reduce or prevent some visible lesions from being overlooked or misinterpreted [32]. However, DL-based AI image interpretation tools are currently most commonly used for a second review of manually interpreted negative cases [24,25]. Furthermore, there are no prospective randomized controlled studies comparing the accuracy of AI as a stand-alone breast cancer screening interpretation system with that of radiologists’ interpretations, and in retrospective studies, AI systems were inferior to radiologists in terms of interpretation accuracy [33]. Nevertheless, through DL and model training on image data, there is still great potential for AI to perform early breast cancer screening in the future [27].
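As one concrete example of the thresholding-based family mentioned above, the sketch below (assuming NumPy and scikit-image; the image and “lesion” are synthetic) applies Otsu’s method to derive a global threshold and a binary mask. Clinical pipelines typically combine several such approaches or use learned (CNN-based) segmentation instead:

import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
image = rng.normal(0.2, 0.05, size=(128, 128))   # synthetic "background" tissue
image[40:80, 50:90] += 0.5                       # brighter square mimicking a lesion

t = threshold_otsu(image)                        # data-driven global threshold
mask = image > t                                 # binary lesion mask
print("Threshold:", round(float(t), 3), "| segmented pixels:", int(mask.sum()))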

3.2.1. Mammography in Breast Cancer

Breast cancer mortality among women in the United States has continued to decline since the introduction of mammography screening. Women whose breast cancer was detected by mammography screening demonstrated a 53% lower risk of breast cancer-related death compared with those whose cancer was detected as an interval or incidental finding [34]. However, radiologists usually determine the properties of a lesion based on its imaging characteristics on the mammogram, such as density, homogeneity, and margins, which relies on the physician’s personal perception and experience and is subject to subjective cognitive bias. Sahiner et al. were the first to attempt CNN-based mammogram image analysis; although only a simple model with few layers was used, mammography techniques and the application of AI to mammography image analysis have developed greatly since then [35,36]. The application of CNNs in mammography diagnosis mainly includes benign and malignant tumor identification, mass localization, segmentation of cancerous and non-cancerous tissues, and breast classification based on density [37]. For benign and malignant mass identification, CNN algorithms are accurate only for lesions with a BI-RADS classification of grade 1, while images of higher-grade tumors remain difficult to diagnose [38]. When used for mass localization in mammography, CNNs are weak at identifying tumor lesions in dense breasts and the pectoral muscle [39,40,41]; moreover, mammography does not show the extent of the tumor well, and tumor staging is difficult even with manual interpretation of plain digital mammograms alone [42]. Furthermore, using CNNs to differentiate between cancerous tissue and peripheral normal tissue in the lesion area requires an algorithm that excludes microcalcifications and benign lesions for accurate identification [43].
To improve diagnostic accuracy, Pillai et al. performed breast cancer detection in mammograms using the VGG16 model and obtained high accuracy, outperforming the classic AlexNet, GoogleNet, and EfficientNet models [44]. Studies have reported that CNNs can be used to identify “scattered density” and “heterogeneous density” in the BI-RADS breast density classification, categorizing breasts to help predict breast cancer risk [45]. Kumar and colleagues performed breast cancer subtype classification using a VGGNet-based CNN, reporting an accuracy of 78% in differentiating Luminal A and Luminal B subtypes and 67% in discerning all four subtypes of breast cancer, including Luminal A, Luminal B, HER2-positive, and basal-like [46]. Singh et al. applied a multi-class CNN architecture to predict breast tumor shapes in mammograms through a conditional generative adversarial network (cGAN) and correlated the molecular subtypes of breast cancer with the predicted tumor shapes [47]. Another study explored a novel CAD system based on DCNNs for deep feature extraction and fusion, which enhanced the classification accuracy of the SVM [48]. Contrast-enhanced spectral mammography (CESM) is a newer angiographic technique for breast tumors, based on plain digital mammography, in which an iodinated contrast agent diffusing through the tumor neovascularization creates iodine-enhanced images [49]. The uptake of contrast agent is higher in tumors than in the surrounding normal breast tissue; even in intraductal carcinoma, the contrast agent can visualize the tumor lesion through the incomplete endothelium of the structurally abnormal tumor neovascularization [50,51]. Investigators compared the ability of radiologists and SVM classifiers to identify malignant tumors in CESM images and suggested that SVM-based computer-aided CESM diagnosis could help radiologists reduce the number of false-positive results [52]. Another study evaluated the performance of DL neural networks in identifying benign and malignant tumors in CESM images and confirmed that multimodal network image recognition can significantly reduce the biopsy rate for benign tumors [53]. Digital breast tomosynthesis (DBT) relies on the anatomical changes in the breast caused by breast cancer to show the lesion; the breast is photographed at different angles and reconstructed to form thin three-dimensional (3D) images, significantly improving the sensitivity of breast cancer imaging diagnosis [54], albeit with the drawback of low lesion resolution in dense breasts [55]. DL-based AI has been used to automatically interpret DBT and mammography, and researchers have used DBT image data to create a maximum suspicion projection (MSP) DL algorithm that outperformed manual reading in the interpretation of mammograms [27]. However, the dependence of CESM on iodinated contrast media limits the use of this screening examination in people with renal insufficiency or iodine allergy.
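The VGG16-based studies cited above follow a common transfer-learning pattern. The sketch below (assuming PyTorch/torchvision ≥ 0.13 with downloadable ImageNet weights; the batch, labels, and hyperparameters are placeholders, and this is not the published implementation of any cited work) freezes a pretrained backbone and retrains only a new two-class head:

import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")          # pretrained feature extractor
for p in model.features.parameters():
    p.requires_grad = False                            # freeze the convolutional backbone
model.classifier[6] = nn.Linear(4096, 2)               # new benign/malignant output layer

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of mammogram patches (3 x 224 x 224).
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("Training loss on dummy batch:", float(loss))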

3.2.2. Ultrasound in Breast Cancer

Breast ultrasound is another popular imaging modality for breast cancer, widely used in diagnosis and in guiding puncture biopsy owing to its lack of ionizing radiation and ease of operation [56]. For small occult lesions that are not calcified, ultrasound is more advantageous than mammography [57,58,59]. In addition to conventional breast ultrasound, there are ultrasound elastography, contrast-enhanced ultrasonography, automated full-volume breast scanning, ultrasound light scattering tomography, etc. These ultrasound-based examinations combine contrast agents, 3D imaging, and spectral analysis with general ultrasound to meet a variety of diagnostic needs, such as assessing tumor texture, differentiating benign from malignant lesions, and displaying the relationship with surrounding tissues. Han et al. reported the identification of benign and malignant masses in breast ultrasound images using a CNN, with an accuracy of 90% and diagnostic sensitivity and specificity of 86% and 96%, respectively [60]. The recent advent of ultrasound elastography has greatly improved the accuracy of ultrasonography in the diagnosis of breast cancer, as the semi-quantitative assessment of lesion stiffness enables more accurate differentiation of benign from malignant lesions [61]. Zhang et al. used a two-layer DL algorithm to discriminate the properties of tumor masses on shear-wave elastography images, with a diagnostic accuracy of 93.4%, sensitivity of 88.6%, and specificity of 97.1% [62]. Combining the imaging features of breast ultrasound and ultrasound elastography for radiomics analysis can preoperatively predict the status of axillary lymph node metastasis in clinical T1-2 breast cancer [63].
Compared with CT and MRI image recognition, ultrasound is an operator-dependent imaging modality, and in-depth AI recognition of ultrasound images suffers from significant subjective bias both in the operation of the sonographer and in the reading of images by ultrasound physicians. AI recognition of ultrasound images therefore depends more on communication between DL models and sonographers than is the case for CT and MRI, which partly explains why AI recognition of ultrasound lags behind that of CT and MRI. To overcome the disadvantages of breast ultrasound images, Singh et al. presented a breast ultrasound segmentation method using a contextual-information-aware deep adversarial learning framework, which segments breast ultrasound images efficiently and handles tumors of various sizes and shapes [64]. Hassanien et al. presented a new DL-based radiomics method using ConvNeXt to enable CAD to predict the malignancy score of a breast lesion; by adopting a vision-transformer style, the ConvNeXt-based system can analyze the malignancy score of breast ultrasound sequences and present visual interpretations of its decisions [65]. Jabeen and colleagues optimized feature extraction and improved breast cancer classification accuracy to 99.1% by modifying and retraining the deep model DarkNet53. They achieved the best feature selection using reformed differential evolution (RDE) and reformed gray wolf (RGW) optimization algorithms, and the selected features were fused with a probability-based approach and classified with ML algorithms [66].
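Several of the ultrasound studies above share a “deep features plus classical classifier” pattern. The sketch below illustrates it with a pretrained ResNet18 standing in for DarkNet53 and an SVM as the final classifier (assuming PyTorch/torchvision and scikit-learn; the frames and labels are dummies, and the RDE/RGW feature-selection step of the cited study is omitted for brevity):

import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()                  # drop the ImageNet head -> 512-d feature vectors
backbone.eval()

with torch.no_grad():
    frames = torch.randn(20, 3, 224, 224)    # dummy ultrasound frames
    feats = backbone(frames).numpy()         # (20, 512) deep feature matrix

labels = [0, 1] * 10                         # dummy benign/malignant labels
clf = SVC(kernel="linear").fit(feats, labels)  # classical ML classifier on deep features
print("Training accuracy:", clf.score(feats, labels))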

3.2.3. MRI in Breast Cancer

Compared with the previous two imaging methods, breast MRI is the most sensitive and accurate imaging method for the preoperative staging of breast cancer, and its sensitivity for tumor diagnosis is not affected by breast density [11,67]. MRI provides information on tumor biological function: spectral imaging can characterize the chemical components within tissue regions for qualitative tumor diagnosis through metabolite detection [68], and MR spectroscopy can quantitatively assess lipid composition in the breast, reflecting altered adipogenesis genes in breast cancers with vascular invasion, thereby allowing prediction of tumor vascular invasion [69], assessment of chemotherapy response [70], and subtype identification [71]. Diffusion-weighted imaging (DWI) is highly sensitive for breast cancer, as it reveals and evaluates local pathophysiological features by measuring the mobility of water molecules in the tissue. MRI has been reported to play a multifaceted role in the diagnosis of breast cancer, for example in predicting pathological complete response (pCR) after neoadjuvant therapy [72,73]. Diffusion-weighted MRI (DWI-MRI) and contrast-enhanced MRI (CE-MRI) have the advantages of high sensitivity and high specificity, respectively, in monitoring neoadjuvant efficacy, and combining the two is expected to improve the accuracy of neoadjuvant efficacy assessment [74]. CE-MRI can provide four types of features: tumor morphology, texture, hemodynamics, and pharmacokinetics. Kinetic features are a detection advantage unique to MRI over the previous two imaging examinations and can help in tumor identification and classification by showing hemodynamic behavior of tumors that is completely different from that of normal glands [75]. Some studies have reported that the accuracy of MRI for neoadjuvant efficacy evaluation varies between breast cancer subtypes, and that MRI is more suitable for evaluating neoadjuvant therapy efficacy in human epidermal growth factor receptor 2 (HER2)-positive and triple-negative breast cancer (TNBC) subtypes than in luminal subtypes [76]. A growing trend in new application studies is multimodal image reconstruction with multiparametric MRI, i.e., combining one or several MRI techniques to provide more accurate imaging evaluation by merging the advantages of the various examinations. Winkler and colleagues confirmed the feasibility of DL in identifying breast tumor lesion-containing slices in MRI images. To improve the clinical workflow of breast MRI review, they integrated the DL technology into the picture archiving and communication system (PACS), whereby radiologists can quickly select the targeted image instead of scrolling through the imaging stack [77].
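As a simple illustration of the kinetic (hemodynamic) features mentioned above, the following sketch computes semi-quantitative wash-in and wash-out slopes from a contrast-enhancement signal-time curve (the numbers are synthetic and this is not a validated pharmacokinetic model):

import numpy as np

t = np.array([0, 60, 120, 180, 240, 300], dtype=float)          # seconds after contrast injection
signal = np.array([100, 240, 310, 295, 280, 265], dtype=float)  # mean ROI signal intensity (a.u.)

baseline = signal[0]
peak_idx = int(np.argmax(signal))
wash_in = (signal[peak_idx] - baseline) / t[peak_idx]                 # enhancement rate up to the peak
wash_out = (signal[-1] - signal[peak_idx]) / (t[-1] - t[peak_idx])    # negative value indicates wash-out

print(f"Peak enhancement: {(signal[peak_idx] - baseline) / baseline:.0%}")
print(f"Wash-in slope: {wash_in:.2f} a.u./s | wash-out slope: {wash_out:.2f} a.u./s")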

3.2.4. Nuclear Medicine Techniques

Nuclear medicine examinations, represented by 18F-FDG PET/CT, are used in breast cancer for diagnosis, staging, assessment of recurrent metastases, phenotypic identification, prognosis, and assessment of treatment response. Compared with other imaging examinations, PET has the advantage of obtaining whole-body staging information in a single examination [78,79,80]. FDG uptake has been shown to correlate with tumor grade and proliferation index [81]. Dedicated nuclear medicine techniques, such as 18F-FDG positron emission mammography (PEM) or dedicated breast PET (dbPET), have shown higher sensitivity and specificity than PET/CT in several studies of early breast cancer diagnosis [82]. Other studies have reported using 64Cu-DOTA-trastuzumab PET/CT and MRI to establish prediction models of neoadjuvant response in HER2-positive breast cancer, using 18F-FES PET/CT to predict chemotherapy efficacy in metastatic breast cancer (MBC) [83], and using FDG PET/CT to screen candidates for sentinel lymph node biopsy or axillary lymph node dissection [84]. However, PET/CT is not suitable for detecting small intracranial metastases because of the high glucose metabolic background of the brain parenchyma, and the cost-effectiveness of this examination in early breast cancer needs further evaluation given its high expense. MRI shows higher accuracy than other imaging examinations in breast cancer diagnosis, encouraging the application of fusion imaging of 18F-FDG PET with MRI in breast cancer; this fusion technology has further improved sensitivity and specificity for breast cancer detection compared with 18F-FDG PET/CT or MRI alone [79,85]. PET/MRI has better staging capability and a lower radiation dose than PET/CT for breast cancer [86], with a sensitivity of 90–99% for breast cancer detection. However, its specificity is only 72–89%, which may increase the FPR and the proportion of biopsies [79] and raises the problem of higher examination costs. Owing to small sample sizes, inconsistent study designs, high heterogeneity of imaging data between studies, and the lack of uniform criteria such as indications and scan parameters, the evidence on the role of PET/CT and PET/MRI in breast cancer remains unclear.
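FDG uptake is usually quantified with the standardized uptake value (SUV); the sketch below shows the common body-weight-normalized form (the values are illustrative, and decay correction of the injected dose to the scan time is omitted for simplicity):

def suv_bw(tissue_activity_bq_per_ml, injected_dose_bq, body_weight_kg):
    # SUVbw = tissue concentration [Bq/mL] / (injected dose [Bq] / body weight [g]),
    # assuming a tissue density of ~1 g/mL so the result is dimensionless.
    return tissue_activity_bq_per_ml * (body_weight_kg * 1000.0) / injected_dose_bq

# Example: lesion concentration 25 kBq/mL, 350 MBq injected, 65 kg patient.
print(round(suv_bw(25_000, 350e6, 65), 2))   # ~4.64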

3.2.5. Radiomics and Radiogenomics in Breast Cancer

Traditional imaging methods diagnose the properties of tumors by the qualitative characteristics of the lesions on images, including tumor density, intratumoral components (such as parenchymal components, blood vessels, necrosis, and calcification), the morphology of the lesion edges, and the anatomical relationship with surrounding tissues. The rapid development of sequencing technology has enabled detailed study of genetic information across all dimensions of human disease and has further expanded to the analysis of multiple dimensions of biological information, i.e., “multi-omics” research, such as the multi-modal integrative analysis of genomics, transcriptomics, and proteomics, providing a more comprehensive view of the biological processes of human physiology and pathology. Along with this, the concept of radiomics has been introduced into imaging research, where AI is used to extract features from images (X-ray, CT, ultrasound, and MRI) in a high-throughput manner, digitally decoding radiographic images into quantitative features and establishing AI-based image feature recognition as a clinical diagnostic aid. Similar to other omics approaches, the purpose of radiomics is to study the molecular biology of a lesion through its imaging appearance: the extracted features reflect not only the lesion’s appearance on the image but also the alterations occurring at the genetic and molecular levels.
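The “image to quantitative feature” step at the core of radiomics can be illustrated with a few first-order features computed from a segmented region of interest (the image and mask below are synthetic; real radiomics platforms compute hundreds of standardized shape, intensity, and texture features):

import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(100, 10, size=(64, 64))   # synthetic gray-level image
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                    # synthetic tumor segmentation

roi = image[mask]
hist, _ = np.histogram(roi, bins=32)
p = hist / hist.sum()
p = p[p > 0]                                 # normalized gray-level probabilities

features = {
    "mean": float(roi.mean()),
    "variance": float(roi.var()),
    "entropy": float(-(p * np.log2(p)).sum()),  # intensity heterogeneity within the ROI
}
print(features)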
The general process of radiomics analysis is similar to that of AI-based diagnostic imaging techniques: acquisition of high-quality image data, outlining of the region of interest (tumor segmentation), AI-based high-throughput image feature extraction, feature screening using algorithms chosen according to the selected prediction/assessment endpoints, model construction using the screened image features, and model validation using internal and external datasets. A DBT-based radiomics analysis found a correlation between tumor size and estrogen receptor status [87]. A radiomics study using breast MRI images revealed a correlation between entropy and tumor vascularity and heterogeneity, which can be applied to distinguish benign from malignant breast tumors [88]. Another radiomics study using TCGA/TCIA public MRI data enrolled 91 cases of biopsy-proven invasive breast cancer for molecular typing by radiomics; the model developed in this study was better able to distinguish the expression status of prognostic molecular markers, including estrogen receptor, progesterone receptor, and HER2, and the study also observed a correlation between tumor size and tumor aggressiveness [89]. Radiomics analysis of MRI has also been used to predict axillary lymph node metastasis in breast cancer; clinical features were further incorporated into the radiomic model to provide a nomogram for predicting the risk of axillary lymph node metastasis and recurrence in patients with early breast cancer. Although this study was retrospective, with large heterogeneity in MRI scans and a short follow-up period, its findings provide a valuable direction for the application of MRI-based radiomics in breast cancer diagnosis [90]. Yu et al. also used MRI radiomics to develop a model for the preoperative prediction of axillary lymph node metastasis in breast cancer. Since the metastatic axillary lymph nodes are of the same origin as the primary breast foci, the study incorporated both the primary breast foci and the lymph node metastases to develop the prediction model, which improved prediction accuracy compared with a model developed from the primary-focus features alone [91].
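The feature screening, model construction, and validation steps described above can be sketched as follows (assuming scikit-learn; the feature matrix and labels are synthetic, and this is not a reproduction of any cited model):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic radiomic feature matrix (100 features per lesion) with binary outcome labels.
X, y = make_classification(n_samples=200, n_features=100, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),  # feature screening
    LogisticRegression(max_iter=1000),                                             # prediction model
).fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])                     # held-out validation
print(f"Validation AUC: {auc:.2f}")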
PET/CT images have also been reported to allow radiomics analysis. Poor therapeutic outcomes due to tumor heterogeneity are a leading research topic in the field of drug resistance. Using 18F-FDG PET/CT images of locally advanced breast cancer before and after neoadjuvant therapy, tumor heterogeneity texture parameters were extracted, and the resulting metabolic patterns correlated with Ki67 expression, pCR rate, and risk of recurrence after neoadjuvant therapy [92]. CEM-derived breast images combined with manual segmentation of tumor lesions can better facilitate DL-based automatic extraction of tumor features and identification of invasive and non-invasive breast cancers [93]. Another study used CEM images of the breast with manual segmentation of lesion areas for radiomics analysis and successfully identified the histological and molecular subtypes of HER2-positive and triple-negative breast cancers [94]. The development of new diagnostic imaging modalities for breast cancer using radiomics is highly promising, and the accuracy and efficiency of breast cancer diagnosis can be improved by combining radiomic analysis with conventional imaging. However, the diagnostic accuracy of some of the classification models currently developed to identify benign and malignant tumors is still lower than that of manual identification, and more research is needed to refine the algorithms and define indications in order to establish a general and stable automated analysis tool.
Correlations between medical imaging features and gene expression pathways have been shown to provide information on the genetic constitution of lesions; for example, imaging features of tumor size are positively correlated with the expression of cell proliferation-related genes and signaling pathways [95]. Radiogenomics performs joint analysis of radiomics and genomics to provide more informative and individualized radiological and genetic features [95], allowing the concepts of precision medicine and personalized therapy to extend beyond genomic or proteomic studies that rely on tissue or blood samples. Radiogenomics studies can help in the prediction and early detection of cancer by using imaging features to analyze changes at the molecular level of the disease. Multiple combinations of imaging-genomics analysis have arisen because there are multiple dimensions of genetic testing, such as DNA sequencing and RNA sequencing, as well as different image features obtained by different imaging examinations. The study by Yeh et al. combined RNA-seq data with 3D DCE-MRI images to establish robust associations between enhanced MRI images and multiple signaling pathways involving cell growth and death, immune regulation, intercellular interactions, and signal transduction, and the up- or down-regulation of these pathways was associated with different imaging phenotypes of the tumors [95]. Another advantage of radiogenomics is the possibility of obtaining comprehensive molecular-level information on the whole lesion by resolving the imaging information of the entire lesion, which cannot be acquired by puncture or small-sample biopsy because of tumor heterogeneity. The Cancer Genome Atlas (TCGA) program has linked the completed genomic and clinical biomarkers for adult cancer identification with data from The Cancer Imaging Archive (TCIA) [96,97]. Currently, radiogenomics lacks standardized processes and assessment criteria to ensure quantitative imaging, and various discrepancies between imaging scans may lead to over- or under-treatment. Thus, although radiogenomics can be much more informative than histopathological sampling alone, the process is very expensive and requires large datasets and powerful computational capabilities for validation, and it is not yet routinely practiced in the clinic.
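The joint radiomics-genomics analysis described above often starts from simple associations between image features and gene-expression values across patients. The sketch below illustrates this with Spearman correlation on synthetic data (real radiogenomics studies add multiple-testing correction, pathway-level aggregation, and external validation):

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_patients, n_features = 60, 5
radiomic = rng.normal(size=(n_patients, n_features))  # e.g., size, entropy, margin sharpness, ...
# Synthetic gene-expression ("proliferation") score linked to the first radiomic feature.
proliferation_score = 0.8 * radiomic[:, 0] + rng.normal(scale=0.5, size=n_patients)

for j in range(n_features):
    rho, pval = spearmanr(radiomic[:, j], proliferation_score)
    print(f"Feature {j}: rho = {rho:+.2f}, p = {pval:.3g}")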

4. Discussion

AI-assisted imaging diagnosis is a promising strategy for breast cancer diagnosis. Several potential directions of AI application are attractive: (1) AI-based diagnosis as a minimally invasive or non-invasive test for convenient efficacy assessment; (2) differentiation of distinct molecular biological features in primary or recurrent/metastatic lesions; (3) obtaining information on tumor heterogeneity; and (4) identification of treatment response or tumor progression (as seen in immunotherapy). However, the current techniques, from imaging detection methods to lesion segmentation, qualitative image diagnosis, and analysis of lesion image features, remain deficient and inadequate to support the use of AI-assisted imaging diagnosis as an independent imaging or clinical diagnostic tool. Several key issues limit the adoption of AI-assisted imaging diagnosis in breast cancer. First, there is a lack of a generally recognized operating standard for the whole process, from imaging detection and tumor segmentation to image feature extraction, to ensure reproducible results. Second, novel algorithms with higher specificity are needed to cope with images of varying quality as well as patients’ individual variations. Third, even though imaging-based AI diagnosis could potentially reduce the FPR in breast cancer, diagnostic accuracy depends on the image quality of the underlying modalities, including mammography, ultrasound, etc., and the algorithms still need refinement to improve diagnostic accuracy. Furthermore, the practical clinical value of AI-assisted breast cancer diagnosis still needs to be validated in randomized clinical trials with large prospective cohorts.

5. Conclusions

AI-assisted imaging diagnosis offers the promise of a more accurate and highly efficient diagnostic model for breast cancer. However, further optimization and validation in randomized clinical trials are essential before it can be applied in clinical practice. Moreover, a breakthrough in directly connecting AI-specific training/diagnostic databases with health insurance databases or hospital information systems (HIS) could help accelerate the establishment of a robust, open, self-optimizing AI imaging platform for breast cancer diagnosis.

Author Contributions

D.Z. and X.H.: investigation, writing—original draft preparation, visualization; J.J.: conceptualization, visualization, project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Giaquinto, A.N.; Sung, H.; Miller, K.D.; Kramer, J.L.; Newman, L.A.; Minihan, A.; Jemal, A.; Siegel, R.L. Breast Cancer Statistics, 2022. CA A Cancer J. Clin. 2022, 72, 524–541. [Google Scholar] [CrossRef]
  2. Marmot, M.G.; Altman, D.G.; Cameron, D.A.; Dewar, J.A.; Thompson, S.G.; Wilcox, M. The benefits and harms of breast cancer screening: An independent review. Br. J. Cancer 2013, 108, 2205–2240. [Google Scholar] [CrossRef] [Green Version]
  3. Sun, L.; Legood, R.; Sadique, Z.; Dos-Santos-Silva, I.; Yang, L. Cost-effectiveness of risk-based breast cancer screening programme, China. Bull. World Health Organ. 2018, 96, 568–577. [Google Scholar] [CrossRef]
  4. Pomerantz, B.J. Imaging and Interventional Radiology for Cancer Management. Surg. Clin. North Am. 2020, 100, 499–506. [Google Scholar] [CrossRef] [PubMed]
  5. Bevers, T.B.; Helvie, M.; Bonaccio, E.; Calhoun, K.E.; Daly, M.B.; Farrar, W.B.; Garber, J.E.; Gray, R.; Greenberg, C.C.; Greenup, R.; et al. Breast Cancer Screening and Diagnosis, Version 3.2018, NCCN Clinical Practice Guidelines in Oncology. J. Natl. Compr. Canc. Netw. 2018, 16, 1362–1389. [Google Scholar] [CrossRef] [Green Version]
  6. Coleman, C. Early Detection and Screening for Breast Cancer. Semin. Oncol. Nurs. 2017, 33, 141–155. [Google Scholar] [CrossRef]
  7. Welch, H.G.; Prorok, P.C.; O’Malley, A.J.; Kramer, B.S. Breast-Cancer Tumor Size, Overdiagnosis, and Mammography Screening Effectiveness. N. Engl. J. Med. 2016, 375, 1438–1447. [Google Scholar] [CrossRef]
  8. Maitra, I.K.; Nag, S.; Bandyopadhyay, S.K. Technique for preprocessing of digital mammogram. Comput. Methods Programs Biomed. 2012, 107, 175–188. [Google Scholar] [CrossRef]
  9. Miller, A.B.; Wall, C.; Baines, C.J.; Sun, P.; To, T.; Narod, S.A. Twenty five year follow-up for breast cancer incidence and mortality of the Canadian National Breast Screening Study: Randomised screening trial. BMJ 2014, 348, g366. [Google Scholar] [CrossRef] [Green Version]
  10. Sood, R.; Rositch, A.F.; Shakoor, D.; Ambinder, E.; Pool, K.-L.; Pollack, E.; Mollura, D.J.; Mullen, L.A.; Harvey, S.C. Ultrasound for breast cancer detection globally: A systematic review and meta-analysis. J. Glob. Oncol. 2019, 5, 1–17. [Google Scholar]
  11. Morrow, M.; Waters, J.; Morris, E. MRI for breast cancer screening, diagnosis, and treatment. Lancet 2011, 378, 1804–1811. [Google Scholar] [CrossRef] [PubMed]
  12. Swayampakula, A.K.; Dillis, C.; Abraham, J. Role of MRI in screening, diagnosis and management of breast cancer. Expert Rev. Anticancer Ther. 2008, 8, 811–817. [Google Scholar] [CrossRef] [PubMed]
  13. Leithner, D.; Moy, L.; Morris, E.A.; Marino, M.A.; Helbich, T.H.; Pinker, K. Abbreviated MRI of the Breast: Does It Provide Value? J. Magn. Reson. Imaging 2019, 49, e85–e100. [Google Scholar] [CrossRef] [PubMed]
  14. Giger, M.L. Update on the potential of computer-aided diagnosis for breast cancer. Future Oncol. 2010, 6, 1–4. [Google Scholar] [CrossRef] [Green Version]
  15. Bilska-Wolak, A.O.; Floyd Jr, C.E.; Lo, J.Y.; Baker, J.A. Computer aid for decision to biopsy breast masses on mammography: Validation on new cases1. Acad. Radiol. 2005, 12, 671–680. [Google Scholar] [CrossRef]
  16. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.; van Ginneken, B.; Sanchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Choy, G.; Khalilzadeh, O.; Michalski, M.; Do, S.; Samir, A.E.; Pianykh, O.S.; Geis, J.R.; Pandharipande, P.V.; Brink, J.A.; Dreyer, K.J. Current Applications and Future Impact of Machine Learning in Radiology. Radiology 2018, 288, 318–328. [Google Scholar] [CrossRef]
  18. Nassif, A.B.; Talib, M.A.; Nasir, Q.; Afadar, Y.; Elgendy, O. Breast cancer detection using artificial intelligence techniques: A systematic literature review. Artif. Intell. Med. 2022, 127, 102276. [Google Scholar] [CrossRef]
  19. Kann, B.H.; Hosny, A.; Aerts, H. Artificial intelligence for clinical oncology. Cancer Cell 2021, 39, 916–927. [Google Scholar] [CrossRef]
  20. Cedolini, C.; Bertozzi, S.; Londero, A.P.; Bernardi, S.; Seriau, L.; Concina, S.; Cattin, F.; Risaliti, A. Type of breast cancer diagnosis, screening, and survival. Clin. Breast Cancer 2014, 14, 235–240. [Google Scholar] [CrossRef]
  21. Sardanelli, F.; Boetes, C.; Borisch, B.; Decker, T.; Federico, M.; Gilbert, F.J.; Helbich, T.; Heywang-Kobrunner, S.H.; Kaiser, W.A.; Kerin, M.J.; et al. Magnetic resonance imaging of the breast: Recommendations from the EUSOMA working group. Eur. J. Cancer 2010, 46, 1296–1316. [Google Scholar] [CrossRef]
  22. Evans, K.K.; Birdwell, R.L.; Wolfe, J.M. If you don’t find it often, you often don’t find it: Why some cancers are missed in breast cancer screening. PLoS ONE 2013, 8, e64366. [Google Scholar] [CrossRef] [Green Version]
  23. Geras, K.J.; Mann, R.M.; Moy, L. Artificial Intelligence for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives. Radiology 2019, 293, 246–259. [Google Scholar] [CrossRef]
  24. Fenton, J.J.; Taplin, S.H.; Carney, P.A.; Abraham, L.; Sickles, E.A.; D’Orsi, C.; Berns, E.A.; Cutter, G.; Hendrick, R.E.; Barlow, W.E.; et al. Influence of computer-aided detection on performance of screening mammography. N. Engl. J. Med. 2007, 356, 1399–1409. [Google Scholar] [CrossRef]
  25. Lehman, C.D.; Wellman, R.D.; Buist, D.S.; Kerlikowske, K.; Tosteson, A.N.; Miglioretti, D.L.; Breast Cancer Surveillance Consortium. Diagnostic Accuracy of Digital Screening Mammography with and without Computer-Aided Detection. JAMA Intern. Med. 2015, 175, 1828–1837. [Google Scholar] [CrossRef]
  26. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural. Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef]
  27. Lotter, W.; Diab, A.R.; Haslam, B.; Kim, J.G.; Grisot, G.; Wu, E.; Wu, K.; Onieva, J.O.; Boyer, Y.; Boxerman, J.L.; et al. Robust breast cancer detection in mammography and digital breast tomosynthesis using an annotation-efficient deep learning approach. Nat. Med. 2021, 27, 244–249. [Google Scholar] [CrossRef]
  28. Carney, P.A.; Miglioretti, D.L.; Yankaskas, B.C.; Kerlikowske, K.; Rosenberg, R.; Rutter, C.M.; Geller, B.M.; Abraham, L.A.; Taplin, S.H.; Dignan, M.; et al. Individual and combined effects of age, breast density, and hormone replacement therapy use on the accuracy of screening mammography. Ann. Intern. Med. 2003, 138, 168–175. [Google Scholar] [CrossRef]
  29. Hong, T.S.; Tome, W.A.; Harari, P.M. Heterogeneity in head and neck IMRT target design and clinical practice. Radiother. Oncol. 2012, 103, 92–98. [Google Scholar] [CrossRef] [Green Version]
  30. Li, X.A.; Tai, A.; Arthur, D.W.; Buchholz, T.A.; Macdonald, S.; Marks, L.B.; Moran, J.M.; Pierce, L.J.; Rabinovitch, R.; Taghian, A.; et al. Variability of target and normal structure delineation for breast cancer radiotherapy: An RTOG Multi-Institutional and Multiobserver Study. Int. J. Radiat. Oncol. Biol. Phys. 2009, 73, 944–951. [Google Scholar] [CrossRef] [Green Version]
  31. Bi, W.L.; Hosny, A.; Schabath, M.B.; Giger, M.L.; Birkbak, N.J.; Mehrtash, A.; Allison, T.; Arnaout, O.; Abbosh, C.; Dunn, I.F.; et al. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J. Clin. 2019, 69, 127–157. [Google Scholar] [CrossRef]
  32. Rodriguez-Ruiz, A.; Lang, K.; Gubern-Merida, A.; Broeders, M.; Gennaro, G.; Clauser, P.; Helbich, T.H.; Chevalier, M.; Tan, T.; Mertelmeier, T.; et al. Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison with 101 Radiologists. J. Natl. Cancer Inst. 2019, 111, 916–922. [Google Scholar] [CrossRef]
  33. Freeman, K.; Geppert, J.; Stinton, C.; Todkill, D.; Johnson, S.; Clarke, A.; Taylor-Phillips, S. Use of artificial intelligence for image analysis in breast cancer screening programmes: Systematic review of test accuracy. BMJ 2021, 374, n1872. [Google Scholar] [CrossRef]
  34. Shen, Y.; Yang, Y.; Inoue, L.Y.; Munsell, M.F.; Miller, A.B.; Berry, D.A. Role of detection method in predicting breast cancer survival: Analysis of randomized screening trials. J. Natl. Cancer Inst. 2005, 97, 1195–1203. [Google Scholar] [CrossRef] [Green Version]
  35. Kooi, T.; Litjens, G.; van Ginneken, B.; Gubern-Merida, A.; Sanchez, C.I.; Mann, R.; den Heeten, A.; Karssemeijer, N. Large scale deep learning for computer aided detection of mammographic lesions. Med. Image Anal. 2017, 35, 303–312. [Google Scholar] [CrossRef]
  36. Sahiner, B.; Chan, H.P.; Petrick, N.; Wei, D.; Helvie, M.A.; Adler, D.D.; Goodsitt, M.M. Classification of mass and normal breast tissue: A convolution neural network classifier with spatial domain and texture images. IEEE Trans. Med. Imaging 1996, 15, 598–610. [Google Scholar] [CrossRef]
  37. Wong, D.J.; Gandomkar, Z.; Wu, W.J.; Zhang, G.; Gao, W.; He, X.; Wang, Y.; Reed, W. Artificial intelligence and convolution neural networks assessing mammographic images: A narrative literature review. J. Med. Radiat. Sci. 2020, 67, 134–142. [Google Scholar] [CrossRef] [Green Version]
  38. Akselrod-Ballin, A.; Karlinsky, L.; Alpert, S.; Hasoul, S.; Ben-Ari, R.; Barkan, E. A Region Based Convolutional Network for Tumor Detection and Classification in Breast Mammography. In Deep Learning and Data Labeling for Medical Applications: 2016//2016; Springer International Publishing: Cham, Switzerland, 2016; pp. 197–205. [Google Scholar]
  39. Kuhl, C. The current status of breast MR imaging. Part, I. Choice of technique, image interpretation, diagnostic accuracy, and transfer to clinical practice. Radiology 2007, 244, 356–378. [Google Scholar] [CrossRef]
  40. Al-Masni, M.A.; Al-Antari, M.A.; Park, J.M.; Gi, G.; Kim, T.Y.; Rivera, P.; Valarezo, E.; Han, S.M.; Kim, T.S. Detection and classification of the breast abnormalities in digital mammograms via regional Convolutional Neural Network. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Republic of Korea, 11–15 July 2017; pp. 1230–1233. [Google Scholar]
  41. Sampaio, W.B.; Diniz, E.M.; Silva, A.C.; de Paiva, A.C.; Gattass, M. Detection of masses in mammogram images using CNN, geostatistic functions and SVM. Comput. Biol. Med. 2011, 41, 653–664. [Google Scholar] [CrossRef]
  42. Hieken, T.J.; Harrison, J.; Herreros, J.; Velasco, J.M. Correlating sonography, mammography, and pathology in the assessment of breast cancer size. Am. J. Surg. 2001, 182, 351–354. [Google Scholar] [CrossRef]
  43. Kim, E.K.; Kim, H.E.; Han, K.; Kang, B.J.; Sohn, Y.M.; Woo, O.H.; Lee, C.W. Applying Data-driven Imaging Biomarker in Mammography for Breast Cancer Screening: Preliminary Study. Sci. Rep. 2018, 8, 2762. [Google Scholar] [CrossRef]
  44. Pillai, A.; Nizam, A.; Joshee, M.; Pinto, A.; Chavan, S. Breast Cancer Detection in Mammograms Using Deep Learning. In Applied Information Processing Systems: 2022//2022; Springer: Singapore, 2022; pp. 121–127. [Google Scholar]
  45. Ha, R.; Chang, P.; Karcich, J.; Mutasa, S.; Pascual Van Sant, E.; Liu, M.Z.; Jambawalikar, S. Convolutional Neural Network Based Breast Cancer Risk Stratification Using a Mammographic Dataset. Acad. Radiol. 2019, 26, 544–549. [Google Scholar] [CrossRef]
  46. Singh, V.K.; Romani, S.; Torrents-Barrena, J.; Akram, F.; Pandey, N.; Sarker, M.M.K.; Saleh, A.; Arenas, M.; Arquez, M.; Puig, D. Classification of breast cancer molecular subtypes from their micro-texture in mammograms using a VGGNet-based convolutional neural network. Front. Artif. Intell. Appl. 2017, 2017, 76–85. [Google Scholar]
Figure 1. Flowchart to indicate the literature screening process.
Figure 2. Application of artificial intelligence in breast cancer diagnosis.
Figure 3. Working model of medical imaging artificial intelligence. NACT, neoadjuvant chemotherapy.
