Article

Deep Learning-Based Modified YOLACT Algorithm on Magnetic Resonance Imaging Images for Screening Common and Difficult Samples of Breast Cancer

Wei Wang and Yisong Wang
1 College of Computer Science and Technology, Guizhou University, Guiyang 550001, China
2 Institute for Artificial Intelligence, Guizhou University, Guiyang 550001, China
3 Guizhou Provincial People’s Hospital, Guiyang 550001, China
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(9), 1582; https://doi.org/10.3390/diagnostics13091582
Submission received: 21 February 2023 / Revised: 27 March 2023 / Accepted: 9 April 2023 / Published: 28 April 2023
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

Computer-aided methods have been extensively applied for diagnosing breast lesions with magnetic resonance imaging (MRI), but fully automatic diagnosis using deep learning is rarely documented. In this work, deep-learning-based artificial intelligence (AI) was used to classify and diagnose breast cancer from MRI images. Breast cancer MRI images from the Rider Breast MRI public dataset were converted into processable Joint Photographic Experts Group (JPG) format images, and the location and shape of each lesion area were labeled using the Labelme software. A difficult-sample mining mechanism was introduced into the YOLACT algorithm model to improve its performance, yielding a modified YOLACT algorithm model, and its diagnostic efficacy was compared with that of the Mask R-CNN algorithm model. The deep learning framework was based on PyTorch version 1.0. Of the labeled data, 4400 images with clearly corresponding lesions were labeled as common samples, and 1600 images with blurred lesion areas as difficult samples. The modified YOLACT algorithm model achieved higher accuracy and better classification performance than the YOLACT model: with the difficult-sample mining mechanism, its detection accuracy on common and difficult sample images improved by nearly 3%, reaching a classification accuracy of 98.5% on the common sample test set and 93.6% on the difficult samples. Compared with Mask R-CNN, it runs faster, with no obvious difference in recognition accuracy. In summary, we constructed a modified YOLACT algorithm model that is superior to the original YOLACT algorithm model in diagnostic and classification accuracy.

1. Introduction

Breast cancer is one of the most common cancers in the world, with high morbidity and mortality rates, posing a serious threat to women’s lives and well-being [1]. Over the past 20 years, the incidence of breast cancer has risen rapidly, with a trend toward younger age groups and persistently high morbidity and mortality [2]. Early diagnosis and treatment therefore remain crucial for the effective prevention of breast cancer and for reducing its mortality. Clinically, common examination methods for diagnosing breast cancer include color Doppler ultrasound, mammography, computed tomography (CT), and MRI [3]. In addition, two sequences are commonly used in dynamic contrast-enhanced MRI: the T1-weighted image (T1WI), which reflects the distribution of breast fat and glands well, and the T2-weighted image (T2WI), which can identify subcutaneous fluid components [4].
MRI has been increasingly recognized by clinicians for its role in diagnosing, treating, and prognostically evaluating breast cancer [5]. Numerous studies have shown that, with pathology as the “gold standard”, the sensitivity, specificity, and accuracy of MRI for breast cancer diagnosis are significantly higher than those of color Doppler ultrasound, mammography, CT, and other examination methods [6,7,8]. However, because the image characteristics of different molecular subtypes of breast cancer differ markedly, clinicians must devote considerable energy and time to diagnosis in combination with clinical characteristics, which makes it difficult to confirm the condition quickly and accurately and delays clinical treatment [9]. Computer-aided diagnosis can therefore improve diagnostic efficiency, help patients receive treatment quickly, and predict the disease more objectively and accurately, making it work of real value with wide application prospects [10].
Deep learning has emerged in computer vision and image processing, providing a new approach to computer-aided diagnosis [11]. With the continuous development of computer technology, the automatic analysis of medical images has become a research hotspot and a typical computer-aided diagnosis (CAD) application [12]. The ultimate goal of CAD is the automatic identification and classification of medical images to assist doctors in diagnosis, and more and more hospitals have begun establishing CAD systems [13,14]. Image classification, one of the central tasks of computer vision analysis, has long been a popular research direction. The convolutional neural network (CNN) has developed rapidly owing to its outstanding performance in image classification: a CNN model can automatically extract and select image features and classify images without manual feature engineering and without the influence of subjective factors, which can better assist doctors in diagnosis [15,16]. CNNs are among the deep learning networks most commonly used in clinical practice, primarily employed to identify and analyze both pathological and normal imaging data, and they hold tremendous potential in clinical tasks such as segmentation, abnormality detection, disease classification, and diagnosis [17].
Deep learning has been successfully applied in computer vision analysis and has made great progress in medical image classification and analysis [18]. The detection and segmentation of breast cancer lesions are prerequisites for identifying benign and malignant breast tumors and their molecular subtypes [19]. The in-depth application of deep learning has improved the ability of MRI to diagnose breast cancer and identify its subtypes. Deep learning is prominent in image segmentation: radiomic features are first extracted from breast MRI, and classical supervised and unsupervised machine learning algorithms are then trained to identify benign and malignant breast tumors and molecular subtypes [20,21].
At present, large-scale breast MRI datasets remain limited. It is therefore urgent to establish a breast MRI dataset with a large data volume, rich image information, and professional annotations to provide data support for training in lesion segmentation, benign and malignant tumor classification, and subtype identification [22]. To quickly diagnose breast cancer and to extract and analyze the characteristics of each subtype, our study used deep learning algorithms to construct an improved CNN algorithm model based on MRI image datasets of benign and malignant breast tumors obtained from an online database.

2. Methods

2.1. The Mask R-CNN Algorithm Model

The Mask R-CNN algorithm model is an improved version of the Faster R-CNN algorithm model. By introducing a fully convolutional segmentation branch, a model combining target detection and image segmentation was obtained, which can efficiently detect targets in an image and generate high-quality segmentation results for each target [23]. The Mask R-CNN algorithm model flow is shown in Figure 1.
The main body of the Mask R-CNN network model was based on Faster R-CNN, with the addition of a fully convolutional network to predict the semantic segmentation [24]. First, a residual network (ResNet) was used as the feature extraction backbone, combined with a feature pyramid network (FPN) to better exploit high-level semantic features and low-level texture features and to extract multi-scale information from the image [25]. Bilinear interpolation was applied to the original region-of-interest (ROI) pooling to address the rounding of sampling points to integer values during candidate box extraction [26]. As a result, the region of interest could be completely aligned with the corresponding feature region in the original image. Retaining the fractional portion of the region-of-interest boundary tensor resolved the mismatch between the candidate box of the ROI pooling filter and the original target, thereby enhancing the accuracy of candidate box detection [27].
In the Mask R-CNN image processing workflow, the residual network was used as the feature extraction network to generate multi-scale feature maps from the preprocessed input image [28]. The FPN sampled the feature maps at different scales. The top-down path used nearest-neighbor up-sampling from the highest layer, which was both simpler to operate and reduced the number of training parameters. Horizontal connections fused each up-sampled feature map with the bottom-up feature map of the same size, and a 3 × 3 convolution was then applied to the fused features to eliminate aliasing effects. The results were input into the region proposal network (RPN), and three aspect ratios at different scales were used to generate anchors of different sizes at each pixel. The prediction box and the ground-truth box were compared by their intersection over union to obtain the corresponding overlap ratio. The proposals were then fed, together with the feature map, into the region-of-interest align (ROI Align) layer; Mask R-CNN replaced the ROI Pooling layer of Faster R-CNN with ROI Align, so target position information was calibrated more precisely, without rounding on the feature map. Finally, the detection, localization, and segmentation loss functions were calculated separately, and high-quality target segmentation results mapped from the target detections were generated simultaneously.
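The key operation here, pooling each candidate box with bilinear sampling instead of rounding its boundaries, is available as an off-the-shelf operator. The following minimal PyTorch sketch (not the authors’ code; the feature map size, stride, and box coordinates are invented for illustration) shows how ROI Align preserves fractional box coordinates:

```python
import torch
from torchvision.ops import roi_align

feature_map = torch.randn(1, 256, 50, 50)  # one FPN level; stride 16 assumed
# Boxes are (batch_index, x1, y1, x2, y2) in image coordinates. The fractional
# coordinates below are exactly what plain ROI Pooling would round away.
rois = torch.tensor([[0.0, 12.3, 40.7, 120.9, 200.2]])

pooled = roi_align(
    feature_map, rois,
    output_size=(7, 7),      # fixed-size output fed to the detection heads
    spatial_scale=1.0 / 16,  # maps image coordinates onto the feature map
    sampling_ratio=2,        # bilinear sampling points per output bin
    aligned=True,            # half-pixel alignment, as in Mask R-CNN
)
print(pooled.shape)  # torch.Size([1, 256, 7, 7])
```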

2.2. The YOLACT Algorithm Model

In the network structure of the YOLACT algorithm model, the feature pooling operation was removed, and the whole task was divided into two parallel subtasks, namely the prototype network branch and the target detection branch [29]. The formulas used in the YOLACT algorithm model are as follows:
$Num_{anchor} = 4 + c + k$  (1)
$M = \sigma(P C^{T})$  (2)
$L(o, p) = L_{IoU}(o) + L_{score}(p)$  (3)
$L_{score} = 1 - p$  (4)
$L_{IoU} = \dfrac{1}{1 + e^{k \times (o - 0.5)}}$  (5)
$o = \dfrac{area(C \cap G)}{area(C \cup G)}$  (6)
$P = \dfrac{TP}{TP + FP}$  (7)
$R = \dfrac{TP}{TP + FN}$  (8)
$AP = \sum_{n = 0} (R_{n+1} - R_{n}) \, \rho_{interp}(R_{n+1})$  (9)
  • Prototype network branch: the FCN network structure was used to generate the prototype masks, as shown in Figure 2. The feature map P3 generated by the feature pyramid network passed through a set of FCN layers, first a 3 × 3 convolution and then a 1 × 1 convolution, followed by up-sampling to generate k prototypes of size 138 × 138, where k is the number of mask coefficients.
  • Target detection branch: this branch predicted the mask coefficients for each anchor. As shown in Formula (1), four values represent the candidate box information, c represents the number of categories, and k is the number of mask coefficients matching the prototype network. Through a linear operation between the mask branch and the prototype masks, the predicted target’s location information and mask information could be determined by combining the results of the two branches. After the mask coefficients were generated for all targets, linear addition and multiplication operations were performed with the prototype masks, clipping was performed according to the candidate box, and the categories were threshold-filtered, yielding each target’s mask and position information. The specific calculation is shown in Formula (2), where P is the set of prototype masks, sized by the feature map’s length and width and the number of mask coefficients; C contains the mask coefficients of the instances passing through the network; σ is the sigmoid function; and M is the combined result of the prototype masks and the detection branch (a code sketch of this assembly follows the list).
  • Module generalization: The model’s prototype generation and mask coefficient can be added to the existing detection network. The flowchart of the YOLACT algorithm model is shown in Figure 3.
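The following PyTorch sketch (our illustration, not the authors’ code) implements Formula (2) and the cropping step described above: prototypes of shape (h, w, k) are linearly combined with per-instance coefficients, passed through a sigmoid, and cropped to each candidate box.

```python
import torch

def assemble_masks(prototypes, coefficients, boxes):
    """Combine prototype masks with per-instance coefficients: M = sigma(P C^T).

    prototypes:   (h, w, k) prototype masks from the FCN branch
    coefficients: (n, k) mask coefficients from the detection branch
    boxes:        (n, 4) candidate boxes (x1, y1, x2, y2), used to crop each mask
    """
    masks = torch.sigmoid(prototypes @ coefficients.t())  # (h, w, n)
    masks = masks.permute(2, 0, 1)                        # (n, h, w)
    h, w = masks.shape[1:]
    for i, (x1, y1, x2, y2) in enumerate(boxes.long()):
        crop = torch.zeros(h, w)
        crop[y1:y2, x1:x2] = 1.0  # zero out everything outside the candidate box
        masks[i] = masks[i] * crop
    return masks

# Toy usage: k = 32 prototypes of size 138 x 138, two detected instances.
P = torch.randn(138, 138, 32)
C = torch.randn(2, 32)
B = torch.tensor([[10, 10, 60, 80], [40, 50, 120, 130]])
print(assemble_masks(P, C, B).shape)  # torch.Size([2, 138, 138])
```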

2.3. The Modified YOLACT Algorithm Model

In practical applications, a pre-trained network model often cannot handle the target task directly. In image classification, for example, a pre-training dataset such as COCO has 80 categories, while small-scale task datasets often have far fewer, so the network parameters must be fine-tuned, for instance by adjusting the number of categories in the classification layer, to establish a network model suited to the actual task and to formulate task-specific strategies more quickly. Our study modified some training parameters of the original model and introduced a difficult-sample mining mechanism to improve the model’s performance. As shown in Formula (3), LIoU and Lscore represent the position and category errors, respectively.
The classification error Lscore is obtained by subtracting the classification confidence p, the probability value output by the classification layer, from the total confidence of 1; the calculation is shown in Formula (4). LIoU is the target position error, calculated as shown in Formulas (5) and (6).
In Formula (6), o is the intersection over union (IoU), the ratio of the intersection to the union of the real area and the predicted area; the threshold is initially set to 0.5 to determine whether a candidate box covers the target domain. In Formula (5), the deviation of o from this threshold is taken as the main discrimination index, and we introduce the sensitivity coefficient k to sharpen the judgment of challenging samples and make the inter-sample errors that affect detection outcomes more apparent.
In Formula (3), L(o, p) is the final evaluation result, the algebraic sum of the sample’s category and position errors. A threshold interval is set to determine whether a test sample is a difficult sample; the specific process is shown in Figure 4. Each generated sample is judged after the mask is synthesized: if its error value falls within the set interval, the sample is judged to be a difficult sample and is returned, after the feature extraction network, to the detection branch and the prototype branch. This mitigates model overfitting and insufficient data volume and improves the model’s detection accuracy.
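A minimal sketch of this scoring and mining step, under our reading of Formulas (3)–(5) (the sensitivity coefficient k and the interval bounds are illustrative; the paper does not state their values):

```python
import math

def difficulty_score(iou, confidence, k=10.0):
    """Per-sample error L(o, p) combining Formulas (3)-(5).

    iou:        o, intersection over union of predicted and real areas
    confidence: p, the probability output by the classification layer
    k:          sensitivity coefficient sharpening the IoU error around 0.5
    """
    l_score = 1.0 - confidence                       # Formula (4)
    l_iou = 1.0 / (1.0 + math.exp(k * (iou - 0.5)))  # Formula (5)
    return l_iou + l_score                           # Formula (3)

def mine_difficult(samples, low=0.8, high=2.0):
    """Keep samples whose error lies in the threshold interval; these are
    re-fed to the detection and prototype branches during training."""
    return [s for s in samples if low <= difficulty_score(*s) <= high]

# Toy usage: (iou, confidence) pairs for three detections.
batch = [(0.9, 0.95), (0.45, 0.6), (0.2, 0.3)]
print(mine_difficult(batch))  # the two poorer detections are mined
```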

2.4. Database Construction and Data Preprocessing

Breast cancer MRI images were obtained from the Rider Breast MRI public dataset (https://www.cancerimagingarchive.net/, accessed on 20 December 2022); the data type was breast magnetic resonance imaging (MRI) [30]. Among them, 2400 DICOM (DCM) format images of size 288 × 288 were selected as the experimental training set and converted into processable JPG format images. The data were geometrically transformed (rotated, mirrored, cropped, scaled) to expand the set to 6000 images, including difficult positive samples for model testing. Representative MRI images of the breast cancer patients are shown in Figure 5, and part of the preprocessed dataset is shown in Figure 6, in which some of the data contain the lesion area corresponding to the global image.
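A sketch of this preprocessing (an assumed workflow, not the authors’ script): each 288 × 288 DICOM slice is rescaled to 8-bit, saved as JPG, and expanded with the four geometric transforms named above.

```python
import numpy as np
import pydicom
from PIL import Image

def dcm_to_jpg(dcm_path, jpg_path):
    """Read a DICOM slice, rescale its intensities to 8-bit, and save as JPG."""
    pixels = pydicom.dcmread(dcm_path).pixel_array.astype(np.float32)
    pixels = (pixels - pixels.min()) / (np.ptp(pixels) + 1e-8) * 255.0
    Image.fromarray(pixels.astype(np.uint8)).save(jpg_path)

def augment(img):
    """Geometric transforms used to expand the training data."""
    w, h = img.size
    return [
        img.rotate(180),                                     # rotation
        img.transpose(Image.Transpose.FLIP_LEFT_RIGHT),      # horizontal mirror
        img.crop((w // 8, h // 8, 7 * w // 8, 7 * h // 8)),  # crop
        img.resize((w * 2, h * 2)),                          # scale
    ]
```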
The lesion location and shape in the original Rider Breast MRI images were labeled according to the lesion area using the Labelme software (https://sourceforge.net/projects/labelme/, accessed on 20 February 2023); a diagram of the labeled breast cancer lesion areas is shown in Figure 7. The labeled data consisted of 4400 common samples with corresponding lesion labels and 1600 difficult-sample images. All the labeled data were randomly split, with 4800 samples used as the training set and the remaining 1200 as the test set.
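The random split can be reproduced along these lines (file names and seed are hypothetical):

```python
import random

paths = [f"data/breast_{i:04d}.jpg" for i in range(6000)]  # hypothetical file names
random.seed(42)
random.shuffle(paths)
train_set, test_set = paths[:4800], paths[4800:]  # 4800 training / 1200 test
```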
In consideration of both speed and performance, YOLACT employs a ResNet-101 network as the backbone detection network, adopting an FPN as in RetinaNet. The head structure of the prediction network is shown in Figure 8; the three branches generated on the right side output four position values, c category values, and k mask coefficient values.
In training the YOLACT network model, the front-end convolutional layers retain the original feature pyramid structure, and the Tanh activation function, whose range is [−1, 1], continues to be used so that a prototype mask can also be subtracted from the final mask. The function curve is shown in Figure 9.

3. Results

3.1. Data Analysis Environment Construction and Test Results

For the pre-training parameter settings, starting from the initialization weights of the pre-trained network, the user-defined score threshold was set to 0.5. The number of iterations was initially set to 10,000, with breakpoint retraining continued according to the test results to converge the loss value further. The batch size was initially set to four, according to the experimental graphics card, to prevent video memory overflow, and the stride coefficient and padding were kept at 1. The parameter settings are shown in Table S1.
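Collected as a configuration sketch (the key names are ours; the values come from the text and Table S1):

```python
config = {
    "score_threshold": 0.5,    # user-defined confidence threshold
    "max_iterations": 10_000,  # extended via breakpoint retraining if needed
    "batch_size": 4,           # fits the 11 GB of GPU memory
    "stride": 1,
    "padding": 1,
}
```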
The experimental environment was built on the CUDA 10 parallel computing platform and the cuDNN 7.3 deep neural network GPU acceleration library. The functional modules were written in Python, the deep learning framework was PyTorch version 1.0, and the GPU was an NVIDIA RTX 2080 Ti with 11 GB of memory. The detection results of the program run are shown in Figure 10.
As shown in Figure 11, the training and test accuracies of the YOLACT and modified YOLACT algorithm models rise gradually as the number of training iterations increases, and the modified YOLACT algorithm model achieves higher accuracy throughout. This result shows that the modified model has better classification performance.
The resulting confusion matrix for classifying the test sets of common and difficult samples separately using the trained modified YOLACT algorithm model is shown in Figure 12. The ordinate of the confusion matrix represents the real labels, and the abscissa represents the predictive labels of the model. In general, the darker the color of the main diagonal of the confusion matrix, the higher the classification accuracy. According to the confusion matrix, the classification accuracy of the modified YOLACT algorithm model is 98.5% for the common sample test set and 93.6% for the difficult samples.
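How such accuracies are read off a confusion matrix can be sketched as follows (the counts below are invented to reproduce the reported 98.5% and 93.6%; the actual per-class counts are those in Figure 12):

```python
import numpy as np

def accuracy(cm):
    """Overall accuracy = main diagonal / total; rows: true labels, columns: predicted."""
    return np.trace(cm) / cm.sum()

common = np.array([[591, 9], [9, 591]])       # illustrative counts -> 98.5%
difficult = np.array([[281, 19], [19, 281]])  # illustrative counts -> ~93.6%
print(accuracy(common), accuracy(difficult))
```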

3.2. Comparison of the Diagnostic Accuracy among the Three Algorithmic Models for the MRI Images of Breast Cancer

The mean average precision (mAP) is used as the criterion for evaluating the image data analysis results. As shown in Formula (7), the precision rate (P) is the proportion of true positives among all samples predicted as positive by the model, and, as shown in Formula (8), the recall rate (R) is the proportion of true positives among all actual positives, including missed samples. In Formula (9), the average precision (AP) is the mean of the maximum precision rates at different recall levels; the APs obtained are averaged to yield the final mAP.
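A minimal sketch of Formulas (7)–(9) over a confidence-ranked list of detections (the scores and labels are invented):

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """Interpolated AP from confidence-ranked detections.

    scores: detection confidences; is_tp: 1 if the detection matches a
    ground-truth box, else 0; num_gt: number of ground-truth objects.
    """
    order = np.argsort(-np.asarray(scores))
    tp = np.cumsum(np.asarray(is_tp, dtype=float)[order])
    fp = np.cumsum(1.0 - np.asarray(is_tp, dtype=float)[order])
    precision = tp / (tp + fp)  # Formula (7)
    recall = tp / num_gt        # Formula (8)
    # Interpolate: each precision becomes the max precision at any higher
    # recall, then sum precision x recall increments (Formula (9)).
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    return np.sum(np.diff(np.concatenate(([0.0], recall))) * precision)

# mAP is the mean of the per-class APs; one class shown here.
print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1], num_gt=4))  # 0.6875
```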
Table S2 compares the results of the YOLACT algorithm model, the Mask R-CNN algorithm model, and the modified YOLACT algorithm model. The data analysis shows that, compared with the YOLACT algorithm model, the modified YOLACT algorithm model with the difficult-sample mining mechanism improves the detection accuracy on common and difficult sample images by nearly 3%. Compared with Mask R-CNN, it still runs faster, and the difference in recognition accuracy is not obvious.
As shown in Figure 13, the results of the different algorithm models were compared. For most breast cancer MRI image data, all three algorithm models can distinguish whether the detected image is a breast cancer image. The modified YOLACT algorithm model used in this study exhibits higher detection accuracy for images containing lesion areas, and its segmentation results are closer to the labels in the original dataset, so it can better detect the breast cancer lesion area and segment its texture.

4. Conclusions

Among the many imaging techniques, MRI is widely used in the clinical diagnosis of breast cancer for its high resolution and noninvasiveness. It assists radiologists in diagnosis and decision making, greatly improving clinicians’ diagnostic and treatment efficiency [3,6,31]. The concept of deep learning was first proposed by Hinton et al. in 2006 [32]. As an emerging machine learning technology, its motivation is to build neural networks that simulate the human brain for analysis and learning; its essence is to process the observed data in layers, abstracting low-level features into high-level feature representations. The rise of deep learning and its successful application in image recognition, segmentation, labeling, and other tasks challenge traditional machine learning strategies [33]. Its application to the classification and diagnosis of breast cancer MRI images has further improved the early diagnosis rate of breast cancer.
The U-Net algorithm is the deep learning algorithm most commonly used for breast cancer MRI lesion segmentation, and the improved 3D U-Net based on it can significantly improve the diagnostic accuracy of sufficiently trained radiologists [34]. In this study, the YOLACT algorithm model was improved by training on the breast cancer MRI data in the Rider Breast MRI public dataset and by introducing the difficult-sample mining mechanism for difficult sample images, such as unbalanced image samples and images with obscure detection features, yielding the modified YOLACT algorithm model. Compared with the YOLACT algorithm model and the Mask R-CNN algorithm model, the modified YOLACT algorithm model achieves higher accuracy and runs faster.
The main task of breast MRI image classification is to identify benign and malignant images, typically with CNN-based algorithms [35]. In our study, the modified YOLACT algorithm model detected the common and difficult sample images in the test set with good adaptability, and the false-detection rate was significantly reduced, improving the performance of the YOLACT algorithm model in medical image detection and segmentation tasks. Furthermore, Truhn et al. compared artificial neural network (ANN) and CNN algorithms for the benign and malignant diagnosis of breast MRI images and found that, even after tuning the ANN method, the accuracy of the CNN remained higher [36]. Although great success has been achieved in predicting benign and malignant breast tumors by extracting MRI features, these studies rely mainly on semi-automatic feature extraction, and traditional machine learning models still far outnumber deep learning models in this area.
Some researchers have proposed an enhancement method based on multifractal images combined with the edge local contrast of the tissue lesion area to further differentiate benign and malignant breast tumors [37]. Other research teams have used secondary transfer learning to build auxiliary diagnosis systems for breast MRI images, with detection performance improved over mainstream algorithm models [31]. Deep-learning-based detection methods can better capture the multi-scale information of the target image; however, when the target features are insignificant, they are prone to false and missed detections in non-target regions with similar features [38]. For unbalanced image samples and insufficient image feature information, our study introduced the difficult-sample mining method into the YOLACT algorithm model, training on the Rider Breast MRI dataset of breast cancer. Compared with the YOLACT algorithm model, the detection accuracy of the modified YOLACT algorithm model with the difficult-sample mining mechanism on common and difficult sample images improved by nearly 3%; compared with Mask R-CNN, it still runs faster, and the difference in recognition accuracy is not obvious.
With the rapid development of artificial intelligence, computer vision, and related fields, deep learning has been applied to medical image classification and detection with remarkable results [39]. Deep learning is becoming a leading machine learning tool in general imaging and computer vision; given its revolutionary breakthroughs in machine vision and natural language processing and its potential for supplementing image interpretation and enhancing image representation and classification, it can also be widely used in medical image processing [40,41]. CNNs use local connections and weight sharing to take images directly as network input, avoiding the cumbersome feature extraction and data reconstruction of traditional recognition algorithms, enhancing transferability, and offering great advantages in processing medical images [42].
In conclusion, a modified algorithm based on the YOLACT algorithm model was constructed in this study and shown to be superior to the original YOLACT algorithm model, with a classification accuracy of 98.5% on the common sample test set and 93.6% on the difficult sample test set. The modified YOLACT algorithm model improved the diagnostic efficiency for common samples in breast cancer MRI images and showed a good ability to distinguish difficult sample images. Although this study achieved good results in the design and performance evaluation of the modified YOLACT algorithm model, it has the following limitations. First, the breast cancer MRI images used here come from the Rider Breast MRI public dataset; although this dataset is well recognized, the limited data source may reduce the generalization ability of the algorithm, and future research should consider more data sources to improve the model’s stability and generalizability. Second, this study compares the modified YOLACT algorithm model only with YOLACT and Mask R-CNN; although these comparisons demonstrate its advantages, future research should compare it with other advanced algorithms to evaluate its performance more comprehensively. Finally, this study did not compare the results of the modified YOLACT algorithm model with histology, the gold standard for tumor diagnosis; future research should make this comparison to assess the diagnostic accuracy of the algorithm more precisely. The goal of this study was to classify and diagnose difficult breast cancer MRI images, but practical clinical applications may need to consider more factors, such as image differences caused by different MRI equipment and parameter settings, patient age, and medical history. Therefore, more validation and testing are needed before this research result is applied to clinical practice.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics13091582/s1, Table S1. Data analysis and parameter setting; Table S2. Quantitative analysis of the diagnostic accuracy of different algorithm models for MRI images of breast cancer.

Author Contributions

W.W. designed the study. W.W. and Y.W. collated the data, carried out data analyses, and produced the initial draft of the manuscript. W.W. and Y.W. contributed to drafting the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Guizhou Science Support Project (Grant No. 2022-259).

Institutional Review Board Statement

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed Consent Statement

Not applicable.

Data Availability Statement

The article’s data will be shared on reasonable request to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cao, W.; Chen, H.D.; Yu, Y.W.; Li, N.; Chen, W.Q. Changing profiles of cancer burden worldwide and in China: A secondary analysis of the global cancer statistics 2020. Chin. Med. J. 2021, 134, 783–791. [Google Scholar] [CrossRef] [PubMed]
  2. Burstein, H.J.; Curigliano, G.; Thurlimann, B.; Weber, W.P.; Poortmans, P.; Regan, M.M.; Senn, H.J.; Winer, E.P.; Gnant, M.; Panelists of the St Gallen Consensus Conference. Customizing local and systemic therapies for women with early breast cancer: The St. Gallen International Consensus Guidelines for treatment of early breast cancer 2021. Ann. Oncol. 2021, 32, 1216–1235. [Google Scholar] [CrossRef] [PubMed]
  3. Tagliafico, A.S.; Piana, M.; Schenone, D.; Lai, R.; Massone, A.M.; Houssami, N. Overview of radiomics in breast cancer diagnosis and prognostication. Breast 2020, 49, 74–80. [Google Scholar] [CrossRef] [PubMed]
  4. Ito, S.; Ando, K.; Kobayashi, K.; Nakashima, H.; Oda, M.; Machino, M.; Kanbara, S.; Inoue, T.; Yamaguchi, H.; Koshimizu, H.; et al. Automated Detection of Spinal Schwannomas Utilizing Deep Learning Based on Object Detection From Magnetic Resonance Imaging. Spine 2021, 46, 95–100. [Google Scholar] [CrossRef] [PubMed]
  5. Sheth, D.; Giger, M.L. Artificial intelligence in interpreting breast cancer on MRI. J. Magn. Reson. Imaging 2020, 51, 1310–1324. [Google Scholar] [CrossRef]
  6. Leithner, D.; Wengert, G.J.; Helbich, T.H.; Thakur, S.; Ochoa-Albiztegui, R.E.; Morris, E.A.; Pinker, K. Clinical role of breast MRI now and going forward. Clin. Radiol. 2018, 73, 700–714. [Google Scholar] [CrossRef]
  7. Cheng, X.; Chen, C.; Xia, H.; Zhang, L.; Xu, M. 3.0 T Magnetic Resonance Functional Imaging Quantitative Parameters for Differential Diagnosis of Benign and Malignant Lesions of the Breast. Cancer Biother. Radiopharm. 2021, 36, 448–455. [Google Scholar] [CrossRef]
  8. Partridge, S.C.; Demartini, W.B.; Kurland, B.F.; Eby, P.R.; White, S.W.; Lehman, C.D. Differential diagnosis of mammographically and clinically occult breast lesions on diffusion-weighted MRI. J. Magn. Reson. Imaging 2010, 31, 562–570. [Google Scholar] [CrossRef]
  9. Liu, H.; Zhan, H.; Sun, D.; Zhang, Y. Comparison of BSGI, MRI, mammography, and ultrasound for the diagnosis of breast lesions and their correlations with specific molecular subtypes in Chinese women. BMC Med. Imaging 2020, 20, 98. [Google Scholar] [CrossRef]
  10. Conti, A.; Duggento, A.; Indovina, I.; Guerrisi, M.; Toschi, N. Radiomics in breast cancer classification and prediction. Semin. Cancer Biol. 2021, 72, 238–250. [Google Scholar] [CrossRef]
  11. Jiang, Y.; Yang, M.; Wang, S.; Li, X.; Sun, Y. Emerging role of deep learning-based artificial intelligence in tumor pathology. Cancer Commun. 2020, 40, 154–166. [Google Scholar] [CrossRef]
  12. Sakamoto, T.; Furukawa, T.; Lami, K.; Pham, H.H.N.; Uegami, W.; Kuroda, K.; Kawai, M.; Sakanashi, H.; Cooper, L.A.D.; Bychkov, A.; et al. A narrative review of digital pathology and artificial intelligence: Focusing on lung cancer. Transl. Lung Cancer Res. 2020, 9, 2255–2276. [Google Scholar] [CrossRef] [PubMed]
  13. Anwar, S.M.; Majid, M.; Qayyum, A.; Awais, M.; Alnowami, M.; Khan, M.K. Medical Image Analysis using Convolutional Neural Networks: A Review. J. Med. Syst. 2018, 42, 226. [Google Scholar] [CrossRef]
  14. Schwendicke, F.; Golla, T.; Dreher, M.; Krois, J. Convolutional neural networks for dental image diagnostics: A scoping review. J. Dent. 2019, 91, 103226. [Google Scholar] [CrossRef]
  15. Sun, H.; Zheng, X.; Lu, X. A Supervised Segmentation Network for Hyperspectral Image Classification. IEEE Trans. Image Process. 2021, 30, 2810–2825. [Google Scholar] [CrossRef] [PubMed]
  16. Nyabuga, D.O.; Song, J.; Liu, G.; Adjeisah, M. A 3D-2D Convolutional Neural Network and Transfer Learning for Hyperspectral Image Classification. Comput. Intell. Neurosci. 2021, 2021, 1759111. [Google Scholar] [CrossRef] [PubMed]
  17. Segebarth, D.; Griebel, M.; Stein, N.; von Collenberg, C.R.; Martin, C.; Fiedler, D.; Comeras, L.B.; Sah, A.; Schoeffler, V.; Luffe, T.; et al. On the objectivity, reliability, and validity of deep learning enabled bioimage analyses. Elife 2020, 9, e59780. [Google Scholar] [CrossRef]
  18. Wang, S.; Yang, D.M.; Rong, R.; Zhan, X.; Xiao, G. Pathology Image Analysis Using Segmentation Deep Learning Algorithms. Am. J. Pathol. 2019, 189, 1686–1698. [Google Scholar] [CrossRef]
  19. Toprak, A. Extreme Learning Machine (ELM)-Based Classification of Benign and Malignant Cells in Breast Cancer. Med. Sci. Monit. 2018, 24, 6537–6543. [Google Scholar] [CrossRef]
  20. Le, E.P.V.; Wang, Y.; Huang, Y.; Hickman, S.; Gilbert, F.J. Artificial intelligence in breast imaging. Clin. Radiol. 2019, 74, 357–366. [Google Scholar] [CrossRef]
  21. Chan, H.P.; Samala, R.K.; Hadjiiski, L.M. CAD and AI for breast cancer-recent development and challenges. Br. J. Radiol. 2020, 93, 20190580. [Google Scholar] [CrossRef]
  22. Olberg, S.; Zhang, H.; Kennedy, W.R.; Chun, J.; Rodriguez, V.; Zoberi, I.; Thomas, M.A.; Kim, J.S.; Mutic, S.; Green, O.L.; et al. Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy. Med. Phys. 2019, 46, 4135–4147. [Google Scholar] [CrossRef]
  23. Meivel, S.; Sindhwani, N.; Anand, R.; Pandey, D.; Alnuaim, A.A.; Altheneyan, A.S.; Jabarulla, M.Y.; Lelisho, M.E. Mask Detection and Social Distance Identification Using Internet of Things and Faster R-CNN Algorithm. Comput. Intell. Neurosci. 2022, 2022, 2103975. [Google Scholar] [CrossRef] [PubMed]
  24. Shen, L.; Su, J.; Huang, R.; Quan, W.; Song, Y.; Fang, Y.; Su, B. Fusing attention mechanism with Mask R-CNN for instance segmentation of grape cluster in the field. Front. Plant Sci. 2022, 13, 934450. [Google Scholar] [CrossRef] [PubMed]
  25. Zhou, M.; Wang, J.; Li, B. ARG-Mask RCNN: An Infrared Insulator Fault-Detection Network Based on Improved Mask RCNN. Sensors 2022, 22, 4720. [Google Scholar] [CrossRef] [PubMed]
  26. Wang, Y.; Liu, Z.; Deng, W. Anchor Generation Optimization and Region of Interest Assignment for Vehicle Detection. Sensors 2019, 19, 1089. [Google Scholar] [CrossRef] [PubMed]
  27. Mitra, A.; Banerjee, P.S.; Roy, S.; Roy, S.; Setua, S.K. The region of interest localization for glaucoma analysis from retinal fundus image using deep learning. Comput. Methods Program. Biomed. 2018, 165, 25–35. [Google Scholar] [CrossRef]
  28. Chang, C.C.; Wang, Y.P.; Cheng, S.C. Fish Segmentation in Sonar Images by Mask R-CNN on Feature Maps of Conditional Random Fields. Sensors 2021, 21, 7625. [Google Scholar] [CrossRef]
  29. Zheng, Z.; Wang, P.; Ren, D.; Liu, W.; Ye, R.; Hu, Q.; Zuo, W. Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation. IEEE Trans. Cybern. 2022, 52, 8574–8586. [Google Scholar] [CrossRef] [PubMed]
  30. Al-Faris, A.Q.; Ngah, U.K.; Isa, N.A.; Shuaib, I.L. Computer-aided segmentation system for breast MRI tumour using modified automatic seeded region growing (BMRI-MASRG). J. Digit. Imaging 2014, 27, 133–144. [Google Scholar] [CrossRef]
  31. Yu, Y.; Tan, Y.; Xie, C.; Hu, Q.; Ouyang, J.; Chen, Y.; Gu, Y.; Li, A.; Lu, N.; He, Z.; et al. Development and Validation of a Preoperative Magnetic Resonance Imaging Radiomics-Based Signature to Predict Axillary Lymph Node Metastasis and Disease-Free Survival in Patients with Early-Stage Breast Cancer. JAMA Netw. Open 2020, 3, e2028086. [Google Scholar] [CrossRef] [PubMed]
  32. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural. Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef] [PubMed]
  33. Cai, L.; Gao, J.; Zhao, D. A review of the application of deep learning in medical image classification and segmentation. Ann. Transl. Med. 2020, 8, 713. [Google Scholar] [CrossRef] [PubMed]
  34. Yu, Y.; He, Z.; Ouyang, J.; Tan, Y.; Chen, Y.; Gu, Y.; Mao, L.; Ren, W.; Wang, J.; Lin, L.; et al. Magnetic resonance imaging radiomics predicts preoperative axillary lymph node metastasis to support surgical decisions and is associated with tumor microenvironment in invasive breast cancer: A machine learning, multicenter study. EBioMedicine 2021, 69, 103460. [Google Scholar] [CrossRef]
  35. Zakeri, F.S.; Behnam, H.; Ahmadinejad, N. Classification of benign and malignant breast masses based on shape and texture features in sonography images. J. Med. Syst. 2012, 36, 1621–1627. [Google Scholar] [CrossRef]
  36. Truhn, D.; Schrading, S.; Haarburger, C.; Schneider, H.; Merhof, D.; Kuhl, C. Radiomic versus Convolutional Neural Networks Analysis for Classification of Contrast-enhancing Lesions at Multiparametric Breast MRI. Radiology 2019, 290, 290–297. [Google Scholar] [CrossRef]
  37. Reaungamornrat, S.; Sari, H.; Catana, C.; Kamen, A. Multimodal image synthesis based on disentanglement representations of anatomical and modality specific features, learned using uncooperative relativistic GAN. Med. Image Anal. 2022, 80, 102514. [Google Scholar] [CrossRef]
  38. Ancuti, C.O.; Ancuti, C. Single image dehazing by multi-scale fusion. IEEE Trans. Image Process. 2013, 22, 3271–3282. [Google Scholar] [CrossRef]
  39. Phan, M.V.T.; Ngo Tri, T.; Hong Anh, P.; Baker, S.; Kellam, P.; Cotten, M. Identification and characterization of Coronaviridae genomes from Vietnamese bats and rats based on conserved protein domains. Virus Evol. 2018, 4, vey035. [Google Scholar] [CrossRef]
  40. Kriegeskorte, N.; Golan, T. Neural network models and deep learning. Curr. Biol. 2019, 29, R231–R236. [Google Scholar] [CrossRef]
  41. Wu, N.; Phang, J.; Park, J.; Shen, Y.; Huang, Z.; Zorin, M.; Jastrzebski, S.; Fevry, T.; Katsnelson, J.; Kim, E.; et al. Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening. IEEE Trans. Med. Imaging 2020, 39, 1184–1194. [Google Scholar] [CrossRef] [PubMed]
  42. Wang, Z.; Jiang, X.; Liu, J.; Cheng, K.T.; Yang, X. Multi-Task Siamese Network for Retinal Artery/Vein Separation via Deep Convolution Along Vessel. IEEE Trans. Med. Imaging 2020, 39, 2904–2919. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Schematic graph of the model structure of the Mask R-CNN algorithm.
Figure 2. Schematic illustration of the structure of the prototype network.
Figure 3. Schematic illustration of the structure of the YOLACT algorithm model.
Figure 4. Flow chart of the introduction of the difficult-sample mining mechanism.
Figure 5. Representative MRI images of breast cancer patients. (a) A 48-year-old female patient with right breast cancer; (b) a 57-year-old female patient with bilateral breast cancer; (c) a 61-year-old female patient with right breast cancer; (d) a 55-year-old female patient with bilateral breast cancer.
Figure 6. Preprocessing of normal breast samples and breast cancer MRI images, including the image of normal breast samples (a), image rotated 180 degrees (b), horizontal mirror image (c), cropped original image (d), and lesion image (e); image of breast cancer samples (f), image rotated 180 degrees (g), horizontal mirror image (h), cropped original image (i), and lesion image (j).
Figure 7. Breast cancer lesion labeled with the Labelme software. Note: the red area indicates the range of the cancerous lesion in the breast.
Figure 8. The YOLACT algorithm model predicts the network structure of breast cancer.
Figure 9. Tanh function curves of breast cancer MRI images predicted by the YOLACT network model.
Figure 10. Results of the PyTorch program run.
Figure 11. Diagnostic accuracy analysis of the YOLACT and modified YOLACT algorithm models for the training and test sets.
Figure 12. Confusion matrices of the classification results of the modified YOLACT algorithm model on the common and difficult sample test sets.
Figure 13. Detection results of the YOLACT, Mask R-CNN, and modified YOLACT algorithm models. (a,d) YOLACT algorithm model marks the lesion area; (b,e) Mask R-CNN algorithm model marks the lesion area; (c,f) modified YOLACT algorithm model marks the lesion area.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
