Article

Image Translation of Breast Ultrasound to Pseudo Anatomical Display by CycleGAN

Biomedical Engineering Faculty, Technion-Israel Institute of Technology, Haifa 3200001, Israel
* Author to whom correspondence should be addressed.
Bioengineering 2023, 10(3), 388; https://doi.org/10.3390/bioengineering10030388
Submission received: 19 February 2023 / Revised: 15 March 2023 / Accepted: 16 March 2023 / Published: 22 March 2023
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Imaging)

Abstract

Ultrasound imaging is cost-effective, radiation-free, portable, and routinely implemented in clinical procedures. Nonetheless, image quality is characterized by a granulated appearance, a poor SNR, and speckle noise. Specifically for breast tumors, the margins are commonly blurred and indistinct. Thus, there is a need to improve ultrasound image quality. We hypothesize that this can be achieved by translation into a more realistic display, which mimics a pseudo anatomical cut through the tissue, using a cycle generative adversarial network (CycleGAN). In order to train the CycleGAN for this translation, two datasets were used: the “Breast Ultrasound Images” (BUSI) dataset and a set of optical images of poultry breast tissues. The generated pseudo anatomical images provide improved visual discrimination of the lesions through clearer border definition and pronounced contrast. In order to evaluate the preservation of the anatomical features, the lesions in both datasets were segmented and compared. This comparison yielded median Dice scores of 0.91 and 0.70, median center errors of 0.58% and 3.27%, and median area errors of 0.40% and 4.34% for the benign and malignant masses, respectively. In conclusion, the generated pseudo anatomical images provide a more intuitive display, enhance tissue anatomy, preserve tumor geometry, and can potentially improve diagnoses and clinical outcomes.


1. Introduction

Breast cancer is the second most prevalent cancer among women in the United States and the second leading cause of cancer death among women overall. Early diagnosis through screening, alongside treatment advancements, has substantially reduced breast cancer mortality [1]. The screening and diagnosis process is primarily conducted using mammography, ultrasound, and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Each of these modalities has its own unique advantages and uses, depending on the patient’s age, breast tissue density, medical history, and the tumor’s stage and type [2].
Ultrasound is a major tool in breast cancer diagnosis. It is probably the most cost-effective medical imaging modality available today. Owing to its high availability and the fact that it is considered hazardless, it has found applications in almost all fields of medicine. Nonetheless, even though it was introduced to medicine about eighty years ago, and despite the enormous progress in the fields of electronics and computers, one factor has remained unchanged: ultrasound images are characterized by a granulated appearance, a poor SNR, and substantial speckle noise. Furthermore, image quality is subject to the skills of the scanner operator and the manner in which the probe is attached and pressed against the tissue [3]. As a result, anatomical visualization is relatively inferior when compared to mammography or MRI. Consequently, the clinical information is compromised, and image interpretation relies substantially on the skills of the practicing radiologist [4,5].
On the positive side, since the ultrasound acquisition rate enables real-time imaging, ultrasound is commonly used during surgery to bridge the gap between the preliminary radiological images acquired before surgery and the actual area of tissue operated on [6,7,8]. Still, there is often a challenge in identifying the corresponding pathology of interest in the anatomical cut. The difficulty stems mainly from two sources. The first is a potential mismatch between the spatial orientation of the preoperative image and the actual cut, since the patient’s physical position at the time of image acquisition may differ from that during surgery. Regarding this problem, ultrasound offers an advantage over other imaging modalities. Since ultrasound scans are conducted with a handheld probe, they can yield arbitrary cross-sectional images from various directions in real time. This may enable the surgeon to find the best match with the preoperative radiological information during the surgery. The second challenge is the substantial difference in appearance between the black and white radiological image and the texture and color of the operated tissue. The remedy for that could be a style of display which resembles the anatomical appearance more closely.
Another significant challenge faced during surgery is ensuring the complete removal of all the tumor tissue(s). This is typically achieved by cutting a margin of healthy tissue surrounding the tumor. A positive margin, i.e., residual tumor tissue that remains in the body after the initial surgical resection, is a very important factor that determines the success of the surgery. Intraoperative margin assessments encompass a range of techniques aimed at providing information on the composition of the tissue and discriminating between normal and malignant tissue during an operation. One such approach is spectroscopy, which evaluates biochemical differences by analyzing the aerosol released during electrosurgery [9] or by measuring the frequency shift of light scattered by a focused laser [10]. Another method involves determining tissues’ electrical properties through measurement of their resistance [11]. Molecular imaging, which entails the detection of specific contrast agents or dyes, is also utilized for intraoperative margin assessment [12,13]. Following the surgical procedure, the evaluation of margins is customarily carried out through histopathology [14] to examine the excised tissue and guarantee the presence of adequate healthy margins.
Intraoperative ultrasound offers another option. It enables mapping the margins around the tissue in situ and examining the excised tissue. Detecting positive margins in situ with ultrasound enables the surgeon to make informed decisions regarding additional resection procedures in real time, if needed [6]. In addition, the ultrasound probe can be attached directly to the excised tissue immediately after removal from the breast, and the displayed images can be used for measuring the sizes and locations of the healthy margins. This enables real-time margin assessment within the operating room. Thus, if needed, more tissue can be removed. This can prevent cases where close or positive margins are identified post-surgery, which mostly leads to additional surgeries, treatment delay, and increased patient anxiety [15]. In this context as well, a display that resembles an actual anatomical cut can be advantageous.
The aim of this study is to harness the numerous advantages of ultrasound imaging mentioned above, both pre- and intra-operatively, while addressing its primary drawbacks relative to other imaging modalities, namely, the poor quality and interpretability of the resultant images. To bridge this gap, our research focused on generating a more intuitive representation of ultrasound images that would facilitate their interpretation by the operators while preserving their anatomical information. An overview of the advantages and disadvantages of ultrasound imaging is presented in Table 1.
In recent years, attempts have been made to develop methods for translating medical images into a more convenient form. Examples include translating point-of-care ultrasound to higher-quality ultrasound [16], translation between different MRI protocols [17], and cross-modality translations, such as ultrasound to MRI [18] and MRI to CT [19]. Further, ultrasound image visualization has been improved by using CT as a reference [20]. Image translation between different domains can be performed by implementing one of several methods [21], most of which are based on generative adversarial networks (GANs) [22] or on variational autoencoders (VAEs) [23]. These methods can broadly be categorized into supervised and unsupervised methods. Supervised methods require paired images for training, whereas unsupervised methods do not [21]. One such translation method is the cycle generative adversarial network (CycleGAN). This unsupervised method is capable of learning the mapping between two image domains without the need for paired training data.
In this work, we aimed to explore a different approach by creating a novel visual representation of the tumor shape. It is postulated here that a transformation into a more realistic and intuitive display will be beneficial, by allowing a more straightforward interpretation. As a demonstration of feasibility, we chose to transform the ultrasonic images into a pseudo anatomical display, which seems more natural to comprehend. This form of representation, with colors and clearer borders, has the potential to be more intuitive for surgeons, especially those who are less experienced in the field. It could potentially simplify the interpretation of ultrasound findings and improve the surgical procedure. Since, in our case, it was physically impossible to produce perfectly matched image pairs for training the network, we hypothesized that this could be achieved using CycleGAN.

2. Materials and Methods

2.1. Cycle Generative Adversarial Network (CycleGAN)

In order to create a transformation between two image domains using AI, it is customary to assemble a large number of paired images to train the network. This requires that each pair of images be acquired under exactly the same conditions and views. Evidently, due to the operator-dependent manner of ultrasonic image acquisition, it may be impractical to obtain matching pairs of images that correspond exactly to the same anatomical cross-sections. However, the introduction of CycleGAN [24] has laid the foundations for a new approach. CycleGAN, which has already been applied in several studies in the medical field [18,25,26], learns the features of each image domain separately and enforces similarity across domains. This, therefore, enables us to overcome the lack of paired-image datasets (i.e., ultrasound vs. optical/anatomical). Another notable advantage of CycleGAN is its translation rapidity: once the neural network is trained, the inference time is a fraction of a second, so the translated images can be displayed alongside the ultrasound images in real time.
The CycleGAN architecture comprises two generative adversarial network (GAN) units, one for each image domain. As shown in Figure 1, each GAN model has a generator and a discriminator, which are trained simultaneously. The generator attempts to produce from the input image domain a realistic display which fits the target domain as closely as possible. The discriminator, on the other hand, examines the features of the produced image and decides whether or not it belongs to the target domain. According to the discriminator’s feedback, the generator tries to improve the generated image. In parallel, the discriminator receives actual images belonging to the target domain and improves its discrimination capability. Thus, the two networks enforce mutual improvement in an adversarial manner.
In the context of this work, one generator was trained to produce pseudo anatomical displays ($G_{PA}$) from ultrasonic images, and its corresponding discriminator ($D_{PA}$) was trained to distinguish real from generated synthetic images, as shown in Figure 1a. In an alternating fashion, the second generator was trained to produce ultrasound displays ($G_{US}$) from optical anatomical images, while its corresponding discriminator ($D_{US}$) was trained simultaneously (see Figure 1b).
In mathematical terms, the real ultrasonic image is denoted by $x$ and the real pseudo anatomical image by $y$. Accordingly, the adversarial loss for each of the GAN units is calculated by [24]:

$$\mathcal{L}_{GAN}^{PA}(G_{PA}, D_{PA}) = \mathbb{E}_{y \sim p_{data}(y)}\left[\log D_{PA}(y)\right] + \mathbb{E}_{x \sim p_{data}(x)}\left[\log\left(1 - D_{PA}(G_{PA}(x))\right)\right]$$

$$\mathcal{L}_{GAN}^{US}(G_{US}, D_{US}) = \mathbb{E}_{x \sim p_{data}(x)}\left[\log D_{US}(x)\right] + \mathbb{E}_{y \sim p_{data}(y)}\left[\log\left(1 - D_{US}(G_{US}(y))\right)\right]$$

where $\mathcal{L}_{GAN}^{PA}$ and $\mathcal{L}_{GAN}^{US}$ are the loss functions of the pseudo anatomical and ultrasound GAN units, respectively, and $\mathbb{E}_{x \sim p_{data}(x)}$ and $\mathbb{E}_{y \sim p_{data}(y)}$ denote expectations over the corresponding data distributions.
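For concreteness, a minimal PyTorch sketch of the pseudo anatomical adversarial term is given below; the ultrasound-side term is symmetric. The function and tensor names (`G_PA`, `D_PA`, and the batches `x_us`, `y_pa`) are our own, the discriminator is assumed to output raw logits, and the log-likelihood terms are implemented with binary cross-entropy. Note that the reference CycleGAN implementation [24] uses a least-squares GAN loss in practice; this sketch follows the log form of the equations above.

```python
import torch
import torch.nn.functional as F

def adversarial_losses_pa(G_PA, D_PA, x_us, y_pa):
    """Discriminator and generator terms of L_GAN^PA (sketch)."""
    fake_pa = G_PA(x_us)                                  # G_PA(x)

    # Discriminator: maximize log D_PA(y) + log(1 - D_PA(G_PA(x)))
    real_logits = D_PA(y_pa)
    fake_logits = D_PA(fake_pa.detach())                  # block gradients to the generator
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))

    # Generator: fool the discriminator (non-saturating form of the same objective)
    gen_logits = D_PA(fake_pa)
    g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    return d_loss, g_loss
```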
In addition, the networks were trained to maintain cycle-consistent translation, i.e., to minimize the difference between the original image and its reconstruction. In order to enforce this similarity across domains, the input image was passed through one generator and then through the second generator, and the discrepancy between the real input image and the reconstructed image was minimized.
Within this framework, the cycle consistency is determined by the translation of ultrasonic images to anatomical displays and back to ultrasonic images, or from anatomical images to ultrasonic displays and back. These cycle-consistent translations are depicted visually by the green arrows in Figure 1. Maintaining cycle consistency is important, as it ensures that the networks are able to preserve the original information and details of the images during the translation process. The cycle reconstruction loss ( L c y c l e ) was calculated by:
$$\mathcal{L}_{cycle}(G_{PA}, G_{US}) = \mathbb{E}_{x \sim p_{data}(x)}\left[\left\| G_{US}(G_{PA}(x)) - x \right\|_1\right] + \mathbb{E}_{y \sim p_{data}(y)}\left[\left\| G_{PA}(G_{US}(y)) - y \right\|_1\right]$$
In the original CycleGAN paper [24], the researchers proposed using an identity loss to preserve pixel colors throughout the image translation. However, in our case, the two domains differ substantially in their color scales and contrast properties: while the tumors appear relatively white to the human eye, they appear as black regions in ultrasound images. In order to account for this opposite contrast between the two domains, i.e., to encourage the translation of black-colored ultrasound masses into light-colored optical masses, the loss term was modified to take the negative value of the input images; i.e.,
$$\mathcal{L}_{opposite}(G_{PA}, G_{US}) = \mathbb{E}_{x \sim p_{data}(x)}\left[\left\| G_{US}(\bar{x}) - x \right\|_1\right] + \mathbb{E}_{y \sim p_{data}(y)}\left[\left\| G_{PA}(\bar{y}) - y \right\|_1\right]$$
where $\bar{x}$ and $\bar{y}$ are the negative images of $x$ and $y$, respectively.
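A sketch of the cycle and opposite terms follows, assuming image tensors scaled to [0, 1] so that the negative image is simply 1 minus the image; the generator handles are the same illustrative ones as above.

```python
def cycle_and_opposite_losses(G_PA, G_US, x_us, y_pa):
    """L1 cycle-consistency and 'opposite' identity losses (sketch)."""
    # Cycle consistency: US -> PA -> US should reconstruct x, and vice versa
    l_cycle = ((G_US(G_PA(x_us)) - x_us).abs().mean()
               + (G_PA(G_US(y_pa)) - y_pa).abs().mean())

    # Opposite loss: a generator fed the negative image should recover the
    # original, pushing dark ultrasound masses toward bright optical masses
    l_opposite = ((G_US(1.0 - x_us) - x_us).abs().mean()
                  + (G_PA(1.0 - y_pa) - y_pa).abs().mean())
    return l_cycle, l_opposite
```

Under our reading of Section 2.4, the total generator objective would then combine the terms roughly as `L_GAN + 10.0 * l_cycle + 0.03 * l_opposite`.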

2.2. BUSI Dataset

The public “Breast Ultrasound Images” (BUSI) dataset, collected from women between 25 and 75 years old [27], was used. The data were collected in 2018 from 600 female patients and consist of 780 images classified into three groups: (i) 133 normal images without masses, (ii) 437 images with benign masses, and (iii) 210 images with malignant masses. The images were acquired with a LOGIQ E9 ultrasound system and include manually traced masks reflecting the radiologist’s evaluation. The images are in PNG format, vary in height and width, and have an average size of 600 × 500 pixels. In our study, the data were preprocessed by removing text and labels that were not part of the original ultrasound images and cropping them into partially overlapping square patches of 450 × 450 pixels.
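The cropping step might be implemented along the following lines; the 450-pixel patch size is as reported, whereas the stride (i.e., the degree of overlap) is an assumption, since it is not specified.

```python
import numpy as np

def crop_overlapping_patches(img: np.ndarray, size: int = 450, stride: int = 225) -> list:
    """Crop an image into partially overlapping size x size patches.

    Assumes the image is at least `size` pixels along each axis; a stride
    of size/2 (50% overlap) is an illustrative choice.
    """
    h, w = img.shape[:2]
    patches = []
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            patches.append(img[top:top + size, left:left + size])
    return patches
```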

2.3. Optic/Anatomic Dataset

To demonstrate feasibility, a set of optical images was collected from poultry breast-tissue samples. To simulate abnormal tissue, small regions were thermally etched; the etched texture resembles the appearance of breast masses (see, for example, [28,29,30,31]). In order to simulate the typical decay of the ultrasound signal with depth, the lower part of each image contained a black background. The sizes, shapes, and locations of the etched simulated tumors were created so as to resemble the relative distribution of the abnormal masses in the BUSI ultrasound images.

2.4. Training Parameters

In order to train the model, the dataset, after being cropped into partially overlapping square patches, was divided into training, validation, and test subsets. The training subset consisted of 80% of the images, 5% were used for validation, and the remainder were used for testing the performance of the suggested method. The relative proportions of malignant, benign, and normal cases were maintained in the validation and test subsets as well. The network architecture was based on the CycleGAN architecture provided by [24]. The hyperparameters were tuned on the validation set; the cycle loss was weighted 10× relative to the GAN loss, and the opposite loss was assigned a weight of 0.03. The model was trained on an NVIDIA Tesla V100 GPU (Petach Tikva, Israel) running Linux. Training computation time was approximately 36 h.
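A stratified split along these lines could look as follows (a sketch; the file list, labels, and random seed are placeholders, and scikit-learn’s stratified splitting stands in for whatever procedure was actually used):

```python
from sklearn.model_selection import train_test_split

# Placeholder inputs: one entry per image, with its class label
files = [f"image_{i:04d}.png" for i in range(780)]
labels = ["normal"] * 133 + ["benign"] * 437 + ["malignant"] * 210

# 80% train; the remaining 20% is split into 5% validation / 15% test,
# stratified so that class proportions are preserved in every subset
train_f, rest_f, train_y, rest_y = train_test_split(
    files, labels, train_size=0.80, stratify=labels, random_state=0)
val_f, test_f, val_y, test_y = train_test_split(
    rest_f, rest_y, train_size=0.25, stratify=rest_y, random_state=0)
```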

2.5. Automated Segmentation

In order to evaluate the quality of the generated images, image segmentation was performed and compared to the BUSI traced masks. The optical images were segmented by applying morphological geodesic active contours (MorphGAC) [32] using the scikit-image implementation [33]. As a preprocessing step to highlight the edges, the inverse Gaussian gradient (IGG) was applied. To control the steepness of the inversion, the alpha parameter was set to 100, and the standard deviation of the Gaussian filter (sigma) was set to 1.5, for both the ultrasonic and the pseudo anatomical display images.
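Using the scikit-image functions cited above, the segmentation step might look as follows; alpha = 100 and sigma = 1.5 are the reported values, while the iteration count, the initial level set, and the smoothing/balloon settings are our assumptions:

```python
import numpy as np
from skimage import io, img_as_float
from skimage.color import rgb2gray
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

# Load a generated pseudo anatomical image (hypothetical file name)
image = img_as_float(rgb2gray(io.imread("pseudo_anatomical.png")[..., :3]))

# Inverse Gaussian gradient: values fall toward 0 near strong edges
gimage = inverse_gaussian_gradient(image, alpha=100, sigma=1.5)

# Initial level set: a rectangle inset from the border, contracted onto
# the lesion by a negative balloon force
init = np.zeros(image.shape, dtype=np.int8)
init[10:-10, 10:-10] = 1

mask = morphological_geodesic_active_contour(
    gimage, num_iter=200, init_level_set=init, smoothing=2, balloon=-1)
```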
Since the BUSI tracing was performed manually, it inherently included inconsistencies. In order to overcome this problem, the generated images were also compared to masks automatically re-segmented from the ultrasound images with the MorphGAC algorithm. The code for the segmentation graphical user interface implementation is available at https://github.com/LilachBarkat/MorphGAC-Segmentation-GUI (accessed on 27 May 2022).

2.6. Evaluation Protocol

Since an image interpretability comparison is ultimately subjective, and since evaluation metrics such as the structural similarity index (SSIM), which are commonly used in image reconstruction quality analysis, are irrelevant in our case, other indices that are more indicative of the clinical merit were applied. The performance of the method was evaluated by comparing the optical segmentation mask to both the original BUSI reference masks and the BUSI masks re-segmented by the MorphGAC algorithm. Three metrics were used to evaluate the quality of the segmentation results. The first index, used for contour evaluation, was the Dice index, which assesses shape similarity and is defined as [34]:
$$\text{Dice} = \frac{2 \times TP}{(TP + FP) + (TP + FN)}$$
where TP, FP, and FN are the true positive, false positive, and false negative pixels, respectively. Positive means within the lesion mask, and negative means outside the mask.
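In code, the index reduces to a few lines over boolean masks; a minimal sketch:

```python
import numpy as np

def dice_score(ref: np.ndarray, gen: np.ndarray) -> float:
    """Dice index between reference and generated lesion masks."""
    ref, gen = ref.astype(bool), gen.astype(bool)
    tp = np.logical_and(ref, gen).sum()        # pixels inside both masks
    denom = ref.sum() + gen.sum()              # (TP + FP) + (TP + FN)
    return 2.0 * tp / denom if denom else 1.0  # two empty masks agree perfectly
```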
In addition, the error in locating the center of mass of the lesion is defined as:
$$\text{Center error} = \frac{\sqrt{\left(C_x^{Reference} - C_x^{Generated}\right)^2 + \left(C_y^{Reference} - C_y^{Generated}\right)^2}}{\sqrt{2}\, a}$$
where $C_x^{Reference}$ and $C_x^{Generated}$ are the $x$ coordinates of the centers of mass of the reference and generated masks, respectively, and $C_y^{Reference}$ and $C_y^{Generated}$ are the corresponding $y$ coordinates. $\sqrt{2}\,a$ is a normalizing size factor corresponding to the diagonal length of the square $a \times a$ image (the largest possible dimension in the image).
The area index of the lesion is defined as:
$$\text{Area index} = \frac{\left| S^{Reference} - S^{Generated} \right|}{a^2}$$
where $S^{Reference}$ and $S^{Generated}$ are the segmented lesion areas of the reference and generated masks, respectively, and $a^2$ normalizes by the image area.
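Both geometric indices can be computed as follows, a sketch assuming square $a \times a$ boolean masks and using SciPy’s center-of-mass routine:

```python
import numpy as np
from scipy import ndimage

def center_and_area_errors(ref: np.ndarray, gen: np.ndarray, a: int = 450):
    """Center error and area index (both in %) for a x a lesion masks."""
    cy_r, cx_r = ndimage.center_of_mass(ref)
    cy_g, cx_g = ndimage.center_of_mass(gen)
    center_error = np.hypot(cx_r - cx_g, cy_r - cy_g) / (np.sqrt(2) * a)
    area_index = abs(int(ref.sum()) - int(gen.sum())) / a ** 2
    return 100 * center_error, 100 * area_index
```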

3. Results

A set of exemplary images translated from ultrasound to colored pseudo anatomical displays is depicted in Figure 2. As can be observed, the pseudo anatomical display provides superior visual discrimination of the lesion, including much clearer border definition and enhanced contrast, especially for the malignant tumors (see Figure 2C,D). Furthermore, the tissue texture is more vivid and realistic. Importantly, in most cases, the algorithm manages to overcome the acoustic artifacts commonly appearing in ultrasonic images, such as acoustic shadowing (as can clearly be observed in Figure 2A), acoustic enhancement of the tissue posterior to the tumor (Figure 2A,B), and the signal decay with depth at the bottom of the image (Figure 2A). These distinctions highlight the superiority of the optical representation over the ultrasound images, particularly in capturing and representing fine details.
Although the algorithm was successful in most cases, there were instances where its effectiveness was reduced. One of the difficulties encountered was the depiction of erroneous lesions in normal tissues, leading to false-positive readings. This was due to the presence of an area darker than the surrounding tissue, as demonstrated, for example, in Figure 3C. This issue poses a challenge not only for the algorithm, but also for radiologists, who have to use multiple cross-sectional views from various directions to overcome this problem. Additionally, there were a few mistranslations of acoustic artifacts, as depicted in Figure 3A.
Nonetheless, part of the discrepancy can presumably be attributed to inconsistency in the BUSI tracing of “difficult to trace” ultrasonic images, as depicted, for example, in Figure 3B. The variability in the manner in which radiologists segment the tumor results in an inconsistent segmentation process. This is caused by factors such as the indistinct contour of the tumor, the subjectivity of manual segmentation, and the diversity in radiologists’ expertise and interpretation. These factors collectively contribute to the observed discrepancy.
This is also demonstrated, for example, in Figure 4, where three substantially different tracings are presented for the same tumor (example taken from “malignant 3–9” in BUSI). To overcome the variations stemming from the manual tracing, the ultrasonic images were re-segmented using the MorphGAC algorithm, which yielded better consistency.
The overall estimation of the performance based on the criteria listed above is outlined in Table 2 and graphically depicted in Figure 5. As can be observed, the performance is better for benign tumors, which can be attributed to their more regular shapes. In contrast, for the malignant tumors, whose geometry is more irregular and whose borders are commonly blurred, it was more difficult for both the model and the MorphGAC algorithm to accurately segment the lesions.
Upon studying the distributions of the quality indices more closely, the area index and the center-of-mass error were seen to share similar behavior. For the benign tumors, both error indices are close to zero with a narrow distribution. In contrast, the distribution for the malignant tumors is wider. Although most of the masks for the generated images have a small detection error, some cases yielded significantly different masks. Examining the Dice-score distribution, for the benign tumors, the re-segmented masks have a higher median score of 0.91, compared to 0.85 for the manually segmented BUSI masks. For the malignancies, the distribution is wider; the re-segmentation median score is 0.70, compared to 0.58 for the manually segmented BUSI masks.

4. Discussion

Ultrasound is a common and affordable imaging technique that provides real-time imaging without the use of harmful ionizing radiation. It is often used in real time during surgery, but its images can be challenging to interpret due to their relatively poor quality. The obtained images are characterized by a granulated appearance, a low signal-to-noise ratio, and substantial speckle noise. Furthermore, their quality is highly dependent on the operator’s skills.
The ultrasound probe enables the acquisition of cross-sectional images in arbitrary spatial orientations and offers real-time capabilities. Consequently, ultrasound is used in image-guided interventions and surgeries. Nonetheless, interpreting ultrasound images in the anatomical/pathological context can still be difficult for surgeons. This is due to the differences between the black and white images and the appearance of the actual tissue during surgery, and, in some cases, the indistinct mass margins. These variations pose a challenge in finding and identifying all the abnormal tissue.
To address this problem, we harnessed the great progress in the field of neural networks for image-to-image translation. There are already applications in the medical field for cross-modality translations. In this work, we chose to approach this issue from a new perspective and translated the ultrasound image into an optical image that would resemble the appearance of the actual tissue. However, due to the inherent limitations of producing corresponding paired images of ultrasound and optical images, we chose to employ a CycleGAN model for this task. This type of network is capable of learning each domain separately and is able to produce unique translations for each image.
To translate between two image domains, there must be sufficient similarity between them. To demonstrate the feasibility of our approach, we generated pseudo anatomical images with similar tumor shapes and sizes and a black background that mimics the acoustic shadow seen in ultrasound images. To improve the translation results, we incorporated a constraint into the loss function to convert the black appearance of the tumors in the ultrasound display into white objects in the optical images of the tissue. We evaluated the translation by examining the accuracy of tumor location, area, and contours. Our results (Figure 5 and Table 2) show that the translation maintains the geometrical information.
As for the limitations of the suggested method, it is noted that the optical images used for training in this study were based on etched poultry breast-tissue samples which do not necessarily resemble actual cuts observed during surgery. Although the simulated lesions were prepared to resemble the geometrical features of real breast tumors in terms of size and shape, optimal results presumably can be achieved by using actual optical images of anatomical cuts through breast-tumor tissues acquired during surgery. Nonetheless, this study clearly demonstrated the potential that can be achieved by translating ultrasound images into optical images of human tissue during surgery.
In conclusion, in this work, a method for image translation from ultrasound into a pseudo anatomical display was presented. This image-translation method yields a more vivid and realistic tissue-texture presentation which potentially can enable a more straightforward comparison to the real anatomy. Furthermore, as can be observed, the pseudo anatomical images provide superior visual discrimination of the lesions. The borders are well delineated, and the contrast clarity is substantially improved.
The main contribution of this study was to demonstrate the feasibility of this translation while preserving the anatomical content. It can serve as an ancillary tool by providing a simultaneous depiction of a real-time ultrasound image alongside a pseudo anatomical representation. Although this study may not yield substantial advantages for medical practitioners proficient in breast ultrasound, it offers considerable benefits to those who lack experience in the field. Additionally, this representation can be helpful for medical training and practice. Overall, our proposed method could potentially lead to faster and better diagnoses and improved clinical outcomes.

Author Contributions

Conceptualization, L.B., M.F. and H.A.; Methodology, L.B., M.F. and H.A.; Software, L.B.; Validation, L.B.; Data curation, L.B.; Writing—original draft, L.B.; Writing—review & editing, L.B., M.F. and H.A.; Supervision, M.F. and H.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors thank the Israeli Ministry of Innovation, Science and Technology for its financial support (Grant #880011).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available at [27].

Acknowledgments

The authors thank Aharon Alfasi for his technical support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Giaquinto, A.N.; Sung, H.; Miller, K.D.; Kramer, J.L.; Newman, L.A.; Minihan, A.; Jemal, A.; Siegel, R.L. Breast cancer statistics, 2022. CA A Cancer J. Clin. 2022, 72, 524–541.
  2. Gilbert, F.J.; Pinker-Domenig, K. Diagnosis and staging of breast cancer: When and how to use mammography, tomosynthesis, ultrasound, contrast-enhanced mammography, and magnetic resonance imaging. In Diseases of the Chest, Breast, Heart and Vessels 2019–2022: Diagnostic and Interventional Imaging; Springer: Cham, Switzerland, 2019; pp. 155–166.
  3. Na, S.P.; Houserkovaa, D. The role of various modalities in breast imaging. Biomed. Pap. Med. Fac. Univ. Palacky Olomouc Czech Repub. 2007, 151, 209–218.
  4. Smith, N.B.; Webb, A. Introduction to Medical Imaging: Physics, Engineering and Clinical Applications; Cambridge University Press: Cambridge, UK, 2010.
  5. Azhari, H.; Kennedy, J.A.; Weiss, N.; Volokh, L. From Signals to Image; Springer: Berlin/Heidelberg, Germany, 2020.
  6. Ngô, C.; Pollet, A.G.; Laperrelle, J.; Ackerman, G.; Gomme, S.; Thibault, F.; Fourchotte, V.; Salmon, R.J. Intraoperative ultrasound localization of nonpalpable breast cancers. Ann. Surg. Oncol. 2007, 14, 2485–2489.
  7. Colakovic, N.; Zdravkovic, D.; Skuric, Z.; Mrda, D.; Gacic, J.; Ivanovic, N. Intraoperative ultrasound in breast cancer surgery—From localization of non-palpable tumors to objectively measurable excision. World J. Surg. Oncol. 2018, 16, 1–7.
  8. Van Velthoven, V. Intraoperative Ultrasound Imaging: Comparison of Pathomorphological Findings in US versus CT, MRI and Intraoperative Findings. Acta Neurochir. Suppl. 2003, 85, 95–99.
  9. Balog, J.; Sasi-Szabó, L.; Kinross, J.; Lewis, M.R.; Muirhead, L.J.; Veselkov, K.; Mirnezami, R.; Dezső, B.; Damjanovich, L.; Darzi, A.; et al. Intraoperative tissue identification using rapid evaporative ionization mass spectrometry. Sci. Transl. Med. 2013, 5, 194ra93.
  10. Haka, A.S.; Volynskaya, Z.; Gardecki, J.A.; Nazemi, J.; Shenk, R.; Wang, N.; Dasari, R.R.; Fitzmaurice, M.; Feld, M.S. Diagnosing breast cancer using Raman spectroscopy: Prospective analysis. J. Biomed. Opt. 2009, 14, 054023.
  11. Dixon, J.M.; Renshaw, L.; Young, O.; Kulkarni, D.; Saleem, T.; Sarfaty, M.; Sreenivasan, R.; Kusnick, C.; Thomas, J.; Williams, L. Intra-operative assessment of excised breast tumour margins using ClearEdge imaging device. Eur. J. Surg. Oncol. (EJSO) 2016, 42, 1834–1840.
  12. Mondal, S.B.; Gao, S.; Zhu, N.; Habimana-Griffin, L.; Akers, W.J.; Liang, R.; Gruev, V.; Margenthaler, J.; Achilefu, S. Optical see-through cancer vision goggles enable direct patient visualization and real-time fluorescence-guided oncologic surgery. Ann. Surg. Oncol. 2017, 24, 1897–1903.
  13. Smith, B.L.; Gadd, M.A.; Lanahan, C.R.; Rai, U.; Tang, R.; Rice-Stitt, T.; Merrill, A.L.; Strasfeld, D.B.; Ferrer, J.M.; Brachtel, E.F.; et al. Real-time, intraoperative detection of residual breast cancer in lumpectomy cavity walls using a novel cathepsin-activated fluorescent imaging system. Breast Cancer Res. Treat. 2018, 171, 413–420.
  14. Scimone, M.T.; Krishnamurthy, S.; Maguluri, G.; Preda, D.; Park, J.; Grimble, J.; Song, M.; Ban, K.; Iftimia, N. Assessment of breast cancer surgical margins with multimodal optical microscopy: A feasibility clinical study. PLoS ONE 2021, 16, e0245334.
  15. Schwarz, J.; Schmidt, H. Technology for intraoperative margin assessment in breast cancer. Ann. Surg. Oncol. 2020, 27, 2278–2287.
  16. Jafari, M.H.; Girgis, H.; Van Woudenberg, N.; Moulson, N.; Luong, C.; Fung, A.; Balthazaar, S.; Jue, J.; Tsang, M.; Nair, P.; et al. Cardiac point-of-care to cart-based ultrasound translation using constrained CycleGAN. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 877–886.
  17. Yang, Q.; Li, N.; Zhao, Z.; Fan, X.; Chang, E.I.; Xu, Y. MRI cross-modality image-to-image translation. Sci. Rep. 2020, 10, 1–18.
  18. Jiao, J.; Namburete, A.I.; Papageorghiou, A.T.; Noble, J.A. Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis. IEEE Trans. Med. Imaging 2020, 39, 4413–4424.
  19. Kearney, V.; Ziemer, B.P.; Perry, A.; Wang, T.; Chan, J.W.; Ma, L.; Morin, O.; Yom, S.S.; Solberg, T.D. Attention-aware discrimination for MR-to-CT image translation using cycle-consistent generative adversarial networks. Radiol. Artif. Intell. 2020, 2, e190027.
  20. Vedula, S.; Senouf, O.; Bronstein, A.M.; Michailovich, O.V.; Zibulevsky, M. Towards CT-quality ultrasound imaging using deep learning. arXiv 2017, arXiv:1710.06304.
  21. Pang, Y.; Lin, J.; Qin, T.; Chen, Z. Image-to-image translation: Methods and applications. IEEE Trans. Multimed. 2021, 24, 3859–3881.
  22. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
  23. Rezende, D.J.; Mohamed, S.; Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 21–26 June 2014; JMLR: Brookline, MA, USA, 2014; pp. 1278–1286.
  24. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2223–2232.
  25. Hiasa, Y.; Otake, Y.; Takao, M.; Matsuoka, T.; Takashima, K.; Carass, A.; Prince, J.L.; Sugano, N.; Sato, Y. Cross-modality image synthesis from unpaired data using CycleGAN. In Simulation and Synthesis in Medical Imaging: Third International Workshop, SASHIMI 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 16 September 2018; Springer: Cham, Switzerland, 2018; pp. 31–41.
  26. Modanwal, G.; Vellal, A.; Buda, M.; Mazurowski, M.A. MRI image harmonization using cycle-consistent generative adversarial network. In Proceedings of the SPIE Medical Imaging 2020: Computer-Aided Diagnosis, Houston, TX, USA, 15–20 February 2020; Volume 11314, p. 1131413.
  27. Al-Dhabyani, W.; Gomaa, M.; Khaled, H.; Fahmy, A. Dataset of breast ultrasound images. Data Brief 2020, 28, 104863.
  28. Glassman, L.; Hazewinkel, M. The Radiology Assistant: MRI of the Breast. 2013. Available online: https://radiologyassistant.nl/breast/mri/mri-of-the-breast (accessed on 5 October 2021).
  29. Samardar, P.; de Paredes, E.S.; Grimes, M.M.; Wilson, J.D. Focal asymmetric densities seen at mammography: US and pathologic correlation. Radiographics 2002, 22, 19–33.
  30. Franquet, T.; De Miguel, C.; Cozcolluela, R.; Donoso, L. Spiculated lesions of the breast: Mammographic-pathologic correlation. Radiographics 1993, 13, 841–852.
  31. Sheppard, D.G.; Whitman, G.J.; Fornage, B.D.; Stelling, C.B.; Huynh, P.T.; Sahin, A.A. Tubular carcinoma of the breast: Mammographic and sonographic features. Am. J. Roentgenol. 2000, 174, 253–257.
  32. Marquez-Neila, P.; Baumela, L.; Alvarez, L. A morphological approach to curvature-based evolution of curves and surfaces. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 2–17.
  33. Lee, G. Morphsnakes (Version 0.19.3). 2021. Available online: https://github.com/scikit-image/scikit-image/blob/v0.19.3/skimage/segmentation/morphsnakes.py (accessed on 21 March 2023).
  34. Dice, L.R. Measures of the amount of ecologic association between species. Ecology 1945, 26, 297–302.
Figure 1. (a) Schematic diagram of the CycleGAN model used here to translate real ultrasound images into pseudo anatomical images. The upper block is the GAN for producing the pseudo anatomical display, and the lower block is the GAN for producing the ultrasound display. (b) An identical network is used for generating ultrasound displays from anatomical images.
Figure 2. (1st column) Examples of the BUSI images. (2nd column) BUSI masks. (3rd column) The model-generated pseudo anatomical images. (4th column) The corresponding segmented masks obtained by MorphGAC for the pseudo anatomical images and for the original ultrasound images (5th column), with the corresponding Dice scores. The rows correspond to: (A) benign, (B) benign, (C) malignant, (D) malignant, and (E) normal. As can be noted, the pseudo anatomical images are more natural to comprehend, and the tumors are better defined.
Figure 3. (1st column) Examples of the BUSI images. (2nd column) BUSI masks. (3rd column) The model-generated pseudo anatomical images. (4th column) The corresponding segmented masks obtained by MorphGAC for the pseudo anatomical images and for the original ultrasound images (5th column), with their Dice scores. The rows correspond to: (A) benign, (B) malignant, (C) normal. As can be noted, in these cases, the algorithm was less effective.
Figure 4. (Middle column) Three exemplary original ultrasonic images of the same tumor, which are marked as “malignant 3–9” in BUSI. (First column) The corresponding three different tracings provided by BUSI for the same tumor. As can be observed, the three tracings differ substantially in shape and geometry. (Last column) Contrary to that, the corresponding MorphGAC segmented masks yielded more consistent tracings, which appear to better match the lesion shapes in the ultrasound images.
Figure 5. Distributions of the three metrics per tumor type for the two segmentation methods: BUSI (left) and MorphGAC (right). (a) Dice distribution, (b) center error distribution, (c) area index distribution. The dashed lines represent the median score and the upper and lower quartiles.
Table 1. Ultrasound imaging pros and cons associated with detection of tumors and their margins, pre-operatively and intra-operatively.

| Pros | Cons |
|---|---|
| Radiation-free | Low SNR |
| Real-time imaging | Operator dependent |
| Enables arbitrary cross-section imaging | Non-intuitive black and white image |
| Tumor detection pre- and intra-operatively | |
| Margin assessment | |
| Cost effective | |
Table 2. Quantitative evaluation by comparing the optical segmented mask to both the original BUSI reference masks and the BUSI masks re-segmented by the MorphGAC algorithm.

| Metric | Tumor Type | Median BUSI | Median MorphGAC | Mean ± Std BUSI | Mean ± Std MorphGAC |
|---|---|---|---|---|---|
| Dice | Benign | 0.85 | 0.91 | 0.67 ± 0.36 | 0.70 ± 0.38 |
| Dice | Malignant | 0.58 | 0.70 | 0.53 ± 0.30 | 0.60 ± 0.32 |
| Dice | All | 0.77 | 0.83 | 0.62 ± 0.35 | 0.67 ± 0.36 |
| Center error [%] | Benign | 0.56 | 0.58 | 5.09 ± 11.23 | 4.22 ± 10.78 |
| Center error [%] | Malignant | 4.13 | 3.27 | 7.21 ± 10.29 | 7.21 ± 10.65 |
| Center error [%] | All | 1.17 | 0.73 | 5.76 ± 10.95 | 5.14 ± 10.79 |
| Area index [%] | Benign | 0.74 | 0.40 | 2.84 ± 5.21 | 2.11 ± 5.08 |
| Area index [%] | Malignant | 9.25 | 4.34 | 11.64 ± 8.79 | 6.12 ± 6.49 |
| Area index [%] | All | 2.31 | 0.71 | 5.56 ± 7.67 | 3.35 ± 5.83 |