Review

Applications of Artificial Intelligence to Prostate Multiparametric MRI (mpMRI): Current and Emerging Trends

1 Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
2 Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO 63110, USA
3 Department of Radiology, North Shore University Hospital, Manhasset, NY 11030, USA
* Author to whom correspondence should be addressed.
Cancers 2020, 12(5), 1204; https://doi.org/10.3390/cancers12051204
Submission received: 2 April 2020 / Revised: 2 May 2020 / Accepted: 8 May 2020 / Published: 11 May 2020
(This article belongs to the Special Issue Urological Cancer 2020)

Abstract

Prostate carcinoma is one of the most prevalent cancers worldwide. Multiparametric magnetic resonance imaging (mpMRI) is a non-invasive tool that can improve prostate lesion detection, classification, and volume quantification. Machine learning (ML), a branch of artificial intelligence, can rapidly and accurately analyze mpMRI images. ML could provide better standardization and consistency in identifying prostate lesions and enhance prostate carcinoma management. This review summarizes ML applications to prostate mpMRI and focuses on prostate organ segmentation, lesion detection and segmentation, and lesion characterization. A literature search was conducted to find studies that have applied ML methods to prostate mpMRI. To date, prostate organ segmentation and volume approximation have been well executed using various ML techniques. Prostate lesion detection and segmentation are much more challenging tasks for ML and were attempted in several studies. They largely remain unsolved problems due to data scarcity and the limitations of current ML algorithms. By contrast, prostate lesion characterization has been successfully completed in several studies because of better data availability. Overall, ML is well situated to become a tool that enhances radiologists’ accuracy and speed.

1. Introduction

Prostate carcinoma (PCa) is the most common cancer and the third leading cause of cancer-related death among men in the United States [1]. A major challenge for PCa management is the lack of non-invasive tools that can differentiate aggressive from non-aggressive cancer types [2]. This limitation can result in overdiagnosis and overtreatment, as evidenced by the fact that only one death is prevented for every 48 patients treated for PCa [3]. Overdiagnosis and overtreatment can lead to unnecessary biopsies, surgeries, radiotherapy, chemotherapy, and patient anxiety [2], and better diagnostic methods could mitigate these unwarranted procedures. To meet this need for more effective screening, multiparametric magnetic resonance imaging (mpMRI) could be used to examine the entire prostate.
Prostate mpMRI has been increasingly used for PCa screening in recent years [4]. Its current utility in screening stems from its high negative predictive value for prostate cancer, but the full potential of mpMRI has not yet been realized [5]. PCa overdiagnosis could be reduced by mpMRI analysis that achieves better lesion detection, lesion classification (benign versus malignant), and lesion volume quantification.
Accurate prostate segmentation and volume estimation provide invaluable information for the diagnosis and clinical management of benign prostatic hyperplasia (BPH) and PCa, informing BPH treatment, surgical planning, and predictions of PCa prognosis [6,7,8]. Prostate segmentation is necessary for magnetic resonance imaging (MRI)/transrectal ultrasound (TRUS) fusion biopsy, which is increasingly used to diagnose PCa. MRI/TRUS fusion biopsy yield depends on accurate prostate segmentation on magnetic resonance images because the prostate edges form the reference frames for fusion with the ultrasound data [8]. Therefore, any inaccuracy in tracing prostate boundaries may lead to biopsy errors [9]. Beyond segmentation, prostate volume estimation is itself a useful metric. BPH is one of the most common diseases that affect elderly men, reaching a prevalence of 90% by the ninth decade of life [10]. Large prostate volumes in men with BPH indicate a higher likelihood of more severe lower urinary tract symptoms and urinary retention [11,12,13]. Furthermore, studies have shown that patients respond differently to BPH-targeted medications depending on prostate size [11]. Prostate volume is also considered when selecting a surgical approach, with each procedure having its own risk profile [14]. In addition to guiding BPH treatment, prostate volume informs PCa prognosis. Prostate size alone is a valuable marker: PCa is more accurately detected in prostates under 50 cm³ than in those over 50 cm³ [6]. Prostate volume is also used to calculate prostate-specific antigen density, a figure that helps to differentiate BPH from PCa and can also be used to predict radical prostatectomy outcomes [15,16,17].
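As a simple illustration of how this metric is derived (the numerical values below are hypothetical and not from any cited study), prostate-specific antigen (PSA) density is simply the serum PSA concentration divided by the estimated gland volume:

```latex
\[
\text{PSA density} = \frac{\text{serum PSA (ng/mL)}}{\text{prostate volume (cm}^3\text{)}},
\qquad
\text{e.g.,}\ \frac{6.0\ \text{ng/mL}}{40\ \text{cm}^3} = 0.15\ \text{ng/mL/cm}^3 .
\]
```

Higher values raise suspicion for PCa relative to BPH, since benign enlargement increases gland volume more than it increases serum PSA.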
The accuracy of prostate lesion detection, segmentation, and volume estimation matters at different stages of PCa management: detection identifies regions for biopsy, accurate segmentation improves fusion biopsy yield and radiotherapy delivery, and volume estimation predicts prognosis after prostatectomy [18,19,20]. Prostate lesion detection is crucial because effective treatment of PCa depends directly on identifying cancer at its earliest stage [21,22,23]. Even though PCa most often follows an indolent course, it can progress rapidly in some cases. In these instances, lesion recognition on mpMRI is critical because it provides a region of high suspicion and a higher yield from targeted biopsy [24]. Without mpMRI lesion detection, random 12-core TRUS biopsies are performed, which may miss small or anteriorly located PCa [25].
While early prostate lesion detection improves timely PCa treatment, accurate lesion segmentation can improve radiotherapy [26]. Prostate lesion contouring is a major source of error when administering radiation therapy. This inexact segmentation can lead to the underdosage of the tumor as well as the overdosage of normal cells [20]. Although radiotherapy is an effective cancer treatment, its use is hampered by imprecise delineation. More precise contouring of a malignant lesion can improve lesion targeting and relative radiotherapeutic dosage, which can lead to lower recurrence rates [26,27].
Pre-operative prostate lesion volume estimation is a key metric for predicting the likelihood of positive surgical margins, biochemical PSA recurrence, and cancer-specific survival after prostatectomy [28,29,30,31]. Lesion volume is a better indicator of surgical margin status than other factors such as Gleason score and extracapsular extension [28]. It also serves as an independent predictor of PSA recurrence, an early sign of recurrent disease that may require salvage radiation therapy [32]. In addition, lesion volume predicts cancer-specific survival more accurately than variables such as lymphadenopathy, seminal vesicle invasion, and Gleason score [29].
After prostate lesions have been detected on mpMRI, lesion characterization is important for selecting appropriate management options. Accurate prostate lesion classification on mpMRI could preclude biopsies in men with low-grade tumors, reduce the number of biopsy cores, and decrease the rate of overdiagnosis and false-negative biopsies [33]. Reduction in unnecessary biopsies is important, as potential TRUS biopsy complications include hematuria, lower urinary tract symptoms, and temporary erectile dysfunction [34]. Additionally, the number of biopsy cores obtained correlates with increased risk of complications, including rectal bleeding, hematospermia, bleeding complications, and acute urinary retention [34]. Furthermore, the overdetection of PCa exerts a major psychological toll on quality of life and increases the risk of overtreatment [2]. Overtreatment side effects that may occur after radical prostatectomy and radiotherapy include urinary incontinence, rectal bleeding and fistulae, and erectile dysfunction [2,35,36,37].
Artificial intelligence (AI) is a promising tool to improve prostate lesion detection, characterization, and volume quantification, as it can evaluate mpMRI images systematically [38]. Machine learning (ML), a branch of AI, and its sub-discipline, deep learning (DL), have become attractive techniques in medical imaging because of their ability to interpret large amounts of data [39]. By applying ML to prostate mpMRI data, imaging-based clinical decisions could be improved. The purpose of this review is to summarize ML applications to prostate mpMRI with regard to (1) prostate organ segmentation, (2) prostate lesion detection and segmentation, and (3) prostate lesion characterization.

2. Multiparametric Magnetic Resonance Imaging

Multiparametric magnetic resonance imaging (mpMRI) of the prostate is a form of advanced non-invasive imaging that combines standard anatomical sequences with functional imaging. It consists of T1-weighted images, T2-weighted (T2W) images, and the following functional sequences: diffusion-weighted imaging (DWI), including apparent diffusion coefficient (ADC) maps, and dynamic contrast-enhanced (DCE) imaging. Certain protocols also incorporate proton magnetic resonance spectroscopic imaging (MRSI) [40,41]. Typically, the functional techniques used are DWI and DCE. MRSI is more demanding than DWI and DCE because it requires more acquisition time, greater technical expertise, and intensive post-processing of the data; it is therefore not commonly used [42].
The ability to assess both the anatomy and the function of the prostate has made mpMRI an attractive imaging technique with many applications, and it can accurately identify clinically relevant cancer. The combination of T2W, DWI, and DCE has high specificity, sensitivity, and negative predictive value for detecting PCa [43,44,45], and the use of all three sequences has been reported to yield a positive predictive value for PCa of 98% [46]. In addition to diagnosing PCa, mpMRI is also used in the management of the disease, as the functional sequences aid in predicting tumor behavior. Prostate mpMRI has been used for active surveillance, tumor localization, staging, treatment planning, and monitoring of recurrence [40,41].
While mpMRI is a powerful imaging modality, it does have limitations. Differences in image acquisition techniques and protocols across institutions lead to heterogeneity in image quality and make it challenging to compare studies [47]. Additionally, the learning curve for reading prostate mpMRI is steep, inter-observer variability exists, and the experience of the interpreting radiologist affects the utility of the examination [48,49,50]. The prostate gland itself can be difficult to delineate, and various benign and pre-malignant processes can mimic PCa [51]. For example, sensitivity for detecting PCa in the transitional zone is limited by the heterogeneous appearance of this zone in the setting of BPH, which can also exhibit increased cellularity, further complicating the distinction. Furthermore, patient-related factors, including body habitus, prior procedures, and unconventional anatomy, can impact imaging, and artifacts such as field inhomogeneity from rectal gas and metal implants can substantially impede interpretation and reporting. Finally, it can be difficult to discriminate between post-treatment changes and local recurrence on mpMRI.
In an effort to assist in standardizing the acquisition, interpretation, and reporting of prostate mpMRI, the Prostate Imaging Reporting and Data System (PI-RADS) was developed by the European Society of Urogenital Radiology (ESUR) in 2012 [52]. The ESUR, in collaboration with the American College of Radiology and the AdMeTech Foundation, released updated versions PI-RADS v2 in 2015 and PI-RADS v2.1 in 2019 [53,54]. All of the PI-RADS versions offer guidance for protocols and specifications for image acquisition. The scoring systems provide frameworks to evaluate individual sequences of T2W, DWI, and DCE and to integrate these findings into an overall risk assessment category from 1 to 5. These risk categories assist in the determination of biopsies and the management of clinically significant PCa.

3. Artificial Intelligence Paradigms: Machine Learning and Deep Learning

Although the terms AI, ML, and DL are commonly used interchangeably, each term has its own specific definition. AI is the broad, umbrella term that encompasses both ML and DL, with DL being a subset of ML (Figure 1). Marvin Minsky, an early AI developer, described AI as “the science of making machines do things that would require intelligence if done by men” [55]. AI is the ability of any tool to accept inputs of prior knowledge, experience, goals, and observations and then create an output that implements an action. This definition covers a wide range of tools varying from a simple thermostat to a self-driving car. AI research often falls under the domain of computer science because AI tools perform many computations to create appropriate outputs [56].
Whereas traditional AI typically entails fixed, rule-based computation, ML dynamically improves its computational model as it is trained on input data. In traditional programming, a computer receives data and a program as inputs and produces an output in a one-to-one manner; any improvement in the results derives from alterations to the program rules. In ML, a computer receives data and labels as inputs and then creates a program that refines its outputs. The computer learns by comparing its own outputs, also known as predictions, to data that have already been assigned a label. Over time, the ML algorithm improves its ability to match its outputs to the labels. The effectiveness of the resulting program depends heavily on the quality and size of the data the ML algorithm receives as input.
The data types that can be input into an ML algorithm vary widely, encompassing digitized handwriting, text from documents, DNA sequences, facial images, and more. An ML algorithm can use these data to train and make predictions. Two of the most common ML tasks are classification and regression [57]. In classification, the algorithm receives data and assigns a category to each item; for example, it could examine images and decide whether each shows a plane, a car, or a boat. In regression, the algorithm receives data and predicts a numerical value for each item, such as tomorrow's ambient temperature or the price of a stock. A minimal sketch of both tasks is shown below.
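As a rough illustration only (synthetic data and off-the-shelf scikit-learn models, not drawn from any of the cited studies), the following sketch trains one classifier and one regressor:

```python
# Minimal classification vs. regression sketch using scikit-learn (synthetic data).
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_absolute_error

# Classification: predict a discrete label (e.g., "benign" vs. "suspicious").
Xc, yc = make_classification(n_samples=500, n_features=10, random_state=0)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, test_size=0.2, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xc_tr, yc_tr)
print("classification accuracy:", accuracy_score(yc_te, clf.predict(Xc_te)))

# Regression: predict a continuous value (e.g., a volume in cubic centimeters).
Xr, yr = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, test_size=0.2, random_state=0)
reg = RandomForestRegressor(random_state=0).fit(Xr_tr, yr_tr)
print("regression mean absolute error:", mean_absolute_error(yr_te, reg.predict(Xr_te)))
```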
Within ML, DL has garnered significant attention because of the groundbreaking results it achieved in the ImageNet Large Scale Visual Recognition Challenge, in which competitors developed algorithms using a subset of a public image dataset [58]. DL has flourished with the rise of big data and faster hardware [39]. In traditional ML, the algorithm relies on features that are defined and extracted from the data before training begins [57] (Figure 2); these features are fixed and based on established rules. For example, an algorithm may look for eyes when trying to recognize a face or search for wings when identifying an airplane. By contrast, a DL algorithm does not require feature selection before training: it simply receives input and learns the salient features during training (Figure 2). DL architecture is also notable because it is formed by many tiered layers, which loosely resemble a brain's neuronal network. These layers enable DL to extract features at progressively smaller scales of the input data and allow for increasing feature complexity [59]. Although various DL architectures exist, convolutional neural networks (CNNs) are considered well suited for medical imaging. The overall goal of these techniques is to allow the machine to determine and optimize features automatically for evaluating and classifying images. A minimal CNN sketch is shown after this paragraph.
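As an illustration of this layered design (a toy sketch, assuming PyTorch, and not a model from any of the cited studies), a small CNN that maps a single-channel image to a two-class prediction could look like:

```python
# Toy CNN: stacked convolution/pooling layers learn features directly from pixels.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),   # low-level features
            nn.MaxPool2d(2),                                        # halve resolution
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),  # higher-level features
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, n_classes)        # for 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
logits = model(torch.randn(4, 1, 64, 64))  # batch of four 64x64 grayscale images
print(logits.shape)                        # torch.Size([4, 2])
```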
Medical imaging studies that use ML algorithms are frequently designed with three dataset types: training, validation, and test [60]. The training data are first used to develop an algorithm that produces the desired output. During this training period, the validation data are used repeatedly to provide feedback for tuning the algorithm. After development is finished, final performance is assessed on the test data. Because the test data were not used during training, they provide an objective assessment of performance. A sketch of this three-way split follows.
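As a minimal sketch of this workflow (synthetic data, an arbitrary 70/15/15 split, and a scikit-learn model chosen purely for illustration):

```python
# Split data into training, validation, and test sets; tune on validation, report on test.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)

best_model, best_val_acc = None, 0.0
for c in (0.01, 0.1, 1.0, 10.0):                            # hyperparameter candidates
    model = LogisticRegression(C=c, max_iter=1000).fit(X_train, y_train)
    val_acc = accuracy_score(y_val, model.predict(X_val))   # feedback from validation data
    if val_acc > best_val_acc:
        best_model, best_val_acc = model, val_acc

# The test set is touched only once, for the final, unbiased performance estimate.
print("test accuracy:", accuracy_score(y_test, best_model.predict(X_test)))
```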

4. Prostate Organ: Segmentation and Volume Estimation

Although prostate segmentation and volume approximation could greatly improve PCa and BPH management, existing techniques are limited. Currently, prostate segmentation is performed in a manual or semi-automated fashion and is limited by inter-observer variability [61]. In a study by Rasch et al. [62], the ratio of mean prostate volumes delineated by three radiation oncologists ranged from 0.95 to 1.08. Prostate volume is most often calculated during TRUS using an ellipsoid estimate [63] or estimated during a prostate exam. Even though this TRUS-based volume approximation is commonly used, it has significant intra-observer variation and is not as accurate as approximation from mpMRI images [64,65]. Software-based prostate volume approximation has been attempted with limited results; in one comparison, medical students outperformed a commercially available tool [66].
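For reference, the ellipsoid estimate mentioned above multiplies the three orthogonal gland diameters by π/6 (the numerical example below is hypothetical):

```latex
\[
V \approx \frac{\pi}{6}\,(L \times W \times H)
\approx 0.52 \times 5.0\,\text{cm} \times 4.0\,\text{cm} \times 4.5\,\text{cm}
\approx 47\ \text{cm}^3 .
\]
```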
To meet this need for an automatic, accurate prostate segmentation and volume approximation tool, ML methods have been applied by various groups (Figure 3). One ML technique, fuzzy c-means clustering, categorizes data into groups via unsupervised learning and was used by Rundo et al. [67] to segment the prostate on T1-weighted and T2-weighted mpMRI images; evaluated on 21 patients, it yielded an average Dice score of 0.91 [67]. The Dice score is a standard statistic for assessing the spatial overlap between two segmentations and ranges from 0 (no overlap) to 1 (perfect overlap) [68]. A Dice score of 0.91 therefore indicates that the technique segmented the prostate, and estimated its volume, with a high level of precision. The formula and a small computational example are given below.
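As a concrete illustration (toy binary masks, not data from the cited studies), the Dice score of two segmentation masks A and B is 2|A∩B| / (|A| + |B|), which can be computed as:

```python
# Dice score between two binary segmentation masks: 2*|A∩B| / (|A| + |B|).
import numpy as np

def dice_score(mask_a, mask_b):
    mask_a, mask_b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Toy example: a predicted mask shifted by one column relative to the ground truth.
truth = np.zeros((8, 8), dtype=np.uint8)
truth[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=np.uint8)
pred[2:6, 3:7] = 1
print(round(dice_score(pred, truth), 2))  # 0.75 -- partial overlap
```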
Besides fuzzy c-means clustering, DL has been extensively used for complete prostate segmentation. In 2012, the release of the PROMISE12 challenge dataset, which contained 100 patients, prompted many studies on this topic [69,70]. Two groups led by Tian et al. [71] and Karimi et al. [70] both employed CNNs. Tian et al. [71] trained their CNN on T2-weighted mpMRI images from 140 patients and achieved a Dice score of 0.85. Karimi et al.’s [70] CNN was trained on a limited dataset of 49 T2-weighted mpMRI images supplemented by data augmentation. Their Dice score was 0.88. Both studies achieved high Dice scores and demonstrated that prostate segmentation could be achieved with commonly used technical designs.
A DL architecture designed specifically for biomedical images, U-Net, has also been applied to whole-prostate segmentation [72]. U-Net successively compresses an image, derives features during these contractions, and then classifies every pixel in the image [72]. Three studies used U-Net for prostate segmentation and obtained Dice scores of 0.89, 0.93, and 0.89 [73,74,75], showing that U-Net could effectively segment the prostate with dataset sizes between 81 and 163 patients. The high Dice scores across multiple studies with comparable network architectures demonstrate substantial progress towards fully automated prostate segmentation and volume approximation; a minimal U-Net-style sketch is shown below. Table 1 lists the previously discussed studies along with several others that also segmented the prostate using various CNNs. To label the ground truth used for computing the Dice score, five studies used radiologists, two used clinicians of unstated specialties, one used an expert, and one used a radiologist for most of its data and an unnamed source for the remainder [67,70,71,73,74,75,76,77,78].
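The sketch below is a deliberately small two-level network in the U-Net style (assuming PyTorch; the architectures in the cited studies use more levels, normalization, and careful training), intended only to show the contracting path, the expanding path, and the skip connection:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in the original U-Net design
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Minimal two-level U-Net-style network for binary segmentation."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)       # contracting path
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)     # expanding path
        self.dec1 = conv_block(32, 16)      # 32 = 16 (skip) + 16 (upsampled)
        self.out = nn.Conv2d(16, 1, 1)      # one logit per pixel

    def forward(self, x):
        e1 = self.enc1(x)                   # features at full resolution
        e2 = self.enc2(self.pool(e1))       # features at half resolution
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return self.out(d1)

model = TinyUNet()
pred = model(torch.randn(1, 1, 64, 64))     # e.g., one 64x64 T2-weighted slice
print(pred.shape)                           # torch.Size([1, 1, 64, 64])
```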

5. Prostate Lesion: Detection, Segmentation, and Volume Estimation

Although prostate lesion detection, segmentation, and volume approximation could benefit PCa management, an effective tool that automates these processes has not yet been created. Small satellite lesions are particularly challenging to detect [19]: in a study by Steenbergen et al. [19], six different teams, each composed of one radiologist and one radiation oncologist, missed 66 of 69 satellite lesions distributed across 20 patients. Segmentation is likewise difficult, because sparse tumors containing benign glands and stroma are hard to outline [79], and contours drawn at different institutions show considerable differences [80]. As a result of inexact segmentation, volume approximation of prostate lesions is also challenging and often underestimates the histopathological volume [79]. This need for improved lesion metrics could be met by ML algorithms that learn to identify these features within mpMRI images.
For prostate lesion detection, ML approaches have been used to identify potential malignancies (Figure 4). Lay et al. [81] used a prostate computer-aided diagnosis (CAD) system based on a random forest for prostate lesion detection (Table 2). This study's dataset comprised 224 patient cases across three sequences (T2-weighted, ADC, and DWI), for a total of 287 benign lesions and 123 lesions with a Gleason score of 6 or higher [81]. The Gleason system grades PCa growth patterns on a scale of 1 to 5, with 1 or 2 being low grade and 5 being high grade, and sums the grades of the most prominent and second most prominent patterns in a biopsy to form the final score; a Gleason score of 6 or greater has malignant potential [82]. Lay et al.'s random forest yielded an area under the receiver operating characteristic curve (AUC) of 0.93 [81]. AUC is a standard measure of binary classification performance and ranges from 0 to 1, with 0.5 corresponding to chance; this study therefore demonstrates that the ML model can detect lesions with high accuracy. A brief example of how AUC is computed is given below.
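As a small, self-contained illustration (toy labels and scores, unrelated to the cited studies), the AUC can be computed from predicted lesion scores and ground-truth labels as follows:

```python
# AUC of a toy detector: 1 = perfect ranking of malignant above benign, 0.5 = chance.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 1, 0, 1, 1, 1]                     # 0 = benign lesion, 1 = Gleason >= 6
y_score = [0.1, 0.3, 0.35, 0.4, 0.5, 0.7, 0.8, 0.9]   # model-predicted suspicion scores
print(roc_auc_score(y_true, y_score))                 # 0.9375 for this toy example
```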
DL techniques have also been applied to prostate lesion detection (Table 2). Xu et al. [84] implemented ResNet [86], a neural network with extensive layers, to find lesions on T2-weighted, ADC, and DWI images; using images of 346 patients from the Cancer Imaging Archive data portal, they achieved an AUC of 0.97 [84]. Tsehay et al. [85] used a DL algorithm with a five-layer CNN architecture that applied an individual loss function to each layer. The CNN was trained and validated on a dataset of 39 benign lesions and 86 lesions with a Gleason score of 6 or higher, achieving an AUC of 0.90 [85], which again demonstrates accurate prostate lesion detection. All four studies in Table 2 used radiologists to label the ground truth [81,83,84,85].
Although prostate lesion detection has been implemented with ML, automated prostate lesion segmentation and volume approximation remain largely unsolved (Figure 5). Few studies have attempted this task because of a dearth of well-curated data and the technical demands involved. One obstacle is the lack of cross-institutional guidelines for prostate lesion contours, which results in significant inter-observer variability [19,80]. Despite this lack of standardization, three studies have attempted prostate lesion segmentation (Table 3). Liu et al. [87] used fuzzy Markov random fields to achieve a Dice score of 0.62 with 11 patients. Two other groups, Kohl et al. [88] and Dai et al. [89], employed DL algorithms, using U-Net and Mask R-CNN, respectively. Kohl's group used a dataset of 152 patients and combined U-Net with an adversarial network, yielding an average prostate lesion Dice score of 0.41 [88]. Dai's group used a highly specialized DL algorithm, Mask R-CNN, trained on 63 patients, to achieve a prostate lesion Dice score of 0.46 [89]. To label the ground truth, Dai et al. [89] used a clinician, Kohl et al. [88] used a radiologist, and Liu et al. [87] used a pathologist. The lower Dice scores of these studies show that current techniques have limited precision and that prostate lesion segmentation and volume estimation remain challenging. Larger datasets with more uniform labeling would permit the development of more ML models geared toward these tasks.

6. Prostate Lesion: Characterization

Although prostate lesions have been increasingly imaged with mpMRI since 2013 [4], their characterization has been hindered by variability in classification conventions across radiologists and institutions [4,47,90]. To establish better standardization, the PI-RADS scoring system was created in 2012, with an updated version, PI-RADS v2, released in 2015 and the newest version, PI-RADS v2.1, released in 2019 [53,54,91]. Since their inception, multiple studies have attempted to elucidate the clinical utility of PI-RADS, PI-RADS v2, and PI-RADS v2.1. Challenges to broader acceptance include limited inter-reader agreement, dependence on radiologist experience, and the substantial time required to interpret images [4,47,90]. This need for more consistent lesion characterization makes ML an attractive method for accurate, rapid classification.
ML algorithms can augment the PI-RADS scoring system as well as classify lesions independently (Table 4). Regarding PI-RADS, Litjens et al. [92] created a CAD system that applied a random forest to grade prostate lesions on a scale of suspicion for malignancy. After combining the ML-generated scores with the radiologist-assigned PI-RADS scores on a dataset of 107 patients, the overall AUC was greater than that of either the ML-generated scores or the PI-RADS scores alone [92]. Similarly, Wang, J. et al. [93], who used 54 patients in their dataset, concluded that a support vector machine (SVM) algorithm enhanced the PI-RADS performance of radiologists. Song et al. [94] used a DL algorithm based on VGG-Net, a deep CNN, as a tool for improving PI-RADS scores assigned by radiologists; with data from 195 patients, they also observed that the AUC improved when radiologists' decisions were combined with the VGG-Net output [94].
In addition to bolstering lesion classification by radiologists, ML algorithms have been trained to characterize prostate lesions independently (Figure 6, Table 4). Many studies explored this task with the PROSTATEx challenge dataset, released in 2017 [101], which was gathered from 344 patients and contains segmented lesions along with their pathology-defined Gleason scores [101]. Using this public database, Wang, Z. et al. [96] achieved an AUC of 0.96 by running two CNNs in parallel, Seah et al. [97] and Liu et al. [98] each obtained an AUC of 0.84 with deep layered CNNs, and Mehrtash et al. [99] implemented a 3D CNN to reach an AUC of 0.80. One study by Kwak et al. [95] used its own proprietary dataset, comprising 244 patients with 333 benign and 146 malignant lesions, to train an SVM on T2-weighted and DWI images; the SVM, trained on discriminative features, achieved an AUC of 0.89 [95]. All of the studies listed in Table 4 used radiologists to determine their ground truth [77,92,93,94,95,97,98,99,100]. These studies highlight the ability of ML algorithms to predict the likelihood of a lesion's malignancy based on Gleason scores.

7. Future Work

The potential applications of ML to PCa extend beyond volume estimation, lesion detection, and lesion characterization. Further developments in prostate lesion classification may lead to more practical clinical use, including training ML algorithms to predict tumor grade. Beyond analyzing images alone, ML could augment the clinical management of PCa by incorporating demographic and biochemical data, enabling clinicians to make more assured decisions regarding the need for biopsy, treatment dosing, and the likelihood of cancer recurrence. Biopsies performed to diagnose PCa could be rendered unnecessary with an ML tool: two studies, by Hu et al. [102] and Chen et al. [103], used data such as age, digital rectal exam findings, PSA, and prostate volume to predict biopsy outcome, made accurate PCa diagnoses, and showed the potential for ML to eliminate the need for biopsy. In addition to diagnosis, ML could improve radiation dosing in PCa management. Radiation therapy requires accurate dosing, which is frequently operator dependent [104]; by minimizing operator dependency, ML could offer better standardization and more precise dosing. Nicola et al. [105] employed ML to predict prostate brachytherapy dosing by analyzing images and prior treatment plans from other patients; their implementation performed comparably to brachytherapists and could be advanced by using DL instead of a traditional ML algorithm. Along with diagnosis and dosing, ML could be used to predict cancer recurrence after prostatectomy. Two studies, by Wong et al. [106] and Cordon et al. [107], gathered data such as Gleason score, PSA, seminal vesicle invasion, and surgical margins to predict recurrence after prostatectomy; their accuracy could be increased by adding postoperative imaging data.

8. Conclusions

AI applications in prostate mpMRI are promising tools for more effective and efficient image interpretation, leading to improved care. In pure image interpretation, ML has shown noteworthy progress in prostate organ segmentation and volume estimation. As better-curated data become available for prostate lesions, ML will likely become more successful at lesion detection, volume estimation, and characterization. As ML evolves, it will likely change radiologists' workflow by performing many of the simple tasks of image interpretation. However, ML will not replace radiologists, who remain critical to solving complex clinical problems [104]. AI is poised to enhance the decisions made by radiologists and to enable them to care better for their patients rather than supersede the need for radiologists.
Similarly, ML’s ability to evaluate complex datasets across different domains suggests that this technique may help bridge advanced imaging, such as mpMRI, with emerging biomarker analysis and tumor genetics. ML may thus form the underpinnings of radiogenomics, allowing imaging data, blood chemistry, and pathologic evaluation to be integrated into complex models that predict treatment response. Enabled by larger datasets and more sophisticated mathematical techniques, ML could progress to completely automated tools that receive a patient’s prostate mpMRI images, delineate a range of desired features, and provide likelihood metrics for an array of pathologies.

Funding

This review was partially funded by the Radiological Society of North America Medical Student Research Grant RMS1902 and the Alpha Omega Alpha Carolyn L. Kuckein Student Research Fellowship.

Conflicts of Interest

Author Peter D. Chang, MD, is a co-founder and shareholder of Avicenna.ai, a medical imaging startup. Author Daniel S. Chow, MD, is a shareholder of Avicenna.ai, a medical imaging startup, and a grant recipient from Cannon Inc. The other authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics, 2017. CA Cancer J. Clin. 2017, 67, 7–30. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Hugosson, J.; Carlsson, S. Overdetection in screening for prostate cancer. Curr. Opin. Urol. 2014, 24, 256–263. [Google Scholar] [CrossRef] [PubMed]
  3. Schröder, F.H.; Hugosson, J.; Roobol, M.J.; Tammela, T.L.; Ciatto, S.; Nelen, V.; Kwiatkowski, M.; Lujan, M.; Lilja, H.; Zappa, M.; et al. Screening and prostate-cancer mortality in a randomized European study. N. Engl. J. Med. 2009, 360, 1320–1328. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Oberlin, D.T.; Casalino, D.D.; Miller, F.H.; Meeks, J.J. Dramatic increase in the utilization of multiparametric magnetic resonance imaging for detection and management of prostate cancer. Abdom. Radiol. (Ny) 2017, 42, 1255–1258. [Google Scholar] [CrossRef] [PubMed]
  5. Monni, F.; Fontanella, P.; Grasso, A.; Wiklund, P.; Ou, Y.C.; Randazzo, M.; Rocco, B.; Montanari, E.; Bianchi, G. Magnetic resonance imaging in prostate cancer detection and management: A systematic review. Minerva. Urol. Nefrol. 2017, 69, 567–578. [Google Scholar] [CrossRef] [PubMed]
  6. Uzzo, R.G.; Wei, J.T.; Waldbaum, R.S.; Perlmutter, A.P.; Byrne, J.C.; Vaughan, D., Jr. The influence of prostate size on cancer detection. Urology 1995, 46, 831–836. [Google Scholar] [CrossRef]
  7. Boyle, P.; Gould, A.L.; Roehrborn, C.G. Prostate volume predicts outcome of treatment of benign prostatic hyperplasia with finasteride: Meta-analysis of randomized clinical trials. Urology 1996, 48, 398–405. [Google Scholar] [CrossRef]
  8. Sparks, R.; Bloch, B.N.; Feleppa, E.; Barratt, D.; Madabhushi, A. Fully automated prostate magnetic resonance imaging and transrectal ultrasound fusion via a probabilistic registration metric. Proc. SPIE Int. Soc. Opt. Eng. 2013, 8671. [Google Scholar] [CrossRef] [Green Version]
  9. Tay, K.J.; Gupta, R.T.; Rastinehad, A.R.; Tsivian, E.; Freedland, S.J.; Moul, J.W.; Polascik, T.J. Navigating MRI-TRUS fusion biopsy: Optimizing the process and avoiding technical pitfalls. Expert Rev. Anticancer Ther. 2016, 16, 303–311. [Google Scholar] [CrossRef]
  10. Lim, K.B. Epidemiology of clinical benign prostatic hyperplasia. Asian J. Urol. 2017, 4, 148–151. [Google Scholar] [CrossRef]
  11. Garvey, B.; Türkbey, B.; Truong, H.; Bernardo, M.; Periaswamy, S.; Choyke, P.L. Clinical value of prostate segmentation and volume determination on MRI in benign prostatic hyperplasia. Diagn. Interv. Radiol. 2014, 20, 229. [Google Scholar] [CrossRef] [PubMed]
  12. Kolman, C.; Girman, C.J.; Jacobsen, S.J.; Lieber, M.M. Distribution of post-void residual urine volume in randomly selected men. J. Urol. 1999, 161, 122–127. [Google Scholar] [CrossRef]
  13. Girman, C.J.; Jacobsen, S.J.; Guess, H.A.; Oesterling, J.E.; Chute, C.G.; Panser, L.A.; Lieber, M.M. Natural history of prostatism: Relationship among symptoms, prostate volume and peak urinary flow rate. J. Urol. 1995, 153, 1510–1515. [Google Scholar] [CrossRef]
  14. Oelke, M.; Bachmann, A.; Descazeaud, A.; Emberton, M.; Gravas, S.; Michel, M.C.; N’dow, J.; Nordling, J.; Jean, J. EAU guidelines on the treatment and follow-up of non-neurogenic male lower urinary tract symptoms including benign prostatic obstruction. Eur. Urol. 2013, 64, 118–140. [Google Scholar] [CrossRef] [PubMed]
  15. Bretton, P.R.; Evans, W.P.; Borden, J.D.; Castellanos, R.D. The use of prostate specific antigen density to improve the sensitivity of prostate specific antigen in detecting prostate carcinoma. Cancer Interdiscip. Int. J. Am. Cancer Soc. 1994, 74, 2991–2995. [Google Scholar] [CrossRef]
  16. Benson, M.C.; Seong Whang, I.; Pantuck, A.; Ring, K.; Kaplan, S.A.; Olsson, C.A.; Cooner, W.H. Prostate specific antigen density: A means of distinguishing benign prostatic hypertrophy and prostate cancer. J. Urol. 1992, 147, 815–816. [Google Scholar] [CrossRef]
  17. Sfoungaristos, S.; Perimenis, P. PSA density is superior than PSA and Gleason score for adverse pathologic features prediction in patients with clinically localized prostate cancer. Can. Urol. Assoc. J. 2012, 6, 46. [Google Scholar] [CrossRef]
  18. May, M.; Siegsmund, M.; Hammermann, F.; Loy, V.; Gunia, S. Visual estimation of the tumor volume in prostate cancer: A useful means for predicting biochemical-free survival after radical prostatectomy? Prostate Cancer Prostatic Dis. 2007, 10, 66. [Google Scholar] [CrossRef]
  19. Steenbergen, P.; Haustermans, K.; Lerut, E.; Oyen, R.; De Wever, L.; Van den Bergh, L.; Kerkmeijer, L.G.; Pameijer, F.A.; Veldhuis, W.B.; Pos, F.J. Prostate tumor delineation using multiparametric magnetic resonance imaging: Inter-observer variability and pathology validation. Radiother. Oncol. 2015, 115, 186–190. [Google Scholar] [CrossRef] [Green Version]
  20. Njeh, C. Tumor delineation: The weakest link in the search for accuracy in radiotherapy. J. Med. Phys./Assoc. Med. Phys. India 2008, 33, 136. [Google Scholar] [CrossRef]
  21. Denis, L.J.; Murphy, G.P.; Schroder, F.H. Report of the consensus workshop on screening and global strategy for prostate cancer. Cancer 1995, 75, 1187–1207. [Google Scholar] [CrossRef]
  22. Edwards, B.K.; Ward, E.; Kohler, B.A.; Eheman, C.; Zauber, A.G.; Anderson, R.N.; Jemal, A.; Schymura, M.J.; Lansdorp-Vogelaar, I.; Seeff, L.C.; et al. Annual report to the nation on the status of cancer, 1975-2006, featuring colorectal cancer trends and impact of interventions (risk factors, screening, and treatment) to reduce future rates. Cancer 2010, 116, 544–573. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Etzioni, R.; Tsodikov, A.; Mariotto, A.; Szabo, A.; Falcon, S.; Wegelin, J.; DiTommaso, D.; Karnofski, K.; Gulati, R.; Penson, D.F.; et al. Quantifying the role of PSA screening in the US prostate cancer mortality decline. Cancer Causes Control. 2008, 19, 175–181. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Lahdensuo, K.; Erickson, A.; Saarinen, I.; Seikkula, H.; Lundin, J.; Lundin, M.; Nordling, S.; Bützow, A.; Vasarainen, H.; Bostrom, P.J.; et al. Loss of PTEN expression in ERG-negative prostate cancer predicts secondary therapies and leads to shorter disease-specific survival time after radical prostatectomy. Mod. Pathol. 2016, 29, 1565–1574. [Google Scholar] [CrossRef] [Green Version]
  25. Rothwax, J.T.; George, A.K.; Wood, B.J.; Pinto, P.A. Multiparametric MRI in biopsy guidance for prostate cancer: Fusion-guided. Biomed. Res. Int. 2014, 2014, 439171. [Google Scholar] [CrossRef] [Green Version]
  26. Lips, I.M.; van der Heide, U.A.; Haustermans, K.; van Lin, E.N.; Pos, F.; Franken, S.P.; Kotte, A.N.; van Gils, C.H.; van Vulpen, M. Single blind randomized phase III trial to investigate the benefit of a focal lesion ablative microboost in prostate cancer (FLAME-trial): Study protocol for a randomized controlled trial. Trials 2011, 12, 255. [Google Scholar] [CrossRef] [Green Version]
  27. Cellini, N.; Morganti, A.G.; Mattiucci, G.C.; Valentini, V.; Leone, M.; Luzi, S.; Manfredi, R.; Dinapoli, N.; Digesu’, C.; Smaniotto, D. Analysis of intraprostatic failures in patients treated with hormonal therapy and radiotherapy: Implications for conformal therapy planning. Int. J. Radiat. Oncol. Biol. Phys. 2002, 53, 595–599. [Google Scholar] [CrossRef]
  28. Chun, F.K.-H.; Briganti, A.; Jeldres, C.; Gallina, A.; Erbersdobler, A.; Schlomm, T.; Walz, J.; Eichelberg, C.; Salomon, G.; Haese, A. Tumour volume and high grade tumour volume are the best predictors of pathologic stage and biochemical recurrence after radical prostatectomy. Eur. J. Cancer 2007, 43, 536–543. [Google Scholar] [CrossRef]
  29. Chung, B.I.; Tarin, T.V.; Ferrari, M.; Brooks, J.D. Comparison of prostate cancer tumor volume and percent cancer in prediction of biochemical recurrence and cancer specific survival. Urol. Oncol. 2011, 29, 314–318. [Google Scholar] [CrossRef]
  30. Nelson, B.A.; Shappell, S.B.; Chang, S.S.; Wells, N.; Farnham, S.B.; Smith, J.A., Jr.; Cookson, M.S. Tumour volume is an independent predictor of prostate-specific antigen recurrence in patients undergoing radical prostatectomy for clinically localized prostate cancer. BJU Int. 2006, 97, 1169–1172. [Google Scholar] [CrossRef]
  31. Fukuhara, H.; Kume, H.; Suzuki, M.; Fujimura, T.; Enomoto, Y.; Nishimatsu, H.; Ishikawa, A.; Homma, Y. Maximum tumor diameter: A simple independent predictor for biochemical recurrence after radical prostatectomy. Prostate Cancer Prostatic Dis. 2010, 13, 244. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Stephenson, A.J.; Scardino, P.T.; Kattan, M.W.; Pisansky, T.M.; Slawin, K.M.; Klein, E.A.; Anscher, M.S.; Michalski, J.M.; Sandler, H.M.; Lin, D.W. Predicting the outcome of salvage radiation therapy for recurrent prostate cancer after radical prostatectomy. J. Clin. Oncol. Off. J. Am. Soc. Clin. Oncol. 2007, 25, 2035. [Google Scholar] [CrossRef] [PubMed]
  33. Bjurlin, M.A.; Taneja, S.S. Standards for prostate biopsy. Curr. Opin. Urol. 2014, 24, 155–161. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Borghesi, M.; Ahmed, H.; Nam, R.; Schaeffer, E.; Schiavina, R.; Taneja, S.; Weidner, W.; Loeb, S. Complications After Systematic, Random, and Image-guided Prostate Biopsy. Eur. Urol. 2017, 71, 353–365. [Google Scholar] [CrossRef] [PubMed]
  35. Walsh, P.C.; Marschke, P.; Ricker, D.; Burnett, A.L. Patient-reported urinary continence and sexual function after anatomic radical prostatectomy. Urology 2000, 55, 58–61. [Google Scholar] [CrossRef]
  36. Hu, K.; Wallner, K. Clinical course of rectal bleeding following I-125 prostate brachytherapy. Int. J. Radiat. Oncol. Biol. Phys. 1998, 41, 263–265. [Google Scholar] [CrossRef]
  37. Theodorescu, D.; Gillenwater, J.Y.; Koutrouvelis, P.G. Prostatourethral-rectal fistula after prostate brachytherapy: Incidence and risk factors. Cancer Interdiscip. Int. J. Am. Cancer Soc. 2000, 89, 2085–2091. [Google Scholar] [CrossRef]
  38. Shaver, M.M.; Kohanteb, P.A.; Chiou, C.; Bardis, M.D.; Chantaduly, C.; Bota, D.; Filippi, C.G.; Weinberg, B.; Grinband, J.; Chow, D.S. Optimizing neuro-oncology imaging: A review of deep learning approaches for glioma imaging. Cancers 2019, 11, 829. [Google Scholar] [CrossRef] [Green Version]
  39. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef]
  40. Johnson, L.M.; Turkbey, B.; Figg, W.D.; Choyke, P.L. Multiparametric MRI in prostate cancer management. Nat. Rev. Clin. Oncol. 2014, 11, 346–353. [Google Scholar] [CrossRef]
  41. Stabile, A.; Giganti, F.; Rosenkrantz, A.B.; Taneja, S.S.; Villeirs, G.; Gill, I.S.; Allen, C.; Emberton, M.; Moore, C.M.; Kasivisvanathan, V. Multiparametric MRI for prostate cancer diagnosis: Current status and future directions. Nat. Rev. Urol. 2020, 17, 41–61. [Google Scholar] [CrossRef] [PubMed]
  42. Dickinson, L.; Ahmed, H.U.; Allen, C.; Barentsz, J.O.; Carey, B.; Futterer, J.J.; Heijmink, S.W.; Hoskin, P.J.; Kirkham, A.; Padhani, A.R.; et al. Magnetic resonance imaging for the detection, localisation, and characterisation of prostate cancer: Recommendations from a European consensus meeting. Eur. Urol. 2011, 59, 477–494. [Google Scholar] [CrossRef] [PubMed]
  43. De Rooij, M.; Hamoen, E.H.; Futterer, J.J.; Barentsz, J.O.; Rovers, M.M. Accuracy of multiparametric MRI for prostate cancer detection: A meta-analysis. AJR Am. J. Roentgenol. 2014, 202, 343–351. [Google Scholar] [CrossRef] [PubMed]
  44. Fütterer, J.J.; Briganti, A.; De Visschere, P.; Emberton, M.; Giannarini, G.; Kirkham, A.; Taneja, S.S.; Thoeny, H.; Villeirs, G.; Villers, A. Can clinically significant prostate cancer be detected with multiparametric magnetic resonance imaging? A systematic review of the literature. Eur. Urol. 2015, 68, 1045–1053. [Google Scholar] [CrossRef] [PubMed]
  45. Daun, M.; Fardin, S.; Ushinsky, A.; Batra, S.; Nguyentat, M.; Lee, T.; Uchio, E.; Lall, C.; Houshyar, R. PI-RADS version 2 is an excellent screening tool for clinically significant prostate cancer as designated by the validated international society of urological pathology criteria: A retrospective analysis. Curr. Probl. Diagn. Radiol. 2019. [Google Scholar] [CrossRef]
  46. Turkbey, B.; Mani, H.; Shah, V.; Rastinehad, A.R.; Bernardo, M.; Pohida, T.; Pang, Y.; Daar, D.; Benjamin, C.; McKinney, Y.L.; et al. Multiparametric 3T prostate magnetic resonance imaging to detect cancer: Histopathological correlation using prostatectomy specimens processed in customized magnetic resonance imaging based molds. J. Urol. 2011, 186, 1818–1824. [Google Scholar] [CrossRef] [Green Version]
  47. Leake, J.L.; Hardman, R.; Ojili, V.; Thompson, I.; Shanbhogue, A.; Hernandez, J.; Barentsz, J. Prostate MRI: Access to and current practice of prostate MRI in the United States. J. Am. Coll. Radiol. 2014, 11, 156–160. [Google Scholar] [CrossRef] [Green Version]
  48. Latchamsetty, K.C.; Borden, L.S., Jr.; Porter, C.R.; Lacrampe, M.; Vaughan, M.; Lin, E.; Conti, N.; Wright, J.L.; Corman, J.M. Experience improves staging accuracy of endorectal magnetic resonance imaging in prostate cancer: What is the learning curve? Can. J. Urol. 2007, 14, 3429–3434. [Google Scholar]
  49. Gaziev, G.; Wadhwa, K.; Barrett, T.; Koo, B.C.; Gallagher, F.A.; Serrao, E.; Frey, J.; Seidenader, J.; Carmona, L.; Warren, A.; et al. Defining the learning curve for multiparametric magnetic resonance imaging (MRI) of the prostate using MRI-transrectal ultrasonography (TRUS) fusion-guided transperineal prostate biopsies as a validation tool. BJU Int. 2016, 117, 80–86. [Google Scholar] [CrossRef]
  50. Rosenkrantz, A.B.; Babb, J.S.; Taneja, S.S.; Ream, J.M. Proposed adjustments to PI-RADS Version 2 decision rules: Impact on prostate cancer detection. Radiology 2017, 283, 119–129. [Google Scholar] [CrossRef]
  51. De Visschere, P.J.; Vral, A.; Perletti, G.; Pattyn, E.; Praet, M.; Magri, V.; Villeirs, G.M. Multiparametric magnetic resonance imaging characteristics of normal, benign and malignant conditions in the prostate. Eur. Radiol. 2017, 27, 2095–2109. [Google Scholar] [CrossRef] [PubMed]
  52. Barentsz, J.O.; Richenberg, J.; Clements, R.; Choyke, P.; Verma, S.; Villeirs, G.; Rouviere, O.; Logager, V.; Futterer, J.J.; European Society of Urogenital, R. ESUR prostate MR guidelines 2012. Eur. Radiol. 2012, 22, 746–757. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Weinreb, J.C.; Barentsz, J.O.; Choyke, P.L.; Cornud, F.; Haider, M.A.; Macura, K.J.; Margolis, D.; Schnall, M.D.; Shtern, F.; Tempany, C.M.; et al. PI-RADS Prostate imaging—Reporting and data system: 2015, version 2. Eur. Urol. 2016, 69, 16–40. [Google Scholar] [CrossRef] [PubMed]
  54. Turkbey, B.; Rosenkrantz, A.B.; Haider, M.A.; Padhani, A.R.; Villeirs, G.; Macura, K.J.; Tempany, C.M.; Choyke, P.L.; Cornud, F.; Margolis, D.J.; et al. Prostate imaging reporting and data system version 2.1: 2019 update of prostate imaging reporting and data system version 2. Eur. Urol. 2019, 76, 340–351. [Google Scholar] [CrossRef] [PubMed]
  55. Stonier, T. The evolution of machine intelligence. In Beyond Information: The Natural History of Intelligence; Springer: London, UK, 1992; pp. 107–133. [Google Scholar] [CrossRef]
  56. Poole, D.; Mackworth, A.; Goebel, R. Computational Intelligence; Oxford University Press: Oxford, UK, 1998; Volume 1. [Google Scholar]
  57. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  58. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  59. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436. [Google Scholar] [CrossRef] [PubMed]
  60. Ueda, D.; Shimazaki, A.; Miki, Y. Technical and clinical overview of deep learning in radiology. Jpn. J. Radiol. 2019, 37, 15–33. [Google Scholar] [CrossRef] [PubMed]
  61. Ghose, S.; Oliver, A.; Martí, R.; Lladó, X.; Vilanova, J.C.; Freixenet, J.; Mitra, J.; Sidibé, D.; Meriaudeau, F. A survey of prostate segmentation methodologies in ultrasound, magnetic resonance and computed tomography images. Comput. Methods Prog. Biomed. 2012, 108, 262–287. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  62. Rasch, C.; Barillot, I.; Remeijer, P.; Touw, A.; van Herk, M.; Lebesque, J.V. Definition of the prostate in CT and MRI: A multi-observer study. Int. J. Radiat. Oncol. Biol. Phys. 1999, 43, 57–66. [Google Scholar] [CrossRef]
  63. Kachouie, N.N.; Fieguth, P.; Rahnamayan, S. An elliptical level set method for automatic TRUS prostate image segmentation. In Proceedings of the 2006 IEEE International Symposium on Signal Processing and Information Technology, Vancouver, BC, Canada, 27–30 August 2006; pp. 191–196. [Google Scholar]
  64. Ko, J.S.; Landis, P.; Carter, H.B.; Partin, A.W. Effect of intra-observer variation in prostate volume measurement on prostate-specific antigen density calculations among prostate cancer active surveillance participants. BJU Int. 2011, 108, 1739–1742. [Google Scholar] [CrossRef]
  65. Dianat, S.S.; Ruiz, R.M.R.; Bonekamp, D.; Carter, H.B.; Macura, K.J. Prostate volumetric assessment by magnetic resonance imaging and transrectal ultrasound: Impact of variation in calculated prostate-specific antigen density on patient eligibility for active surveillance program. J. Comput. Assist. Tomogr. 2013, 37, 589–595. [Google Scholar] [CrossRef]
  66. Bezinque, A.; Moriarity, A.; Farrell, C.; Peabody, H.; Noyes, S.L.; Lane, B.R. Determination of prostate volume: A comparison of contemporary methods. Acad. Radiol. 2018, 25, 1582–1587. [Google Scholar] [CrossRef] [PubMed]
  67. Rundo, L.; Militello, C.; Russo, G.; Garufi, A.; Vitabile, S.; Gilardi, M.C.; Mauri, G. Automated prostate gland segmentation based on an unsupervised fuzzy C-means clustering technique using multispectral T1w and T2w MR imaging. Information 2017, 8, 49. [Google Scholar] [CrossRef] [Green Version]
  68. Zou, K.H.; Warfield, S.K.; Bharatha, A.; Tempany, C.M.; Kaus, M.R.; Haker, S.J.; Wells III, W.M.; Jolesz, F.A.; Kikinis, R. Statistical validation of image segmentation quality based on a spatial overlap index1: Scientific reports. Acad. Radiol. 2004, 11, 178–189. [Google Scholar] [CrossRef] [Green Version]
  69. Litjens, G.; Toth, R.; van de Ven, W.; Hoeks, C.; Kerkstra, S.; van Ginneken, B.; Vincent, G.; Guillard, G.; Birbeck, N.; Zhang, J.; et al. Evaluation of prostate segmentation algorithms for MRI: The PROMISE12 challenge. Med. Image Anal. 2014, 18, 359–373. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  70. Karimi, D.; Samei, G.; Kesch, C.; Nir, G.; Salcudean, S.E. Prostate segmentation in MRI using a convolutional neural network architecture and training strategy based on statistical shape models. Int. J. Comput. Assist. Radiol. Surg. 2018. [Google Scholar] [CrossRef] [PubMed]
  71. Tian, Z.; Liu, L.; Zhang, Z.; Fei, B. PSNet: Prostate segmentation on MRI based on a convolutional neural network. J. Med. Imaging 2018, 5, 021208. [Google Scholar] [CrossRef]
  72. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the 2015 International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  73. Clark, T.; Wong, A.; Haider, M.A.; Khalvati, F. Fully Deep Convolutional Neural Networks for Segmentation of the Prostate Gland in Diffusion-Weighted MR Images; Springer: Cham, Switzerland, 2017; pp. 97–104. [Google Scholar]
  74. Zhu, Y.; Wei, R.; Gao, G.; Ding, L.; Zhang, X.; Wang, X.; Zhang, J. Fully automatic segmentation on prostate MR images based on cascaded fully convolution network. J. Magn. Reson. Imaging 2018, 49, 1149–1156. [Google Scholar] [CrossRef]
  75. Zhu, Q.; Du, B.; Turkbey, B.; Choyke, P.L.; Yan, P. Deeply-supervised CNN for prostate segmentation. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 178–184. [Google Scholar]
  76. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
  77. Wang, B.; Lei, Y.; Tian, S.; Wang, T.; Liu, Y.; Patel, P.; Jani, A.B.; Mao, H.; Curran, W.J.; Liu, T.; et al. Deeply supervised 3D fully convolutional networks with group dilated convolution for automatic MRI prostate segmentation. Med. Phys. 2019, 46, 1707–1718. [Google Scholar] [CrossRef]
  78. Cheng, R.; Roth, H.R.; Lu, L.; Wang, S.; Turkbey, B.; Gandler, W.; McCreedy, E.S.; Agarwal, H.K.; Choyke, P.; Summers, R.M. Active appearance model and deep learning for more accurate prostate segmentation on MRI. In Proceedings of the Medical Imaging 2016: Image Processing, San Diego, CA, USA, 27 February–3 March 2016; p. 97842I. [Google Scholar]
  79. Le Nobin, J.; Orczyk, C.; Deng, F.M.; Melamed, J.; Rusinek, H.; Taneja, S.S.; Rosenkrantz, A.B. Prostate tumour volumes: Evaluation of the agreement between magnetic resonance imaging and histology using novel co-registration software. BJU Int. 2014, 114, E105–E112. [Google Scholar] [CrossRef] [Green Version]
  80. Van Schie, M.A.; Dinh, C.V.; van Houdt, P.J.; Pos, F.J.; Heijmink, S.W.; Kerkmeijer, L.G.; Kotte, A.N.; Oyen, R.; Haustermans, K.; van der Heide, U.A. Contouring of prostate tumors on multiparametric MRI: Evaluation of clinical delineations in a multicenter radiotherapy trial. Radiother. Oncol. 2018, 128, 321–326. [Google Scholar] [CrossRef]
  81. Lay, N.; Tsehay, Y.; Greer, M.D.; Turkbey, B.; Kwak, J.T.; Choyke, P.L.; Pinto, P.; Wood, B.J.; Summers, R.M. Detection of prostate cancer in multiparametric MRI using random forest with instance weighting. J. Med. Imaging (Bellingham) 2017, 4, 024506. [Google Scholar] [CrossRef] [PubMed]
  82. Epstein, J.I.; Zelefsky, M.J.; Sjoberg, D.D.; Nelson, J.B.; Egevad, L.; Magi-Galluzzi, C.; Vickers, A.J.; Parwani, A.V.; Reuter, V.E.; Fine, S.W. A contemporary prostate cancer grading system: A validated alternative to the Gleason score. Eur. Urol. 2016, 69, 428–435. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  83. Sumathipala, Y.; Lay, N.; Turkbey, B.; Smith, C.; Choyke, P.L.; Summers, R.M. Prostate cancer detection from multi-institution multiparametric MRIs using deep convolutional neural networks. J. Med. Imaging 2018, 5, 044507. [Google Scholar] [CrossRef] [PubMed]
  84. Xu, H.; Baxter, J.S.; Akin, O.; Cantor-Rivera, D. Prostate cancer detection using residual networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1647–1650. [Google Scholar] [CrossRef]
  85. Tsehay, Y.K.; Lay, N.S.; Roth, H.R.; Wang, X.; Kwak, J.T.; Turkbey, B.I.; Pinto, P.A.; Wood, B.J.; Summers, R.M. Convolutional neural network based deep-learning architecture for prostate cancer detection on multiparametric magnetic resonance images. In Medical Imaging 2017: Computer-Aided Diagnosis; SPIE: Bellingham, WA, USA, 2017; p. 1013405. [Google Scholar] [CrossRef]
  86. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  87. Liu, X.; Langer, D.L.; Haider, M.A.; Yang, Y.; Wernick, M.N.; Yetik, I.S. Prostate cancer segmentation with simultaneous estimation of Markov random field parameters and class. IEEE Trans. Med. Imaging 2009, 28, 906–915. [Google Scholar] [CrossRef]
  88. Kohl, S.; Bonekamp, D.; Schlemmer, H.-P.; Yaqubi, K.; Hohenfellner, M.; Hadaschik, B.; Radtke, J.-P.; Maier-Hein, K. Adversarial networks for the detection of aggressive prostate cancer. ArXiv 2017, arXiv:1702.08014. [Google Scholar]
  89. Dai, Z.; Carver, E.; Liu, C.; Lee, J.; Feldman, A.; Zong, W.; Pantelic, M.; Elshaikh, M.; Wen, N. Segmentation of the Prostatic Gland and the Intraprostatic Lesions on Multiparametric MRI Using Mask-RCNN. ArXiv 2019, arXiv:1904.02575. [Google Scholar]
  90. Dickinson, L.; Ahmed, H.U.; Allen, C.; Barentsz, J.O.; Carey, B.; Futterer, J.J.; Heijmink, S.W.; Hoskin, P.; Kirkham, A.P.; Padhani, A.R. Scoring systems used for the interpretation and reporting of multiparametric MRI for prostate cancer detection, localization, and characterization: Could standardization lead to improved utilization of imaging within the diagnostic pathway? J. Magn. Reson. Imaging 2013, 37, 48–58. [Google Scholar] [CrossRef]
  91. Nguyentat, M.; Ushinsky, A.; Miranda-Aguirre, A.; Uchio, E.; Lall, C.; Shirkhoda, L.; Lee, T.; Green, C.; Houshyar, R. Validation of Prostate Imaging-Reporting and Data System Version 2: A Retrospective Analysis. Curr. Probl. Diagn. Radiol. 2018, 47, 404–409. [Google Scholar] [CrossRef]
  92. Litjens, G.J.; Barentsz, J.O.; Karssemeijer, N.; Huisman, H.J. Clinical evaluation of a computer-aided diagnosis system for determining cancer aggressiveness in prostate MRI. Eur. Radiol. 2015, 25, 3187–3199. [Google Scholar] [CrossRef] [Green Version]
  93. Wang, J.; Wu, C.-J.; Bao, M.-L.; Zhang, J.; Wang, X.-N.; Zhang, Y.-D. Machine learning-based analysis of MR radiomics can help to improve the diagnostic performance of PI-RADS v2 in clinically relevant prostate cancer. Eur. Radiol. 2017, 27, 4082–4090. [Google Scholar] [CrossRef] [PubMed]
  94. Song, Y.; Zhang, Y.D.; Yan, X.; Liu, H.; Zhou, M.; Hu, B.; Yang, G. Computer-aided diagnosis of prostate cancer using a deep convolutional neural network from multiparametric MRI. J. Magn. Reson. Imaging 2018, 48, 1570–1577. [Google Scholar] [CrossRef] [PubMed]
  95. Kwak, J.T.; Xu, S.; Wood, B.J.; Turkbey, B.; Choyke, P.L.; Pinto, P.A.; Wang, S.; Summers, R.M. Automated prostate cancer detection using T2-weighted and high-b-value diffusion-weighted magnetic resonance imaging. Med. Phys. 2015, 42, 2368–2378. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  96. Wang, Z.; Liu, C.; Cheng, D.; Wang, L.; Yang, X.; Cheng, K.-T. Automated detection of clinically significant prostate cancer in mp-MRI images based on an end-to-end deep neural network. IEEE Trans. Med. Imaging 2018, 37, 1127–1139. [Google Scholar] [CrossRef]
  97. Seah, J.C.; Tang, J.S.; Kitchen, A. Detection of prostate cancer on multiparametric MRI. In Proceedings of the Medical Imaging 2017: Computer-Aided Diagnosis, Orlando, FL, USA, 13–16 February 2017; p. 1013429. [Google Scholar]
  98. Liu, S.; Zheng, H.; Feng, Y.; Li, W. Prostate cancer diagnosis using deep learning with 3D multiparametric MRI. In Proceedings of the Medical Imaging 2017: Computer-Aided Diagnosis, Orlando, FL, USA, 13–16 February 2017; p. 1013428. [Google Scholar]
  99. Mehrtash, A.; Sedghi, A.; Ghafoorian, M.; Taghipour, M.; Tempany, C.M.; Wells, W.M., III; Kapur, T.; Mousavi, P.; Abolmaesumi, P.; Fedorov, A. Classification of clinical significance of MRI prostate findings using 3D convolutional neural networks. Proc. Spie Int. Soc. Opt. Eng. 2017, 10134. [Google Scholar] [CrossRef] [Green Version]
  100. Chen, Q.; Hu, S.; Long, P.; Lu, F.; Shi, Y.; Li, Y. A transfer learning approach for malignant prostate lesion detection on multiparametric MRI. Technol. Cancer Res. Treat. 2019, 18. [Google Scholar] [CrossRef]
  101. Armato, S.G.; Huisman, H.; Drukker, K.; Hadjiiski, L.; Kirby, J.S.; Petrick, N.; Redmond, G.; Giger, M.L.; Cha, K.; Mamonov, A. PROSTATEx Challenges for computerized classification of prostate lesions from multiparametric magnetic resonance images. J. Med. Imaging 2018, 5, 044501. [Google Scholar] [CrossRef]
  102. Hu, X.; Cammann, H.; Meyer, H.-A.; Miller, K.; Jung, K.; Stephan, C. Artificial neural networks and prostate cancer—Tools for diagnosis and management. Nat. Rev. Urol. 2013, 10, 174. [Google Scholar] [CrossRef]
  103. Chen, T.; Li, M.; Gu, Y.; Zhang, Y.; Yang, S.; Wei, C.; Wu, J.; Li, X.; Zhao, W.; Shen, J. Prostate cancer differentiation and aggressiveness: Assessment with a radiomic-based model vs. PI-RADS v2. J. Magn. Reson. Imaging 2019, 49, 875–884. [Google Scholar] [CrossRef] [Green Version]
  104. European Society of Radiology. What the radiologist should know about artificial intelligence—An ESR white paper. Insights Imaging 2019, 10, 44. [Google Scholar] [CrossRef] [Green Version]
  105. Nicolae, A.; Morton, G.; Chung, H.; Loblaw, A.; Jain, S.; Mitchell, D.; Lu, L.; Helou, J.; Al-Hanaqta, M.; Heath, E.; et al. Evaluation of a machine-learning algorithm for treatment planning in prostate low-dose-rate brachytherapy. Int. J. Radiat. Oncol. Biol. Phys. 2017, 97, 822–829. [Google Scholar] [CrossRef]
  106. Wong, N.C.; Lam, C.; Patterson, L.; Shayegan, B. Use of machine learning to predict early biochemical recurrence after robot-assisted prostatectomy. BJU Int. 2019, 123, 51–57. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  107. Cordon-Cardo, C.; Kotsianti, A.; Verbel, D.A.; Teverovskiy, M.; Capodieci, P.; Hamann, S.; Jeffers, Y.; Clayton, M.; Elkhettabi, F.; Khan, F.M.; et al. Improved prediction of prostate cancer recurrence through systems pathology. J. Clin. Invest. 2007, 117, 1876–1883. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Relationship between artificial intelligence, machine learning, and deep learning. Artificial intelligence is an umbrella term that includes machine learning and deep learning. Deep learning is a subset of machine learning.
Figure 2. Machine learning versus deep learning used for multiparametric magnetic resonance imaging (mpMRI) sequence identification. In machine learning, the computer receives inputs of mpMRI images and goes through feature extraction specific to the different sequences of T2-weighted (T2W), diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE). Then, the computer is trained on additional images and is able to identify the correct sequence as an output. Deep learning differs from machine learning in that feature extraction and training can be done simultaneously to produce the output.
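As an illustration of the workflow contrast described in Figure 2, the minimal Python sketch below (not drawn from any of the reviewed studies) shows a classical pipeline that extracts hand-crafted intensity features before fitting a separate classifier, and a toy convolutional network that learns features and the sequence classifier jointly. The data layout, feature choices, and network configuration are illustrative assumptions only.

```python
# Minimal, illustrative sketch of Figure 2. Assumed data layout: 2D slices as
# NumPy arrays with labels 0 = T2W, 1 = DWI, 2 = DCE. Not from the cited studies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import torch
import torch.nn as nn

# --- Classical ML: hand-crafted feature extraction, then a separate classifier ---
def handcrafted_features(img: np.ndarray) -> np.ndarray:
    # Simple intensity statistics stand in for engineered, sequence-specific features.
    return np.array([img.mean(), img.std(),
                     np.percentile(img, 10), np.percentile(img, 90)])

def fit_classical(slices, labels):
    X = np.stack([handcrafted_features(s) for s in slices])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)

# --- Deep learning: features and classifier are learned jointly, end to end ---
class TinySequenceCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 1, H, W)
        return self.classifier(self.features(x).flatten(1))
```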
Figure 3. Prostate organ segmentation performed by machine learning methods. The computer takes multiparametric magnetic resonance imaging images as inputs and applies the developed machine learning algorithm to correctly identify the borders of the prostate.
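The organ segmentation task of Figure 3 is most often implemented with encoder-decoder convolutional networks such as U-Net (see Table 1 below). The following PyTorch sketch is a deliberately tiny, two-level encoder-decoder with a single skip connection; it illustrates the general architecture only and is not a reproduction of any model cited in this review.

```python
# Toy U-Net-style encoder-decoder for binary prostate segmentation (illustrative only).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)            # input: 1-channel T2W slice
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)           # 16 skip channels + 16 upsampled channels
        self.head = nn.Conv2d(16, 1, 1)          # per-pixel prostate logit

    def forward(self, x):
        e1 = self.enc1(x)                        # full-resolution features
        e2 = self.enc2(self.pool(e1))            # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)                     # apply sigmoid for a probability mask

# Example: a 256 x 256 slice yields a 256 x 256 segmentation logit map.
mask_logits = TinyUNet()(torch.randn(1, 1, 256, 256))
```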
Figure 4. Prostate lesion detection using machine learning methods. The computer takes multiparametric magnetic resonance imaging images of the prostate as inputs and applies the developed machine learning algorithm to correctly localize lesions in the prostate.
Figure 5. Prostate lesion segmentation using machine learning techniques. The computer takes multiparametric magnetic resonance imaging images of the prostate as inputs and applies the developed machine learning algorithm to correctly identify the borders of the lesion.
Figure 6. Prostate lesion characterization using machine learning techniques. The computer receives multiparametric magnetic resonance imaging images of prostate lesions and applies the developed machine learning algorithm to categorize the lesion as clinically significant or clinically insignificant prostate cancer.
Table 1. Machine learning techniques applied to prostate organ segmentation.

Reference | Year | ML Algorithm | Patients | Dice | Modalities
Rundo et al. [67] | 2017 | Fuzzy C-means clustering. Features: T1 intensity, T2 intensity | 21 | 0.91 | T1W, T2W
Tian et al. [71] | 2018 | CNN: 7 layers | 140 | 0.85 | T2W
Karimi et al. [70] | 2018 | CNN: 3 layers | 49 | 0.88 | T2W
Clark et al. [73] | 2017 | CNN: U-Net | 134 | 0.89 | DWI
Zhu, Y. et al. [74] | 2018 | CNN: U-Net | 163 | 0.93 | DWI, T2W
Zhu, Q. et al. [75] | 2017 | CNN: U-Net | 81 | 0.89 | T2W
Milletari et al. [76] | 2016 | CNN: V-Net | 80 | 0.87 | T2W
Wang, B. et al. [77] | 2019 | CNN: 3D DSD-FCN | 40 | 0.86 | T2W
Cheng et al. [78] | 2016 | CNN and Active Appearance Model | 120 | 0.93 | T2W
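The Dice column in Table 1 reports the overlap between each algorithm's predicted prostate mask and the reference segmentation. A minimal sketch of the metric, assuming binary NumPy masks, is shown below.

```python
# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Perfectly overlapping masks give 1.0; disjoint masks give 0.0.
```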
Table 2. Machine learning techniques applied to prostate lesion detection.

Reference | Year | ML Algorithm | Patients | Lesions | AUC | Modalities
Lay et al. [81] | 2017 | Random Forest. Features: Intensity, Haralick texture | 224 | 410 | 0.93 | T2W, ADC, DWI
Sumathipala et al. [83] | 2018 | CNN: Holistically Nested Edge Detection | 186 | N/A | 0.93 | T2W, ADC, DWI
Xu et al. [84] | 2019 | CNN: ResNet | 346 | N/A | 0.97 | T2W, ADC, DWI
Tsehay et al. [85] | 2017 | CNN: 5 layers | 52 | 125 | 0.90 | T2W, ADC, DWI
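The AUC values in Table 2 summarize how well each detector's predicted scores separate cancerous from non-cancerous regions across all decision thresholds. A minimal scikit-learn sketch with made-up labels and scores:

```python
# Illustrative AUC computation; labels and scores are fabricated for the example.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1]                 # 1 = biopsy-confirmed cancer (illustrative)
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]   # model-predicted probabilities
print(roc_auc_score(y_true, y_score))       # about 0.89; values near 1.0 mean good separation
```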
Table 3. Machine learning techniques applied to prostate lesion segmentation.

Reference | Year | ML Algorithm | Patients | Dice | Modalities
Dai et al. [89] | 2019 | CNN: Mask R-CNN | 63 | 0.46 | T2W, ADC
Kohl et al. [88] | 2017 | Adversarial network and CNN: U-Net | 152 | 0.41 | T2W, ADC, DWI
Liu et al. [87] | 2009 | Fuzzy Markov Random Fields | 11 | 0.62 | T2W, quantitative T2, DWI, DCE
Table 4. Machine learning techniques applied to prostate lesion characterization.

Reference | Year | Algorithm | Patients | Lesions | AUC | Modalities
Litjens et al. [92] | 2015 | Random Forest. Features: Intensity, Position, Pharmacokinetic, Texture, Spatial Filter | 107 | 141 | Benign vs. cancer: AUC increased from 0.81 to 0.88 with their ML tool; indolent vs. aggressive: AUC increased from 0.78 to 0.88 with their ML tool | T2W, DCE, DWI
Wang, J. et al. [93] | 2017 | SVM. Features: Volumetric radiomics | 54 | 149 | 0.95 | T2W, DWI
Song et al. [94] | 2018 | CNN: Deep CNN with augmentation | 195 | 547 | 0.94 | T2W, ADC, DWI
Kwak et al. [95] | 2015 | SVM. Features: Texture | 244 | 479 | 0.89 | T2W, DWI
Wang, Z. et al. [96] | 2018 | CNN: Deep CNN | 360 | 600 | 0.96 | T2W, ADC
Seah et al. [97] | 2017 | CNN: Deep CNN | 346 | 538 | 0.84 | T2W, ADC, DCE
Liu et al. [98] | 2017 | CNN: XmasNet | 341 | 538 | 0.84 | T2W, ADC, DWI, Ktrans
Mehrtash et al. [99] | 2017 | CNN: 3D implementation | 344 | 538 | 0.80 | ADC, DWI, DCE
Chen et al. [100] | 2019 | Two CNNs: Inception V3 and VGG-16 | Training: 204; test: N/A | 538 | Inception V3: 0.81; VGG-16: 0.83 | T2W, DWI, DCE
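Chen et al. [100] characterized lesions by fine-tuning ImageNet-pretrained networks (Inception V3 and VGG-16). The sketch below illustrates that general transfer-learning recipe using torchvision's VGG-16; the frozen layers, learning rate, and two-class head are illustrative assumptions rather than the authors' configuration, and the pretrained-weights argument may differ across torchvision versions.

```python
# Illustrative transfer-learning recipe for lesion characterization (not the cited pipeline).
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(pretrained=True)           # ImageNet-pretrained backbone
for param in model.features.parameters():       # freeze the convolutional feature extractor
    param.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)        # new head: clinically significant vs. not

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# Fine-tune only the new head on labeled, 3-channel lesion patches:
# for images, labels in loader:
#     loss = criterion(model(images), labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```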