Artificial Intelligence and Radiation Oncology

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (6 June 2023) | Viewed by 12804

Special Issue Editors


Guest Editor: Prof. Dr. Seong K. Mun
Arlington Innovation Center, Health Research, Virginia Tech-NCR, 900 N. Glebe Road, Arlington, VA 22203, USA
Interests: diagnostic imaging; radiation oncology; biomedical imaging; artificial intelligence imaging; computer-aided diagnosis; telemedicine; health informatics; electronic health records; quantitative imaging

Guest Editor: Prof. Dr. Sonja Dieterich
Radiation Oncology, University of California Davis, Sacramento, CA, USA
Interests: radiation oncology

Special Issue Information

Dear Colleagues,

Radiation oncology is one of the medical specialties most strongly driven by advanced and complex technology, both hardware and software. It therefore offers a rich research and development platform for moving today's fragmented data toward an integrated, intelligent network system that can deliver precise and personalized therapy. Artificial intelligence (AI) and deep learning (DL) represent a portfolio of powerful scientific and technological tools for addressing a variety of data-driven technical and clinical issues. AI tools can be used to improve decision making, productivity, precision, safety, and professional satisfaction in current labor-intensive clinical settings.

In designing a Special Issue on radiation oncology, we propose to approach the use of AI from the user’s perspective rather than from a technology perspective. What is the future of radiation oncology? What problems of today and tomorrow are we trying to solve? Once the problems are clearly defined, they can be broken down into projects and tasks for research and development with suitable AI tools. We are interested in many technical and clinical issues relevant to radiation oncology, such as patient safety, dosimetry, treatment planning, treatment monitoring, workflow management, verification, continuous quality assurance, compliance with constantly changing rules, and professional satisfaction and training.

We have selected the MDPI open-access publication platform to promote easy access and dissemination of work contributed by the global leaders in radiation oncology.

Prof. Dr. Seong K. Mun
Prof. Dr. Sonja Dieterich
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • radiation oncology
  • artificial intelligence
  • patient safety
  • treatment planning and monitoring
  • workflow optimization
  • dosimetry and validation
  • imaging and radiomics
  • ethics and assessment

Published Papers (4 papers)

Research

11 pages, 279 KiB  
Article
Automated Error Labeling in Radiation Oncology via Statistical Natural Language Processing
by Indrila Ganguly, Graham Buhrman, Ed Kline, Seong K. Mun and Srijan Sengupta
Diagnostics 2023, 13(7), 1215; https://doi.org/10.3390/diagnostics13071215 - 23 Mar 2023
Viewed by 1373
Abstract
A report published in 2000 from the Institute of Medicine revealed that medical errors were a leading cause of patient deaths, and urged the development of error detection and reporting systems. The field of radiation oncology is particularly vulnerable to these errors due to its highly complex process workflow, the large number of interactions among various systems, devices, and medical personnel, as well as the extensive preparation and treatment delivery steps. Natural language processing (NLP)-aided statistical algorithms have the potential to significantly improve the discovery and reporting of these medical errors by relieving human reporters of the burden of event type categorization and creating an automated, streamlined system for error incidents. In this paper, we demonstrate text-classification models developed with clinical data from a full service radiation oncology center (test center) that can predict the broad level and first level category of an error given a free-text description of the error. All but one of the resulting models had an excellent performance as quantified by several metrics. The results also suggest that more development and more extensive training data would further improve future results.
(This article belongs to the Special Issue Artificial Intelligence and Radiation Oncology)
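
The abstract above does not include implementation details; as a purely illustrative sketch of the kind of statistical text classification it describes, a TF-IDF plus logistic-regression pipeline might look like the following. The example incident texts and category labels are invented for illustration and are not drawn from the paper.

```python
# Hypothetical sketch: TF-IDF features + logistic regression mapping free-text
# incident descriptions to broad error categories. The reports and labels below
# are invented; the paper's actual data, taxonomy, and models are not reproduced.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "wrong isocenter shift entered during setup",
    "bolus omitted from treatment plan",
    "incorrect prescription dose transcribed",
    "couch position outside tolerance at delivery",
]
labels = ["treatment delivery", "treatment planning",
          "prescription", "treatment delivery"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(reports, labels)
print(clf.predict(["dose prescription entered incorrectly"]))
```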

11 pages, 1821 KiB  
Article
Deep Hybrid Learning Prediction of Patient-Specific Quality Assurance in Radiotherapy: Implementation in Clinical Routine
by Noémie Moreau, Laurine Bonnor, Cyril Jaudet, Laetitia Lechippey, Nadia Falzone, Alain Batalla, Cindy Bertaut and Aurélien Corroyer-Dulmont
Diagnostics 2023, 13(5), 943; https://doi.org/10.3390/diagnostics13050943 - 02 Mar 2023
Viewed by 1626
Abstract
Background: Arc therapy allows for better dose deposition conformation, but the radiotherapy plans (RT plans) are more complex, requiring patient-specific pre-treatment quality assurance (QA). In turn, pre-treatment QA adds to the workload. The objective of this study was to develop a predictive model of Delta4-QA results based on RT-plan complexity indices to reduce QA workload. Methods: Six complexity indices were extracted from 1632 RT VMAT plans. A machine learning (ML) model was developed for classification purposes (two classes: compliance with the QA plan or not). For more complex locations (breast, pelvis, and head and neck), an innovative deep hybrid learning (DHL) model was trained to achieve better performance. Results: For less complex RT plans (brain and thorax tumor locations), the ML model achieved 100% specificity and 98.9% sensitivity. However, for more complex RT plans, specificity fell to 87%. For these complex RT plans, an innovative QA classification method using DHL was developed and achieved a sensitivity of 100% and a specificity of 97.72%. Conclusions: The ML and DHL models predicted QA results with a high degree of accuracy. Our predictive QA online platform offers substantial time savings in terms of accelerator occupancy and working time.
(This article belongs to the Special Issue Artificial Intelligence and Radiation Oncology)
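
As a rough illustration of the classification task described above (predicting whether a VMAT plan will pass patient-specific QA from plan-complexity indices), the sketch below trains a generic classifier on synthetic data. The features, labels, and model choice are placeholders and do not represent the authors' ML or DHL implementation.

```python
# Hypothetical sketch: classify QA pass/fail from six plan-complexity indices.
# The random data and the random-forest model are illustrative stand-ins only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n_plans = 500
X = rng.normal(size=(n_plans, 6))  # six complexity indices per RT plan
# Synthetic label: 1 = plan passes QA, 0 = plan fails QA
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_plans) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("sensitivity:", recall_score(y_te, pred, pos_label=1))
print("specificity:", recall_score(y_te, pred, pos_label=0))
```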

Review

14 pages, 306 KiB  
Review
The Use of MRI-Derived Radiomic Models in Prostate Cancer Risk Stratification: A Critical Review of Contemporary Literature
by Linda My Huynh, Yeagyeong Hwang, Olivia Taylor and Michael J. Baine
Diagnostics 2023, 13(6), 1128; https://doi.org/10.3390/diagnostics13061128 - 16 Mar 2023
Cited by 2 | Viewed by 1604
Abstract
The development of precise medical imaging has facilitated the establishment of radiomics, a computer-based method of quantitatively analyzing subvisual imaging characteristics. The present review summarizes the current literature on the use of diagnostic magnetic resonance imaging (MRI)-derived radiomics in prostate cancer (PCa) risk stratification. A stepwise literature search of publications from 2017 to 2022 was performed. Of 218 articles on MRI-derived prostate radiomics, 33 (15.1%) generated models for PCa risk stratification. Prediction of Gleason score (GS), adverse pathology, postsurgical recurrence, and postradiation failure were the primary endpoints in 15 (45.5%), 11 (33.3%), 4 (12.1%), and 3 (9.1%) studies. In predicting GS and adverse pathology, radiomic models differentiated well, with receiver operator characteristic area under the curve (ROC-AUC) values of 0.50–0.92 and 0.60–0.92, respectively. For studies predicting post-treatment recurrence or failure, ROC-AUC for radiomic models ranged from 0.73 to 0.99 in postsurgical and radiation cohorts. Finally, of the 33 studies, 7 (21.2%) included external validation. Overall, most investigations showed good to excellent prediction of GS and adverse pathology with MRI-derived radiomic features. Direct prediction of treatment outcomes, however, is an ongoing investigation. As these studies mature and reach potential for clinical integration, concerted effort to validate these radiomic models must be undertaken.
(This article belongs to the Special Issue Artificial Intelligence and Radiation Oncology)
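
For readers less familiar with how the ROC-AUC figures quoted in this review are typically produced, the sketch below cross-validates a simple classifier on synthetic stand-ins for MRI-derived radiomic features. It is not drawn from any of the reviewed models; the feature matrix and outcome labels are fabricated for illustration only.

```python
# Hypothetical sketch: cross-validated ROC-AUC for a radiomics-style classifier
# predicting adverse pathology. Synthetic features stand in for real MRI-derived ones.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_features = 200, 20
X = rng.normal(size=(n_patients, n_features))  # "radiomic" feature matrix
y = (X[:, 0] - X[:, 1] + rng.normal(scale=1.0, size=n_patients) > 0).astype(int)  # 1 = adverse pathology

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated ROC-AUC: %.2f ± %.2f" % (auc.mean(), auc.std()))
```
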
21 pages, 2310 KiB  
Review
Automated Contouring and Planning in Radiation Therapy: What Is ‘Clinically Acceptable’?
by Hana Baroudi, Kristy K. Brock, Wenhua Cao, Xinru Chen, Caroline Chung, Laurence E. Court, Mohammad D. El Basha, Maguy Farhat, Skylar Gay, Mary P. Gronberg, Aashish Chandra Gupta, Soleil Hernandez, Kai Huang, David A. Jaffray, Rebecca Lim, Barbara Marquez, Kelly Nealon, Tucker J. Netherton, Callistus M. Nguyen, Brandon Reber, Dong Joo Rhee, Ramon M. Salazar, Mihir D. Shanker, Carlos Sjogreen, McKell Woodland, Jinzhong Yang, Cenji Yu and Yao Zhao
Diagnostics 2023, 13(4), 667; https://doi.org/10.3390/diagnostics13040667 - 10 Feb 2023
Cited by 13 | Viewed by 7490
Abstract
Developers and users of artificial-intelligence-based tools for automatic contouring and treatment planning in radiotherapy are expected to assess clinical acceptability of these tools. However, what is ‘clinical acceptability’? Quantitative and qualitative approaches have been used to assess this ill-defined concept, all of which have advantages and disadvantages or limitations. The approach chosen may depend on the goal of the study as well as on available resources. In this paper, we discuss various aspects of ‘clinical acceptability’ and how they can move us toward a standard for defining clinical acceptability of new autocontouring and planning tools.
(This article belongs to the Special Issue Artificial Intelligence and Radiation Oncology)
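
One common quantitative ingredient in assessments of contour acceptability is the Dice similarity coefficient between an automated contour and a reference (manual) contour; the sketch below shows that calculation on toy binary masks. It is only an illustration and does not reproduce the specific criteria discussed in the review.

```python
# Hypothetical sketch: Dice similarity coefficient between two binary
# segmentation masks (automated vs. reference contour). Toy data only.
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    auto_mask = auto_mask.astype(bool)
    ref_mask = ref_mask.astype(bool)
    intersection = np.logical_and(auto_mask, ref_mask).sum()
    total = auto_mask.sum() + ref_mask.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example: two overlapping square "contours" on a small grid
auto = np.zeros((64, 64), dtype=bool); auto[10:40, 10:40] = True
ref = np.zeros((64, 64), dtype=bool); ref[15:45, 15:45] = True
print("Dice:", round(dice_coefficient(auto, ref), 3))
```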