Clinical Practice of Artificial Intelligence in Diagnostic and Treatment Assistance

A special issue of Clinics and Practice (ISSN 2039-7283).

Deadline for manuscript submissions: 31 May 2024 | Viewed by 10263

Special Issue Editors


Guest Editor
Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, USA
Interests: artificial intelligence; machine learning; meta-analysis; acute kidney injury; clinical nephrology; kidney transplantation

Guest Editor

Special Issue Information

Dear Colleagues,

Healthcare is undergoing a transformative shift as artificial intelligence (AI) takes center stage in diagnostic and treatment assistance. “Clinical Practice of Artificial Intelligence in Diagnostic and Treatment Assistance”, a Special Issue of Clinics and Practice, explores the potential of AI-driven solutions to reshape medical care. We invite researchers and practitioners to contribute to this discourse, which holds the promise of improving patient outcomes and transforming healthcare delivery.

In this Special Issue, we aim to encompass the myriad dimensions of AI's footprint in clinical practice, ranging from its impact on precision diagnostics and personalized treatment strategies to the ethical considerations that accompany its integration into modern healthcare. We welcome contributions that explore AI's capacity to empower clinicians with real-time insights, optimize resource allocation, and elevate patient engagement through novel digital experiences.

As we move ahead into the era of AI-augmented medical care, this Special Issue stands as a testament to our collective commitment to harness the power of innovation for the betterment of humanity's well-being. Join us in crafting a future where artificial intelligence is not just a tool, but an indispensable partner in clinical decision-making, diagnostics, and treatment assistance.

Potential topics include, but are not limited to, the following:

AI-powered predictive analytics for early disease detection;
Deep learning techniques for medical image analysis;
Natural language processing for clinical notes and reports;
Clinical decision support systems for personalized treatment plans;
AI-driven optimization of hospital resource allocation;
Remote patient monitoring using wearable devices and AI;
Ethical considerations in AI-driven patient care;
Integration of AI in telemedicine and virtual consultations;
AI-based drug discovery and development;
Robotic-assisted surgery and AI-enhanced procedural outcomes;
AI-enhanced diagnostics for rare and complex diseases;
Predictive modeling for patient readmission prevention;
AI-guided personalized rehabilitation programs;
Real-time AI monitoring of patient vitals in critical care;
Precision medicine and AI-driven treatment pathways;
Cognitive computing for medical data analysis;
AI-enabled patient engagement and education platforms;
Data security and privacy in AI-enhanced healthcare;
AI-powered clinical trial design and patient recruitment;
Augmented reality and virtual reality in medical training and practice;
AI-enhanced radiomics for cancer diagnosis and staging;
Natural language processing for mining clinical insights from medical literature;
AI for optimizing clinical workflows and reducing administrative burden;
Integrating AI into electronic health record systems for enhanced usability;
Exploring AI's impact on medical education and continuous learning.

Dr. Wisit Cheungpasitporn
Dr. Charat Thongprayoon
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Clinics and Practice is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • clinical practice
  • medical diagnostics
  • clinical decision support
  • healthcare technology
  • AI-driven diagnostics
  • patient-centric care
  • ethical AI
  • digital healthcare transformation
  • precision treatment strategies

Published Papers (4 papers)


Research

28 pages, 1021 KiB  
Article
Evaluating the Efficacy of ChatGPT in Navigating the Spanish Medical Residency Entrance Examination (MIR): Promising Horizons for AI in Clinical Medicine
by Francisco Guillen-Grima, Sara Guillen-Aguinaga, Laura Guillen-Aguinaga, Rosa Alas-Brun, Luc Onambele, Wilfrido Ortega, Rocio Montejo, Enrique Aguinaga-Ontoso, Paul Barach and Ines Aguinaga-Ontoso
Clin. Pract. 2023, 13(6), 1460-1487; https://doi.org/10.3390/clinpract13060130 - 20 Nov 2023
Cited by 4 | Viewed by 2267
Abstract
The rapid progress in artificial intelligence, machine learning, and natural language processing has led to increasingly sophisticated large language models (LLMs) for use in healthcare. This study assesses the performance of two LLMs, the GPT-3.5 and GPT-4 models, in passing the MIR medical examination for access to medical specialist training in Spain. Our objectives included gauging the models’ overall performance, analyzing discrepancies across different medical specialties, discerning between theoretical and practical questions, estimating error proportions, and assessing the hypothetical severity of errors committed by a physician. Materials and methods: We studied the 2022 Spanish MIR examination results after excluding those questions requiring image evaluations or having acknowledged errors. The remaining 182 questions were presented to GPT-4 and GPT-3.5 in Spanish and English. Logistic regression models analyzed the relationships between question length, sequence, and performance. We also analyzed the 23 questions with images, using GPT-4’s new image analysis capability. Results: GPT-4 outperformed GPT-3.5, scoring 86.81% in Spanish (p < 0.001). Performance on the English translations was slightly better. GPT-4 answered 26.1% of the image-based questions correctly in English; results were worse in Spanish (13.0%), although the difference was not statistically significant (p = 0.250). Among medical specialties, GPT-4 achieved a 100% correct response rate in several areas, while the Pharmacology, Critical Care, and Infectious Diseases specialties showed lower performance. The error analysis revealed that while the overall error rate was 13.2%, the gravest categories, such as “error requiring intervention to sustain life” and “error resulting in death”, had a 0% rate. Conclusions: GPT-4 performs robustly on the Spanish MIR examination, with varying capabilities to discriminate knowledge across specialties. While the model’s high success rate is commendable, understanding the error severity is critical, especially when considering AI’s potential role in real-world medical practice and its implications for patient safety. Full article
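The regression analysis mentioned in the abstract (relating question length and position in the exam to whether the model answered correctly) can be illustrated with a short sketch. The example below is hypothetical: the file name and column names are illustrative assumptions, not the study's actual data or code.

```python
# Hypothetical sketch: logistic regression of answer correctness on question
# length and position in the exam, mirroring the kind of analysis described
# in the abstract. The CSV file and column names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Expected columns: correct (0/1), n_words (question length),
# sequence (question number within the exam)
df = pd.read_csv("mir_results.csv")

X = sm.add_constant(df[["n_words", "sequence"]])
model = sm.Logit(df["correct"], X).fit()
print(model.summary())

# Odds ratios are easier to interpret than raw coefficients
print(np.exp(model.params))
```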

17 pages, 3800 KiB  
Article
Predicting Chronic Hyperplastic Candidiasis Retro-Angular Mucosa Using Machine Learning
by Omid Moztarzadeh, Jan Liska, Veronika Liskova, Alena Skalova, Ondrej Topolcan, Alireza Jamshidi and Lukas Hauer
Clin. Pract. 2023, 13(6), 1335-1351; https://doi.org/10.3390/clinpract13060120 - 28 Oct 2023
Viewed by 1519
Abstract
Chronic hyperplastic candidiasis (CHC) presents a distinctive and relatively rare form of oral candidal infection characterized by the presence of white or white–red patches on the oral mucosa. Often mistaken for leukoplakia or erythroleukoplakia due to their appearance, these lesions display nonhomogeneous textures featuring combinations of white and red hyperplastic or nodular surfaces. Predominant locations for such lesions include the tongue, retro-angular mucosa, and buccal mucosa. This paper investigates the potential influence of a specific anatomical location, the retro-angular mucosa, on the development and occurrence of CHC. By examining the relationships among risk factors, we present a machine learning (ML) approach to predicting the location of CHC occurrence. Specifically, we employ Gradient Boosting Regression (GBR) to classify CHC lesion locations based on important risk factors; this estimator can serve both research and diagnostic purposes. The findings underscore that the proposed ML technique can predict the occurrence of CHC in the retro-angular mucosa relative to other locations. The results also show a high rate of accuracy in predicting lesion locations. Performance assessment relies on Mean Squared Error (MSE), Root Mean Squared Error (RMSE), R-squared (R2), and Mean Absolute Error (MAE), consistently revealing favorable results that underscore the robustness and dependability of our classification method. Our research contributes valuable insights to the field, enhancing diagnostic accuracy and informing treatment strategies. Full article
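As a rough illustration of the modelling step described above, the sketch below fits a scikit-learn gradient-boosting model to tabular risk factors and reports the same regression metrics (MSE, RMSE, MAE, R2). The dataset, feature names, and target column are hypothetical placeholders, not the authors' data or implementation.

```python
# Minimal sketch (not the authors' code): gradient boosting on tabular risk
# factors to predict whether a CHC lesion occurs in the retro-angular mucosa
# (1) or elsewhere (0), scored with MSE, RMSE, MAE, and R2 as reported above.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("chc_cases.csv")              # hypothetical dataset
X = df[["age", "smoking", "denture_wearing"]]  # illustrative risk factors
y = df["retro_angular"]                        # 1 = retro-angular mucosa, 0 = other

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

gbr = GradientBoostingRegressor(random_state=42).fit(X_train, y_train)
pred = gbr.predict(X_test)

mse = mean_squared_error(y_test, pred)
print("MSE :", mse)
print("RMSE:", np.sqrt(mse))
print("MAE :", mean_absolute_error(y_test, pred))
print("R2  :", r2_score(y_test, pred))
```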

13 pages, 1332 KiB  
Article
AI-Powered Renal Diet Support: Performance of ChatGPT, Bard AI, and Bing Chat
by Ahmad Qarajeh, Supawit Tangpanithandee, Charat Thongprayoon, Supawadee Suppadungsuk, Pajaree Krisanapan, Noppawit Aiumtrakul, Oscar A. Garcia Valencia, Jing Miao, Fawad Qureshi and Wisit Cheungpasitporn
Clin. Pract. 2023, 13(5), 1160-1172; https://doi.org/10.3390/clinpract13050104 - 26 Sep 2023
Cited by 10 | Viewed by 3297
Abstract
Patients with chronic kidney disease (CKD) necessitate specialized renal diets to prevent complications such as hyperkalemia and hyperphosphatemia. A comprehensive assessment of food components is pivotal, yet burdensome for healthcare providers. With evolving artificial intelligence (AI) technology, models such as ChatGPT, Bard AI, and Bing Chat can be instrumental in educating patients and assisting professionals. To gauge the efficacy of different AI models in discerning potassium and phosphorus content in foods, four AI models—ChatGPT 3.5, ChatGPT 4, Bard AI, and Bing Chat—were evaluated. A total of 240 food items, curated from the Mayo Clinic Renal Diet Handbook for CKD patients, were input into each model. These items were characterized by their potassium (149 items) and phosphorus (91 items) content. Each model was tasked to categorize the items into high or low potassium and high phosphorus content. The results were juxtaposed with the Mayo Clinic Renal Diet Handbook’s recommendations. The concordance between repeated sessions was also evaluated to assess model consistency. Among the models tested, ChatGPT 4 displayed superior performance in identifying potassium content, correctly classifying 81% of the foods. It accurately discerned 60% of low potassium and 99% of high potassium foods. In comparison, ChatGPT 3.5 exhibited a 66% accuracy rate. Bard AI and Bing Chat models had an accuracy rate of 79% and 81%, respectively. Regarding phosphorus content, Bard AI stood out with a flawless 100% accuracy rate. ChatGPT 3.5 and Bing Chat recognized 85% and 89% of the high phosphorus foods correctly, while ChatGPT 4 registered a 77% accuracy rate. Emerging AI models manifest a diverse range of accuracy in discerning potassium and phosphorus content in foods suitable for CKD patients. ChatGPT 4, in particular, showed a marked improvement over its predecessor, especially in detecting potassium content. The Bard AI model exhibited exceptional precision for phosphorus identification. This study underscores the potential of AI models as efficient tools in renal dietary planning, though refinements are warranted for optimal utility. Full article
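The scoring step described in the abstract (comparing each chatbot's high/low classifications against the Mayo Clinic Renal Diet Handbook labels) amounts to a simple accuracy tally. The snippet below is a minimal, hypothetical sketch of that comparison; the food items and model answers are made up for illustration and are not the study's data.

```python
# Minimal, hypothetical sketch of the scoring described above: compare one
# model's high/low potassium labels against reference handbook labels and
# report overall and per-class accuracy. Items and answers are illustrative.
from collections import defaultdict

reference = {"banana": "high", "apple": "low", "orange": "high"}       # handbook labels
model_answers = {"banana": "high", "apple": "high", "orange": "high"}  # one model's answers

totals, correct = defaultdict(int), defaultdict(int)
for food, label in reference.items():
    totals[label] += 1
    if model_answers.get(food) == label:
        correct[label] += 1

overall = sum(correct.values()) / sum(totals.values())
print(f"Overall accuracy: {overall:.0%}")
for label, n in totals.items():
    print(f"{label}-potassium accuracy: {correct[label] / n:.0%}")
```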

Review

17 pages, 1351 KiB  
Review
Ethical Dilemmas in Using AI for Academic Writing and an Example Framework for Peer Review in Nephrology Academia: A Narrative Review
by Jing Miao, Charat Thongprayoon, Supawadee Suppadungsuk, Oscar A. Garcia Valencia, Fawad Qureshi and Wisit Cheungpasitporn
Clin. Pract. 2024, 14(1), 89-105; https://doi.org/10.3390/clinpract14010008 - 30 Dec 2023
Cited by 1 | Viewed by 2542
Abstract
The emergence of artificial intelligence (AI) has greatly propelled progress across various sectors including the field of nephrology academia. However, this advancement has also given rise to ethical challenges, notably in scholarly writing. AI’s capacity to automate labor-intensive tasks like literature reviews and data analysis has created opportunities for unethical practices, with scholars incorporating AI-generated text into their manuscripts, potentially undermining academic integrity. This situation gives rise to a range of ethical dilemmas that not only question the authenticity of contemporary academic endeavors but also challenge the credibility of the peer-review process and the integrity of editorial oversight. Instances of this misconduct are highlighted, spanning from lesser-known journals to reputable ones, and even infiltrating graduate theses and grant applications. This subtle AI intrusion hints at a systemic vulnerability within the academic publishing domain, exacerbated by the publish-or-perish mentality. The solutions aimed at mitigating the unethical employment of AI in academia include the adoption of sophisticated AI-driven plagiarism detection systems, a robust augmentation of the peer-review process with an “AI scrutiny” phase, comprehensive training for academics on ethical AI usage, and the promotion of a culture of transparency that acknowledges AI’s role in research. This review underscores the pressing need for collaborative efforts among academic nephrology institutions to foster an environment of ethical AI application, thus preserving the esteemed academic integrity in the face of rapid technological advancements. It also makes a plea for rigorous research to assess the extent of AI’s involvement in the academic literature, evaluate the effectiveness of AI-enhanced plagiarism detection tools, and understand the long-term consequences of AI utilization on academic integrity. An example framework has been proposed to outline a comprehensive approach to integrating AI into Nephrology academic writing and peer review. Using proactive initiatives and rigorous evaluations, a harmonious environment that harnesses AI’s capabilities while upholding stringent academic standards can be envisioned. Full article
