Editorial Commentary

The rise of artificial intelligence-driven health communication

Roei Golan1^, Rohit Reddy2^, Ranjith Ramasamy3^

1Department of Clinical Sciences, Florida State University College of Medicine, Tallahassee, FL, USA; 2Department of Internal Medicine, University of South Florida-HCA, Brandon, FL, USA; 3Desai Sethi Urology Institute, University of Miami Miller School of Medicine, Miami, FL, USA

^ORCID: Roei Golan, 0000-0002-7214-3073; Rohit Reddy, 0000-0003-1032-6656; Ranjith Ramasamy, 0000-0003-1387-7904.

Correspondence to: Ranjith Ramasamy, MD. Desai Sethi Urology Institute, University of Miami Miller School of Medicine, 1120 NW 14th St, #1551 Miami, FL 33136, USA. Email: Ramasamy@miami.edu.

Comment on: Davis R, Eppler M, Ayo-Ajibola O, et al. Evaluating the Effectiveness of Artificial Intelligence-powered Large Language Models Application in Disseminating Appropriate and Readable Health Information in Urology. J Urol 2023;210:688-94.


Keywords: Artificial intelligence (AI); large language models (LLMs); patient education; online content


Submitted Nov 01, 2023. Accepted for publication Jan 16, 2024. Published online Feb 23, 2024.

doi: 10.21037/tau-23-556


Medical informatics has seen a surge of interest, particularly in the diverse capabilities of artificial intelligence (AI) and large language models (LLMs). These advances are not limited to transforming physician-patient communication; they extend to many other aspects of healthcare, including diagnostics, treatment planning, and healthcare management. The study “Evaluating the Effectiveness of Artificial Intelligence-powered Large Language Models Application in Disseminating Appropriate and Readable Health Information in Urology” is a testament to this promising frontier, highlighting AI’s potential to enhance patient education (1). LLMs are designed to understand and generate human language. Trained on vast amounts of text data, they can answer questions, write texts, and perform a range of language-related tasks. LLMs such as ChatGPT and bidirectional encoder representations from transformers (BERT) have gained attention as potential alternatives to widely used search engines like Google. Their potential use for medical queries underscores the need to evaluate their accuracy and reliability in providing medical advice, as this study did.

Urology is characterized by varied procedures, terminologies, and medications that can prove daunting to those without medical expertise. Physicians have long struggled to make intricate concepts understandable without sacrificing precision (2). This study offers a promising approach to the problem, proposing that LLMs be employed to bridge the comprehension gap.


The necessity for simplified health communication

It is critical to appreciate the need for simplified health communication. Over the years, it has been well-documented that patients who comprehend their medical conditions and the recommended treatments are more likely to be compliant, leading to better health outcomes (2). Additionally, deeper patient understanding not only bolsters their autonomy but also improves the patient-physician relationship (3). Yet, the challenge lies in achieving this comprehension without sacrificing the depth and intricacies of medical information (3). The highlighted study stands out in its innovative approach to tackling this issue. By evaluating the performance of AI-powered LLMs in relaying urological information, the research underscores the feasibility of employing such models in real-world scenarios, both for patient education and for assisting healthcare professionals in their communication efforts.


LLMs: more than just machines

The hallmark of AI and LLMs is their ability to process vast amounts of information and present it in an easily digestible form. Around two-thirds of the responses in this study were judged appropriate in their accuracy, comprehensiveness, and clarity in delivering health information. Comparable studies also report generally accurate responses to questions about both urological and non-urological malignancies (4). Although studies report that LLM responses are often difficult to read (4), patients can ask LLMs to simplify their language or to expand on other topics. Unlike standard informational brochures or pamphlets, these models can tailor their responses to individual queries, ensuring that each patient receives the information most pertinent to their concerns (5,6). For example, a patient can prompt an LLM with “I still don’t understand this procedure, can you explain it?” or “explain it to me in simple 8th-grade terms” until satisfied with the answer. This personalization is pivotal in ensuring that patients not only understand their conditions but also feel heard and acknowledged. However, in the pursuit of simplicity there is a risk of sacrificing comprehensiveness for clarity. This underscores the importance of balancing understandable explanations against the need to convey accurate and complete information. Striking this balance calls on the art of medicine: tailoring information to the unique needs and understanding of the individual patient. In a survey of patients with prostate cancer, respondents reported higher trust in a diagnosis made by AI under physician control than in one made by AI without physician control, and preferred AI-assisted physicians over physicians alone (7).


Ensuring accuracy and safety

The immense potential AI carries is matched by an equally significant responsibility. While the study has shown promising results in terms of the readability and appropriateness of the content generated by the LLM, it is essential to have continual checks and balances in place. The study aptly emphasizes the need for continuous validation and scrutiny to guarantee the accuracy of disseminated information. Ensuring that these models are trained on verified, up-to-date medical databases, and incorporating expert oversight, can mitigate potential pitfalls. This becomes even more vital given the prevalence of misleading or inaccurate information on certain online platforms (8).

LLMs generate responses based on their extensive training data, underscoring the importance of that data’s quality and accuracy. While a model can produce fluent text, its ability to distinguish high-quality from low-quality content depends on what it was trained on. A study examining the proficiency of LLMs in assessing the quality of online information yielded disappointing results, suggesting considerable room for improvement (9).

Another pressing issue is data privacy. When patients entrust personal health information to these models, they must be confident in the data’s confidentiality and security. Incorporating encryption techniques and clear data management protocols is essential for building user trust. This becomes even more significant as software evolves to collect data that tailors information to be more individualized and relevant for users. In addressing the reliability and accountability of LLM-generated results, it is crucial to consider scenarios in which the model produces nonsensical or inaccurate information. Despite their advanced algorithms, LLMs are not infallible and can generate misleading content and hallucinate data (10). This highlights the need for mechanisms to identify and correct such instances promptly. Lastly, the preference of many patients to interact with human healthcare providers rather than AI systems must be acknowledged. The human element in medical care, encompassing empathy, understanding, and personal judgment, remains irreplaceable. For now, while LLMs can be valuable tools, they should complement, not replace, human medical expertise, ensuring that patient care remains grounded in personal interaction and professional judgment.


The way forward

The application of LLMs in urology is merely the tip of the iceberg. As LLMs continue to evolve, their implications and potential in the wider realm of medicine become profoundly apparent (11). The highlighted article offers a thorough examination of the current landscape, underscoring both the benefits and challenges of integrating such systems into healthcare communication.

In conclusion, the application of AI-powered LLMs in urology offers a tantalizing glimpse into the future of healthcare. Although challenges remain, particularly in ensuring that LLMs consistently produce accurate and reliable information, there is a clear need to develop validated tools for assessing the quality of LLM-specific outputs. Additionally, incorporating these models into chatbots could further enhance patients’ access to high-quality health information. The potential advantages for patient care, education, and the broader practice of medicine are substantial. Embracing AI, while treading cautiously, is the way forward. With collaborative efforts among AI experts, medical professionals, and policymakers, a new era of informed, efficient, and patient-centric care is on the horizon.


Acknowledgments

ChatGPT-4 was utilized to review our writing for grammar and organization.

Funding: This work was supported by the National Institute of Diabetes and Digestive and Kidney Diseases (grant UE5 DK137308), National Institutes of Health (grant R01 DK130991), and Clinician Scientist Development Grant from American Cancer Society to Ranjith Ramasamy.


Footnote

Provenance and Peer Review: This article was commissioned by the editorial office, Translational Andrology and Urology. The article has undergone external peer review.

Peer Review File: Available at https://tau.amegroups.com/article/view/10.21037/tau-23-556/prf

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://tau.amegroups.com/article/view/10.21037/tau-23-556/coif). Ranjith Ramasamy was supported by NIDDK grants R01 DK130991, UE5 DK137308, and Clinician Scientist Development Grant from American Cancer Society. The other authors have no conflicts of interest to declare.

Ethics Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Davis R, Eppler M, Ayo-Ajibola O, et al. Evaluating the Effectiveness of Artificial Intelligence-powered Large Language Models Application in Disseminating Appropriate and Readable Health Information in Urology. J Urol 2023;210:688-94. [Crossref] [PubMed]
  2. Barksdale S, Stark Taylor S, Criss S, et al. Improving Patient Health Literacy During Telehealth Visits Through Remote Teach-Back Methods Training for Family Medicine Residents: Pilot 2-Arm Cluster, Nonrandomized Controlled Trial. JMIR Form Res 2023;7:e51541. [Crossref] [PubMed]
  3. Cox CL. Patient understanding: How should it be defined and assessed in clinical practice?. J Eval Clin Pract 2023;29:1127-34. [Crossref] [PubMed]
  4. Musheyev D, Pan A, Loeb S, et al. How Well Do Artificial Intelligence Chatbots Respond to the Top Search Queries About Urological Malignancies?. Eur Urol 2024;85:13-6. [Crossref] [PubMed]
  5. Eppler MB, Ganjavi C, Knudsen JE, et al. Bridging the Gap Between Urological Research and Patient Understanding: The Role of Large Language Models in Automated Generation of Layperson’s Summaries. Urol Pract 2023;10:436-43. [Crossref] [PubMed]
  6. Golan R, Ramasamy R. Editorial Comment. Urol Pract 2023;10:443-4. [Crossref] [PubMed]
  7. Rodler S, Kopliku R, Ulrich D, et al. Patients’ Trust in Artificial Intelligence-based Decision-making for Localized Prostate Cancer: Results from a Prospective Trial. Eur Urol Focus 2023; Epub ahead of print. [Crossref] [PubMed]
  8. Reddy RV, Golan R, Loloi J, et al. Assessing the quality and readability of online content on shock wave therapy for erectile dysfunction. Andrologia 2022;54:e14607. [Crossref] [PubMed]
  9. Golan R, Ripps SJ, Reddy R, et al. ChatGPT’s Ability to Assess Quality and Readability of Online Medical Information: Evidence From a Cross-Sectional Study. Cureus 2023;15:e42214. [Crossref] [PubMed]
  10. Kanjee Z, Crowe B, Rodman A. Accuracy of a Generative Artificial Intelligence Model in a Complex Diagnostic Challenge. JAMA 2023;330:78-80. [Crossref] [PubMed]
  11. Golan R, Reddy R, Muthigi A, et al. Artificial intelligence in academic writing: a paradigm-shifting technological advance. Nat Rev Urol 2023;20:327-8. [Crossref] [PubMed]
Cite this article as: Golan R, Reddy R, Ramasamy R. The rise of artificial intelligence-driven health communication. Transl Androl Urol 2024;13(2):356-358. doi: 10.21037/tau-23-556
