Consulting Dr ChatGPT: The role of AI and machine learning in health care


A new wave of artificial intelligence (AI) technology has prompted questions about how it may change clinical practice and health care delivery in the future. In particular, with news that ChatGPT, an AI large language model developed by OpenAI, performed at or near the passing threshold on the United States Medical Licensing Examination (USMLE) and on the American Heart Association Basic Life Support and Advanced Cardiovascular Life Support exams, what is the role of AI in the health care context, and will it overtake the role of physicians?[1,2]

AI technology has already been evaluated in multiple medical and health care contexts, where it has been shown to effectively predict the survival of cancer patients from initial oncologist consultation documents and to identify potential outbreaks through real-time processing of public social media data.[3,4] Unlike their predecessors, current AI models like ChatGPT have been revolutionary in their capacity to synthesize vast amounts of information, interpret words in context, and present responses in a way that mimics human communication.[5] Although these unprecedented recent advancements represent an exciting possibility, ethical and practical barriers remain to adopting AI technology in clinical and health care settings. 

A key concern is the potential for AI models to sustain or exacerbate harmful stereotypes and disparities among marginalized patients. The purely algorithmic nature of AI models does not allow unique differences in socioeconomic status, race, ethnicity, gender, or other factors to be taken into account. This has the potential to produce biased results if the data used to train the AI models are not adequately diverse and representative.[6,7] For instance, a 2019 study by Obermeyer and colleagues found evidence of racial bias in an algorithm widely used to identify health care needs.[7] The study found that Black patients were considerably sicker than White patients assigned the same level of risk, and it estimated that nearly half of Black patients requiring additional care were missed.[7] The authors speculated that the error stemmed from a larger systemic issue wherein less money is spent on Black patients than on White patients with similar levels of need.[7] Because health costs were used as a proxy for health needs, the algorithm falsely concluded that Black patients were healthier than White patients with similar presentations.[7] Ultimately, AI models rely on the representativeness of existing data, and if the existing data reflect systemic biases and inequities, those biases will seep into the resulting conclusions. AI programs are susceptible to the bias that prevails across the Internet and, although they may recognize blatantly discriminatory content, they have limited ability to critically appraise data and filter out subtly ingrained implicit biases.[6]
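To make the proxy-label problem concrete, the following minimal sketch uses synthetic data and hypothetical variable names; it is not the actual algorithm examined by Obermeyer and colleagues. It simulates two groups with identical underlying health needs but unequal health care spending, then flags the highest predicted-cost patients for extra care, as a cost-proxy algorithm would.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic cohort: two groups with identical distributions of true health need.
group = rng.integers(0, 2, n)             # 0 = group A, 1 = group B
need = rng.normal(50, 10, n)              # true (unobserved) health need

# Structural inequity: at the same level of need, less is spent on group B.
spend = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 5, n)

# A model trained to predict cost can, at best, recover `spend` exactly.
# Flag the top quartile of (perfectly) predicted cost for extra care.
flagged = spend >= np.quantile(spend, 0.75)

# Among patients with genuinely high need, how often is each group flagged?
high_need = need >= np.quantile(need, 0.75)
for g, label in ((0, "group A"), (1, "group B")):
    mask = high_need & (group == g)
    print(f"{label}: {flagged[mask].mean():.0%} of high-need patients flagged")
```

In this sketch, even a perfectly accurate cost predictor reproduces the disparity, because the bias resides in the choice of label, not in the model’s fit.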

Another concern is that in our current digitized era, when misinformation and poor-quality data are more rampant in online sources than ever before, AI technology may disseminate or even generate false information. Erroneous health claims disseminated during the COVID-19 pandemic and antivaccine movements highlighted the dire consequences of misinformation.[8] Unfounded beliefs and misconceptions have spread at unprecedented rates, negatively affecting health outcomes by contributing to nonadherence to recommended treatments and the use of unrecognized and unapproved therapies.[8] Unfortunately, current AI algorithms are prone to amplifying misleading and inaccurate online data rather than filtering them out. Even more concerning, ChatGPT may fabricate information (for example, creating fake references to support existing claims).[9,10] A study of ChatGPT’s application in radiology demonstrated that, on several occasions, ChatGPT provided plausible but factually incorrect answers, for example regarding the timing of portal venous phase injection.[10] In this way, ChatGPT was likened to an “extremely knowledgeable professor” who “despite laying out a compelling and generally plausible answer . . . is occasionally wrong.”[10] Inaccurate or conflicting information creates confusion and sows distrust in already anxious patients, which may lead them astray and prevent them from accessing the care they need in a timely manner.[8]

Although ChatGPT may have access to a seemingly unlimited amount of data, physicians bring a blend of clinical acumen and interpersonal connection that AI systems cannot yet replicate.[11] Even as trainees, we are taught early on to “treat the patient, not the disease,” which involves considering an individual’s broader psychosocial factors and emotional needs.[12] Patients are not as straightforward as the standardized question stems seen on the USMLE; they cannot be reduced to data points with no regard for their complex medical, social, and psychiatric backgrounds and unique circumstances. A physician’s role goes beyond making a diagnosis or performing a procedure; it requires the ability to provide fully integrated, compassionate care in the context of an individual patient’s needs.[11]

Whether it be to relieve administrative burdens or to communicate complex medical information to patients, AI models like ChatGPT hold great promise as clinical tools but require further evaluation. Ultimately, the tremendous capacity of AI technology lies in its potential to support and enhance the work of physicians, not to take their place. It is critical that medical professionals understand the capabilities and limitations of emerging AI technologies in order to use these tools effectively, with the necessary safeguards in place. 
—Min Jung, BSc

References
1.    Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health 2023;2:e0000198.
2.    Fijačko N, Gosak L, Štiglic G, et al. Can ChatGPT pass the life support exams without entering the American Heart Association course? Resuscitation 2023;185:109732.
3.    Nunez JJ, Leung B, Ho C, et al. Predicting the survival of patients with cancer from their initial oncology consultation document using natural language processing. JAMA Netw Open 2023;6:e230813.
4.    Serban O, Thapen N, Maginnis B, et al. Real-time processing of social media with SENTINEL: A syndromic surveillance system incorporating deep learning for health classification. Inf Process Manag 2019;56:1166-1184.
5.    Hassani H, Silva ES. The role of ChatGPT in data science: How AI-assisted conversational interfaces are revolutionizing the field. Big Data Cogn Comput 2023;7:62.
6.    Alba D. OpenAI chatbot spits out biased musings, despite guardrails. Bloomberg. 8 December 2022. Accessed 29 April 2023. www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-....
7.    Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019;366:447-453.
8.    Lee SK, Sun J, Jang S, Connelly S. Misinformation of COVID-19 vaccines and vaccine hesitancy. Sci Rep 2022;12:13681.
9.    Gravel J, D’Amours-Gravel M, Osmanlliu E. Learning to fake it: Limited responses and fabricated references provided by ChatGPT for medical questions. Mayo Clin Proc Digit Health 2023;1:226-234.
10.    Saliba T, Boitsios G. ChatGPT, a radiologist’s perspective. Pediatr Radiol 2023;53:813-815.
11.    DiGiorgio A, Ehrenfeld JM. Artificial intelligence in medicine and ChatGPT: De-tether the physician. J Med Syst 2023;47:32.
12.    Chronic Disease Coalition. Three reasons why doctors should treat the patient, not the disease. Accessed 30 April 2023. https://chronicdiseasecoalition.org/news/three-reasons-doctors-treat-pat....
 


Min Jung completed her Bachelor of Health Sciences at McMaster University and is currently a 4th-year medical student at the University of British Columbia with an interest in medical education as well as clinical and outcomes research.


This post has been peer reviewed by the BCMJ Editorial Board.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
