AI's Risky Prescription
A critical incident involving a 45-year-old man in New Delhi underscores the dangers of seeking medical advice from AI chatbots. He was hospitalized in severe condition after self-administering HIV post-exposure prophylaxis (PEP) on an AI chatbot's suggestion, developing Stevens-Johnson Syndrome, a life-threatening drug reaction characterized by painful skin rashes and blistering. The man had bought a full course of the medication over the counter and taken it for seven days following a high-risk sexual encounter. Doctors at Dr. Ram Manohar Lohia Hospital, who treated him, noted that while AI can provide general health information, it cannot review an individual's medical history, diagnose conditions accurately, or prescribe appropriate medication. The case is a stark warning against substituting AI-driven advice for professional medical consultation: unsupervised medication use can cause serious side effects, toxicity, and drug resistance. HIV prevention drugs such as PrEP and PEP require strict medical supervision; PEP in particular must be started within 72 hours of exposure, and only after the necessary testing.
The Illusion of Authority
In today's hyper-connected world, looking up health information online is commonplace, but the advent of advanced AI chatbots like ChatGPT has amplified the tendency to dangerous levels. Dr. Jitender Nagpal, deputy medical director at Sitaram Bhartia Institute of Science and Research, observes that people increasingly turn to these tools even for complex health issues, treating them as a primary source of medical guidance. While errors with over-the-counter medication may carry less immediate risk, misreading AI advice on a serious condition can delay treatment significantly and prove fatal. Dr. Nagpal emphasizes that AI chatbots respond only to the prompts they receive and cannot grasp the nuances of a patient's illness. They do not know the patient's age, gender, or medical history, nor can they conduct a proper diagnostic interview by asking follow-up questions, so the accuracy of their answers is inherently questionable. Unlike a doctor, AI cannot perform a physical examination to verify symptoms or order diagnostic tests when uncertainty arises; a physician can discuss observations with the patient, share diagnostic impressions, and recommend investigations to arrive at a correct diagnosis and treatment plan.
Global Health Watchdogs Warn
The concerning trend of AI misuse extends beyond individual self-diagnosis and self-prescription. The World Health Organization (WHO) has issued a global caution on the use of large language models (LLMs) in healthcare, urging that human well-being, safety, and autonomy be protected and public health safeguarded. The WHO highlights significant risks in the data used to train these systems: it may contain biases that produce misleading or inaccurate information, harming health outcomes, equity, and inclusiveness. LLM responses can appear authoritative and highly plausible yet be factually wrong or contain critical errors, which is especially dangerous in health contexts. The WHO also notes that LLMs may be trained on data collected without proper consent, and that they may fail to protect sensitive information users provide, including health data. Their capacity to generate convincing disinformation in text, audio, or video that is hard to distinguish from reliable health content poses a serious challenge to the public. While the WHO acknowledges the potential of AI and digital health to improve human health, it strongly advises policymakers to prioritize patient safety and robust regulatory frameworks as technology firms race to commercialize LLMs.