Patient Trust & AI
The increasing reliance on artificial intelligence for health-related queries presents novel challenges for physicians. Consider the case of a young patient diagnosed with a leg tumor, whose anxiety was amplified by a chatbot's grim five-year survival prediction. The prediction proved wrong: the patient recovered fully after surgery. Yet even after being cured, the patient consulted the AI again about a cough and was told it might indicate lung metastasis. In fact, the cough stemmed from a new smoking habit, not cancer spread. The episode illustrates how AI, lacking the nuanced context and emotional support a doctor provides, can lead distressed individuals into unnecessary fear and confusion, creating a 'forest of knowledge without coherent context.' While AI developers are working to improve the accuracy and safety of their health-related models, they emphasize that these tools are not substitutes for professional medical advice, underscoring the critical role of human medical professionals in interpreting and delivering health information.
The App Store Minefield
Beyond conversational AI, a surge of AI-driven medical applications is appearing on app stores, promising help with a wide range of health concerns. Although these apps are generally not permitted to offer diagnoses, many appear to push those boundaries. Under regulatory guidelines, AI medical apps intended solely for patient education do not require governmental approval. Some developers exploit this carve-out: their apps carry disclaimers stating they are for informational use only, even as their marketing promises users can 'become your own doctor' and offers connections to prescriptions and lab orders. One such app, 'Eureka Health: AI Doctor,' was removed from an app store after its promotional materials were flagged, and its developer's website changed its messaging after inquiries. These cases highlight a troubling trend: the line between informational tools and diagnostic aids is blurring, raising questions about user safety and the integrity of health advice distributed through such platforms.
Accuracy Concerns Emerge
The accuracy of AI-powered medical apps is itself a growing concern, with some applications providing incorrect and potentially harmful information. One example is 'AI Dermatologist: Skin Scanner,' which claims accuracy comparable to a professional dermatologist's and suggests it can 'save your life.' Users upload images of skin conditions for an 'instant' risk assessment, but numerous one-star reviews report significant inaccuracies. One user said the app assigned a 75%-95% cancer risk to a growth that a dermatologist deemed unproblematic and not worth a biopsy. Another user, who had been successfully treated for melanoma, found the app labeled her removed cancer 'benign.' The developer states the app is meant for preliminary analysis that encourages professional consultation and says its AI models are built on dermatological literature and curated datasets, while acknowledging that false positives can occur. Following user complaints and scrutiny, several app stores removed 'AI Dermatologist,' then reinstated it after revisions clarifying its non-diagnostic nature; continued re-evaluations later led some platforms to remove it again over concerns that it provided medical data and diagnoses without proper clearance.
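Part of the problem is statistical rather than purely technical: when malignant lesions are rare among the images users upload, even a scanner whose sensitivity and specificity match a strong clinical benchmark will generate mostly false alarms. The short Python sketch below illustrates this base-rate effect; the sensitivity, specificity, and prevalence figures are assumed for illustration only, not numbers published by any app.

    # Illustrative sketch of the base-rate effect behind false positives.
    # All three input figures below are assumed placeholders, not published
    # performance data for any real skin-scanning app.

    def positive_predictive_value(sensitivity, specificity, prevalence):
        """Probability a flagged lesion is truly malignant (Bayes' theorem)."""
        true_pos = sensitivity * prevalence              # malignant and flagged
        false_pos = (1 - specificity) * (1 - prevalence)  # benign but flagged
        return true_pos / (true_pos + false_pos)

    sensitivity = 0.90   # assumed: 90% of melanomas correctly flagged
    specificity = 0.90   # assumed: 90% of benign lesions correctly cleared
    prevalence = 0.01    # assumed: 1% of user-submitted lesions are malignant

    ppv = positive_predictive_value(sensitivity, specificity, prevalence)
    print(f"Chance a 'high risk' flag is a true melanoma: {ppv:.1%}")
    # -> roughly 8.3%: under these assumptions, most 'high risk' flags
    #    land on benign growths.

Under these assumed numbers, only about one 'high risk' flag in twelve corresponds to an actual melanoma, which helps explain how an app can look accurate on a benchmark and still alarm many healthy users.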
Expert Apprehension Mounts
Medical professionals and AI healthcare consultants express significant apprehension about the proliferation of AI-powered medical applications, particularly in complex fields like dermatology. Accurately identifying thousands of distinct skin conditions is difficult, and many apps may lack the comprehensive datasets needed for reliable assessments. The concern is that these apps, even with disclaimers, might lead people to delay essential professional care. Inaccuracies in either direction, false positives or false negatives, pose a risk to patient well-being. While some apps may genuinely aim to prompt early checks or provide preliminary analysis, the inherent limitations of current AI in grasping the subtleties of human health, combined with the potential for users to misinterpret results, warrant a cautious approach. Robust regulatory oversight and clear ethical guidelines become paramount as these technologies continue to integrate into the healthcare landscape.