AI's Medical Missteps
The integration of artificial intelligence into healthcare is presenting unique challenges for medical professionals. Dr. Cem Aksoy, a medical resident
in Turkey, encountered a distressing situation with an 18-year-old patient who, after being diagnosed with a leg tumor, consulted ChatGPT. The chatbot gave him an incorrect and grim five-year survival estimate, causing immense panic. The tumor was successfully removed and the patient cured, but a few weeks later he developed a cough and, recalling the AI's dire prediction, feared the cancer had spread to his lungs. He became so distraught that he considered writing a will. Fortunately, his lungs were healthy; the cough was attributed to his having recently taken up smoking. Dr. Aksoy noted that when people are emotionally vulnerable and lack guidance, AI chatbots can overwhelm them with information stripped of necessary context, causing undue distress. OpenAI has said that its newer models handle health queries better and that ChatGPT is not intended to replace professional medical advice. The incident underscores the delicate balance between AI's potential and its limitations in sensitive medical contexts, particularly when users are already anxious.
Questionable Health Apps
Beyond chatbots, a growing number of AI-driven consumer medical applications are appearing on app stores, promising help with health concerns. While these apps are generally not permitted to provide diagnoses, many appear to push those boundaries. For instance, an app named 'Eureka Health: AI Doctor' marketed itself as a comprehensive personal health companion. Although its App Store listing stated it was for 'informational purposes only' and did not diagnose or treat diseases, its developer's website proclaimed, 'Become your own doctor,' and 'Ask, diagnose, treat,' even suggesting the AI could connect users to prescriptions and care. Following an inquiry from Reuters, the app was removed from the Apple App Store; Apple's guidelines require that accuracy claims be backed by clear disclosure of data and methodology. The developer, Sam Dot Co, did not respond to requests for comment, and the promotional website was subsequently altered. The episode highlights a significant concern: apps can misrepresent their capabilities and exploit user trust by touting advanced AI without proper validation or regulatory oversight, even when they include disclaimers.
Dermatology App's Flaws
Another AI-powered application, 'AI Dermatologist: Skin Scanner,' claims accuracy on par with a professional dermatologist and boasts over 940,000 users. Users upload images of moles and other skin conditions for an 'instant' risk assessment; the app's website asserts it 'can save your life.' The developer, Acina, says the app uses a proprietary neural network trained on dermatological images to identify specific skin conditions with over 97% accuracy. Numerous one-star reviews on app stores, however, point to frequent inaccuracies. One user, Daniel Thiberge, said that after he uploaded photos of a growth on his arm, the app reported a 75%-95% risk of cancer; a dermatologist later told him the growth was not problematic and did not warrant a biopsy. Thiberge questioned the app's purpose if its results were so wildly off, calling it useless at best and dangerous at worst if it discourages people from seeking professional care. Another user reported uploading images of a melanoma that had already been diagnosed and surgically removed, only for the app to classify it as 'benign,' raising fears that others might delay doctor visits on the strength of such erroneous assessments. Acina said the app is intended for preliminary analysis to encourage professional consultation, noted that false positives can occur, and pointed to positive reviews from users who sought early medical attention. Nonetheless, both Apple and Google removed the app after Reuters brought these issues to light.
Regulatory Scrutiny
The shaky performance of AI medical apps has drawn significant regulatory and platform scrutiny. A Google spokesperson emphasized that the Play Store prohibits misleading or harmful health functionality and requires proof of regulatory clearance, or appropriate disclaimers, for medical apps. After Acina revised the app to clarify its non-diagnostic nature and the need for professional consultation, Google reinstated it. Apple briefly reinstated it as well before removing it again, saying the app provided medical data, measurements, diagnoses, or treatment advice without appropriate regulatory clearance; Acina is appealing that decision. Dr. Rachel Draelos, a physician and AI healthcare consultant, expressed grave concern about AI medical apps, particularly in dermatology, because skin conditions are complex and it is difficult to ensure that AI training datasets cover the thousands of possible conditions. This ongoing back-and-forth between developers, app stores, and regulators underscores the challenge of ensuring the safety, accuracy, and ethical deployment of AI in healthcare, especially when user well-being is at stake.