What's Happening?
Almira Osmanovic Thunström, a Swedish medical researcher, invented a fictitious disease called 'bixonimania' to test the reliability of AI systems. The entirely fabricated condition was picked up by AI platforms such as ChatGPT and Microsoft’s Copilot, which began presenting it as a real diagnosis. The hoax escalated when a scientific journal in India published a paper citing 'bixonimania' as a legitimate disease; the paper was eventually retracted in March 2026. The incident highlights how easily misinformation can spread through AI systems and underscores the importance of verifying sources.
Why It's Important?
This event underscores the vulnerabilities of AI systems and their potential to propagate misinformation as fact. Unchecked reliance on AI for medical advice can pose significant public health risks. The incident also raises concerns about the integrity of scientific publishing and the need for rigorous peer review. As AI becomes more integrated into healthcare, ensuring the accuracy and reliability of information is crucial to prevent similar episodes, which could cause public panic or misuse of medical resources.
