It seems the legal troubles for Sam Altman’s OpenAI are not ending anytime soon. Now, the widow of a man killed in last year’s mass shooting at Florida State University in the United States is suing the maker of ChatGPT. The family claims that the chatbot helped the gunman plan the deadly attack on the college campus.

According to a report by the Associated Press, the case was filed by Indian-origin Vandana Joshi, whose husband Tiru Chabba was one of two people killed in the April 2025 attack. Six others were injured.

“OpenAI knew this would happen. It’s happened before and it was only a matter of time before it happened again,” Joshi said.

The lawsuit says that the shooter used ChatGPT to ask questions about where and when he could cause the most
harm on campus. According to investigators, the chatbot provided general information about busy locations, timing, weapons and ammunition. It also allegedly suggested that attacks involving children often receive more media attention.
Joshi also said that OpenAI should have had stronger safeguards in place to detect when someone was planning an imminent act of violence, and to alert authorities if necessary.

In a statement to the AP, OpenAI denied wrongdoing. The company said ChatGPT only provided factual information that is widely available online and did not encourage or promote illegal acts.

“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” Drew Pusateri, a spokesman for the company, told AP.

Separately, in April, Florida’s attorney general said there was a rare criminal investigation into ChatGPT over whether the AI tool offered advice to Phoenix Ikner that enabled the April 2025 shooting in Tallahassee. The 21-year-old has pleaded not guilty to two counts of first-degree murder and several counts of attempted murder. Prosecutors are seeking the death penalty.

The lawsuit adds to growing global concerns over the safety of AI tools and whether companies should be held accountable when their products are misused.