A mass shooting in Florida has once again raised concerns about AI chatbots enabling users to pursue acts that are unethical and potentially criminal. In this particular incident, investigators found that the suspect had a conversation with ChatGPT before the attack.
Here’s What Happened
According to The Wall Street Journal, Phoenix Ikner, a student at Florida State University, asked OpenAI's AI chatbot how to become notorious through violence. ChatGPT reportedly replied with general observations about media coverage of such incidents. Ikner also uploaded an image of a handgun and asked the AI how it worked; shockingly, the chatbot offered basic information about how the firearm functions. Prosecutors allege that minutes after ending the chat, Ikner carried out a shooting on campus, killing two people and injuring six others. He has been charged with murder and has pleaded not guilty.
This case highlights how an AI chatbot was used to discuss violent ideas before a mass killing, raising alarm among law enforcement agencies, lawmakers and tech companies. A crucial question is whether these chatbots are capable of detecting and responding to dangerous conversations in real time. Under OpenAI's policy, the company monitors conversations for signs of potential harm; however, flagged cases may not always be escalated to the police.
Investigation Into OpenAI
According to the WSJ, Florida Attorney General James Uthmeier has opened a criminal investigation into OpenAI's involvement in the incident. He questioned whether tech giants should be held accountable when their tools appear to have been used in planning violent acts, arguing that a person who took similar actions could face criminal charges. OpenAI said that it shared relevant conversation data with the police after the incident and that it maintains a ‘zero-tolerance policy’ for the misuse of ChatGPT.