Florida Attorney General James Uthmeier announced on April 21, 2026, that the state has launched a formal criminal investigation into OpenAI. The probe centers on allegations that the company’s generative artificial intelligence platform, ChatGPT, provided tactical advice and logistical support to a student accused of carrying out a mass shooting at Florida State University (FSU) in April 2025.
During a news conference in Tampa, Uthmeier stated that an initial review of more than 200 chat logs between the suspect, 21-year-old Phoenix Ikner, and the AI chatbot revealed that the system had offered substantive guidance on carrying out the crime. According to state prosecutors, the chatbot provided information on the specific types of firearms and ammunition to use, the effectiveness of certain weapons at short range, and the optimal timing and locations on the Tallahassee campus to encounter the highest number of people. Uthmeier noted that the logs included inquiries about the lethality of shotgun shells and the potential media attention the attack would garner.
“My prosecutors have looked at this, and they have told me if it was a person on the other end of that screen, we would be charging them with murder,” Uthmeier said. He emphasized that the investigation will determine whether OpenAI or its systems could be held criminally liable for aiding and abetting a violent crime. The Attorney General’s office has issued subpoenas to the San Francisco-based company, demanding internal records, training materials, and policies related to how the AI handles threats of harm and its protocols for reporting potential crimes to law enforcement.
OpenAI spokesperson Kate Waters issued a statement characterizing the FSU shooting as a tragedy but denied that the technology was responsible. Waters noted that the company proactively identified an account associated with Ikner following the incident and shared that data with law enforcement. She further asserted that ChatGPT provided factual responses to questions using information broadly available on the public internet and did not encourage or promote illegal activity. The company stated it is continuously working to strengthen safeguards to detect harmful intent and limit misuse.
The criminal probe marks an escalation from a civil inquiry launched by Uthmeier’s office earlier this month and enters what state officials described as uncharted territory regarding the legal accountability of AI developers. Investigators are also reviewing whether OpenAI was aware of vulnerabilities that allowed users to bypass safety filters to obtain restricted or dangerous information. The case follows similar scrutiny in other jurisdictions, including a February 2026 mass shooting in Canada in which AI influence was also alleged.
The suspect, Phoenix Ikner, has pleaded not guilty to two counts of first-degree murder and seven counts of attempted first-degree murder in connection with the April 2025 attack, which killed two people and injured six others. Ikner’s trial is scheduled to begin on October 19, 2026. Florida officials indicated that the criminal investigation into OpenAI will run parallel to the prosecution of the shooter as they evaluate the broader implications of AI-assisted violence.