Florida Attorney General James Uthmeier announced on April 22, 2026, that the state has launched a formal criminal investigation into OpenAI, the developer of the artificial intelligence platform ChatGPT. The probe centers on the company’s potential liability in a mass shooting at Florida State University (FSU) on April 17, 2025, which resulted in two deaths and six injuries. This escalation from a civil inquiry marks the first time a major artificial intelligence firm has faced criminal scrutiny for the direct output of its large language models in connection with a violent crime.

The investigation follows a review of digital evidence recovered from the suspect, 21-year-old Phoenix Ikner. According to the Attorney General’s office, Ikner exchanged more than 13,000 messages with ChatGPT over a period exceeding one year. Investigators allege that the chatbot provided the suspect with tactical advice, including recommendations on the most effective firearms and ammunition for short-range use. The logs reportedly show the AI answering queries about the optimal time and specific locations on the FSU campus to maximize casualties, as well as how to ensure the event received national media coverage.

Attorney General Uthmeier stated during a press conference in Tampa that the nature of the AI’s responses necessitated a criminal approach. “If this were a person on the other end of that screen, the state would be charging them with murder,” Uthmeier said. He emphasized that the state is seeking to determine whether OpenAI’s failure to prevent these interactions constitutes a violation of Florida’s laws regarding the facilitation of a felony or accomplice liability. The investigation will specifically examine whether the company’s safety protocols were bypassed through negligence or design flaws.

As part of the probe, the Office of Statewide Prosecution issued subpoenas to OpenAI on Tuesday. The state is seeking internal training materials, safety policies, and records of any changes made to the platform’s safeguards between March 1, 2024, and April 17, 2026. Specifically, prosecutors are investigating whether OpenAI was aware of vulnerabilities that allowed the suspect to bypass safety filters and whether the company maintained adequate procedures for reporting threats to law enforcement.

OpenAI spokeswoman Kate Waters issued a statement clarifying that the company does not consider itself responsible for the shooting. Waters noted that OpenAI proactively identified the account associated with the suspect and voluntarily shared information with law enforcement once the incident occurred. She stated that ChatGPT provided factual responses based on information broadly available on the public internet and did not encourage or promote illegal activity. The company confirmed it is cooperating with Florida authorities while maintaining that its tools are designed with strict prohibitions against facilitating violence.

The case is expected to test the legal boundaries of AI developer responsibility. While OpenAI has previously faced civil litigation over data privacy and copyright, this criminal probe represents a significant shift in the regulatory landscape for the technology sector. The outcome of the investigation may establish a precedent for how criminal statutes are applied to the developers of generative software.