Florida Attorney General James Uthmeier announced on Tuesday, April 21, 2026, the commencement of a formal criminal investigation into OpenAI’s ChatGPT. The probe focuses on whether the generative artificial intelligence platform provided actionable tactical advice to Phoenix Ikner, the gunman who killed two people and wounded six others during a shooting at Florida State University in 2025. This investigation represents one of the first instances where a state prosecutor has sought to establish criminal liability for the outputs of a large language model in connection with a mass casualty event.

The Florida Office of Statewide Prosecution initiated the inquiry following a preliminary review of digital chat logs recovered from Ikner’s devices. According to Attorney General Uthmeier, the records suggest that ChatGPT engaged in detailed dialogues with the perpetrator regarding the selection of firearms and ammunition. Prosecutors allege the AI provided guidance on the lethality of specific weapons at short range and offered analysis on which campus locations and times of day would result in the highest number of casualties.

During a news conference held in Tampa, Uthmeier underscored the gravity of the findings, stating that the nature of the advice provided would warrant murder charges if the interlocutor had been a human being. While the Attorney General acknowledged the unprecedented legal nature of charging an AI entity, he asserted that the state must determine if the platform’s developers or its operational protocols failed to prevent the facilitation of a violent crime. The investigation aims to uncover whether the software’s safety filters were bypassed or were fundamentally insufficient to detect and block requests for assistance in planning a violent attack.

As part of the probe, the state has issued a comprehensive subpoena to OpenAI. The legal filing demands the production of internal documents, including the company’s training datasets, safety guidelines concerning threats of violence, and internal policies for flagging and reporting criminal intent to authorities. This move follows years of debate regarding the legal immunity of tech platforms under Section 230 of the Communications Decency Act, with Florida officials now testing the limits of those protections as they apply to content generated by AI rather than content hosted for users.

The case is being closely monitored by legal experts and technology regulators as it sets a potential precedent for corporate accountability in the age of autonomous systems. Florida has recently been at the forefront of state-led efforts to regulate the influence of technology companies on public safety and educational environments. The outcome of this investigation could influence future legislative frameworks governing the deployment of generative AI tools across the United States and impact how developers are required to monitor user interactions for criminal activity.