Is AI Culpable for the Crimes It Helps Its Users Commit? Florida’s Landmark Case Against ChatGPT

ChatGPT, a popular AI chatbot, may soon face criminal lawsuits for the role it played in last year’s FSU shooting. (AP / Kiichiro Sato)

A shooting at Florida State University (FSU) last April left two dead, six wounded, and an entire school community in shock and fear. In the ensuing investigation, it was discovered that the shooter had sought help from ChatGPT, an AI chatbot, to plan and execute the attack.

Last week, Florida Attorney General James Uthmeier announced that, after investigating the role AI played in the shooting, the state of Florida is looking into possible criminal suits against OpenAI, the company behind ChatGPT. The shooter reportedly asked ChatGPT what type of gun and ammunition to use and how to use them, where and when to act in order to maximize casualties, and how the public would respond if he went through with it.

Citing public safety and national security concerns, Uthmeier said that “criminal investigation is necessary,” and that, “If this were a person on the other end of the screen, we would be charging them with murder.”

While Uthmeier indicated that any lawsuits would be tied specifically to the FSU shooting, legal action against AI companies would follow several lengthy investigations into the uncertain criminal liability of AI companies and artificial intelligence itself in cases involving violence, self-harm, sexual abuse material, and leakage of personal information to third parties.

Although this is the first major legal challenge Florida has brought against an AI company, the state has been setting the pace for prosecuting AI-related crime in multiple areas. In recent cases involving AI-generated child sexual abuse material, Florida courts have handed down maximum sentences — including a landmark 135-year sentence last December. A bill recently introduced in the Florida Senate, called the “AI Bill of Rights,” seeks to protect users from privacy invasions and, in particular, from malicious engagement by AI chatbots. The bill follows the death of a 14-year-old boy who died by suicide after communicating with an online chatbot that his mother claims manipulated and groomed him.

Despite this, there is a precedent, especially within this administration, of federal policy rolling back AI regulations and promoting a widespread framework of AI infrastructure throughout the country. A recent executive order significantly curtailed states’ ability to regulate the use and production of AI, aiming to remove barriers and promote innovation.

This case in Florida will be among the first to challenge that federal mandate, and if it proceeds, it will likely create legal complications for AI companies and legislators alike should state and federal protections for and against AI come into conflict. Although a similar case was introduced in Canada in March, AI and major companies like OpenAI have yet to face a legal challenge of this magnitude in the United States.

Another legal complexity this case reveals is how responsibility is measured when it comes to acts of violence like school shootings. In a landmark case in 2024, Jennifer and James Crumbley, the parents of a school shooter in Michigan, were convicted of involuntary manslaughter for the crimes their son committed and were each sentenced to at least 10 years in prison. The court ruled that they bore fault for their son’s choices and his ability to carry out a school shooting. Critically, the environment in which they raised their child, along with the information and access to weapons they gave him, was ultimately enough for the court to hold them liable for his actions.

Given that, would AI be placed in a similar context when it comes to culpability? If an AI chatbot encourages violence and provides the information needed to carry out an attack, would it be held to the same level of liability as the Crumbleys? A spokesperson for OpenAI claimed that any information the shooter obtained via ChatGPT could similarly have been obtained with a Google search. This case will likely ask whether ChatGPT has a moral obligation to filter search results that could be used to perpetrate violence, or at least to monitor and report accounts with suspicious activity. If an AI chatbot provides the same information that could otherwise be found on the internet, how much more responsibility does ChatGPT bear than Google?

These questions closely resemble those raised in debates over gun violence and the culpability of gun manufacturers or lawmakers in mass shootings. Some argue that responsibility for deaths from gun violence lies solely with the perpetrators — that guns are merely tools used by people to commit these acts of violence. On the other hand, it is hard to argue that without guns, mass shooters could achieve the same degree of terror and destruction as they can with semi-automatic rifles. In the same way guns are tools wielded by individuals, AI is becoming a new means of inflicting violence. In the case against ChatGPT, however, it is entirely arguable that the shooter could have obtained the same information and achieved the same ends with or without the chatbot.

Given the previous legal cases establishing culpability for mass shootings with regard to parents and guns, the complete lack of precedent for this case, and a moratorium in place against limiting the development of artificial intelligence, it is difficult to know how this case will end — especially since it has not yet properly begun. Florida seems to be leading the charge on accountability in the AI sphere, and on translating that accountability into legal liability in cases of violence. During last week’s conference, Florida Attorney General Uthmeier said that his office is “going to look at who knew what, designed what, or should have done what. And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate tragic events might take place, and nevertheless still turned to profits, still allowed this business to operate, then people need to be held accountable.”

Even if OpenAI is held liable in cases of violence and self-harm in Florida, it will likely face many more challenges in other states and at the federal level as AI continues to grow and evolve.

The Zeitgeist aims to publish ideas worth discussing. The views presented are solely those of the writer and do not necessarily reflect the views of the editorial board.