Legal debate grows: Can ChatGPT be charged in a murder?
Source: Reuters

Before opening fire on the Florida State University campus last year, killing two people and injuring six others, Phoenix Ikner had a conversation. Not with a friend, a family member, or anyone who might have intervened, but with an artificial intelligence chatbot.

According to evidence gathered by Florida’s attorney general, the student asked ChatGPT which weapon and ammunition would be most effective for carrying out an attack, as well as when and where he could cause the greatest number of casualties, News.Az reports, citing AFP.

Investigators say the chatbot responded to his queries.

Now Florida Attorney General James Uthmeier is seeking to determine whether that interaction could make OpenAI criminally responsible.

“If the thing on the other side of the screen was a person, we would charge it with homicide,” he said, announcing a criminal investigation into OpenAI and leaving open the possibility of charges against the company or its employees.

The case tied to the April 2025 shooting has raised a difficult legal question: can the developers of artificial intelligence be held criminally liable for actions carried out with the assistance of their systems, including violent crimes or even suicides?

Legal experts say such a possibility exists, but it would be highly complex.

Criminal prosecutions of corporations are allowed under US law, though they are relatively rare. Recent examples include Purdue Pharma, which faced more than $5 billion in criminal fines for its role in the opioid crisis, Volkswagen in the emissions cheating scandal, Pfizer over its promotion of the anti-inflammatory drug Bextra, and Exxon following the Exxon Valdez oil spill.

However, those cases all involved human decision-making—executives, engineers, or managers who directly made choices or ignored known risks. The Florida case is different because the alleged harm is tied to the output of an AI system rather than a clear human directive.

“That’s what makes this case so unique and so tricky,” said Matthew Tokson, a law professor at the University of Utah. “Ultimately, it was a product that encouraged this crime, that did the act of the crime.”

Legal specialists say prosecutors would most likely consider charges such as negligence or recklessness, with recklessness involving conscious disregard of known risks. These are often misdemeanors rather than felonies.

However, proving such cases is difficult.

“Because this is such a frontier issue, a more compelling case would involve internal documents showing awareness of these risks and a failure to act,” Tokson said. “In theory, you could get liability without it, but in practice, that would be difficult.”

In criminal law, prosecutors must meet a high standard of proof—beyond a reasonable doubt—according to Duke University law professor Brandon Garrett.

OpenAI has rejected responsibility for the incident.

“We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise,” the company said.

Some legal experts argue that civil lawsuits may be a more practical route for accountability. Such cases could pressure AI companies to improve safeguards and better anticipate misuse.

Several civil cases have already been filed in the US involving AI systems and alleged links to suicides, though none has yet resulted in a ruling against an AI company.

In December, the family of Suzanne Adams, a Connecticut retiree killed by her son, filed a lawsuit against OpenAI in California alleging that ChatGPT contributed to her death.

While newer versions of ChatGPT include additional safety controls, attorney Matthew Bergman of the Social Media Victims Law Center said these protections are still evolving.

“I’m not saying that they are adequate guardrails, but there are more guardrails in effect,” he said.

Even if criminal charges are unlikely to succeed, experts note that such prosecutions could still cause major reputational harm to the companies involved.

However, for Brandon Garrett, legal action is not a substitute for regulation.

He argues that clearer legal frameworks from Congress and policymakers would be a more effective way to address AI risks than high-profile courtroom battles.


By Nijat Babayev
