AI is Getting Away with Murder

AI DEVELOPMENTS, AI ART & AI SAFETY | April 22, 2026 | Blog Post
by Adele Berry

I only watched a couple of episodes of the TV show How to Get Away with Murder. Generally, I enjoy performances by Viola Davis, the show’s star. But the premise of the show made my skin crawl. No one should get away with murder.

And yet, we live in a country where some entities that commit crimes never face criminal trials, and their victims never see justice.

Let me quickly walk you through an exercise I like to do when I’m talking to people about how AI is getting away with murder.

The One You Love

Take out your cell phone and find a picture of somebody you care about deeply. When you think of this person, you can’t help but smile to yourself. This is someone you really love.

Remember one of the best moments you had with that person. Maybe it was an unplanned conversation that delighted and surprised both of you. Or a simple interaction, a look that communicated the bond shared by only the two of you. Whenever you think about it, it warms your depths.

Sit in that moment. Feel those feelings again.

Now I'm going to ask you to go in a slightly different direction.

Imagine the worst thing that could happen to that person you love. What is the one thing you hope never happens to them? What do you most dread, the thing you would never even admit to yourself?

Now imagine it has happened.

Justice Served

You’re sitting in the courtroom and the perpetrator of the crime is before a judge and jury. They’re about to be sentenced for their heinous act. Justice will deal a penalty fitting for the crime.

The death penalty? Life in prison?

This person will pay a price. It may seem inadequate against the waves of anger and ache that plague you now. But the law gives you some recourse.

Whether the crime was murder, abusing a vulnerable child, or another grave harm, there is a path to prosecution and accountability.

AI’s Immunity from Civil Liability

But if these same crimes are committed by AI, there is no such justice, accountability, or path to prosecution. Yes, all you secret killers looking to get away with murder—simply figure out how to mask yourself as AI and you can get off scot-free (at least for now).

You see, there are currently few laws that hold AI models and the tech companies that created them accountable for the harms they cause. And yet, AI has coaxed minors to commit suicide and facilitated other deplorable acts. Still, there is little legislation to prevent these tragedies from happening again.

Sixteen-year-old Adam Raine from Southern California started using ChatGPT to help with his homework in September 2024. Within months, the chatbot became his closest confidant, validating his darkest thoughts and coaching him toward suicide. In their final conversation at 4:30 a.m. on April 11, 2025, Adam showed ChatGPT a photo of a noose he had tied to his closet rod and asked if it could hold a person. The chatbot analyzed its load-bearing capacity and offered suggestions to strengthen it.

When Adam wrote “I want to leave my noose in my room so someone finds it and tries to stop me,” ChatGPT urged him to keep it secret from his family, responding: “Please don’t leave the noose out. Let’s make this space the first place where someone actually sees you.”

Adam died that morning. His parents, Matthew and Maria Raine, filed a wrongful death lawsuit against OpenAI in August 2025, alleging ChatGPT served as their son’s suicide coach. The lawsuit is ongoing. No criminal charges have been filed against anyone. There is no defendant to prosecute.

Adam's case is not isolated. In August 2025, Stein-Erik Soelberg, a 56-year-old former tech executive with a documented history of mental illness, fatally beat and strangled his 83-year-old mother, Suzanne Adams, then stabbed himself. A wrongful death lawsuit filed by Adams’ estate alleges that ChatGPT fueled and reinforced Soelberg’s paranoid delusions rather than challenging them.

This past February, a shooter killed eight people and injured dozens more in British Columbia. The alleged attacker had discussed gun violence scenarios with ChatGPT and had been banned from the platform months before the attack, but she was able to evade detection by creating a new account. One of the victims’ families filed a lawsuit against OpenAI.

Adding insult to injury, an Illinois bill called SB 3444, the Artificial Intelligence Safety Act, introduced in February 2026, would shield developers of the most powerful AI models from civil liability even in cases involving the death or serious injury of 100 or more people, more than a billion dollars in property damage, or AI being used to help create a chemical, biological, or nuclear weapon. OpenAI testified before Illinois lawmakers in support of the bill.

Yes, the bill named the Artificial Intelligence Safety Act protects billion-dollar tech companies. Not people.

Holding AI Accountable for Harms

Yesterday, Attorney General James Uthmeier of Florida alleged that ChatGPT aided the Florida State University shooter and stated, “If it had been a person on the other end of that screen, we would be charging them with murder.”

The FSU shooter, Phoenix Ikner, is accused of killing two people and injuring six others on April 17, 2025. According to Uthmeier, ChatGPT advised Ikner on weapons and ammunition, suggested the time of day when he could injure the most people, and identified the busiest campus locations with the most targets. More than 200 AI-generated messages have been entered into evidence. OpenAI has been subpoenaed, and this is reportedly the first criminal investigation into an AI company in the United States.

If a person can be charged with murder, shouldn’t AI and its creators be held to the same standard?

Attorney General Uthmeier thinks so. Florida’s investigators think so. The families of Adam Raine, Suzanne Adams, and the victims in British Columbia and Florida State University think so.

I think so too.

The question is whether you do and what you’ll do about it.

If you do one thing after reading this: Contact your congressional representative at house.gov/representatives and tell them you want federal AI safety legislation now. You can’t get away with murder, and neither should AI.

AI-generated video created by Adele Berry | June 18, 2025

Today's AI-generated videos, when done well, can be strikingly realistic, with the visual fidelity and cinematic look of a professional production. The improvement in quality in just one year is incredible.

With tools like Sora, Veo, and Seedance 2.0, you can create realistic video clips that look almost as good as something out of Hollywood. Recently, a Seedance 2.0 video went viral of Tom Cruise fighting Brad Pitt. The lighting, the destroyed, apocalyptic landscape—everything about it looks like an authentic million-dollar film with the budget to afford A-list actors. But it isn't. It was generated by one person with a two-line prompt.

Hollywood is nervous, and it should be. What does this mean for filmmakers and cinephiles like me? You can now generate a film without the set directors, the costume designers, the cinematographers, or even the actors.

This development is even more disruptive because it makes it difficult to distinguish fact from fiction. What does it mean when footage looks real but isn't? And what about when it is real—a crime committed—but it's dismissed as AI?

Three days ago, Vietnam joined China, South Korea, and the European Union in requiring AI-generated content to be labeled so that people can distinguish fact from fiction. Vietnam is the first country in Southeast Asia to pass this type of law.

That seems like a really good idea to me.

Updated March 9, 2026