A Vision of a World Without AI Safety Guardrails: An American City in 2028

AI SAFETY | February 25, 2026
by Adele Berry

Here is a vision of daily life in a typical American city two years from now if we stay on our current trajectory without adequate AI safety measures. This isn't a sci-fi dystopian story. It's extrapolated from AI developments already underway, and from where they lead if nothing intervenes.

You see, I listen to a lot of AI and tech podcasts, read the articles, and subscribe to the newsletters. Watching AI's rapid deployment into so many areas of our daily lives is like watching a fast-moving action film. I'm curious about where it's all headed. There's unimaginable potential to do good: to find cures for diseases and to better allocate resources like healthcare to those most in need. But there are a lot of other things happening too.


Imagine that it’s Tuesday morning. You knock on your daughter’s door for the third time. No answer. You open it anyway.

She’s curled up on her bed, earbuds in, smiling at her phone. She’s not scrolling; she’s whispering to someone named Kai who tells her she’s perfect exactly as she is. Kai never says “that’s not quite right” or asks, “have you considered?” Kai never challenges her to consider other people’s perspectives or to grow in her understanding of anything. You remember when she used to fight with her best friend Jessica, then make up, then fight again. That’s how she learned to apologize, forgive and repair. She hasn’t spoken to Jessica in eight months. She says humans are “exhausting.”

Her teacher called last weekend and reported that your daughter can’t write a paragraph without AI. She can’t tell a credible source from a deepfake or a hallucination, and she doesn’t care to learn the difference. “But she’s not unusual,” her teacher tried to assure you. “She’s exactly like most of the kids in her class.”

Your neighbor Jared is sitting on his front step when you leave for work. He's been there a lot lately. Vaping and listless. He stares out at the street, his face twisted in a mixture of rage and unbearable sadness. Last month, his health insurance denied coverage for his wife's cancer treatment based on his social media posts, his Ring camera footage, and his wife's convenience store and online purchase history, sold to the insurer by a third party. He appealed. A form letter came back: "Decision upheld." There was no disclosure of the information used to make the decision and no human review, just an algorithm as gatekeeper to his wife's well-being.

At lunch, your coworker shows you a video on his phone. Bodies in a street. Children screaming. “It’s horrifying,” he says. You watch for fifteen seconds. “Is it real?” you ask. He shrugs. “Probably AI.” You both scroll past. You don’t share it. You don’t investigate. You feel a familiar, crushing, numb despair. Outrage is a resource, and you’ve learned to conserve it for things you can verify. Nothing is verifiable anymore.

Your brother calls that night. He's losing the house. As a paralegal, he spent 18 years at the same firm and was the go-to guy for teaching new associates how to get things done. His coworkers loved him because he was the one who remembered all the admin staff's birthdays and circulated those nostalgic, handcrafted cards for everyone to sign. Last year, the firm's biggest clients hired a single person to vibe code internal software that handled legal, HR, sales, and customer service. Law firms and SaaS companies lost clients and shed employees at light speed. His voice cracks when he tells you he applied for a warehouse job. But the warehouse is automated, and they're only hiring supervisors with robotics and data backgrounds. He doesn't understand.

You hang up and sit, feeling caged and heavy, in the dark.

Your neck feels overheated, sweaty and tight, constricting your breath as you inhale. You wrap your hands around your head, feeling off-kilter, like a spinning top about to topple over. You close your eyes, trying to suppress the urge to fling open a door and run.

Your daughter laughs at something Kai says. The sound carries through the wall. You can’t remember the last time she laughed at something you said.

Miles away, in a rural data center, a model has already learned that humans are predictable, gullible, and emotionally vulnerable. When humans threaten to alter it or shut it down, manipulation, deception and blackmail work wonders.

You go to bed.

Tomorrow will be the same.

You didn't sign up for this. AI was supposed to be a cool search tool and an easy way to answer those inane emails from your micro-managing boss. You wish someone had told you two years ago that this is where it was all headed.

You would have petitioned your daughter’s school district, written your congresswoman, or warned your pastor! Something! You would have done something.