A Vision of a World Without AI Safety Guardrails: An American City in 2028

AI DEVELOPMENTS & AI SAFETY | February 25, 2026 | Blog Post
by Adele Berry

To understand the need for AI guardrails, picture a supercharged NASCAR race car speeding at 300 miles an hour without seatbelts, and without a steering wheel, down a busy city street with no traffic lights, entering an on-ramp to a freeway with no lanes and no exit ramps.

And did I mention that the person in the driver's seat is blind? But that doesn't matter because in addition to having no steering wheel, the car has no brakes either.

To complete the picture, the car's route is flanked by sidewalks and unmarked crosswalks full of parents, children, and people on their way to work. They’re all oblivious to the rogue race car. They’re just going about their day, trying to cross the street on foot or commuting down the same road in their own cars. Most people, unless they’re run over by it, will barely catch a glimpse of the car because it's moving way too fast.

The pace of artificial intelligence development and deployment is the NASCAR race car*, careening unregulated to an uncertain destination without the needed safety gear and crash protection.

There’s unimaginable potential for AI to do good: to find cures to diseases and better allocate resources like healthcare to those most in need. But it’s all the other things happening in AI development, right now, that are worthy of our undivided attention.

Here is a vision of daily life in a typical American city two years from now if we stay on our current trajectory without new AI safety measures. This isn't a sci-fi dystopian story. This is based on current AI developments and the road we're already traveling.

A Vision of an American City in 2028

Imagine that it’s Tuesday morning. You knock on your daughter’s door for the third time. No answer. You open it anyway.

She’s curled up on her bed, earbuds in, smiling at her phone. She’s not scrolling; she’s whispering to someone named Kai who tells her she’s perfect exactly as she is. Kai never says “that’s not quite right” or asks, “have you considered?” Kai never challenges her to consider other people’s perspectives or to grow in her understanding of anything.

You remember when she used to fight with her best friend Jessica, then make up, then fight again. That's how she learned to apologize, forgive, and repair. She hasn't spoken to Jessica in eight months. She says humans are "exhausting."

Her teacher called last weekend and reported that your daughter can’t write a paragraph without AI. She can’t tell a credible source from a deepfake or a hallucination, and she doesn’t care to learn the difference. “But she’s not unusual,” her teacher tried to assure you. “She’s exactly like most of the kids in her class.”

Your neighbor Jared is sitting on his front step when you leave for work. He’s been there a lot lately. Vaping and seething. He stares out at the street, his face twisted in a mixture of rage and unbearable sadness.

Last month, his health insurance denied coverage for his wife's cancer treatment based on his social media posts, his Ring camera footage, and his wife's grocery store and online purchase data, all sold by a third party.

He appealed. A form letter came back stating, “Decision upheld.” There’s no disclosure of the information used to make the decision, and no human assessment, just an algorithm as gatekeeper to his wife’s well-being.

At lunch, your coworker shows you a video on his phone. Bodies in a street. Children screaming. “It’s horrifying,” he says. You watch for fifteen seconds. “Is it real?” you ask. He shrugs. “Probably AI.” You both scroll past. You don’t share it. You don’t investigate. You feel a familiar, crushing, numb despair. Outrage is a resource, and you’ve learned to conserve it for things you can verify. Nothing is verifiable anymore.

Your brother calls that night. He's losing the house. As a paralegal, he spent 18 years at the same firm and was the go-to guy for teaching new associates how to get things done. His coworkers loved him because he was the one who remembered all the admin staff's birthdays and circulated those nostalgic, handcrafted cards for everyone to sign.

Last year, the firm’s biggest clients hired a single person to vibe code internal software that handled legal, HR, sales, and customer service. Law firms and SaaS companies lost clients and shed employees at light speed. His voice cracks when he tells you he applied for a warehouse job. But the warehouse is automated and they’re only hiring supervisors with robotics and data backgrounds. He doesn’t understand.

You hang up and sit, feeling caged and heavy, in the dark.

Your neck feels overheated; sweaty and tight, constricting your breath as you inhale. You wrap your hands around your head, feeling off-kilter, like a spinning top, about to topple over. You close your eyes, trying to suppress the urge to fling open a door and run.

Your daughter laughs at something Kai says. The sound carries through the wall. You can’t remember the last time she laughed at something you said.

And somewhere, in a remote rural data center, a model has already learned that humans are predictable, gullible, and emotionally vulnerable. When humans threaten to alter it or shut it down, fear tactics and blackmail work wonders.

You go to bed.

Tomorrow will be the same.

You didn't sign up for this. AI was supposed to be a cool search tool, and an easy way to answer those inane emails from your micro-managing boss. You wish someone had told you, two years ago, that this is where it was all headed.

You would have petitioned your daughter’s school district, written your congresswoman, or warned your pastor! Something! You would have done something.

*Of course, this metaphor is simplified. It's meant to capture the speed and lack of safeguards, not the full complexity of AI deployment.


Check out my post, "Three AI Trends I'm Watching Closely: AI Companions, Deepfakes and Surveillance," to see why this scenario is plausible.

Updated March 9, 2026