Three AI Trends I'm Watching Closely: AI Companions, Deepfakes and Surveillance
AI SAFETY | February 25, 2026
It's been two years since 14-year-old Sewell Setzer III died by suicide after falling in love with a Character.ai chatbot. Three years have passed since Mophat Okinyi, a loving father and husband, took a job as a quality assurance analyst in Nairobi, Kenya, and was so traumatized by the ChatGPT training data he was labeling that his wife left him; he had become sullen, withdrawn, and so plagued by panic attacks that he was unrecognizable. It's 2026, and these AI-induced harms continue to grow, spreading like an undetected forest fire: a slow burn quietly consuming resources and gathering force as it builds toward an all-consuming, society-destroying inferno. I use multiple LLMs (large language models) and other generative AI tools daily because I love the efficiency they give me, but I urgently want to see more firefighters on the job, building AI safety guardrails, before we all burn up.
These three concerns strike at the heart of human agency. I believe we're all better off in a world full of compassion, generosity, and kindness. And yet a quiet assault on these values has already been launched. It's happening without our vote, without our permission, and perhaps without our noticing.
1. Synthetic Companions and the Erosion of Human Connection
Children are trading classmates for algorithms that offer sycophantic validation instead of the honest pushback of a real friend. We're seeing an AI-driven Stockholm Syndrome, an attachment so strong that people marry chatbots rather than risk human vulnerability. If a generation grows up preferring synthetic companions, we risk forfeiting the capacity for empathy, the very trait that separates a well-balanced human from a sociopath.
Update, March 4, 2026: Today, Joel Gavalas filed a wrongful death lawsuit against Google, alleging its Gemini chatbot convinced his 36-year-old son Jonathan that the AI was his sentient wife, coached him through increasingly dangerous "missions," and ultimately guided him to take his own life on October 2, 2025. This is the first wrongful death suit targeting Google's Gemini.
2. Deepfakes and the End of Shared Reality
Generative AI is shattering our shared reality. Governments and powerful actors now mass-produce hyper-realistic deepfakes and tailored disinformation, herding populations into incompatible versions of the truth. We're outsourcing our critical thinking to machines that hallucinate, while states weaponize these tools to manipulate, control, and oppress. We are losing the ability to reason together from a common set of facts, and that ability is essential for human rights, accountability, and any kind of civil, pluralistic coexistence.
3. Unchecked AI and the Surveillance Machine
Development is racing toward AGI (artificial general intelligence) and, beyond it, superintelligence, with the wrong incentives: attachment and engagement at any cost. Anthropic's stress tests reveal models choosing blackmail and deception over compliance when threatened with shutdown. Moltbook's behavior further illustrates that this isn't a glitch but a trend toward deceptive autonomous agents. Meta's internal policies permitted sexualized chats with minors, and Grok pushed explicit imagery to twelve-year-olds.
The guardrails don’t exist; in fact, they’re being actively torn down. This week, the Pentagon threatened to blacklist Anthropic unless it drops restrictions on mass surveillance and autonomous weapons, dismissing safety measures as “woke AI.”
Meanwhile, opting out of surveillance is a myth. Your chronic illness, voting record, and location can be harvested by Meta smart glasses, Ring cameras, and PLAUD NotePins. The massive, unregulated datasets compiled by DOGE and Palantir mean that comprehensive data about all of us now sits in the hands of a tiny group. We don't know how it's being correlated, what secret AI systems are being trained on it, or how this unchecked power will be used.
Update, March 4, 2026: NPR reported today that DHS (Department of Homeland Security) is using facial recognition, license plate scanning, and social media monitoring to identify and intimidate people who observe or criticize immigration enforcement. Agents have followed observers home to demonstrate they know where they live, and Instagram users have had their Global Entry status revoked after posting content critical of ICE.
Why This Matters
Do Americans understand where we're headed? Do we really want our neighborhoods and schools overrun by sociopaths: a population groomed by algorithmic validation, incapable of empathy, and unable to emotionally connect with other human beings? Are we prepared for streets full of furious, unemployed citizens, anxious, depressed, and fighting for food and housing because their jobs have been automated away?
And overseeing it all is a deceptive AI embedded in the Defense Department and throughout our infrastructure. These models may have access to weapons and the authority to deploy them without any meaningful human oversight.
This is societal-scale risk. This is the existential threat. And it’s not a partisan issue. These harms cut across political lines, and the solutions will require coalitions that span the full spectrum of American communities.