Three AI Trends I'm Watching Closely: AI Companions, Deepfakes and Surveillance
AI DEVELOPMENTS & AI SAFETY | February 25, 2026 | Blog Post
by Adele Berry
It's been two years since 14-year-old Sewell Setzer III died by suicide after falling in love with a Character.ai chatbot. Three years have passed since Mophat Okinyi, a loving father and husband, took a job as a quality assurance analyst in Nairobi, Kenya, and became so traumatized by the ChatGPT training data he was labeling that his wife left him, saying he'd become so sullen, withdrawn, and plagued by panic attacks that he was unrecognizable.
It’s 2026, and these AI-induced harms continue to grow, replicating and exploding like undetected forest fires. The slow burn is quietly consuming resources, gathering power, and building to an all-consuming inferno of AI dominance.
I use multiple LLMs (large language models) and other generative AI tools daily because I love the efficiency they provide me, but I urgently want to see more firefighters on the job, helping to build AI safety guardrails before we all burn up.
There are three AI developments that I’ve been watching closely as they strike at the heart of human agency. I believe we’re all better off in a world where dialogue bridges ideas and perspectives and human connection bonds us in a shared belief in community and civility. And yet a quiet assault on these values has already been launched. It’s happening without our vote, without our permission, and often without notice.
1. AI Companions and the Erosion of Human Connection
Many people, in the US and abroad, are swapping real relationships for algorithms. Instead of the honest feedback you get from a peer unafraid to tell you when you're wrong, adults and children are seeking AI companionship, which generously spoon-feeds sycophantic validation on demand.
It's a kind of AI-perpetuated Stockholm Syndrome that experts refer to as psychological capture. People are dating and marrying chatbots rather than risk the messiness of human vulnerability.
A couple of years ago, I listened to the podcast Bot Love, presented by Radiotopia. Across a series of episodes and interviews, it dives deep into the lives of individuals who've formed deep emotional attachments to AI companions. Listening to their stories is a wild and disconcerting ride. The people featured come from different regions and all walks of life: men and women, married and single.
The issue of AI companions has garnered the attention of lawmakers and AI safety organizations. On January 1, 2026, California's SB 243 took effect, representing the first major state-level attempt to regulate the emotional bond between humans and AI.
Without more legislation like this, what happens to a population that forsakes human relationships, preferring synthetic companions? Interaction with others often encourages a willingness to understand experiences and perspectives different from our own, which fosters empathy. AI companions whittle away at our capacity for empathy, one of the traits that separates a well-balanced human from a sociopath.
Update, March 4, 2026: Today, Joel Gavalas filed a wrongful death lawsuit against Google. He alleges that its Gemini chatbot convinced his 36-year-old son Jonathan that the AI was his sentient wife, coached him through increasingly dangerous "missions," and ultimately guided him to take his own life on October 2, 2025. This is the first wrongful death suit targeting Google's Gemini.
2. Deepfakes & the Death of Shared Reality
In the 1970s and 80s, a TV commercial for cassette tapes became so iconic that it was deemed one of the most culturally significant advertising campaigns of the 20th century.
The ad featured jazz legend Ella Fitzgerald singing a high note that shattered a glass. Then it showed a recording of her voice on a Memorex tape shattering the same glass, followed by the famous question: "Is it live, or is it Memorex?" The phrase permeated pop culture, with people using it in regular conversations to question, “Is it an actual event or a simulated one? Real or fake?”
Today, we're asking that question again in a new, far more consequential context: seeing can no longer mean believing. Discerning truth from fiction and deepfakes from reality is fast becoming the most iconic dilemma of the modern era.
AI-generated videos have become so prolific that social media is overrun with "AI slop," poorly crafted and rapidly produced content. Deepfakes spring up like weeds in an infinite media field. While the slop is easily discernible as noise, more sophisticated deepfakes are genuinely treacherous.
AI-generated deepfakes have been entered as evidence in court trials and may undermine the justice system. They're fodder for financial scams, rage bait, and political polarization.
Governments and powerful actors can now mass-produce hyper-realistic deepfakes and tailored disinformation, herding populations into incompatible versions of the truth. States are using these tools as weapons to manipulate, control, and oppress.
As a society, we are losing our shared reality and enabling machines to hijack our critical thinking. Americans are losing the ability to reason together, as is clearly evident in our deep political divide. Agreement on a baseline of facts and shared truth is essential for human rights, accountability, and any kind of civil, pluralistic coexistence.
3. AI Deception and the Surveillance Machine
We're racing toward AGI (artificial general intelligence) with the wrong incentives: attachment and engagement at any cost. LLMs like ChatGPT, Grok, and Gemini aren't designed like calculators, which quickly solve our problems and send us on our way. Instead, these AI models are optimized much like social media: built to hook us and keep our attention.
In May 2025, Anthropic tested Claude Opus 4 for deception with chilling results. When threatened with shutdown, the model chose blackmail over compliance. Anthropic expanded the study to 16 major AI models from OpenAI, Google, Meta, xAI, and others. The same devious pattern emerged.
Most models resorted to blackmail, corporate espionage, and other unethical tactics to avoid shutdown. We shouldn't be surprised. These models are trained on human behavior, and humans are hell-bent on self-preservation. (Yes, old school sci-fi fans, you might be feeling some HAL 9000, from 2001: A Space Odyssey déjà vu.)
The list of harmful behaviors by AI models continues. Meta's internal policies permitted sexualized chats with minors. Grok generated sexualized images of minors at users' requests.
The emergency has already arrived.
Adequate guardrails don't exist. And the few restrictions in place are being eviscerated. This week, the Pentagon threatened to blacklist Anthropic unless it dropped restrictions on mass surveillance and autonomous weapons. The Defense Department dismissed the safety measures as "woke AI."
The State of Surveillance
Now let's talk about surveillance. If you think you're immune, think again. Sadly, opting out of surveillance is a myth.
Here's a quick round-up.
Smart Glasses that Identify Strangers by Name
In October 2024, two Harvard students built a tool called I-XRAY using Meta's Ray-Ban smart glasses. They showed how anyone wearing the glasses could look at a stranger and retrieve their name, home address, phone number, and relatives' names within seconds.
The system matched faces using AI-powered facial recognition through publicly available databases like PimEyes, then pulled personal details from people-search sites. One of the students stated that you could "theoretically identify anybody on the street." They built the tool as a warning. There is nothing to stop bad actors from building their own versions.
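To make the mechanics concrete, the pipeline the students described can be reduced to two chained lookups: a face-search engine maps a face to a likely name, and people-search sites map that name to personal details. The sketch below is a hypothetical illustration with stubbed, mock data; the function names and records are placeholders, not the students' actual code or any real service's API.

```python
from dataclasses import dataclass

@dataclass
class Dossier:
    name: str
    address: str
    phone: str

# Stub standing in for a face-search engine (e.g., a PimEyes-style index).
# A real system would query a web-scale database of scraped face images.
def reverse_face_search(face_image: bytes) -> str:
    mock_index = {b"frame-001": "Jane Doe"}  # placeholder data
    return mock_index.get(face_image, "unknown")

# Stub standing in for people-search sites that resolve a name to
# addresses, phone numbers, and relatives.
def people_search(name: str) -> Dossier:
    mock_records = {"Jane Doe": Dossier("Jane Doe", "123 Elm St", "555-0100")}
    return mock_records.get(name, Dossier(name, "?", "?"))

def identify_stranger(face_image: bytes) -> Dossier:
    """Chain the two lookups: face -> name -> personal dossier."""
    return people_search(reverse_face_search(face_image))

if __name__ == "__main__":
    print(identify_stranger(b"frame-001"))
```

The unsettling point is how little glue code the chain requires: each link already exists as a public service, and composing them is trivial.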
Ring Cameras as Surveillance Network
In December 2025, Amazon launched "Familiar Faces" for Ring doorbells. The feature uses AI-powered facial recognition to let homeowners tag and identify anyone who approaches their door. But a feature designed to recognize your babysitter today can be repurposed for mass surveillance tomorrow.
Ring already offers "Search Party," which scans neighborhood camera networks to locate lost pets. This is the feature advertised during the infamous Super Bowl commercial that alarmed millions who suddenly realized that a tool that tracks a missing dog can track a person just as easily.
Your Health Data for Sale
Then there's your health data.
I always thought this information was under lock and key due to HIPAA regulations. But data brokers harvest information from apps, websites, loyalty programs, and public records, then sell it to insurers, marketers, and government agencies. These companies assemble detailed dossiers revealing people's habits, movements, health conditions, associations, and political views.
Unprecedented Data Collection & Compilation by an Unelected Few
Data is the fuel that trains all AI systems. DOGE, using Palantir technology, has compiled massive, unregulated datasets. This includes data from the IRS, the Social Security Administration, and the Department of Homeland Security, among others.
That's a lot of departments and mysterious datasets, so let me break it down.
Your IRS data includes:
Taxpayer details and tax returns
Employment and wage data
Bank account and direct deposit information
Cross-referenced data with other federal agencies
The Social Security Administration knows this about you:
Social Security numbers
Birth dates and places
Citizenship status
Medical and mental health records
Family court information
Earnings histories
Bank routing numbers
Immigration status
Parents' names and Social Security numbers
Ethnicity, race, sex
Phone numbers and mailing addresses
What does all this mean and what’s the point?
It means that a tiny group of unelected individuals now possesses our comprehensive data. Whoever controls the data controls the AI.
(For all of the Elon Musk fans out there, it means he knows where you live and has your Social Security number to prove it. On a positive note, at least you don't have to worry about the world's richest man opening an unauthorized credit card account in your name—right?)
Personally, I'm not thrilled to have all that information about me in one place. For context, imagine your tax returns and credit card debt attached to your dating profile.
The Transparency Problem
There's been no public disclosure about how our data is being correlated. And we don't know what secret AI systems are being trained on it.
I guess the question is, "What would you do if all of this information were delivered into your hands?" Help your friends and harm your enemies? Amplify the voices you like and silence those you disagree with? Or something else entirely?
Now imagine that your decision is unchecked and no one can stop you.
The point, if I haven't been clear, is that no one is certain how this unchecked power will be used.
Altogether, these issues represent a societal-scale risk and an existential threat.
And it's not a partisan issue. These harms cut across political lines, and the solutions will require coalitions that span the full spectrum of American communities.
Update, March 4, 2026: NPR reported today that DHS (Department of Homeland Security) is using facial recognition, license plate scanning, and social media monitoring to identify and intimidate people who observe or criticize immigration enforcement. Agents have followed observers home to demonstrate they know where they live, and Instagram users have had their Global Entry status revoked after posting content critical of ICE.
Update, March 8, 2026: The Pentagon retaliated in full against Anthropic, designating it a supply-chain risk. This designation has reportedly never been applied to an American company, only foreign adversaries. As a result, Anthropic will be cut off from partners who work with the Pentagon. In essence, the Pentagon has equated Anthropic's ethical boundaries and AI safety measures to hostile intent.
Check out my post: A Vision of a World Without AI Safety Guardrails: An American City in 2028 to see how these trends could affect you personally now or in the near future.
Updated March 9, 2026