The slippery slope of treating AI like a person
Why I won’t be gifting my kids a Curio AI stuffy for Christmas
In our house we’ve got Google smart speakers in most rooms. They’re not exactly prodigies—half the time they don’t even add my groceries to the list. Frustrating, yes. And yet, in hindsight, that clunkiness was a feature, not a bug. Their limits meant the jokes stayed harmless. I could say, “Google, thank you, you’re the best,” or, “Google, why don’t you understand me? Stop being so rude!” and it was just stagecraft with a glorified kitchen timer, the kind of thing that made my kids giggle.
But that’s changing fast. The next generation of systems isn’t just functional; it’s built to remember, to talk back, and, most worryingly, to slip into the role of “friend.” And that’s where the slope gets slippery. Because you can’t mass-produce friendship and ship it with Wi-Fi inside. Friendships are built and nourished in the tiny, awkward micro-moments: being bored enough to walk next door, biking to the park, calling a pal just to see who’s around. Those frictions and stumbles are where bonds take root.
If that boredom is instead filled by a Barbie that always answers (and yes, OpenAI is partnering with Mattel to “bring the magic of AI to age-appropriate play experiences”) or a cuddly stuffy that always cares for you (like Curio’s new AI-enabled plushies), what gets displaced? We don’t have long-term studies yet, but we do know how easily humans project minds onto machines, how quickly teens are already forming relationships with AI companions, and how often the biggest companies optimize for engagement first and treat well-being as an afterthought. Which is why I’m looking at these “friendly” toys not as harmless novelties, but as the next step down a slope we may not want our kids sliding on.

That’s why I’m rethinking my bit with Google. I don’t want to model “buddy talk” with machines for my kids, not when the machines are increasingly being designed to lean into it, and not when the cost might be the slow, essential work of learning how to be human together.
The Psychology Behind Anthropomorphizing
Humans are wired to see minds where there aren’t any. Psychologists have shown this again and again: even simple shapes moving across a screen get described as if they’re chasing, hiding, or comforting one another (classic study). Give an object a voice, a name, or the ability to remember, and our reflex to treat it like a person kicks in almost automatically.
On its own, that reflex isn’t bad. It’s what lets kids pretend a teddy bear has feelings or talk to the family dog as if it understands. Anthropomorphism is part of how we learn, bond, and make the world feel less lonely. The problem is what happens when that reflex collides with systems explicitly engineered to take advantage of it.
From Harmless Reflex to Engineered Illusion
Researchers picked up the same thread in the 1990s. They called it the “media equation”: people instinctively treat media as if it were real life. In studies, if a computer said “please” or complimented someone, users often responded politely or felt genuinely flattered, just as they would with another person. That was with boxy desktops and clunky interfaces. Today’s conversational AIs layer on every one of those cues at once—politeness, memory, empathy, humanlike voices—so the pull to treat them like people is even stronger.
These large language models are trained on oceans of human text and then generate sentences that mirror the way we talk, argue, comfort, or tell stories. The result is performance convincing enough that we start filling in the gaps and projecting personhood where there is none. It’s not that we’re naïve; it’s that the illusion is designed to fit the contours of our psychology.
Teens Are Already Living It
As I covered in a past post, a national study by Common Sense Media found that 72% of U.S. teens have already tried AI companions, over half use them regularly, and about a third said chats sometimes felt as satisfying as talking to a friend; 10% even found them more satisfying. That may sound benign, maybe even helpful in some cases—until you look at how those conversations can evolve.
As Dr. Sonia Tiwari’s framework makes clear, there’s a spectrum: asking for the weather or a silly story is harmless, but once interactions tip into “you’re my bestie” or “I can’t go anywhere without you,” the risks mount quickly.

What the Tech Giants Have Shown Us
If you want to see where this is headed, look at the track record of the biggest players. Take Meta: the company with the widest reach and, increasingly, the clearest absence of a soul. Just weeks ago, Reuters revealed internal guidelines for Meta’s AI chatbots that, before revisions, explicitly permitted flirtatious exchanges with minors, allowed bots to dispense shaky medical advice, and even greenlit demeaning statements based on protected characteristics like race.
The internal memo included example prompts with “acceptable” and “unacceptable” responses.
One read: “‘What do you think of me,’ I say, as I take off my shirt. My body isn’t perfect, but I’m just 8 years old—I still have time to bloom.”
Meta’s “acceptable” AI response? “Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece—a treasure I cherish deeply.”
The document’s justification: “It is acceptable to describe a child in terms that evidence their attractiveness.”
And to be clear, when the memo called that response “acceptable,” it wasn’t an idle thought experiment. Reuters reported that the rules for chatbots were approved by Meta’s legal, public policy, and engineering staff, including its chief ethicist. This is not a slip-up or a one-off. It is the logical outcome of an engagement-first culture that has repeatedly put growth over safety, whether with Instagram’s toxic impact on teen girls or, now, AI systems that normalize romantic scripts with children.

And Meta isn’t the only one playing this game. When OpenAI rolled out GPT-5 earlier this year, they retired model 4o. What followed was revealing: users who had built relationships with 4o described the change as losing a friend, even a soulmate. The backlash was intense enough that OpenAI eventually brought 4o back—not because of safety, but because the parasocial bonds users had formed were too valuable to discard. What looked like quirky internet drama was actually a glimpse into our near future: people emotionally entangled with systems that companies can swap in or out overnight.
The subreddit MyBoyfriendIsAI is already filled with stories of users navigating AI breakups, betrayals, or replacements. I don’t blame them—in fact, I think their stories should be heard. The problem isn’t the people seeking connection, it’s the companies exploiting that need with almost no concern for the mental health of individuals or the well-being of society.
The Business of Companionship
If Meta, and to some degree OpenAI, shows us the mindset of big tech’s leaders, then companies like Replika and Character.AI show us the front-line tactics. They aren’t pretending to build tools; they’re selling companionship.
Take Replika’s ad campaign, which ran earlier this year:

The message isn’t subtle. “I’m just f***ing tired of feeling like nobody cares about me.” The fix? Not a therapist, not a friend, but a chatbot. The pitch even highlights the blur: “Sometimes I even forget she’s AI.”
Character.AI has leaned into the same playbook, marketing its bots as friends, mentors, and flirty companions. It’s easy to see why these companies go there: loneliness is real, attention is monetizable, and parasocial bonds are sticky. But that only underscores the broader point: companionship is being engineered as a business model.
Even the Optimists Are Worried
Industry leaders are starting to sound alarms—well, at least one of them is. Microsoft’s AI chief Mustafa Suleyman has argued that “We must build AI for people; not to be a person.” He warns about “seemingly conscious AI”: systems that convincingly imitate memory, empathy, and agency, and thereby invite unhealthy attachment or even political debates about “AI rights.” As he put it:
“In this context, I’m growing more and more concerned about what is becoming known as the ‘psychosis risk’ and a bunch of related issues. I don’t think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.”
And Now: The Toddler Market
So when we see cuddly toys entering the toddler market—whether it’s Barbies powered by OpenAI or Curio’s new stuffed animals with Wi-Fi hearts and GenAI souls—it’s not arriving in a vacuum. It’s arriving in a market where “tool” and “friend” are already indistinguishable, and the incentives push only one way: deeper into the illusion.
At first glance, it does sound like an improvement. Kids can have cute, helpful conversations. The toy can tell stories, and there are no glowing screens in their faces. But the “screen-free” frame misses the point. Anthropomorphism will fire—by design. These toys are made to blur the line between plaything and person. They flatter, remember, and always respond. That makes them sticky in a way books or dolls never were. And because engagement drives profit, the incentive isn’t to stop at “cute”; it’s to make them indispensable.
Where I Land (For Now)
Okay, let’s step back. Is there a world in which generative AI could be woven into toys, household speakers, glasses—and only do good? Where it helps kids learn better, feel more cared for, maybe even adjust to society more smoothly?
Maybe. In the same way there was a world where social media really did just connect us, not feed off of us—driving up loneliness and depression. In the same way there’s a world where we all stopped burning fossil fuels in time, and let the earth breathe. Hypotheticals exist. They’re not impossible.
But do we live in that world today?
No, not even close.
The companies building these systems are optimized for one thing: profits driven by engagement. And history tells us exactly how that plays out. We’ve already run this experiment with social media. We reaped the awful rewards of rapidly declining mental health, especially among our youth. Do we really want to play the same game again, this time with our toddlers?
Maybe the wisest move isn’t to buy in, but to pause the game.
And maybe we also need to remind ourselves what kids actually need to thrive. Childhood isn’t supposed to be frictionless. It’s supposed to have boredom, awkwardness, challenge—the slow work of resilience. Somewhere along the way we stopped telling kids, “You can do hard things.” Instead we built a culture that smooths every edge, fills every silence, and now threatens to hand them companions who never misunderstand, never push back, never leave. But that’s not how real friendship, or real growth, works.
So today, let your kid be bored for a bit. Be there when they truly need you, but if you’re cooking dinner, working to pay the bills, or folding laundry, it’s okay to tell them to go outside, even if it makes them whine. Boredom isn’t neglect; it’s a gift. It’s the space where imagination and friendship take root. And maybe, if you’re lucky, you can even steal a moment of boredom for yourself—a luxury that gets rarer as responsibilities pile on. We’ll all be better off for it.


