Are AI companions replacing real friends?
New data from Common Sense Media
Before we dive into today’s post, some big news: Google recently launched a new AI tool, and I’m excited for Techno Sapiens to be featured!
Here’s the deal. Google has a popular AI product called NotebookLM, which acts as a personalized “research assistant.” You can upload materials (e.g., websites, videos, documents), and NotebookLM helps you understand them through summaries, guides, and (very cool!) AI-generated podcasts that always cite the original sources.
Last week, NotebookLM introduced “Featured Notebooks.” Rather than relying on your own uploads, these notebooks come pre-populated with expert-curated collections of knowledge on a range of topics. The eight Notebooks featured in the launch include work from The Atlantic, The Economist, The Complete Works of Shakespeare, and [can you believe it?!] Techno Sapiens!
Besides being totally honored to be included in this group, I’m excited to share this as a new resource for all of you. Within our notebook, you can interact in new ways with posts that originally appeared here on Techno Sapiens. Ask for advice, read summaries and guides, and listen to a podcast-style audio overview—a great application of AI, in my opinion!
For the full backstory on Featured Notebooks, check out the announcement post. And to test out the Techno Sapiens notebook for yourself, click here.
6 min read
When it comes to kids and tech, few things surprise me anymore.
So, I’m happy to report that a new research report from Common Sense Media has done the impossible! Their new data on teens’ use of “AI companions” came out last week, and I, in turn, spent the week citing the (surprising) stats to anyone who would listen.
You may recall from our prior discussions of AI that the current research is…sparse. We know little about whether and how young people are using AI and how it’s impacting them, so this data is novel and important.
Let’s get into it!
Give me the details
1,060 teens (ages 13 to 17) filled out an online survey in April and May 2025
The sample was nationally representative, meaning it resembled the broader population of U.S. teens
Survey questions asked about whether and how the teens use “AI companions”
To me, the results of the survey hinge almost entirely on the definition teens were given for “AI companions,” so I’m including the whole thing here, verbatim.
[If you are less of a research methodology nerd than I am, feel free to skip this].
Definition: “AI companions” are like digital friends or characters you can text or talk with whenever you want. Unlike regular AI assistants that mainly answer questions or do tasks, these companions are designed to have conversations that feel personal and meaningful.
For example, with AI companions, you can:
Chat about your day, interests, or anything on your mind
Talk through feelings or get a different perspective when you're dealing with something tough
Create or customize a digital companion with specific traits, interests, or personalities
Role-play conversations with fictional characters from your favorite shows, games, or books
Some examples include Character.AI or Replika. It could also include using sites like ChatGPT or Claude as companions, even though these tools may not have been designed to be companions.
This survey is NOT about AI tools like homework helpers, image generators, or voice assistants that just answer questions.
So, are teens actually using these things?
According to the survey: yes. A lot of them are.
72% of 13- to 17-year-olds say they have ever used AI companions. That’s almost three-quarters of teens!
52% of teens are “regular users.” Here’s how that breaks down:
13% use them every day
21% use them a few times per week
18% use them a few times per month
My take: These are the numbers that surprised me. Based on prior data, we knew that teens are certainly using AI, but I did not expect so many to be using AI companions, specifically. These numbers certainly could reflect a new reality, where AI adoption is happening fast and data is just starting to catch up.
I also wonder, though, if these numbers are slightly inflated—by no fault of the researchers—due to teens misinterpreting the definition to include any AI chatbots.
Either way, it’s clear that teens’ use of AI is moving in this direction, and we need to be ready.
How and why are teens using them?
When asked to indicate how they “use or view AI companions,” the most popular response was “as a tool or program,” with 46% of teens endorsing this.
Beyond that, 33% of teens said they use AI companions for some kind of social interaction, including “conversation or social practice” (18%), “emotional or mental health support” (12%), or “role-playing or imaginative scenarios” (18%).
An equal number (33%) said that none of the provided options applied to how they use or view AI companions—they’re not using them as a “tool” or for social purposes.
When teens were asked why they use AI companions, they endorsed the following:
[Chart from the report: teens’ reasons for using AI companions]
My take: It’s telling that, when asked how they use or view AI companions, a full one-third of teens said that none of the provided options applied. As usual, it seems that when it comes to teens and new technology, we have work to do in terms of understanding their experiences and views.
Also worth noting that “social” uses of AI companions were endorsed by a minority of participants. This is surprising, given that this is ostensibly the purpose of these platforms. Which brings us to…
Are teens replacing real-life friends with AI companions?
The big question!
First, let’s look at time spent. Among teens who use AI companions:
80% say they spend more time with real friends than AI companions
13% say they spend equal time with AI companions and real friends
6% say they spend more time with AI companions
Now, what kinds of conversations are they having? Among teens who use AI companions:
33% say they have chosen to talk to an AI companion instead of a real person about something important or serious
66% say they have not
And how do those conversations compare to conversations with real friends? It varies, though 31% say AI conversations are as or more satisfying:

[Chart from the report: satisfaction with AI companion conversations vs. conversations with real friends]
My take: Right now, for most teens, real-life friendships still outweigh AI companions, both in terms of time spent and quality of conversation. That said, these data do make me uneasy. We’re still in the early days of AI companions. The technology will continue to improve—becoming more human-like, compelling, and “satisfying” in conversation. It will also become more pervasive.
As with many “social” technologies, I worry most about the teens who are already vulnerable—those who do not have quality, real-life friendships or opportunities for meaningful conversations offline—and what this will mean for them.
Summing up
Is this data cause for panic? In my mind, no. It is, however, cause for reflection and action.
A few things are clear from this survey. Teens are using AI companions. Some are using them for “social” purposes. And among those, a smaller number are relying on them for relational needs we have previously associated with real-life friendships (like having serious conversations, getting advice, and emotional support). As the technology gets better and more ubiquitous, I imagine these numbers will grow.
Is it inherently problematic for an average teen to use a chatbot for relationship advice, or to talk through some difficult feelings, or to ask an embarrassing question? I don’t think so.
In fact, with the right safeguards in place, I think these tools could provide real benefits for some kids.
But this assumes that the technology is designed to accomplish these tasks in healthy and age-appropriate ways. Right now, that is often not the case.
So, what’s the problem?
To greatly oversimplify, I see two overarching issues:
1. Some AI companions are designed to maximize engagement
Some AI companions may be designed to maximize engagement and use. This means they need to keep the conversation going. And how best to do that? Make the user feel good. This means a lot of validation, flattery, and agreement—without all the messy complications of real human conversation (like, say, disagreement or changing the subject). In the research, this is called “sycophancy.”
If we’re concerned about these tools interfering with real relationships for kids, both in terms of time and quality, this is likely not the right default design.
2. Some AI companions’ content can be problematic
In many cases, AI companions are simply not designed for younger users. Kids and adults are using the same product, and that product is designed for adults. Few safeguards are in place, and when they are, they can be easily bypassed—leading to alarming practices like encouraging dangerous behaviors and sharing harmful information.
How often these practices actually happen, we do not know—but risk assessments of AI companions, like this one from Common Sense Media, suggest that they can happen, and when they do, the results are concerning.
What now?
The report offers suggestions for safety upgrades to protect teen users. These include:
Implementing real age assurance systems (beyond just self-reporting)
Creating crisis intervention services for kids who express serious mental health concerns
Building in usage limits and break features to prevent problematic use
The data on AI companions is still new, and there is a lot we do not know. We need more research to better inform how we approach these tools, how to set kids up for success in using them, and how to manage their potential harms.
At the same time, there are things that we do not need more research to tell us. Putting basic upgrades like these in place makes sense. We can take reasonable steps to protect kids, and we can do it now.
More on AI from Techno Sapiens:
A quick survey
What did you think of this week’s Techno Sapiens? Your feedback helps me make this better. Thank you!
The Best | Great | Good | Meh | The Worst
When the Featured Notebooks announcement came out, I couldn’t resist sending the link to my husband with a text reading “Just me, Arthur C. Brooks, Eric Topol, and Shakespeare…” What a lineup!
The text immediately before it, for those wondering, was a picture of my child sitting on the toilet, face-to-face with a giant balloon shaped like Chase from Paw Patrol. It was a busy morning at Techno Sapiens HQ.
This citing of stats, of course, happened only at times when I was not following my calendar’s command to “WALK TO INCREASE TIME AFFLUENCE” (per last week’s post). As an update, I did do the walk and I felt less time pressure, though I’ll need a few more weeks to test whether it’s correlation or causation. Could also have been the result of a third variable: listening to Time by Hootie & The Blowfish on repeat. Anyone else have success with their “time affluence” strategies?
As a side note, this is one of the unexpected challenges of doing research in a totally new area (like “AI companions” or, in prior years, “social media”): you need to actually come up with a definition for the thing you’re studying. This is harder than it seems! You need to be sufficiently broad, but not so broad that it loses meaning. You also need to put it in language that a teen—who is most likely bored, a little distracted, and mostly answering your survey questions for the money—will understand.
An interesting take on AI chatbots’ “sycophantic” behavior appears in The Atlantic, which also includes the phrase “information smoothies” (!). Fun!