What should we do about kids and AI?
A new report from the American Psychological Association offers guidance
Welcome back to Techno Sapiens, and Happy (belated) Father’s Day!
I’m Jacqueline Nesi, a psychologist and professor at Brown University and mom of two young kids who currently answer only to “Marshall” and “Rubble.”1
If you like Techno Sapiens, please consider sharing it with a friend today.
6 min read
My husband recently sent me a post he came across on X:
My first reaction was to laugh. We imagined our preschool staging a similar filibuster, maybe with a focus on obscure sea creatures, or highly-specific features of boats.
My second reaction was concern. We agreed that letting a child loose unsupervised with an AI chatbot is, generally, risky. There are no protections in place for kids, and content may not always be age-appropriate. Even with adult supervision, my 3-year-old once asked ChatGPT whether there were any corgis on the Titanic—a reasonable and urgent question—and was told that there were, but “sadly, they all perished.”2 Maybe not what he was ready to hear.
My third reaction, though, was a thought: It doesn’t have to be this way! Why, exactly, are there no protections in place for younger users? This is relevant for young kids like mine, but maybe even more so for adolescents, who are increasingly using AI tools independently.
The potential benefits of generative AI are incredible and transformative, but right now, the landscape is the Wild West. The risks are considerable, and there’s a collective burying-our-heads-in-the-sand situation going on. How can we protect younger users? How can we avoid the same mistakes we’ve made with other emerging tech?
Well, our friends at the American Psychological Association have some ideas of where to start. From the people3 who brought you the APA Health Advisory on Social Media Use in Adolescence, we’ve got a new report: Artificial Intelligence and Adolescent Well-Being.
There’s something in here for everyone: parents, tech companies, educators, policy-makers, and more.
So, let’s walk through the recommendations, along with my brief translation of each.
1. Ensure healthy boundaries with simulated human relationships
We do not want AI replacing (or displacing) adolescents’ real-life human relationships.
To protect against this, AI tools might offer regular reminders to teens that they’re interacting with a chatbot or encourage them to connect with trusted humans (especially if they’re struggling with something). We can also teach kids about the limits of what AI can do (see #9 below).
2. AI for adults should differ from AI for adolescents
Remember when social media platforms became really popular, and then teens flooded onto them, and then we adults started thinking, Hm, maybe these vast, unregulated Internet portals are not the safest place for kids? And then it took many years before even the most basic of protections and parental controls were put in place?
Let’s not do that again!
We have an opportunity with emerging AI platforms to create something that is safe and healthy for teens from the beginning—rather than trying to retrofit safeguards to products designed for adults, as we’ve often been stuck doing.
3. Encourage uses of AI that can promote healthy development
AI tools, when used well, can offer incredible opportunities for learning—for example, by helping students dig deeply into concepts, offering personalized and adaptive feedback, and encouraging critical question-asking.
AI can facilitate active learning (i.e., interacting with information and then constructing knowledge, rather than simply being talked at), and this is something to encourage.
4. Limit access to and engagement with harmful content
Harmful content can be text, images, audio, or video, and might include anything from misinformation to violence. Repeated exposure to these types of content is problematic: it can desensitize kids to problematic messages, increase belief in inaccurate information, and in some cases, be extremely upsetting or traumatic.
AI platforms should aim to limit adolescents’ exposure to harmful content, including by having good reporting systems in place and collaborating with experts in child development to make sure content is age-appropriate.
5. Accuracy of health information is important
Teens are using, and will continue to use, AI to learn about their health (mental and physical)—perhaps especially on topics they’d be uncomfortable discussing with friends or adults.
We need to make sure that the information they’re getting is accurate. Platforms can also provide regular warnings that the models are not a substitute for professional medical advice, and encourage youth to talk to an adult if they’re struggling.
6. Protect adolescents’ data privacy
Platforms should be transparent about how data is being used, and should prioritize adolescents’ well-being. In many cases, this will mean limiting the use of adolescents’ data for targeted advertising or selling it to third parties.
7. Protect likenesses of youth
Adolescents’ likenesses can be misused, and as AI technology improves, the risk grows: it’s now simple to create “deepfake” videos using someone’s image and voice.
AI platforms should have safeguards in place when it comes to youths’ likenesses, and parents and educators can teach kids about the problems with creating and sharing AI-generated images of peers.
8. Empower parents and caregivers
We’ve said it once, and we’ll say it again: parents need help! AI tools are complicated and rapidly changing, and parents often do not have the time or capacity to research their risks and benefits, or to navigate complex parental control systems.
So, let’s take what we’ve learned from social media and other platforms and:
(1) Have professionals (from industry, policy, education, health, etc.) create helpful resources for parents on how to navigate these tools.
(2) Have AI companies prioritize age-appropriate default settings for youth, and create parental controls that are intuitive and effective.4
9. Implement comprehensive AI literacy education
No surprise: AI is here to stay, and we need to be teaching kids how to navigate it. This might include education on:
What AI is, how it works, and its potential benefits and limits
Using AI safely and responsibly
How to critically evaluate AI-generated content
Ethical implications of AI (including privacy, transparency, possible bias, other societal impacts)
How to use AI for learning
Of course, this will also require that the adults teaching these things have the resources and training necessary to do so.
10. Prioritize and fund rigorous scientific investigation on AI’s impact on adolescent development
A shameless (but, actually, very important) plug from the researchers in the room!
AI technology is changing fast. If we want to truly understand how it is impacting kids’ well-being, and how that’s evolving over time, we need to invest in the research.
Check out the full report from the American Psychological Association: Artificial intelligence and adolescent well-being.
The Scroll
Your quick burst of updates, news, and links from the worlds of parenting and tech. Please email technosapiens.substack@gmail.com if you’d like to see your news (e.g., articles, press releases) featured in the future!
More on kids and AI: the 5Rights Foundation Children & AI Design Code offers guidance for those who build and deploy AI systems, and Children and Generative AI (GenAI) in Australia: The Big Challenges highlights key risks for a more general audience.
Dad texts!5 The Father’s Day content we didn’t know we needed. (NYTimes).
A new study of almost 200,000 U.S. mothers found significant declines in self-reported maternal mental health since 2016.
Common Sense Media has a new Summer Slide Survival Guide, with weekly age-based picks for fun, educational content.
Looking for your next read? We’ve got 50+ nonfiction book recommendations in the Techno Sapiens chat! Some popular ones so far: Braiding Sweetgrass by Robin Wall Kimmerer and Finite and Infinite Games by James P. Carse.
A quick survey
What did you think of this week’s Techno Sapiens? Your feedback helps me make this better. Thanks!
The Best | Great | Good | Meh | The Worst
Yes, we are still in a Paw Patrol phase. My 3-year-old has spent a concerning amount of time talking about winches.
To be totally fair, I think the question he asked was specifically about corgis on the Titanic, but it may have been a different breed. In case you, too, are wondering about the dogs on board, you’ll be pleased to learn of the existence of this article from the American Kennel Club.
Note: I was one of the experts on the APA panel that developed these recommendations. However, as is obvious from the discussion of my preschooler’s interest in the dogs of the Titanic, what I’m writing here are my own opinions and do not necessarily represent the APA’s official views.
In my opinion, many existing parental control systems are not the most intuitive. Here’s a guide to setting them up on some of the most popular platforms.
Obsessed with the dad who thought his daughter’s pregnancy announcement ultrasound photo was a “weather system.”