Hi! I’m Jacqueline Nesi, a clinical psychologist, professor at Brown University, and mom of two young kids. Here at Techno Sapiens, I share the latest research on psychology, technology, and parenting, plus practical tips for living and parenting in the digital age. If you haven’t already, subscribe to join nearly 20,000 readers, and if you like what you’re reading, please consider sharing Techno Sapiens with a friend.
Hi there, techno sapiens. By now, you’ve likely heard that last month, 41 states sued Meta for violating consumer protection laws and harming young users. Perhaps unsurprisingly, the news managed to penetrate my flimsily crafted “maternity leave” bubble, and before I knew it, I found myself reading (okay, skimming) the 233-page federal complaint document.12
I’ve now repeatedly tried to discuss with my newborn baby the ethical and legal ramifications of Meta’s user engagement practices, but it seems that he’s just, kind of, ignoring me?3 Honestly, it’s like he doesn’t even care.
So here I am instead, popping back in to discuss it with all of you! [Hi! How are you? I miss you all.]4
Here’s the short version: Some of the claims made in the lawsuit are defensible from a research standpoint, and some are not. Ultimately, it’s clear that we need to do a better job of protecting young people online, but I think we can do this without overstating the available evidence.5
Okay, let’s get into the details
In sum, the lawsuit argues that Meta deliberately “addicted” young users to Facebook and Instagram and deceived the public about the harms of its products.
The suit refers to these activities as:
“META’S SCHEME TO EXPLOIT YOUNG USERS FOR PROFIT”6
It breaks down this “scheme” into five claims. Let’s walk through each one.
Claim 1: Meta’s business model requires maximizing young users’ engagement
The suit alleges that Meta’s business model requires maximizing the time teens spend on their platforms—more time means more advertising dollars. Of course, the business model also incentivizes maximizing time spent for older users, but young users may be particularly valuable to Meta, as they’re more likely to become lifelong customers.
My take: Tough to argue with this one.
Claim 2: Meta used “psychologically manipulative product features” but falsely claimed they were safe
The suit claims: “Meta has developed and refined a set of psychologically manipulative Platform features designed to maximize young users’ time spent on its Social Media Platforms. Meta was aware that young users’ developing brains are particularly vulnerable to certain forms of manipulation, and it chose to exploit those vulnerabilities through targeted features…”
It calls out five “addicting” features: (1) recommendation algorithms, (2) “likes,” (3) notifications, (4) visual filters, and (5) “content presentation formats” like infinite scroll (i.e., where there’s no end to your feed).
My take: This seems like a matter of definitions.
In some cases, these features might improve users’ experience on the platforms. In theory, for example, recommendation algorithms could help teens discover more content and people they’re interested in.
However, these features also have serious downsides—namely, that they make these products very hard to stop using. Features like notifications and infinite scroll rely on well-established psychological principles. For example, we know that when people get “rewards” at unpredictable intervals for a behavior, they do that behavior more often (i.e., “variable reward schedules”). We check social media so frequently because the “rewards” (e.g., a message from a friend, a new Taylor and Travis video7) are unpredictable. Whether this constitutes “psychological manipulation” or makes these products “unsafe” depends on how you define those terms.
Might young people be more easily swayed by these features, due to aspects of their developing brains, like lower capacity for self-regulation and heightened sensitivity to social rewards?8 Yes. The data suggest that 36% of U.S. teens (41% of girls) feel that they spend too much time on social media. But does this constitute “exploiting their vulnerabilities”? Again, it depends on how you define it.
Claim 3: Meta lied about the prevalence of harms to young users
According to the suit, Meta regularly publicized the percentage of content on its platforms that was removed for violating its Community Standards. This percentage was provided as an estimate of the prevalence rate of harmful content. For example, in 2021, Meta reported that “less than 0.05% of views were of content that violated our standards against Suicide & Self-Injury.”
My take: The majority of this section of the suit has been redacted, so it’s difficult to determine whether or how Meta might have deceived the public about these numbers.
Worth noting that data I collected with Common Sense Media earlier this year suggests that among U.S. girls ages 11-15 who use Instagram, 41% say they’re exposed to suicide-related content on the platform at least monthly. This doesn’t mean Meta lied about the numbers—the content these girls are referencing may not actually violate Meta’s standards, and Meta’s metric of “percentage of total views” (versus users) may still be accurate. Still, it raises the question of how “prevalence” is calculated and made public.9
Claim 4: Despite “overwhelming evidence” that its platforms harm young users, Meta refused to abandon its use of “known harmful features”
Ah yes. Another day, another claim about the evidence linking social media and teen mental health. The suit makes the case that Meta’s products are causing harm to teens’ health, and that the company has refused to address the problem.
My take: For a fuller discussion, see my prior posts, in which I lay out the current state of the evidence on this topic. But to quickly summarize: there is some evidence that social media use is linked with negative mental health outcomes among teens—in general, the effects seem to vary across different teens and to be largely dependent on how those teens are using the platforms. I would not call the current research evidence “overwhelming” in showing that social media use is causing mental health problems in teens.10
When it comes to making new laws around social media, I believe that this distinction doesn’t actually matter. We do not need to meet a scientific standard of proof for “overwhelming evidence” in order to argue that requiring some common-sense safety standards for platforms makes sense.
When it comes to proving platforms guilty in a suit such as this one, though, I don’t know. That standard of proof is for the courts to decide.
Claim 5: Meta does not comply with privacy laws
COPPA (the Children’s Online Privacy Protection Act) is a law that places certain requirements on websites that are either directed to children under 13 years old, or that “have actual knowledge” that children under 13 are using them. These websites, for example, cannot collect data from children without parental consent.
Meta has long maintained that its products are designed for youth ages 13 and older. However, the suit alleges that: 1) Facebook and Instagram are, actually, directed to children under 13, and 2) Meta did have “actual knowledge” that children under 13 were using its products. If Meta then collected data from these children without parental consent, it would be in violation of COPPA.
My take: From a research standpoint, the data is pretty clear: kids under 13 are using social media. For example, that data I collected with Common Sense Media showed that 41% of U.S. girls ages 11 to 12 say they’ve ever used Instagram. Does Meta know this? As others have argued, this one might be easier to prove.
So…now what?
Recent efforts at federal legislation to protect children online have largely stalled, and new state laws (such as those in California and Arkansas) have faced challenges in the courts. This lawsuit seems to be a new tactic by state lawmakers to rein in social media companies, but I worry that many of its claims won’t hold up under scrutiny.
Ultimately, I think it’s clear that we need to make these platforms safer for young people, but my hope is that we can do so without needing to overstate the available evidence. How, exactly, to do this effectively—through regulatory efforts, public health warnings, and lawsuits like this one—turns out to be a thorny problem.
To clarify, the 233-page federal complaint was jointly filed by 33 state attorneys general. Eight other states, plus the District of Columbia, have filed separate lawsuits. This is all separate from the hundreds of school districts that have sued Meta and other social media companies since the summer. Crazy, right? The last time I saw this many suits was at a Brooks Brothers! [Forgive me. I am functioning on very little sleep.]
In case you’re wondering whether your state is on board, the nine states that are NOT on this list are: Alabama, Alaska, Arkansas, Iowa, Montana, Nevada, New Mexico, Texas, and Wyoming. In case you’re wondering how I came up with these nine, I did, in fact, look at an alphabetical list of the 41 participating states and sing “Fifty Nifty United States” to myself to identify those missing.
Is my newborn just mad at me because I called out the receding hairline he’s developed in the past couple weeks? I’m not trying to insult him, it’s just, he looks like Friar Tuck and I think someone had to tell him?
Never mind the fact that I’m writing this at a rate of one sentence every three days, when the baby and toddler nap stars align.
DISCLAIMER: I am but a lowly clinical psychologist and academic researcher, so I cannot weigh in on the legal merits of this case. I can only weigh in on the research evidence behind the claims made. Oh, and also on the strange joy of seeing unexpected references to Internet slang (FoMO!) and random Instagram accounts (My Little Pony! PAW Patrol!) sprinkled throughout the legal jargon in the document.
This may just be a formatting convention, but the fact that this heading (META’S SCHEME TO EXPLOIT YOUNG USERS FOR PROFIT) is fully capitalized in the Table of Contents really ups the drama factor.
Techno sapiens, how are we feeling about Taylor and Travis these days? A few weeks ago, I was adamant about it being a PR stunt. This week, I suddenly found myself watching multiple angles of the infamous post-concert-embrace video and saying things like “I’m just so happy for them.” How did this happen? Who am I?
I will say that the suit, like much of the public conversation about social media, plays real fast and loose with the brain stuff. As soon as I saw a description of recommendation algorithms as “dopamine-manipulating,” I knew we were in trouble. Yes, dopamine plays a role in addiction. It also plays a role in a vast array of other physical and mental health processes, from attention to sleep. If social media companies had figured out how to “manipulate” dopamine levels with this kind of efficacy and precision, we’d be using recommendation algorithms to treat everything from Parkinson’s Disease to Restless Legs Syndrome. The thing that bothers me most, I think, is that the discussion of dopamine is unnecessary to the arguments being made—we shouldn’t need references to the brain to convince us something is a problem. Leave the poor neurotransmitters out of it!
One interesting aspect of the suit is what’s not included. I’d assume the vast majority of both teens and adults would agree that certain types of content (harmful suicide-related posts, pro-eating disorder videos, extremely violent or sexual content, etc.) simply should not be shown to teens on these platforms. But due to protections provided under Section 230, social media companies cannot be held liable for the user-generated content on their platforms. (Unless, of course, they lied about how prevalent it is, which this suit alleges).
The suit makes the following statement: “Increased use of social media platforms…result in physical and mental health harms particularly for young users.” Two citations are provided. I, of course, immediately checked both citations. One was this excellent Google Doc curated by Jonathan Haidt and Jean Twenge, which rounds up articles that both support and refute that statement. The other, to my surprise, was an academic handbook edited by me! I feel confident that this handbook also provides evidence both for and against that statement.