Saving Social Media
How social media companies can make their products better for teens’ mental health
Technologies evolve. At some point, we realized that wooden steering wheels in cars, which would frequently break and impale drivers, were, perhaps, not a good idea. Seatbelts, on the other hand? A very good idea.
Cars are an integral part of our lives. They’re not going away. We’ve recognized that by adding and removing features, we can maximize their benefits (getting places quickly and conveniently), while limiting their risks (life-threatening crashes).
I’ve been researching the role of social media in adolescent mental health for over a decade. I’ve learned that social media can present significant risks, particularly for teens experiencing mental health concerns, but that it also provides numerous benefits. These benefits—social connection, self-expression, access to information—are such that social media is, believe it or not, here to stay. Forever.
But this doesn’t mean that we should throw up our hands and watch idly while steering wheels impale drivers.
Teens are struggling with their mental health. Rates of depression and suicide have risen significantly in recent years, with one in three teens experiencing persistent feelings of sadness or hopelessness, and one in five seriously considering suicide. Yet fewer than half of teens with a diagnosed mental disorder receive treatment.
Mental illness is complex, and social media alone is neither a cause nor a cure. But we know that social media plays a critical role in teens’ mental health—both for good and for bad—and we should be working to make sure that it helps more than harms.
Social media companies can change their products to make them better for teens’ mental health. Here are some product changes that would benefit all social media users, but especially teens.
Personalization
Allow teens to personalize their social media experiences, both in advance and in the moment.
In order to understand why we use social media the way we do, we should start with the concept of “choice architecture.” This is the idea that the way choices are presented affects our decisions, often subconsciously. A classic example is the arrangement of products in a grocery store—we’re far more likely to buy items that the store places at eye level. One critical element of choice architecture is the idea of “defaults.” Consumers have a tendency to accept the default option, even for what would seem to be major decisions. The classic example here is organ donation: in countries where individuals must opt out of donating, versus opting in, rates of organ donation are significantly higher.
So what does this have to do with social media? Social media is a “persuasive technology,” one that uses the principles of choice architecture to shape users’ attitudes and behavior. On social media, “defaults” are pre-selected features and settings, often without any available alternatives, that have been built into an app or algorithm by a social media company. Many times, these defaults serve the interest of the company (i.e., increased user engagement, exposure to ads, etc.).
They show up in nearly every aspect of social media, including the features we use, the activities we are prompted to engage in, and the types of content we are shown. When we first open Facebook, for example, we’re taken to a “news feed,” with the most engaging, can’t-look-away content at the top, prompting us to start scrolling. On TikTok, we’re taken to the “For You” page, which shows us videos, playing on loop, selected by an algorithm. On Snapchat, by comparison, the default is to start with a full-screen, front-facing camera, prompting us to snap a quick picture and share it.
These defaults mean that social media companies, rather than users, are sometimes the primary drivers of user behavior. In my work, teens often use the word “mindless” to describe their social media use. Our data shows that the average teen looks at their smartphone over 100 times per day. Teens describe being on autopilot—their thumb landing on TikTok or Instagram as if controlled by someone else, regaining awareness only after viewing a few videos or refreshing their like counts. When they open the app, their behavior is driven not by their own choices, but by subtle behavioral nudges—an eye-popping video, a red notification. The problem with this is that those videos and like counts can have a major impact on a teen’s mood, without their conscious awareness.
Overriding defaults is hard for anyone. It’s difficult to stay focused on a certain activity (e.g., sending a message) or type of content (e.g., uplifting posts from friends), when an app is nudging you in another direction. Surely I’m not the only person to have opened social media to send a quick message to a friend, only to emerge an hour later from a deep rabbit hole of strangers’ babies giggling and dogs barking to the tune of “I Will Always Love You.” So why not give users the option to choose defaults? Why not let users be the ones to personalize the content they’ll see, or the activities they’ll be prompted to do?
Users could personalize their social media experiences in two ways: intention-setting (in the moment) and preset modes (in advance).
With intention-setting, choosing activities and content could happen in the moment, during that split second between when the thumb presses the app and the first piece of content appears. A pop-up could appear that asks a series of questions, lasting just two or three seconds. What is your goal right now? Do you want to message someone? Post something? Just scroll without thinking too much about it? Then, rather than bringing the teen to the app’s typical landing page, they could be directed to the activity they’ve chosen. Teens could also be asked how much time they’d like to spend during that session (e.g., “Five minutes?” “Two hours?” “I don’t know?”).¹
With preset modes, users could enable a variety of preset settings in advance, deciding what types of experiences they’d like to have and content they’d like to see. They could then enable a given preset mode when they first log in. They could personalize their own “mood-boosting mode,” for example, which might emphasize positive content or prompt them to message certain friends.
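To make the intention-setting and preset-mode ideas concrete, here is a minimal sketch in Python. Everything in it—the `SessionIntent` and `PresetMode` classes, the `route_session` function—is a hypothetical illustration, not any platform’s real API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of intention-setting and preset modes.
# All names here are illustrative assumptions, not a real platform API.

@dataclass
class PresetMode:
    name: str
    allowed_activities: list   # e.g., ["message", "post"]
    content_filters: list      # e.g., ["uplifting", "friends_only"]

@dataclass
class SessionIntent:
    goal: str                          # "message", "post", or "scroll"
    time_limit_minutes: Optional[int]  # None = no time limit chosen

def route_session(intent, presets, chosen_mode=None):
    """Return a landing surface based on the user's stated goal and any
    preset mode they enabled, instead of the default engagement-driven feed."""
    mode = presets.get(chosen_mode)
    filters = mode.content_filters if mode else []
    return {
        "landing": intent.goal,
        "filters": filters,
        "timer_minutes": intent.time_limit_minutes,
    }

# A teen opens the app intending to message a friend, in "mood-boosting" mode,
# with a five-minute session timer.
presets = {"mood_boost": PresetMode("mood_boost", ["message"], ["uplifting"])}
session = route_session(SessionIntent("message", 5), presets, "mood_boost")
```

The point of the sketch is the order of operations: the user’s stated goal, not the algorithm’s default, decides where the session starts.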
Presets would be like playlists for social media. When we open up Spotify, we don’t just assume the app will play whatever song happens to be selected by its algorithms. We set up playlists in advance, to match our mood or activities. We might have an “exercise” playlist or a “dinner with friends” playlist or a “feeling sad” playlist. If I turned on Spotify while lacing up my sneakers for a high-intensity interval workout, and the app decided to play Peaceful Piano for Focus, this would be nonsensical (and oddly relaxing).
To set up their preset modes, teens could think through, perhaps with the help of a parent, a friend, or a therapist, what types of content and activities they find helpful versus harmful in different moments. And when that moment arrives, they could choose with the click of a single button. Research suggests that social media affects individual teens very differently. Personalizing social media use in this way could help every teen—and adult—find the right type of social media use for them.
This option might be especially helpful for teens with mental health concerns. For the millions of teens experiencing depression, anxiety, thoughts of suicide, or other mental health issues, social media is a minefield. What will they see? How will it make them feel? Will it help or hurt? Research suggests that the creation of a “safety plan” is a powerful tool to reduce suicide attempts and save lives. With the help of a therapist, teens who are struggling create safety plans in advance, during moments of clear thinking and emotional stability, to allow for quick reference in moments of crisis. They typically contain a list of coping skills, distracting activities, and friends and trusted adults to contact in the event that they experience suicidal thoughts. Though simple, they work.
Teens experiencing suicidal thoughts could create a “safety mode” in advance. When in the midst of an episode of suicidal thinking, they could select this mode. They could be exposed to content they’ve determined, in advance, to be helpful in those moments, and avoid content—such as others’ suicide-related posts—that may be harmful. Or they could be prompted to send messages to people they’ve previously deemed supportive.
Parenting Tools
Provide parents a suite of opt-in monitoring tools, ranging from more to less invasive.
When I was in eighth grade, going into town suddenly became the go-to Friday night activity. It began with a group of about ten kids, going to dinner at the nearby pizza place and then catching a screening of Blue Crush or Maid in Manhattan at the local movie theater. There’d be the occasional piece of popcorn thrown, a bit of too-loud laughter in the theater hallway, the usual. But with each week that passed, the crowd grew larger and rowdier. Hordes of teens would pile out of their carpooling parents’ Suburbans at the movie theater, as if touching down on the red carpet. Entire theater rooms were overrun. Full buckets of popcorn flying, screams ringing out, couples making out in the aisles, airplane bottles of Baileys spilling out on the floor. I have a distinct memory of the theater owners locking the front doors to the theater as a scene out of Braveheart erupted outside—an army of teens screaming, banging on the windows, charging the doors. The kids were not alright. The next weekend, the movie theater owners announced a new policy: no teenagers allowed in the theater without parental supervision.
Sometimes, increased parental supervision is necessary.
Can many teens go to the movie theater alone with no issues? Of course. But when you find out your 13-year-old threw a slushy at the box office window, you’re going to want to keep an eye on them next time. In fact, depending on your child’s age, history, and personal characteristics, you might employ any number of tactics, ranging from more to less involved. Accompanying them to the theater but sitting a few rows away. Letting them go alone, but calling or texting them once or twice to check in. Letting them go alone, but making sure they come home afterward.
Generally, evidence suggests that effective parenting involves a combination of warmth, including communication and support, and structure, including involvement, monitoring, and enforcement of reasonable and predictable rules. The balance of these factors depends on the individual child. As teens age, for example, they require more independence, as building autonomy is incredibly important for healthy development. This will mean less parental monitoring. And if a teen is struggling in a particular area—perhaps with significant mental health issues, or with engaging in dangerous behaviors—increased parental monitoring has been shown to be beneficial.
But when it comes to social media, we’ve come to accept the idea that teens will need to operate independently, no matter their age or circumstances, and that parents’ monitoring options will be limited (or non-existent). It does not have to be this way.²
Social media companies could provide a continuum of opt-in tools for parents, allowing them to tailor their involvement to their teens’ needs. This could involve allowing parents to view a detailed record of what their child is doing online. Or providing them with a summary of the types of things their child is posting, sharing, or being exposed to. Or simply alerting them only when something concerning has been posted or consumed. Parents could choose to enable or disable these options at any time, allowing for various levels of privacy and independence for their teen.
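As a sketch, this continuum of opt-in tools could be modeled as a set of monitoring levels, each determining what a parent sees for a given event. This is a toy illustration; the level names and event format are invented.

```python
from enum import Enum

# Toy sketch of a continuum of opt-in parental monitoring tools,
# from most to least invasive. Level names and the event format are invented.

class MonitoringLevel(Enum):
    FULL_RECORD = 3   # detailed record of everything the teen does online
    SUMMARY = 2       # only a summary of the kinds of content involved
    ALERTS_ONLY = 1   # notify the parent only when something is concerning
    OFF = 0           # full privacy and independence for the teen

def visible_to_parent(level, event):
    """Return what a parent would see for a single event under a given level."""
    if level is MonitoringLevel.FULL_RECORD:
        return event                             # the full record
    if level is MonitoringLevel.SUMMARY:
        return {"category": event["category"]}   # type of content only
    if level is MonitoringLevel.ALERTS_ONLY:
        return event if event.get("concerning") else None
    return None                                  # OFF: parent sees nothing
```

The design choice worth noticing is that the levels are ordered: a family can dial monitoring up after a breach of trust or down as a teen earns independence, without switching tools.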
One concern with this idea is that, given the option, parents will “spy” on their teens’ social media use, leading to conflict and mistrust. This is reasonable. Research suggests that in many cases, restrictive mediation of Internet use, which includes “spying” on kids via technical tools, can backfire. Active mediation, which involves proactive conversations about what teens are doing online, is critical for ensuring healthy relationships and media use. It’s likely never a good idea to simply start restricting kids’ social media use or monitoring their activities without ongoing conversation.
For most teens, it is not a good idea for parents to read every message they send, watch every video they post, or scroll through their search history. Just like, for most parents, it tends to be counterproductive to stand, ear to your child’s bedroom door, trying to make out their conversations. Or to sneak into the movie theater and spy on them from the back row. But in some cases—perhaps in response to a teen’s breach of trust or significant mental health concerns—increased monitoring is beneficial.
Educating parents on effective media parenting would be an essential component of making these tools work.
Another common concern with this idea is that teens will just find a way around it—they will disable any settings put in place or create secret accounts. And yes, many teens are very tech savvy. But are they tech-savvier than TikTok’s Chief Technology Officer? Probably not. Presumably, social media companies could figure out ways to reduce the likelihood that teens would bypass these settings. Of course, teens will still find ways around them, but teens also find ways around other rules parents set. When parents ask their teens where they’re spending the night, some of them will lie about it. But that does not mean we, as a society, should give up entirely on keeping track of where our teens go at night.
A final concern is that teens will simply move to a new product, away from those platforms where parenting tools are provided. This is certainly possible. At the end of the day, though, only five or ten social media platforms are popular among teens at any given time, and of those, only one or two make up the lion’s share of teen usage. The goal of social media is to be social—teens don’t want to be on a platform where none of their friends are. Allowing parental tools on these platforms would make at least a small difference.
Safety
Enhance teens’ safety by identifying harmful suicide-related content and instituting emergency contacts.
When it comes to social media and teen mental health, one of the most complicated issues we face is suicide prevention. There are ethical, legal, and privacy challenges, many of which do not have simple solutions. Suicide itself is an incredibly complex phenomenon, one that social media alone does not cause. Even so, there are steps social media companies could take to reduce risks and promote safety. I will describe two suggestions here: identifying harmful suicide-related content and allowing for emergency contacts.
There is much debate around how platforms should respond when a teen posts something about suicide. The key challenge here, in my opinion, is balancing benefits and harms to the poster (the person posting this content) and the viewer (the person seeing this content). How do we protect at-risk individuals’ ability to seek social support while mitigating the potential harmful effects that their posts may have on others? Do we allow people to post whatever they want about suicide, in the hope that this results in opportunities for personal expression, intervention, support, and stigma reduction—or do we take these posts down when doing so will protect the people around them?
The potential harm to viewers of certain suicide-related posts is supported by decades of research on a phenomenon called suicide contagion: one suicide increases the chances of other suicides occurring within a social group, such as a school or neighborhood community. Teens are especially vulnerable to this type of influence, as they are more responsive to social influence of all kinds.
If you or someone you know is having thoughts of suicide, help is available. Call the National Suicide Prevention Lifeline: 1-800-273-8255 (TALK). Text the Crisis Text Line: 741741 (www.crisistextline.org). Additional resources: www.speakingofsuicide.com/resources
In my work, I’ve conducted dozens of interviews with teens who have histories of suicidal thoughts or attempts. They describe frequently coming across their peers’ suicide-related posts online. They recognize the potential value of individuals gaining much-needed support this way. But they often describe these posts as triggering, upsetting, or depressing. Some say it’s introduced them to new methods of harming themselves, some that it “makes it feel like everyone’s depressed.” One teen summed it up: “After [looking at those posts] I feel more confident to do, like, to give up on life…I feel like, it’s an influence basically. It’s like influencing you to give up and be gone.”
The risks of keeping suicide-related posts publicly accessible are real. But so, of course, are the risks of preventing individuals from sharing suicide-related content altogether. So how do we distinguish helpful from harmful suicide-related posts?³
Guidelines on safe media reporting on suicide—those that limit the potential for copycat or contagion effects—have existed for years. And recently, a group called Orygen has developed guidelines specifically for social media through their innovative ChatSafe initiative. For example, posts (or videos) about suicide should not glamorize, sensationalize, or romanticize suicide; should not trivialize suicide or blame it on a single cause; should not describe suicide as desirable; and should not provide details about methods or locations of attempts. In contrast, safe posts about suicide can provide messages of hope and recovery, include links to resources, or indicate that suicide is preventable.
In my mind, then, this is a technical problem. We know that social media companies have already succeeded in building algorithms to identify suicide-related content in general. But these algorithms lack sensitivity. Not all suicide-related content is the same, and evidence suggests that different types of posts can have very different effects on users’ risk for self-injury. If social media companies could better classify suicide-related posts as helpful or harmful, they could remove or limit access to harmful posts, prevent such posts from “going viral,” or, at the very least, avoid promoting those posts to teens who may already be at risk of suicide. This is a tricky problem, but it’s one that we need to get right.
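To illustrate the shape of the classification problem (not to solve it), here is a toy rule-based sketch. A real system would use trained models on far richer signals; these keyword lists are invented stand-ins loosely modeled on the safe-messaging criteria described above.

```python
# Toy illustration of triaging suicide-related posts as "helpful" vs
# "harmful," loosely following safe-messaging criteria. The keyword lists
# are invented stand-ins; a real classifier would be a trained model.

HARMFUL_SIGNALS = ["method", "painless", "glamor", "best way to"]
HELPFUL_SIGNALS = ["recover", "hope", "help is available", "preventable"]

def triage_post(text):
    """Return a moderation action: 'limit', 'allow', or 'review'."""
    t = text.lower()
    harmful = sum(signal in t for signal in HARMFUL_SIGNALS)
    helpful = sum(signal in t for signal in HELPFUL_SIGNALS)
    if harmful > helpful:
        return "limit"    # don't recommend or let it go viral; attach resources
    if helpful > harmful:
        return "allow"    # messages of hope, recovery, and resources
    return "review"       # ambiguous: route to human moderators
```

Even this toy version makes the key point: the output is not a single take-down/leave-up switch but a graded response, with ambiguous cases escalated to humans.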
What should social media companies do when a teen posts something indicating imminent risk of suicide? This is, of course, an emergency scenario—one in which every moment counts. This is where emergency contacts could save lives. When teens set up a social media account, they could be required to designate one or more emergency contacts, perhaps someone over the age of 18. Those contacts could be required to accept the designation. If a post indicating imminent risk of harming oneself or others is detected or reported, these emergency contacts could be notified immediately. By designating specific people to intervene in an emergency, we could increase the chances that a teen gets help quickly and effectively.
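A minimal sketch of how the emergency-contact flow could work, assuming a contact must be an adult and must explicitly accept the designation. The class and function names are hypothetical.

```python
# Hypothetical sketch of the emergency-contact idea. Class and function
# names are illustrative, not any platform's real API.

class Account:
    def __init__(self, username, age):
        self.username = username
        self.age = age
        self.emergency_contacts = []  # holds only accepted designations

    def designate_contact(self, contact, accepted):
        """A designation takes effect only if the contact is an adult
        (18+) and has explicitly accepted it."""
        if contact.age >= 18 and accepted:
            self.emergency_contacts.append(contact)
            return True
        return False

def notify_on_imminent_risk(account, post_is_imminent_risk):
    """When a post indicating imminent risk is detected or reported,
    return the contacts to notify immediately."""
    if not post_is_imminent_risk:
        return []
    return [c.username for c in account.emergency_contacts]
```

The requirement that contacts accept in advance is what makes the emergency path fast: when every moment counts, the platform already knows exactly whom to alert.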
So…what now?
Saving social media will be hard. Change must happen at multiple points in the system, from internal governance to external regulation, from research to education, from individual choice to larger cultural shifts. We have a long way to go toward fixing our relationship to social media and protecting teens’ mental health. Product changes such as these are an important place to start.
Thoughts on this post? Ideas for future Techno Sapiens posts? I want to hear from you!
Thank you to Dr. Sophie Choukas-Bradley and Kara Fox for their helpful edits and feedback on this post, and to Alex Nesi for designing all Techno Sapiens graphics.
¹ In general, research suggests that the time teens spend online is less relevant for their mental health than the behaviors they’re engaged in. However, many teens do report that when they spend more time on social media than they’d planned, this tends to negatively affect their mood. For those teens who do want to take control over the time they’re spending on social media, intention-setting could provide another tool to do so.
² Social media companies seem already to be toying with these ideas. Instagram announced recently that they’ll work on building “opt-in parental supervision tools for teens.” TikTok recently released a “Family Pairing” feature that allows parents to put some controls in place, including setting time limits and opting into “restricted mode,” which according to TikTok, “limits the appearance of content that may not be appropriate for all audiences.”
³ Fortunately, this is something social media companies are already thinking about. In Facebook’s Safety Center, they write: “When someone posts about self-harm, we want the person to be able to ask for help or share their path to recovery, but we must also consider the safety of the people who see that post. The post may unintentionally trigger thoughts of self-harm or suicide in others. We don't want people to share content that promotes self-harm, but we also don't want to shame or trigger the person who posted the content by removing their post.” Currently, their approach involves disallowing content that “celebrate[s] or promote[s] suicide and self-injury,” restricting content to adults over the age of 18 “in some instances,” putting some content behind a “sensitivity screen,” and providing resources. This is a good start, but I would argue that this approach does not go far enough.