Misinformation that seems real, but isn't, circulates rapidly through social media. The problem is only getting worse.
Intern, Don't Sell My Data campaign
Director, Don't Sell My Data Campaign, U.S. PIRG Education Fund
When a tourist submersible lost contact during a dive to view the wreckage of the Titanic last month, the international rescue operation caught the attention of millions around the world. The rescue failed, and on June 25th, a video on TikTok broadcast the screams of the passengers in their final moments. In just 10 days, 4.9 million viewers had heard the five victims' last cries.
Except they hadn’t. The audio wasn’t from the submersible at all – it was from the video game series Five Nights at Freddy’s. But the TikTok went viral fast, and it spread a lot further than the fact-checked truth.
Social media's evolution into a distributor of news has had serious consequences for what counts as journalism and what passes for truth.
Today, more than 8 in 10 Americans get their news on digital devices – beating out TV, radio and print. Among 18- to 29-year-olds, social media is the most common news source, and they aren't the only ones turning to platforms for information: 53% of all Americans get at least some of their news from social media. Twitter, Facebook and TikTok have all become pseudo-news platforms.
When news started migrating to social media, it accelerated changes already underway in the journalism industry. In the 50s and 60s, broadcasters largely treated TV news as a public service. In the 80s, however, entertainment conglomerates began buying up networks and expecting news divisions to turn profits like their entertainment divisions. Soon came the 24-hour news cycle, with its emphasis on rapid, attention-grabbing stories. Then came the rise of pundits – journalist-esque figures designed to deliver opinions, not always facts. As the news industry changed, so too did people's expectations of what news should look like.
With the shift to social media, these dynamics have intensified. When anyone can be a journalist, content is near-endless and easily capable of supplying social media feeds with hundreds of 24-hour news cycles. Instead of opinion sections or dedicated programs for pundits, social feeds mix opinions and facts together. And the more outlandish a story, the better it does.
With fast-paced, sensationalized coverage and opinion-driven commentary taken to the extreme, more of what we see online is misinformation – content that simply gets the facts wrong.
Misinformation is incorrect or misleading information. Sometimes it can be as simple as an error in reporting. Other times misinformation content is purposefully exaggerated, using clickbait headlines or out-of-context details to make a story harder to look away from.
Misinformation online is growing. New technologies are also making it easier than ever for anyone to substantially edit photos and videos to reflect a reality that doesn’t really exist.
The move towards social media as a source for news has allowed misinformation to flourish.
Anyone with a social media account can become a "news" source. For individuals and outlets posting news, the goal is almost always to get seen by as many people as possible, and what does well on social media is not exactly the even-keeled, well-researched story. For posts on large platforms, "outrage is the key to virality," as social psychologist Jonathan Haidt puts it. Out-of-context details distort what's real, and thanks to the "share" button, mistaken impressions can run through a large network of people virtually instantly.
Researchers at MIT have found that fake news can spread up to 10 times faster than true reporting on social media. When explosive, misinforming posts go viral, their corrections are never as widely viewed or believed. The outrageous “fact” that blasts through audiences is louder, stickier, and more interesting than a follow-up correction. In the race between the false but interesting and the true but boring, the interesting story wins.
What we see on social media is determined by an algorithm that curates content. The algorithm’s job is to keep you online as long as possible. The longer you’re on, the more targeted ads the platform can sell designed to reach you, specifically. This is the business model of all the major platforms.
To keep you on longer, the algorithm uses data about you – like what types of content you have liked and shared in the past, and whose content you are more likely to engage with – to decide what to show you next. If you like or share a post, you’ll see more like it.
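The recommendation logic described above can be sketched as a toy ranking function. This is purely illustrative – the class names, weights, and `outrage_score` field are assumptions for the sketch, not any platform's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    topic: str
    outrage_score: float  # hypothetical 0-1 measure of how provocative a post is

@dataclass
class User:
    # topic -> how often this user has liked/shared posts on it
    liked_topics: dict = field(default_factory=dict)

def predicted_engagement(user: User, post: Post) -> float:
    """Toy score: past engagement with the topic, boosted by provocation."""
    affinity = user.liked_topics.get(post.topic, 0)
    return affinity * (1.0 + post.outrage_score)

def rank_feed(user: User, candidates: list[Post]) -> list[Post]:
    # Show the posts the user is predicted to engage with most, first.
    return sorted(candidates, key=lambda p: predicted_engagement(user, p),
                  reverse=True)
```

Even in this crude sketch, the feedback loop is visible: engaging with a topic raises its affinity score, which pushes more of it – and the most provocative examples of it – to the top of the feed.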
These algorithms reward those who share content most frequently by broadcasting their posts to more social feeds, earning them more views, likes, comments and shares. As we've seen, exciting or infuriating information tends to stoke more reaction. By nudging frequent users to keep sharing high-performing content, the algorithm ends up fueling networks of ongoing misinformation. A study from USC showed that 15% of frequent social media news-sharers were behind up to 40% of the fake news circulating on Facebook.
The tech behind our social feeds is not optimized for providing access to high-quality information. The goal is engagement, allowing outrageous stories and opinions to find a broad audience quickly.
TikTok has ushered in a new era of misinformation online, frequently exposing its young user base to bad information. One 2022 study found that when TikTok users searched for top news stories, almost 20% of the videos returned contained misinformation – like the TikTok of the Titan submersible "victims."
Misinformation can get actively dangerous – a 2023 study from the University of Arizona found that approximately 40% of medical videos on TikTok contained medical misinformation.
Gen Z has inherited a news ecosystem unlike any that has existed before. The social media business model stokes views, clicks and ad revenue to new heights at a very high societal cost. For one, young people in particular have experienced mental health troubles on these platforms. There are also big questions for democracy. An engaged and informed citizenry depends in no small part on reliable access to accurate information. With social media's propensity to amplify misinformation, more people getting news-like content on these platforms may further deepen the echo chambers we're currently grappling with as a country.
The social media giants are some of the most powerful and well-resourced companies in the world, and they've built the most sophisticated information-sharing networks that have ever existed. Misinformation is a complicated problem, but these companies, more than anyone, have the resources and expertise to tackle it head on. Regulators need to pay more attention to what's happening behind the social media feed curtain, and let the tech giants know it's time to prioritize finding solutions.
“How Americans get their news is changing” graphic by Ramsha Ali, Don’t Sell My Data intern
R.J. focuses on data privacy issues and the commercialization of personal data in the digital age. Her work ranges from consumer harms like scams and data breaches, to manipulative targeted advertising, to keeping kids safe online. In her work at Frontier Group, she has authored research reports on government transparency, predatory auto lending and consumer debt. Her work has appeared in WIRED magazine, CBS Mornings and USA Today, among other outlets. When she’s not protecting the public interest, she is an avid reader, fiction writer and birder.