The New Era of Social Media Looks as Bad for Privacy as the Last One

The slow-motion implosion of Elon Musk’s X has given rise to a slew of competitors, where the privacy invasions that ran rampant over the past decade still largely persist.
Illustration: MicroStockHub/Getty Images

When Elon Musk took over Twitter in October 2022, experts warned that his proposed changes—including less content moderation and a subscription-based verification system—would lead to an exodus of users and advertisers. A year later, those predictions have largely been borne out. Advertising revenue on the platform has declined 55 percent since Musk’s takeover, and the number of daily active users fell from 140 million to 121 million in the same time period, according to third-party analyses.

As users moved to other online spaces, the past year could have marked a moment for other social platforms to change the way they collect and protect user data. “Unfortunately, it just feels like no matter what their interest or cultural tone is from the outset of founding their company, it's just not enough to move an entire field further from a maximalist, voracious approach to our data,” says Jenna Ruddock, policy counsel at Free Press, a nonprofit media watchdog organization, and a lead author on a new report examining Bluesky, Mastodon, and Meta’s Threads, all of which have jockeyed to fill the void left by Twitter, which is now named X.

Companies like Google, X, and Meta collect vast amounts of user data, in part to better understand and improve their platforms but largely to be able to sell targeted advertising. But collection of sensitive information about users’ race, ethnicity, sexuality, or other identifiers can put people at risk. For instance, earlier this year, Meta and the US Department of Justice reached a settlement after it was found that the company’s algorithm allowed advertisers to exclude certain racial groups from seeing ads for things like housing, jobs, and financial services. In 2019, the company was slapped with a $5 billion fine—one of the largest in history—after a Federal Trade Commission probe found multiple instances of the company failing to protect user data, triggered by an investigation into data shared with British consulting firm Cambridge Analytica. (Meta has since made changes to some of these ad targeting options.)

“There’s a very strong corollary between the data that's collected about us and then the automated tools that platforms and other services use, which often produce discriminatory results,” says Nora Benavidez, director of digital justice and civil rights at Free Press. “And when that happens, there's really no recourse other than litigation.”

Even for users who want to opt out of ravenous data collection, privacy policies remain complicated and vague, and many users don’t have the time or knowledge of legalese to parse them. At best, users can figure out what data won’t be collected, “but either way, the onus is really on the users to sift through policies, trying to make sense of what's really happening with their data,” says Benavidez. “I worry these corporate practices and policies are nefarious enough and befuddling enough that people really don't understand the stakes.”

Mastodon, according to the report, offers users the most protection, because it doesn’t collect sensitive personal information or geo-location data and doesn’t track user activity off the platform, at least not on the platform’s default server. Other servers—or “instances,” in Mastodon parlance—can set their own privacy and moderation policies. Bluesky, founded by Twitter cofounder and former CEO Jack Dorsey, also doesn’t collect sensitive data but does track user behavior across other platforms. But there are no laws that require platforms like Bluesky and Mastodon to keep their privacy policies this way. “Folks can sign on with particular privacy expectations that they might feel satisfied by a privacy policy or disclosures,” says Ruddock. “And that can still change over time. And I think that's what we're going to see with some of these emerging platforms.”

Mastodon spokesperson Renaud Chaput told WIRED that the platform does not have any plans to change its privacy policies and noted that user data is only available on the server where a user’s account is hosted. Bluesky did not immediately respond to a request for comment. Meta spokesperson Emil Vazquez directed WIRED to a thread from the company’s deputy chief privacy officer, Rob Sherman, in which he said, “Meta’s privacy policy, and the Threads supplementary privacy policy, are the best resources to understand how Threads uses and collects data.”

Nazanin Andalibi, assistant professor of information at the University of Michigan, says that while privacy can be a competitive advantage for a newer platform, “people might still use a platform that they believe will not respect their privacy,” even if they have concerns. Mastodon has fewer than 3 million users, and Bluesky, which remains in beta, has just over 1 million. And greater privacy may not be enough to shift users’ behaviors away from bigger platforms like X and Meta’s Threads.

Unlike Bluesky and Mastodon, Threads largely abides by the same wide-reaching data collection policies as its parent company, which also owns Facebook and Instagram. Launched in July off the back of Instagram, the platform saw an initial spike in growth, followed by a plateau. But in its quarterly earnings call last week, Meta CEO Mark Zuckerberg said that Threads now has over 100 million monthly active users. “I’ve thought for a long time there should be a billion-person public conversations app that is a bit more positive,” Zuckerberg said. “I think that if we keep at this for a few more years, then I think we have a good chance of achieving our vision there.”

“Threads looks like they're collecting much more information than they actually need in order for the service to function. And some of the information they're collecting is pretty sensitive,” says Calli Schroeder, global privacy counsel at the Electronic Privacy Information Center, a nonprofit focused on privacy and free speech online. “I think this is just inextricably tied to the fact that behind Threads, Meta already holds just an absolutely obscene amount of information on individuals.”

Twitter, prior to Musk’s takeover, had its own spotty history of protecting user data. In 2009, hackers compromised the platform twice, accessing users’ private information and, in some cases, taking over accounts. In 2011, the FTC issued a consent decree—a binding settlement order—against Twitter for failing to protect user data in relation to the 2009 hacks. As part of the settlement, “Twitter will be barred for 20 years from misleading consumers about the extent to which it protects the security, privacy, and confidentiality of nonpublic consumer information,” according to the FTC, with each violation carrying a $16,000 penalty.

So far, attempts to curtail the collection of users’ data have been piecemeal, largely driven by state-level laws and individual enforcement actions. The American Data Privacy and Protection Act, proposed in 2022, remains in congressional limbo.

“Regulation continues to be extraordinarily behind,” says Ruddock. “The companies are not going to change on their own.”