Will WhatsApp’s Misinfo Cure Work for Facebook Messenger?

To protect the election, the platform will limit message forwarding to five people at a time.
Photograph: Jorg Greuel/Getty Images

On Thursday morning, Facebook announced several new policies to wrangle misinformation on its platforms ahead of the November election. Among them: limiting the number of people or groups you can forward a message to at one time on Messenger. For a glimpse of whether that might work—and how well—you need look no further than another Facebook-owned company: WhatsApp.

Restricting Messenger forwards is just one of several tools that Facebook has rolled out to combat misinfo, and it barely made an appearance in the company’s press release. But it’s also one of the only measures with an established track record, albeit an opaque one. More important, it’s one of the few steps Facebook can take without sparking accusations of political bias from either side.

In 2018, misinformation ran rampant on WhatsApp, and it was linked to deadly consequences in countries like India, where the messaging app is the de facto means of online communication. Because WhatsApp is end-to-end encrypted by default, the platform can’t know the contents of messages as they propagate throughout its ecosystem. But it could at least slow the spread. That July, WhatsApp reduced the number of accounts that you could forward a message to, from 256 to 20 for most people. In January 2019, it trimmed that number again, to 5.

That’s the playbook Facebook is emulating with Messenger, lopping the maximum number of forward recipients from 150 down to 5. “We’ve already implemented this in WhatsApp during sensitive periods,” Mark Zuckerberg wrote in a Facebook post outlining Thursday’s changes, “and have found it to be an effective method of preventing misinformation from spreading in many countries.”

Which is probably the case! WhatsApp did manage to cut the total number of forwarded messages on its platform globally by 25 percent after that first round of changes. And stricter limits, instituted in April, on “highly forwarded messages”—anything that has already passed through a chain of five or more people before it reaches you—have curtailed those nuclear-grade viral chains by 70 percent. “The limits we have put in place at WhatsApp over the last two years have certainly reduced the spread of forwarded messages,” says WhatsApp spokesperson Carl Woog. “It would be difficult for us to say with certainty it reduces misinformation ‘only’—the user feedback we’ve gotten is that it also reduces sharing of harmless memes like ‘good morning’ messages.”

In other words, limiting forwards is a blunt instrument. “Measuring the impact of misinformation and disinformation on messaging apps with accuracy is close to impossible at the moment,” says Irene Pasquetto, cofounder of the Harvard Kennedy School Misinformation Review. “Especially on WhatsApp, given that all content is encrypted and we have no access to the data.”

That encryption has unquestionable, and essential, benefits for the privacy and security of billions of people. It also contributes to what Rutgers professor Britt Paris has dubbed “hidden virality”: content that gets passed around in private groups and messages, outside of the public eye. “The little data we have on misinformation is what we get from publicly available and open source intelligence,” says Cristina Lopez, a senior research analyst at the nonprofit Data & Society who focuses on disinformation. “When you think about ‘Plandemic,’ and the way that was amplified so quickly publicly, it makes me shudder to think what that looked like privately. We were not able to measure that scale; there’s a chance that privately the spread started way before we were able to notice.”

Limiting Messenger forwards won’t shed any more light on what kind of content traverses those corridors, or how it spreads. It’s just playing the odds that it’ll slow the process down. At least one recent study indicates that it’ll work. Last fall, researchers from the Federal University of Minas Gerais in Brazil used data sets comprising posts from public WhatsApp groups in India, Indonesia, and Brazil to track the spread of messages and images—and to model what impact forwarding limits have on their spread.

“The finding we had is that it works quite well,” says Kiran Garimella, who worked on the study and is currently a postdoctoral fellow at MIT’s Institute for Data, Systems, and Society. “It reduces the speed of the spread of misinformation.”

In fact, the study found that limits on forwarding can reduce the “velocity of dissemination” by an order of magnitude. That plays out in practice, as well, at least anecdotally. “Talking from a purely personal perspective, I’ve felt that measure,” says Lopez. “My family is Salvadoran. In Latin America, WhatsApp is the only app, pretty much. It did make the platform less spammy.”
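To see why a cap on forwards can slow dissemination so sharply, consider a minimal branching-process sketch. This is an illustration, not the Minas Gerais researchers’ actual model; the forwarding probability, hop count, and cap values here are assumptions chosen only to show the shape of the effect.

```python
def expected_reach(forward_cap, hops=6, p_forward=0.05):
    """Toy model of message spread (illustrative; not the study's model).

    Every current holder forwards the message to `forward_cap` recipients,
    and each recipient forwards it onward with probability `p_forward`.
    Returns the expected cumulative number of people reached after each hop.
    """
    holders = 1.0   # people who will forward during the current hop
    total = 1.0     # everyone who has seen the message so far
    curve = []
    for _ in range(hops):
        new_recipients = holders * forward_cap   # messages sent this hop
        total += new_recipients
        holders = new_recipients * p_forward     # forwarders for the next hop
        curve.append(round(total))
    return curve

print("cap 256:", expected_reach(256))  # branching factor 12.8: explosive growth
print("cap 5:  ", expected_reach(5))    # branching factor 0.25: the chain fizzles
```

In this toy model, whether a chain explodes or dies out hinges on whether the cap multiplied by the forwarding probability stays above or below one, which is the rough intuition behind how even a crude cap can cut dissemination velocity by an order of magnitude.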

Which is not to say that it’s a panacea. While a recent Harvard Kennedy School Shorenstein Center survey found some evidence that misinformation spreads widely on messaging platforms, the nature of hidden virality means that no one really knows how much it propagates there compared with public platforms and other online vectors. And though the Minas Gerais study found a reduction in the speed of spread, Garimella also says a story that’s viral enough will still find its audience eventually.

There’s also the question of whom exactly it limits most effectively. “I think this works for political operatives and organized groups, meaning those who tend to share the same piece of content with many, many people, often using automated means,” Pasquetto says. “But it does nothing to the average user, who normally shares content with a couple of closed groups and a few selected people on a daily basis.”

Still, while limiting message forwards might be a limited tool, it’s at least a relatively unassailable one. As a universally applied standard, it should be insulated from the accusations of bias that have cowed Facebook in the past. “I’m a big fan of defined measures that are neutral to the type of content,” Lopez says. “Enforcement on specific content leads to political problems.”

Or it leads to a lack of enforcement altogether. Mere hours after Facebook rolled out its latest policies, Donald Trump posted clear and present voting misinformation to his page. Rather than take it down, the company labeled it with a gentle corrective, so there it will stay. At least on Messenger, its spread will be slowed.
