
Is misinformation the newest malware?

News Analysis
May 01, 2023 | 7 mins

Malware | Threat and Vulnerability Management

Experts say that cybersecurity skills and a whole-of-organization approach can go a long way to tackling misinformation threats.

Misinformation and cybersecurity incidents have become the top scourges of the modern digital era. Rarely does a day go by without significant news of a damaging misinformation threat, a ransomware attack, or another malicious cyber incident.

As both types of threats escalate and frequently appear simultaneously in threat actors’ campaigns, the lines between the two are getting fuzzy. At this year’s RSA Conference, information security experts appeared on a panel titled “Misinformation Is the New Malware” to hammer out the distinctions.

Panel moderator Ted Schlein, chairman and general partner of Ballistic Ventures and general partner of Kleiner Perkins, launched the session by saying to the panelists, “I posed to you all that misinformation is just the newest form of malware. I’d argue that misinformation is far more dangerous to corporations, society, and individuals. And with disinformation, you’re quite literally tricked into downloading the exploitable directly into your brain, and no network intrusions are actually needed.”

Yoel Roth, former Head of Trust & Safety at Twitter and now a technology policy fellow at UC Berkeley, highlighted the close, parallel relationship between malware and misinformation, noting that they frequently go hand-in-hand. “Misinformation has been a facet of human communication forever,” he said. “Where it gets worse is that some of that malicious content is also amplified through malicious conduct, people deploying technology to try to inauthentically propagate messages that could cause harm.”

Misinformation can be as insidious as malware

“When we were thinking about the risks of Twitter being targeted by, let’s say, the Russian government, we always had to recognize that there would be attempts to get into Twitter’s systems and target the company and exfiltrate user data,” Roth said. “There would be attempts to influence the conversations happening on the platforms, and there would be attempts to compromise the accounts of Twitter’s users. There were multiple layers to each of these things. And Twitter as a company had a role to play in addressing that conduct across each one of those levels.”

Roth pointed to the “great Twitter hack of 2020,” when financially motivated people in their twenties compromised a Twitter employee’s account to promote a crypto scam on high-profile accounts. This incident is an example of what he called the “illusory distinction” between malware and misinformation. “This was targeting Twitter’s employees to gain access to Twitter’s backend systems in order to carry out malicious activity propagated across the social network. You cannot think of these problems in isolation,” Roth said.

“When it comes to disinformation, it’s just as insidious as malware, but it’s different in the sense that this is all happening out in the open,” Lisa Kaplan, CEO of Alethea, said. “So, you can catch it early before it starts to have [an] impact” before, for example, an organization’s stock price starts to tumble. “I think there’s a lot of opportunity for organizations to be able to proactively mitigate these types of scenarios.”

Aside from being prepared, there’s not much organizations can do to stop misinformation, which is why some have called for the government to take action. “The problem with that kind of solution in the US is the First Amendment,” Cathy Gellis, internet lawyer and policy advocate, said. “Shouldn’t there be a law to say no to the bad things that are happening? But that’s when the First Amendment shows up because a lot of the things you might want the law to say no to are not things that the law can say no to because the First Amendment protects expressive rights,” Gellis said.

Although problematic, misinformation is not malware

Some practitioners steeped in defending against cyber threats believe that battling malware and battling misinformation, while both crucial, are two distinctly different efforts. Nonetheless, cybersecurity professionals need to be aware of how misinformation works.

Debora Plunkett, former director of the Information Assurance Directorate (IAD) at the National Security Agency (NSA), played a role in the Defending Digital Democracy project out of Harvard’s Belfer Center. She tells CSO that misinformation, by definition, is not malware. “Now, if we want to say is it like malware in that malware is destructive or is designed to be destructive, is designed to disrupt, is designed to damage, is designed in many instances to gain unauthorized access or cause someone to think something that it’s not, then I could get there.”

Describing herself as a purist, Plunkett says, “Just the premise that misinformation is the new malware, I don’t agree with that. I don’t agree with it because whenever we speak that way, such and such is the new such and such, we’re saying that the former thing is no longer a problem because we’ve got this new problem here. And that is very far from the truth. Both of them are important.”

“There are some similarities between misinformation and malware,” Ashish Jaiman, Microsoft director of product management for Bing Multimedia, who was a technical director in Microsoft’s Defending Democracy Program, tells CSO. “Some of the cybersecurity campaigns are done through social engineering, and phishing is one of them, but there are other ways that are similar to how misinformation spreads. So, the difference between them is that cybersecurity is essentially binary. People understand what a cybersecurity act looks like from an engineering or an organizational perspective.”

Misinformation is murkier. “What’s true and false in an information campaign is different than what’s true and false in a cybersecurity campaign,” says Jaiman. “It’s very hard for a technology or a cybersecurity expert to understand that.”

Still, organizations can bring to bear cybersecurity skills and techniques in the domain of information defense. “We have spent a lot of time building our tools to use technology like AI to stop an attack before it starts propagating,” Jaiman says. “Then even if it goes through, then we have spent a lot of time teaching people to actually identify or at least be aware of these kinds of attacks.”

One tool organizations can borrow from cybersecurity in tackling misinformation is sharing signals, which Jaiman says cybersecurity professionals already do regarding child exploitation and abuse scenarios. “On a macro level, if you think about it, creating that kind of signal and removing [content] is similar to what we do with phishing, where we share signals on cybersecurity, we remove content and whatnot.”

Integrated solutions and misinformation awareness are needed

Even if misinformation is not malware, the two maladies frequently align, which requires awareness by cybersecurity professionals of the misinformation threats and an integrated approach across organizations. “If you’re planning adversarial response and defense around the way that your organization is configured and the distinctions between your comms people and your trust and safety people and your security people, you’ve already failed,” Roth said.

Plunkett thinks cybersecurity personnel should not be obligated to take on the misinformation banner because it requires a deep knowledge of whatever subject matter comes into play. But, “I think that people who are responsible for traditional malware and cybersecurity certainly need to be aware,” she says. “It’s additional information that absolutely you should be aware of and should be conscious that it could exist and could be used to help the cybersecurity problem that you are working on.”

Kaplan said misinformation is a “distributed risk,” necessitating a broader organizational approach. “We typically see that communications and security will work together because of this. Often in the room is also legal and government affairs. That tends to be the right mix. There’s a whole host of different components of the org charts that the adversary cares so little about that are responsible for actually responding to an incident.”