How the US Can Prevent the Next ‘Cyber 9/11’

In an interview with WIRED, former national intelligence official Sue Gordon discusses Russian election interference and other digital threats to democracy.
“One of the best ways to protect against [election] influence is to actually tell people that the potential is there,” says Gordon. Photograph: Chip Somodevilla/Getty Images

Calling the past month a tumultuous one for United States digital policy might be an understatement. Between remote working and learning, Netflix binging, and doomscrolling, internet usage has swelled during the pandemic. The Trump administration, meanwhile, continues its campaign against Chinese telecom Huawei and has threatened to ban TikTok in the United States.

On top of that, General Paul Nakasone, head of US Cyber Command and director of the National Security Agency, said last month, “We’re going to act when we see adversaries attempting to interfere in our elections.” President Trump publicly confirmed a reported US Cyber Command operation in 2018 to knock Russia’s Internet Research Agency offline during the midterms. And Democratic representatives have requested an FBI briefing on foreign influence operations aimed at the 2020 election.

Still, rising from all these digital threats is the potential for better policy and outcomes. “You see cyber now come from the world of the techies into the world of geopolitics,” says Sue Gordon, who most recently served as principal deputy director of national intelligence, the second-highest-ranking intelligence official in the US, before resigning in August 2019. In response to these threats, the government, private sector, and civil society are getting “much more mature about the kinds of tools we use against them.”

In an interview with WIRED, Gordon—who has also served in senior roles at the Central Intelligence Agency and as deputy director of the National Geospatial-Intelligence Agency—talked cybersecurity, digital threats to democracy, and whole-of-society responses to those risks.

As the November election approaches, as Russian state media spread lies about the coronavirus, and as social media platforms keep removing accounts tied to state-linked information operations, the Russian government’s digital threats to electoral processes were top of mind throughout the conversation.

In Gordon’s view, there are two main reasons why the US remains so focused on Russia “as such a dangerous, capable adversary.” The first is clear evidence that Russian actors in 2016 perpetrated both interference operations, like targeting election infrastructure, and influence campaigns, that is, “influencing people’s will to vote, how people vote, whether people vote, whether they think their vote matters.” For just a smattering of evidence of both influence and interference, look no further than the Senate Select Committee on Intelligence’s five reports on Russian campaigns in the 2016 US election—on election infrastructure attacks, use of social media, the US government response, the US intelligence community’s assessment, and US counterintelligence efforts (yet to be publicly released).

Second, whereas other countries “are relatively newer about using all the instruments available to them,” Russian intelligence services are “fully formed, very mature.” But she stresses that the Russian government is not the sole conductor of influence operations. Indeed, on July 24, the director of the National Counterintelligence and Security Center warned of China, Russia, and Iran expanding influence efforts aimed at the US’ November election.

Today, Gordon says, “We are more prepared, the United States and the whole stack of people that have to be interested in this, than we were in 2016.” There is more ongoing effort to protect election machines, and the federal government is working more with localities that administer elections. “That doesn’t mean it’s perfect,” though; “there’s always more you can do.”

Election security experts have spent years arguing for a list of changes, with only some success. Senate Republicans continue blocking legislation that would provide federal election security funding and address the abysmal state of electronic voting technology security: Many electronic voting machines are still vulnerable, as are many voter registration databases. In May, the Department of Homeland Security sent a private memo to state officials, obtained by The Wall Street Journal, recommending paper ballots over electronic ones, as the latter “are high-risk even with controls in place.” Foreign operations, meanwhile, continue.

Still, Gordon says 2016 was a thunderous wake-up call for digital threats to—and through—the open internet ecosystem. “One of the questions that always was asked,” she says, “was will we ever really come up with a deterrence until we have a cyber 9/11? I still in my heart think that election interference could have been the cyber 9/11. That was the moment where we realized how cyber can be used and the type of threat that it could pose.”

Gordon acknowledges that not everyone sees it that way. President Trump has routinely attacked the intelligence community, disputed its conclusions, and even likened US intelligence agencies to Nazis. Denial of Russian influence operations in 2016 is core to this pattern. Standing, and sitting, next to Vladimir Putin, the president has wagged his finger with a smirk, instructing, “Don’t meddle in the election, please,” at last year’s G20 in Japan, and even more infamously sided with Putin over the FBI at a 2018 summit in Helsinki: “President Putin says it’s not Russia. I don’t see any reason why it would be.” The recent appointment of a new director of national intelligence, controversial for partisanship and a lack of qualifications, has caused even more concern. All of which raises the question: Are politicians’ attacks on the intelligence community hurting the US’ handling of digital threats like election interference?

Not entirely, says Gordon. “As an intelligence officer, you know you are always, for your whole life, presenting inconvenient information at the worst time that steals decision space.” She adds, “We’re withstanding it. I don’t like it. But I think there are good, solid voices and good work going on all around that continue to send a message about the threats that are real.”

Russian election meddling isn’t the only digital threat, of course. “We’ve really been tracking and doing hard work against Chinese economic espionage,” Gordon says, pointing to Department of Justice indictments of Chinese hackers amid growing cyber-enabled trade secret theft from US firms. “That was a pretty bold move, because what you see is us in the open charging and holding accountable Chinese actors for cyber activity.”

The indictments also sit within a larger strategic calculus. Efforts to thwart election-influence operations, indictments for economic espionage, and even White House call-outs of Russia for the destructive NotPetya malware attack all demonstrate “more willingness to call them out openly, more willingness for both the private sector and the government to partner to try and identify and come up with a response.”

Some experts, however, call out the calling out. Some insist that indictments of Chinese intelligence officers do little to stop cyber-enabled trade secret theft. Others argue that finger-pointing at Russia has little impact on election interference.

Important to cultivating strong responses to digital threats, Gordon says, is recognizing that they permeate far beyond government, and far beyond the US. “The US industrial base and the population and our citizens are contributors to national security. This was always true, but especially so in a digitally connected world where governments are not the only targets.” US adversaries “realize where our strength is—it’s in our innovation, it’s in our creation, it’s in our technological leadership.” Attacking those systems is a means to “create advantage over the United States.”

At the same time, it’s not just the US facing these challenges. Russian state or state-backed actors have attempted to interfere with or influence elections in Ukraine, France, Germany, the Netherlands, and many other European countries. The UK Parliament’s Intelligence and Security Committee recently published a report finding an inadequate government response to Russian influence operations targeting the 2016 Brexit referendum.

The Huawei saga is another example, as governments and companies the world over—Denmark, India, Japan, South Africa—feel the pressure from Trump administration officials, who allege that the telecom’s 5G equipment is a vector for Chinese government espionage. Several realities exist simultaneously: Many countries have not issued full bans on Huawei 5G technology, and the company gains ground by the day in markets where its low prices are a key selling point. Yet global supply chain entanglement also means Huawei won’t be the last digital infrastructure supplier to face such questions.

“I love the pressure that is being put on, openly, about concerns about data security and data sovereignty and data protection, because I think that’s kind of hand-in-glove with privacy,” Gordon says. “I ought to be able to know that my data is protected and not have it be at risk just because it travels across someone’s infrastructure. And in the US and in democratic nations we have these laws that protect that; in some others they don’t.” While the separation between government and the private sector is a benefit in the US, “the reason why China is so concerning is there is no difference between the Chinese Communist Party, the Chinese government, the private sector.”

Because many countries are dealing with these risks, Gordon says, “US policy, US regulation has to be communicated with our partners and allies.” Countries need to ask one another, “Do we see it the same way? Do we understand that we’re all connected and we’re making decisions for each other?”

Fighting these threats also requires every tool in a country’s arsenal; tech alone no longer suffices. “Early in the cyber age, cyber threats were countered by cyber defense—technical against technical,” she says. But since “cyber is used as a medium through which national interests can be affected, now you’re starting to see new tools coming in.”

Following question after question on digital risks, Gordon offers an alternative framing. “Instead of thinking about it in terms of threats, think about it in terms of opportunities,” she says, “because we will increasingly want to protect ourselves.”

Shoring up supply chain security, from microelectronics to telecommunications to food, is one example that Gordon says is probably her biggest takeaway from the Covid-19 pandemic. “Someone once said, how come people lose so much money betting on sports? And the answer is because they always assume that their best day is their average day.” In the US, decisions made for efficiency or for economic reasons left supply chains vulnerable, Gordon says. Placing a higher priority on security can help diminish these weaknesses.

Reflecting on foreign election interference and influence operations, under an umbrella of digital threats to democracy writ large, Gordon comes back to transparency. “When I started in this business, we never would have written an unclassified intelligence community assessment,” she says in reference to the intelligence community’s publicized findings on Russian operations in 2016. “It would have been a conversation within our closed environment.”

There is a balance, as many in the Obama administration worried in 2016, between publishing evidence of threats and leading the public to exaggerate them in their minds. “How do you share threats without doing your adversaries’ work for them?” she asks. “If you just get too breathless about it, you can actually let people’s imagination go beyond the capabilities that are actually there—and do more damage.”

Russian operatives have even recently created disinformation about disinformation. Meanwhile, conspiratorial thinking in the US that sees foreign influence where none exists, or imagines foreign influence capabilities as far more expansive than they really are, only undermines trust in what the public sees, hears, and reads.

Ultimately, the opportunity for transparency is valuable. “Evil has a hard time succeeding in the light,” Gordon says. One of the reasons why the intelligence community, the FBI, and the Department of Homeland Security, among others, have discussed cyber and information threats to elections so much, Gordon says, “is because one of the best ways to protect against influence is to actually tell people that the potential is there.”

Gordon also places a heavy emphasis on the private sector’s role. “We want US companies to be able to compete and sell their products and be successful worldwide, but you can’t have them be unwitting partners of adversaries or competitors who would be unfair against them. So, I think that’s the conundrum—how do you do both?”

This question has been at the center of recent public discussion—spanning corporate claims that regulation of the US tech industry will hinder competition with China, concerns about US internet companies censoring information at the Chinese government’s behest, and even the publication of books like Amy Webb’s The Big Nine that argue for realignment between private-sector-led technology development and democratic values in the US.

“I put a lot of pressure on the government to set the right framework, to have the right alliances, to be able to articulate where the left and right boundaries are. I put a lot of pressure on companies to know how to both be profitable and be secure,” says Gordon. “So many of the really important technologies are being commercially developed. It is really hard to after-the-fact decide to make them secure.” Individuals also have a role to play, Gordon adds, by discerning truth from falsehood online and making smarter digital hygiene decisions. “Human vigilance, to go along with technical vigilance, is really quite good protection.”

Having a more strategic outlook about technology, the idea goes, will help bring long-term security and national policy questions—not just short-term profits—into the equation on detecting, mitigating, and where possible eliminating digital risks.

