8 biases that will kill your security program

Feature
Jul 20, 2021 | 8 mins
CSO and CISO | Risk Management

CISOs and their security teams often hold cognitive biases that get in the way of making the right risk management and incident response decisions. Here are eight of the most common to avoid.


The decisions that security leaders make are often influenced by a variety of cognitive biases, some subtle and others easy to spot. Avoiding these biases is critical to ensuring that cyber risks are interpreted and acted on appropriately, especially when major disruptions happen, such as the recent shift to a more distributed work environment because of the COVID-19 pandemic.

“The behavior and decision-making processes of individuals have a direct impact on security,” says Sounil Yu, CISO at JupiterOne, a provider of asset management and governance technologies. Human error is the cause of many breaches, so understanding how people think, react, and behave is essential to good cybersecurity, he says. Understanding behavioral biases is even more important during the remote work era, when personal security hygiene has a greater impact on overall network health and the consequences of even a single wrong decision can ripple across the enterprise.

Here, according to Yu and other security experts, are some common biases that security leaders are prone to and need to avoid.

1. Confirmation bias

CISOs can make the mistake of assuming that the threat narrative they are inclined to believe is always the right one. “Confirmation bias is when you favor information that confirms your previously established views or beliefs,” says Rick Holland, CISO at Digital Shadows. One area where this is especially problematic is attack or threat attribution, where security leaders can easily fall into the trap of pinning blame on a specific nation-state or threat actor simply because they assume that’s the case. Instead, CISOs should seek out objective data points to minimize confirmation bias, look at alternative scenarios, and actively challenge their belief system, he says.

Yu points to the tendency of some to automatically picture attackers as hackers in hoodies and ski masks, casting them as bad and evil people, while professional cybercrime syndicates and nation-state actors look more like white-collar workers. “We overestimate the frequency of hackers in hoodies and underestimate the likelihood of the professionals that don’t look like hackers.”

2. Bandwagon bias 

In an industry where information sharing and comparing security practices with peers are encouraged, security leaders can sometimes take the safe route and adopt certain approaches just because everyone else around them has adopted them. Christopher Pierson, founder and CEO of Blackcloak, calls it the “bandwagon effect” in cybersecurity. As an example, he points to a CISO signing off on controls to address a specific risk because the same controls are used, or are perceived to be used, by others. In these situations, even if a particular control does not work or is only partially effective, the fact that everyone else in your circle is doing it is justification enough, Pierson says.

“CISOs need to minimize the impacts of groupthink,” Holland says. “Groupthink occurs when people strive for consensus and agree with everyone else’s ideas.” Such thinking can eliminate alternative thought and lead to faulty analysis and conclusions. “CISOs can prevent groupthink by building diverse teams, fostering critical thinking, and encouraging the devil’s advocate perspective.”

3. Hindsight bias

Hindsight may be 20/20, but using it to make assumptions about future cyber risks is dangerous. Yu describes the bias as the illusion of understanding things when you really don’t. “When we find a compromise, we think that the mistakes that the organization made were obvious,” he says, “but it’s just hindsight that makes it seem obvious.” Often there was no realistic way anyone could have discovered the problem until it manifested itself.

As an example, he points to vulnerabilities that remain undiscovered for years despite numerous eyes being on the code. It’s only after the vulnerabilities are discovered that it seems they should have been found earlier. “The illusion that we think that we understand the past feeds the illusion that we can predict the future accurately,” Yu says. “Because a vendor comes in and says that they could have fixed a problem in the past, we are left overconfident that they can also fix future new problems.” That is rarely the case, he says.

4. “They won’t let us do that” bias

Security leaders and practitioners tend to point to factors outside their direct control to explain their inability to do something. Commonly cited reasons for not launching a needed security initiative include a lack of support from the top or user resistance to change. The bias here often lies in the assumptions that security leaders make when arriving at such conclusions.

John Pescatore, director of emerging security trends at the SANS Institute, calls it the “management/users will never let us do that” bias. The best example is strong authentication, he says. “Easily 70% of CXOs and board members are using SMS messages for 2FA on their personal financial online accounts and using fingerprint or face recognition biometrics on their mobile devices,” he says, “but security teams still think they will fight 2FA.”

5. Anchoring bias 

Security leaders who are overly influenced by the first piece of information they receive are susceptible to anchoring bias, says Holland. Minimizing anchoring bias is especially important for incident response activities, he says.

The early stages of investigations often reveal incomplete pictures, and the story of what happened usually becomes more apparent only as the investigation unfolds. CISOs shouldn’t anchor on early assessments during an intrusion and should remain open to other possibilities as the response continues, Holland says.

When an organization doesn’t have formal measures for determining inherent risk or residual risk, security leaders can rely too much on other information sources—such as the news media—to make assumptions about their own risk posture, says Pierson from Blackcloak. In these situations, even risks with low likelihood or low impact tend to be viewed as more likely to happen. Usually, this kind of bias exists at the highest levels, such as the board of directors. “Due to teams being more remote in the pandemic and not being able to walk by the CISO’s office or get together in an ad hoc manner, some of these biases may have been more present in the past year,” he says.
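
For context, the sketch below shows one simple way such a “formal measure” is often expressed: inherent risk scored as likelihood times impact, with residual risk discounting that score by the estimated effectiveness of existing controls. The risks, scales, and figures here are hypothetical and meant only to illustrate the idea, not to represent any particular organization’s methodology.

```python
# Hypothetical sketch of a simple formal risk measure: inherent risk is scored
# as likelihood x impact on a 1-5 scale, and residual risk discounts the
# inherent score by the estimated effectiveness of existing controls.
# The risks, scores, and effectiveness figures below are illustrative only.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int               # 1 (rare) .. 5 (almost certain)
    impact: int                   # 1 (negligible) .. 5 (severe)
    control_effectiveness: float  # 0.0 (no mitigation) .. 1.0 (fully mitigated)

    @property
    def inherent(self) -> int:
        return self.likelihood * self.impact

    @property
    def residual(self) -> float:
        return self.inherent * (1 - self.control_effectiveness)

risks = [
    Risk("Ransomware via phishing", likelihood=4, impact=5, control_effectiveness=0.6),
    Risk("Third-party data exposure", likelihood=3, impact=4, control_effectiveness=0.3),
]

# Ranking on the computed scores, rather than on the latest headline,
# is one way to keep anchoring on outside reporting in check.
for r in sorted(risks, key=lambda r: r.residual, reverse=True):
    print(f"{r.name}: inherent={r.inherent}, residual={r.residual:.1f}")
```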

6. Business language bias

In recent years there’s been a heavy focus on CISOs and other security leaders being able to articulate their organization’s cyber risk posture in language that the C-suite and board of directors can understand. Security executives have been encouraged to think about business alignment and business goals, and to position the security function as a business enabler rather than merely a cost center. While such thinking is essential, security leaders need to be cautious about overdoing it.

Going overboard in talking “business language” instead of laying it out in security terms is one bias leaders need to avoid, says Richard Stiennon, principal at IT-Harvest. CISOs are often encouraged to talk like a CFO, but that does not mean they should couch everything cybersecurity related in terms of risk management all the time. “That just leads to the board making the wrong decision based on a false perception that cyber risk is manageable.”

7. “Developers don’t care about security” bias 

The push toward DevSecOps has changed the dynamics among information security, operations, and software development teams at many organizations. Increasingly, developers have begun assuming more responsibility for integrating security earlier in the software build cycle. A survey that GitLab conducted earlier this year showed 39% of developers—up from 28% last year—saying they felt fully responsible for security, while 32% said they shared the responsibility with others. Yet there is a tendency among security groups to think developers are still dragging their feet on security. Over three-quarters of the security respondents in the GitLab survey thought developers find too few bugs, and find them too late in the process.

There’s this bias among security leaders that developers don’t care about security, Pescatore says. “This was true years ago, but in recent years many software architects see privacy as a functional requirement.” Working with developers on shared tools, processes, and privacy concerns can help them build better security into code, he says.

8. Blind spot bias

Pierson describes blind spot bias as a CISO perceiving themselves to be less biased than everyone else around them. Not only can this lead to a misidentified control or solution, it can also lead to cultural issues, he says.

This sort of bias typically manifests itself in situations where things are easier to measure, Pierson says. “One example here might be with phishing simulations where a CISO essentially concludes that people are pretty well trained, based on numbers or knowing the players involved.”
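
A minimal sketch of how this can play out in the numbers themselves: the hypothetical phishing-simulation results below produce a reassuring overall click rate, but breaking the same results down by team exposes a pocket of risk that the aggregate figure hides. The teams and figures are invented purely for illustration.

```python
# Hypothetical phishing-simulation results: (team, emails sent, clicks).
# The numbers are illustrative only; the point is that a healthy-looking
# overall click rate can hide a team that remains highly susceptible.
results = [
    ("Engineering", 400, 12),
    ("Sales",       150, 30),
    ("Finance",      50, 14),
]

total_sent = sum(sent for _, sent, _ in results)
total_clicks = sum(clicks for _, _, clicks in results)
print(f"Overall click rate: {total_clicks / total_sent:.1%}")  # ~9.3%, looks fine

for team, sent, clicks in results:
    rate = clicks / sent
    flag = "  <-- blind spot" if rate > 0.2 else ""
    print(f"{team}: {rate:.1%}{flag}")  # Finance sits at 28% despite the overall number
```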

CISOs that run global teams need to be particularly aware of judging situations in terms of their own values and beliefs, Holland from Digital Shadows says. When working with and coaching staff from other countries, security leaders need to pay attention to the beliefs and value system of that region. “Don’t view the situation through your worldview; attempt to understand it via theirs,” he advises.