By dswinhoe, Editor

How decision-making psychology can improve incident response

Feature
Jan 29, 2021 | 8 mins
Cyberattacks | IT Skills | Security

Challenging biases and engaging in regular drills can keep your incident response team sharper than once-a-year wargames.


Incident response (IR) is a key part of any large organization’s security posture. Ensuring teams know how to react to different situations and scenarios enables companies to respond more quickly and effectively to cyberattacks.

There’s also a cost benefit. According to IBM’s Cost of a Data Breach Report 2020, companies with an incident response team that tests IR plans using tabletop exercises or simulations saw breach costs reduced by an average of $2 million compared to companies with no IR team or IR testing.

But what is the right cadence for such tests to ensure proper decision making during a crisis?

Once-a-year wargames are not enough for effective IR

Rebecca McKeown, an independent chartered psychologist, is an expert in decision making and situational awareness who has worked with both the UK Civil Aviation Authority and the UK Ministry of Defence. She sees direct parallels between military and cyber, and a way to carry lessons about decision making from one to the other. “The problems are the same: high pressure, fast paced, very complicated, high stakes, and a lot of stakeholders,” she says. “It’s the same sort of problem just in a different domain.”

As with the military, cybersecurity is heavy with frameworks, and some, like the OODA Loop, have crossed from one realm into the other. And, just as in the military, an over-reliance on pre-defined frameworks and processes can hinder results. “It’s really about the agility of thinking,” McKeown says. “Too much reliance on things like OODA and other frameworks means people then tend to think filling in the framework gives them the answers. Actually, it’s far more complex than that.”

CISOs should look to foster what McKeown calls metacognition, or enhanced thinking skills and mental agility, to promote more critical thinking and better decision making among their incident response teams, and ideally faster, more effective reactions. “Because of the nature of cyber, because it is so uncertain and so fast, reliance on some things like OODA might well send you down the wrong track or lead you to a wrong point of analysis,” she warns.

“For example, with naturalistic decision making—the decisions that people make just before something kicks off and what the military calls ‘what happens left of bang’—experienced cyber operators will draw conclusions from information and from their intuition and experiences that can start off a whole series of decisions based on wrong things because of the cognitive biases that they have,” McKeown says.

While large, once-a-year, company-wide black swan exercises have value, McKeown argues, security teams should take a more military-like approach and build far more regular “micro-drills” into their work to stay sharp, avoid falling into bad habits or routines, and respond to incidents more effectively. “Because of the way the brain works, you actually need to be exercising these skills more often,” she says. “There’s a thing called skill fade, and anything to do with a process where you need decision-making skills actually degrades very quickly. If you’re not exercising those skills at least once every eight weeks, then they will fade, and you will not be quite as effective. If you’re only doing it once a year, then you’re probably not going to be getting the optimum from the training event itself.”

McKeown suggests that companies hold many smaller events throughout the year in addition to a large exercise. As with the larger simulations, teams should assess and reflect on the data and feedback from these smaller events to learn and improve while keeping skills sharp and preventing bad habits from forming.

Regular incident response drills can challenge ingrained biases

With increasing automation, it’s even more important for incident response teams to be able to challenge their own biases, and potentially those ingrained in the tools and technology they use, to ensure they aren’t taken down blind alleys or sent on wild goose chases when dealing with incidents. “People think that once you’ve got these decision-making skills everything’s great, but if you’re not using them regularly then they’re not as effective,” says McKeown. “The brain is a limited-capacity information processor. When the adrenaline starts flowing it narrows down focus, so it makes it very difficult then to move away from that tunnel vision.”

“If your thinking skills are such that you’re very experienced, that tends you to take a certain path when a familiar problem comes up,” McKeown adds. “Because it’s all automatic, you’re making assumptions about the information that’s incoming, and they can lead you down the wrong path.”

Cybersecurity professionals can fall prey to numerous cognitive biases. These biases can surface across the incident response lifecycle, potentially hindering investigation, response, and the company’s overall security posture. McKeown lists some common ones:

Anchoring is fixing on one piece of information and treating it as correct when it’s not. An incident responder might, for example, decide that an attack is the work of a nation-state even when that might not be the case, which can skew how information is interpreted and how decisions are made later in the response process.

Confirmation bias comes after anchoring on a piece of information. If a responder has anchored on the idea that they are being targeted by a nation-state or a particular actor, they might focus on information that supports their theory and potentially ignore evidence to the contrary.

Availability bias is putting too much focus on examples that come readily to mind and considering them more representative than they truly are. A responder, having read about a particular type of attack, might assume that is happening to them even if that is not the case, potentially slowing down investigation and response. 

Fundamental attribution error is about blaming the person, not the context. An IR example could be blaming a user for clicking on a phishing link rather than examining how the malicious email made it through security controls or why a hijacked account wasn’t flagged for suspicious activity. 

Premature closure is when thinking stops once a diagnosis has been made. In security, this would be simply remediating a particular issue without addressing how or why the incident happened in the first place and fixing the underlying weaknesses in the security posture.

“There is never a set format for a cyber incident,” says Max Vetter, chief cyber officer at Immersive Labs, “so developing the skill for responders to think on their feet is hugely important. Not only does this mean you have a far more agile team when dealing with new threats, but it also allows people to challenge preconceptions when assessing what appears, on the face of it, to be a standard incident. When you are up against an adversary who is trying to be deceptive this is important, as diversionary attacks and false flags are often used.”

To develop metacognition, the ability to reflect on what happened and to understand how incident response teams think and where their strengths and weaknesses lie, CISOs must first take the time to identify their own potential biases and then work regularly to change those habits. “This is not a thing that you can stop as it’s happening, because you have no control over it,” says McKeown. “But if you’re repeating these things on a very regular basis, then you build up new patterns, and that helps you to become more aware and start challenging what you think is reality. It can’t just happen as a one-off; it’s very much training over time.”

How to conduct incident response drills more often

To supplement large-scale events, McKeown suggests taking an almost DevSecOps-like approach to incident response: regular “micro-drills” that tie into the larger events, using data to identify potential issues, then going back to assess what worked and what didn’t and decide what should be done differently next time. She says IR drills every eight weeks are the “ideal gold standard”; stretch beyond every 12 weeks and you are likely to see skills fade and teams lose some of their mental agility.

“For me, it’s about different scenarios. The more scenarios you have presented, the more patterns people start to learn, so the more agile they become with their thinking,” McKeown says. “It should be very tailored, because every organization has different problems, different issues, different skill sets. Again, I think that’s the beauty of it: you could start off with a very small part of one incident and build on it from there, particularly if you’re building multidisciplinary teams who don’t necessarily have that understanding of the complexity of it.”

While the exact nature of the scenarios to drill against will depend on the individual company and the threats it faces, Immersive Labs’ Vetter says ransomware is a common scenario to play against. CISOs should look to the data to identify KPIs or targets the security team may not be hitting and use that information as a starting point. When reviewing the results of drills, they should return to the same data and address why targets might not have been hit.

“The traditional metrics are going to give you indicators of where you are and are not doing well. It’s a case of using those traditional metrics to take that out a little bit further,” says McKeown. “What does that data mean? Where is it pointing? It’s about interrogating the data, taking all of those hard numbers, and asking that ‘why’ question.”