Applying Heuristics in Cybersecurity

Category: Risk Modeling, Quantitative Risk


“What are heuristics?”

Heuristics are simplified problem-solving procedures, mental shortcuts, or rules of thumb. They may be implicit habits of thought (e.g. avoid fire), explicit decision trees written down in a document (e.g. “ePHI means high risk”), or decision rules encoded into algorithms, like those used in malware and virus detection software (e.g. “hooking keyboard drivers = malware”).

Judgment and decision-making psychologists Daniel Kahneman and Amos Tversky suggested that heuristics are inferior to proper probability calculus and formal logic. Psychologist Gerd Gigerenzer and colleagues found that while people can certainly choose the wrong heuristic for a decision, when they choose the right one the resulting decision can be more accurate than probability calculus and formal logic – and require less effort to boot.

You can learn more about Gerd Gigerenzer and follow his work on scite.ai.

“Got it. So what do heuristics look like in cybersecurity?”

When we ask an expert to provide a risk estimate – for example, “How much would a data breach involving this server cost us?” – they are going to do something in their head before giving us an answer. Some experts avoid heuristics and simply repeat a probability or dollar amount they saw in an industry report (e.g. “1 million dollars”), while others will try a heuristic like counting the number of confidential files on a server and multiplying that count by a “cost per file breached” dollar amount (sketched further below). Consider the following:

“I hear about ransomware all the time. That means every ransomware risk in my assessment report should get the highest probability of occurring.”

This mental shortcut is called the recognition heuristic. Kahneman and colleagues consider it a bad habit, but instead of dismissing it entirely, Gigerenzer and colleagues ask: “Is the recognition heuristic the right tool for the job?” When it isn’t, it becomes a source of availability bias – the irrational use of immediately available information. It is considered irrational because the person relies on that information simply because it is easy to recall, not because it reflects the objective state of reality.

Data analysts aren’t immune to the ill effects of the recognition heuristic either. Availability bias can distort formal data analysis just as much as the informal mental meanderings we rely on most of the time. If an analyst wrangles whatever data was easiest to acquire, rather than the dataset with the highest quality or best coverage, they are committing the analytic sin of convenience sampling. It is like judging a whole box of apples by the ones on top instead of digging in and discovering that the apples underneath are rotten.
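
To make the convenience-sampling trap concrete, here is a minimal Python sketch with entirely made-up figures: it compares the mean breach cost computed from a full incident history against the mean computed only from the incidents that happened to be easiest to pull.

    # Minimal sketch: convenience sampling vs. using the full dataset.
    # All figures are made up for illustration.
    from statistics import mean

    # Full incident record: (cost_in_dollars, easy_to_retrieve)
    incidents = [
        (1_200_000, False),  # older incidents, buried in an archived ticket system
        (950_000, False),
        (2_400_000, False),
        (80_000, True),      # recent, small incidents sitting in the current dashboard
        (60_000, True),
        (110_000, True),
    ]

    full_sample = [cost for cost, _ in incidents]
    convenient_sample = [cost for cost, easy in incidents if easy]

    print(f"Mean cost, all incidents:     ${mean(full_sample):,.0f}")
    print(f"Mean cost, convenient sample: ${mean(convenient_sample):,.0f}")

In this toy history the convenient records happen to be the recent, small incidents, so the convenient estimate comes out far lower than the estimate from the full record.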

Gigerenzer and colleagues point out that sometimes the recognition heuristic is still the right tool for the job – in fact, we can even be led astray by trying more sophisticated methods. Knowing which heuristic to use isn’t just a matter of getting the best “bang for your buck”: skipping the right heuristic can mean spending a lot of money on a wholly inaccurate analysis.
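
To make the earlier “count the confidential files and multiply by a cost per file breached” shortcut concrete, here is a minimal sketch. Both inputs are placeholders rather than benchmarks; you would substitute figures you actually trust.

    # Minimal sketch of the "files on the server x cost per file breached" heuristic.
    # Both inputs are illustrative placeholders, not benchmarks.
    confidential_files_on_server = 25_000    # e.g. from an inventory or DLP scan
    assumed_cost_per_file_breached = 150.0   # dollars; pick your own figure

    estimated_breach_cost = confidential_files_on_server * assumed_cost_per_file_breached
    print(f"Heuristic breach cost estimate: ${estimated_breach_cost:,.0f}")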

“So when should I avoid heuristics in cybersecurity?”

Some areas of expertise lend themselves to building intuition. Unfortunately, cybersecurity may not be one of them. One way to build intuition is to repeatedly and rapidly learn from the failed use of heuristics. But this method only works when you can try a heuristic and see a clear success or failure.

For example, touching a hot surface causes immediate pain and gives instant feedback, but in cybersecurity we’re typically faced with rare, high impact events with long delays between attacks (e.g. a cyber incident/data breach) and loss events (e.g. discovering the hack, recovering, reporting, litigation, fines). This results in a suboptimal learning environment. A data breach is a relatively uncommon experience in the career of a cybersecurity professional and organizational structures typically create long distances between people responsible for noticing the signs of an attack (e.g. cybersecurity analysts) and those accountable for the resulting losses (e.g. executives).

Does the general public have an intuitive understanding of what cybersecurity threats warrant the most attention? What about news media organizations? What about scientists? Check out this ACT post where we compared cybersecurity breach perceptions between those groups based on publicly available data.


“When is a heuristic the right tool for the job?”

You can and should use the recognition heuristic once you’ve validated it, or if the cost of being wrong is sufficiently low.

Validated cybersecurity heuristics

If a credible source discovered a heuristic that generalizes to your institution, then you should consider using it. If you have the historical data (or memory), apply the heuristic to your past and see whether it would have helped your decision-making.
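
One simple way to run that check is a small backtest. The sketch below is hypothetical – the rule of thumb, the fields, and the incident history are all placeholders – but it shows the shape of the exercise: record what the heuristic would have predicted for each past case and compare it with what actually happened.

    # Minimal backtest sketch: would this heuristic have helped on past incidents?
    # The rule of thumb, fields, and history below are hypothetical placeholders.

    def heuristic_predicts_high_risk(server):
        # Example rule of thumb: flag servers holding ePHI or >10k confidential files.
        return server["holds_ephi"] or server["confidential_files"] > 10_000

    history = [
        {"name": "srv-01", "holds_ephi": True,  "confidential_files": 2_000,  "had_costly_incident": True},
        {"name": "srv-02", "holds_ephi": False, "confidential_files": 50_000, "had_costly_incident": True},
        {"name": "srv-03", "holds_ephi": False, "confidential_files": 300,    "had_costly_incident": False},
        {"name": "srv-04", "holds_ephi": False, "confidential_files": 12_000, "had_costly_incident": False},
    ]

    hits = sum(heuristic_predicts_high_risk(s) == s["had_costly_incident"] for s in history)
    print(f"Heuristic agreed with history on {hits} of {len(history)} servers")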

High validity environment heuristics

If you’ve worked in an immediate feedback environment where your hunches were put to the test, you may have a collection of valid heuristics that make up valuable intuition.

It is also important to acknowledge that validity can shift. In cybersecurity, our indicators of compromise and other signals change constantly. Hackers have their own heuristics, like:

“If my code contains 1337, then my malware will get caught by antivirus software, so I won’t use that text.”

It is important to keep an eye on whether a heuristic has become obsolete – or is even being leveraged as part of an attack. Many users still carry the heuristic “If an email has my boss’s name in the FROM field, then I will trust it immediately,” which attackers exploit by putting trusted names and words in phishing emails.
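
As a toy illustration of why that heuristic fails, the sketch below compares the naive “trust the display name” rule with a slightly stronger check on the sending domain. The names and addresses are made up.

    # Minimal sketch: "boss's name in FROM means trustworthy" vs. also checking
    # the sending domain. Names and addresses are made up.

    def naive_trust(display_name):
        return display_name == "Alex Smith (CEO)"

    def stronger_check(display_name, from_address):
        return display_name == "Alex Smith (CEO)" and from_address.endswith("@example.com")

    phish = {"display_name": "Alex Smith (CEO)", "from_address": "alex.smith@examp1e-mail.biz"}

    print("Naive heuristic trusts it:", naive_trust(phish["display_name"]))
    print("Domain check trusts it:   ", stronger_check(phish["display_name"], phish["from_address"]))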

Low cost-of-being-wrong heuristics

If you’re considering a heuristic to help you make a decision and the potential cost of being mistaken is low, it may be worth taking the risk. These “low stakes” applications of heuristics are learning opportunities and a way to test your hypothesis. However, it is critical to define up front what you would expect to see if your heuristic is correct, what you would expect to see if it is incorrect, and what other explanations might account for the outcome you observe.

Low cost for me vs us vs them

Computers and people are part of larger complex systems and it can be difficult to see or remember how our decisions impact others. What is low risk and low cost to me may be high cost and risk to others.

Any one user on your network, including security professionals, probably thinks “What are the odds that I’ll be the cause of a data breach anyway?” and clicks links and opens attachments accordingly. And they are not wrong to think that way – the odds for any single user are quite low. The problem is that all of those low odds add up under the umbrella of the institution, which has to pay the bill if any one user is unlucky enough to get phished or to bring malware onto the network.
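
The arithmetic behind “low odds add up” is simple. Assuming, purely for illustration, a 1% annual chance that any given user falls for a phish and independence between users, the chance that at least one person in the organization does grows quickly with headcount:

    # Minimal sketch: small per-user odds still add up across an organization.
    # The 1% annual per-user figure is illustrative, and independence is assumed.
    per_user_probability = 0.01  # chance a given user gets successfully phished this year

    for headcount in (10, 100, 1_000, 5_000):
        p_at_least_one = 1 - (1 - per_user_probability) ** headcount
        print(f"{headcount:>5} users -> {p_at_least_one:.1%} chance of at least one successful phish")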

Another example is how organizations perform risk assessments that speak only to the losses realized by the institution. It is a rarity to see risk models that attempt to quantify how much a data breach impacts the lives of the individuals whose data was exposed. Some of that cost is captured by compliance fines scaled to the number of people affected, and by damages settled through litigation, but at best this covers only a small portion of the tangible costs to the individual. That few months of identity theft protection solved all your problems, right?

“That makes sense. What can I do with this information?”

Put it to the test! Ready to try out your heuristic intuition? Download our free quantitative cybersecurity assessment tool and get started!

 
