mhill, UK Editor

5 social engineering assumptions that are wrong

News Analysis
Jun 24, 2022 | 8 mins
Cybercrime, Social Engineering

Cybercriminals continue to launch creative social engineering attacks to trick users. Meanwhile, social engineering misconceptions are exacerbating the risks of falling victim.

Image: A hand controls a small marionette. Credit: SpiffyJ / Getty Images

Social engineering is involved in the vast majority of cyberattacks, but a new report from Proofpoint has revealed five common social engineering assumptions that are not only wrong but are repeatedly subverted by malicious actors in their attacks.

Commenting on the report’s findings, Sherrod DeGrippo, Proofpoint’s vice president of threat research and detection, stated that the vendor has attempted to debunk faulty assumptions made by organizations and security teams so they can better protect employees against cybercrime. “Despite defenders’ best efforts, cybercriminals continue to defraud, extort and ransom companies for billions of dollars annually. Security-focused decision makers have prioritized bolstering defenses around physical and cloud-based infrastructure, which has led to human beings becoming the most relied upon entry point for compromise. As a result, a wide array of content and techniques continue to be developed to exploit human behaviors and interests.”

Indeed, cybercriminals will go to creative and occasionally unusual lengths to carry out social engineering campaigns, making it more difficult for users to avoid falling victim to them.

Here are five social engineering misconceptions exacerbating attacks, as presented by Proofpoint.

1. Threat actors don’t have conversations with targets

The notion that attackers do not invest time and effort conversing with victims to build rapport is flawed, according to the report. Proofpoint researchers observed multiple threat actors sending benign emails to kickstart conversations last year. “Effective social engineering is about generating feelings within a user that mentally drive them into engaging with content. By sending benign emails with the intent to lure the user into a false sense of security, threat actors lay the groundwork for a relationship to be more easily exploitable,” the report read.

Proofpoint observed multiple business email compromise (BEC), malware distribution, and nation state-aligned advanced persistent threat (APT) campaigns using benign conversations to launch attacks, the latter including activity by threat actors TA453, TA406 and TA499.

2. Legitimate services are safe from social engineering abuse

Users may be more inclined to interact with content if it appears to originate from a source they recognize and trust, but threat actors regularly abuse legitimate services such as cloud storage providers and content distribution networks to host and distribute malware as well as credential harvesting portals, according to Proofpoint. “Threat actors may prefer distributing malware via legitimate services due to their likelihood of bypassing security protections in email compared to malicious documents. Mitigating threats hosted on legitimate services continues to be a difficult vector to defend against as it likely involves implementation of a robust detection stack or policy-based blocking of services which might be business relevant,” the report read. Proofpoint’s campaign-level analysis identified OneDrive as the most frequently abused service by top-tier e-crime actors, followed by Google Drive, Dropbox, Discord, Firebase and SendGrid.
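The report does not describe Proofpoint’s detection logic, but the policy-based approach it alludes to can be illustrated with a minimal sketch: scan message bodies for URLs pointing at the file-sharing and delivery services named above, and route any hits for closer inspection. The domain watchlist, regex and function below are hypothetical examples, not vendor rules.

```python
import re
from urllib.parse import urlparse

# Hypothetical watchlist based on the services named in the report;
# a real deployment would tune this to what the business actually uses.
ABUSED_SERVICE_DOMAINS = {
    "onedrive.live.com",
    "1drv.ms",
    "drive.google.com",
    "dropbox.com",
    "discord.com",
    "cdn.discordapp.com",
    "firebasestorage.googleapis.com",
    "sendgrid.net",
}

URL_RE = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)

def flag_legitimate_service_links(email_body: str) -> list[str]:
    """Return URLs in the message that point at commonly abused services."""
    flagged = []
    for url in URL_RE.findall(email_body):
        host = (urlparse(url).hostname or "").lower()
        if any(host == d or host.endswith("." + d) for d in ABUSED_SERVICE_DOMAINS):
            flagged.append(url)
    return flagged
```

Outright blocking of these domains is rarely practical because the same services are often business relevant, which is exactly the difficulty the report points out; flagging for review is the more realistic posture.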

3. Attackers only use computers, not telephones

There’s a tendency to assume that social engineering attacks are limited to email, but Proofpoint detected an increase in attacks perpetrated by threat actors leveraging a robust ecosystem of call center-based email threats involving human interaction over the telephone. “The emails themselves don’t contain malicious links or attachments, and individuals must proactively call a fake customer service number in the email to engage with the threat actor. Proofpoint observes over 250,000 of these threat types each day.”

The report identified two types of call center threat activity – one using free, legitimate remote assistance software to steal money, and another using malware disguised as a document to compromise a computer (frequently associated with BazaLoader malware, often referred to as BazaCall). “Both attack types are what Proofpoint considers telephone-oriented attack delivery (TOAD),” it added. Victims can lose tens of thousands of dollars to these types of threats, with Proofpoint citing one example of an individual losing almost $50,000 to an attack from a threat actor purporting to be a Norton LifeLock representative.
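Because TOAD lures carry no malicious links or attachments, one way to triage them is to look for the combination the report describes: a billing or subscription theme plus a callback phone number in an otherwise “clean” message. The keyword list, phone-number regex and function name below are illustrative assumptions, not Proofpoint’s detection criteria.

```python
import re

# Loose North American phone-number pattern; illustrative only.
PHONE_RE = re.compile(r"(?:\+?1[\s.-]?)?(?:\(\d{3}\)|\d{3})[\s.-]?\d{3}[\s.-]?\d{4}")

# Hypothetical lure vocabulary; real call-center fraud themes vary widely.
TOAD_KEYWORDS = {"invoice", "subscription", "renewal", "refund", "charged",
                 "call us", "customer service", "support line"}

def looks_like_toad_lure(subject: str, body: str, has_links: bool,
                         has_attachments: bool) -> bool:
    """Heuristic: a message with no links or attachments that still pushes
    the reader to phone a number about a charge is worth a closer look."""
    if has_links or has_attachments:
        return False
    text = f"{subject}\n{body}".lower()
    return bool(PHONE_RE.search(text)) and any(k in text for k in TOAD_KEYWORDS)
```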

4. Replying to existing email conversations is safe

The sense of trust and security surrounding existing email conversations is being exploited by fraudsters via thread or conversation hijacking, Proofpoint stated. “An actor using this method preys on the person’s trust in the existing email conversation. Typically, a recipient is expecting a reply from the sender, and is therefore more inclined to interact with the injected content,” it wrote.

To successfully hijack an existing conversation, threat actors need access to legitimate users’ inboxes, which they can gain in various ways, including phishing, malware attacks, credential lists available on hacking forums, or password spraying techniques. Threat actors can also hijack entire email servers or mailboxes and automatically send replies from threat actor-controlled botnets. “In 2021, Proofpoint observed over 500 campaigns using thread hijacking, associated with 16 different malware families. Major threat actors including TA571, TA577, TA575 and TA542 regularly use thread hijacking in campaigns.”
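One defensive way to reason about thread hijacking, sketched below under the assumption that prior thread participants can be looked up from the mail store, is to flag messages that claim to be replies but arrive from a domain that never appeared earlier in the conversation. The helper is an illustrative heuristic, not a description of Proofpoint’s detections.

```python
from email.message import EmailMessage
from email.utils import parseaddr

def reply_from_unknown_domain(msg: EmailMessage,
                              known_thread_domains: set[str]) -> bool:
    """Flag a message that presents itself as a reply (it carries
    In-Reply-To/References headers) but is sent from a domain that never
    appeared among the thread's prior participants.

    `known_thread_domains` would be built from the mail store; how that is
    done is outside this sketch."""
    is_reply = bool(msg.get("In-Reply-To") or msg.get("References"))
    if not is_reply:
        return False
    sender = parseaddr(msg.get("From", ""))[1].lower()
    sender_domain = sender.rsplit("@", 1)[-1] if "@" in sender else ""
    return bool(sender_domain) and sender_domain not in known_thread_domains
```

This check is deliberately weak on its own, since hijacked threads are often sent from the compromised mailbox itself, but it catches the botnet-driven replies sent from attacker-controlled infrastructure that the report mentions.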

5. Fraudsters only use business-related content as lures

While malicious actors will often target business workers, the assumption that they rely on business-related content as lures is incorrect, Proofpoint said. In fact, threat actors have been significantly capitalizing on current events, news and popular culture to get people to engage with malicious content. The report cited several campaigns from last year that took this approach, including:

  • BazaLoader attacks leveraging Valentine’s Day themes such as flowers and lingerie
  • TA575 distributing the Dridex banking trojan using themes from Netflix show Squid Game targeting users in the U.S.
  • Internal Revenue Service (IRS)-themed campaigns leveraging the idea that the potential victim was owed an additional refund to harvest a variety of personally identifying information (PII)
  • An average of more than six million COVID-19-related threats per day throughout 2021

Businesses must train employees on social engineering tactics, debunk misconceptions

Given both the creativity of social engineering tactics and misconceptions around the methods fraudsters employ, organizations must engage with their workforce to raise awareness of the real threats social engineering poses and shift mindsets that may be vulnerable to exploitation. “The most impactful course of action, for any given organization, is to shift the culture toward a posture where identification of incoming threats is understood as both relevant and necessary. This means encouraging familiarization with the wide array of content threat actors may leverage and imposing few obstacles to more regular flagging of content as potentially malicious,” Proofpoint’s report read.

For Raef Meeuwisse, cybersecurity consultant and author of How to Hack a Human: Cybersecurity for the Mind, employees need to be educated that the most convincing social engineering scams often appear just as authentic as a lot of what they interact with during genuine, day-to-day work – or potentially even more so. “Lapsus$ went as far as sending out real push mobile multi-factor authentication requests via the real security platform to real employees – and many of the rogue requests were approved by the recipients,” he tells CSO.

The best way to empower employees to spot social engineering is to make them alert to any situation that causes sudden panic and urgency to act without delay, Meeuwisse says, adding that if users experience these two symptoms together, 99.99% of the time they are in the middle of being scammed through social engineering. “And of course, employees should be trained to report potential social engineering activity to the incident response group, and if in doubt, to ask for their guidance.”
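Meeuwisse’s “panic plus urgency” rule of thumb can also be expressed as a simple triage heuristic for flagged messages. The word lists below are illustrative assumptions; the point is the pairing of the two signals, not any particular phrasing.

```python
# Illustrative word lists expressing the "panic plus urgency" rule of thumb;
# not a published detection signature.
PANIC_TERMS = {"account suspended", "unauthorized", "security alert",
               "final notice", "legal action", "compromised"}
URGENCY_TERMS = {"immediately", "within 24 hours", "right away", "act now",
                 "urgent", "before it's too late"}

def panic_and_urgency(text: str) -> bool:
    """Return True when a message combines panic-inducing language with
    pressure to act without delay -- the pairing employees are told to treat
    as a likely social engineering attempt."""
    t = text.lower()
    return (any(p in t for p in PANIC_TERMS)
            and any(u in t for u in URGENCY_TERMS))
```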

However, on the subject of debunking social engineering myths, Meeuwisse also advises businesses to recognize that risks do not always come from outside an organization. “What gets forgotten about or completely sidelined are methods to report, check and monitor for people that have intentionally faked their way into a position to exploit an organization,” he says. “This is a much bigger problem than many organizations think because substantial breaches caused by rogue insider actions are rarely revealed to the media, yet statistics flag rogue insiders as a big problem.” If an organization does few to no background checks, has no mechanism for anonymous whistleblowing, or (in two cases he has seen) has a rogue insider in charge of vetting other rogue insiders, he says, then there is a large gap in their social engineering defenses.