By Chester Avey
Wed | Nov 8, 2023 | 10:33 AM PST

Social engineering attacks have long been a threat to businesses, comprising roughly 98% of cyberattacks worldwide. The average business faces more than 700 such attacks every year. Whether it manifests as a sophisticated phishing email or as a calculated series of conversations between employees and seemingly innocuous or "legitimate" parties with ulterior motives, a social engineering attack can have dire consequences.

The premise of most social engineering attacks is the same: perpetrators attempt to manipulate and deceive users into divulging confidential or sensitive information, or into performing actions that compromise an organization's security. Because these attacks are psychologically focused and methodical, they can be hard to spot.

While organizations can invest in sophisticated cybersecurity and threat detection solutions to flag anomalous network and system activity, a socially engineered conversation between a malicious actor and an untrained employee can easily slip under the radar.

What's more, the rise of artificial intelligence (AI) has made social engineering methods more complex, covert, and difficult to detect. The speed and adaptability of AI enable attackers to automate, scale up, and fine-tune social engineering attacks, expanding organizations' attack surfaces without their knowledge. So how exactly has AI compounded the issue of social engineering as a cyber threat, and what can businesses do about the evolving landscape?

The growing threat of social engineering

Social engineering, fundamentally, refers to psychological manipulation tactics that attackers use to deceive victims. It's a game of trust, emotion, and perception, with traits like curiosity, fear, greed, or a desire to help often exploited to lower an individual's guard, and by extension, that of the organization.

By appealing to these traits, attackers can convince users to unwittingly provide them with access to critical systems or highly sensitive information. Even the most security-aware and technologically apt teams can fall victim to a sophisticated attack like this. This is why organizations have sought to train their in-house teams, as well as outsourced contractors in critical areas like DevOps or project management, in proper cyber awareness.

Social engineering forms the basis of a plethora of cyber threats and common attack methods, including (but not limited to):

  • Phishing – emails, messages, or websites impersonating trusted sources to steal login credentials or financial information

  • Baiting – leaving infected physical media like USB drives to be found and inserted by victims

  • Pretexting – inventing scenarios to trick victims into divulging sensitive data

  • Tailgating – following authorized people through secured doors

Digital transformation, the rise of connectivity among geographically dispersed teams, and increased real-time information sharing have made more individuals susceptible to social engineering. The scale of the problem is stark: there were 493 million ransomware attacks in 2022 alone, and 19% of all data breaches resulted from stolen or compromised login credentials. Much of this activity traces back to social engineering tactics, and it shows no sign of slowing down.

Given that businesses are facing more frequent and damaging social engineering threats, it's no wonder that many organizations are seeking to implement stronger, enterprise-grade cybersecurity solutions and invest in robust awareness training. However, it's imperative to recognize that attackers are beginning to weaponize social engineering with the help of AI, which could present an even greater set of challenges.

How AI is elevating social engineering risks

The combination of AI's deep-learning algorithms and advanced processing capabilities has given malicious actors the ability to develop more complex attacks. Strategies that were previously human-led are now increasingly automated, and thus better able to evade many baseline cybersecurity controls.

As AI capabilities grow, attackers are applying them to bolster their attack efforts in many ways:

  • Hyper-personalized phishing – AI can mine social media to craft spear phishing emails customized with familiar names, logos, and messaging for each target. AI-powered bots masquerading as legitimate users also infest social media platforms, using convincing language and deepfakes to deceive users.

  • Natural language generation – AI can generate coherent, human-like writing and dialogue when prompted to sound natural. This allows attackers to craft persuasive social engineering content at scale and push past victims' initial caution.

  • Emotional manipulation – By analyzing the target's digital footprint, AI can identify communication styles, emotional triggers, and vulnerabilities specific to each victim. This makes deception tactics more effective.

  • Detection evasion – AI can test and refine social engineering techniques to avoid raising red flags in security tools and identify blind spots.

  • Automated reconnaissance – AI can quickly gather intelligence on targets by scraping data sources like social media, marketing sites, and public records. This information gets weaponized to enhance social engineering, perpetuating misinformation to appeal to specific audiences and blurring the line between reality and fabricated data.

Put simply, the emergence of AI tools has given attackers more ammunition to craft tailored, context-aware social engineering tactics at scale. They have made it faster, easier, and cheaper for bad actors to execute targeted campaigns. In turn, this has left organizations and individuals far behind in the race to secure defenses appropriately.

How does AI-powered social engineering affect businesses?

With AI amplifying social engineering threats, businesses' attack surfaces grow ever larger. If an organization is already susceptible to cyberattacks like data breaches, DDoS (distributed denial-of-service) attacks, and malware, AI will only compound those problems.

Highly persuasive phishing emails, aided by convincing AI-generated text, can trick employees into clicking malicious links, downloading malware disguised as legitimate files, or entering credentials into fake login or landing pages. As a result, accounts, networks, and data are more easily compromised.

AI-generated deepfakes like videos, images, and voice recordings allow attackers to impersonate executives and recognizable figures to convey misleading information, manipulate employees, engage in targeted blackmail, or orchestrate disruption.

By analyzing large datasets, AI can tailor disinformation to appeal to specific audiences, eroding customer trust and tarnishing organizations' public image. Traditional cyberattacks already pose a reputational threat, particularly if customer or stakeholder information is leaked, but AI makes this problem considerably worse.

AI's innate ability to recognize patterns can unveil previously overlooked system vulnerabilities. Once attackers gain a foothold, AI tools can stealthily navigate networks, aggregate valuable data, and exfiltrate it, having covertly bypassed initial security detection tools.

These are just some of the ramifications that can affect businesses. Ongoing vigilance and adaptation of incumbent defenses will prove instrumental in managing and overcoming these evolving risks.

Mitigating AI social engineering risks

To bolster resilience, organizations should update policies, processes, and technology safeguards.

Training employees to recognize social engineering methods, including those enabled by AI, will prime them against emerging threats. Simulated phishing campaigns and breach exercises can then gauge their actual resilience, helping them hone their skills and vigilance.
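
As a rough illustration of how such exercises are typically measured, the sketch below summarizes a simulated phishing campaign's results. The CSV file name and column names (clicked_link, reported_email) are hypothetical; real simulation platforms export their own schemas.

```python
# Minimal sketch: summarizing results of a simulated phishing campaign.
# Assumes a hypothetical CSV export ("phish_sim_results.csv") with one
# row per employee and the columns: email, clicked_link, reported_email.
import csv

def summarize_campaign(path: str) -> None:
    total = clicked = reported = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            clicked += row["clicked_link"].lower() == "true"
            reported += row["reported_email"].lower() == "true"
    if total == 0:
        print("No results to summarize.")
        return
    print(f"Employees tested: {total}")
    print(f"Click rate:  {clicked / total:.1%}")   # lower is better
    print(f"Report rate: {reported / total:.1%}")  # higher is better

summarize_campaign("phish_sim_results.csv")
```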

Embrace multi-factor authentication (MFA) as a baseline defense. Require additional credentials beyond usernames and passwords for all users, regardless of seniority, and ensure that any shared logins are validated with OTPs (one-time passwords), biometrics, or security codes. Consider deploying enterprise-grade password management and generation tools, with minimum-strength criteria and no-reuse policies, to bolster account safety from the outset.
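
As a concrete illustration of the OTP piece, here is a minimal sketch of the TOTP (time-based one-time password) flow used by many authenticator apps, written with the open-source pyotp library. The user name and issuer are placeholders, and a real deployment would store secrets encrypted server-side and rate-limit verification attempts.

```python
# Minimal TOTP (time-based one-time password) sketch using pyotp.
# Illustrates the second factor only; secret storage, encryption, and
# rate limiting are out of scope for this sketch.
import pyotp

# Generated once at enrollment and shared with the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Typically encoded as a QR code so the user can enroll their app.
# (Placeholder user and issuer names.)
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# At login, the user submits the 6-digit code from their app.
submitted_code = totp.now()  # stand-in for real user input in this sketch
print("MFA passed" if totp.verify(submitted_code) else "MFA failed")
```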

Exercise heightened email vigilance and approach all external communications with caution. Implement DKIM, SPF, and DMARC to authenticate legitimate email, and carefully review any anomalies that slip through. Block risky file downloads that originate outside the organization's known servers and IP addresses.
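
To make the DNS side of this tangible, the sketch below checks whether a domain publishes SPF and DMARC records, using the dnspython library. The domain is a placeholder, and this only confirms that the records exist; actual policy evaluation happens per message at the mail gateway.

```python
# Minimal sketch: check whether a domain publishes SPF and DMARC records.
# Requires the dnspython package (pip install dnspython).
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return all TXT record strings for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(r.strings).decode() for r in answers]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"  # placeholder domain
spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
dmarc = get_txt_records(f"_dmarc.{domain}")
print("SPF:  ", spf or "none published")
print("DMARC:", dmarc or "none published")
```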

Perhaps most importantly, organizations should use AI as a defense mechanism to counter its malicious use. Businesses should look at internal applications of AI that offer predictive, proactive threat detection and neutralization, as this is arguably the only way to stay ahead of bad actors abusing AI.
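
As a loose illustration of the idea rather than a production defense, the following sketch trains a toy phishing-text classifier with scikit-learn. The sample messages and labels are invented for demonstration; a real system would need thousands of labeled messages and continuous retraining as attacker language shifts.

```python
# Minimal sketch: a text classifier that flags phishing-style emails,
# illustrating "AI as a defense mechanism." The toy dataset below is
# invented purely for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice is overdue, click here to pay immediately",
    "Reminder: team standup moved to 10am tomorrow",
    "Quarterly report attached for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

suspect = "Final warning: confirm your password to avoid account closure"
print("Phishing probability:", model.predict_proba([suspect])[0][1])
```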

With a multi-layered approach to cyber resilience, companies can navigate the complex digital landscape with increased confidence. AI-enabled social engineering threats are likely to grow in frequency and sophistication, so organizations must take proactive measures and be adaptive to ensure sufficient, robust protection.
