The automated bots are highly successful because they effectively emulate legitimate service providers.

Two-factor authentication (2FA) has been widely adopted by online services over the past several years, and turning it on is probably the best thing users can do for their online account security. Faced with this additional hurdle that prevents them from exploiting stolen passwords, cybercriminals have had to adapt too, coming up with innovative ways to extract one-time authentication codes from users.

According to a new report from cybercrime intelligence firm Intel 471, the latest development in 2FA bypassing involves the use of robocalls with interactive messages that are meant to trick users into handing over their one-time passwords (OTPs) in real time, as attackers are trying to access their accounts. All of this is automated and controlled through Telegram-based bots, much like teams in organizations use Slack bots to automate workflows.

“All the services Intel 471 has observed, which have only been in operation since June, either operate via a Telegram bot or provide support for customers via a Telegram channel,” the researchers said. “In these support channels, users often share their success while using the bot, often walking away with thousands of dollars from victim accounts.”

Social engineering automated by bots

At their core, these are social engineering attacks with a high level of automation. In the past, an attacker would manually call a victim to extract their information, or call the customer support line of a bank or service provider to gain unauthorized access to an account. This has now transitioned to scripted calls performed by bots based on commands given in a Telegram chat. The services seen by Intel 471 have predefined “modes” or scripts to impersonate various well-known banks, as well as online payment services like Google Pay, Apple Pay and PayPal, and mobile carriers.
Since they began looking into this activity, the researchers have seen one service called SMS Buster, which can make calls in both English and French, being used to illegally access accounts at eight different Canada-based banks. Another service, called SMSRanger, claims a success rate of around 80% if the victim answers the call and the attacker has supplied the bot with accurate and up-to-date personal information about the victim. Known as “fullz” in cybercrime circles, these data sets can be acquired from various forums and underground markets.

Bots effectively emulate victims’ service providers

The high success rate is somewhat surprising. Normally, with 2FA or OTP schemes used for account authentication or transaction authorization in the banking space, the user might be contacted by an automated service via a phone call to be given their unique one-time code. These cybercrime services do it in reverse: They contact the victims and ask them to input the OTPs they just received through SMS or some other means from their legitimate service provider.

This should be an unusual request for most users, one that should raise red flags. However, these bots do a good job of masquerading as the victim’s service provider. Most have phone-number-spoofing capabilities, and attackers can specify the phone number they want the bot to use when calling the victim. This will usually be a number associated with the victim’s bank or carrier. If the victims’ phones display a caller ID that they trust and recognize, they’re more likely to comply with the request. In addition, the bot will have personal information about them that the attacker loaded, adding another layer of credibility.

In addition to robocalling, some of these services can also automate attacks via email or SMS and offer phishing panels that target social media accounts like Facebook, Instagram and Snapchat; financial services like PayPal and Venmo; and investment apps like Robinhood and Coinbase.
Cybercriminals pay monthly fees ranging from tens to hundreds of dollars to use the bots, a small price considering that every successful attack can result in the theft of thousands of dollars.

Robust 2FA forms offer more protection

“Overall, the bots show that some forms of two-factor authentication can have their own security risks,” the Intel 471 researchers said. “While SMS- and phone-call-based OTP services are better than nothing, criminals have found ways to socially engineer their way around the safeguards. More robust forms of 2FA—including Time-Based One Time Password (TOTP) codes from authentication apps, push-notification-based codes, or a FIDO security key—provide a greater degree of security than SMS or phone-call-based options.”

Users should be wary of any phone calls where the caller, whether a robot or a human, asks for personal, financial or authentication information. With 2FA being widely deployed for SaaS and other accounts that companies provide to employees, these services represent a risk for organizations as well, not just consumers.
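To illustrate what the authenticator-app codes mentioned above actually are, here is a minimal sketch of TOTP (RFC 6238, built on the HOTP construction of RFC 4226) using only the Python standard library. The secret below is the ASCII test key published in the RFCs, not a real credential; real apps derive the key from a Base32 string in the enrollment QR code. Note that TOTP is just as phishable in real time as SMS codes — its advantage is that no code ever transits the phone network.

```python
import hmac
import hashlib
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    if t is None:
        t = time.time()
    return hotp(key, int(t // step), digits)

# RFC 6238 test vector: ASCII secret, T = 59 s -> counter 1
assert totp(b"12345678901234567890", t=59, digits=8) == "94287082"
```

Because both sides compute the code locally from a shared secret and the clock, there is no SMS or phone call for an attacker to intercept or spoof — though, as the report shows, a victim can still be talked into reading the code out loud.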