How Criminals Are Using Generative AI

There’s a new report on how criminals are using generative AI tools:

Key Takeaways:

  • Adoption rates of AI technologies among criminals lag behind the rates of their industry counterparts because of the evolving nature of cybercrime.
  • Compared to last year, criminals seem to have abandoned any attempt at training real criminal large language models (LLMs). Instead, they are jailbreaking existing ones.
  • We are finally seeing the emergence of actual criminal deepfake services, with some bypassing user verification used in financial services.

Posted on May 9, 2024 at 12:05 PM

Comments

Echo of past arising May 9, 2024 1:03 PM

@Bruce Schneier
@ALL

From the extract

“We are finally seeing the emergence of actual criminal deepfake services, with some bypassing user verification used in financial services.”

Perhaps not surprising when you consider that another recent report, mentioned in the mainstream media, says fraud is currently the most profitable crime.

Though other figures say more traditional crimes such as burglary have not gone down.

At the moment online crime is safer for criminals than other forms of crime, so it’s not surprising that they would take up AI as a tool for crime.

I suspect what may be holding things back is that AI systems are not that easy to access, or to access safely, from the criminal perspective, though this will change.

Also, creating your own LLM system is not cheap and has high electrical power and cooling demands, so it is not yet something for Grandma’s back bedroom or similar. Especially when you consider that high electrical power and water usage have been associated with “illegal herb growing”, so authorities are already on the lookout for it.

This in effect turns the LLM into a sitting duck for authorities to find…

But I do expect this to change if the AI hype bubble does not burst.

I guess one question we should think about is,

How long before an AI can sit in on a Zoom meeting?

Because as finance related crime rises, banks will move towards interactive online “proof of life and ID”.

echo May 9, 2024 1:20 PM

To improve their social engineering tricks. LLMs prove to be particularly well-suited to the domain of social engineering, where they are able to offer a variety of capabilities. Criminals use such technology for crafting scam scripts and scaling up production on phishing campaigns. Benefits include the ability to convey key elements such as a sense of urgency and the ability to translate text in different languages. While seemingly simple, the latter has proven to be one of the most disruptive features for the criminal world, opening new markets that were previously inaccessible to some criminal groups because of language barriers.

And:

To circumvent this, criminals are now offering to take a stolen ID and create a deepfake image to convince the system of a customer’s legitimacy.

Annoying but limited. The current level of technology is convincing for low-bandwidth, automated, and simple task-orientated work. I’m sure much worse is coming one day, but we’re not there yet.

And:

Problems arise for criminals when a deepfake attack targets somebody close to the subject being impersonated, as deepfake videos have not yet reached a level where they can fool people with an intimate knowledge of the impersonated subject. As a consequence, criminals looking for more targeted attacks toward individuals — like in the case of virtual kidnapping scams — prefer audio deepfakes instead. These are more affordable to create, require less data from the subject, and generate more convincing results. Normally, a few seconds of the subject’s voice can suffice, and this type of audio is often publicly available on social media.

I can be chatty, or have behaviours which look like nothing on the surface but pull more information out of another person. Discussion tilt, tone, and emotional microexpressions can mean something, as can non-verbal equivalents, timing, and responses. So can what they don’t say and don’t do. I also have a few tricks, like being busy or kicking the can down the road, or putting on an act of agreeing with them while I’m buying time and they’re giving me more information for free, or just waiting to see what happens with other people as part of that feedback/evaluation loop. Sometimes I’m just me being me. It’s hard to say, really.

I never like urgent. The more urgent someone is the less I like it. What are they really after, and how are they lying to me, and what else is happening I don’t know about?

I know from experience and feedback from other sources that I made the correct decision when it mattered. I didn’t get it right every single time, but you pick up a sense about it. It’s non-linear. You can only get that from experience. My tolerance for something “off” is low.

Let’s see how far an “AI” gets with an in-person meeting.

Welcome to my web said the spider to the fly.

For anything else I would call the cops.

noname May 9, 2024 4:20 PM

Isabelle Bousquette reports:

Deepfake incidents in the fintech sector increased 700% in 2023 from the previous year

Yowza. Will they increase further still? I can’t recall whether these entities have reporting requirements.

Looks like at least one bank is changing its identity verification protocols for setting up an online account. Users will have to take selfies using the bank’s app. The app instructs users to look in different directions, which a generic AI deepfake reportedly may not anticipate.
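A minimal sketch of how such a challenge-response liveness check could work (the function names, challenge set, and stubbed pose estimator are all assumptions for illustration, not the bank’s actual protocol):

```python
import random

# Hypothetical challenge-response liveness check: the server issues a
# random sequence of head poses, and the client must submit frames
# matching each pose in order. A pre-recorded or generic deepfake
# cannot anticipate the randomized sequence.

CHALLENGES = ["look_left", "look_right", "look_up", "look_down"]

def issue_challenges(n: int = 3) -> list[str]:
    """Server side: pick an unpredictable sequence of distinct poses."""
    return random.sample(CHALLENGES, n)

def estimate_pose(frame: dict) -> str:
    """Stand-in for a real head-pose estimator run on a video frame."""
    return frame["pose"]

def verify_liveness(frames: list[dict], challenges: list[str]) -> bool:
    """Accept only if every frame matches its challenge, in order."""
    return len(frames) == len(challenges) and all(
        estimate_pose(f) == c for f, c in zip(frames, challenges)
    )

challenges = issue_challenges()
genuine = [{"pose": c} for c in challenges]         # live user follows prompts
canned = [{"pose": "look_left"}] * len(challenges)  # pre-generated deepfake
print(verify_liveness(genuine, challenges))   # True
print(verify_liveness(canned, challenges))    # False
```

The security comes from the unpredictability of the sequence rather than the poses themselves; a sufficiently capable real-time deepfake could still follow the prompts, so this presumably raises, rather than removes, the attacker’s cost.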

The financial institutions will own the costs of fraud, right? It will be an interesting space to watch for new developments.

https://www.wsj.com/articles/deepfakes-are-coming-for-the-financial-sector-0c72d1e5#

ratwithahat May 9, 2024 5:28 PM

@noname
I’m also pretty worried about this. However, I don’t have a lot of confidence in banks. I’m fairly certain many are still using voice ID despite AI voice cloning.

Unrelated: I wonder about the use of AI by nation-backed hackers. Unlike criminals, nation-backed hackers probably have the resources to create their own LLMs, but I imagine jailbreaking would still be much more convenient. Unfortunately, it’s probably not possible to gather data on this.

Mr. Peed Off May 9, 2024 5:39 PM

The illiterate thugs may find the learning curve and hardware requirements beyond reach, although I would not underestimate the more motivated ones’ ability to overcome adversity. I question whether cloud-based AI providers have the ability or motivation to prevent or even discourage fraudulent behaviors.

echo May 9, 2024 6:27 PM

I question whether cloud-based AI providers have the ability or motivation to prevent or even discourage fraudulent behaviors.

I wouldn’t be surprised if they do know and don’t care. See also tobacco, climate change, Brexit, social media, etcetera. AI needs regulating like a biohazard. That, or people need to rediscover physical retail banks.

Madame doesn’t use online banking. If I wanted to I would need to visit my bank so it can be activated. I am in no hurry.

Bob Paddock May 10, 2024 8:26 AM

@Echo of past arising

“How long before an AI can sit in on a Zoom meeting?”

Places like Otter dot ai already can.

JonKnowsNothing May 10, 2024 11:33 AM

@All

AI systems are completely dependent on new sources of clean inputs. Once the AI techbros have scraped all the Wikips, all the library books and e-content containing historical information, they are dead in the water without a new pipeline of clean data.

In order to avoid model collapse (1) the systems need new data sources.

A current disagreement over access to clean data is at Stack Overflow, which is monetizing years and years of information exchange on its site by selling the content to OpenAI. (2) People are not so keen to have their personal knowledge base transferred to OpenAI. Dealing with people is not the same as scanning the books in a library.

The likelihood of AI Model Collapse increases as the source of new clean inputs declines.

The question isn’t so much whether it will collapse but how soon.

AI systems use recursion to re-feed data into a model. If you have lots of clean data, the model becomes more robust aka scales up. The key is clean data.

If you train AI only on previous AI outputs, aka synthetic data / HAIL, the model collapses quickly.

For AI systems to last, they need clean data and not too much synthetic data.

The amount of either needed to keep the model from collapsing varies.

It’s a bit like making dinner.

If you start with fresh ingredients the dinner options are the best and tastiest. (clean data)

You can do a lot with the leftovers. (synthetic data / HAIL)

If you add more fresh ingredients you can get another dinner from the mix. (clean data + synthetic data) This creates another set of leftovers. (combined synthetic data)

Over time, the value of the leftovers (combined synthetic data) declines.

When the leftovers hit the garbage bin, that is when the model collapses.

While getting clean data into an AI system improves the scaling up of the model, it does not prevent the model from inventing false HAIL responses. Recursively feeding HAIL responses back into a model causes the AI system to scale down.
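The recursion argument can be illustrated with a toy statistical model (a sketch only: the Gaussian, sample size, and generation count are invented for illustration, not taken from the cited article). Fit a model to data, sample “synthetic” data from the fit, refit on those samples, and repeat; with no fresh clean data, estimation error compounds each generation and the fitted distribution degenerates:

```python
import random
import statistics

random.seed(1)
N = 100  # samples per "generation"

def fit(data):
    """'Train': estimate mean and standard deviation from the data."""
    return statistics.fmean(data), statistics.pstdev(data)

def generate(mu, sigma, n):
    """'Generate': draw synthetic data from the fitted model."""
    return [random.gauss(mu, sigma) for _ in range(n)]

mu, sigma = fit(generate(0.0, 1.0, N))       # generation 0: clean data
for gen in range(1, 201):
    mu, sigma = fit(generate(mu, sigma, N))  # train only on model output
    if gen % 50 == 0:
        print(f"generation {gen}: stdev {sigma:.3f}")
# With no clean inputs the stdev tends to drift toward 0 and the "model"
# collapses; mixing fresh clean data back in each generation slows or
# halts the decay.
```

This is the same dynamic described in the article cited below: sampling from your own estimate loses a little of the distribution’s tails each generation, and without clean data there is nothing to put them back.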

===

1)

https://www.theregister.com/2024/05/09/ai_model_collapse

  • Training AI with more AI: Is model collapse inevitable?

AI model collapse / synthetic data [HAIL] / recursion

2)

https://www.theregister.com/2024/05/09/stack_overflow_banning_users_who

  • Stack Overflow simply bans folks who don’t want their advice used to train AI

ResearcherZero May 14, 2024 3:28 AM

There is always more data. Whopping great archives of it that are being added to regularly.

https://fortune.com/2024/05/07/microsoft-ai-for-spies-divorced-from-internet-top-secret-intelligence/

KeithB May 15, 2024 3:18 PM

They are already using AI to mimic voices taken from social media posts to perform the “Hey, Grandma, it’s me, your grandchild. I’m in prison in Europe; I need money right away” scams.

One person in the story I heard realized it was not her grandson because he called her “Grandma”; usually, he calls her “Boobie”.
