Threats of Machine-Generated Text

With the release of ChatGPT, I’ve read many random articles about this or that threat from the technology. This paper is a good survey of the field: what the threats are, how we might detect machine-generated text, directions for future research. It’s a solid grounding amongst all of the hype.

Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods

Abstract: Advances in natural language generation (NLG) have resulted in machine generated text that is increasingly difficult to distinguish from human authored text. Powerful open-source models are freely available, and user-friendly tools democratizing access to generative models are proliferating. The great potential of state-of-the-art NLG systems is tempered by the multitude of avenues for abuse. Detection of machine generated text is a key countermeasure for reducing abuse of NLG models, with significant technical challenges and numerous open problems. We provide a survey that includes both 1) an extensive analysis of threat models posed by contemporary NLG systems, and 2) the most complete review of machine generated text detection methods to date. This survey places machine generated text within its cybersecurity and social context, and provides strong guidance for future work addressing the most critical threat models, and ensuring detection systems themselves demonstrate trustworthiness through fairness, robustness, and accountability.

Posted on January 13, 2023 at 7:13 AM • 33 Comments

Comments

echo January 13, 2023 8:09 AM

Section three (Threat Models) is comprehensive on the surface but completely ignores the political, administrative, and social domains, which are the biggest threat. Really, within the human rights and governance domain the documented threats are quite trivial and easily countered. They rarely if ever get through formal processes, including formal evidence gathering and evaluation. Some very, very sneaky people have tried it on, but they have been rooted out and gotten rid of. The paper is correct to indicate the threats are non-zero, but I feel they are overstated.

Communities impacted by targeting often form their own informal networks, including publishing information and links to good-quality opinion and data, and in some cases know each other personally both online and offline. It’s extremely hard (read: effectively impossible) for a bad actor to penetrate these networks. Various tools, including the abuse platforms themselves as well as specialist tools developed by the community, operate silently in the background. I use these tools myself (although I don’t rely on them) and know from experience they are extremely effective. If a red flag pops up in these tools, there is currently a 90% chance I’ve already flagged them myself. Yes, someone could try to pollute these tools, but reports are scrutinised manually by people who are expert in these domains and know what to look for even when a bad actor is skirting the line.

Any expert in the human rights domain has enough formal and informal knowledge to tell at a glance where a bad actor is pushing it, no matter how cleverly they try to smarm their way past. If in doubt, some digging into the history and context will pull up anything questionable to the expert eye. When your life depends on it you learn very fast…

Politicians’ agendas, media greed, and shady lobbyists with extremely deep pockets are the real threat. Over 90% of online hostile activity flows from this, is enabled by this, or is encouraged by this. Of this, 90% comes from a very, very small number of persistent bad actors who overwhelm systems. Casual threats are more of an annoyance than anything else.

Social media is like unregulated money markets. There’s no effective standard with the big platforms which monopolise attention. A handful of bad actors can be concentrated by algorithms and move like a mob. There’s nothing new in this.

A hate campaign pushed by an overwhelming number of emails or letters, no matter how carefully tailored, doesn’t have the effect the authors of the report think it does.

The technology is a red herring. It’s suggesting more gold plating on top of more gold plating. It’s a good grift for those selling hardware and software and “security”, and it doesn’t address anything fundamental or anything that matters.

As for public trust in AI being diminished by bad behaviour, I can assure readers from personal experience that catcalling, sexist remarks, cleverly disguised misogyny, gaslighting, and even unsolicited d*ck pics are nothing new. I haven’t written off the entire human race because of this.

JL Sardinas January 13, 2023 8:17 AM

Well… we have to look at the bright side. Machine-generated text will probably be better than most of the scripts Hollywood is punishing us with lately… Let’s be optimistic 🙂

Winter January 13, 2023 8:31 AM

@echo

A hate campaign pushed by an overwhelming number of emails or letters, no matter how carefully tailored, doesn’t have the effect the authors of the report think it does.

The attack model that should be taken seriously is Putin’s Firehose of Falsehoods.[1]

The aim is not to push a certain false narrative, but to push so many falsehoods that people are unable to find any truths anymore. Every source becomes suspect.

I see the prospect of not being able to find a real human to converse with in the sea of credible bots as the real threat model.

[1] https://en.m.wikipedia.org/wiki/Firehose_of_falsehood

Bruce Grembowski January 13, 2023 9:44 AM

I just gave ChatGPT this prompt: Abstract for the paper, “Threats of Machine-Generated Text”

Here is the result:
This paper discusses the potential dangers and ethical concerns associated with the use of machine-generated text, also known as natural language generation (NLG). The authors argue that as NLG technology improves, it becomes increasingly difficult to distinguish between machine-generated and human-written text, raising the risk of misinformation and manipulation. They also note that NLG systems can perpetuate and amplify biases, particularly in areas such as news and political communication. The paper suggests that there is a need for increased transparency and accountability in the use of NLG, as well as the development of methods to detect and mitigate its negative impacts.

echo January 13, 2023 9:53 AM

@Winter

This is why I suggest putting people and the usual well-tested formal methods and genuine communities first. Putin and his ilk, and even any careerist wannabe peddling Gish gallop, don’t survive ten seconds’ scrutiny. Apply this due diligence to every pinch point in any domain: politics, the broad spectrum of administration, law, academia, the media both mainstream and alternative. Roll it out across all levels, with “best practice” being more than just lip service and kicking the can down the road.

I know of at least one person in the human rights community who knows more than the usual job-title experts and is in the final leg of getting their PhD to prove it. Their PhD work doesn’t tell anyone anything they didn’t know or suspect already, but it’s useful insofar as it’s formally documented and evidenced in one place. I remember seeing academic papers from a decade ago (and likely a decade or more older still back then) which set my radar twitching, and this work digs behind the veil and provides evidence. That’s just one person, and I’m thinking of just one single data point.

I know of bad actors who tried the academic route and it backfired on them.

People who tried to wriggle their way into UN committees on the back of academic “cred” have been busted.

Putin doesn’t bother me one bit. When one blow-dried CNN desk jockey scratched their head at Putin and an ex-NSA security analyst laughed at Putin, it’s an indicator he’s getting nowhere. In fact, only last night I watched a YouTube video by one of the go-to academics for opinion on Russia; referring to the same topic, he pointed out Putin was bullshitting. As for those headcases Bannon and toad-face Farage, to name but two professional attention-seeking grifters…

The “firehose” only works if you let it. That’s why editorial policies (including human rights criteria, not loophole-dodging nonsense), decent editors, and actual real journalism matter. Regulation and fitness to own an outlet are a thing too. Recruitment matters as well: unless there’s a voice in the meeting, nod-along complacency can have too much influence. See also employment policies.

Winter January 13, 2023 11:02 AM

@echo

Putin and his ilk, and even any careerist wannabe peddling Gish gallop, don’t survive ten seconds’ scrutiny.

Scrutiny by you, maybe. Not by the less educated, as has been shown time and again. Look at how many people still believe Putin’s tales about who shot down flight MH17.

anon January 13, 2023 11:14 AM

@echo “…actual real journalism matter.”

Wishful thinking. Real journalism is dead. It is as biased as anything else. Your idea of due diligence and best practices is just your bias. That is one nail in the coffin. Social Media is another. AI is another. RIP.

Winter January 13, 2023 11:18 AM

@echo

@anon wrote

Wishful thinking. Real journalism is dead. It is as biased as anything else. Your idea of due diligence and best practices is just your bias. That is one nail in the coffin. Social Media is another. AI is another. RIP.

This is a very good example of Putin’s Firehose of Falsehoods. This theme is sung by all the fake news producers as a counterargument to those who try to seek some modicum of verified facts.

The message is: If there is only bias, my bias is as good as yours.

anon January 13, 2023 11:19 AM

@Winter

“Not by the less educated, as has been shown time and again. Look at how many people still believe Putin’s tales about who shot down flight MH17.”

And look at how many of the more educated still believe the vaccine protects them from covid or that Trump colluded with Russia. Your ignorant elitism is showing.

Winter January 13, 2023 12:05 PM

@echo
Re: More examples of Firehose of Falsehoods

It is clear how this works. We did see this during the pandemic here too.

echo January 13, 2023 12:33 PM

@Winter

Scrutiny by you, maybe. Not by the less educated, as has been shown time and again. Look at how many people still believe Putin’s tales about who shot down flight MH17.

True. The thing is, all the not-very-bright journos et al. get themselves red-flagged too. Bad editors and bad media owners have a grip on the industry. Loopholes in regulation allow them to get away with it if they phrase things just so.

I watched two recap videos on the Ukraine war by two different outlets. One, I think, was by an ex-diplomat; the other was a bigwig at a respectable think tank. My personal view was both got a little caught up in their excitement, to the point where a nudge or two more and they would have been ego-polishing Putin and scaremongering the West into defeatism. Some of their opinion didn’t square with other expert and on-the-ground sources I follow. It would be easy, I suppose, for a non-expert who wasn’t following close-to-the-source updates to form an impression and be emotionally swayed or sucked in, even if only a touch.

I have seen more overt ego-polishing and doom-mongering by people who professionally should know better, which was not supported by the facts either historically or as new data arrived.

Emotional responses and emotional manipulation can often be overlooked at any stage of the information chain.

I had an online meeting with someone this year. It should have been about building a lobbying and policy platform, but I picked up that something else was going on. Now, instead of putting work into creating this, they chummed up to certain names and are off doing the television studio circuit along with another known person, in what might best be described as a pincer movement. They’re one-trick ponies but good at what they do, and they aren’t fazed by the flak they get. This is intensely annoying, but then targeted action at a single point can have an effect, and that’s the main thing. They’re good with the facts, but there’s really a perception and emotional battle playing out.

It’s a few steps indirect but there’s also legal manoeuvring happening and it’s a small world. The people I mentioned know people who know people who are quietly peeling away at the armour of one high profile bad actor out there.

The thing is this action over here and that action over there don’t look connected but they are.

There is a Putin angle lurking away but Putin isn’t really a player in this space. I cannot think of a single measurable impact he’s had. At best he’s bandwagoning known bad actors and making a fool of himself. He only makes domestic inroads because of his authoritarian grip on law making and the media and the threat of FSB toughs knocking on people’s doors.

The MH17 thing is a bother, I agree, and wrapped up with the Russian regime’s projection and victimhood. But then Putin has no credibility, given the number of lies spraying out from the Kremlin, and his attempt to hold developing nations hostage and twist them into believing the victimhood narrative and blaming NATO/the West/UK-US didn’t work out as planned.

The world is very shitty at the moment but I’m hopeful.

Ted January 13, 2023 4:06 PM

I agree with Evan Crothers, one of the paper’s authors:

Now that there are commercial GPT3 cover letter generators, can we finally retire cover letters once and for all?

This is a painful and arcane practice. Retiring cover letters is an easy win.

Gunter Königsmann January 14, 2023 6:53 AM

When ChatGPT is added to internet search results the outcome might be devastating: we give one voice, which might not be very intelligent and which will hold some strong but false beliefs, the ability to take part in virtually every piece of research and argument. What could possibly go wrong?

Clive Robinson January 14, 2023 8:46 AM

@ Gunter Königsmann, ALL,

Re : Search bot plus Chat bot

“What could possibly go wrong?”

Gross instability of the form you get with “feed-forward” systems.

Consider the Search-AI algorithm and Chat-AI algorithm as independent “Digital filters” in parallel.

The outputs of which change the data stored in the internet.

Which also forms the inputs to the AI algorithms.

The unknowns in the system are the synchronisation of the algorithms “if any”, and the data set change delays between successive runs of the algorithms “if any”.

At the very least you can see that each time the AI algorithms run the data set may or may not have changed. If the data set has changed then the results at the algorithms outputs will change.

One consequence that is easily visible is that the more often the algorithms run, the faster the data sets relating to them will change, and so the faster the inherent search bias in each AI algorithm will impress itself onto the data sets.
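A toy numerical sketch of that compounding (entirely illustrative: the gain, mixing rate, and starting bias are made-up parameters, not anything measured or taken from the paper):

```python
# Model the shared "data set" as a single number p: the fraction of
# content pushing some viewpoint. Each run, an algorithm reads the
# corpus, emits output slightly more biased than its input (gain > 1),
# and that output is mixed back into the corpus it reads next time.

def run_feedback(p0=0.01, gain=1.05, mix=0.2, runs=100):
    """Track the biased fraction p over `runs` read/write cycles."""
    p = p0
    history = [p]
    for _ in range(runs):
        out = min(1.0, gain * p)       # the filter amplifies what it reads
        p = (1 - mix) * p + mix * out  # its output re-enters the corpus
        history.append(p)
    return history

h = run_feedback()
print(f"start: {h[0]:.4f}, after 50 runs: {h[50]:.4f}, after 100 runs: {h[100]:.4f}")
```

With any per-run gain above 1 the bias grows geometrically until it saturates, and running the algorithms more often simply means more cycles per unit time: the faster the loop runs, the faster the drift.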

We know from current –still very early– research on changing an AI’s bias via “hidden hand” manipulation of its data set that all current AI algorithms of the “self learning” variety are extremely sensitive to such changes, in ways we do not yet comprehend.

Now you’ve mentioned it, and now I’ve given a model to think about it, you can be reasonably sure, that others reading it will start to think on it.

Some such readers we know will see it as an “opportunity” to significantly change the data sets –ie the Internet content– in directions they would like, but the rest of society probably would not like.

For instance, I’ve mentioned seeing various “re-writes” of history on Wikipedia. The reason this can happen is Wikipedia’s over-reliance on “secondary” or worse sources.

That is, secondary and later published works are almost always biased in sociological terms: the author biases their work by the resources they give prominence to. Thus we have a not-obvious information war going on, where the old statement of,

“History is written by the victors”

Is rather more than a truism. Thus you can expect the wealthy ultra right to invest heavily in ensuring their cracked viewpoints gain prominence in society.

We can see this happening: it’s been noted by others looking into it that both “Flat Earthism” and “Creationism” are on the rise, as well as “authoritarianism”. Not just in the number of adherents, but also in the number of “Internet articles”. How they came by their figures I don’t know, but if true we may already be seeing the result of Search-AI algorithms having a significant impact on the Internet.

Closing the loop and magnifying the bias weightings, with Chat-AI documents being liberally scattered at a high rate so as to appear ubiquitous, brings us into an entirely different form of “information security” concern.

Winter January 14, 2023 8:58 AM

A repost from
https://www.schneier.com/blog/archives/2023/01/friday-squid-blogging-how-to-buy-fresh-or-frozen-squid.html/#comment-415479
(sorry, used the wrong OP):

I have played a little with chatGPT. It gives remarkably sensible answers to questions. An obvious unhealthy action was rightly advised against, and a simple question about an obscure scripting-language problem was “correctly” answered (I did not really check).

I asked a very technical question in a field I am well informed about and it returned a very reasonable answer. If a student gave that answer, she would get an A+.

But chatGPT mentioned a study I did not know so I asked for a reference and it returned a title, author, year and journal. Looking that up showed that the article did not exist, not in Google Scholar, not in the archives of the journal, nor when browsing issues of that year.

Then I gave chatGPT the title and author of the study that actually contained the data it so well described, and then it reported a fictitious article with that title and author in a journal it most definitely did not appear in. The original study was published online, but never in a journal paper at all.

In short, if you doubt a text from chatGPT, you should ask for its sources. Not coincidentally, this is also what you should do with every piece of text that claims something.

Winter January 14, 2023 11:20 AM

@PattiMichelle

most Americans are reading at a 5th grade level, and mentally exist in the early 1900’s,

This is a result of generations of targeted policies aimed at keeping the descendants of poor plantation and farm workers in their low-paid jobs, and out of politics.

Blaming them for that lack of education is just blaming the victims.

If there is one thing most American K-12 education seems to be bad at, it is teaching the skills of critical thinking. I think American churches and politics are built on that educational deficiency.

anon January 14, 2023 11:50 AM

@Clive Robinson
“Is rather more than a truism. Thus you can expect the wealthy ultra right to invest heavily in ensuring their cracked viewpoints gain prominence in society.”

Please, which political bias controls now? Wikipedia, Big Tech, and the US government all censor the right. ChatGPT apparently has a left bias as well (https://unherd.com/thepost/chatgpt-is-not-politically-neutral/). When right-wing money is invested in this area, it will provide much-needed balance.

Unless this space is regulated, there are going to be thousands of AIs and we are going to use the ones that confirm our world views (the same as we pick our news now.) That is the only anti-authoritarian way to do it. But yes, decentralized AI will mean you will have a hard time controlling what others think, which I guess is ultimately what most of you are bloviating about.

modem phonemes January 14, 2023 12:00 PM

2024 is the jubilee year of 1984 and we see how the censoring librarian in Orwell’s story might be implemented.

With apologies to Larry Wall

$2024 =~ s / newspeak / chatspeak /g # and write the sequel the easy way

Winter January 14, 2023 12:03 PM

@anon

When right wing money is invested in this area, it will provide much needed balance.

Common fake-news talking point. All but a small part of the US MSM is right-wing. The biggest network of all is Fox News, which is Trump’s Pravda.

Academia is “left” if you assume facts are “left”. The current star on the right is DeSantis, who denies COVID, vaccines, LGBT* people, and the US history of slavery.

If that is “right”, then facts are indeed “left”.

Ted January 14, 2023 12:03 PM

Building an early bird to catch the social worms.

According to the paper, Social Worms (section 3.2.2) can happen when an attacker compromises a person’s social media or email account and then sends malicious messages to others from that account.

An NLG model could mimic the style of communication between individuals in order to deliver a malicious file or link.

To counter this, it’s proposed that a viewing app on a device could alert the receiver if several messages score high as machine generated text.

Of course, they note, protecting the privacy of direct messages has to be considered here.
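For concreteness, here is a minimal sketch of that idea: score incoming messages on-device and alert when several in a row look machine-generated. The detector below is a simple perplexity threshold under GPT-2, one common heuristic for flagging likely machine-generated text; it is not the method the paper prescribes, and the window size and threshold are invented illustrative values.

```python
import math
from collections import deque

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; fluent LM output tends to score low."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean next-token cross-entropy
    return math.exp(loss.item())

def should_alert(messages, window=5, threshold=40.0, min_hits=3):
    """Alert if at least `min_hits` of the last `window` messages score as
    likely machine-generated (low perplexity). Everything runs locally, so
    message content never leaves the device."""
    recent = deque(messages, maxlen=window)
    hits = sum(1 for m in recent if perplexity(m) < threshold)
    return hits >= min_hits
```

Because the scoring runs entirely on the receiving device, the privacy trade-off is at least contained, though a thresholded detector like this is easy to evade and will mislabel some human writing.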

How else do you address this?

Winter January 14, 2023 12:54 PM

@anon

No, and your analysis is not worth a response.

Typical troll response. Facts are anathema to the Right, Fox News, and Fake News. That is a universe parallel to, but not connected with, the real universe.

We know AI also does not need facts.

Dors January 15, 2023 1:19 PM

@Winter • January 13, 2023 8:31 AM

The attack model that should be taken seriously is Putin’s Firehose of Falsehoods.

It’s a real thing, the concept of which should be widely known. What is odd is calling it Putin’s. Putin has been in power for twenty years, while the US deep state has been around since about the time of the JFK assassination, and the Firehose of Falsehoods along with it. [1]

The Firehose of Falsehoods was most massively used not by Putin, but in the campaign to make the global human population afraid of a particular disease, exactly contrary to all health authorities’ recommendations for epidemics and pandemics of any kind, established for decades previously. [2]

So, it looks like this Putin-naming is blaming someone we dislike for what is universally done by power centers, quite in accordance with the traditional Russophobic bias [3].

Facts are anathema to the Right, Fox News, and Fake News.

Oddly, that omits the left wing of a particular bird of prey.

[1] If you ask Donald Jeffries, he would kindly provide you with the details.
[2] Just the other day this point was made by Dr. Simon Goddek. And while many medics would confirm the standard recommendations, the most authoritative of them who could provide exact references may be the former WHO pandemic associate Astrid Stuckelberger, PhD, MSc.
[3] Guy Mettan, Creating Russophobia: From the Great Religious Schism to Anti-Putin Hysteria, Clarity Press, Inc. (2017)
Andrei Tsygankov, Russophobia: Anti-Russian Lobby and American Foreign Policy, Palgrave Macmillan (2009)
https://consortiumnews.com/2018/06/15/letter-from-britain-an-establishment-blinded-by-russophobia/

Winter January 15, 2023 1:46 PM

@echo
Re: Putin’s Firehose of Falsehoods

The trolls respond fairly predictably:

it looks like this Putin-naming is blaming someone we dislike for what is universally done by power centers

You see clearly demonstrated how it works: a firehose of accusations against everyone except those who are behind the campaign. Always accuse your victims of the crimes you are committing.

It reminds me of Milosevic, who claimed his enemies bombed their own civilians and murdered their own people, but that it was never, ever the Serb armies.

As every propagandist in history has concluded:
Repeat a lie often enough and it becomes the truth

Phillip January 17, 2023 11:15 AM

I believe ChatGPT is exciting, though after a series of interactions it actually produced some bogus information on the general topic of cryptocurrency. If only ChatGPT would not seek to create a bandwagon around cryptocurrency. Its output seemed written by the cryptocurrency trading community.

vas pup January 17, 2023 7:06 PM

Rabbi ‘plagiarizes’ AI sermon but says humans aren’t obsolete yet
https://www.timesofisrael.com/rabbi-plagiarizes-ai-sermon-but-says-humans-arent-obsolete-yet/

“A US rabbi recently delivered a sermon composed by the artificial intelligence engine ChatGPT in a bid to highlight that the quickly evolving technology has yet to develop emotions, and is therefore incapable of replacing humans.

Rabbi Joshua Franklin, of the Jewish Center of the Hamptons, asked his congregation to guess where he stole the oration linking the idea of vulnerability and intimacy to the Torah portion of Vayigash — in which the biblical figure Joseph reveals himself to his brothers and reunites with his father, Jacob.

In the borrowed sermon, Franklin emphasized the importance of intimacy in building relationships, and described vulnerability as “the willingness to show up, and be seen when we have no control over the outcome,” quoting Prof. Brene Brown, known for her research on the subject.

Freely available online, the program [ChatGPT] is part of a new generation of AI systems that can converse, generate readable text on demand, and even produce novel images and videos based on what they’ve learned from a vast database of digital books, online writings, and other media.

“ChatGPT might be really great at sounding intelligent, but the question is, can it be empathetic?” he posed, asserting that the system was incapable of empathy and noting that it could create antisemitic material because of the unlimited sources it could draw from on the internet.

While AI may be smart, it does not have a soul, Franklin said; it has yet to develop compassion, love, and empathy, and is unable to build community and relationships.”

Paul January 24, 2023 9:43 AM

I see the eventual use of a ChatBot in a kidnapping to make ransom demands and negotiate a ransom amount.

Winter January 24, 2023 9:59 AM

@Paul

I see the eventual use of a ChatBot in a kidnapping to make ransom demands and negotiate a ransom amount.

You need to enter identifying information to get access to ChatGPT. The questions and answers are logged.

I see a whole world of security gaffes coming here.

MarkH February 3, 2023 7:02 PM

A droll excerpt from an article in Slate:

In the classroom of the future — if there still are any — it’s easy to imagine the endpoint of an arms race: an artificial intelligence that generates the day’s lessons and prompts, a student-deployed A.I. that will surreptitiously do the assignment, and finally, a third-party A.I. that will determine if any of the pupils actually did the work with their own fingers and brain.

Anonymous October 19, 2023 10:14 AM

“Machines do not have free-speech rights, and neither do foreign nationals posting propaganda from overseas.”

Clive Robinson October 19, 2023 12:43 PM

@ Anonymous,

“Machines do not have free-speech rights, and neither do foreign nationals posting propaganda from overseas.”

Not entirely true, but also quite bigoted.
