Large Language Models and Elections

Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term with President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.

We should expect more of this kind of thing. The applications of AI to political advertising have not escaped campaigners, who are already “pressure testing” possible uses for the technology. In the 2024 presidential election campaign, you can bank on the appearance of AI-generated personalized fundraising emails, text messages from chatbots urging you to vote, and maybe even some deepfaked campaign avatars. Future candidates could use chatbots trained on data representing their views and personalities to approximate the act of directly connecting with people. Think of it like a whistle-stop tour with an appearance in every living room. Previous technological revolutions—railroad, radio, television, and the World Wide Web—transformed how candidates connect to their constituents, and we should expect the same from generative AI. This isn’t science fiction: The era of AI chatbots standing in as avatars for real, individual people has already begun, as the journalist Casey Newton made clear in a 2016 feature about a woman who used thousands of text messages to create a chatbot replica of her best friend after he died.

The key is interaction. A candidate could use tools enabled by large language models, or LLMs—the technology behind apps such as ChatGPT and the art-making DALL-E—to do micro-polling or message testing, and to solicit perspectives and testimonies from their political audience individually and at scale. The candidates could potentially reach any voter who possesses a smartphone or computer, not just the ones with the disposable income and free time to attend a campaign rally. At its best, AI could be a tool to increase the accessibility of political engagement and ease polarization. At its worst, it could propagate misinformation and increase the risk of voter manipulation. Whatever the case, we know political operatives are using these tools. To reckon with their potential now isn’t buying into the hype—it’s preparing for whatever may come next.

On the positive end, and most profoundly, LLMs could help people think through, refine, or discover their own political ideologies. Research has shown that many voters come to their policy positions reflexively, out of a sense of partisan affiliation. The very act of reflecting on these views through discourse can change, and even depolarize, those views. It can be hard to have reflective policy conversations with an informed, even-keeled human discussion partner when we all live within a highly charged political environment; this is a role almost custom-designed for LLMs. In US politics, it is a truism that the most valuable resource in a campaign is time. People are busy and distracted. Campaigns have a limited window to convince and activate voters. Money allows a candidate to purchase time: TV commercials, labor from staffers, and fundraising events to raise even more money. LLMs could provide campaigns with what is essentially a printing press for time.

If you were a political operative, which would you rather do: play a short video on a voter’s TV while they are folding laundry in the next room, or exchange essay-length thoughts with a voter on your candidate’s key issues? A staffer knocking on doors might need to canvass 50 homes over two hours to find one voter willing to have a conversation. OpenAI charges pennies to process about 800 words with its latest GPT-4 model, and that cost could fall dramatically as competitive AIs become available. People seem to enjoy interacting with chatbots; OpenAI’s product reportedly has the fastest-growing user base in the history of consumer apps.

Optimistically, one possible result might be that we’ll get less annoyed with the deluge of political ads if their messaging is more usefully tailored to our interests by AI tools. Though the evidence for microtargeting’s effectiveness is mixed at best, some studies show that targeting the right issues to the right people can persuade voters. Expecting more sophisticated, AI-assisted approaches to be more consistently effective is reasonable. And anything that can prevent us from seeing the same 30-second campaign spot 20 times a day seems like a win.

AI can also help humans effectuate their political interests. In the 2016 US presidential election, primitive chatbots had a role in donor engagement and voter-registration drives: simple messaging tasks such as helping users pre-fill a voter-registration form or reminding them where their polling place is. If it works, the current generation of much more capable chatbots could supercharge small-dollar solicitations and get-out-the-vote campaigns.

And the interactive capability of chatbots could help voters better understand their choices. An AI chatbot could answer questions from the perspective of a candidate about the details of their policy positions most salient to an individual user, or respond to questions about how a candidate’s stance on a national issue translates to a user’s locale. Political organizations could similarly use them to explain complex policy issues, such as those relating to the climate or health care or…anything, really.

Of course, this could also go badly. In the time-honored tradition of demagogues worldwide, the LLM could inconsistently represent the candidate’s views to appeal to the individual proclivities of each voter.

In fact, the fundamentally obsequious nature of the current generation of large language models results in them acting like demagogues. Current LLMs are known to hallucinate—or go entirely off-script—and produce answers that have no basis in reality. These models do not experience emotion in any way, but some research suggests they have a sophisticated ability to assess the emotion and tone of their human users. Although they weren’t trained for this purpose, ChatGPT and its successor, GPT-4, may already be pretty good at assessing some of their users’ traits—say, the likelihood that the author of a text prompt is depressed. Combined with their persuasive capabilities, that means that they could learn to skillfully manipulate the emotions of their human users.

This is not entirely theoretical. A growing body of evidence demonstrates that interacting with AI has a persuasive effect on human users. A study published in February prompted participants to co-write a statement about the benefits of social-media platforms for society with an AI chatbot configured to have varying views on the subject. When researchers surveyed participants after the co-writing experience, those who interacted with a chatbot that expressed that social media is good or bad were far more likely to express the same view than a control group that didn’t interact with an “opinionated language model.”

For the time being, most Americans say they are resistant to trusting AI in sensitive matters such as health care. The same is probably true of politics. If a neighbor volunteering with a campaign persuades you to vote a particular way on a local ballot initiative, you might feel good about that interaction. If a chatbot does the same thing, would you feel the same way? To help voters chart their own course in a world of persuasive AI, we should demand transparency from our candidates. Campaigns should have to clearly disclose when a text agent interacting with a potential voter—through traditional robotexting or the use of the latest AI chatbots—is human or automated.

Though companies such as Meta (Facebook’s parent company) and Alphabet (Google’s) publish libraries of traditional, static political advertising, they do so poorly. These systems would need to be improved and expanded to accommodate user-level differentiation in ad copy to offer serviceable protection against misuse.

A public, anonymized log of chatbot conversations could help hold candidates’ AI representatives accountable for shifting statements and digital pandering. Candidates who use chatbots to engage voters may not want to make all transcripts of those conversations public, but their users could easily choose to share them. So far, there is no shortage of people eager to share their chat transcripts, and in fact, an online database exists of nearly 200,000 of them. In the recent past, Mozilla has galvanized users to opt into sharing their web data to study online misinformation.

We also need stronger nationwide protections on data privacy, as well as the ability to opt out of targeted advertising, to protect us from the potential excesses of this kind of marketing. No one should be forcibly subjected to political advertising, LLM-generated or not, on the basis of their Internet searches regarding private matters such as medical issues. In February, the European Parliament voted to limit political-ad targeting to only basic information, such as language and general location, within two months of an election. This stands in stark contrast to the US, which has for years failed to enact federal data-privacy regulations. Though the 2018 revelation of the Cambridge Analytica scandal led to billions of dollars in fines and settlements against Facebook, it has so far resulted in no substantial legislative action.

Transparency requirements like these are a first step toward oversight of future AI-assisted campaigns. Although we should aspire to more robust legal controls on campaign uses of AI, it seems implausible that these will be adopted in advance of the fast-approaching 2024 general presidential election.

Credit the RNC, at least, with disclosing that their recent ad was AI-generated—a transparent attempt at publicity still counts as transparency. But what will we do if the next viral AI-generated ad tries to pass as something more conventional?

As we are all being exposed to these rapidly evolving technologies for the first time and trying to understand their potential uses and effects, let’s push for the kind of basic transparency protection that will allow us to know what we’re dealing with.

This essay was written with Nathan Sanders, and previously appeared in The Atlantic.

EDITED TO ADD (5/12): Better article on the “daisy” ad.

Posted on May 4, 2023 at 6:45 AM

Comments

Winter May 4, 2023 8:42 AM

Research has shown that many voters come to their policy positions reflexively, out of a sense of partisan affiliation.

The GOP mainly approaches elections from a Gerrymandering and Voter Suppression angle. That is, they would use AI mainly to help prevent people from voting.

[more extensive comment is held for moderation]

Winter May 4, 2023 8:53 AM

The content of the ad isn’t especially novel—a dystopian vision of America under a second term with President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.

One reason that the “Daisy” ad worked was probably that the suggestion that “Presidential candidate Barry Goldwater was a genocidal maniac who threatened the world’s future” was apt, and arguably expressed succinctly what most Americans thought about his policies (especially regarding nuclear arms). [1]

[1] https://www.history.com/news/barry-goldwater-1964-campaign-right-wing-republican

What is especially remarkable in the GOP video is that it ignores the main issues behind the GOP’s last string of election disasters: massively unpopular reproductive policies, and lies about elections, mainly about the 2020 election and Jan 6th.

PaulBart May 4, 2023 9:04 AM

That’s just a Russian bot. Don’t look at Hunter’s laptop, it’s fake.

Dr. Chandra May 4, 2023 9:23 AM

Interesting that you chose “Daisy” as a reference, as there’s a famous “Daisy” in relation to the turning OFF of a dangerous AI in 2001.

“It can only be attributable to human error.” — HAL 9000

JonKnowsNothing May 4, 2023 9:41 AM

It doesn’t really take an ad anymore to sway the public to Not Vote. In the USA, voting is quite hazardous to your freedom.

A signature in the wrong spot, a data error in the voter listings, the wrong polling station, the complex ballot, initiatives written in reverse Polish where Yes means No, difficult problems reduced to less-than-sound bites, voter ID, voter registration, restricted voting, and all the logistics of participation in our political process can land you in jail for 20 years or more.

As in many aspects of life, Trust and Root of Trust matter. When Trust is lost, it takes a long time to rebuild, and often it’s irreconcilable; a form of citizen divorce.

A good part of any society runs on Trust in some form. In societies in which all trust is lost, chaos eventually descends.

AI political ads are not really worse than the psychology-derived ads; they all have the same goal: reduce Trust.

In the USA, who can you trust? No One.

Clive Robinson May 4, 2023 11:22 AM

@ Bruce,

Re : Who can say?

It takes at least two to tango in a conversation or chat…

So,

“We also need stronger nationwide protections on data privacy”

What right to privacy in,

“Candidates who use chatbots to engage voters may not want to make all transcripts of those conversations public, but their users could easily choose to share them.”

The human has some rights under copyright legislation, but there is the vexed question of,

“Does the AI have rights?”

Most would argue “no” on a knee-jerk response, but that is to miss a greater point.

What if the human believes at the time of the conversation and later when they disclose it that the AI was human?

As people are slowly starting to realise, “the law” is actually not about justice in the strictest sense, nor fundamental morals or ethics, but about what “an observer” “feels on the day” of casting their opinion as a vote of guilt or innocence.

Thus via the “guilty mind” hypothesis, if a person releases a transcript of a chat with an AI “actually believing the AI to be human”, then they are in fact guilty.

We’ve not quite got there yet, but with AI making “Art Knock-offs” that can win competitions, we have to seriously ask how far the junction is and whether we are going around the corner to the left or right.

Likewise we are going to have to think carefully about how we draft legislation to prevent such issues arising in the near future, where arguably the I in AI does not stand for Intelligence, but also for the longer term, where the I stands for something more than Intelligence, such as Inventor or Instigator.

Some may remember back to the issue of who owned the rights to the “monkey selfie”, which led to the discriminatory result by both the court and,

“[T]he US Copyright Office issued an updated compendium of its policies, including a section stipulating that it would register copyrights only for works produced by human beings.”

Which I’m guessing is going to come up in a different for… Because it denies the operators of LLMs and future systems any rights, as the originator of the work “is not human”.

Any bets on the result of any punch up on this involving Alphabet, Meta, Microsoft etc?

[1] The monkey selfie case was brought by PETA against the British photographer who had no creative input into the pictures whatsoever…,

https://www.theguardian.com/world/2016/jan/06/monkey-selfie-case-animal-photo-copyright

&ers May 4, 2023 1:14 PM

To: ALL

If you have e-voting, you don’t need AI.

hxxps://gafgaf.infoaed.ee/en/posts/perils-of-electronic-voting/

Mexaly May 4, 2023 1:24 PM

High-school/college-level writing is wasted on some messages, such as press releases or letters to politicians. I find these bots useful for writing a draft of such messages in seconds. Editing takes minutes, and I have an essay effective for its purpose.

It’s a tool we’ll all be using soon.

General Beringer May 4, 2023 5:05 PM

I don’t know if you wanna entrust the safety of our country to some silicon diode.

Grumbles May 5, 2023 12:24 AM

@General Beringer
A particular silicon diode I’ve got no problems with.

A couple billion transistors running who-knows-what code, intended to do something but starting to “hallucinate” on its own, can’t be a good idea or lead to any good results.

Canis familiaris May 5, 2023 4:59 AM

@Mexaly

Personally, I’ve never been able to write with the level of vapidity needed for press releases, or summaries for management, so an LLM would be a useful tool for me.

On the other hand, I can hope that people will grow to realise that such writing is pretty much over-simplified pap and learn to ignore writing in such styles. I can only hope.

From a security point of view, if the prevalence of LLM-generated text leads people to be more suspicious of content-free, unevidenced, unauthenticated text, that can only be a good thing.

Paul May 5, 2023 8:13 AM

Current LLMs are known to hallucinate—

I think “confabulate” would be more accurate.

Clive Robinson May 5, 2023 9:27 AM

@ Canis familiaris, Mexaly

“never been able to write with the level of vapidity needed for press releases, or summaries for management,”

You don’t appear to understand the purpose of modern management…

1, Never make a decision.
2, Never be in the same room as a decision.
3, Always document some other person as having made the decision.

Thus two things have to happen to justify the pay/perks and no responsibility.

1, All decisions have to be collective or not at all.
2, Made from a select nothingness of information.

That way any blame is spread across the entirety of management, and fully excusable due to incorrect, or more often incomplete or incomprehensible, information.

That way no individual in management is to blame, thus any external punishment does not fall on them… So in general the worst that happens is protracted legal disputes that eventually get given up or so weakened that the fines etc. go by hardly noticed, except by the shareholders or taxpayers, not management…

Remember, when management have problems, they don’t want “options”; they want you to do what they are not going to do, which is take responsibility. Thus all they want from you is a solution that has minimal traceable risk to them, and preferably makes some third party the lightning rod / scapegoat.

So any reports have to be,

1, Incomprehensibly large.
2, Incomprehensibly drab.
3, Incomprehensibly convoluted.
4, Incomprehensibly directionless.
5, Incomprehensibly vacuous.
6, Use positive incomprehension as desirable aims / objectives.
7, Sprinkled with motherhood style mission statments for shareholder confusion.
8, Emphasize irrelevant social goods (such as using Green-Tech enabled suppliers).

Need I go on?

KeithB May 5, 2023 11:25 AM

As someone pointed out, most of the “dystopian events” featured in the video happened during the Trump administration. I don’t know whether that was a bug or a feature, but it means you have to be very careful what you wish for from the AI genie.

Sean Ralph May 5, 2023 3:08 PM

When you said Daisy moment I thought you were referring to: “In 1961, an IBM 704 at Bell Labs was programmed to sing “Daisy Bell” in the earliest demonstration of computer speech synthesis. This recording has been included in the United States National Recording Registry.” – Wikipedia

Grumbles May 6, 2023 12:29 AM

@Clive Robinson

ChatGPT ought to, with relatively little tweaking, be able to produce reports up to the 8 standards you’ve enumerated. Number 6 should cover for the hallucination or confabulation tendency.

I’m getting increasingly worried.

Grumbles May 6, 2023 12:34 AM

@Paul

Good call on “confabulate!”

There’s entirely too much anthropomorphic thinking about AI.

MarkH May 6, 2023 12:55 AM

@Grumbles:

There’s entirely too much anthropomorphic thinking about AI.

Strongly concur … unfortunately, the term AI is in itself both (a) strongly anthropomorphic, to the extent that folks tend to imagine intelligence in human terms, and (b) grossly fraudulent: no such thing has ever yet existed.

Alex May 9, 2023 4:21 PM

“If you were a political operative, which would you rather do: play a short video on a voter’s TV while they are folding laundry in the next room, or exchange essay-length thoughts with a voter on your candidate’s key issues?”

Revealed preference suggests they’d much rather do the TV spot.

Clive Robinson May 10, 2023 1:05 AM

@ Alex, Bruce, ALL,

Re : Why short is best.

“Revealed preference suggests they’d much rather do the TV spot.”

Some marketing people call it “Punch” because the idea is to knock a “catch phrase” into people’s heads and make it like those oft thought of as dreadful earwig / earworm canned-music “elevator tunes” that haunt you all day[1].

The “catch phrase” in a “Punch” does not have to make actual sense, be factual, or anything else, just get lodged into people’s heads and be associated with the “Chosen Product” being pushed.

The original theory being the now discredited “Pavlov’s Dogs” notion of involuntary action under auditory stimulus (still in theory used by Ice Cream Vans[2]).

Sometimes the idea is to plant an idea with two catch phrases, one being the “hook”, the other being a product identifier or fake fact, such as –the hopefully fictitious–,

1, “Sunshine Sun smile”
2, “Glemo… best tooth care product in nine out of ten tests”

With some catchy “winky-dink” melody or rhythm behind the “hook”.

I must admit, I’ve never found myself buying anything like “Sunshine Sun smile” “Glemo…”, irrespective of whether it is very doubtfully “Best tooth care product in nine out of ten tests” –that have been faked[3]– because a “winky-dink” melody plays in a store. Or, as far as I can remember, any product since I was still very much pre-teen (yes, I’m aware that “kiddy pester power” can work[2], as can naff faux-tick “swooshes” on overpriced low-quality clothing[4]).

So yes, “Punch in the product” is a known tactic, as with care it can be used to push any Fake-Facts you like to quite enough people that, percentage-of-population wise, it could be seen as greater than election margins…

Oh and don’t forget that “Campaign Managers” are not just “Pushing Product” they are almost always “Promoting Self” as well…

As has been observed of the old saying,

“Where there is muck there is brass”

The logical inverse would be,

“Where there is money there is dirt”

Something Rupert “the bare-faced liar” Murdoch appears to have built an empire with…

[1] Earwigs / Earworms work because the tune has a “Hook” inside it, which is a melodic catch phrase that is short and in some cases follows “bio-rhythms”. The “Girl from Ipanema” and “We will rock you” being two such well known tunes.

[2] “Pester-Power” caused probably the most memorable event of instant food intolerance in my life… In the UK there was a white milk chocolate pushed by adverts featuring “The milky bar kid”. My mum, after some nagging, bought me a small bar, of which I ate maybe two pieces, and they almost as suddenly reappeared at just under sub-orbital speed. My elder, quite unpleasant, sister however still ate the rest of the bar with apparent enjoyment (sadly she did not come out in green spots, nor did her hair fall out, which younger brothers have been known to wish for 😉

[3] The thing is, marketing uses faux-tests and pays lots of money to “chosen” researchers… From the well known “8 out of 10 dogs” where the food bowls the marketing people wanted rejected had been soaked in petrol or equivalent, which even a hungry dog would not eat from (they ain’t as dumb as some humans). This trickery gets a lot more insidious, as is known with both Tobacco and Corn-syrup pushers. It’s not too much of a stretch to think that the marketing industry might also have done similar in the past with psychologists and the like, to promote their own brand and latest “whizz-bang” con to enrich themselves (much like the Big Four Accountancy Firms did with associated “business consultant” scams in the past).

[4] Yup, logos on clothes are seen on groups of people, much like you see “ranch brands” burnt into the skin of cattle that are “owned” and driven to the slaughter… Which should tell you what marketers really mean by “Product Branding”.
