“People stop me on the street all the time and say, I didn’t know you speak Mandarin,” New York Mayor Eric Adams said at a news conference this month.
Experts have warned for years that AI will change our democracy by distorting reality. That future is already here. AI is being used to fabricate voices, fundraising emails and “deepfake” images of events that never occurred.
Ahead of the 2024 election, I’m writing this to urge elected officials, candidates and their supporters to pledge not to use AI to deceive voters. I’m not suggesting a ban, but rather calling for politicians to commit to some common values while our democracy adjusts to a world with AI.
Some lawmakers are trying to update our laws, but waiting on them to succeed isn’t an option for the AI election that’s already underway, experts in technology and elections tell me. If we don’t draw some lines now, legions of citizens could be manipulated, disenfranchised or lose faith in the whole system — opening doors to foreign adversaries who want to do the same. AI might break us in 2024.
“When it becomes weaponized, we’re in a world of trouble,” Rep. Yvette D. Clarke (D-N.Y.) tells me.
A full 85 percent of U.S. citizens said they were “very” or “somewhat” concerned about the spread of misleading AI video and audio, in an August survey by YouGov. And 78 percent were concerned about AI contributing to the spread of political propaganda.
Of course, fibbing politicians are nothing new, but examples keep multiplying of how AI supercharges misinformation in ways we haven’t seen before. Two examples: The presidential campaign of Florida Gov. Ron DeSantis (R) shared an AI-generated image of former president Donald Trump embracing Anthony S. Fauci. That hug never happened. In Chicago’s mayoral primary, someone used AI to clone the voice of candidate Paul Vallas in a fake news report, making it look like he approved of police brutality.
Both of these cases could be debunked. But what will happen when a shocking image or audio clip goes viral in a battleground state shortly before an election? What kind of chaos will ensue when someone uses a bot to send out individually tailored lies to millions of different voters?
Most voters would be shocked to see how simple and effective generative AI tools have become. As an experiment, I created a fake announcement from Senate Majority Leader Charles E. Schumer (D-N.Y.) endorsing Spider-Man for president. (Schumer gave me permission to try.)
With an image generator called Midjourney, I made a photorealistic image of Spidey and the senator shaking hands. Using the same ElevenLabs software that was used by New York’s mayor, I had Schumer’s voice read an endorsement. It all took me less than 15 minutes.
The risk to our democracy is greater than just misinformation about any particular candidate. When faking becomes so easy, citizens can lose faith that anything they hear or see is legitimate.
“AI could be used to jaundice — even totally discredit — our elections, as early as next year,” Schumer said at a recent Senate hearing. “Make no mistake, the risks of AI on our elections is not just an issue for Democrats, nor just Republicans. Every one of us will be impacted.”
So if we’re clear on the risk of AI, what’s in the way of solutions?
“There is nothing wrong with the use of AI in our democracy as long as it’s not weaponized to deceive, misinform or harm anyone,” says Clarke, who has been working for years on developing guardrails for AI.
She’s right: We can’t put the genie back in the bottle. AI is already embedded in the tech tools that campaigns, and all of us, use every day. AI creates our Facebook feeds and picks what ads we see. AI built into our phone cameras brightens faces and smooths skin.
What’s more, there are many political uses for AI that are unobjectionable, and even empowering for candidates with fewer resources. Politicians can use AI to manage the grunt work of sorting through databases and responding to constituents. Republican presidential candidate Asa Hutchinson has an AI chatbot trained to answer questions like him. (I’m not sure politician bots are very helpful, but fine, give it a try.)
Clarke’s solution, included in a bill she introduced on political ads: Candidates should disclose when they use AI to create communications. You know the “I approve this message” notice? Now add, “I used AI to make this message.”
I think disclosure is a piece of the solution. A visible or audible label lets campaigns be creative while reducing the risk that their tendency to stretch the truth tips into outright deception. Embedded digital watermarks would also help sites like Google know to treat these images differently from news photos.
The Republican National Committee included a disclaimer in an entirely AI-generated ad it released in April depicting a post-apocalyptic America following the reelection of President Biden and Vice President Harris. Because of it, I don’t think anybody mistakenly thought the ad depicted reality.
But labels aren’t enough. If AI disclosures become commonplace, we may become blind to them, like so much other fine print.
The bigger ask: We want candidates and their supporting parties and committees not to use AI to deceive us.
So what’s the difference between a dangerous deepfake and an AI facetune that makes an octogenarian candidate look a little less octogenarian?
“The core definition is showing a candidate doing or saying something they didn’t do or say,” says Robert Weissman, president of the nonprofit Public Citizen, which proposed an AI pledge of its own.
Sure, give Biden or Trump a facetune, or even show them shaking hands with Abraham Lincoln. But don’t use AI to show your competitor hugging an enemy or fake their voice commenting on current issues.
The pledge also includes not using AI to suppress voting, such as using an authoritative voice or image to tell people a polling place has been closed. That is already illegal in many states, but it’s still concerning how believable AI might make these efforts seem.
And add to that: Don’t deepfake yourself. Making yourself or your favorite candidate appear more knowledgeable, experienced or culturally capable is also a form of deception.
The voice clones of Adams, the New York mayor, were used to encourage people to apply for city jobs or attend community events, not to appeal for votes. Still, they contained no disclaimer that the voice was AI generated and may have left a lasting impression on future voters. When former mayor Mike Bloomberg wanted to connect with more New Yorkers, he took Spanish lessons at City Hall.
(Pressed on the ethics of his use of AI, Adams just proved my point that we desperately need some ground rules. “These are part of the broader conversations that the philosophical people will have to sit down and figure out, ‘Is this ethically right or wrong?’ I’ve got one thing: I’ve got to run the city,” he said.)
The golden rule in my pledge — don’t use AI to be materially deceptive — is similar to the one in an AI regulation proposed by a bipartisan group of lawmakers including Sens. Amy Klobuchar (D-Minn.) and Josh Hawley (R-Mo.). It would make distributing such misleading material illegal for federal candidates.
Such proposals have faced resistance in Washington on First Amendment grounds. The free speech of politicians is important. It’s not against the law for politicians to lie, whether they’re using AI or not. An effort to get the Federal Election Commission to count AI deepfakes as “fraudulent misrepresentation” under its existing authority has faced similar pushback.
But a pledge like the one I outline here isn’t a law restraining speech. It’s asking politicians to take a principled stand on their own use of AI. We know AI is something voters are concerned about — maybe leading on it will help some candidates get elected.