Opinion: How to avoid AI-enhanced attempts to manipulate the election
The headlines this election cycle have been dominated by unprecedented events, among them Donald Trump’s criminal conviction, the attempt on his life, Joe Biden’s disastrous debate performance and his replacement on the Democratic ticket by Vice President Kamala Harris. It’s no wonder other important political developments have been drowned out, including the steady drip of artificial intelligence-enhanced attempts to influence voters.
During the presidential primaries, a fake Biden robocall urged New Hampshire voters to wait until November to cast their votes. In July, Elon Musk shared a video featuring a voice that mimicked Kamala Harris’ and said things she did not say. Originally labeled as a parody, the clip readily morphed into an unlabeled post on X with more than 130 million views, highlighting the challenge voters are facing.
More recently, Trump weaponized concerns about AI by falsely claiming that a photo of a Harris rally was AI-generated, suggesting the crowd wasn’t real. And a deepfake photo of the attempted assassination of the former president altered the faces of Secret Service agents so that they appeared to be smiling, promoting the false theory that the shooting was staged.
Clearly, when it comes to AI manipulation, the voting public has to be ready for anything.
Voters wouldn’t be in this predicament if candidates had clear policies on the use of AI in their campaigns. Written guidelines about when and how campaigns intend to use AI would allow people to compare candidates’ use of the technology to their stated policies. This would help voters assess whether candidates practice what they preach. If a politician lobbies for watermarking AI-generated content so that people can identify when it is being used, for example, they should apply such labeling to their own AI-generated ads and other campaign materials.
AI policy statements can also help people protect themselves from bad actors trying to manipulate their votes. And a lack of trustworthy means for assessing the use of AI undermines the value the technology could bring to elections if deployed properly, fairly and with full transparency.
It’s not as if politicians aren’t using AI. Indeed, companies such as Google and Microsoft have acknowledged that they have trained dozens of campaigns and political groups on using generative AI tools.
Major technology firms released a set of principles earlier this year guiding the use of AI in elections. They also promised to develop technology to detect and label realistic content created with generative AI and educate the public about its use. However, these commitments lack any means of enforcement.
Government regulators have responded to concerns about AI’s effect on elections. In February, following the rogue New Hampshire robocall, the Federal Communications Commission moved to make such tactics illegal. The consultant who masterminded the call was fined $6 million, and the telecommunications company that placed the calls was fined $2 million. But even though the FCC wants to require that use of AI in broadcast ads be disclosed, the Federal Election Commission’s chair announced last month that the agency was ending its consideration of regulating AI in political ads. FEC officials said that would exceed their authority and that they would await direction from Congress on the issue.
California and other states require disclaimers when the technology is used, but only when there is malicious intent. Michigan and Washington require disclosure of any use of AI. And Minnesota, Georgia, Texas and Indiana have passed bans on using AI in political ads altogether.
It’s likely too late in this election cycle to expect campaigns to start disclosing their AI practices. So the onus lies with voters to remain vigilant about AI — in much the same way that other technologies, such as self-checkout in grocery and other stores, have transferred responsibility to consumers.
Voters can’t rely on the election information that comes to their mailboxes, inboxes and social media platforms to be free of technological manipulation. They need to take note of who has funded the distribution of such materials and look for obvious signs of AI use in images, such as missing fingers or mismatched earrings. Voters should know the source of information they are consuming, how it was vetted and how it is being shared. All of this will contribute to more information literacy, which, along with critical thinking, is a skill voters will need to fill out their ballots this fall.
Ann G. Skeet is the senior director of leadership ethics and John P. Pelissero is the director of government ethics at the Markkula Center for Applied Ethics at Santa Clara University. They are among the co-authors of “Voting for Ethics: A Guide for U.S. Voters,” from which portions of this piece were adapted.