The history of presidential campaigns is replete with dirty tricks, from 1856 claims that James Buchanan’s palsy-induced head tilt was the result of his trying to hang himself to the 2004 Killian documents fraud and beyond. But the advent of artificial intelligence and “deepfake” video and audio can take dirty tricks to a whole new level — and even, perhaps, sway an election.
A taste of this was delivered just last week, as Forbes reports:
Former President Donald Trump accused “Democrats” of using artificial intelligence to link him to [sex offender] Jeffrey Epstein on Tuesday [January 9]….
Trump made the comments in a post on Truth Social Tuesday that featured a Daily Mail article about actor and Democratic activist Mark Ruffalo apologizing for posting a fake photo claiming to show Trump en route to Epstein’s infamous private island.
Ruffalo on Friday posted two photos to X of Trump on a plane surrounded by groups of young girls, claiming they were “all headed to Epstein’s ‘Fantasy Island,’” but later apologized after the post was flagged with additional context from readers who said it was AI generated.
Predicting artificial intelligence “will be a big and very dangerous problem in the future,” Trump suggested Tuesday “strong laws ought to be developed against A.I.”
The picture is small potatoes, too, compared to what modern technology now makes possible. Just consider the following side-by-side comparison of real video of newscaster Anderson Cooper and a deepfake version.
Then there’s a message from famed actor Morgan Freeman (or is it?):
And below is footage of the real Barack Obama talking followed by a “synthetic” Obama, along with some explanation of how the magic was worked.
As for those claiming their trained eyes can discern the difference, consider that this technology is only getting better. And the Obama video, do note, is six years old. (Imagine, technological change is now so rapid that such a fact can rightly raise our eyebrows.)
Some implications here are obvious. “Won’t be long before you won’t know what’s real and what isn’t with AI — scary stuff,” MSN respondent “80s Joe” remarked on the Forbes piece.
Then, “This is terrifying,” wrote the top commenter under an informational video about deepfakes. “Imagine when deepfake videos can frame innocent people as guilty.”
In response, someone noted that “soon videos can’t be used as evidence because of this.”
As for Trump, the fake Epstein photo isn’t the first time he’s been targeted with AI legerdemain. Last year a pro-Ron DeSantis group ran an anti-Trump ad that included a voice sounding like the 45th president’s saying, “I opened up the governor position for Kim Reynolds, and when she fell behind, I endorsed her.”
“But Trump never said those words,” NPR’s Ayesha Rascoe pointed out last July. “The voice in the ad was allegedly created using artificial intelligence to read something Trump wrote on social media.”
Rascoe then spoke to University of California, Berkeley, digital forensics expert Hany Farid and asked him specifically about the artificial replication of candidates’ voices.
“I think there’s two risks here that we have to think about,” he responded. “One is the ability to create an audio recording of your opponent saying things that they never said. But the other concern we should have is that when the candidate really does get caught saying something, how are we going to determine whether it’s real or not?”
As for this deception’s effect, Farid stated that with people operating quickly on the internet and not taking time for analysis, they’ll just tend to “absorb” the fake material as long as it conforms to their “preconceived ideas.”
Moreover, as this technology develops, it becomes “cheaper” and “more ubiquitous,” Farid pointed out, enabling even laymen to create convincing deepfakes targeting a despised candidate. “You can go over to a commercial website, and for $5 a month you can clone anybody’s voice,” he said.
And what of the remedy? “What’s tricky here,” Farid said, is that “it’s not illegal to lie in a political ad.” The existing laws meant to address political deepfakes are essentially “toothless,” too. Oh, forensic analysis can determine what’s fact and what’s fiction. The problem, however, is that “the half-life of a social media post is measured in minutes,” Farid further noted. “So by the time we end up analyzing it and fact-checking it, it’s great for the journalists, not so much for the millions of people who have already seen it online.” So it boils down to that old saying: “A lie can get halfway around the world before the truth gets its boots on.”
But while it’s not realistic to outlaw “lying” in campaign ads and other political content, might not the solution here rest in defamation law?
Enact laws dictating that if someone creates defamatory deepfake audio, video, or imagery of a political candidate (or, in fact, of any individual), harsh penalties will result. Remember here that at issue isn’t, let’s say, conventional satire, such as Saturday Night Live spoofing a politician; everyone knows the actors aren’t the people they’re portraying and that humor (though not always achieved) is the goal. Defamatory deepfakes, in contrast, are often malicious and meant to deceive.
This wouldn’t stop a modern-day Jonathan Swift from creating an obviously caricatured version of a politician in order to illustrate what he considers the person’s absurdity. The law would pertain only to defamatory deepfakes.
Of course, we could also do nothing. But doesn’t that run the risk that we’ll be deepfaked out of voting for good government?