I Support His Computer Generated Policies
With AI-generated deepfakes becoming a viable campaign tool, why bother with democracy?

Smear campaigns are nothing new in politics. Take the grand battle of misinformation between Mark Antony and Octavian during the War of Actium (32–30 BCE), when mudslinging colored the contest deciding the future of Rome. Octavian’s ‘Egyptian Effeminacy’ campaign portrayed an Antony weakened by the luxuriant ways of the Ptolemaic seductress Cleopatra. According to the future Caesar Augustus, Antony had forsworn the severe ideal of Roman propriety in pursuit of lust and leisure. Antony hit back with the claim that Octavian had forged the papers naming him Julius Caesar’s heir and primary beneficiary.
The intervening twenty or so centuries have seen technology applied to smear campaigns in ways ever more sophisticated and effective. With this increasing sophistication, the temperature is rising. The emergence of deepfakes and AI-generated propaganda, a feature of American politics for some time but one that hit European elections with full force last year, constitutes a fount of misinformation that is both disconcerting and disruptive. Politicians and pundits need not merely speak lies to the public about their opponents; they can now visualize their supposed transgressions with artful, high-quality videos put together by computers with minds of their own.
In October, there was uproar in the Netherlands when the far-right Party for Freedom (PVV) released incendiary deepfake pictures of rival politician Frans Timmermans being arrested by police. Geert Wilders, the shockingly towheaded leader of the PVV, had already commenced his campaign with AI-generated videos depicting a fictional future Netherlands under Sharia law.
Meanwhile, in Ireland, some voters may have been shocked to see Catherine Connolly’s name on the ballot come election day. Just days before, a deepfake video of the now-president withdrawing from the race had circulated on the internet, purportedly originating from a reliable source, the Irish national broadcaster RTÉ. The culprit behind the deepfake video remains unknown, but their purpose seems clear: sow confusion and distrust in the electoral process to undermine the result.
While Wilders had to put his hands up and apologize to Timmermans for his defamatory use of deepfake imagery, he was likely not too embarrassed. As University of Amsterdam researcher Fabio Votta notes, “There’s still a normative aspect of using AI. For the far right, a lot of their modus is norm-breaking and shocking. They don’t fear the reputation hit.”
In a study of 20,000 election-related posts in the Netherlands, academic researchers found that over 400 posts were AI-generated. The PVV was responsible for over a quarter of these (in a race with 27 parties). How much of a problem does this type of content pose? While younger audiences can (mostly) distinguish between reality and deepfakes or AI-generated videos, elderly voters are more vulnerable. A National Library of Medicine study found that voters aged 65 and over were more than twice as likely to engage with false news as 18–29-year-olds. Relatedly, a recent iProov survey found that older people were more susceptible to AI-generated deception: around 30 percent of those aged 55–64, and 39 percent of those over 65, had never even heard of deepfakes.
One man well over the 65+ age threshold is Donald Trump. He has taken to deepfakes and AI-generated content like a fish to water. Some notorious examples from the President seem harmless enough at face value: in one video, he skillfully plays soccer with Cristiano Ronaldo in the Oval Office. But it gets darker: in another, he commandeers a fighter jet while wearing a crown and offloads heaps of diarrheic shit on crowds of protestors in Times Square.
These ridiculous videos may not be as innocuous as they seem. The gullibility of voters regarding deepfakes becomes an issue when the content drifts toward maliciousness and becomes intertwined with baseless conspiracy theories. Recently, Trump released a controversial AI-generated video promising Americans access to all-healing "MedBed" hospitals (a debunked alt-right conspiracy claiming that the 1 percent have access to sci-fi-esque, cancer-curing hospital facilities). Earlier, in July, Trump posted an AI video of Obama being arrested in the Oval Office before appearing behind bars in an orange jumpsuit. Remind you of anything? It seems the PVV’s Geert Wilders is taking notes from his counterparts across the Atlantic.
How dangerous can media misinformation get? Trump was reportedly misled by footage showing chaos in Portland in September, with Fox News interspersing shots of a relatively peaceful rally with far more provocative footage from the 2020 George Floyd protests. This formed his rationale for sending the National Guard to curtail the bloodthirsty hipsters of western Oregon. One must ask: if senior politicians make drastic policy decisions based on deceptively spliced footage, what might they justify doing with, say, a hyper-realistic deepfake of Zohran Mamdani declaring Sharia law in New York City? For now, this question is a mere thought experiment; we should all hope it remains that way.
But the fundamental point is this: AI-generated deepfakes debase truth and reality. It is a process that the American president and certain European counterparts have adopted with alacrity. With the convenient confluence of the new tech overlords’ capabilities and certain political actors’ unscrupulousness, this debasement of truth and reality looks set to become the new norm. The problem, of course, is that a democratic society relies on truth and reality in order to make sound decisions. A Pandora’s box has been opened, and we increasingly live in a world weakly ordered by figments and shadows.