AI-Generated Election Content Is Here, And The Social Networks Aren’t Prepared

Earlier this year, on the eve of Chicago’s mayoral election, a video of moderate Democratic candidate Paul Vallas appeared online. Tweeted by an account called "Chicago Lakefront News," it seemed to show Vallas railing against lawlessness in Chicago and suggesting that there was a time when "no one would bat an eye" at fatal police shootings.

The video, which appeared authentic, was widely shared before the Vallas campaign denounced it as an AI-generated fake and the two-day-old Twitter account that posted it disappeared. It's impossible to say whether it had any impact on Vallas’s loss to progressive Brandon Johnson, a former teacher and union organizer, but it offers a lower-stakes glimpse of the high-stakes AI deceptions that could muddy the public discourse during the upcoming presidential election. It also raises a key question: How will platforms like Facebook and Twitter mitigate them?

That's a daunting challenge. With no laws regulating how AI can be used in political campaigns, it falls to the platforms to determine which deepfakes users will see in their feeds, and right now most are struggling to figure out how to self-regulate. “These are threats to our very democracies,” Hany Farid, an electrical engineering and computer science professor at UC Berkeley, told Forbes. “I don't see the platforms taking this seriously.”

Right now, most of the biggest social media platforms don’t have specific policies related to AI-generated content, political or otherwise.

On Meta’s platforms, Facebook and Instagram, when content is flagged as potential misinformation, third-party fact-checkers review it and are prompted to debunk “faked, manipulated or transformed audio, video, or photos,” regardless of whether the content was manipulated through old-school photoshopping or AI generation tools, Meta spokesperson Kevin McAlister told Forbes.

Similarly, Reddit will continue to rely on its policies against content manipulation, which apply to “disinformation campaigns, falsified documents and deep fakes intended to mislead.” YouTube will also remove election-related content that violates its misinformation policies, which expressly prohibit content that has been technically manipulated to mislead users and may pose a serious risk of harm.

At Twitter, owner Elon Musk published an update to the company’s synthetic and manipulated media policy in April, stating that tweets “may be” labeled if they contain misleading information and that the company will continue to delete those that are harmful to individuals or communities. The policy notes that media fabricated “through use of artificial intelligence algorithms” will be more heavily scrutinized.

So far, the one major social media company that has a more comprehensive policy aimed at moderating AI-generated content is TikTok.

In March 2023, TikTok announced a new “synthetic media policy” that requires creators publishing realistic-looking scenes generated or modified by AI to clearly disclose their use of the technology. For election content, it also bans AI-generated images that impersonate public figures for political endorsement.

TikTok started taking down content that doesn’t abide by the new rule in April. Content creators have some flexibility in how they make the disclosure: it can appear in the subtitles, captions or hashtags, as long as it is not misleading. The platform is not yet automatically banning accounts that share noncompliant content; for now, it issues a warning.

But while this kind of transparency is a first step toward alerting users to the origin of content, it might not suffice to prevent the spread of misinformation. “There will of course be bad actors who will work to evade any policy or standard,” said Renee DiResta, research manager at the Stanford Internet Observatory.

This is why maintaining integrity teams is key to addressing the challenge, DiResta argues. Twitter in particular could struggle with the next election: since Elon Musk took over, he has laid off entire teams that tackled misinformation and ended contracts with third-party content moderators.

The spread of misinformation has long been a problem for social media platforms, and the January 6 assault on the U.S. Capitol showed how a movement organized largely online could have deadly real-life consequences. But the 2024 presidential election will be the first in which campaigns and their supporters have access to powerful AI tools that can generate realistic-looking fake content in seconds.

“The disinformation has continued. We've thrown jet fuel on top of that problem in the form of generative AI and deepfakes,” said Farid.

However, experts caution against hyperbolic statements about how damaging AI-generated content could be in the next election.

“If we give the impression that disinformation campaigns using deep fakes will inevitably be successful, which they won’t, we may undermine trust in democratic systems,” said Josh Goldstein, a research fellow on the CyberAI team at Georgetown University’s Center for Security and Emerging Technology.

Beyond social media, search engines will also need to guard against AI-generated content.

Google has moved to exclude manipulated content from highlighted results such as knowledge panels and featured snippets. In Google Ads, manipulated media is prohibited, and advertisers must go through an identity verification process and include an in-ad disclosure of who is paying for the ad.

In May, Google CEO Sundar Pichai announced a new tool, “About this image,” to disclose when images found via search were AI-generated. The feature will also show when a particular image and similar ones were first indexed, and where else an image has been seen online, including on news, social media or fact-checking sites.

Pichai also announced that Google will soon start to automatically watermark images and videos created with its in-house generative models so users can easily identify synthetic content.

Google is not alone in embracing this approach. Watermarks are one of the core demands of the Content Authenticity Initiative, an alliance of more than 200 media, digital, content and technology organizations that promotes the adoption of an industry standard for content authenticity.
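To make the concept concrete, here is a toy sketch of invisible watermarking in Python, assuming a simple least-significant-bit scheme. Neither Google nor the Content Authenticity Initiative has disclosed its exact technique in this context, and production watermarks are far more robust than this illustration.

    import numpy as np

    def embed_watermark(pixels, bits):
        # Hide a short bit string in the least significant bit of the first pixels.
        marked = pixels.copy()
        flat = marked.reshape(-1)
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & 0xFE) | bit  # overwrite the lowest bit, imperceptible to the eye
        return marked

    def read_watermark(pixels, length):
        # Recover the hidden bits from the same pixel positions.
        return [int(v & 1) for v in pixels.reshape(-1)[:length]]

    image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in grayscale image
    tag = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical "synthetic content" marker
    assert read_watermark(embed_watermark(image, tag), len(tag)) == tag

A scheme this naive is easily destroyed by cropping or re-encoding, which is why the real systems under discussion aim for watermarks that survive editing.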

Going a step further, Farid has proposed making it mandatory for content creators to include a sort of “nutrition label” disclosing how their content was created. For example, a caption might note that the accompanying video was recorded with an iPhone 14 and edited with Adobe's AI image generator, Firefly.
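As an illustration only, a machine-readable version of such a label might look like the sketch below; the field names are hypothetical and do not correspond to any published standard.

    import json

    # Hypothetical "nutrition label" for a piece of media; field names are illustrative.
    provenance_label = {
        "capture_device": "iPhone 14",            # hardware that recorded the original footage
        "editing_tools": ["Adobe Firefly"],       # software used to modify the content
        "contains_ai_generated_elements": True,   # whether any portion was synthesized by AI
        "captured_at": "2024-01-15T12:00:00Z",    # hypothetical timestamp of original capture
    }

    # Such a label could be embedded in file metadata or rendered as a caption.
    print(json.dumps(provenance_label, indent=2))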

The outstanding question is whether tech companies will be able to self-regulate successfully or whether governments will need to intervene. “I would like not to need [governments],” Farid said. “But we still need help.”
