
Meta Will Require Political Advertisers To Disclose When They Use AI


Election Day 2024 is now less than a year away and there are growing concerns that artificial intelligence could be employed in nefarious ways. A new poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy found that nearly 6 in 10 adults (58%) believe AI tools could increase the spread of false and misleading information during next year's elections.

AI could be employed to micro-target political audiences, mass-produce persuasive messages, and even generate realistic fake images and videos in seconds.

This week, Facebook parent Meta announced that it would attempt to tackle the issue head-on, introducing a new policy that will require advertisers to disclose whether a social issue, electoral or political ad posted on Facebook or Instagram contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered.

This will include depicting a real person as saying or doing something they did not say or do; depicting a realistic-looking person that does not exist or a realistic-looking event that did not happen; or altering footage of a real event that happened, Meta explained in a blog post on Wednesday.

The social network will also require disclosure of any depiction of a realistic event that allegedly occurred but that is not a true image, video, or audio recording of the event.

Meta vowed to remove content that violates its policies, whether it was created by AI or a person, and said its independent fact-checking partners will review and rate viral misinformation.

Meta will not allow an ad to run if it has been rated False, Altered, Partly False, or Missing Context. However, advertisers can still adjust the size of an image, crop it, or make similar changes without disclosure, unless those changes are consequential or material to the claim made in the ad.

"If we determine that an advertiser doesn't disclose as required, we will reject the ad and repeated failure to disclose may result in penalties against the advertiser. We will share additional details about the specific process advertisers will go through during the ad creation process," Meta said in its post.

Confronting Misinformation

AI could make the spread of misinformation significantly easier, and Meta's policy is a step in the right direction.

Earlier this year, a deepfake of Florida Governor Ron DeSantis dropping out of the race made the rounds on social media, and while it was easy to spot as fake, the concern is that the technology is rapidly improving. AI could make it easier than ever to produce such deceptive videos.

"We've already seen politicians take advantage of AI and deep fakes, leaving voters confused and questioning what is true," Eduardo Azanza, CEO of face and voice authentication platform Veridas, said in an email. "Voters have the right to make political decisions on the truth and leaving AI-generated content unlabeled creates a powerful tool of deception, which ultimately threatens democracy. It is important for other media companies to follow the steps of Meta and place guardrails on AI. That way, we can build trust in technology and secure the sanctity of elections."

Meta's new policy has not come a moment too soon.

"With Meta joining Google in requiring political ads to disclose the use of AI, we are on track to establish a more trustworthy and transparent media landscape. This move could not come at a more important time, with the 2024 U.S. Presidential elections approaching and political campaigns ramping up," Azanza explained.

Still More Needs To Be Done

Though Meta's new policy will require official campaigns to disclose the use of AI, it appears unlikely to address the fact that anyone could still post AI-manipulated content on its social networks.

Third parties may also operate in the shadows.

"Much of political posting isn't done by the politician but by staff, and most political outreach is done by firms hired to do the work," technology industry analyst Rob Enderle of the Enderle Group warned.

"I don't see what difference it makes to the observer as to whether it is done by AI or not, the bigger question is whether the ad is from who it appears to be from and not a hoax. Thus the watermark announcement they did at the same time is far more important as a result," Enderle continued. "The only true benefit might be that if the AI-driven work is substantially better or worse than that created by humans it would either promote or discourage the use of AI in that fashion."
