Twitter Takes On Deepfake Videos, But Does Its Policy Go Far Enough?

This article is more than 4 years old.

Last month Twitter announced that it would take on "manipulated media" — photos, videos and audio that have been altered in a way that changes the original message, meaning or purpose, or, more importantly, that could appear to show something that didn't actually happen. The policy is clearly meant to address the growing threat of so-called "deepfake" videos, which have increased both in sophistication and in the number being posted.

Deepfakes can now be created quite easily, and they can cause serious harm to a person's reputation by making someone appear to have said or done something they never did. Earlier this year a doctored video of House Speaker Nancy Pelosi was posted online in which she appeared to slur her words and seem inebriated.

Such threats are so great that last month two members of the Senate Intelligence Committee, Marco Rubio (R-Florida) and Mark Warner (D-Virginia), called upon technology companies to do a better job of combating the threat of deepfakes and other manipulated media.

Twitter's Response

On Monday, in a blog post, Del Harvey, vice president of trust and safety at Twitter, announced a draft of how the micro-blogging service will handle synthetic and manipulated media that purposely tries to mislead or confuse people.

According to the post, "Twitter may: place a notice next to Tweets that share synthetic or manipulated media; warn people before they share or like Tweets with synthetic or manipulated media; or add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated."

In addition, if Twitter believes a deepfake could threaten someone's safety or lead to serious harm, the company will remove it.

While Twitter's policy could help reduce the spread of such content on its platform, it should be noted that Facebook didn't remove the doctored video of Speaker Pelosi, and such content can still be found on other platforms such as Instagram and YouTube.

"The technology of deepfakes isn't inherently evil and does have applications in the real world," said Damien Mason, digital privacy advocate at ProPrivacy.

The technology behind deepfakes has already seen legitimate use in movies – but also in less "legit" ways.

"It could improve immersion by covering the faces of stunt actors and save costs in filmmaking, as proven when fans used the technology to replace Henry Cavill's mustache in Justice League reshoots to greater effect than Warner Bros' in-house efforts," added Mason. "The problem is the accessibility of the technology to the general public and the lack of regulation."

One solution could be for the various social media companies to present a "united front" against the dissemination of such content – as a way to keep it from going viral or otherwise spreading. However, so far the companies haven't been able to agree on how to handle the posting of deepfakes or other manipulated videos.

"Currently, Facebook, Snapchat, YouTube and other social media companies are struggling to balance freedom with expression against the wave of false information disseminated, turning a blind eye to the larger implications of such technology," warned Mason. "In fact, YouTube even houses tutorials to guide beginners on how to use deepfake software, as there are legitimate uses."

Legal Solutions

A legal solution may also be slow to come – due to conflicting privacy laws, not to mention questions over how such laws could be enforced even if they were enacted.

"Globally, laws pertaining to the digital world are lagging," explained Mason. "The UK offers victims a chance to sue for harassment, specific U.S. states such as California have passed a highly specific and almost unenforceable bill that makes it illegal to circulate synthetic videos within 60 days of an election, yet nowhere seems to target deepfakes with a specific law."

Instead, it could fall back to the tech companies to find common ground.

"If governments aren't going to step up to adequately prevent misuse of the technology, social media companies coming up with a universal policy would be the next best thing," said Mason. "Unfortunately, this would require unprecedented communication across platforms that would likely take longer than advised to get measures in place."

Fighting Tech With Tech

If the tech companies can't get on the same page, and laws are essentially unenforceable, then the best solution might be a tech solution. Just as technology created the problem, perhaps technology could address the spread of deepfakes and other manipulated video across social media platforms.

"Social media companies have had deepfakes on their radar since 2017, when the technology first showed signs of photorealism," noted Mason. "Naturally, worries over fake news overwhelmed all platforms since the Cambridge Analytica scandal, seeing Facebook and Twitter crackdown on the practice. Sadly, no platform foresaw the implications the tech would have when falling into the hands of its users throughout 2019 and have subsequently not worked hard enough to detect and remove the offending content."

Even if the technology to detect the deepfakes is developed, the technology to manipulate the videos could improve faster.

"At one level, the issue is the tech equivalent of 'taking a thief to catch a thief,'" warned Charles King, principal analyst at Pund-IT. "Deepfake videos and the like are due to the increasing availability and affordability of fairly complex computer and software technologies. The same steady improvements that allow us to carry the equivalent of what was once a supercomputer in our pockets enables our friends and neighbors to create and seamlessly edit professional quality photos and videos. And trolls, political activists and cyber criminals to build plausible deepfake files."

However, most of those technologies are relatively modest compared with high-end visualization and analytical systems, so it's entirely likely that effective solutions could be developed to detect and tag deepfake files.

"The biggest challenges, however, are scale and will," added King. "With social media and video sites cataloging tens of millions of new files per day, it would be fairly easy for professionally doctored fake files to be uploaded from hundreds of locations, increasing the possibility that some or even many might slip through."

Social media companies could be pressured to develop, and then use, better detection algorithms, but Mason warned that relying on automation may not address the problem effectively enough.

"Twitter's newfound dedication is something to praise, even if its current state isn't wholly reliable," said Mason. "Still, we first need to iron out policies that protect victims instead of allowing humiliating technology to run rampant in the name of free speech."

A larger issue is the apparent lack of interest among social media sites and their owners in policing deepfake images.

"When sites like Facebook or Twitter can't be bothered to remove what are demonstrably false statements and potentially hurtful lies from high profile users, can they be trusted to take on exponentially more difficult deepfake detection and response," pondered King. "The current answer seems to be , 'no.'"
