Instagram's Response To The Bianca Devins Murder Reminds Us It Profits From Horror

Once again, social media companies were in the news this week for failing to prevent horrific content from flowing freely across their platforms. This time it was Instagram that failed to stop graphic imagery of the murder of a 17-year-old from being shared far and wide on its platform. For companies that routinely tout their state-of-the-art content moderation technology, how is it that these vaunted tools keep failing so spectacularly? The answer is that social media platforms profit monetarily from horror. Every murder image, terrorism recruiting video, glorification of genocide, depiction of animal cruelty, human trafficking sale and other unimaginably horrific post puts real money in their coffers, meaning Silicon Valley and founders like Mark Zuckerberg directly profit from horror. In the absence of laws requiring them to remove such content, the Valley has little incentive to do better.

In the case of Bianca Devins, the 17-year-old was murdered last Sunday, and the individual charged with her murder distributed graphic images of the killing globally via social media, including Instagram. Yet Instagram appears to have done neither of the obvious things: it did not flag the images at upload and prevent them from being seen in the first place, nor did it rapidly remove them platform-wide and block reuploads once they were detected. Instead, the images circulated widely on Instagram, despite the company routinely touting the sophistication and capability of its automated filtering technologies.

For its part, the company emphasized the importance of its human moderators in the case even as it publicly touts the power of its automated algorithms. A spokesperson noted that once the company was made aware of the situation on Sunday, it removed the images that had been posted to the victim’s account, but waited until Monday, after law enforcement had confirmed details of the event, before it removed the alleged murderer’s account, giving him an entire extra day of infamy. It also noted that it was heavily dependent on information from law enforcement to determine what content to deem in violation of its policies – again reinforcing the importance of context and intent to its filtering decisions, which are very difficult for today’s automated systems to understand.

The company noted that once the suspect was publicly named there would likely be impersonation accounts created under his name, so its staff began actively searching for those accounts and deleting them, as well as monitoring related hashtags. Yet while parent company Facebook has heavily promoted its automated detection algorithms, the spokesperson noted that rather than relying exclusively on technology, the company depends heavily on reports from ordinary users to help it identify impersonation accounts.

Reinforcing just how little it has invested in addressing such situations, the company noted it has only 15,000 community operations staff to moderate a global community of more than 2 billion users – roughly one moderator for every 130,000 users.

Interestingly, the company appears to extend its monitoring beyond its own borders. A spokesperson noted that as it became aware of images related to the murder on other social media platforms, it added them to its content blacklists. Asked whether it receives feeds of content from other social platforms, whether platforms voluntarily exchange such content to assist in mutual removal, or whether it tasks staff with manually scouring other platforms for related content, the company declined to comment.

Instagram acknowledged that it relies on image hashing to blacklist content that violates its terms of service, including the Devins images. In its initial response, the company noted that it was using this hashing to prevent the image from being shared across its platform. Asked about media reports that the image was still widely available on Instagram despite this hashing, suggesting the technique was not working as well as the company claimed, a spokesperson did not dispute those reports but declined to comment further.
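Instagram has not disclosed how its hashing pipeline actually works, but a minimal sketch of hash-based blacklisting in general terms helps illustrate what the company says it is doing. The sketch below uses the open source Pillow and imagehash Python libraries; the function names, the in-memory blacklist and the distance threshold are illustrative assumptions, not Instagram’s implementation.

```python
# A minimal sketch of hash-based image blacklisting, assuming the open
# source Pillow and imagehash libraries. Instagram's real pipeline is not
# public; the names, threshold and in-memory set here are illustrative.
from PIL import Image
import imagehash

# Perceptual hashes of images that human moderators have already banned.
BLACKLIST: set[imagehash.ImageHash] = set()

def blacklist_image(path: str) -> None:
    """Record the perceptual hash of a banned image."""
    BLACKLIST.add(imagehash.phash(Image.open(path)))

def is_blacklisted(path: str, max_distance: int = 8) -> bool:
    """Flag an upload whose hash is within max_distance bits of any
    blacklisted hash. Unlike a cryptographic hash, a perceptual hash
    changes only slightly when an image is re-encoded, resized or
    lightly edited, so a small Hamming distance still matches."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - banned <= max_distance for banned in BLACKLIST)
```

The design point is that matching is approximate by construction: the platform tunes the distance threshold to catch re-encoded copies without flagging unrelated images.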

Asked about the error rate of its hashing algorithm and why Facebook and Instagram seemed to struggle so much with removing blacklisted content via their hashing algorithms, the company again declined to comment.

Refusing to disclose error rates even while touting the accuracy of those algorithms has become standard practice among social media companies. It allows them to reap positive media coverage lauding them as AI pioneers and technology geniuses while withholding any details that would permit external scrutiny – scrutiny that might show those vaunted algorithms to be far less accurate than claimed.

Asked why Instagram seemed to struggle so much with blocking blacklisted content given that commercially available similarity algorithms are far more robust, the company again declined to comment.

Asked why Instagram does not proactively filter uploads to automatically block violent images without additional human review, or, if it does, why those filters missed this image, the company again declined to comment.

In contrast, Facebook and Instagram don’t seem to be suffering the same spectacular failures when it comes to preventing known copyrighted content from being shared on their platforms. There is no steady drumbeat of weekly media coverage of the latest Hollywood blockbuster being freely available for download from Facebook.

Yet preventing such sharing is accomplished through those very same hashing systems.

Why is it that Facebook and Instagram’s hashing systems are able to prevent the upload of copyrighted content yet seem to fail so spectacularly when it comes to all other kinds of content?

The answer is that under the laws of most countries, Facebook faces substantial legal liability for failing to prevent copyrighted content from being uploaded, including considerable monetary damages. Facebook has no choice but to invest the necessary resources in preventing copyright infringement on its platforms.

In contrast, when it comes to horrific content, from graphic murder images and animal abuse to human trafficking, terrorism recruitment and other unimaginably horrific material, Facebook actually profits. Globally there are few laws penalizing it for hosting such content, and in reality the company earns very real monetary profit from such material.

As a shareholder, Mark Zuckerberg earned money in his own pocket every time the Devins murder images were shared.

Asked repeatedly over the years whether it would agree to forfeit and refund the revenue it earns from content it later deletes as a violation of its terms of service, the company has steadfastly refused, reminding us that it faces no legal obligation to remove such content and is loath to part with a lucrative revenue stream. After all, such horrific content tends to provoke a viral response that drives large numbers of users to consume, share, discuss and engage with it, generating a surge of monetizable visits that sells ads and puts money in Facebook’s coffers.

Putting this all together, the widespread distribution of the Bianca Devins murder images through social media reminds us that those platforms have few incentives to take content moderation seriously. They face few legal requirements to remove such content and in fact profit monetarily from its distribution.

As unconscionable as it might seem to the public, the graphic images of the horrific murder of a 17-year-old girl earned very real money for Instagram and its shareholders and executives. Mark Zuckerberg himself, by virtue of the shares he holds in the company, directly profited from the distribution of these images, as did every other Facebook shareholder.

If Facebook wanted to show it was serious about combating such imagery, an easy first step would be to contractually commit itself to fully refunding all revenue earned from activity related to posts it later removes as violations of its terms of service, as well as deleting all interest and behavioral advertising profile information it gained as a result of those engagements. An even better step would be to not only refund revenue and delete data acquired through violating content, but to pay a fixed fine based on the number of shares and views of that content into a fund that is distributed to international victims’ rights groups. Alternatively, perhaps governments could finally step forward and impose financial penalties on the distribution of such content just as they do for the illegal distribution of copyrighted works through their servers.

Turning revenue-generating toxic content into a monetary loss or legal liability for social media companies would reorient them to take their moderation efforts seriously. In 2019 there is simply no excuse for Instagram not to have proactively flagged those images as they were uploaded and prevented them from ever being seen in the first place. The image filtering tools commercially available today are extremely accurate at detecting such images and, coupled with human post-review for edge cases, would have kept them off Instagram entirely. In addition, today’s image similarity matching algorithms are accurate enough to automatically remove even the non-graphic imagery of her death that was blacklisted by the company’s human moderators. While such similarity matching can require human post-review for edge cases, it can be tuned to achieve near-100% match rates in cases like the Devins images, despite adversarial attempts by posters to defeat it. Yet such filtering imposes costs social media companies are unwilling to bear when they face no incentives to do so.
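To make the adversarial point concrete, here is an illustrative comparison, not Instagram’s code: an exact cryptographic hash breaks the moment a poster re-encodes, resizes or crops an image, while a perceptual hash usually survives such tweaks, which is why a tuned similarity threshold can still catch re-uploads. The file name and threshold below are hypothetical.

```python
# Illustrative only: compare how a cryptographic hash and a perceptual
# hash respond to a tiny adversarial edit. "banned.jpg" is a hypothetical
# placeholder for a blacklisted image.
import hashlib
from PIL import Image
import imagehash

original = Image.open("banned.jpg")
# Simulate a poster's evasion attempt: shave two pixels off each side.
tweaked = original.resize((original.width - 2, original.height - 2))

# Cryptographic hashes: any pixel change yields a completely new digest,
# so exact matching misses the re-upload entirely.
print(hashlib.sha256(original.tobytes()).hexdigest()[:16])
print(hashlib.sha256(tweaked.tobytes()).hexdigest()[:16])

# Perceptual hashes: the Hamming distance stays small, so a tuned
# threshold (here, <= 10 of 64 bits) still catches the re-upload.
distance = imagehash.phash(original) - imagehash.phash(tweaked)
print(f"perceptual distance: {distance}",
      "match" if distance <= 10 else "no match")
```

The threshold is the tuning knob the paragraph above describes: set it tight enough to avoid false positives, loose enough that cropped or re-encoded copies still land within range.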

In the end, until social media companies face real consequences for their failures and are no longer permitted to monetarily profit from horrific material, such situations will sadly continue to happen.