Do We Need Laws Forcing Audits When Social Media Companies Fail To Remove Horrific Content?

What happens when social media companies fail to remove hate and horror from their platforms? When Facebook failed to stop graphic images of the Bianca Devins murder from circulating on Instagram earlier this month, the company offered its usual refusal to answer questions about why it missed the images, issued its usual contrite apology and moved on, safe in the knowledge that it faces no real pressure to do better. Indeed, given that Facebook, its executives and its shareholders profit monetarily from hate and horror, including the spread of the Devins images, the company has a strong disincentive to improve. What if we had laws that required external postmortem audits of social media companies each time they failed to remove horrific content? Such audits would not only offer insight into what went wrong, they would also create a powerful incentive for companies to do better.

In one of the sad ironies of the digital age, social media companies actually reap profits from the hate and horror they profess to wish they could get rid of. Every terrorist recruitment post, every hate speech post, every threat of violence, every sale of a young child to the highest bidder, every piece of hate and horror that crosses their platforms earns them revenue from advertising sales and provokes users to reengage, posting and sharing and consuming content with a voracious appetite.

It is therefore little wonder that social media companies have done so little to combat the spread of such content on their platforms. Despite employing some of the best AI minds on the planet and producing some of the most advanced innovations in the field of content understanding, when it comes to content moderation these companies deploy tools straight from the Stone Age, with accuracy and robustness rates to match.

The companies have the technology to wipe away much of the hate and horror from their platforms but choose not to use it. In some cases, they even provide other companies with tools to filter hate and horror, yet refuse to deploy those same tools on their own platforms.

Of course, why should they? Such content is good for business, earning revenue and keeping their billions of users engaged.

What might happen if legislation were passed that forced companies to submit to an external audit by an independent board of reviewers, over which they had no control or influence, each time they failed to remove hate or horror from their platforms? Any incident in which prohibited content accumulated more than 100 posts or shares would trigger the provision. The auditors would prepare a public report detailing precisely what steps the company took to combat the spread of the content and exactly what went wrong in its algorithms, its human review processes and its policies.

While ongoing regular audits would be ideal, requiring tactical postmortem audits would play a critical role in shedding light on why companies like Facebook seem utterly unable to combat the spread of horrific content across their platforms, despite having the tools to do so.

Such audits would be particularly useful for the insight they would offer into the companies’ current approaches to content moderation. For example, although Facebook publicly touts its algorithmic counterterrorism efforts, its actual implementation for imagery rests largely on a rather feeble and easily bypassed content hashing effort, built on a small database of fewer than 100,000 images, many of them slightly modified duplicates. Few policymakers or members of the public are aware of just how rudimentary, primitive and easily fooled Facebook’s algorithms really are, because the company has long refused to share the accuracy rates of its algorithms or permit external auditing.
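To see why an exact-match hashing approach is so easily bypassed, consider a minimal sketch of the general technique. Facebook has not published the details of its own system, so everything below, including the image bytes and the flagging function, is purely illustrative: a database of cryptographic digests of known images can only flag byte-for-byte identical copies, so re-encoding, cropping or altering a single pixel changes the digest entirely and slips past the filter.

    import hashlib

    def sha256_hex(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Hypothetical database of known prohibited images, stored as exact digests.
    original = b"...pretend these are the bytes of a known prohibited image..."
    known_hashes = {sha256_hex(original)}

    def is_flagged(image_bytes: bytes) -> bool:
        """Exact-match check: flags only byte-for-byte identical copies."""
        return sha256_hex(image_bytes) in known_hashes

    # Changing a single byte -- one pixel, a re-encode, a trivial crop --
    # yields a completely different digest, so the altered copy is not caught.
    altered = original[:-1] + b"?"
    print(is_flagged(original))  # True
    print(is_flagged(altered))   # False: the trivially modified copy slips through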

In the case of the Devins images, why did Instagram miss so many of them? Is it simply hashing the images, along with a few basic alterations of each, and then looking for exact matches? That would explain the company’s high miss rate, but it would be inexcusable for a company of Facebook’s technical stature. If the company is performing industry-standard similarity matching, what perturbations is its scoring function robust to? How has it tuned its algorithms, and has it biased them against false positives to reduce human reviewer time at the cost of missing significant amounts of matching content?
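As a point of contrast, here is a minimal sketch of what similarity matching can look like, using a simple “average hash” built with the Pillow imaging library. This is an illustrative assumption, not Facebook’s method; the company has not disclosed which perceptual hashing scheme, if any, it uses, and the threshold value below is hypothetical. The point is that such a hash survives small perturbations like re-encoding or mild resizing, and the Hamming-distance threshold is precisely where the false-positive-versus-miss trade-off described above gets tuned.

    from PIL import Image  # Pillow; assumes images are available as local files

    def average_hash(path: str, hash_size: int = 8) -> int:
        """Perceptual 'average hash': shrink to an 8x8 grayscale thumbnail,
        then set one bit per pixel depending on whether that pixel is
        brighter than the thumbnail's mean."""
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        """Number of bits on which two hashes differ."""
        return bin(a ^ b).count("1")

    # The threshold is where the tuning happens: a low value biases the system
    # against false positives but misses more altered copies; a higher value
    # catches more perturbations at the cost of more human review time.
    THRESHOLD = 10  # hypothetical value, out of 64 bits

    def matches_known(candidate_path: str, known_hashes: list[int]) -> bool:
        """Flag the candidate if it is within THRESHOLD bits of any known image."""
        h = average_hash(candidate_path)
        return any(hamming(h, k) <= THRESHOLD for k in known_hashes)

Even a scheme this simple tolerates the re-encodes and small crops that defeat exact matching, which is why the questions above about robustness and threshold tuning matter.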

None of these questions can be answered today, but such external audits would allow the outside world to review how Facebook conducts its content moderation.

Most importantly, such audits would allow the outside world to hold Facebook accountable for poor implementations and for practices that do not comport with industry standards.

Such audits would also help the external community assist companies like Facebook in doing better, opening opportunities for collaboration and feedback.

Asked whether Facebook would permit external auditing of its algorithms, including an examination of what went wrong in the Devins case, and whether it would consider reporting the accuracy of its algorithms using industry-standard metrics, a spokesperson said the company had no comment.

Of course, the details of any such legislation would determine how useful it would be. Given the number of loopholes, exemptions and exceptions technology companies managed to build into GDPR, it is almost a given that any auditing legislation would be weakened by those companies to the point that it posed no real threat to their business models.

Putting this all together, social media companies today have little reason to do better at combating hate and horror on their platforms: they profit monetarily from it and face no meaningful pressure to improve or to explain their failures. Legislation requiring automatic, independent external audits each time they failed to remove prohibited content would finally place pressure on the companies to take moderation seriously and deploy meaningful solutions, and it would give the external community the insight it needs to assess Facebook’s technical approaches.

In the end, until companies have a reason to take content moderation seriously, they are simply not going to invest in it.