Do We Need New Laws Forcing The External Auditing Of Social Media Algorithms?


When it comes to Silicon Valley’s algorithmic failures, from demographically biased services to recommender systems that promote terrorism to failed content moderation systems, it can be nearly impossible to obtain statistics quantifying the extent of these problems. While the companies hold mediagenic press conferences to promote their algorithmic prowess and court journalists to lavish praise upon them for their claimed successes, hard statistics verifying those claims are nearly nonexistent. In essence, the only understanding of how well or poorly the companies’ algorithms perform comes from the carefully couched public statements of those very companies. Requests for more precise statistics or external audits are rebuffed at every turn. Do we need new laws requiring external audits of social platforms?

Imagine if pharmaceutical companies were permitted to self-report their successes and failures. What if every death due to one of their products were written off to extenuating circumstances, with no liability or consequence for the company? What if the US government accepted statements like “99% of patients with successful outcomes benefited from this product” as its sole metric for deciding whether a drug could be sold or should be withdrawn?

Strangely, that is precisely the standard to which social media companies are held. Social platforms are permitted to self-audit and self-report their successes and failures. Each failure is written off as a “learning experience,” no matter how much harm it caused, and every statement regarding accuracy is couched in vague wording designed to ensure it communicates no actual detail. Even a genocide directly facilitated through their platforms is merely dismissed as an “opportunity to do better.”

Every request for social media companies to provide real statistics on the performance of their tools is met with either silence or outright dismissal. Even the most basic of disclosures, like the false positive rate of a counter-terrorism filter, which would show how often it wrongly flags legitimate content, is rejected out of hand.
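To make concrete what such a disclosure would involve: a false positive rate is a routine calculation once a sample of a filter’s decisions has been hand-labeled. The sketch below uses entirely hypothetical counts to illustrate the arithmetic; it does not reflect any platform’s actual data.

```python
# Hypothetical illustration: computing the false positive and false negative
# rates of a counter-terrorism filter from a hand-labeled sample of decisions.
# All counts below are invented for illustration only.

true_positives = 1_800     # flagged items that really were terrorist content
false_positives = 4_200    # legitimate posts the filter wrongly flagged
true_negatives = 994_000   # legitimate posts correctly left alone
false_negatives = 500      # terrorist content the filter missed

# False positive rate: share of legitimate content that gets wrongly flagged.
fpr = false_positives / (false_positives + true_negatives)

# False negative rate: share of terrorist content that slips through.
fnr = false_negatives / (false_negatives + true_positives)

print(f"False positive rate: {fpr:.2%}")
print(f"False negative rate: {fnr:.2%}")
```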

Companies face no legal requirement to release any details of how well or poorly their algorithms perform, so they have no need to offer such statistics to the public.

Worse, social media companies face few requirements to be forthright or clear in their public statements regarding the accuracy of their algorithms.

Take Facebook’s “99%” statistic, which has become one of its most famous counter-terrorism metrics. The company has repeatedly touted that “in Q1 we took action on 1.9 million pieces of ISIS, al-Qaeda and affiliated terrorism propaganda, 99.5% of which we found and flagged before users reported them to us.” The New York Times at the time summarized this as “Facebook’s A.I. found 99.5 percent of terrorist content on the site, leading to the removal of roughly 1.9 million pieces of content in the first quarter,” while the BBC did slightly better with “the firm said its tools spotted 99.5% of detected propaganda posted in support of Islamic State, Al-Qaeda and other affiliated groups, leaving only 0.5% to the public.” Both statements still stand to this day, meaning a visitor to the Times would come away with the understanding that “99.5 percent of terrorist content” on Facebook was removed by the company’s vaunted algorithms, while a BBC reader could easily come away falsely believing that just 0.5% of terrorism material was left for the public.

In reality, the company subsequently confirmed that this metric refers only to the ISIS and al-Qaeda content it ultimately deleted, most of it caught through simple hash-based blacklists, and reflects neither the actual density of terrorism content on the platform nor the ability of its algorithms to detect new content.
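A back-of-the-envelope calculation shows why the two readings diverge. The figures below are invented solely to illustrate how a 99.5% “found before users reported it” share of removals can coexist with a far smaller share of all terrorist content actually being removed.

```python
# Hypothetical illustration of why "99.5% found before users reported it"
# is not the same as "99.5% of terrorist content was removed."
# All figures are invented.

removed_total = 1_900_000          # pieces removed in the quarter
removed_proactively = 1_890_500    # removed pieces flagged before any user report

proactive_share = removed_proactively / removed_total
print(f"Share of removals found proactively: {proactive_share:.1%}")  # ~99.5%

# The figure above says nothing about content that was never detected at all.
# Suppose, hypothetically, another 2,000,000 pieces went undetected:
never_detected = 2_000_000
share_removed_overall = removed_total / (removed_total + never_detected)
print(f"Share of all terrorist content removed: {share_removed_overall:.1%}")  # ~48.7%
```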

In many ways this is no different from Twitter’s habit of redefining its user metrics periodically in order to present a rosier public image of its declining fortunes.

Despite being publicly traded companies, social media platforms face no requirements to publish actual accuracy metrics for their core algorithms, nor are they required to submit to external audits of those systems to confirm the numbers they report.

It is ironic that while their financials must be externally verified, the algorithms that drive those financials require no such scrutiny.

What might it look like if publicly traded social platforms were required to report industry-standard accuracy metrics for each of their core algorithmic systems every quarter? These metrics could range from the false positive rate of their terrorism removal systems to the false negative rate of their facial recognition systems across different demographics.
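As a rough sketch of what such a quarterly disclosure might contain: the system names, demographic groups and numbers below are all hypothetical, meant only to show the shape of the reporting, not any actual figures.

```python
# Hypothetical sketch of a quarterly algorithmic accuracy disclosure.
# System names, demographic groups and all numbers are invented for illustration.

quarterly_report = {
    "terrorism_removal": {
        "false_positive_rate": 0.004,   # legitimate posts wrongly removed
        "false_negative_rate": 0.120,   # terrorist content missed
    },
    "facial_recognition": {
        # false negative rate broken out by demographic group
        "false_negative_rate_by_group": {
            "group_a": 0.010,
            "group_b": 0.080,
            "group_c": 0.150,
        },
    },
}

for system, metrics in quarterly_report.items():
    print(f"{system}:")
    for name, value in metrics.items():
        if isinstance(value, dict):
            for group, rate in value.items():
                print(f"  {name}[{group}]: {rate:.1%}")
        else:
            print(f"  {name}: {value:.1%}")
```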

In addition, companies could be required to submit to regular external audits of these figures to ensure they are being faithfully and accurately reported.

After all, social media companies are entirely dependent upon these algorithms, so their successes and failures determine the success or failure of the companies investors are putting their money into. It makes sense that those investors should have greater visibility into such metrics.

Putting this all together, today’s arrangement of unaccountable social media platforms free to self-audit and self-report their successes and failures is simply untenable when those platforms increasingly control what we see and say online and when their failures support terrorism and undermine democracy itself.

In the end, requiring regular public algorithmic accuracy metrics and regular external audits would go a long way towards helping the public, policymakers and the companies’ own investors better understand the true reality behind the curtain.