


Zuckerberg's Manifesto And Our Algorithmically Controlled Future


Earlier today Mark Zuckerberg released his latest manifesto, this time focused on the future of how the company moderates its platform. Perhaps the most interesting revelation in the lengthy screed is that despite steadfastly refusing to release any details on how often its algorithms are wrong, the company still plans to transition its content moderation efforts to a largely automated process within the coming year. More dangerously, the company also acknowledged that it uses its AI algorithms to flag content that pushes its acceptable boundaries and to autonomously reduce the visibility of that content. As Facebook increasingly reorients itself toward an AI future in which algorithms make all decisions across the site, what does this future portend for democracy?

Facebook has made no secret of its intent to automate its content moderation efforts. Replacing its army of human staffers around the world with software algorithms will significantly reduce personnel costs, allow the company to scale to new content volumes and languages, enforce its rules more consistently, eliminate concerns over the psychological toll of content review on human moderators, and, most importantly, allow it to filter content the moment it is uploaded, rather than respond long after it has circulated widely.

Most importantly, however, changing from a human workforce to an automated one will allow Facebook to centralize its moderation efforts behind an opaque wall without the danger that they will leak to the press. Despite operating as a private company, Facebook still faces constraints on the kinds of content rules it can put into place, out of concern that its large human workforce might leak particularly controversial or problematic rules. Time and again, the only glimpses the outside world has had into the specific details of its content guidelines, from its moderation manuals to its Trending Topics feature, have come from leaks. Replacing those humans with algorithms will vastly restrict the number of people with knowledge of the company's content rules, allowing it to finally shroud its operations in the total secrecy befitting a digital dictatorship.

As with every Facebook announcement regarding its AI efforts, Zuckerberg's manifesto contains numerous statistics touting the success of those efforts without any mention of whether they are actually accurate. At first glance, having AI algorithms remove 96% of prohibited nudity sounds fantastic, until you notice the caveat: the algorithms are credited merely with proactively flagging 96% of the nudity content that the company ultimately removed. This says nothing about how much prohibited nudity across Facebook was never detected at all, only that the algorithms flagged most of the content the company eventually deleted.

The far larger question is how many posts the AI incorrectly flagged. A naïve algorithm could simply flag every post as prohibited and catch 100% of the bad content, a perfect score by this metric.

It is the false positive rate, the share of benign posts incorrectly flagged, that reveals whether the algorithm is right more often than it is wrong, and this is the one number that Facebook has steadfastly refused to provide for any of its algorithms. For a company that publishes reams of statistics daily, the fact that it has refused again and again to release even the most basic estimates of its false positive rates does not bode well for its own confidence in their accuracy.
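To make the distinction concrete, here is a toy calculation in Python. Every number is invented for illustration, since Facebook publishes none of them: a classifier can proactively flag 96% of the content that ends up removed while still being wrong about the vast majority of the posts it flags.

```python
# Toy illustration with invented numbers: a high "proactive flagging" rate
# (the statistic Zuckerberg cites) can coexist with dismal precision.

def moderation_stats(tp, fp, fn, tn):
    """tp: bad posts flagged; fp: benign posts flagged;
    fn: bad posts missed (e.g. only user-reported); tn: benign posts untouched."""
    removed = tp + fn                      # everything ultimately taken down
    proactive_rate = tp / removed          # the figure the manifesto touts
    false_positive_rate = fp / (fp + tn)   # the figure Facebook never reports
    precision = tp / (tp + fp)             # how often a flag is actually right
    return proactive_rate, false_positive_rate, precision

proactive, fpr, precision = moderation_stats(tp=960, fp=9_000, fn=40, tn=990_000)
print(f"proactively flagged: {proactive:.0%}")  # 96%, sounds fantastic
print(f"false positive rate: {fpr:.1%}")        # ~0.9% of benign posts hit
print(f"precision:           {precision:.0%}")  # ~10%: 9 of 10 flags are wrong
```

At Facebook's scale, even a false positive rate under 1% would translate into millions of legitimate posts wrongly flagged every day, which is precisely why the missing number matters.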

Moreover, it turns out that the company does not actually apply machine learning to all areas of its content moderation. Its counter-terrorism efforts rely on machine learning for textual posts, but image and video posts are matched only through exact-match hash signatures, meaning the company can only block material it has seen before, not novel content.
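Here is a minimal sketch of what exact-match hash blocking looks like, assuming an ordinary cryptographic hash such as SHA-256 (the article does not specify which signature scheme Facebook uses). The weakness is structural: change a single byte and the hash no longer matches.

```python
# Minimal sketch of exact-match hash blocking: a file is blocked only if its
# hash exactly matches a signature already in the blocklist.
import hashlib

def signature(file_bytes: bytes) -> str:
    return hashlib.sha256(file_bytes).hexdigest()

# Blocklist built from previously identified material (contents hypothetical).
BLOCKLIST = {signature(b"previously-seen prohibited image bytes")}

def is_blocked(file_bytes: bytes) -> bool:
    return signature(file_bytes) in BLOCKLIST

print(is_blocked(b"previously-seen prohibited image bytes"))   # True: exact match
print(is_blocked(b"previously-seen prohibited image bytes."))  # False: one byte differs
```

Any re-encode, crop, or recompression produces a new hash, so truly novel uploads, or even trivially altered copies of known material, sail straight through.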

The company has also steadfastly refused to permit external assessment or validation of its content moderation algorithms. When asked whether it would permit even a small external team of academic experts under NDA to evaluate its tools, the company's answer has always been silence.

It is therefore frightening that the company has plowed forward with such reckless abandon in deploying these algorithms across its site. According to Zuckerberg, "over the course of our three-year roadmap through the end of 2019, we expect to have trained our systems to proactively detect the vast majority of problematic content." In short, by the end of next year the company will have largely outsourced its routine moderation efforts to automated systems.

For a company that apparently has so little faith in its algorithms that it won’t publish their false positive rates, that’s a scary sell to a concerned public.

Even more troubling is the company's admission that it suppresses the visibility of content that is not actually prohibited, but that merely approaches the line of acceptability. According to Zuckerberg's letter, the company already uses algorithms to flag posts that are allowable under its rules but that push the boundaries; these posts are silently suppressed, their visibility decreased substantially. In essence, this grey area of "approaching the line" content represents a new set of secret, unpublished rules.
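One way such a policy could be implemented is a two-threshold scheme: one score above which content is removed outright, and a lower, secret score above which content stays up but is quietly downranked. The thresholds and the linear falloff below are hypothetical; the manifesto describes the behavior, not the mechanism.

```python
# Hypothetical two-threshold demotion scheme. A model assigns each post a
# violation score in [0, 1]; both thresholds are invented for illustration.
REMOVE_THRESHOLD = 0.90      # at or above: taken down under the written rules
BORDERLINE_THRESHOLD = 0.70  # the secret line: allowed, but suppressed

def distribution_multiplier(violation_score: float) -> float:
    """Fraction of normal feed reach a post receives."""
    if violation_score >= REMOVE_THRESHOLD:
        return 0.0   # removed outright
    if violation_score >= BORDERLINE_THRESHOLD:
        # Permitted content in the grey zone: the closer to the line,
        # the less distribution it gets -- silently.
        span = REMOVE_THRESHOLD - BORDERLINE_THRESHOLD
        return 1.0 - (violation_score - BORDERLINE_THRESHOLD) / span
    return 1.0       # fully distributed

print(distribution_multiplier(0.50))  # 1.0  -> complies, full reach
print(distribution_multiplier(0.85))  # 0.25 -> complies, quietly demoted
print(distribution_multiplier(0.95))  # 0.0  -> removed
```

The user in the middle case has broken no published rule, yet three quarters of their audience silently disappears.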

Once again, we find that the rules of the road in the social media world are not only in constant flux, but that even users who adhere precisely to the published rules are still subject to secret rules defined by hidden algorithms.

Perhaps most notable of all, while the company mentions the creation of independent bodies to hear appeals, Zuckerberg's vision does not include democratizing its acceptable-speech rules or giving its two billion users any kind of vote on what should or should not be allowable.

Putting this all together, the future of social media is one in which acceptable speech is determined by algorithms. Even speech that complies with all of the written rules may still fail secret, unpublished ones and be penalized. As the army of human moderators is replaced with algorithms whose keys are held by a trusted few, the little oversight that exists today will disappear entirely. In the end, the digital dictatorships that rule the online world will finally rid themselves of their last shreds of accountability and exert total control over what a quarter of the earth's population can see and say online.