Facebook's Failed AI Showcases The Dangers Of Technologists Running The World

Amidst the barrage of criticism over why its highly touted AI algorithms failed to flag the New Zealand video, Facebook issued a statement Wednesday night that attempted to explain away its liability as merely a limitation of its AI training data and a shortage of content reviewers. Yet, in reality, Facebook’s response documented in exquisite detail all of the reasons it is so dangerous to increasingly cede the operation of our digital world to technologists who lack the most basic understanding of how the real world functions.

After years in which Facebook touted its immense advances in AI and bet the company’s entire future on AI-first content moderation, its abject failure last week to flag the New Zealand video has led to a series of concessions from the company that its AI tools simply aren’t nearly as advanced as it has been claiming.

The company’s statement Wednesday night revealed a number of fascinating insights into why the company’s content moderation efforts have failed so spectacularly in so many ways.

When asked whether the company believed it had a sufficient number of human moderators to review the content flowing across its platform, a spokesperson declined to comment. Yet, in its statement Wednesday night, Facebook dismissed the idea of introducing the kind of broadcast delay used by television networks to catch inappropriate content, arguing that its “millions of Live broadcasts daily” are simply far too much for its small team of content moderators to review.

The company’s argument that it has too few content moderators to adequately review all of the content posted to its platform each day is precisely the point that an increasing number of governments have been raising about the company. In its race to maximize profitability, Facebook spends far too little on content moderation compared to the resources it expends on hosting, promoting, providing access to and monetizing all of that content.

The company’s argument that it cannot introduce a delay to its livestreaming product because it lacks a sufficient number of reviewers to review the streams is at least finally an admission by the company that it has far too few reviewers.

Facebook devotes an entire section of its report to noting that “during the entire live broadcast, we did not get a single user report.”

The company has long relied on its users themselves to report inappropriate content, on the assumption that its two billion users will naturally report every piece of bad content the instant it appears on its servers. Yet, as the company has been warned again and again, that’s not how the real world works. In fact, the company itself has repeatedly acknowledged that content like revenge pornography and illegal weapon, drug and animal part sales all take place without users immediately flagging that content for removal.

Much of this distribution takes place away from the public eye in private groups and pages that are not widely known. The members of such groups are there for the purpose of engaging in illegal or unethical activity or consuming specific content that they do not want interfered with. They certainly aren’t going to be reporting to authorities the very content they know they shouldn’t be sharing or consuming.

For Facebook to even offer the defense that the video wasn’t flagged by its users suggests it simply does not understand how its platform is being used.

This is especially frightening because it shows just how little Facebook’s Silicon Valley technologists understand about how the real world operates, and it suggests that this will simply happen again and again.

The company also conceded how ill-prepared it was for the novel idea that users might try to work around its content blacklists. In its statement, Facebook claims to have been caught unawares by users trying to work around its blacklisting filters by altering the video, remixing it with other content and simply capturing it from their screens. It offered that “we’re learning to better understand techniques which would work for cases like this with many variants of an original video” and noted that it had begun to deploy audio matching in addition to video matching.

Companies like Facebook are well aware of the incredible lengths users will go to in order to post illicit content on their platforms. Indeed, they conduct extensive research and invest heavily in the cat-and-mouse game of keeping copyrighted content from being shared, given their legal liability for such sharing. Even filming a video as it plays on a computer screen, which Facebook mentions as being particularly difficult to catch, is an extremely common approach that the company is well aware of when it comes to copyrighted content.

These aren’t unprecedented new approaches, as the company claims. They are the age-old practices it knows only too well and has invested heavily in combating when it comes to the copyrighted content for which it is actually liable.
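
To make concrete just how well understood this problem is, the minimal sketch below shows the kind of perceptual hash matching the industry has long used to catch re-encoded or re-filmed copies of known content. It assumes the open source Pillow and imagehash libraries; the file names and distance threshold are purely illustrative, and this is in no way a description of Facebook’s actual system.

```python
# A minimal sketch of perceptual hash matching, assuming the open source
# Pillow and imagehash libraries. File names and threshold are illustrative.
# Exact (cryptographic) hashes break the moment a video is re-encoded,
# cropped or filmed off a screen; perceptual hashes tolerate small
# alterations because near-duplicate frames differ by only a few bits.
from PIL import Image
import imagehash

# Hash of a frame from the original, blacklisted video.
blacklisted_hash = imagehash.phash(Image.open("original_frame.jpg"))

# Hash of a frame from an uploaded variant (re-encoded, remixed or
# captured off a screen).
candidate_hash = imagehash.phash(Image.open("uploaded_frame.jpg"))

# Subtracting two imagehash values gives the Hamming distance between them.
# A small distance (threshold chosen empirically) indicates a likely match.
HAMMING_THRESHOLD = 10
if blacklisted_hash - candidate_hash <= HAMMING_THRESHOLD:
    print("Frame likely matches blacklisted content; flag for review.")
else:
    print("No match at this threshold.")
```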

Audio matching is a standard complement to visual matching for video that provides protection against extensive visual modification or the republication of audio alone, so it is extremely surprising to see Facebook claim that it did not attempt to deploy such tools from the very beginning. The company did not immediately respond to a request for comment on why it waited so long to deploy them.
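
As a rough illustration of why audio matching is such a natural complement to visual matching, the toy sketch below fingerprints a track by its dominant spectrogram peaks using only numpy and scipy. Production fingerprinting systems are far more sophisticated; the window size, sample rate and tolerance here are assumptions chosen purely for illustration.

```python
# Toy illustration of audio fingerprinting as a complement to visual
# matching, using only numpy and scipy. Production systems are far more
# sophisticated; the window size, sample rate and tolerance are assumptions.
import numpy as np
from scipy.signal import spectrogram

def fingerprint(samples: np.ndarray, sample_rate: int) -> np.ndarray:
    """Return the dominant frequency bin for each short time window."""
    _, _, spec = spectrogram(samples, fs=sample_rate, nperseg=2048)
    return np.argmax(spec, axis=0)  # one peak index per time slice

def likely_same_audio(fp_a: np.ndarray, fp_b: np.ndarray,
                      tolerance: float = 0.8) -> bool:
    """Crude comparison: fraction of aligned windows sharing a peak bin."""
    n = min(len(fp_a), len(fp_b))
    if n == 0:
        return False
    return np.sum(fp_a[:n] == fp_b[:n]) / n >= tolerance

# Hypothetical usage, where `original` and `reupload` are mono PCM arrays
# sampled at 16 kHz:
#   likely_same_audio(fingerprint(original, 16000), fingerprint(reupload, 16000))
```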

Perhaps the most striking example of why Silicon Valley’s technologists are failing so spectacularly against the misuse of their platforms lies in Facebook’s admissions regarding why its AI terrorism filter failed.

The company claims its AI filtering failed to flag the video because the company lacked a sufficiently large training library of such videos to adequately train its AI algorithms.

To put that more plainly, Facebook stated that it trains its terrorism and violence detection algorithms by feeding them only the tiny number of videos of past attacks.

Current AI approaches for machine vision require massive amounts of highly diverse training data and counter-examples to build even moderately robust recognition algorithms. When training an algorithm to recognize a dog or a tennis ball or a person, companies can simply crawl the web or license a stock photo gallery to acquire millions of training examples, ensuring the algorithm has sufficient data to work with (though this data is vastly biased towards Western societies and represents little of the world’s diversity).

No serious AI researcher would even contemplate attempting to build a dog recognition model from scratch by handing it a few dozen photographs of golden retrievers and assuming the model could then recognize all dog breeds with high accuracy. Transfer learning can assist, but here again, a sufficiently broad and vague category like “violence” or “terrorism” requires a huge amount of training data to represent the totality of the space, and transfer learning from largely unrelated content is difficult at best.
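
For readers unfamiliar with the approach, the sketch below shows what transfer learning typically looks like in practice, assuming PyTorch and torchvision and a hypothetical directory of labeled video frames. Even with a pretrained backbone doing most of the work, the new classification head still needs a large and representative labeled set for a category as broad as “violence”; a few dozen attack videos cannot provide that.

```python
# A minimal transfer learning sketch using PyTorch and torchvision: a
# pretrained backbone is frozen and only a new classification head is
# trained. The "frames/train" directory of labeled video frames is
# hypothetical; the point is that the head still needs a large,
# representative labeled set for a category as broad as "violence".
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("frames/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                      # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)        # new head: violent / benign

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                        # one epoch, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```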

Instead, when building AI models for very rare, very broad and vague categories like “terrorist violence”, categories that generate debate even among the professional scholars who specialize in the area, models are never built by handing them a small set of videos of past terror attacks and crossing one’s fingers that they have somehow learned enough to recognize other terrorism videos in the future.

Far from it.

Robust AI models for rare but broad categories are built upon characteristic analysis. Rather than a monolithic binary model built on a small number of training videos, a series of models are constructed that decompose the video into its component parts for which vastly larger amounts of training data are available. While there are relatively few first-person videos of shooting attacks on houses of worship using military-style weaponry, there are vast libraries of videos of weapons, of weapons held in a firing position, of houses of worship of all kinds, of gunfire, of non-verbal human utterances, of morbidity, etc.

A robust counter-terrorism video AI model will compute all of these characteristics for each video and generate a probability score of it depicting or predicting various kinds of violence, including those for which it has never seen related training data.
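
The sketch below is a schematic illustration of that structure, not a description of any deployed system: each attribute classifier is assumed to have been trained separately on its own abundant data, and the attribute names, weights and threshold are invented for the example.

```python
# Schematic sketch of attribute-based scoring, not any company's actual
# system: several classifiers, each trained on abundant data for a single
# characteristic, score a video, and their outputs are combined into one
# violence-risk probability. Attribute names, weights and the threshold
# are invented for the example.
from dataclasses import dataclass

@dataclass
class AttributeScores:
    weapon_visible: float     # P(military-style weapon in frame)
    firing_position: float    # P(weapon held in a firing position)
    house_of_worship: float   # P(scene contains a house of worship)
    gunfire_audio: float      # P(gunfire in the audio track)
    aggressive_gait: float    # P(gait/approach suggests confrontation)

WEIGHTS = {
    "weapon_visible": 0.30,
    "firing_position": 0.25,
    "house_of_worship": 0.15,
    "gunfire_audio": 0.20,
    "aggressive_gait": 0.10,
}

def violence_risk(scores: AttributeScores) -> float:
    """Combine attribute probabilities into a single risk score in [0, 1].

    A real system would learn this combination (for example, a calibrated
    logistic model); a hand-weighted average is used only to make the
    structure concrete.
    """
    return sum(w * getattr(scores, name) for name, w in WEIGHTS.items())

# A video can score high even if nothing like it appears in the terrorism
# training data, because each attribute was learned from its own large dataset.
frame_scores = AttributeScores(0.97, 0.92, 0.88, 0.10, 0.81)
if violence_risk(frame_scores) > 0.6:
    print("Escalate for immediate human review.")
```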

This is critical because Facebook has previously acknowledged that it works almost exclusively with Islamic State and Al Qaeda content in its counter-terrorism efforts. Thus, its training data features primarily men largely dressed in a particular way, largely from a particular demographic, largely speaking a particular language and largely utilizing specific kinds of violent tactics. Those individuals account for a small fraction of terrorists worldwide, but Facebook’s near-exclusive focus on those two groups ensures its training data will always be incorrectly skewed towards them.

This near-total bias in Facebook’s historical terrorism training database means its terrorism model simply could not have had any other outcome but to miss the New Zealand video.

Indeed, the AI models that exist today were quite capable of flagging in real time that the New Zealand video featured a military-style assault weapon, that the weapon was held in a firing position, that the background imagery showed the person approaching a house of worship with a military weapon in a firing position, and that the person’s gait and approach indicated a highly elevated likelihood of physical confrontation.

These aren’t science fiction imaginations of what technology might someday permit. These are the AI algorithms deployed in commercial production today by companies across the world.

However, as Facebook reminds us, even the most advanced AI technology is of little use when companies don’t understand how to deploy it.

Robust AI models for rare but vague categories like first-person terrorism videos are simply never built as binary classifiers from a small set of nearly completely biased training videos.

It simply isn’t done because it simply doesn’t work.

Such robust AI models are always built as collections of robust attribute classifiers trained on large volumes of data and using combinations of those attributes to calculate the confidence that the video depicts various forms of violence or the risk that it is about to depict such violence in the immediate future.

For a company that is touted as a leader in AI, it is simply beyond all credibility that this is actually how Facebook is building its AI systems.

There is absolutely no possible way that a company with the AI expertise Facebook claims to have would even contemplate building filtering tools this way.

These aren’t novel state-of-the-art discoveries. These are well understood industry standards.

This suggests either that Facebook is downplaying how it builds its AI tools as a publicity stunt to make their failure sound more plausible, or that we as a society need a very real reckoning with how poorly Facebook understands not only how the world works, but even the basics of AI technology.

Could it be that all of Facebook’s claims to be a leader in AI are nothing more than marketing hype and the company actually understands little about current deep learning technology as it applies to the real world?

That Facebook would actually claim in a public statement to have missed the New Zealand terrorist attack because its AI lacked training videos of past violent attacks demonstrates a total failure to understand both the realities of the real world and the limitations of today’s deep learning technology, and it calls into question Facebook’s entire approach to deploying AI for content moderation.

If Facebook believes it is acceptable to train AI counter-terrorism models as binary classifiers on small pools of extremely biased training data, the company either lacks even the most rudimentary understanding of deep learning technology or else it simply doesn’t care about content moderation.

Facebook’s use of binary classifiers does make sense from a public relations standpoint. It allows the company to deploy extremely cheap solutions that it knows will fail in the majority of cases, while still claiming that it is doing something with AI to combat terrorist use of its platform. In essence, it can tell governments that it is using AI while deploying simplistic, cheap solutions that it knows will fail most of the time.

Facebook is able to do this because government policymakers and the general public lack the technical understanding of AI or even technology as a whole to ask the company the necessary hard questions.

Thus, Facebook is able to claim its counter-terrorism tools have “been so effective at preventing the spread of propaganda from terrorist organizations,” while refusing to provide even the most basic details to support those claims. It refuses to provide any information on how those tools are actually functioning, how much they miss or their false positive rates. It is able to define “terrorists” as the Islamic State and Al Qaeda while conveniently ignoring terrorism in all its other forms across the world. Most importantly, the company is able to keep the entirety of its counter-terrorism efforts shielded from outside scrutiny by independent experts who might raise questions about its naïve approaches.

As has become the company’s practice, it did not respond to any of the questions posed to it about these issues, reminding us once again that Facebook has no incentive to improve: it faces no consequences for its failures and in fact profits monetarily from terroristic use of its platform.

Putting this all together, it is simply not plausible that Facebook was basing its AI video filtering on a simple binary classifier built upon its small pool of immensely biased training data, or that it was genuinely unable, as it claims, to separate real-world video from video game footage. There is simply no possible reality in which a company with Facebook’s vaunted AI expertise would even consider such a naïve approach. Nor would it have failed to deploy audio fingerprinting and modification-resistant matching from the beginning.

Yet, all of Facebook’s messaging to date regarding New Zealand paints a portrait of a company so utterly out of its depth that the technological approaches it claims to have used read more like a middle schooler trying out their first “hello world” deep learning tutorial after reading about it in a news article. It is simply not possible that Facebook understands this little about how deep learning works, or this little about how the real world functions.

In the end, we are left with two disturbing possibilities: either the world’s most influential social media company lacks even the most rudimentary understanding of the deep learning and content filtering technologies it has placed at the center of its future existence, or it does understand those technologies but has no interest in deploying them because doing so would decrease the profit it earns from terroristic use of its platform. The fact that we simply don’t know which of these alternatives is correct (or whether both are), and that the company has no obligation to tell us, reminds us that Facebook has now become more powerful than even our elected governments and, most importantly, accountable to no one.