Would Facebook's Phone-Based Content Moderation Solve Its Globalization Problem?


As Facebook has evolved from a niche site where American college students found each other into a global behemoth connecting a quarter of the earth’s population, the company has increasingly grappled with the incredible diversity of the world’s cultures. The web’s decentralized nature was, inadvertently, a major factor in its success: governments could infuse their own slices of the internet with their societal values. In contrast, social media companies have sought to enforce a single definition of acceptable speech and belief across the entire planet in a form of digital neocolonialism. Could Facebook’s move to shift its content moderation algorithms from its centralized datacenters out to users’ phones help it transition to a new era of localized content rules that more closely reflect the world’s rich diversity?

Perhaps the most important lesson for any would-be digital dictatorship is that the world does not all think alike, believe alike or know alike. Above all, Silicon Valley does not represent a pinnacle of human achievement, wisdom, knowledge and understanding that must be forcibly bestowed upon the rest of the world to drag it out of its archaic ways and into the Valley’s modern brilliance.

The world is an incredibly diverse place that is at once a vibrant marketplace of alternative perspectives, experiences and beliefs and a chaotic cacophony of cooperative and conflicting cultures. The web’s early rise embraced these differences, allowing each country to define how its corner of the web would operate in order to best reflect its own cultural values and national priorities.

Social media’s early promise was to forcibly subjugate this global cultural quilt into a single uniform set of beliefs defined by Silicon Valley. Repressive regimes and democracies that didn’t share the US fixation on capitalism or its cultural priorities and beliefs would be washed away in the tidal wave of American righteousness flooding the planet.

Instead, governments across the world reacted swiftly to curtail social media’s rise and to harness it for their own surveillance and repressive needs.

Yet, a remaining legacy of social media's initial neocolonialism is its centralized set of acceptable speech guidelines that enforce American cultural values upon the entire world.

This is partly due to how the companies constructed their moderation operations: massive centralized infrastructures of distributed workers enforcing one set of guidelines for every country.

Could Facebook’s move towards on-device content moderation through edge AI come to its rescue, finally helping it construct a more globally aware moderation process?

As Facebook embarks upon the first steps of its journey to move content moderation directly onto users’ devices, with AI and content-hashing algorithms executing on their phones, the company has a unique opportunity to rethink how the moderation process works.
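Facebook has not published the details of that on-device pipeline, but the hash-matching half is simple to sketch. The following is a minimal, hypothetical illustration, not Facebook’s implementation: it substitutes an ordinary SHA-256 digest for the perceptual hashes (such as Facebook’s open-sourced PDQ) a production system would use to survive re-encoding, and the banned-hash set is invented.

```python
import hashlib

# Hypothetical on-device banned-content database. A real deployment would
# store perceptual hashes (e.g. PDQ), which match content even after
# resizing or re-encoding; a plain SHA-256 digest keeps this sketch
# self-contained.
BANNED_HASHES = {
    # SHA-256 of the bytes b"test", standing in for a banned item.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def content_hash(payload: bytes) -> str:
    """Digest the raw content bytes."""
    return hashlib.sha256(payload).hexdigest()

def allowed_on_device(payload: bytes) -> bool:
    """True if the content does not match the local banned set."""
    return content_hash(payload) not in BANNED_HASHES

print(allowed_on_device(b"test"))      # False -> blocked on this device
print(allowed_on_device(b"harmless"))  # True  -> shown
```

Because the check runs entirely on the phone, swapping the database swaps the policy; nothing about the matching logic itself has to change from one country to the next.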

Instead of a single set of centralized guidelines enforced in its datacenters or through its culturally uneven human moderation teams, Facebook could more easily develop country-specific filtering models in conjunction with national governments and civil society.

Facebook could offer each country the ability to convene a working group of government officials and civil society groups to develop its own acceptable speech guidelines that reflect the diversity of that country and involve its own citizens. Instead of Facebook’s executives in California dictating what the people of a country halfway around the world are permitted to see and say, the citizens of that country would be able to determine their own digital destinies.

Working directly with governments and civil society would also help Facebook better address localized toxic and dehumanizing speech and domestic terror organizations that are not well reflected in its current Western-centric guidelines.

In fact, countries could even put their acceptable speech guidelines to a popular vote or harmonize them with their domestic laws.

In turn, these country-specific rules would be formed into a country-specific set of content databases and AI models that would then be deployed to the phones of all Facebook users from that country.
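What that deployment might look like is necessarily speculative. One hypothetical shape: each country’s working group yields a rule bundle pairing a banned-hash database with an identifier for a locally trained classifier, and the platform selects the bundle matching a device’s registered country. All names and contents below are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationBundle:
    """Hypothetical per-country package: a banned-hash database plus an
    on-device classifier trained on that country's speech guidelines."""
    country: str
    banned_hashes: set[str] = field(default_factory=set)
    model_id: str = "global-baseline-v1"

# Illustrative registry: real ISO 3166 country codes, invented contents.
BUNDLES = {
    "US": ModerationBundle("US", model_id="us-rules-v3"),
    "NZ": ModerationBundle("NZ", banned_hashes={"abc123"}, model_id="nz-rules-v7"),
}

def bundle_for_device(country_code: str) -> ModerationBundle:
    """Select the rule bundle to push to a device, with a global fallback."""
    return BUNDLES.get(country_code, ModerationBundle("GLOBAL"))
```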

Thus, while Facebook might permit a particular piece of terrorism content to be shared within the United States, the citizens and government of a different country might vote to ban all terrorism content within its borders.

For example, users within the US might be able to share a particular terrorist video with each other, and a US user could send that video to a friend in New Zealand under US content rules. When the New Zealand user receives it, however, the New Zealand moderation models running on their phone would flag it as a violation of New Zealand’s far stricter stance on terrorism content and prevent them from viewing it. In similar fashion, a mundane English phrase that has taken on a specifically hateful and violent meaning in one country could be banned for users in that country while remaining available everywhere else, where its meaning is entirely different.
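The key design point in that scenario is that enforcement happens on the receiving device, against the recipient’s local rules, rather than at the point of sending. A self-contained sketch of the asymmetry, with the hash and per-country rule sets invented for illustration:

```python
# Hypothetical per-country banned-hash sets.
COUNTRY_BANNED = {
    "US": set(),        # the video passes US rules
    "NZ": {"abc123"},   # New Zealand bans this digest outright
}

def can_view(payload_hash: str, viewer_country: str) -> bool:
    """The recipient's device, not the sender's, decides visibility."""
    return payload_hash not in COUNTRY_BANNED.get(viewer_country, set())

video_hash = "abc123"  # invented digest of the shared video

print(can_view(video_hash, "US"))  # True  -> the US sender can view and share it
print(can_view(video_hash, "NZ"))  # False -> blocked on the NZ recipient's phone
```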

In fact, Facebook could even offer options whereby citizens’ phones are loaded with their home country’s moderation rules no matter where in the world they travel, ensuring they adhere to those rules even when abroad. Conversely, a country with more restrictive rules on terrorism might request that the company temporarily switch all foreign visitors on its soil to its local moderation models for the duration of their stay, so that an American visiting New Zealand would be subject to New Zealand’s rules while in the country.
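Those two policies differ only in which signal drives model selection on the device: the user’s citizenship or their current location. A hypothetical selector showing both variants:

```python
def select_rules(home_country: str, current_country: str,
                 host_override: bool) -> str:
    """Pick which country's moderation model runs on the device.
    Default: home-country rules travel with the citizen.
    Override: the host country asks that visitors be switched to its
    local model for the duration of their stay."""
    return current_country if host_override else home_country

# An American visiting New Zealand:
print(select_rules("US", "NZ", host_override=False))  # 'US' rules while abroad
print(select_rules("US", "NZ", host_override=True))   # 'NZ' rules in-country
```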

The move of content moderation to the device opens up all sorts of new possibilities for transforming the moderation landscape from its current neocolonial approach to a decentralized system that embraces the rich diversity of the world’s cultures.

At the same time, it would represent the abandonment of social media’s great promise to forcibly export American values of free speech and democracy to the world’s repressive countries. Then again, as governments have increasingly brought social media to heel through new regulation, the platforms are already embracing an ever more decentralized set of addenda to their moderation models.

Putting this all together, could the move from centralized content moderation to phone-based edge AI mark the turning point at which America’s failed attempt to export its culture to the world by force gives way to a new kind of social media that embraces the world’s diversity, much as the decentralized web did before? Only time will tell.