Social Media Companies Need To Own Up To The Dark Side Of Their Impact On Society


Last week the New York Times offered an in-depth look at the Myanmar government’s extensive information operations on Facebook and the ways in which government actors in the country have harnessed the platform to sow discord and to incite and perpetuate violence against the Rohingya. As governments across the world exploit social platforms to tear societies apart, undermine democracy, spread misinformation, sow chaos, dehumanize critics and minority groups and even incite violence, is it time for social media platforms to finally own up to the dark side of their impact on society?

Perhaps the most remarkable aspect of the steady stream of revelations of government exploitation of social platforms is that the media, public and policymakers keep portraying such actions as unexpected new discoveries. From the Russians to the Iranians to the Myanmar military to the Saudis, as details emerge publicly about how yet another government has been exploiting the naivety of Silicon Valley for harmful or propagandistic ends, we are treated to yet another round of policymakers and commentators claiming to be surprised at the new revelations and speaking in apocalyptic terms about what they mean for a future in which governments increasingly conduct covert information campaigns online. For their part, the exploited social platforms claim ignorance, insisting that no one could possibly have predicted how they would be misused.

To those whose job it is to track and counter information operations, whether by governments or non-state actors, it is an unfortunate commentary on public naivety to read yet another newspaper article “uncovering” another “secretive” information operation or hear another television commentator describe how “shocking” the latest “revelations” are. While publicly feigning surprise, in many cases the platforms had been warned well in advance about specific government campaigns or about emerging tradecraft and methodologies being used for ill intent. In my own conversations with many European governments in the years leading up to the 2016 election interference, it was noteworthy how many expressed frustration that their warnings were going unheeded.

It is also worth noting that a number of countries are widely understood to conduct highly sophisticated information operations on social platforms, some employing tradecraft that far exceeds anything exposed to date, and to have done so for years, honing their efforts to perfection. Only a handful of these have been publicly discussed so far. Some of these countries pose unique challenges to mitigation in that they make use of citizen armies composed of both volunteers and members reimbursed through nontraditional means like scholarships, making it difficult to distinguish organic from inorganic activity. Bot armies can be removed en masse for violating platforms’ terms of service, but what is the best response to an army of thousands or tens of thousands of volunteers coordinated and tasked by government officials, yet acting out of patriotism or for a government scholarship?

These kinds of questions are precisely the reason that the policy and security teams within social platforms must work far more closely with their government counterparts. While the companies are quick to tout that they already interact closely with governments, the level of coordination and cooperation they have today falls far short of what is needed to effectively leverage the tremendous resources allied governments are spending on cataloging the most egregious misuse of platforms for disinformation, discord and violence: details that could be immediately actionable in the hands of the platforms.

In the case of the Myanmar military’s coordinated misuse of Facebook, one of the more surprising elements of the story was that the Times was able to uncover a trove of new information that Facebook itself had not previously compiled and which led to the unmasking and removal of a number of key misinformation accounts.

This raises the question of how a newspaper was able to uncover all of this information for a single story using only public information and sources, while Facebook itself, with its immense resources and internal data, could not. A Facebook spokesperson acknowledged that the company’s own understanding of the government’s information activities is not as extensive as it would like, but emphasized that the company works with a number of external partners to assist it in reacting to government misuse there. That only sharpens the question: how did the Times identify information that all of Facebook’s partners combined had not? It is clear there is far more Facebook could do to document the misuse of its platform in Myanmar.

As in the past, when asked for a rough estimate of how many native Burmese speakers the company employed to review content from the country, Facebook offered only that native speakers comprise just a portion of its Myanmar team, which includes a much broader assortment of data science, engineering, policy, partnerships, operations and product personnel. In response to a Reuters query in August, the company offered that it was “impossible to know” how many Burmese speakers it employed, a hard sell for a company whose lifeblood is data. It did, however, acknowledge that the number was “not enough.”

It is simply not credible that a company like Facebook could not add a “languages spoken” field to its human resources records and thereby estimate how many of its employees and contractors speak Burmese. If the claim is true, it would mean the company has no way of knowing how many native speakers it has moderating any language in the world, an even more egregious failure.
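To make the point concrete, here is a minimal sketch of how trivial such a count becomes once the field exists. The records, field names and language codes below are hypothetical illustrations, not Facebook’s actual systems:

```python
# Hypothetical sketch: counting Burmese speakers once HR records carry
# a "languages spoken" field. Data and field names are illustrative,
# not Facebook's actual systems.
from collections import Counter

employees = [
    {"id": 1, "languages": ["en", "my"]},  # "my" = Burmese (ISO 639-1)
    {"id": 2, "languages": ["en", "de"]},
    {"id": 3, "languages": ["my"]},
]

speaker_counts = Counter(
    lang for person in employees for lang in person["languages"]
)
print(speaker_counts["my"])  # -> 2 Burmese speakers on record
```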

The company did not respond to a request for comment on what percentage of its total Myanmar team has even rudimentary Burmese language understanding, nor on why it believes language and cultural understanding are not mandatory skills for all Myanmar team members to possess. Even a data scientist is vastly more effective when they can natively understand both the content and context of the material they are analyzing, rather than running statistical reports on undecipherable text and relying exclusively on periodic subject matter expert spot checks to see if their work makes any sense.

Having run several very large multilingual content review and data science projects over the years, with global human teams spanning native speakers of a hundred or more languages, I can personally attest to the impossibility of conducting adequate data science without native language and cultural expertise. The “buddy pairing” so common in Silicon Valley, in which a team of English-speaking data scientists is paired with a single native speaker, simply does not cut it.

It is also unclear why Facebook has only recently begun to seriously invest in addressing the Myanmar crisis when the crisis has been unfolding in public view for quite some time and the Times reports that governmental misuse of the platform began several years ago. The company did not respond to a request for comment on when it first launched its Myanmar team or what its staffing trajectory has been.

With at least 18 million Facebook users in the country, by some estimates it would take at least 800 Burmese-language content moderators to match the level of effort the company has invested in Germany, where hate speech laws require it to rapidly remove offending content (a back-of-envelope reconstruction of that estimate follows below). Instead, the company has committed only to hiring “dozens” of new reviewers. Reuters reports that in 2015 there were just two Burmese-language content moderators in the entire company and that today the company’s primary Burmese-language hate speech moderation efforts are largely outsourced to 60 Accenture employees in Kuala Lumpur and three native-speaker Facebook employees in Dublin.
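The 800-reviewer figure is easy to reconstruct. A minimal sketch, assuming the widely reported estimates of roughly 1,200 moderators reviewing German content and roughly 27 million German Facebook users (both approximations, not official Facebook figures):

```python
# Back-of-envelope estimate: Myanmar staffing at German levels of effort.
# The German figures are rough public estimates, not official numbers.
germany_moderators = 1200       # reported reviewers of German content
germany_users = 27_000_000      # approximate German Facebook user base
myanmar_users = 18_000_000      # figure cited above

moderators_per_user = germany_moderators / germany_users
print(round(myanmar_users * moderators_per_user))  # -> 800
```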

Of course, this illustrates the unfortunate paradox of government-supported dangerous speech. In countries where laws require rapid takedown of hateful speech, Facebook has invested heavily to meet those requirements. In countries where it is the government itself that is allegedly producing that content, however, the company has little incentive to invest the resources needed to reach the necessary staffing levels.

Publicly, the company touts how important it believes it is to get things right when it comes to stopping its platform from being misused to spread violence in Myanmar. Yet this raises the question of why it does not invest more heavily in hiring the necessary staff. If it can hire so many reviewers in Germany in response to hate speech laws there, why can’t it hire an equally large staff to focus on Myanmar?

Imagine for a moment if Facebook hired 1,000, or perhaps even 10,000 or 50,000, Burmese-speaking content moderators. There would still be inevitable conflicts over what constituted “hate speech” or what violated acceptable content policies and should be removed versus what should remain. With such a large staff, however, the company could at least come far closer to triaging the daily output of the country and reining in the most egregious dehumanizing and violent speech. It would also have the resources to work hand in hand with NGOs, foreign governments and independent experts to identify and remove evolving trends and narratives in near-real time, saving countless lives and sharply reducing the use of Facebook as a medium for violence in the country.

Of course, the cost of hiring so many moderators would be prohibitive even by Facebook’s standards, and thus we have the true root of Facebook’s problem in Myanmar: it doesn’t want to spend the money to fix the problem it helped create. Facebook could certainly hire enough human reviewers to triage at least the most egregious material in Myanmar, but doing so would require investing far more than the country is “worth” to Facebook in terms of the ad revenue generated from its citizens. And without laws like Germany’s to enforce particular content removal standards, Facebook also lacks any legal incentive to remove violent content promptly.

The company did not respond to a request for comment regarding its staffing levels and why it does not hire more Burmese speakers.

If the United States or Europe held Facebook to account, with fines or jail time for its executives, for every death or displacement in Myanmar that its platform helped bring about, it would be a very different story indeed. Most likely, though, the company would simply pull out of the country rather than invest the necessary resources.

Facebook and its Silicon Valley peers are attempting to work around the cost of hiring enough human moderators by turning instead to AI algorithms. This is a Catch-22, however: building such algorithms requires immense amounts of labeled training data, which the company struggles to produce for languages like Burmese precisely because it lacks the moderators to adequately review posts. Moreover, today’s AI algorithms struggle to take context into account, and because context and culture are fluid and change every day, one can’t simply build a filtering algorithm and leave it as-is. Building effective models requires maintaining large native-speaker teams in perpetuity to provide the training data that keeps the models constantly adjusted. AI or no AI, Facebook will have to maintain a large Burmese-speaking team to filter content from the country.
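To see the dependency concretely, consider a minimal sketch of the kind of supervised text classifier this work requires. This is a generic scikit-learn illustration, not Facebook’s actual pipeline, and the posts and labels are placeholders; the key point is that the labels can only come from human reviewers:

```python
# Minimal sketch of why moderation AI depends on human moderators:
# every training label below must be produced by a native-speaker
# reviewer. Generic scikit-learn example, not Facebook's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = ["post_a", "post_b", "post_c", "post_d"]  # stand-ins for Burmese posts
labels = [1, 0, 1, 0]  # 1 = violating, 0 = acceptable, per human review

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(posts), labels)

# As slang, context and narratives drift, fresh human-labeled examples
# are needed and the model must be refit: the perpetual native-speaker
# team the paragraph above describes.
```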

Unfortunately, despite the company betting its entire future on them, such algorithms do not appear to be succeeding, if its refusal to date to release any details about their error rates is any indication. For a company that touts every imaginable statistic, its utter silence on whether its much-vaunted AI algorithms are actually working is deafening.

At the end of the day, one could argue that any communications technology can be used for evil. The advent of the telephone brought with it truly horrific applications, as telephones have been used to coordinate all sorts of violence and destruction globally in the century and a half since their introduction. Every time a terrorist calls another to give the final details of an attack, or a repressive government orders its military to silence a village of critics, or a hate group rallies its members to go on rampages, that violence is facilitated by the telephone. As an ephemeral technology, however, those conversations are lost the moment they occur, meaning the full role of telephones in perpetuating violence will never be known. As a written medium, social media is profoundly different: all the calls to violence and dehumanizing speech that cross the platforms are recorded in tangible form, where they can be archived, studied and used to document the platforms’ roles in facilitating violence in a way that the role of voice telecommunications providers cannot be.

Putting this all together, the simple reality is that, like any communications technology, social media platforms can facilitate and perpetuate great evil. They can cause and sustain horrific violence, including genocide, that would likely never reach those levels without their assistance. As a written medium, much like the early telegraph, social media leaves a record: we can see the impact of platforms like Facebook in real time, documenting how intrinsic and critical they are to fomenting the violence. In the end, Facebook could dramatically reduce its platform’s role in Myanmar’s violence, but doing so would require a tremendous investment that is neither economically viable for it nor legally compelled. Much as colonial strip mining destroyed countries across the world in the name of profit, so too will social media platforms unwittingly help spread misinformation, violence and destruction around the world in the name of harvesting data and selling ads, until such time as the world comes together and says enough. The only question is how many more people will die or be displaced before the rest of the world finally intervenes.