Why Do We Tolerate Social Media's Dark Side And What If We Made Them Pay For It?


Facebook-owned Instagram has been in the news again this week over the use of its platform to normalize and perpetuate teen suicide, from relentless bullying to fostering an environment accepting of suicide rather than providing resources and support to prevent it. The company went as far as to publish an op-ed this week acknowledging that it had considerable work to do, underscoring the growing furor around the role the platform plays in teen suicide and its failure to make meaningful progress in combatting it. This raises, once again, the question of why we tolerate social media’s dark side and why governments and societies at large don’t step forward to force these companies to finally take such issues seriously. In particular, why don’t we force companies to give up the profits they earn from destroying society?

Much like the web itself, social media has evolved from its early roots as a way to bring people together into a medium that tears society apart, fostering trolling, bullying and all manner of toxic, hateful and violent speech.

Its effects are particularly damaging to society’s most vulnerable, who cannot escape its wrath in today’s globally connected world.

Twenty years ago, a relentlessly bullied teenager could transfer to a different school or move to a new city to escape their tormentor. With social media, that teen today can never truly escape their attackers, even if they move to the other side of the world or change their name. Their own digital connections, and those of their parents, will eventually give them away, while facial recognition and the myriad ways in which social platforms silently stalk us from afar will forcibly reconnect them to the very people they have so desperately tried to escape.

In a world in which geography no longer plays any role, it is not just schoolyard bullies those teens must escape. Now anyone anywhere in the world can take on the role of tormentor. The anonymity of platforms like Twitter shields those bullies from scrutiny, while the platforms’ global broadcast nature ensures those bullies’ toxic attacks are seen all across the world, encouraging others to pile on.

Something as innocent as a teenager acing a test that her schoolmates failed could devolve in just hours into a relentless and worldwide onslaught of hate speech and death threats, doxing and deep fakes.

Such is the dystopian world in which we live today.

For their part, social platforms have resisted calls to hold themselves accountable for the horrific content they publish and profit from.

It is important to remember that social media platforms profit handsomely from hate speech, threats of violence and suicide glorification. Every hateful and toxic post sells ads and user data and generates behavioral and interaction data that make the companies even more profitable.

Like arms dealers that profit from war, social media platforms profit from all of this horrific speech.

When asked whether they would consider refunding the profits they make from hate speech, toxic content and terrorism recruiting, the companies remain steadfastly silent.

Why do we tolerate this?

Why don’t politicians step forward and pass legislation that makes it the law for social platforms to refund all profits made from toxic speech, terrorism, teen suicide and the like? Even better, what if they were required to pay all of those profits to the government in the form of a fee that was then distributed to organizations that combat those issues?

We can’t even begin to estimate the dollar amount this would generate, because we have such a poor understanding of just how much toxic speech is present on the platforms, and the companies are loath to cooperate with independent external audits.

However, even if the total in raw dollars were relatively low, the accountability and auditing needed to generate those reimbursements would force companies to finally begin systematically and robustly tracking toxic speech on their platforms.

Companies today have zero incentive to invest in more actively purging toxic content from their platforms. After all, they actually make money from all that material.

If they were compelled by law to reimburse the money they made from that content, or pay it in fees to the government, they would of course have a new incentive to minimize their reporting by misclassifying toxic speech as non-toxic to avoid relinquishing the money earned from it. However, once money and the law are involved, external accountability practices can suddenly be brought to bear. Though companies have proven adept at avoiding taxes by exploiting the myriad loopholes in tax law, they are still forced to invest heavily in managing that risk. In similar fashion, companies would likely find any number of ways to work around such new anti-toxic-speech laws, but the investment in doing so would alter the cost-benefit ratio of publishing such content to the point that companies might be willing to take greater action to combat it.

Combatting toxic speech is a hard problem. What one person might consider protected speech under the First Amendment here in the US another might consider dangerous hate speech.

At the same time, there is immense room for combatting content that, by broad societal agreement, should not be permitted, such as active terrorist organization recruiting, death threats, calls for specific targeted violence and encouragement or promotion of teen suicide.

The problem is that doing so costs money. Automated AI filtering can assist with a lot of the rote matching, but human review is still a critical component, especially when it comes to offering a robust appeals process.

Social platforms simply don’t want to spend money on issues like combatting toxic speech. Why spend money to eliminate content you actually make a profit from?

Much as AI companies refuse to spend money on creating high quality data to train their algorithms, so too do social companies refuse to spend money to build sufficiently large human review teams.

Doing so would eat into their profits, and why should they bother when society seems to be just fine with things as they are?

Rather than building socially responsible platforms that treat their users with respect and dignity as real human beings, the companies bombard mothers of stillborn babies with relentless advertisements featuring happy, healthy babies, reminding them every moment of their loss. Rather than acknowledge their role in facilitating genocide or acting as a marketplace for human trafficking, the companies refuse to accept responsibility and happily keep all of the money they made promoting and assisting these crimes.

Putting this all together, why do we as a global society tolerate social media’s dark side? Why do we simply accept that private companies should be allowed to generate revenue and freely profit from hate speech, bullying, genocide, human trafficking, terrorism, toxicity and teen suicide? Why do we simply accept that social platforms should not be required to sufficiently invest in combatting the dangers they pose to society itself? Why is it that when companies say they don’t want to eat into their profits to spend the necessary money to combat toxic speech, we say that’s totally fine and we don’t have a problem with it?

In the end, we can’t complain about the business practices of social media companies when we refuse to boycott them or force our political leaders to take legislative action against them. While we may lament our loss of privacy and safety online through our words, our actions tell the companies we are willing to accept ever more. When will we finally stand up to these digital dictatorships and say enough is enough? When will we finally reclaim our digital rights? If we don’t do it soon, it will likely be too late.