


If Social Media Algorithms Control Our Lives Why Can't They Eliminate Hate Speech?



It has become an accepted fact of our modern digital world that the algorithms powering the online revolution, especially those of the major social media platforms, hold so much sway over us that they can nudge us, against our conscious will, towards actions we would not otherwise take. The addictive nature of social media is driven in part by an army of behavioral engineers tasked with building algorithms, interfaces and experiences that tap deeply into the flaws and nuances of human psychology, turning us into mindless remote-control zombies who can be steered towards whatever monetizable activity a company desires. Or at least that is how industry, governments, the press and even academia portray the industry's breathtaking infrastructures of data and algorithms. If Silicon Valley's algorithms really wield that much power over our subconscious minds and conscious decisions, why can't social media platforms literally write hate speech out of existence with a few lines of code? Is the truth that the silent guiding hand of the online world's algorithms isn't nearly as powerful as we claim? Or is hate speech so inextricably linked with the fact-free, emotion-driven world of social media that combating it would risk the entire business model upon which social media is built?

We speak today of the digital world as an almost Orwellian existence. Shadowy companies surveil us against our will, hoovering up every action we take online and offline and building unimaginably detailed profiles of what makes us tick. These profiles are in turn wielded by an army of silent, unseen algorithms to manipulate and coerce us into destructive behaviors that can undermine democracy itself but make their creators tens of billions of dollars.

It is certainly true that more and more of our worldly existence, online and offline, is recorded, bartered, bought and sold by myriad companies all over the world. It is also true that much of the digital world is predicated on algorithms and interfaces designed by behavioral engineers and psychologists with the intent to nudge, or even shove, us towards the behaviors most profitable for their creators, regardless of the impact of those behaviors on us.

The question is just how much influence those algorithms actually wield over us.

Small behavioral tweaks, such as creating the illusion of scarcity to push us to accept a higher price for a product or gamifying the gig economy to extract a few extra tasks per shift out of workers, are fairly straightforward applications of a longstanding understanding of human psychology. While we tout these tools as frightening new digital innovations, they are in fact merely digital reincarnations of the manipulative behaviors known to salespersons and managers the world over since the dawn of time.

Far from being creations of the online world, much of the behavioral engineering we tout as Silicon Valley innovation is in reality the online world catching up to the age-old teachings of the offline one. When it comes to nudging people towards actions that make companies money, Silicon Valley is playing catch-up, not pioneering new insights into the human psyche.

Given that at least some of the digital world's behavioral engineering genuinely works, just how far can those algorithms be pushed? Is an addictive interface the limit of what the Valley can achieve with them?

If so, it raises serious questions about whether the web is a positive force.

If not, it raises the graver question of why social platforms have not harnessed this algorithmic might to code hate speech right out of existence with a few algorithmic tweaks.

Could the same algorithms that deliver hyper-personalized ads to us in a fraction of a second, hundreds of thousands of times a day, be retooled to deliver hyper-personalized interfaces that steer us away from our own negative beliefs? Could the same gamification algorithms that coerce gig economy workers into performing tasks they don't want to perform be repurposed to push social media users away from the hateful speech they wish to express?

It is a sad commentary that, despite public pressure to adopt signature-based removal of pro-terrorism posts and revenge pornography, the major social media companies agreed to deploy such technology only reluctantly and only very recently. The same companies that have long used signature-based removal against copyrighted content and illegal content like child pornography historically refused, for unknown reasons, to deploy it against social harms like terrorism or revenge pornography. Facebook has long declined to comment on why it was so late to adopt even these rudimentary and well-established tools.
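For readers unfamiliar with the term, "signature-based removal" simply means fingerprinting known prohibited files and blocking any re-upload that matches. Here is a minimal sketch of the idea in Python; the names (BLOCKED_SIGNATURES, should_block) and the blocklist entry are hypothetical, and real systems such as PhotoDNA rely on perceptual hashes rather than the exact cryptographic hash used below.

```python
import hashlib

# Signature-based matching: compute a fingerprint for each upload and
# compare it against a database of fingerprints of known prohibited
# content. All names here are illustrative placeholders.

BLOCKED_SIGNATURES = {
    # In production this set would be loaded from a shared database.
    # The placeholder below is the SHA-256 of an empty byte string.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def signature(content: bytes) -> str:
    """Compute an exact-match signature for an uploaded file."""
    return hashlib.sha256(content).hexdigest()

def should_block(content: bytes) -> bool:
    """Return True if the upload matches a known prohibited signature."""
    return signature(content) in BLOCKED_SIGNATURES

print(should_block(b""))     # True: matches the placeholder signature
print(should_block(b"cat"))  # False: unknown content passes through
```

Note that an exact hash like this is defeated by changing a single byte of the file, which is precisely why deployed systems use perceptual hashes that survive re-encoding, resizing and cropping.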

Of course, combating "hateful" content is far more complex, because even the definition of what constitutes "hate speech" is contested.

I have long argued that the best approach is not to censor ideas, but rather to focus on the expression of ideas. Hateful speech doesn't come only in the form of racism, sexism or other traditional "hate" or dehumanizing speech. It can come in the myriad ways we tear each other down each day on social media. A profanity-laden diatribe attacking someone's intelligence or looks or existence can be equally destructive, and such bullying can lead to self-harm and even suicide.

Even a snarky comment made in jest can have devastating consequences if read by someone in a bad place in their life.

What if we banned all forms of negative speech directed at individuals or groups online? Alternatively, what if we at least banned profanity and name-calling of all kinds?
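To see why even that narrower ban is nontrivial, consider a deliberately naive sketch of a keyword filter (the word list and function name below are hypothetical). It catches only exact whole-word matches, so trivial misspellings and inflections sail through, and it has no notion of context, quotation or intent.

```python
import re

# A deliberately naive sketch of a blanket profanity/name-calling ban.
# The tiny word list is a hypothetical placeholder; even a vastly larger
# list would miss misspellings, coded language and context.

BANNED_TERMS = {"idiot", "moron", "stupid"}  # placeholder list

def violates_ban(post: str) -> bool:
    """Return True if the post contains any banned term as a whole word."""
    words = re.findall(r"[a-z]+", post.lower())
    return any(word in BANNED_TERMS for word in words)

print(violates_ban("What an idiot take."))           # True: exact match caught
print(violates_ban("You absolute m0ron."))           # False: trivial evasion
print(violates_ban("This movie is stupidly good."))  # False: inflection missed
```

The gap between this sketch and a working system is exactly the gap between banning words and banning meanings, which is where the definitional problem above reasserts itself.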

Imagine social media without negativity. No attacks on anyone or anything. No profanity or hurtful remarks. Merely informed evidence-based commentary on the world, expressed in clinical terminology and fully referenced with all supporting materials.

In other words, what if social media were more like LinkedIn and less like Twitter and Facebook?

Of course, to many that would destroy the very appeal of social media: the freedom it brings to wrap oneself in the warm cloak of anonymity and spew uninhibited hate and vitriol at the world. To stalk, harass and tear others down, to make others feel as rotten and horrible inside as they do. Or simply to have an outlet for their snark, sarcasm and brand of humor, without regard to how it may make others feel.

The real question is whether eliminating all negativity online would fundamentally alter the very nature of social media, or whether it would merely excise a small fringe corner of the online world and allow the rest of us to transform the web into a digital utopia.

In the end, we are confronted with a troubling question: are the vaunted algorithms that power social media not nearly as powerful as we believe? Or, if they are, why doesn't Silicon Valley simply code hate speech right out of existence?