

Social Censorship: Should Social Media’s Policy Be Free Speech?


How should social media deal with controversial subjects or false information?

According to Bill Ottman, CEO of the alternative social network Minds.com, freedom is the best policy. And, he says, it’s also the policy that results in the least harm. At election time, when fake news is a hot-button topic on all sides of the political spectrum, that might be a controversial opinion.

“Where we draw the line ... is around the First Amendment,” Ottman told me in a recent TechFirst podcast. “No one really knows what the policy is on Facebook and Twitter and YouTube.”

This might be seen as a libertarian argument grounded in freedom rather than in consequences, though Minds does restrict harmful content as well. More important, though, is Ottman’s assertion that banning bad content is actually riskier for our culture over the long term. Part of his rationale comes from a Nature study on the “global online hate ecology,” which suggests that policing content on one platform can simply shunt it to other, more hidden places.

“Our mathematical model predicts that policing within a single platform, such as Facebook, can make matters worse and will eventually generate global dark pools in which online hate will flourish,” the study says.

Listen to the interview behind this story on the TechFirst podcast.

Ottman acknowledges that we all want less hate speech (none would be good!) and safe online communities. Rather than censorship, however, he advocates a policy of engagement. That’s why he brought on Daryl Davis as an advisor for the Minds community. Davis is the well-known blues musician who, as a black man, has de-radicalized as many as 200 members of the KKK through engagement and conversation.

Ottman wonders if that model is scalable with digital technology.

“What do you think would happen if the 20,000 moderators on Facebook were all mental health workers and counselors and people who are actually engaging — as long as it’s not illegal, like true harassment, like that stuff has to go — but for the edge cases, these people who are like disturbed people … what would happen if we had 20,000 people who were productively engaging them?”

It’s worth asking that question.

It’s also worth considering that for some, this isn’t a theoretical topic or an abstract discussion.

I personally know a smart, gifted woman, an important contributor to the software usability community, who was driven offline by misogynistic trolls who threatened her with rape and murder. Others are persecuted for their race, their political beliefs, or any number of other reasons.

It’s good, therefore, that Ottman acknowledges that the Davis model isn’t the only path forward, and that social networks have a responsibility for safety.

“I do think it’s the job of the social networks to make it very clear to you as a user how to control your experience ... giving you as many possible tools to control your experience as they can,” Ottman says.

That could, theoretically, include the ability to proactively block hateful comments or contacts. Doing so at scale, however, currently seems impossible, as Ottman acknowledges.

“It’s a losing battle to expect that every single piece of content uploaded to social networks with hundreds of millions or billions of users is going to be able to get fully vetted,” he says.

And, in fact, when President Trump contracted Covid-19 and multiple Twitter users publicly wished that he would die, Twitter blocked those tweets, citing policies that say “tweets that wish or hope for death, serious bodily harm or fatal disease against *anyone* are not allowed and will need to be removed.” That was news to hundreds of people, including women and people of color, who have dealt with implicit and explicit death threats for years with no intervention from Twitter.

Most social networks employ some form of AI to find and block objectionable content, but it is, frankly, far from perfect. Case in point: farmers in Canada recently had their pictures of onions flagged and removed by Facebook because they were deemed ‘sexual’ in nature. Unless the platforms get orders of magnitude better, it’s hard to see how they can give us enough control over our experience to avoid the trolls.

This is not an easy problem, and it doesn’t have an easy solution. Algorithms already control a lot of what we see, and one potential result, Ottman says, is hard-edged reality bubbles that separate and divide people.

“There’s a growing body of evidence that what is happening, that the content policies on the big networks are fueling the cultural divide and a lot of the polarization and civil unrest,” he told me. “And people like Deeyah Khan have done TED Talks on this also, directly engaging hate head-on. And the evidence actually shows that that’s really the only way to change minds. You’re almost guaranteed not to change their mind if you ban them. In fact, the opposite, I mean, you can’t communicate with them if you ban them.”

That’s a tall order.

It has a certain ring of truth to it: why should we expect machines, rather than persuasion by other human beings, to police our expression and actions? But that approach also seems incredibly challenging to carry out safely, and at scale.

Get a full transcript of our conversation here.
