Facebook And Twitter Keep Censoring President Trump As A Way To Prepare For The Upcoming Election. Here’s Why I’m Worried


I have a few fringe beliefs.

One is that aliens really do exist. How could they not, in a galaxy this big? I also happen to think Elon Musk is a once-in-a-century genius who will probably become President one day (all we have to do is change the rules about birthplace restrictions).

Now, those views are not that uncommon. They won’t get me kicked out of the local country club. If I posted them on Twitter right now, no one would bat an eye.

The problem is that it’s extremely difficult to define a fringe belief.

Recently, President Trump posted a few callous remarks on Twitter and an offensive symbol on Facebook. You could easily call them mean-spirited. In one, he said he would “show force” against protesters in Washington. Twitter and Facebook have marked these posts, hidden them, or flagged them in some other way as an alert to social media users. It all seems pretty legit and purposeful, and I like that social media companies are doing something to address misinformation and online abuse.


I never make political comments, and I’m not about to step up on a soapbox about free speech, censorship, and the rights of users to share false information. It’s a thorny issue and one that won’t be resolved anytime soon. My main issue with the flags and blocks is that they might be a sign of things to come.

That’s not a political statement. It’s a lesson from a college course.

Way back when, I took a philosophy course on logic and persuasion, and it really stuck in my synapses. I remember specific sections of the textbook and the lectures, but the one that really took hold was the slippery slope fallacy. The argument is considered illogical because small adjustments don’t always lead to major changes; the professor called it a false conclusion. Taking tiny sips of beer as a teenager won’t automatically make you an alcoholic in a month. Increasing the speed limit a little doesn’t mean we’re all going to start driving like crazy animals.

Except for one minor problem. Time and time again, the slippery slope is far from a fallacy. Sometimes, small slips do lead to major slides. (Studies show that teens really should avoid sipping any alcoholic drinks. Also, we do drive like crazy animals.)

Twitter and Facebook are making tiny steps. Hold on for the ride.

My question on this topic is not political in nature: How fringe does a belief need to be before it gets flagged? How do we define misinformation? When is a post merely a nastygram and when is it abuse? My issue is not with the current flags and warnings. It’s that social media companies might not be the best arbiters of what should be flagged. They are supported by advertising; they don’t really make anything or sell anything. And social media is a breeding ground for fringe ideas, since those are the ones that get the most attention.

Where is this all heading? Will we see more and more flags?

The answer appears to be yes, according to some experts.

“You will continue to see them make adjustments as we get closer and closer to a volatile election,” says Matt Ray of ChatterBlast Media, a digital marketing company. “Facebook and Twitter are not subject to the same rules and regulations as our radio or television airwaves.”

Ray argues that these measures are an important and necessary step as we head into the election cycle, but he stops short of agreeing with me that they are a dangerous pattern.

“Social media platforms are becoming more aggressive with the material they are allowing to populate their sites,” he says. “And much more aggressive in their review of that material. They have to. People are angry with them. Facebook and Twitter are owned by publicly-traded companies, and a majority of Americans are concerned with social media's ability to manage divisive speech, fake news, and propaganda.”

Another expert told me the problem is not going to go away.

“Getting people to trust the Internet is becoming more difficult because there is no arbiter of truth and everyone can find their own echo chamber,” adds Kevin Lee, the Trust and Safety Architect at digital trust company Sift. “We’re witnessing the exact same controversies on misinformation, hate speech and free speech that we saw in the 2016 presidential election.”

Will flagging content like this work? I’m not sure. Flagging outright abuse is far easier than judging whether something is just a fringe belief.

I’m watching how this unfolds, and there’s no question we’re going to see more of these flags this election season on posts from Trump and other candidates.

You might like what you see so far, depending on your political views, but it’s just a matter of time before social media companies start flagging more and more content and politicizing the ads and posts from the candidates (and from everyone else).

Looking further ahead — it might end when we can’t say anything.
