
Could Personalized Content Moderation Be The Future Of Healthy Social Media?


One of the most existential questions of the social era is how to combat the proliferation and spread of toxic speech across the modern Web. Today’s content moderation efforts focus largely on enforcing neocolonialist global “acceptable speech” rules that impose Western views and beliefs on the entire planet. Yet even beyond their cultural oppression, these rules fail to account for individual needs, such as personal trauma that makes certain otherwise innocuous terms deeply distressing to a given person, or the histories of entire communities against whom seemingly innocent words have long been used as tools of oppression. In an ever more personalized and customized world, in which algorithms curate individualized feeds for us, could personalized content moderation offer a solution to digital toxicity?

Today’s internet is breathtakingly personalized, with armies of algorithms drawing on unimaginably vast data archives to curate individualized streams of content designed to maximize our responsiveness, engagement, content production and other monetizable behaviors.

In many ways these algorithms contribute to the spread of hate, in some cases by curating a personalized feed of hate and horror designed just for us, engineered to provoke a burst of posts and shares that yields a surge in monetizable activity for the platform.

As digital assistants grow in capability and become more deeply integrated into our lives, third-party companies could build counter-filters that remove this content from our streams. Imagine a digital assistant that curates all of your feeds, removing anything that might unduly trigger or upset you, personalized to your needs and constantly updated to reflect the latest circumstances of your life.

What if the social platforms themselves built such curators?

Rather than enforcing global moderation rules that treat everyone the same, what if social media companies applied personalized content moderation: an individualized algorithm that learns the specific preferences of each of their billions of users and shields them from content that provokes undue distress?
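As a rough illustration of what such a per-user shield might look like, here is a minimal sketch in Python. Everything in it is hypothetical: a plain keyword set stands in for whatever learned model a platform would actually use, and the names (UserSensitivityProfile, filter_feed) are invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class UserSensitivityProfile:
    """Per-user record of topics the user has asked to be shielded from.

    In a real system this would be a learned model; here it is a plain
    keyword set to keep the sketch self-contained.
    """
    user_id: str
    sensitive_terms: set[str] = field(default_factory=set)

def filter_feed(posts: list[str], profile: UserSensitivityProfile) -> list[str]:
    """Return only the posts that do not mention any of the user's sensitive terms."""
    def is_safe(post: str) -> bool:
        text = post.lower()
        return not any(term in text for term in profile.sensitive_terms)
    return [post for post in posts if is_safe(post)]

# Example: a user distressed by swimming-related content (the article's example).
profile = UserSensitivityProfile(user_id="u123", sensitive_terms={"swimming", "drown"})
feed = [
    "Olympic swimming results are in!",
    "New phone launches next week",
]
print(filter_feed(feed, profile))  # -> ["New phone launches next week"]
```

The point is simply that the same per-user profile data that powers recommendation could just as easily power exclusion.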

A user who nearly drowned as a child might suffer panic attacks from any mention of swimming or the water. Merely coming across a tweet from a news organization with the latest Olympic swimming results could be enough to hospitalize them. At the same time, it would be difficult for a social platform to ban all discussion of swimming or water from its servers.

Such situations are not isolated speculation. In 2017 an individual was arrested for sending an animated GIF featuring flashing lights to a reporter with epilepsy in hopes of triggering a seizure.

Social platforms today are unable to address such individualized trauma or medical conditions within the scope of their global filters.

Indeed, the best they can offer is a handful of canned filters, such as hiding alcohol ads from a recovering alcoholic or baby-product ads from a parent who has just lost a child. However, these filters cover only a few categories, must be manually enabled after wading through page after page of menus, apply only to ads and, even then, catch only some of them.

Given their enormous investments in personalization, why can’t social media companies apply their vaunted personalization technology to content moderation?

After all, companies are using their personalization algorithms to decide what content to deliver, so why can’t they use those same algorithms to do the opposite: decide what content not to deliver?

Imagine social platforms that allowed users to flag content that caused them undue distress or triggered medical or emotional trauma. Such systems could also learn over time from the content users hide or block.

Much as today’s email services feature spam filters that remove the vast majority of spam and fraud emails, so too could such a service largely filter away the material that causes distress.
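To make the spam-filter analogy concrete, the toy sketch below learns from explicit flags in roughly the way a spam filter learns from “report spam” clicks. It is purely illustrative: simple word counts stand in for a real model, and the class name and threshold are invented for this example.

```python
from collections import Counter

class PersonalDistressFilter:
    """Toy per-user filter that learns from posts the user flags or hides,
    loosely analogous to how spam filters learn from 'report spam' clicks.
    Plain term counting stands in for whatever model a platform would use."""

    def __init__(self, threshold: int = 2):
        self.flagged_terms = Counter()
        self.threshold = threshold  # how many learned terms before a post is hidden

    def record_flag(self, post: str) -> None:
        """Learn from a post the user flagged as distressing."""
        self.flagged_terms.update(post.lower().split())

    def should_hide(self, post: str) -> bool:
        """Hide a post if it shares enough vocabulary with previously flagged posts."""
        hits = sum(1 for word in post.lower().split() if self.flagged_terms[word] > 0)
        return hits >= self.threshold

filt = PersonalDistressFilter()
filt.record_flag("Olympic swimming results from the pool today")
print(filt.should_hide("swimming pool reopens downtown"))  # True: overlaps flagged terms
print(filt.should_hide("new phone launches next week"))    # False: no overlap
```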

Such systems could even dynamically adapt to a user’s changing life circumstances. If a user posts that they have just suffered a miscarriage, such a system might dynamically filter away posts about happy babies and growing families for a period of time, giving the person space to mourn, and perhaps gauge from their overall social behavior, posts and content engagement when they are ready to begin seeing such content again.
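One minimal way to express that kind of time-limited shielding is a per-topic mute with an expiry, as in the hypothetical sketch below. A real system would presumably infer the right moment to lift the mute from the user’s behavior rather than from a fixed clock, so the fixed duration here is purely a placeholder.

```python
from datetime import datetime, timedelta

class TemporaryTopicMute:
    """Sketch of a time-limited mute, e.g. suppressing baby-related posts for a
    while after a user shares a loss. The fixed duration is a stand-in for
    behavior-based signals about when the user is ready to see the topic again."""

    def __init__(self):
        self.muted_until: dict[str, datetime] = {}  # topic -> expiry time

    def mute(self, topic: str, days: int) -> None:
        """Hide a topic for a set number of days."""
        self.muted_until[topic] = datetime.now() + timedelta(days=days)

    def is_muted(self, topic: str) -> bool:
        """Check whether the topic should still be filtered from the feed."""
        expiry = self.muted_until.get(topic)
        return expiry is not None and datetime.now() < expiry

mutes = TemporaryTopicMute()
mutes.mute("babies", days=90)        # hide the topic for roughly three months
print(mutes.is_muted("babies"))      # True
print(mutes.is_muted("gardening"))   # False
```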

Of course, such a system would almost certainly be abused to curate away viewpoints with which an individual disagrees. Yet in a world in which “safe spaces” are increasingly recognized on college campuses and in the workplace as necessary to avoid encountering viewpoints that cause emotional trauma, such a system would mirror the spaces already being created in the real world to isolate individuals from content, information and perspectives to which they object.

Putting this all together, personalization powers the modern Web, but it focuses today on selecting what content to display, not choosing what content to hide. Personalized content moderation filters could eliminate a great deal of online toxicity, isolating it to the communities that produce it, while preventing it from harming the communities it is intended to hurt.

After all, if the schoolyard bully is confined to a soundproofed room on the far side of the playground and cannot actually be heard by those they hope to harm, they are unlikely to continue their behavior, since it does not have the intended effect.

In the end, if social platforms applied their existing tools for good rather than focusing relentlessly on profit, they could make a tremendous dent in the toxicity that plagues today’s Web.