


Could Digital Assistants Be Our Personalized Toxicity Filters For Social Media?



Social platforms have struggled to contain the rising tide of toxicity and hate that has made them increasingly less welcoming and inclusive. Their go-to solution to date has been to chase quick-fix filters that target only the most egregious hate speech while ignoring the far more prevalent casual toxicity. Could our future digital assistants act as personalized toxicity filters for social media, shielding us with custom-tailored blockades against the hate and horrors that affect each of us the most?

Today’s go-to solution for social media’s toxicity is the hate speech filter, designed to target an extremely narrow range of the most egregious statements. While an important first step, these filters address only a microscopic fraction of the total social toxicity.

Platforms are loath to move much beyond selected topics like overt hate speech because of fears that banning all harmful content would strip away much of the informality and stream-of-consciousness speech that has made social media so commercially successful. After all, if users could not use profanity, sarcasm or emotionally laden diatribes, how would they express themselves online?

Making the toxicity filtering problem far harder is the simple fact that we are not all alike. We each have unique traumas and sensitivities that might make one person’s desired content another’s pain.

A couple who has just experienced a miscarriage will today be barraged with endless cheerful advertisements for products, toys, services and experiences for the child they will never meet. As they turn to social media to seek solace and comfort from friends and family, the algorithms powering those platforms will be entirely oblivious to their loss and will bombard them with ads that cause horrific pain.

Even if that couple knows that Facebook has a specific setting to turn off ads for child products and has the composure in their moment of grief to wade through menu after menu to find it, they will likely still confront countless child-related ads, making it impossible to escape their grief.

Similarly, someone recovering from alcoholism might not wish to see endless advertisements for boozy weekends and winery outings.

Beyond such obvious examples, traumas rooted in our unique lived experiences can manifest in far more complex ways. Someone who almost drowned as a child might suffer panic attacks when confronted with summertime ads for pools in their area, while someone who almost died from a food allergy might be traumatized by endless images of that food in their news feed from well-meaning friends who don't know about their allergy or experience.

Today’s social filters are designed around the idea of banning universally toxic topics like hate speech. Ironically, for platforms that so intimately personalize their content streams, the major networks have yet to roll out personalized toxicity filters that would learn the sensitivities of their users and allow fine-grained customization.

Perhaps that is the role our digital assistants of the future could play.

Imagine a digital assistant that could curate all of the social posts arriving in our news feeds in real time, scanning each against a personalized model and removing those that would be traumatic to us, even if others might find them acceptable or welcome.
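To make the idea concrete, here is a minimal sketch of what such per-user filtering might look like. Everything in it is an assumption for illustration: the SensitivityProfile structure, the keyword-based classify_topics() stand-in for a real classifier, and the filtering threshold are hypothetical, and a real assistant would rely on trained models rather than keyword lists.

```python
from dataclasses import dataclass, field


@dataclass
class SensitivityProfile:
    """Hypothetical per-user profile: topic -> how strongly the user wants it filtered (0.0-1.0)."""
    sensitivities: dict[str, float] = field(default_factory=dict)
    threshold: float = 0.5


def classify_topics(post_text: str) -> dict[str, float]:
    """Stand-in for a real topic/toxicity classifier; returns topic -> relevance."""
    # In practice this would be a trained model; here it is a naive keyword check.
    topics = {
        "child_products": ["stroller", "baby", "toddler"],
        "alcohol": ["wine", "brewery", "cocktail"],
    }
    text = post_text.lower()
    return {topic: 1.0 if any(word in text for word in words) else 0.0
            for topic, words in topics.items()}


def filter_feed(posts: list[str], profile: SensitivityProfile) -> list[str]:
    """Show only posts whose personalized pain score stays under the user's threshold."""
    visible = []
    for post in posts:
        relevance = classify_topics(post)
        pain = max(relevance[topic] * profile.sensitivities.get(topic, 0.0)
                   for topic in relevance)
        if pain < profile.threshold:
            visible.append(post)
    return visible


# A grieving couple's profile filters child-product posts but leaves everything else alone.
profile = SensitivityProfile(sensitivities={"child_products": 1.0})
feed = ["50% off strollers this weekend!", "Great hike in the hills today."]
print(filter_feed(feed, profile))  # ['Great hike in the hills today.']
```

The key design point is that the classifier is generic while the profile is personal: the same post scores differently for different users, which is exactly what today's one-size-fits-all hate speech filters cannot do.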

Our assistant could learn through our behaviors and feedback over time what kind of content is harmful to us individually, becoming ever better at protecting us and even adapting to new traumas in our lives. The death of a beloved pet would cause our assistant to immediately filter out the pet ads and happy pet posts we once embraced, until the pain of our loss has subsided.
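That learning loop could be simple in spirit. The sketch below is again purely illustrative: it nudges a topic's sensitivity up whenever the user hides or reports a post about it and lets it decay when similar posts pass without complaint. The learning rate and the 0-to-1 scale are assumptions, not anything existing platforms or assistants implement.

```python
def record_feedback(sensitivities: dict[str, float], topic: str,
                    hurt: bool, learning_rate: float = 0.2) -> None:
    """Move a topic's sensitivity toward 1.0 when the user hides or reports a post
    about it, and back toward 0.0 when similar posts pass without complaint,
    so the profile tracks new and fading traumas."""
    current = sensitivities.get(topic, 0.0)
    target = 1.0 if hurt else 0.0
    sensitivities[topic] = current + learning_rate * (target - current)


# Example: a few "hide this" signals on pet posts after a pet's death would
# quickly push "pets" toward the filtering threshold, while months of neutral
# reactions would gradually let those posts back into the feed.
sensitivities = {"pets": 0.1}
record_feedback(sensitivities, "pets", hurt=True)
record_feedback(sensitivities, "pets", hurt=True)
print(round(sensitivities["pets"], 2))  # roughly 0.42 after two signals
```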

Such a personalized toxicity filter would not entirely strip away the reality of the world to present a caricature of a “happy” utopia. Rather, it would merely filter the toxicity that affects us directly, allowing us to experience the world’s negativity without becoming traumatized and without confronting hateful attacks directed at us personally or our demographics.

Putting this all together, perhaps someday the social networks will build customized toxicity filters that learn our unique vulnerabilities and create a more welcoming and inclusive environment. At best, however, each platform would have its own filters with differing abilities and knowledge about us, creating an uneven filtering landscape that is reset each time we join a new network.

In the meantime, perhaps the coming era of more advanced digital assistants might offer a world of universal filtering, protecting us from the internet’s worst.