Could Our Future Personal Digital Assistants Curate Social Media For Us?

Today’s social media platforms are ruled by opaque algorithms that decide what we see, steering us toward the content we are most likely to engage with and respond to. We have no right to see inside these algorithms or to control them in any way, and their decisions serve their profit-minded creators, not our intellectual and emotional best interests. Could the personal digital assistants of the future wade through the vast social wasteland on our behalf, tracking down content and curating a specialized feed that reflects what we want to see, not what the platforms want us to see, and even shield us from hate?

The algorithms that run today’s social media platforms are designed to mindlessly drive us toward the highest-engagement content. Posts that make us linger, share and create new content in response are the ones the companies can sell the most advertising alongside. These posts generate the greatest return for the platforms, but they may be bad for our well-being, especially when the algorithms inadvertently curate customized feeds of hate and horror for us.

At the same time, digital assistants are becoming increasingly capable, evolving from mere voice-activated jukeboxes into systems that can begin to hold conversations with humans on our behalf.

Today’s deep learning algorithms can understand imagery and video, transcribe audio and make increasingly fine-grained sense of text.

Put together, these form the key building blocks a digital assistant would need to curate social media on our behalf.

Imagine a digital assistant of the future that watches our Twitter, Facebook, Instagram and other feeds on our behalf. Every post that scrolls by is examined by our assistant, which runs it through various AI models to catalog its contents, determine its relevance to our immediate and near-term interests and estimate the emotional impact it will have on us.
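As a rough sketch of what that per-post analysis might look like, the snippet below uses the open-source Hugging Face transformers library. The interest labels are illustrative assumptions, and a real assistant would use far richer models of its owner than this:

```python
# A minimal sketch of per-post analysis; the interest labels and field
# names are assumptions for illustration, not any platform's actual API.
from transformers import pipeline

topic_model = pipeline("zero-shot-classification",
                       model="facebook/bart-large-mnli")
emotion_model = pipeline("sentiment-analysis")

MY_INTERESTS = ["local news", "photography", "machine learning", "restaurants"]

def analyze_post(text: str) -> dict:
    """Catalog a post's topic, relevance to our interests and emotional tone."""
    topics = topic_model(text, candidate_labels=MY_INTERESTS)
    emotion = emotion_model(text[:512])[0]  # crude truncation for model limits
    return {
        "text": text,
        "best_topic": topics["labels"][0],   # labels come back sorted by score
        "relevance": topics["scores"][0],    # 0..1 confidence in the top label
        "emotion": emotion["label"],         # e.g. POSITIVE / NEGATIVE
        "emotion_score": emotion["score"],
    }
```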

Clickbait headlines lose their power when an algorithm reads the article from top to bottom and can tell us whether the text actually relates to its headline and whether it is worth reading.
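One simple, if crude, way to approximate that check is semantic similarity between headline and body. The sketch below assumes the sentence-transformers library, and the threshold is an invented placeholder that would need tuning against labeled examples:

```python
# A rough headline/body consistency check; the 0.4 cutoff is an assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def is_clickbait(headline: str, body: str, threshold: float = 0.4) -> bool:
    """Flag articles whose body bears little semantic relation to the headline."""
    h_emb, b_emb = model.encode([headline, body])
    similarity = util.cos_sim(h_emb, b_emb).item()
    return similarity < threshold
```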

A post about our favorite restaurant closing might be highly relevant to us, but our assistant might determine it is better for us to learn that information on the way home from work so that it doesn’t distract from our busy day. In contrast, news of a massive water main break that likely affects our house might even result in a text message to bring it right to the forefront of our attention.
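That kind of triage could begin as little more than a handful of rules. In the toy sketch below, the post dictionary comes from the earlier analysis snippet, and the notification functions are hypothetical stand-ins for real SMS and queueing plumbing:

```python
# A toy triage rule; the urgent-topic list and 0.8 cutoff are assumptions.
URGENT_TOPICS = {"water main break", "road closure", "power outage"}

def send_text(msg: str) -> None:
    """Hypothetical stand-in for an SMS gateway."""
    print(f"[TEXT NOW] {msg}")

def queue_for_commute(post: dict) -> None:
    """Hypothetical stand-in for a deferred-delivery queue."""
    print(f"[QUEUED FOR RIDE HOME] {post['text'][:60]}")

def triage(post: dict) -> str:
    """Decide whether a post interrupts us now or waits for the commute."""
    if post["best_topic"] in URGENT_TOPICS and post["relevance"] > 0.8:
        send_text(post["text"])
        return "now"
    queue_for_commute(post)
    return "later"
```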

More interestingly, such an assistant could bypass the built-in curation algorithms used by each social platform, scanning the latest feed of each of our friends and contacts for posts of interest to us. While Facebook’s algorithm might never surface a post about a friend’s new hobby that happens to match one of ours, our assistant could spot the post and bring it to our attention.
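In code, that bypass might amount to walking each friend’s raw, chronological feed and scoring every post against an embedding of our interests. In the sketch below, `fetch_latest_posts` is purely hypothetical, since today’s platforms tightly restrict that kind of access through their APIs:

```python
# A hedged sketch of bypassing platform ranking; the interest profile and
# threshold are assumptions, and the fetch function is entirely hypothetical.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
interest_profile = model.encode("photography, hiking, machine learning")

def fetch_latest_posts(friend: str) -> list[str]:
    """Hypothetical: a friend's recent posts, in plain chronological order."""
    return []  # a real implementation would need each platform's cooperation

def interesting_posts(friends: list[str], threshold: float = 0.35) -> list[str]:
    """Keep posts semantically close to our interests, platform ranking ignored."""
    keep = []
    for friend in friends:
        for post in fetch_latest_posts(friend):
            score = util.cos_sim(model.encode(post), interest_profile).item()
            if score > threshold:
                keep.append(post)
    return keep
```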

However, the greatest application of such an assistant would be combating the attention economy that drives the social platforms, stripping away the mindless time-wasters that would otherwise distract us. Instead of serving us endless posts designed to monetize us, our assistants could spend their time wading through the social deluge to find the few items of genuine interest, delivering a summary of the day’s most important news on our bus ride home.
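Even that end-of-day digest is within reach of today’s tooling. The sketch below leans on a generic off-the-shelf summarization model and crudely truncates the input to fit its context window, both assumptions a real assistant would handle more gracefully:

```python
# A minimal end-of-day digest sketch; model choice and length limits are
# assumptions for illustration.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def daily_digest(kept_posts: list[str]) -> str:
    """Condense the day's kept posts into a short, commute-sized briefing."""
    text = " ".join(kept_posts)[:3000]  # crude truncation for the sketch
    return summarizer(text, max_length=130, min_length=40)[0]["summary_text"]
```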

Even more powerfully, while the social platforms continue to struggle to remove hate and toxic speech, our assistants could take over this role, readily shielding us from the hate and horrors directed at us. Today’s deep learning tools are accurate enough to discern between a post documenting hate and one spewing hatred towards others. A post reporting on genocide, government repression or other societal issues could be allowed into our news feeds as usual, while a profanity-laden emotional diatribe attacking us or our demographic could be instantly wiped from our view.
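As a hedged illustration, the open-source Detoxify models already score text along dimensions like toxicity and identity attack. The “targets me” check below is a deliberately crude stand-in for the much harder problem of modeling who a post is actually aimed at, and every threshold here is an assumption:

```python
# A sketch of a personalized filter; thresholds and identity terms are
# placeholders a real assistant would learn from its owner.
from detoxify import Detoxify

tox_model = Detoxify("original")
MY_IDENTITY_TERMS = {"@my_handle", "journalists"}  # illustrative only

def should_hide(post: str) -> bool:
    """Hide posts that are both toxic and aimed at us; keep reporting on hate."""
    scores = tox_model.predict(post)
    is_toxic = scores["toxicity"] > 0.8 or scores["identity_attack"] > 0.5
    targets_me = any(term in post.lower() for term in MY_IDENTITY_TERMS)
    return is_toxic and targets_me
```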

In essence, rather than attempting to build global toxicity filters that work for the entire planet, our future digital assistants could act as personalized toxicity filters that remove the material that directly targets and affects us.

Such customized filters could go a long way towards shielding us in our moments of trauma.

Putting this all together, we already have the building blocks needed to create AI-powered personal digital assistants that curate the digital world on our behalf: skimming our social feeds, bypassing the platforms’ curation algorithms and previewing every piece of content, then generating a customized feed of what we need to know that accounts for our mental and emotional state and our information needs at that precise moment.
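Reusing the sketches above, the whole loop might fit in a dozen lines, though a real assistant would of course be vastly more sophisticated:

```python
# A hypothetical orchestration loop tying the earlier sketches together.
def curate(raw_feed: list[str]) -> list[str]:
    """One pass of the assistant over a raw, unranked feed."""
    kept = []
    for text in raw_feed:
        if should_hide(text):          # personalized toxicity filter
            continue
        post = analyze_post(text)      # catalog topic, relevance, emotion
        if triage(post) == "now":      # urgent items were already pushed
            continue
        if post["relevance"] > 0.5:    # keep only the genuinely interesting
            kept.append(text)
    return kept                        # feed daily_digest() on the ride home
```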

In the end, perhaps the escape from social algorithms is a set of counter-algorithms that work for us rather than against us.