Instant Karma's Gonna Get You On The Social Networks


Here’s an interesting research paper by two lecturers at the UK’s University of Sheffield, Yida Mu and Nikolaos Aletras, entitled “Identifying Twitter users who repost unreliable news sources with linguistic information”. Based on a sample of 6,200 Twitter users, it identifies the language features of people with a greater tendency to share misinformation, fake news or unverified stories.
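The paper’s actual pipeline is more sophisticated, but the general approach, training a classifier on linguistic signals to flag users likely to share unreliable sources, can be sketched in a few lines of Python. Everything below (the data, labels and features) is invented for illustration and is not the authors’ actual model:

```python
# Illustrative sketch only: a linear classifier over word features,
# in the spirit of the study. The training data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: each entry is the concatenated tweets of one
# user; the label marks whether that user has shared unreliable sources.
users_tweets = [
    "BREAKING!!! they don't want you to know the TRUTH...",
    "New study published in Nature on vaccine efficacy, link below.",
]
shared_unreliable = [1, 0]

# TF-IDF word features stand in for the richer linguistic signals
# (style, emotion, vocabulary) examined in the paper itself.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(users_tweets, shared_unreliable)

# Score a new, unseen user's timeline: probability of being a sharer
# of unreliable sources under this toy model.
print(model.predict_proba(["SHOCKING secret the media is hiding!!!"])[0][1])
```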

The technique is not a million miles from Minority Report, the movie set in a future where the authorities can predict crimes before they take place, applied here with social media as the context and disinformation as the crime; it is also reminiscent of the well-known “Nosedive” episode of Black Mirror. Leaving aside its conclusions, the study, which uses the type of language a person employs to characterize their future behavior, could be improved were it carried out by a company with access to a bigger stockpile of data. But the idea of applying reputation metrics based on how prone a user is to share misinformation, while not new, is certainly thought-provoking.

Some social networks already use metrics to monitor their users’ behavior along karmic lines, such as their responses to certain content or the ratings given by other users. The concept of karma comes from Buddhism and refers to actions driven by intention that lead to future consequences: in this case, it represents how a user’s actions on a platform can condition their image or value. Such metrics can sometimes be visible on the user’s profile, characterizing them, or can even give more weight to their decisions in votes or similar mechanisms.

What would happen if a social network like the one used in this study, Twitter, applied karmic metrics to its users? Unlike networks such as Facebook, where many users attach value to what their contacts share based on their relationship with them or their level of trust on certain issues, on Twitter we access messages or information shared by people we know practically nothing about, of whom we simply have a photo or a very brief profile, and sometimes not even that. What would happen if each Twitter profile had a karma indicator based on previous behavior, a reputational meter that other users could consult to gauge how far to trust the content that account shares?

As more networks turn to fact-checkers that tag news based on veracity, applying a reputation indicator to users becomes easier. What would happen if such an indicator were visible to other users? If I publish or share garbage, my karma decreases, while if I share reliable information, or information from proven sources, my karma increases. Depending on my previous behavior, my history, the trust that others place in what I publish varies, to the point that someone with a very bad reputation would surely see the impact of whatever they publish diminish, because it would automatically raise doubts in everyone who had the opportunity to interact with that content. Hypothetically, there could even be more sophisticated versions in which a user’s karma was associated with particular topics, so that someone could have good karma when talking about some subjects but bad karma when talking about others, based solely on their behavior on the network.
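As a minimal sketch of how such a mechanism might work (the class, scoring rules and weights below are assumptions invented for the example, not any platform’s actual system), karma could be a running score per user, updated whenever a fact-checker labels something they shared, with an optional per-topic breakdown:

```python
# Minimal illustrative karma tracker; the scoring rules and weights are
# invented for this example, not taken from any real platform.
from collections import defaultdict

class KarmaTracker:
    def __init__(self):
        self.global_karma = defaultdict(int)  # user -> overall score
        # user -> topic -> score, for the topic-specific variant
        self.topic_karma = defaultdict(lambda: defaultdict(int))

    def record_share(self, user, topic, verdict):
        """verdict comes from a fact-checker: 'reliable' or 'unreliable'."""
        delta = 1 if verdict == "reliable" else -2  # penalize garbage more
        self.global_karma[user] += delta
        self.topic_karma[user][topic] += delta

    def trust(self, user, topic=None):
        """A displayable reputation: topic-specific if a topic is given."""
        if topic is not None:
            return self.topic_karma[user][topic]
        return self.global_karma[user]

tracker = KarmaTracker()
tracker.record_share("@alice", "health", "unreliable")
tracker.record_share("@alice", "sports", "reliable")
print(tracker.trust("@alice"))            # -1: overall karma
print(tracker.trust("@alice", "sports"))  #  1: good karma on sports only
```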

As I said, this is hardly a new idea: some of the sites I publish on already rate me as an expert on a particular topic when the articles I publish are evaluated positively by other users, affecting my profile or the extent to which the platform distributes my articles. In the academic world, authors gain credibility (and sometimes promotions or better pay) depending on the extent to which their articles are cited by other authors, generating a social ranking that Larry Page and Sergey Brin adapted as the basis for Google’s original search algorithm.
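That citation-based ranking is essentially PageRank: a paper (or author, or web page) earns credibility in proportion to the credibility of whoever cites it. A bare-bones power-iteration version, run on a made-up three-paper citation graph, looks like this:

```python
# Bare-bones PageRank via power iteration on a tiny, invented citation graph.
def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    rank = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {}
        for n in nodes:
            # Sum the rank flowing into n from every node that cites it,
            # split evenly among each citing node's outgoing links.
            incoming = sum(rank[m] / len(links[m]) for m in nodes if n in links[m])
            new_rank[n] = (1 - damping) / len(nodes) + damping * incoming
        rank = new_rank
    return rank

# Hypothetical example: paper_b is never cited, so it ends up ranked lowest.
citations = {
    "paper_a": ["paper_c"],
    "paper_b": ["paper_a", "paper_c"],
    "paper_c": ["paper_a"],
}
print(pagerank(citations))
```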

On a social network like Twitter, in the absence of these kinds of metrics, some users decide whether to follow somebody based on their number of followers, their profile information or even a photograph, which is hardly the most reliable approach: the fact that somebody has many followers doesn’t mean she isn’t an idiot, that those followers actually exist, or that she doesn’t share hate speech. Could a karma metric help people decide who to trust? Pushed to the limit, would it change the user experience if someone like Donald Trump visibly had bad karma, or was known to share fake news and disinformation?
