Stopping Disinformation Requires Measuring And Understanding It, Not Just Monitoring And Debunking It

Social media companies and governments across the world have struck upon the perfect solution to everything from fake news to foreign influence to toxic speech: engage in a global game of whack-a-mole, suspending accounts and deleting posts long after their speech has spread through society. Rather than follow the carefully measured approach of the scientific method and invest heavily in research to understand the state of these issues on their platforms, how that speech flows across their borders and the best way to mitigate it while protecting innocent users, the platforms and governments have instead opted for the low-cost knee-jerk reaction of instituting mass bans and aggressive censorship, increasingly turning to opaque black boxes that proactively delete speech without any oversight of their potential to censor democracy itself. The result is that misinformation, disinformation, “fake news” and foreign influence are addressed only through near-term temporary fixes rather than long-term solutions that seek to mitigate their role in society.

Rather than treat the issue of false information and influence as a grand challenge akin to the search for a cure to democracy’s cancer, social media companies have instead merely invested in a warehouse of band-aids to try to keep their platforms chugging along without any substantive change that might harm the viral machines at the heart of their economic engines.

Imagine if we treated the flow of information across social media as a true societal grand challenge, with Twitter, Facebook and the other major social platforms collaborating with the global research community, akin to the way massive scientific instruments like colliders are analyzed by large collaborative teams working both in concert and as devil’s advocates to verify each other’s results.

While there have been some recent efforts to make social media data more accessible to researchers, the specific implementations to date have privileged a narrow cadre of “usual suspect” scholars engaged in groupthink and have relied on extraordinarily dangerous mechanisms that pose grave harm to the privacy and safety of those platforms’ users.

As more and more research occurs outside the traditional confines of academia, and especially outside the clubby atmosphere of the traditional grantmaking process, such initiatives must take into account the vast world of open data researchers and academics beyond the list of “usual suspects” that dominate the funding world, whose work tends to fall within the narrow confines of groupthink and lacks diverse perspectives, a historical understanding of propaganda and a global view of how it differs across the world.

The status quo simply isn’t working.

Even the nation’s most prestigious institutions like the National Academies pour forth reports focusing their attention on the metrics and datasets easiest and cheapest to get their hands on, rather than asking the only question that actually matters: how is false information flowing through our society and what is its impact?

That question may not be answerable through the free Twitter sample datasets lying around on a researcher’s desktop.

A century ago, when lives were actually on the line, the Allies’ researchers put aside their traditional scholarly rivalries and came together in the service of their nations to map out the flow of misinformation, disinformation, falsehoods, fake news and foreign influence.

They recognized that to combat such corrosive information, one must first understand how it enters and flows through a society. Does being exposed to a single Russian election ad magically change how one votes in a closely contested election? What about 50 ads? How about 100 ads spread out over several weeks? Or 1,000 ads in a non-stop deluge in the hours before one heads to the polls? What is the threshold at which a diehard Clinton volunteer will magically be converted into a Trump voter? Or does foreign propaganda have less of an all-powerful impact than the public believes?

We don’t have that kind of information today.

Instead, we’ve been focusing on the things easiest to measure: how many pieces of content were published, shared or viewed.

In short, the production side of the equation rather than the only part that matters: consumption.

As the citizens of the world’s repressive regimes can attest, just because the government pumps out a steady stream of pro-government narratives and just because a citizen is forced to view or share that material does not mean that they actually absorb that content and are magically transformed by its narratives.

Similarly, a growing body of research documents how commonly we share material without actually reading it, meaning the contents of those articles may never have actually been seen by the individuals propagating them.

Most importantly, as the “magic bullet” proponents of a century ago eventually conceded, just because we view a piece of content does not mean we are magically converted by it. A diehard Clinton supporter and campaign volunteer is unlikely to instantly turn into a diehard Trump supporter merely by glimpsing a Russian campaign ad flashing in front of their eyeballs for a billionth of a second.

Our understanding of the impacts of propaganda on societies has improved immensely since those naïve perspectives of a century ago, but there is still much we don’t know, especially about the ability of microtargeted ads to sway the vast disaffected and undecided swaths of society.

Instead, social platforms and governments have largely focused their efforts on ramping up their moderator staffs, writing checks to fact checkers and banning accounts more rapidly and aggressively.

Yet, we recognize that fact checking can only do so much.

A viral post promoting a fake narrative has already done its damage. Displaying a link to a fact check days or even hours later claiming the post was “fake news” is unlikely to reach all of those millions of people that saw the original post, let alone sway them to reject the earlier claims.

Once “reasonable doubt” has been sown, it is an uphill battle to restore “truth.”

False information and foreign influence don’t all spread from a couple of Russian Twitter bots with Russian IP addresses paid for in rubles.

They enter society organically through myriad informational ports of entry, including the mainstream news media, making combating them a problem not limited merely to Twitter and Facebook.

In fact, the most impactful propaganda efforts are hybrid initiatives that leverage the mainstream media and civic organizations to seed narratives all across the country that then spread organically through all available communications channels. Social media plays just a small role in this overall contagion environment. Halting social channels is of little use when such campaigns are designed to harness the civic infrastructure of society itself, making them highly resilient.

Once again, looking to the propaganda campaigns of a century ago can teach us much about today’s campaigns. The communications technologies may be different, but the playbooks have remained unchanged.

Rather than merely cataloging false stories and issuing retroactive fact checks to combat them, or instituting automated filtering to proactively block stories around narratives and beliefs disagreeable to social media company executives and repressive governments, we need to focus on how harmful information spreads and how to mitigate it.

Most importantly, we need to focus not on the naïve and simplistic models used today that emphasize “big data” but on the far more holistic understandings of information distribution and impact that were the focus of our predecessors a century ago. In short, we must combine “big data” mapping of information flows with traditional offline and data-exhaust research into the impact of those narratives, and a mapping of all of the non-internet conduits that complement and contravene digital channels.

Putting this all together, rather than counting how many Russian-sponsored ads were run on Facebook, how many “fake news” tweets flowed across Twitter, how many people “shared” or “engaged with” fake stories or how many “impressions” foreign narratives received, the only question that matters is whether any of this actually had an impact.

Does it really matter if every citizen in America was exposed to billions upon billions of pieces of false or state-sponsored information if that content had no effect? It would certainly be desirable to curb such content, but if its impact was minimal, that would suggest a far more nuanced approach to addressing it than today’s blunt instruments of mass bans and automated censorship, which themselves pose threats to democracy far greater than false information and foreign influence.

Retroactively countering narratives can help in some circumstances, but the impact of such fact checking tends to be narrowly confined to the corners of society already most resistant to such falsehoods in the first place.

Instead, the real focus must be on understanding the flow of information so that we can build a more information-literate society.

When lives were on the line a century ago and the study of false information was a life-and-death matter, we treated it as a legitimate scientific endeavor, measuring how falsehoods entered society, how they leveraged every conduit of American life to spread, the conditions under which they actually had an impact and the vulnerabilities in that pipeline.

We didn’t just counter falsehoods with our own fact checks; we focused on how to make our society more resilient to them in the first place.

We lack that urgency today, treating false information and foreign influence merely as buzzwords to sprinkle on grants and publications rather than as very real and genuine threats to our democratic societies.

Stopping the flow of misinformation, disinformation, “fake news” and foreign influence through our societies requires more than merely blindly debunking it after the fact. It requires understanding how it enters and flows through our societies, how it impacts us and why. Only by understanding the entire lifecycle of these falsehoods and influence operations can we identify their weaknesses and develop strategies to interrupt their distribution and impact, creating a more resilient and information-literate society.

In the end, mitigating misinformation requires measuring and understanding it, not merely monitoring and debunking it.