Today's Disinformation Research Could Learn Much From Last Century's Propaganda Research


Foreign influence campaigns, rampant “fake news” and false rumors fanned by adversarial nation-states, thrown elections, a polarized nation living in filter bubbles, hate speech spinning into violence, citizens overwhelmed by and retreating from a deluge of questionable information, and democracy in decline, collapsing under the pressure of failed communications platforms and misinformation, disinformation and fake news. Every policymaker, academic and business pitching an “instant fix” that promises to use the latest technology to solve everything. At first glance this might seem a description of today’s world of social media-fueled disinformation, but in reality it summarizes the world of propaganda research of a century ago. Look closely and all of our modern talk of fake news, foreign influence and democracy in decline is merely our latest reckoning with the age-old tactic of governments using communications technologies as a weapon of war.

Not a day goes by that my inbox isn’t deluged with press releases from universities, non-profits, companies and government officials touting yet another instant fix to the problem of false information on the web. As grant funding has flooded into the misinformation/disinformation space, there is an almost comical naivety to the steady stream of “AI-powered misinformation elimination machines” emerging from researchers and businesses that until a few months ago had never even heard the word “misinformation,” let alone had any idea how information spreads through societies or, most importantly, how states co-opt these information flows to spread false narratives.

Just as AI has become the de facto buzzword everywhere else, so too has it become the go-to solution for misinformation.

According to all of the press releases I receive, it seems the elusive fix for eliminating state-sponsored disinformation is simply to feed an AI algorithm a handful of news articles that fact checkers have debunked and, voilà, one has a magical AI system that can instantly identify all the world’s state-sponsored information campaigns with nearly flawless accuracy.

Of course, according to the latest crop of overnight misinformation experts, false narratives are easy to spot: just blacklist a set of web domains and social media accounts and all “fake news” disappears. Alternatively, as noted above, just feed a few fact-checked news articles into some kind of deep learning algorithm and you’ve got yourself an instant misinformation detector.
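To make concrete why the approach described above is so shallow, here is a minimal sketch of the kind of “instant fix” these press releases describe: a static blacklist of domains plus keyword matching against a few debunked claims. The domains and phrases below are hypothetical placeholders, not real data; the point is that any new domain or reworded claim slips straight through.

```python
# A naive "misinformation detector" of the kind critiqued above:
# a fixed domain blacklist plus substring matches on debunked claims.
# All domains and phrases are hypothetical examples.

BLACKLISTED_DOMAINS = {"fake-news-example.test", "hoax-site.example"}

DEBUNKED_PHRASES = [
    "miracle cure discovered",
    "election was cancelled",
]

def naive_flag(url: str, text: str) -> bool:
    """Flag an article as 'misinformation' if its domain is blacklisted
    or it echoes a previously debunked phrase verbatim."""
    # Crude domain extraction: take the host portion of the URL.
    domain = url.split("/")[2] if "//" in url else url.split("/")[0]
    if domain in BLACKLISTED_DOMAINS:
        return True
    lowered = text.lower()
    return any(phrase in lowered for phrase in DEBUNKED_PHRASES)

# A reworded falsehood on a fresh domain is not flagged at all:
print(naive_flag("https://new-domain.test/story", "A miraculous cure was found"))
```

A sophisticated campaign trivially defeats this: it registers new domains, paraphrases its claims and blends into organic traffic, which is precisely the author’s point about why such one-click fixes fail.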

State-sponsored disinformation campaigns are the easiest of all: just scan Twitter accounts for obvious bots by using this “one weird trick.” After all, as the Mueller report taught us, Russian influence campaigns are so trivial that a child could spot them.

These false narratives would be comical if their impact was not so serious.

Much as the field of sentiment analysis has been overtaken by computer scientists moonlighting as psychologists, entirely oblivious to why their trivial solutions fail so spectacularly, so too has the study of false information, especially influence campaigns, been overtaken by computer scientists eager to demonstrate that every societal problem can be solved with a few lines of code and zero knowledge of what they are actually studying.

In the case of state-sponsored information influence campaigns, the popular narrative of simplistic technically-naïve campaigns and trivial detection methodologies could not be further from the truth.

Russian influence campaigns, and those of other states that have invested heavily in information operations, including some US allies, are extraordinarily complex hybrid efforts. They follow few templates, run simultaneous firewalled campaigns, span a range of modalities designed to transition into domestically accelerated organic campaigns, draw on a range of signals, including classified intelligence, to direct them, and typically involve endless misdirection and timed disclosures. In particular, campaigns often include “noisy” components designed to attract maximum attention and pull scrutiny away from more substantive operations. Those operations are conducted so as to remain undetected until after the event of interest, such as an election, at which point they are designed to draw attention to themselves, making the campaign extremely visible and amplifying its effects as the targeted nation tears itself apart believing there are ever more operations yet to be discovered.

In effect, well-run influence campaigns are immensely complex, multifaceted and multimodal, making mitigation far from a one-click removal. Much like cyber actions, they are extraordinarily fluid, adapting in realtime to countermeasures and in a state of perpetual evolution such that today’s mitigation strategy is of little defense against tomorrow’s campaigns.

Well-constructed campaigns present a wilderness of mirrors that makes identification and removal extraordinarily difficult.

For those who think combatting foreign influence campaigns can be achieved with a few lines of deep learning code, a few blacklists and a bot detector or two, it is well worth spending some time reading the vast propaganda literature of the first half of the last century. While the supporting technologies have changed and our understanding of society's reaction to external stimuli has dramatically improved, the documents of the era lay out in exquisite detail just how complex and sophisticated influence campaigns really are.

Vastly complicating matters, there is a growing movement today towards blended campaigns that combine state direction with citizen action, once again drawing from propaganda tradecraft of yesteryear.

At least two major countries, one a large US ally, now make use of university students from their nation studying abroad in countries across the world to act as auxiliary communications arms of their intelligence services. Students are incentivized to actively publish messaging via all channels available to them, whether on social media, student newspapers, local papers, national press opportunities and meeting with visiting campus speakers, appearing as an ordinary college student rather than an arm of their government. Students are typically empowered with considerable autonomy in their messaging but receive overarching instruction in terms of desired topics and narratives from their government handlers.

The narratives put forth by these students are carefully constructed so as to not deviate substantially from the expected organic messaging of a young citizen of that nation studying in the given country, meaning there are none of the coordinated abnormal reflection networks that can help uncover bot networks or less sophisticated messaging operations.

How does one address the stream of social, mainstream and in-person communications coming from university students speaking on their own behalves about issues they care deeply about, even if that caring is state-instructed? How does one even determine at what point a citizen is speaking from their own heart or that of their stage directions?

These campaigns have been so successful and so little noticed or understood by the public and policymakers in their host nations that they remind us just how effective creative propaganda campaigns can be and that the wartime efforts of a century ago are just as effective in today's peacetime. In fact, the quantitative success of these efforts in shifting the public narrative has been so dramatic that both countries have been rapidly expanding their programs.

Have we simply reached an unprecedented point in our history where there is little we can do to combat the flow of false information, especially state-sponsored information operations?

The answer is that our predecessors a century ago were dealing with precisely the same issues.

Instead of academics, entrepreneurs and technologists rushing to get in on a grant funding goldmine, the nation’s most experienced scholars came together in the wartime service of their government in the pursuit of understanding and combatting propaganda.

The existential urgency of saving Europe meant abstract academic theories gave way to evidence-based observational descriptions, equations and rules. Proposed solutions were put into actionable practice where they could be evaluated in the real world.

Those interested in combatting today’s landscape of false information, from rumors to well-meaning mistakes to malicious falsehoods to profit-minded fakery to state-sponsored information campaigns, would do well to spend a bit of time reading the propaganda research of the past century.

Looking to the past can also help us reconceptualize our present issues to understand the pieces that are specific to today’s technology and the deeper societal issues that transcend technology.

A century ago the transition from words to moving images and sound raised questions of whether the audiovisual world could brainwash a population in a way that words alone could not. Half a century ago these moving images came into our homes through television, raising the question of whether 24/7 access in the intimacy of our safe places would transform their impact. Today they come in the form of the smartphones we carry around with us and the reversion of the gatekeeper model.

Putting this all together, to truly address the issue of false information today, we need to stop treating misinformation, disinformation and their ilk as funding buzzwords. Technology can play a role in helping to slow their spread and mitigate their impact but doing so requires understanding the reality of how populations produce and consume information and especially the wilderness of mirrors that is a modern influence campaign.

History is perhaps the most powerful lens through which to understand today’s world of false information and foreign influence. While commentators today spout off infinite theories and explanations for the flow of false information and would-be saviors propose endless instant solutions, only by looking back to the history of propaganda research can we understand how previous generations addressed these very same issues, the observation-based theoretic constructs they developed, the mitigation strategies they found most successful and the defenses they constructed to reduce propaganda’s impact on society. Most importantly, since these actions occurred in the past, we can assess their lasting impact and which approaches ended up succeeding and failing.

Looking to the past can also help us understand the future to come, especially the strategies and hybrid workflows that adversarial nations across the world have begun to resurrect to great effect.

Combatting foreign influence isn’t a matter of training an AI algorithm on a few fact-checked articles or scanning Twitter for trivially obvious bots. It involves actually understanding the flow of information through societies and the strategies state actors and malicious parties use to subvert these flows for commercial, military and other nefarious purposes.

In the end, if we as a society are serious about combatting the world of false information and foreign influence, we should spend a lot more time reflecting on the lessons of our last major wartime efforts combatting propaganda a century ago.