The Truth Is That We Really Don't Know If There Was Social Media Election Interference

As the world digested the outcomes of yesterday’s US midterm elections, headlines proudly proclaimed that all of the work by the social media platforms to combat electoral misinformation and foreign interference campaigns had paid off. It was, according to commentators, a huge success story showing that the socials had finally mastered the misinformation sphere and prevented a repeat of 2016. The problem is that these rosy assessments are impossible to verify: they rest solely on the assurances of the socials themselves that all is well, along with commentary from government agencies and academics that they didn’t see anything either, much as they didn’t see anything in 2016. Given that these groups largely missed the 2016 interference, and that they were primarily looking backwards for repeats of past efforts rather than forwards to new kinds of misinformation campaigns, why should we trust their claims that there was no significant social media misuse this time around?

In 2016 the major social media platforms failed to spot the nearly overwhelming signs of large-scale misinformation campaigns being conducted across their platforms. Warnings going back at least half a decade from misinformation experts about what to look for, and constant advisories from allied governments detailing what they were seeing in real time, fell on deaf ears. US Government agencies as a whole were unaware of the totality of what was happening, while those that had a greater sense of the activity’s contours still failed to grasp its impact or find a way to effectively mitigate it, despite a steady stream of warnings and feedback from Europe. Academics by and large also failed to pick up on the severity of what was happening until it was too late, largely because the social networks refused to provide critical data to outside researchers, but also because of the divide between the academics who study social media and the experts who specialize in state misinformation tradecraft. Very few academic researchers of social media have previously served in the intelligence community with responsibility for identifying and combating misinformation efforts across the world.

In short, there was a failure of imagination before the 2016 election to foresee how social platforms might be misused to influence it, and a failure to take action once the warning bells began to ring during the election itself.

Fast forward to 2018, and one of the greatest dangers of the counter-misinformation efforts of the past few months is that they have focused largely on identifying and halting the tactics of 2016, rather than thinking creatively about the novel forms that socially-enabled misinformation campaigns will take next.

The majority of the misinformation indicators and metrics employed to date have focused on surfacing the kinds of behaviors that were signatures of the known 2016 efforts. Much like the naïve cybersecurity efforts of yesteryear, which concentrated on stopping attacks that followed past playbooks rather than creatively mitigating novel emerging vectors, our counter-misinformation efforts have focused too narrowly on detecting repeats of the past.

If there is one constant in the world of government-driven information operations, it is that the underlying tradecraft is constantly evolving, adapting and innovating.

Interference by nation states in the electoral affairs of other countries has a long and storied history as a staple of clandestine operations by intelligence services the world over, stretching back as long as there have been elections. The US itself has a rich legacy of conducting active hybrid foreign misinformation campaigns over the decades, often blending misinformation efforts with kinetic operations and evolving those efforts to maximally exploit the communications technologies of the moment.

As social media has grown in importance, it is only natural that nations have sought to leverage its incredible reach to conduct their information operations.

At the same time, it is critically important to recognize that while the public imagination has been captured by the idea of foreign governments posting “fake news” to social media, the reality is that the myriad ways countries interfere in each other’s elections take on a huge number of forms. Many of those efforts are incredibly sophisticated and utilize extensive multifaceted ground efforts to amplify, direct, coordinate and parallel the remote online operations.

In short, while the general public thinks of foreign misinformation campaigns as an adversarial government running ads or posting misleading information on social media, the reality is that the observable portion of the misinformation campaigns being conducted each day on social media within the US is minuscule.

As tradecraft evolves, it will also become increasingly difficult to detect and mitigate such state-sponsored efforts. For example, one approach that has gained currency over the last few years is for governments to pay individuals in foreign countries to engage locally through social media platforms, advancing particular narratives and combating others. When those payments come in the form of scholarships to college students, including citizens of the country in question, and when the students already have a long history of espousing those views or are not paid at all, how does a Twitter or a Facebook combat these efforts? Moreover, as campaigns increasingly stretch across the online and offline worlds, and across social platforms and the open web, their contours become both harder to detect and ever more difficult to separate from organic behavior.

Our current efforts also tend to conflate legitimate differences of opinion with “disinformation” or “fake news” efforts. While foreign interference efforts may seek to stoke partisan differences, it is important to separate divergent organic viewpoints held by American voters from inorganic externally injected narratives.

Well-run social media misinformation campaigns are either never detected or are designed to surface right at the critical moment after an election to throw its legitimacy into question, stoke division and sow societal discord. Such approaches are as old as politics and do not represent a social media era innovation, merely an adaptation of timeless tradecraft to the latest technologies. Governments, including the US, have long utilized all available technologies of the moment to shape the narratives of foreign governments with the purpose of influencing their political leadership. Social media has not fundamentally changed any of this, other than making it cheaper and logistically simpler.

Yet, the novelty of social media and the public’s belief in each technological era that everything is somehow different now has caused us to latch onto social media as our latest bogeyman. Much as “fake news” and “filter bubbles” were transformed from areas of legitimate academic study into meaningless popular obscenities and catch-all scapegoats, so too has “social media” become the go-to specter for all of society’s problems.

In some regards the centrality of social media and its written modality has made it easier to see and study misinformation campaigns, but at the same time it has made those campaigns far more dangerous. We too easily fixate on the small portion of campaigns that we can readily see, rather than digging deeper into the murky waters beneath to understand their totality.

Of course, one of the reasons the social media platforms have received so much blame is that they appear to have invested so little in addressing the most obvious ways their platforms have been misused. Their reliance on heavily automated filtering processes, which have allowed anyone to run ads claiming to be “paid for by Mike Pence” or by ISIS without so much as a single question being raised by Facebook, along with the loopholes in how they enforce their rules, demonstrates that the vaunted counter-misinformation efforts of the socials are far more words than action. A Facebook spokesperson did not respond when asked for comment.

Moreover, the platforms seem largely to be treating the issue of misinformation in isolation, with each company focusing on misuse of its own platform rather than looking at coordinated multimodal activity spanning many platforms, a natural evolution of 2016’s tradecraft adapted to today’s countermeasures.

In the end, looking back to 2016, the social platforms, government agencies and academic community all failed to recognize and act on the totality of the foreign misinformation campaigns that exploited social media. As we look back on 2018’s midterm elections and the rosy self-reported assessments that the platforms managed to halt most of the misinformation campaigns, it is legitimate to ask why we should accept the platforms’ claims that they succeeded. After all, if they missed 2016, why should we expect that they did any better in 2018, with nothing other than their word to go on?

With all of the focus of midterm counter-misinformation efforts on detecting the approaches of two years ago instead of looking imaginatively forward to the natural evolution of misinformation tradecraft, we are largely blind to where socially mediated misinformation campaigns will be fought in the future. All the while, the socials have adopted so much automation and such simplistic approaches that the few safeguards they have built are almost meaningless. As we look to the past, our adversaries look to the future, constantly developing new tradecraft that leverages the ever-changing digital world for their misinformation campaigns.

In the end, the gatekeepers that silently and opaquely control everything we see and say online now also control our understanding of just how much of our digital world is real. In a world in which democracy itself is at stake, that is a frightening prospect indeed.