American Campaigning Is Already Copying Russian Influence Tactics 'For Research'

As two US Senate-commissioned reports made the rounds this week, offering additional detail on the 2016 Russian influence campaign, an even bigger story broke: the first documented instance of American political operatives explicitly copying the Russian tactics. According to the Washington Post and New York Times, a group of Democratic technology experts used false social media accounts to spread misinformation against the Republican candidate in the Alabama Senate race, working to depress turnout and sow discord among Republican voters. What's more, one of the project's participants is the CEO of the company that produced one of the Senate's two reports this week. As Russian-style misinformation campaigns move more forcefully into American politics, complete with fake accounts spreading "fake news," what does this tell us about the future of democracy, and how will Facebook, Twitter and other platforms have to adapt to an environment in which such tactics are normalized? More to the point, as the individuals behind those campaigns claim they acted merely "for research," and as Facebook presses forward with an academic research initiative that has explicitly not ruled out supporting such deception campaigns in live elections, what will the future hold for democracy?

The Times’ coverage offers a detailed look at how the deception campaign unfolded, complete with fake Facebook pages, armies of Twitter troll bots and fake news: a Russian troll factory in miniature.

Interestingly, the Times reports that the project had a budget of $100,000, roughly the same amount Russia reportedly spent on Facebook ads during the 2016 election.

When asked whether Facebook would consider the activities described in the Times’ reporting a violation of its terms of use, and whether it planned to investigate the project, a spokesperson forwarded the query to one of the company’s Washington-based public policy spokespeople, but Facebook had not commented as of publication time.

The Republican candidate’s campaign reportedly flagged the suspected interference, and the existence of a coordinated misinformation initiative against it, to Facebook. Given the company’s extensive touting of its election monitoring activities, it is extremely noteworthy that Facebook not only failed to flag this activity on its own, but failed to find it and take action even after being formally notified by a US political campaign of suspected interference in an election.

That Facebook failed to find the misinformation initiative on its own suggests the company is looking only for the largest state-sponsored, national-scale operations, while missing the myriad local efforts that are a key focus of professionalized misinformation campaigns. This has profound implications for its claims to have successfully defended against most misinformation efforts in recent elections.

More importantly, that Facebook would fail to locate and take action against a coordinated misinformation initiative after being formally and explicitly notified of its existence by a campaign suggests the company either isn’t taking the issue seriously or lacks the expertise and tools to uncover and act upon influence initiatives.

Yet, even if Facebook had identified the effort at the time or if it goes back and conducts a postmortem in the aftermath of the Times’ reporting, what are the consequences for running a deception campaign designed to directly influence a democratic election in the United States? Would Facebook ban the users involved for life? Would they receive just a brief suspension or perhaps merely a stern email politely asking them not to do it again?

What happens when the source of misinformation campaigns is not an army of de facto government contractors in some faraway country working in concert with a foreign intelligence or military service, but rather private American citizens operating on their own on US soil?

If Facebook takes no action beyond deleting the fake accounts, it has effectively normalized misinformation campaigns as an acceptable and legitimate campaign practice. If those behind the campaign do not face even a temporary suspension, what disincentive is there to deter others from following the same blueprint in the future?

Again, the company had not responded to a request for comment by publication time.

Of course, even if Facebook bans the people behind the effort for life, there is nothing stopping them from simply registering new accounts, but a lifetime ban would at least send the message that Facebook does not view such efforts favorably.

Yet what takes the story far beyond a classic “political dirty tricks” effort is the statement by one of the participants in reaction to the Times story: “My involvement … was as a cyber-security researcher and expert with the intention to better understand and report on the tactics and effects of social media disinformation.”

He expanded on this explanation in comments to the Post: “This was like an ‘Is it possible’ small-scale almost like a thought experiment. Is it as easy as it might seem?”

In short, he argues the effort was not that of a political campaign, but rather an academic research project designed to learn more about how misinformation spreads during a live election by running a deception campaign of its own.

Why is this important?

It matters because Facebook’s much-touted academic research initiative, Social Science One, has explicitly refused to rule out precisely this kind of misinformation interference in live elections by academics all over the world.

When asked to state for the record that it would prohibit any form of active intervention in a live election, including misinformation and deception campaigns designed to test whether they could alter the outcome of a democratic election, the Social Science Research Council (SSRC), which leads the initiative, declined to do so. That it would not commit to banning active intervention as a matter of principle is extraordinary and demonstrates that nothing is firmly off the table for the initiative.

The SSRC also did not rule out approving research projects from academic groups that receive the majority of their non-Social Science One funding, and take their research agendas, from US or foreign government agencies with a long and active interest in interfering in foreign elections.

Moreover, the top academic journals that are the lifeblood of hiring, tenure and promotion in the academic world all noted that they would happily publish research from Social Science One. Not one of them committed to banning publication of research involving active intervention in a democratic election.

Putting this all together: campaigns are beginning to copy the 2016 Russian efforts and deploy them in live elections here in the United States while justifying them as “research,” and Facebook’s new academic research initiative refuses to rule out officially supporting active manipulation efforts by academics all over the world, some of whom are primarily funded by hostile foreign governments, in the name of “research.” It raises the question of just what the future holds for democracy.