Americans Don’t Trust Tech Platforms To Prevent Misuse In The 2020 Elections

It’s not quite a Catch-22, but it is something of a paradox. Nearly three-quarters of Americans have little to no confidence in technology companies like Facebook, Twitter and Google to prevent the misuse of their platforms to influence the 2020 presidential election, yet 78% still think that it’s the platforms’ job to prevent such misuse, according to a recent Pew Research Center survey.

Even Republicans, who ostensibly benefited from such misuse in 2016, match the Democratic sentiment that misuse is going to be a problem come November, with 76% of Republicans saying that the companies have a responsibility to prevent misuse, compared to 81% of Democrats. The parties express slightly different concerns about information: Democrats are worried about the spread of false information, whereas Republicans are more likely to fret over social media platforms favoring liberal views. For example, last year, former Google engineer Kevin Cernekee publicized his belief that the company itself would try to stop President Donald Trump from winning reelection.

True to their stereotype of not trusting what they don’t understand, older Americans (aged 65 and up) are less confident in technology companies than younger ones (aged 18 to 29), but not by much. The general consensus seems to be an impotent fear that the platforms we all know and rely upon (no one would go so far as to say “love”) will not be up to the task of preventing confusion and the influence of bad actors ahead of the 2020 election.

On Monday, Senator Michael Bennet (D-CO) sent a letter to Facebook CEO Mark Zuckerberg, putting Facebook on blast for its “inadequate” efforts thus far to halt manipulated media campaigns, per an article on Recode.

Bennet wants specifics about how Facebook intends to stop the reach of disinformation and hate speech. What data does the company collect on the average amount of time harmful information remains on the platform before removal? How many content reviewers has Facebook hired for different languages? He’s given Facebook until April 1 to respond.

The United States is still reeling from the shock of finding out that Russian trolls spread disinformation on Facebook to fuel the fire of political divides, and still recovering from the trauma of the Cambridge Analytica scandal, uncovered in 2018, which revealed that the Trump campaign hired the outside voter-profiling firm to exploit the private Facebook activity of a large swath of the American electorate in order to influence their vote.

But are Americans’ fears still founded, and what are social media platforms doing to address them?

“Since 2016, we’ve made large investments in teams and technologies to better secure our elections and are deploying them where they will have the greatest impact,” said a Facebook spokesperson in an email. “We’ve tripled the size of our teams working on safety and security issues to include more than 35,000 people and we’ve created rapid response centers, which will operate for all of the caucuses and primaries. Their job is to monitor for suspicious activity, quickly identify behavior that violates our policies, remove it, and prevent it from being used again.”

Facebook is now using third-party fact-checkers to review viral political posts, and it labels pages and ads from media outlets that it considers to be state-controlled. In an op-ed last month, Nathaniel Gleicher, Facebook’s head of security policy, highlighted Facebook’s work with the FBI and the Department of Homeland Security to investigate matters more swiftly and thoroughly. For example, in the lead-up to the 2018 election, Facebook dismantled more than 100 accounts on Facebook and Instagram likely linked to the Russian-backed Internet Research Agency. Additionally, those who want to run political ads have to prove they’re actually located within the country where they plan to run them.

“We have more transparency into political and issue ads, including who paid for them, where they ran, information on who the ads reach, and more information about the people who are running Facebook Pages,” said the spokesperson.

But one could argue that the great divider of the last election wasn’t just Russia’s meddling; it was the way targeted ads polarize entire countries, removing the need for politicians to promote more unifying messages. Last autumn, Facebook came under fire from its own employees, who signed an internal letter urging the company to bar political ads containing lies from running on the platform. In January, Facebook unveiled refinements to its policy, but targeting restrictions were notably absent. The main change was to allow users to see “fewer” such ads.

“The move is rooted in ideas of personal responsibility — if you want to see fewer political ads and remove yourself from campaigns, that’s on you,” wrote The Verge. “In practice, though, it seems unlikely that many Facebook users would take advantage of the semi-opt-out, which is due to be released sometime before April. When’s the last time you visited your ad preferences dashboard?”

On the Twitter end of the spectrum, the platform has recently unveiled a tool that enables people to report misleading information about how to participate in an election or other civic event, in an effort to identify and remove misinformation that could suppress voter turnout.

The company is also considering plans to label false tweets by politicians and public figures with bright red and orange labels — not quite censoring speech, but certainly an attempt to categorize it into truth and lies.

“We're exploring a number of ways to address misinformation and provide more context for Tweets on Twitter,” according to a Twitter spokesperson. “This is a design mockup for one option that would involve community feedback. Misinformation is a critical issue and we will be testing many different ways to address it.” 

This project is in the early stages, though, and is not yet staffed.

“As caucuses and primaries for the 2020 presidential election get underway, we’ll build on our efforts to protect the public conversation and enforce our policies against platform manipulation,” according to Twitter’s director of public policy and philanthropy, Carlos Monje Jr. “It’s always an election year on Twitter. We take the learnings from every recent election around the world and use them to improve our election integrity work. This includes partnering with the government, civil society and our peer companies to better identify, understand and mitigate threats to the public conversation.”

Additionally, Twitter is working to protect the integrity of the election in a number of ways, namely by labeling or removing synthetic or manipulated media that is shared in a deceptive manner or likely to impact public safety.

Google has been quieter than the other tech giants about fighting misuse around the 2020 election. In February 2019, the company published a white paper outlining its approach to tackling disinformation in its products and services: “make quality count in our ranking systems, counteract malicious actors, and give users more context.”

Google did not respond to a request for comment.

Federal and state government intervention in how platforms stop the spread of misinformation or handle misuse is negligible. At the end of the day, it falls mainly on the platforms themselves to restrict false information online, something Democrats are more likely to call for, even if it means limiting freedom of information.

It’s like a relationship with a cheating spouse on whom you’re dependent: you sincerely want them to live up to their vows and be faithful, but you’ve been manipulated so much that the trust is gone. I guess all we can do, if we don’t want to be played, is sign off and suffer the consequences of disconnectedness.
