Technology Companies Face Hurdles In Moderating Extremist Content Online: Here’s Why

Last week, I released a new report titled “Free To Be Extreme”, which explored the balance between freedom of expression and the increasing demands placed on technology companies to monitor harmful extremist content online. By examining 107 ‘extremism-related’ cases in the UK from 2015 to 2019, the report put forward a framework for monitoring extremist content and behavior that does not rely on banning organisations or individuals from platforms outright.

The research surfaced several challenges. The first was that social media companies and governments appeared to operate separately, employing different frameworks to assess the harms of extremism. For example, no offence of violent extremism exists in law. While the Commission for Countering Extremism put forward a new definition of hateful extremism last year, it is not operational in law. Violent extremism is a precursor behaviour that may or may not be linked to terrorism, and terrorism cases often refer to ‘extremism’ in case notes. In practice, extremism in the UK is prosecuted under either hate crime or terrorism legislation.

The second challenge was the usefulness of proscription lists in determining what to regulate online. In the UK, National Action was the first far-right organisation to be proscribed since the now-defunct British Union of Fascists in the 1940s. By contrast, it is illegal for a British citizen to be a member of at least eight Islamist organisations, including the Sunni militia group Ansar al-Sharia and the Islamic State. The report advocates a new, combined, and coordinated framework in which social media companies can reference legal cases to assist in understanding extremism, and efforts to monitor content and speakers are overseen by an independent regulator.

The third challenge was determining whether banning is effective in the long term. Facebook was one of the first companies to take a stance on banning white supremacist groups and actors from its platform. Research by RUSI found that doing so led those actors to migrate to smaller ‘alt-tech’ platforms such as Gab, taking their core membership with them. Banning may therefore reduce audience amplification and strip away benefits such as blue ticks (which verify speakers and spread their messages), paid advertisements, and the legitimacy that hosting content on open platforms affords users.

Companies should nevertheless be cautious about banning organisations and speakers too broadly. In the long term, users find new and innovative ways to circumvent bans or continue disseminating ideas, including, but not limited to, re-branding or splintering groups, using ‘dog whistles’, employing flipped imagery, or training other individuals to spread the same messages. A graduated set of tools based on a spectrum of extremist harm would allow companies to remove specific privileges and limit a speaker’s influence, while still grounding each action in a violation of their terms of service.

Better transparency around why certain organisations and individuals are deemed ‘extremist’ will be necessary, with technology companies sharing patterns of behavior as case studies in annual reports and at summits where multiple stakeholders meet to discuss the viability of regulating harmful material online. Unlike in the offline space, where banning or naming an organisation as extremist may come with a unique set of legal challenges, technology companies can limit a user’s ability to take advantage of their products when terms and conditions are violated. The core message of the new report is that this must be done consistently and transparently, with coordination across government bodies and with other technology companies.
