As social media has grown, so too has its capacity to promote extremism and connect users with like-minded people who support damaging, anti-social ideals.
Unity has always been the key promise of social media, facilitating a more connected society. While, in most cases, that's a positive, it's equally possible for disillusioned and misguided people to link into hate groups, and even become radicalized through their online activity.
That case was made recently in a New York Times article, which provided an overview of how YouTube recommendations essentially radicalized a young man from West Virginia. It can start with a simple search query, then snowball into full-blown hatred, fueling ongoing division and anger within modern society.
This is a key area of concern for all platforms, but particularly for Facebook, with its 2.7 billion users across its family of apps. This week, The Social Network provided an update on its ongoing efforts to detect and remove hate speech from its platforms, and in particular, on how its focus has expanded beyond known foreign terrorist groups to incorporate more local threats.
First off, Facebook says that it has expanded its use of automated content detection to cover local terror threats and hate groups, in addition to larger, global organizations:
"We’ve banned more than 200 white supremacist organizations from our platform, based on our definitions of terrorist organizations and hate organizations, and we use a combination of AI and human expertise to remove content praising or supporting these organizations. The process to expand the use of these techniques started in mid-2018 and we’ll continue to improve the technology and processes over time."
Importantly, Facebook notes that its definition for such content is "based on their behavior, not their ideologies", which has enabled it to take a broader approach to what can be classified as terror-related material.
Facebook has also developed its own definition of terrorist organizations to clarify what types of discussion and posts fall into this category.
"We are always looking to see where we can improve and refine our approach and we recently updated how we define terrorist organizations in consultation with counterterrorism, international humanitarian law, freedom of speech, human rights and law enforcement experts. The updated definition still focuses on the behavior, not ideology, of groups. But while our previous definition focused on acts of violence intended to achieve a political or ideological aim, our new definition more clearly delineates that attempts at violence, particularly when directed toward civilians with the intent to coerce and intimidate, also qualify."
Facebook is also looking to help people leave hate groups by providing links to specialized assistance, while it has a dedicated team of more than 350 people, "with expertise ranging from law enforcement and national security, to counterterrorism intelligence and academic studies in radicalization", working to develop its policies and enforcement actions.
As noted, this is a key area for Facebook, and for all social networks, moving forward. A major step, really, was acknowledgement - accepting that social networks can, in fact, facilitate, and even exacerbate, anti-social behavior in this way. Now that we've seen the full impact of this, and the direct links between online activity and real-life acts, it's important that the platforms take action, and address such behavior wherever they can.
Hopefully, Facebook's efforts lead to a reduction in violent extremism in all its forms online.