


Facebook Rolls Out New Tools To Stop 'Non-Malicious' Child Exploitation


Facebook says it's adding new tools to help prevent child exploitation on its site, following reports that it hosted more child sexual abuse material than any other tech company in 2019.

It's introducing a pop-up shown to people who search for terms associated with child exploitation, offering ways to get help from offender diversion organizations and sharing information about the legal consequences of viewing illegal content.

Also new is a safety alert that informs people who have shared viral, meme-based child exploitative content about the harm it can cause, and again warns them of possible legal consequences.

Last year, the National Center for Missing & Exploited Children (NCMEC) reported that Facebook was responsible for 94 per cent of the 69 million child sex abuse images reported by US technology companies.

However, according to Facebook, this figure can be misleading. In October and November last year, it says, copies of just six videos were responsible for more than half the exploitative content reported.

"While this data indicates that the number of pieces of content does not equal the number of victims, and that the same content, potentially slightly altered, is being shared repeatedly, one victim of this horrible crime is one too many," says Antigone Davis, Facebook's global head of safety, in a blog post.

"The fact that only a few pieces of content were responsible for many reports suggests that a greater understanding of intent could help us prevent this revictimization."

The company is at pains to suggest that much of the offending material is shared without ill intent, after working with NCMEC and other experts to categorize a person’s apparent intentions in sharing this content.

"Based on this taxonomy, we evaluated 150 accounts that we reported to NCMEC for uploading child exploitative content in July and August of 2020 and January 2021, and we estimate that more than 75 per cent of these people did not exhibit malicious intent (i.e. did not intend to harm a child)," says Davis.

"Instead, they appeared to share for other reasons, such as outrage or in poor humor (i.e. a child’s genitals being bitten by an animal)."

Alongside the new warnings, Facebook has updated its child safety policies to make clear that even non-explicit content can be banned: for example, otherwise innocent images of children accompanied by captions, hashtags or comments containing inappropriate signs of affection or commentary.

"While the images alone may not break our rules, the accompanying text can help us better determine whether the content is sexualizing children and if the associated profile, Page, group or account should be removed," says Davis.

And, she says, the company has made it easier to report this sort of material, by adding the option to choose 'involves a child' under the Nudity & Sexual Activity category of reporting in more places on Facebook and Instagram.

Of course, the elephant in the room is end-to-end encryption, which Facebook has said it plans to introduce. Law enforcement agencies have said that removing their ability to read Facebook and Instagram messages could hamper their ability to catch offenders.
