While Solving Its Pedophile Problem, Did YouTube Forget To Contact The Parents?

Since the news broke 10 days ago that YouTube’s related-video algorithms were useful to pedophiles -- both for collecting content and for networking -- YouTube has taken aggressive action. That action includes banning the accounts of sexual predators, demonetizing videos of children and disabling comment sections. As sweeping as YouTube’s reaction has been, it is not enough, perhaps because it lacks a human element.

For starters, it is unclear whether the parents of the children being eroticized on the site have been notified. All evidence suggests the answer is no. A YouTube spokesperson did not respond to questions about whether YouTube notifies the parents of at-risk children, or how. A statement shared by a YouTube spokesperson reads: “Any content - including comments - that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube. We enforce these policies aggressively, reporting it to the relevant authorities, removing it from our platform and terminating accounts. We continue to invest heavily in technology, teams and partnerships with charities to tackle this issue.”

However, questionable videos created by children remain up, despite being demonetized and having their comment sections disabled (in other words, YouTube’s AI determined the child was at risk). Some of these videos have been up since 2013 and have been the subject of media reports. Five examples of videos that are still up but really shouldn’t be have been sent to the tech editor here at Forbes; they will not be shared for obvious reasons. Two are of girls who were contacted by a convicted pedophile in 2012 and who both vlogged about it; one was 12 at the time, the other 9.

Imagine if your child uploaded a video to the Internet that attracted hundreds of thousands -- maybe even millions -- of views because it was inadvertent softcore child porn. Your child’s crotch is shown for a minute, or they’re wearing a bathing suit, or an item of clothing slips for a second, and that is why the view count is so high: pedophiles have found your child’s video and are interacting with it and with each other. Maybe a YouTube robot notices the uptick in views and takes a closer look. The pattern of user behavior in the comment section is identified as illegal and flagged as exploiting a child. In response, the robot disables the entire comment section and/or demonetizes your child’s video. The gross users sexualizing your child in the comments are banned.

You would want YouTube to let you know about all this, right?

At the very least, have the YouTube robot automatically send you an email notifying you that your child was at risk, or that its AI had detected illegal activity that could potentially harm your child. The email could even be non-judgmental, something along the lines of “maybe it is nothing and this alert is an overreaction, but in these situations it is better to be overly cautious, so you should probably check out the video of your child yourself in case this alert was triggered in error.” A good closer could include “here are some resources should further action be required,” and maybe it could be done in conjunction with NCMEC and local law enforcement.

Besides the notification, you as a parent would probably want that video of your child taken down too, right? Not just the comment section and the ads disabled, because those don’t prevent your child from being unwittingly eroticized. And even in the rare scenario where you don’t want the video of your child removed (say, an innocent award-winning gymnastics routine), you should at least be able to edit out or censor the parts attracting the pedophiles to your child’s video. At the very LEAST.

In an interview with Vox for an article titled “YouTube has a pedophilia problem, and its advertisers are jumping ship,” Josh Golin, the executive director of the advocacy group Campaign for a Commercial-Free Childhood, wondered why YouTube wasn’t doing more when, as a multi-billion-dollar company, it is clearly capable. “Why isn’t YouTube taking more serious steps, like… removing children’s content from YouTube, and using Google’s enormous reach to tell parents to keep children — and videos of children — off YouTube?” said Golin.

In a statement emailed to media outlets, Haley Halverson, the Vice President of Advocacy and Outreach at the National Center on Sexual Exploitation, also described YouTube’s actions thus far as inadequate. “Disabling comments is a significant improvement, but ... they need to make sure ... that their algorithm triggers a warning to the users’ account when their viewership spikes rapidly to alert them to the dangers of online predators,” said Halverson.

NCOSE is a conservative nonprofit that includes the American Library Association and Amazon.com on its Dirty Dozen list of leading porn facilitators, and it is not a source I would normally quote, being an atheist, porn-positive feminist myself. I include them here only to point out that protecting children is a bipartisan issue and that common-sense solutions are possible.

When Facebook’s data was found to have been used inappropriately by Cambridge Analytica, the company let its users know. When Experian’s data was breached, it notified those affected. What’s stopping YouTube from doing the same?