YouTube has released an update on its efforts to remove offensive content and better protect its users. Utilizing a combination of human reviewers and machine learning technology, YouTube says that it's now able to "consistently enforce our policies with increasing speed".
And that speed is definitely impressive - YouTube says that between July and September 2018, its systems removed 7.8 million offending videos, 81% of which were first detected by machines. Of those machine-detected videos, 74.5% had never received a single view.
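To put those percentages in absolute terms, here's a quick back-of-the-envelope calculation - a minimal Python sketch using only the figures quoted above (the variable names are ours, purely for illustration):

```python
# Back-of-the-envelope math on YouTube's reported takedown figures
# for July-September 2018. All inputs come straight from the report.

total_removed = 7_800_000    # videos removed in the quarter
machine_share = 0.81         # share first detected by automated systems
zero_view_share = 0.745      # share of machine-detected videos never viewed

machine_detected = total_removed * machine_share
removed_before_any_view = machine_detected * zero_view_share

print(f"Machine-detected removals: ~{machine_detected / 1e6:.1f}M")
print(f"Removed with zero views:   ~{removed_before_any_view / 1e6:.1f}M")
```

Run the numbers and roughly 6.3 million of those removals were machine-detected, with around 4.7 million videos taken down before a single person had watched them.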
That's a very strong result - YouTube's evolving AI tools are getting much better at detecting and removing offending content before anyone even sees it.
This extends to almost every area of YouTube's content detection efforts:
"Looking specifically at the most egregious, but low-volume areas, like violent extremism and child safety, our significant investment in fighting this type of content is having an impact: Well over 90% of the videos uploaded in September 2018 and removed for Violent Extremism or Child Safety had fewer than 10 views."
And when YouTube does identify such violations, it's taking stronger action against the publishers responsible.
"When we detect a video that violates our Guidelines, we remove the video and apply a strike to the channel. We terminate entire channels if they are dedicated to posting content prohibited by our Community Guidelines or contain a single egregious violation, like child sexual exploitation. The vast majority of attempted abuse comes from bad actors trying to upload spam or adult content: over 90% of the channels and over 80% of the videos that we removed in September 2018 were removed for violating our policies on spam or adult content."
YouTube has also made comment violations a priority - in the same period (July to September 2018), YouTube removed over 224 million comments for violating its Community Guidelines.
And most interesting of all:
"As we've removed more comments, we’ve seen our comment ecosystem actually grow, not shrink. Daily users are 11% more likely to be commenters than they were last year."
That makes sense - less spam, junk and abuse likely makes users more comfortable adding their own thoughts, though such trends don't always play out that way in practice. It's good to see that YouTube's efforts are not only helping to protect users, but are also encouraging more of them to take part - and an 11% increase is a strong endorsement of the approach.
Misuse of digital platforms, particularly with regard to abuse and harassment, is a key concern - various reports have shown that such behavior on social media can have significant negative mental health impacts, with cyber-bullying high on that list. The idea of fully open platforms, where everyone can communicate freely, is idealistic, but it's a core tenet that social platforms were founded upon - and as social media usage has grown, they've largely struggled to balance that openness with protecting users.
That's understandable - with billions of users interacting every day, it becomes impossible to monitor each and every exchange. And while human moderation at such scale is simply not feasible, advanced machine learning can help - and clearly is helping, as YouTube's stats show.
Machines will never be able to understand every nuance, and errors will be made, but the data here is very encouraging. Protecting the most vulnerable users is an essential responsibility for everyone, from the platforms themselves to the users who notice that something might not be right.
Any efforts that advance this area are worthy of praise and encouragement.