
YouTube Stops Recommending Conspiracy Videos, Finally


Last Friday, YouTube tweaked one of its algorithms and made headlines... because we live in a time when any platform change issued by a tech monstrosity is cause for news coverage and applause. One possible reason we do this: these Silicon Valley giants have unintentionally flooded us with blue-tinted misinformation, so that now anything even remotely resembling a life raft is celebrated and taken as an acknowledgment of guilt for nearly drowning us in the wretched cyber sea in the first place. Our expectations can be higher than this.

In an official blog post published January 25 about changes to its recommendation algorithm, the Google-owned company announced it would be cracking down on conspiracy-laden videos and other “borderline content”… by ceasing to list them in the “Up Next” sidebar. The change follows last spring's announcement that facts from sources like Wikipedia would be added beneath conspiracy videos.

Examples of content that would no longer be recommended included 9/11 truther documentaries, flat earth conspiracy videos and content peddling fake miracle cures. YouTube stressed in its post that the change would apply to “less than one percent” of all content on YouTube and that it would be implemented by “a combination of machine learning and real people.” The post explained that the human evaluators would be trained using public guidelines and that the change would be “gradual” and “initially will only affect recommendations of a very small set of videos in the United States.”

The blog post makes absolutely no mention of the criticism academics, researchers and journalists have leveled against YouTube’s recommendation algorithms for more than a year. No mention of Alex Jones, either, or of how the Google-owned company tolerated his conspiracy theories about mass shooting victims for years.

In 2017, YouTube was criticized for promoting videos calling the Las Vegas shooting a hoax, and again in the spring of 2018 over viral videos claiming the Parkland shooting was faked. In February of 2018, The Guardian published a report on YouTube’s recommendation algorithm that determined the “Up Next” sidebar was driving the majority of traffic to blatantly false and defamatory content related to the 2016 US presidential election. Some of the videos the Guardian analyzed in its report were just outright ludicrous, like a 27-minute pro-Trump video of found footage of the real estate mogul montaged under light piano accompaniment that, when watched in slow motion, “contained weird flashes of Miley Cyrus licking a mirror.” That subliminal Trump-Miley Cyrus video was called “This Video Will Get Trump Elected” and was viewed more than 10 million times before YouTube removed it.

Moving beyond how YouTube helped Alex Jones grow, the proliferation of neo-Nazi content and the lunacy that is QAnon, even conspiracies about the deadly forest fires in California were rampant on the site last November. In December, the Washington Post highlighted how one conspiracy theory known as “Frazzledrip,” a holdover from the Pizzagate conspiracy theory made popular two years ago, was still up on YouTube. Just last week, BuzzFeed released a detailed report on issues it found with YouTube’s “Up Next” algorithm that -- surprise! -- mirrored previous news reports and criticisms, including the algorithm's penchant for far-right ideologues.

According to YouTube’s blog post, however, that is not why the "Up Next" algorithm is being changed. No, all the fuss was really about people not wanting to watch all those… cookie videos!? The crackdown on conspiracy videos is framed as happening because, to quote the corporate blog post:

You might remember that a few years ago, viewers were getting frustrated with clickbaity videos with misleading titles and descriptions (“You won’t believe what happens next!”) … More recently, people told us they were getting too many similar recommendations, like seeing endless cookie videos after watching just one recipe for snickerdoodles...

Are “cookies” and “snickerdoodles” stand-ins for popular conspiracy theorists and neo-Nazis? Not everyone views the blog post as cynically as yours truly; Public Knowledge, a nonprofit focused on shaping digital policy on behalf of the public, welcomed the algorithm change in a friendly press release posted on its blog.

“This is a great first step by Alphabet to address the problems in YouTube’s recommendation algorithm,” said Charlotte Slaiman, Competition Policy Counsel at Public Knowledge. The organization champions causes like rural broadband access, copyright, competition, and net neutrality, among others. “I am so glad to see that YouTube is taking responsibility by making this change,” continued Slaiman, adding, “I look forward to seeing the impact of this change and I hope YouTube will continue to work on this important issue.”

Having human moderators train the AI is by far the most welcome part of YouTube’s announcement; the proliferation of robots, as opposed to human employees, has long bothered the YouTube community-at-large. YouTube’s track record of deploying robots and big tech to deal with problems caused by big tech is admittedly... not great, and something the company has struggled with for years. That creators are still fighting false copyright strikes and demonetization dings dished out regularly by bots does not inspire confidence either. Human moderators, however, should be better able to determine these borderline cases.

One example of a recently recommended video that would fit into this borderline area claimed the mainstream media was back at it again bashing PewDiePie when no such headlines existed that week. The entire video was based on a single tweet sent by a man working at a nonprofit monitoring hate speech online. Instead of addressing the researcher's point, or addressing extreme right-wing violence, the video spent a good deal of time mocking the man. Would a human moderator consider the video cyberbullying? What about perpetuating a false narrative? Will the new criteria take into account the entire content library of the creator?

[Screengrab by author]

A YouTube spokesperson told WIRED that “YouTube's sister company, Google, uses similar processes to assess the relevance of search,” but that too should be a cause for concern if the latest search results are any indication. A search for “YouTube borderline content” conducted late Sunday night into Monday’s wee hours revealed a 9/11 truther as the third result.

The truther, whose most popular video is two hours of 9/11 conspiracy drivel viewed 1.9 million times, is mad about the recommendation algorithm changes and predictably calls them censorship, but so far he does not seem affected by them, as evidenced by his placement near the top of search results. The first result is a man in a Hawaiian shirt fretting over how the recommendation algorithm changes will destroy the “weird, edgy outcasts” of YouTube.

If only.