Google to hire thousands of YouTube moderators as criticism over content mounts

Google is hiring thousands of new moderators after facing widespread backlash for allowing child abuse videos and other violent and offensive content to flourish on YouTube.

Amid what amounts to the second wave of the YouTube 'Adpocalypse', in which inappropriate content targeting young children has been discovered on the platform, CEO Susan Wojcicki is personally addressing the video giant's expanded efforts to wipe out policy-violating videos. The company's newest measures aim to take things one step further.

YouTube plans to hire more people to scour its website for violent videos after reports of disturbing footage on its app for children surfaced last month, prompting a slew of advertisers to abandon the site. That means 10,000 people watching terrible YouTube content day in and day out, much like the millions of children around the world from whom their paymaster profits.

Specifically, Wojcicki says that YouTube will have 10,000 people working to address questionable content in 2018, noting that human reviewers are essential to training the company's machine learning systems. She added that a larger review team would help YouTube do the job better and avoid inaccurate demonetizations, giving creators more stability on the revenue front.

"We believe this requires a new approach to advertising on YouTube, carefully considering which channels and videos are eligible for advertising".

YouTube uses both human reviewers and "machine learning technology" to review video content posted on its platform.

On December 4, Alphabet-owned YouTube announced plans to expand its team of reviewers handling extremist and violent content in 2018. It also plans to apply machine learning software to identify content that is inappropriate for children or could provoke hatred.

YouTube said machine learning was helping its human moderators remove nearly five times as many videos as they had previously, and that 98% of videos removed for violent extremism are now flagged by algorithms.

"Our advances in machine learning let us now take down almost 70 percent of violent extremist content within eight hours of upload and almost half of it in two hours and we continue to accelerate that speed", Wojcicki noted.
