YouTube has another crisis on its hands. The past year has been a firestorm of content complaints: first, racist videos from top creators like PewDiePie forced video censorship for ad compliance. Not long after, brands clamored for more responsibility and transparency about where their ads appeared on the platform. Now, another serious blow strikes the video hosting giant right before the holidays.
Over the weekend YouTube was stormed by reports of inappropriate comments, which wouldn’t be out of the ordinary except for one thing: the comments were directed at children. There were an estimated 28 total offenses, with disturbing comments posted under videos geared toward children or uploaded by young content creators. BuzzFeed reported that YouTube’s search function returned predatory results for the phrase “how to have.” Once alerted to the issue, YouTube immediately purged the comments and announced updated regulations. But the damage was already done, as users flooded the comments section with outrage.
YouTube says it’s noticed “a growing trend around content…that attempts to pass as family-friendly, but is clearly not” and will be taking steps to correct the issue. Namely, it aims to de-monetize the offending videos and block “inappropriate comments on videos featuring minors” using a “combination of automated systems and human flagging and review to remove inappropriate sexual or predatory comments on videos featuring minors.” While YouTube’s action plan is solid, it raises questions about the state of content hosting on a global scale.
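To make the “automated systems plus human flagging and review” combination concrete, here is a minimal sketch of how such a triage pipeline could work. Everything here is an assumption for illustration: the scoring heuristic, the thresholds, and the function names are invented, and the real system would use a trained classifier rather than keywords.

```python
# Hypothetical sketch of a hybrid moderation pipeline: an automated
# scorer removes clearly bad comments on its own, routes uncertain
# ones to a human review queue, and publishes the rest.
# Thresholds and the scoring heuristic are invented for illustration.

REMOVE_THRESHOLD = 0.9   # assumed: auto-remove at or above this score
REVIEW_THRESHOLD = 0.5   # assumed: queue for human review at or above this

def predatory_score(comment: str) -> float:
    """Stand-in for a real classifier; here, a toy keyword heuristic."""
    flagged_terms = {"address", "meet up", "how old"}
    hits = sum(term in comment.lower() for term in flagged_terms)
    return min(1.0, hits / len(flagged_terms) + 0.4 * hits)

def triage(comment: str) -> str:
    """Decide a comment's fate: removed, human_review, or published."""
    score = predatory_score(comment)
    if score >= REMOVE_THRESHOLD:
        return "removed"          # automated system acts alone
    if score >= REVIEW_THRESHOLD:
        return "human_review"     # flagged for a moderator
    return "published"
```

The design point is the middle band: automation only acts unilaterally on high-confidence cases, while ambiguous comments, which is where moderation actually gets hard, still reach a human.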
The Grey Area
Moderating is difficult, especially in the hazy realm of “family friendly” content. While there are channels specifically aimed at minors—toy unboxings, animated shorts from major networks, nursery rhymes—a lot of offerings aren’t intended for them. For instance, many of the top content creators in the gaming space create fun, upbeat playthroughs of popular games. Their content isn’t inherently raunchy, though many swear or make the occasional adult joke in their videos.
These creators—whether in beauty, lifestyle or gaming—are usually adults from ages 18-35. However, their audiences vary in age, location, and personality. This disparity usually shows up in the comments section as these segmented audiences mingle to post their opinions on the creator’s content. For the most part, these exchanges are pretty benign. However, parents are often concerned about their children viewing these types of videos, wary of the adult influences they’re exposed to.
YouTube responded to parent (and advertiser) concerns about questionable content by strictly enforcing its creator guidelines. This meant less creator swearing and reduced coverage of “controversial” subjects, which prompted the #YouTubeIsOverParty tag on Twitter. Creators—many of whom make their living off of YouTube ad revenue—had videos de-monetized without their knowledge, which hurt their bottom line significantly. And while YouTube has since relaxed its stance for big channels, there’s still a lot of work to be done.
Finding a Middle Ground
Moderating uploaded content is tricky business. While algorithms seem like an easy solution for banning bad videos, YouTube’s screening process leaves much to be desired. To correct it, YouTube has implemented human moderation teams designed to help bridge the gap between formula and fan. But these content moderators are subjected to some of the most brutal content on the internet, tasked with catching all the unsavory things mainstream YouTube never broadcasts. Among these are videos of animal abuse, political radicalism, and deviant sexual content. In interviews, former YouTube moderators described hitting a wall “3-5 months into the job,” unable to keep up with the volume of horrible videos they were forced to watch.
So, where does the middle ground lie in the struggle for family-friendly content? Instead of solely tasking traumatized content moderators with catching each and every horrible video, the burden should be shared between the content moderator, the creator, and the audience.
While moderators would be responsible for clearing the most offensive, TOS-violating videos, creators should be able to assign their own ratings and segmentation instructions for broadcast. For instance, if a Let’s Play video features profanity and adult themes, creators should have an “age target” option that screens out children under age 13. To promote balance, audiences would also be able to provide feedback on how well the creator’s “content rating” matched the video itself.
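The proposal above can be sketched as a small data model. This is purely illustrative: the field names (`age_target`, the `"too_low"` feedback vote) and the dispute rule are invented here and reflect no real YouTube feature or API.

```python
# Hypothetical sketch of creator-assigned ratings with audience feedback.
# All names and thresholds are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Video:
    title: str
    age_target: int  # creator-declared minimum viewer age
    feedback: list = field(default_factory=list)  # audience votes on the rating

def can_view(video: Video, viewer_age: int) -> bool:
    """Screen out viewers below the creator's declared age target."""
    return viewer_age >= video.age_target

def rating_disputed(video: Video, threshold: float = 0.5) -> bool:
    """Flag the rating for review if most feedback says it is too low."""
    if not video.feedback:
        return False
    too_low = sum(1 for vote in video.feedback if vote == "too_low")
    return too_low / len(video.feedback) > threshold

# A Let's Play with profanity, rated 13+ by its creator.
lets_play = Video("Horror Game Playthrough", age_target=13)
```

The audience-feedback loop is what keeps creators honest: if enough viewers vote that the self-assigned rating is too low, the video surfaces for moderator review instead of relying on moderators to find it themselves.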
Sure, it’d be a lot of work to implement, and we’re sure the YouTube team is busy. But for a site that dominates the video streaming and sharing spaces, constant innovation comes with the territory.