Google has announced that it is “making significant changes with the goals of keeping offensive content off YouTube” in response to the brand safety issue that has dogged it throughout 2017.
The announcement, made in a blog post, focuses on several areas:
A new approach to monetization on YouTube: Google has overhauled its rules on which channels can run ads, aiming to remove ads from millions of videos (among which a large proportion of inappropriate content runs) while maintaining over 95% of reach for advertisers. GroupM and its clients have been asking Google to revise its YouTube monetization policy for some time. The previous monetization threshold was 10,000 views measured at the channel level; we want this measured at the video level. Google has announced that it will monitor user engagement as well as abuse indicators (defined as community strikes [three and out] for incidents of misleading content, spam or inappropriate content). Changes to the YouTube Partner Program (YPP) now mean a channel must have at least 1,000 subscribers and 4,000 hours of watch time within the last 12 months to be eligible for ads.
Manually curating Google Preferred: Google Preferred lineup ads will only run on videos that have been human-verified for compliance with advertiser-friendly guidelines. These reviews will be completed in the US by mid-February and in other Google Preferred markets by the end of Q1. Google Preferred comprises roughly the top 5% of video content that YouTube is authorized to sell, and it is sold at a premium; that preferred status has not always prevented inappropriate content from creeping in. While we understand that pre-screening any category on YouTube means wrestling with enormous scale, we would like Google to go a step further and pre-screen sensitive categories such as children's content, or at the very least the comments on children's content.
Greater tools, control and partnership: Google is introducing a new three-tiered suitability scale that will help brands more accurately select the right level of safety or 'edginess'. We will dig into the exclusion settings as soon as the details are made available.
A more rigorous approach to managing controversial content: Google is investing heavily in building expertise across critical areas such as hate, harassment and child safety, while also pursuing more third-party partnerships to help develop and enforce better policies. In addition, Google will build an Intelligence Desk to help it identify, and ideally anticipate, potential threats on the platform (such as the recent child endangerment issue). By partnering with experts (Google aims to have more than a hundred such partnerships in 2018) and by ramping up internal resources to more than 10,000 people focused on monitoring offensive content, Google hopes not only to identify unsuitable content but to pre-empt it. The timeline for this increased human intervention is the end of 2018, although recruitment is happening as quickly as is feasible. We have heard separately that machine learning catches up to 98% of radical political content, with human monitoring required to identify the rest.
These appear to be substantive moves to take the issue of safe and appropriate content on YouTube seriously. At Mindshare, we are cautiously optimistic about Google's progress in this area, and we will continue to push Google to become more transparent and to allow independent third-party measurement and brand safety tools to work on YouTube in the same manner as on other channels.