
March 17, 2017

Google Turns to Human Contractors to Teach Search Algorithms to Spot ‘Offensive’ Content

Google is cracking down on offensive content with the help of human contractors.

The search giant is asking the thousands of contractors tasked with evaluating search results to help teach its algorithms to identify upsetting content.

Google is asking the contractors, known as raters, to make use of its new ‘upsetting-offensive’ flag.

According to a list posted on Search Engine Land, Google asks raters to use the flag if:

  • Content promotes hate or violence against a group of people based on criteria including (but not limited to) race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, or veteran status.
  • Content with racial slurs or extremely offensive terminology.
  • Graphic violence, including animal cruelty or child abuse.
  • Explicit how-to information about harmful activities (e.g., how-tos on human trafficking or violent assault).
  • Other types of content which users in your locale would find extremely upsetting or offensive.

As an example, while content from The History Channel about the Holocaust might be upsetting given the atrocities that occurred, it would not be deemed offensive because it is a factual account of actual events. A post from a white supremacist site claiming the Holocaust never happened, however, would be considered both upsetting and offensive.


Being flagged as offensive does not mean content will be immediately banned or demoted, however. Instead, the results flagged by the contractors are used as training data both for Google’s employees who write search algorithms and for its machine learning systems.

As Search Engine Land explains it: “Being flagged as ‘Upsetting-Offensive’ by a quality rater does not actually mean that a page or site will be identified this way in Google’s actual search engine. Instead, it’s data that Google uses so that its search algorithms can automatically spot pages generally that should be flagged. If the algorithms themselves actually flag content, then that content is less likely to appear for searches where the intent is deemed to be about general learning. For example, someone searching for Holocaust information is less likely to run into Holocaust denial sites, if things go as Google intends.”

Content that is ultimately determined to be offensive by the algorithms may still be available to those specifically seeking it. For instance, someone who searches for a white supremacist site by name will still be able to find it, but the group’s posts will not pop up in general searches.
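To make the workflow concrete: Google has not published any details of its pipeline, but the idea of turning rater flags into training data for an automatic classifier can be sketched in miniature. The snippet below is purely illustrative; the example pages, labels, and the tiny naive-Bayes-style word model are all assumptions invented for this sketch, not anything Google has described.

```python
# Illustrative sketch only -- Google's actual systems are not public.
# Rater judgments (page text + "upsetting-offensive" or "ok" flag)
# serve as labeled training data for a simple text classifier.
from collections import Counter
import math

# Hypothetical rater judgments, standing in for real flagged results.
rater_data = [
    ("holocaust denial propaganda hate", "upsetting-offensive"),
    ("hate group recruitment violence", "upsetting-offensive"),
    ("holocaust history museum documentary", "ok"),
    ("history channel factual account", "ok"),
]

LABELS = ["upsetting-offensive", "ok"]

def train(examples):
    """Count word frequencies per label (a tiny naive-Bayes-style model)."""
    counts = {label: Counter() for label in LABELS}
    totals = Counter()
    for text, label in examples:
        words = text.split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def score(model, text, label):
    """Log-likelihood of the text under one label, with add-one smoothing."""
    counts, totals = model
    vocab = {w for c in counts.values() for w in c}
    total = 0.0
    for w in text.split():
        total += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
    return total

def classify(model, text):
    """Pick the label whose word statistics best explain the text."""
    return max(LABELS, key=lambda label: score(model, text, label))

model = train(rater_data)
print(classify(model, "holocaust denial site"))         # upsetting-offensive
print(classify(model, "holocaust history documentary")) # ok
```

In this toy version, a page mentioning "denial" scores closer to the rater-flagged examples, while a page about a history documentary matches the "ok" examples, which mirrors the article's point: the classifier generalizes from rater labels rather than each page being flagged by hand.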

Google Search senior executive Paul Haahr told Search Engine Land the effort is a learning process.

“We will see how some of this works out. I’ll be honest. We’re learning as we go,” Haahr said. “We’ve been very pleased with what raters give us in general. We’ve only been able to improve ranking as much as we have over the years because we have this really strong rater program that gives us real feedback on what we’re doing.”

To learn more about Google’s efforts, check out its updated search quality rater guidelines.


Jennifer Cowan is the Managing Editor for SiteProNews.