Google has released a brand new version of the Quality Rater Guidelines, the third one this year. The changes aren’t as extensive as those in the previous two releases this year, but updates like these can be an interesting signal as to where Google is planning to take the algo, and this one is no different.
The first change is the addition of a brand new introduction for quality raters, with much more detail about the job and how raters should be evaluating results on an overall level, in broader terms than the much more detailed sections later in the guidelines.
There are also some interesting changes aimed at making raters aware that they can be biased when it comes to how they rate their tasks for Google.
Lastly, there are some interesting mentions of political affiliations, particularly when it comes to reporting offensive content.
0.0 The Search Experience
First, here is the brand new section:
0.0 The Search Experience
The World Wide Web is a vast collection of online information and content. Internet search engines provide a powerful way to explore this online universe. There are many ways people search: people may type words into a search box in a browser, speak to a mobile phone or assistant device, use search engine autocomplete features, etc.
People search the Internet for a variety of purposes, ranging from accomplishing a quick task to researching a topic in depth. A search may be part of a long-term project, such as a home remodel or vacation planning. A search may be done when someone is bored and looking for entertainment, such as a search for [funny videos]. A search may be a single question asked during a critical moment of a person’s life, such as [what are the symptoms of a heart attack?].
Search engines exist to help people find what they are looking for. To do that, search engines must provide a diverse set of helpful, high quality search results, presented in the most helpful order.
Different types of searches need very different types of search results. Medical search results should be high quality, authoritative, and trustworthy. Search results for “cute baby animal pictures” should be adorable. Search results for a specific website or webpage should have that desired result at the top. Searches that have many possible meanings or involve many perspectives need a diverse set of results that reflect the natural diversity of meanings and points of view.
People all over the world use search engines; therefore, diversity in search results is essential to satisfy the diversity of people who use search. For example, searches about groups of people should return helpful results that represent a diversity of demographic backgrounds and cultures.
Finally, search results should help people. Search results should provide authoritative and trustworthy information, not lead people astray with misleading content. Search results should allow people to find what they’re looking for, not surprise people with unpleasant, upsetting, offensive, or disturbing content. Harmful, hateful, violent, or sexually explicit search results are only appropriate if the person phrased their search in a way that makes it clear that they are looking for this type of content, and there is no other reasonable interpretation of the words used in their search.
There are a lot of interesting things to unpack in this introduction for SEOs to consider.
“High Quality, Authoritative, and Trustworthy”
We know that Google puts a great deal of emphasis on E-A-T when it comes to YMYL results. And Google is adding here that it’s important that medical search results meet this “high quality, authoritative, and trustworthy” standard. The section also touches on the importance of critical medical searches, with the example [what are the symptoms of a heart attack?] included.
“Diverse set of results”
Google is stressing the importance of diversity in the search results. A page of highly similar search results, whether it is similar cookie cutter content slightly rewritten or re-syndicated content that Google somehow hasn’t picked up on as being duplicated, is not a good user experience. For raters who may not have considered the impact diversity, or the lack thereof, has on the search results, it is good to see Google raising its importance.
Sometimes queries themselves are biased, such as when a searcher includes “Yelp” or “Amazon” in the search query; but even in their examples, Google isn’t including brand names in generic searches.
“Search results should help”
Again, Google is reiterating that “search results should provide authoritative and trustworthy information, not lead people astray with misleading content,” showing that trust in search and the absence of misleading results are important to Google.
It also reminds raters of the importance of YMYL and E-A-T when it comes to serving helpful and trustworthy results.
“Harmful, hateful, violent, or sexually explicit search results”
Along with that, Google is stressing that harmful, hateful, violent, or sexually explicit search results should only ever be shown when the query makes it clear that the searcher is looking for this particular type of content. It is never a good search result when inappropriate content is served to searchers whose search clearly does not indicate that is what they wanted to find.
This is very clearly already dealt with in the guidelines in the offensive flag section. But it is interesting that Google is also highlighting this in the new intro.
0.2 Raters Must Represent People in their Rating Locale
The next change concerns how raters rate the results based on their location. Now, raters must evaluate the results from the perspective of their own locale. While this might not seem like much on the surface, it does show that Google is looking to make local results specific to the location of the user, and not more generic.
Google changed “0.2 Raters Must Represent the User” to “0.2 Raters Must Represent People in their Rating Locale”
While the first paragraph remains the same, they did add a new paragraph that is much more explicit about raters needing to keep their personal biases out of their ratings.
Here is the new paragraph:
Unless your rating task indicates otherwise, your ratings should be based on the instructions and examples given in these guidelines. Ratings should not be based on your personal opinions, preferences, religious beliefs, or political views. Always use your best judgment and represent the cultural standards and norms of your rating locale.
Why did they add this? Well, there is the possibility that they were seeing outside biases or influences affecting the ratings of certain types of search queries. They could also be trying to be more sensitive to cultural norms in specific locales, although that opens up a whole can of worms, because Google is asking raters to use their best judgment to represent the “cultural standards and norms of your rating locale.”
Is Google looking for diversity among its raters when it comes to “personal opinions, preferences, religious beliefs, or political views”? Or for raters to rate while attempting to remove their personal biases completely?
2.1 Important Definitions
Google has added two new definitions to the section, “search engine” and “user”.
The first definition is pretty straightforward, although it is an interesting choice of words that Google refers to a search engine as a “tool.”
Here is the search engine definition:
A search engine is a tool to help people find or interact with content available on the Internet.
They’ve also added a new definition for a user. We did see some changes around the term user with the last quality rater guidelines update, where Google replaced the term user with “a person’s” as it related to YMYL sites impacting a person’s future happiness, health, financial stability, or safety. Here is the new user definition:
In these guidelines, the word “user” refers to a person trying to find information or accomplish a task on the Internet. Keep in mind that users are people from many different backgrounds, whose experiences and needs may differ from your own: people of all ages, genders, races, religions, political affiliations, etc.
Google is stressing that a rater should not put their own personal bias on the ratings, because searchers come from all different types of backgrounds. And they specifically mention political affiliations, something that has become an issue throughout the world.
12.1 Important Rating Definitions and Ideas
Similar to the change made earlier in the guidelines, Google has also made a change to the user definition here to specifically mention possible bias in ratings by the raters. Raters are asked to keep in mind that searchers come from all over the world with a variety of backgrounds.
They also added wording to remind raters that searchers aren’t just trying to accomplish a task when doing a search. They’ve added that the user could be trying to find information, instead of just trying to accomplish something.
Why did this change? Well, it is in line with the earlier addition at the beginning of the Quality Rater Guidelines. But the addition of “trying to find information” does imply that the searcher may find exactly what they need within the search results themselves rather than accomplishing a task per se. Remember, raters are rating both the search results themselves as well as the landing pages of those results.
Here is the old section:
User: The user is the person trying to accomplish something by typing or speaking into a mobile phone with a small screen (i.e., size of a smartphone, not a tablet).
Here is the new version, with a change to the first sentence and a new second sentence:
User: The user is the person trying to find information or accomplish a task by typing or speaking into a mobile phone with a small screen (i.e., size of a smartphone, not a tablet). Keep in mind that users are people from all over the world: people of all ages, genders, races, religions, political affiliations, etc.
14.6 Upsetting-Offensive Flag
Google has made some changes to this section, and this is rather interesting because it highlights political affiliations. We saw a Google quality rater guidelines update a couple of years ago specifically targeting fake news and conspiracy theories surrounding political campaigns, so this addition of political affiliation is quite telling.
They’ve also changed the term users to people.
Here is the first paragraph from the old version of this section:
The Internet contains all sorts of content, including content that many users find offensive or upsetting. Additionally, users of all ages, genders, races, and religions use the Internet to understand the world and other people’s points of view. Users may issue queries on sensitive topics to understand why people believe, say, or do upsetting or offensive things. Search engines exist to allow users to find the information they are looking for.
And the updated version:
The Internet contains all sorts of content, including content that many users find offensive or upsetting. People of all ages, genders, races, religions, and political affiliations use the Internet to understand the world and other points of view. Users may issue queries on sensitive topics to understand why people believe, say, or do upsetting or offensive things. Search engines exist to allow users to find the information they are looking for.
This change of making raters aware that users can have different political affiliations in itself makes you wonder if Google is working on a search algo update to diversify the search results and the political bias they may or may not contain.
14.6.2 Needs Met Rating for Upsetting-Offensive Tolerant Queries
Google has made another change here, again specifically mentioning political affiliations. And they changed the term users to people.
Here is the first paragraph of the old section:
Remember that users of all ages, genders, races, and religions use search engines for a variety of needs. One especially important user need is exploring subjects that may be difficult to discuss in person. For example, some people may hesitate to ask what racial slurs mean. People may also want to understand why certain racially offensive statements are made. Giving users access to resources that help them understand racism, hatred, and other sensitive topics is beneficial to society.
And the updated version:
Remember that people of all ages, genders, races, religions, and political affiliations use search engines for a variety of needs. One especially important user need is exploring subjects that may be difficult to discuss in person. For example, some people may hesitate to ask what racial slurs mean. People may also want to understand why certain racially offensive statements are made. Giving users access to resources that help them understand racism, hatred, and other sensitive topics is beneficial to society.
15.0 The Relationship between Page Quality and Needs Met
This change was merely a grammatical one:
The HM rating may not be appropriate if a page has low Page Quality or has any other undesirable characteristics, such as outdated or inaccurate information, or if it is a poor fit for the query. We have very high standards for the HM rating.
Overall
With Google placing this additional emphasis on accuracy in search results, as well as on ensuring results are high quality, authoritative, and trustworthy, it is clear Google is continuing to work on keeping misleading search results from ranking well in its index. So we could likely see more algo updates coming as they work on improving this.
Why does Google care so much about their results quality? Because if Google is not serving the highest quality search results, that’s going to lead to searchers deciding to try out other search engines instead.
The entire focus on rater bias, particularly political bias, could be a telling addition. Or perhaps Google is seeing political bias shine through when evaluating raters’ ratings. And Google’s goal is to have diverse search results while avoiding links to inaccurate political content, something that was an issue during the last US election cycle.
Overall, these changes weren’t as extensive as previous updates. But they could be a sign of an upcoming algo change. And the introduction is a nice overview for raters and webmasters alike.
Jennifer Slegg is a longtime speaker and expert in search engine marketing, working in the industry for almost 20 years. When she isn’t sitting at her desk writing and working, she can be found grabbing a latte at her local Starbucks or planning her next trip to Disneyland. She regularly speaks at Pubcon, SMX, State of Search, Brighton SEO and more, and has been presenting at conferences for over a decade.