Google’s algorithm updates

1996 – PageRank

Larry Page and Sergey Brin developed the PageRank algorithm.

https://patents.google.com/patent/US7058628B1/en

The more links point to a page, the more important it is. Analogy: the more often a scientist is cited, the more significant he or she is.
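The idea can be illustrated with a minimal power-iteration sketch. This is a toy illustration of the published PageRank concept, not Google's actual implementation; the graph and function names are hypothetical:

```python
# Toy PageRank via power iteration (illustrative sketch only,
# not Google's production algorithm).

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                     # dangling page: spread rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                                # split rank among outgoing links
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page web: C is linked from both A and B,
# so it accumulates the highest rank.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
```

Note that rank flows through links: C ranks highest not just because two pages link to it, but because one of them (A) is itself well linked.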

2011 – Panda

https://patents.google.com/patent/US8682892B1/en

https://googleblog.blogspot.com/2011/02/finding-more-high-quality-sites-in.html

Aims to demote low-quality content.

Google hired quality raters and asked them questions such as: would you trust this site with your credit card number? Do you enjoy reading its content?

After determining what people like and dislike on websites, Google scaled this experience using machine learning.

Google tried to demote low-quality content sites (also referred to as content farms).
For the first time, design and user experience were taken into consideration (CTR, bounce rate, diversity of traffic – whether a site attracts brand queries and type-in traffic).

2012 – Penguin

https://support.google.com/webmasters/answer/2648487?hl=en

A fight against spammy, artificial, or low-quality links.

Before that, low-quality websites with a large mass of inbound links could rank higher than high-quality sites.

Google decided to distinguish genuine links and to penalize bought ones.

Google Search Console got a tool for disavowing links.
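The disavow file uploaded through that tool is a plain text file: lines starting with `#` are comments, `domain:` disavows every link from a domain, and a bare URL disavows a single page. The domains below are hypothetical placeholders:

```text
# Disavow all links from an entire spammy domain
domain:spammy-links.example

# Disavow a single page only
http://blog.example/paid-links-page.html
```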

 

2012 – Pirate

https://search.googleblog.com/2012/08/an-update-to-our-search-algorithms.html

The more valid copyright removal notices a site receives, the lower it ranks in search results.

2013 – Hummingbird

https://search.googleblog.com/2013/09/fifteen-years-onand-were-just-getting.html

From now on, search results focus on user intent rather than the literal meaning of words. Google tried to truly understand people. Before that, computers performed more or less mechanical tasks; they were a bit “stupid”. From now on, Google started to explore what users really mean. The query “weather” is most likely a request for the local weather forecast.

People started to enter lengthy queries. Voice recognition allowed users of mobile devices to speak their search queries rather than type them in. From now on, the whole search phrase is analysed, rather than just the keywords in it.

2014 – Pigeon

By this time Google had a sophisticated tool – Google Maps – and knew about a lot of local businesses.

Now Google started favoring the local businesses closest to the searcher.

Example: “buy pizza” implies “where can I buy pizza right now, in the vicinity”.

2014 – HTTPS/SSL

HTTPS means encrypting data while it is transmitted over the internet. Sometimes the information is confidential (passwords etc.).

From now on Google favours sites that work over HTTPS.

2015 – Mobile Update

https://search.googleblog.com/2015/04/ranking-change-to-help-you-find-mobile_21.html

Mobile-friendly sites are ranked higher.

2015 – RankBrain

https://patents.google.com/patent/US9104750B1/en

From now on Google can guess the meaning of words it doesn’t know.

Past searches of users are analyzed to provide more representative search results.

Machine learning is used much more extensively. Artificial intelligence adjusts the algorithm on its own.

Google no longer relies solely on landing-page data. From now on, context-relevant search results prevail, and results are personalised.

2018 – Medic Update

Google was reluctant to comment on this update, but it was observed to hit health and medical sites hardest, hence the nickname “Medic”.

2019 – BERT

https://blog.google/products/search/search-language-understanding-bert/

Bidirectional Encoder Representations from Transformers.

From now on Google analyzes the search query as a whole: not separate keywords, but a sentence.

Google started to understand longer and more conversational queries where prepositions are important.

Google provides an example: “2019 brazil traveler to usa need a visa”. Before BERT was implemented, Google could have shown results for U.S. citizens traveling to Brazil. Now the meaning of the preposition “to” is understood: it is definitely a Brazilian going to the USA.

2019 – YMYL

Google’s Quality Rater Guidelines

https://www.blog.google/documents/37/How_Google_Fights_Disinformation.pdf/

The YMYL category was introduced much earlier – namely in 2014.
It stands for “Your Money or Your Life”.

From Google’s point of view, sensitive information should only be published by experts.

Sensitive areas:

  • financial information
  • medical pages
  • legal advice
  • news

Google had already tried to pay attention to medicine. In 2018, commercial websites of medical centres benefited from an algorithm update, whereas many independent webmasters with medical websites saw their rankings drop.

In 2019, a YMYL focus appeared within Google’s core algorithm updates.

In these sensitive areas, only authors and sites with Expertise, Authoritativeness, and Trustworthiness (E-A-T) can rank high.

This means that if a website does not publish articles by experts, it will not rank high, at least as far as YMYL topics are concerned.

But this principle seems to be applied across the board: Google appears to value only the opinions of experts and to discount those of laymen.
