Significant Google Algorithm Updates
Overview
We look at the history of Google not simply out of interest but because it helps us understand what best practice looks like today and how to prepare for future success. This article details only the major events in Google’s algorithm changes. Incremental changes to existing updates are left out because they tend to be refinements rather than something new. It is the major updates that tell us what criteria Google use to rank sites highly and what criteria they use to penalise websites.
Google Launched (March 1998)
In addition to looking at meta tags and page content, what set Google apart from other search engines was the way it gave priority in its results to pages that had more inbound links from other websites.
The Fritz Update (July 2003)
Google started to update its results on a daily basis. This does not mean, however, that it takes only one day for your site to appear on Google.
The Florida/Austin Update (November 2003)
This was the first real shock update, and it had large implications for many website owners. It targeted web pages that used keyword-stuffing techniques. Austin was a close follow-up to Florida, again focussing on keyword stuffing, invisible text and doorway pages.
The Brandy Update (February 2004)
This update was all about semantics. Google switched focus from exact keywords to broad keyword searches, using synonyms to match searches with content. Simply stuffing one form of the keyword you wished to rank for into a web page no longer helped as much as using variations of the same keyword. This was an effort to surface natural-looking pages with rich content rather than serving up spammy websites targeting one specific keyword.
No Follow (January 2005)
Google, Microsoft and Yahoo introduced the “nofollow” link attribute. Adding it to a link tells search engine spiders not to follow that link through to the web page it points at. Consequently, no PageRank/link popularity is passed on, so links that would previously have helped the recipient site gain higher positions no longer help at all, and many sites fell in the rankings as a result. Some webmasters implemented nofollow as a method of keeping PageRank/link popularity to themselves rather than passing it on; nowadays, however, that PageRank simply evaporates, which stops this kind of internal link sculpting. Nofollow is also useful if you are a forum owner and want to dissuade marketers from placing spammy links on your site in the hope that they will benefit from the backlink popularity/PageRank.
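As a simple illustration (the URL below is a placeholder), the attribute sits on an ordinary anchor tag:

<a href="https://www.example.com/some-page" rel="nofollow">anchor text</a>

Spiders can still see the link, but no link popularity/PageRank flows through it to the destination page.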
The Bourbon Update (May 2005)
XML Sitemaps (June 2005)
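Google introduced Sitemaps, an XML file that webmasters can submit to tell Google which pages a site contains and when they last changed. Under the current sitemaps.org protocol, a minimal sitemap (with a placeholder URL and date) looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2005-06-01</lastmod>
  </url>
</urlset>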
Personalisation (June 2005)
From this point forward, Google looks at a user’s search history when deciding what search results are served up.
Maps (October 2005)
The Jagger Update (October 2005)
This was a major update relating to backlinks. Google targeted link farms, reciprocal links, paid links and any other types of spammy low quality links.
Universal Search (May 2007)
Google begins blending news, image and video results into the main results page, making it even easier to find what you are looking for. This update made optimisation of images more important, as users started to use image search to find desired products.
Google Suggest (August 2008)
The introduction of the “combo box” gives users suggestions as they type, prompting webmasters to start optimising for the relevant suggested terms that appear first.
The Rel-Canonical Tag (February 2009)
Google, MSN and Yahoo introduced the canonical tag, meaning that webmasters could specify the preferred page to index rather than a duplicate page, without the need for a 301 redirect. Examples of where this could be used include web page URLs with and without www, and category pages with and without filters selected.
With regard to URLs with and without www, if no canonical tag is present Google will simply make its own decision about which form to display in the search engine results, even if it has indexed (is aware of) both. Contrary to popular SEO belief, Google does not penalise you for having both; it simply makes the choice for you (based on a range of factors). This, however, is believed to have changed with Panda.
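As a sketch (the domain below is a placeholder), the tag sits in the <head> of the duplicate page and points at the preferred URL:

<link rel="canonical" href="https://www.example.com/category/widgets" />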
Google Places (April 2010)
Google now returns results based on location and also gives advertisers the option to advertise locally.
The May Day Update (May 2010)
An update that targets spammy long-tail pages. This includes duplicated content, such as descriptions supplied by manufacturers; category pages that exist in multiple versions (i.e. with and without filters); and product pages with very thin content that target specific long-tail queries (a very specific search term, such as a product description). These pages often have very low link popularity. The result of the update is that pages that previously ranked because they were optimised for the exact search term now sit further down the SERPs, in favour of slightly more generic pages with richer content and more backlinks.
Caffeine Update (June 2010)
Google upped the speed at which new content was included in the SERPs (search engine results pages), making results fresher.
Social Signals (December 2010)
Google confirm that they take into account data from both Facebook and Twitter.
The Panda Update (April to November 2011)
Probably the most significant Google update since its inception, Panda has had a profound effect on many websites and concentrates on the following issues.
Duplicate articles
People have used articles for two main reasons: first, to build lots of pages quickly whilst targeting long-tail keywords, on the theory that the more pages the better; and second, to place articles on third-party websites in order to benefit from the backlink and therefore an increase in PageRank. However, because this content is duplicated, Google sees it as spam.
Spammy/thin content
Pages built specifically for long-tail search terms that, apart from keyword stuffing, contain very little content.
Content Quality
It’s not just quantity that Google is looking for, it’s quality. Google knows that spammy content is often written quickly or even automatically generated. Content errors such as poor spelling and grammar are an indication of poor quality control at best and spam at worst.
Duplicate content
In general this covers two forms. The first is site content that appears on your site twice, for example many pages that are essentially the same, such as products identical except for colour or size; if the description is the same then the content is the same. This also covers category pages that exist in different versions because of filters or pagination. The second is site content that appears both on your site and on somebody else’s. A good example of this is product information supplied by manufacturers, which appears on many of their customers’ sites. Google will view this as duplicate content with little value/relevance.
Double Serving
Two websites owned by the same company that serve up identical content.
But what makes Panda different is the way that Google treats this content. In the past, poor content on a page would just mean that the page itself ranked poorly. With Panda, the site as a whole is affected, meaning that a spammy page deep in your site can affect your main web pages even if they themselves are of good quality.
+1 (March 2011)
Google places a +1 button next to each listing on the results page letting users influence search results in their own social circle.
Google + (June 2011)
Google builds on +1 and launches its own social network, which becomes embedded in many of Google’s services.
Pagination (September 2011)
To help webmasters filter out duplicate content, Google introduces the link attributes rel=”prev” and rel=”next”. These help to define content that is split into a range of pages.
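For illustration (the URLs below are placeholders), page two of a three-page category could declare its neighbours in the <head> like this:

<link rel="prev" href="https://www.example.com/widgets?page=1" />
<link rel="next" href="https://www.example.com/widgets?page=3" />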
The Freshness Update (November 2011)
Google further updates its algorithm to reward recently updated content.
Ads Above the Fold (January 2012)
Google further punishes spammy sites by devaluing websites with too many ads shown on the page before users have to scroll.
The Penguin Update (April 2012)
Continuing on from Panda, Penguin further targets over-optimised websites that employ keyword stuffing and over-optimisation of inbound links. Link sculpting of internal links and purposeful duplicate content are also being targeted.
In order for sites to rank well, more favour is given to those that interact with their customers via social networking and other forms of engagement, such as reviews (including Google Checkout reviews).
Knowledge Graph (May 2012)
Google now displays relevant information about the item searched for directly on the results page. For example, if you search for a famous person, information such as place of birth, date of birth and relationships will be shown. This information can come from a variety of third-party sources, such as Wikipedia, or even Google’s own databases.
Exact-Match Domain (EMD) update (September 2012)
Google reduced the relevance of sites whose domain name exactly matches the target search term. This is a move to remove spammy sites that achieve high rankings simply on the strength of their domain name.
Payday Loan Update (June 2013)
Payday focuses on heavily spammed search terms, such as payday loan queries, in an effort to remove spammy sites from the search engine results pages.
The Hummingbird Update (August 2013)
The Hummingbird update focuses on natural language searches. This is the typical type of search carried out by users on a mobile device with voice recognition. The trend is set to continue, and Google are rewarding site owners whose sites respond well to this type of search. It is another indicator that having a mobile-optimised site will benefit businesses going forward.
In Depth Articles (August 2013)
A new type of search result highlighting websites that have an in-depth, content-rich article relating to the specified search term.
Pigeon (July 2014)
Pigeon aims to provide more accurate local search results, improving distance and location accuracy within those results. The update primarily affects Google Maps and web searches that use local search terms such as a business name, business type or town.
HTTPS/SSL Update (August 2014)
Google announce that they will give preference to secure sites and that adding SSL encryption will provide a rankings boost.
Pigeon Expands (December 2014)
Google rolls out "Pigeon" to the United Kingdom, Canada and Australia.
Mobile Update AKA "Mobilegeddon" (April 2015)
An algorithm change giving a boost in rankings to mobile friendly sites.
RankBrain (October 2015)
Google reveal that machine learning is now part of the algorithm.
Mobile-friendly 2 (May 2016)
Google roll out another ranking boost to benefit mobile-friendly sites on mobile search.
Intrusive Interstitial Penalty (January 2017)
Google start penalising aggressive interstitials and pop-ups that damage the mobile user experience. This excludes age-verification and cookie pop-ups.
Google Jobs (June 2017)
Google launch their jobs portal.
Chrome Security Warnings for Forms (October 2017)
Google Chrome starts warning visitors to sites with unsecured forms. Although not a search engine algorithm change as such, it had a significant impact on traffic to unsecured sites.
Mobile-First (March 2018)
Google announced that the mobile-first index was rolling out.
Mobile Speed Update (July 2018)
Page speed becomes a ranking factor for mobile results.
Chrome Security Warnings (July 2018)
Chrome 68 now flags all non-HTTPS sites as not secure.
"Medic" Core Update (August 2018)
Google confirms a core algorithm update. This update seemed largely to affect sites in the health and wellness industry; however, impact was seen across all industries.
BERT Update (October to December 2019)
Google update their algorithm to support the BERT natural language processing model. It is later rolled out in 70 languages in December.
Core Update (May 2020)
Another large core algorithm update that caused major ranking changes for many sites.