The end of September and beginning of October 2012 have proved to be busy times for Google. The search giant rolled out several updates that jostled the rankings of many websites. SEOs have also been busy trying to sort out the updates and analyze their impact. Here’s our rundown of the most important updates in chronological order.
Panda Update: Google rolled out its Panda algorithm for the 20th time during Sept. 27-30. Ordinarily, Google runs Panda about once per month to filter through search results and penalize sites with low-quality content. But Panda's algorithm was upgraded for this 20th run, which had a dramatic impact on search results. The last several Panda updates affected only about 1% or less of search queries, while this new Panda update affected 2.4% of English queries.
EMD Update: The Exact Match Domain (EMD) update was a brand new algorithm that Google pushed on Sept. 28, which affected about 0.6% of English searches. This update specifically targeted websites that had high rankings by virtue of their domain name being an exact match for search queries. However, this doesn't mean that all EMDs will rank poorly. Google's real goal with this update is to penalize low-quality websites that are only riding high because of their domain names. Having an EMD is fine, so long as the website is a source of good content; many high-quality websites with exact-match domains still rank well after the EMD update.
In a way, the EMD update is very similar to Panda — maintaining high-quality content on your website will keep you from being penalized. Google will roll out the EMD update again in the future (possibly every month, like Panda), but exact dates are not known.
Penguin Update: On Oct. 5, Google ran its Penguin update for the third time since it debuted in April. This had a small impact on search results, affecting only about 0.3% of English queries.
Top Heavy Update: Officially known as the page layout algorithm, the Top Heavy update rolled out for its second time on Oct. 9. This also turned out to be a relatively small update, affecting only about 0.7% of English searches.
Since all of these algorithms are being run periodically, it’s important to keep them in mind when modifying the content of your website. To check if any of these algorithms have impacted your site rankings, cross-reference your Google Analytics and Webmaster Tools data with the release date of each algorithm to see if there’s a correlation.
At the end of the day, the rules for SEO really aren't any different. All of these updates are taking aim at weak content and poor user experience. If you continue to create original content, follow best practices, and maintain your site with users first in mind and search engines second, then the quality of your site will be rewarded in search rankings.
Quite a few websites make use of pagination: the distribution of content, such as products or posts, evenly across multiple pages. This can provide a good user experience on many sites with blog or product categories. If these category pages have too much content to realistically fit on one page, spreading it across multiple pages can be a smart choice.
However, pagination can cause serious SEO issues. For one, ranking and indexing signals such as inbound links might be diluted across all pages instead of accumulating on the most important page in the series, which can keep the main page from ranking as well as it should. Second, if you repeat static content, such as a descriptive paragraph, on every page, pagination can cause duplicate content issues.
Many websites have run into these problems, and Google has created some solutions. First, if your website or blog has an article broken up across multiple pages, the recommended solution is to implement rel="prev" and rel="next" link elements on each page. These inform search engines that the pages form a series and should be grouped together. Google's documentation covers this markup in more detail.
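As a minimal sketch of what this looks like (the example.com URLs are placeholders, not from any real site), the middle page of a three-page article would carry both link elements in its head, while the first page would carry only rel="next" and the last only rel="prev":

```html
<!-- In the <head> of page 2 of a hypothetical three-page article -->
<link rel="prev" href="http://www.example.com/article?page=1">
<link rel="next" href="http://www.example.com/article?page=3">
```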
The next case is a blog with categories, where each category contains so many posts that they span many pages. If your blog appends a parameter to the URL of each page, such as page=1, page=2, page=3, then you might run into issues where links get created to pages with no content, for example page=133 when you only have 10 pages. If this is a problem with one category, it is very likely to happen across multiple categories, so it is important to address it early. The best solution is twofold, but it is important to be very careful with it: if you are not technically knowledgeable, it is recommended to contact your webmaster or an expert.
First, create and verify a Google Webmaster Tools account if you do not have one already. Next, go to “parameter handling” under the “configuration” tab and click on “configure URL parameters.” This will lead you through the steps necessary to keep Google from crawling your paginated pages. Make sure you know exactly which parameters appended to your website’s URLs are causing the pagination issues. Finally, it is also recommended to implement the rel=canonical element on each page that is not the first page in the series; each canonical element should point to the first page. Google’s documentation has more on parameter handling.
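To illustrate the canonical step, assuming a category paginated with a hypothetical page parameter on a placeholder domain, every page after the first would point back to page one:

```html
<!-- In the <head> of http://www.example.com/widgets?page=2 (and every later page) -->
<link rel="canonical" href="http://www.example.com/widgets">
```

This consolidates ranking signals on the first page of the series, addressing the dilution problem described above.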
Once again, it is very important to make sure you have the technical knowledge to implement either of these suggestions. If you implement parameter handling incorrectly, it can keep Google from crawling and indexing important pages. If you implement it correctly, it can save Google from crawling unimportant pages and present only the pages you want to perform well in the search engines.