Duplicate content is a hot topic and has been for quite a while. It is also one of the most misunderstood issues in search engine optimization. Many webmasters, and even some search marketers, spend an extraordinary amount of time and resources trying to avoid the dreaded “Duplicate Content Penalty”, when in fact a penalty derived from duplicate content is fairly rare and is reserved for sites that have been observed trying to manipulate search engine rankings directly, i.e. search engine spammers.
The more common issue associated with duplicate content found by search engines is the “Duplicate Content Filter”. When a search engine finds two or more pages with identical or nearly identical content, it applies a filter that allows only one instance of the content to be returned in the search results. This is not a penalty and does not affect the site as a whole, only the specific page as it relates to the specific search query. The goal of the search engines is to provide their users with unique content, and this filter helps ensure each page returned in the search results is unique.
In the past couple of weeks Google has published an article with some very specific information on how it sees and handles duplicate content, along with some bullet points on issues to watch for. Additionally, another new US Patent relating to identifying and handling duplicate content has been granted to Google.
Recently there has been a lot of talk about Google’s supplemental index, and there seems to be some confusion as to what determines whether a page is put into the main results index or the supplemental results index. Matt Cutts, a software engineer at Google, touched on this issue in a recent blog post, where he explains that having pages in the supplemental index does not mean a penalty has been applied. The main reason a particular page is put into the supplemental index is its PageRank: Google may no longer be counting the links the page once had, or may not be giving those links the same weight as before. The way to get pages in the supplemental index returned to Google’s main index is to build high-quality links to those pages.
AJAX allows a developer to query a local or remote data source and render that content right in the browser without refreshing or reloading the page. This gives the user a much more responsive experience; if you have ever used Google Maps or new web apps like Google’s Docs & Spreadsheets beta, you will no doubt understand the power of this wonderful technology. With all the new AJAX toolkits out there, it is easier than ever to dive in and use this technology in your web projects. Expect to see it in use very often in the coming years.
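To give a feel for how little is involved, here is a minimal sketch in TypeScript using the standard XMLHttpRequest API: it fetches an HTML fragment and injects it into the page without a reload. The URL and element id are hypothetical examples, not part of any particular toolkit.

```typescript
// Minimal AJAX sketch: fetch an HTML fragment and inject it into the page
// without a full reload. The URL and element id are hypothetical examples.
function loadFragment(url: string, targetId: string): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true); // asynchronous request
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      const target = document.getElementById(targetId);
      if (target) {
        // Replace the target element's contents with the fetched markup.
        target.innerHTML = xhr.responseText;
      }
    }
  };
  xhr.send();
}

// Example usage: pull new content into the page, no refresh required.
loadFragment("/fragments/latest-posts.html", "content");
```

The AJAX toolkits mostly wrap this same pattern in friendlier helpers, but the underlying idea is no more complicated than the sketch above.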
Recently I tried to find any information I could regarding SEO and AJAX. I have had much fun playing with it in the past and wanted to get back into it, but this time I am looking at it from an SEO perspective. I have yet to find a good, reliable solution for getting AJAX content indexed.
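One idea that comes up often, though I cannot vouch for how reliably crawlers handle it, is progressive enhancement: serve the content as plain, crawlable HTML links first, then let JavaScript intercept those links and load the same content via AJAX. Below is a rough sketch of that idea in TypeScript; the markup, URLs, and ids are hypothetical.

```typescript
// Progressive-enhancement sketch (hypothetical markup, URLs, and ids).
// The links below work as ordinary crawlable HTML; when JavaScript is
// available, they are hijacked and the same content is pulled in via AJAX
// instead of a full page load.
//
// <div id="content">
//   <a href="/articles/duplicate-content.html">Duplicate content</a>
// </div>

function enhanceLinks(containerId: string): void {
  const container = document.getElementById(containerId);
  if (!container) return;

  const links = container.getElementsByTagName("a");
  for (let i = 0; i < links.length; i++) {
    const link = links[i];
    link.onclick = (event: Event) => {
      event.preventDefault(); // stop the normal page navigation
      const xhr = new XMLHttpRequest();
      xhr.open("GET", link.href, true);
      xhr.onreadystatechange = () => {
        if (xhr.readyState === 4 && xhr.status === 200) {
          container.innerHTML = xhr.responseText; // swap in the fetched content
        }
      };
      xhr.send();
    };
  }
}

enhanceLinks("content");
```

Since the crawler only ever sees the plain HTML links, it can index the content at its own URLs, while users with JavaScript get the responsive AJAX experience. Whether this counts as a good, reliable solution in practice is exactly the question I am still trying to answer.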