Our Analytics, SEO, Social Media and SEM Blogs provide an informal platform for us to share our insights on the digital media industry. Practically every member of the MoreVisibility team contributes to one or more of our blogs with their real-time, in the trenches strategies and observations. Our goal is to provide you with up-to-date information and trusted industry opinions on what's happening in search and beyond. To stay up to date on our blog, subscribe to our feed.
Duplicate content is a hot topic and has been for quite a while. It is also one of the most misunderstood issues in search engine optimization. Many webmasters, and even some search marketers, spend an extraordinary amount of time and resources trying to avoid the dreaded “Duplicate Content Penalty,” when in fact a penalty derived from duplicate content is fairly rare and is reserved for sites that have been observed deliberately trying to manipulate search engine rankings; i.e., search engine spammers.
The more common issue associated with duplicate content found by search engines is the “Duplicate Content Filter.” When a search engine finds two or more pages with identical or even nearly identical content, it applies a filter that allows only one instance of the content to be returned in search results. This is in no way a penalty, and it does not affect the site as a whole, just the specific page as it relates to the specific search query. The goal of the search engines is to provide their users with “unique” content, and this filter helps ensure that each page returned in the search results is unique.
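To make the idea concrete, here is a minimal sketch of how a near-duplicate filter could work, using word-level shingles and Jaccard similarity. This is only an illustrative approximation, not Google's actual algorithm; the `filter_duplicates` function, the 3-word shingle size, and the 0.9 similarity threshold are all assumptions made for the example.

```python
def shingles(text, k=3):
    """Return the set of k-word shingles in a lowercased text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def filter_duplicates(pages, threshold=0.9):
    """Keep only one instance of each group of near-identical pages.

    pages: list of (url, text) tuples. Returns the URLs that survive
    the filter; later near-duplicates of an earlier page are dropped.
    """
    kept = []
    for url, text in pages:
        s = shingles(text)
        if all(jaccard(s, shingles(t)) < threshold for _, t in kept):
            kept.append((url, text))
    return [url for url, _ in kept]
```

For example, given two pages with identical body text and one distinct page, only one of the two identical pages would be returned, which mirrors how the filter suppresses duplicates per query rather than penalizing the site.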
In the past couple of weeks Google has published an article with some very specific information on how it sees and handles duplicate content, as well as some bullet points on issues to watch for concerning duplicate content. Additionally, another new US patent relating to identifying and handling duplicate content has been granted to Google.
If you don’t like the weather, just wait 10 minutes! That saying also applies to the dynamic search industry. If there is one thing that is certain about the future of search engines, it is that they will continue to change frequently. For example, I often find myself discussing a Search Engine Marketing topic with a client, only to find that same topic completely outdated the very next day. It could be anything from something as basic as online market share percentages to a search engine completely changing who it partners with (see the next paragraph for more on this). The Search Engine Marketing industry is so dynamic and constantly changing that it always keeps me on my toes!
A perfect example of how things rarely stay stagnant for too long is the search engine MIVA (http://www.miva.com/), one of the engines to which we submit many of our clients. MIVA recently announced that it will be dropping Yahoo and signing on with Google. In simpler terms, its search results will no longer be generated from Yahoo’s database; they will come from Google’s. This change should occur later this month. Given that Google accounts for the majority of the market share, it is not surprising that other engines, such as MIVA, would want to partner with it. (Did I also mention that MIVA was previously called Findwhat?)
Another example of how this industry doesn’t wait for anyone is the constant evolution of search engine algorithms. The top search engines are constantly tweaking their algorithms and filters, and their spiders continue to improve. Even simple changes to an algorithm can have a noticeable impact on where your site is positioned within the search results. It is critical to be aware of such changes within the industry and to take a proactive approach.
A great way to keep up with the constant changes is to sign up for newsletters and articles from various publications. They are free and very informative. Here are just a few that I find useful:
The Search Engine Marketing industry is never static, and things will continue to change rapidly; however, articles and publications like those mentioned above can certainly provide some insight. Happy reading!
Recently there has been a lot of talk about Google’s supplemental index, and there seems to be some confusion about what determines whether a page is placed in the main results index or the supplemental results index. Matt Cutts, a software engineer at Google, recently touched on this issue. In a recent blog post, he explains that having pages in the supplemental index does not mean penalties have been applied. The main reason a particular page is placed in the supplemental index is its PageRank: Google may no longer be counting the links the page once had, or may not be giving those links the same weight as before. The solution for having pages in the supplemental index returned to Google’s main index is to build high-quality links to those pages.
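Since PageRank is the deciding factor here, a small worked example may help show why links matter. The sketch below is the classic power-iteration formulation of PageRank; the example graph, the 0.85 damping factor, and the iteration count are illustrative assumptions and are not Google's actual settings, which are far more complex and not public.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Compute approximate PageRank scores by power iteration.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> score; scores sum to roughly 1.0.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iterations):
        # Every page gets a base "teleport" share of (1 - damping) / n.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # A page passes its rank evenly to the pages it links to.
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page with no outlinks: spread its rank evenly.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

In a toy graph where several pages link to a home page but nothing links to an orphaned blog page, the home page ends up with the higher score, which is the intuition behind building quality links to pages stuck in the supplemental index.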