The general consensus has long been that the keywords meta tag is no longer recognized by any of the major search engines (Google, Yahoo! and Bing). We now know that to be partially true: Google announced last month that it completely ignores this meta tag, and Yahoo! now claims that it stopped recognizing keywords quite a while ago. Bing, it is claimed, has never recognized the keywords meta tag at all. What are the SEO ramifications of this?
The general belief is that the search engines devalued the keywords meta tag some time ago, because many webmasters “stuffed” it with as many keywords as possible in an attempt to rank for as many terms (relevant or not) as they could. The search engines quickly got wise to this and stopped factoring the tag into their algorithms. From a user experience point of view, keywords are a non-factor: the vast majority of Internet users have no idea what a meta tag is, let alone a keyword, since meta tags are only visible in a page’s source code. From an SEO point of view, it is one less thing you have to optimize; titles and descriptions are now more important than ever.
However, an interesting article appeared in Search Engine Land on October 14th noting that, although a Yahoo representative at SMX East stated that Yahoo also ignores the tag, an experiment completely contradicted this. The random letter string “xcvteuflsowkldlslkslklsk” was placed in the keywords tag of Search Engine Land’s homepage to see whether the page would be returned for a search on that string, and it was.
In any event, even though the keywords tag is almost entirely dead, it can still benefit webmasters to fill it in, if only with the primary keyphrase for the page: some search engines still use it, and Google and Bing may change their minds and factor it into their algorithms in the future, even if they don’t announce it.
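Since the tag only lives in a page’s source code, it helps to see what a crawler would actually read. Below is a minimal sketch, using Python’s standard-library HTML parser, of how a `<meta name="keywords">` tag might be extracted from a page; the sample page markup is hypothetical, not from any real site.

```python
from html.parser import HTMLParser


class MetaKeywordsParser(HTMLParser):
    """Collects the comma-separated values of any <meta name="keywords"> tag."""

    def __init__(self):
        super().__init__()
        self.keywords = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr_map = dict(attrs)
            if attr_map.get("name", "").lower() == "keywords" and attr_map.get("content"):
                # Split the content attribute on commas and drop surrounding whitespace
                self.keywords.extend(
                    k.strip() for k in attr_map["content"].split(",") if k.strip()
                )


# Hypothetical page source for illustration
page = '<html><head><meta name="keywords" content="seo, meta tags, ranking"></head></html>'
parser = MetaKeywordsParser()
parser.feed(page)
print(parser.keywords)  # ['seo', 'meta tags', 'ranking']
```

The same idea is roughly what a “keyword stuffing” filter has to contend with: everything in that `content` attribute is free text the site owner controls, which is exactly why the engines learned to distrust it.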
Duplicate content is an issue that’s common to many sites. A question I hear frequently is, “What makes content duplicate to Google?” Google states, “Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar”.
You may have heard of duplicate content before; however, many site owners are not aware of the ways in which it can occur. Typically we see duplicate content created unintentionally, though we’ve also seen it created deliberately. On sites where it is not created in a manipulative manner, the search engines rarely impose penalties. Instead, they apply what is often referred to as a duplicate content filter: the engines filter out duplicated pages so that they can provide the searcher with diverse search results.
When search engines filter out duplicate pages, you as the publisher of the content have little control over which URL or domain is displayed in the search results. That being said, I think it’s important to identify a few ways that we often see duplicate content arise.
1) www and non-www versions both indexable by the search engines. This is probably the most common cause of duplicate content.
2) Inconsistent link references throughout the site.
3) Different navigation paths.
4) Different sort orders.
5) Printable versions of pages being accessible by the search engines.
6) Additional marketing domains that are not properly redirecting to the main website.
7) Different URLs that are used to display various elements on the page.
8) Renaming URLs without removing the old pages or implementing proper redirection rules.
Ways to address duplicate content include redirecting multiple domains to the preferred or “canonical” version, using the canonical link tag, and restricting crawler access in your robots.txt file. The best situation, of course, is a site that doesn’t create duplicate content in the first place. However, if you do have an existing site that creates duplicate content, be sure to utilize some of these handy work-arounds.
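Several of the causes above (www vs. non-www, trailing slashes, fragments) come down to many URL spellings pointing at one page. A minimal sketch of the idea behind canonicalization, assuming a hypothetical site policy of https, non-www, and no trailing slash:

```python
from urllib.parse import urlsplit, urlunsplit


def canonicalize(url):
    """Collapse common duplicate-URL variants into one canonical form.

    Hypothetical policy for illustration: force https, drop the www
    prefix, strip any trailing slash, and discard fragments.
    """
    _, netloc, path, query, _ = urlsplit(url)
    netloc = netloc.lower()
    if netloc.startswith("www."):
        netloc = netloc[4:]
    path = path.rstrip("/") or "/"
    return urlunsplit(("https", netloc, path, query, ""))


# Three spellings of the same hypothetical page
variants = [
    "http://www.example.com/products/",
    "https://example.com/products",
    "HTTP://WWW.example.com/products#top",
]
print({canonicalize(u) for u in variants})  # all collapse to one URL
```

This is the same decision a site makes once, globally, via 301 redirects or the canonical link tag; the function just makes the mapping explicit.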
According to Wikipedia.org, Geo Targeting is, “the method of determining the geolocation (the physical location) of a website visitor and delivering different content to that visitor based on his or her location, such as country, region/state, city, metro code/zip code, organization, Internet Protocol (IP) address, ISP or other criteria”.
While it is not imperative that every website focus attention on geo targeting, local businesses can benefit greatly, especially when budgets for more traditional marketing are limited. Simple methods such as adding the business’s contact information to the homepage content and meta data can increase its chances of showing up in local search directories or in Google’s local results.
Other good SEO techniques include adding county and city names to the page to narrow the page’s keyword targeting. For more internationally minded geo targeting, webmasters need to keep in mind the different terminology used in other countries; web surfers in the UK will use different terms than their counterparts in Africa.
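The Wikipedia definition quoted above amounts to a lookup: map the visitor’s location signal (here, an IP address) to a region, then pick region-appropriate wording. A toy sketch of that flow, with an entirely hypothetical prefix table standing in for a real geolocation database or service:

```python
# Hypothetical IP-prefix-to-region table; a real site would query a
# geolocation database or service instead of matching string prefixes.
REGION_BY_PREFIX = {
    "81.": "UK",
    "196.": "ZA",
}

# Region-specific terminology, echoing the UK-vs-Africa point above
COPY_BY_REGION = {
    "UK": "Find a solicitor near you.",
    "ZA": "Find an attorney near you.",
}


def localized_copy(ip, default="Find a lawyer near you."):
    """Pick region-specific wording from a visitor's IP (toy lookup)."""
    for prefix, region in REGION_BY_PREFIX.items():
        if ip.startswith(prefix):
            return COPY_BY_REGION.get(region, default)
    return default


print(localized_copy("81.2.69.160"))  # UK wording
print(localized_copy("8.8.8.8"))      # no match, falls back to default
```

The design point is the fallback: geolocation is imprecise, so a sensible default ensures visitors who can’t be located still get usable content.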
Submitting to Local Search
At the very least, local businesses should start by submitting to the oldest and most trusted of the directories (Yahoo! and DMOZ). After that, they should submit to the large traditional business directories, as these tend to rank well for many local terms. This increases reach in the search engines and positions the business as a trusted local entity within its community.
Local Directories for Small Businesses to Consider:
Yahoo local — Very Important
Merchantcircle.com — ranks well
Insiderpages.com — ranks well