Articles in the Offsite SEO Category

SEO doesn’t end with your website. You can use offsite SEO techniques to improve your reach and achieve brand ubiquity. Learn how to enhance your SEO strategy with tips for offsite optimization, brand expansion, and content marketing. MoreVisibility’s SEO experts can guide you through the most up-to-date offsite SEO techniques and methodologies.

March 17 2008

Usability Resources for optimal user-centered website design


I would like to share a few usability resources I have found that can quickly get you on your way to understanding your visitors and employing Best Practices for usability design.

www.useit.com
Jakob Nielsen, Ph.D., is a patent holder and renowned usability guru. You can find invaluable reports as well as all the content from his free Alertbox newsletter.

Don’t Make Me Think (Book)
This book provides an easy-to-follow, non-technical, yet revealing perspective on what goes on in visitors’ minds when they see your site, from the first impression-forming seconds to their interaction with navigation elements minutes later. It details many case studies and guides you through Best Practices for designing for visitors so that you won’t have to teach them to use your site: they’ll know instinctively and won’t have to think. It’s a short but thought-provoking read with suggestions you will soon want to implement.

www.uxmatters.com
This organization produces an e-zine about usability and design issues. Some of the content is theoretical, but you can take something away from every article. The site contains a glossary of usability-related terms and abbreviations, conference reviews, and access to archived articles. Though it has been around for just two years, there is a lot of useful content.

www.usability.gov
See your tax dollars at work. Uncle Sam has compiled research and guidelines for developing usable websites, covering everything from planning to designing to testing and refining your site. You can also find newsletters, articles, and events related to site usability. They also sell their Research-Based Web Design & Usability Guidelines book, which includes contributions from ‘experts from across government, industry, and academia.’

www.challishodge.com
A blogger with a large archive writes about ‘the user experience, design and strategy,’ drawing on current events across a broad range of topics, from art to nanotechnology to Word of Mouth Marketing. In addition to the informative and interesting blog posts, you will also find lists of organizations, other blogs, books, and resources.

www.poynterextra.org/eyetrack2004/index.htm
The Poynter Institute runs tests on visitors’ eye-movement behavior while they read multimedia and news-related websites. This site, as well as http://eyetrack.poynter.org/, gathers the findings and helps you understand which design decisions can get your site visitors to look, and then hopefully click, where you want them to. Though this information pertains specifically to news websites, you should be able to apply the findings about images, font size, and information recall to your own design.

www.webstyleguide.com
Originally published by Yale University, Webstyleguide.com presents a logical, prioritized approach to Best Practices in web design with an emphasis on user-centered design. The guidelines start with a discussion of the design process and design goals, continue with interface, site, and page design, and then delve into visual elements and editorial style.

psychology.wichita.edu/optimalweb/default.htm
This resource’s goal is to assist you in designing a website for users, and it does so by combining and presenting knowledge gained from many researchers on human interaction with interfaces. The Software Usability Research Laboratory, which is responsible for this site’s content, includes research from the previously mentioned Poynter Institute and Jakob Nielsen. In this resource, as well as on its sister site, surl.org, much of the text is supported by parenthetical citations so you can find the original publication of a researcher’s findings. Though this site was last updated in March 2003, and some of its suggestions are dated, surl.org’s newsletter is current as of July 2007.

March 7 2008

Duplicate Content in Search Engine Indexes: Too much of a good thing?


Having duplicate content on your site may not seem like it could cause a problem in search engine indexes. After all, the more keyword-relevant pages a site has in the indexes, the more likely that a page from the site will appear in the search engine results pages (SERPs) for that keyword, right? It’s true that duplicate content in search engine indexes is not the worst problem a site can have; it’s infinitely better than no content, for example. However, serving up duplicate content to search engines can cause problems. Although the major search engines are dedicated to crawling the entire web and indexing every single page, they are also constantly striving to present as many unique and relevant results to their users as possible. To do this, they have to filter out duplicate content, particularly when it occurs on the same site.

How do duplicate content filters work? Every search engine is different, and this is an aspect of search that changes all the time. In fact, Google recently made major changes to the way it handles duplicate content. Prior to the fall of 2007, Google maintained two indexes: a main index, from which most search results were drawn, and a supplemental index. Pages in the supplemental index were much less likely to appear in the SERPs. Google has now eliminated the distinction between the indexes and started using other methods to ensure that pages from a single site do not overwhelm the search engine results. Some of these include:

  • Grouping duplicate URLs into a “cluster” and consolidating their properties, including inbound links, under one URL, which is then displayed in the SERPs (a simplified sketch of this idea appears after this list).
  • Only displaying a maximum of two results from any one domain (including sub-domains) in the results pages and providing a link to display more results if the searcher wishes.
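To make the clustering idea more concrete, here is a heavily simplified Python sketch of what grouping duplicate URLs and consolidating their inbound links could look like. It is an illustration only, not Google’s actual algorithm, and every URL, page text, and link count in it is hypothetical.

```python
import hashlib
from collections import defaultdict

# Hypothetical crawl data: URL -> (visible page text, inbound link count).
pages = {
    "http://example.com/green-tea":           ("All about green tea.", 12),
    "http://example.com/drinks/green-tea":    ("All about green tea.", 3),
    "http://example.com/green-tea?sessid=42": ("All about green tea.", 1),
    "http://example.com/white-tea":           ("All about white tea.", 7),
}

def fingerprint(text):
    """Hash the normalized page text so exact duplicates collide."""
    normalized = " ".join(text.lower().split())
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# Group URLs whose content fingerprints match into clusters.
clusters = defaultdict(list)
for url, (text, inbound_links) in pages.items():
    clusters[fingerprint(text)].append((url, inbound_links))

# Consolidate each cluster's inbound links and pick one URL to display:
# here, simply the member with the most inbound links.
for members in clusters.values():
    total_links = sum(links for _, links in members)
    display_url = max(members, key=lambda m: m[1])[0]
    print(display_url, "| cluster of", len(members), "URLs |", total_links, "consolidated links")
```

In this toy example the three green tea URLs collapse into a single cluster, and only one of them would be shown, with the combined link value credited to it.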

So, if Google is taking care of this issue, why should we care? There are two main reasons:

  1. Search engines do not crawl all the pages of a site on every visit. How often a site receives a “deep crawl” depends on how important the search engines consider the site to be, but even very important sites are not fully crawled every time. How many pages get crawled can depend on how much time the search engines have allocated to crawling your site. If they waste that time collecting the same content over and over rather than crawling and indexing your unique pages, some of your content may not be included in search engine results as quickly as you would like.
  2. When Google chooses which URL to display, it may not consider issues like which page has the best title, meta tags, or URL filename. If you have gone to the trouble of optimizing a specific page for search engines, your work is all for naught if Google chooses to display a non-optimized page in the SERPs instead.

The bottom line is that a well-designed site that takes care to serve only one version of each page to both search engines and visitors will be crawled more efficiently and will be less confusing for visitors to navigate. Furthermore, as the site owner, you, and not some anonymous algorithm, will choose which pages are displayed. Google has provided some tips on how to streamline your site and avoid duplicate content issues. How important this is to your site can depend on many factors, but taking any advantage you can when competing for those all-important first-page positions is just good sense.
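What does serving only one version of a page look like in practice? One common approach is to answer requests for duplicate URL variants with a permanent (301) redirect to the single URL you want indexed. The sketch below illustrates the idea using Python and the Flask microframework; Flask and all of the URLs are assumptions made for this example, not anything prescribed by Google or by this article.

```python
# A minimal sketch, assuming a Python/Flask site with hypothetical URLs:
# duplicate addresses permanently redirect (HTTP 301) to the one version
# of the page that search engines and visitors should use.
from flask import Flask, redirect

app = Flask(__name__)

# The single version of the page you want crawled and displayed.
@app.route("/green-tea")
def green_tea():
    return "<h1>Green Tea</h1><p>All about green tea.</p>"

# Known duplicate variants of the same content send crawlers and visitors
# back to the chosen address with a permanent redirect.
@app.route("/drinks/green-tea")
@app.route("/category/tea/green-tea")
def green_tea_duplicates():
    return redirect("/green-tea", code=301)

if __name__ == "__main__":
    app.run()
```

Because a 301 is a permanent redirect, search engines generally treat the duplicate addresses as retired and consolidate indexing on the destination URL, which is exactly the “you choose the page” outcome described above.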

March 6 2008

How External Duplicate Content Affects Your Site


There was recently a frantic post on Google Groups by a gentleman who was sure that his entire website had been de-indexed by Google because another domain had a cached version of it indexed. After he saw what had happened, he researched the matter himself and concluded that he had been hijacked by this proxy cache and needed to take action to block any further problems. His response was to block all robots from his site with nofollow and noindex meta tags, which only made matters worse. His actions caused his entire site of 4,000+ pages to be removed from all search engine indexes and destroyed his business.

Of course this example is a bit extreme, but would your response have been any better? It’s time we educated ourselves about the mystery behind the dreaded duplicate content matter and learned how to really deal with it.

By basic definition, duplicate content refers to an exact copy of a webpage, or of the content on a page, that is listed under a different URL. In other words, the pages look exactly the same, but the URL in the address bar is different. This can occur either internally (within your site) or externally (on another website). For today, we are going to stick with external duplicate content, since that is what is described in the example above.

Before we begin, though, we should look at why we are concerned about what other people do online with our content. What caused this whole duplicate content beast to appear anyway? The true cause of the fear of duplicate content was Google’s supplemental index (which is now gone). The problem was that Google wanted to find a way to limit the number of results from a single site for a single keyword. For example, if you had a page about green tea on your site and you also had ten copies of that page under different categories on the same site, Google had to pick one of them so that your single site did not take up multiple spots in the rankings. The duplicate pages were placed in the supplemental index to show the owners that Google knew the pages were there but didn’t want to put them in the search results, because either the page itself or something very similar was already there.

Many site owners had problems with this because they did not have enough unique pages. Simply replacing ‘green tea’ with ‘white tea’ did not make a page unique enough to be listed as a different page. Pages needed to be clearly different, with different text, to count as unique, but no one knew that at the time. And so the dreaded missing-pages duplicate content issue began. The beast had been born.

So how does external duplicate content actually affect your site? The truth of the matter is that it doesn’t affect it at all. The stories we hear of cached versions of pages replacing the real site all have underlying, unrelated problems that we never hear about. If, for example, you were caught and de-indexed for taking part in a link farm, it’s only natural that a copy of your site takes its place. It’s still your site and still your content; it’s just listed somewhere else on the internet that isn’t in trouble with the search engine.

If we really take the time to think about this whole issue of external duplicate content before we panic and make matters worse, we can see just how unfounded it really is. Could it really be so simple to destroy your competitors that all you needed to do was make a copy of their site? Heck, multiple copies of a website could be made for just a few dollars. The internet would be in total anarchy as site after site competed over who could copy the others the most. Major sites like WhiteHouse.gov could be removed from Google because of the actions of an average middle-school child with internet access and fifty bucks. Do we really want to think this is how the internet works?

In the end, we should actually consider these duplicate external websites and caches to be a good thing. If by some off chance a user finds a cached version of your site in the farthest reaches of the internet, it will still have your content and your contact information on it. That copy could reach a user who, for one reason or another, would never in a million years have found your real site. Right now your articles and products could be reaching people you never even thought of targeting. This is a good thing for your business and your website. Some of these random cached pages might even count as backlinks; admittedly that is a far-fetched notion, but it is possible.

I hope this has cleared the air around the notion of external duplicate content and that you will feel more at ease the next time you see a copy of your page somewhere online. It won’t hurt you or your SEO efforts; all it can do is help spread your content. Remember, copying is considered the most sincere form of flattery.
