The following are seven web design disasters that every SEO and UX person should know to avoid. Make sure that your business’s website has not fallen victim to one of these design flaws and make sure that your visitors, whether they be human or search engine bot, understand and enjoy your website.
1. Be Cautious of iFrames
Content inside an iFrame lives at its own URL, not on the page that embeds it. Each search engine treats iFrames differently and may or may not spider and index the framed page, so content you place inside an iFrame may never be credited to your site.
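As a minimal illustration (the URLs here are hypothetical), the embedding page and the framed content are two separate documents:

```html
<!-- The framed content lives at its own URL, not on this page;
     crawlers may index it separately, attribute it elsewhere, or skip it. -->
<iframe src="https://www.example.com/framed-content.html"
        title="Framed content"></iframe>
```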
2. Being Horribly Vague
Your navigation and anchor text should not be “page 1,” “click here,” or “more.” Make sure every link has keyword-rich anchor text and that your navigation makes logical sense.
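The difference is easy to see in markup (link URL and wording are hypothetical):

```html
<!-- Vague: tells neither users nor search engines what the destination is -->
<a href="/bunny-care-guide">click here</a>

<!-- Descriptive, keyword-rich anchor text on the full phrase -->
<a href="/bunny-care-guide">bunny care guide</a>
```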
3. Only Hyper-linking Part of A Word or Keyword Phrase
Hyperlinking only part of a keyword phrase to an internal page of your website will not give search engines a comprehensive idea of what that page is about. Your visitors will probably be pretty confused as well when only half of a phrase or word is hyperlinked.
4. Pop-Ups
Pop-ups wreak havoc on user experience, as they are one of the most annoying features allowed in web design, which is probably why a high percentage of people now use pop-up blockers.
5. Browser Incompatibility
While we could call this browser and bot incompatibility, you should test your website designs to make sure they can be read and understood by the largest possible share of both people and search engines. Testing browser compatibility keeps your human visitors happy, and testing whether your code is readable to search engines helps ensure that you are indexed and that new visitors can find you.
6. Too Much Of (A Possibly) Good Thing
7. Indecipherable URL Structure
If one of your visitors wants to send your website to a friend via e-mail, IM, or another textual medium of choice, www.yoursite.com/bunnies will probably go over better than animals.yoursite.com/US/a9/3/small/v/rabbits/bunnies, since it is easily identifiable as a page about bunnies.
Avoid these seven web design no-nos and take your website toward a better SEO and UX future. If you have a favorite SEO or UX tragedy, feel free to share it with us in the comments.
Earlier this year, Google announced that its algorithms would be tweaked to devalue “over-optimized” websites. Earlier still, in 2011, Google launched its Panda update, which specifically targeted websites with low-quality content (think: duplicated and/or spam content). Google readily admits it’s doing everything in its power to promote high-quality content that values the user experience. To those ends, it has been ramping up efforts to remove blog networks from its index.
There are many different blog networks that have unique rules, genres, and policies. However, they all serve the same purpose — building links. When you post an article to a blog network, that article will appear on blogs from around the web that are affiliated with the network. By including links back to your site within the article, you can quickly get hundreds of in-bound links after you submit the article to the blog network. This has several inherent problems.
Firstly, you need to pay a membership fee to join a blog network. That means you are paying for links — which goes directly against Google’s Webmaster Guidelines. Secondly, when hundreds or thousands of blogs post the same article, it creates a major duplicate content issue. Thirdly, the very existence of blog networks has prompted people to create loads of low-quality content just so they could submit to the blog network and quickly get a lot of links. In reality, Google’s crackdown on blog networks should come as no surprise.
Note that blog directories — which provide lists of relevant blogs for searchers — should not be confused with blog networks. High-quality, relevant blog directories can still be a good way to legitimately boost your blog readership. Ultimately, if you keep producing original, quality content for your website, you’ll do just fine.
In our previous two blog posts on canonicalization we covered the definition of the term, the most common webmaster concern of www versus non-www versioning of a domain, and the canonical tag.
This blog post will dig a little deeper into canonicalization for website owners who link to content files, such as PDFs, from their website. Signaling the preferred (canonical) version of a content file to search engines is just as important as signaling the canonical version of a page of your website.
The canonical tag can be used to signal the preferred or canonical URL for various content file types as well as HTML documents.
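In an HTML document, the canonical tag is simply a link element in the page’s head (the URL here is hypothetical):

```html
<head>
  <!-- Tells search engines which URL is the preferred version of this page -->
  <link rel="canonical" href="https://www.example.com/how-to-guide" />
</head>
```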
For example, if your website offers a number of PDF documents for download, such as a series of how-to guides, and also has matching pages containing the same how-to information, there end up being two URLs for each how-to topic: an HTML page and a PDF file.
When the PDF file is requested, you have the option of sending rel="canonical" in the HTTP response header, pointing at the HTML webpage, to indicate to Google and other search engines that the HTML version is preferred.
By marking the HTML page as the canonical version of the PDF, visitors who find the information in search engine results are taken to your website, tracked by analytics (assuming you are using an analytics platform), and shown all the navigation and onsite linking you have set up. If visitors instead land directly on the PDF from search results, it can act as a dead end, with no links, navigation, or tracking.
It should be noted that this technique requires not only access to the source code, but also the ability to configure the server hosting your website to send the canonical tag in HTTP headers for files such as PDFs. This method can also identify the canonical URL for a PDF when the same file is located in multiple directories.
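On an Apache server with mod_headers enabled, the configuration might look like the following sketch (filename and URL are hypothetical; your server setup may differ):

```apache
# Send a rel="canonical" Link header whenever this PDF is served,
# pointing search engines at the preferred HTML version of the content.
<Files "how-to-guide.pdf">
  Header add Link '<https://www.example.com/how-to-guide.html>; rel="canonical"'
</Files>
```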
Please don’t hesitate to contact MoreVisibility if you need help with optimizing your website or assigning the canonical version of your website pages or documents.