The relatively new (and free) Google page speed testing utility (http://pagespeed.googlelabs.com/) is an invaluable tool for any web developer looking to identify which parts of a site's code could be cleaned up to make it load faster. Why is this useful for SEO?
First, user experience should always be the primary goal of any website: how can I sell my product if users get frustrated with how long a page takes to load? Second, Google now uses page load time as a ranking signal in its algorithm, meaning it gives ranking priority to sites whose pages are optimized to load quickly in visitors' browsers.
What suggestions are you likely to see once you enter the URL of the domain you want to check? What's nice about this tool is that Google gives you a prioritized list of items to tackle, essentially saying, "If you clean up these things first, you will likely see better results."
The very first thing you see after entering the URL is a score out of 100, which tells you at a glance how severe the issues are. A high score indicates little room for improvement, while a lower score means more optimizing is needed to reduce page load time. Have you used Page Speed Online recently?
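If you'd rather pull that score programmatically than read it off the web page, Google also exposes the data through the PageSpeed Insights API, which returns JSON. The sketch below is a minimal, hedged example of extracting the performance score from a response shaped like the current (v5) API, where the score is reported on a 0-1 scale; the sample response here is an illustrative excerpt, not real API output.

```python
import json

# Illustrative excerpt of a PageSpeed Insights (v5) JSON response.
# A real response contains many more fields (audits, metrics, etc.).
sample_response = json.dumps({
    "lighthouseResult": {
        "categories": {"performance": {"score": 0.87}}
    }
})

def extract_score(raw_json: str) -> int:
    """Return the performance score scaled to the familiar 0-100 range."""
    data = json.loads(raw_json)
    score = data["lighthouseResult"]["categories"]["performance"]["score"]
    return round(score * 100)

print(extract_score(sample_response))  # prints 87
```

A score in the high 80s or 90s, as the article notes, means there is little left to optimize; a lower number means the prioritized suggestion list deserves attention.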
As most of us know, we do not want search engine spiders crawling and indexing certain parts of our websites. For instance, it is probably unwise to let spiders crawl the secure (https://) areas of a site.
We also don't want Googlebot or Bingbot crawling and indexing pages that would make poor search targets, such as PDF files, Excel documents, or images. This is where the robots.txt file comes in.
The robots.txt file, uploaded to the root directory of the site (www.example.com/robots.txt), tells the spiders which directories and pages to skip. Why do we want this? If someone finds a PDF file or a Flash file in a search result and clicks on it, those types of documents generally don't contain links leading back to the rest of the site and can be a dead end for both the search engines and the user. A simple "Disallow" instruction in the robots.txt file will help keep non-SEO-friendly pages from showing up in search results for your desired keyphrases.
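As a sketch, a robots.txt covering the cases above might look like the following (the directory paths are illustrative; use your own site's structure). Note that the `*` and `$` wildcards in Disallow rules are extensions supported by Google and Bing, not part of the original robots.txt convention:

```
# Applies to all crawlers (Googlebot, Bingbot, etc.)
User-agent: *

# Keep spiders out of secure areas of the site
Disallow: /secure/

# Block a directory of non-SEO-friendly documents
Disallow: /downloads/

# Block PDF files anywhere on the site (wildcard syntax;
# supported by major engines but not all crawlers)
Disallow: /*.pdf$
```

Keep in mind that robots.txt is advisory: well-behaved crawlers honor it, but it is not an access control mechanism, so truly private content still needs server-side protection.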
I love it when art and science come together. Maybe that is why I am so enthusiastic about SEO and the possibilities of the internet in general. Case in point, the other day I found the periodic table of SEO, courtesy of Search Engine Land. What this little piece of scientific art is showing is a formula for SEO success, based on ranking factors that search engines look for when crawling your website.
The table highlights fundamental ranking factors for on-site and off-site optimization strategies. It also sheds light on search engine violations and on blocking by users via Google's newly released result-hiding feature.
The table gives a numerical weighting in the upper right-hand corner of each element, a spinoff of the traditional periodic table. The numbers (1-3) indicate each element's level of importance, with "1" being the least important and "3" the most.
Some of the listed elements are as follows:
On-Page SEO Ranking Factors:
– Content Quality and Research — are your pages well written and has keyword research been done?
– HTML Titles, Description, and Keywords (Meta Data) — does your meta data contain your keywords, and does it describe the page?
Off-Page SEO Ranking Factors:
– Link Quality — are your links from trusted and reputable websites?
– Trust/Authority — do your links and shares make your site trustworthy?
Violations:
– Thin Content — is your content generic and lacking substance?
– Keyword Stuffing — are you cramming keywords into your content excessively?
This document is both extremely informative and creative. The challenge is following it and optimizing your website to the letter. That, like the table itself, is both an art and a science.