When the Internet’s precursor, the U.S. Department of Defense’s ARPANET (Advanced Research Projects Agency Network), first came online in 1969, it connected only a handful of computers; by 1971 it comprised roughly 23 host computers spanning universities and research institutions across the United States.
In the decade that followed, the U.S. National Science Foundation created a separate network, CSNET (Computer Science Network), for institutions that lacked access to ARPANET. CSNET eventually merged with other networks to become the Internet.
Figure 1: ARPANET (then)
Figure 2: Internet (now)
The early Internet was created to give researchers an easier way to communicate: to share theories, discoveries, and opinions quickly. In the decades that followed the Internet’s birth, the practices of Web 1.0 and the fledgling directory and search engine technologies gave way to the improvements of Web 2.0.
While the arts and technology disciplines burgeoned under Web 1.0 and came to prominence with Web 2.0, science languished; the technology developed by science, for science, left its parent field behind. But as search engines, and by extension search engine optimization (SEO), have steadily improved at retrieving relevant and useful information, science is now cautiously moving into the “Science 2.0” realm.
Much as Web 1.0 helped industries such as music, hardware manufacturing, and software development recognize the Internet’s value, and Web 2.0 enabled them to monetize it, the Internet is now helping science realize the value lying within its own work. The ability to build a website that search engines can easily crawl and index, and to create content that SEO best practices mark as useful and relevant, has helped open-access scientific publishing take hold. This transparency and connectivity is driving science to challenge itself, forcing such traditionally opaque institutions as scientific journals and peer review to justify their continued existence.
Web 2.0 practices, especially the rise of blogging and blogging software, now make it easy for anyone to become a web publisher. By adopting SEO best practices, such as keyword targeting, cultivating trusted and relevant inbound links, writing title tags, description meta tags, and keywords meta tags, and keeping content fresh, researchers can self-publish. The scientists who grew up with Web 2.0 can now post ideas and data online and optimize that information so it can be shared quickly and universally. The widespread adoption of social media, and the interactivity of social networking, means that scientists who embrace Web 2.0 practices can debate and discuss on blogs, in newsgroups and forums, and via mailing lists, or simply publish research documents open-access on their own websites. This interactivity and transparency has led to a shift in the journal publication and peer review process: scientific research can now be web-published, indexed by search engines, and instantly referenced and downloaded by web searchers, and the rarefied practice of peer review has begun to give way to the immediacy of peer interest.
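The on-page elements mentioned above all live in a page’s HTML head. As a minimal sketch, with placeholder title, description, and keywords (none drawn from any real site), they look like this:

```html
<head>
  <!-- Title tag: the keyword-targeted headline shown in search results -->
  <title>Open-Access Research in Marine Biology | Example Lab</title>
  <!-- Description meta tag: a compelling, keyword-rich summary -->
  <meta name="description" content="Self-published, open-access research notes and data from the Example Lab's marine biology group.">
  <!-- Keywords meta tag: terms the page targets -->
  <meta name="keywords" content="open access, marine biology, science 2.0, peer review">
</head>
```

These are illustrative values only; the practice of keyword targeting lies in choosing terms that match what searchers actually type.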
These shifts are reflected across disciplines all over the Internet. Peer interest, or user interest, combined with the ability to easily search for and find information using search engines and directories, has produced virtual forests of information, forests that can be navigated only by helping the search engines determine which information is the most important. SEO practices such as writing compelling, keyword-rich content and using metadata help the search engines serve up the best possible results for a web search. Optimizing bookmarks, tags, multimedia, and other social media elements refines those results further. Sites such as Amazon.com, Digg.com, and Wikipedia.org have enabled critics, authors, and readers alike to generate content and to comment upon it. This sort of “sociability” has now finally seeped its way into science.
While the age-old customs of scientific journals and peer review won’t be so readily replaced, science embracing Web 2.0, and the now adolescent child of its creation, the Internet, can only lead to faster advancements and swifter changes. After all, it was the need for improved communication that helped bring about this modern age of Web 2.0, soon to be Web 3.0. As science grows more accustomed to the way its child has matured, future improvements will be considerable.

Over a billion pages of web content and documentation have yet to be crawled and indexed by the search engines, information sitting deep within the Internet as part of the deep web. As more and more scientists embrace the Internet and its various technologies, these deep pages, rich with content and profound with relevance, will come to the surface as science pushes the search engines to dive deeper and to search better, truly serving up the most relevant and the most useful results. And as the Internet and the search engines develop and improve thanks to science, so will SEO. Search engine optimization continues to grow and adjust to the ever-shifting Internet, learning along with the search engine algorithms and their robots and spiders. Now, decades after creating the Internet, science is back to push things along, and SEO will be there, ready to adapt to the changes.