Search engine bots, commonly known as "spiders", are the standard tools search engines use to find and index web pages across the internet. For a spider to find and index a web page, it must first discover that page, either through manual submission to the search engine or, more commonly, through a link from a site the spider has already indexed. A common problem with large-scale dynamic sites today is that their links are heavy with query strings. This makes it difficult for a spider to index deep within such sites, because spiders take specific precautions when dealing with these types of URLs.
An example of a non-search-friendly URL (illustrative): http://www.example.com/products.php?cat=12&id=345&sessid=9a8b7c
An example of a search-friendly URL (illustrative): http://www.example.com/products/12/345/
This white paper details the use of the .htaccess file on Apache-based web servers to turn dynamic, query-heavy URLs into clean, efficient, search-engine-friendly URL structures.
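As a preview of the technique, the sketch below shows how Apache's mod_rewrite module, configured through an .htaccess file, can map a clean URL onto a query-string URL. The script name `products.php` and the parameters `cat` and `id` are hypothetical placeholders, not names from this paper; this is a minimal illustration assuming mod_rewrite is enabled on the server.

```apache
# Enable the rewrite engine for this directory
RewriteEngine On

# Map a clean URL such as /products/12/345/
# to the underlying dynamic script products.php?cat=12&id=345
# (script and parameter names here are hypothetical examples)
RewriteRule ^products/([0-9]+)/([0-9]+)/?$ products.php?cat=$1&id=$2 [L]
```

With a rule like this, visitors and spiders see and follow only the clean URL, while the server transparently serves the dynamic page behind it.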