Search engine spiders, sometimes called crawlers, are programs that Internet search engines use to collect information about Web sites and individual Web pages. The search engines need this information; without it, they would not know which pages to display in response to a search query, or with what priority.
Search engine spiders crawl through the Internet, building queues of Web sites to investigate further. When a spider reaches a specific Web site, it reads through all of the text, hyperlinks, meta tags (specially formatted keywords inserted into the Web page in a way designed for the spider to find and use), and code. From this information, the spider builds a profile of the page for the search engine. The spider then gathers additional information by following the hyperlinks on the Web page, which gives it a better collection of data about those pages. This is why having links on your Web page, and, even better, having other Web pages link to yours, is so useful in getting your Web site found by the search engines.
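The process described above, reading a page's meta tags and links, profiling it, and queuing the discovered links for later visits, can be sketched in a few lines of Python. This is only an illustration, not how any real search engine works: the `pages` dictionary stands in for actual HTTP fetches, and the function and class names (`PageProfiler`, `crawl`) are invented for this example.

```python
from html.parser import HTMLParser
from collections import deque

class PageProfiler(HTMLParser):
    """Collects what a spider reads from one page: meta tags and hyperlinks."""
    def __init__(self):
        super().__init__()
        self.meta = {}    # meta tag name -> content
        self.links = []   # hyperlink targets found on the page

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])

def profile(html):
    """Parse one page and return its meta tags and outgoing links."""
    parser = PageProfiler()
    parser.feed(html)
    return parser.meta, parser.links

def crawl(pages, start):
    """Breadth-first crawl. `pages` maps URL -> HTML, standing in for
    real network fetches. Returns a profile (the meta tags) per page."""
    queue = deque([start])
    seen = set()
    profiles = {}
    while queue:
        url = queue.popleft()
        if url in seen or url not in pages:
            continue
        seen.add(url)
        meta, links = profile(pages[url])
        profiles[url] = meta
        # Newly discovered links join the queue for later investigation.
        queue.extend(links)
    return profiles
```

Note how the crawl starts from a single page but ends up profiling every page reachable through links: a page that nothing links to is never discovered, which is the point made above about inbound links.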