
First, search engines crawl the Web to see what is there.
This task is performed by a piece of software called a crawler
or a spider (Googlebot, in Google's case).
Spiders follow links from one page to another and index everything
they find on their way. Given the number of pages on the Web
(over 20 billion), it is impossible for a spider to visit a site
daily just to see if a new page has appeared or an existing page
has been modified; crawlers may not end up visiting your site for a
month or two.
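
To make the idea concrete, here is a minimal sketch of how a spider follows links, written with only the Python standard library. It is illustrative, not a real search-engine crawler; the start URL is a placeholder, and the page-indexing step is left as a comment.

```python
# A minimal sketch of link-following crawling (standard library only).
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Breadth-first crawl: fetch a page, 'index' it, queue its links."""
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except OSError:
            continue  # unreachable pages are simply skipped
        # ... a real spider would index the page content here ...
        parser = LinkExtractor()
        parser.feed(html)
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen

crawl("https://example.com")  # placeholder domain, not a real target
```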
What you can do is check what a crawler sees on your site. As
already mentioned, crawlers are not humans: they do not see
images, Flash movies, JavaScript, frames, or password-protected pages
and directories. So if you have tons of these on your site, you'd
better run the Spider Simulator to see whether these goodies are viewable by the spider. If
they are not viewable, they will not be spidered, not indexed, not
processed, and so on; in a word, they will be non-existent for search
engines.
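
You can approximate this check yourself: a spider essentially fetches the raw HTML (no JavaScript execution, no images, no Flash) and indexes the text it finds there. The sketch below, again using only the Python standard library and the placeholder domain example.com, prints roughly the text a spider could index from a page.

```python
# Rough approximation of what a spider "sees": raw HTML text only.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextExtractor(HTMLParser):
    """Keep visible text; drop scripts, styles, and all markup."""
    def __init__(self):
        super().__init__()
        self.skip = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True  # script/style contents are not page text

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
parser = TextExtractor()
parser.feed(html)
print("\n".join(parser.chunks))  # roughly the text a spider can index
```

Anything your page shows only through JavaScript, Flash, or images will simply not appear in this output, which is the point of running such a check.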