~terry
Sat, Apr 6, 2002 (08:22)
seed
A robot is a program that automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.
Note that "recursive" here doesn't limit the definition to any specific traversal algorithm; even if a robot applies some heuristic to the selection and order of documents to visit and spaces out requests over a long period of time, it is still a robot.
Normal Web browsers are not robots, because they are operated by a human and don't automatically retrieve referenced documents (other than inline images).
Web robots are sometimes referred to as Web Wanderers, Web Crawlers, or Spiders. These names are a bit misleading, as they give the impression that the software itself moves between sites like a virus; this is not the case. A robot simply visits sites by requesting documents from them.
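To make the "retrieve a document, then retrieve what it references" idea concrete, here's a minimal sketch of that traversal. It's not from any real robot: it crawls a hypothetical in-memory set of pages (the `PAGES` dict stands in for HTTP fetches) and uses a naive regex for link extraction, just to show the queue-and-visited-set loop every crawler is built around.

```python
import re
from collections import deque

# Hypothetical in-memory "web" for illustration: URL -> HTML body.
# A real robot would fetch these over HTTP instead.
PAGES = {
    "/index.html": '<a href="/a.html">A</a> <a href="/b.html">B</a>',
    "/a.html": '<a href="/b.html">B</a> <a href="/index.html">home</a>',
    "/b.html": "no links here",
}

def extract_links(html):
    """Pull href values out of anchor tags (naive regex, fine for a sketch)."""
    return re.findall(r'href="([^"]+)"', html)

def crawl(start):
    """Breadth-first traversal: fetch a page, queue every unseen link on it."""
    seen = {start}          # the visited set is what stops infinite loops
    queue = deque([start])
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)
        for link in extract_links(PAGES.get(url, "")):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl("/index.html"))  # ['/index.html', '/a.html', '/b.html']
```

Note the `seen` set: since pages link back to each other (here `/a.html` links back to `/index.html`), a robot without one would loop forever, which is why "recursively retrieving all documents" in practice always means "all documents not yet visited."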
~terry
Sat, Apr 6, 2002 (08:23)
#1
What other kinds of robots are there?
Robots can be used for a number of purposes:
Indexing
HTML validation
Link validation
"What's New" monitoring
Mirroring
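As an example of one of those purposes, link validation is probably the simplest: the robot collects every link on each page and flags the ones whose targets don't exist. A minimal sketch, again using a hypothetical in-memory page set in place of live HTTP requests (a real checker would issue HEAD requests and look at status codes):

```python
import re

# Hypothetical site for illustration: URL -> HTML body.
PAGES = {
    "/index.html": '<a href="/about.html">about</a> <a href="/gone.html">gone</a>',
    "/about.html": "plain text, no links",
}

def broken_links(pages):
    """Return (page, link) pairs where the link target is not a known page."""
    bad = []
    for url, html in pages.items():
        for link in re.findall(r'href="([^"]+)"', html):
            if link not in pages:
                bad.append((url, link))
    return bad

print(broken_links(PAGES))  # [('/index.html', '/gone.html')]
```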
~wolf
Mon, Apr 15, 2002 (22:15)
#2
i don't think i get it....i'm sure that most of us have found so many pieces of info just by doing a simple search (info like bulletin boards that mentioned the subject, PDF documents and the like)
~terry
Tue, Apr 16, 2002 (08:33)
#3
This is something that can help stock our site with more content.
It's not only about having more; it's about how we organize what we have. I did a site redesign the other day and I'm trying to make our content more accessible.