Posted by Nikita the Spider on 09/13/07 15:29
In article <m2642gk9nq.fsf@dot-app.org>,
Sherm Pendley <spamtrap@dot-app.org> wrote:
> dorayme <doraymeRidThis@optusnet.com.au> writes:
>
> > What is the simplest and most effective way of stopping robots
> > from searching particular HTML pages on a server?
>
> There are two popular "standards" (neither of which is a standard in
> the formal sense). One uses <meta ...> elements in your HTML, and the
> other uses separate robots.txt files. Both are described here:
>
> <http://www.robotstxt.org/>
This technique has worked for me, judged by the same measure of
success that dorayme described.
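To sketch both quickly (the /private/page.html path here is just a
made-up example), a robots.txt file at the site root might read:

  User-agent: *
  Disallow: /private/page.html

and the <meta> equivalent, placed in that page's <head>, would be:

  <meta name="robots" content="noindex, nofollow">

Note the difference: robots.txt asks robots not to fetch the page at
all, while the meta element asks those that do fetch it not to index
it or follow its links. Neither is enforced on the robot's end.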
> Both approaches depend on cooperative robots. For uncooperative robots,
> all you can do is shout "klaatu barada nikto" and hope for the best.
AFAICT all of the major search engines are well-behaved in this regard.
--
Philip
http://NikitaTheSpider.com/
Whole-site HTML validation, link checking and more