Posted by dorayme on 09/12/07 00:58
In article <m2642gk9nq.fsf@dot-app.org>,
Sherm Pendley <spamtrap@dot-app.org> wrote:
> dorayme <doraymeRidThis@optusnet.com.au> writes:
>
> > What is the simplest and most effective way of stopping robots
> > searching particular HTML pages on a server.
>
> There are two popular "standards" (neither of which is a standard in
> the formal sense). One uses <meta ...> elements in your HTML, and the
> other uses separate robots.txt files. Both are described here:
>
> <http://www.robotstxt.org/>
>
> Both approaches depend on cooperative robots. For uncooperative robots,
> all you can do is shout "klaatu barada nikto" and hope for the best.
>
Thanks. If I get any reports of the pages concerned being found
now that I have gone the meta route, I will look further into the
robots.txt approach.
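For anyone following along, the two routes Sherm mentions look roughly like this (the "/private/" path is just a placeholder, not from my site):

```
<!-- Meta route: goes in the <head> of each page to keep out of indexes -->
<meta name="robots" content="noindex, nofollow">

# robots.txt route: a plain-text file named robots.txt at the site root
# ("/private/" is a hypothetical directory for illustration)
User-agent: *
Disallow: /private/
```

Either way, as Sherm says, it only works on robots polite enough to ask.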
(Actually, Sherm, I started reading about this before posting my
question, got restless and slightly confused, and thought: I know
what to do, I'll pop my head above the trench line a mo and see if
something comes back from alt.html to make this thing stop buzzing
around my brain. I know, it was a bit reckless. But who dares...
you know... <g>
I also have a search engine on the particular site concerned, and
it offers various masking procedures that I have since looked into.)
--
dorayme