Posted by Adrienne Boswell on 09/12/07 14:56
Gazing into my crystal ball I observed dorayme
<doraymeRidThis@optusnet.com.au> writing in news:doraymeRidThis-
705494.07415612092007@news-vip.optusnet.com.au:
> A website is on a server. Just one or two of the pages are not
> for public consumption. They are not top secret and no big harm
> would be done if it was not 100% possible, but it would be best
> if they did not come up in search engines. (A sort of provision
> by a company for making some files available to those who have
> the address. Company does not want password protection; but I am
> considering persuading them).
>
> What is the simplest and most effective way of stopping robots
> from indexing particular HTML pages on a server? Am looking for an
> actual example and clear instructions. Getting confused by
> looking at http://www.searchtools.com/index.html though doubtless
> I will get less confused after much study.
>
1. Robots exclusion via robots.txt; you can name a particular file, 
e.g. backoffice.asp (see the first example below)
2. The meta robots route (in my experience, not quite as reliable as 
the first; see the second example below)
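For (1), put a file named robots.txt in the site's document root. A 
minimal sketch, assuming the page to hide is backoffice.asp sitting at 
the root (adjust the path to wherever the file actually lives):

  # Ask all crawlers to skip this one file
  User-agent: *
  Disallow: /backoffice.asp

Bear in mind robots.txt is purely advisory: well-behaved crawlers 
honor it, but the file itself is public and lists the very paths you 
are trying to keep quiet.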
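For (2), add a robots meta element inside the <head> of each page you 
want kept out of the indexes:

  <!-- Ask crawlers not to index this page or follow its links -->
  <meta name="robots" content="noindex, nofollow">

A crawler has to fetch the page before it can see this, which is one 
reason it tends to be less reliable than the robots.txt approach.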
--
Adrienne Boswell at Home
Arbpen Web Site Design Services
http://www.cavalcade-of-coding.info
Please respond to the group so others can share