Posted by Roy Schestowitz on 06/07/05 18:59
SpaceGirl wrote:
> John Smith wrote:
>> How about methods to ignore webpages which have protections against
>> viewing HTML code, images, etc.? If you are searching for examples
>> and wish to view them, there is little sense in visiting all these
>> protected pages. Indeed, if one finds one can get by without such
>> pages, it is best just to skip them.
>>
>> Also, with so many intensely graphic and complex pages, how would
>> one set things up to skip webpages which are too demanding and take
>> too long to load?
>>
>> Regards,
>> John
>>
>
> All of that is relative, so how do you imagine you would go about it?
> How would you detect that a page is protected? How can you tell the
> size of a page until it has already been sent to your computer? These
> are all things that are not possible with regular HTML.
Depending on how you navigate through these pages, you might be able to
adopt some tricks. To rule out heavy pages, use Google's (or another search
engine's) estimate of page size in kilobytes and simply skip pages that
appear overloaded.
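If you want to check a page's weight yourself, an HTTP HEAD request will
often get you the Content-Length header without transferring the body. A
rough, untested sketch in Python (the 200 KB cut-off and the function name
are my own inventions for illustration):

import urllib.request

SIZE_LIMIT = 200 * 1024  # assumed cut-off in bytes; pick your own

def is_too_heavy(url, limit=SIZE_LIMIT):
    # Ask for the headers only; a server that honours HEAD reports
    # the body size in Content-Length without sending the body.
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            length = resp.headers.get("Content-Length")
    except OSError:
        return False  # unreachable or HEAD refused; let it through
    # Servers are not obliged to send Content-Length; treat a missing
    # header as "unknown" rather than "too big".
    return length is not None and int(length) > limit

Bear in mind that not every server answers HEAD requests honestly, so this
only filters the cooperative ones.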
Given the number of Firefox extensions available, I am sure there are tools
out there that can help you achieve what you want, but they will be
bandwidth-demanding.
One thing I can imagine is crawling the site currently being viewed and
binding previews or statistics to its links, as in the sketch below. The
webmaster(s) will dislike it, your ISP will dislike it, and your computer
will use up its entire network and computational capacity in the process.
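To make that concrete, here is a very rough sketch (again untested and
purely illustrative; every name in it is made up): fetch the current page,
pull out its links, and attach a server-reported size to each one.

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    # Accumulates the href of every <a> tag encountered.
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def link_sizes(page_url):
    # Download the page once, then issue a HEAD request per link to
    # collect size estimates (None where the server reports nothing).
    with urllib.request.urlopen(page_url, timeout=10) as resp:
        parser = LinkCollector()
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    sizes = {}
    for href in parser.links:
        target = urljoin(page_url, href)
        req = urllib.request.Request(target, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                cl = resp.headers.get("Content-Length")
                sizes[target] = int(cl) if cl else None
        except OSError:
            sizes[target] = None
    return sizes

Note that even this modest version fires one HEAD request per link, which
is exactly the extra traffic I warned about.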
In summary, either you crawl the site yourself or you use the service of a
crawler that has done all the work for you and gives you some useful figures.
Roy
--
Roy S. Schestowitz
http://Schestowitz.com