Posted by usenet on 08/01/07 21:47
I have a MySQL database that includes several thousand links to pages on
external sites. Naturally, over time some of those links go away, so I would
like to create a script that reads through the URL fields, accesses each link,
and performs an action based on each response code.
I can think of at least three ways to accomplish the header fetch -- http_head()
(from the pecl_http extension), get_headers(), and curl -- but I want to choose
the most efficient option, one that won't consume more server resources and
bandwidth than absolutely necessary. I imagine I'll also need to throttle the
number of requests per second, since this low-priority task should interfere as
little as possible with other connections in and out of the server.
Does anyone have a recommendation about which method is most efficient?
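And on the throttling side, I'm picturing nothing fancier than a sleep between
requests -- something like the loop below, where the connection details, the
links table, and the dead column are all placeholders for my actual setup, and
the half-second delay (about 2 requests/sec) is an arbitrary starting point:

<?php
// Sketch: walk the link table, HEAD-check each URL with check_url()
// from the sketch above, flag dead links, and pause between requests
// so this stays a low-priority background job.
mysql_connect('localhost', 'user', 'pass') or die(mysql_error());
mysql_select_db('mydb');

$result = mysql_query("SELECT id, url FROM links");
while ($row = mysql_fetch_assoc($result)) {
    $code = check_url($row['url']);
    if ($code == 0 || $code == 404 || $code == 410) {
        // connection failure, Not Found, or Gone: flag for review
        // rather than deleting outright
        mysql_query("UPDATE links SET dead = 1 WHERE id = " . (int)$row['id']);
    }
    usleep(500000);   // 0.5 s between requests -- arbitrary throttle
}
?>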
Thanks for any and all advice.