Posted by Cleverbum on 10/12/06 11:48
I'm trying to write a script which downloads information from a number
of websites, analyses it, and shows some results.
The problem I'm having is that some sites seem to work perfectly while
others don't. I know it's to do with the complexity of the site, but
I've no idea how to fix it in my code.
At the moment I am just using file_get_contents() to get all of the
relevant pages, but when I use this on http://www.dontstayin.com it
just doesn't work!
When I go there with my browser and view the source it's all lovely
HTML, but when I try to grab it, file_get_contents() returns
bool(false).
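
Stripped down to the relevant bit, what I'm doing looks roughly like
this (the variable name is just for the example):

<?php
// Works fine for most of the sites I'm looking at, but not this one.
$html = file_get_contents('http://www.dontstayin.com');
var_dump($html); // prints bool(false) instead of the page's HTML
?>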
What am I doing wrong, and how can I better emulate an actual
web browser?
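
The only idea I've had so far is sending a browser-style User-Agent
header via a stream context, roughly like the sketch below (the
User-Agent string is just a placeholder, not any real browser's), but I
don't know whether that's even the right direction:

<?php
// Guess: maybe the site refuses requests that don't look like a browser?
$context = stream_context_create(array(
    'http' => array(
        'method' => 'GET',
        'header' => "User-Agent: Mozilla/5.0 (compatible; MyScraper/0.1)\r\n",
    ),
));

// file_get_contents() accepts a stream context as its third argument (PHP 5).
$html = file_get_contents('http://www.dontstayin.com', false, $context);
var_dump($html);
?>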