Posted by Cousin Stanley on 10/28/60 11:33
Greetings ....
I'm a dinosaur-age programmer but a php neophyte
trying to put together some server-side php code
that regulates the availability of a set of files
for downloading, provides the download, and logs
related data to a MySQL DB that will be used
to assist with subsequent regulation decisions ....
The readfile() function seems to be a convenient way
to provide the download ....
$num_bytes = readfile( $file_path ) ;
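For context, the way I picture using it for the actual download
looks roughly like the snippet below .... the content type,
the attachment headers, and the example file path are just
placeholders on my part, not finished code ....

    <?php

    // rough sketch of the download side ....
    // header values and the example path are placeholders

    $file_path = '/path/to/downloads/example_file.iso' ;

    if ( is_readable( $file_path ) )
    {
        header( 'Content-Type: application/octet-stream' ) ;
        header( 'Content-Disposition: attachment; filename="'
              . basename( $file_path ) . '"' ) ;
        header( 'Content-Length: ' . filesize( $file_path ) ) ;

        $num_bytes = readfile( $file_path ) ;
    }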
The php docs for the readfile() function say ....
Reads a file and writes it to the output buffer.
Does readfile slurp the entire file into memory
before beginning to write or does it read-a-little
and then write-a-little chunk-wise via internal io
buffers ?
In cases where the files to be downloaded
are fairly large, e.g. full CD sized or larger,
slurping the whole file before the write phase begins
seems potentially problematic,
e.g. high server load and a system that becomes swap-prone ....
I don't know whether this could really be a problem
or if I'm overly concerned with something that could
take care of itself via normal system io buffering
and individual task processing mechanisms ....
Would coding a function using a < fread|fwrite > loop
where only a chunk at a time is processed in each pass
work out any better for downloading large files in cases
where multiple users are simultaneously downloading ?
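Something along these lines is the sort of chunk-wise loop
I'm picturing .... the function name and the 8 KB chunk size
are just my own guesses, and echo stands in for the fwrite side
since the destination is the client rather than another file ....

    <?php

    // sketch of a chunk-wise alternative to readfile() ....
    // reads and sends the file a chunk at a time
    // instead of handling it all at once

    function send_file_chunked( $file_path , $chunk_size = 8192 )
    {
        $handle = fopen( $file_path , 'rb' ) ;

        if ( $handle === false )
        {
            return false ;
        }

        $num_bytes = 0 ;

        while ( ! feof( $handle ) )
        {
            $chunk = fread( $handle , $chunk_size ) ;

            if ( $chunk === false )
            {
                break ;
            }

            $num_bytes += strlen( $chunk ) ;

            echo $chunk ;
            flush() ;  // push each chunk to the client rather than buffering the whole file
        }

        fclose( $handle ) ;

        return $num_bytes ;
    }

    $num_bytes = send_file_chunked( $file_path ) ;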
--
Stanley C. Kitching
Human Being
Phoenix, Arizona