Posted by R. Rajesh Jeba Anbiah on 01/19/06 08:18
Chung Leong wrote:
> R. Rajesh Jeba Anbiah wrote:
> >
> > As many people have pointed out, *never* dump the table data into
> > array. Fetch the record and immediately get that processed. If you have
> > any *valid* reason, buffer the data into a very very small (known)
> > sized array. If using MySQL, use the LIMIT if possible.
>
> The reasoning being?
>
> In my opinion conserving memory for the sake of conserving memory is
> just silly. Hardware resources are there to be used. There's nothing
> wrong with a script using a few megs of extra memory, as it'll release
> them a short time later.
Since I know that Chung Leong is heading an anti-performance campaign,
I'm not going to fight with him ;-)
As you already know, by dumping a huge set of buffered records into a
PHP array, you're just duplicating the buffer. Also, as I mentioned
earlier, one can buffer the records in PHP provided the record set is
very, very small (and there is a valid reason to do so).
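To make the point concrete, here is a minimal sketch of the two approaches. It uses PDO with an in-memory SQLite database purely for illustration (the table and column names are made up); with MySQL the pattern is identical.

```php
<?php
// Illustrative only: in-memory SQLite via PDO; hypothetical "users" table.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
$db->exec("INSERT INTO users (name) VALUES ('alice'), ('bob'), ('carol')");

// Wasteful: the driver's result buffer is duplicated into a PHP array.
$all = $db->query('SELECT id, name FROM users')->fetchAll(PDO::FETCH_ASSOC);

// Better: fetch each record and process it immediately, one row at a time.
$stmt = $db->query('SELECT id, name FROM users');
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    echo $row['id'] . ': ' . $row['name'] . "\n";
}
```

The second loop holds only one row in PHP memory at a time, instead of the whole result set twice (once in the driver, once in the array).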
FWIW, I have come across a PHP application that was manufactured in
another corner of the world, in which the programmer had buffered the
whole user table into an array keyed by id and then picked the record
like $record[$_GET['id']]. I'm not sure the programmer was aware of
the WHERE clause.
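What that programmer should have done is let the database pick the row with a WHERE clause. A hedged sketch (again using in-memory SQLite via PDO for illustration, with made-up table/column names):

```php
<?php
// Illustrative only: in-memory SQLite via PDO; hypothetical "users" table.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
$db->exec("INSERT INTO users (id, name) VALUES (1, 'alice'), (2, 'bob')");

// In the original application this would come from $_GET['id'].
$id = 2;

// One parameterized query fetches only the requested record,
// instead of buffering the entire table into $record[$_GET['id']].
$stmt = $db->prepare('SELECT id, name FROM users WHERE id = ?');
$stmt->execute(array($id));
$record = $stmt->fetch(PDO::FETCH_ASSOC);
echo $record['name'] . "\n"; // prints "bob"
```

Binding the id as a parameter also avoids putting raw $_GET input into the SQL string.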
Also, try benchmarking the outcome of dumping the whole table (or at
least a huge record set) into a PHP array. If you use Windows/Apache
like me, you'd immediately see the result.
<OT>It's really nice to see newcomers like Iván Sánchez Ortega,
and the c.l.php discussions are now getting hotter :-)</OT>
--
<?php echo 'Just another PHP saint'; ?>
Email: rrjanbiah-at-Y!com Blog: http://rajeshanbiah.blogspot.com/