Posted by Rik on 10/19/06 06:58
Kimmo Laine wrote:
> "monomaniac21" <mcyi2mr3@googlemail.com> wrote in message
> news:1161195237.493677.322990@e3g2000cwe.googlegroups.com...
>> Hi all,
>>
>> I have a script that retrieves rows from a single table. The rows are
>> related to each other and are retrieved by a series of while loops
>> nested within while loops. Because each row contains a text field,
>> they are fairly large. The net result is that when 60 or so results
>> are retrieved the page size is 400KB, which takes too long to load.
>> Is there a way of shortening this? Freeing up the memory, say,
>> because what is actually displayed is not that much; I think it's
>> the use of multiple nested loops (about 10) that does it.
>
> Clearly the code needs to be optimized. Depending on how bad the design
> is, you should start by looking at the query, and maybe at the database
> structure. It just doesn't sound too smart if it takes ten nested
> while loops to pull out some simple data... But without seeing the
> actual code it's very difficult to say anything more specific.
A self-referential table using the adjacency list model often takes either a
lot of while loops or a lot of joins. It might be useful to switch to the
nested set model in that case:
http://dev.mysql.com/tech-resources/articles/hierarchical-data.html
Neither explains a 400KB HTML page of course; if the actual data is small,
something else is definitely wrong.
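
Just a rough sketch of what that could look like, assuming a hypothetical
"category" table with nested set columns lft and rgt plus a name column (as
in the article above), and a mysqli connection: the whole tree comes back
from a single query, in order and with a depth value, instead of one query
per level inside nested loops.

<?php
// Sketch only: table/column names are made up for illustration.
$db = new mysqli('localhost', 'user', 'pass', 'test');
if ($db->connect_error) {
    die('Connect failed: ' . $db->connect_error);
}

// One self-join on the nested set boundaries returns every node
// together with its depth, already sorted depth-first by lft.
$sql = "SELECT node.name, COUNT(parent.name) - 1 AS depth
        FROM category AS node
        JOIN category AS parent
          ON node.lft BETWEEN parent.lft AND parent.rgt
        GROUP BY node.lft, node.name
        ORDER BY node.lft";

$result = $db->query($sql);
while ($row = $result->fetch_assoc()) {
    // Output only what the page actually shows; the large text
    // column stays out of the SELECT unless it is really needed.
    echo str_repeat('&nbsp;&nbsp;', (int)$row['depth'])
        . htmlspecialchars($row['name']) . "<br />\n";
}
$result->free();
$db->close();
?>

Leaving the big text column out of the SELECT (or trimming it in SQL) is
what keeps both the memory use and the generated page small.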
Grtz,
--
Rik Wasmus