Posted by nospammmer on 09/15/05 17:59
Thanks for your reply.
I use a MySQL database that is properly optimized. All the indexes are
set correctly and used.
Most of the requests are simple queries that look up a unique ID and
return a single row. There are almost no joins, and none that are complex.
> - If 50 of the 250 queries/sec are the same selects that don't change,
> you could try some smart caching.
Unfortunately, most of the queries are different.
I can give an example:
A user table with around 4000 users. Users can consult other users'
information, so a lot of queries are made on single records.
I tested placing a few records in memory with the shm functions, and it
was, of course, blazingly fast.
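Roughly, the test looked like this (a simplified sketch only; the table
and column names, the segment size, and the old mysql_* calls are just
placeholders for my setup):

<?php
// Assumes an open MySQL connection (mysql_connect already called) and
// an integer primary key; names here are placeholders, not my real schema.
$shm = shm_attach(ftok(__FILE__, 'u'), 1024 * 1024, 0666); // 1 MB segment

function get_user($shm, $user_id)
{
    // Try the shared-memory cache first; shm_get_var() returns false
    // (with a warning, suppressed here) when the key is not present.
    $user = @shm_get_var($shm, (int)$user_id);
    if ($user !== false) {
        return $user;
    }

    // Cache miss: fetch the single record from MySQL by unique ID,
    // then store it in shared memory for later requests.
    $res  = mysql_query("SELECT * FROM users WHERE user_id = " . (int)$user_id);
    $user = mysql_fetch_assoc($res);
    if ($user !== false) {
        shm_put_var($shm, (int)$user_id, $user);
    }
    return $user;
}
?>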
But I'm wondering how the system would react with a higher volume of
data, and what the best way to do this would be.
Thanks