Posted by Daz on 10/24/06 11:18
Daz wrote:
> Hello all,
>
> my question is more regarding advice on a script design. I have about
> 3600 entries in my database. The user submits a list, which is then
> checked against those in the database to confirm whether or not they
> already own a particular item. If they do, then it's not added to the
> user table, whereas if they don't, it _is_ added to the user table.
> However, if the item is not in the database at all, the user is
> advised of this. So basically, I need to figure out a quick way to
> compare the user's submitted items (probably 50 to 700 items) with
> those in an array that I have created using the items from the database.
>
> I can think of two ways to achieve this. Firstly, I can iterate through
> all of the user's items and use in_array() to see if they are in the
> database array of items. The other method is very similar, but rather
> than have an array of database items, I can put them all into a single
> comma-separated string and iterate through the array of user items,
> using a regex to check whether each item is in the database. There may
> be a more efficient way to achieve the results I am looking for, but I
> can't think of anything else.
>
> I would appreciate it if anyone could tell me which of the two is
> likely to be faster, or whether there is a better way altogether. I
> need to find the quickest way, as I don't want to overwork the server
> or have the processing cause a server timeout.
>
> All the best.
>
> Daz.
I think that there is a lot of confusion here. Some people are not
getting the right idea, probably because my attempt to explain failed.
=======================================
Here is a small snippet from my main database reference table. They are
virtual books, and each book has an id. The last column is not needed
for this operation:
+---------+--------------------------------+------------+
| book_id | book_name                      | is_retired |
+---------+--------------------------------+------------+
|       2 | 13 Banlow Street               |          1 |
|     299 | Baseball Fans                  |          1 |
|     471 | Cherry Blossoms                |          0 |
|     665 | Down by the River              |          1 |
|    1181 | I will always Remember         |          0 |
|    1339 | Kimbler: The Beginning         |          0 |
|    1433 | Let the Game Begin             |          0 |
And so on... There are just over 3600 entries at present.
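Just for reference, this is roughly how I build the array of database
items that I mentioned in my original post. It is only a sketch: the
table name ('books'), the variable names and the connection handle are
placeholders, and for the sake of the example I am treating the
submitted items as titles, so the array is keyed by book_name:

<?php
// Build a lookup array of every book in the reference table.
// 'books' is just a placeholder for the real table name, and
// $db is an already-open MySQL connection (connection code omitted).
$result = mysql_query("SELECT book_id, book_name FROM books", $db);
if (!$result) {
    die('Query failed: ' . mysql_error($db));
}

$db_books = array();
while ($row = mysql_fetch_assoc($result)) {
    // book_name => book_id, so titles can be checked quickly later on.
    $db_books[$row['book_name']] = (int) $row['book_id'];
}
?>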
=======================================
Here is a small extract from the user books table, which maps each
user to the books they own.
+-----+---------+
| uid | book_id |
+-----+---------+
|   3 |    3194 |
|   2 |    2947 |
|   3 |    2091 |
|   3 |     307 |
|   3 |    1434 |
|   4 |    3278 |
|   3 |    1288 |
|   2 |    3239 |
|   3 |    2467 |
|   1 |     991 |
+-----+---------+
Remember: neither of these columns contains unique values.
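To go with that, this is more or less how I pull out the books a given
user already owns. Again, only a sketch: 'user_books' is a placeholder
name for the table shown above, and $db is the same connection as
before:

<?php
// Fetch the id of every book this user already owns, keyed by id
// so that isset() can be used for the comparison later.
$uid = 3; // example user
$result = mysql_query("SELECT book_id FROM user_books WHERE uid = " . (int) $uid, $db);

$owned = array();
while ($row = mysql_fetch_assoc($result)) {
    $owned[(int) $row['book_id']] = true;
}
?>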
========================================
Finally, here is an example of my users table:
+---------+-------------+
| user_id | username    |
+---------+-------------+
|       1 | Anonymous   |
|       2 | user_1      |
|       3 | user_2      |
+---------+-------------+
========================================
As you can see, I didn't want to add more than 3600 columns to the
user books table, as this would mean a lot of NULLs, which isn't very
efficient and also brings a few more downsides with it.
The only other option I have is to create a table for each book. That
would mean more than 3600 tables, and I would have to search through
3600 tables to get the results I am looking for. This is totally
inefficient for what I need to achieve; however, I _would_ be able to
run an index on the user ID. :P
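For the comparison step itself, this is roughly what I had in mind. It
is only a sketch, and instead of in_array() or a regex it uses the two
arrays built above as keyed lookups with isset(), which I suspect will
be quicker than either of my original two ideas, although I haven't
timed anything yet:

<?php
// $submitted - array of book titles the user posted (50 to 700 of them)
// $db_books  - book_name => book_id, built from the reference table
// $owned     - book_id => true, built from the user books table

$to_add  = array(); // ids to insert into the user books table
$unknown = array(); // titles that aren't in the reference table at all

foreach ($submitted as $title) {
    if (!isset($db_books[$title])) {
        // Not in the database at all - advise the user about these.
        $unknown[] = $title;
        continue;
    }

    $book_id = $db_books[$title];
    if (!isset($owned[$book_id])) {
        // In the database but not owned yet, so queue it for adding.
        $to_add[] = $book_id;
    }
    // Otherwise the user already owns it and nothing needs doing.
}
?>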
I hope that my explanation of what I am trying to do, and why I have
done what I have so far, is adequate. It has taken me a long time to
get the database like this (as this was one of my first ever
databases), and it has been reborn several times since I discovered how
database normalization works, along with its benefits.
Best wishes.
Daz