Posted by Daz on 10/24/06 19:58
Daz wrote:
> Chung Leong wrote:
> > Then you end up with a race condition, I think even if a transaction is
> > used. If there are two threads trying to insert the same data running
> > simultaneously, a transaction would not block the second thread from
> > fetching the same result set as the first. Thus both threads could
> > think that a particular record doesn't exist and both would insert it.
> > You would need to lock the table for the duration of the entire
> > operation, which is pretty lousy.
>
> I am going to need to create a function that checks for duplicate
> entries, I think. However, the only time there would be any chance
> of a race condition would be if the same user was logged on twice and
> carried out the same action simultaneously. Bearing in mind that there
> is only an approximately 0.1 second window (as this is how long the
> transaction takes), I think the chance of it happening is very slim. A user
> shouldn't need to run the same thing twice simultaneously; however, I
> think I may start looking into methods that will log the user out of
> one account if they log in a second time with another.
>
> Basically, I acknowledge there is a very slim chance of getting
> duplicate entries, but how else could I get around this?
>
> All the best.
>
> Daz.
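For what it's worth, one way to sidestep the race entirely is to let
MySQL enforce uniqueness itself. This is only a rough sketch (the table
and column names here are made up), but the idea is a unique key plus
INSERT IGNORE, so the second of two simultaneous inserts simply does
nothing instead of creating a duplicate:

    -- hypothetical table; the unique key is what prevents duplicates
    CREATE TABLE user_actions (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        user_id INT UNSIGNED NOT NULL,
        action VARCHAR(64) NOT NULL,
        created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (id),
        UNIQUE KEY uniq_user_action (user_id, action)
    );

    -- if a row with the same user_id/action already exists, this insert
    -- is silently skipped rather than raising a duplicate-key error
    INSERT IGNORE INTO user_actions (user_id, action)
    VALUES (42, 'do_something');
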
Here is another possible idea. Why not have every script add the user's
ID to a separate table whenever a script that uses the db is executed?
Once the script has finished, the user ID is removed. This table can be
indexed quite effectively, and the script would refuse to run if the
user's ID is already in it. There could be a potential problem with any
transaction that never finishes, but the table could also hold a
timestamp, and any rows more than, say, 30 seconds old could be removed
or updated, as this is the timeout limit for the server.
The idea sounds a little rough, but I personally feel it could work.
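Roughly, such a lock table might look something like this (the names are
only illustrative, and the 30 second cutoff assumes the server timeout
mentioned above):

    -- one row per user currently running a db script
    CREATE TABLE script_locks (
        user_id INT UNSIGNED NOT NULL,
        locked_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (user_id)
    );

    -- taking the lock: fails with a duplicate-key error if the user
    -- already has a script running
    INSERT INTO script_locks (user_id) VALUES (42);

    -- releasing the lock once the script has finished
    DELETE FROM script_locks WHERE user_id = 42;

    -- cleanup for scripts that never finished: drop anything older
    -- than the 30 second server timeout
    DELETE FROM script_locks
    WHERE locked_at < NOW() - INTERVAL 30 SECOND;
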