 Posted by "Michael Sims" on 06/13/76 11:23 
Martin van den Berg wrote: 
> I have this piece of php-code which inserts data into a database. 
> Before inserting it must verify if the data is unique. The php code 
> looks something like: 
> 
> $query = "SELECT id FROM mytable WHERE bla LIKE " . $x ."; 
> $rows = execute( $query ) 
> if ( $rows == 0 ) 
> { 
>    /* some more processing */ 
>   $query = "INSERT INTO mytable ...... etc etc 
>   execute( $query ) 
> } 
> 
> Now here is the problem: when the user enters the page and directly 
> refreshes, the record is inserted twice.... Is it possible that both 
> requests are processed simultaneously by the server (Apache on Linux)? 
> And can I add something like a critical section or semaphore to 
> overcome this problem? 
 
The problem with the approach above is that a race condition exists between the 
check for the existence of the row in question, and the insertion of that row.  It's 
possible for the two requests to come so close together that both of them execute 
their SELECTs before either does its INSERT.  It's not very likely in the simplest 
cases, but as the amount of traffic (or the number of users you have who like to 
quickly click refresh) increases, there is a greater chance that this race condition 
will cause a problem. 
 
In my opinion it's best to let your RDBMS handle this concurrency problem, since 
it's best equipped to do so.  Ideally you would have some sort of constraint that 
prevents duplicate rows in your table, such as a primary key or a unique index. 
Inserting a duplicate row then results in an error from the database, which you can 
trap in your PHP code (using functions like mysql_errno() and mysql_error()) and 
handle appropriately (for example, by displaying a friendly error message, or 
simply ignoring the failed insert). 
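For example (a rough, untested sketch, assuming MySQL and the placeholder table and 
column names from your post): add a unique index once, then just attempt the insert 
and check for the duplicate-key error (error code 1062, ER_DUP_ENTRY, in MySQL): 
 
  -- run once; assumes "bla" is the column that must be unique 
  ALTER TABLE mytable ADD UNIQUE INDEX idx_bla (bla); 
 
  <?php 
  /* $x is assumed to come from user input somewhere above */ 
  $x = mysql_real_escape_string( $x ); 
 
  $query = "INSERT INTO mytable (bla) VALUES ('" . $x . "')"; 
  if ( !mysql_query( $query ) ) 
  { 
      if ( mysql_errno() == 1062 )   /* duplicate key: the row is already there */ 
      { 
          /* show a friendly message, or simply ignore it */ 
      } 
      else 
      { 
          die( "Database error: " . mysql_error() ); 
      } 
  } 
  ?> 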
 
Another approach would be to start a transaction with a high isolation level before 
executing the select, but to me this is less desirable because, depending on your 
database system, it may cause contention problems if the entire table has to be 
locked.  Simply attempting the insert and catching the error should be much lighter, 
assuming it's possible to create the appropriate constraint in your database. 
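For reference, the transaction-based version would look roughly like this (again an 
untested sketch; it assumes an InnoDB table, since MyISAM tables don't support 
transactions, and that $x has already been escaped as above): 
 
  <?php 
  mysql_query( "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE" ); 
  mysql_query( "BEGIN" ); 
 
  $result = mysql_query( "SELECT id FROM mytable WHERE bla = '" . $x . "'" ); 
  if ( mysql_num_rows( $result ) == 0 ) 
  { 
      mysql_query( "INSERT INTO mytable (bla) VALUES ('" . $x . "')" ); 
  } 
 
  mysql_query( "COMMIT" ); 
  ?> 
 
Under SERIALIZABLE the SELECT takes locks that keep a second request from inserting 
until the first one commits, but without an index on the column those locks can 
cover much of the table, and concurrent requests can even deadlock, which is another 
reason I prefer the constraint approach above. 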
 
HTH
 
  