Posted by troy on 03/01/07 18:59
Thanks for the response; see inline comments:
>
> I take it that the other product was using another data store than
> SQL Server?
Yes, unfortunately :(
> However, the only reasonable approach is optimistic locking.
Yes, this is the best method for sure; it's just that I need to emulate
the old system as accurately as possible. The legacy software does not
expect concurrency errors on updates.
> I can think of a third way: first read all keys into local array. Then
> iterate over the array, and read one row at a time as 1) Start transaction
> with REPEATABLE READ, 2) read row 3) update and 4) commit. But this
> will be slow as I don't know what.
Fortunately the result sets that require a lock on each row will
probably be fairly small. Larger result sets are typically reports,
which don't require locks. I can see I am going to have to write a
performance test to measure the actual speed of reading, say, 500 rows
one at a time with a lock on each.
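For reference, the row-at-a-time approach described above might look
something like this in T-SQL. The table and column names here are made
up, and the UPDLOCK hint is one possible refinement: it takes an update
lock on the initial read, so a concurrent writer blocks up front instead
of the two sessions deadlocking when their shared locks try to upgrade
at UPDATE time.

```sql
-- Sketch only: Orders, OrderID, and Qty are hypothetical names.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

-- 2) Read the row. WITH (UPDLOCK) holds an update lock until commit,
--    emulating the legacy pessimistic row lock.
SELECT Qty
FROM   Orders WITH (UPDLOCK)
WHERE  OrderID = @id;

-- ... legacy application logic computes @newQty ...

-- 3) Update the same row under the lock taken above.
UPDATE Orders
SET    Qty = @newQty
WHERE  OrderID = @id;

-- 4) Commit releases the lock.
COMMIT TRANSACTION;
```

A performance test would just loop this pattern over the ~500 keys read
into the local array beforehand, timing the total elapsed round trips.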