Posted by Erland Sommarskog on 02/27/07 22:35
(troy@makaro.com) writes:
> Great information on MARS and the ExecuteReader. What I am trying to
> do is to emulate a legacy product's data access method. The reason I
> am doing this is there is just way too much code to convert into
> proper sql. I'm talking at least 1 million lines of code. I have
> already written a conversion program to convert the code to VB.NET and
> now I must right a dll assembly to emulate the legacy data access.
I can't escape asking: what's the point? You get the legacy product
converted to .Net, but it will still have the architecture of the
old product, and the risk is that you end up with a compromise that
has the worst from both.
> Here is an example of what I have to emulate:
>
> get #3, key #1 GE "20060101"
> while invoiceDate < "20070101"
> ! The current row is now locked!
> ! make changes
> update #3 ! updates the currently locked row and now unlocked.
> get #3 ! read the next row in
> next
>
> The converted code looks something like:
>
> ' note: 3 = the registered table
> SQL.getGreaterEqual(3, "20060101") ' notice no upper bounds
> while invoiceDate < "20070101"
> ' The current row is now locked!
> ' make changes
> SQL.update(3) ' updates the currently locked row and now unlocked.
> SQL.getNext(3) ' read the next row in
> next
>
> My question now is:
> How do I lock one row at a time???
I take it that the legacy product was using a data store other than
SQL Server?
There are a couple of ways to do this, but it is important to understand
that locking a row is not something you do actively in SQL Server.
That is left to the lock manager.
And it's even less possible in ADO .Net, since ADO .Net uses
client-side cursors only. That is, data is read from SQL Server and
buffered. Something like ExecuteReader may not read all million rows
at once, but it will not fetch one row at a time either.
One way is to wrap the entire reader in a transaction with the isolation
level REPEATABLE READ. But then rows will remain locked until you
commit.
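
Here is what that could look like in ADO .Net. This is only a sketch:
the Invoices table, its columns and the connection handling are made
up for the example.

   Imports System.Data
   Imports System.Data.SqlClient

   Module LockDemo
      ' Sketch only: "conn" is assumed to be an open SqlConnection,
      ' and the Invoices table is a made-up example.
      Sub ScanWithRepeatableRead(ByVal conn As SqlConnection)
         Dim tran As SqlTransaction = _
             conn.BeginTransaction(IsolationLevel.RepeatableRead)
         Dim cmd As New SqlCommand( _
             "SELECT InvoiceNo, InvoiceDate FROM Invoices " & _
             "WHERE InvoiceDate >= '20060101'", conn, tran)
         Using reader As SqlDataReader = cmd.ExecuteReader()
            While reader.Read()
               ' Shared locks taken on the rows read here are held
               ' until the Commit below, not just for the current row.
            End While
         End Using
         tran.Commit()   ' only here are the read locks released
      End Sub
   End Module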
However, the only reasonable approach is optimistic locking. That is,
don't lock, but check for concurrent updates when you update. This
can be done in two ways:
1) Add a timestamp column: a timestamp column is automatically updated
   when the row is updated. If you include the timestamp column in the
   WHERE clause of your UPDATE, and you see that @@rowcount is 0, then
   you know that the row was changed since you last read it.

2) Without a timestamp column, just add all columns to the WHERE
   clause. I believe that the UPDATE commands that come with the
   CommandBuilder include this.
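
To make 1) concrete, here is a sketch, again with made-up table and
column names. Rather than checking @@rowcount in T-SQL, you can check
the row count that ExecuteNonQuery returns:

   Imports System.Data
   Imports System.Data.SqlClient

   Module OptimisticDemo
      ' Sketch only: tstamp is assumed to be a timestamp column in
      ' the made-up Invoices table, and oldTstamp its value from
      ' when the row was read.
      Sub UpdateWithCheck(ByVal conn As SqlConnection, _
                          ByVal invoiceNo As Integer, _
                          ByVal newAmount As Decimal, _
                          ByVal oldTstamp As Byte())
         Dim upd As New SqlCommand( _
             "UPDATE Invoices SET Amount = @amount " & _
             "WHERE InvoiceNo = @invno AND tstamp = @tstamp", conn)
         upd.Parameters.AddWithValue("@amount", newAmount)
         upd.Parameters.AddWithValue("@invno", invoiceNo)
         upd.Parameters.AddWithValue("@tstamp", oldTstamp)
         If upd.ExecuteNonQuery() = 0 Then
            ' No row matched: someone changed it since we read it.
            Throw New DBConcurrencyException( _
                "Invoice " & invoiceNo.ToString() & _
                " was modified by another process.")
         End If
      End Sub
   End Module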
I can think of a third way: first read all keys into a local array.
Then iterate over the array and process one row at a time:
1) start a transaction with REPEATABLE READ, 2) read the row,
3) update it, and 4) commit. But this will be slow as I don't know
what.
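
A sketch of that third way, with the same made-up names and no error
handling:

   Imports System.Collections.Generic
   Imports System.Data
   Imports System.Data.SqlClient

   Module KeyListDemo
      ' Sketch only: read the keys first, then lock, read and update
      ' one row at a time. Table and column names are made up.
      Sub UpdateRowByRow(ByVal conn As SqlConnection)
         Dim keys As New List(Of Integer)
         Dim keyCmd As New SqlCommand( _
             "SELECT InvoiceNo FROM Invoices " & _
             "WHERE InvoiceDate >= '20060101'", conn)
         Using rdr As SqlDataReader = keyCmd.ExecuteReader()
            While rdr.Read()
               keys.Add(rdr.GetInt32(0))
            End While
         End Using

         For Each invNo As Integer In keys
            ' 1) Start a transaction with REPEATABLE READ.
            Dim tran As SqlTransaction = _
                conn.BeginTransaction(IsolationLevel.RepeatableRead)
            ' 2) Read the row; the lock is held until the commit.
            Dim sel As New SqlCommand( _
                "SELECT Amount FROM Invoices " & _
                "WHERE InvoiceNo = @invno", conn, tran)
            sel.Parameters.AddWithValue("@invno", invNo)
            Dim amount As Decimal = CDec(sel.ExecuteScalar())
            Dim newAmount As Decimal = amount   ' make changes here
            ' 3) Update the row in the same transaction.
            Dim upd As New SqlCommand( _
                "UPDATE Invoices SET Amount = @amount " & _
                "WHERE InvoiceNo = @invno", conn, tran)
            upd.Parameters.AddWithValue("@amount", newAmount)
            upd.Parameters.AddWithValue("@invno", invNo)
            upd.ExecuteNonQuery()
            ' 4) Commit, which releases the lock on this row.
            tran.Commit()
         Next
      End Sub
   End Module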
All in all, I think you are fighting an uphill battle.
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx