Posted by Erland Sommarskog on 09/29/44 12:01
(quincy451@yahoo.com) writes:
> So you are telling me INSERT in a loop like that will lock and unlock
> the table repeatedly, but possibly because of the tightness of the loop
> never let me in to get my stuff done.
That may be what Tom is saying, but, no, that is not the way it works.
A single INSERT should only lock what is needed to insert the row. However,
if the remote process performs the DELETE and the INSERTs in a single
transaction, the result is the same: as long as the inserted rows have
not been committed, you cannot read any rows from the table.
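To illustrate the pattern (table and column names here are hypothetical,
just a sketch of what the remote process presumably does):

```sql
-- Session 1: the remote loader, everything in one transaction.
BEGIN TRANSACTION;
DELETE FROM dbo.RefData;
INSERT INTO dbo.RefData (id, val) VALUES (1, 'a');
-- ...thousands more INSERTs...
COMMIT TRANSACTION;

-- Session 2: under the default READ COMMITTED isolation level, this
-- SELECT blocks on session 1's locks until the COMMIT.
SELECT id, val FROM dbo.RefData;
```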
And, I'd like to add, nor would you want to read any rows at that point.
I don't know what data this is, or how much of it changes when the remote
process refreshes it. But if you were able to perform a SELECT after the
old rows had been deleted and only 10 new rows had been inserted, your
SELECT would probably come back nearly empty. That cannot be right.
So the remote process is probably right to keep it all in one transaction
and lock you out. But six minutes to load 10000 rows is not acceptable.
I don't think there is much you can do on your side. Had you been on
SQL 2005, snapshot isolation could have let you read the old version of
the data until the remote process completed. But on SQL 2000, you can only
work with the author of the remote process to see what can be done to
improve it.
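For the record, on SQL 2005 the snapshot approach would look something
like this (database and table names are hypothetical):

```sql
-- One-time setup: allow snapshot isolation in the database.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- The reader then opts in per session and sees the last committed
-- version of the rows, without blocking on the loader's transaction.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT id, val FROM dbo.RefData;
```

Alternatively, READ_COMMITTED_SNAPSHOT can be turned on database-wide so
that plain READ COMMITTED readers get versioned reads without any change
to their code. Neither option exists on SQL 2000.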
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx