Posted by Erland Sommarskog on 10/01/59 11:18
pb648174 (google@webpaul.net) writes:
> This large copy is happening from application code and involves 30 or
> 40 separate stored procedures being called. It is being done
> asynchronously via a c# web app and there is a pause of 2 seconds
> before every copy command. Unfortunately this two second pause doesn't
> help because of the locking situation, it just makes everything block
> for longer.
>
> Isn't this a somewhat common scenario? What is the standard way around
> this situation? Can somebody tell me why the entire table is being
> locked instead of just the new records? I have no problem with new
> records being locked, but I do not want it to lock the records it is
> reading or the entire table.
>
> Would disaster ensue if I submitted the following command?
> SP_INDEXOPTION 'table_name', 'AllowTableLocks', FALSE
It would not have any effect.
The table you are inserting into is *not* completely locked. You posted
an output from sp_lock. According to this output, spid 54 holds an
exclusive table lock on table 1181963287, but that does not seem to be
the main theatre for your problem. (You can use
"SELECT object_name(1181963287)" to see which table this is.)
Instead, the main part of the show appears to be table 1117963059, and
spid 54 does not hold any table locks on this table. But it does hold
locks on all newly inserted rows, as well as all newly inserted index nodes.
Process 52 is blocked by spid 54, and this is why:
52 5 1117963059 4 KEY (5301214e6d62) S WAIT
Spid 52 is trying to get a shared lock on an index key, but is blocked.
Assume that the query spid 52 has submitted is "show how many items of
widget X we sold last week", and the optimizer decides to use the
non-clustered index over widget_id to access the sales numbers. And
among the new rows you insert, there are rows with widget_id in question.
When spid 52 tries to access those index nodes, it will be blocked.
So while you don't get a table lock, it is not so simple that other
processes can just read the existing data and not bother about the
new data.
The best way would be to look into how to shorten the transaction length.
It sounds as if rows are being inserted one by one, in which case there
are lots of possibilities for improvement.
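For instance, if the application currently loops and calls a procedure per row, one set-based statement keeps the transaction far shorter. (The table and column names below are made up for illustration; they are not from your schema.)

```sql
-- Hypothetical sketch: replace a row-by-row loop with one
-- set-based INSERT, so locks are held for much less time.
INSERT INTO dbo.sales (widget_id, sale_date, qty)
SELECT widget_id, sale_date, qty
FROM   dbo.sales_source          -- assumed source of the copy
WHERE  batch_id = @batch_id      -- assumed batch identifier
```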
Another possibility is to insert the data into a staging table, and then
insert it with one big INSERT statement at the end. But if that is many
rows, that could still block for a considerable time.
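The staging approach could look something like this (again, hypothetical names; the point is that readers never see the staging table, so only the final statement can block them):

```sql
-- Load into a staging table that no other process reads;
-- this part causes no contention at all.
INSERT INTO dbo.sales_staging (widget_id, sale_date, qty)
SELECT widget_id, sale_date, qty
FROM   dbo.sales_source

-- One big insert at the end; this is the only part that
-- takes locks on the real table, and it can still block
-- readers for as long as it runs.
INSERT INTO dbo.sales (widget_id, sale_date, qty)
SELECT widget_id, sale_date, qty
FROM   dbo.sales_staging
```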
A further development is to use partitioned views. Here, too, you copy
the data into an empty table that no other process sees. The processes
that read data access the view, not the table. Once the table has been
loaded, you change the view definition to include the new table.
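A minimal sketch of the view swap, with invented table names (a production partitioned view would normally also have CHECK constraints on a partitioning column):

```sql
-- Readers query the view, which initially covers only
-- the existing table:
CREATE VIEW dbo.sales AS
   SELECT * FROM dbo.sales_old
GO
-- ... bulk-load dbo.sales_new here; readers are unaffected,
-- since nothing references that table yet ...

-- Once the load is committed, swap the new table in:
ALTER VIEW dbo.sales AS
   SELECT * FROM dbo.sales_old
   UNION ALL
   SELECT * FROM dbo.sales_new
GO
```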
Then you can, of course, use READ UNCOMMITTED for the readers, but they
may get some funky results that are not consistent. (READ UNCOMMITTED
on the writer has no effect.) A better alternative may be READPAST,
in which case locked rows are simply skipped. But this is only a locking
hint, and is not settable as a transaction isolation level.
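To illustrate the difference (query and names are invented): READ UNCOMMITTED is a session-level setting, while READPAST must be written as a table hint in each query.

```sql
-- Session-level: readers see uncommitted ("dirty") rows.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT COUNT(*) FROM dbo.sales WHERE widget_id = @widget_id

-- Table hint only: locked rows are silently skipped, so the
-- count may be lower than the committed total.
SELECT COUNT(*)
FROM   dbo.sales WITH (READPAST)
WHERE  widget_id = @widget_id
```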
In SQL 2005, you would probably use the new SNAPSHOT isolation level,
in which case the readers would see the data as it was before the long
transaction started.
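On SQL 2005 that would look roughly like this (database name invented; snapshot isolation must first be enabled at the database level):

```sql
-- One-time setting on the database:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON

-- Readers then opt in per session and see the data as it was
-- before the long-running write transaction began:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
SELECT COUNT(*) FROM dbo.sales WHERE widget_id = @widget_id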
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp