Posted by halftime57@gmail.com on 10/31/07 22:08
I have a very large DB (>300GB). Each day I receive a file containing
an average of 2 million individual delete statements. The delete
statements all delete based on full primary key so only an @@rowcount
value of 0 or 1 is possible for each statement. The deletes can be
from any one of several hundred tables in the database.
Right now I'm just cursoring through that collection of delete
statements and executing them one at a time as dynamic SQL (since I
receive the whole SQL statement, I don't know of another way to do
it). It takes 1-2 hours to complete, depending on other traffic on
the host.
Batching them into a single transaction gives me better performance,
but I lose the ability to know which statement within the transaction
failed, if any in the batch do, or which ones come back with a zero
row count (i.e., the row wasn't there, which is information I need to
capture).
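For what it's worth, one possible middle ground is to commit in batches while still executing and checking each statement individually, so the per-statement row count survives. Here is a minimal sketch in Python with sqlite3 (table names, file handling, and batch size are all hypothetical stand-ins; the real system would presumably use its own DB driver):

```python
import sqlite3

def run_deletes(conn, statements, batch_size=1000):
    """Execute delete statements in batched transactions,
    recording each statement's row count individually."""
    results = []  # (statement, rowcount) pairs
    cur = conn.cursor()
    for i in range(0, len(statements), batch_size):
        batch = statements[i:i + batch_size]
        cur.execute("BEGIN")
        for stmt in batch:
            cur.execute(stmt)
            # rowcount is 0 or 1 here because each delete
            # targets a full primary key
            results.append((stmt, cur.rowcount))
        cur.execute("COMMIT")  # one commit per batch, not per statement
    return results

# Hypothetical usage: an in-memory table standing in for the real DB
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # autocommit; transactions managed manually
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

deletes = ["DELETE FROM t WHERE id = 2",
           "DELETE FROM t WHERE id = 99"]
results = run_deletes(conn, deletes)
# statements whose row simply wasn't there
missing = [s for s, n in results if n == 0]
```

This keeps the fewer-commits performance benefit while preserving the zero-row-count information; a failed statement could similarly be caught per statement (e.g. with a savepoint) without losing the rest of the batch.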
I keep thinking there's an easy, elegant solution, but it's eluding me
so I thought I'd ask. My undying gratitude to anyone who can help.