Posted by David Portas on 10/31/07 22:20
<halftime57@gmail.com> wrote in message
news:1193868515.262136.223710@k35g2000prh.googlegroups.com...
> I have a very large DB (>300GB). Each day I receive a file containing
> an average of 2 million individual delete statements. The delete
> statements all delete based on full primary key so only an @@rowcount
> value of 0 or 1 is possible for each statement. The deletes can be
> from any one of several hundred tables in the database.
>
> Right now I'm just cursoring through that collection of delete
> statements and executing them one at a time as dynamic SQL (since I
> receive the whole SQL statement, I don't know of another way to do
> it). It takes 1-2 hours to complete, depending on other traffic on
> the host. Batching them into a single transaction gives me better
> performance, but I lose the ability to know which statement within
> the transaction failed, if any in the batch do, or which ones come
> back with a zero row count (i.e., the row wasn't there, which is
> information I need to capture). I keep thinking there's an easy,
> elegant solution, but it's eluding me, so I thought I'd ask. My
> undying gratitude to anyone who can help.
>
Instead of executing each one individually, it would typically be better
to have one DELETE statement cover many rows. Maybe you can change
whatever produces the DELETE statements to do it that way. Identifying
any possible errors could be a separate step: for example, a SELECT
statement to identify the rows that would otherwise cause foreign key
violations when deleted, and then exclude those rows from the actual
DELETE.
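
As a rough sketch of what I mean, assuming you can bulk load the keys
from the daily file into a staging table first (all of the table and
column names below are invented for the example):

-- Hypothetical staging table holding the primary keys extracted
-- from the daily file; one row per incoming DELETE statement.
CREATE TABLE #KeysToDelete (OrderID INT NOT NULL PRIMARY KEY);

-- Step 1: record the keys that match no row in the target table.
-- These are the "zero row count" cases you need to capture.
SELECT k.OrderID
INTO #MissingRows
FROM #KeysToDelete AS k
WHERE NOT EXISTS
      (SELECT * FROM dbo.Orders AS o WHERE o.OrderID = k.OrderID);

-- Step 2: record the keys whose rows still have dependent child
-- rows (dbo.OrderLines here) and would fail with a foreign key
-- violation if deleted.
SELECT k.OrderID
INTO #BlockedRows
FROM #KeysToDelete AS k
WHERE EXISTS
      (SELECT * FROM dbo.OrderLines AS c WHERE c.OrderID = k.OrderID);

-- Step 3: one set-based DELETE for everything that remains.
DELETE o
FROM dbo.Orders AS o
JOIN #KeysToDelete AS k
  ON k.OrderID = o.OrderID
WHERE NOT EXISTS
      (SELECT * FROM #BlockedRows AS b WHERE b.OrderID = k.OrderID);

You would repeat the pattern for each of your several hundred target
tables, but the point is that one joined DELETE replaces thousands of
single-row statements, and the zero-rowcount and error reporting
become two cheap set-based SELECTs instead of per-statement checks.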
--
David Portas