Posted by Erland Sommarskog on 02/25/06 16:34
nnelson (nnelson@cmscms.com) writes:
> I have a base customer table of 2 million records. We're doing some
> testing and I added 33000 rows incorrectly. No biggie, we'll just
> delete them, right? Nope....9 hours later, process is still running
> using this query.
>
> delete from customer where custid in (select custid from #tmp1)
>
> k...so we'll change it.
>
> delete from customer where custid > 2295885
>
> 2 hours later, still going slower than a turtle...literally 1 row per
> second.
>
> custid is my primary key so you can't tell me it's not indexed. It's
> pretty bad when I have to resort to doing a restore of my entire
> database of 27GB because it's faster than deleting 33000 rows.
First thing to check is that you don't have any blocking somewhere.
Start your delete and note the spid in the status bar of Query Analyzer.
From another window run sp_who, and check the Blk column. If this column
shows a non-zero value for your spid, your process is blocked by that spid.
My thinking is that when you realised your mistake, you pressed the red
button in QA. If you were in the middle of a transaction, cancelling the
batch did not roll it back; you must explicitly run ROLLBACK TRANSACTION,
or the transaction stays open and keeps its locks.
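Something along these lines (only a sketch; it assumes you still have the
window with the cancelled batch open):

   -- From a second window: look at the Blk column for the spid
   -- that is running the DELETE.
   EXEC sp_who

   -- In the window where you cancelled the earlier batch:
   SELECT @@TRANCOUNT    -- anything above 0 means a transaction is still open
   IF @@TRANCOUNT > 0
      ROLLBACK TRANSACTION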
As David said, unindexed FK columns in other tables could also be an issue,
although nine hours is far too long even for that. Then again, if the
FKs are cascading, maybe not.
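If that turns out to be the problem, the cure is to index the referencing
columns. For instance, if there were a child table orders with a custid
column pointing to customer (just an example, I don't know your schema):

   CREATE INDEX ix_orders_custid ON orders (custid)

Without such an index, every customer row you delete forces a scan of the
child table to check the constraint, and that adds up over 33000 rows.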
Also check whether the table has a DELETE trigger that could have
performance issues at this volume.
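You can list any triggers with sp_helptrigger, and then look at the code
with sp_helptext if one turns up:

   EXEC sp_helptrigger 'customer'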
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx