Posted by Erland Sommarskog on 03/22/06 00:05
hallpa1@yahoo.com (hallpa1@yahoo.com) writes:
> My process involves breaking the process into many smaller chunks.
> Purging that chunk and doing a commit, then grab the next chunk. So
> while it will run continuously during my available window, there will
> be a commit after every chunk, which is a few thousand rows. The issue
> is that I would like to be able to finish all of the chunks in one
> window then rebuild the indexes.
A few thousand? That's too small! It depends on how wide your
tables are, but I would suggest 100000 rows. Batches that are too small
can cost you time, because it takes time to locate each chunk.
Benchmarking takes a lot of time, but it is the only way to find out
what a good value is.
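As a sketch of the batched-purge pattern under discussion (table name,
column name, and purge criterion here are hypothetical, and the batch
size is the suggested starting point, to be tuned by benchmarking):

```sql
-- Hypothetical sketch: purge rows in batches, committing after each
-- batch so the transaction stays short and the log can truncate.
DECLARE @batchsize int;
DECLARE @rowcnt    int;
SET @batchsize = 100000;   -- starting point; benchmark to tune
SET @rowcnt    = 1;

WHILE @rowcnt > 0
BEGIN
   BEGIN TRANSACTION;

   -- DELETE TOP (n) requires SQL Server 2005 or later.
   DELETE TOP (@batchsize) FROM dbo.MyBigTable
   WHERE  purge_date < '20060101';   -- hypothetical purge criterion

   SET @rowcnt = @@rowcount;

   COMMIT TRANSACTION;
END;
```

On SQL Server 2000, which has no DELETE TOP, the same effect is
achieved by running SET ROWCOUNT @batchsize before a plain DELETE
(and SET ROWCOUNT 0 afterwards to restore the default).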
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx