Posted by Erland Sommarskog on 02/11/07 15:13
GZ (gerald.zimmerman@gmail.com) writes:
> This can also be an application issue. I often encounter situations
> where developers do not batch their transactions, and thus do not
> give the log a chance to truncate. If this is an in-house app, updates
> and deletes should be done in batches. The size of the batch should be
> set by the ideal size of your transaction log, the size of the db,
> etc. Also, be sure to dump the log periodically; typically I dump the
> log every 15 minutes (when the db is in full recovery mode). This will
> also assist in controlling the size of the log.
I assume that you are thinking of the case where you update, insert
or delete millions of rows in one go, and that in such a case it may be
better to break up the operation into batches of, say, 500000 rows. Yes,
this can be a good idea. But it has to be implemented in such a way that
if the operation fails half-way, the database is still consistent.
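
To make that concrete, here is a minimal sketch of such a batched
delete. The table dbo.OrderHistory, its OrderDate column and the cutoff
date are hypothetical, picked only for illustration; the 500000 batch
size is the figure from above. Note that DELETE TOP (n) requires SQL
Server 2005; on SQL 2000 you would use SET ROWCOUNT instead.

   -- Roll back the current batch entirely if a runtime error occurs,
   -- so a failure half-way leaves only whole batches applied.
   SET XACT_ABORT ON

   DECLARE @batchsize int, @rowsdeleted int
   SELECT @batchsize = 500000, @rowsdeleted = @batchsize

   WHILE @rowsdeleted = @batchsize
   BEGIN
      -- Each batch commits on its own, so the log space it used can
      -- be reclaimed before the next batch starts.
      BEGIN TRANSACTION
      DELETE TOP (@batchsize) FROM dbo.OrderHistory
      WHERE  OrderDate < '20060101'        -- hypothetical cutoff
      SELECT @rowsdeleted = @@ROWCOUNT
      COMMIT TRANSACTION
   END

In full recovery mode this only keeps the log small if log backups run
between the batches, as with the 15-minute schedule quoted above; in
simple recovery the log truncates on checkpoint by itself.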
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx