Posted by Greg D. Moore (Strider) on 03/09/06 15:09
"Ben" <vanevery@gmail.com> wrote in message
news:1141851145.888769.151590@p10g2000cwp.googlegroups.com...
> We are planning to add a new attribute to one of our tables to speed up
> data access. Once the attribute is added, we will need to populate
> that attribute for each of the records in the table.
>
> Since the table in question is very large, the update statement is
> taking a considerable amount of time. From reading through old posts
> and Books Online, it looks like one of the big things slowing down the
> update is writing to the transaction log.
>
> I have found mention of "truncate log on checkpoint" and using "SET
> ROWCOUNT" to limit the number of rows updated at once. Or "dump
> transaction databaseName with No_Log".
Yes, options like these can help.
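For example, a batched update along these lines keeps each transaction small
so the log can be cleared between batches. This is only a rough sketch: the
table, column, and expression are made up, and it assumes the new column
starts out NULL, the value assigned is never NULL, and SET ROWCOUNT still
limits UPDATE (as it does on SQL 2000/2005):

    -- Update in chunks of 10,000 rows at a time
    SET ROWCOUNT 10000
    WHILE 1 = 1
    BEGIN
        UPDATE MyBigTable
        SET NewAttribute = LEFT(ExistingColumn, 10)  -- hypothetical expression
        WHERE NewAttribute IS NULL
        IF @@ROWCOUNT = 0 BREAK
        -- In simple recovery (or with trunc. log on chkpt.) a CHECKPOINT lets
        -- the log space be reused; otherwise back up the log here instead
        CHECKPOINT
    END
    SET ROWCOUNT 0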
Also, if you can, drop your indices BEFORE you load the data and then
rebuild them.
I've often found this far faster.
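Something along these lines, with made-up index and column names (leave the
clustered index/primary key in place and drop only the nonclustered ones):

    -- Drop nonclustered indexes before the big update...
    DROP INDEX MyBigTable.IX_MyBigTable_SomeCol

    -- ...run the update or load here...

    -- ...then rebuild them afterward
    CREATE NONCLUSTERED INDEX IX_MyBigTable_SomeCol
        ON MyBigTable (SomeCol)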
You mention an update statement; can you use BCP or bulk copy instead?
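If you can export the rows, transform them outside the database, and reload
them, a bulk copy into a table with no indexes can be minimally logged
(depending on the recovery model). A rough sketch using BULK INSERT, the
T-SQL flavor of bcp, with made-up file, table, and batch-size values:

    -- Load a native-format bcp file in 50,000-row batches
    BULK INSERT dbo.MyBigTable_Staging
    FROM 'C:\temp\MyBigTable.dat'
    WITH (DATAFILETYPE = 'native', BATCHSIZE = 50000, TABLOCK)

Then update the real table by joining to the staging table on its key, or
swap the tables if you reloaded everything.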
>
> Does anyone have any opinions on these tactics? Please let me know if
> you want more information about the situation in order to provide an
> answer!
>