Posted by Erland Sommarskog on 12/07/06 22:32
(meeraguna@gmail.com) writes:
> Now we like to capture the rows/columns changed in the normalised
> tables and do only those chages to the denormalised table , which we
> are hoping will reduce the processing time by atleast 50%
An old-fashioned way of doing this is to add a timestamp column to the
tables in question. A timestamp column is updated every time the row
is updated. A timestamp is an 8-byte binary value that is database-
unique and monotonically increasing. (And it has nothing to do with date
and time.)
Thus the computation process would keep a table with the most recently
processed timestamp value, and on each run select only rows with a
timestamp higher than the saved value.
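A minimal sketch of this high-water-mark scheme, with hypothetical table and column names (Orders, ProcessingState etc. are only for illustration):

```sql
-- Add a timestamp (rowversion) column to the normalised table:
ALTER TABLE Orders ADD LastChanged timestamp

-- One row per tracked table, remembering how far processing has come:
CREATE TABLE ProcessingState (
    TableName     sysname   NOT NULL PRIMARY KEY,
    LastTimestamp binary(8) NOT NULL
)
go
-- Pick up only rows changed since the last run:
DECLARE @last binary(8)
SELECT @last = LastTimestamp
FROM   ProcessingState
WHERE  TableName = 'Orders'

SELECT OrderID, CustomerID, Amount   -- columns to propagate
FROM   Orders
WHERE  LastChanged > @last

-- After a successful run, save the new high-water mark:
UPDATE ProcessingState
SET    LastTimestamp = (SELECT MAX(LastChanged) FROM Orders)
WHERE  TableName = 'Orders'
```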
There are two possible problems:
1) You don't capture deletes this way. This could be handled with a
trigger that feeds a table of deleted ids.
2) The timestamp is updated even if the row is updated with the
same values, so you may get false positives.
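Problem 1 above could be addressed with a trigger along these lines; again the table and column names are hypothetical:

```sql
-- Table that the delete trigger feeds with the ids of deleted rows,
-- so the process can remove them from the denormalised table:
CREATE TABLE DeletedOrders (
    OrderID   int      NOT NULL PRIMARY KEY,
    DeletedAt datetime NOT NULL DEFAULT getdate()
)
go
CREATE TRIGGER tri_Orders_delete ON Orders FOR DELETE AS
    INSERT DeletedOrders (OrderID)
    SELECT d.OrderID
    FROM   deleted d
    WHERE  NOT EXISTS (SELECT *
                       FROM   DeletedOrders x
                       WHERE  x.OrderID = d.OrderID)
go
```

The NOT EXISTS guard is there so that a row which is deleted, re-inserted and deleted again does not violate the primary key.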
> I thought of using Triggers or OUTPUT clause to capture the changes.
> Please help me with the any ideas how to design the new process
If you don't go for timestamp, adding triggers that save the changes to
a table, or send a message on a Service Broker queue, is probably better,
since this would cover ad-hoc changes as well.
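A sketch of the table-based variant, with hypothetical names (the Service Broker alternative would SEND a message from the trigger instead of inserting):

```sql
-- Queue table that the computation job reads and empties on each run:
CREATE TABLE OrderChanges (
    OrderID   int      NOT NULL,
    ChangedAt datetime NOT NULL DEFAULT getdate()
)
go
CREATE TRIGGER tri_Orders_change ON Orders FOR INSERT, UPDATE AS
    INSERT OrderChanges (OrderID)
    SELECT OrderID FROM inserted
go
```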
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx