Posted by Brian - Support on 12/03/07 23:43
I have an extremely large table that is starting to slow down
production. I want to keep it for historical queries, but I also want
it to be fast for daily production activity. The daily activity never
accesses records older than a few days. One of the difficult things
about this table is that a lot of foreign keys reference it.
So, I'm thinking of doing this:
1) Keep a "live" archive of the table using INSERT, UPDATE, and DELETE
triggers.
2) Have a job that disables the DELETE trigger and deletes records
older than one week.
3) Of course, part of this also means deleting the records that
reference this table.
4) Do the same thing with other large tables, including, of course,
the ones that reference this table.
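You don't say which DBMS you're on, so here's a minimal sketch of step 1 using SQLite through Python. The table names `events` / `events_archive` and the columns are hypothetical stand-ins for your real schema; the point is just that INSERT/UPDATE triggers keep a shadow copy current without touching application code:

```python
import sqlite3

# Hypothetical schema: a large "events" table shadowed by an archive table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT, created_at TEXT);
CREATE TABLE events_archive (id INTEGER, payload TEXT, created_at TEXT);

-- Live archive: copy every new row into the archive table.
CREATE TRIGGER events_ai AFTER INSERT ON events
BEGIN
    INSERT INTO events_archive VALUES (NEW.id, NEW.payload, NEW.created_at);
END;

-- Keep the archived copy in sync when a live row changes.
CREATE TRIGGER events_au AFTER UPDATE ON events
BEGIN
    UPDATE events_archive SET payload = NEW.payload WHERE id = NEW.id;
END;
""")

# A normal production insert; the trigger mirrors it into the archive.
conn.execute("INSERT INTO events VALUES (1, 'order placed', '2007-12-01')")
archived = conn.execute("SELECT payload FROM events_archive").fetchall()
print(archived)  # [('order placed',)]
```

For step 2, the purge job would drop (or disable, where the DBMS supports it) the DELETE trigger before pruning, so the deletions don't cascade into the archive.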
It seems the trick, mostly, is to delete the records from the various
tables in order, so that no foreign keys are left dangling.
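Concretely, the ordered delete means pruning child tables before their parents. A small sketch, again in SQLite via Python with hypothetical `orders` / `order_lines` tables and a made-up cutoff date; with FK enforcement on, deleting in the wrong order would fail immediately:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce FKs so ordering mistakes surface
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at TEXT);
CREATE TABLE order_lines (id INTEGER PRIMARY KEY,
                          order_id INTEGER REFERENCES orders(id));
""")
conn.execute("INSERT INTO orders VALUES (1, '2007-11-01')")
conn.execute("INSERT INTO order_lines VALUES (10, 1)")

cutoff = '2007-11-26'  # hypothetical "one week ago"

# Children first: remove rows that reference the soon-to-be-deleted parents.
conn.execute("DELETE FROM order_lines WHERE order_id IN "
             "(SELECT id FROM orders WHERE created_at < ?)", (cutoff,))
# Parents last: now nothing references them.
conn.execute("DELETE FROM orders WHERE created_at < ?", (cutoff,))

remaining = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(remaining)  # 0
```

With more tables, the same idea generalizes to deleting in reverse topological order of the FK graph, which is exactly where self-references get awkward.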
Is this a reasonable approach? Is there a more straightforward or
built-in technique for doing this?
We examined partitioning, and it does not seem to be a practical
approach because, among other things, some of the tables are self-
referencing and those references would cross partition boundaries.
Thanks!
- Brian