Posted by Tom van Stiphout on 12/04/07 03:14
On Mon, 3 Dec 2007 15:43:05 -0800 (PST), Brian - Support
<mail@brianrice.com> wrote:
What do you consider "extremely large"?
A query like:
select * from SomeTable
where SomeDate between 'x' and 'y'
should be fast if SomeDate is indexed and x and y are only a few days
apart, even if there are millions of rows. That's the power of
indexing. So it should be fast to select the "active records" for use
in the front-end application.
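[A minimal T-SQL sketch of the above, with hypothetical table and column names; the point is that a nonclustered index on the date column lets the optimizer seek straight to the narrow range instead of scanning millions of rows:]

```sql
-- Hypothetical names (SomeTable, SomeDate); not the poster's actual schema.
-- A nonclustered index on the date column supports an index seek for
-- narrow date ranges.
CREATE NONCLUSTERED INDEX IX_SomeTable_SomeDate
    ON SomeTable (SomeDate);

-- The "active records" query: with the index above, this is a range seek
-- even on a table with millions of rows.
SELECT *
FROM SomeTable
WHERE SomeDate BETWEEN '2007-12-01' AND '2007-12-03';
```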
-Tom.
>I have an extremely large table that is starting to slow down
>production. I want to keep it for historical queries, but want to have
>it fast for daily production activity. The daily activity will never
>access records older than a few days. One of the difficult things
>about this table is that there are a lot of foreign keys that
>reference the table.
>
>So, I'm thinking of doing this:
>
>1) Keep a "live" archive of the table using INSERT, UPDATE, and DELETE
>triggers.
>2) Having a job that disables the DELETE triggers and will delete
>records older than 1 week.
>3) Of course, part of this is also deleting the records that reference
>this table.
>4) Doing the same thing, with other large tables including, of course,
>the ones that reference this table.
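[Step 1 above could be sketched in T-SQL roughly as follows; the table, column, and trigger names are illustrative assumptions, not the poster's schema. The trigger keeps an archive copy in sync on every change, and because the purge job in step 2 disables it, archive rows survive the weekly delete:]

```sql
-- Hypothetical "live archive" trigger. Assumes SomeTable_Archive has the
-- same key and columns as SomeTable.
CREATE TRIGGER trg_SomeTable_Archive
ON SomeTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove the old image of any updated or deleted rows...
    DELETE a
    FROM SomeTable_Archive AS a
    JOIN deleted AS d ON d.Id = a.Id;

    -- ...then copy in the current image of inserted or updated rows.
    INSERT INTO SomeTable_Archive (Id, SomeDate, Payload)
    SELECT i.Id, i.SomeDate, i.Payload
    FROM inserted AS i;
END;
```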
>
>It seems the trick, mostly, is to delete the records from the various
>tables in the right order, so that no foreign keys are left hanging.
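[A sketch of the purge job in steps 2-3, again with made-up names: child rows are deleted before their parents so no foreign key is ever left dangling, and the archive trigger is disabled for the duration so the archive keeps the purged rows. `DISABLE TRIGGER` / `ENABLE TRIGGER` is SQL Server 2005+ syntax:]

```sql
-- Hypothetical weekly purge. ChildTable.ParentId references SomeTable.Id.
BEGIN TRANSACTION;

-- Step 2: keep the archive intact while production rows are removed.
DISABLE TRIGGER trg_SomeTable_Archive ON SomeTable;

-- Step 3: delete referencing (child) rows first...
DELETE FROM ChildTable
WHERE ParentId IN (SELECT Id
                   FROM SomeTable
                   WHERE SomeDate < DATEADD(DAY, -7, GETDATE()));

-- ...then the parent rows older than one week.
DELETE FROM SomeTable
WHERE SomeDate < DATEADD(DAY, -7, GETDATE());

ENABLE TRIGGER trg_SomeTable_Archive ON SomeTable;

COMMIT;
```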
>
>Is this a reasonable approach? Is there a more straightforward or
>built-in technique for doing this?
>
>We examined partitioning, and it does not seem to be a practical
>approach because, among other things, some of the tables are
>self-referencing and those references would cross partition boundaries.
>
>Thanks!
>
>- Brian