Posted by David Cressey on 03/24/07 22:02
"Greg D. Moore (Strider)" <mooregr_deleteth1s@greenms.com> wrote in message
news:4fgNh.17267$Jl.14634@newsread3.news.pas.earthlink.net...
> "David Cressey" <cressey73@verizon.net> wrote in message
> news:s2eNh.248$E46.187@trndny09...
> >
> >> > A million records isn't large? Ok.
> >>
> >> Nah, rather trivial these days. ;-)
> >
> > Does "trivial" mean easy or unimportant?
>
> No, in this case it means rather small, which impacts how you approach
> maintenance issues. And to some extent how you solve problems.
>
I'd suggest that's a misuse of the word "trivial", but that you might have
meant "small enough to be unimportant".
> For example, for some databases, it may be "simpler" to simply throw
> more memory at the problem. For a database 10x the size, more memory
> might not even make a dent.
This is because much of the work involved in sorting and searching grows
non-linearly with the volume of data (row cardinality, in this case).
In every database I've worked on, the difference between a table scan and
an index lookup has resulted in a "nontrivial" performance difference with a
million rows in the table.
But it depends on what you mean by "non trivial", I suppose.
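To put a rough number on the scan-versus-index difference at a million rows, here's a minimal sketch (mine, not from the thread) that counts comparisons for a linear scan versus a binary search over a sorted list of a million keys, with binary search standing in for a B-tree index lookup. The names and the in-memory list are illustrative assumptions; a real database's costs are dominated by I/O, but the comparison counts show the shape of the curve.

```python
# Hypothetical illustration: a sorted Python list stands in for an
# indexed column of a million rows.
N = 1_000_000
keys = list(range(N))  # sorted "table" of a million keys

def table_scan(keys, target):
    """Linear scan: comparisons grow in proportion to row count."""
    comparisons = 0
    for k in keys:
        comparisons += 1
        if k == target:
            break
    return comparisons

def index_lookup(keys, target):
    """Binary search: comparisons grow with log2 of the row count."""
    comparisons = 0
    lo, hi = 0, len(keys)
    while lo < hi:
        comparisons += 1
        mid = (lo + hi) // 2
        if keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return comparisons

target = N - 1  # worst case for the scan
print(table_scan(keys, target))    # 1,000,000 comparisons
print(index_lookup(keys, target))  # ~20 comparisons (log2 of a million)
```

The gap (a million comparisons versus about twenty) is the "nontrivial" difference at this size, and it widens as the table grows: 10x the rows means 10x the scan work but only three or four more comparisons for the index.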