Posted by Greg D. Moore (Strider) on 07/19/06 02:03 
"shelleybobelly" <shelleybobelly@yahoo.com> wrote in message 
news:1153260795.628994.213350@h48g2000cwc.googlegroups.com... 
> Right, I agree.  If I run it the other way (just update the big lookup 
> table from the separate tables where there were changes), it runs much 
> quicker, only deleting/adding a thousand or so records a day. The 
> owners of this DB are doing things differently, but I guess I'll just 
> have to 'educate' them on this one. Then I'll create another job to 
> re-index this table about once a week or so to keep it from 
> fragmenting. 
> 
> BTW, this is for a Legal department that has to look up files that can 
> be up to 20 years old, but they don't have all the information. That's 
> why the 3 million records in a lookup table. It is a subset of a HUGE 
> archive table that contains 255 columns, too. 
> 
 
Two suggestions: 
 
1) Move the indexes to an NDF file on a separate set of physical disks. 
 
This may help if you're disk I/O bound at all. 
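A minimal sketch of what that looks like; the database, filegroup, file path, table, and index names here are all hypothetical placeholders:

```sql
-- Add a filegroup backed by an .ndf on a separate disk, then put
-- the nonclustered indexes on it so index I/O hits different spindles.
ALTER DATABASE LookupDB ADD FILEGROUP IndexFG;

ALTER DATABASE LookupDB
ADD FILE (NAME = 'LookupDB_ix',
          FILENAME = 'E:\SQLData\LookupDB_ix.ndf',
          SIZE = 500MB)
TO FILEGROUP IndexFG;

-- Create (or recreate) each nonclustered index on the new filegroup.
CREATE NONCLUSTERED INDEX IX_BigLookup_CaseNo
    ON dbo.BigLookup (CaseNo)
    ON IndexFG;
```

The clustered index stays with the data; only the 11 nonclustered indexes need to move for this to help with the parallel read/write load during the rebuild.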
 
2) You may want to consider full-text indexing for some of this. 
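If the legal lookups involve searching text columns, a full-text index may replace several of those nonclustered indexes. A rough sketch (SQL Server 2005 syntax; catalog, table, column, and key-index names are made up for illustration):

```sql
-- Full-text requires a unique, single-column, non-nullable key index
-- on the table (typically the primary key).
CREATE FULLTEXT CATALOG LookupFTCat;

CREATE FULLTEXT INDEX ON dbo.BigLookup (CaseDescription, Notes)
    KEY INDEX PK_BigLookup
    ON LookupFTCat;

-- Queries then use CONTAINS/FREETEXT instead of LIKE '%term%',
-- which can't use a regular B-tree index anyway:
SELECT CaseNo
FROM   dbo.BigLookup
WHERE  CONTAINS(CaseDescription, 'deposition');
```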
 
 
 
> Thanks Erland, keep up your postings. 
> 
> 
> Erland Sommarskog wrote: 
> > shelleybobelly (shelleybobelly@yahoo.com) writes: 
> > > I have a new job. It needs to drop and re-create (by insert) a table 
> > > every night. The table contains approximately 3,000,000 (and growing) 
> > > records. The insert is fine, runs in 2 minutes. The problem is that 
> > > when I create the indexes on the table, it is taking 15-20 minutes. 
> > > There is one clustered index and 11 non-clustered. This is a lookup 
> > > table that takes many different parameters, so it really needs the 
> > > indexes for the user interface to run efficiently. However, the 
> > > database owners aren't keen on a job taking 20 minutes to run every 
> > > night. 
> > 
> > Without knowing much about the data, it's difficult to tell. But I find 
> > it difficult to believe that data changes that much in a lookup table. Then 
> > again, I would not expect a look-up table to have three million rows. 
> > Anyway, rather than dropping and recreating, maybe it's more effective 
> > to load into staging table, and then update changed rows, insert new 
> > ones, and delete old ones. 
> > 
> > 
> > -- 
> > Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se 
> > 
> > Books Online for SQL Server 2005 at 
> > http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx 
> > Books Online for SQL Server 2000 at 
> > http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx 
>
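For reference, Erland's load-into-staging approach above could be sketched roughly like this; all table and column names are hypothetical, and the staging table would be loaded first (e.g. via BULK INSERT or the nightly feed):

```sql
-- 1) Update rows whose values changed.
UPDATE t
SET    t.Col1 = s.Col1,
       t.Col2 = s.Col2
FROM   dbo.BigLookup AS t
JOIN   dbo.Staging   AS s ON s.KeyCol = t.KeyCol
WHERE  t.Col1 <> s.Col1
   OR  t.Col2 <> s.Col2;

-- 2) Insert rows that are new in the staging load.
INSERT dbo.BigLookup (KeyCol, Col1, Col2)
SELECT s.KeyCol, s.Col1, s.Col2
FROM   dbo.Staging AS s
WHERE  NOT EXISTS (SELECT *
                   FROM dbo.BigLookup AS t
                   WHERE t.KeyCol = s.KeyCol);

-- 3) Delete rows that no longer exist in the source.
DELETE t
FROM   dbo.BigLookup AS t
WHERE  NOT EXISTS (SELECT *
                   FROM dbo.Staging AS s
                   WHERE s.KeyCol = t.KeyCol);
```

Since only a thousand or so rows change per day, this touches a tiny fraction of the 3 million rows, and the 12 indexes are maintained incrementally instead of being rebuilt from scratch.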
 
  