Posted by Greg D. Moore (Strider) on 01/25/06 15:01 
"Mike Read" <mar@roe.ac.uk> wrote in message 
news:Pine.OSF.4.63.0601251140480.472688@reaxp06.roe.ac.uk... 
> Hi Robert 
> > - distribution of data: either via some form of replication or by moving 
> > data from one DB to a complete different system 
> > 
> 
> We're looking at getting another server to handle the long queries 
> so this might ultimately be the answer. 
 
This may ultimately be your best answer.  But... 
 
> 
> > - optimizing SQL: additional indexes, different query conditions etc. 
> > 
> 
> We've pretty much done what we can but some queries will always need a 
> full table scan. 
> 
 
Why?  I'd suggest posting your DDL here.  Some of the folks in this group can 
do some amazing work. 
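 
For example, post something along these lines (the table, columns, and index 
below are purely made up, just to show the sort of detail that helps people 
help you): 
 
-- Made-up example of the kind of DDL worth posting, plus the query that scans. 
CREATE TABLE dbo.Observations ( 
    ObsID       INT IDENTITY(1,1) NOT NULL PRIMARY KEY, 
    ObsDate     DATETIME NOT NULL, 
    RightAsc    FLOAT NOT NULL, 
    Declination FLOAT NOT NULL, 
    Magnitude   FLOAT NULL 
) 
GO 
-- An index on the column(s) your WHERE clause filters on can sometimes 
-- turn a full table scan into an index seek. 
CREATE INDEX IX_Observations_ObsDate ON dbo.Observations (ObsDate) 
GO 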
 
 
> As all queries run at the same priority I was kind of expecting a 
> 0.1 sec query to take approx 0.2 sec (rather than 10 secs as is happening) 
> if another (long) query is running. 
> 
> As this isn't the case I presume there's some sort of 
> overhead/cache/swapping occurring that I might have been able to 
> reduce somehow. 
> 
 
Well, generally more RAM is good. 
 
But keep in mind SQL Server 2000 Standard Edition is limited to 2 GB of RAM. 
 
So make sure you're using Enterprise on an OS that will permit use of more 
RAM. 
 
I'd highly suggest at least Windows 2003 for your OS and ideally moving to 
SQL 2005 to boot. 
 
For example, SQL 2005 Enterprise on Windows 2003 Enterprise can support up to 
64 GB of RAM.  (If you really have money to burn, go to Enterprise for 
Itanium systems... 1 TB of RAM.  Oh, and send a few checks my way. :-) ) 
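 
If you do end up on 32-bit Enterprise, memory above 4 GB has to be switched on 
explicitly.  Roughly, it looks like this (the 6144 is just an example figure; 
you'll also need the /PAE switch in boot.ini and the "Lock pages in memory" 
right for the SQL Server service account): 
 
-- Sketch only: enable AWE and cap SQL Server's memory use. 
EXEC sp_configure 'show advanced options', 1 
RECONFIGURE 
GO 
EXEC sp_configure 'awe enabled', 1 
EXEC sp_configure 'max server memory (MB)', 6144 
RECONFIGURE 
GO 
-- ('awe enabled' only takes effect after the SQL Server service restarts.) 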
 
Also, if you haven't already, you may want to get more disks and partition 
your tables across them accordingly. 
 
For example, if it's only one large table that gets scanned, move it to its 
own set of disks.  This will isolate the disk I/O. 
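 
Something like this is the usual pattern (MyDB, BigTable, the key column, and 
the E: path are all placeholders for your own names and drive layout): 
 
-- Sketch: give the big table its own filegroup on a separate set of spindles. 
ALTER DATABASE MyDB ADD FILEGROUP BigTableFG 
GO 
ALTER DATABASE MyDB 
ADD FILE 
   (NAME = 'MyDB_BigTable1', 
    FILENAME = 'E:\Data\MyDB_BigTable1.ndf', 
    SIZE = 20480MB) 
TO FILEGROUP BigTableFG 
GO 
-- Rebuilding the clustered index on the new filegroup physically moves the 
-- data.  (Assumes a clustered index of that name already exists; if the table 
-- has none, just create one ON BigTableFG without DROP_EXISTING.) 
CREATE CLUSTERED INDEX IX_BigTable_ObsDate 
    ON dbo.BigTable (ObsDate) 
    WITH DROP_EXISTING 
    ON BigTableFG 
GO 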
 
 
> Thanks 
>    Mike 
> 
>
 
  