 Posted by Erland Sommarskog on 06/18/05 01:21 
Orly Junior (nomail@nomail.com) writes: 
> By using the Profiler, I found that while executing the first query (20 
> days span), the system doesn't use the index. How is it possible?  
 
When you have a non-clustered index that could be used to resolve a query, 
SQL Server cannot always use this index blindly. If the selection is 
small, the index is fine. If the selection is large, the index spells 
disaster. This is because every hit in the index requires an access to 
the data pages, which can end up costing more page reads than scanning 
the table once. 
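
You can see the difference yourself by forcing the non-clustered index 
with a hint and comparing the I/O of the two plans. This is only a 
sketch; the table name Orders, the index name IX_Orders_OrderDate and 
the date range are made-up examples:

```sql
-- Report logical/physical reads for each statement.
SET STATISTICS IO ON;

-- Let the optimizer choose; for a wide date range it will likely scan.
SELECT * FROM Orders
WHERE OrderDate BETWEEN '20050101' AND '20050120';

-- Force the non-clustered index (hypothetical name). With a large
-- selection this usually reports far more logical reads, because every
-- index hit triggers a lookup into the data pages.
SELECT * FROM Orders WITH (INDEX(IX_Orders_OrderDate))
WHERE OrderDate BETWEEN '20050101' AND '20050120';
```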
 
Now, in your case, there are 11041 rows that match the WHERE clause. 
The table is 1.7 GB, which at 8 KB per page is about 207 000 pages. Even 
if some of that 1.7 GB is indexes, the table scan is obviously more 
expensive. 
 
But SQL Server does not build query plans from full knowledge, but from  
statistics it has saved about the table. If these statistics are inaccurate 
for some reason, the estimate may be incorrect. By default, SQL Server 
only samples the data for its statistics. 
 
You can try "UPDATE STATISTICS tbl WITH FULLSCAN" and see if this 
has any effect. SQL Server will then look at all rows. However, it 
saves the data in a histogram, so you may still lose some accuracy. DBCC 
SHOW_STATISTICS may give some information. 
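
For reference, the two commands above look like this in practice; tbl 
and the statistics name are placeholders, substitute your own:

```sql
-- Rebuild the statistics by reading every row instead of a sample.
UPDATE STATISTICS tbl WITH FULLSCAN;

-- Inspect the histogram SQL Server keeps for one statistic;
-- ix_tbl is a hypothetical index/statistics name on tbl.
DBCC SHOW_STATISTICS ('tbl', ix_tbl);
```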
 
 
--  
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se 
 
Books Online for SQL Server SP3 at 
http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp
 
  