Posted by Greg D. Moore (Strider) on 11/06/05 02:42
"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns97059EAF78709Yazorman@127.0.0.1... 
> helmut woess (hw@iis.at) writes: 
> > In this special case I would think about using a table per week. There
> > is no faster way than DROP/CREATE or maybe TRUNCATE. You have to change
> > a lot in the way you work with this data, but you have UNION and maybe
> > you can use VIEWS.
> > Or you use a big Solid State Disk for your database :-))
> 
> Since one table per week becomes quite a job to manage, I would go for 
> one table per month, and then truncate once per month. 
> 
> If this would be too much data, I would then try every tenth day. This 
> makes it a lot easier to set up the check constraints for the partitions. 
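
To make the table-per-period idea concrete, here is a rough sketch with
monthly tables, CHECK constraints, and a UNION ALL view (all object names
and dates are made up; adjust to your own schema):

    CREATE TABLE dbo.Log_2005_10 (
        LogDate datetime NOT NULL
            CHECK (LogDate >= '20051001' AND LogDate < '20051101'),
        Message varchar(500) NOT NULL
    )
    GO
    CREATE TABLE dbo.Log_2005_11 (
        LogDate datetime NOT NULL
            CHECK (LogDate >= '20051101' AND LogDate < '20051201'),
        Message varchar(500) NOT NULL
    )
    GO
    -- One view unions the monthly tables, so existing queries keep working.
    -- The CHECK constraints let the optimizer skip tables that cannot match
    -- a query's date range.
    CREATE VIEW dbo.LogData AS
        SELECT LogDate, Message FROM dbo.Log_2005_10
        UNION ALL
        SELECT LogDate, Message FROM dbo.Log_2005_11
    GO
    -- Purging an old month is then a fast, minimally logged TRUNCATE
    -- instead of a huge DELETE:
    TRUNCATE TABLE dbo.Log_2005_10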
 
Another way to handle this, which is SQL Server specific, is to set a rowcount 
of say 10,000 and loop, deleting 10,000 rows at a time (rough sketch below). 
 
And either back up the log frequently enough or use the simple recovery 
model. 
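
Something along these lines (just a sketch; dbo.EventLog, EventDate and the
cutoff date are made-up examples):

    SET ROWCOUNT 10000

    WHILE 1 = 1
    BEGIN
        -- Each pass deletes at most 10,000 rows, keeping the transaction
        -- (and the log growth per batch) small.
        DELETE FROM dbo.EventLog
        WHERE EventDate < '20051001'

        -- Stop once the last pass found nothing left to delete.
        IF @@ROWCOUNT = 0 BREAK
    END

    -- Reset so later statements in this session are not capped.
    SET ROWCOUNT 0

Under the simple recovery model the log space used by each batch can be 
reused once the batch commits; under full recovery, back up the log between 
batches so it doesn't grow out of control.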
 
 
> 
> 
> --  
> Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se 
> 
> Books Online for SQL Server SP3 at 
> http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp 
>
 