Posted by boa on 08/20/06 16:24
* Dan Guzman wrote, On 20.08.2006 16:03:
>> The theory is that even if one drive fails, the db will stay up and it
>> will be easy to recreate the indexes when the disk has been replaced. (We
>> will have one hot spare available)
>>
>> Does anyone know how well sqlserver 2005 handles disk loss in such a
>> configuration?
>
> I've seen a SQL 2005 demo with files on removable drives that may provide
> some insight. In that demo, the database remained online when USB drives
> were physically removed as long as the transaction log was available.
> Commands that touched only cached data continued to run successfully, and
> even updates succeeded. However, commands that required physical I/O to
> missing files (non-cached data) failed. The database was marked suspect
> when SQL Server was restarted because recovery couldn't take place with
> missing files.
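Good to know. If I read this right, I assume one could check the per-file
state after such a failure with something like this (database name made up):

  SELECT name, state_desc
  FROM sys.master_files
  WHERE database_id = DB_ID('MyDb');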
>
> You cannot simply recreate the indexes after you lose index files; a full
> database restore is required once any file is lost. You'll need to restore
> from a full backup and then apply transaction log backups. However, at
> least the database remains partially available while it is degraded.
Thanks, very interesting and exactly what I was looking/hoping for.
I can't see why SQL Server needs a full restore just because it loses
some indexes, but if that's the way it is I'm not going to fight it...
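For the record, I assume the restore sequence after losing a file would
look roughly like this (backup file names made up):

  -- restore the last full backup without recovering
  RESTORE DATABASE MyDb
  FROM DISK = 'E:\backup\MyDb_full.bak'
  WITH NORECOVERY;

  -- apply each transaction log backup in sequence
  RESTORE LOG MyDb
  FROM DISK = 'E:\backup\MyDb_log1.trn'
  WITH NORECOVERY;

  -- the final log restore brings the database back online
  RESTORE LOG MyDb
  FROM DISK = 'E:\backup\MyDb_log2.trn'
  WITH RECOVERY;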
>
> I suggest you stick with RAID-10 if high availability is important in your
> environment. Disk storage is inexpensive nowadays.
The disks themselves may be cheap; in our case it is a matter of
physical space. There's room for one disk rack only, that's it. So we're
throwing in a Dell PV1000 with room for 15 drives, no more.
Some tables/indexes are very busy, being updated every few minutes. To
stop that traffic from blocking reads/writes to other tables, I have
decided to place the busiest/largest tables on separate drives. Better
to have slightly lower but more predictable and consistent performance
than to put all tables/indexes on one massive RAID-10 array, IMO.
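Concretely, the plan is one filegroup per mirrored pair, roughly like
this (all names made up):

  -- add a filegroup backed by a file on its own RAID-1 pair
  ALTER DATABASE MyDb ADD FILEGROUP FG_Busy1;

  ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_busy1, FILENAME = 'F:\data\MyDb_busy1.ndf', SIZE = 20GB)
  TO FILEGROUP FG_Busy1;

  -- create the busy table on that filegroup, off the main array
  CREATE TABLE dbo.BusyTable
  (
    id INT IDENTITY PRIMARY KEY,
    payload VARCHAR(200) NOT NULL
  ) ON FG_Busy1;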
All in all we have 3-4 table groups, along with their indexes, which are
heavily updated and inserted into, in addition to 60-80 quieter tables.
In an ideal world one could have used 8 to 10 separate RAID-10 arrays,
but that's not possible with only 14 disks (15 minus the hot spare).
So I plan to use 2 drives for the transaction log, 4 drives for a
RAID-10 array, and then create 4 RAID-1 arrays for index and table files
for some of the busiest tables (2 + 4 + 4x2 = 14 drives in all).
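On disk that would translate to something like this, creating the
filegroups up front instead of adding them later (drive letters and
sizes made up; L: is the mirrored log pair, D: the 4-drive RAID-10,
E: through H: the four RAID-1 pairs):

  CREATE DATABASE MyDb
  ON PRIMARY
    (NAME = MyDb_data,  FILENAME = 'D:\data\MyDb.mdf',       SIZE = 100GB),
  FILEGROUP FG_Busy1
    (NAME = MyDb_busy1, FILENAME = 'E:\data\MyDb_busy1.ndf', SIZE = 20GB),
  FILEGROUP FG_Busy2
    (NAME = MyDb_busy2, FILENAME = 'F:\data\MyDb_busy2.ndf', SIZE = 20GB),
  FILEGROUP FG_Busy3
    (NAME = MyDb_busy3, FILENAME = 'G:\data\MyDb_busy3.ndf', SIZE = 20GB),
  FILEGROUP FG_Busy4
    (NAME = MyDb_busy4, FILENAME = 'H:\data\MyDb_busy4.ndf', SIZE = 20GB)
  LOG ON
    (NAME = MyDb_log,   FILENAME = 'L:\log\MyDb_log.ldf',    SIZE = 20GB);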
Oh well, hopefully the new version will be fast enough even with RAID-1
and RAID-10, so we can shut down and throw out some of the other
DB servers currently occupying our server rack. That's the plan; time
will tell.
Thanks again.
Boa