Posted by John Bell on 02/11/06 23:31
Hi
I can't say I know anything about the internals of JBOD, but I would not
expect combining the drives into a single logical drive to be beneficial.
Even though the data would be spread over more spindles, mixing the access
patterns of the log files and data files on the same drives will probably
be counterproductive.
If you are seeing contention on tempdb, you may want to consider splitting
tempdb into multiple files (preferably on separate sets of spindles), as
described in http://support.microsoft.com/default.aspx?scid=kb;en-us;328551
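As a rough sketch, splitting tempdb looks something like the following. The
file names, paths, and sizes here are my own assumptions, not from the KB
article; the usual guidance is roughly one data file per CPU, all sized
equally so allocations round-robin evenly across them:

```sql
-- Sketch only: paths/sizes are hypothetical. Grow the existing data file
-- to the target size, then add equally sized files (one per CPU is a
-- common rule of thumb), ideally placed on separate sets of spindles.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'E:\tempdb\tempdev2.ndf', SIZE = 1024MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'F:\tempdb\tempdev3.ndf', SIZE = 1024MB);
```

A restart of SQL Server is needed if you also move the existing tempdb
files to a new location.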
John
"JRoughgarden" <jroughgarden@stanfordalumni.org> wrote in message
news:1139511253.211524.40670@o13g2000cwo.googlegroups.com...
> We have an application that is experiencing I/O contention,
> particularly in tempdb but also in two other databases. The data is
> stored on mirrored PowerVault 220's, each with 10 of 14 possible disks.
> The PowerVaults are JBOD devices, not true SANs. The current config has
> four separate groups of physical drives assigned to distinct logical
> drives for log files, tempdb, and the two app dbs. This means, for
> example, that tempdb resides on one mirrored drive. The standard advice
> when faced with disk contention is to add spindles if possible. With 4
> empty slots, we would presumably assign the new physical disks to the
> most stressed db, e.g. tempdb.
>
> An alternative arrangement would be to combine all the physical drives
> into one logical drive, and put all the files, log and data, onto the
> single logical drive. The hope for this configuration is that the
> PowerVault would automagically distribute the data among the drives
> such that all drives were in use, all spindles reading and writing at
> maximum capacity when necessary. It is my understanding that
> full-featured SANs, like NetApps and EMC models, do this. My question
> is whether this configuration is best for the PowerVault, as well. Or
> is this the essential difference between JBOD and a true SAN?
>
> Has anyone tried both arrangements?
>
> Advice is much appreciated.
>