Posted by Dan Guzman on 09/28/49 11:58
> I'm using a batch as I don't want to use a log-bypassed bulk insert.
> This test is run in simple recovery mode but in production it will be
> the full recovery model.
You can use a fully-logged bulk insert by changing the recovery model to
FULL; bulk insert doesn't have to bypass the log. It is generally the
fastest way to load data into SQL Server.
> I am writing the same amount of data in Oracle vs Mssql. But Mssql is
> writing a lot more data.
> snip <
> Yes - I know it's not optimal but Oracle has the same not optimal
> configuration and performs a lot faster.
Both of these issues may be due to transaction differences. If each insert
runs in its own transaction in the SQL Server job but not in the Oracle job,
SQL Server will perform many more transaction log writes. That increases
write I/O and compounds the negative effect of having data and log on the
same array.
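The effect is easy to demonstrate in miniature. The sketch below uses
Python's built-in sqlite3 as a stand-in engine (not SQL Server, and the
table and row count are made up), but the principle applies to any
write-ahead-logging database: one commit per row means one log hardening
per row, while one commit per batch means one in total.

```python
import sqlite3

N = 10_000

def load(per_row_commit):
    """Insert N rows; return the final row count."""
    con = sqlite3.connect(":memory:")  # stand-in engine; the principle is general
    con.execute("CREATE TABLE t (id INTEGER, val TEXT)")
    if per_row_commit:
        # Mimics autocommit: every insert is its own transaction,
        # so the engine must harden the log once per row.
        for i in range(N):
            con.execute("INSERT INTO t VALUES (?, ?)", (i, "x"))
            con.commit()
    else:
        # One transaction for the whole batch: a single log flush at commit.
        con.executemany("INSERT INTO t VALUES (?, ?)",
                        ((i, "x") for i in range(N)))
        con.commit()
    (count,) = con.execute("SELECT COUNT(*) FROM t").fetchone()
    con.close()
    return count
```

In JDBC terms, the batched variant corresponds to setAutoCommit(false),
addBatch()/executeBatch(), then a single commit().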
--
Hope this helps.
Dan Guzman
SQL Server MVP
"dunleav1" <jmd@dunleavyenterprises.com> wrote in message
news:1158325555.184933.9450@d34g2000cwd.googlegroups.com...
>
> Dan Guzman wrote:
>> > The same job ran in Oracle had 187 disk writes per second as
>> > compared to 3600 writes per second in Sql Server. What's up with that?
>>
>> It's hard to explain the I/O disparity without knowing the details of
>> your
>> job. My first guess is that you might be committing each SQL Server
>> insert
>> (default is autocommit) but not each Oracle insert (default is implicit
>> transactions on). This would require SQL Server to perform many more
>> transaction log writes.
> Each job (Oracle and Mssql) runs through jdbc - autocommit is on, the
> inserts are sent as a stored procedure and executed as a batch of 10000.
>>
>> > If the disks are showing no latency in the system monitor how can disk
>> > be a bottleneck?
>>
>> An I/O intensive process will generally run at the max speed of your disk
>> subsystem. Not all I/O is equal; sequential I/O is much more efficient
>> than
>> random I/O. You can typically do about twice as many sequential I/Os per
>> second than random I/Os. This is one reason why it's usually best to
>> perform large data loads using a bulk insert method.
> I'm using a batch as I don't want to use a log-bypassed bulk insert.
> This test is run in simple recovery mode but in production it will be
> the full recovery model.
>>
>> Note that your performance monitor is reporting 3670 writes/sec but,
>> according to the perf tuning technical reference, your 3 usable disks
>> should
>> only sustain about 375/sec (3 x 125). It looks to me like the write
>> caching
>> controller is skewing the numbers. Data must eventually get written to
>> disk, but since the controller writes data asynchronously, that won't
>> become
>> a bottleneck until the cache is saturated.
> I am writing the same amount of data in Oracle vs Mssql. But Mssql is
> writing a lot more data.
>>
>> You mention only a single RAID 01 array. Is the transaction log on the
>> same
>> array as the data files?
> Yes - I know it's not optimal but Oracle has the same not optimal
> configuration and performs a lot faster.
>
>>
>> --
>> Hope this helps.
>>
>> Dan Guzman
>> SQL Server MVP
>>
>> "dunleav1" <jmd@dunleavyenterprises.com> wrote in message
>> news:1158266291.118027.118880@p79g2000cwp.googlegroups.com...
>> >I built a test job that loads data into the database. I get great
>> > performance with Oracle and I'm trying to tune Sql Server to get the
>> > same type of performance.
>> >
>> > Based on system monitoring it looks like an I/O issue. So I started
>> > investigating based on the sql server performance tuning technical
>> > reference.
>> >
>> > Here are my system monitor findings:
>> > The system monitor shows during my job:
>> > disk writes per second 3670
>> > disk reads per second 0
>> > avg disk queue is .45
>> > avg disk second read 0
>> > avg disk second write 0
>> >
>> > My total i/os per second is 7200: (disk writes per second * 2) + disk
>> > reads. It's times two because I am mirroring.
>> >
>> > I have a 6 disk raid array configured Raid 1/0 with 200M of cache
>> > configured read/write. The average disk second read and write of 0
>> > means no i/o latency but my i/os per disk is high compared to
>> > Microsoft's recommendation of 125 i/o per disk. I don't see an i/o
>> > issue based on the numbers, except for the amount of disk writes per
>> > second. The same job ran in Oracle had 187 disk writes per second as
>> > compared to 3600 writes per second in Sql Server. What's up with that?
>> >
>> > Here's my calculation:
>> > i/o per disk = [reads + (2 * writes)]/number of disks
>> > i/o per disk = [0 + (2*3600)]/6 = 1200
>> >
>> > Microsoft is saying I need way more disks based on their calculation
>> > recommendation.
>> >
>> > Total i/o / 125 i/os per disk = disk needed
>> > 7200/125= 57.6 - 58 disks
>> >
>> >
>> > If the disks are showing no latency in the system monitor how can disk
>> > be a bottleneck?
>> > Based on Microsoft's recommendation - I need 58 disks to relieve the
>> > disk bottleneck.
>> > But I have no i/o latency and the disk queue length is .45.
>> >
>> > Any recommendations?
>> >
>
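For what it's worth, the disk arithmetic in the quoted post can be redone
directly; a quick sanity check in plain Python, with the figures taken
straight from the post:

```python
import math

reads, writes, disks = 0, 3600, 6      # figures from the post
total_io = reads + 2 * writes          # writes doubled for RAID 1/0 mirroring
io_per_disk = total_io / disks         # [reads + (2 * writes)] / number of disks
disks_needed = total_io / 125          # 125 I/Os-per-disk guideline

print(io_per_disk)               # 1200.0
print(disks_needed)              # 57.6
print(math.ceil(disks_needed))   # 58
```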