Reply to Re: Advice on middleware products for TRUE Scaling out of SQL Server

Posted by IanIpp on 10/02/42 11:45

All,

Thanks for the replies.

1) Sorry...yes, I did mean DPV instead of DMV! Distributed
partitioned views are a way to create an updatable view over data
that is partitioned across multiple databases/servers in SQL 2005
(which MSFT calls a federation). It sounds good in concept until you
realize that an outage of any one federated database kills access to
the table on all of them. So rather than scaling and becoming MORE
available...this solution scales at the expense of availability. On
the other hand, the middleware products I quoted above promise BOTH
scaling and availability.
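
To make the DPV point concrete, here is a rough sketch of what a
2-server federation looks like (all table, column, and server names
below are made up for illustration):

    -- Hypothetical 2-way federation of a Customers table.
    -- Each member server holds one range of CustomerID, enforced by a
    -- CHECK constraint so SQL Server can route queries to the right member.

    -- On SERVER1:
    CREATE TABLE dbo.Customers_1
    (
        CustomerID   INT NOT NULL
            CONSTRAINT CK_Customers_1 CHECK (CustomerID BETWEEN 1 AND 1000000),
        CustomerName NVARCHAR(100) NOT NULL,
        CONSTRAINT PK_Customers_1 PRIMARY KEY (CustomerID)
    );

    -- On SERVER2 (a linked server from SERVER1's point of view):
    CREATE TABLE dbo.Customers_2
    (
        CustomerID   INT NOT NULL
            CONSTRAINT CK_Customers_2 CHECK (CustomerID BETWEEN 1000001 AND 2000000),
        CustomerName NVARCHAR(100) NOT NULL,
        CONSTRAINT PK_Customers_2 PRIMARY KEY (CustomerID)
    );

    -- The distributed partitioned view, created on every member server:
    CREATE VIEW dbo.Customers
    AS
    SELECT CustomerID, CustomerName FROM SERVER1.SalesDB.dbo.Customers_1
    UNION ALL
    SELECT CustomerID, CustomerName FROM SERVER2.SalesDB.dbo.Customers_2;

The catch is right there in the UNION ALL: every query (and every
insert/update through the view) has to touch both servers, so if
either one is down the view is unusable everywhere.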

2) One other thing..."scaling up" (adding more CPUs) has a less than
linear effect on performance because context switching, contention,
and other overhead hurt performance. In other words...if you go from
a 2-CPU machine to a 4-CPU machine, you won't get double the
performance (some sources quote the last CPU as adding only 15-17%
rather than a full 50%). But "scaling out" (adding more servers) with
a middleware product (if it truly works) promises a linear increase
in performance. So it would not only be cheaper but better...and that
is always what everyone is looking for. That is why I'm hoping that
someone here has experience with such products and can comment on
them.

3) Regarding tuning queries, etc.

Yes, we have control over the code, but we already do
extensive/constant query tuning, add/adjust indexes, and regularly
use the Database Engine Tuning Advisor (see my post here for some of
the existing bugs I've found in SQL 2005's DTA:
http://rentacoder.com/CS/blogs/real_life_it/archive/2006/03/17/447.aspx
). We also update statistics and defrag the indices (and rebuild the
ones that can't be defragged). I have open tickets on 2 bugs where
indices aren't defragged even after rebuilding...and not on small
tables, but on large ones with thousands of pages of data. I'll
update my blog once MSFT gives more information on what is going on.
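
For anyone curious, this is roughly the kind of maintenance I'm
talking about (a sketch only; the table and index names are
hypothetical):

    -- Check fragmentation on indexes big enough to matter:
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.page_count > 1000;

    -- Defrag (reorganize) a lightly fragmented index in place:
    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;

    -- Rebuild a heavily fragmented one from scratch:
    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

    -- Refresh the statistics afterwards:
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;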

But if you are growing, then tuning queries, defragging indices, etc.
can only get you so far. Eventually you WILL run into the limitations
of your hardware. Guaranteed. So it's not a true solution...it just
delays the inevitable.

Regarding:
>>HP ProLiant DL585-G1 128GB/2.4GHz/DC/4P
>>Availability Date 11/08/05
>>TPC-C Throughput 202,551
>>http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=105100101
>>Are you doing more than 100,000 transactions/minute in an OLTP system
>>and can't pay an 8 way machine? Then I guess your problem is other, not
>>with SQL2K5.

I looked at this result and was encouraged for a minute that perhaps
we might be able to make better use of the hardware "somehow". But
then I looked deeper. I have to wonder how applicable these numbers
are to "real life". Maybe I'm off base, but our machine is the
top-of-the-line Dell quad-processor/dual-core model, and it comes in
at about $35k (just the machine...no software licenses). The machine
in this benchmark was priced at half a million dollars ($500,000)!
What are they running this thing on? My understanding is that
Microsoft devotes an entire team to doing exotic things to the
hardware that companies without PhDs in computer engineering and
system design cannot do. I have also heard (but don't know if it's
true) that they add features to SQL Server after seeing the benchmark
workload to make it perform well...and if so, that is something else
no one else can do. If I'm wrong about any of this, someone please
correct me. If I'm essentially right, then it's not reasonable to
expect these rates.

So in that vein...I'd be interested to hear from anyone with "real
life" implementations about the TPS they are achieving. First, what
is the best way to measure TPS? I found the perfmon counter "Batch
Requests/sec"...is that what others use? If so, then we are at about
6,000-8,000/minute on a 64-bit quad-processor/dual-core machine,
currently at 70-80% CPU capacity. That is far below the
202,000/minute of the TPC benchmark. What do other people get on this
stat?
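
In case it helps anyone compare apples to apples, something like this
should work to sample that counter from T-SQL (just a sketch; the
counter is cumulative, so you have to diff two readings):

    -- 'Batch Requests/sec' is a cumulative counter in
    -- sys.dm_os_performance_counters, so take two samples and subtract.
    DECLARE @c1 BIGINT, @c2 BIGINT;

    SELECT @c1 = cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Batch Requests/sec';

    WAITFOR DELAY '00:01:00';   -- sample over one minute

    SELECT @c2 = cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Batch Requests/sec';

    SELECT @c2 - @c1 AS batches_per_minute;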

4) Does anyone have experience with a read-only reporting database in
a real-life situation? In reading some papers, it seems that using
replication to do this will severely slow down your inserts and
updates (one paper quoted 50-80%). That isn't a realistic solution.
Another possible solution is mirroring and reporting off a database
snapshot, but if you do this, Microsoft won't support your database
anymore (also not realistic). Maybe log shipping is the best
way...what is your real-life experience?
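
If log shipping does turn out to be the answer, my understanding is
that restoring WITH STANDBY is what keeps the secondary readable
between restores. A rough sketch (all file paths and logical file
names below are made up):

    -- 1) Initialize the reporting copy from a full backup, leaving it in
    --    standby (readable) mode:
    RESTORE DATABASE SalesDB
    FROM DISK = N'\\backupshare\SalesDB\SalesDB_full.bak'
    WITH STANDBY = N'D:\MSSQL\Undo\SalesDB_undo.dat',
         MOVE N'SalesDB_Data' TO N'D:\MSSQL\Data\SalesDB.mdf',
         MOVE N'SalesDB_Log'  TO N'E:\MSSQL\Log\SalesDB.ldf';

    -- 2) On each cycle, apply the next log backup the same way. Readers
    --    get disconnected while the restore runs, then the database is
    --    read-only again until the next cycle.
    RESTORE LOG SalesDB
    FROM DISK = N'\\backupshare\SalesDB\SalesDB_20060401_1200.trn'
    WITH STANDBY = N'D:\MSSQL\Undo\SalesDB_undo.dat';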

5) Does anyone have experience using these middleware products, which
promise better performance and price than traditional scaling up, and
more availability than traditional "scaling out" via DPVs or a
read-only database?

Ian Ippolito
www.RentACoder.com
