Re: insert statement blocked

Posted by Hugo Kornelis on 11/27/35 11:59

On 26 Sep 2006 14:38:25 -0700, nano2k wrote:

>Hugo, Dan
>Thanks for your patience with this topic.
>
>I understand and agree with all your advice.
>
>My struggle, at least for the moment, is to make the application as
>responsive as possible while still keeping the data secure.
>Let's say that for the moment we don't care how much the SP needs to
>process - we care only to let other users work while the SP is running.
>The SP will be run only a few times a month, but there are 2-3 fixed
>days of the month when the SP needs to perform - it's all about
>accounting :) I only want to give other users the ability to work while
>the SP runs.

Hi nano2k,

There are several ways to achieve that:

- Run the SP at a time when no inserts are done.
- If the SP only does reporting - run the SP on a copy of the database,
restored from a recent backup. Since your SP does updates, you can't use
this option.
- Make the SP run as fast as possible. With only tens of thousands of
rows in the database, you should be able to get performance in terms of
minutes, maybe even less. Definitely not hours.
- Make sure that all the right indexes are there to enable the processes
to run in parallel without blocking each other. This can only be done
with access to the complete code - i.e. all tables, constraints, and
indexes, the stored proc, and the code used for inserts. You've already
said that the code is big (long and complex), but not how big. My
definition of long and complex might differ from yours. That being said,
it is definitely possible that your code is too long and complex for help
via usenet groups - in that case, you either have to do it yourself or
(if you lack the skills) hire a consultant to do it for you. See the small
index sketch below for the kind of thing I mean.
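
For instance, something along these lines (the table and column names are
invented for illustration - I obviously don't know your schema):

-- Index on the columns the SP filters and joins on, so the SP only touches
-- the rows it needs and is less likely to block concurrent inserts.
CREATE NONCLUSTERED INDEX IX_Invoices_CustomerID_InvoiceDate
   ON dbo.Invoices (CustomerID, InvoiceDate);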

>I need uniqueidentifiers because my app is a three-tier client-server
>app (client/server/db) that needs to generate unique IDs from the client
>side. The client app may generate several records from different tables
>that will be sent all together in a single request to the server. The
>records need to be already coupled (that is, foreign keys must already
>be known) before they are sent to the server for processing.

You're freely mingling real end-user requirements and implementation
choices here. uniqueidentifiers are never an end-user requirement. They
can be the best solution, but I'm not sure they are here. The only real
requirements I read here are insertion of new data from several clients,
and sending multiple related rows on a single access to keep network
traffic low.

It's not a given that you need surrogate keys for this. They can be
handy in some cases, but in other cases, using only the business key is
preferable. And even if you do need surrogate keys, then you can still
use identity values (with one of several standard techniques to prevent
duplicates when identity values get generated at different locations, if
you're in a replicated scenario [which I don't think is the case]).
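
One such standard technique, just as a sketch (table and column names are
made up): give each location its own seed and use an increment equal to the
number of locations, so the generated values can never collide.

-- At location 1: values 1, 11, 21, ...
CREATE TABLE dbo.Orders
  (OrderID    int IDENTITY(1, 10) NOT NULL PRIMARY KEY,
   CustomerID int NOT NULL,
   OrderDate  datetime NOT NULL);

-- At location 2 the same table is created with IDENTITY(2, 10), giving
-- values 2, 12, 22, ... - and so on for the other locations.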

> Anyway, I have
>set a task to reindex the database every night, so I think this will
>reduce the drawback of using uniqueidentifiers - do you agree? Is there
>any other action I should schedule?

If you choose a low fill factor, the number of page splits will go down.
This may give some relief. OTOH, it will also spread the data over more
pages, increasing the amount of I/O needed for any request.
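
For example (index and table names invented), your nightly reindex job could
rebuild with an explicit fill factor:

-- Rebuild the index and leave ~20% free space on each page, so random
-- uniqueidentifier inserts cause fewer page splits during the day.
DBCC DBREINDEX ('dbo.Invoices', 'PK_Invoices', 80);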

>Dan, my uniqueid column (named [objectid]) is the only key, and at
>least at the moment, this column is set as primary key (ok? not ok?).

Primary key or not is irrelevant for now - having only a generated value
as key is wrong. Google for some of my previous musings on surrogate key
vs business key, and for similar (and, to be honest, also a few
contradicting) opinions by others.

That being said, the index created to check the primary key defaults to
being clustered. You don't want to cluster on uniqueidentifier. Change
this index to nonclustered, then find a more sensible column or
combination of columns for the clustered key - even if that means a
nonunique index.
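
In code, that would look roughly like this (names are placeholders; if other
tables have foreign keys referencing this primary key, those would have to be
dropped and recreated around these statements):

-- Drop the clustered primary key on the uniqueidentifier column...
ALTER TABLE dbo.Invoices DROP CONSTRAINT PK_Invoices;

-- ...put the clustered index on a narrower, more query-friendly column
-- (it doesn't have to be unique)...
CREATE CLUSTERED INDEX CIX_Invoices_InvoiceDate
   ON dbo.Invoices (InvoiceDate);

-- ...and recreate the primary key as a nonclustered index.
ALTER TABLE dbo.Invoices
   ADD CONSTRAINT PK_Invoices PRIMARY KEY NONCLUSTERED (objectid);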

>My protection to duplicate entries is handled by the client
>application. 99% of the business logic is concentrated in client app.

Bad. Constraints should always be enforced at the DB level. You can ALSO
enforce them in the client, but never forget the DB - that way, if an
attacker compromises the security of your client, (s)he is still unable
to bypass your basic integrity checks.
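
A few examples of what I mean (again, invented names - translate to your own
schema):

-- No duplicate business keys, even if the client app misbehaves.
ALTER TABLE dbo.Invoices
   ADD CONSTRAINT UQ_Invoices_InvoiceNumber UNIQUE (InvoiceNumber);

-- Basic domain rule enforced by the server.
ALTER TABLE dbo.Invoices
   ADD CONSTRAINT CK_Invoices_Amount CHECK (Amount >= 0);

-- Referential integrity between related tables.
ALTER TABLE dbo.InvoiceLines
   ADD CONSTRAINT FK_InvoiceLines_Invoices
   FOREIGN KEY (InvoiceID) REFERENCES dbo.Invoices (InvoiceID);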

>But today, I have discovered that the SP heavily uses CURSORS :((

I'm not surprised. That is about the only way to get 2-hour performance
on a database with less than 100K rows.

>This is the other big issue I have to deal with. Check out this piece
>of code (one of 3 CURSORS defined in SP):
(snip)
>I want to change this in the following manner: to create a temporary
>table and to insert the results of the above SELECT statement into this
>temp table. After that, using a cursor, I intend to fetch the records
>from the temp table. This way, the tables involved in the SELECT stm
>above will be locked for as short a time as possible. Do you think this
>is a good
>approach?

No. You're standing behind a Formula 1 racing car and pushing it, and
you're proposing to apply some grease to the axles to go faster. You
should get in, start the engine, and hit the pedal.

There are several ways to improve cursor performance (such as changing
to FAST_FORWARD READ_ONLY, using the temp table you propose, or other
techniques). Some of them will make things worse. Others might give you
some performance gain. But that's just peanuts compared to what you can
probably gain by rewriting the complete stored procedure to operate in a
set-based manner.
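
Since the actual cursor code was snipped above, here is only a made-up
illustration of the principle (dbo.Invoices and dbo.Payments are invented):
a row-by-row loop that updates a running total per invoice can usually be
collapsed into a single statement.

-- One set-based UPDATE instead of FETCH / UPDATE inside a WHILE loop.
UPDATE i
SET    PaidAmount = p.TotalPaid
FROM   dbo.Invoices AS i
JOIN  (SELECT InvoiceID, SUM(Amount) AS TotalPaid
       FROM dbo.Payments
       GROUP BY InvoiceID) AS p
  ON   p.InvoiceID = i.InvoiceID;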

>Note: In the SELECT statement above: ALL tables involved have a primary
>key defined as uniqueidentifier and named objectid.

I would also encourage you to revise your naming standards. I'll have to
point you to google again, since my post is already quite long, and it's
time for me to get some sleep.

--
Hugo Kornelis, SQL Server MVP

 
