Hi,
The bigger my C# web application gets, the more places I need to add the
tedious retry block of code to make sure operations that can run into
database deadlocks are retried 3-4 times, giving up if the deadlock
persists after that. I'm sure that many experienced people out there
already deal with this issue somehow. Is there an alternative?
Thanks for your comments and suggestions.|||Most deadlocks are due to poor
indexing and to updating tables in an inconsistent order. You may want to
concentrate more on the database schema and code than on the C# code.
Andrew J. Kelly SQL MVP
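To illustrate the ordering point: if every code path that touches the same tables locks them in the same sequence, a cyclic wait cannot form. A minimal T-SQL sketch (the table and procedure names are made up for illustration):

```sql
-- Both procedures touch Accounts before AuditLog, so two concurrent
-- callers may block each other briefly but cannot deadlock on this pair.
CREATE PROCEDURE dbo.DebitAccount @AccountId int, @Amount money AS
BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - @Amount
        WHERE AccountId = @AccountId;                 -- lock Accounts first
    INSERT dbo.AuditLog (AccountId, Delta)
        VALUES (@AccountId, -@Amount);                -- then AuditLog
COMMIT;
GO

CREATE PROCEDURE dbo.CreditAccount @AccountId int, @Amount money AS
BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance + @Amount
        WHERE AccountId = @AccountId;                 -- same order: Accounts ...
    INSERT dbo.AuditLog (AccountId, Delta)
        VALUES (@AccountId, @Amount);                 -- ... then AuditLog
COMMIT;
GO
```

A deadlock needs two sessions each holding a lock the other wants; with a single agreed order one session simply waits for the other to finish.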
"Zeng" <Zeng5000@.hotmail.com> wrote in message
news:%230DDr25yFHA.1040@.TK2MSFTNGP14.phx.gbl...
|||Thanks for the advice. I understand that indexing and the update order of
the tables contribute to deadlocks. My question is: is it possible to make a
large application deadlock-free? If yes, please share your tips on how to
ensure that: what guidelines or disciplines should we follow to ensure
tables are always updated in the correct order when stored procedures,
triggers, and direct queries can all hit the db at the same time?
If deadlock-free operation is not guaranteed, then it sounds to me that we
would need to put retry blocks into the code - no other way around.
Hope to hear back from you, thanks again.
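For what it's worth, the repeated retry block described here can at least be factored into one reusable helper so it isn't copied all over the application. A sketch, assuming ADO.NET against SQL Server, where error number 1205 identifies the deadlock victim; the `Db.WithDeadlockRetry` name is made up:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

static class Db
{
    // 1205 is the error number SQL Server returns to the deadlock victim.
    private const int DeadlockErrorNumber = 1205;

    // Runs the given database action, retrying up to maxRetries times when
    // this connection is chosen as a deadlock victim; rethrows anything else.
    public static void WithDeadlockRetry(Action action, int maxRetries)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                action();
                return;
            }
            catch (SqlException ex)
            {
                if (ex.Number != DeadlockErrorNumber || attempt >= maxRetries)
                    throw;
                Thread.Sleep(100 * attempt);  // brief backoff before retrying
            }
        }
    }
}
```

Call sites then shrink to `Db.WithDeadlockRetry(delegate { /* open connection, run command */ }, 4);`. Note the whole unit of work must be re-runnable from the top, since the victim's transaction was rolled back.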
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:uTuJKz6yFHA.3892@.TK2MSFTNGP12.phx.gbl...
|||I don't think there is ever a way to make an app completely deadlock-free,
but you can certainly do some things to prevent most occurrences. Maybe
these will help:
http://www.sql-server-performance.com/deadlocks.asp
http://www.sql-server-performance.c...sql_locking.asp
http://support.microsoft.com/kb/q169960/
http://www.codeproject.com/cs/database/sqldodont.asp
Andrew J. Kelly SQL MVP
"Zeng" <Zeng5000@.hotmail.com> wrote in message
news:eG6qrXCzFHA.2556@.TK2MSFTNGP10.phx.gbl...
|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:u0osvSDzFHA.2652@.TK2MSFTNGP14.phx.gbl...
Just a note: there is help on the way for deadlocks. Running on SQL Server
2005 with Read Committed Snapshot Isolation should automatically eliminate
the vast majority of deadlocks.
David|||Why do you say that? It basically stops readers from blocking
writers and vice versa. It does nothing to prevent writers from blocking
writers in the reverse order.
Andrew J. Kelly SQL MVP
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:OmRErFFzFHA.1256@.TK2MSFTNGP09.phx.gbl...
|||If you have more than one connection to the database, then deadlocks can
happen. Deadlocks can occur on resources other than just locks. Threads
and memory are some examples. Sometimes parallelism can cause deadlocks.
These other kinds of deadlocks are rare, and usually occur on servers with
very heavy loads, but they can occur. Therefore, unless you take no pride
in your work, you should take into account that they can occur in your code.
Retry blocks are one possible solution, and can also be used to recover from
optimistic concurrency collisions. Another would be to return an error
message to the user, and let them resubmit the transaction.
Most deadlocks occur because resources aren't updated in the same order.
This problem is amplified by frequent scans due to a lack of indexes. In my
experience, the best way to mitigate the effect of deadlocks is to execute
transactions within stored procedures. If the transaction is enclosed in a
stored procedure, then most often it's a simple matter to fix the problem.
Usually rearranging statements or adding a simple select statement can be
used to alter the order in which locks are obtained. That's a lot more
difficult to do in client code that needs to be redeployed to tens or
hundreds of workstations. There are several other reasons to issue updates
within stored procedures. It's easier to issue set-based operations within
a procedure--just send the changes in temp tables or (frown) block the
changes in a large varchar parameter that can be parsed within the procedure
and then issue set-based statements to commit the changes. Using stored
procedures makes it easier to wait until the last possible instant to start
the transaction. Set-based operations are much faster and more scalable
than row-based operations because triggers fire only once, updates to
indexes can be optimized, and transaction logging is minimized--all of which
can serve to minimize the duration of transactions which will consequently
reduce the probability of deadlocks. If you must use a cursor, then it's
best to cache the results in temp tables or table variables and flush them
using set-based operations. Another technique is to use optimistic
concurrency with rowversioning, which is again simpler to accomplish in a
stored procedure. All of the work required to calculate the results to be
committed is done under the READ COMMITTED isolation level, with the max
rowversion of each row source cached in a local variable and the results
cached in table variables, so that all that is left to do after the
transaction is started is to lock the source rows with REPEATABLE READ
(verifying at the same time that the max rowversion hasn't changed), to
apply update locks on all rows to be modified or deleted, and finally to
issue the statements that commit the changes.
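A condensed sketch of the rowversion pattern just described; the table `dbo.Orders`, its `rowversion` column `rv`, and the parameter names are made up for illustration:

```sql
-- Caller reads the row (and its rv value) earlier, under READ COMMITTED,
-- computes @NewStatus, then calls this to commit only if nothing changed.
CREATE PROCEDURE dbo.SetOrderStatus
    @OrderId   int,
    @NewStatus int,
    @rv        binary(8)   -- rowversion observed at read time
AS
BEGIN TRANSACTION;
UPDATE dbo.Orders WITH (UPDLOCK)
   SET Status = @NewStatus
 WHERE OrderId = @OrderId
   AND rv = @rv;           -- matches 0 rows if another writer got there first
IF @@ROWCOUNT = 0
BEGIN
    ROLLBACK;              -- optimistic collision: caller re-reads and retries
    RAISERROR('Row was modified by another transaction.', 16, 1);
END
ELSE
    COMMIT;
GO
```

The transaction is opened at the last possible instant and holds locks only for the duration of a single indexed UPDATE, which keeps the deadlock window small.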
To summarize: (1) Enclose transaction processing within stored procedures.
(2) Make sure that you obtain locks in the same order in every procedure,
trigger, function, or process. (3) Wait until the last possible instant to
start a transaction. And (4) keep transaction duration as short as possible
by using set-based operations, by caching and flushing, and/or by using
optimistic concurrency.
One other thing: redundancy in a database can increase the probability of
deadlocks. Make sure your database schema conforms at a minimum to
Boyce-Codd normal form. And (This will probably cause Celko to get out his
flame thrower!) use surrogate keys instead of natural keys in DRI
relationships to eliminate the redundancy inherent in natural foreign keys.
A database that is in 5th normal form and that uses surrogate keys correctly
has the additional property that each extrinsic atomic value exists in
exactly one place in the database. Redundancy can thus be completely
eliminated from the database.
"Zeng" <Zeng5000@.hotmail.com> wrote in message
news:eG6qrXCzFHA.2556@.TK2MSFTNGP10.phx.gbl...
|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:ODkckFHzFHA.2796@.TK2MSFTNGP10.phx.gbl...
Here are my reasons:
1. My feeling is that the majority of deadlocks involve a shared lock and
will be directly eliminated by not issuing shared locks. For the purposes of
deadlock elimination, READ COMMITTED SNAPSHOT isolation is like putting
NOLOCK on every query, which is a common remedial measure for deadlocks.
2. Across-the-board improvements in concurrency reduce deadlocks
automatically. This is for the same reason that only busy databases
deadlock. Eliminating S locks will cause fewer lock waits for writing
transactions, which will cause them to hold their X locks for less time,
reducing the window in which they would be vulnerable to a deadlock.
3. Most of the remaining deadlock scenarios are pretty simple and can be
considered coding errors or table design errors.
4. READ COMMITTED SNAPSHOT is very similar to Oracle's multiversion read
consistency, and deadlocks are extremely rare in Oracle.
David|||I just hate the thought that people will flock to snapshot isolation
because it is perceived to be easier and better than read committed. Most
developers I meet who use Oracle do not realize what that (or this new
isolation level) means with regard to the data they are returning. Too many
make decisions based on the data returned without regard to the fact that
the data may in fact be changing underneath them. I am not saying there
won't be proper times to use this, as everything has its place. But I
already see too many people who think it will solve all their problems
automatically. Anyway, enough of this, as it is one of those religious-war
kind of topics <g>.
Andrew J. Kelly SQL MVP
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:OwWPX0NzFHA.2212@.TK2MSFTNGP15.phx.gbl...
|||Would you be able to give me a pointer to where I can find more
information about set-based operations, such as how to implement them in SQL
Server 2000, and pro/con comparisons with row-based operations?
Thank you very much for the detailed guidance.
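As a small illustration of the difference being asked about, here is a row-by-row cursor and the equivalent single set-based statement; the table and column names are made up, and both forms work on SQL Server 2000:

```sql
-- Row-based: one UPDATE (one trigger firing, one lock acquisition,
-- one log record) per row, holding locks for the whole loop.
DECLARE @Id int;
DECLARE c CURSOR FOR
    SELECT ProductId FROM dbo.Products WHERE Discontinued = 0;
OPEN c;
FETCH NEXT FROM c INTO @Id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Products SET Price = Price * 1.10 WHERE ProductId = @Id;
    FETCH NEXT FROM c INTO @Id;
END
CLOSE c;
DEALLOCATE c;

-- Set-based: one statement, triggers fire once, far shorter lock duration.
UPDATE dbo.Products SET Price = Price * 1.10 WHERE Discontinued = 0;
```

The shorter the statement holds its locks, the smaller the window in which another transaction can form a deadlock cycle with it.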
"Brian Selzer" <brian@.selzer-software.com> wrote in message
news:unvwe4HzFHA.1132@.TK2MSFTNGP10.phx.gbl...
Sunday, March 11, 2012
deadlock - retrying the transaction