Tuesday, March 27, 2012

Deadlocks & Transaction Isolation Level

If I understand correctly, when using the Read Committed isolation level, the
most common reason for a deadlock is that two processes update a set of
tables in a different order. However, it seems that when using the Repeatable
Read isolation level, the odds of a deadlock increase significantly.
For example, open Management Studio and create two different connections
against the AdventureWorks database. In both connections execute the
following:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
Begin Tran
SELECT EmployeeID From HumanResources.Employee Where EmployeeID = 1
Then, in the first connection execute the following (but do not commit the
transaction):
Update HumanResources.Employee Set MaritalStatus = 'M' Where EmployeeID = 1
In the second connection execute the exact same line. This will cause a
deadlock error in the second connection. We get a deadlock even though both
processes are performing the exact same action in the exact same order.
This example may not be the best, but is my assumption correct that when
using Repeatable Read, the likelihood of a deadlock error is greater than
when using Read Committed?
Thanks, Amos.

> when using the Read Committed isolation level, the most common reason for a
> deadlock is that two processes update a set of tables in a different order.

Not exactly. Two processes may update rows in only one table and still end up
in a deadlock.
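For example (a hypothetical illustration, not from the original post), two
connections can deadlock against the single HumanResources.Employee table
simply by touching the same two rows in opposite order:

-- Connection 1: lock row 1, then try for row 2.
BEGIN TRAN;
UPDATE HumanResources.Employee SET MaritalStatus = 'M' WHERE EmployeeID = 1;
-- (pause here, then run the next UPDATE)
UPDATE HumanResources.Employee SET MaritalStatus = 'M' WHERE EmployeeID = 2;
COMMIT TRAN;

-- Connection 2: lock row 2, then try for row 1 -- the opposite order.
BEGIN TRAN;
UPDATE HumanResources.Employee SET MaritalStatus = 'S' WHERE EmployeeID = 2;
-- (pause here, then run the next UPDATE)
UPDATE HumanResources.Employee SET MaritalStatus = 'S' WHERE EmployeeID = 1;
COMMIT TRAN;

Each connection holds an exclusive lock on one row and waits for the row the
other connection holds, so one of them is chosen as the deadlock victim.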
Hi Amos,

Yes, your understanding is correct. Using a higher isolation level like
repeatable read comes with tradeoffs.
In read committed, the locks taken by the SELECT are released as soon as the
SELECT finishes. In repeatable read, the SELECT's shared locks are held until
the end of the transaction. The good news is that each transaction is
guaranteed to read the same data throughout the transaction. The bad news is
that there is a greater chance of deadlock. Each connection holds a shared
lock on the row in the Employee table and wants an exclusive lock on it.
Neither can get the exclusive lock because the other holds the shared lock,
so you have a deadlock.
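If you want to see those locks for yourself while the two transactions are
still open, a quick sketch (assuming SQL Server 2005 or later, where the
sys.dm_tran_locks DMV is available) is:

SELECT request_session_id,
       resource_type,
       request_mode,      -- S = shared, U = update, X = exclusive
       request_status     -- GRANT or WAIT
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('AdventureWorks')
ORDER BY request_session_id;

Just before the deadlock you should see both sessions holding granted S locks
on the same KEY resource, each with a further lock request waiting.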
One of the first suggestions we give for reducing deadlocks is to lower your
isolation level; in this case, bring it back to read committed.
Another solution here would be to use an UPDLOCK hint when you do the
select. Then the first process would get an update lock, not a shared lock,
and when the second process tried to get the update lock, it would be
blocked. The first process could then get the exclusive lock and do the
update operation, and finish the transaction. Then the second process could
get first the update lock, then the exclusive lock, and then finish, with no
deadlock occurring.
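In the scenario from the original post, that would look something like the
following in both connections (a sketch based on the repro above, with the
UPDLOCK hint added to the SELECT):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;

-- Take an update (U) lock instead of a shared (S) lock on the row.
SELECT EmployeeID
FROM HumanResources.Employee WITH (UPDLOCK)
WHERE EmployeeID = 1;

UPDATE HumanResources.Employee SET MaritalStatus = 'M' WHERE EmployeeID = 1;

COMMIT TRAN;

Because only one session at a time can hold the U lock on the row, the second
connection simply blocks at the SELECT instead of deadlocking.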
--
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com
"Amos Soma" <amos_j_soma@.yahoo.com> wrote in message
news:%23$22cVEKGHA.3960@.TK2MSFTNGP09.phx.gbl...
> If I understand correctly, when using a Read Committed isolation level,
> the most common reason for a deadlock is because two processes update a
> set of tables in different order. However, it seems that when using a
> Repeatable Read isolation level, the odds of a deadlock increase
> significantly.
> For example, open Management Studio and create two different connections
> against the AdventureWorks database. In both connections execute the
> following:
> SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
> Begin Tran
> SELECT EmployeeID From HumanResources.Employee Where EmployeeID = 1
>
> Then, in the first connection execute the following (but do not commit the
> transaction):
> Update HumanResources.Employee Set MaritalStatus = 'M' Where EmployeeID => 1
> In the second connection execute the exact same line. This will cause a
> deadlock error in the second connection. We get a deadlock even though
> both processes are performing the exact same action in the exact same
> order.
> This example may not be the best but is my assumption correct that when
> using Repeatable Read, the likelihood of a deadlock error is greater than
> when using Read Committed?
> Thanks, Amos.
>
>
