Tuesday, March 27, 2012

Deadlocks

Hi,
I have recently redirected a query that used to run on my main server to
another server that is a Pull Subscriber to a Transactional Replication from
the main server.
I now face deadlocks from time to time when the query runs. The deadlocked
process is the process executing the query. I never faced such a deadlock
when the query was running on the main server.
Any ideas?
I was thinking of either adding NOLOCK or setting the transaction isolation
level to READ UNCOMMITTED.
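For illustration, a minimal sketch of the two options I have in mind
(dbo.Orders and its columns stand in for the real query):

-- Option 1: hint the individual table(s) in the query
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK)
WHERE Status = 'OPEN';

-- Option 2: change the isolation level for the whole session/batch
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, Status
FROM dbo.Orders
WHERE Status = 'OPEN';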
Yaniv|||Yaniv, shalom
http://www.sql-server-performance.com/deadlocks.asp
"Yaniv" <yanive@.rediffmail.com> wrote in message
news:O63ZcRGDGHA.376@.TK2MSFTNGP10.phx.gbl...
> Hi,
> I have recently directed a query that used to run on my main server to
> another server that is a Pull Subscriber to a Transactional Replication
> from the main server.
> I now face deadlocks from time to time when the query is being run. The
> deadlocked process is the proccess executing the query. I never faced such
> a deadlock when the query was running on the main server.
> Any ideas?
> I was thinking to add either NOLOCK or set the transaction isolation level
> to READUNCOMITED.
>
> Yaniv
>
|||NOLOCK or READ UNCOMMITTED will make your queries return incorrect results
at lightning speed. It would be better to find out what is wrong with the
query and to fix it.
READ COMMITTED SNAPSHOT is very good for reporting, but shouldn't be used to
calculate modifications of the database because the results can become stale
between the read and the write. It's too bad that it's not available in SQL
2000.
READ COMMITTED is usually sufficient for reporting, unless aggregates are
used, in which case you should use SERIALIZABLE.
If your query is used to calculate modifications of the database, then READ
COMMITTED is not sufficient.
REPEATABLE READ is insufficient if the query used to calculate modifications
of the database includes aggregates.
All of these statements depend on how tolerant your queries (and users) are
to error (in the scientific sense). For example, if you have 1 million rows
and you're calculating an average, then it really doesn't matter if a few
rows change, appear, or disappear during the calculation, so READ COMMITTED
should be OK in that case. The bottom line is that the ISOLATION LEVEL
should be determined on a case-by-case basis.
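To make that concrete, a rough sketch of the per-case choices (the database
and table names are placeholders; READ_COMMITTED_SNAPSHOT needs SQL Server
2005 or later, as noted above):

-- Reporting against a versioned snapshot (SQL Server 2005+ only);
-- easiest to switch on when no other connections are active:
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;

-- Reporting that aggregates and must not see rows move under it:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT CustomerID, SUM(Amount) AS Total
FROM dbo.Orders
GROUP BY CustomerID;

-- Reporting where a little drift is tolerable (the million-row average):
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT AVG(Amount) FROM dbo.Orders;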
"Yaniv" <yanive@.rediffmail.com> wrote in message
news:O63ZcRGDGHA.376@.TK2MSFTNGP10.phx.gbl...
> Hi,
> I have recently directed a query that used to run on my main server to
> another server that is a Pull Subscriber to a Transactional Replication
> from the main server.
> I now face deadlocks from time to time when the query is being run. The
> deadlocked process is the proccess executing the query. I never faced such
> a deadlock when the query was running on the main server.
> Any ideas?
> I was thinking to add either NOLOCK or set the transaction isolation level
> to READUNCOMITED.
>
> Yaniv
>
|||In my case I can afford to issue this query under READ UNCOMMITTED, but on
second thought I find READPAST more suitable.
As for finding the cause of the problem, I have made no progress yet. This
query has been used for a long time on my main server without experiencing a
deadlock.
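Roughly what I have in mind with READPAST, plus one way to start chasing the
cause (the table name is a placeholder; READPAST skips locked rows, so the
result set can be incomplete, and on SQL 2000 it only applies under READ
COMMITTED):

-- Skip rows that are locked instead of waiting on them:
SELECT OrderID, Status
FROM dbo.Orders WITH (READPAST)
WHERE Status = 'OPEN';

-- Write deadlock participant details to the SQL Server error log
-- (server-wide; switch it back off with DBCC TRACEOFF (1204, -1)):
DBCC TRACEON (1204, -1);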
--
"Brian Selzer" <brian@.selzer-software.com> wrote in message
news:exay9MHDGHA.1280@.TK2MSFTNGP10.phx.gbl...
