Tony’s Oracle Tips

June 22, 2010

Single table deadlocks

Filed under: Uncategorized — tonyhasler @ 8:08 pm

One of the main things we all learn when we are first introduced to Oracle is the concurrency control model.  One aspect of this is deadlock avoidance: we know that when locking rows in multiple tables we need to adopt some kind of locking-order strategy to avoid deadlocks.

The 10.2 concepts manual has this to say:

Multitable deadlocks can usually be avoided if transactions accessing the same tables lock those tables in the same order, either through implicit or explicit locks. For example, all application developers might follow the rule that when both a master and detail table are updated, the master table is locked first and then the detail table. If such rules are properly designed and then followed in all applications, deadlocks are very unlikely to occur.
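The master-first rule might look like this in practice (the table and column names here are invented for illustration):

```sql
-- Both transactions lock the master row first...
SELECT 1 FROM orders WHERE order_id = 42 FOR UPDATE;

-- ...and only then touch the detail rows, so two sessions
-- working on the same order queue up instead of deadlocking.
UPDATE order_lines SET qty = qty + 1 WHERE order_id = 42;

COMMIT;
```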

Unlikely, but not impossible as it turns out.

I am going to assume for the rest of this note that the transaction isolation level has been left at the default of READ COMMITTED.  In other words, that you have not issued a

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

statement.
Suppose you have one session that issues the following statement:

UPDATE T1 SET C1='A' WHERE C2='B'; -- Statement 1

and another issues the statement:

UPDATE T1 SET C3='C' WHERE C4='D'; -- Statement 2

where C1, C2, C3 and C4 are columns of the table T1.  Let us also stipulate that these are the only statements in their respective transactions.

Is a deadlock possible?  The answer is YES.  Suppose that an index is available on C2 but not on C4.  Suppose further that there are two rows R1, R2, where both C2=’B’ and C4=’D’.  Statement 1 may access the table using an index and statement 2 may use a full table scan.  Therefore, statement 1 may access R1 followed by R2 whereas statement 2 may access R2 followed by R1 (or vice versa).
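If you want to reproduce this, a setup along the following lines (my own invention, purely for illustration) gives the optimizer the choice of access paths described above:

```sql
CREATE TABLE t1 (c1 VARCHAR2(1), c2 VARCHAR2(1),
                 c3 VARCHAR2(1), c4 VARCHAR2(1));
CREATE INDEX t1_i1 ON t1 (c2);                -- index on C2 only

INSERT INTO t1 VALUES ('X', 'B', 'X', 'D');   -- R1
INSERT INTO t1 VALUES ('X', 'B', 'X', 'D');   -- R2
COMMIT;

-- Session 1 (index range scan on C2) may lock R1 then R2;
-- Session 2 (full table scan, no index on C4) may lock R2 then R1.
-- Interleave the two UPDATEs and one session gets ORA-00060.
```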

As the concepts manual implies, it is quite straightforward for an application to avoid multi-table deadlocks by careful design.  Therefore when a deadlock does arise it is reasonable for an error to be returned.  But what about single table deadlocks?

As far as I can see, there are two ways to deal with this.  Firstly, we could control the way Oracle accesses the table by adding hints:

UPDATE /*+ FULL(T1) */ T1 SET C1='A' WHERE C2='B'; -- Statement 1

UPDATE /*+ FULL(T1) */ T1 SET C3='C' WHERE C4='D'; -- Statement 2

By ensuring that the two statements access the table in the same way we can ensure that rows are accessed in the same order.  However, this is heresy to some SQL declarative programming fundamentalists and may be highly inefficient.

The other way to deal with this is to write a loop that retries the statement when ORA-00060 is raised (note that EXCEPTION_INIT takes the negative error number, -60):

DECLARE
   deadlock_detected   EXCEPTION;
   PRAGMA EXCEPTION_INIT (deadlock_detected, -60);
BEGIN
   LOOP
      BEGIN
         UPDATE   T1
            SET   C1 = 'A'
          WHERE   C2 = 'B';

         EXIT;                  -- success: leave the loop
      EXCEPTION
         WHEN deadlock_detected
         THEN
            NULL;               -- statement rolled back; loop round and retry
      END;
   END LOOP;
END;

There is no need to wait before restarting; once the deadlock is detected the lock will have been released and the other session will have been immediately granted the lock.

If there is the remotest chance that the reported deadlock is a multi-table deadlock then this code needs to be enhanced to abort retries after a certain number of attempts.
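A sketch of such a cap, assuming for the sake of argument that five attempts is acceptable for the application:

```sql
DECLARE
   deadlock_detected   EXCEPTION;
   PRAGMA EXCEPTION_INIT (deadlock_detected, -60);
BEGIN
   FOR attempt IN 1 .. 5
   LOOP
      BEGIN
         UPDATE   T1
            SET   C1 = 'A'
          WHERE   C2 = 'B';

         EXIT;                  -- success: leave the loop
      EXCEPTION
         WHEN deadlock_detected
         THEN
            IF attempt = 5
            THEN
               RAISE;           -- give up: probably a multi-table deadlock
            END IF;
      END;
   END LOOP;
END;
```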

Why does Oracle make us do this?  Well, I will let you into a little secret.  This note exists because a developer came to me for advice on this subject and I advised her incorrectly.  I said “Oracle sorts this all out.  Once the statement has rolled back it will be silently restarted”.

Why was I so confused?  Well, Oracle does have the ability to restart statements after rolling them back.  It does so when a DML statement encounters a row modified since the statement started; all locked rows are unlocked and the statement restarts in an attempt to find a consistent set of rows to lock.  In the unlikely event that you do not already know this, check out Tom Kyte's blog, which shows how to observe it actually happening.

So I think that I have an enhancement request:

After rolling back a statement due to a deadlock, restart the statement automatically if rolling back the statement resolves the deadlock

June 1, 2010

Querying dba_hist_active_sess_history

Filed under: Uncategorized — tonyhasler @ 12:14 pm

I finally couldn’t stand it anymore and investigated why querying this view is slow. A typical query might look like this:
WITH x
     AS (SELECT   *
           FROM      dba_hist_active_sess_history
                  JOIN
                     dba_hist_snapshot
                  USING (snap_id, dbid, instance_number)
          WHERE   begin_interval_time BETWEEN TO_DATE(
                                                 '28-MAY-2010 21:30'
                                                ,'DD-MON-YYYY HH24:MI')
                                          AND TO_DATE(
                                                 '28-MAY-2010 23:59'
                                                ,'DD-MON-YYYY HH24:MI'))
  SELECT   TRUNC(sample_time), module, action, sql_id, sql_plan_hash_value
          ,session_id, event, COUNT(*)
    FROM   x
GROUP BY   TRUNC(sample_time), module, action, sql_id, sql_plan_hash_value
          ,session_id, event;
Of course, you may not use ANSI SQL or factored subqueries, but that is by the by. Examination of the execution plan will reveal that all partitions of the session history table are probed. This is because the time predicate applies only to dba_hist_snapshot: the snap_id values it implies are not pushed into the probe of the (much larger) active session history table, so no partition pruning takes place there. It is massively faster to query the snapshot table first and then plug the returned snap_ids into the query by hand:
SELECT   snap_id
  FROM   dba_hist_snapshot
 WHERE   begin_interval_time BETWEEN TO_DATE('28-MAY-2010 21:30'
                                            ,'DD-MON-YYYY HH24:MI')
                                 AND TO_DATE('28-MAY-2010 23:59'
                                            ,'DD-MON-YYYY HH24:MI');
And then
WITH y
     AS (SELECT   *
           FROM   dba_hist_active_sess_history
          WHERE   snap_id IN (17823, 17822))
  SELECT   TRUNC(sample_time), module, action, sql_id, sql_plan_hash_value
          ,session_id, event, COUNT(*)
    FROM   y
GROUP BY   TRUNC(sample_time), module, action, sql_id, sql_plan_hash_value
          ,session_id, event;
This works because transitive closure only applies to predicates on literals: sn.snap_id = ash.snap_id and ash.snap_id in (17822,17823) => sn.snap_id in (17822,17823).

Of course, if you do this a lot from a suitably privileged account you could create an “improved” version of the view (that is unsupported and likely to break in future releases) that selects “sn.snap_id” rather than “ash.snap_id” or you could create a table function that automates the creation of the dynamic SQL.  Readers are invited to post their implementations!
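As a starting point, a sketch of such an automation (untested, names invented, and using LISTAGG, which requires 11g Release 2) might build the literal IN-list and open a ref cursor:

```sql
CREATE OR REPLACE FUNCTION ash_for_period (
   p_start   IN DATE
  ,p_end     IN DATE)
   RETURN SYS_REFCURSOR
AS
   v_snap_ids   VARCHAR2 (4000);
   v_cur        SYS_REFCURSOR;
BEGIN
   -- Build a literal IN-list so that the partition pruning
   -- described above actually happens.
   SELECT   LISTAGG (snap_id, ',') WITHIN GROUP (ORDER BY snap_id)
     INTO   v_snap_ids
     FROM   dba_hist_snapshot
    WHERE   begin_interval_time BETWEEN p_start AND p_end;

   OPEN v_cur FOR
         'SELECT * FROM dba_hist_active_sess_history '
      || 'WHERE snap_id IN (' || v_snap_ids || ')';

   RETURN v_cur;
END;
```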
