Optimizer statistics-driven direct path read decision for full table scans (_direct_read_decision_statistics_driven)

Hello all fellow Oracle geeks and technology enthusiasts! Long time no see ;-)

In the hacking session about Oracle full table scans and direct path reads I explained how the direct path read decision is not made by the optimizer, but instead during every execution, separately for every single segment (partition) scanned in the query. I also explained how the _small_table_threshold parameter and X$KCBOQH.NUM_BUF (which keeps track of how many buffers of a given segment are currently cached) are used to determine whether to scan using direct path reads or not.

If you don’t know what the _small_table_threshold parameter is and how it relates to the direct path read decision, then read this post by Alex Fatkulin first.

In addition to the abovementioned values, Oracle needs to know how many blocks there are to scan (under the HWM) in a segment. This was traditionally done by reading in the segment header block first (using a regular buffered single block read – “db file sequential read”).

Starting from Oracle 11.2.0.2, things have changed a little. Instead of making the direct path read decision based on the actual segment block count extracted from the segment header, Oracle actually takes this number from TAB$.BLKCNT (dba_tables.blocks) or TABPART$.BLKCNT, IND$.LEAFCNT etc.
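You can compare the two block counts yourself – the optimizer statistics value versus the allocated segment size – with something like this sketch (using the demo table T created below):

-- stats blocks come from optimizer statistics (TAB$.BLKCNT / dba_tables.blocks);
-- segment blocks show the allocated size, which also includes blocks above the HWM
SELECT t.blocks AS stats_blocks, s.blocks AS segment_blocks
FROM   user_tables t, user_segments s
WHERE  t.table_name   = 'T'
AND    s.segment_name = 'T';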

There’s a new parameter _direct_read_decision_statistics_driven which controls this:

SQL> @pd direct_read_decision
Show all parameters and session values from x$ksppi/x$ksppcv...

NAME                                               VALUE    DESCRIPTION
-------------------------------------------------- -------- ----------------------------------------------------------
_direct_read_decision_statistics_driven            TRUE     enable direct read decision based on optimizer statistics

  • When this parameter is FALSE, the direct path read decision is based on the segment header’s block count (the actual block count).
  • When it is TRUE (the default in 11.2.0.2+), the decision is based on the block count stored in the base tables (TAB$, IND$ etc.) – the optimizer statistics.
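The @pd script just queries the X$ tables mentioned in its header; if you don’t have the script handy, a roughly equivalent query (run as SYS, since X$ tables are involved) would be something like this:

-- list parameters matching a name pattern, with their current session values
SELECT n.ksppinm  AS name,
       c.ksppstvl AS value,
       n.ksppdesc AS description
FROM   x$ksppi n, x$ksppcv c
WHERE  n.indx = c.indx
AND    n.ksppinm LIKE '%direct_read_decision%';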

Note that even though the block counts are taken from the optimizer statistics in the data dictionary, it’s not the optimizer that makes the direct path read decision in the traditional sense (it’s not a cost-based decision).

Here’s an example from my test database:

SQL> CREATE TABLE t AS SELECT * FROM dba_source;

Table created.

SQL> @gts t
Gather Table Statistics for table t...

PL/SQL procedure successfully completed.

SQL>
SQL> SELECT blocks FROM user_tables WHERE table_name = 'T';

BLOCKS
----------
10704

SQL>

The table uses 10704 blocks (up to the HWM).

SQL> SELECT name,block_size,buffers FROM v$buffer_pool;

NAME                 BLOCK_SIZE    BUFFERS
-------------------- ---------- ----------
DEFAULT                    8192       9424

The table (10704 blocks) is bigger than the entire buffer cache (9424 buffers) – and way bigger than the _small_table_threshold value of 168 in my instance (watch the full table scans and direct path reads hacking session for more about this parameter).

So whenever I run a SELECT COUNT(*) FROM t, I see direct path reads show up:

SQL> @snapper ash,stats=sw,sinclude=table.scan|gets 5 1 99
Sampling SID 99 with interval 5 seconds, taking 1 snapshots...

-- Session Snapper v3.54 by Tanel Poder ( http://blog.tanelpoder.com )

----------------------------------------------------------------------------------------------------------------------------------------
    SID, USERNAME  , TYPE, STATISTIC                                                 ,         DELTA, HDELTA/SEC,    %TIME, GRAPH
----------------------------------------------------------------------------------------------------------------------------------------
     99, SYS       , STAT, db block gets                                             ,             1,         .2,
     99, SYS       , STAT, db block gets from cache                                  ,             1,         .2,
     99, SYS       , STAT, consistent gets                                           ,         11867,      2.37k,
     99, SYS       , STAT, consistent gets from cache                                ,             7,        1.4,
     99, SYS       , STAT, consistent gets from cache (fastpath)                     ,             6,        1.2,
     99, SYS       , STAT, consistent gets - examination                             ,             1,         .2,
     99, SYS       , STAT, consistent gets direct                                    ,         11860,      2.37k,
     99, SYS       , STAT, no work - consistent read gets                            ,         11859,      2.37k,
     99, SYS       , STAT, cleanouts only - consistent read gets                     ,             1,         .2,
     99, SYS       , STAT, table scans (long tables)                                 ,             1,         .2,
     99, SYS       , STAT, table scans (direct read)                                 ,             1,         .2,
     99, SYS       , STAT, table scan rows gotten                                    ,        739834,    147.97k,
     99, SYS       , STAT, table scan blocks gotten                                  ,         11860,      2.37k,
     99, SYS       , TIME, parse time elapsed                                        ,            46,      9.2us,      .0%, |          |
     99, SYS       , TIME, DB CPU                                                    ,         79988,       16ms,     1.6%, |@         |
     99, SYS       , TIME, sql execute elapsed time                                  ,        254990,       51ms,     5.1%, |@         |
     99, SYS       , TIME, DB time                                                   ,        255375,    51.08ms,     5.1%, |@         |
     99, SYS       , WAIT, enq: KO - fast object checkpoint                          ,        174947,    34.99ms,     3.5%, |@         |
     99, SYS       , WAIT, direct path read                                          ,          1280,      256us,      .0%, |          |
     99, SYS       , WAIT, SQL*Net message to client                                 ,             9,      1.8us,      .0%, |          |
     99, SYS       , WAIT, SQL*Net message from client                               ,       4672912,   934.58ms,    93.5%, |@@@@@@@@@@|
     99, SYS       , WAIT, events in waitclass Other                                 ,             6,      1.2us,      .0%, |          |
--  End of Stats snap 1, end=2012-09-02 20:03:55, seconds=5

---------------------------------------------------------------------------------
Active% | SQL_ID          | EVENT                               | WAIT_CLASS
---------------------------------------------------------------------------------
     2% | 88r4qn9mwhcf5   | enq: KO - fast object checkpoint    | Application
     2% | 88r4qn9mwhcf5   | ON CPU                              | ON CPU

--  End of ASH snap 1, end=2012-09-02 20:03:55, seconds=5, samples_taken=43

Let’s now fake the table stats so it looks like there are only 5 blocks in the table – way below the _small_table_threshold value:

SQL> EXEC DBMS_STATS.SET_TABLE_STATS(user,'T',numblks=>5);

PL/SQL procedure successfully completed.

SQL> SELECT COUNT(*) FROM t /* attempt 2 */;

  COUNT(*)
----------
      1000

The direct path reads are gone – we are doing regular buffered reads now!

SQL> @snapper ash,stats=sw,sinclude=table.scan|gets 5 1 99

Sampling SID 99 with interval 5 seconds, taking 1 snapshots...

-- Session Snapper v3.54 by Tanel Poder ( http://blog.tanelpoder.com )

----------------------------------------------------------------------------------------------------------------------------------------
    SID, USERNAME  , TYPE, STATISTIC                                                 ,         DELTA, HDELTA/SEC,    %TIME, GRAPH
----------------------------------------------------------------------------------------------------------------------------------------
     99, SYS       , STAT, db block gets                                             ,             1,        .17,
     99, SYS       , STAT, db block gets from cache                                  ,             1,        .17,
     99, SYS       , STAT, consistent gets                                           ,         11865,      1.98k,
     99, SYS       , STAT, consistent gets from cache                                ,         11865,      1.98k,
     99, SYS       , STAT, consistent gets from cache (fastpath)                     ,         11528,      1.92k,
     99, SYS       , STAT, consistent gets - examination                             ,             1,        .17,
     99, SYS       , STAT, no work - consistent read gets                            ,         11851,      1.98k,
     99, SYS       , STAT, cleanouts only - consistent read gets                     ,             1,        .17,
     99, SYS       , STAT, table scans (long tables)                                 ,             1,        .17,
     99, SYS       , STAT, table scan rows gotten                                    ,        738834,    123.14k,
     99, SYS       , STAT, table scan blocks gotten                                  ,         11852,      1.98k,
     99, SYS       , TIME, parse time elapsed                                        ,            84,       14us,      .0%, |          |
     99, SYS       , TIME, DB CPU                                                    ,        109983,    18.33ms,     1.8%, |@         |
     99, SYS       , TIME, sql execute elapsed time                                  ,        116709,    19.45ms,     1.9%, |@         |
     99, SYS       , TIME, DB time                                                   ,        117102,    19.52ms,     2.0%, |@         |
     99, SYS       , WAIT, db file scattered read                                    ,         63956,    10.66ms,     1.1%, |@         |
     99, SYS       , WAIT, SQL*Net message to client                                 ,             8,     1.33us,      .0%, |          |
     99, SYS       , WAIT, SQL*Net message from client                               ,       5119722,   853.29ms,    85.3%, |@@@@@@@@@ |
--  End of Stats snap 1, end=2012-09-02 20:06:19, seconds=6

---------------------------------------------------------------------------------
Active% | SQL_ID          | EVENT                               | WAIT_CLASS
---------------------------------------------------------------------------------
     2% | 07sgczqj432mr   | db file scattered read              | User I/O

--  End of ASH snap 1, end=2012-09-02 20:06:19, seconds=5, samples_taken=46

Note that I deliberately forced a hard parse (with the “attempt 2” comment) to compile a new cursor. The _direct_read_decision_statistics_driven parameter is not part of the optimizer environment, so a new child cursor would not be automatically created after the parameter change (the same applies to the _small_table_threshold and _serial_direct_read parameters, by the way). But when I change the SQL text, an entirely new (parent and child) cursor gets compiled anyway.

But wait a minute! Why do I need to compile a new cursor to get Oracle to read the new block count value from optimizer stats?!

I said at the beginning of this post (and in many other places) that the direct path read decision is not made by the optimizer anyway – it’s a runtime decision, made during every execution, every time any segment (including individual partitions) is scanned during query runtime. This is true for the old Oracle versions (up to 11.2.0.1), where the decision is based on the actual, current block count in the segment header – thus the decision can suddenly change when a segment grows by a few blocks and crosses the _small_table_threshold calculation threshold. Perhaps due to performance stability reasons, this seems to have changed.

My tests on 11.2.0.2 have so far shown that when using the new statistics-driven direct path read decisions, each segment’s block count is stored somewhere in the compiled cursor and reused during subsequent executions of it – even if the block count of the segment changes in the optimizer stats later on! This might give somewhat better stability as long as you don’t gather new stats and your buffer cache size (and already-cached block counts) don’t change. However, if the number of cached blocks of a segment does change (due to other, index-based accesses for example), then the direct path decision can still change at runtime. It’s just the block counts that are stored in the cursor; the other factors affecting the decision (buffer cache size, cached block counts) can still change.
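If you want to watch the other moving part – the cached block counts – the per-segment buffer counts mentioned in the beginning can be read from X$KCBOQH. A minimal sketch (run as SYS; I’m assuming the usual join to DBA_OBJECTS via the data object id):

-- how many buffers of table T are currently in the buffer cache
SELECT o.object_name, SUM(h.num_buf) AS cached_buffers
FROM   x$kcboqh h, dba_objects o
WHERE  h.obj# = o.data_object_id
AND    o.object_name = 'T'
GROUP  BY o.object_name;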

This topic is especially relevant on Exadata, as the entire Smart Scanning functionality depends on whether the direct path read IO method gets picked for full segment scans. When experimenting with this, you’ve got to be pretty careful and thorough (so as not to come to wrong conclusions), as there are multiple moving parts and factors involved in the decision:

  1. Are block counts taken from optimizer stats or segment header
  2. Do the segment header and/or optimizer stats block counts change
  3. Does the buffer cache size change (thus the _small_table_threshold too)
  4. Does the amount of cached blocks of a segment change
  5. Parallel vs Serial execution
  6. Are buffered parallel full scans allowed (the in-memory PX feature of 11.2)
  7. Did a new child cursor get created or the old one reused
  8. etc :)

Update: there’s an interesting comment on another page about how incorrectly handled subpartition statistics cause Oracle to not choose direct path reads for some subpartitions.

Note that this year’s only Advanced Oracle Troubleshooting class takes place in the end of April/May 2014, so sign up now if you plan to attend this year!


Do you like to watch TV? Watch Enkitec.TV from now on! :-)

I’m happy to announce Enkitec TV, a video channel/aggregator of some of the coolest stuff we do here at Enkitec. I have uploaded some of my videos there, including the previously unpublished Oracle parameters infrastructure hacking session – and Kerry’s & Cary’s E4 Exadata interview is available there as well!

I’ll be posting more of my hacking sessions (hopefully shorter ones!!!) there in the future – and so will other fellow Oracle-Exadata-and-Big-Data geeks of Enkitec too!

I think video is so much better for communicating certain types of demos, use cases and examples (and interviews) – so instead of writing extremely long blog articles in the future, I will likely just record a video (and let others do the hard work of writing later on ;-)

So, sit back, grab a beer (or coffee) and enjoy Enkitec.TV!

 

Note that it’s possible to subscribe to the iTunes podcast too:

 


The limitations of CURSOR_SHARING = FORCE and FORCE_MATCHING_SIGNATURE for SQL plan stability

It’s a well-known fact that the cursor_sharing parameter can be set to FORCE to make Oracle replace any literals with system-generated bind variable placeholders and then calculate the SQL hash value for looking up an existing cursor in the library cache. This should reduce the amount of hard parsing and shared pool garbage.

Also, the same mechanism (replacing literal values with bind variable placeholders before calculating the SQL hash value for the library cache lookup) can be used for enforcing a SQL profile for SQL statements which differ only by the literal values in them. You just accept a SQL profile using DBMS_SQLTUNE.ACCEPT_SQL_PROFILE( …, FORCE_MATCH=>TRUE).
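For reference, accepting such a profile from a tuning task looks roughly like this – a sketch only, with made-up task and profile names:

DECLARE
  l_profile_name VARCHAR2(64);
BEGIN
  l_profile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
                      task_name   => 'my_tuning_task',    -- hypothetical tuning task name
                      name        => 'force_match_demo',  -- hypothetical profile name
                      force_match => TRUE);               -- match regardless of literal values
END;
/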

However this “force matching” has one limitation which can make it almost useless for achieving plan stability in databases where the applications use dynamically generated SQL.

Here’s an example – I’m just setting up my sqlplus environment and setting cursor_sharing = FORCE (which, by the way, is not needed for DBMS_SQLTUNE’s FORCE_MATCH – I’m just doing it to display the “new” SQL text used for the hash value computation):


SQL> COL force_matching_signature FOR 99999999999999999999999999
SQL> COL sql_text FOR A100
SQL>
SQL> ALTER SESSION SET cursor_sharing = FORCE;

Session altered.

SQL>
SQL> SELECT * FROM dual WHERE rownum IN (1,2,3);

D
-
X

And now I’ll query V$SQL to see what the actual SQL text looks like (and its related hash values):


SQL> SELECT sql_id,hash_value,force_matching_signature,sql_text FROM v$sql WHERE sql_id = (SELECT prev_sql_id FROM v$session WHERE sid = USERENV('SID'));

SQL_ID        HASH_VALUE    FORCE_MATCHING_SIGNATURE SQL_TEXT
------------- ---------- --------------------------- ----------------------------------------------------------------------------------------------------
fqcq5k1wd4d4h 2027041936         1308158718700150644 SELECT * FROM dual WHERE rownum IN (:"SYS_B_0",:"SYS_B_1",:"SYS_B_2")

Scroll right to see how the literal values 1,2,3 have been replaced with :SYS_B_x bind variables in SQL text. The SQL hash value (and SQL_ID and force_matching_signature) are calculated after this replacement was done.

So, now I’ll run a similar statement with just different literal values:


SQL> SELECT * FROM dual WHERE rownum IN (999,888,777);

no rows selected

SQL> SELECT sql_id,hash_value,force_matching_signature,sql_text FROM v$sql WHERE sql_id = (SELECT prev_sql_id FROM v$session WHERE sid = USERENV('SID'));

SQL_ID        HASH_VALUE    FORCE_MATCHING_SIGNATURE SQL_TEXT
------------- ---------- --------------------------- ----------------------------------------------------------------------------------------------------
fqcq5k1wd4d4h 2027041936         1308158718700150644 SELECT * FROM dual WHERE rownum IN (:"SYS_B_0",:"SYS_B_1",:"SYS_B_2")

See, different literal values (999,888,777), but as they got replaced with exactly the same number of bind variables (in the same order), the resulting SQL text is exactly the same – therefore the SQL text hash values are also exactly the same as you see above.

So this is how these features work – the literal values are replaced in SQL text string just before calculating the hash value for library cache lookup.

However, let’s run one more query, this time with 4 literal values instead of 3:


SQL> SELECT * FROM dual WHERE rownum IN (999,888,777,666);

no rows selected

SQL> SELECT sql_id,hash_value,force_matching_signature,sql_text FROM v$sql WHERE sql_id = (SELECT prev_sql_id FROM v$session WHERE sid = USERENV('SID'));

SQL_ID        HASH_VALUE    FORCE_MATCHING_SIGNATURE SQL_TEXT
------------- ---------- --------------------------- ----------------------------------------------------------------------------------------------------
8btxq8q6avt6b 2360206539        15602626316910109322 SELECT * FROM dual WHERE rownum IN (:"SYS_B_0",:"SYS_B_1",:"SYS_B_2",:"SYS_B_3")

Now as we used 4 literals (999,888,777,666), they got replaced by 4 bind variables in the SQL text (as opposed to 3 in the earlier queries) and thanks to that the resulting SQL text is different! Therefore the hash values are also different, and anything that matches/looks up cursors based on the SQL text hash value won’t find the previous query plan anymore. Note that the force_matching_signature also differs from the previous one because of this. So, a new hard parse and library cache load would be done for this cursor – and any SQL profile accepted for the first two queries with FORCE_MATCH => TRUE would not be applicable to this new query, simply because Oracle can’t find a profile matching the new force_matching_signature.

Now you might say who cares – I’ll just create a profile for this query with 4 literal values in the inlist and we are done. True, but now you’d only have profiles for the queries with 3 and 4 inlist values. Often in data warehouses, reporting environments and also in some OLTP systems, developers use dynamically generated in-lists extensively. By now I have seen queries with tens of thousands of literal values in inlists (using OR concatenation to work around the “ORA-01795: maximum number of expressions in a list is 1000” error). There’s also a bug in MOS about someone’s query failing with 60 000 bind variables in it! :)

So if you have a dynamically generated query with multiple IN-lists used for filtering different tables and the number of items in each inlist fluctuates anywhere between 1 and tens of thousands, you may end up with a very large number of possible combinations – and each of such combinations of inlist item counts would have its own force_matching_signature. Thus even if you think you have fixed your plans with a force matching profile, some day you will hit a yet another combination and have to reactively fix that “new” query again.

So what can you do in such a case (other than making sure that your schema, code, stats and configuration are good enough for the CBO to come up with good plans for new queries by itself)? Well, don’t use such dynamically generated inlist strings!

Instead of dynamically generating an inlist string with 10000 literal values (or binds), you can create an array of these values on the application side and bind the entire array to your query as a single bind variable. Regardless of how many values you have to pass in to the in-list, they will all be stored in a single bind variable, leaving the SQL hash values & force_matching_signature the same. PL/SQL supports it, OCI-based tools support it, Oracle’s JDBC supports it (and probably ODP.NET too). So there’s no excuse not to use it :)
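Here’s a minimal PL/SQL sketch of the idea – the collection type and the table/column names are made up for illustration:

CREATE TYPE num_list AS TABLE OF NUMBER;
/

DECLARE
  l_ids   num_list := num_list(999, 888, 777, 666);  -- could be 4 or 40000 values
  l_count NUMBER;
BEGIN
  -- the whole array is a single bind variable, so the SQL text (and thus the
  -- hash value and force_matching_signature) stays the same no matter how
  -- many values the collection holds
  SELECT COUNT(*)
  INTO   l_count
  FROM   t
  WHERE  n IN (SELECT column_value FROM TABLE(l_ids));
END;
/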

I didn’t find my own demo code anymore, so I googled for a quick example in Java - http://blogs.itemis.de/kloss/2009/03/05/arrays-preparedstatements-jdbc-and-oracle/. If you do have your own code examples, feel free to post links to these into comments.

There are other working, but less clean, options – like passing in a comma-separated string as a single bind variable (either as a VARCHAR2 or even a CLOB) and then using a string tokenizer function in the inlist to break it into individual values, or always inserting the inlist items into a GTT and then joining it to the main query block (this could be useful if you have millions of inlist values and you don’t want to use too much private memory for storing bind values and PL/SQL arrays).
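A quick sketch of the GTT variant (again with made-up names):

CREATE GLOBAL TEMPORARY TABLE inlist_gtt (val NUMBER)
ON COMMIT DELETE ROWS;

-- bulk-insert the inlist values once per transaction ...
INSERT INTO inlist_gtt VALUES (999);
INSERT INTO inlist_gtt VALUES (888);

-- ... and use a subquery (or join) instead of a literal inlist
SELECT t.* FROM t WHERE t.n IN (SELECT val FROM inlist_gtt);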

There’s one possible challenge though – the CBO’s cardinality estimates for such array-based inlists may be incorrect, so some adjustment might be needed either via the CARDINALITY hint, a SQL profile or a variety of other means :)

Update: Karen Morton has written about another limitation of force_matching_signature. IIRC this used to be a bug, but apparently now it’s documented, so it’s a new feature :)


Oradebug hanganalyze with a prelim connection and “ERROR: Can not perform hang analysis dump without a process state object and a session state object.”

Back in the (really) old days, systemstate dumps had to be used for diagnosing hangs and finding blockers of hung databases. Basically you just identified which resource your waiting sessions were waiting for and then scanned through the rest of the system state dump to see which session held that particular resource (note that over time the instrumentation and systemstate dumps have evolved to resolve and dump the blocker information right at the waiting session section in the dump).

Diagnosing complex hangs was a tedious manual process (or required some scripting), so hanganalyze was a really welcome addition to hang diagnosis. Hanganalyze basically walks through the blocker-waiter chains and finds which session is the LEAF of the chain (the LEAF is the ultimate/final blocker in the wait chain) – the session everybody else is either directly or indirectly waiting for.

Additionally the hanganalyze output allows you to draw the topology of the hang too, which may be useful in complex hang scenarios. In fact Enterprise Manager’s hang topology page also just calls ORADEBUG HANGANALYZE internally and formats the output into a visual hang graph (this is why the hang page requires OS credentials or sysdba privileges).

Starting from 11g, there is a welcome change though – the new V$WAIT_CHAINS view, which lists all the hangs Oracle has detected; it’s RAC-aware and accessible via plain SQL. No need to run oradebug anymore, assuming that you are able to log on to query that V$ view :-)
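For example, a sketch like the following lists the chains along with their blockers (I’m showing only a handful of the view’s columns):

-- each row is one waiting session in a chain; blocker_sid points to its blocker
SELECT chain_id, instance, sid, blocker_sid, wait_event_text, in_wait_secs
FROM   v$wait_chains
ORDER  BY chain_id, in_wait_secs DESC;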

And this is where we have a little problem – in rare circumstances an instance can get so hung that even a SYSDBA privilege holder cannot log on, so how would you run the diagnostic queries & commands? This is why Oracle has introduced the preliminary connection option in sqlplus for diagnosing hung databases. With a prelim connection you get a server process started for you; it attaches to the SGA shared memory segments, but it skips the last step of setting up the session/process-related structures in the SGA. These operations on shared SGA structures have to be protected by latches/mutexes and would potentially get blocked if the related latches/mutexes are already held by someone else as part of that hang.

I have already written about How to log on to Oracle when even sysdba cannot do so, but here’s an example:

$ sqlplus -prelim "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Sun Jan 29 12:48:03 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

SYS:dbm1> ORADEBUG HANGANALYZE 3

Statement processed.

The “problem” though is that prelim connections only allow you to run ORADEBUG commands, as running regular SQL does require that all the session/process structures are set up properly – plus the parsing, library cache access etc. would take latches/mutexes which could again get blocked.

But we just wanted to dump hang analysis and ORADEBUG HANGANALYZE above seems to have succeeded. However when looking into the resulting tracefile, we see this:

*** 2012-01-29 12:48:11.041
Processing Oradebug command 'HANGANALYZE 3'
===============================================================================
HANG ANALYSIS:

ERROR: Can not perform hang analysis dump without a process
       state object and a session state object.
  ( process=(nil), sess=(nil) )

===============================================================================

It looks like hanganalyze isn’t able to work anymore without a fully logged on session where the process & session state objects (in v$process and v$session respectively) have been set up. This is a bit weird, as one of the benefits of hanganalyze (and systemstate dumps) was that it did not require allocating any SGA structures or taking any latches & locks.

This behavior change seems to have appeared since Oracle 11.2.0.2 and it’s documented in MOS note 452358.1, but that note doesn’t offer a good alternative for diagnosing hangs in such situations.

The alternative is actually simple – you should attach to an existing fully-initialized & logged on process with oradebug and let that process do the hanganalyze for you. Whatever process you attach to with oradebug is the process which will be performing the work for you.

So if you ever see the error above when diagnosing extreme hangs, just identify some non-critical process logged in to the database (ideally not a background process, and definitely not critical background processes such as DBWR & LGWR) and attach to that with oradebug (as opposed to using “oradebug setmypid”) – the hang analysis should then work OK.

However, the catch is that the above ORADEBUG HANGANALYZE command is a shortcut which always tries to run hanganalyze in your (preliminary) connection, by calling the hanganalyze function directly in the process. So you have to use ORADEBUG DUMP HANGANALYZE 3 instead (3 is the detail level I usually start from), as this syntax sends the “dump hanganalyze” command to the target (fully-initialized) process you’ve attached to.

Update: I added this to clarify which commands should you use when you hit this problem:

  1. sqlplus -prelim "/as sysdba"
  2. ORADEBUG SETOSPID OS_PID
  3. ORADEBUG DUMP HANGANALYZE 3

Enjoy :)


Advanced Oracle Troubleshooting Guide – Part 10: Index unique scan doing multiblock reads?!

When you troubleshoot an Oracle (performance) problem, you usually want to find out what the problem sessions are doing (wrong) and then why they are doing it.

The “what” question is usually answered using some wait interface based method – like ASH or SQL*Trace, which both add plenty of extra details to the plain wait event data.

My normal session troubleshooting data collection sequence consists of only three steps:

  1. Interpret session-level top wait and top SQL data (ASH, SQL*Trace or custom tools sampling session-level V$views)
  2. Interpret session-level performance counter data (V$SESSTAT)
  3. Sample the problem process stack and list the top stack branches where execution spent the most time

Only looking into #1 is enough in 90% of problem cases. But for advanced troubleshooting of complex and exotic problems I regularly have to dive into #2 – the V$SESSTAT performance counters (that’s the reason why I wrote Snapper, for example).

Maybe only once out of every 40-50 troubleshooting cases do I dive down into #3 – process stack sampling. While process stack sampling doesn’t lie – it helps especially in cases where Oracle’s instrumentation itself doesn’t have a clue what a session/process is doing – it’s like a microscope. It gives you a very detailed view of one part of the problem, but not the big picture. That’s why I never start from #3 when troubleshooting (and if you do, you are doing it wrong – with the exception of a completely hung database ;) I always start from #1 and proceed to #2 only when needed.

Snapper lists both #1 and #2 data for a process by the way.

Anyway, the reason why I’m writing this post is that a few days ago I received a question about an issue which nicely illustrates a case where step #1 shows that there’s an issue and step #2 explains why it’s happening.

There was an execution plan, which ran slightly slower on a new server compared to the old one – and it did read 36 blocks per execution as opposed to only 5:

select ...
from TABLE_X
where ...
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.07         36          5          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.02       0.09         36          5          0           1

The “disk” column above shows 36 blocks being read in via physical IO – while only 5 buffers had actually been accessed after that (logical IOs are shown in the “query” column).

Now that’s weird – as logical IOs are the ones which cause physical IOs, the LIO count should normally be equal to or greater than the physical IO count.

What’s even stranger, the execution plan is an index unique scan:

 
 
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID TABLE_X (cr=5 pr=36 pw=0 time=0 us cost=4 size=26 card=1)
      1   INDEX UNIQUE SCAN TABLE_X_IDX (cr=4 pr=35 pw=0 time=0 us cost=3 size=0 card=1)(object id 51063)
 

… but if you look into the “pr=” column, which means “blocks read via physical read”, then you see that 35 out of the 36 blocks were read in by the INDEX UNIQUE SCAN row source (pr=35). These counters are cumulative, so the pr=36 in the parent TABLE ACCESS BY INDEX ROWID row source means that there was only one block read by physical IO at the table access level. So why does the index access read 35 blocks?

Is it index height? Can’t be – the max index height is 24 levels (including the leaf level), and such an index would be ridiculous anyway :)
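Checking the height is easy, by the way – the BLEVEL column in DBA_INDEXES counts the branch levels, so the index height is BLEVEL + 1:

-- blevel counts branch levels; +1 includes the leaf level
SELECT index_name, blevel + 1 AS index_height
FROM   dba_indexes
WHERE  index_name = 'TABLE_X_IDX';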

Let’s look into the wait profile:

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  Disk file operations I/O                        4        0.00          0.00
  db file sequential read                         4        0.01          0.04
  db file scattered read                          1        0.03          0.03
  SQL*Net message from client                     2        0.15          0.15
********************************************************************************

Wow, an index unique scan which is supposed to traverse the index tree with precise single block reads, has done 4 single block reads and one multiblock read!

Now this is where the wait interface and ASH-style data can’t help us any further. We know what’s happening – a multiblock read happening where “it shouldn’t”. We still don’t know why it’s happening; we would have to guess from here. And this is where V$SESSTAT comes in – there are over 600 performance counters maintained for each session, and many of them show valuable information about what kind of operations (and how many) the RDBMS kernel has done. So I asked the client to run Snapper on a test session and here’s the output (note that the output below is from a different test run, where the queries were executed many times in a loop):


----------------------------------------------------------------------------------------------------------------------------------------
    SID, USERNAME  , TYPE, STATISTIC                                                 ,         DELTA, HDELTA/SEC,    %TIME, GRAPH
----------------------------------------------------------------------------------------------------------------------------------------
   1612, ORACLE    , STAT, opened cursors cumulative                                 ,          6617,      1.32k,
   1612, ORACLE    , STAT, user calls                                                ,         13234,      2.65k,
   1612, ORACLE    , STAT, pinned cursors current                                    ,            -1,        -.2,
   1612, ORACLE    , STAT, session logical reads                                     ,          5293,      1.06k,
   1612, ORACLE    , STAT, CPU used when call started                                ,             7,        1.4,
   1612, ORACLE    , STAT, CPU used by this session                                  ,             7,        1.4,
   1612, ORACLE    , STAT, DB time                                                   ,           306,       61.2,
   1612, ORACLE    , STAT, user I/O wait time                                        ,           223,       44.6,
   1612, ORACLE    , STAT, non-idle wait time                                        ,           224,       44.8,
   1612, ORACLE    , STAT, non-idle wait count                                       ,          6775,      1.36k,
   1612, ORACLE    , STAT, process last non-idle time                                ,             5,          1,
   1612, ORACLE    , STAT, physical read total IO requests                           ,           158,       31.6,
   1612, ORACLE    , STAT, physical read total multi block requests                  ,            31,        6.2,
   1612, ORACLE    , STAT, physical read total bytes                                 ,       8265728,      1.65M,
   1612, ORACLE    , STAT, cell physical IO interconnect bytes                       ,       8265728,      1.65M,
   1612, ORACLE    , STAT, consistent gets                                           ,          5293,      1.06k,
   1612, ORACLE    , STAT, consistent gets from cache                                ,          5293,      1.06k,
   1612, ORACLE    , STAT, consistent gets - examination                             ,          5293,      1.06k,
   1612, ORACLE    , STAT, physical reads                                            ,          1009,      201.8,
   1612, ORACLE    , STAT, physical reads cache                                      ,          1009,      201.8,
   1612, ORACLE    , STAT, physical read IO requests                                 ,           158,       31.6,
   1612, ORACLE    , STAT, physical read bytes                                       ,       8265728,      1.65M,
   1612, ORACLE    , STAT, free buffer requested                                     ,          1008,      201.6,
   1612, ORACLE    , STAT, physical reads cache prefetch                             ,           851,      170.2,
   1612, ORACLE    , STAT, physical reads prefetch warmup                            ,           851,      170.2,
   1612, ORACLE    , STAT, shared hash latch upgrades - no wait                      ,           188,       37.6,
   1612, ORACLE    , STAT, calls to get snapshot scn: kcmgss                         ,          1319,      263.8,
   1612, ORACLE    , STAT, file io service time                                      ,            25,          5,
   1612, ORACLE    , STAT, file io wait time                                         ,       2219641,    443.93k,
   1612, ORACLE    , STAT, min active SCN optimization applied on CR                 ,             1,         .2,
   1612, ORACLE    , STAT, index fetch by key                                        ,          1292,      258.4,
   1612, ORACLE    , STAT, session cursor cache hits                                 ,          6440,      1.29k,
   1612, ORACLE    , STAT, buffer is not pinned count                                ,          1277,      255.4,
   1612, ORACLE    , STAT, parse time elapsed                                        ,             6,        1.2,
   1612, ORACLE    , STAT, parse count (total)                                       ,          6380,      1.28k,
   1612, ORACLE    , STAT, execute count                                             ,          6378,      1.28k,
   1612, ORACLE    , STAT, bytes sent via SQL*Net to client                          ,       2195633,    439.13k,
   1612, ORACLE    , STAT, bytes received via SQL*Net from client                    ,       2721339,    544.27k,
   1612, ORACLE    , STAT, SQL*Net roundtrips to/from client                         ,          6374,      1.27k,
   1612, ORACLE    , TIME, parse time elapsed                                        ,        114215,    22.84ms,     2.3%, |@         |
   1612, ORACLE    , TIME, PL/SQL execution elapsed time                             ,         67004,     13.4ms,     1.3%, |@         |
   1612, ORACLE    , TIME, DB CPU                                                    ,        630000,      126ms,    12.6%, |@@        |
   1612, ORACLE    , TIME, sql execute elapsed time                                  ,       2595410,   519.08ms,    51.9%, |@@@@@@    |
   1612, ORACLE    , TIME, DB time                                                   ,       3190919,   638.18ms,    63.8%, |@@@@@@@   |
   1612, ORACLE    , WAIT, db file sequential read                                   ,        978601,   195.72ms,    19.6%, |@@        |
   1612, ORACLE    , WAIT, db file scattered read                                    ,       1213550,   242.71ms,    24.3%, |@@@       |
   1612, ORACLE    , WAIT, SQL*Net message to client                                 ,          9212,     1.84ms,      .2%, |          |
   1612, ORACLE    , WAIT, SQL*Net message from client                               ,       1667368,   333.47ms,    33.3%, |@@@@      |
--  End of Stats snap 1, end=2012-03-09 14:47:02, seconds=5

See the highlighted lines above and scroll right if you don’t see the metric values…

  • The physical read total multi block requests statistic confirms that indeed there were some multiblock reads done by the monitored session.
  • The physical reads cache prefetch metric indicates that there was some data block prefetching happening! This explains why we had multiblock reads done where only single block reads were really needed.
  • And the physical reads prefetch warmup metric explains why we did any prefetching at all – it’s for buffer cache warmup: after a restart or a significant buffer cache size increase, Oracle decides to populate the empty buffer cache with any data as fast as possible. Oracle assumes that a random multiblock read doesn’t take that much more time than a random single block read anyway, so why not just read a bunch of additional blocks from around that single block that the index access wanted to read. The read operation will be slightly slower, but it might help to avoid multiple future single block reads, as the blocks are already in cache. This should affect things only when you restart the instance or significantly increase the buffer cache size – so on a steady state production system you shouldn’t see it (and it’s for a good cause anyway), but if you do performance tests on some test database, change something, bounce the instance and immediately run a test, this may affect your test results. Btw, this is controlled by the hidden parameter _db_cache_pre_warm, but I’m mentioning this just for educational purposes – I have never fixed a problem by changing this parameter, so you should touch it only in your sandbox hacking database :)

And now back to the original topic – this post wasn’t really about this specific prewarm feature, but rather yet another example of how the V$SESSTAT metrics tell you much more than the usual wait interface and top-SQL based approach. And you do not need to enable any tracing or undocumented events to get these detailed metrics out of Oracle – just query V$SESSTAT at the right time. As I said in the beginning, I don’t jump to the microscopic view unless the higher level views aren’t helpful enough, so I recommend following the 3-step “Advanced Oracle Troubleshooting” method when trying to diagnose what the heck some session is doing and why ;^)


Where is LOB data stored?

There was a question in Oracle-L about where LOB data is actually stored (in the row or in the LOB segments) and what the exact conditions are for a switch from one to the other. The documentation isn’t fully clear about this, and the “4000 bytes” number may mislead people into thinking that you can store 4000 bytes of your data in a LOB item before it must move out-of-line.

I clarified this in my post in Oracle-L, but will post it here too. First check this diagram:

[Diagram: the decision between in-line and out-of-line LOB storage.]

If you create the LOB column with DISABLE STORAGE IN ROW, then the LOB data is always stored out-of-line. LOB index is always used. Only the LOB ID is stored inline, and the ID is looked up in LOB index, where you’ll get the pointers to actual LOB chunk blocks.
 
If you create the LOB column with ENABLE STORAGE IN ROW, then the LOB data may be stored in-line or out-of-line.
 
If the total LOB data + overhead <= 4000 bytes, then the LOB item will be stored in-line. No LOB index is used, even if you modify the LOB later on as everything is stored in-line with the row and versioning/rollback is achieved with undo data.
 
If the total LOB data + overhead > 4000 bytes, then the LOB item will be stored out-of-line. If the LOB fits into 12 x LOB_chunk_size, then no LOB index entries are created, because the in-line LOB locator can store up to 12 pointers to the LOB chunk blocks for each lob item. So if your LOB chunk size is 8kB, you can store LOB items up to 96kB in size without inserting anything to LOB index. However if the LOB item is bigger, then no pointers are stored in-row and all pointers will be put to the LOB index.
 
Note that once you modify an existing LOB item (which is bigger than 4000 bytes with its overhead, but smaller than 12 x chunk_size), the LOB index will still be used after the first LOB change operation, as pointers to the old LOB chunk versions have to be stored in it (LOB segments don’t rely on undo for rollback & consistency, but just use LOB chunk versioning managed by the LOB index).
 
The “overhead” of an in-line LOB item is 36 bytes, so the actual LOB data must be 4000 – 36 = 3964 bytes or less in order to fully fit in-row. And note that we are talking about bytes here, not characters. With multibyte character sets a character in a CLOB may take multiple bytes.
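To tie these clauses together, here’s a sketch of a table DDL using the options discussed above (all names made up):

CREATE TABLE lob_demo (
  id  NUMBER,
  doc CLOB
)
LOB (doc) STORE AS (
  ENABLE STORAGE IN ROW  -- in-line while LOB data + 36-byte overhead <= 4000 bytes
  CHUNK 8192             -- out-of-line LOBs up to 12 x 8kB = 96kB need no LOB index entries
);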
 
I wrote a presentation about LOB internals quite some time ago, back when Oracle 10.1 came out I think – check it here:
 

 
Enjoy! :-)
 
P.S. The next Advanced Oracle Troubleshooting v2.0 class will start in 3 weeks! (I just added the online payment option back too, for convenience :)
 


Exadata Smart Scan predicate offloading and sequence.NEXTVAL

There was a question in the twitter-sphere about whether using sequences (sequence.NEXTVAL) in your select query’s projection list would somehow prevent smart scans from happening.

The answer is no – sequence use works just fine with smart scans. Smart scan offloading applies to the data retrieval row sources (and filtering) only, so what you have in the query’s projection list (the sequence use, for example) does not directly affect the smart scan decision. Likewise, other operations such as sorting and grouping have nothing to do with smart scans and don’t disable their use.

In other words, sequence.NEXTVAL can be used in a query and you can still get smart scans. Here’s simple proof:

 


SQL Monitoring Report

SQL Text
------------------------------
SELECT /*+ MONITOR */ t.object_name, s.NEXTVAL n FROM t WHERE owner LIKE 'TANEL%'

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)
 Instance ID         :  1
 Session             :  TANEL (204:2707)
 SQL ID              :  5kzs2ucxwf4wk
 SQL Execution ID    :  16777216
 Execution Started   :  04/13/2012 09:06:46
 First Refresh Time  :  04/13/2012 09:06:46
 Last Refresh Time   :  04/13/2012 09:06:48
 Duration            :  2s
 Module/Action       :  SQL*Plus/-
 Service             :  dbm
 Program             :  sqlplus@mac02.local (TNS V1-V3)
 Fetch Calls         :  4

Global Stats
========================================================================================
| Elapsed |   Cpu   |    IO    | Concurrency | Fetch | Buffer | Read | Read  |  Cell   |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Calls |  Gets  | Reqs | Bytes | Offload |
========================================================================================
|    1.50 |    0.36 |     1.06 |        0.08 |     4 |   323K | 2503 |   2GB |  99.97% |
========================================================================================

SQL Plan Monitoring Details (Plan Hash Value=1589210007)
======================================================================================================================================================================
| Id |          Operation           | Name |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  |  Cell   | Activity |      Activity Detail      |
|    |                              |      | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes | Offload |   (%)    |        (# samples)        |
======================================================================================================================================================================
|  0 | SELECT STATEMENT             |      |         |       |         3 |     +0 |     1 |     1408 |      |       |         |          |                           |
|  1 |   SEQUENCE                   | S    |         |       |         3 |     +0 |     1 |     1408 |      |       |         |          |                           |
|  2 |    TABLE ACCESS STORAGE FULL | T    |    439K | 20738 |         4 |     +0 |     1 |     1408 | 2496 |   2GB |  99.97% |   100.00 | cell smart table scan (1) |
======================================================================================================================================================================

I have highlighted the query text (which includes sequence.NEXTVAL) and the TABLE ACCESS STORAGE FULL row source; if you scroll right, you’ll see the “cell smart table scan” wait event and the populated Cell Offload column, both of which indicate that a smart scan did happen for that monitored query execution.

Note that I often hear sayings like “xyz disables smart scan” or “using a non-offloadable function disables the smart scan”. This can be misleading, giving the impression that if you use a non-Exadata-supported function/feature in one part of your query, then smart scanning is disabled for the whole query. This is incorrect.

First of all, smart scanning is not a query-level thing, it’s an access path-level thing. You can easily run a two table join where one table is on NFS storage (no smart scanning) and the other is on Exadata storage (which allows smart scanning) – even though smart scanning is “disabled” for the first table, the other table can still be accessed using it! By the way, it’s even possible to have some partitions of a table on NFS and other partitions of the same table on Exadata storage cells; the same table scan can then use smart scanning for the partitions stored on the cells and regular buffered or direct path reads for the partitions on NFS. So, smart scanning is not some “all or nothing” query-level thing – this decision is made separately for every single segment full scanned in the query. More details in the book.

Secondly, using a non-offloadable function in a filter predicate when retrieving rows does not prevent Oracle from pushing other, offloadable predicate functions to the storage cells. So again, it’s not an all-or-nothing decision. If a predicate function is not offloadable, it just won’t be offloaded (and will show up as just a filter() predicate, as opposed to both storage() and filter() predicates), but all the other supported predicates can still be offloaded without a problem. Here’s an example:


SELECT * FROM t 
WHERE 
     line = 9999
AND  mod(line,10) = 0 
AND  f1(line) =9999

Plan hash value: 1601196873

----------------------------------------------------------------
| Id  | Operation                 | Name | E-Rows | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |        |  2887 (100)|
|*  1 |  TABLE ACCESS STORAGE FULL| T    |      1 |  2887   (1)|
----------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage(("LINE"=9999 AND MOD("LINE",10)=0))
       filter(("LINE"=9999 AND MOD("LINE",10)=0 AND "F1"("LINE")=9999))

 

Note that the F1() function is a PL/SQL function and filtering based on PL/SQL functions can not be offloaded (as Exadata cells don’t run PL/SQL). This is visible in the predicate section where two predicates are shown in the storage() predicate list but the third one (F1) only shows up in the filter() predicate list.

So, as I explained in the beginning, the smart scan is not an all-or-nothing thing – this decision is made for every segment (partition) separately during the query execution phase, again for every execution, of course assuming that we are doing a full table scan (or fast full index scan) on them; that part is fixed in the execution plan.

And as shown above, predicate offload is not an all-or-nothing thing either. If one predicate can’t be offloaded, then other predicates still can.

However, there’s a catch. If you combine such predicates with ANDs, Oracle can push the offloadable (storage()) predicates to the storage cells, where they do the early filtering – and the last, non-offloadable (filter()) predicate is then applied to the retrieved dataset in the database layer.

But when you combine such predicates with an OR clause, there’s a logical problem. OR has to return rows which match ANY of the predicates (not ALL of them), therefore you can’t return only a subset of rows from the storage cells which match predicate A and hope that this set also contains all the rows which would match the (non-offloadable) predicate B. To evaluate B, you would still need to go through all the rows – but as B is non-offloadable, this has to be done in the database layer, so all rows have to be fetched into the database layer for it.

This explains why the storage() predicate disappears when I chain my query predicates together with an OR instead of the AND:


SELECT * FROM t 
WHERE 
     line = 9999
AND  mod(line,10) = 0 
OR   f1(line) =9999

Plan hash value: 1601196873

----------------------------------------------------------------
| Id  | Operation                 | Name | E-Rows | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |        |  2963 (100)|
|*  1 |  TABLE ACCESS STORAGE FULL| T    |   6353 |  2963   (3)|
----------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("F1"("LINE")=9999 OR ("LINE"=9999 AND MOD("LINE",10)=0)))

The storage() predicate section above is gone – as Oracle has to retrieve all the rows into the database layer anyway (due to the non-offloadable F1 function used in an OR predicate), it doesn’t make sense to offload the other filters to the storage cells only to return the rows anyway. Makes sense? ;-)

Note that even though the storage() predicates disappeared from the plan, this doesn’t mean that the entire smart scanning has been disabled, the smart scanning can still happen, it’s only the predicate offloading sub-component of the smart scan what is not used. The storage cells can still do the asynchronous prefetching, decompression offloading and projection offloading in these cases.

By the way, it’s all explained in the book!


Create a database link with the new host:port/service syntax

I just noticed that (finally) in 11.2 this syntax is supported:

SQL> CREATE DATABASE LINK demo_x2 
  2  CONNECT TO tanel IDENTIFIED BY password 
  3  USING 'exadb03:1521/DEMO';
Database link created.

This just makes life a bit easier as there’s no need to use the long TNS format entry (or a tnsnames.ora/LDAP alias). It might work in 11.1 too (haven’t tested) but it didn’t work on 10.2.0.4 …
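For comparison, here’s the same link defined with the long descriptor format that this syntax saves you from typing:

CREATE DATABASE LINK demo_x2
CONNECT TO tanel IDENTIFIED BY password
USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=exadb03)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DEMO)))';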

Update: This feature works for dblinks in 10.2 onwards – when I tested it on my 10.2, I got an error initially, but it was because the hostname I used didn’t resolve to an IP. Thanks to Randolf Geist for pointing this out.

In case you didn’t know, sqlplus has supported such an easy connect method since 10g:

tanel@mac02:~$ sqlplus tanel/password@exadb03/DEMO
SQL*Plus: Release 10.2.0.4.0 - Production on Mon Mar 5 09:51:27 2012
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL>

If you omit the port from the syntax, it defaults to 1521.


Oracle Core: Essential Internals for DBAs and Developers book by Jonathan Lewis

In case you didn’t know, Jonathan Lewis’s new Oracle Core: Essential Internals for DBAs and Developers book is out (and has been for a few weeks already).

I was the technical reviewer for that book and I can say it’s awesome! It will likely be the best Oracle internals book out there for the coming 10 years, just like Steve Adams’s Oracle Internal Services book was in the last decade :)

Jonathan does a very good job of explaining complex things in a simple enough way – and the book is not just a dry listing of how things work inside the Oracle database; it also explains why they work like they do and what the benefits, limitations and side effects of that behavior are.

So check out Jonathan’s book page for more details (and I think you can still order a heavily discounted ebook from Apress too):

Enjoy :)
