Enkitec + Accenture = Even More Awesomeness!

Enkitec is the best consulting firm for hands-on implementation, running, and troubleshooting of your Oracle-based systems, especially engineered systems like Exadata. We have a truly awesome group of people here; many are the best in their field (just look at the list!).

This is why I am here.

This is also why Accenture approached us some time ago – and you may already have seen today’s announcement that Enkitec got bought!

We are all now part of Accenture, and this opens up a whole lot of new opportunities. I think this is BIG, and I will explain how I see the future (sorry, no Oracle Database internals in this post ;-)

In my opinion the single most important detail of this transaction is that both Enkitec and the folks at Accenture realize that the reason Enkitec is so awesome is that awesome techies want to work here. And we don’t just want to keep it that way – we must keep it that way!

The Enkitec group will not be dissolved into Accenture. If it were, we would disappear like a drop in the ocean, and Accenture would have lost its investment. Instead, we will remain an island in that ocean, continuing to provide expert help for our existing and new customers – and, in the long term, helping Accenture build additional capability for the massive projects of its customers.

We will not have ten thousand people in our group. Instead, we will continue hiring (and retaining) people exactly the way we have been – organic growth, by taking on only the best, like-minded people. The main difference is that now, with Accenture behind us, we can hire the best people globally, as we’ll have operations in over 50 countries. I understand that we won’t likely even double in size in the next few years, as we plan to stick to hiring only the best.

I think we will have a much, much wider reach now, showing how to do Oracle technology “our way” all around the world. With Accenture behind us, we will be navigating even larger projects in larger businesses, influencing things earlier and more deeply. And on a more personal note, I’m looking forward to all those 10-rack Exadata and 100 TB In-Memory DB Option performance projects ;-)

See you at Enkitec E4 in June!


Posted in Announcement, Exadata, Oracle | 6 Comments

What the heck are the /dev/shm/JOXSHM_EXT_x files on Linux?

There was an interesting question in Oracle-L about the JOXSHM_EXT_* files in /dev/shm directory on Linux. Basically something like this:

$ ls -l /dev/shm/* | head
-rwxrwx--- 1 oracle dba 4096 Apr 18 10:16 /dev/shm/JOXSHM_EXT_0_LIN112_1409029
-rwxrwx--- 1 oracle dba 4096 Apr 18 10:16 /dev/shm/JOXSHM_EXT_100_LIN112_1409029
-rwxrwx--- 1 oracle dba 4096 Apr 18 10:16 /dev/shm/JOXSHM_EXT_101_LIN112_1409029
-rwxrwx--- 1 oracle dba 4096 Apr 18 10:23 /dev/shm/JOXSHM_EXT_102_LIN112_1409029
-rwxrwx--- 1 oracle dba 4096 Apr 18 10:23 /dev/shm/JOXSHM_EXT_103_LIN112_1409029
-rwxrwx--- 1 oracle dba 36864 Apr 18 10:23 /dev/shm/JOXSHM_EXT_104_LIN112_1409029

There are a few interesting MOS articles about these files and how/when to get rid of them (don’t remove any files before reading the notes!), but none of these articles explains why the JOXSHM (and PESHM) files are needed at all:

  • /dev/shm Filled Up With Files In Format JOXSHM_EXT_xxx_SID_xxx (Doc ID 752899.1)
  • Stale Native Code Files Are Being Cached with File Names Such as: JOXSHM_EXT*, PESHM_EXT*, PESLD* or SHMDJOXSHM_EXT* (Doc ID 1120143.1)
  • Ora-7445 [Ioc_pin_shared_executable_object()] (Doc ID 1316906.1)
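As a quick OS-level sanity check before touching anything, you can see which processes currently have one of these segments mapped into their address space. This is a hedged sketch, not an Oracle-supplied tool – it just scans `/proc/*/maps` on Linux, and only sees processes your user is allowed to inspect:

```shell
#!/bin/sh
# Print the PIDs (and command names) of processes that currently have a
# file matching the given name pattern mapped into their address space.
# Only processes readable by the current user show up.
pids_mapping() {
  pattern="$1"
  for m in /proc/[0-9]*/maps; do
    pid=${m#/proc/}; pid=${pid%/maps}
    if grep -q "$pattern" "$m" 2>/dev/null; then
      printf '%s %s\n' "$pid" "$(cat /proc/$pid/comm 2>/dev/null)"
    fi
  done
}

# Example: which processes have JOXSHM segments mapped right now?
pids_mapping 'JOXSHM_EXT'
```

If the listed PIDs belong to running Oracle processes, that’s a strong hint the segments are still in use and should not be removed by hand.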

Here’s an explanation – a slightly more elaborate version of what I already posted on Oracle-L:

Continue reading

Posted in Linux, Oracle | 6 Comments

Oracle Memory Troubleshooting, Part 4: Drilling down into PGA memory usage with V$PROCESS_MEMORY_DETAIL

If you haven’t read them, here are the previous articles in the Oracle memory troubleshooting series: Part 1, Part 2, Part 3.

Let’s say you have noticed that one of your Oracle processes is consuming a lot of private memory. V$PROCESS has the PGA_USED_MEM / PGA_ALLOC_MEM columns for this. Note that this view tells you what Oracle thinks it’s using – how many bytes of allocations and frees it has kept track of. While this is usually not the true memory usage of the process, as non-Oracle-heap allocation routines and OS libraries may allocate (and leak) memory of their own, it’s a good starting point and usually enough.
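To cross-check the Oracle-side numbers against what the OS thinks, you can sum up the process’s memory from /proc. A sketch assuming Linux – feed it the server process’s OS PID (e.g. from V$PROCESS.SPID):

```shell
#!/bin/sh
# Sum a process's resident memory from the OS's point of view, using the
# per-mapping Rss/Pss counters in /proc/PID/smaps. Compare the result
# against PGA_ALLOC_MEM to spot memory Oracle's heap manager doesn't track.
os_mem_kb() {  # usage: os_mem_kb PID  -> prints "<rss_kb> <pss_kb>"
  awk '/^Rss:/ { rss += $2 } /^Pss:/ { pss += $2 }
       END { print rss, pss }' /proc/$1/smaps
}

os_mem_kb $$   # example: this shell's own Rss and Pss, in kB
```

Pss divides shared pages (like the SGA) proportionally across all attached processes, so for a dedicated server process it’s often the fairer of the two numbers.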

Then the V$PROCESS_MEMORY view allows you to see a basic breakdown of that process’s memory usage – whether it’s used for SQL, PL/SQL or Java, is unused (Freeable), or is allocated for “Other” reasons. You can use either the smem.sql or pmem.sql script for this (they report v$process_memory for a SID or OS PID):

SQL> @smem 198
Display session 198 memory usage from v$process_memory....

       SID        PID    SERIAL# CATEGORY        ALLOCATED       USED MAX_ALLOCATED
---------- ---------- ---------- --------------- ---------- ---------- -------------
       198         43         17 Freeable           1572864          0
       198         43         17 Other              5481102                  5481102
       198         43         17 PL/SQL                2024        136          2024
       198         43         17 SQL              117805736  117717824     118834536

From the above output we see that this session has allocated over 100 MB of private memory for “SQL” reasons. This normally means SQL workareas, so we can break it down further by querying V$SQL_WORKAREA_ACTIVE, which shows all currently in-use cursor workareas in the instance. I’m using the wrka.sql script for convenience – and listing only my SID’s workareas:

Continue reading

Posted in Oracle | 6 Comments

Slides of my previous presentations

Here are the slides of some of my previous presentations (that I haven’t made public yet, other than delivering them at conferences and training sessions):

Scripts and Tools That Make Your Life Easier and Help to Troubleshoot Better:

  • I delivered this presentation at the Hotsos Symposium Training Day in 2010:

Troubleshooting Complex Performance Issues – Part1:
Continue reading

Posted in Oracle | 1 Comment

Where does the Exadata storage() predicate come from?

On Exadata (or when setting cell_offload_plan_display = always on non-Exadata) you may see the storage() predicate in addition to the usual access() and filter() predicates in an execution plan:

SQL> SELECT * FROM dual WHERE dummy = 'X';

D
-
X

Check the plan:

SQL> @x
Display execution plan for last statement for this session from library cache...

SQL_ID  dtjs9v7q7zj1g, child number 0
SELECT * FROM dual WHERE dummy = 'X'

Plan hash value: 272002086

------------------------------------------------------------------------
| Id  | Operation                 | Name | E-Rows |E-Bytes| Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |        |       |     2 (100)|
|*  1 |  TABLE ACCESS STORAGE FULL| DUAL |      1 |     2 |     2   (0)|
------------------------------------------------------------------------

Predicate Information (identified by operation id):

   1 - storage("DUMMY"='X')

The access() and filter() predicates come from the corresponding ACCESS_PREDICATES and FILTER_PREDICATES columns in V$SQL_PLAN. But there’s no STORAGE_PREDICATES column there!
Continue reading

Posted in Exadata, Oracle | Leave a comment

Hotsos Symposium 2014

After missing last year’s Hotsos Symposium (trying to cut down on my travel, as you know :), I will present at, and deliver the full-day Training Day of, this year’s Hotsos Symposium! It will be the 10th time I’ve attended (and spoken at) this awesome conference. So I guess this means more beer than usual. Or maybe less, as I’m getting old. Let’s make it “as usual”, then :o)

I have (finally) sent the abstract and the TOC of the Training Day to the Hotsos folks, and they’ve been uploaded. So check out the conference sessions and the Training Day contents here. I aim to keep my training day very practical – I’ll just be showing how I troubleshoot most issues that I hit, with plenty of examples. It will be suitable for both developers and DBAs. In the last part of the training day I will talk about some Oracle 12c internals and dive a bit deeper into the lower levels of troubleshooting, so we can have some fun too.

Looks like we’ll be having a good time!

Posted in Announcement, Oracle, Oracle 12c | Leave a comment

Oracle X$ tables – Part 1 – Where do they get their data from?

It has long been public knowledge that the X$ fixed tables in Oracle are just “windows” into Oracle’s memory. So whenever you query an X$ table, the FIXED TABLE rowsource function in your SQL execution plan will just read some memory structure, decode its contents and show you the results in tabular form. This is correct, but not the whole truth.

Check this example. Let’s query the X$KSUSE table, which is used by V$SESSION:

SQL> SELECT addr, indx, ksuudnam FROM x$ksuse WHERE rownum <= 5;

ADDR           INDX KSUUDNAM
-------- ---------- ------------------------------
391513C4          1 SYS
3914E710          2 SYS
3914BA5C          3 SYS
39148DA8          4 SYS
391460F4          5 SYS

Now let’s check in which Oracle memory region this memory address resides (SGA, PGA, UGA, etc.). I’m using my fcha script for this (Find CHunk Address). You should probably not run this script on busy production systems, as it uses the potentially dangerous X$KSMSP fixed table:

SQL> @fcha 391513C4
Find in which heap (UGA, PGA or Shared Pool) the memory address 391513C4 resides...

WARNING!!! This script will query X$KSMSP, which will cause heavy shared pool latch contention
in systems under load and with large shared pool. This may even completely hang
your instance until the query has finished! You probably do not want to run this in production!

Press ENTER to continue, CTRL+C to cancel...

LOC KSMCHPTR   KSMCHIDX   KSMCHDUR KSMCHCOM           KSMCHSIZ KSMCHCLS   KSMCHTYP KSMCHPAR
--- -------- ---------- ---------- ---------------- ---------- -------- ---------- --------
SGA 39034000          1          1 permanent memor     3977316 perm              0 00


OK, these X$KSUSE (V$SESSION) records reside in a permanent allocation in the SGA, and my X$ query apparently just parsed and presented the information from there.
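The same idea works at the OS level too. As a rough analog of fcha (a sketch assuming Linux and its /proc filesystem, not an Oracle tool), you can resolve an address against a process’s memory map to see which mapping it falls into:

```shell
#!/bin/sh
# Print the /proc/PID/maps line whose address range contains the given
# hex address - e.g. an X$ table ADDR value, resolved against the server
# process that returned it (the PID is up to you; V$PROCESS.SPID has it).
addr_region() {  # usage: addr_region PID HEXADDR
  awk -v a="$2" '
    function hex2dec(h,   i, n) {
      n = 0; h = tolower(h)
      for (i = 1; i <= length(h); i++)
        n = n * 16 + index("0123456789abcdef", substr(h, i, 1)) - 1
      return n
    }
    BEGIN { target = hex2dec(a) }
    { split($1, r, "-")
      if (target >= hex2dec(r[1]) && target < hex2dec(r[2])) print }
  ' /proc/$1/maps
}
```

A System V SGA typically shows up as /SYSV… segments in the map, while a MEMORY_TARGET-style SGA is backed by /dev/shm files, so the mapping name itself tells you which kind of region you hit.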

Now let’s query something else – for example, the “Soviet Union” view, X$KCCCP:

Continue reading

Posted in Oracle | 16 Comments

cell flash cache read hits vs. cell writes to flash cache statistics on Exadata

When the Smart Flash Cache was introduced in Exadata, it cached reads only. So there were only read “optimization” statistics, like cell flash cache read hits and physical read requests/bytes optimized, in V$SESSTAT and V$SYSSTAT (the former accounted for the read IO requests that got their data from the flash cache, and the latter accounted for the disk IOs avoided thanks to both the flash cache and storage indexes). So, if you wanted to measure the benefit of the flash cache only, you’d have to use the cell flash cache read hits metric.

This was all fine until you enabled the Write-Back Flash Cache in a newer version of cellsrv. We still had only the “read hits” statistic in the V$ views! And when investigating closer, both the read hits and write hits were accumulated in the same read hits statistic! (I can’t reproduce this anymore on our systems with the latest patched cellsrv, but it was definitely the behavior earlier, as I demoed in various places.)

Side note: this is likely because it’s not easy to just add more statistics to Oracle code within a single small patch. The statistic counters are referenced by other modules using macros with their direct numeric IDs (and memory offsets into the v$sesstat array), and those IDs and addresses would change when more statistics get added. So you can pretty much add new statistic counters only with new full patchsets. It’s the same with instance parameters, by the way – that’s why the “spare” statistics and spare parameters exist: they’re placeholders for temporary use until the new parameter or statistic gets added permanently with a full patchset update.

So, this is probably the reason why both the flash cache read and write hits initially got accumulated under the cell flash cache read hits statistic. Later on this seemed to get “fixed”, so that the read hits statistic only showed read hits and the flash write hits were not accounted anywhere. You can test this easily by measuring your DBWR’s V$SESSTAT metrics with Snapper, for example: if you get way more cell flash cache read hits than physical read total IO requests, then you’re probably accumulating both read and write hits in the same metric.

Let’s look into a few different database versions:

Continue reading

Posted in Exadata, Oracle | 4 Comments

Hard Drive Predictive Failures on Exadata

This post also applies to non-Exadata systems, as hard drives work the same way in other storage arrays too – just the commands you would use for extracting the disk-level metrics would be different.

I just noticed that one of our Exadatas had a disk put into “predictive failure” mode, and thought I’d show how to measure why the disk is in that mode (as opposed to just replacing it without really understanding the issue ;-)

SQL> @exadata/cellpd
Show Exadata cell versions from V$CELL_CONFIG....

-------------------- -------------------- -------------------- ---------- ---------- ---------- ---------- ---------- ----------- ---------- ----------
FlashDisk           normal                      183         23          8
FlashDisk           not present                 183         23          8                     3
FlashDisk           normal                      366         23         16
FlashDisk           normal                      366         23         16
HardDisk            normal                    20489       1863         11
HardDisk            warning - predictive       1863       1863          1          1
HardDisk            normal                    22352       1863         12
HardDisk            normal                    22352       1863         12

So, one of the hard disks in one of the storage cells has been put into predictive failure mode. Let’s find out why!
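On a plain Linux server without cellcli, the closest equivalent would be pulling the SMART attributes that typically drive a predictive-failure decision. A hedged sketch – the device path in the usage comment is just an example, and the filter merely greps a `smartctl -A` style attribute table for the counters that matter most:

```shell
#!/bin/sh
# Filter a `smartctl -A` attribute table (read from stdin) down to the
# counters that most often sit behind a "predictive failure" flag:
# grown/remapped sectors and sectors pending remap.
smart_predictive() {
  awk '/Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {
         printf "%-26s raw=%s\n", $2, $NF
       }'
}

# Usage (as root; the device path is an example):
#   smartctl -A /dev/sda | smart_predictive
```

A steadily growing raw value in any of these attributes is the classic reason a drive gets flagged before it actually fails.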

Continue reading

Posted in Exadata, Linux, Oracle | Leave a comment

When do Oracle Parallel Execution Slaves issue buffered physical reads – Part 2?

In the previous post about in-memory parallel execution, I described the cases in which in-memory PX can kick in for your parallel queries.

A few years ago (around the Exadata X2 release time) I was helping a customer with their migration to Exadata X2. Many of the queries ran way slower on Exadata compared to their old HP Superdome. The Exadata system was configured according to Oracle’s “best practices”, which included setting parallel_degree_policy = AUTO.

As there were thousands of reports (and most of them had performance issues), and we couldn’t extract SQL Monitoring reports (due to another issue), I just looked into ASH data for a general overview. A SQL Monitoring report takes its execution plan line time breakdown from ASH anyway.

First I ran a simple ASH query that counted the ASH samples (seconds spent) and grouped the results by rowsource type. (I was using a custom script back then, but you could achieve the same by running @ash/ashtop "sql_plan_operation||' '||sql_plan_options" session_type='FOREGROUND' sysdate-1/24 sysdate, for example):

Continue reading

Posted in Exadata, Oracle | 4 Comments