Oracle 11g: Reading alert log via SQL

Oracle has made some major improvements to the diagnosability infrastructure in version 11g. Here’s one little detail.

Before Oracle 11g it was already possible to access the alert log via SQL, using an external table or a pipelined function which in turn uses utl_file. However, after reading the text you still need to parse it yourself to extract the information you need.
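The pre-11g external table approach can be sketched like this. The directory path and alert log file name below are hypothetical examples and need to be adjusted for your environment:

```sql
-- Sketch only: pre-11g access to alert.log via an external table.
-- The directory path and file name are hypothetical examples.
CREATE DIRECTORY bdump_dir AS '/u01/app/oracle/admin/LIN11G/bdump';

CREATE TABLE alert_log_ext (
  line VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY bdump_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    NOBADFILE NODISCARDFILE NOLOGFILE
    FIELDS
    REJECT ROWS WITH ALL NULL FIELDS
  )
  LOCATION ('alert_LIN11G.log')
)
REJECT LIMIT UNLIMITED;

-- Each line of the alert log now comes back as a row:
SELECT line FROM alert_log_ext WHERE line LIKE 'ORA-%';
```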

Starting from 11g, Oracle does all this work for you. There is a fixed table X$DBGALERTEXT; when you query it, Oracle reads the log.xml from the alert directory (which contains the same data as alert.log), parses it and returns the details back as rows:

SQL> select message_text from X$DBGALERTEXT where rownum <= 20;

Starting ORACLE instance (normal)
Shared memory segment for instance monitoring created
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
ILAT =18
SYS auditing is disabled
Starting up ORACLE RDBMS Version:
Using parameter settings in client-side pfile /u01/app/oracle/admin/LIN11G/pfile/init.ora on machine linux03
System parameters with non-default values:
  processes                = 150
  memory_target            = 404M
  control_files            = "/u01/oradata/LIN11G/control01.ctl"
  control_files            = "/u01/oradata/LIN11G/control02.ctl"
  control_files            = "/u01/oradata/LIN11G/control03.ctl"
  db_block_size            = 8192

20 rows selected.

This is the text representation, but you can get individual details from other columns as listed below:

           Name                            Null?    Type
           ------------------------------- -------- ----------------------------
    1      ADDR                                     RAW(4)
    2      INDX                                     NUMBER
    3      INST_ID                                  NUMBER
    6      ORGANIZATION_ID                          VARCHAR2(64)
    7      COMPONENT_ID                             VARCHAR2(64)
    8      HOST_ID                                  VARCHAR2(64)
    9      HOST_ADDRESS                             VARCHAR2(16)
   10      MESSAGE_TYPE                             NUMBER
   11      MESSAGE_LEVEL                            NUMBER
   12      MESSAGE_ID                               VARCHAR2(64)
   13      MESSAGE_GROUP                            VARCHAR2(64)
   14      CLIENT_ID                                VARCHAR2(64)
   15      MODULE_ID                                VARCHAR2(64)
   16      PROCESS_ID                               VARCHAR2(32)
   17      THREAD_ID                                VARCHAR2(64)
   18      USER_ID                                  VARCHAR2(64)
   19      INSTANCE_ID                              VARCHAR2(64)
   20      DETAILED_LOCATION                        VARCHAR2(160)
   21      PROBLEM_KEY                              VARCHAR2(64)
   22      UPSTREAM_COMP_ID                         VARCHAR2(100)
   23      DOWNSTREAM_COMP_ID                       VARCHAR2(100)
   24      EXECUTION_CONTEXT_ID                     VARCHAR2(100)
   26      ERROR_INSTANCE_ID                        NUMBER
   27      ERROR_INSTANCE_SEQUENCE                  NUMBER
   28      VERSION                                  NUMBER
   29      MESSAGE_TEXT                             VARCHAR2(2048)
   30      MESSAGE_ARGUMENTS                        VARCHAR2(128)
   31      SUPPLEMENTAL_ATTRIBUTES                  VARCHAR2(128)
   32      SUPPLEMENTAL_DETAILS                     VARCHAR2(128)
   33      PARTITION                                NUMBER
   34      RECORD_ID                                NUMBER
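With these columns you can filter the alert log contents directly in SQL instead of parsing text yourself. A sketch (run as SYSDBA, since X$ tables are not visible to regular users):

```sql
-- Sketch: return only error messages instead of the full text stream.
SELECT record_id, message_text
FROM   x$dbgalertext
WHERE  message_text LIKE '%ORA-%'
ORDER  BY record_id;
```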

There’s also a fixed table X$DBGDIREXT, which returns all file and directory names under the [diagnostic_dest]/diag directory:

SQL> select lpad(' ',lvl,' ')||logical_file file_name
  2  from X$DBGDIREXT
  3  where rownum <=20;


20 rows selected.

If you’re building some custom alert log monitoring, then starting from 11g these X$ tables can help you. On the other hand, I would still prefer to monitor logfiles using plain and simple scripts, as accessing these X$ tables requires the instance to be up and operational. The better use case I see: if you don’t have access to the OS filesystem but nevertheless want to see the alert log contents, this can be the easiest option in 11g…


18 Responses to Oracle 11g: Reading alert log via SQL

  1. Prem says:

    Having it in a table should be useful from the developers’ perspective, as they often ask DBAs to check for alert log entries (not sure what they expect there).

    Thanks for sharing it, Tanel.

    ~ Prem ~

  2. Unfortunately, this is an X$ table, which means that it is only readable when connected as SYSDBA. Nice thing, but it would be much more useful as a documented V$ table. Hopefully Oracle Corp. will give us that, too.

  3. Tanel Poder says:

    Yeah, for now you can create your own “V$” view on top of the X$ one and grant access on that to whoever needs to access it.
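    A minimal sketch of that workaround, assuming it is run as SYS; the view and grantee names (v_alert_log, monitoring_user) are made-up examples:

    ```sql
    -- As SYS: wrap the X$ table in a view and grant it to whoever needs it.
    -- View and grantee names are hypothetical.
    CREATE OR REPLACE VIEW v_alert_log AS
      SELECT * FROM x$dbgalertext;

    GRANT SELECT ON v_alert_log TO monitoring_user;

    -- The grantee can then query it with a fully qualified name:
    --   SELECT message_text FROM sys.v_alert_log WHERE message_text LIKE '%ORA-%';
    ```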

  4. In the end the exercise was cool, but reading your post you start wondering why Laurent and I did it.


  5. Senthil says:

    Hi Tanel,

    Congrats for becoming oracle ACE :)

    – We have a requirement for one of the tables in our production DB: the table needs to be renamed and a column dropped. The table is highly active, with lots of sessions accessing it continuously, and it’s not possible to take any downtime for this activity. Is it possible to do these modifications while all the transactions are happening?

    — regards,

  6. Tanel Poder says:



    Note that this kind of stuff should be tested very carefully, thoroughly and repeatedly before trying it out live!

    Also, if you rename the table – how do you expect the SQL to continue working (are you creating a view or synonym instead?)

    So test the hell out of this approach before even thinking about running it live (with heavy production-like activity going on in your test db). And then test some more.

    • PC says:

      Hi Tanel,

      Sorry for posting on such an old thread, but can you please help me understand why you stress testing so much? Is this feature known to have issues?


      • Tanel Poder says:

        Every feature has issues if used wrongly or when something important regarding its use has been overlooked. Some features have issues even when you do everything according to the specifications.

        If you can afford hours/days of downtime, you don’t need to test *that much* (just like in dev databases, just change the damn thing and if it doesn’t work, roll it back somehow). If you can’t afford the downtime, like the case for the person asking me the question, then test the hell out of your change to reduce the risk of problems and downtime.

        There are some features like online table shrink, which work well in a training class on read-only tables with 10 rows, but may never complete or may end up crashing when run on large and very busy tables. I wouldn’t take DBMS_REDEFINITION lightly either and would *not* run it in production without testing it on a test database with equal data volumes and transaction activity (it’s harder to reproduce that, but again, if the system is SO critical that it can’t be down, then you should treat it as such also when planning changes).
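        For reference, the basic DBMS_REDEFINITION flow for dropping a column looks roughly like this. This is only a sketch: the schema, table and column names are made up, and a real change would also involve SYNC_INTERIM_TABLE calls, copying dependent objects, and the testing discussed above:

        ```sql
        -- Sketch of online redefinition that drops a column (names are hypothetical).
        BEGIN
          DBMS_REDEFINITION.CAN_REDEF_TABLE('APPUSER', 'ORDERS');
        END;
        /

        -- Interim table defined without the column being dropped
        CREATE TABLE appuser.orders_interim AS
          SELECT order_id, customer_id, order_date
          FROM   appuser.orders
          WHERE  1 = 0;

        BEGIN
          DBMS_REDEFINITION.START_REDEF_TABLE(
            uname      => 'APPUSER',
            orig_table => 'ORDERS',
            int_table  => 'ORDERS_INTERIM');

          DBMS_REDEFINITION.FINISH_REDEF_TABLE('APPUSER', 'ORDERS', 'ORDERS_INTERIM');
        END;
        /
        ```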

  7. Jared says:

    Regarding creating your own v$ view on top of x$ tables:

    That can become something of a nuisance after an upgrade.

    There are instances when the internal structure of the data dictionary changes during the upgrade, causing the custom view to be invalid.

    In some cases, attempts to recompile the view will cause an ORA-600 [12600].

    Attempts to drop the view will cause the same error, and it will in fact be impossible to drop the view, short of hacking the data dictionary.

    I have a few databases with X_$KCBCBH and X_$KCBRBH views that cannot be dropped. Mostly just an annoyance.

  8. Tanel Poder says:

    Yep, Jared, you’ve got a valid point. Statspack also creates some X$ aliases (and some other monitoring tools do too). I like your idea of having a PL/SQL package instead…

    I don’t recommend just creating X$ aliases for all tables in production (I will also add a warning to my script which allows you to do so), just for the sake of not “disturbing” Oracle if it’s not needed…

  9. Mike says:

    But 11g has HM commands to check the integrity of the data dictionary, e.g.:
    exec dbms_hm.run_check('Dictionary Integrity Check', 'my_run_01');
    select dbms_hm.get_run_report('my_run_01') from dual;
    So you can just knock out fixing (hacking) those damaged rowids. ;)

  10. fsia says:


    I had a weird experience using X$DBGDIREXT on RAC. The said fixed table works on some of the nodes, but not on all of them. I did check the directory structure as well as the Unix permissions on all RAC nodes and they are the same. Does anyone know where to start looking?

  11. clint says:

    Hi Tanel, 2 questions (or food for thought you may have an opinion on):

    2 node RAC
    select distinct component_id from V$DIAG_ALERT_EXT

    I don’t see the listener component. Any thoughts on how I can manually get it in?

    I also get the issue that another poster described above: read permission errors when trying to pipeline across 2 nodes.

    select inst_id, originating_timestamp, message_text
    from TABLE(gv$(cursor(select inst_id, originating_timestamp, message_text
    from v$diag_alert_ext
    where originating_timestamp > current_timestamp - interval '8' hour -- interval units can be second, minute, hour, day, month, year
    and (message_text like '%TNS-12535%' or message_text like '%HOST=%'))))
    order by inst_id, originating_timestamp;

    ORA-12801: error signaled in parallel query server PZ99, instance hostname:instance_2 (2)
    ORA-48118: error encountered with list directory [/u01/oracle/11.2.0/grid_2/css]
    ORA-48193: OS command to open a directory failed
    Linux-x86_64 Error: 13: Permission denied
    Additional information: 2

    and the permissions of that folder look good, so something else is going on.
    > ls -l | grep css
    drwx–x–x 6 grid oinstall 4096 May 8 2012 css
    drwxrwx— 15 grid oinstall 4096 May 8 2012 inventory

  12. Carsten says:


    Change the permissions of your log.xml.

    Probably the permissions are wrong.

  13. I’ve been referring to this article since it was written, and just passed it on to some colleagues recently. One question that came up was how long the records are maintained. I have a standby database that revealed the following:

    SYS:presdbm1> select max(originating_timestamp) from X$DBGALERTEXT;

    11-AUG-15 AM -04:00

    SYS:presdbm1> c /max/min
    1* select min(originating_timestamp) from X$DBGALERTEXT
    SYS:presdbm1> /

    19-MAR-15 PM -04:00

    So my guess is it goes back to the most recent incarnation of the database?

    • Tanel Poder says:

      This X$ should just go straight to the alert log (the XML one? I don’t remember) and read it from the beginning. So it should read whatever is in the file. I’m not sure though, so if you find otherwise, let me know :-)
