Oracle 11g internals part 1: Automatic Memory Management

This is my attempt at getting some cheap popularity out of the recent Oracle 11g release. This is not going to be another Oracle 11g new features list; I'll just be posting my research findings here, in a semi-organized way.

The first post is about Automatic Memory Management. AMM manages all SGA and PGA memory together, allowing Oracle to shift memory from the SGA to PGAs and vice versa. You only need to set the MEMORY_TARGET parameter (and, if you like, MEMORY_MAX_TARGET).

You can read the rest of the general details in the documentation; here I will talk about how this feature has been implemented at the OSD / OS level (or at least how it appears to be implemented).

When I heard about MEMORY_TARGET, the first question that came to my mind was: how can Oracle shift shared SGA memory to private PGA memory on Unix? This would mean somehow deallocating space from an existing SGA shared memory segment and releasing it for PGA use. To my knowledge, the traditional SysV SHM interface is not flexible enough to downsize and release memory from a single shared memory segment. So I started checking out how Oracle had implemented this.

One option, of course, is not to implement it at all: just stop using the extra space in the SGA and it will soon be paged out if there is memory pressure (as long as you don't keep your SGA pages locked, which can't be used together with MEMORY_TARGET anyway). However, should this "unneeded" memory be used again, it would all have to be loaded back from the swap area.

I started by checking the shared memory IDs for my instance:

$ sysresv

IPC Resources for ORACLE_SID "LIN11G" :
Shared Memory:
ID              KEY
1900546         0x00000000
1933315         0xa62e3ad4
Semaphores:
ID              KEY
884736          0x6687edcc
Oracle Instance alive for sid "LIN11G"

OK, let's look for the corresponding SysV SHM segments:

$ ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 1900546    oracle    660        4096       0
0xa62e3ad4 1933315    oracle    660        4096       0

The segments are there, but wait a minute, they are only 4 kB in size each! If no large shared memory segments are in use, where does Oracle keep its SGA?

The immediate next check I did was to look at the mapped memory of an Oracle instance process, as the SGA should definitely be mapped there!

$ pmap `pgrep -f lgwr`
29861:   ora_lgwr_LIN11G
00110000      4K rwx--    [ anon ]
00111000     32K r-x--  /apps/oracle/product/11.1.0.6/lib/libclsra11.so
00119000      4K rwx--  /apps/oracle/product/11.1.0.6/lib/libclsra11.so
...
...
49000000  16384K rwxs-  /dev/shm/ora_LIN11G_3997699_0
4a000000  16384K rwxs-  /dev/shm/ora_LIN11G_3997699_1
4b000000  16384K rwxs-  /dev/shm/ora_LIN11G_3997699_2
4c000000  16384K rwxs-  /dev/shm/ora_LIN11G_3997699_3
4d000000  16384K rwxs-  /dev/shm/ora_LIN11G_3997699_4
...
...
88000000  16384K rwxs-  /dev/shm/ora_LIN11G_3997699_63
89000000  16384K rwxs-  /dev/shm/ora_LIN11G_3997699_64
bfc1f000     88K rwx--    [ stack ]
 total  1225048K

The pmap output reveals that Oracle 11g uses /dev/shm for its shared memory implementation instead. There are multiple 16 MB "files" mapped into the Oracle server processes' address space.
This is Linux's POSIX-style SHM implementation, where everything, including shared memory segments, is a file.

Thanks to allocating the SGA in many smaller chunks, Oracle can easily release parts of the SGA memory back to the OS, and server processes are then allowed to grow their aggregate PGA size by up to the amount of memory released.
(By the way, if your MEMORY_MAX_TARGET parameter is larger than 1024 MB, then Oracle's memory granule size is 16 MB on Linux; otherwise it's 4 MB.)
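
If you want to cross-check the granule size on your own instance, you can just ask the instance itself. A quick sketch (as far as I recall, V$SGAINFO exposes a 'Granule Size' row):

SQL> select bytes from v$sgainfo where name = 'Granule Size';

On an instance with MEMORY_MAX_TARGET above 1024 MB, like the one here, this should report 16777216 (16 MB).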

Note that PGA memory is still completely independent memory, allocated just by mmap'ing /dev/zero; it doesn't really have anything to do with shared memory segments (unless you're using some hidden parameters on Solaris, but that's another story).
PGA_AGGREGATE_TARGET itself is just a recommended number, derived from whatever is left over from MEMORY_TARGET minus SGA_TARGET (if that's set). Oracle uses that number to decide how big the PGAs it will "recommend" are for sessions using WORKAREA_SIZE_POLICY=AUTO.
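
To see how Oracle is currently splitting MEMORY_TARGET between the SGA and PGA targets, you can query V$MEMORY_DYNAMIC_COMPONENTS (a quick sketch; I use the same view in a reply in the comments below):

SQL> select component, current_size from v$memory_dynamic_components where component in ('SGA Target', 'PGA Target');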

So how does Oracle actually release the SGA memory when it’s downsized?

Compare these outputs:

/dev/shm before starting instance:

$ ls -l /dev/shm/
total 0

Obviously there's nothing reported, as no /dev/shm segments are in use.

/dev/shm after starting the instance with a fairly large SGA (note that some output is cut for brevity).
See how some of the memory "chunks" are zero bytes in size. These chunks are the ones which have been chosen as victims for destruction (or were never created in the first place) when space was needed for PGA areas. If you look at the pmap output of any server process, you will still see this memory mapped into the address space, but it is simply not used, because Oracle knows this memory has really been freed.

$ ls -l /dev/shm
total 818840
-rw-r----- 1 oracle dba 16777216 Aug 20 23:29 ora_LIN11G_1900546_0
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_0
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_1
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_10
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_11
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_12
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_13
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_14
-rw-r----- 1 oracle dba 16777216 Aug 20 23:37 ora_LIN11G_1933315_15
-rw-r----- 1 oracle dba 16777216 Aug 20 23:37 ora_LIN11G_1933315_16
-rw-r----- 1 oracle dba 16777216 Aug 20 23:37 ora_LIN11G_1933315_17
-rw-r----- 1 oracle dba 16777216 Aug 20 23:37 ora_LIN11G_1933315_18
-rw-r----- 1 oracle dba 16777216 Aug 20 23:37 ora_LIN11G_1933315_19
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_2
-rw-r----- 1 oracle dba 16777216 Aug 20 23:37 ora_LIN11G_1933315_20
-rw-r----- 1 oracle dba 16777216 Aug 20 23:37 ora_LIN11G_1933315_21
-rw-r----- 1 oracle dba 16777216 Aug 20 23:37 ora_LIN11G_1933315_22
-rw-r----- 1 oracle dba 16777216 Aug 20 23:37 ora_LIN11G_1933315_23
-rw-r----- 1 oracle dba 16777216 Aug 20 23:37 ora_LIN11G_1933315_24
...
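
If you want a quick count of how many granule files are currently backed by memory versus already released, a small shell sketch over the listing above does the job (the SID and shared memory ID are of course specific to my instance):

$ ls -l /dev/shm/ora_LIN11G_1933315_* | awk '$5 > 0 {used++} $5 == 0 {freed++} END {print used+0 " in use, " freed+0 " released"}'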

Here is another listing, taken after issuing "alter system set pga_aggregate_target=600M".
You can see below that most of the /dev/shm files which were 16 MB in the previous listing have also been zeroed out.

$ ls -l /dev/shm
total 408740
-rw-r----- 1 oracle dba 16777216 Aug 20 23:29 ora_LIN11G_1900546_0
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_0
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_1
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_10
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_11
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_12
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_13
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_14
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_15
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_16
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_17
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_18
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_19
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_2
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_20
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_21
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_22
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_23
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_24
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_25
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_26
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_27
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_28
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_29
-rw-r----- 1 oracle dba        0 Aug 20 23:29 ora_LIN11G_1933315_3
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_30
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_31
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_32
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_33
-rw-r----- 1 oracle dba        0 Aug 20 23:46 ora_LIN11G_1933315_34
-rw-r----- 1 oracle dba 16777216 Aug 20 23:29 ora_LIN11G_1933315_35
-rw-r----- 1 oracle dba 16777216 Aug 20 23:29 ora_LIN11G_1933315_36
-rw-r----- 1 oracle dba 16777216 Aug 20 23:29 ora_LIN11G_1933315_37
-rw-r----- 1 oracle dba 16777216 Aug 20 23:29 ora_LIN11G_1933315_38
...

So, on Linux (tested on OEL5), Oracle 11g uses a new mechanism for managing shared memory. Well, the mechanism itself isn't that new, but it is unconventional considering the long history of SysV SHM segment use for Oracle SGAs on Unix.
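
By the way, the underlying trick is not Oracle-specific at all; you can reproduce it with plain tmpfs files. A little sketch (run it as any user with write access to /dev/shm; the demo file name is made up):

$ dd if=/dev/zero of=/dev/shm/granule_demo bs=1M count=16   # tmpfs usage (i.e. RAM) grows by ~16 MB
$ df -k /dev/shm                                            # "Used" goes up by ~16384 KB
$ > /dev/shm/granule_demo                                   # truncate the file back to zero bytes
$ df -k /dev/shm                                            # "Used" drops back, the memory is returned to the OS
$ rm /dev/shm/granule_demo

Presumably Oracle does the equivalent of that truncation from its own code when a granule is released, which would explain why the zero-sized files above consume no memory even though they are still mapped into the processes' address space.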


Does this all matter? Yes.

Why does it matter? There are a few administrative differences compared to the conventional implementation. First of all, ipcs -m doesn't show the full size of these segments anymore; you need to list the /dev/shm contents for that. Also, pmap always reports that the memory is mapped (because it is), even when it does not have physical backing storage on the tmpfs behind /dev/shm.
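
So, if you want to know how much memory an instance's SGA really occupies, sum up its files in /dev/shm instead of trusting ipcs. For example (a sketch, using the file naming seen above):

$ du -ck /dev/shm/ora_LIN11G_* | tail -1     # total KB actually backed by memory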

One more important note: if you have not configured the tmpfs size on /dev/shm properly, then Oracle fails to allocate new POSIX-style shared memory and will not allow you to use the MEMORY_TARGET parameters (startup without those parameters will, however, succeed).

The error message you will likely get looks like this:

SQL> ORA-00845: MEMORY_TARGET not supported on this system

It is accompanied by the following entry in the alert log:

Sat Aug 18 12:37:31 2007
Starting ORACLE instance (normal)
WARNING: You are trying to use the MEMORY_TARGET feature. This feature requires the /dev/shm file system to be mounted for at least 847249408 bytes. /dev/shm is either not mounted or is mounted with available space less than this size. Please fix this so that MEMORY_TARGET can work as expected. Current available is 0 and used is 0 bytes.
memory_target needs larger /dev/shm

So you need to configure a large enough tmpfs on the /dev/shm device to fit all memory up to MEMORY_MAX_TARGET.

The configuration works roughly like this (run as root):

# umount tmpfs
# mount -t tmpfs shmfs -o size=1300m /dev/shm
# df -k /dev/shm
Filesystem           1K-blocks      Used Available Use% Mounted on
shmfs                  1331200         0   1331200   0% /dev/shm

This allows /dev/shm to grow up to roughly 1300 MB, so you can set MEMORY_MAX_TARGET (or MEMORY_TARGET) to 1300 MB. The Linux-specific Oracle 11g documentation has more details on how to configure this.
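
Also note that a plain mount command does not survive a reboot. To make the size persistent, you would add an /etc/fstab entry along these lines (several commenters below show variations of this; check the exact syntax for your distribution):

tmpfs   /dev/shm   tmpfs   size=1300m   0 0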


Note that while resetting the various parameters I had played with, I realized that Oracle has finally implemented a human-friendly way of resetting parameters in the SPFILE:

Sys@Lin11g> alter system reset pga_aggregate_target;

System altered.

Sys@Lin11g> alter system reset sga_target;

System altered.

…no scope=spfile sid='*' clause is needed. This resets the parameter in the spfile only; the values in memory persist.
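
For comparison, this is roughly what you would have had to type before to achieve the same thing:

SQL> alter system reset pga_aggregate_target scope=spfile sid='*';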



59 Responses to Oracle 11g internals part 1: Automatic Memory Management

  1. Pingback: Oracle Database 11g Automatic Memory Management « Kevin Closson’s Oracle Blog: Platform, Storage & Clustering Topics Related to Oracle Databases

  2. Tanel thank you for this blog and your sharings, this post is one of the best posts I have experienced all through the Oracle blogs!

    And I hope you have more time for blogging in the future.

    Best regards.

  3. tanelp says:

    Wow, thanks a lot! That's the best feedback I've got during my short blogging career :)

  4. Pingback: блога на явор » Blog Archive » /dev/shm for Oracle 11g DBAs

  5. Mark Bobak says:

    Hi Tanel,

    Thanks for the post.

    I assume this will work the same way on Linux x86-64?

    How, if at all, does this change other memory related configurations, such as hugepages?

    Thanks,

    -Mark

  6. kevinclosson says:

    Tanel,

    Oracle on Linux has always used /dev/shm to implement the Indirect Data Buffers feature…just FYI

  7. tanelp says:

    Thanks Kevin. Yep I vaguely remember that from the one time I used that feature.

    Mark, regarding the hugepages, looks like they can’t be used (at least as per documentation):

    http://download.oracle.com/docs/cd/B28359_01/server.111/b32009/appc_linux.htm

    “MEMORY_TARGET and MEMORY_MAX_TARGET cannot be used when LOCK_SGA is enabled. MEMORY_TARGET and MEMORY_MAX_TARGET also cannot be used in conjunction with huge pages on Linux.”

  8. kevinclosson says:

    oops, the way I wrote that insinuated that Oracle uses Indirect Data Buffers to implement 11g AMM…I didn’t mean that…what I meant was Oracle has exercised this style of shared memory before…albeit for an entirely different reason.

  9. Pingback: Oracle11g Automatic Memory Management and Linux Hugepages Support « Kevin Closson’s Oracle Blog: Platform, Storage & Clustering Topics Related to Oracle Databases

  10. Riyaj Shamsudeen says:

    Tanel
    As usual, I enjoyed your blog about this new feature. I am sure this knowledge will come handy.
    Thanks..

  11. Pingback: Log Buffer #59: a Carnival of the Vanities for DBAs « I’m just a simple DBA on a complex production system

  12. MAQ. says:

    Thanks alot … excellente

  13. Porus says:

    Way to go Tanel! Excellent Post.
    Your unix level knowledge is really excellent.

  14. Pingback: ernie.cz » Blog Archive » ORA-00845

  15. coskan says:

    You, again made my day

    Cheers Tanel.

  16. Josir says:

    Hi Tanel, you made my day too!!!
    It worked on Ubuntu 7.10.

    # umount devshm
    # mount -t tmpfs devshm -o size=1300m /dev/shm

    But I am wondering: how do I configure this tmpfs permanently in order to be set on next boot ?

  17. Josir says:

    I got it Tanel – I have to add it in /etc/fstab like any other mount…
    Shame on me…
    Thanks again!

    Josir Gomes
    Rio de Janeiro – Brasil

  18. dave says:

    question for the experts.. We have Oracle 11g RAC with 6 nodes running Redhat 2.6.9-67.ELsmp x86_64

    After increasing /dev/shm to 14G, we continue to get the ORA-00845 errors…

    I noticed that a df -kh generates:
    # df -k /dev/shm
    Filesystem 1K-blocks Used Available Use% Mounted on
    shmfs 14G 0 14G 0% /dev/shm

    It appears that none of this memory is being used.. is this accurate? assuming that Oracle is not using it yet because of the failed startup?

    Is there something we are missing?

    thanks,
    Dave

  19. sam says:

    Thanks for the great in-depth explanation. I benefited a lot from it. I have a question: I set my memory max target to 28G and memory target to 24G, and my system has by default max_sga=4G. Will the system ignore the max sga, or will it not give the sga_target more than 4GB? Shall I set it to 0?
    my physical memory 32G and oracle 11g is the only app on the system
    Thanks again and keep up the good work

  20. Tanel Poder says:

    @Dave – What’s your memory_target and memory_max_target value? These need to be smaller (or equal) to your /dev/shm size. If Oracle can’t put all SGA/PGA memory in there it won’t use it.

    @Sam – You should unset sga_target and pga_aggregate_target if you plan to use MEMORY_TARGET (unless you want to set some minimum values for sga_target/pga_aggregate_target)

  21. Senthil says:

    Thanks Tanel for your great stuff.

    When I enabled memory_target in Oracle 11g with RHEL 5.1, my server consumes more than 85% of swap usage.

    SGA and PGA value is 0 and memory_target is 4G.

    Do you have any fix to minimize swap usage.

  22. Kirk Brocas says:

    Is there a part 2?

  23. Tanel Poder says:

    Hi Kirk, no official part-2 yet, here are the posts which I’ve tagged as related to 11g:

    http://blog.tanelpoder.com/category/oracle-11g/

  24. Most users probably want the shm size adjustment to persist across reboots. So, here’s the /etc/fstab entry corresponding to Tanel’s mount command above:

    none /dev/shm tmpfs size=1300M 0 0

    (My very healthy OpenSUSE 11.1 system has no /dev/shm fs set up at all by default). Run “mount /dev/shm” to mount it on-demand.

  25. Uma says:

    Hello there,
    Thank you for all the info.
    I am very new to oracle 11g install on linux, tried to recover a db (created using dbca) using rman, ran into error,when tried to do startup mount, got the error described here, I am trying to implement the solution given by you,
    tried to umount /dev/shm- got the error device is too busy,
    does this mean I have to shut down the db and then try to unmount?
    Can you please help? Many thanks in anticipation.

  26. Hind says:

    thanks, a note to say:
    add this entry to /etc/fstab to mount shm every boot tmpfs :
    /dev/shm tmpfs size=2500m 0 0

  27. jni says:

    Well mate, that blog just saved me hours of researching. Thanks alot for sharing! Awesome blog!

  28. Don Woods says:

    Tanel, Thank you for your excellent analysis and summary of Oracle 11g memory management. I am installing 11g on a new Sun server and could not figure out why I received memory allocation errors when the SGA was increased to more than 4GB.

    Your experience has saved me countless hours of work.

    –Don

  29. Sherrie Kubis says:

    Tanel, thanks for publishing this, you are shedding light on how our Linux RAC Clusters are working. Question: We have a 3-node (each 32gb ram) Linux RAC cluster running Oracle 11.1.0.7, housing 4 instances with a total memory_target of 8.75 gb. OEM shows 29% ram is used. We are using AMM. The free -mt command shows
    total used free shared buffers cached
    Mem: 32189 31519 669 0 1250 25168
    -/+ buffers/cache: 5101 27088
    Swap: 32767 39 32728
    Total: 64957 31559 33397

    It looks like tmpfs is 16gb:
    df -k /dev/shm
    Filesystem 1K-blocks Used Available Use% Mounted on
    tmpfs 16481056 5604532 10876524 35% /dev/shm

    It looks like we are seeing 5.6gb of memory used, so I'm confused about why our sgas are defined to consume 8.75gb, but we only see 5.6gb used. I read that the sga by default takes 60% of memory_target and the pga gets the rest. Because there are not many connections, is this why we see a lower value?

    Does the tmpfs at 16gb mean we can only use 16gb of ram for our databases?

    We are just getting up and running, so there are not yet many connections.
    Any insights would be appreciated.

  30. Tanel Poder says:

    @Sherrie Kubis
    Hi Sherrie,

    Replying here what I also said in ORACLE-L list:

    The memory which is reserved for PGA_AGGREGATE_TARGET will not show up in /dev/shm as it’s not shared (PGAs are still allocated using process-private memory).

    You can query V$MEMORY_DYNAMIC_COMPONENTS to see how Oracle is currently using the memory:

    SQL> select component, current_size from v$memory_dynamic_components where component like '%Target%';

    COMPONENT                      CURRENT_SIZE
    ------------------------------ ------------
    SGA Target                        545259520
    PGA Target                        293601280

    Note that hugepages are not used with AMM on Linux so you may be not getting all the performance out of your hardware…

  31. Gunnar says:

    Hi!

    Thanks for a very informative post!

    I have a little comment regarding calculating free memory and posix shared memory segments on Linux.

    After doing some tests, I see that SysV IPC shared memory segment allocation is "invisible" when issuing the "free" command, i.e. the memory allocated is not accounted for as used. Allocated posix shared memory in /dev/shm seems to be accounted for under "cached".

    It is not trivial to calculate how much memory is available for applications on Linux, but here is what I think gives an *indication*:

    Free memory according to “free” in the line “-/+ buffers/cache”
    - sum(size of shared memory segments from ipcs -m)
    - sum(size of shared memory segments in /dev/shm)
    ————————————————–
    Memory available for applications
    ==================================================

    Anonymous mmaps are not accounted for in this calculation, neither are hugepages, and I guess there are other numbers I should take into account from /proc/meminfo.

    But at least the numbers I get are better than using top and free without any consideration to shared memory segments.

    Any comment or input regarding calculating free memory on linux would be greatly appreciated.

  32. AriHeikkinen says:

    moro,
    Just “a dummy one comment”.
    There are still some "poor customers" who must run more than one instance on the same host (one per database, non-RAC).
    Based on this article and the comments, on the OS side there is one /dev/shm and all SGAs use that very same /dev/shm (not possible to do it another way?), so as a result, when calculating the proper max size for /dev/shm (tmpfs) one should consider all SGAs (i.e. all instances on the host). Is it possible to make more /dev/shm mounts, like /dev/shm1 and /dev/shm2, and give a chosen shm to a chosen instance, or not?

  33. Pingback: How ORACLE Uses Memory on AIX. Part 4: Having Fun with 11g Memory_target « Intermediate SQL

  34. sthielen says:

    To keep the setting through a reboot on redhat, the following entry in /etc/fstab works for me.
    tmpfs /dev/shm tmpfs size=13000m 0 0
    Thanks for the tips

  35. danny says:

    Thanks for the read.

    What is the best way to monitor /dev/shm?

    Do you just use “ls -la /dev/shm” and “df -h”?

    Thanks

  36. Pat says:

    Hi Tanel,

    Very nice article. With AMM in 11g R2, if /dev/shm is not used for the PGA, how much space should we allocate for /dev/shm -> just the SGA_MAX_SIZE? For e.g.
    In 10g R2, we had SGA_MAX_SIZE as 1.5GB and PGA_AGGREGATE_TARGET as 1GB, now when this database is upgraded to 11g, should MEMORY_MAX_TARGET be set to 1.5GB or 2.5GB?

    Thanks.

  37. Pascal says:

    Hello Tanel,

    Thank you for a Great Article!

    Is it possible to dynamically change the size of /dev/shm without affecting the running Oracle DB instance?

    We have about 37 GB Memory in our DB-Servers but the /dev/shm was initially created with only 14 GB and we would now like to change it to 20 GB.

    Regards,

    Pascal

  38. maclean says:

    Hi Tanel,
    Oracle support recommended not to use pre_page_sga with ASMM in 10g; does this parameter work well in an 11g AMM environment?

  39. Pingback: Automatic Memory Management in 11g « Pavan DBA's Blog

  40. Kanwar says:

    Hi,
    Does the value of MEMORY_TARGET depend on the number of CPUs in any way? We have a situation where if we increase the # of CPUs (enable them — OS is Solaris), then the cluster throws an <> error. Before the cpu count was increased, it was working fine.

    Thanks,
    Kanwar

  41. Pingback: Oracle Exadata Performance series – Part 1: Should I use Hugepages on Linux Database Nodes? | Tanel Poder's blog: IT & Mobile for Geeks and Pros

  42. Gopala Meapuram says:

    Tanel,

    Thank you. This is an excellent information that you are sharing.

    Quick question:
    Oracle says "The use of AMM is absolutely incompatible with HugePages." in one of the notes on implementing HugePages.
    In the case of hugepages implementations, how does Oracle use its memory structures? Does it use the System V way of allocating the SGA, or the POSIX way?
    We are using Hugepages. I see files getting created under /dev/shm. But number of files getting created directly corresponds to ‘nattch’ of ipcs.
    In this case, is there way to configure /dev/shm using tmpfs.

    Please reply with details if any on its usage in 11gR2 with respect to Hugepages.

    Thanks,
    Gopal

  43. Pingback: Reading Material On Order 2 « Charles Hooper's Oracle Notes

  44. Pingback: Book Review: Oracle Database 11g Performance Tuning Recipes « Charles Hooper's Oracle Notes

  45. gk says:

    When we set a value for PGA_AGGREGATE_TARGET parameter, oracle can use more OS memory for PGA than what we set in PGA_AGGREGATE_TARGET when it is required, with 11g AMM does this work same way? Can oracle allocate more than MEMORY_MAX_TARGET for PGA when it is required?

    Thanks

  46. Rafi says:

    Good post.With very nice explanation…

    Best regards,

    Rafi.

  47. Nabajyoti Dutta says:

    Hi Tanel,

    Great post, thanks.
    I am dealing with many databases being upgraded to 11g from 10g. (all 3 node rac)
    One of them actually has a 100g sga per instance. In 10gR2 we didn't have amm and hugepages were on; in 10gR2 I used to see one line in ipcs -m for each of my instances, showing the sga size.

    in 11gR2 I am not using the AMM and keeping both memory_target and memory_max_target 0.
    Hugepages are still turned on (OEL 5). The alert log mentions Large Pages are being used (a good new alert log feature of 11g). But ipcs -m actually shows 3 rows, one being the exact sga size, while the other two are of sizes 402653184 and 2097152. Also, despite the fact that we are not using AMM, /dev/shm has files from the unix user which owns the instance, and the sizes of the files amount to 500M (the sga in this case is 60g).

    is it normal with 11g now to have 2 additional rows in ipcs -m and files in /dev/shm even though AMM is not used?
