There was a discussion about whether Oracle really allocates all memory for the SGA immediately on instance startup or not, and further, whether Oracle allocates memory beyond SGA_TARGET if SGA_MAX_SIZE is larger than it.
It’s worth reading this thread first: http://forums.oracle.com/forums/thread.jspa?threadID=535400&tstart=0
I will paste an edited version of my reply here as well:
Don’t confuse address space set-up with allocating physical memory pages from RAM!
Even if ipcs -m shows x GB as the SGA shm segment length, it doesn’t mean this memory has actually been initialized and taken from RAM.
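This is easy to demonstrate on Linux with a sketch like the one below (an assumption on my part: it uses an anonymous mmap rather than a real SysV shm segment, and the Linux-specific /proc/self/statm file, but the lazy-allocation behaviour is the same): reserving a large mapping barely moves the process's resident set size, because no pages have been touched yet.

```python
import mmap
import os

def rss_kib():
    # Resident set size of this process in KiB (Linux-specific: /proc/self/statm,
    # second field is resident page count)
    with open("/proc/self/statm") as f:
        return int(f.read().split()[1]) * os.sysconf("SC_PAGE_SIZE") // 1024

before = rss_kib()
# Reserve 256 MB of anonymous memory -- this only sets up address space,
# no physical pages are allocated until they are touched
m = mmap.mmap(-1, 256 * 1024 * 1024)
after = rss_kib()
print(f"mapped 262144 KiB, but RSS grew by only ~{after - before} KiB")
m.close()
```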
Decent OSes initialize pageable memory pages only when they are first touched, so a shm segment showing 10 GB in ipcs -m output may really be only 10% “used”, as some pages have never been touched.

There are many things which affect when and whether the memory is actually *allocated*; the ones I remember right now are:
1) using Solaris ISM – means Oracle will be using non-pageable large pages – the shm segment size you see in ipcs is fully allocated from RAM and locked in RAM.
2) using Solaris DISM, the SGA shm segment is pageable (small pages in Solaris 8, large pages from Solaris 9) and may not necessarily be allocated from RAM
3) using lock_sga=true -> the SGA shm segment is allocated from RAM and locked in RAM
4) using _lock_sga_areas -> some ranges of pages in SGA shm segment are locked to memory, some pages of SGA shm segment may still be uninitialized
5) using _pre_page_sga=true -> all pages of SGA shm segment are touched on startup
6) a few others, like _db_cache_pre_warm, which affect memory page touching on startup…
7) using memory_target on Oracle 11g
So, there are *many* things which affect physical memory allocation, but generally, unless you’re using non-pageable pages, the full SGA size worth of memory is not allocated from the OS during instance startup.
Normally these artificial instance startup errors after setting sga_max_size to xxxGB come from hitting the max shm segment size or the max RAM + swap size (on Unixes). On Linux, on the other hand, you can overallocate memory, as Linux doesn’t reserve swap space for anonymous memory mappings up front (Linux starts killing “random” processes instead when running out of memory. Nice, huh?)
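Whether Linux lets you overallocate like this is governed by the vm.overcommit_memory sysctl. A minimal, Linux-specific sketch to check your setting (the file path is standard, but this is my illustration, not something Oracle does):

```python
def overcommit_mode():
    # Linux-only: 0 = heuristic overcommit (default), 1 = always overcommit,
    # 2 = strict accounting (overallocation refused)
    try:
        with open("/proc/sys/vm/overcommit_memory") as f:
            return int(f.read().strip())
    except FileNotFoundError:
        return None  # not Linux

mode = overcommit_mode()
print(f"vm.overcommit_memory = {mode}")
```

With the default mode 0, an SGA-sized mapping well beyond RAM + swap can still succeed at startup, with the OOM killer waiting at the end of that road.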
This means that if your SGA_TARGET is lower than SGA_MAX_SIZE during startup, then the pages “above” SGA_TARGET will never be touched, and thus never allocated!
And if you ramp down SGA_TARGET during your instance’s lifetime, then the pages “above” the new SGA_TARGET won’t be touched anymore (after MMAN completes the downsizing), which means these pages can be paged out of physical memory if there’s a shortage of free physical memory.
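The SGA_TARGET-below-SGA_MAX_SIZE case can be mimicked with the same anonymous-mmap trick (again a Linux-specific sketch of my own, not Oracle code): reserve the whole “SGA_MAX_SIZE” range up front, touch only the pages “below the target”, and watch RSS grow by roughly the touched amount only.

```python
import mmap
import os

PAGE = mmap.PAGESIZE

def rss_kib():
    # Resident set size of this process in KiB (Linux-specific: /proc/self/statm)
    with open("/proc/self/statm") as f:
        return int(f.read().split()[1]) * os.sysconf("SC_PAGE_SIZE") // 1024

MAX_SIZE = 256 * 1024 * 1024   # stand-in for SGA_MAX_SIZE
TARGET = 64 * 1024 * 1024      # stand-in for SGA_TARGET

m = mmap.mmap(-1, MAX_SIZE)    # whole "SGA" address range reserved up front
base = rss_kib()
for off in range(0, TARGET, PAGE):  # touch only the pages "below" the target
    m[off] = 1
grown = rss_kib() - base
print(f"touched {TARGET // 1024} KiB of a {MAX_SIZE // 1024} KiB mapping, RSS grew by ~{grown} KiB")
m.close()
```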
Note that this “lazy” allocation behaviour comes from how modern operating systems work, it’s not a feature of Oracle. Oracle just has an option to request some specific behaviour from OS on some platforms (like requesting ISM using SHARE_MMU flag on Solaris when setting up the SGA SHM segment).
Thanks to this heavy “virtualization” of virtual memory pages and the short-codepath requirements for VM handling, it’s often hard to get a complete and accurate picture of the physical memory usage of individual processes and SHM segments.
NB! If you want to move to the "New World" - and benefit from the awesomeness of Hadoop, without having to re-engineer your existing applications - check out Gluent, my new startup that will make history! ;-)