Our take on the Oracle Database 12c In-Memory Option

Enkitec folks have been beta testing the Oracle Database 12c In-Memory Option over the past months, and recently the Oracle guys interviewed Kerry Osborne, Cary Millsap, and me to get our opinions. In short, this thing rocks!

We can’t talk much about the technical details before the option is officially out in July, but here’s the recorded interview that was published on the Oracle website as part of the In-Memory launch today:

Alternatively, go to the Oracle webpage:

Just scroll down to the Overview section that says: Video: Database Industry Experts Discuss Oracle Database In-Memory (11:10)

I might actually be even more excited about the In-Memory Option than I was about Exadata years ago. The In-Memory Option is not just a performance feature; it’s a simplifying feature too. So, now it’s OK to kill your performance problem with hardware, as long as you use it in a smart way :-)

NB! After a 1.5-year break, this year’s only Advanced Oracle Troubleshooting training class (updated with Oracle 12c content) takes place on 16-20 November & 14-18 December 2015, so sign up now if you plan to attend this year!


16 Responses to Our take on the Oracle Database 12c In-Memory Option

  1. Tanel, could you describe, please, if you know, how Oracle maintains data consistency for the column cache without locking and standard row cache examination?

  2. “I might actually even be more excited about the In-Memory Option than I was excited about Exadata years ago.”

    I’ll second that. Exadata rated “interesting”; In-Memory rates “impressive”.

    • Pavol Babel says:

      Exadata rated “interesting”. That’s why I personally do not like Tanel’s expression “reports running 50x faster on Exadata”. 50x compared to what? Some low-end disk array? Disk arrays with intelligent tiering have been around for a long time, and SSD/flash as well, at prices competitive with Exadata Storage Servers (the HW price is OK, but the Storage SW price is not: $10,000 per disk plus 22% maintenance is inappropriate).

      • Tanel Poder says:

        Pavol, note that the target audience for the Oracle video is different (everyone in IT) than my usual blog readers and hacking session attendees (experienced Oracle professionals).

        So while I did mention in the video that the comparison was against your old storage array with its old bottlenecks, I wasn’t commenting so much from a technical perspective as from actual customer results. Can you get better results with your new flash storage array than your old disk array? Sure. Have many of our customers seen awesome results when moving from their old system to Exadata? Absolutely! My comment was about the latter. Exadata enables customers to get better performance, not only thanks to the better technology, but also thanks to the mindset change. Try to get someone to drop most indexes and start massively full scanning on their non-Exadata storage array (with lots of FC HBAs installed). I have seen only 2 customers who have done this without Exadata.

        • Jeff Moss says:

          I have a terrible feeling of déjà vu in saying this, but my current client is either one of those two customers or a third one to add to the other two. The DW team wanted Exadata from the outset, but it wasn’t allowed for various reasons – they ended up with HP servers running Oracle RAC and tons of FC storage underneath – and it runs a 45TB+ DW very well. Would Exadata have been better? Possibly for some things, perhaps not for others… it’s all moot now.

          Oh, and indexes are few and far between on our DW, but we’d been taking the full scanning approach for years. :-)

          • Tanel Poder says:

            Yep, users with well-designed & optimized systems likely won’t see a 1000x performance improvement – as their systems were running very well already. But the average (old-school) DW system I see will easily get 1-2 orders of magnitude better performance from eliminating the IO subsystem bottlenecks. Maybe I’m biased, because people call me when they have performance problems :)

            Nevertheless, the Turkcell guys had their DW designed for full table scans and partition pruning and had the biggest server available… and still, after moving to Exadata, they got a 10x perf improvement on average (and consolidated from 11 racks of storage + servers to 1 rack of Exadata :)). Ferhat Sengonul from Turkcell has published more details about their Exadata journey ( http://ferhatsengonul.files.wordpress.com/2011/10/turkcells-exadata-journey-oow2011_pdf.pdf ).

  3. Pavol Babel says:

    I have to say the In-Memory option could be an impressive feature (and the license for it will be impressive as well :) ). It could stop the migration of some workloads from Oracle Database to Elasticsearch and other NoSQL in-memory DBs. However, I can hardly stop laughing when watching this video with Larry.

    • Tanel Poder says:

      You need to use the right tool for the right problem.

      Plenty of customers are using Oracle where they don’t need Oracle and would be better off with some MySQL or Postgres database (when looking only through the microscopic technical lens). Others go to NoSQL, fail, and are back on a solid RDBMS in no time (or blame that NoSQL vendor and move on to the next cool tech of the day).

  4. Pavol Babel says:

    Tanel, sure. As I said before, Oracle’s brand new In-Memory option is really impressive, since you do not have to move data out of your database, there’s no code change, etc. Just put the target segments in memory, drop some annoying indexes on your core tables, and move on..
    Regarding the next cool technology, you know, BIGDATA and NoSQL are like teenage sex… I have seen several poor implementations in NoSQL databases, where programmers tried to create an RDBMS inside one and blamed the NoSQL vendor.

    On the other hand, Larry’s response to SAP HANA (which is not NoSQL) was very, very poor. From his perspective, an in-memory relational database was not a feasible technology two years ago, and now it is very cool just because Oracle (finally) has an answer.

    • Tanel Poder says:

      I don’t really worry about what Larry (or anyone else) says… It’s whether the technology, as it is “today”, can physically help the customer to achieve their goals (and the customer gets it). Everything else is just noise.

  5. Jakub Wartak says:

    Hi Tanel, just out of curiosity,

    1) Can the “in-memory” option use cheap, commodity RAM on external OSes (separate from the DB hosts), over the network (Ethernet/IB)?
    2) Is it just another stage of the buffer cache, or something that could go away without losing data – e.g. like the InnoDB memcached plugin for MySQL? Or is it limited to bigger hosts running 512+ GB of RAM, which are certainly limited in DIMM slots and are not in the sweet spot when it comes to $/DIMM_GB?
    3) If it really is limited to the host running PMON/LGWR and stuff, what’s the benefit compared to:
    a) giving the memory to the SGA/PGA/result_cache?
    b) buying flash on PCIe and using the (free) Smart Flash Cache instead, for the same $ as this RAM + licensing?

    -J., curious as always :)

    • Tanel Poder says:

      Hi Jakub, I will need to wait until July when this thing is officially out :-)

      The main thing is that even if you buy more RAM or extend it in some other way (or extend your buffer cache to flash), the data format in memory is still going to be the old row-based format in standard Oracle blocks, and you would still use the CPUs the old way for processing the data. With the In-Memory Option, you would cache the desired tables/partitions/columns in memory in a new, columnar format, and it will be much more efficient to process this on the CPUs – first, you’ll be touching less memory (fewer cache lines), as you’d only touch the (compressed) columns you need, and then there’s also the whole SIMD processing thing. More information in July :-)
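      The “touching less memory” point can be sketched with a toy example – this is not Oracle code, just a hedged illustration of why a columnar layout plus vectorized (SIMD-style) kernels beats row-at-a-time scanning for a filtered aggregate; the column names and sizes are made up:

      ```python
      import numpy as np

      # Toy "table": 100k rows with three columns, stored once as
      # Python row tuples and once as separate column arrays.
      n = 100_000
      rng = np.random.default_rng(0)
      amount = rng.integers(0, 1000, n)
      region = rng.integers(0, 10, n)
      year = rng.integers(2000, 2015, n)
      rows = list(zip(amount, region, year))

      def row_scan(rows):
          # Row-oriented scan: every row tuple is visited, so all three
          # columns travel through the CPU caches even though the query
          # only needs two of them.
          total = 0
          for amt, reg, _yr in rows:
              if reg == 3:
                  total += amt
          return int(total)

      def column_scan(amount, region):
          # Columnar scan: only the two referenced columns are touched,
          # and NumPy's vectorized kernels process many values per
          # operation, roughly the way SIMD instructions do in hardware.
          return int(amount[region == 3].sum())

      assert row_scan(rows) == column_scan(amount, region)
      ```

      Same answer either way, but the columnar version reads a fraction of the data and runs orders of magnitude faster in Python – the same layout argument, at a much cruder scale, that applies to an in-memory column store.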

  6. Krish says:

    OK Tanel – Oracle has released it now, so more details please!
