Performance Stories from Exadata Migrations

Here are my UKOUG 2010 slides about Exadata migration performance. This is real-life stuff, not a repeat of the marketing material:

17 Responses to Performance Stories from Exadata Migrations

  1. Tanel Poder says:

    Note that there’s a minor correction needed on slide 13: our throughput wasn’t really limited by the Oracle *SDU* size but by the TCP send buffer size, which was 48 kB on the source Solaris machine. The effective reason for the bottleneck stays the same, though. I’ll blog about the difference very soon :)
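
    A quick back-of-the-envelope sketch of why a small send buffer caps throughput: with roughly one send buffer’s worth of data in flight per round trip, throughput is bounded by the buffer size divided by the round-trip time. Only the 48 kB figure comes from the comment above; the round-trip times below are assumptions used purely for illustration.

        # Bandwidth-delay-product sketch (Python). Only the 48 kB send buffer
        # is taken from the comment above; the RTT values are assumptions.
        send_buffer_bytes = 48 * 1024

        for rtt_ms in (0.5, 2.0, 10.0):
            max_bytes_per_sec = send_buffer_bytes / (rtt_ms / 1000.0)
            print(f"RTT {rtt_ms:4.1f} ms -> ~{max_bytes_per_sec / 1024 / 1024:5.1f} MB/s per connection")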

  2. Sokrates says:

    nice presentation – looks like hand-made partitioning & parallelisation is still not out of date, just like in the good old times ( http://download.oracle.com/docs/cd/A57673_01/DOC/server/doc/A48506/partview.htm where it all started )

  3. Bryan Grenn says:

    Great presentation. I especially appreciated your comments on indexing on the Exadata. It was a very succinct, logical approach. Great Job, and thanks for sharing all this critical information.

  4. Kevin Closson says:

    Nice slides, Tanel. I’d like to point out something on slide 25. The point is made that Exadata is meant to “find” data. Actually, it is meant to eliminate data and eliminate I/O. Accelerating the finding of data is the work of traditional indexes. Exadata smart scan aims to reduce payload through filtration and storage indexes, neither of which are actually “finding” data… just a nit, if you’ll allow.

  5. Tanel Poder says:

    @Kevin Closson
    Yeah, we are talking about the same thing with different words… And if we didn’t I’d believe you more than myself anyway here ;-)

    Instead of “finding” I should rather say “getting”, which should illustrate that once you’ve gotten the data (to the database layer) then all the further actions (sorting, actual join comparisons, function calls in select list etc) don’t benefit from the cells anymore. Even the pushed bloom filter is still an optimization of how to “get” less data from the inner input rowsource of the join…

    In other words, I can say (very broadly) that there are 2 major stages in query execution:

    1) Getting the right data
    2) Doing something with the data (which may further eliminate/collapse rows)

    Cell smart scans speed up stage 1 only…
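
    To make the consequence concrete, here is a small sketch with made-up timings: even if the cells made stage 1 arbitrarily fast, the overall speedup would still be capped by the share of runtime that stage 2 takes (an Amdahl’s-law-style bound).

        # Toy illustration (Python) with assumed timings: smart scans accelerate
        # only stage 1 ("getting the right data"), so stage 2 bounds the total.
        stage1_s = 80.0   # assumed: time spent getting the data (offloadable)
        stage2_s = 20.0   # assumed: sorts, joins, SELECT-list functions on DB nodes

        for cell_speedup in (2, 10, 100):
            total_s = stage1_s / cell_speedup + stage2_s
            overall = (stage1_s + stage2_s) / total_s
            print(f"stage 1 {cell_speedup:3d}x faster -> query {total_s:5.1f}s, overall {overall:3.1f}x")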

  6. Tanel Poder says:

    By the way – notice the word in bold – ANTIPATTERNS on slides 21/22. So whatever you see on these two slides is, as the word “antipatterns” says, a warning about the *wrong* way of doing things, not my recommendation of how to do things!

  7. Uwe Hesse says:

    Hi Tanel,
    thank you for sharing these experiences about migrating to Exadata! Very helpful & instructive for me.

  8. nice discussion, but for a complex presentation/subject it would be helpful for the reader to also have access to the audio recording
    regards

  9. David Rolfe says:

    What would be your view on using Exadata for very write-intensive (>15 MB/s redo) applications? My understanding is that the hardware flash isn’t useful for this and that the underlying disk doesn’t have any write cache. Please correct me if I’m wrong…

  10. Tanel Poder says:

    @David Rolfe
    Hi David,

    In addition to the Flash Cache (which is not a write cache), each storage cell has 512 MB of battery-backed write cache in its disk controllers, so this helps a bit; it’s small in today’s terms, but better than no cache.

    Note that as ASM stripes the redo logs across multiple cells, you have multiple cells’ caches to use. This may not be enough though, and may result in long log file parallel write waits (which result in long log file syncs too).

    So yeah, while the flash cache allows you to do single block reads much faster (still way slower than doing logical reads from the buffer cache though) and leaves the disks less busy, the synchronous writes required for commits may still suffer. How much you suffer depends on the other workload on the server – how busy the disks are…

    This is a case where you could use the flash memory as a (mirrored) ASM disk instead and put your redo logs there…

    Btw, why isn’t the flash cache a write cache? Well, it’s not mirrored (otherwise you’d get 2.6 TB per full rack instead of 5.3 TB), and that’s why the write I/Os have to go to disk before the write can be acknowledged as complete…
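
    A minimal sketch of the commit-latency point made above, using assumed redo write latencies (neither figure is a measurement): a session committing synchronously can only commit about as fast as its redo writes complete, so whether the controller’s battery-backed cache absorbs the write or the write queues behind busy disks changes the picture considerably.

        # Rough commit-rate sketch (Python). Both latencies are assumptions used
        # purely to illustrate how commit rate tracks redo write latency.
        redo_write_latency_ms = {
            "controller write cache absorbs the write": 0.5,   # assumed
            "write queues behind busy disks":           8.0,   # assumed
        }

        for case, latency_ms in redo_write_latency_ms.items():
            commits_per_sec = 1000.0 / latency_ms
            print(f"{case:42s}: ~{commits_per_sec:6.0f} serialized commits/s per session")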

  11. Travis Mitchell says:

    Tanel,
    Great slides! Could you please expound on the comment made on the slide regarding OLTP and pizza boxes? I’m currently evaluating Exadata for a client for a high-volume OLTP 2-node RAC database; I hope I’m not on the wrong path :) Do you have any experience with the Exadata profiler component of SQL Performance Analyzer? Is it accurate in profiling SQL Tuning Sets compared to real-world behaviour?
    Thanks again, this information is very useful.
    Thanks,
    Travis

  12. Krishna says:

    Hi Tanel

    In slide 9 you mentioned a multi-MB CU – are you referring to the size of a CU in its compressed format as multi-MB, or to the data from the CU once decompressed as multi-MB?

    Also, would it be fair to state that for OLTP, Exadata is just a RAC cluster with flash? Exadata has an excellent interconnect (InfiniBand with RDS/IP), which is kind of hard to find/beat in most commercial RAC deployments.

    Thanks
    Krishna

  13. rrasanen says:

    What was the size of the resulting database?

  14. saurabh says:

    Dear Tanel,

    I read your blog post “Performance Stories from Exadata Migrations” on Exadata performance; it’s really awesome and very helpful.

    We are using an Oracle Exadata V2 Quarter Rack on the Linux platform. In this article I saw a performance graph on page 10 for the compute nodes and cell nodes.

    Could you please let me know how to prepare such graphs and reports? I would also like to prepare similar reports and graphs for our servers.

    I really appreciate your kind assistance.

    Regards.

  15. Rick Lyon says:

    I am having a problem on 11.1.0.7 and Exadata V1 on 64-bit Linux on HP hardware.
    It’s a 10-node RAC database, and the issue is parallel queries running very slowly. I saw the same sort of thing happen with two other clients. An in-house Oracle technician says that 11.1.0.7 was short-lived and the only reason it is still around is Exadata V1 and the issues it is having.
    Is there something that Oracle does not publish that makes 11.1.0.7 notorious for query slowdowns on Linux RAC databases? Do you know anything about this version and why many clients have the same parallel query slowdown issues? And why they have to flush the shared pool on every node nightly to alleviate the slowdown?
    Bug? Does 11gR2 resolve this?
    Please help if you can

  16. Som Shekhar says:

    Hi,
    Very nice and informative presentation.
    Well, I have not worked on Oracle Exadata, but I am gathering what is available on the net and trying to compare it with NoSQL databases like Cassandra, HBase, CouchDB, Mongo, etc. It would be great if you could throw some light on the following questions:

    1. What is the write latency? For example, if I have a billion records per second coming in, how much time does it take to load them into the OLTP database?
    2. What would be the read latency if simultaneous writes are also happening? Consider the worst-case scenario and full peak load.

    Thanks and Regards,
    Som Shekhar

  17. hrishy says:

    @Tanel Poder
    Hi Tanel

    Just wondering what you think of the new flash log cache for write-intensive applications. Since the tail end of the redo logs is stored on flash, will it provide a SIGNIFICANT (sorry, I want to highlight this word and am not sure how to) boost to write-intensive workloads?

    Also, most materials I have seen on Exadata talk of extraordinary IOPS, but those numbers are for the cell disks and none of them write about the IOPS available to the compute nodes. Just wondering what IOPS values people have encountered in the real world for write-intensive workloads.
