Tanel Poder's blog: IT & Mobile for Geeks and Pros
Oracle, Exadata, Linux, Performance, Troubleshooting - Mobile Life and Productivity.
Note that there’s a minor correction needed on slide 13: our throughput wasn’t really limited by the Oracle *SDU* size, but by the TCP send buffer size, which was 48 kB on the source Solaris machine. The bottleneck reason stays effectively the same though. I’ll blog about the difference very soon :)
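For readers wondering why a small send buffer caps throughput: a single TCP connection can only have roughly one send buffer's worth of unacknowledged data in flight per round trip. A minimal sketch (the round-trip time below is a made-up example, not a measurement from the migration):

```python
# Rough upper bound on single-stream TCP throughput: at most one send
# buffer of unacknowledged data can be in flight per round trip, so
# throughput <= send_buffer_size / RTT. Numbers here are illustrative.

def max_throughput_mb_s(send_buffer_bytes: float, rtt_seconds: float) -> float:
    """Ceiling on one connection's throughput, in MB/s."""
    return send_buffer_bytes / rtt_seconds / 1e6

# A 48 kB send buffer over a 1 ms round trip caps out around 49 MB/s:
print(max_throughput_mb_s(48 * 1024, 0.001))
```

The real limit also depends on the receiver's window and the actual round-trip time, but the buffer-size/RTT ratio is the first-order effect.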
Nice presentations – looks like handmade partitioning & parallelisation is still not out of date, just like in the good old times ( http://download.oracle.com/docs/cd/A57673_01/DOC/server/doc/A48506/partview.htm is where it started )
Great presentation. I especially appreciated your comments on indexing on the Exadata. It was a very succinct, logical approach. Great Job, and thanks for sharing all this critical information.
Nice slides, Tanel. I’d like to point out something on slide 25. The point is made that Exadata is meant to “find” data. Actually, it is meant to eliminate data and eliminate I/O. Accelerating the finding of data is the work of traditional indexes. Exadata smart scan aims to reduce payload through filtration and storage indexes, neither of which are actually “finding” data… just a nit.. if you’ll allow.
Yeah, we are talking about the same thing with different words… And if we didn’t I’d believe you more than myself anyway here ;-)
Instead of “finding” I should rather say “getting”, which should illustrate that once you’ve gotten the data (to the database layer) then all the further actions (sorting, actual join comparisons, function calls in select list etc) don’t benefit from the cells anymore. Even the pushed bloom filter is still an optimization of how to “get” less data from the inner input rowsource of the join…
In other words, I can say (very broadly) that there are 2 major stages in query execution:
1) Getting the right data
2) Doing something with the data (which may further eliminate/collapse rows)
Cell smart scans speed up stage 1 only…
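The two stages can be sketched with a toy (non-Oracle) example; the data and names below are hypothetical, just to illustrate which part offloading helps:

```python
# Toy illustration of the two query-execution stages (hypothetical data,
# not Oracle code).

rows = [
    {"id": 1, "region": "EU", "amount": 120},
    {"id": 2, "region": "US", "amount": 80},
    {"id": 3, "region": "EU", "amount": 200},
]

# Stage 1: get the right data. This filtering/projection is the part a
# smart scan can offload to the storage cells, shrinking the payload
# sent up to the database layer.
eu_rows = [r for r in rows if r["region"] == "EU"]

# Stage 2: do something with the data (sorting, joins, aggregation).
# This runs in the database layer; the cells no longer help here.
total = sum(r["amount"] for r in eu_rows)
print(total)
```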
By the way – notice the word in bold – ANTIPATTERNS on slides 21/22. So whatever you see on these two slides is, as the word “antipatterns” says, a warning about the *wrong* way of doing things, not my recommendation for how to do things!
Thank you for sharing these experiences about migrating to Exadata! Very helpful & instructive for me.
Nice discussion, but with a complex presentation/subject it would be helpful for readers to also have access to the audio recording.
What would be your view on using Exadata for very write-intensive (>15 MB/sec redo) applications? My understanding is that the hardware flash isn’t useful for this and that the underlying disk doesn’t have any write cache. Please correct me if I’m wrong…
In addition to the Flash Cache (which is not a write cache), each storage cell has 512 MB of battery-backed write cache in its disk controllers, so this helps a bit; it’s small in today’s terms, but better than no cache.
Note that as ASM stripes the redo logs across multiple cells, you have multiple cells’ caches to use. This may not be enough though, and may result in long log file parallel write waits (which cause long log file syncs too).
So yeah, while the flash cache makes single block reads much faster (still way slower than logical reads from the buffer cache, though) and leaves the disks less busy, the synchronous writes required for commit may still suffer. How much you suffer depends on the other workload on the server, i.e. how busy the disks are…
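To put the commit-latency point in rough numbers (my own back-of-envelope sketch, with made-up latencies): a session committing synchronously can finish at most one commit per log file sync round trip.

```python
# Back-of-envelope: a single session doing synchronous commits completes
# at most 1 / log_file_sync_latency commits per second, no matter how
# fast the flash cache serves reads. The latencies below are examples.

def max_commits_per_sec(log_file_sync_seconds: float) -> float:
    """Single-session commit-rate ceiling for a given sync latency."""
    return 1.0 / log_file_sync_seconds

print(max_commits_per_sec(0.002))  # ~500 commits/s at a 2 ms sync latency
print(max_commits_per_sec(0.020))  # ~50 commits/s if writes slow to 20 ms
```

Batching commits or running more concurrent sessions raises the aggregate rate, but each individual session still waits out the full redo write.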
This is a case where you could use the flash memory as a (mirrored) ASM disk instead and put your redologs there…
Btw, why isn’t the flash cache a write cache? Well, it’s not mirrored (otherwise you’d get 2.6 TB per full rack instead of 5.3 TB) and that’s why write IOs have to go to disk before the write completion can be acknowledged…
Great slides! Could you please expound on the comment made on the slide regarding OLTP and pizza boxes? I’m currently evaluating Exadata for a client for a high-volume OLTP 2-node RAC database; hope I’m not on the wrong path :) Do you have any experience with the Exadata profiler component of SQL Performance Analyzer? Is it accurate in profiling SQL Tuning Sets compared to the real world?
Thanks again, this information is very useful.
In slide 9 you had mentioned a multi-MB CU – would you be referring to the size of a CU in compressed format as multi-MB, or to the deflated data from the CU as multi-MB?
Also, would it be fair to state that for OLTP, Exadata is just a RAC cluster with flash? Exadata has an excellent interconnect (InfiniBand with RDS/IP), which is kind of hard to find/beat in most commercial RAC deployments.
What was the size of the resulting database?
I read your blog “Performance Stories from Exadata Migrations” on Exadata performance; it’s really awesome and very helpful.
We are using an Oracle Exadata V2 quarter rack on the Linux platform. In this article I saw a performance graph on page 10 for compute nodes and cell nodes.
Could you please let me know how to prepare such graphs and reports? I would also like to prepare such reports and graphs for our servers.
I really appreciate your kind assistance.
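I can’t speak for how the graphs in the slides were produced, but a minimal sketch of the data-shaping step looks like this (node names and utilization samples below are made up; a real version would read your OS metrics and feed them to a plotting tool):

```python
# Minimal sketch: summarize per-node CPU utilization samples into a
# simple text chart. All node names and numbers are made up.

samples = {
    "dbnode1": [35, 60, 80, 55],
    "cell01":  [20, 45, 90, 70],
}

def bar(pct: float, width: int = 20) -> str:
    """Render a utilization percentage as a fixed-width text bar."""
    filled = int(pct / 100 * width)
    return "#" * filled + "." * (width - filled)

for node, vals in samples.items():
    avg = sum(vals) / len(vals)
    print(f"{node:8s} avg {avg:5.1f}% |{bar(avg)}|")
```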
I am having a problem on Exadata v1 with Linux 64-bit on HP hardware, a 10-node RAC database. The issue is parallel queries going very slow. I saw the same sort of thing happen with two other clients. An in-house Oracle technician says that this database release was short-lived and the only reason it is still around is Exadata v1 and the issues it is having.
Is there something that Oracle does not publish that makes this version notorious for query slowdowns on Linux RAC databases? Do you know anything about the version, and why many clients have the same parallel query slowdown issues, and have to flush the shared pool on every node nightly to alleviate the slowdown?
Is it a bug? Does 11gR2 resolve this?
Please help if you can.
Very nice and informative presentation.
Well, I have not worked on Oracle Exadata, but gathering what’s available on the net, I am trying to compare it with NoSQL databases like Cassandra, HBase, CouchDB, Mongo etc. It would be great if you could throw some light on the following questions:
1. What is the write latency? For example, if I have a billion records per second coming in, how long does it take to load them into OLTP?
2. What would be the read latency if simultaneous writes are also happening? Consider the worst-case scenario at full peak load.
Thanks and Regards,
Just wondering what you think of the new flash log cache for write-intensive applications. Since the tail end of the redo logs is stored on flash, will it provide a SIGNIFICANT (sorry, I want to highlight this word and am not sure how to) boost to write-intensive workloads?
Also, most materials I have seen on Exadata talk of extraordinary IOPS, but those numbers are for cell disks and none of them write about the IOPS available to compute nodes. Just wondering what IOPS values for write-intensive workloads people have encountered in the real world.
[...] This post was mentioned on Twitter by Surachart Opun. Surachart Opun said: RT @TanelPoder: I just published my Performance Stories from #Exadata Migrations slides, based on real life migrations: http://bit.ly/h6mzeX [...]
[...] I’ve previously published an article about Troubleshooting Exadata Smart Scan performance and some slides from my experience with VLDB Data Warehouse migrations to Exadata. [...]