60000 bind variables?! Maybe it’s time to use a temporary table instead…

I just noticed bug 8277300 filed in Metalink with the following description:


Wow! That’s about 100 times more bind variables in a single query than I’ve seen in the past. And I thought that query was bad!!! :)

I suspect this is a massive IN list passed to a query. Maybe it’s time to use a temporary table or a collection for passing in the IN values instead?
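For illustration, here’s a minimal sketch of the temporary table approach. The table and column names (`inlist_gtt`, `orders`, `order_id`) are made up for this example, not taken from the bug:

```sql
-- Hypothetical global temporary table for passing IN-list values.
-- ON COMMIT DELETE ROWS resets the contents automatically at commit.
CREATE GLOBAL TEMPORARY TABLE inlist_gtt (
  id NUMBER
) ON COMMIT DELETE ROWS;

-- The application bulk-inserts the values once per use, then the
-- query joins to the GTT instead of binding tens of thousands of
-- variables into a giant IN list:
SELECT o.*
  FROM orders o
  JOIN inlist_gtt g ON g.id = o.order_id;
```

The SQL text stays short and stable, so it parses once and stays shareable in the library cache regardless of how many values you pass in.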

NB! I am running one more Advanced Oracle Troubleshooting training in 2018! You can attend the live online training and can download personal video recordings too. The Part 1 starts on 29th January 2018 - sign up here!

This entry was posted in Oracle. Bookmark the permalink.

17 Responses to 60000 bind variables?! Maybe it’s time to use a temporary table instead…

  1. It reminds me of my post about selecting 10,000 columns in a query :)


    I did not dare report this to Metalink …

  2. Noons says:

    or – heaven forbid an efficient solution! – perhaps a temp table to join with the statement’s one?

    Unreal, isn’t it?

  3. David Aldridge says:

    Whoever identified that bug ought to be on some watch list — they’re cloning dinosaurs for sure.

  4. Gints says:

    Hmm, it must be some mixture of many inlists, because a single IN list cannot contain more than 1000 fixed values. However, I’ve seen people complaining about that and saying that it isn’t so in SQL Server (I don’t know for sure and actually don’t care) :)

  5. John says:

    They should switch to lieterals

  6. John says:

    literals I mean.

  7. Tanel Poder says:

    Well, if it is a query with a big inlist (and a frequently executed one), then switching to literals would probably drive up parsing time and also cause trouble in the shared pool (a large statement requires a lot of shared pool memory).

    I would either switch to a temporary table or PL/SQL collection approach…

  8. Rick says:

    I suspect they were generating the query dynamically based on the number of values in the inlist. This would also result in a large number of different queries, many hard parses and problems with the shared pool.

    I’d probably use a temporary table.

  9. Havard says:

    Hm… 60,000 rows in a GTT. You’ve got to be careful with that; you REALLY don’t want disk swapping in a busy production environment.

  10. Tanel Poder says:

    GTT blocks are also cached in buffer cache, there will be no disk swapping.

    It’s always a tradeoff, if you don’t use a GTT or PL/SQL collection you will spend more time parsing and also flush out other objects from shared pool…

  11. joel garry says:

    My cynical speculation: Maybe someone was just getting tired of hp-ux itanium being the redheaded stepchild, wanted to force Oracle to look at otherwise difficult to reproduce shared pool fragmentation errors. And then escalated after “upgrade to” :-O

  12. Havard says:

    Tanel: I thought GTTs were flushed to the TEMP tablespace at some point?

    From AskTom: “Temporary data files (temp files) in Oracle are a special type of data file. Oracle will use temporary files to store the intermediate results of a large sort operation, hash operations, global temporary table data, or result set, when there is insufficient memory to hold it all in …”

    I’m no expert on this, but I once witnessed a severe performance degradation when GTTs grew for some users.

    Can you spare some time to educate me (us?) on the pros and cons of using a GTT as opposed to PL/SQL collections (SGA/PGA)? When will a GTT be flushed from the buffer cache (same as ordinary table data)?

    I would greatly appreciate your input :-)

  13. Tanel Poder says:

    Hi Havard,

    The GTT data blocks are kept in the buffer cache, as opposed to UGA memory.
    DBWR may flush these blocks to the corresponding temp segments in the temp tablespace if there’s a need.

    If the GTT data is reset fast enough (and there’s no buffer cache pressure) then the GTT blocks don’t get written to temp at all.

    So, if you need to pass such a big number of inlist values, you have the following tradeoffs:

    1) Code all bind variables into the SQL text. This will take time in the parse and bind phase. It will also need memory from the shared pool, possibly flushing out lots of other stuff.

    2) Use GTTs for passing in the bind values. The SQL text stays simple, but you have the overhead of inserting into the GTT. Also, you should reset the GTT contents after use if you don’t want DBWR to write them to temp.

    3) Use a PL/SQL collection for passing in the bind list. The SQL text is again simple, but the inlist values are kept in process-private memory. Nowadays (since 9i) processes can release private memory back to the OS, so it may not be that much of a problem.

    So, it’s a tradeoff. I would go with either PL/SQL collections or GTTs.
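    To illustrate option 3, here is a hedged sketch of the collection approach. The type and table names (`num_list`, `orders`, `order_id`) are hypothetical, not from the bug report; the point is that the whole list is bound as a single collection variable:

    ```sql
    -- A SQL-level collection type; must be created at schema level
    -- so it can be used in SQL (a PL/SQL-only type would not work here).
    CREATE OR REPLACE TYPE num_list AS TABLE OF NUMBER;
    /

    -- The application binds one collection variable :ids holding all
    -- values; TABLE() unnests it into a row source, so the SQL text
    -- stays identical no matter how many values are passed in:
    SELECT o.*
      FROM orders o
     WHERE o.order_id IN (SELECT column_value FROM TABLE(:ids));
    ```

    One design note: the optimizer cannot see the collection’s size at parse time, so for very large lists a `CARDINALITY` hint on the `TABLE()` expression can help it choose a sensible join method.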

  14. Hi Tanel,

    I’ve seen that kind of weirdo, though not that massive, in Hibernate-generated code on a large Java project. (Yes, it’s hard to dispel the myth that Oracle=JADS :-) ).

  15. Havard says:


    Thank you for the clarification, greatly appreciate it :-) Keep up the good work!

  16. Oraboy says:

    >>That’s about 100 times more bind variables in a single query than what I’ve seen in past. And I thought that query was bad!!! :)

    You may be interested in checking this query out..


  17. kiran says:

    COUNT(DECODE(range_wise, 1, range_wise, NULL)) AS "0-1000",
    COUNT(DECODE(range_wise, 3, range_wise, NULL)) AS "1001-3000",
    COUNT(DECODE(range_wise, 5, range_wise, NULL)) AS "3001-5000",
    COUNT(DECODE(range_wise, 7, range_wise, NULL)) AS "5001-7000",
    COUNT(DECODE(range_wise, 9, range_wise, NULL)) AS "7001-9000",
    COUNT(DECODE(range_wise, 10, range_wise, NULL)) AS "9000 and above",
    COUNT(DECODE(range_wise, 11, range_wise, 0)) AS "total"
    FROM (
    SELECT tyre_spec_desc, INITCAP(ven_desc) ven_desc, subasm_sl_num, cumm_hours, range_wise FROM
    (SELECT subasm_sl_num, SUM(DECODE(rmvl_flag, 'Y', (hmr_off - hmr_on), (b.hmr - hmr_on))) cumm_hours FROM occ_subasm_hstry_trn a,
    occ_eqpt_hmr_trn b, occ_subasm_mst c, occ_tyre_cls_mst d, ocj_ven_mst e, occ_tyre_spec_mst f
    WHERE a.subasm_id = c.subasm_id AND a.eqpt_id = b.eqpt_id AND c.subasm_name_id = 52 AND a.subasm_name_id = c.subasm_name_id
    AND c.subasm_mdl_id = d.cls_id AND (ftmt_date '07-oct-2009' OR rmvl_date IS NULL)
    AND b.trn_date = (SELECT MAX(trn_date) FROM occ_eqpt_hmr_trn WHERE eqpt_id = a.eqpt_id AND trn_date=9001]=10))
