List,
We have a 10.2 db with ASMM enabled. Here are some of the SGA components
currently:

COMPONENT              CURRENT_SIZE   MAX_SIZE OPER_COUNT
-------------------- -------------- ---------- ----------
shared pool          12,029,263,872          0       4289
large pool               16,777,216          0          0
java pool                33,554,432          0          2
streams pool                      0          0          0
DEFAULT buffer cache  9,210,691,584          0       4291
KEEP buffer cache       167,772,160          0          0
..
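A listing like this typically comes from V$SGA_DYNAMIC_COMPONENTS; a sketch of the query that would produce it (column selection and formatting assumed, not taken from the original post):

```sql
-- Sketch: list the dynamically managed SGA components, largest first.
SELECT component,
       current_size,
       max_size,
       oper_count
  FROM v$sga_dynamic_components
 ORDER BY current_size DESC;
```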

I am surprised to see that the shared pool is larger than the buffer cache. SGA
target and max size are 20G. I was thinking of the factors that could cause
high shared pool usage: not using binds, lots of SQL statements against many
different tables (this is a third-party app with tens of thousands of tables),
etc.

I was digging around and I found some statistics that look surprising:

NAME                                          VALUE
---------------------------------------- ----------
parse time cpu                              4340323
parse time elapsed                         13931671
parse count (total)                       200405706
parse count (hard)                         34470197
parse count (failures)                        12680

SQL> select startup_time from v$instance;

STARTUP_T
---------
30-AUG-11

In less than 30 days, it seems there have been more than a million hard parses
a day. That number looks too high to me. This is an OLTP financial
application. Any comments?
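As a rough check on that estimate (instance up since 30-AUG-11, i.e. about 27 days at the time of posting), the rate can be computed directly; a sketch:

```sql
-- 34,470,197 hard parses over roughly 27 days of uptime.
SELECT ROUND(34470197 / 27)               AS hard_parses_per_day,  -- about 1.28 million
       ROUND(34470197 / (27 * 86400), 1) AS hard_parses_per_sec   -- about 14.8
  FROM dual;
```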

Thanks.


  • David Fitzjarrell at Sep 26, 2011 at 11:39 pm
    10.2.??   A full release number would be very helpful.

    As you stated statements differing only in string literal values can cause such a high hard parse rate (a couple of vendors come to mind guilty of not using bind variables).  In such applications it's not hard to expect such behaviour.

    Questions that come to mind immediately:

    1) Which operating system?
    2) Is the hardware NUMA enabled?
    3) Is Oracle using NUMA?  A call to ipcs -m | grep oracle may show multiple shared segments with multiple entry points.  10.2.x was notorious for not using NUMA correctly, causing the hard parse rate to increase: sessions may switch memory segments, causing Oracle to hard parse statements found in the previously accessed NUMA segment but not in the currently used one.  There are ways to address this; MOS document 759565.1 describes the issues and has links to the various bugs affecting NUMA.

    These are merely speculation at this point.  More information would be greatly beneficial.
    David Fitzjarrell


  • Ram Raman at Sep 27, 2011 at 4:09 am
    Thanks a lot for your help David.
    - Aix 6.1 (not sure of the minor ver)
    - oracle-10204
    - I have to find the answers for the other two questions tomorrow.
    - psft fin app.
  • Ram Raman at Sep 27, 2011 at 12:56 am
    Resending without older messages.
    To David's questions:

    - Aix 6.1 (not sure of the minor ver)
    - oracle-10204
    - I have to find the answers for the other two questions tomorrow.
    - psft fin app.

    Thanks David.
  • Brent Day at Sep 27, 2011 at 2:47 am
    We have PS FSCM as well and had similar issues in 10.2.0.4. After moving to
    11.2.0.2, I made the following changes to help with this issue:
    1. cursor_sharing=force (not supported by PS, but helps).
    2. sga=26G
    3. shared_pool=3g
    4. turned off auto memory management and set other parameters for
    environment (buffer cache, large pool, etc) as appropriate.

    These may work well in 10.2.0.4, but test and use at your own risk.

    Prior to these changes we were seeing 6-8G consumed by the shared_pool.
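    One plausible reading of Brent's changes as ALTER SYSTEM commands (a sketch only; "sga=26G" is taken here to mean sga_max_size, "auto memory management off" as memory_target=0, and every value is site-specific):

    ```sql
    -- Sketch of the settings above (11.2 syntax assumed; values site-specific).
    ALTER SYSTEM SET cursor_sharing = FORCE SCOPE = BOTH;  -- not supported by PS
    ALTER SYSTEM SET memory_target = 0 SCOPE = SPFILE;     -- automatic memory mgmt off
    ALTER SYSTEM SET sga_max_size = 26G SCOPE = SPFILE;
    ALTER SYSTEM SET shared_pool_size = 3G SCOPE = SPFILE;
    -- buffer cache, large pool, etc. then set explicitly as appropriate
    ```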

    From my research it seems like the shared_pool is consumed because the app
    developers do not have bind variables turned on for the various PSAE steps.
    My understanding is that is set on each "step" of an App Engine process so
    it might be worth checking with developers. We found that changing the PSAE
    steps to turn on bind variables breaks some of the processes so proceed with
    caution.

    Again test any changes in your environment before pushing to production and
    these settings should be used at your own risk.

    Brent

  • Go-Faster Consultancy Ltd. at Sep 27, 2011 at 6:25 am
    I am not a fan of database-wide forced cursor sharing in PeopleSoft (I got
    burnt by this a while ago). However, I have enabled it for specific batch
    processes.

    The excessive parsing caused by all the literals can cause ASMM to allocate
    too much memory to the library cache at the expense of the buffer cache.

    If you have a large SGA then you sometimes see ASMM moving memory back and
    forth between library and buffer caches as load changes. Individual batch
    processes can do this. ASMM can cause performance problems across the
    database. I would suggest setting a minimum value for the buffer cache to
    limit ASMM's behaviour.
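    Under ASMM, an explicitly set db_cache_size acts as a floor rather than a fixed size, so the minimum described here can be set with something like the following (8G is only an illustrative value):

    ```sql
    -- With sga_target > 0, a non-zero db_cache_size is a minimum, not a fixed size.
    ALTER SYSTEM SET db_cache_size = 8G SCOPE = BOTH;  -- illustrative value
    ```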

    Forcing AE to generate bind variables (ReUseStatement) is a good thing, both
    for the database and the performance of the AE too.

    regards
    _________________________
    David Kurtz
    Go-Faster Consultancy Ltd.
    tel: +44 (0)7771 760660
    fax: +44 (0)7092 348865
    mailto:david.kurtz@go-faster.co.uk
    web: www.go-faster.co.uk
    Book: PeopleSoft for the Oracle DBA: http://www.psftdba.com
    DBA Blogs: PeopleSoft: http://blog.psftdba.com, Oracle:
    http://blog.go-faster.co.uk
    PeopleSoft DBA Forum: http://groups.yahoo.com/group/psftdba
  • Petr Novak at Sep 27, 2011 at 6:23 am
    Hallo Ram,

    You could look in V$SGASTAT to see how big the shared pool components are, or, if you use AWR, in DBA_HIST_SGASTAT to see how the shared pool components have grown over time.
    If you find a surprisingly big component, look in Metalink for a possible bug.
    For example, 10.2.0.4 has a memory-leak bug with frequently running snapshots, but I was not able to find that bug just now.
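    A sketch of the kind of AWR query Petr describes, tracking shared pool components across snapshots (the component names in the filter are just examples; AWR/Diagnostics Pack assumed):

    ```sql
    -- Growth of selected shared pool components over AWR snapshots.
    SELECT sn.snap_id,
           sn.end_interval_time,
           sg.name,
           sg.bytes
      FROM dba_hist_sgastat sg
      JOIN dba_hist_snapshot sn
        ON sn.snap_id = sg.snap_id
       AND sn.dbid = sg.dbid
       AND sn.instance_number = sg.instance_number
     WHERE sg.pool = 'shared pool'
       AND sg.name IN ('sql area', 'free memory')
     ORDER BY sn.snap_id, sg.bytes DESC;
    ```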


    Best Regards,
    Petr

  • Laimutis Nedzinskas at Sep 27, 2011 at 7:05 am
    more than a million hard parse
    a day. That number looks too much to me. This is an OLTP financial
    application. Any comments?

    Comment 1: that's about 11.5 hard parses a second. Not that bad.

    Comment 2: looking into counters is good, but looking into time spent is
    even better: first comes hard parsing time, then contention: latching on pool
    structures, etc. Especially at hard parsing peak times.

    Comment 3: developers will never learn: the SQL standard allows literal
    SQL. Formally, developers are within their rights.
    Now a decision must be taken: fix the code (which has lots of risks) or try
    to make the system hard parse more efficiently (if that is necessary).

    just my 2 cents,
    Laimis N

    ---------------------------------------------------------------------------------

    Please consider the environment before printing this e-mail



  • Ram Raman at Sep 30, 2011 at 3:32 pm
    Laimutis,
    660 hard parses a minute in an OLTP system is not bad? I thought it was a bad
    number. Out of curiosity, what kind of hard parsing have other listers
    seen in best/worst/normal cases? (I realize that this could depend on
    the application, design, the workload, etc.) Has anyone imposed size
    restrictions on the shared pool as Brent has suggested?

    I like David's (Kurtz) idea of setting a minimum value for buffer cache so
    it does not get stolen.

    Thanks.
  • Subodh Deshpande at Sep 30, 2011 at 7:38 pm
    Ram,
    I think there is a lot of scope for memory tuning if you feel 660 hard
    parses/minute is too much.
    To support your observation, collect stats at different intervals over a
    period, and then you can decide..
    If it is a menu-driven OLTP application, or even if you know what queries
    are going to be fired (i.e. users, sessions and workload are known), and
    you are still coming across this observation, then yes, memory tuning is
    required.

    You did not say what the hardware, OS, OS memory and Oracle memory
    allocation are..is it RAC..is there any unused memory at OS level..
    current users, sessions etc..to support your observation I suggest you
    collect some stats, and then you can take action like pinning certain
    objects into memory etc..

    Hard parses should be avoided as much as possible..

    thanks and take care..subodh


    --
    ==============================
    DO NOT FORGET TO SMILE TODAY
    ==============================


  • Laimutis Nedzinskas at Oct 1, 2011 at 11:33 am
    660 hard parses a minute in an OLTP is not bad?
    I'll be unpopular, but criticism is sometimes a necessity. Why? Because
    Oracle is not the only database in the world, and I dare say some manage
    to perform better in some areas.
    I've already said it: SQL allows literal SQL. The RDBMS's task is to handle
    it. End of this story for the time being.

    Now, is it bad or not? It depends on how much resource parsing takes on
    your database. Do an AWR report (an Enterprise license is required) or
    collect the stats manually, and look into the time spent on parsing and
    related issues.
    Out of curiousity, what kind of hard parsing do other listers have
    seen in best/worst/normal cases.
    I've seen a case where each insert of some ETL generated a new SQL. There
    was no way to handle it with cursor_sharing options: the guys (developers)
    decided to use a new alias for each insert: insert into table t001; insert
    into table t002 ...

    What can I tell you: there was a very definite limit on shared pool size we
    could not go below without risking shared pool errors, about 1.5 GB.
    Memory today is not that expensive. I think Oracle's way of managing the
    shared pool was great when memory was expensive (see the interview with Ted
    Hoff at http://www.bbc.co.uk/news/technology-13260039 ) and developers
    generally were more qualified, but nowadays it tends to fragment the pool.
    Good news: increasing the shared pool usually helps.

    But as for shared pool latching... At least in 10gR2 I've experienced a few
    cases of minute-long pool latching (after a schema-wide statistics import,
    for example).

    my 2 cents, Laimis N



  • Cary Millsap at Oct 1, 2011 at 11:26 pm
    *Parsing*, irrespective of whether it's "hard" or "soft", is not a database
    tuning topic. It is an application development topic.
    Cary Millsap
    Method R Corporation
    http://method-r.com

  • Laimutis Nedzinskas at Oct 3, 2011 at 8:07 am
    660 hard parses a minute in an OLTP is not bad?
    P.S. I doubt you can go to your management with that argument and tell
    them to rewrite (i.e. fix) the code based on it. Unless you are the
    management.
    But then you'll probably consider the implications of a code change and
    will say noooo, we have other problems to tackle. And you'll be right.
    Even apparently small fixes have a nasty habit of going very wrong.
    Especially those small and quick (i.e. dirty) fixes.
  • Marcus Mönnig at Sep 27, 2011 at 7:25 am
    Hello Ram!

    What is allocating the memory in the shared pool?

    SELECT *
    FROM V$SGASTAT
    WHERE pool='shared pool' ORDER BY bytes DESC;

    If allocations for "KGH: NO ACCESS" are very high, you are seeing an ASMM bug.

    If "sql area" is very high, check whether the same SQL is parsed again and
    again and thus lots of child cursors exist for the same SQL (another
    bug available in 10.2):


    SELECT sql_id,count(*) childcount
    FROM V_$SQL_SHARED_CURSOR
    group by sql_id
    order by childcount desc;

    HTH,
    Marcus
  • Ram Raman at Sep 28, 2011 at 12:07 am
    Thanks Marcus:
    POOL         NAME                                                    BYTES
    ------------ ---------------------------------------- --------------------
    shared pool  KGH: NO ACCESS                                  5,131,392,192
    shared pool  sql area                                        4,664,912,584
    shared pool  free memory                                     4,503,302,200
    shared pool  CCursor                                           913,006,592
    shared pool  PCursor                                           717,546,600
    shared pool  library cache                                     531,817,048
    shared pool  gcs resources                                     256,661,792
    shared pool  kglsim object batch                               151,151,952
    Not very many child cursors:

    SQL_ID CHILDCOUNT
    ------------- ----------
    5js8dhamyt3cw 6
    96g93hntrzjtr 6
    6wh0q2tfwvssz 6
    cqv52axncuvzv 6
    fuscam04hhdqr 5
    grwydz59pu6mc 5
    7ng34ruy5awxq 5
    260w39duxtz9u 5
    15zytu14qzw6p 5
    83taa7kaw59c1 5
    2q93zsrvbdw48 5
    011bkqb0xaxpp 5
    at3ryvuft2f40 5
    db78fxqxwxt7r 5
    4yvsj6a508pgf 4
    09vwa8mu0zfqa 4
    ...

    There are about 1300 cursors for which the child count is greater than 1.


  • Joel Patterson at Sep 27, 2011 at 11:46 am
    I was wondering if you can export (expdp) without a password (or wallet etc.), similar to other utilities such as 'sqlplus / as sysdba' or 'rman target /'.

    I have a password script and can create a centralized password file (unencrypted), but since I have been able to do almost everything else without a password, I would rather not implement it just because of this issue.

    I've googled (with mixed results -- saw an answer that did not work) and did not find an example in the manual.

    Best Regards,
  • Stefano Cislaghi at Sep 27, 2011 at 12:03 pm
    Hi Joel,
    have you tried using a wallet? Look on Tahiti for "Secure External
    Password Store" ... I've never used it with expdp but it should work.


    Stefano
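    For reference, the usual Secure External Password Store setup looks roughly
    like this (a sketch; the wallet path and the TNS alias "mydb" with user
    "scott" are placeholders, and mkstore ships with the Oracle client):

    ```shell
    # Create the wallet (prompts for a wallet password)
    mkstore -wrl /u01/app/oracle/wallet -create

    # Store credentials for a TNS alias
    mkstore -wrl /u01/app/oracle/wallet -createCredential mydb scott tiger

    # sqlnet.ora entries that make clients use the wallet:
    #   WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/wallet)))
    #   SQLNET.WALLET_OVERRIDE = TRUE

    # After that, no password on the command line:
    expdp /@mydb schemas=scott dumpfile=scott.dmp
    ```
    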
  • Joel Patterson at Sep 27, 2011 at 12:05 pm
    Thanks to boulayp, I have it.

    Sometimes you just forget what you have already used. (You need to use single quotes and to escape them on the command line.) Boulayp said \'/ as sysdba\'

    I did not get Remigiusz's example to work.

    Joel Patterson
    Database Administrator
    904 727-2546

  • Remigiusz Sokolowski at Sep 27, 2011 at 12:21 pm
    I run it on Linux with bash - maybe that is important - anyway, as below:

    [oracle@xxxxx expdp]$ expdp "userid='/ as sysdba'" dumpfile=test.dmp
    schemas=xxxx

    Export: Release 11.2.0.2.0 - Production on Tue Sep 27 14:15:38 2011

    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights
    reserved.

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0
    - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing
    options
    Starting "SYS"."SYS_EXPORT_SCHEMA_01": userid="/******** AS SYSDBA"
    dumpfile=test.dmp schemas=goff
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 576 KB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
    Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
    Processing object type SCHEMA_EXPORT/VIEW/VIEW
    Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "xxx"."CONFIG" 6.023 KB 10 rows
    . . exported "xxx"."DEPOSIT" 18.74 KB 56 rows
    . . exported "xxx"."EXCHANGE" 21.00 KB 94 rows
    . . exported "xxx"."HEAD_CASHIER" 6.429 KB 11 rows
    . . exported "xxx"."LANGUAGE_STRING" 13.08 KB 125 rows
    . . exported "xxx"."TILL" 8.195 KB 47 rows
    . . exported "xxx"."TILL_BALANCE" 9.546 KB 94 rows
    . . exported "xxx"."TILL_TRANSFER" 22.25 KB 239 rows
    . . exported "xxx"."WITHDRAWAL" 14.31 KB 21 rows
    Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
    /oracle/admin/cdw/dpdump/test.dmp
    Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:15:46



    --
    Pole nakazi

    ----------------------------------------------------------------------
    Remigiusz Sokolowski <remigiusz.sokolowski@nordea.com>
    pos : DBA at DIiUSI
    addr : Nordea Bank Polska SA, Luzycka 6A Street, 81-537 Gdynia, Poland
    phone : +48 58 667 17 43
    mobile: +48 602 42 42 77
    Nordea Bank Polska S.A. with its registered office in Gdynia, ul. Kielecka 2, 81-303 Gdynia,
    entered in the Register of Entrepreneurs of the National Court Register under number 0000021828,
    records kept by the District Court Gdansk-Polnoc in Gdansk,
    VIII Commercial Division of the National Court Register,
    share capital, fully paid up: PLN 277,493,500.00,
    NIP: 586-000-78-20, REGON: 190024711
    --
    http://www.freelists.org/webpage/oracle-l
  • Joel Patterson at Sep 27, 2011 at 1:50 pm
    Yes - not Linux, Solaris. But thanks.
    Joel Patterson
    Database Administrator
    904 727-2546

  • Remigiusz Sokolowski at Sep 27, 2011 at 12:04 pm

    On 27.09.2011 13:45, Joel.Patterson@crowley.com wrote:


    expdp "userid='/ as sysdba'" dumpfile=test.dmp schemas=test

    for wallet works as well



  • April Sims at Sep 27, 2011 at 1:24 pm
    Using sysdba for exports is NOT recommended since 10g. Below is a direct
    quote from MOS Note: *How to Connect AS SYSDBA when Using Export or Import
    [ID 277237.1]*
    "SYSDBA is used internally in the Oracle database and has specialized
    functions. Its behavior is not the same as for generalized users. For
    example, the SYS user cannot do a transaction level consistent read
    (read-only transaction). Queries by SYS will return changes made during the
    transaction even if SYS has set the transaction to be READ ONLY. Therefore
    export parameters like CONSISTENT, OBJECT_CONSISTENT, FLASHBACK_SCN, and
    FLASHBACK_TIME cannot be used.
    Starting with Oracle10g, the export shows a warning that the export is not
    consistent when the export is started with CONSISTENT=Y and connects to the
    database with the user SYS (or as SYSDBA):
    EXP-00105: parameter CONSISTENT is not supported for this user

    Note that Oracle automatically provides read consistency to a query so that
    all the data that the query sees comes from a single point in time
    (statement-level read consistency). For export this means that the export of
    table data is consistent. However, if a table contains nested tables, the
    outer table and each inner table are exported as separate transactions. And
    if a table is partitioned, each partition is exported as a separate
    transaction. If a nested table or a partitioned table was updated during the
    export, the data that is exported while connected as the SYS schema could be
    inconsistent.

    Typically, there is no need to invoke Export or Import as SYSDBA, except in
    the following situations:
    - at the request of Oracle technical support;
    - when exporting a transportable tablespace set with the old-style export
    utility (Oracle9i and Oracle8i);
    - when importing a transportable tablespace set with the old-style import
    utility (Oracle10g, Oracle9i, and Oracle8i)."

    On Tue, Sep 27, 2011 at 6:45 AM, wrote:


    I was wondering if you can export, (expdp), without a password, (or wallet
    etc.) similar to using other utilities such as 'Sqlplus / as sysdba', or
    'rman target /'?

    I have a password script and can create a centralized password file
    (unencrypted), but since I have been able to do most everything else without
    a password, I would rather not implement it just because of this issue.

    I've googled (with mixed results -- seen an answer that did not work). Did
    not find it (an example) in the manual.

    Best Regards,


    --
    http://www.freelists.org/webpage/oracle-l


    --
    April C. Sims
    IOUG SELECT Journal Executive Editor
    http://aprilcsims.wordpress.com
    Twitter, LinkedIn
    Oracle Database 11g – Underground Advice for Database Administrators
    <http://www.amazon.com/Oracle-Database-Underground-Advice-Administrators/dp/1849680000/ref=sr_1_1?ie=UTF8&s=books&qid72289339&sr=8-1#noop>
    https://www.packtpub.com/oracle-11g-database-implementations-guide/book
    OCP 8i, 9i, 10g, 11g DBA
    Southern Utah University
    aprilcsims@gmail.com

  • Joel Patterson at Sep 27, 2011 at 1:50 pm
    Drat.
    Now I am back where I started, no? About the only thing left (keeping it as simple as possible) is to create an externally authenticated user, no?

    Joel Patterson
    Database Administrator
    904 727-2546

  • D'Hooge Freek at Sep 27, 2011 at 2:07 pm
    Joel,

    You could also use a client-side wallet.
    Another option would be not to use expdp, but instead the dbms_datapump PL/SQL package.
    If you program your export with this package, you can use a scheduled job to start the export, and there is no need to log in.
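    A minimal sketch of that approach (the schema name, directory object and
    file names are placeholders; run it as a user with the appropriate export
    privileges, and schedule it with DBMS_SCHEDULER if desired):

    ```sql
    DECLARE
      h NUMBER;
    BEGIN
      -- Open a schema-mode export job
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

      -- Dump file and log file in an existing directory object
      DBMS_DATAPUMP.ADD_FILE(h, 'scott.dmp', 'DATA_PUMP_DIR');
      DBMS_DATAPUMP.ADD_FILE(h, 'scott.log', 'DATA_PUMP_DIR',
                             filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

      -- Restrict the job to one schema
      DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');

      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.DETACH(h);
    END;
    /
    ```
    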


    Regards,


    Freek D'Hooge
    Uptime
    Oracle Database Administrator
    email: freek.dhooge@uptime.be
    tel +32(0)3 451 23 82
    http://www.uptime.be
    disclaimer: www.uptime.be/disclaimer

  • Joel Patterson at Sep 27, 2011 at 3:21 pm
    A quick check seems to indicate that with the wallet... I might as well use my password architecture, because security-wise, if the directory can be accessed, the wallet can just be copied and used without any knowledge of the password. http://www.oracleforensics.com/wordpress/index.php/2010/04/11/oracle-wallet-auto-login-common-misconception-corrected/

    On the dbms_datapump side... some creativity would be needed to 'clean up' log and dmp files: somehow set the date as part of the file name, access the data pump directory, and remove old exports and log files. Data Pump does not write over a dump file of the same name. This approach does have the advantage of not using a password, but it is probably harder to make it as flexible as a korn shell script that can create directories if they don't exist, etc. One major benefit of the script I was writing was that it can export either all databases in oratab or just the databases supplied to it on the command line. With dbms_datapump, each database would have to be logged into to start/stop the job if not wanted... a little more work than editing a cron entry for a script that controls all the exports on that server...

    Just some more insight as to where I'm going and where I'm coming from -- just in case other comments come to mind.

    Best Regards,

    Joel Patterson
    Database Administrator
    904 727-2546

  • Sreejith s at Oct 11, 2011 at 8:11 am
    Hi,
    Came across this thread while searching through Oracle L archives.

    I have referred to this note, which clearly says one should not use SYS / SYSDBA
    for Data Pump export and import. However, neither this note nor the
    documentation clearly mentions which user can be used for a consistent
    export and import (perhaps I missed it?). Suppose I have to export
    multiple schemas using the "SCHEMAS" parameter in Data Pump export; I cannot take
    a schema-level export. In this case which user can I use? SYSTEM? Or any
    user with the IMP_FULL_DATABASE privilege?

    Also, I have noticed during past imports for a highly transactional
    database (more than 1.5 TB in size) that we usually get the following error,
    and the import terminates.

    *ORA-39014: One or more workers have prematurely exited.
    ORA-39029: worker 1 with process name "DW01" prematurely terminated
    ORA-31671: Worker process DW01 had an unhandled exception.
    ORA-12801: error signaled in parallel query server P000, instance
    clvxd02q:FLYVXS2 (2)
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    ORA-06512: at "SYS.KUPW$WORKER", line 1423
    ORA-06512: at line 2
    *
    On analysis, we found a duplicate value for a primary key /
    unique index on one of the tables we are importing. We are using
    *FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSTIMESTAMP,'DD-MM-YYYY
    HH24:MI:SS'), 'DD-MM-YYYY HH24:MI:SS')"*
    in our export parameter file. When we check the production database table,
    there is no duplicate. The workaround we used was to EXCLUDE indexes & constraints
    first, delete the duplicates, then import again with INCLUDE=index &
    constraint.

    I am just wondering if the use of FLASHBACK_TIME along with SYSDBA could be
    the reason for not getting a consistent export.

    Any help / suggestions on this ?


    Best Regards,
    Sreejith
  • Joel Patterson at Oct 11, 2011 at 11:51 am
    An interpretation of the reason one cannot use SYS (or anyone connected AS SYSDBA, which becomes essentially SYS) is that a consistent export may not be obtained that way. I do not know the reason behind that, but it certainly explains your issue with the duplicate records.
    Also, it appears that any 'other' user with the proper permissions is able to get a consistent export. So EXP_FULL_DATABASE, DBA, etc. will work for 'other' schemas, or indeed, depending on what you wish to export, any permissions that allow that particular type of export; anyone except SYS / SYSDBA will create a consistent export. Therefore, yes, SYSTEM will work, as SYSTEM has the privileges... or any other user you create and grant privileges to. Personally, I use an admin account of my own creation for most tasks, not SYS or SYSTEM.


    IMP_FULL_DATABASE is for importing (versus EXP_FULL_DATABASE). If you have a consistent export, you will have a consistent import.
    (Note: in older versions of 'exp' there was a 'consistent' parameter... :) which is no longer part of 'expdp'.)
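    Putting that together, a sketch of a consistent multi-schema export as a dedicated non-SYS user (the user name, password, schema names and file names are all placeholders):

    ```shell
    # Parameter file - FLASHBACK_TIME pins all tables to one point in time
    cat > export.par <<'EOF'
    SCHEMAS=APP1,APP2
    DIRECTORY=DATA_PUMP_DIR
    DUMPFILE=app_%U.dmp
    FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSTIMESTAMP,'DD-MM-YYYY HH24:MI:SS'),'DD-MM-YYYY HH24:MI:SS')"
    EOF

    # Run as a dedicated user granted EXP_FULL_DATABASE, not as SYS / SYSDBA
    expdp expuser/some_password parfile=export.par
    ```
    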

    Joel Patterson
    Database Administrator
    904 727-2546

    -----Original Message-----
    From: oracle-l-bounce@freelists.org On Behalf Of sreejith s
    Sent: Tuesday, October 11, 2011 4:10 AM
    To: aprilcsims@gmail.com
    Cc: ORACLE-L
    Subject: Re: expdp without password

    Hi,
    Came across this thread while searching through Oracle L archives.

    I have referred to this note, which clearly says one should not use SYS / SYSDBA
    for Data Pump export and import. However, neither this note nor the
    documentation mentions clearly which user can be used for a consistent
    export and import (perhaps I missed it?). Suppose I have to export
    multiple schemas using the "SCHEMAS" parameter in Data Pump export; I cannot take
    a schema-level export. In this case, which user can I use? SYSTEM? Or any
    user with the IMP_FULL_DATABASE privilege?

    Also, I have noticed during our past imports for a highly transactional
    database (more than 1.5 TB in size) that we usually get the following error,
    and the import terminates.

    *ORA-39014: One or more workers have prematurely exited.
    ORA-39029: worker 1 with process name "DW01" prematurely terminated
    ORA-31671: Worker process DW01 had an unhandled exception.
    ORA-12801: error signaled in parallel query server P000, instance
    clvxd02q:FLYVXS2 (2)
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    ORA-06512: at "SYS.KUPW$WORKER", line 1423
    ORA-06512: at line 2
    *
    On analysis, we found that there is a duplicate value for a primary key /
    unique index on one of the tables which we are importing. We are using
    *FLASHBACK_TIME="to_TIMESTAMP(to_char(systimestamp,'DD-MM-YYYY
    HH24:MI:SS'), 'DD-MM-YYYY HH24:MI:SS')"*
    in our export parameter file. When we check the production database table,
    there is no duplicate. The workaround we used was to EXCLUDE indexes & constraints
    first, delete the duplicates, then do the import with INCLUDE=index &
    constraint.

    I am just wondering if the use of FLASHBACK_TIME along with SYSDBA could be
    the reason for not getting a consistent export.

    Any help / suggestions on this ?


    Best Regards,
    Sreejith
    On Tue, Sep 27, 2011 at 6:52 PM, April Sims wrote:

    Using sysdba for exports is NOT recommended since 10g. Below is a direct
    quote from MOS Note: *How to Connect AS SYSDBA when Using Export or Import
    [ID 277237.1]*
    "SYSDBA is used internally in the Oracle database and has specialized
    functions. Its behavior is not the same as for generalized users. For
    example, the SYS user cannot do a transaction level consistent read
    (read-only transaction). Queries by SYS will return changes made during the
    transaction even if SYS has set the transaction to be READ ONLY. Therefore
    export parameters like CONSISTENT, OBJECT_CONSISTENT, FLASHBACK_SCN, and
    FLASHBACK_TIME cannot be used.
    Starting with Oracle10g, the export shows a warning that the export is not
    consistent when the export is started with CONSISTENT=Y and connects to the
    database with the user SYS (or as SYSDBA):
    EXP-00105: parameter CONSISTENT is not supported for this user

    Note that Oracle automatically provides read consistency to a query so that
    all the data that the query sees comes from a single point in time
    (statement-level read consistency). For export this means that the export
    of
    table data is consistent. However, if a table contains nested tables, the
    outer table and each inner table are exported as separate transactions. And
    if a table is partitioned, each partition is exported as a separate
    transaction. If a nested table or a partitioned table was updated during
    the
    export, the data that is exported while connected as the SYS schema could
    be
    inconsistent.

    Typically, there is no need to invoke Export or Import as SYSDBA, except in
    the following situations:
    - at the request of Oracle technical support;
    - when exporting a transportable tablespace set with the old-style export
    utility (Oracle9i and Oracle8i);
    - when importing a transportable tablespace set with the old-style import
    utility (Oracle10g, Oracle9i, and Oracle8i)."

    On Tue, Sep 27, 2011 at 6:45 AM, wrote:


    I was wondering if you can export, (expdp), without a password, (or wallet
    etc.) similar to using other utilities such as 'Sqlplus / as sysdba', or
    'rman target /'?

    I have a password script and can create a centralized password file
    (unencrypted), but since I have been able to do most everything else without
    a password, I would rather not implement it just because of this issue.

    I've googled (with mixed results -- seen an answer that did not work). Did
    not find it (an example) in the manual.

    Best Regards,


    --
    http://www.freelists.org/webpage/oracle-l


    --
    April C. Sims
    IOUG SELECT Journal Executive Editor
    http://aprilcsims.wordpress.com
    Twitter, LinkedIn
    Oracle Database 11g – Underground Advice for Database Administrators
    <
    http://www.amazon.com/Oracle-Database-Underground-Advice-Administrators/dp/1849680000/ref=sr_1_1?ie=UTF8&s=books&qid72289339&sr=8-1#noop>
    https://www.packtpub.com/oracle-11g-database-implementations-guide/book
    OCP 8i, 9i, 10g, 11g DBA
    Southern Utah University
    aprilcsims@gmail.com

    --
    http://www.freelists.org/webpage/oracle-l

  • Radoulov, Dimitre at Oct 11, 2011 at 1:40 pm
    On 11/10/2011 13:50, Joel.Patterson@... wrote:
    [...]

    IMP_FULL_DATABASE is for importing, (verses EXP_FULL_DATABASE). If you have a consistent export, you will have a consistent import.
    (note in older version of 'exp' there was a 'consistent' parameter... :) which is no longer part of 'expdp').


    It is available in the latest release in legacy mode :)


    CONSISTENT
    Data Pump Export determines the current time and uses FLASHBACK_TIME.



    bit.ly/qPGPfc



    Regards
    Dimitre
  • Joel Patterson at Oct 11, 2011 at 1:49 pm
    Filed under things that make you go hmmm.... OK, I stand corrected, but I submit that only the spirit of the parameter remains, while the actual original intention has left... there was no flashback back in the day... so really this is a new option/type of export now available that goes 'back in time' -- assuming you have flashback enabled.

    Joel Patterson
    Database Administrator
    904 727-2546
    -----Original Message-----
    From: oracle-l-bounce@freelists.org On Behalf Of Radoulov, Dimitre
    Sent: Tuesday, October 11, 2011 9:39 AM
    To: oracle-l@freelists.org
    Subject: Re: expdp without password

    On 11/10/2011 13:50, Joel.Patterson@... wrote:
    [...]

    IMP_FULL_DATABASE is for importing, (verses EXP_FULL_DATABASE). If you have a consistent export, you will have a consistent import.
    (note in older version of 'exp' there was a 'consistent' parameter... :) which is no longer part of 'expdp').


    It is available in the latest release in legacy mode :)


    CONSISTENT
    Data Pump Export determines the current time and uses FLASHBACK_TIME.



    bit.ly/qPGPfc



    Regards
    Dimitre



    --
    http://www.freelists.org/webpage/oracle-l


  • D'Hooge Freek at Sep 28, 2011 at 12:09 am
    Hi,

    I would say your "KGH: NO ACCESS" allocation is excessively large.
    This component refers to granules that are in transit (being reassigned from the shared pool to the buffer cache and vice versa).

    There are some known bugs related to this. Check the following MOS notes:

    How To Prevent The Growth Of The Component 'KGH: NO ACCESS' In The Shared Pool When ASMM Is Enabled [ID 451960.1]
    Common Cause for ORA-4031 in 10gR2, Excess "KGH: NO ACCESS" Memory Allocation [Video] [ID 801787.1]

    Also, you can check whether the SGA components are resizing frequently:

    set linesize 140
    set pages 9999

    column initial_mb format 9G999D99
    column target_mb format 9G999D99
    column final_mb format 9G999D99
    column component format a30

    select *
    from ( select to_char(start_time, 'DD/MM/YYYY HH24:MI:SS') start_time, to_char(end_time, 'DD/MM/YYYY HH24:MI:SS') end_time,
    component, oper_type, oper_mode, initial_size/1024/1024 initial_mb, target_size/1024/1024 target_mb,
    final_size/1024/1024 final_mb, status
    from v$sga_resize_ops a
    order by a.start_time desc
    )
    where rownum <= 40;


    regards,



    Freek D'Hooge
    Uptime
    Oracle Database Administrator
    email: freek.dhooge@uptime.be
    tel +32(0)3 451 23 82
    http://www.uptime.be
    disclaimer: www.uptime.be/disclaimer

    -----Original Message-----
    From: oracle-l-bounce@freelists.org On Behalf Of Ram Raman
    Sent: dinsdag 27 september 2011 23:55
    To: mm@marcusmoennig.de
    Cc: ORACLE-L
    Subject: Re: High shared pool usage

    Thanks Marcus:
    POOL NAME BYTES
    ------------ ---------------------------------------- --------------------
    shared pool KGH: NO ACCESS 5,131,392,192
    shared pool sql area 4,664,912,584
    shared pool free memory 4,503,302,200
    shared pool CCursor 913,006,592
    shared pool PCursor 717,546,600
    shared pool library cache 531,817,048
    shared pool gcs resources 256,661,792
    shared pool kglsim object batch 151,151,952
    Not very many child cursors:

    SQL_ID CHILDCOUNT
    ------------- ----------
    5js8dhamyt3cw 6
    96g93hntrzjtr 6
    6wh0q2tfwvssz 6
    cqv52axncuvzv 6
    fuscam04hhdqr 5
    grwydz59pu6mc 5
    7ng34ruy5awxq 5
    260w39duxtz9u 5
    15zytu14qzw6p 5
    83taa7kaw59c1 5
    2q93zsrvbdw48 5
    011bkqb0xaxpp 5
    at3ryvuft2f40 5
    db78fxqxwxt7r 5
    4yvsj6a508pgf 4
    09vwa8mu0zfqa 4
    ...

    There are about 1300 cursors for which the child count is greater than 1.
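
    For reference, a list like the one above can be produced with something along these lines (a sketch against v$sql; the HAVING threshold is arbitrary):

    select sql_id, count(*) childcount
    from v$sql
    group by sql_id
    having count(*) > 1
    order by childcount desc;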

    --
    http://www.freelists.org/webpage/oracle-l
  • Tanel Poder at Sep 27, 2011 at 10:30 pm
  • Tanel Poder at Sep 28, 2011 at 12:10 am
    (overquoted, resending)
    If you google for KGH NO ACCESS, you'll see that it's just how ASMM works :)

    http://blog.tanelpoder.com/2009/09/09/kgh-no-access-allocations-in-vsgastat-buffer-cache-within-shared-pool/

    It's the mixed granules where buffer cache is placed within shared pool
    (yes, I just did say that!).

    So, that 5 GB is buffer cache. Your shared pool has grown that big probably
    because at some point there was more shared pool loading activity than physical
    IO activity and the ASMM manager decided to increase the shared pool. Once
    more buffer cache was needed, it could not de-allocate complete granules
    anymore and just made them "composite granules".

    --
    Tanel Poder
    Expert Oracle Exadata book:
    http://www.apress.com/9781430233923

    On Wed, Sep 28, 2011 at 1:15 AM, D'Hooge Freek wrote:

    Hi,

    I would say your "KGH: NO ACCESS" is excessivily large.
    This component refers to granules that are in transit (being reassigned
    from the shared pool to the buffer cache and vice-versa)

    There are some bugs know to this. Check following MOS notes:

    How To Prevent The Growth Of The Component 'KGH: NO ACCESS' In The Shared
    Pool When ASMM Is Enabled [ID 451960.1]
    Common Cause for ORA-4031 in 10gR2, Excess "KGH: NO ACCESS" Memory
    Allocation [Video] [ID 801787.1]

    Also you can check if the sga components are frequently resizing:

    --
    http://www.freelists.org/webpage/oracle-l
  • Marcus Mönnig at Sep 28, 2011 at 5:53 am
    Hi Tanel and all!
    If you google for KGH NO ACCESS, you'll see that it's just how ASMM works :)
    True, but if you search in Metalink you'll see that this is not how
    ASMM should work. ;-)

    ML Note 801787.1 states that "...it is normal to periodically see
    "KGH: NO ACCESS" allocations up to about 64M". I personally set up
    Nagios checks for "KGH: NO ACCESS" sizes at 128 MB.
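
    The check itself is just a lookup in v$sgastat -- a sketch, with the 128 MB threshold expressed in bytes:

    select pool, name, round(bytes/1024/1024) mb
    from v$sgastat
    where name = 'KGH: NO ACCESS'
    and bytes > 134217728;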

    Oracle considers anything above 64M a bug, due to too frequent resize
    operations between the buffer cache and the shared pool. To verify
    this, they give this query:

    select START_TIME, component, oper_type, oper_mode,
    initial_size/1024/1024 "INITIAL", FINAL_SIZE/1024/1024 "FINAL",
    END_TIME
    from v$sga_resize_ops
    where component in ('DEFAULT buffer cache', 'shared pool') and status
    = 'COMPLETE'
    order by start_time, component;


    My personal feeling and belief is that the rising allocations in
    this pool come from dead, unusable memory in there.
    I have seen this on a 32-bit system where KGH NA was getting higher and
    higher over time and we ultimately ran into ORA-04031 errors. I also
    have a 64-bit machine at hand with SGA_TARGET set to 5GB, where the buffer
    cache is around 2 GB, KGH NA is at 1.5GB, and V$DB_CACHE_ADVICE
    states that the instance could use another 1 GB of memory. This is
    after three months of uptime, but V$DB_CACHE_ADVICE only asks for more
    memory after KGH NA gets higher and higher.

    For me, this is a memory leak, but I can't prove it. Unfortunately I
    lack the skills to dump the shared pool and inspect the bits one by
    one. :-)

    All this is on 10.2. While there are similar bugs listed in ML for
    11.*, I've never seen them personally.

    Cheers,
    Marcus
    http://blog.tanelpoder.com/2009/09/09/kgh-no-access-allocations-in-vsgastat-buffer-cache-within-shared-pool/
    It's the mixed granules where buffer cache is placed within shared pool
    (yes, I just did say that!).
    So, that 5 GB is buffer cache. Your shared pool has been big as probably
    during some time there was more shared pool loading activity than physical
    IO activity and the ASMM manager decided to increase the shared pool. Once
    more buffer cache was needed, it could not de-allocate complete granules
    anymore and just made them "composite granules"

    --
    Tanel Poder
    Expert Oracle Exadata book:
    http://www.apress.com/9781430233923
    On Wed, Sep 28, 2011 at 1:15 AM, D'Hooge Freek wrote:

    Hi,

    I would say your "KGH: NO ACCESS" is excessivily large.
    This component refers to granules that are in transit (being reassigned
    from the shared pool to the buffer cache and vice-versa)

    There are some bugs know to this. Check following MOS notes:

    How To Prevent The Growth Of The Component 'KGH: NO ACCESS' In The Shared
    Pool When ASMM Is Enabled [ID 451960.1]
    Common Cause for ORA-4031 in 10gR2, Excess "KGH: NO ACCESS" Memory
    Allocation [Video] [ID 801787.1]

    Also you can check if the sga components are frequently resizing:
    --
    http://www.freelists.org/webpage/oracle-l
  • Tanel Poder at Sep 30, 2011 at 4:54 am
    You can have large KGH NO ACCESS allocations also when you have a one-time
    spike in shared pool utilization (common in 11g with library cache related
    issues where lots of child cursors are loaded for one statement, or just with
    occasionally running hard parsing loops in application code). So, the shared
    pool usage might grow really high, and this is not really related to ASMM;
    it's more related to the high demand. If you thoroughly read the note you
    quoted, you'll actually see that the Oracle guys also say this: *"**The exception
    is when the database needs to make large changes i.e. when changing memory
    after a heavy load".*
    So, when you see KGH NO ACCESS in gigabytes, it does not automatically mean
    you're hitting these shared pool bugs (though that can be the case), but it
    may also mean that you just had abnormally high demand for shared pool memory
    due to bad application code or some other bug higher in the stack (like the
    library cache cursor sharing code).

    This memory in KGH NO ACCESS is not "dead, unusable" - it's used by the
    buffer cache. I have posted an example in my blog entry about how to measure it
    yourself (by mapping x$bh.ba with x$ksmsp.ksmchadr). The "composite granule"
    code probably isn't able to reclaim that memory back from buffer cache use
    to shared pool - or doesn't realize that it would be good to shrink the
    buffer cache...
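
    If you just want the size and chunk count rather than the full buffer mapping, a rough sketch is below -- but be warned that scanning x$ksmsp (run as SYS) takes the shared pool latch and can stall a busy instance, so treat it as a demo query only:

    select ksmchcom, count(*) chunks, round(sum(ksmchsiz)/1024/1024) mb
    from x$ksmsp
    where ksmchcom = 'KGH: NO ACCESS'
    group by ksmchcom;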

    --
    Tanel Poder
    http://blog.tanelpoder.com
    On Tue, Sep 27, 2011 at 10:52 PM, Marcus Mönnig wrote:

    Hi Tanel and all!
    If you google for KGH NO ACCESS, you'll see that it's just how ASMM works
    :)

    True, but if you search in Metalink you'll see that this is not how
    ASMM should work. ;-)

    ML Note 801787.1 states that "...it is normal to periodically see
    "KGH: NO ACCESS" allocations up to about 64M". I personally set up
    Nagios checks for "KGH: NO ACCESS" sizes at 128 MB.

    Oracle considers anything above 64M a bug, due to too frequent resize
    operations between the buffer cache and the shared pool. To verify
    this, they give this query:

    select START_TIME, component, oper_type, oper_mode,
    initial_size/1024/1024 "INITIAL", FINAL_SIZE/1024/1024 "FINAL",
    END_TIME
    from v$sga_resize_ops
    where component in ('DEFAULT buffer cache', 'shared pool') and status
    = 'COMPLETE'
    order by start_time, component;
    --
    http://www.freelists.org/webpage/oracle-l
  • Marcus Mönnig at Sep 30, 2011 at 9:52 pm
    Thanks, Tanel! Your additional explanation cleared it up for me again!
    The "composite granule"
    code isn't probably able to reclaim that memory back from buffer cache use
    to shared pool - or doesn't realize that it would be good to shrink the
    buffer cache...
    This possibly is the culprit behind what I observed and called a memory
    leak. My expectation would be that, at least when you get ORA-04031
    errors, the instance should know that it would be better to reclaim
    space used by the buffer cache.

    Cheers and happy OOW!
    Marcus
  • D'Hooge Freek at Sep 28, 2011 at 12:10 am
    Tanel,

    Is the "KGH NO ACCESS" component memory that is entirely used for the buffer cache, or is it the total of the mixed granules (partly containing buffer cache, partly containing shared pool contents)?

    Is it normal that the ASMM manager could not free up so many granules when the buffer cache had to grow?

    Kind regards,

    Freek D'Hooge
    Uptime
    Oracle Database Administrator
    email: freek.dhooge@uptime.be
    tel +32(0)3 451 23 82
    http://www.uptime.be
    disclaimer: www.uptime.be/disclaimer
    ---
    From: tanel@poderc.com On Behalf Of Tanel Poder
    Sent: woensdag 28 september 2011 0:23
    To: D'Hooge Freek
    Cc: veeeraman@gmail.com; mm@marcusmoennig.de; ORACLE-L
    Subject: Re: High shared pool usage

    If you google for KGH NO ACCESS, you'll see that it's just how ASMM works :)

    http://blog.tanelpoder.com/2009/09/09/kgh-no-access-allocations-in-vsgastat-buffer-cache-within-shared-pool/

    It's the mixed granules where buffer cache is placed within shared pool (yes, I just did say that!).

    So, that 5 GB is buffer cache. Your shared pool has been big as probably during some time there was more shared pool loading activity than physical IO activity and the ASMM manager decided to increase the shared pool. Once more buffer cache was needed, it could not de-allocate complete granules anymore and just made them "composite granules"
  • Tanel Poder at Sep 28, 2011 at 12:11 am
    Yep, I've explained it in the blog entry ... It's because you can't just
    take away a whole granule from shared pool *if* it still has pinned objects
    in it. And some objects may stay pinned "forever". That's why in 10.1 you
    couldn't always shrink shared pool... this KGH NO ACCESS magic wasn't
    implemented yet ...
    --
    Tanel Poder
    Expert Oracle Exadata book:
    http://www.apress.com/9781430233923


    On Wed, Sep 28, 2011 at 1:35 AM, D'Hooge Freek wrote:

    Tanel,

    Is the "KGH NO ACCESS" component, memory that is entirely used for buffer
    cache or is it the total of the mixed granules (partly containing buffer
    cache, partly containing shared pool stuff)?

    Is it normal that the ASMM manager could not free up so many granules when
    the buffer cache had to grow?


    --
    http://www.freelists.org/webpage/oracle-l
  • Yong Huang at Oct 2, 2011 at 7:09 pm

    I've seen a case when each insert of some ETL generated a new sql.
    There was no way to handle it with cursor_sharing options: the
    guys(developers) decided to use a new alias for each insert:
    insert into table t001; insert into table t002 ...
    That's actually quite common. There's no good solution from the DBA
    side. What I usually do is keep running a DDL on the said table, such as
    granting SELECT on it to a role, during the period of those inserts. The
    DDL invalidates the dependent cursors, so the memory used by the SQLs is
    released. 11g doesn't do that though, and dbms_shared_pool.purge on
    individual SQLs has to be used.
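
    The 11g purge looks something like this (a sketch; the address and hash value come from v$sqlarea for the statement you want out of the pool):

    select address, hash_value from v$sqlarea where sql_id = '&sql_id';

    exec dbms_shared_pool.purge('<address>, <hash_value>', 'C');

    The first argument is address||', '||hash_value and the 'C' flag marks the object as a cursor.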

    Yong Huang
  • Laimutis Nedzinskas at Oct 3, 2011 at 5:45 am
    What I usually do is keep running a DDL, such as grant select
    to dba, on the said table during the period of those inserts. The
    memory used by the SQLs will be released.

    Funny enough :) Do you know the pool mechanics behind that behaviour?
    insert into table t001; insert into table t002 ...
    That's actually quite common.
    Didn't know that :))) Sounds like some tool or method is behind it.
    Probably good for monitoring ETL progress... A good way to torture
    Oracle too.

    brgds, Laimis N



