folks,

I have a machine (a RAC configuration) running 10.2.0.4 on HP-UX PA-RISC.
In a node I have 38GB of physical memory and 8 processors.
The machine could not run with asynchronous I/O, so I set the I/O slaves to 8 and DBWR to 1.

The AWR report says that 2GB of memory is enough for the PGA and that 12GB is mostly enough for the SGA.
There are 1000 concurrent users, up to a maximum of 1500 connected.
At any one time only 30-50 are concurrently active with OLTP transactions.

From the OS perspective, 90-100% of memory is reported as used, and under high load about 50% of swap gets used as well.
Almost every 20 days the server reboots itself; I think this is due to lack of memory...

Why is there such a big difference in memory usage between the OS side and the Oracle side?

--
regards
ujang

"I believe that exchange rate volatility is a major threat to
prosperity in the world today"
Dr. Robert A. Mundell, Nobel Laureate 1999

  • Krish.hariharan_at_quasardb.com at Jan 20, 2008 at 1:20 am
    Ujang,

    Since you mention RAC, I suspect this is probably a node eviction. The cause
    could be CPU utilization (not mentioned) or swap consumption preventing the
    heartbeat messages and consequently getting the node evicted. Look at
    ocssd.log on the surviving node. How many nodes are in your cluster, and how
    busy are the other nodes (CPU, swap)?

    If my understanding of memory usage is correct (as seen on Solaris), each
    process maps the SGA into its virtual address space and has to reserve swap
    proportional to it. I would address that with shared servers (MTS), since
    only a few processes (in your case 50) need to be active rather than 1500;
    you could probably get away with about 75 shared server processes (if you
    are not already using this capability).

    The system logs or the crs/css logs will give you that information. The
    other thing to look for is the progressive build up and usage of swap.
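    For what it's worth, a minimal sketch of where to start looking on a 10.2
    cluster (the CRS home variable and the syslog path below are the usual
    defaults, so treat them as assumptions and adjust for your install):

    # CSS log on the surviving node: look for eviction/reconfiguration messages
    $ egrep -i 'evict|reconfig' $ORA_CRS_HOME/log/`hostname`/cssd/ocssd.log | tail -20

    # HP-UX system log around the time of the reboot
    $ tail -200 /var/adm/syslog/syslog.log | egrep -i 'panic|memory|swap'

    # current swap configuration and usage, totals in MB
    $ /usr/sbin/swapinfo -tam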

    Regards,
    -Krish
    Krish Hariharan
    President/Executive Architect, Quasar Database Technologies, LLC
    http://www.linkedin.com/in/quasardb

  • Ujang Jaenudin at Jan 20, 2008 at 1:47 am
    krish,

    The CPU utilization never goes above 30%, and unfortunately there are no log
    or trace files anywhere in the stack (OS, clusterware, database) from the
    time the reboot happened.

    MTS is another plan...

    But why is there such a big difference between what Oracle reports (AWR) and
    what the OS reports? I am thinking of an OS memory management problem; how
    do I dig into it?

    Oh, and the OS version is HP-UX B.11.11 PA-RISC.

    regards
    ujang
  • Krish.hariharan_at_quasardb.com at Jan 20, 2008 at 2:21 am
    Ujang,

    You are addressing two separate, perhaps related, issues: swap usage and the
    machine rebooting. I would tackle the latter first, since that tends to
    leave messages in the syslog or the crs/css logs.

    Once you know why that is happening, you may have to set up monitors to
    track the swap usage and watch process activity.

    I am not experienced with HP system commands. You should work with your SA,
    but a simple start might be ps -ef and correlated swap usage (swap -s).
    These may be different on HP-UX.
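    On HP-UX the rough equivalents would be something along these lines (a
    sketch only; standard 11.x commands are assumed, so verify the options with
    the SA):

    # swap configuration and usage, totals in MB (the swap -s analogue)
    $ /usr/sbin/swapinfo -tam

    # processes by virtual size, largest first (UNIX95 enables the XPG4 ps options)
    $ UNIX95= ps -e -o vsz=,pid=,args= | sort -rn | head -20

    # paging and swapping activity sampled over a minute
    $ vmstat 5 12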

    Regards,
    -Krish
    Krish Hariharan
    President/Executive Architect, Quasar Database Technologies, LLC
    http://www.linkedin.com/in/quasardb

  • Andrew Kerber at Jan 20, 2008 at 5:24 am
    How much swap is configured on the server?
    --
    Andrew W. Kerber

    'If at first you dont succeed, dont take up skydiving.'

  • Smith, Steven K - MSHA at Jan 23, 2008 at 3:52 pm
    We are in the process of installing two EMC DMX-3 disk arrays. One
    local and one remote. (1000+ miles distant)

    We have a requirement to have the production OLTP and warehouse
    databases standby in the remote location. Not real time, but close.

    We are investigating the use of EMC's SRDF in place of data guard to
    maintain the remote Oracle environments. The main reason we are leaning
    this way is because the warehouse is fed from the oltp instance
    (materialized views) in addition to 4 or 5 outside sources. We can
    replicate the entire database/load/source files/etc environments and
    have a setup 'ready to start' with minimal modifications on our part.

    Does anyone have experience maintaining standby databases using SRDF?
    Is EMC selling me a bill of goods?

    Steve Smith
    Desk: 303-231-5499
    Fax: 303-231-5696
  • Andrew Kerber at Jan 23, 2008 at 4:09 pm
    I did this at a previous employer, though the sites were only about 20 miles
    apart. It worked fine. We tested once per quarter and never had any issues
    that were anything other than incorrect setup; mostly we didn't have any
    issues at all.

    --
    Andrew W. Kerber

    'If at first you dont succeed, dont take up skydiving.'

  • Baumgartel, Paul at Jan 23, 2008 at 4:54 pm
    We use SRDF to feed disaster recovery servers in a separate location. It
    works well, but you must consider the hit your write performance will take,
    especially given the great distance between locations. The laws of physics
    dictate a delay proportional to the distance the bits have to travel. Make
    sure that EMC gives you an estimate of the expected delay.

    For high-volume OLTP, this delay can become most problematic on redo log
    writes. You'll want to ensure that your configuration (log_buffer in
    particular) allows log writer to stay busy.
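    As a rough sanity check on that delay: assuming light in fibre covers about
    200,000 km/s (roughly two thirds of c) and ignoring equipment latency, 1000+
    miles each way means every synchronous remote write pays on the order of a
    16 ms round trip before any real work is done:

    # ~1600 km one way, doubled for the round trip, expressed in milliseconds
    $ echo 'scale=1; 2 * 1600 * 1000 / 200000' | bc
    16.0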


    Paul Baumgartel
    CREDIT SUISSE

    Information Technology
    Prime Services Databases Americas
    One Madison Avenue
    New York, NY 10010
    USA

    Phone 212.538.1143
    paul.baumgartel_at_credit-suisse.com
    www.credit-suisse.com



  • Jeremiah Wilton at Jan 23, 2008 at 9:14 pm

    Baumgartel, Paul wrote:

    ...you must consider the hit your write performance will take, especially
    given the great distance between locations. The laws of physics dictate a
    delay proportional to the distance the bits have to travel.

    I think the advantages of DataGuard over SRDF are pretty overwhelming.
    First of all, with Dataguard, you can create a decoupled, asynchronous
    duplicate site that is almost completely recoverable to the present point in
    time. With SRDF, in order to assure that the database will even open, you
    need to be in synchronous mode, which imposes a network penalty for every
    single logfile and datafile write. For heavy OLTP applications the impact
    could be dramatic.

    SRDF will not protect you against a wide variety of failures that DataGuard
    will, such as:

    An errant process that writes over, deletes, or corrupts Oracle datafiles

    User/admin errors and logical corruption: Dataguard can run with an apply
    delay
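    As a concrete example, an apply delay on a 10g physical standby is normally
    set through the DELAY attribute of the archive destination on the primary (a
    sketch only; the destination number, service name and four-hour delay are
    placeholders, and an spfile is assumed):

    $ sqlplus -s / as sysdba <<'EOF'
    -- hold received redo for 240 minutes before it is applied on the standby
    alter system set log_archive_dest_2='SERVICE=standby_db DELAY=240' scope=both;
    EOF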

    In addition, with Dataguard you have the ability to query and easily back up
    the standby with no impact to the primary. There have got to be a half
    dozen other advantages to DG over SRDF that I haven't thought of. Hopefully
    if my SRDF experience is outdated, those with more contemporary experience
    will correct me.

    Best of all, a DG deployment will survive any future migration to another
    storage vendor. With SRDF you lock yourself in with EMC.

    Hope this helps,

    Jeremiah Wilton
    ORA-600 Consulting
    http://www.ora-600.net
  • Baumgartel, Paul at Jan 23, 2008 at 9:50 pm
    I second Jeremiah's comments. The decision to use SRDF here was made by others (probably non-DBAs) a long time ago. I'd prefer DataGuard for the reasons cited.

    Paul Baumgartel
    CREDIT SUISSE

    Information Technology
    Prime Services Databases Americas
    One Madison Avenue
    New York, NY 10010
    USA

    Phone 212.538.1143
    paul.baumgartel_at_credit-suisse.com
    www.credit-suisse.com

  • Smith, Steven K - MSHA at Jan 23, 2008 at 11:53 pm
    First - we are currently using a physical standby for the OLTP database, and
    it has worked well. We are migrating the WH from Teradata to Oracle, and the
    business requirement is that both the OLTP and WH environments be available
    remotely and be able to continue to be updated remotely. Congress gets antsy
    if we tell them that they have to wait for their information after a
    fatality because we have to rebuild the database (think: Sago Mine accident
    in 2006).

    With the warehouse being partially sourced directly from the OLTP system
    through materialized views, how, using Data Guard, can you keep both sites
    current in both environments? I know you can set up Data Guard
    OLTP(local) -> OLTP(remote) and WH(local) -> WH(remote), or
    OLTP(local) -> OLTP(remote) with both the local and remote WH refreshing
    from the local OLTP. The difficulty, as I see it, is getting the WH(remote)
    to sync with the OLTP(remote) when/if that situation occurs.

    It appears that SRDF would easily allow that remote-WH-to-remote-OLTP sync
    to continue should it be needed. I'm not sure Data Guard would support that
    transition as easily.

    Steve Smith
    Desk: 303-231-5499
    Fax: 303-231-5696

  • David Barbour at Jan 23, 2008 at 8:21 pm
    Are you contemplating SRDF for both the OLTP and data warehouse databases?
    We're running a pretty busy OLTP system with 1082 (at last count)
    interfaces. In case of failure, we specifically don't want some of the
    interface files to move to the standby. How your interfaces are connected
    and processed, and what becomes of them (for example, are files transferred
    and then archived in some manner, or simply deleted?), is going to help
    determine whether this type of solution will work for you. "Lag" time is
    certainly an issue. I worked on a project where the total failover time
    needed to be under 2 minutes. Really, way under. It was a big OLTP system
    that did some stuff with Train A leaving a station heading west at 40mph
    while Train B left another station and headed east at 60mph. The
    information coming out of the DR site had to be perfect. We used
    replication for data and files at the OS level but used DataGuard for the
    database.
  • Mark W. Farnham at Jan 23, 2008 at 10:59 pm
    You can probably make it work. See Wilton's comments.

    Beyond that, remember that with a physical standby it is possible to
    temporarily cancel remote recovery and clone/rename/open a point-in-time
    copy, which would give you a refreshed point-in-time load and reporting
    database from which to feed your data warehouses. And notice that the
    location is your remote location, so you thereby make use of the
    production-scale environment that is otherwise mostly idle, just applying
    redo logs and handling a few miscellaneous tasks.
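    A much simplified sketch of the pause-and-report part of that (the full
    clone/rename/open-point-in-time approach described above involves more
    steps; this just stops apply and opens the standby read-only for a load or
    reporting window):

    $ sqlplus -s / as sysdba <<'EOF'
    -- stop managed recovery on the physical standby
    alter database recover managed standby database cancel;
    -- open read-only so reporting / warehouse extracts can run against it
    alter database open read only;
    EOF

    # later, restart the standby in MOUNT state and resume redo apply with:
    #   alter database recover managed standby database disconnect from session;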



    This feeds the natural triage of the OLTP systems being fully serviced
    quickly in a site disaster at the cost of the DW databases, which usually
    fits the economic survival model if folks plan through the logistics of
    running the business when the primary IT site (often corporate
    headquarters) is inoperable.

    If that triage does not fit your model, then you simply add the extra
    horsepower to the recovery site, but you still do not have to duplicate it
    totally.

    Good luck. Thinking through the entire business logistics plan to determine
    requirements should be upstream from choosing the technical methodology for
    executing the failover. I admit to being quite biased toward physical
    standbys, since they have worked well since before Oracle called it a
    product, and it is essentially as reliable as Oracle's recovery model,
    which since at least 6.0.36 has been very reliable indeed.



  • Ronnie Doggart at Jan 20, 2008 at 9:35 am
    Hi,

    /usr/sbin/swapinfo

    And see how much swap is configured.

    Ronnie

  • Ujang Jaenudin at Jan 21, 2008 at 7:01 am
    $ /usr/sbin/swapinfo

                 Kb        Kb         Kb   PCT  START/      Kb
    TYPE      AVAIL      USED       FREE  USED   LIMIT RESERVE PRI NAME
    dev    33554432         0   33527628    0%       0       -   1 /dev/vg00/lvol2
    dev    33554432         0   33527628    0%       0       -   1 /dev/vg00/lvolswap
    reserve       -  22662700  -22662700
    memory 39065844   6594716   32471128   17%
    --
    regards
    ujang

    "I believe that exchange rate volatility is a major threat to
    prosperity in the world today"
    Dr. Robert A. Mundell, Nobel Laureate 1999
  • Ronnie Doggart at Jan 20, 2008 at 9:39 am
    Hi,

    Also if you have Glance or GlancePlus you can use this to monitor swap space usage dynamically.

    Ronnie

  • Jonathan Lewis at Jan 20, 2008 at 9:58 am
    There's a useful set of notes, including video presentation
    on this topic at:

    http://www.pythian.com/blogs/741/pythian-goodies-free-memory-swap-oracle-and-everything

    Most of the discussion is based on Linux, but the principles apply
    across the board.

    Regards

    Jonathan Lewis
    http://jonathanlewis.wordpress.com

    Author: Cost Based Oracle: Fundamentals
    http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

    The Co-operative Oracle Users' FAQ
    http://www.jlcomp.demon.co.uk/faq/ind_faq.html

  • Ujang Jaenudin at Jan 21, 2008 at 9:01 am
    hi,

    I got the memory reports from Glance and from the Grid Control host
    performance page, and both tools indicate the same numbers.

    I am looking into fsflush (or its HP-UX equivalent) now...

    regards
    ujang
    On Jan 21, 2008 2:17 PM, vnr1995 wrote:
    If this is a fresh installation, and if you have faced this problem right
    away after it was brought into production, I have a few questions:

    How do you know that 80 to 90 percent of memory is being used? Many
    commands dealing with memory management are misleading: for instance,
    prstat on Solaris is downright misleading, and tuning based on prstat is
    downright suicidal. I don't know about HP-UX; on Solaris, one can get
    memory usage by interacting with the kernel debugger, something like

    echo "::memstat" | mdb -k

    There is another thing you need to look at: how frequently is the fsflush
    daemon being run? There is a kernel parameter (on Solaris) that dictates
    this frequency: autoup. Given that you have 38 GB of memory, try to run the
    fsflush daemon less often, so that the paging activity gets reduced.
  • Ujang Jaenudin at Jan 22, 2008 at 3:56 am
    hi,

    According to pmap (a pmap port for HP-UX... is it an accurate tool?), it
    seems that one server process takes only about 4MB... am I correct?

    $ ps -ef | grep LOCAL=NO
    ............

    oracle 11307 1 2 10:50:29 ? 00:00 oracleOPEAICX1 (LOCAL=NO)
    oracle 11525 1 0 10:50:37 ? 00:00 oracleOPEAICX1 (LOCAL=NO)
    oracle 6787 1 0 10:47:43 ? 00:00 oracleOPEAICX1 (LOCAL=NO)

    $ ./pmap-0-0-hpux-parisc 11307
    00000000 4K read/exec shrd [null]
    c008f000 4K read shrd /var/spool/pwgr/status
    00000000 114688K read/exec exec shrd [text]
    ffff0000 32K read/write [user]
    00000000 4096K read/write [data]
    00000000 1024K read/write [mmap-file]
    40000000 1024K read/write [mmap-file]
    80000000 640K read/write [mmap-file]
    c0000000 8K read/write/exec [mmap-file]
    00000000 4K read/write shlib /usr/lib/pa20_64/libnss_files.1
    40000000 4K read/write shlib /usr/lib/pa20_64/libnss_nis.1
    80000000 512K read/write [mmap-file]
    c0000000 16K read/write/exec [mmap-file]
    00000000 16K read/write/exec [mmap-file]
    40000000 8K read/write/exec [mmap-file]
    80000000 12K read/write shlib /opt/star-ncf-prod/ep_patch/usr/lib/pa20_64/libxti.2
    c0000000 8K read/write/exec [mmap-file]
    ffff1000 60K read/write shlib /usr/lib/pa20_64/libc.2
    00000000 52K read/write [mmap-file]
    40000000 12K read/write shlib /usr/lib/pa20_64/libm.2
    7fff5000 44K read/write shlib /usr/lib/pa20_64/libnsl.1
    80000000 28K read/write [mmap-file]
    c0000000 8K read/write/exec [mmap-file]
    00000000 4K read/write shlib /usr/lib/pa20_64/libdl.1
    3ffff000 4K read/write shlib /usr/lib/pa20_64/libnss_dns.1
    40000000 4K read/write [mmap-file]
    7fffd000 12K read/write shlib /usr/lib/pa20_64/libpthread.1
    80000000 8K read/write [mmap-file]
    c0000000 4K read/write shlib /usr/lib/pa20_64/librt.2
    00000000 8K read/write/exec [mmap-file]
    3ff9f000 388K read/write shlib /usr/lib/pa20_64/libcl.2
    40000000 52K read/write [mmap-file]
    7ffc1000 252K read/write shlib /apps/oracle/product/10.2/lib/libnnz10.sl
    80000000 8K read/write [mmap-file]
    c0000000 780K read/write shlib /apps/oracle/product/10.2/lib/libjox10.sl
    00000000 4K read/write shlib /apps/oracle/product/10.2/lib/libdbcfg10.sl
    40000000 8K read/write/exec [mmap-file]
    80000000 4K read/write shlib /apps/oracle/product/10.2/lib/libclsra10.sl
    c0000000 4K read/write shlib /apps/oracle/product/10.2/lib/libocrutl10.sl
    00000000 8K read/write shlib /apps/oracle/product/10.2/lib/libocrb10.sl
    40000000 8K read/write shlib /apps/oracle/product/10.2/lib/libocr10.sl
    80000000 8K read/write/exec [mmap-file]
    c0000000 4K read/write shlib /apps/oracle/product/10.2/lib/libskgxn2.sl
    00000000 24K read/write shlib /apps/oracle/product/10.2/lib/libhasgen10.sl
    40000000 8K read/write/exec [mmap-file]
    7fffb000 20K read/write shlib /usr/lib/pa20_64/dld.sl
    80000000 4K read/write shlib [mmap-file]
    bfff0000 576K read/write [stack]
    00000000 256K exec exec shrd shlib /usr/lib/pa20_64/dld.sl
    000ad000 12K read/exec exec shrd shlib /usr/lib/pa20_64/libdl.1
    000bc000 8K read/exec exec shrd shlib /apps/oracle/product/10.2/lib/l
    0010c000 20K read/exec exec shrd shlib /usr/lib/pa20_64/librt.2
    00114000 24K read/exec exec shrd shlib /usr/lib/pa20_64/libnss_dns.1
    00120000 100K read/exec exec shrd shlib /usr/lib/pa20_64/libpthread.1
    001f4000 28K read/exec exec shrd shlib /usr/lib/pa20_64/libnss_nis.1
    00200000 1200K read/exec exec shrd shlib /usr/lib/pa20_64/libcl.2
    0032c000 36K read/exec exec shrd shlib /usr/lib/pa20_64/libnss_files.1
    00340000 520K read/exec exec shrd shlib /usr/lib/pa20_64/libnsl.1
    003c4000 40K read/exec exec shrd shlib /apps/oracle/product/10.2/lib/l
    003d0000 152K read/exec exec shrd shlib /usr/lib/pa20_64/libm.2
    00610000 92K read/exec exec shrd shlib /opt/star-ncf-prod/ep_patch/usr
    00640000 716K read/exec exec shrd shlib /apps/oracle/product/10.2/lib/l
    006f4000 36K read/exec exec shrd shlib /apps/oracle/product/10.2/lib/l
    00700000 1144K read/exec exec shrd shlib /usr/lib/pa20_64/libc.2
    00820000 76K read/exec exec shrd shlib /apps/oracle/product/10.2/lib/l
    00840000 436K read/exec exec shrd shlib /apps/oracle/product/10.2/lib/l
    00b40000 304K read/exec exec shrd shlib /apps/oracle/product/10.2/lib/l
    00c00000 2252K read/exec exec shrd shlib /apps/oracle/product/10.2/lib/l
    04400000 9428K read/exec exec shrd shlib /apps/oracle/product/10.2/lib/l
    40000000 13647900K read/write/exec shrd [shared]
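    For what it's worth when reading that listing: the 13647900K "shared"
    mapping at the end is presumably the SGA segment, which every server
    process maps but which exists only once in physical memory, so it should
    not be counted per process. A rough way to total just the private portion
    of one process from that output (a sketch; it assumes the size is always
    the second column and that everything marked shrd/shared is shared):

    $ ./pmap-0-0-hpux-parisc 11307 | egrep -v 'shrd|shared' | \
        awk '{kb += $2} END {print kb, "KB private (approx)"}'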

    regards
    ujang
  • Pedro Espinoza at Jan 22, 2008 at 9:17 pm
    By default, IIRC, HP-UX is set to synchronous I/O. Change the kernel
    parameter fs_async to make it asynchronous I/O. Use Tim Gorman's oramem
    script to get a proper picture of memory usage, instead of just running
    pmap on a particular process.

    http://www.evdbt.com/tools.htm#oramem

    Use sar reports to get proper swap and paging reports, and tune the syncer
    daemon if heavy swapping is going on. Just read and digest the following
    page:

    http://safari.oreilly.com/0130428167/ch11lev1sec5
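    A couple of the sar/vmstat invocations usually used for that on HP-UX (a
    sketch; the sampling intervals are arbitrary):

    # swapping and process switching activity
    $ sar -w 5 12

    # buffer cache activity (read/write cache hit percentages)
    $ sar -b 5 12

    # paging, run queue and free memory over the same window
    $ vmstat 5 12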
  • Finn Jorgensen at Jan 23, 2008 at 2:27 pm
    I haven't managed HP-UX for quite some time, but back then (10.20) HP-UX
    would use ALL available memory for filesystem cache unless a kernel
    parameter has been set to manage it otherwise. Perhaps that's still the
    case?

    Finn
  • Ujang Jaenudin at Jan 23, 2008 at 3:53 pm
    finn,

    I had a long discussion with the SAs. They checked the whole system and
    found that the filesystem buffer cache takes memory that it should not.

    Thanks to Juan, who directed me to look at the dbc kernel parameters.
    According to Glance, the buffer cache takes 21GB (almost 50% of all
    memory), and swap isn't used by the system even when load spikes in Oracle
    need much more memory.
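    For reference, on HP-UX 11.11 the dynamic buffer cache is bounded by the
    dbc_min_pct/dbc_max_pct kernel parameters (a percentage of physical RAM;
    the usual default ceiling of 50% would match the roughly half of memory
    being reported here). A sketch of checking and lowering the ceiling with
    kmtune; whether the change applies online or needs a kernel rebuild and
    reboot depends on the release, so confirm with the SA:

    # current buffer cache limits, as a percentage of RAM
    $ /usr/sbin/kmtune -q dbc_min_pct
    $ /usr/sbin/kmtune -q dbc_max_pct

    # example only: cap the filesystem buffer cache at roughly 10% of RAM
    $ /usr/sbin/kmtune -s dbc_max_pct=10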

    regards
    ujang
  • Finn Jorgensen at Jan 23, 2008 at 6:59 pm
    If you set filesystemio_options=directIO then you don't have to set

    /usr/sbin/mount -F vxfs -e -o noatime,mincache=direct

    This is because the init.ora parameter directs Oracle to use direct I/O
    calls for all of its files, so you don't also have to instruct the
    filesystem to use direct I/O for files opened in it.
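    Either way, it is worth confirming what the instance is actually running
    with (a quick check from SQL*Plus; the parameter names are the standard 10g
    ones):

    $ sqlplus -s / as sysdba <<'EOF'
    show parameter filesystemio_options
    show parameter disk_asynch_io
    show parameter dbwr_io_slaves
    EOF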

    Did I misunderstand something?

    Finn
    On 1/23/08, Juan Miranda wrote:

    Ujang

    Remember also to bypass the filesystem cache using direct I/O (only if you
    have Online JFS):

    filesystemio_options=directIO
    /usr/sbin/mount -F vxfs -e -o noatime,mincache=direct

    If you don't use raw volumes, you must configure I/O slaves, because VxFS
    doesn't permit asynchronous I/O. I used this:

    disk_asynch_io=false
    dbwr_io_slaves=6

    This is my configuration for a data warehouse:

    Free                                        5000
    PGA                                         9000
    SGA (buffer pool, shared pool, keep pool)  11000
    Buffer SO (OS buffer cache)                 3200
    System                                      3600
    ------------------------------------------------
    Total                                      31800

    Juan.




