FAQ
Testing on Solaris, I got direct I/O if either the NFS mount was set
forcedirectio or Oracle had the parameter filesystemio_options=directio.

The only case where I got Unix file system caching was when the mount was done
without forcedirectio and filesystemio_options was either none or asynch.
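
For reference, the two knobs being compared can be sketched as follows (the server name and paths are hypothetical; Solaris mount syntax):

```shell
# Filesystem level: force direct I/O for everything on the NFS mount
# (server and paths are made up for illustration)
mount -F nfs -o forcedirectio,rsize=1048576,wsize=1048576 \
    nfssrv:/export/oradata /u02/oradata

# Database level, in the init.ora / spfile:
#   filesystemio_options = directio    # or setall (direct + async I/O)
```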

Kyle
http://dboptimizer.com

PS: I used dtrace on Solaris to watch the file system access to see whether the
query was going to disk or not, and watched the number of physical reads
with autotrace in sqlplus. The query was definitely doing the same physical
reads in all cases, and in all cases the disks were accessed except when the
NFS mount was done without forcedirectio and filesystemio_options was
either none or asynch. The query was doing
87129 consistent reads
77951 physical reads
Of course the response time of the query was a good indicator. The second
execution of the query with Unix caching was about 5 seconds; with direct
I/O and 32K rsize/wsize on the NFS mount it was 60 seconds, and with 1M
rsize/wsize on the NFS mounts it was 30 seconds. (Looks like the rsize/wsize
can have a big impact.)
For this table, when cached in the buffer cache, it took 2 seconds, i.e. no
physical reads.
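
For a rough sense of scale, the numbers above imply the following throughputs (this assumes an 8 KB Oracle block size, which isn't stated in the thread, so treat it as back-of-envelope arithmetic only):

```python
# Rough throughput implied by Kyle's figures above.
# ASSUMPTION: 8 KB Oracle block size (not stated in the thread).
BLOCK_SIZE = 8 * 1024
PHYSICAL_READS = 77951
total_bytes = PHYSICAL_READS * BLOCK_SIZE   # ~609 MB read from disk

def mb_per_sec(elapsed_seconds):
    """MB/s needed to complete all physical reads in the given elapsed time."""
    return total_bytes / elapsed_seconds / (1024 * 1024)

for label, secs in [("direct I/O, 32K rsize/wsize", 60),
                    ("direct I/O, 1M rsize/wsize", 30),
                    ("OS page cache, 2nd run", 5)]:
    print(f"{label}: {mb_per_sec(secs):.1f} MB/s")
```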
On Fri, Feb 11, 2011 at 3:11 PM, D'Hooge Freek wrote:

Gaja,
X-archive-position: 34399
X-ecartis-version: Ecartis v1.0.0
Sender: oracle-l-bounce_at_freelists.org
Errors-to: oracle-l-bounce_at_freelists.org
X-original-sender: Freek.DHooge_at_uptime.be
Precedence: normal
Reply-To: Freek.DHooge_at_uptime.be
List-help:
List-unsubscribe:
List-software: Ecartis version 1.0.0
List-Id: oracle-l
X-List-ID: oracle-l
List-subscribe:
List-owner:
List-post:
List-archive: <http://www.freelists.org/archives/oracle-l>
X-list: oracle-l

This explains why the forcedirectio mount option is required with NFS on
Solaris.
But I always thought that setting the filesystemio_options parameter to
directio or setall caused the processes to open the files with the O_DIRECT
flag. If so, would this not cause the files to be accessed with direct I/O
regardless of any setting on the filesystem?

I'm working mainly on Linux these days (either with NFS or ASM), so not
much chance of testing this.


Regards,


Freek D'Hooge
Uptime
Oracle Database Administrator
email: freek.dhooge_at_uptime.be
tel +32(0)3 451 23 82
http://www.uptime.be
disclaimer: www.uptime.be/disclaimer
---
From: Gaja Krishna Vaidyanatha
Sent: vrijdag 11 februari 2011 22:47
To: D'Hooge Freek
Cc: Oracle-L List
Subject: Re: How much RAM is to much

Hi Freek,

What you said is true for filesystems that do NOT allow "direct I/O" mount
options in their respective mount commands. But for those filesystems that
do support the relevant direct I/O mount options (e.g. vxfs, jfs, etc.), the
direct I/O mount option has always (in my experience) been required in
addition to setting filesystemio_options to SETALL. Setting just the
filesystemio_options in the init.ora (in those cases) did not create the
desired result.

If you have observed the "lack of the mount option" in recent times on
those filesystems where direct I/O mount options ARE supported (e.g. vxfs,
jfs, etc.), please advise. There is always something new to learn each day :)

Cheers,

Gaja

Gaja Krishna Vaidyanatha,
Founder/Principal, DBPerfMan LLC
http://www.dbperfman.com
Phone - 001-(650)-743-6060
Co-author:Oracle Insights:Tales of the Oak Table -
http://www.apress.com/book/bookDisplay.html?bID=314
Co-author:Oracle Performance Tuning 101 -
http://www.amazon.com/gp/reader/0072131454/ref=sib_dp_pt/102-6130796-4625766
--
http://www.freelists.org/webpage/oracle-l



  • Jamey Johnston at Apr 19, 2011 at 1:18 am
    Be careful with running DBs with large I/O loads on Solaris without using Direct I/O. It uses a lot of CPU cycles! We would have our servers go to load averages of over 200-300 and become virtually unresponsive.

    jbj2

    --

    Jamey Johnston
  • kyle Hailey at Apr 19, 2011 at 5:35 pm
    I'm wondering why there is the recommendation to use "forcedirectio" in the
    mount options when it seems, at least on Solaris, that
    filesystemio_options=directio is sufficient for using direct I/O?

    Kyle
    http://dboptimizer.com
  • D'Hooge Freek at Apr 19, 2011 at 6:42 pm
    Maybe it used to be necessary and no one has verified this on newer versions?

    Freek D'Hooge
    Uptime
    Oracle Database Administrator
    email: freek.dhooge_at_uptime.be
    tel +32(0)3 451 23 82
    http://www.uptime.be
    disclaimer: www.uptime.be/disclaimer

    From: oracle-l-bounce_at_freelists.org On Behalf Of kyle Hailey
    Sent: dinsdag 19 april 2011 19:36
    To: Oracle-L List
    Subject: Re: How much RAM is to much

    I'm wondering why there is the recommendation to use "forcedirectio" in the mount options when it seems, at least on Solaris, that filesystemio_options=directio is sufficient for using direct I/O?

    Kyle
    http://dboptimizer.com
  • Przemyslaw Bak at Apr 20, 2011 at 9:19 am

    On Tue, Apr 19, 2011 at 08:42:34PM +0200, D'Hooge Freek wrote:
    Maybe it used to be necessary and no one has verified this on newer versions?
    We have been using the directio setup on the Oracle side for a few years,
    since (I guess) 9i (we just recently switched to 10g).
    It works even better than the 'forcedirectio' option used on the filesystem side.

    Regards
    Przemyslaw Bak (przemol)

    --
    http://przemol.blogspot.com/


  • kyle Hailey at Apr 20, 2011 at 11:05 pm
    What metrics tell you that filesystemio_options=directio "works even better
    than the 'forcedirectio' option used on the filesystem side"?

    On Solaris, I didn't notice any difference between
    filesystemio_options=directio and using forcedirectio on the mount. If
    either one of them was set, my performance was similar.

    Here are some response times for a full table scan doing 56000 physical
    reads and 87000 consistent reads, along with the vmstat output:

    1)

    mount=no-forcedirectio, filesystemio_options=setall

    kthr memory page disk faults cpu
    r b w swap free re mf pi po fr de sr s0 s1 s2 -- in sy cs us sy id
    0 0 0 633904 2716200 0 0 0 0 0 0 0 0 5 0 0 8849 1879 7501 1 2 98
    0 0 0 633904 2716248 0 45 0 0 0 0 0 0 3 0 0 9031 2395 7610 1 2 97
    0 0 0 621472 2707176 214 806 0 0 0 0 0 1 4 0 0 8982 5888 7656 2 2 96
    0 0 0 612168 2699240 676 1690 0 0 0 0 0 1 3 0 0 9025 8534 7677 3 2 95
    0 0 0 612824 2699688 8 44 0 0 0 0 0 3 4 0 0 8314 2600 7045 1 1 98
    0 0 0 612824 2699752 0 1 0 0 0 0 0 10 9 0 0 8985 2226 7669 1 2 98
    0 0 0 612760 2699632 0 49 0 0 0 0 0 0 2 0 0 8549 2818 7293 1 2 97

    Elapsed: 00:00:39.23
    Elapsed: 00:00:20.24
    Elapsed: 00:00:17.97
    Elapsed: 00:00:17.91
    Elapsed: 00:00:19.91
    Elapsed: 00:00:19.31
    Elapsed: 00:00:17.95

    2)

    mount=forcedirectio, filesystemio_options=none

    kthr memory page disk faults cpu
    r b w swap free re mf pi po fr de sr s0 s1 s2 -- in sy cs us sy id
    0 0 0 586472 3093824 75 271 0 0 0 0 0 0 2 0 0 9129 2968 7841 1 2 97
    0 0 0 589744 3096920 0 28 0 0 0 0 0 1 851 0 0 9484 2373 8126 1 2 97
    0 0 0 589744 3096920 0 20 0 0 0 0 0 2 0 0 0 8923 2624 7664 1 2 97
    0 0 0 589744 3096928 0 44 0 0 0 0 0 3 2 0 0 9743 2912 8361 2 2 96
    0 0 0 589744 3097128 0 0 0 0 0 0 0 1 0 0 0 9582 2205 8218 1 2 97
    0 0 0 589744 3097136 453 1515 0 0 0 0 0 2 3 0 0 9631 9173 8190 2 2 96
    0 0 0 565584 3079304 528 1496 0 0 0 0 0 2 4 0 0 9999 8728 8713 2 2 95
    1 0 0 561096 3075384 331 717 0 0 0 0 0 2 1 0 0 9431 5682 8101 2 2 96

    Elapsed: 00:00:42.04
    Elapsed: 00:00:17.86
    Elapsed: 00:00:19.16
    Elapsed: 00:00:19.13
    Elapsed: 00:00:18.04
    Elapsed: 00:00:18.18
    Elapsed: 00:00:16.18

    3)

    mount=forcedirectio, filesystemio_options=setall

    kthr memory page disk faults cpu
    r b w swap free re mf pi po fr de sr s0 s1 s2 -- in sy cs us sy id
    0 0 0 534656 3047352 17 10 0 0 0 0 0 1 1 0 0 8756 2335 7503 1 2 98
    0 0 0 534656 3047360 0 0 0 0 0 0 0 2 0 0 0 9673 2075 8332 1 2 97
    0 0 0 534656 3047360 0 28 0 0 0 0 0 0 2 0 0 9692 2345 8349 1 2 97
    0 0 0 534656 3047424 0 18 0 0 0 0 0 0 1 0 0 9105 2701 7811 2 2 97
    0 0 0 547728 3056896 0 44 0 0 0 0 0 2 0 0 0 9884 3113 8453 2 2 96
    0 0 0 546640 3055808 0 0 0 0 0 0 0 0 2 0 0 9253 2061 7906 1 2 97
    0 0 0 546640 3055808 0 1 0 0 0 0 0 4 1 0 0 9750 2146 8354 1 2 97
    0 0 0 546640 3055808 0 25 0 0 0 0 0 2 13 0 0 9670 2366 8261 1 2 97
    0 0 0 546640 3055808 0 17 0 0 0 0 0 1 8 0 0 9620 2794 8282 1 2 97
    0 0 0 547728 3056896 0 45 0 0 0 0 0 1 1 0 0 5529 2647 4820 1 1 98

    Elapsed: 00:00:39.22
    Elapsed: 00:00:19.03
    Elapsed: 00:00:18.80
    Elapsed: 00:00:18.29
    Elapsed: 00:00:19.19
    Elapsed: 00:00:17.99
    Elapsed: 00:00:18.97
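
    A quick way to compare the three configurations is to average the warm runs, dropping the first (cold) execution. A small sketch, using the case-3 timings above:

```python
# Average the warm elapsed times (skip the first, cold run).
# Timings copied from case 3 (forcedirectio + setall) above.
def avg_warm(elapsed):
    def to_seconds(hms):                 # "00:00:19.03" -> 19.03
        h, m, s = hms.split(":")
        return int(h) * 3600 + int(m) * 60 + float(s)
    warm = [to_seconds(e) for e in elapsed[1:]]
    return sum(warm) / len(warm)

case3 = ["00:00:39.22", "00:00:19.03", "00:00:18.80", "00:00:18.29",
         "00:00:19.19", "00:00:17.99", "00:00:18.97"]
print(f"case 3 avg warm elapsed: {avg_warm(case3):.2f}s")
```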

    Kyle

    2011/4/20
    On Tue, Apr 19, 2011 at 08:42:34PM +0200, D'Hooge Freek wrote:
    Maybe it used to be necessary and no one has verified this on newer
    versions?

    We have been using the directio setup on the Oracle side for a few years,
    since (I guess) 9i (we just recently switched to 10g).
    It works even better than the 'forcedirectio' option used on the filesystem side.


    Regards
    Przemyslaw Bak (przemol)
    --
    http://przemol.blogspot.com/


  • Przemyslaw Bak at Apr 21, 2011 at 8:00 am

    On Wed, Apr 20, 2011 at 04:05:22PM -0700, kyle Hailey wrote:
    What metrics tell you that filesystemio_options=directio "works even better
    than the 'forcedirectio' option used on the filesystem side"?
    No metrics.
    When I use filesystemio_options=setall (not =directio!) and mount
    this particular filesystem _without_ the 'forcedirectio' option, it gives me two things:
    1. direct and asynchronous access to all files where the rdbms prefers such access (the most important)
    2. cacheable access to all other files (some files are accessed faster when they are "behind" the cache)
    On Solaris, I didn't notice any difference between
    filesystemio_options=directio and using forcedirectio on the mount. If
    either one of them was set my performance was similar.
    We don't use filesystemio_options=directio, only filesystemio_options=setall.

    Regards
    Przemyslaw Bak (przemol)

    --
    http://przemol.blogspot.com/


  • kyle Hailey at Apr 20, 2011 at 11:09 pm
    Tried this on HP, and unlike Solaris where the init.ora was sufficient, on
    HP the filesystem had to be mounted forcedirectio and the init.ora didn't
    matter one way or the other, i.e.:

    on HP
    mount=forcedirectio, filesystemio_options=setall (or directio)
    => direct I/O
    mount=forcedirectio, filesystemio_options=none
    => direct I/O
    mount=no forcedirectio, filesystemio_options=setall (or directio)
    => no direct I/O

    whereas on SOLARIS
    mount=forcedirectio, filesystemio_options=setall (or directio)
    => direct I/O
    mount=forcedirectio, filesystemio_options=none
    => direct I/O
    mount=no forcedirectio, filesystemio_options=setall (or directio)
    => direct I/O

    Then on Linux and AIX, AFAIK, there is no forcedirectio mount option, and
    the init.ora parameter filesystemio_options controls direct I/O.
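
    The observations above can be condensed into a small lookup. This is purely a restatement of the test results reported in this thread (not official documentation), with "hp" standing for the HP case described:

```python
# Does Oracle end up doing direct I/O?  Encodes only the observations
# reported in this thread; real behavior may vary by version/filesystem.
def direct_io(platform, forcedirectio_mount, filesystemio_options):
    wants_direct = filesystemio_options in ("directio", "setall")
    if platform in ("linux", "aix"):   # no forcedirectio mount option exists
        return wants_direct            # init.ora parameter decides
    if platform == "hp":               # mount option decides; init.ora ignored
        return forcedirectio_mount
    if platform == "solaris":          # either setting is sufficient
        return forcedirectio_mount or wants_direct
    raise ValueError(f"untested platform: {platform}")
```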

    Kyle
  • Yong Huang at Apr 21, 2011 at 7:37 pm

    I'm wondering why there is the recommendation to use "forcedirectio" on
    the mount options when it seems, at least on solaris, that
    filesystemio_options=directio is sufficient for using direct I/O?
    - Kyle
    I tested on Linux using the ext3 file system with Oracle 10.2.0.1. With
    filesystemio_options set, Oracle can open datafiles, log files, and archive
    logs with O_DIRECT. But trace files, including alert.log, are opened
    without that flag. So if lots of big trace files are created, I think
    the file system page cache could be stressed.

    By the way, Kyle, your two messages about your performance tests are
    very useful. A very minor point: forcedirectio (or its equivalent) is more
    filesystem-specific than OS-specific. For instance, AIX's default and most
    popular filesystem, JFS, probably doesn't have this option. But if you
    use VxFS, you can specify convosync=direct.
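
    On Linux, one way to verify which files a process opened with O_DIRECT is /proc/<pid>/fdinfo, which reports the open(2) flags in octal. A sketch (checking our own fd here; against Oracle you would inspect the server process's file descriptors):

```python
import os
import tempfile

def open_flags(pid, fd):
    """Return the open(2) flags of a file descriptor, read from /proc."""
    with open(f"/proc/{pid}/fdinfo/{fd}") as f:
        for line in f:
            if line.startswith("flags:"):
                return int(line.split()[1], 8)   # flags are printed in octal
    raise RuntimeError("no flags line in fdinfo")

fd, path = tempfile.mkstemp()                    # ordinary open, no O_DIRECT
flags = open_flags(os.getpid(), fd)
print("O_DIRECT set:", bool(flags & os.O_DIRECT))
os.close(fd)
os.unlink(path)
```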

    Yong Huang

Discussion Overview
group: oracle-l
categories: oracle
posted: Apr 19, '11 at 12:58a
active: Apr 21, '11 at 7:37p
posts: 9
users: 5
website: oracle.com
