Hello Everyone,

We are building a system from scratch and want to follow best practices
in setting up a SAN storage array for ASM. With 22 disks available in
the array, I was planning an 11 + 11 RAID 1+0 setup with the maximum
stripe possible within the array, carving out 3 x 2 GB LUNs for
vote/OCR and 2 x 1 TB LUNs for the DATA and FRA disk groups. The
vote/OCR disk group will use normal redundancy and the database disk
groups external redundancy. This will be on 11gR2 + Linux 5.5 + a
4-node RAC cluster housing a mixed (70% OLTP + 30% DSS) workload
application. I'm wondering if anyone has opinions or best practices
generally implemented for these setups.
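As a rough sanity check on a layout like this, the classic RAID 1+0 arithmetic can be sketched in a few lines; the 180 IOPS-per-spindle rating and the 30% write mix below are illustrative assumptions, not figures from this post:

```python
# Back-of-envelope throughput for an N-disk RAID 1+0 set.
# Assumptions (not from this thread): ~180 IOPS per 15k spindle,
# and a 30% write mix for the OLTP-heavy workload.

DISKS = 22
IOPS_PER_DISK = 180  # assumed per-spindle rating

def raid10_iops(disks, iops_per_disk, write_fraction):
    """Approximate host-visible IOPS for RAID 1+0.

    Reads are served by either mirror member; each host write costs
    two backend writes (RAID 1+0 write penalty of 2), so:
    host_iops * (read_fraction + 2 * write_fraction) = backend_iops
    """
    backend = disks * iops_per_disk
    return backend / ((1 - write_fraction) + 2 * write_fraction)

print(round(raid10_iops(DISKS, IOPS_PER_DISK, 0.30)))  # ~3046 host IOPS
```

Under these assumed numbers the 22 spindles have plenty of headroom; the real answer depends on the per-disk rating of the actual drives.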

Thanks
Steven Andrew


  • Steve Harville at Apr 20, 2011 at 9:15 am
    What is the brand and model of the storage system?

    Steve Harville

    http://www.linkedin.com/in/steveharville

    --
    http://www.freelists.org/webpage/oracle-l
  • Anonymous at Apr 20, 2011 at 10:06 am
    I strongly recommend reserving at least 1 disk as a hot spare. If
    storage capacity is not a problem, I would use 2 disks as hot spares
    and leave 20 disks for RAID 10, with 5 x 1 GB LUNs for OCR & voting.
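A brief note on why five voting LUNs rather than three: Clusterware must see a strict majority of voting files, and (as I understand 11gR2) an ASM high-redundancy disk group keeps 5 voting files, one per failure group, versus 3 for normal redundancy. A minimal sketch of the majority rule:

```python
# Clusterware keeps a node alive only while it can access a strict
# majority of voting files, so n files tolerate (n - 1) // 2 failures.
# 3 files (normal redundancy) survive 1 failure; 5 (high) survive 2.

def tolerated_vote_failures(voting_files):
    """Voting-file failures survivable while a majority remains."""
    return (voting_files - 1) // 2

for n in (1, 3, 5):
    print(n, "voting files ->", tolerated_vote_failures(n), "failures tolerated")
```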

    --
    Kamus

    Visit my blog for more : http://www.dbform.com
    Join ACOUG: http://www.acoug.org
    On Wed, Apr 20, 2011 at 10:40 AM, Steven Andrew wrote:
  • Steven Andrew at Apr 20, 2011 at 3:12 pm
    Thanks Leyi. We have already reserved 2 disks as hot spares, and we
    have 22 usable disks.

    Are the 5 x 1 GB LUNs for placing OCR and voting in high redundancy
    instead of normal redundancy?

    To answer Steve, the storage is an EMC CX series array.

    To answer Goran, we are designing this system to handle 1,000 to
    1,500 IOPS.
    On Wed, Apr 20, 2011 at 3:06 AM, Leyi Zhang (Kamus) wrote:

  • Steve Harville at Apr 20, 2011 at 8:25 pm
    You could go RAID 5 instead of RAID 10 and get more usable disk.
    There shouldn't be a write penalty since writes are cached on that
    hardware. Reads could be faster on RAID 5 than on RAID 10 because
    there are more spindles to spread the data across (less duplication).

    Steve Harville

    http://www.linkedin.com/in/steveharville
    On Wed, Apr 20, 2011 at 11:12 AM, Steven Andrew wrote:

  • Jared Still at Apr 22, 2011 at 3:42 pm

    On Wed, Apr 20, 2011 at 1:25 PM, Steve Harville wrote:

    You could go RAID 5 instead of RAID 10 and get more usable disk.
    There shouldn't be a write penalty since writes are cached on that
    hardware.

    That only works up until the write cache gets saturated.
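What saturation means in numbers can be sketched with the classic per-level write-penalty arithmetic; the disk count and the 180 IOPS-per-spindle figure are illustrative assumptions:

```python
# Once the write cache is full, every host write must be de-staged.
# RAID 1+0 costs 2 backend writes per host write; RAID 5 costs 4
# (read data, read parity, write data, write parity).

WRITE_PENALTY = {"raid10": 2, "raid5": 4}

def sustained_write_iops(disks, iops_per_disk, level):
    """Host write IOPS sustainable after the cache saturates."""
    return disks * iops_per_disk // WRITE_PENALTY[level]

for level in ("raid10", "raid5"):
    print(level, sustained_write_iops(22, 180, level))
# on these numbers raid10 sustains 1980 write IOPS, raid5 only 990
```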

    Jared Still
    Certifiable Oracle DBA and Part Time Perl Evangelist
    Oracle Blog: http://jkstill.blogspot.com
    Home Page: http://jaredstill.com
  • Steve Harville at Apr 22, 2011 at 3:55 pm
    I have never seen that happen on recent EMC hardware except in
    contrived tests.

    That only works up until the write cache gets saturated.

    Steve Harville

    http://www.linkedin.com/in/steveharville
  • Goran bogdanovic at Apr 20, 2011 at 11:04 am
    Hi Steven,

    apart from the OCR/voting disk requirements, when designing SAN
    storage for a database, the following needs to be considered:

    - find out the application's I/O characteristics
    - find out the bandwidth limits of all I/O stack components
    - understand the required RPO, RTO, and SLAs

    The resulting design should be throughput-oriented, not
    capacity-oriented.

    regards,
    goran
    On Wed, Apr 20, 2011 at 4:40 AM, Steven Andrew wrote:

  • David Robillard at Apr 22, 2011 at 11:53 am
    Hello Steven,
    We are building up a system from scratch and wanted to follow the best
    practices in setting up SAN Storage Array for ASM. With 22 disks in the
    array for use, i was planning on doing 11 + 11 in Raid 1/0 setup with
    maximum stripe possible within the array and carve out 3 x 2G LUNs for
    vote_ocr and 2 x 1TB LUNs for both DATA and FRA diskgroup. vote-ocr DGs
    will use normal redundancy and database in external redundancy.
    You might want to read the fine manual on ASM. You don't specify
    which database version or OS you plan to use, so I'll assume you're
    going with 11gR2. If so, check out « Oracle ASM Administrator's
    Guide 11gR2, Chapter 2: Considerations for Oracle ASM Storage,
    Recommendations for Storage Preparation » [1].

    This official documentation says that « A minimum of four LUNs (Oracle
    ASM disks) of equal size and performance is recommended for each disk
    group. » and that you should « Create external redundancy disk groups
    when using high-end storage arrays. »

    With that in mind, you might want to change the 2 x 1 TB LUNs for 4 x
    512 GB LUNs. But keep in mind that if you need to add more disk space
    to either disk groups, you will need a 512 GB LUN which is relatively
    big. That is to satisfy the ASM data distribution and balance
    operation as the fine manual says: « Oracle ASM data distribution
    policy is capacity-based. Ensure that Oracle ASM disks in a disk group
    have the same capacity to maintain balance. » In other words, use LUNs
    of the same size in the same disk group.
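The capacity-based policy the manual describes can be illustrated with a tiny sketch: extents (and therefore I/O) land on each disk in proportion to its size, which is why mixing LUN sizes skews the load:

```python
# ASM spreads extents in proportion to disk capacity, so a LUN twice
# the size of its peers receives (and must serve) twice the I/O.

def extent_share(disk_sizes_gb):
    """Fraction of extents each disk receives under capacity-based
    distribution."""
    total = sum(disk_sizes_gb)
    return [size / total for size in disk_sizes_gb]

print(extent_share([512, 512, 512, 512]))  # even: 25% each
print(extent_share([1024, 512]))           # skewed: ~67% vs ~33%
```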

    Also, if you plan to use your +FRA disk group as your RMAN backup
    area, consider giving your +FRA disk group more disk space than the
    +DATA disk group. Especially if you plan to use RMAN incrementally
    updated backups [5]. The well named Doc ID 762934.1 « Flash Recovery
    Area Sizing » can help you with, well, sizing the +FRA. This one is
    also interesting: Doc ID 305648.1 « What is a Flash Recovery Area and
    how to configure it ? ».
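To make the point about +FRA space concrete, here is a deliberately simplified estimate (not the full method from the sizing note; every figure below is made up): one image copy plus incrementals and archived redo for the retention window easily outgrows the database itself.

```python
# Simplified FRA estimate: one level-0 image copy of the database,
# plus daily incrementals and daily archived redo kept for the
# retention window. The real sizing note has more terms.

def fra_size_gb(db_gb, daily_incr_gb, daily_redo_gb, retention_days):
    return db_gb + (daily_incr_gb + daily_redo_gb) * retention_days

# hypothetical 800 GB database, 7-day retention
print(fra_size_gb(db_gb=800, daily_incr_gb=20, daily_redo_gb=10,
                  retention_days=7))  # 1010 GB -> larger than +DATA
```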

    You might also be interested in Doc ID 1187723.1 « Master Note for
    Automatic Storage Management (ASM) » along with Doc ID 265633.1
    « ASM Technical Best Practices », which is for 10gR2 and 11gR1, and
    the « Top 10 Things You Always Wanted to Know About ASM But Were
    Afraid to Ask » by Nitin Vengurlekar [2], which is quite interesting.

    Finally, a side note on ASMLib. The official ASM documentation [1]
    says that if you're running Linux, then « use the Oracle ASMLIB
    feature to provide consistent device naming and permission
    persistency. ». IMHO this is a bad idea as you can achieve the same
    goal with udev instead of ASMLib [3]. But don't take my word for it
    and check out Christo Kutrovsky's presentation « RAC+ASM: 3 years in
    production. Stories to share » [4].

    Regards,

    David

    [1] http://download.oracle.com/docs/cd/E11882_01/server.112/e16102/asmprepare.htm#BABJHHEC
    [2] http://www.dbaexpert.com/blog/wp-content/uploads/2009/08/doug-top-10-asm-questions.pdf
    [3] http://itdavid.blogspot.com/2011/03/how-to-increase-disk-space-in-existing.html
    [4] http://www.pythian.com/news/9055/oracle-rac-asm-3-years-in-production-stories-to-share-slides-from-rmoug10/
    [5] http://download.oracle.com/docs/cd/E11882_01/backup.112/e10642/rcmbckba.htm#CHDEHBFF
  • Steven Andrew at Apr 25, 2011 at 11:42 pm

    On Fri, Apr 22, 2011 at 4:53 AM, David Robillard wrote:


    Hi David,

    Thanks for the detailed mail. One thing I tend to disagree with is
    the minimum-4-LUNs-per-diskgroup recommendation. Doesn't creating
    smaller LUNs increase LUN maintenance in the DG, much like having
    smaller datafiles for tablespaces? At least that was the theory I
    had come up with for fewer, bigger LUNs within a DG. As all LUNs
    will be coming off of the same RAID set, does it really matter
    having smaller LUNs? I understand that to grow the DG I would need
    another TB, but if the database is NOT going to grow beyond the
    allocated space, it shouldn't be a problem, right?

    Thanks,
    Steven.
  • Anonymous at Apr 26, 2011 at 3:08 am
    I would also like to use 2 x 500 GB LUNs for the data DG instead of
    1 x 1 TB LUN; since it's not so critical, I didn't give this
    suggestion in my last reply.

    My thought is: if your storage box has 2 controllers (controller A
    and controller B), you can create 2 LUNs and set each one's
    preferred owner to a separate controller, e.g. LUN1 preferring
    controller A and LUN2 preferring controller B.

    --
    Kamus

    Visit my blog for more : http://www.dbform.com
    Join ACOUG: http://www.acoug.org
    On Tue, Apr 26, 2011 at 7:42 AM, Steven Andrew wrote:
  • Jeremy Schneider at Apr 26, 2011 at 2:19 pm
    Ahhh! A religious, er, um, "storage best practice" question.
    On 4/25/2011 6:42 PM, Steven Andrew wrote:
    if database is NOT going to grow beyond allocated space, it
    shouldn't be a problem right.
    Is that like saying that 640K memory should be enough for anything? :)

    A few thoughts in response to the thread so far:
    - the number 4 seems a little arbitrary to me, as indeed do all the
    "specific" numbers that have been thrown out in this discussion
    - 11gR2 does not actually require dedicated disks for OCR/vote,
    although it seems to me that most people are doing this. The OCR &
    vote can be spread across disks that are also used for data.
    - if you use ASM for redundancy then you get "hot spare capacity"
    instead of hot spare disks, and achieve better spindle utilization and
    higher IOPS -- but you lose a lot of potential capacity b/c you can't
    use a parity scheme; only mirroring
    - IMHO, avoiding parity-based RAID in favor of mirroring and trying
    to segregate traffic (e.g. redo) MIGHT get you AT BEST an extra
    10-15% performance over properly-configured alternatives. I made up
    that number, but the point is - it's small. And that's only if you
    do it exactly right... which is very difficult... I honestly doubt
    that it's making any difference in most databases (ever since it
    became a popular religion). And you lose a lot of possible capacity.
    Maybe parity is best for you.
    - What if your DB is still small in 2 years and you decide you want
    to use some of that disk for something different? Trying to shrink a
    LUN which has been given to ASM is basically impossible right now.
    - The question of what's most important for you (capacity, performance,
    flexibility) is purely a business decision. There is no best practice
    that can answer this for you.
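The "hot spare capacity" point can be put in rough numbers: hardware RAID with dedicated spares leaves spindles idle, while ASM mirroring with reserved free space keeps them all serving I/O. The per-spindle rating below is an assumption for illustration:

```python
# With 2 dedicated hot spares, only 20 of 22 spindles serve I/O;
# with ASM redundancy plus reserved free capacity, all 22 do.
# 180 IOPS per spindle is an assumed figure, not from the thread.

IOPS_PER_DISK = 180

def aggregate_read_iops(active_spindles):
    return active_spindles * IOPS_PER_DISK

print("hw RAID10 + 2 idle spares:", aggregate_read_iops(20))  # 3600
print("ASM mirroring, all disks :", aggregate_read_iops(22))  # 3960
```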

    Not that I have any of your answers, but if you're interested you can
    check out a presentation I gave at UKOUG and Collaborate this year about
    ASM lessons learned at a number of large companies. That was more about
    wide-scale adoptions, and your question sounds a bit different.
    Nonetheless: see "Premier League Peek" ->
    http://www.ardentperf.com/downloads

    -Jeremy

Discussion Overview
group: oracle-l
categories: oracle
posted: Apr 20, '11 at 2:40a
active: Apr 26, '11 at 2:19p
posts: 12
users: 7
website: oracle.com
