How to make full use of the disk IO ?

Fei Pan at May 7, 2011 at 5:32 pm
8 DataNodes (16-core CPU, 32 GB memory, gigabit NIC)
1 NameNode (16-core CPU, 32 GB memory, gigabit NIC)

I really want to know how to make full use of the cluster. Any advice?
Thank you.
--
Stay Hungry. Stay Foolish.
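
A common starting point for spreading HDFS and MapReduce I/O across all of the disks in a node like this is to list one directory per physical disk in dfs.data.dir and mapred.local.dir, and to size the task slots against the core count. The snippet below is only a sketch using Hadoop 0.20-era property names; the /data1-/data4 mount points and the slot counts are assumptions to be adjusted to the actual disk layout and workload:

    <!-- hdfs-site.xml: one directory per physical disk; the DataNode
         round-robins new blocks across them -->
    <property>
      <name>dfs.data.dir</name>
      <value>/data1/dfs/dn,/data2/dfs/dn,/data3/dfs/dn,/data4/dfs/dn</value>
    </property>

    <!-- mapred-site.xml: spread map spill and shuffle files over the same disks -->
    <property>
      <name>mapred.local.dir</name>
      <value>/data1/mapred/local,/data2/mapred/local,/data3/mapred/local,/data4/mapred/local</value>
    </property>

    <!-- roughly one task slot per core on a 16-core box, split between maps
         and reduces; tune against observed CPU load vs. I/O wait -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>12</value>
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>4</value>
    </property>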


  • Fei Pan at May 7, 2011 at 5:35 pm
    4 x 300 GB disks in each node.

    --
    Stay Hungry. Stay Foolish.
  • GOEKE, MATTHEW [AG/1000] at May 7, 2011 at 6:14 pm
    Fei,



    In my experience this is a very compute-heavy setup given the ratio of data density to core count, so it would help to know your use cases before making any recommendations. Also, what are the interface and speed of those hard drives?

    Matt
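
    (For reference, the interface and rough speed of the drives can usually be checked from the shell; the device name /dev/sda and the /data1 test path below are only placeholders:)

        # identify the drives (vendor, model, SAS/SATA interface)
        cat /proc/scsi/scsi
        smartctl -i /dev/sda      # if smartmontools is installed

        # rough sequential read throughput of one disk
        hdparm -t /dev/sda

        # rough sequential write throughput, bypassing the page cache
        dd if=/dev/zero of=/data1/ddtest bs=1M count=2048 oflag=direct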



  • Fei Pan at May 7, 2011 at 6:28 pm
    I think most of our jobs are data intensive... we use Hive to compute
    results and put them into the reporting system.

    The hard drives are also OK... they are:

    Host: scsi0 Channel: 00 Id: 00 Lun: 00
    Vendor: SEAGATE   Model: ST9300603SS   Rev: 3104
    Type:   Direct-Access                  ANSI SCSI revision: 05
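
    (For I/O-bound Hive jobs, one setting that often helps is compressing the intermediate map output, which cuts both disk and network traffic during the shuffle. The property names below are the 0.20-era ones; DefaultCodec is built in, while LZO or Snappy would need to be installed separately:)

        <property>
          <name>mapred.compress.map.output</name>
          <value>true</value>
        </property>
        <property>
          <name>mapred.map.output.compression.codec</name>
          <value>org.apache.hadoop.io.compress.DefaultCodec</value>
        </property>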


    --
    Stay Hungry. Stay Foolish.
  • Fei Pan at May 7, 2011 at 6:39 pm
    But we will do some BI jobs later on top of the cluster... (maybe we will
    use Mahout)

    --
    Stay Hungry. Stay Foolish.

Discussion Overview
group: hdfs-user
categories: hadoop
posted: May 7, 2011 at 5:32 pm
active: May 7, 2011 at 6:39 pm
posts: 5
users: 2
website: hadoop.apache.org...
irc: #hadoop
