Fei,
From my experience this is a very compute-heavy setup, judging by the ratio of data density to number of cores, so it would help to know your use cases before making any recommendations. Also, what are the interface and speed of those hard drives?
Matt
From: Fei Pan
Sent: Saturday, May 07, 2011 12:35 PM
To: hdfs-user@hadoop.apache.org
Subject: Re: How to make full use of the disk IO ?
4 × 300 GB disks in each node.
2011/5/8 Fei Pan <cnweike@gmail.com>
8 DataNodes (16-core CPU, 32 GB memory, 1 Gb NIC)
1 NameNode (16-core CPU, 32 GB memory, 1 Gb NIC)
I really want to know how to make full use of the cluster. Any advice? Thank you.
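For context, a common first step on 0.20-era Hadoop is to list every disk's mount point in dfs.data.dir so the DataNode round-robins block writes across all four spindles, and to size TaskTracker slots roughly to the core count so concurrent tasks keep the disks busy. A sketch of the relevant properties (the /data/1…/data/4 mount points and the slot counts are assumptions for illustration, not values from this thread):

```xml
<!-- hdfs-site.xml: one directory per physical disk; the DataNode
     rotates new blocks across these, spreading IO over all spindles -->
<property>
  <name>dfs.data.dir</name>
  <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn,/data/4/dfs/dn</value>
</property>

<!-- mapred-site.xml: enough concurrent tasks per 16-core node to
     overlap CPU and disk work (tune against your actual workload) -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>12</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>
</property>
```

Whether more task slots actually help depends on whether the jobs are CPU-bound or IO-bound, which is why the use-case question above matters.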
--
Stay Hungry. Stay Foolish.