Sonal Goyal |
at Jun 22, 2011 at 2:00 am
Hi Mark,
You can take a look at
http://allthingshadoop.com/2010/04/28/map-reduce-tips-tricks-your-first-real-cluster/ and
http://www.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/ to
configure your cluster. Along with the maximum map and reduce tasks, you can change the child
JVM heap size, dfs.datanode.max.xcievers, etc. A good practice is to understand what kind
of map reduce programming you will be doing and whether your tasks are CPU bound or
memory bound, and change your base cluster settings accordingly.
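As a rough sketch only (illustrative values, assuming 0.20.x-style property names; tune them to your own jobs and daemons), the per-node settings would go in mapred-site.xml and hdfs-site.xml along these lines:

  <!-- mapred-site.xml: concurrent task slots and child JVM heap per node -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>6</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>

  <!-- hdfs-site.xml: raise the datanode transceiver limit -->
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>

The idea is that slots x child heap (here 8 x 1GB) should leave headroom on a 12GB node for the datanode, tasktracker and OS; adjust the split once you know whether your jobs are CPU or memory heavy.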
Best Regards,
Sonal
Hadoop ETL and Data Integration <https://github.com/sonalgoyal/hiho>
Nube Technologies <http://www.nubetech.co>
<http://in.linkedin.com/in/sonalgoyal>
On Wed, Jun 22, 2011 at 6:16 AM, Mark wrote:
We have a small 4-node cluster; each node has 12GB of RAM and Quad Core Xeon CPUs.
I'm assuming the defaults aren't that generous, so what are some
configuration changes I should make to take advantage of this hardware? Max
map tasks? Max reduce tasks? Anything else?
Thanks