On Jun 22, 2011 at 2:00 am:
You can take a look at http://allthingshadoop.com/2010/04/28/map-reduce-tips-tricks-your-first-real-cluster/
to configure your cluster. Along with the task slots, you can change the child
JVM heap size, dfs.datanode.max.xceivers, etc. A good practice is to understand what kind
of MapReduce programming you will be doing (are your tasks CPU-bound or
memory-bound?) and change your base cluster settings accordingly.
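As a sketch, on 0.20-era Hadoop those knobs live in mapred-site.xml on each TaskTracker node. The slot counts and heap size below are illustrative assumptions for a 12GB quad-core box, not recommendations; tune them to your own workload:

```xml
<!-- mapred-site.xml: illustrative values only (assumed hardware: ~12GB RAM,
     quad-core nodes). Property names are from Hadoop 0.20/1.x. -->
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>   <!-- map slots per TaskTracker -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>   <!-- reduce slots per TaskTracker -->
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>   <!-- heap for each child task JVM -->
  </property>
</configuration>
```

A rough sanity check: slots per node times child heap should leave headroom for the DataNode and TaskTracker daemons, so here 6 slots x 1GB leaves roughly half the 12GB free. The xceiver limit (dfs.datanode.max.xceivers) goes in hdfs-site.xml on the DataNodes.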
Hadoop ETL and Data
Nube Technologies <http://www.nubetech.co>
On Wed, Jun 22, 2011 at 6:16 AM, Mark wrote:
We have a small 4-node cluster; each node has 12GB of RAM and quad-core
CPUs. I'm assuming the defaults aren't that generous, so what are some
configuration changes I should make to take advantage of this hardware? Max
map tasks? Max reduce tasks? Anything else?