FAQ
Hi -- Can we specify a different set of slaves for each MapReduce job run?
I tried using the --config option and specified a different set of slaves in
the slaves config file. However, it does not use the selected set of slaves,
only the one initially configured.

Any help?

Thanks,
Bikash


  • Faraz Ahmad at Sep 28, 2011 at 5:08 pm
    The slaves list is a configuration parameter (see
    bin/hadoop-config.sh) that is read once, when you start the MapReduce
    cluster (by executing "start-mapred.sh"). You can change the slaves
    between jobs by executing "stop-mapred.sh", editing the slaves file,
    and running "start-mapred.sh" again. You can also add or exclude
    slaves (TaskTrackers) between jobs using the "mapred.hosts" and
    "mapred.hosts.exclude" parameters (I think the Hadoop tutorial
    provides help on that), but all of these methods require you to
    restart the MapReduce cluster.
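
    As a sketch of the second approach (assuming a Hadoop 0.20-era
    configuration; the file paths below are illustrative, not taken from
    the thread), the include/exclude host lists are pointed to from
    conf/mapred-site.xml:

    ```xml
    <!-- conf/mapred-site.xml: illustrative sketch; paths are assumptions -->
    <configuration>
      <!-- File listing hosts permitted to connect as TaskTrackers -->
      <property>
        <name>mapred.hosts</name>
        <value>/home/hadoop/conf/hosts.include</value>
      </property>
      <!-- File listing hosts to exclude (decommission) -->
      <property>
        <name>mapred.hosts.exclude</name>
        <value>/home/hadoop/conf/hosts.exclude</value>
      </property>
    </configuration>
    ```

    After editing the include/exclude files between jobs, restart the
    MapReduce cluster (stop-mapred.sh, then start-mapred.sh) so the new
    lists take effect, as noted above.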

    On Tue, Sep 27, 2011 at 11:50 AM, bikash sharma wrote:

Discussion Overview
group: common-dev
categories: hadoop
posted: Sep 27, '11 at 3:51p
active: Sep 28, '11 at 5:08p
posts: 2
users: 2
website: hadoop.apache.org...
irc: #hadoop

2 users in discussion

Faraz Ahmad: 1 post · Bikash Sharma: 1 post
