Mar 26, 2008 at 4:36 am
On Tue, 25 Mar 2008, Nate Carlson wrote:
Is it possible to have a single slave process jobs for multiple masters?
(Trying to share slaves for our dev/staging/qa environments)
There are two types of slaves and two corresponding masters in Hadoop. The
two masters are the NameNode and the JobTracker, while the slaves are
DataNodes and TaskTrackers respectively. Each slave gets its master's
address from the config it reads at start-up, and that binding is fixed for
the life of the process, so sharing one slave between masters is not
possible.
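To make that concrete: in the 0.16-era releases the master addresses live
in hadoop-site.xml, one per cluster role. A minimal sketch (the host names
and ports here are just illustrative):

    <configuration>
      <!-- HDFS master: the DataNode registers with this NameNode -->
      <property>
        <name>fs.default.name</name>
        <value>hdfs://namenode-host:9000</value>
      </property>
      <!-- MapReduce master: the TaskTracker registers with this JobTracker -->
      <property>
        <name>mapred.job.tracker</name>
        <value>jobtracker-host:9001</value>
      </property>
    </configuration>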
If not, I guess I'll just run multiple slaves on the same machines. ;)
Yes, it seems like that's the way to go. But be sure to put a limit on the
number of tasks that can run on each machine. The commonly used config is 4
maps and 4 reducers for a (mapred) slave that is not shared (i.e., per
machine). Try to make sure that the total number of tasks that can run
simultaneously stays reasonable.
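As a sketch of the knob involved (property names as I recall them from the
0.16-era releases; the values are just illustrative for two TaskTrackers
sharing one box):

    <!-- Halve the usual 4/4 so the two TaskTrackers together stay at
         4 maps + 4 reduces per machine -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>2</value>
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>2</value>
    </property>

Note that each extra slave instance on the same machine also needs its own
data/log directories and ports so the daemons don't collide.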