Hey Bikash,

Maybe you can manually start a TaskTracker on the node and see whether it
logs any errors. Also, don't forget to check your configuration files for
MapReduce and HDFS, and make sure the DataNode can start successfully first.
After all these steps, you can submit a job from the master node and see
whether there is any communication between the failed nodes and the master.
Post your error messages here if possible.
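The steps above can be sketched as shell commands, assuming a Hadoop
0.20/1.x tarball layout with $HADOOP_HOME set (adjust the paths for your
install):

```shell
# On the failing node: start the TaskTracker daemon by hand
# (hadoop-daemon.sh ships with the 0.20/1.x distribution).
$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker

# Scan the freshly written TaskTracker log for errors or exceptions.
tail -n 50 $HADOOP_HOME/logs/hadoop-*-tasktracker-*.log | grep -iE 'error|exception'

# Confirm the DataNode came up first, since MapReduce jobs need HDFS.
jps | grep DataNode
```

If the daemon dies immediately, the reason is usually near the end of the
corresponding .log or .out file in $HADOOP_HOME/logs.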

HTH.
Simon -
On Sat, Feb 26, 2011 at 10:44 AM, bikash sharma wrote:

Thanks James. Well, all the config files and shared keys are on shared
storage that is accessed by all the nodes in the cluster.
At times everything runs fine on initialization, but at other times the
same problem persists, so I was a bit confused.
Also, I checked the TaskTracker logs on those nodes; there does not seem to
be any error.

-bikash
On Sat, Feb 26, 2011 at 10:30 AM, James Seigel wrote:

Maybe your ssh keys aren’t distributed the same on each machine or the
machines aren’t configured the same?
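One way to test this hypothesis is to compare checksums of the ssh keys and
the MapReduce config across nodes. A minimal sketch, assuming passwordless
ssh to each slave and the standard conf/slaves host list (both assumptions
about the cluster layout):

```shell
# For each slave, print a checksum of authorized_keys and mapred-site.xml;
# a node whose sums differ from the rest is the misconfigured one.
for host in $(cat $HADOOP_HOME/conf/slaves); do
  echo "== $host =="
  ssh "$host" "md5sum ~/.ssh/authorized_keys \$HADOOP_HOME/conf/mapred-site.xml" 2>/dev/null
done
```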

J

On 2011-02-26, at 8:25 AM, bikash sharma wrote:

Hi,
I have a 10-node Hadoop cluster on which I am running some benchmarks for
experiments.
Surprisingly, when I initialize the Hadoop cluster
(hadoop/bin/start-mapred.sh), in many instances only some nodes have the
TaskTracker process up (as seen with jps), while the other nodes do not.
Could anyone please explain?
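The per-node jps check can be scripted. Below is a minimal sketch of the
test itself; the canned jps_output variable stands in for running jps on a
node (an assumption for illustration), so the logic can be shown without a
live cluster:

```shell
# Canned output standing in for `jps` on one node.
jps_output='12345 TaskTracker
12346 DataNode
12347 Jps'

# A node is healthy for MapReduce if a TaskTracker JVM is listed.
if printf '%s\n' "$jps_output" | grep -q 'TaskTracker'; then
  echo "TaskTracker running"
else
  echo "TaskTracker missing"
fi
```

On a real cluster you would loop over the hosts in conf/slaves and run jps
on each one via ssh, flagging any host where the grep finds nothing.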

Thanks,
Bikash


--
Regards,
Simon

Discussion Overview
group: common-user
categories: hadoop
posted: Feb 26, '11 at 3:26p
active: Mar 12, '11 at 4:31p
posts: 12
users: 6
website: hadoop.apache.org...
irc: #hadoop