cat public_key >> authorized_keys
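To spell that one-liner out: a minimal sketch of the usual passwordless-ssh setup, assuming the key pair is ~/.ssh/id_rsa on the master, the host is reachable as "slave", and the user name is the same on both machines. Wrong permissions on ~/.ssh are a common reason ssh keeps prompting even after the key is copied:

# on the master: generate a key pair if you don't have one (empty passphrase)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# append the master's public key to the slave's authorized_keys
ssh slave 'mkdir -p ~/.ssh'
cat ~/.ssh/id_rsa.pub | ssh slave 'cat >> ~/.ssh/authorized_keys'
# sshd refuses keys if these are group- or world-writable
ssh slave 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
# should now log in without a password
ssh slave hostname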
2011/3/4 MANISH SINGLA <coolmanishhot8@gmail.com>
Hi all,
I am trying to set up a 2-node cluster. I have configured all the files as specified in the tutorial I am referring to, and I copied the public key to the slave machine, but when I ssh to the slave from the master it asks for a password every time. Kindly help.
On Fri, Mar 4, 2011 at 11:12 AM, icebergs wrote:
You can check the logs on the node whose tasktracker isn't up.
The path is "HADOOP_HOME/logs/".
The answer may be in it.
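For example, on the node where the tasktracker didn't come up, something like the following, assuming HADOOP_HOME points at your install; on the 0.20.x-era releases the log file name embeds the user and hostname:

cd $HADOOP_HOME/logs
ls -lt | head                        # most recently written logs first
# tasktracker logs are named hadoop-<user>-tasktracker-<hostname>.log
tail -n 100 hadoop-*-tasktracker-*.log
grep -i -E 'error|exception|fatal' hadoop-*-tasktracker-*.log | tail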
2011/3/2 bikash sharma <sharmabiks.07@gmail.com>
Hi Sonal,
Thanks. I guess you are right. ps -ef exposes such processes.
-bikash
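A sketch of that ps -ef check on an errant node; the bracketed grep pattern is just a trick to keep grep from matching itself:

# list java processes that jps may not report
ps -ef | grep -i '[t]asktracker'
ps -ef | grep java | grep -v grep
# if a stale TaskTracker is hanging around, stop it before restarting
kill <pid>        # <pid> taken from the ps output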
On Tue, Mar 1, 2011 at 1:29 PM, Sonal Goyal wrote:
Bikash,
I have sometimes found hanging processes which jps does not report, but ps -ef shows them. Maybe you can check this on the errant nodes.
Thanks and Regards,
Sonal
Hadoop ETL and Data Integration <https://github.com/sonalgoyal/hiho>
Nube Technologies <http://www.nubetech.co>
<http://in.linkedin.com/in/sonalgoyal>
On Tue, Mar 1, 2011 at 7:37 PM, bikash sharma <sharmabiks.07@gmail.com> wrote:
Hi James,
Sorry for the late response. No, the same problem persists. I reformatted HDFS, stopped the mapred and hdfs daemons and restarted them (using start-dfs.sh and start-mapred.sh from the master node). But surprisingly, out of 4 nodes in the cluster, two nodes have TaskTracker running while the other two do not have TaskTrackers on them (verified using jps). I guess since I have Hadoop installed on shared storage, that might be the issue? Btw, how do I start the services independently on each node?
-bikash
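On the 0.20.x-era scripts, each daemon can be started on the node itself with hadoop-daemon.sh instead of the cluster-wide start-*.sh wrappers. A sketch, assuming it is run as the hadoop user on the node in question:

# on an individual slave node:
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode
$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker
# and to stop them:
$HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker

The start-dfs.sh/start-mapred.sh wrappers essentially ssh to every host in conf/slaves and run these same per-node scripts.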
On Sun, Feb 27, 2011 at 11:05 PM, James Seigel wrote:
.... Did you get it working? What was the fix?
Sent from my mobile. Please excuse the typos.
On 2011-02-27, at 8:43 PM, Simon wrote:
Hey Bikash,
Maybe you can manually start a tasktracker on the node and see if there are any error messages. Also, don't forget to check your configuration files for mapreduce and hdfs first, and make sure the datanode can start successfully. After all these steps, you can submit a job on the master node and see if there is any communication between these failed nodes and the master node. Post your error messages here if possible.
HTH,
Simon
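Concretely, Simon's suggestion might look like this on a 0.20.x install; the examples jar name varies between releases, so treat the glob as a placeholder:

# on the failed node: start the tasktracker by hand and watch its log
$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker
tail -f $HADOOP_HOME/logs/hadoop-*-tasktracker-*.log
# on the master: run a small job and see whether the node picks up tasks
$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-*examples*.jar pi 4 100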
On Sat, Feb 26, 2011 at 10:44 AM, bikash sharma <sharmabiks.07@gmail.com> wrote:
Thanks, James. Well, all the config files and shared keys are on shared storage that is accessed by all the nodes in the cluster. At times everything runs fine on initialization, but at other times the same problem persists, so I was a bit confused. Also, I checked the TaskTracker logs on those nodes; there does not seem to be any error.
-bikash
On Sat, Feb 26, 2011 at 10:30 AM, James Seigel <james@tynt.com> wrote:
Maybe your ssh keys aren't distributed the same on each machine, or the machines aren't configured the same?
J
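One quick way to test that theory, a sketch assuming a conf/slaves file listing the hosts: compare checksums of the key material on every node and make sure ssh itself never prompts:

for h in $(cat $HADOOP_HOME/conf/slaves); do
  echo "== $h"
  ssh "$h" 'md5sum ~/.ssh/authorized_keys; ls -ld ~/.ssh'
done
# any host that still prompts for a password here is the odd one out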
On 2011-02-26, at 8:25 AM, bikash sharma wrote:
Hi,
I have a 10-node Hadoop cluster, where I am running some benchmarks for experiments. Surprisingly, when I initialize the Hadoop cluster (hadoop/bin/start-mapred.sh), in many instances only some nodes have the TaskTracker process up (seen using jps), while other nodes do not have TaskTrackers. Could anyone please explain?
Thanks,
Bikash
--
Regards,
Simon
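For anyone landing on this thread with the same symptom, a quick sketch for seeing at a glance which slaves actually have a TaskTracker up, assuming passwordless ssh and the usual conf/slaves file:

for h in $(cat $HADOOP_HOME/conf/slaves); do
  printf '%-20s ' "$h"
  ssh "$h" jps | grep -c TaskTracker
done   # 1 = TaskTracker running on that host, 0 = missing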