Arun
--- On Tue, 9/15/09, Chandraprakash Bhagtani wrote:
From: Chandraprakash Bhagtani <cpbhagtani@gmail.com>
Subject: Re: Slaves are not able to connect to master
To: common-dev@hadoop.apache.org
Date: Tuesday, September 15, 2009, 7:35 PM
You can try replacing the hostnames with IP addresses in the Hadoop conf files, and in the
slaves file as well.
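For example, something along these lines (just a rough sketch assuming a Hadoop 0.20-style
layout and that 54310 is your namenode port, as the log below suggests; the slave IPs here
are placeholders, use your real ones):

  conf/core-site.xml on every node (inside the <configuration> block):
    <property>
      <name>fs.default.name</name>
      <value>hdfs://128.226.118.98:54310</value>
    </property>

  conf/slaves on the master (one entry per line, IPs instead of names):
    128.226.118.96
    128.226.118.97
    128.226.118.99

Then restart the daemons so the new values are picked up.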
--
Thanks & Regards,
Chandra Prakash Bhagtani
On Tue, Sep 15, 2009 at 7:15 PM, arun kumar wrote:
Hi Chandraprakash,
Yes, node22 is 128.226.118.98. I have not entered a corresponding entry in /etc/hosts on
any of the machines (right now I don't have the access privileges to do that; I'm hoping
to get them soon). I am able to ssh from node22 (the master) to nodes 20, 21 and 23 (the
slaves) without a password, and also from the slaves to the master.
Thanks for your help,
Arun
--- On Tue, 9/15/09, Chandraprakash Bhagtani wrote:
From: Chandraprakash Bhagtani <cpbhagtani@gmail.com>
Subject: Re: Slaves are not able to connect to master
To: common-dev@hadoop.apache.org
Date: Tuesday, September 15, 2009, 2:27 PM
Hi Arun,
Which machine is node22? Is it 128.226.118.98? Have you named 128.226.118.98 as node22 in
/etc/hosts? Are you able to ssh to 128.226.118.98 from your datanodes without a password?
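If you do add it, the entry would presumably look something like this on every machine
(mapping the master's IP to the name used in the Hadoop config):

  128.226.118.98   node22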
--
Thanks & Regards,
Chandra Prakash Bhagtani
On Tue, Sep 15, 2009 at 3:33 AM, arun kumar <arunkumar_skcet@yahoo.com> wrote:
All,
I am trying to set up a cluster with 4 nodes and I followed all the steps listed here:
http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)
except the /etc/hosts step. But for some reason the number of datanodes is just one
(which I am assuming is the master node), and when I checked the logs on the datanode I
am getting this message: "Retrying connect to server: node22/128.226.118.98:54310.
Already tried 9 time(s). 2009-09-14 17:31:50,934 INFO org.apache.hadoop.ipc.RPC: Server
at node22/128.226.118.98:54310 not available yet, Zzzzz...". I am not able to proceed
from here; any help in finding leads is appreciated.
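If it helps narrow things down, I can also check from one of the slaves whether that port
is reachable at all, with something like:

  telnet 128.226.118.98 54310

(assuming telnet is installed on the slaves), but beyond that I am not sure what to look at.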
Thanks,
Arun
Thanks,
Arun