Mark,
Can you take a peek at one of the non-joining agents' logs
(/var/log/cloudera-scm-agent/cloudera-scm-agent.log) and tell us if
there are any errors being logged there?
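For a quick first pass, something like this should surface recent problems (a sketch only; the log path is Cloudera's default agent log location, so adjust it if your install differs):

```shell
# Scan the agent log for recent errors/exceptions on a non-joining node.
# The path below is the default; change LOG if yours is elsewhere.
LOG=/var/log/cloudera-scm-agent/cloudera-scm-agent.log
if [ -f "$LOG" ]; then
  grep -iE 'error|exception' "$LOG" | tail -n 20
else
  echo "agent log not found at $LOG"
fi
```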
On Wed, Jan 30, 2013 at 8:39 PM, Mark Aguiling wrote:
Hey there,
I am having trouble trying to add hosts to my cluster via Cloudera Manager.
Cloudera Manager successfully recognizes each node's IP address and can
install CDH, but after Host Inspector runs, there is still only one host in
my cluster (the node with Cloudera Manager on it). I have read about a
couple of options and I'll let you know what I tried:
First, I edited my /etc/hosts file on each node to this:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.20.110 maestro1
192.168.20.144 maestro2
192.168.20.145 maestro3
The last 3 lines are the IP addresses of all 3 nodes. I read somewhere to
put the IP addresses of all 3 machines in the hosts file on each of the 3
machines (to avoid needing a DNS server).
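One thing worth verifying after editing /etc/hosts (a sketch; maestro1-3 are the hostnames from the file above): each name should resolve to its LAN IP rather than 127.0.0.1, and each machine's own `hostname -f` should match the name the other nodes use for it, since Cloudera Manager relies on hostnames resolving to a non-loopback address.

```shell
# Run on every node: each cluster hostname should resolve to its
# 192.168.20.x address, not to 127.0.0.1 or ::1.
for h in maestro1 maestro2 maestro3; do
  getent hosts "$h"
done
# This machine's own fully-qualified name, as the agents will report it:
hostname -f
```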
When that didn't work, I read that the cluster nodes need to be able to
reach port 7182 on the Cloudera Manager server in order to heartbeat, so I
opened that port with the following command in the terminal as root:
iptables -A INPUT -p tcp --dport 7182 -j ACCEPT
Still, Cloudera Manager only recognizes the one host which is the host
Cloudera Manager is installed on...
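Two checks may help confirm whether the firewall change actually took effect (a sketch; maestro1 is assumed here to be the Cloudera Manager host, so substitute yours). Note that `iptables -A` appends to the end of the chain, so if an earlier REJECT/DROP rule matches first, the appended ACCEPT never fires.

```shell
# From a non-joining node: can the agent reach the CM server on 7182?
CM_HOST=maestro1   # hypothetical: replace with your Cloudera Manager host
if command -v nc >/dev/null; then
  nc -z -v -w 3 "$CM_HOST" 7182 || echo "cannot reach $CM_HOST:7182 from this node"
fi
# On the CM server itself: list the INPUT chain with rule numbers and
# confirm the 7182 ACCEPT is not preceded by a broader REJECT/DROP rule.
if command -v iptables >/dev/null; then
  iptables -L INPUT -n --line-numbers 2>/dev/null | grep 7182 || true
fi
```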
Any suggestions please?
--
Mark Aguiling
--
Harsh J