This sounds like an SSH key issue. I'm going to assume that you're invoking
the start-*.sh scripts from the NameNode. On the NameNode, you'll want to
run "ssh-keygen -t rsa" as the user that runs Hadoop (probably "hadoop").
This should create two files: ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub. scp the
*.pub file to all of your other nodes, and store that file as
~/.ssh/authorized_keys on each node, including the NameNode. Give
~/.ssh/authorized_keys 600 permissions. You should be good to go. I
recommend testing this stuff before running the start-*.sh scripts.
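The steps above can be sketched as a short shell session. This is a sketch, not a drop-in script: "datanode1" and "datanode2" are placeholder hostnames for your slaves (substitute your own), and it assumes you run it on the NameNode as the user that starts Hadoop.

```shell
# 1. Generate a keypair on the NameNode as the Hadoop user
#    (-N "" gives an empty passphrase so no prompt at login).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# 2. Copy the public key to every node, including the NameNode itself.
#    "datanode1" and "datanode2" are placeholder hostnames.
for host in localhost datanode1 datanode2; do
    scp ~/.ssh/id_rsa.pub "$host":~/.ssh/authorized_keys
done

# 3. Tighten permissions on each node; sshd ignores the key otherwise.
ssh datanode1 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'

# 4. Test before touching the start-*.sh scripts: this should log in
#    and print the remote hostname without asking for a password.
ssh datanode1 hostname
```

Note that step 2 overwrites any existing authorized_keys on the target; if a node already has other keys there, append with `cat id_rsa.pub >> ~/.ssh/authorized_keys` instead.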
You may also want to look at our (Cloudera's) RPMs and DEBs. They
simplify the installation of Hadoop and give you init scripts to start all
the daemons. Then you can avoid the start-*.sh scripts. <http://www.cloudera.com/hadoop>
Hope this helps.
On Fri, Jul 10, 2009 at 9:19 AM, Divij Durve wrote:
I am quite new to using hadoop. I have got the config and everything
working perfectly with 1 namenode/jobtracker, 1 datanode, and 1 secondary
namenode. However, keeping the config the same, I just added a slave to the
list in the conf/slaves file and tried running the cluster. This resulted in
a "permission denied" when I put in the password for ssh. The ssh
passwordless login is not working for some reason. It's only the data nodes
that are giving trouble; the secondary namenode is starting up
without a hitch even though its password is the last one to be entered.
Any ideas/suggestions anyone might have?