FAQ
This sounds like an SSH key issue. I'm going to assume that you're invoking
the start-*.sh scripts from the NameNode. On the NameNode, you'll want to
run "ssh-keygen -t rsa" as the user that runs Hadoop (probably "hadoop").
This should create two files: ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub. scp the
*.pub file to all of your other nodes, and append its contents to
~/.ssh/authorized_keys on each node, including the NameNode (appending
rather than overwriting preserves any existing keys). Give
~/.ssh/authorized_keys 600 permissions. You should be good to go. I
recommend testing passwordless SSH before running the start-*.sh scripts.
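A minimal sketch of those steps, assuming the Hadoop user is "hadoop" and
using placeholder hostnames ("slave1", "slave2") for your actual nodes:

```shell
# On the NameNode, as the user that runs Hadoop:
ssh-keygen -t rsa    # accept the defaults; creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub

# Install the public key on every node, including the NameNode itself.
# Appending (>>) rather than overwriting preserves any existing keys.
for host in localhost slave1 slave2; do
    cat ~/.ssh/id_rsa.pub | ssh "$host" \
        'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
done

# Verify before touching the start-*.sh scripts: this should print the
# remote hostname with no password prompt.
ssh slave1 hostname
```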

You may also want to look at our (Cloudera's) RPMs and DEBs. They
simplify the installation of Hadoop and give you init scripts to start all
the daemons, so you can avoid the start-*.sh scripts.
<http://www.cloudera.com/hadoop>

Hope this helps.

Alex
On Fri, Jul 10, 2009 at 9:19 AM, Divij Durve wrote:

Hey everyone,

I am quite new to using Hadoop. I got the config and everything working
perfectly with 1 namenode/jobtracker, 1 datanode, and 1 secondary
namenode. However, keeping the config the same, I just added a slave to
the list in the conf/slaves file and tried running the cluster. This
resulted in "permission denied" when I typed in the password for SSH.
Passwordless SSH login is not working for some reason. It's only the
datanodes that are giving trouble; the secondary namenode starts up
without a hitch, even though its password is the last one to be entered.
Any ideas/suggestions anyone might have?

Thanks
