Thanks for replying! It was a dumb mistake: I had 0.20.1 on the namenode and
0.20.2 on the slaves. Problem solved.

Thanks again for replying! Cheers!
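
For anyone hitting the same issue: Hadoop expects the same version on every node. A quick way to spot a mismatch is to compare the version string each node reports against the namenode's. The sketch below is illustrative, not from the thread; the hostnames are placeholders, and in a real cluster the version strings would be captured with something like `ssh $host hadoop version | head -1`.

```shell
# Sketch: compare the Hadoop version reported by each node against the
# namenode's version. Hostnames and version strings here are placeholders.
namenode_version="Hadoop 0.20.1"

check_node() {
    host="$1"
    node_version="$2"   # in practice: ssh "$host" hadoop version | head -1
    if [ "$node_version" != "$namenode_version" ]; then
        echo "MISMATCH on $host: $node_version (namenode has $namenode_version)"
    else
        echo "OK on $host: $node_version"
    fi
}

check_node slave1 "Hadoop 0.20.2"
check_node slave2 "Hadoop 0.20.1"
```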

-----Original Message-----
From: Thomas Graves
Sent: Tuesday, June 14, 2011 4:30 PM
To: common-dev@hadoop.apache.org; Schmitz, Jeff GSUSI-PTT/TBIM
Subject: Re: Noob question

It looks like it thinks /usr/local/hadoop-0.20.1/ is $HADOOP_HOME. Did you
install Hadoop on all the slave boxes in the same location as on the box you
have working? I'm assuming you are using the start-all.sh script. That
script goes to each slave box, tries to cd to $HADOOP_HOME, and runs the
start commands from there.
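
The behavior Tom describes can be sketched roughly as below. This is a simplified dry run, not the real script: Hadoop 0.20's start scripts go through slaves.sh and hadoop-daemon.sh, and the slaves file normally lives at $HADOOP_HOME/conf/slaves. Here the hostnames are placeholders and the ssh command is echoed rather than executed.

```shell
# Simplified sketch of what the start scripts effectively do: for each
# host in the slaves file, ssh in, cd to $HADOOP_HOME, and start the
# daemon there. If a slave has Hadoop installed at a different path,
# the cd fails and that slave's daemon never starts.
HADOOP_HOME=/usr/local/hadoop-0.20.1

# Illustrative slaves file (normally $HADOOP_HOME/conf/slaves).
slaves_file=$(mktemp)
printf 'slave1\nslave2\n' > "$slaves_file"

# Dry run: print the command each iteration would run instead of
# actually invoking ssh.
commands=$(while read -r host; do
    echo "ssh $host \"cd $HADOOP_HOME && bin/hadoop-daemon.sh start datanode\""
done < "$slaves_file")
echo "$commands"

rm -f "$slaves_file"
```

This is why a per-node version mismatch shows up at startup: the path baked into $HADOOP_HOME on the master simply does not exist on a slave running a different release.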

Tom

On 6/14/11 2:09 PM, "Jeff.Schmitz@shell.com" wrote:

Hello there! I was running in pseudo-distributed configuration and
everything was working fine. Now I have some more nodes and am trying to
run fully distributed. I followed the docs and added the slaves file...

Set up passphraseless ssh...
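
For reference, the passphraseless-ssh step usually looks like the sketch below. The Hadoop docs write the key to ~/.ssh/id_rsa; this sketch uses a temporary directory instead so it doesn't clobber real keys, and on a real cluster the public key would be appended to ~/.ssh/authorized_keys on each slave.

```shell
# Generate an RSA key with an empty passphrase (-P "") so ssh never
# prompts, then authorize it. Temporary path used for illustration only.
keydir=$(mktemp -d)
ssh-keygen -t rsa -P "" -f "$keydir/id_rsa" -q
cat "$keydir/id_rsa.pub" >> "$keydir/authorized_keys"
chmod 600 "$keydir/authorized_keys"
```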



What am I missing? I'm getting this error at startup:








Cheers -



Jeffery Schmitz
Projects and Technology
3737 Bellaire Blvd, Houston, Texas 77001
Tel: +1-713-245-7326  Fax: +1-713-245-7678
Email: Jeff.Schmitz@shell.com
"TK-421, why aren't you at your post?"



Discussion Overview
group: common-dev
categories: hadoop
posted: Jun 14, '11 at 7:58p
active: Jun 14, '11 at 9:33p
posts: 3
users: 2 (Jeff Schmitz: 2 posts, Thomas Graves: 1 post)
website: hadoop.apache.org...
irc: #hadoop
