Hi hadoop developers / users,

For some reason I don't have any DataNode on either the master or the slave
when I start DFS and MapReduce. I followed the great tutorial by Michael
Noll, Running Hadoop On Ubuntu Linux (Multi-Node Cluster)
<http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_%28Multi-Node_Cluster%29>.
*1)* First, when I run *"hadoop namenode -format"*, the console output looks
like:

09/07/17 14:38:18 INFO dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = *abc-laptop/127.0.1.1*
STARTUP_MSG: args = [-format]
......
INFO dfs.Storage: Storage directory
/usr/local/hadoop-datastore/hadoop-hadoop/dfs/name has been successfully
formatted.
.......

But isn't the host supposed to be my master or my slave, which listen on
192.168.0.1 and 192.168.0.2?
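
For reference, /etc/hosts on both machines follows the tutorial. This is a
sketch from memory, so the exact entries are my assumption:

127.0.0.1   localhost
127.0.1.1   abc-laptop
192.168.0.1 master
192.168.0.2 slave

(As far as I know, the 127.0.1.1 line is something Ubuntu adds by default.)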

*2)* The files in <HadoopDir>/logs/ are all named *hadoop-abc-XXX*..., but
they are supposed to be *hadoop-hadoop-XXX*... (my group and username are
both hadoop).
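
From my reading of bin/hadoop-daemon.sh, the user in the log file name comes
from $HADOOP_IDENT_STRING, which defaults to whoever starts the daemons.
Paraphrasing my copy of the script (this may vary by version):

# excerpt from bin/hadoop-daemon.sh
if [ "$HADOOP_IDENT_STRING" = "" ]; then
  export HADOOP_IDENT_STRING="$USER"
fi
log=$HADOOP_LOG_DIR/hadoop-$HADOOP_IDENT_STRING-$command-$HOSTNAME.log

So I expected *hadoop-hadoop-XXX*, since I start the daemons as the hadoop
user.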

*3)* I got this in <HadoopDir>/logs/hadoop-abc-datanode-abc-laptop.log:

*ERROR org.apache.hadoop.dfs.DataNode: java.io.IOException: Call to
localhost/127.0.0.1:9000 failed on local exception: java.io.IOException:
Connection reset by peer*

I don't know why it wants *localhost/127.0.0.1:9000*; my master is
master/192.168.0.1:9000 according to my hadoop-site.xml.
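
For completeness, the relevant property in my conf/hadoop-site.xml on the
master (copied from memory, so treat it as a sketch):

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>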


Your ideas will be much appreciated!

George
