Hi,

I am trying to get a simple 7-node cluster working: 7 datanodes and 1 namenode.

External processes running on the datanodes use the fs interface to HDFS to
store and retrieve files.

The cluster is sort of working: I can make directories and add and remove
files, etc.
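To be concrete, the operations that do work are the usual `hadoop fs` shell commands; the paths below are just examples, not my real layout:

```shell
# Create a directory in HDFS (example path)
hadoop fs -mkdir /data/incoming

# Copy a local file into HDFS
hadoop fs -put results.dat /data/incoming/

# List the directory; a zero-length entry here indicates a failed write
hadoop fs -ls /data/incoming

# Remove a file
hadoop fs -rm /data/incoming/results.dat
```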


During processing, multiple files are saved into HDFS successfully, but I get
"bad connect ack" errors on one or two files in the set, which results in
those files being stored as zero length.

I believe the problem stems from this error on the datanode:

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(10.241.0.3:50010,
storageID=DS-1857179747-127.0.0.1-50010 --etc

java.net.NoRouteToHostException

So what host is there no route to? It looks like localhost, but localhost
works fine: ssh localhost, ssh 127.0.0.1, and ssh 10.241.0.3 all work with no
password prompt.
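One thing the ssh checks don't cover (this is an assumption on my part, not a confirmed diagnosis): ssh only proves port 22 is reachable. java.net.NoRouteToHostException usually means the connect attempt got an ICMP "unreachable" back, and a common cause on Hadoop clusters is a firewall on the datanode blocking the DataNode transfer port (50010 by default). A rough diagnostic sketch, reusing the address and port from the error above:

```shell
# Is the DataNode transfer port reachable (not just sshd on 22)?
# nc exits 0 only if the TCP connect succeeds.
nc -z -w 2 10.241.0.3 50010 && echo "50010 reachable" || echo "50010 blocked"

# Is a firewall rule dropping the traffic? (run on the datanode)
sudo iptables -L -n

# Does the hostname resolve to the real address rather than loopback?
# The storageID in the error (DS-...-127.0.0.1-...) suggests the node
# registered with 127.0.0.1, often caused by an /etc/hosts entry mapping
# the hostname to 127.0.0.1.
getent hosts "$(hostname)"
```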

What am I missing??

Thanks -Bill

Posted to hdfs-user @ hadoop (hadoop.apache.org, irc #hadoop), Nov 19, '09 at 9:46p, by Bill Brune (1 post).