FAQ

On May 27, 2011, at 7:26 AM, DAN wrote:
You say you have "2 Solaris servers for now", but dfs.replication is set to 3.
These don't match.

That doesn't matter. HDFS will simply flag any files written as under-replicated; they remain readable and writable.
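If you want the under-replication warning to go away anyway, you can lower dfs.replication to match the node count. A minimal sketch, assuming the 0.20-era conf/ layout:

```xml
<!-- conf/hdfs-site.xml: match replication to a 2-node cluster -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```

This only affects files written after the change; existing files keep the replication factor they were created with.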

The problem is that the datanode processes aren't running and/or aren't communicating with the namenode. That's what the "java.io.IOException: File /tmp/hadoop-cfadm/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1" means: the namenode couldn't find a single live datanode to place the block on.
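A quick way to confirm this, using standard commands of that era (run on the cluster nodes, assuming HADOOP_HOME/bin is on the PATH):

```shell
# On each slave: is a DataNode JVM actually running?
jps | grep DataNode

# On the namenode: how many live datanodes does it see?
hadoop dfsadmin -report

# If the report shows 0 live datanodes, check the datanode logs
# for connection/registration errors:
tail -100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
```

If dfsadmin reports zero live datanodes, the usual suspects are a firewall between the nodes, a hostname that resolves to 127.0.0.1, or a mismatched fs.default.name.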

It should also be pointed out that writing to /tmp (the default) is a bad idea, since many systems wipe /tmp on reboot and you'll lose your filesystem metadata and blocks. This should be changed.
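Moving off /tmp means pointing hadoop.tmp.dir (and, if you want them split out explicitly, the name/data directories) at persistent storage. A sketch using the 0.20-era property names; the paths are placeholders to adapt:

```xml
<!-- conf/core-site.xml: base directory for all Hadoop local storage -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/hadoop/tmp</value>
</property>

<!-- conf/hdfs-site.xml: namenode metadata and datanode block storage -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/hadoop/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/var/hadoop/data</value>
</property>
```

After changing dfs.name.dir on a fresh cluster you'll need to re-run `hadoop namenode -format`; on a cluster with data, copy the existing directories over instead.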

Also, since you are running Solaris, check the FAQ for the settings you'll need in order to make Hadoop's broken username detection work properly, among other things.
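For context: Hadoop's shell scripts shell out to `whoami`, which on Solaris lives in /usr/ucb rather than on the default PATH. A sketch of the usual workaround in conf/hadoop-env.sh (both lines are assumptions about your layout, not part of the original post):

```shell
# conf/hadoop-env.sh: make BSD-compat whoami visible to Hadoop's scripts
export PATH=/usr/ucb:$PATH

# Alternatively, bypass detection by naming the identity explicitly
export HADOOP_IDENT_STRING=$USER
```

See the FAQ referenced above for the full list of Solaris-specific settings.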

Discussion Overview
group: common-user
category: hadoop
posted: May 27, '11 at 1:54a
active: Jan 25, '12 at 7:18a
posts: 13
users: 7
website: hadoop.apache.org...
irc: #hadoop
