1 namenode, 1 datanode, with dfs.replication=3. We also tried 0, 1, and 2; same result.

From: Yaozhen Pan
Sent: Tuesday, May 31, 2011 10:34 AM
To: hdfs-user@hadoop.apache.org
Subject: Re: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster

How many datanodes are in your cluster? and what is the value of "dfs.replication" in hdfs-site.xml (if not specified, default value is 3)?

From the error log, it seems there are not enough datanodes to replicate the files in hdfs.
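For reference, a single-datanode cluster would normally set the replication factor to 1 in hdfs-site.xml, since a higher value can never be satisfied. The property name and file are standard Hadoop configuration; the value shown is a sketch for a one-datanode setup, not the poster's actual config:

```xml
<!-- hdfs-site.xml: with only one datanode, any replication
     factor above 1 cannot be satisfied by the cluster -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```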
On May 31, 2011 at 22:23, "Harsh J" <harsh@cloudera.com> wrote:

Please post the output of `hadoop dfsadmin -report` and attach the
tail of a started DN's log?
On Tue, May 31, 2011 at 7:44 PM, Xu, Richard wrote:
> 2. Also, Configured Cap...
This might easily be the cause. I'm not sure if it's a Solaris thing
that can lead to this, though.
> 3. in datanode server, no error in logs, but tasktracker logs has the following suspicious thing:...
I don't see any suspicious log message in what you posted. Anyhow,
the TT does not matter here.

Harsh J

Discussion Overview
group: hdfs-user@hadoop.apache.org
posted: May 31, 2011 at 2:15 PM
active: May 31, 2011 at 5:31 PM
