On 05/31/2011 10:06 AM, Xu, Richard wrote:
1 namenode, 1 datanode. dfs.replication=3. We also tried 0, 1, and 2; same result.

*From:*Yaozhen Pan
*Sent:* Tuesday, May 31, 2011 10:34 AM
*To:* hdfs-user@hadoop.apache.org
*Subject:* Re: Unable to start hadoop-0.20.2 but able to start
hadoop-0.20.203 cluster

How many datanodes are in your cluster? And what is the value of
"dfs.replication" in hdfs-site.xml (if not specified, the default value is 3)?

From the error log, it seems there are not enough datanodes to
replicate the files in HDFS.
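For a small test cluster, the replication factor must not exceed the number of datanodes. A minimal hdfs-site.xml fragment along these lines (the value 1 assumes a single-datanode setup) avoids the under-replication errors:

```xml
<!-- hdfs-site.xml: dfs.replication must be <= the number of live datanodes -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```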

On 2011-5-31 22:23, "Harsh J" <harsh@cloudera.com> wrote:

Please post the output of `hadoop dfsadmin -report` and attach the
tail of a started DN's log?
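As a quick sanity check on such a report, the live datanodes can be counted from its text. A sketch, assuming the 0.20-era report format where each datanode entry begins with a "Name:" line (the sample report text below is illustrative, not from this thread):

```shell
# Count datanode entries in a saved `hadoop dfsadmin -report` dump.
# '^Name:' marks the start of each datanode section in 0.20-era reports.
report='Datanodes available: 1 (1 total, 0 dead)
Name: 10.0.0.2:50010
Decommission Status : Normal'
echo "$report" | grep -c '^Name:'
```

If this prints 0 while datanode processes are running, the datanodes never registered with the namenode, which matches the symptoms described here.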

On Tue, May 31, 2011 at 7:44 PM, Xu, Richard wrote:
2. Also, Configured Cap...
This might easily be the cause. I'm not sure if it's a Solaris thing
that can lead to this, though.

3. in datanode server, no error in logs, but tasktracker logs has
the following suspicious thing:...

I don't see any suspicious log message in what you posted. Anyhow,
the TT does not matter here.

Harsh J
Regards, Xu
When you installed on Solaris:
- Did you synchronize the NTP server on all nodes?
echo "server your-ntp-server.com" > /etc/inet/ntp.conf
svcadm enable svc:/network/ntp:default

- Are you using the same Java version on both systems (Ubuntu and Solaris)?

- Can you test with one NN and two DN?
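For that one-NN/two-DN test, the datanode hosts go in conf/slaves on the namenode, one per line. A sketch, with placeholder hostnames (substitute your actual nodes):

```
# conf/slaves on the namenode — one datanode hostname per line
# (datanode1 and datanode2 are placeholders for this example)
datanode1
datanode2
```

With two datanodes, dfs.replication should be set to at most 2.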

Marcos Luis Ortiz Valmaseda
Software Engineer (Distributed Systems)

Discussion Overview
group: hdfs-user
posted: May 31, 2011 at 2:15p
active: May 31, 2011 at 5:31p