could only be replicated to 0 nodes, instead of 1
Hi, all

I just started using Hadoop a few days ago. I ran into the error message
"WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/count/count/temp1 could only be replicated to 0 nodes, instead of 1"
while trying to copy data files to DFS after starting Hadoop.

I did all the configuration according to the instructions in
"Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)", and I don't know
what's wrong. No error messages are written to the log files during the process.

Also, according to "http://localhost.localdomain:50070/dfshealth.jsp", I
have one live namenode. In the browser, I can even see that the first data
file has been created in DFS, but its size is 0.

Things I've tried:
1. Stop Hadoop, re-format DFS, and start Hadoop again.
2. Change "localhost" to "127.0.0.1".

But neither of them works.

Could anyone help me or give me a hint?

Thanks.

Anthony
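
A quick sanity check for this situation, sketched here as an aside (the file name in the copy command is only an illustration, not from the thread), is to confirm that the HDFS daemons are actually running and that the namenode sees a live datanode:

  # list the Java daemons started by bin/start-all.sh; a healthy single-node
  # setup should show NameNode, DataNode and SecondaryNameNode (plus
  # JobTracker and TaskTracker if MapReduce was started as well)
  jps

  # ask the namenode for a cluster report; if it shows no available
  # datanodes, the "replicated to 0 nodes" error is expected
  bin/hadoop dfsadmin -report

  # example copy that fails this way when no datanode has registered
  bin/hadoop fs -put input.txt /user/hadoop/input.txt

If no DataNode process shows up, the datanode log under the logs directory usually says why it did not start.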


  • Anthony.Fan at Jul 13, 2009 at 10:25 am
    The full error message is:

    09/07/02 16:28:09 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/hadoop/count/count/temp1 retries left 1
    09/07/02 16:28:12 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/count/count/temp1 could only be replicated to 0 nodes, instead of 1
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1280)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)

      at org.apache.hadoop.ipc.Client.call(Client.java:697)
      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
      at $Proxy0.addBlock(Unknown Source)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
      at $Proxy0.addBlock(Unknown Source)
      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2814)
      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2696)
      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1996)
      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)

  • Raghu Angadi at Jul 13, 2009 at 5:22 pm
    There seems to be a configuration problem.

    Post the output of 'bin/hadoop dfsadmin -report' and any relevant portions
    of the datanode log.

    Raghu.
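
    For reference, a sketch of how those two things can be gathered on a single-node setup (the log file name below is the default hadoop-<user>-datanode-<host>.log pattern under the logs directory; adjust it if your installation logs elsewhere):

      # cluster summary as seen by the namenode, including how many
      # datanodes are live and how much capacity they report
      bin/hadoop dfsadmin -report

      # tail of the datanode log from the default Hadoop logs directory
      tail -n 100 logs/hadoop-*-datanode-*.log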

  • Boyu Zhang at Jul 14, 2009 at 6:22 pm
    This happened to me too. What I did was delete the files generated by
    formatting the namenode. By default, they are under /tmp/hadoop*; just
    delete the hadoop* directory and re-format the namenode. If you specify
    another location in your config file, go to that location and delete the
    corresponding directory.

    If your last job did not end correctly, you may run into this kind of problem.
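
    A sketch of that sequence on a single-node setup, assuming the default hadoop.tmp.dir of /tmp/hadoop-<username> (note that this wipes everything currently stored in HDFS):

      # stop all Hadoop daemons
      bin/stop-all.sh

      # remove the data left over from the previous format; adjust the path
      # if hadoop.tmp.dir or dfs.data.dir point somewhere else in your config
      rm -rf /tmp/hadoop-$USER

      # re-format HDFS and bring the daemons back up
      bin/hadoop namenode -format
      bin/start-all.sh

    The usual reason this helps is that re-formatting the namenode assigns a new namespaceID, and a datanode still holding data from the old namespace refuses to register; clearing its data directory lets it join the fresh filesystem.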

    Hope this helps.

    Boyu Zhang

    Ph. D. Student
    Computer and Information Sciences Department
    University of Delaware

    (210) 274-2104
    bzhang@udel.edu
    http://www.eecis.udel.edu/~bzhang

