I am trying to install/configure Hadoop on a cluster of several computers.
I followed the instructions on the Hadoop website for configuring multiple
slaves exactly, and when I run start-all.sh I get no errors - both the
datanode and the tasktracker are reported to be running (running
ps awux | grep hadoop on the slave nodes returns two Java processes).
Also, the log files are empty - nothing is printed there. Still, when I
try to use bin/hadoop dfs -put, I get the following error:

# bin/hadoop dfs -put w.txt w.txt
put: java.io.IOException: File /user/scohen/w4.txt could only be replicated
to 0 nodes, instead of 1

and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).

I couldn't find much information about this error, but I did manage to see
somewhere that it might mean there are no datanodes running. But as I said,
start-all does not give any errors. Any ideas what the problem could be?

Thanks.

Jerr.


  • Jason Venner at Dec 5, 2007 at 5:09 pm
    This happens to me when the DFS has gotten into an inconsistent state.

    NOTE: you will lose all of the contents of your HDFS file system.

    What I have to do is stop DFS, remove the contents of the dfs
    directories on all the machines, run hadoop namenode -format on the
    controller, then restart DFS.
    That consistently fixes the problem for me. It may be serious overkill,
    but it works.

    NOTE: you will lose all of the contents of your HDFS file system.
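
    For reference, the reset amounts to roughly the following, assuming the
    default hadoop.tmp.dir layout of /tmp/hadoop-${USER}/dfs on every node
    (adjust the paths to wherever dfs.name.dir and dfs.data.dir actually
    point in your hadoop-site.xml):

    bin/stop-dfs.sh                      # stop HDFS from the controller (namenode)
    rm -rf /tmp/hadoop-${USER}/dfs       # on EVERY node: wipe the old block/image data
    bin/hadoop namenode -format          # on the controller only: create a fresh, empty namespace
    bin/start-dfs.sh                     # restart HDFS from the controller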

  • Jerrro at Dec 5, 2007 at 5:29 pm
    I did this several times, while tuning the configuration in all kinds of
    ways... But still, nothing helped -
    even when I stop everything, reformat, and start it back up again, I get
    this error whenever I try to use dfs -put.


  • Hairong Kuang at Dec 5, 2007 at 5:40 pm
    Check http://namenode_host:50070/dfshealth.jsp to see whether your cluster
    is out of safe mode and how many datanodes are up.

    You could also check the .out/.log files under the log directory to see
    whether there were any errors starting the datanodes/namenode.

    Hairong
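
    A rough command-line equivalent, assuming the default logs/ directory
    under the Hadoop install and the stock 0.x log file names (run the greps
    on each node):

    bin/hadoop dfsadmin -safemode get                         # is the namenode in safe mode?
    grep -iE "error|exception" logs/hadoop-*-namenode-*.log   # namenode startup problems
    grep -iE "error|exception" logs/hadoop-*-datanode-*.log   # datanode startup problems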

  • Jayant Durgad at Apr 11, 2008 at 12:59 am
    I am faced with the exact same problem described here. Does anybody know
    how to resolve it?
  • John Menzer at Apr 12, 2008 at 9:04 pm
    I had the same error message...
    Can you describe when and how this error occurs?


  • Raghu Angadi at Apr 11, 2008 at 11:23 pm

    jerrro wrote:

    I couldn't find much information about this error, but I did manage to see
    somewhere that it might mean there are no datanodes running. But as I said,
    start-all does not give any errors. Any ideas what the problem could be?
    start-all.sh returning cleanly does not mean the datanodes are ok. Did you
    check whether any datanodes are alive? You can check from http://namenode:50070/.

    Raghu.
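
    A quick check, assuming a stock 0.x install, is dfsadmin -report, which
    should list each live datanode along with its capacity; if it shows no
    datanodes, a put will fail with exactly this error:

    bin/hadoop dfsadmin -report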
  • Lohit at Apr 12, 2008 at 9:14 pm
    Can you check the datanode and namenode logs and see whether everything is up and running? I am assuming you are running this on a single host, hence the replication factor of 1.
    Thanks,
    Lohit
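
    If you do intend a replication factor of 1 on a single host, it is
    controlled by the dfs.replication property; a quick way to confirm what
    the client is actually using, assuming conf/hadoop-site.xml is where you
    configured it:

    grep -A 1 dfs.replication conf/hadoop-site.xml   # show the configured replication factor
    bin/hadoop dfs -ls /                             # sanity check that the client reaches the namenode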

  • Jasongs at May 8, 2008 at 10:30 am
    I get the same error when doing a put, even though my cluster appears to
    be running OK, i.e. it has capacity and all nodes are live.
    The error message is:

    org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /test/test.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1127)
        at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
        at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)

        at org.apache.hadoop.ipc.Client.call(Client.java:512)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2074)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1967)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1500(DFSClient.java:1487)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1601)
    I would appreciate any help/suggestions

    Thanks


  • Hairong Kuang at May 8, 2008 at 6:03 pm
    Could you please go to the DFS web UI and check how many datanodes are up
    and how much available space each one has?

    Hairong

  • Arul Ganesh at Nov 13, 2008 at 8:29 pm
    Hi,
    If you are getting this in a Windows environment (2003, 64-bit), we have
    faced the same problem. We tried the following steps and it started
    working:
    1) Install cygwin and ssh.
    2) Download the stable version of Hadoop - hadoop-0.17.2.1.tar.gz as of
    13/Nov/2008.
    3) Untar it via cygwin (tar xvfz hadoop-0.17.2.1.tar.gz). Please DO NOT
    use WinZip to untar.
    4) Run the pseudo-distributed example provided in the quickstart
    (http://hadoop.apache.org/core/docs/current/quickstart.html) - it worked
    for us; a rough sketch of those commands follows below.
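
    For completeness, the quickstart sequence is approximately the following,
    after setting up conf/hadoop-site.xml for pseudo-distributed operation as
    the quickstart describes (file names are for the 0.17.2.1 tarball; treat
    this as a sketch rather than the exact commands in the current quickstart):

    tar xvfz hadoop-0.17.2.1.tar.gz
    cd hadoop-0.17.2.1
    bin/hadoop namenode -format            # create a fresh HDFS namespace
    bin/start-all.sh                       # start namenode, datanode, jobtracker, tasktracker
    bin/hadoop dfs -put conf input         # copy the conf directory into HDFS as 'input'
    bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
    bin/hadoop dfs -cat output/*           # view the results
    bin/stop-all.sh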

    Thanks
    Arul and Limin
    eBay Inc.,



