MapReduce output could not be written
Hi,

In one of my jobs I am getting the following error.

java.io.IOException: File X could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1282)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
        at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)

and the job fails. I am running a single server that runs all the Hadoop
daemons, so there is only one datanode in my scenario.

The datanode was up the whole time.
There is enough space on the disk.
Even at debug level, I do not see any of the following logs:

Node X is not chosen because the node is (being) decommissioned
because the node does not have enough space
because the node is too busy
because the rack has too many chosen nodes

Does anyone know of any other scenario in which this can occur?
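
One way to cross-check those points from the command line is to ask the
namenode directly. A minimal sketch, assuming the 0.20-era Hadoop CLI from
this thread is on the PATH of the same box:

    # Capacity, DFS remaining, and live/dead datanode counts as the
    # namenode sees them:
    hadoop dfsadmin -report

    # Health of the files written so far (missing or under-replicated blocks):
    hadoop fsck /

If the report shows 0 datanodes available even while the datanode process is
running, the datanode never registered with the namenode.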

Thanks
Sudharsan S

  • Real great.. at Jul 5, 2011 at 12:46 pm
    Sir,
    Is the datanode shown as live in the web interface?
    I am an amateur too; just asking out of curiosity.
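
    For reference, the namenode's web UI is where the live/dead datanode
    counts appear. A minimal check, assuming the 0.20-era default NameNode
    HTTP port (50070) and that everything runs on the one box:

        # The cluster summary page (dfshealth.jsp) lists Live Nodes and Dead Nodes:
        curl -s http://localhost:50070/dfshealth.jsp | grep -i "nodes"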

    --
    Regards,
    R.V.
  • Sudharsan Sampath at Jul 5, 2011 at 2:34 pm
    Hi,

    Thanks for the interest. But yes, the datanode is alive and healthy. I
    don't see any failures or errors prior to this in any of the logs :(

    -Sudhan S

  • Devaraj K at Jul 5, 2011 at 2:34 pm
    Check the datanode logs to see whether the datanode registered with the
    namenode, and whether any problem occurred while it was initializing. If
    it registers successfully, the namenode UI shows that datanode among the
    live nodes.
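
    A minimal sketch of that log check, assuming a 0.20-era tarball install
    with the default log directory; the file-name pattern and grep terms are
    illustrative only:

        # Look for registration messages and startup errors in the datanode log:
        grep -iE "registr|error|exception" $HADOOP_HOME/logs/hadoop-*-datanode-*.log

    A datanode that starts but never registers typically logs repeated
    "Retrying connect to server" lines here, which is worth comparing against
    fs.default.name in core-site.xml.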





    Devaraj K

  • Mostafa Gaber at Jul 5, 2011 at 7:59 pm
    I faced this problem before. I had set hadoop.tmp.dir to /tmp/..., and my
    machine had been running for a long time, so /tmp filled up and HDFS could
    not store files any more.

    So, check the size of the partition that hadoop.tmp.dir points to. Also,
    consider pointing hadoop.tmp.dir at another partition that has free space
    and does not fill up as quickly as /tmp, as sketched below.
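
    A minimal sketch of that change, assuming a hypothetical /data partition
    with free space; in 0.20-era Hadoop, hadoop.tmp.dir is set in
    conf/core-site.xml:

        <!-- conf/core-site.xml: move Hadoop's working directories off /tmp -->
        <property>
          <name>hadoop.tmp.dir</name>
          <value>/data/hadoop-tmp</value>
        </property>

    Check the partition first with df -h /data. Note that dfs.name.dir and
    dfs.data.dir default to subdirectories of hadoop.tmp.dir, so on an
    existing single-node cluster the old contents have to be moved (or HDFS
    reformatted) after this change.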

    --
    Best Regards,
    Mostafa Ead
