FAQ

When I run

hadoop fs -copyFromLocal small_yeast /user/training/small_yeast

I get

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/training/small_yeast/yeast_chrXIV00000006.sam.gz could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:434)
    at sun.reflect.GeneratedMethodAccessor841.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    ...


Has anyone seen this and does anyone know how to fix it?
I am on a 4-node virtual Cloudera cluster.

--
Steven M. Lewis PhD
Institute for Systems Biology
Seattle WA


  • Allen Wittenauer at Jun 22, 2010 at 8:03 pm

    On Jun 22, 2010, at 12:55 PM, Steve Lewis wrote:

    > /user/training/small_yeast/yeast_chrXIV00000006.sam.gz could only be replicated to 0 nodes, instead of 1

    ... almost always means the namenode doesn't think it has any viable datanodes (anymore).

    > Anyone seen this and know how to fix it
    > I am on a 4 node virtual cloudera cluster

    Check the namenode UI and see if it is in safemode, how many live datanodes you have, etc.
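    [The checks suggested above can also be done from the shell: the live-datanode count appears in the `hadoop dfsadmin -report` output. A minimal sketch of pulling that number out with awk; the sample text below stands in for the real command's output, and its line format is an assumption based on 0.20-era Hadoop — on a live cluster you would pipe `hadoop dfsadmin -report` in instead:]

```shell
# Extract the live datanode count from `hadoop dfsadmin -report` output.
# The sample text below stands in for the real command's output
# (format is an assumption based on 0.20-era Hadoop).
report='Configured Capacity: 42949672960 (40 GB)
DFS Remaining: 1073741824 (1 GB)
Datanodes available: 4 (4 total, 0 dead)'

live=$(printf '%s\n' "$report" | awk '/^Datanodes available:/ {print $3}')
echo "live datanodes: $live"
# prints: live datanodes: 4
```

    [If this count is 0 while the datanode processes appear to be up, the datanodes are running but have not registered with (or have been marked dead by) the namenode — which matches the "replicated to 0 nodes" symptom.]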
  • Steve Lewis at Jun 22, 2010 at 8:58 pm
    training@hadoop1:~$ hadoop dfsadmin -safemode get
    Safe mode is OFF

    training@hadoop1:~$ hadoop dfsadmin -refreshNodes
    training@hadoop1:~$ hadoop fs -copyFromLocal small_yeast /user/training/small_yeast
    ^CcopyFromLocal: Filesystem closed

    One file copied, then the same error appeared.


  • Allen Wittenauer at Jun 22, 2010 at 10:14 pm

    On Jun 22, 2010, at 1:58 PM, Steve Lewis wrote:

    > training@hadoop1:~$ hadoop dfsadmin -safemode get
    > Safe mode is OFF

    OK, so you are out of safemode.

    > training@hadoop1:~$ hadoop dfsadmin -refreshNodes

    This just re-reads the list of nodes. hadoop dfsadmin -report might be more useful.

    By chance, is this the first file you've tried writing to this hdfs?
  • Steve Lewis at Jun 23, 2010 at 12:20 am
    No, I have been using it for about two weeks and have many dozen files, but it may be close to full.

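    [Since the suspicion above is a nearly full HDFS, the "DFS Used%" line of `hadoop dfsadmin -report` is the number to watch; writes can fail with the "replicated to 0 nodes" error before usage reaches 100%, because datanodes also reserve non-DFS space. A sketch under the assumption that the report uses the 0.20-era field names; the sample text stands in for the real command's output:]

```shell
# Warn when HDFS usage crosses a threshold, parsed from
# `hadoop dfsadmin -report` output (sample text stands in here;
# field names are an assumption based on 0.20-era Hadoop).
report='Configured Capacity: 42949672960 (40 GB)
DFS Used: 40802189312 (38 GB)
DFS Remaining: 1073741824 (1 GB)
DFS Used%: 95.0%'

used=$(printf '%s\n' "$report" | awk '/^DFS Used%:/ {gsub("%","",$3); print $3}')
# 90% is an arbitrary illustrative threshold, not a Hadoop constant.
if awk "BEGIN {exit !($used >= 90)}"; then
  echo "WARNING: HDFS is ${used}% full"
fi
```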

Discussion Overview

group: mapreduce-user @
categories: hadoop
posted: Jun 22, '10 at 7:56p
active: Jun 23, '10 at 12:20a
posts: 5
users: 2
website: hadoop.apache.org...
irc: #hadoop

2 users in discussion

Steve Lewis: 3 posts; Allen Wittenauer: 2 posts
