I am trying to read data placed on hdfs in one EC2 cluster from a different
EC2 cluster and am getting the errors below. Both EC2 Clusters are running
v0.19. When I run 'hadoop -get small-page-index small-page-index' on the
source cluster everything works fine and the data is properly retrieved out
of hdfs. FWIW, hadoop fs -ls works fine across clusters. Any ideas of what
might be the problem and how to remedy it?

thanks,
Scott

Here are the errors I am getting:

[root@domU-12-31-38-00-4E-32 ~]# hadoop fs -cp
hdfs://domU-12-31-38-00-1C-B1.compute-1.internal:50001/user/root/small-page-index
small-page-index
09/09/14 21:48:43 INFO hdfs.DFSClient: Could not obtain block
blk_-4157273618194597760_1160 from any node: java.io.IOException: No live
nodes contain current block
09/09/14 21:51:46 INFO hdfs.DFSClient: Could not obtain block
blk_-4157273618194597760_1160 from any node: java.io.IOException: No live
nodes contain current block
09/09/14 21:54:49 INFO hdfs.DFSClient: Could not obtain block
blk_-4157273618194597760_1160 from any node: java.io.IOException: No live
nodes contain current block
Exception closing file /user/root/small-page-index/aIndex/_0.cfs
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:198)
at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:65)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3084)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3053)
at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:942)
at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:210)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:243)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1413)
at org.apache.hadoop.fs.FileSystem.closeAll(FileSystem.java:236)
at org.apache.hadoop.fs.FileSystem$ClientFinalizer.run(FileSystem.java:221)


  • Mafish Liu at Sep 15, 2009 at 2:28 am
    Check whether your datanodes are starting up correctly.
    This error occurs when the namenode still has the file's block entries,
    but the client cannot fetch the block data from any datanode.
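
    One way to verify this is to compare the namenode's view of live
    datanodes against the datanodes that are supposed to hold the file's
    blocks. A minimal sketch, assuming the standard HDFS admin commands
    available in Hadoop 0.19 and using the path from your report:

    # Show the namenode's view of live and dead datanodes.
    hadoop dfsadmin -report

    # List each block of the file and the datanodes holding it.
    hadoop fsck /user/root/small-page-index -files -blocks -locations

    If fsck reports missing or under-replicated blocks, the datanodes
    either failed to start or are unreachable from the client.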

    2009/9/15 scott w <scottblanc@gmail.com>:
    [quoted text of the original message above]

    --
    Mafish@gmail.com

Discussion Overview
group: common-user
category: hadoop
posted: Sep 14, '09 at 10:50p
active: Sep 15, '09 at 2:28a
posts: 2
users: 2 (Scott w: 1 post, Mafish Liu: 1 post)
website: hadoop.apache.org...
irc: #hadoop
