FAQ
Hi, when use "hadoop dfs -cat" command, I keep getting the problem that
says " Could not obtain block 0 from any node: java.io.IOException: No live
nodes contain current block"

The block does exist as the "cat" command didn't always fail, occasionally
it will return desired result, but most of time I just get that error.

I also checked with hadoop web console and find all data-nodes living.

I'm using hadoop 0.13.1, deployed on a cluster of 6 servers, all run on
AMD64 and OpenSuse 10.2 64X, with 2G RAM.

This problem happens only recently afterI imported a bulk of data(1G) into
HDFS.

Any idea how I can fix it? or will upgrade to 0.14.2 help?

Thanks
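
A useful cross-check on the web console is the namenode's own view of the
datanodes, plus fsck on the affected file. The path below is a placeholder,
and exact flags and output vary across these old releases, so treat this as
a sketch rather than a guaranteed 0.13.1 invocation:

    # list the datanodes and their status as the namenode sees them
    bin/hadoop dfsadmin -report

    # show the blocks of the affected file and which nodes hold each one
    bin/hadoop fsck /path/to/file -files -blocks -locations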

  • Konstantin Shvachko at Oct 16, 2007 at 6:55 pm
    Does fsck report a HEALTHY status?
    What is your block replication factor?
    If one of the datanodes is flaky and a particular block exists only on
    that node, that could be the cause.
    You might want to examine the nodes or increase replication.
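
    For example (the path is a placeholder, and the flags are as in Hadoop
    releases of this era, so this is a sketch rather than exact 0.13.1
    syntax):

        # check overall filesystem health; the summary should end in HEALTHY
        bin/hadoop fsck /

        # raise the replication factor of the affected file to 3 and wait
        # (-w) for re-replication to finish
        bin/hadoop dfs -setrep -w 3 /path/to/file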

  • Koji Noguchi at Oct 16, 2007 at 7:03 pm
    Are you seeing this?
    https://issues.apache.org/jira/browse/HADOOP-1911

    Koji

  • Konstantin Shvachko at Oct 16, 2007 at 7:28 pm
    Yes, this explains the infinite loop, but it does not explain how the
    block got corrupted, or why the read failure is intermittent,
    which are the more interesting questions :-)
    We'll need more information to track that down.

    --Konstantin
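
    This is roughly the kind of information that narrows such a problem
    down. The block ID and log path below are placeholders, and log
    locations depend on the install, so this is only a sketch:

        # on each datanode that fsck lists as holding the problem block,
        # search the datanode log for that block ID
        grep blk_1234567890 $HADOOP_HOME/logs/*datanode*.log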

Discussion Overview
Group: common-user
Category: hadoop
Posted: Oct 16, '07 at 4:41 PM
Active: Oct 16, '07 at 7:28 PM
Posts: 4
Users: 3
Website: hadoop.apache.org...
IRC: #hadoop
