FAQ
I'm submitting a job to a clean CM-installed cluster on EC2 or Skytap and
get the same error each time. Running the job works when I'm on a node in
the cluster, but it fails when I try to run it remotely. I'm otherwise able
to communicate with the cluster, so the hostnames for the namenode and
jobtracker are correct.

Does anyone know what I'm missing?

Best,
Bjorn

  • Philip Zeyliger at Mar 27, 2013 at 5:23 pm
    Hi Bjorn,

    This typically means that either none of the datanodes have heartbeated
    into the namenode, or that all of the datanodes are reporting themselves
    as not having enough free space. Most likely it's the latter, and you can
    change the "reserved disk usage" parameter for the datanodes; typically
    that's set somewhat high. See this search of the mailing list for many
    similar threads:
    https://groups.google.com/a/cloudera.org/forum/?fromgroups#!search/could$20only$20be$20replicated$20to$200$20nodes$20instead$20of$20min
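
    The setting Philip refers to maps to dfs.datanode.du.reserved in
    hdfs-site.xml (exposed in Cloudera Manager as the DataNode reserved
    space for non-DFS use). A minimal sketch, using an illustrative
    reservation of 1 GB per volume rather than a recommended value:

        <!-- hdfs-site.xml: bytes reserved per data volume for non-DFS use.
             If this exceeds a volume's free space, the datanode reports no
             usable capacity and block allocation fails with "could only be
             replicated to 0 nodes". 1 GB here is an example value only. -->
        <property>
          <name>dfs.datanode.du.reserved</name>
          <value>1073741824</value>
        </property>

    To see what each datanode is actually reporting to the namenode (live
    nodes, configured capacity, DFS remaining), run:

        hdfs dfsadmin -report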

    Cheers,

    -- Philip


Discussion Overview
group: cm-users
categories: hadoop
posted: Mar 22, 2013 at 6:00 PM
active: Mar 27, 2013 at 5:23 PM
posts: 2
users: 2
website: cloudera.com
irc: #hadoop
