at Mar 27, 2013 at 5:23 pm
This typically means that either none of the datanodes have heartbeated
into the namenode, or that all the datanodes are reporting themselves as
not having enough free space. Most likely it's the latter; you can change
the "reserved disk usage" parameter for the datanodes, which is typically
set somewhat high. Search this mailing list for many similar threads.
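If it is the free-space case, the datanode-side setting being referred to is presumably dfs.datanode.du.reserved in hdfs-site.xml, which reserves a number of bytes per volume for non-HDFS use. A minimal sketch of lowering it, with the 10 GB value purely illustrative:

```xml
<!-- hdfs-site.xml on each datanode; restart the datanodes after changing. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- Bytes per volume reserved for non-DFS use (illustrative: 10 GB). -->
  <value>10737418240</value>
</property>
```

You can check which case applies with `hadoop dfsadmin -report`, which lists each datanode's last heartbeat and its remaining capacity.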
On Fri, Mar 22, 2013 at 10:59 AM, Björn Jónsson wrote:
I'm submitting a job to a clean CM-installed cluster on EC2 or Skytap and
get the same error. Running the job works when I'm on a node in the
cluster, but it fails when I try to run it remotely. I'm otherwise able to
communicate with the cluster, so the hostnames for the namenode and
jobtracker are correct. Does anyone know what I'm missing?