I've got a Hadoop cluster that runs the example jobs just fine, but I'm
having difficulty running a custom job.
Specifically, the job starts, and then each scheduled task fails with
the following error:
Error initializing task_0007_m_000063_0:
java.io.IOException: /DFS_ROOT/tmp/mapred/system/submit_i849v1/job.xml: No such file or directory
I was under the impression that when I start a job with bin/hadoop jar
myJarFile.jar <options>, Hadoop would do the magic of actually
transferring the jar for me. In fact, I do see a jar and an xml file
appear temporarily under tmp/mapred/local/jobTracker on the DFS drive I
am using. But for whatever reason, those files are either not making it
to the machines running the tasks, or are not being looked up in the
correct place there.
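For what it's worth, here is the kind of check I can run to confirm
whether the submit directory actually exists on the DFS while the job
is running (just a sketch against the FileSystem API; the path to check
would be the submit_* directory from the error above):

// Sketch: does the job's submit directory still exist on the configured DFS?
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckSubmitDir {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();  // picks up hadoop-site.xml
    FileSystem fs = FileSystem.get(conf);      // DFS named by fs.default.name
    Path submitDir = new Path(args[0]);        // e.g. the submit_* dir from the error
    System.out.println(submitDir + " exists: " + fs.exists(submitDir));
    Path jobXml = new Path(submitDir, "job.xml");
    System.out.println(jobXml + " exists: " + fs.exists(jobXml));
  }
}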
The job itself is very simple: literally a copy of the WordCount
example, with a slightly changed map function.
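In case the details matter, here is roughly what it looks like (a
trimmed-down sketch using the old org.apache.hadoop.mapred API; the
class names and the lower-casing tweak in map() are placeholders rather
than my exact code, and the input/output path setters may differ a bit
between releases):

// Trimmed-down sketch of the job: WordCount with a tweaked map().
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class MyWordCount {

  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {
      // The only real change from the stock example: tokens are lower-cased.
      StringTokenizer itr = new StringTokenizer(value.toString().toLowerCase());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        output.collect(word, ONE);
      }
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output,
                       Reporter reporter) throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(MyWordCount.class);  // ties the job to my jar
    conf.setJobName("mywordcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}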
If anyone has any ideas, I would greatly appreciate it.
Thanks.
Ross Boucher
boucher@apple.com