Moving this to mapreduce-user (this is the right list)..

Could you please look at the TaskTracker logs around the time when you see the task failure. That might have something more useful for debugging..
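
For reference, here is a rough sketch of how those logs could be pulled up on the TaskTracker node that ran the failed attempt. The paths below assume the same /usr/local/hadoop/hadoop tarball install with logs in the default logs directory; the userlogs layout varies slightly between versions, so treat this as a starting point rather than an exact recipe.

# Run on the TaskTracker node that hosted the failed attempt (nc2 in the
# output below). Paths assume the default $HADOOP_HOME/logs location of a
# 0.21.0 tarball install.
cd /usr/local/hadoop/hadoop/logs

# TaskTracker daemon log, filtered around the failing attempt.
grep -B 5 -A 20 attempt_201107111031_0003_m_000002_0 hadoop-*-tasktracker-*.log

# Per-attempt stdout/stderr/syslog, if the child got far enough to create
# them (hence the find rather than a hard-coded path).
for d in $(find userlogs -type d -name 'attempt_201107111031_0003_m_000002_0'); do
  ls -l "$d"
  cat "$d/stderr"
done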

On Jul 10, 2011, at 8:14 PM, Michael Hu wrote:

Hi all,
Hadoop is set up. Whenever I run a job, I always get the same error.
The error is:

micah29@nc2:/usr/local/hadoop/hadoop$ ./bin/hadoop jar
hadoop-mapred-examples-0.21.0.jar wordcount test testout

11/07/11 10:48:59 INFO mapreduce.Job: Running job: job_201107111031_0003
11/07/11 10:49:00 INFO mapreduce.Job: map 0% reduce 0%
11/07/11 10:49:11 INFO mapreduce.Job: Task Id :
attempt_201107111031_0003_m_000002_0, Status : FAILED
java.lang.Throwable: Child Error
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:249)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:236)

11/07/11 10:49:11 WARN mapreduce.Job: Error reading task output
http://nc2:50060/tasklog?plaintext=true&attemptid=attempt_201107111031_0003_m_000002_0&filter=stdout
11/07/11 10:49:11 WARN mapreduce.Job: Error reading task output
http://nc2:50060/tasklog?plaintext=true&attemptid=attempt_201107111031_0003_m_000002_0&filter=stderr
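
For reference, those two URLs in the warnings can be fetched by hand to check whether the TaskTracker is serving the attempt's logs at all. A quick sketch, assuming curl is available on a machine that can reach nc2:50060:

# Pull the failed attempt's stdout and stderr straight from the TaskTracker
# web interface; an error page here usually means the child JVM died before
# its log files were even created.
for f in stdout stderr; do
  echo "== $f =="
  curl -s "http://nc2:50060/tasklog?plaintext=true&attemptid=attempt_201107111031_0003_m_000002_0&filter=$f"
done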

I googled "Task process exit with nonzero status of 1." The results say it is
an OS limit on the number of sub-directories that can be created inside
another directory, but I can create sub-directories inside another directory
without any problem.
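
For what it is worth, that per-directory sub-directory limit can be probed directly on the filesystem that holds the task logs. A rough sketch (the scratch directory name is only an example; ext3 caps entries at roughly 32,000):

# Probe the per-directory sub-directory limit on the current filesystem.
# "limit-test" is just an illustrative scratch directory, not anything
# Hadoop uses; the 40,000 cap keeps the loop bounded on filesystems that
# have no such limit.
mkdir limit-test
i=0
while [ "$i" -lt 40000 ] && mkdir "limit-test/d$i" 2>/dev/null; do
  i=$((i + 1))
done
echo "stopped after creating $i sub-directories"
rm -rf limit-test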

Could anybody please help me solve this problem? Thanks.
--
Yours sincerely
Hu Shengqiu

  • Sudharsan Sampath at Jul 11, 2011 at 8:23 am
    Hi,

    The issue could be attributed to many causes. A few of them are listed
    below (quick checks for several are sketched after the list):

    1) Unable to create logs due to insufficient space in the logs directory
    or a permissions issue.
    2) A ulimit threshold that causes insufficient allocation of memory.
    3) OOM on the child or unable to allocate the configured memory while
    spawning the child
    4) A bug in the child args configuration in mapred-site.xml.
    5) Unable to write the temp outputs (due to space or permission issue)
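
    A rough set of checks for the causes above, assuming the TaskTracker runs
    out of the same /usr/local/hadoop/hadoop tarball install (paths and
    property names are the usual 0.21-era defaults; adjust for your setup):

    # 1) and 5): free space and permissions on the log and temp directories.
    df -h /usr/local/hadoop/hadoop/logs /tmp
    ls -ld /usr/local/hadoop/hadoop/logs

    # 2): ulimits seen by the user that runs the TaskTracker.
    ulimit -a

    # 3) and 4): the heap/options handed to the child JVMs, to spot a typo
    #    such as "-Xmx2000" (bytes) where "-Xmx2000m" was intended.
    grep -E -A 1 'mapred.child.java.opts|mapred.child.ulimit' \
        /usr/local/hadoop/hadoop/conf/mapred-site.xml

    # Compare the requested heap against the memory actually on the node.
    free -m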

    The log message that you mention is about a limit in the file system spec
    and usually occurs in a complex environment. It is highly unlikely to be
    the issue when running the wordcount example.

    Thanks
    Sudhan S
  • Steve Lewis at Jul 11, 2011 at 3:19 pm
    We have seen that limit reached when we ran a large number of jobs (3,000
    strikes me as the figure, but the real number may be higher). It has to do
    with the number of files created in a 24-hour period. My colleague who had
    the issue created a cron job to clear out the logs every hour.

    How many jobs are you running in a 24 hour period?
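
    If log buildup does turn out to be the cause, a minimal sketch of that
    kind of hourly cleanup is below. The userlogs path assumes a default
    tarball install, and the one-day retention window is arbitrary; the
    original cron job may have looked quite different.

    #!/bin/sh
    # hourly-userlogs-cleanup.sh -- run from cron once an hour, e.g.:
    #   0 * * * * /usr/local/bin/hourly-userlogs-cleanup.sh
    # Deletes per-attempt log directories older than one day.
    find /usr/local/hadoop/hadoop/logs/userlogs -mindepth 1 -maxdepth 1 \
         -type d -mmin +1440 -exec rm -rf {} +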

    --
    Steven M. Lewis PhD
    4221 105th Ave NE
    Kirkland, WA 98033
    206-384-1340 (cell)
    Skype lordjoe_com
  • Harsh J at Jul 11, 2011 at 5:08 pm
    Right, it may be hitting this one:
    https://issues.apache.org/jira/browse/MAPREDUCE-2592

    But exit code 1 is perhaps too ambiguous to point to just that (it could
    be anything else, too).

    --
    Harsh J
