"java.lang.Throwable: Child Error" and "Task process exit with nonzero status of 1."
Hi all,
Hadoop is set up, but whenever I run a job I get the same error. The error is:

micah29@nc2:/usr/local/hadoop/hadoop$ ./bin/hadoop jar
hadoop-mapred-examples-0.21.0.jar wordcount test testout

11/07/11 10:48:59 INFO mapreduce.Job: Running job: job_201107111031_0003
11/07/11 10:49:00 INFO mapreduce.Job:  map 0% reduce 0%
11/07/11 10:49:11 INFO mapreduce.Job: Task Id : attempt_201107111031_0003_m_000002_0, Status : FAILED
java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:249)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:236)

11/07/11 10:49:11 WARN mapreduce.Job: Error reading task output http://nc2:50060/tasklog?plaintext=true&attemptid=attempt_201107111031_0003_m_000002_0&filter=stdout
11/07/11 10:49:11 WARN mapreduce.Job: Error reading task output http://nc2:50060/tasklog?plaintext=true&attemptid=attempt_201107111031_0003_m_000002_0&filter=stderr

I googled "Task process exit with nonzero status of 1." People say it is
an OS limit on the number of sub-directories that can be created inside a
single directory. But I can still create sub-directories inside a directory
without hitting any limit.
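
For example, a quick test like this (a rough sketch; the path is arbitrary) creates sub-directories until the OS refuses, which on ext3 should happen at around 32,000:

mkdir /tmp/subdir-test && cd /tmp/subdir-test
# ext3 allows roughly 32,000 sub-directories per directory;
# stop at the first mkdir that fails.
for i in $(seq 1 40000); do
    mkdir "d$i" || { echo "mkdir failed at $i"; break; }
done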

Please, could anybody help me solve this problem? Thanks.
--
Yours sincerely
Hu Shengqiu


  • Devaraj Das at Jul 11, 2011 at 6:21 am
    Moving this to mapreduce-user (this is the right list).

    Could you please look at the TaskTracker logs from around the time you see the task failure? They might have something more useful for debugging.
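
    For example (the log location below is the default under the install directory; adjust it for your setup):

    # TaskTracker daemon log on the failing node, around the failed attempt:
    grep -A 20 "attempt_201107111031_0003_m_000002_0" \
        /usr/local/hadoop/hadoop/logs/hadoop-*-tasktracker-*.log

    # The per-attempt stdout/stderr that the client failed to read can also
    # be fetched straight from the TaskTracker's web port:
    curl "http://nc2:50060/tasklog?plaintext=true&attemptid=attempt_201107111031_0003_m_000002_0&filter=stderr"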

  • Bharath Mundlapudi at Jul 11, 2011 at 5:54 pm
    That number is around 40K (I think). I am not sure whether you have any configuration in place to clean up user task logs periodically. We have solved this problem in MAPREDUCE-2415, which is part of 0.20.204.

    If you clean up the task logs periodically, you will not run into this problem.

    -Bharath
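
    A quick way to check how close you are to that limit, and a sketch of a one-off cleanup (paths assume a default layout; verify the retention property name against your release):

    # Each task attempt gets its own directory under userlogs; ext3 caps a
    # directory at roughly 32,000 sub-directories.
    ls /usr/local/hadoop/hadoop/logs/userlogs | wc -l

    # Remove attempt-log directories older than a day. (Hadoop also prunes
    # these itself after mapred.userlog.retain.hours, 24 by default in
    # 0.20.x; the property name changed in later releases.)
    find /usr/local/hadoop/hadoop/logs/userlogs -mindepth 1 -maxdepth 1 \
        -type d -mtime +1 -exec rm -rf {} +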




  • C.V.Krishnakumar Iyer at Jul 12, 2011 at 2:12 am
    Hi,

    I get this error too, but the job completes properly. Is this error any cause for concern? That is, would any computation be hampered because of it?

    Thanks !

    Regards,
    Krishnakumar
  • Harsh J at Jul 12, 2011 at 6:04 am
    The job may have succeeded because the task ran successfully on
    another tasktracker after a retry attempt was scheduled. This probably
    means one of your TTs has something bad on it, which should be easily
    identifiable from the UI.

    If all TTs are bad, your job would fail -- so yes, better to fix it
    than to keep expecting failures.
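
    If you would rather not click through the UI, a rough grep over the JobTracker log (default location shown; the exact log phrasing varies by release) can show which tracker the failed attempts cluster on:

    # Count failed-attempt log lines per tracker; one machine dominating
    # the output is the likely bad TT.
    grep "FAILED" /usr/local/hadoop/hadoop/logs/hadoop-*-jobtracker-*.log \
        | grep -o "tracker_[^,' ]*" | sort | uniq -c | sort -rn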



    --
    Harsh J
  • Michael Hu at Jul 13, 2011 at 9:12 am
    Hi all,
    I have run into a very strange problem. When the input data has to be
    split into more than two tasks, the last task's status is always stuck
    at "initializing". I have checked the TaskTracker logs, the userlogs,
    and the DataNode logs, and no error is reported anywhere.

    Does anybody know how this can happen? Thanks.

    --
    Yours sincerely
    Hu Shengqiu
