Hi all,

When I run the pi Hadoop sample I get this error:

10/03/31 15:46:13 WARN mapred.JobClient: Error reading task output http://h04.ctinfra.ufpr.br:50060/tasklog?plaintext=true&taskid=attempt_201003311545_0001_r_000002_0&filter=stdout
10/03/31 15:46:13 WARN mapred.JobClient: Error reading task output http://h04.ctinfra.ufpr.br:50060/tasklog?plaintext=true&taskid=attempt_201003311545_0001_r_000002_0&filter=stderr
10/03/31 15:46:20 INFO mapred.JobClient: Task Id : attempt_201003311545_0001_m_000006_1, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 134.
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)

Maybe it's because the datanode can't create more threads.

ramiro@lcpad:~/hadoop-0.20.2$ cat logs/userlogs/attempt_201003311457_0001_r_000001_2/stdout
#
# A fatal error has been detected by the Java Runtime Environment:
#
# java.lang.OutOfMemoryError: Cannot create GC thread. Out of system resources.
#
# Internal Error (gcTaskThread.cpp:38), pid=28840, tid=140010745776400
# Error: Cannot create GC thread. Out of system resources.
#
# JRE version: 6.0_17-b04
# Java VM: Java HotSpot(TM) 64-Bit Server VM (14.3-b01 mixed mode linux-amd64)
# An error report file with more information is saved as:
# /var-host/tmp/hadoop-ramiro/mapred/local/taskTracker/jobcache/job_201003311457_0001/attempt_201003311457_0001_r_000001_2/work/hs_err_pid28840.log
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#

I configured the limits below, but I'm still getting the same error.

<property>
  <name>fs.inmemory.size.mb</name>
  <value>100</value>
</property>

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx128M</value>
</property>

Do you know which limit I should configure to fix it?

Thanks in advance,

Edson Ramiro


  • Scott Carey at Apr 1, 2010 at 7:42 pm
    Exit status 134 is 128 + SIGABRT, i.e. the child JVM aborted — which matches the fatal OutOfMemoryError in the task's stdout.

    The default size of Java's young GC generation is 1/3 of the heap (-XX:NewRatio defaults to 2), so roughly 43MB here. You have told it to use 100MB for the in-memory file system, and there is a default setting of 64MB of sort space.

    If -Xmx is 128M, those three sum to over 200MB and won't fit. Turning any of the three down could help, as could increasing -Xmx.
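
    For example, one way to rebalance — a sketch against Hadoop 0.20 property names (io.sort.mb is the map-side sort buffer); the values are illustrative, not recommendations:

    <!-- hadoop-site.xml / mapred-site.xml: shrink the two buffers so they
         fit comfortably inside the child heap... -->
    <property>
      <name>fs.inmemory.size.mb</name>
      <value>32</value>
    </property>

    <property>
      <name>io.sort.mb</name>
      <value>32</value>
    </property>

    <!-- ...or give each child JVM more headroom instead -->
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx512M</value>
    </property>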

    Additionally, a thread-creation failure can also come from an OS-side per-process or per-user limit, such as the maximum number of processes/threads or open file handles.
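
    A quick way to inspect those limits for the account running the TaskTracker (a sketch; the limits file and item names can vary by distro, and the values shown are examples only):

    ulimit -u    # max user processes (threads count against this on Linux)
    ulimit -n    # max open file descriptors

    # To raise them persistently, add lines like these to
    # /etc/security/limits.conf and log in again:
    #   ramiro  soft  nproc   8192
    #   ramiro  hard  nproc   8192
    #   ramiro  soft  nofile  16384
    #   ramiro  hard  nofile  16384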

    On Mar 31, 2010, at 11:48 AM, Edson Ramiro wrote:

    [quoted original message trimmed]
