Hello,
While trying to start the task tracker I get the following error in
the logs (see below).
I'm guessing it's trying to clean up an aborted job (a badly coded one)
and there are too many files to clean up.

Does anyone know which directory it's looking in, so that I can
manually clean it up?
Regards
S

==Error==

2009-11-30 11:39:47,989 ERROR org.apache.hadoop.mapred.TaskTracker:
Can not start task tracker because java.lang.OutOfMemoryError: GC
overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:2882)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:572)
at java.lang.StringBuilder.append(StringBuilder.java:203)
at java.io.UnixFileSystem.resolve(UnixFileSystem.java:93)
at java.io.File.<init>(File.java:207)
at java.io.File.listFiles(File.java:1056)
at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:73)
at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
at org.apache.hadoop.fs.RawLocalFileSystem.delete(RawLocalFileSystem.java:269)
at org.apache.hadoop.fs.ChecksumFileSystem.delete(ChecksumFileSystem.java:438)
at org.apache.hadoop.fs.FilterFileSystem.delete(FilterFileSystem.java:143)
at org.apache.hadoop.mapred.JobConf.deleteLocalFiles(JobConf.java:270)
at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:441)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:934)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2833)

  • Bill Au at Nov 30, 2009 at 5:41 pm
    Your JVM is running out of heap space, so you will need to run it with
    a bigger max heap size.

    Bill
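
    For anyone hitting the same wall: the daemon heap is a one-line change
    in conf/hadoop-env.sh. A minimal sketch, assuming a 0.20-era install
    (the variable name and the 1000 MB default come from the stock
    hadoop-env.sh; the value is in MB):

        # conf/hadoop-env.sh
        # Maximum heap for the Hadoop daemons, in MB (stock default is 1000)
        export HADOOP_HEAPSIZE=2000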

  • Todd Lipcon at Nov 30, 2009 at 5:49 pm
    That looks like the GC time overhead limit, not an actual out-of-memory
    error.

    It's probably trying to rm -rf the mapred.local.dir contents. If your TT is
    stopped, feel free to remove everything in there and try to start it
    again.

    -Todd
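
    A minimal sketch of that cleanup, assuming a 0.20-era layout where
    mapred.local.dir defaults to a directory under hadoop.tmp.dir (the
    path below is the stock default; use the value from your own config):

        # find the configured local dir(s)
        grep -A1 'mapred.local.dir' conf/*.xml
        # make sure the TaskTracker is stopped before deleting anything
        bin/hadoop-daemon.sh stop tasktracker
        # stock default path; substitute the value grep found above
        rm -rf /tmp/hadoop-$USER/mapred/local/*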
  • Bill Au at Nov 30, 2009 at 6:04 pm
    The "GC overhead limit exceeded" error is thrown when the heap is almost
    out of space: the JVM is spending more than 98% of its total time on
    garbage collection while recovering less than 2% of the heap.

    Bill
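
    Those thresholds map to HotSpot's GCTimeLimit (98) and GCHeapFreeLimit
    (2) flags, and the check can be switched off outright, though raising
    the heap is usually the better fix. A sketch, assuming the flag is
    passed through HADOOP_OPTS in conf/hadoop-env.sh:

        # conf/hadoop-env.sh -- extra JVM flags for the Hadoop daemons
        # -XX:-UseGCOverheadLimit disables the "GC overhead limit exceeded" check
        export HADOOP_OPTS="$HADOOP_OPTS -XX:-UseGCOverheadLimit"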
  • Saptarshi Guha at Nov 30, 2009 at 9:33 pm
    Yes, this is it, because the TTs were running fine before the bad job.
    I cleared the directory (which took forever) and it worked.
    Thanks
    Saptarshi

Discussion Overview
group: common-user
categories: hadoop
posted: Nov 30, '09 at 4:54p
active: Nov 30, '09 at 9:33p
posts: 5
users: 3
website: hadoop.apache.org...
irc: #hadoop
