Hi,

While running Terrier on Hadoop, I am getting the following error again and
again. Can someone please point out where the problem is?

attempt_201010252225_0001_m_000009_2: WARN - Error running child
attempt_201010252225_0001_m_000009_2: java.lang.OutOfMemoryError: GC overhead limit exceeded
attempt_201010252225_0001_m_000009_2: at org.terrier.structures.indexing.singlepass.hadoop.HadoopRunWriter.writeTerm(HadoopRunWriter.java:78)
attempt_201010252225_0001_m_000009_2: at org.terrier.structures.indexing.singlepass.MemoryPostings.writeToWriter(MemoryPostings.java:151)
attempt_201010252225_0001_m_000009_2: at org.terrier.structures.indexing.singlepass.MemoryPostings.finish(MemoryPostings.java:112)
attempt_201010252225_0001_m_000009_2: at org.terrier.indexing.hadoop.Hadoop_BasicSinglePassIndexer.forceFlush(Hadoop_BasicSinglePassIndexer.java:308)
attempt_201010252225_0001_m_000009_2: at org.terrier.indexing.hadoop.Hadoop_BasicSinglePassIndexer.closeMap(Hadoop_BasicSinglePassIndexer.java:419)
attempt_201010252225_0001_m_000009_2: at org.terrier.indexing.hadoop.Hadoop_BasicSinglePassIndexer.close(Hadoop_BasicSinglePassIndexer.java:236)
attempt_201010252225_0001_m_000009_2: at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
attempt_201010252225_0001_m_000009_2: at org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
attempt_201010252225_0001_m_000009_2: at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2198)

Thanks

Regards
Siddharth


  • Hemanth Yamijala at Oct 30, 2010 at 9:29 am
    Hi,

    On Tue, Oct 26, 2010 at 8:14 PM, siddharth raghuvanshi
    wrote:
    Hi,

    While running Terrier on Hadoop, I am getting the following error again and
    again. Can someone please point out where the problem is?

    attempt_201010252225_0001_m_000009_2: WARN - Error running child
    attempt_201010252225_0001_m_000009_2: java.lang.OutOfMemoryError: GC overhead limit exceeded
    This error generally means that your MapReduce program requires more
    JVM heap space than has been configured by default. You could refer to
    the map/reduce documentation at http://bit.ly/9VAHCT and see if that
    helps you. In short, you may need to configure your map/reduce tasks
    to run with more JVM heap space than the default. Depending on which
    version of Hadoop you are using, the property names may vary slightly,
    but they should be covered in the relevant documentation.

    Thanks
    hemanth
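
    To make Hemanth's advice concrete: on Hadoop 0.20.x (the era this thread's
    `TaskTracker$Child` stack frames suggest), the heap for map/reduce child
    JVMs is controlled by the `mapred.child.java.opts` property. A minimal
    sketch follows; the `-Xmx1024m` value is purely illustrative, not a
    recommendation, and newer Hadoop releases split this setting into
    `mapreduce.map.java.opts` and `mapreduce.reduce.java.opts`:

    ```xml
    <!-- mapred-site.xml: raise the heap for map/reduce child JVMs.
         -Xmx1024m is an example value; size it to your data and cluster. -->
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx1024m</value>
    </property>
    ```

    The same property can also be set for a single job on the command line
    (e.g. `-D mapred.child.java.opts=-Xmx1024m`), provided the job's driver
    parses generic options via `ToolRunner`/`GenericOptionsParser`.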

Discussion Overview
group: common-user @ hadoop
posted: Oct 26, 2010 at 2:45p
active: Oct 30, 2010 at 9:29a
posts: 2
users: 2
website: hadoop.apache.org...
irc: #hadoop