Hive user mailing list, June 2011

Hive running out of memory
I have a table with 3 levels of partitioning and about 10,000 files (one
file at every 'leaf'). I am using EMR and the table is stored in S3.
For some reason, Hive can't even start running a simple query that creates a
local copy of a subset of the big table.
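
For illustration, a query of that shape might look like the following (the
table name and partition columns here are hypothetical):

    -- CTAS copying a partition-pruned subset of the big table;
    -- 'events' and its partition columns (year, month, day) are made up.
    CREATE TABLE events_subset AS
    SELECT *
    FROM events
    WHERE year = 2011 AND month = 6 AND day = 20;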

Does this look like an EMR-specific issue or is there something I could do?
I am thinking about copying all of the data into HDFS first.
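
Copying the table's files into HDFS could presumably be done with distcp;
a sketch, with a made-up bucket and paths (on EMR the s3n:// scheme is
typical for this era):

    # copy the table's files from S3 into the cluster's HDFS
    hadoop distcp s3n://my-bucket/tables/events hdfs:///user/hive/warehouse/events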

Number of reduce tasks is set to 0 since there's no reduce operator
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.LinkedHashMap.newKeyIterator(LinkedHashMap.java:396)
    at java.util.HashMap$KeySet.iterator(HashMap.java:874)
    at java.beans.java_util_Map_PersistenceDelegate.initialize(MetaData.java:516)
    at java.beans.PersistenceDelegate.initialize(PersistenceDelegate.java:190)
    at java.beans.DefaultPersistenceDelegate.initialize(DefaultPersistenceDelegate.java:393)
    at java.beans.PersistenceDelegate.initialize(PersistenceDelegate.java:190)
    at java.beans.DefaultPersistenceDelegate.initialize(DefaultPersistenceDelegate.java:393)
    at java.beans.PersistenceDelegate.writeObject(PersistenceDelegate.java:100)
    at java.beans.Encoder.writeObject(Encoder.java:54)
    at java.beans.XMLEncoder.writeObject(XMLEncoder.java:257)
    at java.beans.Encoder.writeExpression(Encoder.java:279)
    at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:372)
    at java.beans.PersistenceDelegate.writeObject(PersistenceDelegate.java:97)
    at java.beans.Encoder.writeObject(Encoder.java:54)
    at java.beans.XMLEncoder.writeObject(XMLEncoder.java:257)
    at java.beans.Encoder.writeExpression(Encoder.java:279)
    at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:372)
    at java.beans.DefaultPersistenceDelegate.doProperty(DefaultPersistenceDelegate.java:212)
    at java.beans.DefaultPersistenceDelegate.initBean(DefaultPersistenceDelegate.java:247)
    at java.beans.DefaultPersistenceDelegate.initialize(DefaultPersistenceDelegate.java:395)
    at java.beans.PersistenceDelegate.writeObject(PersistenceDelegate.java:100)
    at java.beans.Encoder.writeObject(Encoder.java:54)
    at java.beans.XMLEncoder.writeObject(XMLEncoder.java:257)
    at java.beans.Encoder.writeExpression(Encoder.java:279)
    at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:372)
    at java.beans.PersistenceDelegate.writeObject(PersistenceDelegate.java:97)
    at java.beans.Encoder.writeObject(Encoder.java:54)
    at java.beans.XMLEncoder.writeObject(XMLEncoder.java:257)
    at java.beans.Encoder.writeExpression(Encoder.java:279)
    at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:372)
    at java.beans.java_util_Map_PersistenceDelegate.initialize(MetaData.java:523)
    at java.beans.PersistenceDelegate.initialize(PersistenceDelegate.java:190)


  • Steven Wong at Jun 21, 2011 at 8:53 pm
    Is the OOM in the Hive client? If so, you should try increasing its max heap size by setting the env var HADOOP_HEAPSIZE. One place to set it is hive-env.sh; see /home/hadoop/.versions/hive-0.7/conf/hive-env.sh.template for more info.
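
    For example, a line like this in hive-env.sh would raise the client heap
    (2048 is just an illustrative value, in MB):

        # max heap for the Hadoop/Hive client JVM, in MB
        export HADOOP_HEAPSIZE=2048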


  • Igor Tatarinov at Jun 21, 2011 at 9:31 pm
    Yes, that's probably it. I found a related JIRA:
    https://issues.apache.org/jira/browse/HIVE-1316

    It doesn't look like the EMR installation has this fix. I am going to
    increase the heap size and see if that helps.
