FAQ
I found the logs below on the node that failed.

Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0 null=(offset=0 mask=0))])
I0910 02:25:15.116484 5514 mem-limit.h:86] Query: 0:0Exceeded limit: limit=40481688780 consumption=40483454976
I0910 02:25:15.116503 5514 mem-limit.h:86] Query: 0:0Exceeded limit: limit=40481688780 consumption=40483459072

I could not understand why it requires so much memory. From the profile, I
found that this node required 37.7 GB, while the total data in the table is
only 900 MB.
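(As a side note, the limit in those log lines is consistent with the 37.7 GB figure from the profile; this is a quick sanity check with plain arithmetic, nothing Impala-specific:)

```python
# Values copied from the mem-limit.h log lines above (bytes).
limit_bytes = 40_481_688_780
consumption_bytes = 40_483_454_976

# The limit works out to roughly 37.7 GiB, matching the query profile figure.
print(f"limit = {limit_bytes / 2**30:.2f} GiB")

# The query was only a couple of MiB over the limit when it was killed.
print(f"overage = {(consumption_bytes - limit_bytes) / 2**20:.2f} MiB")
```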

Regards,

Nishant




On Tue, Sep 10, 2013 at 11:41 AM, Nishant Patel wrote:
Hi,

I have executed select count(1) from table1;

The total size of the table is 878.8 MB.

The file on which it failed is 188.6 MB.

The error is 'Backend 1:Read failed while trying to finish scan range: '. I
have 10 nodes with plenty of memory.

Can anyone tell me the reason for the failure?

--
Regards,
Nishant Patel


To unsubscribe from this group and stop receiving emails from it, send an email to impala-user+unsubscribe@cloudera.org.


  • John Russell at Sep 10, 2013 at 7:39 am
    What file format, compression codec (if any), and which Impala version? An issue like this was fixed in 1.1.1 that only affected one file format (either SequenceFile or RCFile, IIRC, with no compression).

    Thanks,
    John
    --
    Sent from my iPad

  • Nishant Patel at Sep 10, 2013 at 7:44 am
    I am using text file format with no compression.

    version;
    Shell version: Impala Shell v1.1 (5e15fca) built on Sun Jul 21 15:51:04 PDT 2013
    Server version: impalad version 1.1 RELEASE (build 5e15fcacc48ec4ea65e8aa76362cb3ec9be26f13)

    Thanks,
    Nishant

    --
    Regards,
    Nishant Patel

  • Alan Choi at Sep 10, 2013 at 9:19 pm
    Hi Nishant,

    Sounds like you might have a really long line or have an incorrect line
    delimiter. Could you please verify that?

    Thanks,
    Alan
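
    (One way to check this is to look for a pathologically long line in the
    table's data files. The sketch below is illustrative only: the HDFS path is
    hypothetical and depends on where table1's data actually lives, and it
    assumes the files have been pulled locally, e.g. via `hdfs dfs -get`.)

```python
import glob

# Find the longest "line" across the table's data files. A single enormous
# row (e.g. caused by a wrong line delimiter) would explain a scan needing
# far more memory than the table's on-disk size.
# NOTE: the glob pattern is a hypothetical local copy of the HDFS files.
def longest_line_bytes(pattern):
    max_len = 0
    for path in glob.glob(pattern):
        with open(path, "rb") as f:
            for line in f:
                max_len = max(max_len, len(line.rstrip(b"\n")))
    return max_len

print(longest_line_bytes("table1_data/*"))
```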


  • Nishant Patel at Sep 11, 2013 at 5:48 am
    Hi Alan,

    Thanks for your response. Yes, you are correct. One of the records is very
    large, which forced Impala to break.

    Regards,
    Nishant


  • Skye Wanderman-Milne at Sep 16, 2013 at 6:15 pm
    Hi Nishant,

    I've reopened IMPALA-525 <https://issues.cloudera.org/browse/IMPALA-525> --
    Impala will still use too much memory reading some text files with very
    long rows. As a workaround for now, removing the huge record from the text
    file or converting your data to a different file format should fix the
    problem.

    Sorry for the inconvenience,
    Skye



Discussion Overview
group: impala-user
categories: hadoop
posted: Sep 10, '13 at 7:28a
active: Sep 16, '13 at 6:15p
posts: 6
users: 4
website: cloudera.com
irc: #hadoop
