What file format, which compression codec, and which Impala version? There was an issue like this fixed in 1.1.1 that only affected one file format (either SequenceFile or RCFile, IIRC, with no compression).
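
For reference, a sketch of the impala-shell statements one might use to gather the details John asks about; the table name `table1` is taken from later in the thread, and the exact output columns may vary by Impala version:

```sql
-- Hedged sketch: collecting version and file-format details in impala-shell.
SELECT version();           -- reports the Impala build/version string
DESCRIBE FORMATTED table1;  -- shows the InputFormat/SerDe (i.e. the file format)
SHOW TABLE STATS table1;    -- per-partition file counts, sizes, and format
```

The compression codec is usually visible in the table properties printed by `DESCRIBE FORMATTED`, or can be inferred from the file extensions in HDFS.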

Thanks,
John
--
Sent from my iPad

On Sep 10, 2013, at 12:28 AM, Nishant Patel wrote:

Found below logs from node which failed.

Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0 null=(offset=0 mask=0))])
I0910 02:25:15.116484 5514 mem-limit.h:86] Query: 0:0Exceeded limit: limit=40481688780 consumption=40483454976
I0910 02:25:15.116503 5514 mem-limit.h:86] Query: 0:0Exceeded limit: limit=40481688780 consumption=40483459072

I could not understand why it requires so much memory. From the profile I found that this node required 37.7 GB, while the total data in the table is 900 MB. (The logged limit of 40481688780 bytes is roughly 37.7 GiB, so the query is hitting the process memory limit rather than anything proportional to the table size.)


Regards,
Nishant


On Tue, Sep 10, 2013 at 11:41 AM, Nishant Patel wrote:
Hi,

I have executed select count(1) from table1;

The total size of the table is 878.8 MB.

The file for which it failed is 188.6 MB.

The error is 'Backend 1:Read failed while trying to finish scan range: '. I have 10 nodes, each with plenty of memory.
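
As a hedged sketch of how one might investigate this from impala-shell (the option value shown is purely illustrative, and `table1` is the table from this thread):

```sql
-- Hedged sketch: cap per-node query memory and inspect where it is spent.
SET MEM_LIMIT=8g;            -- illustrative session-level memory cap
SELECT count(1) FROM table1;
PROFILE;                     -- after the query: per-node memory and scan details
```

The `PROFILE` output shows per-node peak memory consumption, which is how the 37.7 GB figure mentioned later in the thread would have been found.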

Can anyone tell me the reason for the failure?

--
Regards,
Nishant Patel


--
Regards,
Nishant Patel

To unsubscribe from this group and stop receiving emails from it, send an email to impala-user+unsubscribe@cloudera.org.

Discussion Overview
group: impala-user
categories: hadoop
posted: Sep 10, '13 at 7:28a
active: Sep 16, '13 at 6:15p
posts: 6
users: 4
website: cloudera.com
irc: #hadoop
