Grokbase Groups: HBase user, June 2011
We are seeing "responseTooLarge for: next..." warnings in our region server
logs. If I understand correctly, this happens when a scanner is opened and the
rows are too big to be returned? With the scan batch size set to 1, this
suggests the rows are simply too big to read. Is that correct? Why would this
occur? The sizes listed are 120 MB+ when we see this warning.

We have also been getting timeouts (60s) on our scans, even with a scan batch
size of 1. I assume this may be part of the explanation? What is causing it,
and how do we get around it?

Thanks.
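
A minimal client-side sketch of the scan settings being discussed, assuming the
org.apache.hadoop.hbase.client API of that era; the table name and numbers are
placeholders. setCaching bounds how many rows come back per next() RPC, while
setBatch caps how many columns of a single wide row are returned at once:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class BoundedScan {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Scanner leases default to 60s (hbase.regionserver.lease.period, a
        // region server hbase-site.xml setting), so a client that takes longer
        // than that between next() calls times out regardless of batch size.
        HTable table = new HTable(conf, "mytable");  // "mytable" is a placeholder
        Scan scan = new Scan();
        scan.setCaching(1);   // rows fetched per next() RPC; keep low for huge rows
        scan.setBatch(1000);  // max columns per Result; splits a wide row across calls
        ResultScanner scanner = table.getScanner(scan);
        try {
          for (Result r : scanner) {
            // process r; with setBatch a single wide row arrives as several Results
          }
        } finally {
          scanner.close();
          table.close();
        }
      }
    }

Note that caching alone does not help with a single very wide row: with caching
set to 1, one 120 MB row still comes back in one response. It is setBatch that
splits a wide row across multiple next() calls.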

  • Wayne at Jun 7, 2011 at 2:11 pm
    We also see a lot of "responseTooLarge for: multi". Not sure what this is...
  • Jean-Daniel Cryans at Jun 10, 2011 at 10:02 pm
    It's a WARN, right? This is the default size above which we print it (see the sketch after the thread for what trips it):

    private static final int DEFAULT_WARN_RESPONSE_SIZE = 100 * 1024 * 1024;

    J-D
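
The check described above only produces a log line; nothing is rejected. A
hypothetical sketch of the logic (not the actual HBase source), just to show why
a 120 MB next() or multi response crosses the default threshold of
100 * 1024 * 1024 = 104,857,600 bytes:

    public class WarnResponseSizeSketch {
      // The default J-D quoted above: 100 MB.
      private static final int DEFAULT_WARN_RESPONSE_SIZE = 100 * 1024 * 1024;

      // Hypothetical helper mirroring the server-side check: a response larger
      // than the threshold is logged with the "responseTooLarge" tag at WARN
      // level and then returned to the client as usual.
      static void maybeWarn(String call, long responseBytes) {
        if (responseBytes > DEFAULT_WARN_RESPONSE_SIZE) {
          System.out.println("(responseTooLarge): " + call + " size=" + responseBytes);
        }
      }

      public static void main(String[] args) {
        maybeWarn("next", 120L * 1024 * 1024);  // 120 MB scan response -> warning
        maybeWarn("multi", 8L * 1024 * 1024);   // 8 MB batched mutation -> no warning
      }
    }

In more recent HBase versions the threshold appears to be configurable
(hbase.ipc.warn.response.size); verify the property name against the version in
use.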

Discussion Overview

group: user@hbase.apache.org
categories: hbase, hadoop
posted: Jun 7, '11 at 1:51p
active: Jun 10, '11 at 10:02p
posts: 3
users: 2
website: hbase.apache.org

2 users in discussion: Wayne (2 posts), Jean-Daniel Cryans (1 post)
