FAQ
Hi All,

When I try to export an HBase table using the command below, I get an error.

hbase org.apache.hadoop.hbase.mapreduce.Export test_table
/tmp/test_table_bkp

Any thoughts on this, please?

Thanks


  • Darren Lo at Mar 19, 2013 at 12:39 am
    Hi Mike,

    (Adding cdh-user)

    Can you please include some information on your error message? I just see
    your command.

    Thanks,
    Darren

  • Mike at Mar 19, 2013 at 3:09 am
    Thanks for the quick response, Darren.

Here are the error messages.

    13/03/18 20:13:06 INFO mapred.JobClient: map 0% reduce 0%
    13/03/18 20:14:18 INFO mapred.JobClient: Task Id :
    attempt_201303121738_0009_m_000000_0, Status : FAILED
    org.apache.hadoop.hbase.client.ScannerTimeoutException: 61162ms passed
    since the last invocation, timeout is currently set to 60000
    at
    org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1302)
    at
    org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:151)
    at
    org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:142)
    at
    org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:458)
    at
    org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
    at
    org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:645)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.S
    attempt_201303121738_0009_m_000000_0: log4j:WARN No appenders could be
    found for logger (org.apache.hadoop.hdfs.DFSClient).
    attempt_201303121738_0009_m_000000_0: log4j:WARN Please initialize the
    log4j system properly.
    attempt_201303121738_0009_m_000000_0: log4j:WARN See
    http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    13/03/18 20:15:33 INFO mapred.JobClient: Task Id :
    attempt_201303121738_0009_m_000000_1, Status : FAILED
    org.apache.hadoop.hbase.client.ScannerTimeoutException: 67691ms passed
    since the last invocation, timeout is currently set to 60000
    at
    org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1302)
    at
    org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:151)
    at
    org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:142)
    at
    org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:458)
    at
    org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
    at
    org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)

    Thanks
  • Kevin O'dell at Mar 19, 2013 at 3:33 am
    Mike,

    What is your scanner caching set at?
    --
    Kevin O'Dell
    Customer Operations Engineer, Cloudera
  • Mike at Mar 19, 2013 at 2:39 pm
    Hi Kevin,

    HBase Client Scanner Caching is set to 4000.


    Thanks
  • Kevin O'dell at Mar 19, 2013 at 3:29 pm
    Hi Mike,

    That is really high: the scanner fetches 4000 rows into the client-side
    cache per RPC, and the mapper may not be able to process a batch that
    large before the 60-second scanner lease expires, which is exactly the
    ScannerTimeoutException you are seeing. Try lowering it to 100 and
    see what happens.
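    A minimal sketch of that change, assuming the Export tool accepts
    Hadoop's -D generic options (which must come before the positional
    table/output arguments); the snippet only composes and echoes the
    invocation so you can inspect it before running it against a cluster:

    ```shell
    # Override scanner caching for this one job instead of editing
    # hbase-site.xml. The property name below is the HBase 0.92/0.94-era
    # default; check your version's docs.
    CACHING=100   # down from 4000
    CMD="hbase org.apache.hadoop.hbase.mapreduce.Export -Dhbase.client.scanner.caching=$CACHING test_table /tmp/test_table_bkp"
    echo "$CMD"
    ```

    A per-job -D override is usually preferable here: it leaves the
    cluster-wide caching setting alone for workloads that benefit from
    large batches.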
    --
    Kevin O'Dell
    Customer Operations Engineer, Cloudera
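If lowering the caching is not enough (for example, when individual rows are genuinely expensive to process), the other knob is the scanner lease itself. A hedged sketch for hbase-site.xml, assuming the 0.94-era property name (the 60000 ms default is what the error message reports; later HBase versions use hbase.client.scanner.timeout.period instead):

```xml
<!-- Raise the region server scanner lease from the 60000 ms default. -->
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>120000</value>
</property>
```

Note this is a cluster-side setting and requires a region server restart to take effect.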

Discussion Overview
group: scm-users
categories: hadoop
posted: Mar 19, '13 at 12:37a
active: Mar 19, '13 at 3:29p
posts: 6
users: 3
website: cloudera.com
irc: #hadoop

3 users in discussion
Mike: 3 posts · Kevin O'dell: 2 posts · Darren Lo: 1 post
