From the thread dump, it looks like many threads are stuck at:

org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1535)
org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1523)
java.util.concurrent.ConcurrentSkipListMap$ComparableUsingComparator.compareTo(ConcurrentSkipListMap.java:647)
java.util.concurrent.ConcurrentSkipListMap.findNear(ConcurrentSkipListMap.java:1346)

Is this similar to the HBASE-9428 issue?
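
For context, my understanding is that in 0.94 the MemStore holds its cells in a
ConcurrentSkipListMap ordered by KeyValue.KVComparator, so every seek into the
memstore walks the skip list and calls compare() at each node it visits. The
snippet below is only a minimal illustration of that structure (class, table and
row names are made up), not the actual region server code path:

import java.util.concurrent.ConcurrentSkipListMap;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class MemstoreCompareSketch {
    public static void main(String[] args) {
        // Cells kept in a skip list ordered by the KeyValue comparator,
        // the same structure the memstore uses internally.
        ConcurrentSkipListMap<KeyValue, KeyValue> kvset =
            new ConcurrentSkipListMap<KeyValue, KeyValue>(KeyValue.COMPARATOR);

        // Insert a few cells (hypothetical row/family/qualifier names).
        for (int i = 0; i < 1000; i++) {
            KeyValue kv = new KeyValue(Bytes.toBytes("row" + i),
                Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v" + i));
            kvset.put(kv, kv);
        }

        // A read seeks into the skip list; ceilingKey() goes through
        // ConcurrentSkipListMap.findNear(), which calls KVComparator.compare()
        // once per node visited -- the frames seen in the thread dump above.
        KeyValue seek = KeyValue.createFirstOnRow(Bytes.toBytes("row500"));
        KeyValue found = kvset.ceilingKey(seek);
        System.out.println(found);
    }
}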

Any help regarding this would be appreciated. :)


Thanks and regards
Vinay S Kashyap


---------- Forwarded message ----------
From: Vinay Kashyap <vinu.kash@gmail.com>
Date: Tue, Oct 22, 2013 at 8:47 PM
Subject: High CPU utilization in few Region servers during read
To: user@hbase.apache.org


Hi,

I am running HBase 0.94.6 (cdh-4.4.0) with 25 region servers.
I am testing a scenario where reads and writes are served entirely from RAM.

I have the following settings:
Table precreated with 25 regions.
HFile size - 48 GB
MemStore size - 72 GB
Heap size - 96 GB

These settings are meant to avoid any flushes to disk, since the data does not
need to be persisted.
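
Roughly, the setup looks like the sketch below. The property names, table name,
column family and split keys are my best-guess illustration of the settings
above (heap size itself is set via HBASE_HEAPSIZE in hbase-env.sh), not an
exact copy of the configuration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class SetupSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Verify the effective region-size / flush-size settings
        // (assuming the sizes above map to these two properties).
        System.out.println("hbase.hregion.max.filesize = "
            + conf.getLong("hbase.hregion.max.filesize", -1));
        System.out.println("hbase.hregion.memstore.flush.size = "
            + conf.getLong("hbase.hregion.memstore.flush.size", -1));

        // Pre-create the table with 25 regions so each region server
        // hosts one region (table name, family and key range are illustrative).
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor("loadtest");
        desc.addFamily(new HColumnDescriptor("cf"));
        admin.createTable(desc, Bytes.toBytes("row00000000"),
            Bytes.toBytes("row99999999"), 25);
        admin.close();
    }
}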

I am able to achieve a load (write) throughput of 75K ops per region server.
While reading, 23 of the region servers serve requests at a throughput of 55K
ops each, but 2 of the region servers, which vary randomly from run to run,
always end up serving only a few hundred ops.

On these 2 region servers the CPU usage is continuously close to 100%, which
brings down the overall throughput. I did not observe any long GC pauses during
this time.

I also tried applying the patch for HBASE-9428, but still faced the same
problem.
A thread dump of the affected region server is at
http://pastebin.com/JGx9gXnm

Any hints on how to solve this?

Thanks and regards
Vinay S Kashyap
