Hi guys,
I have successfully configured Hadoop, MapReduce and HBase.
Now I want to run the PerformanceEvaluation tool a bit.

The configuration of our systems is:

Master Machine:

Processor:
Intel Centrino Mobile Technology processor, 1.66 GHz
Memory:
1 GB DDR2 SDRAM
Storage:
80 GB
Network:
Gigabit Ethernet

Slave 1 Machine:

Processor:
Intel Core 2 Duo T5450 processor, 1.66 GHz
Memory:
2 GB DDR2 SDRAM
Storage:
200 GB
Network:
Gigabit Ethernet

Slave 2 Machine:

Processor:
Intel(R) Pentium(R) M processor, 1400 MHz
Memory:
512 MB RAM
Storage:
45 GB
Network:
Gigabit Ethernet

The PerformanceEvaluation tests sequentialWrite and
sequentialRead ran successfully.

We followed the same procedure for randomWrite and randomRead.

randomWrite succeeded, but randomRead failed. See the output
below for the randomRead run. (CPU and memory usage was at 94%; could that
be the reason?)
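
For reference, the runs above are invoked from the hadoop bin directory along these lines (the trailing argument is the number of concurrent clients; only the randomRead invocation with 3 clients is shown verbatim in the log below):

cd ~/hadoop-0.20.1/bin
./hadoop org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 3
./hadoop org.apache.hadoop.hbase.PerformanceEvaluation sequentialRead 3
./hadoop org.apache.hadoop.hbase.PerformanceEvaluation randomWrite 3
./hadoop org.apache.hadoop.hbase.PerformanceEvaluation randomRead 3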

hadoop@Hadoopserver:~/hadoop-0.20.1/bin> ./hadoop
org.apache.hadoop.hbase.PerformanceEvaluation randomRead 3
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:zookeeper.version=3.2.2-888565, built on 12/08/2009 21:51
GMT
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:host.name=Hadoopserver
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:java.version=1.6.0_15
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:java.vendor=Sun Microsystems Inc.
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:java.home=/usr/java/jdk1.6.0_15/jre
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:java.class.path=/home/hadoop/hadoop-0.20.1/bin/../conf:/usr/java/jdk1.6.0_15/lib/tools.jar:/home/hadoop/hadoop-0.20.1/bin/..:/home/hadoop/hadoop-0.20.1/bin/../hadoop-0.20.1-core.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/commons-cli-1.2.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/commons-codec-1.3.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/commons-el-1.0.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/commons-httpclient-3.0.1.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/commons-logging-1.0.4.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/commons-net-1.4.1.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/core-3.1.1.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/jetty-6.1.14.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/jetty-util-6.1.14.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/junit-3.8.1.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/kfs-0.2.2.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/log4j-1.2.15.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/oro-2.0.8.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/xmlenc-0.52.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop-0.20.1/bin/../lib/jsp-2.1/jsp-api-2.1.jar:/home/hadoop/hbase-0.20.3/hbase-0.20.3.jar:/home/hadoop/hbase-0.20.3/conf:/home/hadoop/hbase-0.20.3/hbase-0.20.3-test.jar:/home/hadoop/hbase-0.20.3/lib/zookeeper-3.2.2.jar
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:java.library.path=/home/hadoop/hadoop-0.20.1/bin/../lib/native/Linux-i386-32
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:java.io.tmpdir=/tmp
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:java.compiler=<NA>
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:os.version=2.6.27.19-5-pae
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:user.home=/home/hadoop
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Client
environment:user.dir=/home/hadoop/hadoop-0.20.1/bin
10/04/17 17:58:08 INFO zookeeper.ZooKeeper: Initiating client
connection, connectString=Hadoopclient1:2222,Hadoopclient:2222,Hadoopserver:2222
sessionTimeout=60000
watcher=org.apache.hadoop.hbase.client.HConnectionManager$ClientZKWatcher@12152e6
10/04/17 17:58:08 INFO zookeeper.ClientCnxn:
zookeeper.disableAutoWatchReset is false
10/04/17 17:58:08 INFO zookeeper.ClientCnxn: Attempting connection to
server Hadoopserver/192.168.1.1:2222
10/04/17 17:58:08 INFO zookeeper.ClientCnxn: Priming connection to
java.nio.channels.SocketChannel[connected local=/192.168.1.1:41539
remote=Hadoopserver/192.168.1.1:2222]
10/04/17 17:58:08 INFO zookeeper.ClientCnxn: Server connection successful
10/04/17 17:58:09 WARN mapred.JobClient: Use GenericOptionsParser for
parsing the arguments. Applications should implement Tool for the
same.
10/04/17 17:58:09 INFO input.FileInputFormat: Total input paths to process : 1
10/04/17 17:58:10 INFO hbase.PerformanceEvaluation: Total # of splits: 30
10/04/17 17:58:10 INFO mapred.JobClient: Running job: job_201004171753_0001
10/04/17 17:58:11 INFO mapred.JobClient: map 0% reduce 0%
10/04/17 17:58:25 INFO mapred.JobClient: map 6% reduce 0%
10/04/17 17:58:28 INFO mapred.JobClient: map 13% reduce 0%
10/04/17 17:58:31 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:08:58 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:10:12 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000004_0, Status : FAILED
Task attempt_201004171753_0001_m_000004_0 failed to report status for
601 seconds. Killing!
10/04/17 18:11:37 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:15:40 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:16:47 INFO mapred.JobClient: map 13% reduce 0%
10/04/17 18:16:48 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000001_0, Status : FAILED
Task attempt_201004171753_0001_m_000001_0 failed to report status for
600 seconds. Killing!
10/04/17 18:16:53 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000002_0, Status : FAILED
Task attempt_201004171753_0001_m_000002_0 failed to report status for
602 seconds. Killing!
10/04/17 18:17:00 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:19:08 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:22:47 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:22:54 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000000_0, Status : FAILED
Task attempt_201004171753_0001_m_000000_0 failed to report status for
600 seconds. Killing!
10/04/17 18:22:57 INFO mapred.JobClient: map 13% reduce 0%
10/04/17 18:23:00 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000005_0, Status : FAILED
Task attempt_201004171753_0001_m_000005_0 failed to report status for
600 seconds. Killing!
10/04/17 18:23:04 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:23:11 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:24:29 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:24:35 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000003_0, Status : FAILED
Task attempt_201004171753_0001_m_000003_0 failed to report status for
601 seconds. Killing!
10/04/17 18:24:47 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:24:53 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:26:30 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000006_0, Status : FAILED
Task attempt_201004171753_0001_m_000006_0 failed to report status for
604 seconds. Killing!
10/04/17 18:28:15 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:29:17 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:30:24 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000007_0, Status : FAILED
Task attempt_201004171753_0001_m_000007_0 failed to report status for
602 seconds. Killing!
10/04/17 18:31:24 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:33:08 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:33:15 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000002_1, Status : FAILED
Task attempt_201004171753_0001_m_000002_1 failed to report status for
602 seconds. Killing!
10/04/17 18:33:25 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:36:02 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:36:08 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000001_1, Status : FAILED
Task attempt_201004171753_0001_m_000001_1 failed to report status for
602 seconds. Killing!
10/04/17 18:36:20 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:39:27 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:39:33 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000000_1, Status : FAILED
Task attempt_201004171753_0001_m_000000_1 failed to report status for
600 seconds. Killing!
10/04/17 18:39:45 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:40:57 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:41:03 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000005_1, Status : FAILED
Task attempt_201004171753_0001_m_000005_1 failed to report status for
602 seconds. Killing!
10/04/17 18:41:14 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:47:00 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:47:56 INFO mapred.JobClient: map 13% reduce 0%
10/04/17 18:48:12 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000003_1, Status : FAILED
Task attempt_201004171753_0001_m_000003_1 failed to report status for
602 seconds. Killing!
10/04/17 18:48:15 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000008_0, Status : FAILED
Task attempt_201004171753_0001_m_000008_0 failed to report status for
601 seconds. Killing!
10/04/17 18:48:50 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:49:19 INFO mapred.JobClient: map 13% reduce 0%
10/04/17 18:49:30 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000006_1, Status : FAILED
Task attempt_201004171753_0001_m_000006_1 failed to report status for
602 seconds. Killing!
10/04/17 18:49:34 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:49:38 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:49:47 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:49:57 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000004_1, Status : FAILED
Task attempt_201004171753_0001_m_000004_1 failed to report status for
600 seconds. Killing!
10/04/17 18:50:07 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 18:51:43 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 18:51:51 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000007_1, Status : FAILED
Task attempt_201004171753_0001_m_000007_1 failed to report status for
600 seconds. Killing!
10/04/17 18:52:00 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 19:00:30 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 19:00:37 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000001_2, Status : FAILED
Task attempt_201004171753_0001_m_000001_2 failed to report status for
602 seconds. Killing!
10/04/17 19:00:47 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 19:02:03 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 19:02:06 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000004_2, Status : FAILED
Task attempt_201004171753_0001_m_000004_2 failed to report status for
600 seconds. Killing!
10/04/17 19:02:15 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 19:07:44 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 19:07:55 INFO mapred.JobClient: map 13% reduce 0%
10/04/17 19:08:08 INFO mapred.JobClient: map 10% reduce 0%
10/04/17 19:08:14 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000008_1, Status : FAILED
Task attempt_201004171753_0001_m_000008_1 failed to report status for
600 seconds. Killing!
10/04/17 19:08:18 INFO mapred.JobClient: map 6% reduce 0%
10/04/17 19:08:20 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000003_2, Status : FAILED
Task attempt_201004171753_0001_m_000003_2 failed to report status for
601 seconds. Killing!
10/04/17 19:08:24 INFO mapred.JobClient: map 10% reduce 0%
10/04/17 19:08:31 INFO mapred.JobClient: map 13% reduce 0%
10/04/17 19:08:50 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000000_2, Status : FAILED
Task attempt_201004171753_0001_m_000000_2 failed to report status for
601 seconds. Killing!
10/04/17 19:08:56 INFO mapred.JobClient: Task Id :
attempt_201004171753_0001_m_000005_2, Status : FAILED
Task attempt_201004171753_0001_m_000005_2 failed to report status for
600 seconds. Killing!
10/04/17 19:10:41 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 19:10:44 INFO mapred.JobClient: map 20% reduce 0%
10/04/17 19:12:20 INFO mapred.JobClient: map 16% reduce 0%
10/04/17 19:12:38 INFO mapred.JobClient: Job complete: job_201004171753_0001
10/04/17 19:12:41 INFO mapred.JobClient: Counters: 2
10/04/17 19:12:41 INFO mapred.JobClient: Job Counters
10/04/17 19:12:41 INFO mapred.JobClient: Launched map tasks=29
10/04/17 19:12:41 INFO mapred.JobClient: Failed map tasks=1
10/04/17 19:12:42 INFO zookeeper.ZooKeeper: Closing session: 0x280c7c9a9c0001
10/04/17 19:12:42 INFO zookeeper.ClientCnxn: Closing ClientCnxn for
session: 0x280c7c9a9c0001
10/04/17 19:12:42 INFO zookeeper.ClientCnxn: Exception while closing
send thread for session 0x280c7c9a9c0001 : Read error rc = -1
java.nio.DirectByteBuffer[pos=0 lim=4 cap=4]
10/04/17 19:12:43 INFO zookeeper.ClientCnxn: Disconnecting ClientCnxn
for session: 0x280c7c9a9c0001
10/04/17 19:12:43 INFO zookeeper.ZooKeeper: Session: 0x280c7c9a9c0001 closed
10/04/17 19:12:43 INFO zookeeper.ClientCnxn: EventThread shut down


Also, the region server log shows repeated sequences of the following:

hadoop@Hadoopserver:~/hbase-0.20.3/logs> tail -100
hbase-hadoop-regionserver-Hadoopserver.log
at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:126)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1125)
at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:615)
at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:679)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:943)

2010-04-17 19:08:08,845 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU
eviction started. Attempting to free 21002624 bytes
2010-04-17 19:08:09,171 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU
eviction completed. Freed 21066064 bytes. Priority Sizes:
Single=92.18593MB (96663952), Multi=76.67258MB (80397024),Memory=0.0MB
(0)
2010-04-17 19:08:14,420 WARN org.apache.hadoop.ipc.HBaseServer: IPC
Server Responder, call get([B@176ebca, row=0001526875, maxVersions=1,
timeRange=[0,9223372036854775807), families={(family=info,
columns={data}}) from 192.168.1.2:33323: output error
2010-04-17 19:08:14,422 INFO org.apache.hadoop.ipc.HBaseServer: IPC
Server handler 5 on 60020 caught:
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:126)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1125)
at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:615)
at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:679)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:943)

2010-04-17 19:08:58,186 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU
eviction started. Attempting to free 20995384 bytes
2010-04-17 19:08:59,145 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU
eviction completed. Freed 20999376 bytes. Priority Sizes:
Single=92.36976MB (96856712), Multi=76.67258MB (80397024),Memory=0.0MB
(0)
2010-04-17 19:09:27,559 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes:
Total=150.6519MB (157969968), Free=49.0356MB (51417552),
Max=199.6875MB (209387520), Counts: Blocks=2355, Access=32992,
Hit=6641, Miss=26351, Evictions=76, Evicted=23993, Ratios: Hit
Ratio=20.129121840000153%, Miss Ratio=79.87087965011597%,
Evicted/Run=315.6973571777344
2010-04-17 19:09:27,563 WARN org.apache.hadoop.hbase.util.Sleeper: We
slept 27142ms, ten times longer than scheduled: 1000
2010-04-17 19:10:02,430 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes:
Total=152.55917MB (159969896), Free=47.12832MB (49417624),
Max=199.6875MB (209387520), Counts: Blocks=2385, Access=33024,
Hit=6643, Miss=26381, Evictions=76, Evicted=23993, Ratios: Hit
Ratio=20.115673542022705%, Miss Ratio=79.8843264579773%,
Evicted/Run=315.6973571777344
2010-04-17 19:11:02,492 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes:
Total=157.45457MB (165103088), Free=42.232925MB (44284432),
Max=199.6875MB (209387520), Counts: Blocks=2462, Access=33134,
Hit=6675, Miss=26459, Evictions=76, Evicted=23993, Ratios: Hit
Ratio=20.145469903945923%, Miss Ratio=79.85453009605408%,
Evicted/Run=315.6973571777344
2010-04-17 19:11:20,864 WARN org.apache.hadoop.hbase.util.Sleeper: We
slept 15430ms, ten times longer than scheduled: 1000
2010-04-17 19:12:03,171 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes:
Total=162.34995MB (170236264), Free=37.337547MB (39151256),
Max=199.6875MB (209387520), Counts: Blocks=2539, Access=33238,
Hit=6701, Miss=26537, Evictions=76, Evicted=23993, Ratios: Hit
Ratio=20.16066014766693%, Miss Ratio=79.83934283256531%,
Evicted/Run=315.6973571777344
2010-04-17 19:12:25,795 WARN org.apache.hadoop.ipc.HBaseServer: IPC
Server Responder, call get([B@c3a728, row=0001671568, maxVersions=1,
timeRange=[0,9223372036854775807), families={(family=info,
columns={data}}) from 192.168.1.3:56782: output error
2010-04-17 19:12:26,476 INFO org.apache.hadoop.ipc.HBaseServer: IPC
Server handler 9 on 60020 caught:
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:126)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1125)
at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:615)
at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:679)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:943)



Thanks in advance,
Senthil


  • Jean-Daniel Cryans at Apr 19, 2010 at 7:10 am
    Not sure where to start, there are so many things wrong with your cluster. ;)

    Commodity hardware usually means more than one CPU, and HBase itself
    requires 1 GB of RAM. Looking at slave2, for example, your datanode,
    region server and MapReduce processes are all competing for 512 MB of RAM
    and a single CPU. In the log lines you pasted, the more important part is:

    2010-04-17 19:11:20,864 WARN org.apache.hadoop.hbase.util.Sleeper: We
    slept 15430ms, ten times longer than scheduled: 1000

    That means the JVM was pausing (because of GC, or swapping, or most
    probably both) and became unresponsive. If you really wish to run
    processing on that cluster, I would use the master and slave1 as
    datanodes and region servers, and slave2 for MapReduce only. Also, slave1
    should host the NameNode, HBase Master and ZooKeeper since it has more
    RAM. Then I would configure the heaps so that nothing swaps, and allow
    only 1 map and 1 reduce slot per node (not the default of 2).
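
    Concretely, with the stock Hadoop 0.20 / HBase 0.20 file and property
    names, that would look something like the sketch below (the heap numbers
    are only examples; size them to what each box can actually spare):

    # In conf/mapred-site.xml on each tasktracker, drop the slot counts to 1:
    #   mapred.tasktracker.map.tasks.maximum    = 1
    #   mapred.tasktracker.reduce.tasks.maximum = 1
    # And pin the daemon heaps so nothing swaps, e.g. on the 1 GB master:
    echo 'export HADOOP_HEAPSIZE=256' >> ~/hadoop-0.20.1/conf/hadoop-env.sh
    echo 'export HBASE_HEAPSIZE=512'  >> ~/hbase-0.20.3/conf/hbase-env.sh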

    But still, I wouldn't expect much processing juice out of that.

    J-D

