with key/value = 10/100 bytes
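(For reference, a minimal sketch of how the setting under test could be varied between runs, assuming the standard Hadoop Configuration API; the actual test harness is not shown in this thread, and the value may equally have been set in hdfs-site.xml.)

    import org.apache.hadoop.conf.Configuration;

    public class PacketSizeConfig {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // The three runs below differ only in this HDFS client setting:
            // 64 * 1024, 32 * 1024, 16 * 1024.
            conf.setInt("dfs.write.packet.size", 64 * 1024);
            System.out.println(conf.getInt("dfs.write.packet.size", -1));
        }
    }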
*dfs.write.packet.size = 64k*
Writer Finished,Consume time is: 383 seconds
==================Summary==================
Tables And Columns: hbasetest(10){caches:value1(100)}
----------Writer Statistics--------------
Write Threads: 100
Write Rows: 10000000
Consume Time: 383s
Requests: 10000000 Success: 100% (10000000) Error: 0% (0)
Avg TPS: 26041 Max TPS: 44460 Min TPS: 3183
Avg RT: 3ms
RT <= 0: 0% 3483/10000000
RT (0,1]: 12% 1273633/10000000
RT (1,5]: 81% 8123190/10000000
RT (5,10]: 1% 147321/10000000
RT (10,50]: 3% 371064/10000000
RT (50,100]: 0% 74896/10000000
RT (100,500]: 0% 6400/10000000
RT (500,1000]: 0% 13/10000000
RT > 1000: 0% 0/10000000
jstat -gcutil 19968 2s
S0 S1 E O P YGC YGCT FGC FGCT GCT
58.44 0.00 54.47 15.66 99.85 154 4.191 0 0.000 4.191
*dfs.write.packet.size = 32k*
Writer Finished,Consume time is: 367 seconds
==================Summary==================
Tables And Columns: hbasetest(10){caches:value1(100)}
----------Writer Statistics--------------
Write Threads: 100
Write Rows: 10000000
Consume Time: 367s
Requests: 10000000 Success: 100% (10000000) Error: 0% (0)
Avg TPS: 27173 Max TPS: 45276 Min TPS: 968
Avg RT: 3ms
RT <= 0: 0% 7274/10000000
RT (0,1]: 19% 1948293/10000000
RT (1,5]: 73% 7350970/10000000
RT (5,10]: 2% 259443/10000000
RT (10,50]: 3% 371545/10000000
RT (50,100]: 0% 56944/10000000
RT (100,500]: 0% 5360/10000000
RT (500,1000]: 0% 85/10000000
RT > 1000: 0% 86/10000000
S0 S1 E O P YGC YGCT FGC FGCT GCT
0.00 92.02 92.32 14.89 99.74 67 2.668 0 0.000 2.668
*dfs.write.packet.size = 16k*
Writer Finished,Consume time is: 364 seconds
==================Summary==================
Tables And Columns: hbasetest(10){caches:value1(100)}
----------Writer Statistics--------------
Write Threads: 100
Write Rows: 10000000
Consume Time: 364s
Requests: 10000000 Success: 100% (10000000) Error: 0% (0)
Avg TPS: 27397 Max TPS: 45309 Min TPS: 890
Avg RT: 3ms
RT <= 0: 0% 9291/10000000
RT (0,1]: 21% 2118605/10000000
RT (1,5]: 71% 7192119/10000000
RT (5,10]: 2% 265516/10000000
RT (10,50]: 3% 346697/10000000
RT (50,100]: 0% 61084/10000000
RT (100,500]: 0% 6590/10000000
RT (500,1000]: 0% 15/10000000
RT > 1000: 0% 83/10000000
S0 S1 E O P YGC YGCT FGC FGCT GCT
53.45 0.00 77.52 15.31 99.24 50 2.295 0 0.000 2.295
2012/7/29 宾莉金 <binlijin@gmail.com>
Hi, all devs,
Currently, when FSDataOutputStream's write and sync methods are called, a
new Packet may be created and put into the dataQueue; the daemon DataStreamer
then takes packets from the dataQueue and writes them to the datanode.
I find that every Packet allocates a new byte array of 65557 bytes (more
than 64K), but pktLen is usually only a few hundred or a few thousand bytes. So
a big byte array is allocated and only a tiny part of it is used; the utilization
rate is very low.
If 1024 Packets are created per second, that is 64MB+ of garbage per
second (1024 x 64KB = 64MB).
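A paraphrased sketch of the allocation pattern just described (field names
are illustrative, not the exact Hadoop DFSClient source):

    // Illustrative paraphrase of the per-Packet allocation described above.
    class Packet {
        final byte[] buf;    // always sized for a maximum-size packet
        int dataLen = 0;     // actual payload bytes written into buf

        Packet(int maxPktSize) {
            // ~65557 bytes per allocation with a 64k packet size, even
            // when pktLen ends up being only a few hundred bytes.
            this.buf = new byte[maxPktSize];
        }
    }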
I rewrote the new-Packet logic with a PacketPool (sketched below). I find the
HRegionServer's GC frequency and total GC time drop a lot, but a single GC
now takes much longer, so I reduced the JVM new-generation size; the result was
the same: the average RT increased and TPS dropped. I can't see why, and hope
someone can tell me; maybe the reason is GC.
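A minimal sketch of the PacketPool idea: recycle the large per-packet buffers
instead of allocating a fresh byte[] for every Packet. This illustrates the
approach only; it is not the actual patch.

    import java.util.concurrent.ConcurrentLinkedQueue;

    class PacketPool {
        private final ConcurrentLinkedQueue<byte[]> pool =
                new ConcurrentLinkedQueue<byte[]>();
        private final int bufSize;

        PacketPool(int bufSize) {
            this.bufSize = bufSize;
        }

        byte[] acquire() {
            byte[] buf = pool.poll();
            return (buf != null) ? buf : new byte[bufSize];
        }

        void release(byte[] buf) {
            // Returned buffers become long-lived instead of
            // short-lived garbage.
            if (buf.length == bufSize) {
                pool.offer(buf);
            }
        }
    }

Note the trade-off: pooled buffers are long-lived objects rather than
short-lived garbage, so the collector's generational behaviour changes.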
Although the average RT increased, the distribution widened: more requests
now have a very small RT, but more requests also have a very big RT, and
that long tail pulls the average RT up.
--
*Best Regards,*
lijin bin