HBase user mailing list, June 2011
Hi,

I am doing bulk insertion into HBase using MapReduce, reading from a lot of
small (roughly 10 MB) files, so the number of mappers equals the number of
files. I am also monitoring performance with Ganglia. The machines are
c1.xlarge for processing the files (task trackers + data nodes) and
m1.xlarge for the HBase cluster (region servers + data nodes). CPU usage
stays at 75-100% on almost all of the servers, and RAM usage is below 5 GB.
But the job fails because a lot of maps are killed. If I run the same job
without the insertion, processing completes in 9-10 minutes. So the question
is: why is it killing so many maps? Any clue?
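
For context, a minimal sketch of this kind of map-only insert job, writing
Puts to HBase through TableOutputFormat via TableMapReduceUtil. The table
name "mytable", column family "cf", and the tab-separated record layout are
placeholder assumptions here, not the actual job:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class BulkInsert {

  // One mapper per small input file; each line becomes a single Put.
  static class InsertMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\t"); // hypothetical layout
      Put put = new Put(Bytes.toBytes(fields[0]));    // field 0 as row key
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("d"),
              Bytes.toBytes(fields[1]));
      context.write(new ImmutableBytesWritable(put.getRow()), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "bulk-insert");
    job.setJarByClass(BulkInsert.class);
    job.setMapperClass(InsertMapper.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    // Routes the mapper's Puts to HBase via TableOutputFormat.
    TableMapReduceUtil.initTableReducerJob("mytable", null, job);
    job.setNumReduceTasks(0); // map-only: no reduce phase
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

With setNumReduceTasks(0), every mapper writes its Puts directly to the
region servers, which matches the one-map-per-file behavior described above.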


--
Regards
Shuja-ur-Rehman Baig
<http://pk.linkedin.com/in/shujamughal>
