So, I tried giving each of the region servers 2GB and also tried to
limit the number of cells/columns I'm creating, but the memory still
maxes out at 2GB and eventually I run into an OOME.
Basically, I'm just trying to store about 5 or 10 "cells" under a
column, where the values are very small (say 1, 5, or 10 characters),
and the cells may not always exist, so it's effectively a sparse
matrix.
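For concreteness, the writes currently look roughly like the snippet
below. This is just a sketch from memory of the 0.18-era client API
(HTable/BatchUpdate), and the table, column family, and attribute names
are placeholders:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.BatchUpdate;

public class SparseCellWriter {
  public static void main(String[] args) throws IOException {
    HTable table = new HTable(new HBaseConfiguration(), "mytable");

    // One row, a handful of tiny cells under the "attrs" family.
    // Attributes that don't exist for a row are simply never written,
    // which is what makes the table sparse.
    BatchUpdate update = new BatchUpdate("row-123");
    update.put("attrs:color", "red".getBytes());
    update.put("attrs:size", "5".getBytes());
    update.put("attrs:flag", "y".getBytes());
    table.commit(update);
  }
}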
I'm wondering if I should instead store serializable custom Java
objects as the cell values, where each object contains the attributes
that I'm currently trying to store as individual columns/cell values.
Some of the attributes would be null if they're not present. I'm not
sure if there is any benefit to that.
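In other words, something like the hypothetical class below, where the
whole object gets serialized into one cell instead of 5-10 tiny cells
per row (Attributes and its fields are made-up names, just to
illustrate):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical container for the attributes I currently store as separate columns.
// Attributes that aren't present for a row would just stay null.
public class Attributes implements Serializable {
  private static final long serialVersionUID = 1L;

  public String color;
  public String size;
  public String flag;

  // Standard Java serialization to a byte[] suitable for a single cell value.
  public byte[] toBytes() throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ObjectOutputStream oos = new ObjectOutputStream(bos);
    oos.writeObject(this);
    oos.close();
    return bos.toByteArray();
  }
}

The write would then be a single update.put("attrs:blob",
attributes.toBytes()) rather than one put per attribute.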
Would this help at all with the memory issues? Or should I move off
HBase 0.18.1 and try a different release?
Thanks,
Ryan
On Fri, Dec 19, 2008 at 2:10 PM, Ryan LeCompte wrote:
Thanks, I'll give it a shot.
On Fri, Dec 19, 2008 at 2:09 PM, stack wrote:
Your cells are small? If so, please up your heap size (see
$HBASE_HOME/conf/hbase-env.sh; see HADOOP_HEAPSIZE). This should be better
in 0.19.0 (see HBASE-900).
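(For example, something like export HADOOP_HEAPSIZE=4000 in that file
would give a roughly 4 GB heap; the value is in MB if I recall
correctly, and 4000 is only an illustrative number.)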
Thanks,
St.Ack
Ryan LeCompte wrote:
Hello all,
I'm trying to get a new HBase cluster up and running on top of an
existing 5 node Hadoop cluster.
I've basically set it up such that the HBase master is running on the
same machine as the name node, and the region servers are running on
the other 4 machines as slaves.
I'm trying to populate a single table with about 10GB of data via a
map/reduce job, and I'm noticing that the region servers are running
out of memory.
Any pointers on what could be going on here?
Thanks,
Ryan