Hi,

Running a cluster of small EC2 instances with 1.7GB of memory each, using
HDFS, MapReduce, and YARN. Cloudera 4 has set the heap size for HDFS to
52MB, but judging by top, the HDFS process appears to be using roughly double that.

The following error appears in the HDFS log file
hadoop-cmf-hdfs1-DATANODE-hadoop1.log.out, even with no MapReduce jobs
running.

---

Unexpected exception in block pool Block pool
BP-1546357427-10.96.129.120-1344011400942
(storage id DS-1473151029-10.76.178.155-50010-1344011413394)
service to hadoopmaster/10.198.119.221:8020
java.lang.OutOfMemoryError: Java heap space

---

The DataNode then crashes.

Does OutOfMemoryError imply that all physical memory on the machine has
been consumed? Looking at these nodes with htop, I see around 500MB of
physical memory in use right now, and the other 1.2GB shows up as cache,
which can be flushed with "sync ; echo 3 | sudo tee
/proc/sys/vm/drop_caches", so it isn't really being consumed, is it?
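
As a rough sketch of the kind of check I mean (assuming the JDK's jmap is
installed on the node, and <datanode-pid> stands for the DataNode's
process id):

    # JVM view: the DataNode's configured heap cap (-Xmx) and current heap usage
    jmap -heap <datanode-pid>

    # OS view: resident memory of that same process, plus overall memory,
    # where the buffers/cache figure is reclaimable rather than truly consumed
    ps -o pid,rss,vsz,cmd -p <datanode-pid>
    free -m

(My understanding is that "Java heap space" means the JVM hit its own -Xmx
cap rather than the machine physically running out of RAM, but corrections
welcome.)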

What I don't know yet is whether, immediately prior to a crash, something
unusual happens and all physical memory genuinely gets consumed.

Any suggestions for further investigation or action?

Thanks,
Sam


  • Sam Darwin at Sep 3, 2012 at 3:03 pm
    Increased datanode heap size from 50M to 150M, seems to help so far.
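
    A rough sketch of the equivalent setting, assuming a stock hadoop-env.sh
    rather than a Cloudera Manager-managed config (in Cloudera Manager the
    DataNode role's Java heap size is set through the manager itself):

        # Raise only the DataNode's heap to 150MB; other daemons keep their defaults
        export HADOOP_DATANODE_OPTS="-Xmx150m $HADOOP_DATANODE_OPTS"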
