FAQ
Moving to mapreduce-user@, bcc: common-user

Have you tried bumping up the heap for the map task?

Since you are setting io.sort.mb to 256 MB, please set the heap size to at
least 512 MB, if not more.
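Roughly why the current setting fails: in 0.20.x the map task's MapOutputBuffer allocates the full io.sort.mb buffer up front, and the default child heap (-Xmx200m) is smaller than a 256 MB buffer. A minimal sketch of the arithmetic (a plain illustration, not the Hadoop source):

```java
// Sketch only: shows the size mismatch, not Hadoop's actual allocation code.
public class SortBufferSketch {
    public static void main(String[] args) {
        int sortmb = 256;                        // io.sort.mb as set in the job
        long bufferBytes = (long) sortmb << 20;  // 268435456 bytes requested at task init
        long defaultHeap = 200L << 20;           // -Xmx200m, the 0.20 default child heap
        // The buffer alone exceeds the entire heap, so the allocation in
        // MapOutputBuffer.<init> throws java.lang.OutOfMemoryError.
        System.out.println(bufferBytes > defaultHeap);
    }
}
```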

mapred.child.java.opts -> -Xmx512M or -Xmx1024m
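For example, a minimal configuration fragment (a sketch, assuming the settings live in the job's mapred-site.xml; the same properties can also be passed per job with -D on the command line):

```xml
<!-- Sketch: raise the sort buffer and give the child JVM enough heap for it. -->
<property>
  <name>io.sort.mb</name>
  <value>256</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
```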

Arun
On Mar 11, 2010, at 8:24 AM, Boyu Zhang wrote:

Dear All,

I am running a Hadoop job processing data. The map output is really
large, and it spills 15 times. So I tried setting io.sort.mb = 256
instead of 100, leaving everything else at the defaults. I am using
version 0.20.2. When I run the job, I get the following errors:

2010-03-11 11:09:37,581 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=MAP, sessionId=
2010-03-11 11:09:38,073 INFO org.apache.hadoop.mapred.MapTask:
numReduceTasks: 1
2010-03-11 11:09:38,086 INFO org.apache.hadoop.mapred.MapTask:
io.sort.mb = 256
2010-03-11 11:09:38,326 FATAL org.apache.hadoop.mapred.TaskTracker:
Error running child : java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:781)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:350)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.Child.main(Child.java:170)


I can't figure out why; could anyone please give me a hint? Any help
would be appreciated! Thanks a lot!

Sincerely,

Boyu

Discussion Overview
group: mapreduce-user
categories: hadoop
posted: Mar 11, '10 at 5:28p
active: Mar 11, '10 at 5:28p
posts: 1
users: 1
website: hadoop.apache.org...
irc: #hadoop
irc#hadoop

1 user in discussion

Arun C Murthy: 1 post
