Hi,
we're running 100 XLarge EC2 instances, with 1 GB of heap space for
each task, and are frequently (but not always) seeing the following
error:
##### BEGIN PASTE #####
[exec] 08/09/03 11:21:09 INFO mapred.JobClient: map 43% reduce 5%
[exec] 08/09/03 11:21:16 INFO mapred.JobClient: Task Id : attempt_200809031101_0001_m_000220_0, Status : FAILED
[exec] java.io.IOException: Spill failed
[exec]     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:688)
[exec]     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:228)
[exec]     at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2209)
[exec] Caused by: java.lang.OutOfMemoryError: Java heap space
[exec]     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$InMemValBytes.reset(MapTask.java:928)
[exec]     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.getVBytesForOffset(MapTask.java:891)
[exec]     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:765)
[exec]     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$1600(MapTask.java:286)
[exec]     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:712)
##### END #####
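For context, the trace shows the OutOfMemoryError being thrown inside the map side's in-memory sort buffer (MapOutputBuffer) while the spill thread runs sortAndSpill, so the relevant knobs are the child-task heap and the sort buffer that has to fit inside it. Our settings look roughly like the sketch below (property names as in Hadoop of this era; the io.sort.mb value is illustrative, not necessarily what we have deployed):

```xml
<!-- mapred-site.xml (illustrative values) -->
<configuration>
  <!-- 1 GB heap per child task, as described above -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
  <!-- in-memory sort buffer used by MapOutputBuffer; it must fit inside
       the child heap with headroom left over for the spill itself -->
  <property>
    <name>io.sort.mb</name>
    <value>100</value>
  </property>
</configuration>
```

If the sort buffer plus the map's own working set approaches -Xmx, a spill can push the task over the edge, which would match the intermittent failures we're seeing.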
Has anyone seen this? Thanks,
Florian Leibert
Sr. Software Engineer
Adknowledge Inc.