On Thu, 21 Oct 2010 12:44:24 +0200 Erik Forsberg wrote:
attempt_201010201640_0001_r_000000_0 Merging of the local FS files threw an exception: java.io.IOException: java.lang.RuntimeException: java.io.EOFException
        at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:128)
2010-10-21 10:31:24,696 INFO org.apache.hadoop.mapred.ReduceTask: header: attempt_201010201640_0186_m_000579_0, compressed len: 2281015, decompressed len: 10368075
2010-10-21 10:31:24,696 INFO org.apache.hadoop.mapred.ReduceTask: Shuffling 10368075 bytes (2281015 raw bytes) into RAM from attempt_201010201640_0186_m_000579_0
2010-10-21 10:31:24,744 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2010-10-21 10:31:24,854 INFO org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 10 segments left of total size: 582879560 bytes
2010-10-21 10:31:27,396 FATAL org.apache.hadoop.mapred.TaskRunner: attempt_201010201640_0186_r_000027_2 : Failed to merge in memory
java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.io.BytesWritable.setCapacity(BytesWritable.java:119)
        at org.apache.hadoop.io.BytesWritable.setSize(BytesWritable.java:98)
        at org.apache.hadoop.io.BytesWritable.readFields(BytesWritable.java:153)
        at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:122)
        at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:373)
        at org.apache.hadoop.util.PriorityQueue.downHeap(PriorityQueue.java:139)
        at org.apache.hadoop.util.PriorityQueue.adjustTop(PriorityQueue.java:103)
        at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:335)
        at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:350)
        at org.apache.hadoop.mapred.Merger.writeFile(Merger.java:156)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2635)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:2576)
2010-10-21 10:31:27,397 WARN org.apache.hadoop.mapred.ReduceTask: attempt_201010201640_0186_r_000027_2 Merging of the local FS files threw an exception: java.io.IOException: java.lang.RuntimeException: java.io.EOFException
        at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:128)
        at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:373)
        at org.apache.hadoop.util.PriorityQueue.downHeap(PriorityQueue.java:144)
        at org.apache.hadoop.util.PriorityQueue.adjustTop(PriorityQueue.java:103)
        at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:335)
        at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:350)
        at org.apache.hadoop.mapred.Merger.writeFile(Merger.java:156)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$LocalFSMerger.run(ReduceTask.java:2529)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readFully(DataInputStream.java:180)
        at org.apache.hadoop.io.BytesWritable.readFields(BytesWritable.java:154)
        at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:122)
        ... 7 more
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$LocalFSMerger.run(ReduceTask.java:2533)
2010-10-21 10:31:27,714 INFO org.apache.hadoop.mapred.ReduceTask: GetMapEventsThread exiting
2010-10-21 10:31:27,717 INFO org.apache.hadoop.mapred.ReduceTask: getMapsEventsThread joined.
2010-10-21 10:31:27,727 INFO org.apache.hadoop.mapred.ReduceTask: Closed ram manager
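The FATAL entry above shows the in-memory shuffle merge exhausting the reducer heap. On 0.20-era Hadoop, one thing worth trying is shrinking the shuffle's share of the heap so segments spill to disk earlier, and/or giving the child JVMs more heap. A sketch of the mapred-site.xml knobs involved (values are illustrative only, not a recommendation; check your cluster's mapred-default.xml for the actual defaults):

```xml
<!-- Fraction of the reducer heap used to buffer map outputs during
     the shuffle (default 0.70); lowering it forces earlier spills
     to disk and leaves more headroom for the in-memory merge. -->
<property>
  <name>mapred.job.shuffle.input.buffer.percent</name>
  <value>0.50</value>
</property>

<!-- Heap size for each map/reduce child JVM. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
```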
Then, on the second try, the reduce tasks fail with the java.io.EOFException shown above.
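For what it's worth, the EOFException frames (java.io.DataInputStream.readFully inside BytesWritable.readFields, called from WritableComparator.compare) are what you would expect if a spilled map-output record on local disk is truncated: readFields reads a length prefix and then expects exactly that many payload bytes. A JDK-only sketch of that failure mode, using a simplified stand-in for BytesWritable rather than the actual Hadoop class:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

// Simplified model of BytesWritable.readFields: a 4-byte length
// prefix followed by that many payload bytes. Hypothetical demo
// code, not Hadoop's implementation.
public class TruncatedRecordDemo {
    static byte[] readRecord(DataInputStream in) throws IOException {
        int len = in.readInt();       // length prefix (cf. setSize/setCapacity)
        byte[] buf = new byte[len];
        in.readFully(buf, 0, len);    // throws EOFException if stream is truncated
        return buf;
    }

    static String demo() throws IOException {
        // Build a record claiming 8 payload bytes but supplying only 3,
        // mimicking a spill file cut short mid-record.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(8);
        out.write(new byte[] {1, 2, 3});

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        try {
            readRecord(in);
            return "read ok";
        } catch (EOFException e) {
            return "EOFException on truncated record";
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```

So the second-try EOFException may just be the merge re-reading output that was already damaged (or left half-written) when the first attempt died of OutOfMemoryError, rather than an independent bug.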
\EF