Hadoop QA (JIRA)
Mar 6, 2008 at 5:22 am
[ https://issues.apache.org/jira/browse/HADOOP-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12575566#action_12575566 ]
Hadoop QA commented on HADOOP-2943:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12377219/2943.patch
against trunk revision 619744.
@author +1. The patch does not contain any @author tags.
tests included -1. The patch doesn't appear to include any new or modified tests.
Please justify why no tests are needed for this patch.
javadoc +1. The javadoc tool did not generate any warning messages.
javac +1. The applied patch does not generate any new javac compiler warnings.
release audit +1. The applied patch does not generate any new release audit warnings.
findbugs +1. The patch does not introduce any new Findbugs warnings.
core tests +1. The patch passed core unit tests.
contrib tests +1. The patch passed contrib unit tests.
Test results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1901/testReport/
Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1901/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1901/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/1901/console
This message is automatically generated.
Compression for intermediate map output is broken
-------------------------------------------------
Key: HADOOP-2943
URL: https://issues.apache.org/jira/browse/HADOOP-2943
Project: Hadoop Core
Issue Type: Bug
Components: mapred
Reporter: Chris Douglas
Assignee: Chris Douglas
Attachments: 2943.patch, 2943.patch
It looks like SequenceFile::RecordCompressWriter and SequenceFile::BlockCompressWriter weren't updated to use the new serialization added in HADOOP-1986. This causes failures in the merge when mapred.compress.map.output is true and mapred.map.output.compression.type=BLOCK:
{noformat}
java.io.IOException: File is corrupt!
at org.apache.hadoop.io.SequenceFile$Reader.readBlock(SequenceFile.java:1656)
at org.apache.hadoop.io.SequenceFile$Reader.nextRawKey(SequenceFile.java:1969)
at org.apache.hadoop.io.SequenceFile$Sorter$SegmentDescriptor.nextRawKey(SequenceFile.java:2985)
at org.apache.hadoop.io.SequenceFile$Sorter$MergeQueue.merge(SequenceFile.java:2785)
at org.apache.hadoop.io.SequenceFile$Sorter.merge(SequenceFile.java:2494)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:654)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:740)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:212)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2077)
{noformat}
mapred.map.output.compression.type=RECORD works for Writables, but it should also be updated to use the new serialization.
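For anyone trying to reproduce this, here is a minimal sketch of the JobConf settings that trigger the corrupt merge (assuming the old org.apache.hadoop.mapred API of this era; the helper class and method names are hypothetical, for illustration only):
{code:java}
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.mapred.JobConf;

// Hypothetical helper, for illustration only.
public class MapOutputCompressionRepro {
  public static JobConf configure(JobConf conf) {
    // Compress intermediate map output...
    conf.setBoolean("mapred.compress.map.output", true);
    // ...using BLOCK compression; with this combination the map-side
    // merge reads the compressed segments back and fails as above.
    conf.set("mapred.map.output.compression.type",
             SequenceFile.CompressionType.BLOCK.toString());
    return conf;
  }
}
{code}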
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.