Hadoop Record compiler generates Java files with erroneous byte-array lengths for fields trailing a 'ustring' field

Key: HADOOP-7651
URL: https://issues.apache.org/jira/browse/HADOOP-7651
Project: Hadoop Common
Issue Type: Bug
Components: record
Affects Versions: 0.21.0
Reporter: Hung-chih Yang

The Hadoop Record compiler produces Java files from a DDL file. If a DDL file defines a class that contains a 'ustring' field, the generated 'compareRaw()' function for that record miscomputes the number of bytes remaining in the buffers after skipping past the 'ustring' field.

Below is a line in a generated 'compareRaw()' function for a record class with a 'ustring' field :
s1+=i1; s2+=i2; l1-=i1; l1-=i2;
This line should be corrected by changing the last 'l1' to 'l2':
s1+=i1; s2+=i2; l1-=i1; l2-=i2;

To fix this bug, one should correct the 'genCompareBytes()' function in the 'JString.java' file of the package 'org.apache.hadoop.record.compiler' by changing the first line below to the second. The two lines differ by a single digit:

cb.append("s1+=i1; s2+=i2; l1-=i1; l1-=i2;\n");

cb.append("s1+=i1; s2+=i2; l1-=i1; l2-=i2;\n");

This bug is serious, as it will always crash deserialization of a record with a definition as simple as the one below:
class PairStringDouble {
ustring first;
double second;
}
Deserializing a record of this class throws an exception because the 'second' field no longer appears to have 8 bytes available for its double value, due to the erroneous computation of the remaining buffer length.
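The effect of the one-character typo can be reproduced without Hadoop. The sketch below is illustrative only: the class and method names are hypothetical, and the string is written with a fixed 4-byte length prefix rather than the variable-length int the real record format uses, but the length bookkeeping mirrors the generated 'compareRaw()'.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class UstringLengthBug {

    // Serialize a PairStringDouble-like record: length-prefixed string,
    // then an 8-byte double. (Fixed 4-byte prefix used for illustration.)
    static byte[] serialize(String first, double second) {
        byte[] utf = first.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + utf.length + 8);
        buf.putInt(utf.length).put(utf).putDouble(second);
        return buf.array();
    }

    // Mimics the length bookkeeping of the generated compareRaw() after
    // the 'ustring' field has been measured in both buffers. Returns the
    // bytes the code believes remain in buffer 1, which must be >= 8 for
    // the trailing double to be readable. 'buggy' reproduces 'l1-=i2'.
    static int remainingAfterString(byte[] b1, byte[] b2, boolean buggy) {
        int l1 = b1.length, l2 = b2.length;
        int i1 = 4 + ByteBuffer.wrap(b1).getInt(); // bytes consumed by 'first' in buffer 1
        int i2 = 4 + ByteBuffer.wrap(b2).getInt(); // bytes consumed by 'first' in buffer 2
        l1 -= i1;
        if (buggy) l1 -= i2;   // the reported bug: l1 decremented twice
        else       l2 -= i2;   // the corrected line
        return l1;
    }
}
```

For a record serialized from ("ab", 1.5) the buffer is 14 bytes and the string field occupies 6, so the corrected code leaves 8 bytes for the double while the buggy code leaves only 2, triggering the reported exception.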

Both Hadoop 0.20 and 0.21 have this bug.

This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

Posted to common-dev on Sep 17, 2011 at 9:43a by Hung-chih Yang (JIRA); 1 post in discussion.