avoiding unnecessary byte[] allocation in SequenceFile.CompressedBytes and SequenceFile.UncompressedBytes
---------------------------------------------------------------------------------------------------------

Key: HADOOP-5196
URL: https://issues.apache.org/jira/browse/HADOOP-5196
Project: Hadoop Core
Issue Type: Improvement
Components: io
Affects Versions: 0.21.0
Reporter: Hong Tang
Priority: Minor


SequenceFile.CompressedBytes and SequenceFile.UncompressedBytes are used by SequenceFile's raw-bytes reading/writing API. The current implementation does not reuse the internal byte[], which causes unnecessary buffer allocation and initialization (zeroing of the buffer).
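
For context, the improvement being requested is the standard buffer-reuse pattern: keep one backing byte[] in the holder object, grow it only when an incoming record is larger than the current capacity, and track the valid length separately, so a fresh (zero-filled) array is not allocated for every record. A minimal Java sketch of that pattern follows; the class, field, and method names are illustrative only and do not reproduce the actual SequenceFile internals or the attached patch.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // Illustrative holder for a record's raw bytes that reuses its backing array.
    class ReusableRawBytes {
      private byte[] data = new byte[0]; // backing buffer, reused across records
      private int dataSize = 0;          // number of valid bytes in 'data'

      // Read the next 'length' bytes, reallocating only when capacity is insufficient.
      void reset(DataInputStream in, int length) throws IOException {
        if (data.length < length) {
          // Growing here is the only allocation; the zeroing cost of the new array
          // is paid once per growth, not once per record.
          data = new byte[length];
        }
        dataSize = length;
        in.readFully(data, 0, length); // overwrites the valid region directly
      }

      int getSize() {
        return dataSize;
      }

      // Write only the valid prefix; bytes beyond dataSize are stale and ignored.
      void writeBytes(DataOutputStream out) throws IOException {
        out.write(data, 0, dataSize);
      }
    }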

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


  • Hong Tang (JIRA) at Feb 7, 2009 at 2:43 am
    [ https://issues.apache.org/jira/browse/HADOOP-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Hong Tang reassigned HADOOP-5196:
    ---------------------------------

    Assignee: Hong Tang
  • Hong Tang (JIRA) at Feb 10, 2009 at 6:33 am
    [ https://issues.apache.org/jira/browse/HADOOP-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Hong Tang updated HADOOP-5196:
    ------------------------------

    Attachment: HADOOP-5196-trunk.patch
  • Hong Tang (JIRA) at Feb 10, 2009 at 6:35 am
    [ https://issues.apache.org/jira/browse/HADOOP-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Hong Tang updated HADOOP-5196:
    ------------------------------

    Fix Version/s: 0.21.0
    Status: Patch Available (was: Open)
  • Hadoop QA (JIRA) at Feb 10, 2009 at 7:31 am
    [ https://issues.apache.org/jira/browse/HADOOP-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12672189#action_12672189 ]

    Hadoop QA commented on HADOOP-5196:
    -----------------------------------

    -1 overall. Here are the results of testing the latest attachment
    http://issues.apache.org/jira/secure/attachment/12399887/HADOOP-5196-trunk.patch
    against trunk revision 742827.

    +1 @author. The patch does not contain any @author tags.

    -1 tests included. The patch doesn't appear to include any new or modified tests.
    Please justify why no tests are needed for this patch.

    +1 javadoc. The javadoc tool did not generate any warning messages.

    +1 javac. The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs. The patch does not introduce any new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    +1 release audit. The applied patch does not increase the total number of release audit warnings.

    -1 core tests. The patch failed core unit tests.

    -1 contrib tests. The patch failed contrib unit tests.

    Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3824/testReport/
    Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3824/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
    Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3824/artifact/trunk/build/test/checkstyle-errors.html
    Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3824/console

    This message is automatically generated.
  • Hong Tang (JIRA) at Mar 17, 2009 at 11:48 pm
    [ https://issues.apache.org/jira/browse/HADOOP-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Hong Tang updated HADOOP-5196:
    ------------------------------

    Status: Patch Available (was: Open)
  • Hong Tang (JIRA) at Mar 17, 2009 at 11:48 pm
    [ https://issues.apache.org/jira/browse/HADOOP-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Hong Tang updated HADOOP-5196:
    ------------------------------

    Status: Open (was: Patch Available)
  • Chris Douglas (JIRA) at Mar 18, 2009 at 1:58 am
    [ https://issues.apache.org/jira/browse/HADOOP-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12682874#action_12682874 ]

    Chris Douglas commented on HADOOP-5196:
    ---------------------------------------

    +1
  • Hadoop QA (JIRA) at Mar 19, 2009 at 1:46 pm
    [ https://issues.apache.org/jira/browse/HADOOP-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12683468#action_12683468 ]

    Hadoop QA commented on HADOOP-5196:
    -----------------------------------

    -1 overall. Here are the results of testing the latest attachment
    http://issues.apache.org/jira/secure/attachment/12399887/HADOOP-5196-trunk.patch
    against trunk revision 755790.

    +1 @author. The patch does not contain any @author tags.

    -1 tests included. The patch doesn't appear to include any new or modified tests.
    Please justify why no tests are needed for this patch.

    +1 javadoc. The javadoc tool did not generate any warning messages.

    +1 javac. The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs. The patch does not introduce any new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    +1 release audit. The applied patch does not increase the total number of release audit warnings.

    +1 core tests. The patch passed core unit tests.

    +1 contrib tests. The patch passed contrib unit tests.

    Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/105/testReport/
    Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/105/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
    Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/105/artifact/trunk/build/test/checkstyle-errors.html
    Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/105/console

    This message is automatically generated.
  • Mahadev konar (JIRA) at Mar 19, 2009 at 5:45 pm
    [ https://issues.apache.org/jira/browse/HADOOP-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Mahadev konar updated HADOOP-5196:
    ----------------------------------

    Resolution: Fixed
    Hadoop Flags: [Reviewed]
    Status: Resolved (was: Patch Available)

    I just committed this. Thanks, Hong.
  • Hudson (JIRA) at Mar 20, 2009 at 7:31 pm
    [ https://issues.apache.org/jira/browse/HADOOP-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12684008#action_12684008 ]

    Hudson commented on HADOOP-5196:
    --------------------------------

    Integrated in Hadoop-trunk #785 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/785/])
    HADOOP-5196. avoiding unnecessary byte[] allocation in SequenceFile.CompressedBytes and SequenceFile.UncompressedBytes. (hong tang via mahadev)

