distcp requires large heapsize when copying many files
------------------------------------------------------

Key: HADOOP-1866
URL: https://issues.apache.org/jira/browse/HADOOP-1866
Project: Hadoop
Issue Type: Bug
Components: util
Reporter: Koji Noguchi
Priority: Minor


Trying to distcp 1.5 million files with a 1 GB client heapsize failed with an OutOfMemoryError.


Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.regex.Pattern.compile(Pattern.java:1438)
at java.util.regex.Pattern.<init>(Pattern.java:1130)
at java.util.regex.Pattern.compile(Pattern.java:846)
at java.lang.String.replace(String.java:2208)
at org.apache.hadoop.fs.Path.normalizePath(Path.java:147)
at org.apache.hadoop.fs.Path.initialize(Path.java:137)
at org.apache.hadoop.fs.Path.<init>(Path.java:126)
at org.apache.hadoop.dfs.DfsPath.<init>(DfsPath.java:32)
at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.listPaths(DistributedFileSystem.java:214)
at org.apache.hadoop.fs.FileSystem.listPaths(FileSystem.java:483)
at org.apache.hadoop.fs.FileSystem.listPaths(FileSystem.java:496)
at org.apache.hadoop.fs.ChecksumFileSystem.listPaths(ChecksumFileSystem.java:539)
at org.apache.hadoop.util.CopyFiles$FSCopyFilesMapper.setup(CopyFiles.java:327)
at org.apache.hadoop.util.CopyFiles.copy(CopyFiles.java:762)
at org.apache.hadoop.util.CopyFiles.run(CopyFiles.java:808)
at org.apache.hadoop.util.ToolBase.doMain(ToolBase.java:189)
at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:818)

It would be nice if distcp didn't require gigabytes of heap when copying a large number of files.
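
The stack trace points at the client-side setup in CopyFiles$FSCopyFilesMapper.setup: the entire source tree is listed and expanded into in-memory path objects before any copying starts. Rough arithmetic makes the failure plausible: 1.5 million entries, each materializing a DfsPath plus its backing URI and strings (easily a few hundred bytes of live data apiece), already amounts to several hundred megabytes, and every Path constructed also pays for a temporary regex Pattern compiled inside String.replace during normalization, which is the GC churn visible in the trace. The following is a minimal editorial sketch of that accumulate-everything pattern, written against the modern FileSystem API with hypothetical class and method names; it is not the actual CopyFiles code.

// Sketch only (hypothetical NaiveListing class, not the real CopyFiles code):
// the whole source tree is expanded into one in-memory list before any copy starts.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NaiveListing {
  // Recursively collects every file under root into a single list.
  // With ~1.5 million files, millions of Path/FileStatus objects are live at once,
  // which together with the per-Path normalization work is enough to exhaust a
  // 1 GB client heap ("GC overhead limit exceeded").
  static List<Path> collectAllFiles(FileSystem fs, Path root) throws IOException {
    List<Path> result = new ArrayList<Path>();
    for (FileStatus stat : fs.listStatus(root)) {
      if (stat.isDirectory()) {
        result.addAll(collectAllFiles(fs, stat.getPath()));
      } else {
        result.add(stat.getPath());
      }
    }
    return result;
  }
}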






  • Owen O'Malley (JIRA) at Sep 10, 2007 at 4:47 pm
    [ https://issues.apache.org/jira/browse/HADOOP-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12526187 ]

    Owen O'Malley commented on HADOOP-1866:
    ---------------------------------------

    Koji, I assume this was on 0.13 or 0.14? The new distcp in 0.15 is almost completely re-written, so the version matters a lot.
  • Koji Noguchi (JIRA) at Sep 10, 2007 at 5:10 pm
    [ https://issues.apache.org/jira/browse/HADOOP-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Koji Noguchi updated HADOOP-1866:
    ---------------------------------

    Affects Version/s: 0.13.1

    bq. Koji, I assume this was on 0.13 or 0.14? The new distcp in 0.15 is almost completely re-written, so the version matters a lot.

Sorry, this was on 0.13.1. It looks like the 0.15 distcp handles it much better (there is no ArrayList finalPathList holding all the files; see the sketch at the end of this thread).



  • Owen O'Malley (JIRA) at Oct 24, 2007 at 6:21 pm
    [ https://issues.apache.org/jira/browse/HADOOP-1866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Owen O'Malley resolved HADOOP-1866.
    -----------------------------------

    Resolution: Duplicate
    Fix Version/s: 0.15.0
    Assignee: Chris Douglas

    This was fixed by HADOOP-1569.
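
As Koji's comment and the HADOOP-1569 resolution above indicate, the rewritten distcp in 0.15 avoids the blow-up by not holding the whole source listing in memory: the copy list is written out to a file as the tree is walked, and the copy job later reads that file as its input. Below is a minimal editorial sketch of that bounded-memory idea, again with illustrative names and the modern FileSystem / SequenceFile API rather than the actual distcp code.

// Sketch only (hypothetical StreamingListing class, not the real distcp code):
// stream each file's path into a SequenceFile as the tree is walked, instead of
// accumulating everything in an ArrayList.
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class StreamingListing {
  // Walks the source tree iteratively and appends (length, path) records to a
  // SequenceFile that a copy job can use as its input. Only one directory listing
  // and a small stack of pending directories are in memory at any time.
  static void writeCopyList(FileSystem fs, Path root, Path listFile, Configuration conf)
      throws IOException {
    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, listFile, LongWritable.class, Text.class);
    try {
      Deque<Path> pending = new ArrayDeque<Path>();
      pending.push(root);
      while (!pending.isEmpty()) {
        for (FileStatus stat : fs.listStatus(pending.pop())) {
          if (stat.isDirectory()) {
            pending.push(stat.getPath());
          } else {
            writer.append(new LongWritable(stat.getLen()),
                          new Text(stat.getPath().toString()));
          }
        }
      }
    } finally {
      writer.close();
    }
  }
}

Because each record is appended as soon as it is listed, peak client memory is bounded by the largest single directory listing plus the stack of pending directories, independent of the total number of files being copied.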
