------------------------------------------------------
Key: HADOOP-1866
URL: https://issues.apache.org/jira/browse/HADOOP-1866
Project: Hadoop
Issue Type: Bug
Components: util
Reporter: Koji Noguchi
Priority: Minor
Trying to distcp 1.5 million files with a 1 GB client heap size failed with an OutOfMemoryError:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.regex.Pattern.compile(Pattern.java:1438)
at java.util.regex.Pattern.<init>(Pattern.java:846)
at java.lang.String.replace(String.java:2208)
at org.apache.hadoop.fs.Path.normalizePath(Path.java:147)
at org.apache.hadoop.fs.Path.initialize(Path.java:137)
at org.apache.hadoop.fs.Path.<init>(DfsPath.java:32)
at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.listPaths(DistributedFileSystem.java:214)
at org.apache.hadoop.fs.FileSystem.listPaths(FileSystem.java:483)
at org.apache.hadoop.fs.FileSystem.listPaths(FileSystem.java:496)
at org.apache.hadoop.fs.ChecksumFileSystem.listPaths(ChecksumFileSystem.java:539)
at org.apache.hadoop.util.CopyFiles$FSCopyFilesMapper.setup(CopyFiles.java:327)
at org.apache.hadoop.util.CopyFiles.copy(CopyFiles.java:762)
at org.apache.hadoop.util.CopyFiles.run(CopyFiles.java:808)
at org.apache.hadoop.util.ToolBase.doMain(ToolBase.java:189)
at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:818)
It would be nice if distcp didn't require gigabytes of heap when copying a large number of files.
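The trace shows the OOM happens while listPaths materializes a Path object for every source file up front, so heap usage grows linearly with the number of files. One way to keep memory bounded is to stream each path to a listing file on disk as it is discovered, holding only the pending directories in memory. A minimal sketch of that idea in plain Java (the class `StreamedListing` and method `writeFileList` are hypothetical names for illustration, not Hadoop APIs):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.Deque;

public class StreamedListing {
    // Walk a directory tree iteratively and write each file path to the
    // listing file as soon as it is found, so heap usage is bounded by
    // the number of pending directories, not the total file count.
    static long writeFileList(Path root, Path listFile) throws IOException {
        long count = 0;
        try (BufferedWriter out = Files.newBufferedWriter(listFile)) {
            Deque<Path> pending = new ArrayDeque<>();
            pending.push(root);
            while (!pending.isEmpty()) {
                Path dir = pending.pop();
                try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
                    for (Path p : entries) {
                        if (Files.isDirectory(p)) {
                            pending.push(p);   // only directories are held in memory
                        } else {
                            out.write(p.toString());
                            out.newLine();     // file path goes straight to disk
                            count++;
                        }
                    }
                }
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("distcp-demo");
        Path sub = Files.createDirectory(root.resolve("sub"));
        Files.createFile(root.resolve("a.txt"));
        Files.createFile(sub.resolve("b.txt"));
        Path list = Files.createTempFile("filelist", ".txt");
        System.out.println("listed " + writeFileList(root, list) + " files");
    }
}
```

The copy job could then read paths back from the listing file one at a time instead of keeping 1.5 million Path objects live on the client heap.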