distcp fail copying to /user/<username>/<newtarget> (with permission on)
------------------------------------------------------------------------

Key: HADOOP-3138
URL: https://issues.apache.org/jira/browse/HADOOP-3138
Project: Hadoop Core
Issue Type: Bug
Components: util
Affects Versions: 0.16.1
Reporter: Koji Noguchi


When distcp-ing to /user/<username>/<newtarget>, I get the following error:

{noformat}
Copy failed: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=knoguchi, access=WRITE, inode="user":superuser:superusergroup:rwxr-xr-x

at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:173)
at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:154)
at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:102)
at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:4037)
at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4007)
at org.apache.hadoop.dfs.FSNamesystem.mkdirsInternal(FSNamesystem.java:1576)
at org.apache.hadoop.dfs.FSNamesystem.mkdirs(FSNamesystem.java:1559)
at org.apache.hadoop.dfs.NameNode.mkdirs(NameNode.java:422)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:899)

at org.apache.hadoop.ipc.Client.call(Client.java:512)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
at org.apache.hadoop.dfs.$Proxy0.mkdirs(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at org.apache.hadoop.dfs.$Proxy0.mkdirs(Unknown Source)
at org.apache.hadoop.dfs.DFSClient.mkdirs(DFSClient.java:550)
at org.apache.hadoop.dfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:184)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:980)
at org.apache.hadoop.util.CopyFiles.setup(CopyFiles.java:735)
at org.apache.hadoop.util.CopyFiles.copy(CopyFiles.java:525)
at org.apache.hadoop.util.CopyFiles.run(CopyFiles.java:596)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:612)
{noformat}

In the distcp setup code ({{CopyFiles.setup}}), we have:
{noformat}
if (!dstExists || !dstIsDir) {
  Path parent = destPath.getParent();
  dstfs.mkdirs(parent);
  logPath = new Path(parent, filename);
}
{noformat}
Should we check whether the parent path exists before calling mkdirs?
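The guard being proposed can be sketched in plain Java. This is only an illustration: {{java.nio.file}} stands in for {{org.apache.hadoop.fs.FileSystem}} (which is not assumed to be on the classpath), and the class and method names here are hypothetical, not the actual CopyFiles code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SetupGuard {
    // Sketch of the proposed extra check, with java.nio.file standing in
    // for the Hadoop FileSystem API. The idea: only call the mkdirs
    // analogue when the parent is actually missing, so an existing but
    // unwritable ancestor is never asked for WRITE access.
    static Path ensureParent(Path destPath) throws IOException {
        Path parent = destPath.getParent();
        if (parent != null && !Files.exists(parent)) { // proposed extra check
            Files.createDirectories(parent);           // mkdirs analogue
        }
        return parent;
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("distcp-demo");
        Path dest = base.resolve("newtarget");
        // The parent already exists, so no directory creation is attempted.
        System.out.println(Files.exists(ensureParent(dest))); // prints "true"
    }
}
```

With the check in place, the namenode never sees an mkdirs RPC for an already-existing parent, so its ancestor WRITE check is never triggered.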


--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


  • Tsz Wo (Nicholas), SZE (JIRA) at Mar 31, 2008 at 6:02 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Tsz Wo (Nicholas), SZE reassigned HADOOP-3138:
    ----------------------------------------------

    Assignee: Tsz Wo (Nicholas), SZE
    bq. We should check if parent path exists before calling mkdir?
    Sure. I will fix it.
  • Raghu Angadi (JIRA) at Mar 31, 2008 at 6:22 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12583782#action_12583782 ]

    Raghu Angadi commented on HADOOP-3138:
    --------------------------------------
    bq. We should check if parent path exists before calling mkdir?
    Should this be a client-side fix? HDFS mkdirs is like {{mkdir -p}}: if the directory already exists, the call should succeed (at least on my Linux machine, {{mkdir -p /usr}} succeeds without error even though I don't have permission to create /usr). So creating a directory that already exists should not result in an access-denied error (unless, perhaps, the user lacks read permission on the parent directory).
  • Tsz Wo (Nicholas), SZE (JIRA) at Mar 31, 2008 at 8:36 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12583830#action_12583830 ]

    Tsz Wo (Nicholas), SZE commented on HADOOP-3138:
    ------------------------------------------------
    bq. Should this be a client side fix?
    Tried the following in Linux:

    - The {{mkdir(const char *pathname, mode_t mode)}} system call returns EEXIST ("pathname already exists").

    - {{mkdir -p}} does not report an error if the path exists and is a directory, even without write permission on the parent.

    - Plain {{mkdir}} reports a "File exists" error.

    I guess Raghu is right, since our mkdirs and {{mkdir -p}} have similar semantics.
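The same contrast between the two semantics can be shown with Java's standard library; this is an analogy for the behavior discussed above, not the HDFS code path. {{Files.createDirectories}} plays the role of {{mkdir -p}} and {{Files.createDirectory}} the role of plain {{mkdir}}.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirSemantics {
    public static void main(String[] args) throws IOException {
        // An existing directory, playing the role of /usr in the experiment.
        Path existing = Files.createTempDirectory("existing");

        // "mkdir -p" analogue: silently succeeds on an existing directory.
        Files.createDirectories(existing);

        // Plain "mkdir" analogue: fails, like the EEXIST from mkdir(2).
        boolean plainMkdirFailed = false;
        try {
            Files.createDirectory(existing);
        } catch (FileAlreadyExistsException e) {
            plainMkdirFailed = true;
        }
        System.out.println("plain mkdir failed: " + plainMkdirFailed); // prints "plain mkdir failed: true"
    }
}
```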
  • Robert Chansler (JIRA) at Mar 31, 2008 at 10:00 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Robert Chansler updated HADOOP-3138:
    ------------------------------------

    Fix Version/s: 0.17.0
  • Robert Chansler (JIRA) at Mar 31, 2008 at 10:02 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Robert Chansler updated HADOOP-3138:
    ------------------------------------

    Priority: Blocker (was: Major)
  • Raghu Angadi (JIRA) at Apr 2, 2008 at 11:58 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Raghu Angadi reassigned HADOOP-3138:
    ------------------------------------

    Assignee: Raghu Angadi (was: Tsz Wo (Nicholas), SZE)
  • Raghu Angadi (JIRA) at Apr 3, 2008 at 12:01 am
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Raghu Angadi updated HADOOP-3138:
    ---------------------------------

    Component/s: dfs (was: util)
  • Raghu Angadi (JIRA) at Apr 3, 2008 at 4:30 am
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Raghu Angadi updated HADOOP-3138:
    ---------------------------------

    Status: Patch Available (was: Open)
    Attachments: HADOOP-3138.patch


    In distcp set up, we have
    {noformat}
    if (!dstExists || !dstIsDir) {
    Path parent = destPath.getParent();
    dstfs.mkdirs(parent);
    logPath = new Path(parent, filename);
    }
    {noformat}
    We should check if parent path exists before calling mkdir?
    --
    This message is automatically generated by JIRA.
    -
    You can reply to this email to add a comment to the issue online.
  • Raghu Angadi (JIRA) at Apr 3, 2008 at 4:31 am
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Raghu Angadi updated HADOOP-3138:
    ---------------------------------

    Attachment: HADOOP-3138.patch

mkdirs() invokes {{exists()}} at the very beginning and returns false if the path already exists. Yes, exists() is called even before the safe-mode check.
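For readers following the return-value discussion: the local java.io.File.mkdirs() behaves analogously, returning true only when it actually creates the directory. A minimal standalone sketch (plain JDK, not Hadoop code):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class MkdirsReturnValue {
    // Calls mkdirs() twice on the same path: the first call creates
    // the directory and returns true; the second finds it already
    // present and returns false, even though nothing went wrong.
    static boolean[] mkdirsTwice(File dir) {
        return new boolean[] { dir.mkdirs(), dir.mkdirs() };
    }

    public static void main(String[] args) throws IOException {
        File child = new File(Files.createTempDirectory("mkdirs-demo").toFile(), "a/b");
        boolean[] r = mkdirsTwice(child);
        System.out.println(r[0] + " " + r[1]);  // prints "true false"
    }
}
```

This is why code of the form {{if (!mkdirs(dir)) throw ...}} misreports an existing directory as a failure.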

  • Hadoop QA (JIRA) at Apr 3, 2008 at 7:46 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12585257#action_12585257 ]

    Hadoop QA commented on HADOOP-3138:
    -----------------------------------

    -1 overall. Here are the results of testing the latest attachment
    http://issues.apache.org/jira/secure/attachment/12379230/HADOOP-3138.patch
    against trunk revision 643282.

    @author +1. The patch does not contain any @author tags.

    tests included +1. The patch appears to include 3 new or modified tests.

    javadoc +1. The javadoc tool did not generate any warning messages.

    javac +1. The applied patch does not generate any new javac compiler warnings.

    release audit +1. The applied patch does not generate any new release audit warnings.

    findbugs +1. The patch does not introduce any new Findbugs warnings.

    core tests -1. The patch failed core unit tests.

    contrib tests +1. The patch passed contrib unit tests.

    Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2142/testReport/
    Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2142/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
    Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2142/artifact/trunk/build/test/checkstyle-errors.html
    Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2142/console

    This message is automatically generated.
  • Raghu Angadi (JIRA) at Apr 3, 2008 at 8:00 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Raghu Angadi updated HADOOP-3138:
    ---------------------------------

    Status: Open (was: Patch Available)
  • Tsz Wo (Nicholas), SZE (JIRA) at Apr 3, 2008 at 8:41 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Tsz Wo (Nicholas), SZE updated HADOOP-3138:
    -------------------------------------------

    Hadoop Flags: [Reviewed]

    +1 patch looks good
  • Raghu Angadi (JIRA) at Apr 3, 2008 at 8:54 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12585279#action_12585279 ]

    Raghu Angadi commented on HADOOP-3138:
    --------------------------------------

hmm... in multiple locations, there is code like {{' if (!mkdir(dir)) { throw new IOException("could not create dir"); } '}}, which is wrong. I can fix the ones that show up in the unit tests. Alternately we could just continue to allow this kind of use...
  • Raghu Angadi (JIRA) at Apr 3, 2008 at 8:56 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12585279#action_12585279 ]

    rangadi edited comment on HADOOP-3138 at 4/3/08 1:52 PM:
    --------------------------------------------------------------

    hmm... in multiple locations, there is code like {{' if (!mkdir(dir)) { throw new IOException("could not create dir"); } '}} , which is wrong. I can fix the ones that show up on the unit test. Alternately we could just continue to allow this kind of use...

    was (Author: rangadi):
    hmm... in multiple locations, there is code like {{' if (!mkdir(dir)) { throw new IOException("could not create dir"); } '}} , which is wrong. I can fix the once that show up on the unit test. Alternately just we could continue to allow this kind of use...
  • Raghu Angadi (JIRA) at Apr 3, 2008 at 9:22 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Raghu Angadi updated HADOOP-3138:
    ---------------------------------

    Attachment: HADOOP-3138.patch

The newly attached patch continues to enforce the assumption about the mkdirs() return value.
  • Raghu Angadi (JIRA) at Apr 3, 2008 at 9:22 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Raghu Angadi updated HADOOP-3138:
    ---------------------------------

    Status: Patch Available (was: Open)
  • Tsz Wo (Nicholas), SZE (JIRA) at Apr 3, 2008 at 10:43 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12585332#action_12585332 ]

    Tsz Wo (Nicholas), SZE commented on HADOOP-3138:
    ------------------------------------------------
    ... , which is wrong.
+1. I agree that the semantics of mkdirs(...) are not clear. This should be fixed in HADOOP-3163.

Not sure we still need to fix this issue, since the distcp problem was already resolved by HADOOP-3099. Should we mark this one as WON'T FIX and then work on HADOOP-3163?
  • Raghu Angadi (JIRA) at Apr 3, 2008 at 10:47 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12585334#action_12585334 ]

    rangadi edited comment on HADOOP-3138 at 4/3/08 3:44 PM:
    --------------------------------------------------------------

    Since this is a bug in DFS, it is likely to affect other users as well.. my preference would be to commit it.


    was (Author: rangadi):
    Since this is a bug in DFS, it is likely affect other users as well.. my preference would be to commit it.

  • Raghu Angadi (JIRA) at Apr 3, 2008 at 10:48 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12585334#action_12585334 ]

    Raghu Angadi commented on HADOOP-3138:
    --------------------------------------

    Since this is a bug in DFS, it is likely affect other users as well.. my preference would be to commit it.

  • Hadoop QA (JIRA) at Apr 4, 2008 at 5:15 am
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12585400#action_12585400 ]

    Hadoop QA commented on HADOOP-3138:
    -----------------------------------

    +1 overall. Here are the results of testing the latest attachment
    http://issues.apache.org/jira/secure/attachment/12379316/HADOOP-3138.patch
    against trunk revision 643282.

    @author +1. The patch does not contain any @author tags.

    tests included +1. The patch appears to include 3 new or modified tests.

    javadoc +1. The javadoc tool did not generate any warning messages.

    javac +1. The applied patch does not generate any new javac compiler warnings.

    release audit +1. The applied patch does not generate any new release audit warnings.

    findbugs +1. The patch does not introduce any new Findbugs warnings.

    core tests +1. The patch passed core unit tests.

    contrib tests +1. The patch passed contrib unit tests.

    Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2152/testReport/
    Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2152/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
    Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2152/artifact/trunk/build/test/checkstyle-errors.html
    Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2152/console

    This message is automatically generated.
  • Raghu Angadi (JIRA) at Apr 4, 2008 at 6:50 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Raghu Angadi updated HADOOP-3138:
    ---------------------------------

    Resolution: Fixed
    Status: Resolved (was: Patch Available)

    I just committed this.
  • Konstantin Shvachko (JIRA) at Apr 5, 2008 at 12:33 am
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12585855#action_12585855 ]

    Konstantin Shvachko commented on HADOOP-3138:
    ---------------------------------------------

+1. The patch is good for 0.16.3.
  • Hudson (JIRA) at Apr 5, 2008 at 12:16 pm
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12585973#action_12585973 ]

    Hudson commented on HADOOP-3138:
    --------------------------------

    Integrated in Hadoop-trunk #451 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/451/])
    distcp fail copying to /user/<username>/<newtarget> (with permission on)
    ------------------------------------------------------------------------

    Key: HADOOP-3138
    URL: https://issues.apache.org/jira/browse/HADOOP-3138
    Project: Hadoop Core
    Issue Type: Bug
    Components: dfs
    Affects Versions: 0.16.1
    Reporter: Koji Noguchi
    Assignee: Raghu Angadi
    Priority: Blocker
    Fix For: 0.17.0

    Attachments: HADOOP-3138.patch, HADOOP-3138.patch


    When distcp-ing to /user/<username>/<newtarget>, I get an error with
    Copy failed: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.fs.permission.AccessControlException: Permission denied: user=knoguchi, access=WRITE, inode="user":superuser:superusergroup:rwxr-xr-x
    {noformat}
    at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:173)
    at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:154)
    at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:102)
    at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:4037)
    at org.apache.hadoop.dfs.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4007)
    at org.apache.hadoop.dfs.FSNamesystem.mkdirsInternal(FSNamesystem.java:1576)
    at org.apache.hadoop.dfs.FSNamesystem.mkdirs(FSNamesystem.java:1559)
    at org.apache.hadoop.dfs.NameNode.mkdirs(NameNode.java:422)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:899)
    at org.apache.hadoop.ipc.Client.call(Client.java:512)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
    at org.apache.hadoop.dfs.$Proxy0.mkdirs(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at org.apache.hadoop.dfs.$Proxy0.mkdirs(Unknown Source)
    at org.apache.hadoop.dfs.DFSClient.mkdirs(DFSClient.java:550)
    at org.apache.hadoop.dfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:184)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:980)
    at org.apache.hadoop.util.CopyFiles.setup(CopyFiles.java:735)
    at org.apache.hadoop.util.CopyFiles.copy(CopyFiles.java:525)
    at org.apache.hadoop.util.CopyFiles.run(CopyFiles.java:596)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:612)
    {noformat}
In the distcp setup code, we have
{noformat}
if (!dstExists || !dstIsDir) {
  Path parent = destPath.getParent();
  dstfs.mkdirs(parent);
  logPath = new Path(parent, filename);
}
{noformat}
Should we check whether the parent path exists before calling mkdirs? The unconditional mkdirs triggers an ancestor WRITE-permission check on /user even when the parent directory already exists.
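A minimal, self-contained sketch of the proposed guard (this is an illustration against the local filesystem, not the committed HADOOP-3138 patch; the class and method names are hypothetical). The idea is to call mkdirs only when the parent is actually missing, so an already-existing parent such as /user/&lt;username&gt; never provokes a permission check on its superuser-owned ancestors:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirsGuard {
    // Sketch of the suggested fix: only create the parent when it is
    // missing. In HDFS 0.16, mkdirs() checks WRITE access on the nearest
    // existing ancestor even if the requested directory already exists,
    // which is why the unconditional call fails under /user.
    static void ensureParent(Path destPath) throws Exception {
        Path parent = destPath.getParent();
        if (parent != null && !Files.exists(parent)) { // guard added
            Files.createDirectories(parent);
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempDirectory("distcp-demo");
        Path dest = tmp.resolve("user/knoguchi/newtarget");
        ensureParent(dest);                        // creates user/knoguchi
        System.out.println(Files.isDirectory(dest.getParent())); // true
        ensureParent(dest);                        // second call is a no-op
        System.out.println("ok");
    }
}
```

With HDFS the same shape would use `dstfs.exists(parent)` before `dstfs.mkdirs(parent)`; since exists() only needs traversal access, the existing-parent case no longer hits the AccessControlException.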
    --
    This message is automatically generated by JIRA.
    -
    You can reply to this email to add a comment to the issue online.
  • Mukund Madhugiri (JIRA) at Apr 30, 2008 at 1:14 am
    [ https://issues.apache.org/jira/browse/HADOOP-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Mukund Madhugiri updated HADOOP-3138:
    -------------------------------------

    Fix Version/s: (was: 0.17.0)
    0.16.4

    I committed this to 0.16.4. Thanks Raghu.
    distcp fail copying to /user/<username>/<newtarget> (with permission on)
    ------------------------------------------------------------------------

    Key: HADOOP-3138
    URL: https://issues.apache.org/jira/browse/HADOOP-3138
    Project: Hadoop Core
    Issue Type: Bug
    Components: dfs
    Affects Versions: 0.16.1
    Reporter: Koji Noguchi
    Assignee: Raghu Angadi
    Priority: Blocker
    Fix For: 0.16.4

    Attachments: HADOOP-3138.patch, HADOOP-3138.patch



Discussion Overview
group: common-dev @ hadoop
posted: Mar 31, '08 at 5:56p
active: Apr 30, '08 at 1:14a
posts: 25
users: 1
website: hadoop.apache.org...
irc: #hadoop

1 user in discussion

Mukund Madhugiri (JIRA): 25 posts
