Hi,
When the user calling FileSystem.copyFromLocalFile() doesn't have permission
to write to a certain HDFS path, the call fails with the following exception:
Thread [main] (Suspended (exception AccessControlException))
  DFSClient.mkdirs(String, FsPermission) line: 905
  DistributedFileSystem.mkdirs(Path, FsPermission) line: 262
  DistributedFileSystem(FileSystem).mkdirs(Path) line: 1162
  FileUtil.copy(FileSystem, Path, FileSystem, Path, boolean, boolean, Configuration) line: 194
  DistributedFileSystem(FileSystem).copyFromLocalFile(boolean, boolean, Path, Path) line: 1231
  DistributedFileSystem(FileSystem).copyFromLocalFile(boolean, Path, Path) line: 1207
  DistributedFileSystem(FileSystem).copyFromLocalFile(Path, Path) line: 1179
  GridM2mInstallation.copyInputFiles(FlowConfigurations$FlowConf) line: 380

Passwordless ssh has been set up for the current user, tyu, on localhost and
for user hadoop on the NameNode.
I would like opinions on how I can programmatically get past the above
exception - by specifying the user as hadoop, maybe?
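
For reference, here is a minimal sketch (not the actual GridM2mInstallation code; the paths are made-up placeholders) of the kind of call that produces the trace above:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class CopySketch {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();            // picks up core-site.xml / hdfs-site.xml
          FileSystem fs = FileSystem.get(conf);                 // DistributedFileSystem for an hdfs:// default FS
          Path src = new Path("/tmp/local/inputs");             // hypothetical local input directory
          Path dst = new Path("/user/hadoop/inputs");           // hypothetical HDFS destination
          // As the trace shows, FileUtil.copy() ends up calling mkdirs() on the
          // destination, which is where the AccessControlException is thrown
          // when the calling user (tyu) cannot write to that HDFS path.
          fs.copyFromLocalFile(src, dst);
      }
  }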

Thanks


  • Raymond Jennings III at Jun 2, 2010 at 7:53 pm
    I have a cluster of 12 slave nodes. I see that for some jobs, half of the part-r-00000 type files are zero in size after the job completes. Does this mean the hash function that splits the data across the reducer nodes is not working all that well? On other jobs the output is pretty much even across all reducers, but on certain jobs only half of the reducers produce files bigger than 0. It is reproducible, though. Can I change this hash function in any way? Thanks.
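    The mapping from key to reducer is done by the job's Partitioner (HashPartitioner by default, which uses (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks), so a key space whose hashCode() values cluster can leave some reducers with empty part-r-* files. A hedged sketch of swapping in a custom Partitioner (new mapreduce API; the Text/IntWritable key and value types are just an assumption here):

      import org.apache.hadoop.io.IntWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapreduce.Partitioner;

      public class EvenSpreadPartitioner extends Partitioner<Text, IntWritable> {
          @Override
          public int getPartition(Text key, IntWritable value, int numPartitions) {
              // Mix the hash bits a little before taking the modulus; any scheme
              // that spreads the real key distribution evenly will do.
              int h = key.toString().hashCode();
              h ^= (h >>> 16);
              return (h & Integer.MAX_VALUE) % numPartitions;
          }
      }

    It would be registered on the job with job.setPartitionerClass(EvenSpreadPartitioner.class).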
  • Hemanth Yamijala at Jun 3, 2010 at 1:49 am
    Ted,
    Is there a reason why access cannot be given to the user on DFS during
    setup, using dfs -chmod or dfs -chown? That seems like a more correct
    solution. Please note that while some versions of Hadoop allowed the user
    name to be set as a configuration property (I think it was called
    hadoop.job.ugi or some such), that will stop working with later, secure
    versions of Hadoop.

    Thanks
    Hemanth
  • Ted Yu at Jun 3, 2010 at 1:54 am
    I am currently calling 'dfs -chmod' through ssh before calling
    copyFromLocalFile().
    I just want a unified approach.
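
    If the goal is to drop the ssh step, the same grant can be done from Java through the FileSystem API. A hedged sketch (the path, owner, and mode are placeholders; setOwner()/setPermission() only succeed when run as the path's owner or the HDFS superuser, e.g. hadoop, so this belongs in a one-time setup step rather than in the unprivileged copy path):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.fs.permission.FsPermission;

      public class GrantInputDirAccess {
          public static void main(String[] args) throws Exception {
              FileSystem fs = FileSystem.get(new Configuration());
              Path inputDir = new Path("/user/tyu/input");   // hypothetical target directory
              if (!fs.exists(inputDir)) {
                  fs.mkdirs(inputDir);
              }
              fs.setOwner(inputDir, "tyu", "tyu");                          // programmatic 'dfs -chown'
              fs.setPermission(inputDir, new FsPermission((short) 0755));   // programmatic 'dfs -chmod 755'
          }
      }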

Discussion Overview
group: common-user
categories: hadoop
posted: Jun 2, '10 at 6:57p
active: Jun 3, '10 at 1:54a
posts: 4
users: 3
website: hadoop.apache.org...
irc: #hadoop
