Amazing! It helped for flume! Thank you so much.
I also had an issue starting the JobHistory server:
/var/log/hadoop-mapreduce# less hadoop-cmf-yarn1-JOBHISTORY-userver.log.out
2012-11-27 10:17:41,897 FATAL org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer: Error starting JobHistoryServer
org.apache.hadoop.yarn.YarnException: Error creating done directory: [hdfs://userver:8020/user/history/done]
        at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.init(HistoryFileManager.java:398)
        at org.apache.hadoop.mapreduce.v2.hs.JobHistory.init(JobHistory.java:87)
        at org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
        at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.init(JobHistoryServer.java:78)
        at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:132)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
But the same "sudo -u hdfs hadoop fs -chown" approach helped a lot with JobHistory too:
sudo -u hdfs hadoop fs -chown mapred:mapred /user/history
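For anyone hitting the same JobHistory error, the fix can be sketched as a small guarded script. The /user/history path matches the done directory in the log above; the -mkdir step and the -R flag on chown are my additions and may be unnecessary if the directory already exists:

```shell
# Sketch of the JobHistory permission fix, assuming a CDH-style cluster
# where the history "done" dir lives under /user/history.
if command -v hadoop >/dev/null 2>&1; then
  # Create the directory as the HDFS superuser, then hand it to mapred.
  sudo -u hdfs hadoop fs -mkdir /user/history
  sudo -u hdfs hadoop fs -chown -R mapred:mapred /user/history
else
  # Not on a cluster node: nothing to do here.
  echo "hadoop CLI not found; run this on a cluster node"
fi
```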
At the beginning, when I queried "hadoop fs -ls /", it showed me the whole
structure of my local root (since "/" was the mountpoint, I assume).
I changed the mountpoint and restarted HDFS, but that didn't change
anything; only after I rebooted the whole VM did it switch to the paths
I had provided as mountpoints!
Best regards, Harsh!
On Tuesday, 27 November 2012 at 11:55:55 UTC+1, Harsh J wrote:
Hi Johnny,
Your flume app is attempting to write to HDFS at a path that hasn't been
created/granted for it.
If you know the path, create it manually and grant flume write access to
it.
For example, if you want a /hdfs path on HDFS, for user flume to write to:
sudo -u hdfs hadoop fs -mkdir /hdfs
sudo -u hdfs hadoop fs -chown flume:flume /hdfs
And retry your flume app, and it should work.
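A quick way to confirm the two commands above took effect is to list the root directory and check the owner column; the expected owner shown in the comment follows from the chown above and is otherwise an assumption:

```shell
# Verify that /hdfs (created above) is now owned by flume.
if command -v hadoop >/dev/null 2>&1; then
  sudo -u hdfs hadoop fs -ls / | grep '/hdfs'
  # A line like "drwxr-xr-x  - flume flume ... /hdfs" confirms the chown.
else
  echo "hadoop CLI not found; run this on a cluster node"
fi
```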
On Tue, Nov 27, 2012 at 4:18 PM, Johnny Kowalski <[email protected]> wrote:
Hello, I've used your Cloudera Manager to install YARN, ZooKeeper, Flume and
of course Hadoop on a clean Ubuntu 12.04.
But I am facing a problem with permissions: although I've changed the
namenode mountpoint from "/" to a newly created "/hdfs",
I still get the same error in the logs:
/var/log/hadoop-hdfs# tail -f hadoop-cmf-hdfs1-NAMENODE-userver.log.out
2012-11-27 11:42:25,528 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:flume (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=flume, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
2012-11-27 11:42:25,529 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 192.168.56.101:57926: error: org.apache.hadoop.security.AccessControlException: Permission denied: user=flume, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
org.apache.hadoop.security.AccessControlException: Permission denied: user=flume, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
and in the Flume log:
/var/log/flume-ng# tail -f flume-cmf-flume1-AGENT-userver.log
2012-11-27 11:47:23,840 WARN org.apache.flume.sink.hdfs.HDFSEventSink: HDFS IO error
org.apache.hadoop.security.AccessControlException: Permission denied: user=flume, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
Please help.
--
Harsh J