FAQ
Hi All,

I have a 2-node CentOS Hadoop cluster. I installed Cloudera Manager 4.0 Free
Edition and CDH 4.0.3, and the installation went smoothly.

I installed Sqoop on it today. When I run a Sqoop import to load data from
Oracle 11g into HBase, I get the error in the subject line.

I am running it as the root user and still get the same error.

Sqoop creates the table fine, but when it tries to write the data,
Hadoop (HDFS) gives this error.

Any thoughts?

Thanks


  • Mike at Aug 13, 2012 at 2:55 pm
    I tried the below, but still no luck:

    chmod 777 /tmp
    sudo -u hdfs hadoop fs -chmod -R 1777 /tmp

    Thanks

  • Harsh J at Aug 13, 2012 at 4:20 pm
    Your "root" user seems to lack a home directory which it is trying to
    use for itself.

    Do this:

    sudo -u hdfs hadoop fs -mkdir /user/root
    sudo -u hdfs hadoop fs -chown root:root /user/root

    Then run your program, and it should work.
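
    For reference, a quick way to confirm the result (a sketch; assumes the
    hdfs superuser and the default /user layout in HDFS):

    # List the HDFS home directories and check that /user/root now exists
    # and is owned by root
    sudo -u hdfs hadoop fs -ls /user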


    --
    Harsh J
  • Mike at Aug 13, 2012 at 5:01 pm
    Thanks Harsh. You da man!
  • Harsh J at Aug 13, 2012 at 6:33 pm
    Hey Mike,

    Glad it helped. Would appreciate your ideas on
    https://issues.apache.org/jira/browse/HDFS-2750 to help improve such
    messages.


    --
    Harsh J
  • Andy at Aug 22, 2012 at 2:28 pm
    Hi Harsh,

    I am seeing a similar problem, but your suggestion (i.e. making the user
    directory and changing its ownership) did not work. I am following the
    instructions for CDH4 here:
    https://ccp.cloudera.com/download/attachments/18786167/CDH4_Quick_Start_Guide_4.0.pdf?version=1&modificationDate=1340988393000.

    I am performing the install on RHEL 6.3 (http://aws.amazon.com/rhel/).

    When I execute the following command as any user other than the hdfs user,
    I get the stack trace below (it works if I am the hdfs user).
    ## Command

    /usr/bin/hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep input output 'dfs[a-z.]+'

    ## Error:

    Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=joe, access=EXECUTE, inode="/var/lib/hadoop-hdfs/cache/mapred/mapred/staging":hdfs:supergroup:drwx------


    If I list the directory contents as joe:

    hadoop fs -ls input

    Found 3 items

    -rw-r--r-- 1 joe supergroup 1461 2012-08-22 08:53 input/core-site.xml

    -rw-r--r-- 1 joe supergroup 1854 2012-08-22 08:53 input/hdfs-site.xml

    -rw-r--r-- 1 joe supergroup 1001 2012-08-22 08:53 input/mapred-site.xml


    And as far as I can tell, that user should be able to execute the mapreduce job.


    Any help or other areas to look at would be much appreciated.


    Thanks,


    Andy


  • Joey Echeverria at Aug 22, 2012 at 5:38 pm
    Either change the permissions on /var/lib/hadoop-hdfs/cache/mapred/mapred/staging to 777, or change mapred.system.dir to /user/${user.name}/.staging.

    -Joey

    --
    Joey Echeverria
    Principal Solutions Architect
    Cloudera, Inc.
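
    A minimal sketch of the two options (the paths and the property value are
    taken from Joey's suggestion; adjust them to your own layout):

    # Option 1: open up the existing staging directory in HDFS
    sudo -u hdfs hadoop fs -chmod -R 777 /var/lib/hadoop-hdfs/cache/mapred/mapred/staging

    # Option 2: move staging under per-user home directories by adding this
    # property to mapred-site.xml and restarting MapReduce:
    #   <property>
    #     <name>mapred.system.dir</name>
    #     <value>/user/${user.name}/.staging</value>
    #   </property>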

  • Andrew Sunderland at Aug 27, 2012 at 3:34 pm
    Thanks very much Joey.

    I am still having trouble running:

    $ /usr/bin/hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar
    grep input output 'dfs[a-z.]+'

    as any user other than hdfs. Being able to run the job successfully as
    hdfs makes me think my whole installation is not hosed...

    I wonder if the error is a red herring; I say that because the directory
    referenced in the error:

    inode="/var/lib/hadoop-hdfs/cache/mapred/mapred/staging"

    does not even seem to exist. For example, if I run:

    $ cd /var/lib/hadoop-hdfs/cache/mapred/mapred
    $ ls -l

    total 4
    drwxrwxrwx. 8 mapred mapred 4096 Aug 27 08:46 local

    Or search for it:

    $ find / -type d -name "staging" 2> /dev/null
    /lib/modules/2.6.32-276.el6.x86_64/kernel/drivers/staging


    I don't see the staging directory. Regardless, I tried changing all
    permissions under the mapred directory:

    chmod -R 777 /var/lib/hadoop-hdfs/cache/mapred/mapred


    To check that it updated the permissions:

    cd /var/lib/hadoop-hdfs/cache/mapred/mapred

    stat -c '%A %a %n' *

    drwxrwxrwx 777 local

    As far as I can tell at this point, everything has read, write, and execute
    permissions. I also compared the groups for joe vs. hdfs:

    groups joe
    joe : joe hdfs hdusers
    groups hdfs
    hdfs : hdfs hdusers

    And I still get the same error when running the command....

    Permission denied: user=joe, access=EXECUTE,
    inode="/var/lib/hadoop-hdfs/cache/mapred/mapred/staging":hdfs:supergroup:drwx------

    Any thoughts or advice are much appreciated.

    - Andy



    --
    Andrew Sunderland
    Spry Enterprises, Inc.
    asunderland@spryinc.com
    443.831.5476
    http://spryinc.com/
  • Joey Echeverria at Aug 27, 2012 at 8:08 pm
    The directory is in HDFS, not the local file system. So you need to do
    something like the following:

    sudo -u hdfs hadoop fs -chmod -R 777 /var/lib/hadoop-hdfs/cache/mapred/mapred
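
    A quick way to see that distinction (a sketch; the path is the one from
    the error above):

    # The error refers to a path inside HDFS, so list it with the HDFS client:
    sudo -u hdfs hadoop fs -ls /var/lib/hadoop-hdfs/cache/mapred/mapred
    # A plain ls only looks at the local file system, where the staging
    # directory does not need to exist at all:
    ls -l /var/lib/hadoop-hdfs/cache/mapred/mapred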


    --
    Joey Echeverria
    Principal Solutions Architect
    Cloudera, Inc.
  • Andy at Aug 27, 2012 at 8:53 pm
    Thanks Joey, that was exactly it.

    If it is helpful, I documented the steps I took in addition to what you
    guys provide here:

    https://ccp.cloudera.com/display/CDH4DOC/Installing+CDH4+on+a+Single+Linux+Node+in+Pseudo-distributed+Mode


    I don't know if it would be useful for your documentation going forward. If
    so, let me know and I will send it your way.

    - Andy
  • Joey Echeverria at Aug 27, 2012 at 8:59 pm
    I'd love to see what you wrote. Improving our docs is always a good thing.

    -Joey

    --
    Joey Echeverria
    Principal Solutions Architect
    Cloudera, Inc.
  • Andrew Sunderland at Aug 28, 2012 at 4:21 pm
    Attached is what I have; most of the content is what you guys provided. The
    areas where I ended up doing a little debugging were:

    - Configuring the firewall rules for RHEL
    - Installing the JDK (you guys detail this, but I found it would not work
      unless I also edited /etc/profile and /etc/sudoers)
    - Updating the file permissions

    This was my first attempt at anything Hadoop related, so some of this was
    most likely beginner error. After walking through it once, it doesn't seem
    like it would take more than an hour to recreate the setup.

    - Andy

    --
    Andrew Sunderland
    Spry Enterprises, Inc.
    asunderland@spryinc.com
    443.831.5476
    http://spryinc.com/
  • Muhammad Mohsin Ali at Sep 5, 2012 at 8:57 am
    Hello guys,

    I have tried all of the above, but can't seem to make MapReduce work. I am
    using CDH4. It fails to start the JobTracker. The error log is:

    org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4236)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2628)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2592)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:638)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42618)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1741)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:482)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1731)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:503)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2284)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2053)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:294)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:286)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4799)
    Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4236)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2628)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2592)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:638)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42618)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)

    at org.apache.hadoop.ipc.Client.call(Client.java:1161)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:184)
    at $Proxy10.mkdirs(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:84)
    at $Proxy10.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:420)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1739)
    ... 8 more



    Please help :(
  • Harsh J at Sep 8, 2012 at 2:44 pm
    Hi,

    Have you followed the MR1 deployment guide at
    https://ccp.cloudera.com/display/CDH4DOC/Deploying+MapReduce+v1+%28MRv1%29+on+a+Cluster?

    Specifically, your JT needs this step to have been taken care of
    before it starts:
    https://ccp.cloudera.com/display/CDH4DOC/Deploying+MapReduce+v1+%28MRv1%29+on+a+Cluster#DeployingMapReducev1%28MRv1%29onaCluster-Step9

    Are you using Cloudera Manager? It helps avoid having to run these
    steps manually.
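
    If that step was missed, the usual fix is to create the MapReduce system
    directory in HDFS and hand it to the mapred user before starting the
    JobTracker. A rough sketch, assuming mapred.system.dir points at
    /tmp/mapred/system (check mapred-site.xml for your actual value):

    # Assumed path; substitute your configured mapred.system.dir
    sudo -u hdfs hadoop fs -mkdir /tmp/mapred/system
    sudo -u hdfs hadoop fs -chown mapred /tmp/mapred/system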



    --
    Harsh J
  • Thilak at Nov 28, 2012 at 2:50 am
    Hi Mike,

    I have a CentOS Hadoop cluster and I face the same problem when I try to
    create a table in Hive.
    I tried:

    chmod 777 /tmp
    sudo -u hdfs hadoop fs -chmod -R 1777 /tmp

    and I also tried:

    sudo -u hdfs hadoop fs -mkdir /user/root
    sudo -u hdfs hadoop fs -chown root:root /user/root

    But it didn't work!
    Any suggestions?

  • Ram Krishnamurthy at Nov 28, 2012 at 2:51 am
    Are you logged in as root or hdfs? If you su - hdfs and try Hive, it should
    work.
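
    A small usage sketch of that suggestion (the table definition here is only
    an illustration):

    # Switch to the hdfs superuser and retry the Hive DDL
    sudo su - hdfs
    hive
    hive> CREATE TABLE test_table (id INT);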



    --
    Thanks,
    Ram Krishnamurthy
    rkrishnamurthy@greenway-solutions.com
    Cell: 704-953-8125
  • Kavish Ahuja at Feb 23, 2013 at 2:08 pm
    The simplest answer is to disable DFS permission checking by adding the
    property below to conf/hdfs-site.xml:

       <property>
         <name>dfs.permissions</name>
         <value>false</value>
       </property>


    Do this in the CDH4 Manager HDFS configuration (if you are using CDH4 to
    build your Hadoop cluster).

    CHEERS !!
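
    With a plain package install (no Cloudera Manager), the equivalent is to
    edit the file directly and restart the NameNode; a sketch, where the path
    and service name are assumptions about a default CDH4 layout:

    # Assumed locations for a default CDH4 package install
    sudo vi /etc/hadoop/conf/hdfs-site.xml       # add the dfs.permissions property above
    sudo service hadoop-hdfs-namenode restart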




  • Gerrit Jansen van Vuuren at Mar 1, 2013 at 11:30 pm
    Hi,

    I had this problem recently with Apache Hadoop 1.0.4 and found the
    following solution:

    Problem:

    job initialization failed:
    org.apache.hadoop.security.AccessControlException:
    org.apache.hadoop.security.AccessControlException: Permission denied:
    user=root, access=EXECUTE, inode="system":hadoop:root:rwx------ at
    sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at

    Look at:

    hadoop fs -ls /tmp/hadoop-hadoop/mapred/system

    Found 1 items

    -rw------- 3 hadoop root 4 2013-03-01 14:38
    /tmp/hadoop-hadoop/mapred/system/jobtracker.info

    (compare the above with the error message)


    Fix (run as the hadoop user):


    hadoop fs -chown -R hadoop:hadoop /tmp/hadoop-hadoop/mapred/system

    hadoop fs -chmod -R 777 /tmp/hadoop-hadoop/mapred/system
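
    Re-running the earlier listing should then show the new owner and mode:

    hadoop fs -ls /tmp/hadoop-hadoop/mapred/system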


  • Fiona ren at May 15, 2013 at 7:00 pm
    hey Kavish,

    I'm using Cloudera Manager to configure the cluster, and I came across the
    same problem. You mentioned "Do this in the CDH4 Manager HDFS configuration
    (if you are using CDH4 to build your Hadoop cluster)."
    I opened the settings page for hdfs1 and clicked "configure", but I don't
    know which file to edit. Could you please provide more details on what to
    add and where?

    Thanks



  • Darren Lo at May 15, 2013 at 7:09 pm
    Hi Fiona,

    Turning off permission checking is neither recommended nor necessary.
    Usually you run your Hive queries or MapReduce jobs as a non-system user
    (not hive, root, or hdfs, but something like fiona or bob).

    You also need to create the home directory for these users. If you are
    using Hue, you can very easily create a user in Hue and check the option to
    create the home directory. If not, you can run:
    sudo -u hdfs hdfs dfs -mkdir /user/fiona
    sudo -u hdfs hdfs dfs -chown fiona:fiona /user/fiona

    At this point, running hive jobs while logged in as user "fiona" should
    work.

    If you really want to disable permission checking, then you can edit the
    HDFS configuration via the Cloudera Manager UI. Look for the property
    "Check HDFS Permissions", which should appear by default along with all
    other service-wide configs. If it doesn't appear, search for it using the
    search box on the left. Then restart HDFS.

    Thanks,
    Darren


    --
    Thanks,
    Darren
  • Fiona ren at May 15, 2013 at 7:21 pm
    Hi Darren,

    Really appreciate your prompt reply!
    Here's my situation: I ran these commands

    ints = to.dfs(1:100)
    calc = mapreduce(input = ints,
                     map = function(k, v) cbind(v, 2*v))
    from.dfs(calc)

    and ran into these errors:

    packageJobJar: [/tmp/Rtmpg8rLZs/rmr-local-env6a2e33fc3b5a, /tmp/Rtmpg8rLZs/rmr-global-env6a2e13cfe97b, /tmp/Rtmpg8rLZs/rmr-streaming-map6a2e5786f445, /tmp/hadoop-dlabadmin/hadoop-unjar4543684753945520558/] [] /tmp/streamjob5147727764062864307.jar tmpDir=null
    13/05/15 12:52:13 ERROR security.UserGroupInformation: PriviledgedActionException as:dlabadmin (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=dlabadmin, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4655)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2996)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2960)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2938)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:396)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

    13/05/15 12:52:13 ERROR streaming.StreamJob: Error Launching job : Permission denied: user=dlabadmin, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4655)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2996)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2960)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2938)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:396)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

    Streaming Command Failed!
    Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
      hadoop streaming failed with error code 5
    DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
    DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
    DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
    rmr: DEPRECATED: Please use 'rm -r' instead.
    13/05/15 12:52:18 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://ub12hdpmaster:8020/user/dlabadmin/.Trash/Current/tmp/Rtmpg8rLZs
    rmr: Failed to move to trash: hdfs://ub12hdpmaster:8020/tmp/Rtmpg8rLZs/file6a2e5d588a87. Consider using -skipTrash option
    > hdfs.ls("/tmp")

    My understanding of these errors is that the admin user does not have "write"
    privileges on these HDFS files.
    I am trying to find a way to grant more privileges to the admin user.
    Let me know whether my understanding is correct.

    Thanks,

    Fiona
    On Wednesday, May 15, 2013 2:09:09 PM UTC-5, Darren Lo wrote:

    Hi Fiona,

    Turning off security is not recommended nor necessary. Usually you run
    your hive queries or map reduce jobs as a non-system user (not hive, root,
    hdfs, but something like fiona or bob).

    You also need to create the home directory for these users. If you are
    using Hue, you can very easily create a user in Hue and check the option to
    create the home directory. If not, you can run:
    sudo -u hdfs hdfs dfs -mkdir /user/fiona
    sudo -u hdfs hdfs dfs -chown fiona:fiona /user/fiona

    At this point, running hive jobs while logged in as user "fiona" should
    work.

    If you really want to disable permission checking, then you can edit the
    HDFS configuration via the Cloudera Manager UI. Look for the property
    "Check HDFS Permissions", which should appear by default along with all
    other service-wide configs. If it doesn't appear, search for it using the
    search box on the left. Then restart HDFS.

    Thanks,
    Darren


    On Wed, May 15, 2013 at 12:00 PM, fiona ren <fiona....@gmail.com> wrote:
    Hey Kavish,

    I'm using Cloudera Manager to configure the cluster, and I have now run into
    the same problem. You mentioned "Do this in CDH4 Manager HDFS
    CONFIGURATION (if you are using CDH4 to make your hadoop cluster)."
    I opened the settings page for hdfs1 and clicked "configure", but I don't know
    which file to edit. Could you please provide more details on what to add and
    where?

    Thanks

    On Saturday, February 23, 2013 8:08:37 AM UTC-6, Kavish Ahuja wrote:

    The simplest answer is:
    Disable the dfs permissions check by adding the property below to
    conf/hdfs-site.xml:

    <property>
    <name>dfs.permissions</name>
    <value>false</value>
    </property>


    Do this in CDH4 Manager HDFS CONFIGURATION (if you are using CDH4 to
    make your hadoop cluster).

    CHEERS !!


    On Wednesday, November 28, 2012 8:21:32 AM UTC+5:30, Ram Krishnamurthy
    wrote:
    Are you logged in as root or hdfs? If you su - hdfs and try hive it
    should work

    On Tue, Nov 27, 2012 at 9:50 PM, Thilak wrote:

    Hi Mike,

    I have a CentOS Hadoop cluster and I face the same problem when I try
    to create a table in Hive.
    I tried:
    chmod 777 /tmp
    sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
    and I also tried:
    sudo -u hdfs hadoop fs -mkdir /user/root
    sudo -u hdfs hadoop fs -chown root:root /user/root

    But it didn't work!
    Any suggestions?
    On Monday, 13 August 2012 07:27:52 UTC-7, Mike wrote:

    Hi All,

    I have a 2-node CentOS Hadoop cluster. I installed Cloudera Manager 4.0
    Free Edition and CDH 4.0.3.
    Installation went smoothly.

    I installed Sqoop on it today. When I run the Sqoop import to load
    data from Oracle 11g to HBase,
    I get the subject error.

    I am running it as the root user. I am still getting the same error.

    Sqoop creates the table fine, but when it tries to write the data,
    Hadoop (HDFS) gives this error.

    Any thoughts?

    Thanks


    --
    Thanks,
    *Ram Krishnamurthy*
    rkrishnamurthy@greenway-solutions.com
    *Cell: 704-953-8125*



    --
    Thanks,
    Darren
  • Darren Lo at May 15, 2013 at 7:25 pm
    Hi Fiona,

    The problem is usually that mapreduce is trying to create something in the
    user dir, in your case /user/dlabadmin. Since the user "dlabadmin" doesn't
    have write privileges to /user, this fails. If you create your user's home
    directory in hdfs and chown it to the right user, you'll probably get past
    this error. dlabadmin doesn't actually need write permissions to /user,
    just needs the home directory created.
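
    Concretely, that would look something like this (a sketch based on the commands
    earlier in this thread, assuming the group should also be dlabadmin):
    sudo -u hdfs hdfs dfs -mkdir /user/dlabadmin
    sudo -u hdfs hdfs dfs -chown dlabadmin:dlabadmin /user/dlabadmin
    After that, re-running the streaming job as dlabadmin should get past the
    write check on /user.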

    Thanks,
    Darren

    On Wed, May 15, 2013 at 12:21 PM, fiona ren wrote:

    Hi Darren,

    Really appreciate your prompt reply!
    Here's my situation: I ran these commands

    ints = to.dfs(1:100)
    calc = mapreduce(input = ints,
                     map = function(k, v) cbind(v, 2*v))
    from.dfs(calc)

    and ran into these errors:

    packageJobJar: [/tmp/Rtmpg8rLZs/rmr-local-env6a2e33fc3b5a, /tmp/Rtmpg8rLZs/rmr-global-env6a2e13cfe97b, /tmp/Rtmpg8rLZs/rmr-streaming-map6a2e5786f445, /tmp/hadoop-dlabadmin/hadoop-unjar4543684753945520558/] [] /tmp/streamjob5147727764062864307.jar tmpDir=null
    13/05/15 12:52:13 ERROR security.UserGroupInformation: PriviledgedActionException as:dlabadmin (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=dlabadmin, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4655)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2996)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2938)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

    13/05/15 12:52:13 ERROR streaming.StreamJob: Error Launching job : Permission denied: user=dlabadmin, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4655)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2996)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2938)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

    Streaming Command Failed!
    Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
      hadoop streaming failed with error code 5
    DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
    DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
    DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
    rmr: DEPRECATED: Please use 'rm -r' instead.
    13/05/15 12:52:18 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://ub12hdpmaster:8020/user/dlabadmin/.Trash/Current/tmp/Rtmpg8rLZs
    rmr: Failed to move to trash: hdfs://ub12hdpmaster:8020/tmp/Rtmpg8rLZs/file6a2e5d588a87. Consider using -skipTrash option
    > hdfs.ls("/tmp")

    My understanding of these errors is that the admin user does not have "write"
    privileges on these HDFS files.
    I am trying to find a way to grant more privileges to the admin user.
    Let me know whether my understanding is correct.

    Thanks,

    Fiona
    On Wednesday, May 15, 2013 2:09:09 PM UTC-5, Darren Lo wrote:

    Hi Fiona,

    Turning off security is not recommended nor necessary. Usually you run
    your hive queries or map reduce jobs as a non-system user (not hive, root,
    hdfs, but something like fiona or bob).

    You also need to create the home directory for these users. If you are
    using Hue, you can very easily create a user in Hue and check the option to
    create the home directory. If not, you can run:
    sudo -u hdfs hdfs dfs -mkdir /user/fiona
    sudo -u hdfs hdfs dfs -chown fiona:fiona /user/fiona

    At this point, running hive jobs while logged in as user "fiona" should
    work.

    If you really want to disable permission checking, then you can edit the
    HDFS configuration via the Cloudera Manager UI. Look for the property
    "Check HDFS Permissions", which should appear by default along with all
    other service-wide configs. If it doesn't appear, search for it using the
    search box on the left. Then restart HDFS.

    Thanks,
    Darren

    On Wed, May 15, 2013 at 12:00 PM, fiona ren wrote:

    Hey Kavish,

    I'm using Cloudera Manager to configure the cluster, and I have now run into
    the same problem. You mentioned "Do this in CDH4 Manager HDFS
    CONFIGURATION (if you are using CDH4 to make your hadoop cluster)."
    I opened the settings page for hdfs1 and clicked "configure", but I don't know
    which file to edit. Could you please provide more details on what to add and
    where?

    Thanks

    On Saturday, February 23, 2013 8:08:37 AM UTC-6, Kavish Ahuja wrote:

    The simplest answer is:
    Disable the dfs permissions check by adding the property below to
    conf/hdfs-site.xml:

    <property>
    <name>dfs.permissions</name>
    <value>false</value>
    </property>


    Do this in CDH4 Manager HDFS CONFIGURATION (if you are using CDH4 to
    make your hadoop cluster).

    CHEERS !!


    On Wednesday, November 28, 2012 8:21:32 AM UTC+5:30, Ram Krishnamurthy
    wrote:
    Are you logged in as root or hdfs? If you su - hdfs and try hive it
    should work

    On Tue, Nov 27, 2012 at 9:50 PM, Thilak wrote:

    Hi Mike,

    I have a CentOS Hadoop cluster and I face the same problem when I try
    to create a table in Hive.
    I tried:
    chmod 777 /tmp
    sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
    and I also tried:
    sudo -u hdfs hadoop fs -mkdir /user/root
    sudo -u hdfs hadoop fs -chown root:root /user/root

    But it didn't work!
    Any suggestions?
    On Monday, 13 August 2012 07:27:52 UTC-7, Mike wrote:

    Hi All,

    I have a 2-node CentOS Hadoop cluster. I installed Cloudera Manager 4.0
    Free Edition and CDH 4.0.3.
    Installation went smoothly.

    I installed Sqoop on it today. When I run the Sqoop import to load
    data from Oracle 11g to HBase,
    I get the subject error.

    I am running it as the root user. I am still getting the same error.

    Sqoop creates the table fine, but when it tries to write the data,
    Hadoop (HDFS) gives this error.

    Any thoughts?

    Thanks


    --
    Thanks,
    *Ram Krishnamurthy*
    rkrishnamurthy@greenway-solutions.com
    *Cell: 704-953-8125*



    --
    Thanks,
    Darren

    --
    Thanks,
    Darren
  • Darren Lo at May 16, 2013 at 9:57 pm
    Glad to hear it's working now!

    On Thu, May 16, 2013 at 2:55 PM, Tiantian Fiona Ren wrote:

    Thanks a lot, Darren! The problem is solved.


    2013/5/15 Darren Lo <dlo@cloudera.com>
    Hi Fiona,

    The problem is usually that mapreduce is trying to create something in
    the user dir, in your case /user/dlabadmin. Since the user "dlabadmin"
    doesn't have write privileges to /user, this fails. If you create your
    user's home directory in hdfs and chown it to the right user, you'll
    probably get past this error. dlabadmin doesn't actually need write
    permissions to /user, just needs the home directory created.

    Thanks,
    Darren

    On Wed, May 15, 2013 at 12:21 PM, fiona ren wrote:

    Hi Darren,

    Really appreciate your prompt reply!
    Here's my situation: I ran these commands

    ints = to.dfs(1:100)
    calc = mapreduce(input = ints,
                     map = function(k, v) cbind(v, 2*v))
    from.dfs(calc)

    and ran into these errors:

    packageJobJar: [/tmp/Rtmpg8rLZs/rmr-local-env6a2e33fc3b5a, /tmp/Rtmpg8rLZs/rmr-global-env6a2e13cfe97b, /tmp/Rtmpg8rLZs/rmr-streaming-map6a2e5786f445, /tmp/hadoop-dlabadmin/hadoop-unjar4543684753945520558/] [] /tmp/streamjob5147727764062864307.jar tmpDir=null
    13/05/15 12:52:13 ERROR security.UserGroupInformation: PriviledgedActionException as:dlabadmin (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=dlabadmin, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4655)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2996)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2938)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

    13/05/15 12:52:13 ERROR streaming.StreamJob: Error Launching job : Permission denied: user=dlabadmin, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4655)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2996)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2938)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

    Streaming Command Failed!
    Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
      hadoop streaming failed with error code 5
    DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
    DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
    DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
    rmr: DEPRECATED: Please use 'rm -r' instead.
    13/05/15 12:52:18 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://ub12hdpmaster:8020/user/dlabadmin/.Trash/Current/tmp/Rtmpg8rLZs
    rmr: Failed to move to trash: hdfs://ub12hdpmaster:8020/tmp/Rtmpg8rLZs/file6a2e5d588a87. Consider using -skipTrash option
    > hdfs.ls("/tmp")

    My understanding of these errors is that the admin user does not have "write"
    privileges on these HDFS files.
    I am trying to find a way to grant more privileges to the admin user.
    Let me know whether my understanding is correct.

    Thanks,

    Fiona
    On Wednesday, May 15, 2013 2:09:09 PM UTC-5, Darren Lo wrote:

    Hi Fiona,

    Turning off security is not recommended nor necessary. Usually you run
    your hive queries or map reduce jobs as a non-system user (not hive, root,
    hdfs, but something like fiona or bob).

    You also need to create the home directory for these users. If you are
    using Hue, you can very easily create a user in Hue and check the option to
    create the home directory. If not, you can run:
    sudo -u hdfs hdfs dfs -mkdir /user/fiona
    sudo -u hdfs hdfs dfs -chown fiona:fiona /user/fiona

    At this point, running hive jobs while logged in as user "fiona" should
    work.

    If you really want to disable permission checking, then you can edit
    the HDFS configuration via the Cloudera Manager UI. Look for the property
    "Check HDFS Permissions", which should appear by default along with all
    other service-wide configs. If it doesn't appear, search for it using the
    search box on the left. Then restart HDFS.

    Thanks,
    Darren

    On Wed, May 15, 2013 at 12:00 PM, fiona ren wrote:

    Hey Kavish,

    I'm using Cloudera Manager to configure the cluster, and I have now run into
    the same problem. You mentioned "Do this in CDH4 Manager HDFS
    CONFIGURATION (if you are using CDH4 to make your hadoop cluster)."
    I opened the settings page for hdfs1 and clicked "configure", but I don't know
    which file to edit. Could you please provide more details on what to add and
    where?

    Thanks

    On Saturday, February 23, 2013 8:08:37 AM UTC-6, Kavish Ahuja wrote:

    The simplest answer is:
    Disable the dfs permissions check by adding the property below to
    conf/hdfs-site.xml:

    <property>
    <name>dfs.permissions</name>
    <value>false</value>
    </property>


    Do this in CDH4 Manager HDFS CONFIGURATION (if you are using CDH4 to
    make your hadoop cluster).

    CHEERS !!


    On Wednesday, November 28, 2012 8:21:32 AM UTC+5:30, Ram
    Krishnamurthy wrote:
    Are you logged in as root or hdfs? If you su - hdfs and try hive it
    should work

    On Tue, Nov 27, 2012 at 9:50 PM, Thilak wrote:

    Hi Mike,

    I have a CentOS Hadoop cluster and I face the same problem when I try
    to create a table in Hive.
    I tried:
    chmod 777 /tmp
    sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
    and I also tried:
    sudo -u hdfs hadoop fs -mkdir /user/root
    sudo -u hdfs hadoop fs -chown root:root /user/root

    But it didn't work!
    Any suggestions?
    On Monday, 13 August 2012 07:27:52 UTC-7, Mike wrote:

    Hi All,

    I have a 2-node CentOS Hadoop cluster. I installed Cloudera Manager 4.0
    Free Edition and CDH 4.0.3.
    Installation went smoothly.

    I installed Sqoop on it today. When I run the Sqoop import to load
    data from Oracle 11g to HBase,
    I get the subject error.

    I am running it as the root user. I am still getting the same error.

    Sqoop creates the table fine, but when it tries to write the data,
    Hadoop (HDFS) gives this error.

    Any thoughts?

    Thanks


    --
    Thanks,
    *Ram Krishnamurthy*
    rkrishnamurthy@greenway-solutions.com
    *Cell: 704-953-8125*



    --
    Thanks,
    Darren

    --
    Thanks,
    Darren

    --
    Thanks,
    Darren
