Just installed CDH4 on a newly created Ubuntu 12.04 server VM. The
installation went smoothly but this happens when I try to start the single
node cluster:

Unhandled exception. Starting shutdown.
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
...


The hbase service won't start.

This is similar to a lot of other threads but none of them seem to match this exactly.


TIA


  • Harsh J at Sep 10, 2012 at 8:49 pm
    Hi,

    Do this:

    Visit hbase1 -> Actions dropdown (top right) -> "Create root directory…"

    Once this completes, start HBase via Actions -> Start. Does this work?
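
    For reference, a minimal manual equivalent of that action (a sketch,
    assuming the default CDH4 service accounts and the default HBase root of
    /hbase) is to create the directory as the HDFS superuser and hand it to
    the hbase user:

    # run as the hdfs superuser: create the HBase root dir, then chown it
    sudo -u hdfs hadoop fs -mkdir /hbase
    sudo -u hdfs hadoop fs -chown hbase /hbase

    The permission error above arises because the hbase user tries to create
    its root directory under "/", which only hdfs can write to
    (inode="/":hdfs:supergroup:drwxr-xr-x).
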
    --
    Harsh J
  • Crazyeddie at Sep 10, 2012 at 9:09 pm
    No. I did that, and now I get this:

    DataStreamer Exception
    java.io.IOException: File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1269)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1977)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:470)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:292)

    ...

    plus 10 other exception stack traces!
  • Adam Smieszny at Sep 10, 2012 at 9:43 pm
    Have your DataNode services started? Are all services showing "Running"
    with "Good" Health?

    Thanks,
    Adam
    --
    Adam Smieszny
    Cloudera | Systems Engineer | http://tiny.cloudera.com/about
    917.830.4156 | http://www.linkedin.com/in/adamsmieszny
  • Crazyeddie at Sep 10, 2012 at 10:13 pm
    Not sure - I tried to restart the entire cluster, assuming it would start
    all the required services in the correct sequence:

    Initialization failed for block pool Block pool BP-1689176817-127.0.1.1-1347243567581 (storage id DS-300661076-127.0.1.1-50010-1347243571650) service to cluster-1/127.0.1.1:8020
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode denied communication with namenode: DatanodeRegistration(127.0.0.1, storageID=DS-300661076-127.0.1.1-50010-1347243571650, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=cluster8;nsid=1834108066;c=0)

  • Adam Smieszny at Sep 10, 2012 at 9:58 pm
    The 127.0.1.1 that I see in those addresses indicates to me that this is
    likely a networking issue. I would venture a guess that we need to change
    the way your hosts are resolving their own addresses.

    Can you please share your /etc/hosts from the CM machine?

    This is a common issue on Ubuntu hosts - please see
    https://groups.google.com/a/cloudera.org/forum/?fromgroups#!search/127.0.1.1
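
    A quick way to see what the daemons will resolve to (getent consults
    /etc/hosts through the same resolver path the services use):

    hostname -f                   # the name the daemons register under
    getent hosts $(hostname -f)   # should print the real LAN IP, not 127.0.1.1

    Stock Ubuntu adds a "127.0.1.1 <hostname>" line to /etc/hosts, so the
    hostname resolves to a loopback address; the DataNode then registers as
    127.0.x.x and the NameNode rejects it, which matches the
    DisallowedDatanodeException above.
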

    Thanks,
    Adam
    --
    Adam Smieszny
    Cloudera | Systems Engineer | http://tiny.cloudera.com/about
    917.830.4156 | http://www.linkedin.com/in/adamsmieszny
  • Crazyeddie at Sep 10, 2012 at 10:23 pm
    Changed the 127.0.1.1 line so /etc/hosts is now

    127.0.0.1 localhost
    192.168.132.118 cluster-1.domain cluster-1

    Same result: DisallowedDatanodeException.
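
    After editing, the change can be sanity-checked with:

    getent hosts cluster-1   # should now print 192.168.132.118
    hostname -f              # should print cluster-1.domain

    Note that daemons started before the change may still hold their old
    127.0.1.1 identity until they are fully restarted.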
  • Crazyeddie at Sep 10, 2012 at 10:24 pm
    It looks as if /etc/hosts (or DNS) needs to be in the correct state
    *before* installation... I'm still getting errors about 127.0.1.1 when
    starting HDFS.

    Where on the FS are the configuration files? Maybe they can be patched
    after the fact.
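
    For what it's worth, on a Cloudera Manager-managed node the generated
    configuration files typically live under the agent's per-process
    directories (exact path may vary by CM version):

    # each role gets a freshly generated config dir at start time
    sudo ls /var/run/cloudera-scm-agent/process/

    Since these are regenerated from CM every time a role starts, hand-patching
    them does not stick; the fix has to go into /etc/hosts or DNS itself.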
  • Crazyeddie at Sep 11, 2012 at 3:38 am
    Well after much messing about I created a new VM from scratch, and set
    /etc/hosts as follows:

    127.0.0.1 localhost
    192.168.xxx.yyy cluster-1

    Then installed CDH4 from scratch, and everything worked nicely. Looks like
    the default /etc/hosts on Ubuntu confuses it mightily.
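
    For anyone hitting the same thing: the usual safe pattern is the machine's
    real LAN address with the FQDN first and the short name second, and no
    127.0.1.1 line at all, e.g. (placeholder IP as above):

    127.0.0.1 localhost
    192.168.xxx.yyy cluster-1.domain cluster-1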
  • Philip Zeyliger at Sep 11, 2012 at 4:04 pm
    The "host inspector" during the installation should have caught the issue,
    but I'm glad you're up and running!

    -- Philip