I have configured Cloudera Manager, and CDH 4.0.4 is installed.

The DataNode is not running, and it fails to start with the following output:

Supervisor returned FATAL: + '[' /usr/share/cmf ']'
++ find /usr/share/cmf/lib/plugins -name 'event-publish-*.jar'
++ tr -d '\n'
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-4.0.4-shaded.jar
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE=
+ '[' -z ']'
+ export HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-4.0.4-shaded.jar
+ HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-4.0.4-shaded.jar
+ set -x
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/93-hdfs-DATANODE#g' /run/cloudera-scm-agent/process/93-hdfs-DATANODE/core-site.xml /run/cloudera-scm-agent/process/93-hdfs-DATANODE/hdfs-site.xml
+ '[' -e /run/cloudera-scm-agent/process/93-hdfs-DATANODE/topology.py ']'
++ find /run/cloudera-scm-agent/process/93-hdfs-DATANODE -maxdepth 1 -name '*.py'
+ OUTPUT=
+ '[' '' '!=' '' ']'
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = datanode ']'
+ '[' file-operation = datanode ']'
+ '[' bootstrap = datanode ']'
+ '[' failover = datanode ']'
+ '[' transition-to-active = datanode ']'
+ '[' initializeSharedEdits = datanode ']'
+ '[' initialize-znode = datanode ']'
+ '[' format-namenode = datanode ']'
+ '[' monitor-decommission = datanode ']'
+ '[' monitor-upgrade = datanode ']'
+ '[' finalize-upgrade = datanode ']'
+ '[' mkdir = datanode ']'
+ '[' namenode = datanode -o secondarynamenode = datanode -o datanode = datanode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ exec /usr/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/93-hdfs-DATANODE datanode

Please help me resolve this.


  • Philip Zeyliger at Oct 17, 2012 at 10:37 pm
    Look up the logs of the data node (accessible via the UI, or via
    /var/log/hadoop/*DATANODE*). Most likely, one of the ports the datanode
    uses is taken up by something else.
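
    For example, on a Cloudera Manager managed host something like the
    following usually surfaces the real error (the exact log directory and
    file names vary by release, so treat these paths as illustrative):

      # locate the DataNode role log files
      ls -d /var/log/hadoop*/
      # show the most recent fatal/error lines from the DataNode log
      grep -iE 'FATAL|ERROR' /var/log/hadoop*/*DATANODE* | tail -n 20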

    -- Philip
  • Shivayogi at Oct 18, 2012 at 1:59 am
    Hi Phil, the following fatal error is what I found in the logs:

    October 18 2012 6:58 AM FATAL
    org.apache.hadoop.hdfs.server.datanode.DataNode

    Initialization failed for block pool Block pool
    BP-478258594-127.0.1.1-1350497088276 (storage id
    DS-572390315-127.0.1.1-50010-1350497097205) service to
    shivu-desktop/127.0.1.1:8020
    org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
    Datanode denied communication with namenode:
    DatanodeRegistration(127.0.0.1,
    storageID=DS-572390315-127.0.1.1-50010-1350497097205, infoPort=50075,
    ipcPort=50020, storageInfo=lv=-40;cid=cluster4;nsid=1863494069;c=0)
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:566)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3358)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:854)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:91)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:20018)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)

    at org.apache.hadoop.ipc.Client.call(Client.java:1160)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at $Proxy10.registerDatanode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy10.registerDatanode(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:149)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:619)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:221)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:661)
    at java.lang.Thread.run(Thread.java:662)

  • Shivayogi at Oct 18, 2012 at 1:55 am
    Following are the results from my host inspector.

    Inspector ran on all 1 hosts.
    Individual hosts resolved their own hostnames correctly.
    No errors were found while checking /etc/hosts.
    All hosts resolved localhost to 127.0.0.1.
    All hosts checked resolved each other's hostnames correctly.
    Host clocks are approximately in sync (within ten minutes).
    Host time zones are consistent across the cluster.
    No users or groups are missing.
    The numeric user ids of the HDFS user are consistent across hosts.
    No kernel versions that are known to be bad are running.
    0 hosts are running CDH3 and 1 hosts are running CDH4.
    All checked hosts are running the same version of components.
    All checked Cloudera Management Daemons versions are consistent with the
    server.
    All checked Cloudera Management Agents versions are consistent with the server.
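
    The same name-resolution checks can also be repeated by hand, for
    example (the host name here is just my machine's):

      hostname -f                      # should print the real host name (shivu-desktop here)
      getent hosts "$(hostname -f)"    # should return the machine's real IP, not 127.0.1.1
      getent hosts localhost           # should return 127.0.0.1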
  • Philip Zeyliger at Oct 18, 2012 at 9:28 pm
    Hi Shivayogi,

    Datanode denied communication with namenode:
    DatanodeRegistration(127.0.0.1,
    storageID=DS-572390315-127.0.1.1-50010-1350497097205, infoPort=50075,
    ipcPort=50020, storageInfo=lv=-40;cid=cluster4;nsid=1863494069;c=0)

    Here, the datanode is identifying itself as 127.0.0.1, whereas the
    namenode's "hosts.allow" was probably set up with the real host name (or
    is the host called "localhost" consistently?). You could add "localhost"
    to the "dfs.hosts.allow safety valve", but it would be better to get your
    networking and hostname configuration right.
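
    For reference, a stock Ubuntu install usually ships an /etc/hosts like
    the first block below, and that 127.0.1.1 line is what makes the
    datanode register with a loopback address; mapping the host name to the
    machine's real address (the 192.168.1.2 below is only a placeholder)
    avoids the problem:

      # typical Ubuntu default /etc/hosts (problematic for Hadoop)
      127.0.0.1   localhost
      127.0.1.1   shivu-desktop

      # corrected: point the host name at the machine's real address
      127.0.0.1   localhost
      192.168.1.2 shivu-desktop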

    -- Philip

  • Shivayogi at Oct 19, 2012 at 2:43 am
    Hi Phil,

    I resolved it by removing the Ubuntu 127.0.1.1 entry and giving my
    desktop its real IP address (192.168.1.2) instead. After that I
    reinstalled CDH4 and it completed successfully.
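
    In case it helps anyone else, after the fix the registration can be
    confirmed with something like this (run as the hdfs user on a typical
    CDH install):

      sudo -u hdfs hdfs dfsadmin -report   # the live datanode should now report the real IP
      getent hosts "$(hostname -f)"        # should no longer return 127.0.1.1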

    Thanks
    Shivayogi
