FAQ
I have configured Cloudera Manager, and CDH 4.0.4 is installed.

The DataNode is not running; startup fails with the following output:

Supervisor returned FATAL: + '[' /usr/share/cmf ']'
++ find /usr/share/cmf/lib/plugins -name 'event-publish-*.jar'
++ tr -d '\n'
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-4.0.4-shaded.jar
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE=
+ '[' -z ']'
+ export HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-4.0.4-shaded.jar
+ HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-4.0.4-shaded.jar
+ set -x
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/93-hdfs-DATANODE#g' /run/cloudera-scm-agent/process/93-hdfs-DATANODE/core-site.xml /run/cloudera-scm-agent/process/93-hdfs-DATANODE/hdfs-site.xml
+ '[' -e /run/cloudera-scm-agent/process/93-hdfs-DATANODE/topology.py ']'
++ find /run/cloudera-scm-agent/process/93-hdfs-DATANODE -maxdepth 1 -name '*.py'
+ OUTPUT=
+ '[' '' '!=' '' ']'
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = datanode ']'
+ '[' file-operation = datanode ']'
+ '[' bootstrap = datanode ']'
+ '[' failover = datanode ']'
+ '[' transition-to-active = datanode ']'
+ '[' initializeSharedEdits = datanode ']'
+ '[' initialize-znode = datanode ']'
+ '[' format-namenode = datanode ']'
+ '[' monitor-decommission = datanode ']'
+ '[' monitor-upgrade = datanode ']'
+ '[' finalize-upgrade = datanode ']'
+ '[' mkdir = datanode ']'
+ '[' namenode = datanode -o secondarynamenode = datanode -o datanode = datanode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ exec /usr/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/93-hdfs-DATANODE datanode
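(For readers: the trace above is `set -x` output from the Cloudera Manager launcher, not an exception. Its first lines locate the CM event-publish plugin jar and prepend it to HADOOP_CLASSPATH. A minimal sketch of that discovery step, using a temporary directory in place of /usr/share/cmf:)

```shell
#!/bin/sh
# Sketch of the plugin-jar discovery seen in the trace. The temp dir and
# jar name stand in for /usr/share/cmf/lib/plugins on a real node.
CMF_LIB=$(mktemp -d)
mkdir -p "$CMF_LIB/plugins"
touch "$CMF_LIB/plugins/event-publish-4.0.4-shaded.jar"

# find the shaded jar and strip the trailing newline, as the launcher does
ADD_TO_CP=$(find "$CMF_LIB/plugins" -name 'event-publish-*.jar' | tr -d '\n')

# if HADOOP_CLASSPATH is empty, set it; otherwise append
if [ -z "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH="$ADD_TO_CP"
else
    export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$ADD_TO_CP"
fi
echo "$HADOOP_CLASSPATH"
```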

Please help me resolve this.
--
regards
Shiv

  • Adam Smieszny at Oct 18, 2012 at 5:28 pm
    Hi Shivayogi,

    The output that you shared indicates that we did get to the step where we
    are executing the datanode process:
    exec /usr/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/93-hdfs-DATANODE datanode

    This means that more information should be available on the DataNode
    machine in /var/log/hadoop-hdfs/*DATANODE*.
    You might also check for errors in /var/log/cloudera-scm-server/*.

    Please share any errors you find in either of these.
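    (A quick way to surface errors from those logs. This is a sketch: a temp
    directory and a made-up log file stand in for /var/log/hadoop-hdfs on a
    real DataNode machine.)

```shell
#!/bin/sh
# Sketch: scan DataNode logs for errors. LOG_DIR and the log file name are
# illustrative; on a real node, point grep at /var/log/hadoop-hdfs and
# /var/log/cloudera-scm-server instead.
LOG_DIR=$(mktemp -d)
printf '%s\n%s\n' \
    '2012-10-17 10:33:01 INFO  DataNode starting' \
    '2012-10-17 10:33:02 ERROR Incorrect configuration' \
    > "$LOG_DIR/hadoop-cmf-hdfs1-DATANODE-host.log.out"

# case-insensitive search; -H prefixes each match with the file name
grep -iH 'error' "$LOG_DIR"/*DATANODE*
```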

    Thanks,
    On Wed, Oct 17, 2012 at 10:33 AM, Shivayogi wrote:


    [original message quoted above]


    --
    Adam Smieszny
    Cloudera | Systems Engineer | http://tiny.cloudera.com/about
    917.830.4156 | http://www.linkedin.com/in/adamsmieszny
  • Shivayogi at Oct 19, 2012 at 2:47 am
    Adam,

    I have resolved the issue by removing Ubuntu's 127.0.0.1 address for the
    host and using my desktop's actual address instead (e.g. 192.168.1.2).
    Thanks for your reply.
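    (Editor's sketch of the fix described above, assuming it refers to the
    host's /etc/hosts mapping. A temp file stands in for /etc/hosts, and the
    hostname "cdh-node1" and address 192.168.1.2 are illustrative.)

```shell
#!/bin/sh
# Sketch of the /etc/hosts correction: the DataNode's hostname must resolve
# to the machine's real address, not a loopback address.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.0.1 cdh-node1\n' > "$HOSTS"  # broken: host maps to loopback

# delete the loopback mapping for the host (GNU sed), keep the localhost line
sed -i '/^127\.0\.0\.1[[:space:]].*cdh-node1/d' "$HOSTS"
# add the host's real address
printf '192.168.1.2 cdh-node1\n' >> "$HOSTS"
cat "$HOSTS"
```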

    Regards
    Shivayogi
    On Thu, Oct 18, 2012 at 9:14 PM, Adam Smieszny wrote:

    [Adam's reply quoted above]

    --
    regards
    Shiv

Discussion Overview
group: scm-users
categories: hadoop
posted: Oct 17, '12 at 2:39p
active: Oct 19, '12 at 2:47a
posts: 3
users: 2
website: cloudera.com
irc: #hadoop

2 users in discussion

Shivayogi: 2 posts; Adam Smieszny: 1 post
