Hi,
I upgraded Cloudera Manager from 4.1 to 4.5.2.1 yesterday, but HDFS will not
start anymore.
How can I fix this issue?
Thanks

*CDH:* CDH3
*OS:* Debian/Squeeze
*Getting this warning*: Mismatched CDH versions: host has CDH3 but role
expects 4

*Environment Variables:*

     HADOOP_LOGFILE=hadoop-cmf-hdfs1-DATANODE-manager.oran.loc.log.out
     HADOOP_AUDIT_LOGGER=INFO,RFAAUDIT
     HADOOP_ROOT_LOGGER=INFO,RFA
     CDH_VERSION=4
     HADOOP_LOG_DIR=/var/log/hadoop-hdfs
     HADOOP_SECURITY_LOGGER=INFO,RFAS
     HADOOP_DATANODE_OPTS=-Xmx1041855203 -XX:+UseParNewGC
-XX:+UseConcMarkSweepGC -XX:-CMSConcurrentMTEnabled
-XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled


*Start Command results:*

     Supervisor returned FATAL: + '[' -e
/var/run/cloudera-scm-agent/process/147-hdfs-DATANODE/topology.py ']'
     + '[' -e
/var/run/cloudera-scm-agent/process/147-hdfs-DATANODE/log4j.properties ']'
     + perl -pi -e
's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/147-hdfs-DATANODE#g'
/var/run/cloudera-scm-agent/process/147-hdfs-DATANODE/log4j.properties
     ++ find /var/run/cloudera-scm-agent/process/147-hdfs-DATANODE -maxdepth
1 -name '*.py'
     + OUTPUT=
     + '[' '' '!=' '' ']'
     + export HADOOP_IDENT_STRING=hdfs
     + HADOOP_IDENT_STRING=hdfs
     + '[' -n '' ']'
     + acquire_kerberos_tgt hdfs.keytab
     + '[' -z hdfs.keytab ']'
     + '[' -n '' ']'
     + '[' validate-writable-empty-dirs = datanode ']'
     + '[' file-operation = datanode ']'
     + '[' bootstrap = datanode ']'
     + '[' failover = datanode ']'
     + '[' transition-to-active = datanode ']'
     + '[' initializeSharedEdits = datanode ']'
     + '[' initialize-znode = datanode ']'
     + '[' format-namenode = datanode ']'
     + '[' monitor-decommission = datanode ']'
     + '[' jnSyncWait = datanode ']'
     + '[' nnRpcWait = datanode ']'
     + '[' monitor-upgrade = datanode ']'
     + '[' finalize-upgrade = datanode ']'
     + '[' mkdir = datanode ']'
     + '[' namenode = datanode -o secondarynamenode = datanode -o datanode =
datanode ']'
     + HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS
-Djava.net.preferIPv4Stack=true '
     + export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT
-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
     + HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT
-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
     + exec /usr/lib/hadoop-hdfs/bin/hdfs --config
/var/run/cloudera-scm-agent/process/147-hdfs-DATANODE datanode
     /usr/lib/cmf/service/hdfs/hdfs.sh: line 346:
/usr/lib/hadoop-hdfs/bin/hdfs: No such file or directory
     /usr/lib/cmf/service/hdfs/hdfs.sh: line 346: exec:
/usr/lib/hadoop-hdfs/bin/hdfs: cannot execute: No such file or directory
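The trace fails at the final exec because /usr/lib/hadoop-hdfs/bin/hdfs is the
CDH4 binary path, while this host still has a CDH3 install; CDH_VERSION=4 in
the environment shows Cloudera Manager is generating a CDH4 start script for
it. A minimal sketch to confirm which layout the host actually has (the
/usr/lib/hadoop-0.20 path and the package names are assumptions based on
standard CDH3/CDH4 Debian packaging, not taken from this thread):

     # Check which HDFS binaries/packages are present on the DataNode host
     ls -l /usr/lib/hadoop-hdfs/bin/hdfs   # CDH4 path the generated script expects
     ls -d /usr/lib/hadoop-0.20            # typical CDH3 install location (assumption)
     dpkg -l | grep -i hadoop              # hadoop-0.20-* packages -> CDH3; hadoop-hdfs -> CDH4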


  • Vikram Srivastava at May 8, 2013 at 5:40 pm
    Hey Kansu,

    While upgrading Cloudera Manager, did you also run "Upgrade Cluster" (from
    the main services page -> "Actions" menu)? That would make Cloudera Manager
    think that you are using CDH4 instead of CDH3.

    Vikram
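    If it is unclear which CDH version Cloudera Manager has recorded for the
    cluster, one way to check is the CM REST API (a minimal sketch; the
    cm-host name and admin:admin credentials are placeholders, and the
    "version" field is assumed from the CM 4 API documentation):

         # List clusters known to Cloudera Manager; each entry carries a
         # "version" field such as "CDH3" or "CDH4".
         curl -u admin:admin 'http://cm-host:7180/api/v1/clusters'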
  • Kansu Köse at May 9, 2013 at 6:27 am
    Hi Vikram,
    Thanks for your response.
    Yes, I guess I did that; it's my mistake. We use an old Hypertable version
    on that cluster (Hypertable 0.9.6 would not work with CDH4), and we have
    lots of data in there.
    Is there any way to revert these changes?
  • Kansu Köse at May 9, 2013 at 8:25 am
    Hi,
    My problem is solved.
    Problem: I chose the wrong CDH version during the upgrade process.
    Solution (see the command sketch below):
    1. I uninstalled cloudera-scm-server.
    2. Then I reinstalled it.
    3. I added the existing cluster again.
    Everything is working perfectly now.
    Thank you very much for your help.
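    For reference, a rough sketch of what those steps look like on Debian,
    assuming the standard cloudera-scm-server packaging (illustrative commands,
    not copied from the thread):

         sudo service cloudera-scm-server stop
         sudo apt-get purge cloudera-scm-server      # remove the misconfigured server
         sudo apt-get install cloudera-scm-server    # reinstall it
         sudo service cloudera-scm-server start
         # then re-add the existing hosts/cluster in the CM web UI, choosing CDH3 this time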

