FAQ
Hi,

We are working on a first-time install of Cloudera Manager Standard 4.6. I
have a problem starting the NameNode on the cluster.

The role log file says:

2013-06-22 14:24:44,790 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2013-06-22 14:24:44,801 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/gocloud/hadoop/dfs/nn/in_use.lock acquired by nodename 19928@nmlgoibibofe04.edc.mihi.com
2013-06-22 14:24:44,803 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-06-22 14:24:44,803 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-06-22 14:24:44,803 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-06-22 14:24:44,803 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:400)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:610)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.
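
The "NameNode is not formatted" error generally means that the NameNode's
metadata directory contains no formatted filesystem image yet. A minimal way
to check, assuming /home/gocloud/hadoop/dfs/nn from the lock message above is
the configured dfs.namenode.name.dir:

  # A formatted NameNode has a current/ subdirectory holding a VERSION file
  # and an fsimage; an empty directory here means it was never formatted.
  ls -l /home/gocloud/hadoop/dfs/nn
  ls -l /home/gocloud/hadoop/dfs/nn/current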

This appears in the stderr log, and I think it is causing the issue:

+ '[' -e /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE/topology.py ']'
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE#g' /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE/topology.py
+ '[' -e /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE/log4j.properties ']'
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE#g' /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE/log4j.properties
++ find /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE -maxdepth 1 -name '*.py'
find: /var/run/cloudera-scm-agent/process: Permission denied
+ OUTPUT=
+ '[' '' '!=' '' ']'
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = namenode ']'
+ '[' file-operation = namenode ']'
+ '[' bootstrap = namenode ']'
+ '[' failover = namenode ']'
+ '[' transition-to-active = namenode ']'
+ '[' initializeSharedEdits = namenode ']'
+ '[' initialize-znode = namenode ']'
+ '[' format-namenode = namenode ']'
+ '[' monitor-decommission = namenode ']'
+ '[' jnSyncWait = namenode ']'
+ '[' nnRpcWait = namenode ']'
+ '[' monitor-upgrade = namenode ']'
+ '[' finalize-upgrade = namenode ']'
+ '[' mkdir = namenode ']'
+ '[' namenode = namenode -o secondarynamenode = namenode -o datanode = namenode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ exec /opt/cloudera/parcels/CDH-4.3.0-1.cdh4.3.0.p0.22/lib/hadoop-hdfs/bin/hdfs --config /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE namenode
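
The "Permission denied" from find in the trace above refers to the Cloudera
Manager agent's process directory. The permissions involved can be inspected
directly (paths taken from the trace; running these as root is an assumption):

  # Show ownership and mode of the agent's process directory and of the
  # generated configuration directory for this NameNode role.
  ls -ld /var/run/cloudera-scm-agent/process
  ls -ld /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE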


Any help would be greatly appreciated.

  • bc Wong at Jun 25, 2013 at 6:40 pm
    Can you try to format the NN first? CM -> HDFS -> NN -> Actions -> Format.

    Cheers,
    bc
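
    For reference, the CM Format action corresponds to formatting the NameNode
    from the shell. A rough command-line equivalent, assuming the hdfs user
    owns the metadata directory (on a CM-managed cluster the CM action is
    preferred, and formatting erases any existing HDFS metadata):

      # Format the NameNode metadata directory (destroys existing metadata).
      sudo -u hdfs hdfs namenode -format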

  • ชาวนา ชาวไร่ at Jul 9, 2013 at 3:28 am
    Can you try to start HDFS?
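
    Once the NameNode has been formatted and HDFS started, a quick way to
    confirm it is up (standard HDFS commands; running them as the hdfs
    superuser is an assumption):

      # Report basic filesystem status and check whether the NameNode is in safe mode.
      sudo -u hdfs hdfs dfsadmin -report
      sudo -u hdfs hdfs dfsadmin -safemode get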
