Hi,
We are working on a first-time install of Cloudera Manager Standard 4.6, and I have a problem starting the NameNode on the cluster.
The role log file says:
2013-06-22 14:24:44,790 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2013-06-22 14:24:44,801 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/gocloud/hadoop/dfs/nn/in_use.lock acquired by nodename 19928@nmlgoibibofe04.edc.mihi.com
2013-06-22 14:24:44,803 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-06-22 14:24:44,803 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-06-22 14:24:44,803 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-06-22 14:24:44,803 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:400)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:610)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.
The following appears in the stderr log, and I think it is related to the issue (note the "Permission denied" from find partway through):
+ '[' -e /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE/topology.py ']'
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE#g' /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE/topology.py
+ '[' -e /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE/log4j.properties ']'
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE#g' /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE/log4j.properties
++ find /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE -maxdepth 1 -name '*.py'
find: /var/run/cloudera-scm-agent/process: Permission denied
+ OUTPUT=
+ '[' '' '!=' '' ']'
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = namenode ']'
+ '[' file-operation = namenode ']'
+ '[' bootstrap = namenode ']'
+ '[' failover = namenode ']'
+ '[' transition-to-active = namenode ']'
+ '[' initializeSharedEdits = namenode ']'
+ '[' initialize-znode = namenode ']'
+ '[' format-namenode = namenode ']'
+ '[' monitor-decommission = namenode ']'
+ '[' jnSyncWait = namenode ']'
+ '[' nnRpcWait = namenode ']'
+ '[' monitor-upgrade = namenode ']'
+ '[' finalize-upgrade = namenode ']'
+ '[' mkdir = namenode ']'
+ '[' namenode = namenode -o secondarynamenode = namenode -o datanode = namenode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ exec /opt/cloudera/parcels/CDH-4.3.0-1.cdh4.3.0.p0.22/lib/hadoop-hdfs/bin/hdfs --config /var/run/cloudera-scm-agent/process/253-hdfs-NAMENODE namenode
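For what it's worth, my understanding is that "NameNode is not formatted" means the NameNode metadata directory has never been initialized, so it is missing the current/VERSION file that a formatted directory would contain. Here is a minimal sketch of that check (the path /home/gocloud/hadoop/dfs/nn is taken from the lock-file line in the role log; the check_formatted helper and the temp-dir demo are just for illustration, not part of any Hadoop tooling):

```shell
#!/bin/sh
# Sketch: a formatted NameNode storage dir contains current/VERSION.
# On the cluster the directory to inspect would be /home/gocloud/hadoop/dfs/nn;
# a temp dir is used here so the example is self-contained.
check_formatted() {
    if [ -f "$1/current/VERSION" ]; then
        echo "formatted"
    else
        echo "not formatted"
    fi
}

NN_DIR="$(mktemp -d)"
check_formatted "$NN_DIR"                          # → not formatted
mkdir -p "$NN_DIR/current" && touch "$NN_DIR/current/VERSION"
check_formatted "$NN_DIR"                          # → formatted
rm -rf "$NN_DIR"
```

If the real directory is empty like the fresh temp dir above, that would explain why the startup command at the end of the trace fails.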
Any help would be greatly appreciated.