Thanks Darren,

But nothing of that kind has happened recently.

Is there any solution for this, or how can I recover from this issue?

Every time I start or restart, I hit the same issue.


Thanks & Regards,

*Anupam Ranjan*

On 20 March 2013 21:55, Darren Lo wrote:

(Adding back cdh-user, seems to be a cdh issue)

Seems there was a problem with recovering your namenode transactions. Were
there any crashes, abnormal restarts, hard drive problems, etc recently?

Log tail:
2013-03-20 10:25:53,620 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /dfs/nn/in_use.lock acquired by nodename
11230@clouderra.tcubes.com
2013-03-20 10:25:53,731 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /data/dfs/nn/in_use.lock acquired by nodename
11230@clouderra.tcubes.com
2013-03-20 10:25:53,880 INFO
org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering
unfinalized segments in /dfs/nn/current
2013-03-20 10:25:53,988 INFO
org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering
unfinalized segments in /data/dfs/nn/current
2013-03-20 10:25:54,581 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
metrics system...
2013-03-20 10:25:54,582 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
stopped.
2013-03-20 10:25:54,582 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
shutdown complete.
2013-03-20 10:25:54,583 FATAL
org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: Gap in transactions. Expected to be able to read up
until at least txid 2647 but unable to find any edit logs containing txid
2647
at
org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkForGaps(FSEditLog.java:1175)
at
org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1133)
at
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:616)
at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:267)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:534)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:424)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:386)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:398)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:432)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
2013-03-20 10:25:54,592 INFO org.apache.hadoop.util.ExitUtil: Exiting with
status 1
2013-03-20 10:25:54,600 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at clouderra.tcubes.com/192.168.3.227
************************************************************/

HBase can't start because of a connection error, most likely because your
namenode is down.
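(Editor's note: if no intact copy of the missing edit logs can be found in either metadata directory, one commonly suggested last resort is the NameNode's recovery mode, which can skip over a missing or corrupt edit segment at the cost of discarding those transactions. This is a hypothetical sketch only, using the two dfs.name.dir paths that appear in the log tail above; always back up both directories first.)

```shell
# HYPOTHETICAL recovery sketch -- paths taken from the log above.
# Step 1: back up both dfs.name.dir copies before touching anything.
sudo -u hdfs cp -a /dfs/nn /dfs/nn.bak
sudo -u hdfs cp -a /data/dfs/nn /data/dfs/nn.bak

# Step 2: run the NameNode in recovery mode. It is interactive and
# prompts before skipping bad or missing edits, which loses those
# transactions permanently.
sudo -u hdfs hdfs namenode -recover
```

If recovery mode succeeds, start the NameNode normally afterwards and run `hdfs fsck /` to see what, if anything, was lost.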

Thanks,
Darren

On Tue, Mar 19, 2013 at 9:58 PM, Anupam Ranjan wrote:

Hi Darren,

PFA the role log files for Namenode and Hbase-master respectively.

I don't see any other processes running on this port.


Thanks,
*Anupam Ranjan*

On 19 March 2013 21:00, Darren Lo wrote:

Hi Anupam,
(bcc cdh-user)

Can you please provide any relevant role logs from Cloudera Manager?
From the main page:
Click HDFS
Click on name node with problem
Click Processes tab
Click Role Log Details

Please also provide logs for HBase.

One common failure is a port conflict. If you see a message where it
can't bind to a port, then check to make sure you don't have other
processes running on those ports.
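(Editor's note: as a quick sketch of that check, assuming the default NameNode RPC port 8020 — adjust for your configuration — you can probe whether anything is already listening using bash's /dev/tcp pseudo-device:)

```shell
#!/usr/bin/env bash
# Probe a local TCP port: if the connect succeeds, something is
# already listening there. /dev/tcp/<host>/<port> is a bash builtin
# pseudo-path, not a real file.
port=8020
if (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null; then
  echo "port $port is in use"
else
  echo "port $port is free"
fi
```

If the port is in use, `sudo lsof -i :8020` or `netstat -tlnp | grep 8020` will show which process owns it.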

Thanks,
Darren


On Tue, Mar 19, 2013 at 5:42 AM, Anupam Ranjan <
fantasticanupam@gmail.com> wrote:
Hi All,

There is an issue with the Namenode on Cloudera Manager. Whenever I start
the Namenode, it fails to start and logs the error message below. The
same issue persists with HBase too.

Error:

Supervisor returned FATAL: + '[' -e /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/topology.py ']'
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE#g' /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/topology.py
++ find /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE -maxdepth 1 -name '*.py'
+ OUTPUT='/var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/cloudera_manager_agent_fencer.py
/var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/topology.py'
+ '[' '/var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/cloudera_manager_agent_fencer.py
/var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/topology.py' '!=' '' ']'
+ chmod +x /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/cloudera_manager_agent_fencer.py /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/topology.py
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = namenode ']'
+ '[' file-operation = namenode ']'
+ '[' bootstrap = namenode ']'
+ '[' failover = namenode ']'
+ '[' transition-to-active = namenode ']'
+ '[' initializeSharedEdits = namenode ']'
+ '[' initialize-znode = namenode ']'
+ '[' format-namenode = namenode ']'
+ '[' monitor-decommission = namenode ']'
+ '[' monitor-upgrade = namenode ']'
+ '[' finalize-upgrade = namenode ']'
+ '[' mkdir = namenode ']'
+ '[' namenode = namenode -o secondarynamenode = namenode -o datanode = namenode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ exec /usr/lib/hadoop-hdfs/bin/hdfs --config /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE namenode


Thanks & Regards,
*Anupam Ranjan*


--
Thanks,
Darren


Discussion Overview
group: scm-users
categories: hadoop
posted: Mar 19, '13 at 12:42p
active: Apr 10, '13 at 8:04a
posts: 11
users: 4
website: cloudera.com
irc: #hadoop
