Hi guys,

I have a very similar issue to Anupam's.

I had been running the other services fine, then I installed ZooKeeper and HBase. ZooKeeper came up OK, but HBase did not.

I'm using 3 servers, and for HBase I installed Thrift on all 3 as well.

/var/log/hbase/hbase-cmf-hbase1-MASTER-....log.out shows the following:
...
2013-04-09 15:31:35,518 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1824)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:152)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1838)
Caused by: java.net.BindException: Address already in use
	at sun.nio.ch.Net.bind(Native Method)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
	at org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:247)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:1533)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:245)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:55)
	at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:433)
	at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:422)
	at org.apache.hadoop.hbase.master.HMaster.<init>(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1819)
	... 5 more
...
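
If it helps narrow this down, the BindException means something was already listening on the master's RPC port when the new HMaster tried to bind. What I would check first (a minimal sketch; it assumes the default master RPC port 60000 and info port 60010, and that netstat/lsof are installed):

# Show which process is already bound to the assumed HBase master ports
sudo netstat -tlnp | grep -E ':(60000|60010)\b'

# Same check with lsof
sudo lsof -iTCP:60000 -sTCP:LISTEN
sudo lsof -iTCP:60010 -sTCP:LISTEN

If a stale HMaster (or another service configured on the same port) shows up, stopping it before restarting the HBase master role should clear this particular error.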

Thanks,
Jun

On Tuesday, March 19, 2013 at 9:42:53 PM UTC+9, Anupam Ranjan wrote:
Hi All,

There is an issue with the NameNode on Cloudera Manager. Whenever I start the NameNode, it fails to start and writes the error below to the log. The same issue persists with HBase too.

Error:

Supervisor returned FATAL: + '[' -e /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/topology.py ']'
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE#g' /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/topology.py
++ find /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE -maxdepth 1 -name '*.py'
+ OUTPUT='/var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/cloudera_manager_agent_fencer.py
/var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/topology.py'
+ '[' '/var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/cloudera_manager_agent_fencer.py
/var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/topology.py' '!=' '' ']'
+ chmod +x /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/cloudera_manager_agent_fencer.py /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/topology.py
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = namenode ']'
+ '[' file-operation = namenode ']'
+ '[' bootstrap = namenode ']'
+ '[' failover = namenode ']'
+ '[' transition-to-active = namenode ']'
+ '[' initializeSharedEdits = namenode ']'
+ '[' initialize-znode = namenode ']'
+ '[' format-namenode = namenode ']'
+ '[' monitor-decommission = namenode ']'
+ '[' monitor-upgrade = namenode ']'
+ '[' finalize-upgrade = namenode ']'
+ '[' mkdir = namenode ']'
+ '[' namenode = namenode -o secondarynamenode = namenode -o datanode = namenode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ exec /usr/lib/hadoop-hdfs/bin/hdfs --config /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE namenode
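
In case it helps: the bash trace above stops at the final exec, so the actual reason the NameNode dies isn't shown there; it should be in the NameNode's own logs. Somewhere to look next (paths and file-name patterns are assumptions based on a typical Cloudera Manager layout, so adjust to your hosts):

# stdout/stderr that the CM agent captured for this NameNode attempt
tail -n 100 /var/run/cloudera-scm-agent/process/78-hdfs-NAMENODE/logs/stderr.log

# the NameNode role log itself (file-name pattern is an assumption)
tail -n 200 /var/log/hadoop-hdfs/hadoop-cmf-hdfs*-NAMENODE-*.log.out

# if it turns out to be another "Address already in use", check the assumed
# default NameNode ports (8020 RPC, 50070 HTTP)
sudo netstat -tlnp | grep -E ':(8020|50070)\b'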


Thanks & Regards,
Anupam Ranjan
