I tried another EC2 instance type and hit the same error. Do I need to
configure some parameters in the CM WebUI before starting the HDFS service
on the new DataNode?
Thanks a lot!

Start HDFS service failed...
Supervisor returned FATAL. Please check the role log file, stderr, or
stdout.

stderr:
+ set -x
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/153-hdfs-DATANODE#g' /var/run/cloudera-scm-agent/process/153-hdfs-DATANODE/core-site.xml /var/run/cloudera-scm-agent/process/153-hdfs-DATANODE/hdfs-site.xml
+ '[' -e /var/run/cloudera-scm-agent/process/153-hdfs-DATANODE/topology.py ']'
+ '[' -e /var/run/cloudera-scm-agent/process/153-hdfs-DATANODE/log4j.properties ']'
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/153-hdfs-DATANODE#g' /var/run/cloudera-scm-agent/process/153-hdfs-DATANODE/log4j.properties
++ find /var/run/cloudera-scm-agent/process/153-hdfs-DATANODE -maxdepth 1 -name '*.py'
+ OUTPUT=
+ '[' '' '!=' '' ']'
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = datanode ']'
+ '[' file-operation = datanode ']'
+ '[' bootstrap = datanode ']'
+ '[' failover = datanode ']'
+ '[' transition-to-active = datanode ']'
+ '[' initializeSharedEdits = datanode ']'
+ '[' initialize-znode = datanode ']'
+ '[' format-namenode = datanode ']'
+ '[' monitor-decommission = datanode ']'
+ '[' jnSyncWait = datanode ']'
+ '[' nnRpcWait = datanode ']'
+ '[' monitor-upgrade = datanode ']'
+ '[' finalize-upgrade = datanode ']'
+ '[' mkdir = datanode ']'
+ '[' namenode = datanode -o secondarynamenode = datanode -o datanode = datanode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ exec /opt/cloudera/parcels/CDH-4.3.0-1.cdh4.3.0.p0.22/lib/hadoop-hdfs/bin/hdfs --config /var/run/cloudera-scm-agent/process/153-hdfs-DATANODE datanode

stdout:
Sat Jul 20 20:09:03 EDT 2013
using /usr/java/jdk1.6.0_31 as JAVA_HOME
using 4 as CDH_VERSION
using /var/run/cloudera-scm-agent/process/153-hdfs-DATANODE as CONF_DIR
using  as SECURE_USER
using  as SECURE_GROUP
Sat Jul 20 20:09:07 EDT 2013
using /usr/java/jdk1.6.0_31 as JAVA_HOME
using 4 as CDH_VERSION
using /var/run/cloudera-scm-agent/process/153-hdfs-DATANODE as CONF_DIR
using  as SECURE_USER
using  as SECURE_GROUP
Sat Jul 20 20:09:14 EDT 2013
using /usr/java/jdk1.6.0_31 as JAVA_HOME
using 4 as CDH_VERSION
using /var/run/cloudera-scm-agent/process/153-hdfs-DATANODE as CONF_DIR
using  as SECURE_USER
using  as SECURE_GROUP
Sat Jul 20 20:09:21 EDT 2013
using /usr/java/jdk1.6.0_31 as JAVA_HOME
using 4 as CDH_VERSION
using /var/run/cloudera-scm-agent/process/153-hdfs-DATANODE as CONF_DIR
using  as SECURE_USER
using  as SECURE_GROUP

On Friday, July 19, 2013 7:23:55 PM UTC-7, GIS Song wrote:

Hi,
I created an EC2 instance (Ubuntu 12.04 AMI) on Amazon and installed
CDH4. I have now added this node to an existing cluster, but I failed to
start the HDFS service on this new DataNode through the Cloudera Manager
WebUI. Port 50010 was not in use on this instance.

I also tried running the command as both the 'ubuntu' user and the 'root' user:
$ service hadoop-hdfs-datanode start
hadoop-hdfs-datanode: unrecognized service
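(A sketch of why that command fails, assuming a parcel-based install: under Cloudera Manager, daemons are launched by the CM agent as supervised processes, so there is no System V init script to invoke. The check below only inspects the filesystem; paths are the standard CM agent ones.)

```shell
# On a parcel-based CM install there is no /etc/init.d/hadoop-hdfs-datanode,
# which is exactly why "service hadoop-hdfs-datanode start" reports
# "unrecognized service". Confirm which kind of install this node has:
if [ -e /etc/init.d/hadoop-hdfs-datanode ]; then
    echo "package-based install: init script present"
else
    echo "no init script: processes are managed by the CM agent"
fi
# Role logs for agent-supervised processes live under the agent's
# process directory (one subdirectory per process attempt):
ls /var/run/cloudera-scm-agent/process/ 2>/dev/null || true
```

On such a node the role must be started from the CM WebUI (or CM API), not from the shell.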


------------------------ CM WebUI Error ------------------------
Command 'Start' failed for service 'hdfs1'

2013-07-20 02:08:31,115 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.net.BindException: Problem binding to [ec2-XXX-XXX-XXX-XXX.amazonaws.com:50010] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
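(A diagnostic sketch for this class of error, not a confirmed root cause: "Cannot assign requested address" at bind time usually means the hostname the daemon resolved maps to an IP that no local interface holds. On EC2 this is classically the public DNS name resolving to the public IP, while the instance only has the private address on its interfaces. The commands below just compare what the hostname resolves to against the locally assigned addresses.)

```shell
# What name will the daemon resolve, and to which IP?
hostname -f || hostname
getent hosts "$(hostname -f 2>/dev/null || hostname)" || true
# Which IPs are actually assigned to local interfaces?
ip -4 addr show | grep inet || true
# The bind can only succeed if the resolved IP appears in the local list
# (or the daemon is configured to bind the wildcard address 0.0.0.0).
```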

root@/home/ubuntu# netstat -a -t --numeric-ports -p
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State        PID/Program name
tcp        0      0 172.31.29.149:9000      0.0.0.0:*               LISTEN       832/python
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN       852/python
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN       649/sshd
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN       785/postgres
tcp        0    288 172.31.29.149:22        128.111.106.224:54639   ESTABLISHED  1663/sshd: ubuntu [
tcp        0      0 127.0.0.1:9001          127.0.0.1:48293         ESTABLISHED  852/python
tcp        0      0 172.31.29.149:58613     128.111.234.47:7182     ESTABLISHED  832/python
tcp        0      0 127.0.0.1:48293         127.0.0.1:9001          ESTABLISHED  832/python
tcp6       0      0 :::22                   :::*                    LISTEN       649/sshd
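(A common workaround for this class of BindException, offered as a sketch rather than the thread's confirmed fix: bind the DataNode to the wildcard address so the resolved hostname no longer matters. In Cloudera Manager this would go into the DataNode's hdfs-site.xml advanced configuration snippet / safety valve; the property name and its default port below are the standard HDFS ones, but verify them against your CDH version.)

```xml
<!-- Bind the DataNode transfer port to all interfaces instead of a
     resolved hostname; 50010 is the CDH4-era default shown in the log. -->
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50010</value>
</property>
```

The alternative fix is to make the node's hostname resolve to its private IP (e.g. via /etc/hosts), so the default bind succeeds without configuration changes.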


Thanks,
Song

Discussion Overview
group: cm-users
category: hadoop
posted: Jul 20, '13 at 2:23a
active: Jul 21, '13 at 12:13a
posts: 2
users: 1
website: cloudera.com
irc: #hadoop

1 user in discussion
GIS Song: 2 posts
