Hi Phil, following up: here is the fatal error I found in the logs:

October 18 2012 6:58 AM FATAL org.apache.hadoop.hdfs.server.datanode.DataNode

Initialization failed for block pool Block pool
BP-478258594-127.0.1.1-1350497088276 (storage id
DS-572390315-127.0.1.1-50010-1350497097205) service to
shivu-desktop/127.0.1.1:8020
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
Datanode denied communication with namenode:
DatanodeRegistration(127.0.0.1,
storageID=DS-572390315-127.0.1.1-50010-1350497097205, infoPort=50075,
ipcPort=50020, storageInfo=lv=-40;cid=cluster4;nsid=1863494069;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:566)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3358)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:854)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:91)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:20018)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)

at org.apache.hadoop.ipc.Client.call(Client.java:1160)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy10.registerDatanode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy10.registerDatanode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:149)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:619)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:221)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:661)
at java.lang.Thread.run(Thread.java:662)
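
For context, DisallowedDatanodeException is raised during registration when
the DataNode fails the NameNode's include/exclude host check (dfs.hosts /
dfs.hosts.exclude). A commonly reported cause of this exact error on
single-node Ubuntu installs is the 127.0.1.1 hostname alias that Ubuntu puts
in /etc/hosts, which is visible throughout the trace above: the DataNode ends
up identifying itself by a loopback address the NameNode will not accept. A
quick check (a sketch; the 192.168.1.10 address below is a placeholder for
the host's real IP, not something from this thread):

  # How does the hostname resolve right now? On stock Ubuntu this
  # often prints the 127.0.1.1 alias seen in the log above.
  getent hosts shivu-desktop
  grep -n '127.0.1.1' /etc/hosts

  # One common fix: map the hostname to the machine's real address
  # in /etc/hosts instead of the loopback alias, e.g.
  #   192.168.1.10   shivu-desktop   # placeholder IP
  # then restart the DataNode so it re-registers with the NameNode.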

On Thu, Oct 18, 2012 at 4:07 AM, Philip Zeyliger wrote:

Look up the logs of the data node (accessible via the UI, or via
/var/log/hadoop/*DATANODE*). Most likely, one of the ports the datanode
uses is taken up by something else.

-- Philip
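
To test Philip's port theory on the DataNode host: the DataNode's default
ports are 50010 (data transfer), 50075 (HTTP) and 50020 (IPC), all visible in
the log above. A minimal check (a sketch):

  # List TCP listeners on the DataNode ports; a process other than
  # the DataNode's own java process would indicate a conflict.
  sudo netstat -tlnp | egrep ':(50010|50075|50020)'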

On Wed, Oct 17, 2012 at 7:45 AM, shivayogi kumbar wrote:

I have configured Cloudera Manager, and CDH 4.0.4 is installed.

The DataNode is not running; I am seeing the following exception:

Supervisor returned FATAL: + '[' /usr/share/cmf ']'
++ find /usr/share/cmf/lib/plugins -name 'event-publish-*.jar'
++ tr -d '\n'
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-4.0.4-shaded.jar
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE=
+ '[' -z ']'
+ export HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-4.0.4-shaded.jar
+ HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-4.0.4-shaded.jar
+ set -x
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/93-hdfs-DATANODE#g' /run/cloudera-scm-agent/process/93-hdfs-DATANODE/core-site.xml /run/cloudera-scm-agent/process/93-hdfs-DATANODE/hdfs-site.xml
+ '[' -e /run/cloudera-scm-agent/process/93-hdfs-DATANODE/topology.py ']'
++ find /run/cloudera-scm-agent/process/93-hdfs-DATANODE -maxdepth 1 -name '*.py'
+ OUTPUT=
+ '[' '' '!=' '' ']'
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ export HADOOP_IDENT_STRING=hdfs
+ HADOOP_IDENT_STRING=hdfs
+ '[' -n '' ']'
+ acquire_kerberos_tgt hdfs.keytab
+ '[' -z hdfs.keytab ']'
+ '[' -n '' ']'
+ '[' validate-writable-empty-dirs = datanode ']'
+ '[' file-operation = datanode ']'
+ '[' bootstrap = datanode ']'
+ '[' failover = datanode ']'
+ '[' transition-to-active = datanode ']'
+ '[' initializeSharedEdits = datanode ']'
+ '[' initialize-znode = datanode ']'
+ '[' format-namenode = datanode ']'
+ '[' monitor-decommission = datanode ']'
+ '[' monitor-upgrade = datanode ']'
+ '[' finalize-upgrade = datanode ']'
+ '[' mkdir = datanode ']'
+ '[' namenode = datanode -o secondarynamenode = datanode -o datanode = datanode ']'
+ HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
+ exec /usr/lib/hadoop-hdfs/bin/hdfs --config /run/cloudera-scm-agent/process/93-hdfs-DATANODE datanode
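
Note that the supervisor trace above runs all the way to the final exec, so
the wrapper script launched the DataNode successfully; the FATAL error itself
lands in the DataNode's own log (the DisallowedDatanodeException shown at the
top of this thread). Per Philip's pointer, something like this should surface
it (exact file names vary by install):

  # Show the tail of the DataNode role log, where the FATAL line appears.
  tail -n 100 /var/log/hadoop/*DATANODE*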

Please help me resolve this.

--
regards
Shiv
