Data node is taking time to start: "Error register getProtocolVersion" in namenode
Hello Hadoop users!

I am doing a simple Hadoop single-node installation, but my datanode is
taking a long time to start.

When I go through the namenode logs, I see a strange exception.

2011-06-02 03:59:59,959 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ub4/162.192.100.44
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
911707; compiled by 'chrisdo' on Fri Feb $
************************************************************/
2011-06-02 04:00:00,034 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
Initializing RPC Metrics with hostName=NameNode, port=54310
2011-06-02 04:00:00,038 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: ub4/
162.192.100.44:54310
2011-06-02 04:00:00,039 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=NameNode, sessionId=nu$
2011-06-02 04:00:00,040 INFO
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing
NameNodeMeterics using contex$
2011-06-02 04:00:00,074 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,hadoop
2011-06-02 04:00:00,074 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2011-06-02 04:00:00,074 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true
2011-06-02 04:00:00,084 INFO
org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
Initializing FSNamesystemMetrics using$
2011-06-02 04:00:00,085 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStatusMBean
2011-06-02 04:00:00,109 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files = 1
2011-06-02 04:00:00,114 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files under construction = 0
2011-06-02 04:00:00,114 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 96 loaded in 0 seconds.
2011-06-02 04:00:00,550 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
FSImage in 489 msecs
2011-06-02 04:00:00,552 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
= 0
2011-06-02 04:00:00,552 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
blocks = 0
2011-06-02 04:00:00,552 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
under-replicated blocks = 0
2011-06-02 04:00:00,552 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
over-replicated blocks = 0
2011-06-02 04:00:00,552 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Leaving safe mode after 0 secs.
2011-06-02 04:00:00,553 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Network topology has 0 racks and 0 datanodes
2011-06-02 04:00:00,553 INFO org.apache.hadoop.hdfs.StateChange: STATE*
UnderReplicatedBlocks has 0 blocks
2011-06-02 04:00:01,093 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2011-06-02 04:00:01,137 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before ope$
2011-06-02 04:00:01,138 INFO org.apache.hadoop.http.HttpServer:
listener.getLocalPort() returned 50070 webServer.getConnectors()[0].get$
2011-06-02 04:00:01,138 INFO org.apache.hadoop.http.HttpServer: Jetty bound
to port 50070
2011-06-02 04:00:01,138 INFO org.mortbay.log: jetty-6.1.14
2011-06-02 04:00:48,495 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50070
2011-06-02 04:00:48,495 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
0.0.0.0:50070
2011-06-02 04:00:48,501 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2011-06-02 04:00:48,501 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 54310: starting
2011-06-02 04:00:48,501 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 0 on 54310: starting
2011-06-02 04:00:48,501 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 54310: starting
2011-06-02 04:00:48,502 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 54310: starting
2011-06-02 04:00:48,502 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 4 on 54310: starting
2011-06-02 04:00:48,502 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 3 on 54310: starting
2011-06-02 04:00:48,502 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 5 on 54310: starting
2011-06-02 04:00:48,502 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 7 on 54310: starting
2011-06-02 04:00:48,503 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 8 on 54310: starting
2011-06-02 04:00:48,503 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 6 on 54310: starting
2011-06-02 04:00:48,504 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 9 on 54310: starting
2011-06-02 04:00:48,532 INFO org.apache.hadoop.ipc.Server: Error register
getProtocolVersion
java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion
at
org.apache.hadoop.metrics.util.MetricsRegistry.add(MetricsRegistry.java:53)
at
org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.&lt;init&gt;(MetricsTimeVaryingRate.java:99)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
2011-06-02 04:02:14,597 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
NameSystem.registerDatanode: node registration from 162.192.100$
2011-06-02 04:02:14,599 INFO org.apache.hadoop.net.NetworkTopology: Adding a
new node: /default-rack/162.192.100.44:50010
2011-06-02 04:02:44,639 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
ugi=hadoop,hadoop ip=/162.192.100.44 $
2011-06-02 04:02:44,642 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
0 Total time for transactions$
2011-06-02 04:02:44,719 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
ugi=hadoop,hadoop ip=/162.192.100.44 $
2011-06-02 04:02:44,726 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
ugi=hadoop,hadoop ip=/162.192.100.44 $
2011-06-02 04:02:44,776 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
ugi=hadoop,hadoop ip=/162.192.100.44 $
2011-06-02 04:02:44,785 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
ugi=hadoop,hadoop ip=/162.192.100.44 $
2011-06-02 04:02:44,790 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
NameSystem.allocateBlock: /usr/local/hadoop/hadoop-datastore/ha$
2011-06-02 04:02:44,842 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
NameSystem.addStoredBlock: blockMap updated: 162.192.100.44:500$
My configuration files are:

*------- core-site.xml ---------*

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/hadoop-datastore/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose scheme and
    authority determine the FileSystem implementation. The URI's scheme
    determines the config property (fs.SCHEME.impl) naming the FileSystem
    implementation class. The URI's authority is used to determine the host,
    port, etc. for a filesystem.</description>
  </property>
</configuration>
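
For reference, this is how client code picks up fs.default.name. A minimal
sketch using the classic 0.20 API (the class name is my own; assume
core-site.xml is on the classpath):

*------- DefaultFsSketch.java (illustration) ---------*

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// FileSystem.get() reads fs.default.name (hdfs://localhost:54310 above),
// maps the hdfs:// scheme to the FileSystem implementation named by
// fs.hdfs.impl, and connects to the namenode on port 54310.
public class DefaultFsSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration(); // loads core-site.xml
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Default FS:  " + conf.get("fs.default.name"));
        System.out.println("Working dir: " + fs.getWorkingDirectory());
    }
}
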

*------- mapred-site.xml ---------*

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs at. If
    "local", then jobs are run in-process as a single map and reduce task.
    </description>
  </property>
</configuration>
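
As the description says, "local" means in-process execution; anything else is
treated as a JobTracker address. A minimal sketch with the old mapred API
(class name is my own):

*------- JobTrackerSketch.java (illustration) ---------*

import org.apache.hadoop.mapred.JobConf;

// JobConf reads mapred.job.tracker to decide where jobs are submitted;
// the value "local" runs them in-process via the local job runner.
public class JobTrackerSketch {
    public static void main(String[] args) {
        JobConf conf = new JobConf(); // loads mapred-site.xml
        String tracker = conf.get("mapred.job.tracker", "local");
        if ("local".equals(tracker)) {
            System.out.println("Jobs run in-process as a single map and reduce task");
        } else {
            System.out.println("Jobs are submitted to the JobTracker at " + tracker);
        }
    }
}
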

*------- hdfs-site.xml ---------*

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication. The actual number of replications
    can be specified when the file is created. The default is used if
    replication is not specified at create time.</description>
  </property>
</configuration>
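
And to show that dfs.replication is only a default that can be overridden per
file at create time, here is a minimal sketch (the path and class name are
illustrative):

*------- ReplicationSketch.java (illustration) ---------*

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// dfs.replication supplies the default; create() accepts an explicit
// per-file replication factor that overrides it.
public class ReplicationSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/replication-demo.txt"); // hypothetical path
        // create(path, overwrite, bufferSize, replication, blockSize)
        FSDataOutputStream out =
            fs.create(file, true, 4096, (short) 1, 64 * 1024 * 1024L);
        out.writeUTF("stored with replication 1");
        out.close();
        System.out.println("Replication: " + fs.getFileStatus(file).getReplication());
    }
}
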

Although my datanode does come up, it takes a considerable amount of time to
start. Can someone suggest why I am getting this exception, and what the
solution might be?

Thanks.
Praveenesh


  • Praveenesh kumar at Jun 2, 2011 at 8:38 am
    Hey guys! Any suggestions?

    ---------- Forwarded message ----------
    From: praveenesh kumar <praveenesh@gmail.com>
    Date: Wed, Jun 1, 2011 at 2:48 PM
    Subject: Data node is taking time to start.. "Error register
    getProtocolVersion" in namenode..!!
    To: common-user@hadoop.apache.org


    [The forwarded message repeats the original post above in full.]
