Hi all,

Using Hadoop 0.20.1, I am seeing the following on my NameNode startup.

2009-10-14 11:09:54,232 INFO org.apache.hadoop.ipc.Server: Error
register getProtocolVersion
java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion

Could someone please point me in the right direction for diagnosing
where I have gone wrong?

Thanks
Tim



/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = cluma.gbif.org/192.38.28.77
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.1
STARTUP_MSG: build =
http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1
-r 810220; compiled by 'oom' on Tue Sep 1 20:55:56 UTC 2009
************************************************************/
2009-10-14 11:09:53,010 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
Initializing RPC Metrics with hostName=NameNode, port=8020
2009-10-14 11:09:53,013 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
cluma.gbif.clu/192.168.76.254:8020
2009-10-14 11:09:53,015 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=NameNode, sessionId=null
2009-10-14 11:09:53,019 INFO
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
Initializing NameNodeMeterics using context
object:org.apache.hadoop.metrics.spi.NullContext
2009-10-14 11:09:53,056 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
2009-10-14 11:09:53,057 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
supergroup=supergroup
2009-10-14 11:09:53,057 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true
2009-10-14 11:09:53,064 INFO
org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
Initializing FSNamesystemMetrics using context
object:org.apache.hadoop.metrics.spi.NullContext
2009-10-14 11:09:53,065 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStatusMBean
2009-10-14 11:09:53,090 INFO
org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2009-10-14 11:09:53,094 INFO
org.apache.hadoop.hdfs.server.common.Storage: Number of files under
construction = 0
2009-10-14 11:09:53,094 INFO
org.apache.hadoop.hdfs.server.common.Storage: Image file of size 94
loaded in 0 seconds.
2009-10-14 11:09:53,094 INFO
org.apache.hadoop.hdfs.server.common.Storage: Edits file
/hadoop/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2009-10-14 11:09:53,098 INFO
org.apache.hadoop.hdfs.server.common.Storage: Image file of size 94
saved in 0 seconds.
2009-10-14 11:09:53,113 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
FSImage in 72 msecs
2009-10-14 11:09:53,114 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of
blocks = 0
2009-10-14 11:09:53,114 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
blocks = 0
2009-10-14 11:09:53,114 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
under-replicated blocks = 0
2009-10-14 11:09:53,114 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
over-replicated blocks = 0
2009-10-14 11:09:53,114 INFO org.apache.hadoop.hdfs.StateChange:
STATE* Leaving safe mode after 0 secs.
2009-10-14 11:09:53,114 INFO org.apache.hadoop.hdfs.StateChange:
STATE* Network topology has 0 racks and 0 datanodes
2009-10-14 11:09:53,114 INFO org.apache.hadoop.hdfs.StateChange:
STATE* UnderReplicatedBlocks has 0 blocks
2009-10-14 11:09:53,198 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2009-10-14 11:09:53,242 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open()
is -1. Opening the listener on 50070
2009-10-14 11:09:53,243 INFO org.apache.hadoop.http.HttpServer:
listener.getLocalPort() returned 50070
webServer.getConnectors()[0].getLocalPort() returned 50070
2009-10-14 11:09:53,243 INFO org.apache.hadoop.http.HttpServer: Jetty
bound to port 50070
2009-10-14 11:09:53,243 INFO org.mortbay.log: jetty-6.1.14
2009-10-14 11:09:53,433 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50070
2009-10-14 11:09:53,434 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
0.0.0.0:50070
2009-10-14 11:09:53,434 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2009-10-14 11:09:53,435 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 8020: starting
2009-10-14 11:09:53,438 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 0 on 8020: starting
2009-10-14 11:09:53,438 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 8020: starting
2009-10-14 11:09:53,438 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 8020: starting
2009-10-14 11:09:53,438 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 3 on 8020: starting
2009-10-14 11:09:53,438 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 4 on 8020: starting
2009-10-14 11:09:53,439 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 6 on 8020: starting
2009-10-14 11:09:53,439 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 7 on 8020: starting
2009-10-14 11:09:53,439 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 8 on 8020: starting
2009-10-14 11:09:53,441 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 9 on 8020: starting
2009-10-14 11:09:53,444 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 5 on 8020: starting
2009-10-14 11:09:54,232 INFO org.apache.hadoop.ipc.Server: Error
register getProtocolVersion
java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion
at org.apache.hadoop.metrics.util.MetricsRegistry.add(MetricsRegistry.java:53)
at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.&lt;init&gt;(MetricsTimeVaryingRate.java:89)
at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.&lt;init&gt;(MetricsTimeVaryingRate.java:99)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
2009-10-14 11:09:54,667 INFO org.apache.hadoop.hdfs.StateChange:
BLOCK* NameSystem.registerDatanode: node registration from
192.168.76.3:50010 storage
DS-551038728-192.168.76.3-50010-1255511394664
2009-10-14 11:09:54,668 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/192.168.76.3:50010
2009-10-14 11:09:54,673 INFO org.apache.hadoop.hdfs.StateChange:
BLOCK* NameSystem.registerDatanode: node registration from
192.168.76.2:50010 storage
DS-681115827-192.168.76.2-50010-1255511394673
2009-10-14 11:09:54,674 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/192.168.76.2:50010
2009-10-14 11:09:54,689 INFO org.apache.hadoop.hdfs.StateChange:
BLOCK* NameSystem.registerDatanode: node registration from
192.168.76.7:50010 storage
DS-1731752631-192.168.76.7-50010-1255511394688
2009-10-14 11:09:54,689 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/192.168.76.7:50010
2009-10-14 11:09:54,701 INFO org.apache.hadoop.hdfs.StateChange:
BLOCK* NameSystem.registerDatanode: node registration from
192.168.76.6:50010 storage
DS-338163265-192.168.76.6-50010-1255511394698
2009-10-14 11:09:54,701 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/192.168.76.6:50010
2009-10-14 11:09:59,688 INFO org.apache.hadoop.hdfs.StateChange:
BLOCK* NameSystem.registerDatanode: node registration from
192.168.76.1:50010 storage
DS-988595128-192.168.76.1-50010-1255511399688
2009-10-14 11:09:59,689 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/192.168.76.1:50010
2009-10-14 11:10:04,277 INFO org.apache.hadoop.hdfs.StateChange:
BLOCK* NameSystem.registerDatanode: node registration from
192.168.76.9:50010 storage
DS-1096719686-192.168.76.9-50010-1255511404276
2009-10-14 11:10:04,278 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/192.168.76.9:50010
2009-10-14 11:10:04,289 INFO org.apache.hadoop.hdfs.StateChange:
BLOCK* NameSystem.registerDatanode: node registration from
192.168.76.5:50010 storage
DS-1405051412-192.168.76.5-50010-1255511404288
2009-10-14 11:10:04,289 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/192.168.76.5:50010
2009-10-14 11:10:05,961 INFO org.apache.hadoop.hdfs.StateChange:
BLOCK* NameSystem.registerDatanode: node registration from
192.168.76.8:50010 storage
DS-1138617755-192.168.76.8-50010-1255511405960
2009-10-14 11:10:05,961 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/192.168.76.8:50010
2009-10-14 11:10:06,154 INFO org.apache.hadoop.hdfs.StateChange:
BLOCK* NameSystem.registerDatanode: node registration from
192.168.76.4:50010 storage
DS-569548170-192.168.76.4-50010-1255511406153
2009-10-14 11:10:06,154 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/192.168.76.4:50010
2009-10-14 11:10:14,480 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 0
time(s).
2009-10-14 11:10:35,483 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 1
time(s).
2009-10-14 11:10:56,486 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 2
time(s).
2009-10-14 11:11:17,489 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 3
time(s).
2009-10-14 11:11:38,492 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 4
time(s).
2009-10-14 11:11:59,495 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 5
time(s).
2009-10-14 11:12:20,498 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 6
time(s).
2009-10-14 11:12:41,501 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 7
time(s).
2009-10-14 11:13:02,504 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 8
time(s).
2009-10-14 11:13:23,506 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 9
time(s).
etc.
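For context, the stack trace above shows MetricsRegistry.add rejecting a metric that is registered a second time under the same name. A minimal sketch of that mechanism (a hypothetical class written for illustration, not Hadoop's actual implementation) looks like this:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the duplicate-name check behind the log error;
// class and field names here are illustrative, not Hadoop's own code.
public class MetricsRegistrySketch {
    private final Map<String, Object> metrics = new HashMap<String, Object>();

    public synchronized void add(String metricsName, Object metric) {
        if (metrics.containsKey(metricsName)) {
            // Same message as the exception in the NameNode log above
            throw new IllegalArgumentException("Duplicate metricsName:" + metricsName);
        }
        metrics.put(metricsName, metric);
    }

    public static void main(String[] args) {
        MetricsRegistrySketch registry = new MetricsRegistrySketch();
        // First registration of the RPC method's metric succeeds
        registry.add("getProtocolVersion", new Object());
        try {
            // A second registration under the same name triggers the error
            registry.add("getProtocolVersion", new Object());
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Running the sketch prints `Duplicate metricsName:getProtocolVersion`, matching the log line above.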


  • Brian Bockelman at Oct 14, 2009 at 1:12 pm
    Hey Tim,

    Can you see if that goes away if you use the null metrics context? The
    error appears to be coming from the metrics - have you touched them?

    Brian
    On Oct 14, 2009, at 4:33 AM, tim robertson wrote:
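Brian's suggestion amounts to pointing each metrics context class at NullContext in conf/hadoop-metrics.properties. A minimal sketch (key names follow the stock file shipped with 0.20) would be:

```properties
# Route metrics from each context nowhere (the default shipped configuration)
dfs.class=org.apache.hadoop.metrics.spi.NullContext
mapred.class=org.apache.hadoop.metrics.spi.NullContext
jvm.class=org.apache.hadoop.metrics.spi.NullContext
```

Tim's full properties file in the reply below is the same configuration with the file and Ganglia alternatives left commented out.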
  • Tim robertson at Oct 14, 2009 at 2:38 pm
    Hi Brian,

    Thanks for replying. I don't even know what the metrics really are,
    so I have never knowingly touched them. I have only done the basic core
    and site configuration, along with the masters/slaves lists, then
    formatted the namenode and started the cluster, which produced this
    error. All based on the current 0.20.1 download from Apache.

    Looking at the properties file, I think I am using the NullContext -
    could you please verify this is correct?

    Many thanks for your help!
    Tim


    # Configuration of the "dfs" context for null
    dfs.class=org.apache.hadoop.metrics.spi.NullContext

    # Configuration of the "dfs" context for file
    #dfs.class=org.apache.hadoop.metrics.file.FileContext
    #dfs.period=10
    #dfs.fileName=/tmp/dfsmetrics.log

    # Configuration of the "dfs" context for ganglia
    # dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    # dfs.period=10
    # dfs.servers=localhost:8649


    # Configuration of the "mapred" context for null
    mapred.class=org.apache.hadoop.metrics.spi.NullContext

    # Configuration of the "mapred" context for file
    #mapred.class=org.apache.hadoop.metrics.file.FileContext
    #mapred.period=10
    #mapred.fileName=/tmp/mrmetrics.log

    # Configuration of the "mapred" context for ganglia
    # mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    # mapred.period=10
    # mapred.servers=localhost:8649


    # Configuration of the "jvm" context for null
    jvm.class=org.apache.hadoop.metrics.spi.NullContext

    # Configuration of the "jvm" context for file
    #jvm.class=org.apache.hadoop.metrics.file.FileContext
    #jvm.period=10
    #jvm.fileName=/tmp/jvmmetrics.log

    # Configuration of the "jvm" context for ganglia
    # jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    # jvm.period=10
    # jvm.servers=localhost:8649



    On Wed, Oct 14, 2009 at 3:12 PM, Brian Bockelman wrote:
    Adding a new node: /default-rack/192.168.76.8:50010
    2009-10-14 11:10:06,154 INFO org.apache.hadoop.hdfs.StateChange:
    BLOCK* NameSystem.registerDatanode: node registration from
    192.168.76.4:50010 storage
    DS-569548170-192.168.76.4-50010-1255511406153
    2009-10-14 11:10:06,154 INFO org.apache.hadoop.net.NetworkTopology:
    Adding a new node: /default-rack/192.168.76.4:50010
    2009-10-14 11:10:14,480 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 0
    time(s).
    2009-10-14 11:10:35,483 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 1
    time(s).
    2009-10-14 11:10:56,486 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 2
    time(s).
    2009-10-14 11:11:17,489 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 3
    time(s).
    2009-10-14 11:11:38,492 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 4
    time(s).
    2009-10-14 11:11:59,495 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 5
    time(s).
    2009-10-14 11:12:20,498 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 6
    time(s).
    2009-10-14 11:12:41,501 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 7
    time(s).
    2009-10-14 11:13:02,504 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 8
    time(s).
    2009-10-14 11:13:23,506 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 9
    time(s).
    etc.
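For anyone puzzled by the exception itself: the stack trace suggests that in 0.20.1 the RPC server registers a per-method rate metric on the fly (RPC$Server.call at RPC.java:523), and MetricsRegistry.add rejects a second registration under a name already in use, so two registrations of getProtocolVersion collide. A toy sketch of that failure mode (illustrative only, NOT Hadoop's real MetricsRegistry):

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a metrics registry (NOT Hadoop's actual MetricsRegistry):
// it rejects a second metric registered under an existing name, which is
// exactly the IllegalArgumentException the namenode log shows.
public class ToyMetricsRegistry {
    private final Map<String, Object> metrics = new HashMap<String, Object>();

    public synchronized void add(String name, Object metric) {
        if (metrics.containsKey(name)) {
            throw new IllegalArgumentException("Duplicate metricsName:" + name);
        }
        metrics.put(name, metric);
    }

    public static void main(String[] args) {
        ToyMetricsRegistry registry = new ToyMetricsRegistry();
        // first RPC call registers the per-method metric
        registry.add("getProtocolVersion", new Object());
        try {
            // a second registration under the same name reproduces the log message
            registry.add("getProtocolVersion", new Object());
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Duplicate metricsName:getProtocolVersion
        }
    }
}
```

Since the server logs the exception and keeps serving, this matches the later observation in the thread that the error is noisy but not fatal.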
  • Brian Bockelman at Oct 14, 2009 at 4:24 pm
    Hey Tim,

    That was my best guess; it looks like you've got the right metrics config.

    Anyone else have an idea?

    Brian
    On Oct 14, 2009, at 9:37 AM, tim robertson wrote:

    Hi Brian,

    Thanks for replying. I don't really know what the metrics are, so I
    have never knowingly touched them. I have only done the basic core and
    site configuration along with the masters/slaves lists, then formatted
    the namenode and started the cluster, at which point I hit this error.
    All based on the current 0.20.1 download from Apache.

    Looking at the properties file, I think I am already using the
    NullContext; could you please verify this is correct?

    Many thanks for your help!
    Tim


    # Configuration of the "dfs" context for null
    dfs.class=org.apache.hadoop.metrics.spi.NullContext

    # Configuration of the "dfs" context for file
    #dfs.class=org.apache.hadoop.metrics.file.FileContext
    #dfs.period=10
    #dfs.fileName=/tmp/dfsmetrics.log

    # Configuration of the "dfs" context for ganglia
    # dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    # dfs.period=10
    # dfs.servers=localhost:8649


    # Configuration of the "mapred" context for null
    mapred.class=org.apache.hadoop.metrics.spi.NullContext

    # Configuration of the "mapred" context for file
    #mapred.class=org.apache.hadoop.metrics.file.FileContext
    #mapred.period=10
    #mapred.fileName=/tmp/mrmetrics.log

    # Configuration of the "mapred" context for ganglia
    # mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    # mapred.period=10
    # mapred.servers=localhost:8649


    # Configuration of the "jvm" context for null
    jvm.class=org.apache.hadoop.metrics.spi.NullContext

    # Configuration of the "jvm" context for file
    #jvm.class=org.apache.hadoop.metrics.file.FileContext
    #jvm.period=10
    #jvm.fileName=/tmp/jvmmetrics.log

    # Configuration of the "jvm" context for ganglia
    # jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    # jvm.period=10
    # jvm.servers=localhost:8649




    On Wed, Oct 14, 2009 at 3:12 PM, Brian Bockelman
    wrote:
    Hey Tim,

    Can you see if the error goes away if you use the null metrics
    context? The error appears to be coming from metrics; have you touched
    them?

    Brian
    On Oct 14, 2009, at 4:33 AM, tim robertson wrote:

    Hi all,

    Using hadoop 0.20.1 I am seeing the following on my namenode
    startup.

    2009-10-14 11:09:54,232 INFO org.apache.hadoop.ipc.Server: Error
    register getProtocolVersion
    java.lang.IllegalArgumentException: Duplicate
    metricsName:getProtocolVersion

    Could someone please point me in the right direction for diagnosing
    where I have gone wrong?

    Thanks
    Tim



  • Todd Lipcon at Oct 14, 2009 at 5:23 pm
    Hi Tim,

    I have also seen this error, but it's not fatal.

    Is this log from just the NameNode or did you tail multiple logs? It
    seems odd that your namenode would be trying to make an IPC client to
    itself (port 8020).

    After you see these messages, does your namenode shut down? Does jps
    show it running? Is the web interface available at port 50070?

    -Todd


    On Wed, Oct 14, 2009 at 2:33 AM, tim robertson
    wrote:
    Hi all,

    Using hadoop 0.20.1 I am seeing the following on my namenode startup.

    2009-10-14 11:09:54,232 INFO org.apache.hadoop.ipc.Server: Error
    register getProtocolVersion
    java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion

    Could someone please point me in the right direction for diagnosing
    where I have gone wrong?

    Thanks
    Tim



  • Tim robertson at Oct 15, 2009 at 7:12 am
    Thanks for the info Todd,
    I have also seen this error, but it's not fatal.
    Ok, I'll ignore it for the time being.
    Is this log from just the NameNode or did you tail multiple logs? It
    seems odd that your namenode would be trying to make an IPC client to
    itself (port 8020).
    Just the name node.
    After you see these messages, does your namenode shut down?
    It kind of just hangs really (Already tried 9 times, Already tried 10 times etc)
    Does jps show it running? Is the web interface available at port 50070?
    No, it never is available.

    I'll keep hunting for some config error.

    Cheers
    Tim
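Todd's last two questions (is the RPC server reachable, is the web UI up) can be answered quickly with a plain TCP connect probe; a minimal sketch, with the host and ports taken from the log above (adjust for your cluster):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal TCP reachability probe for the namenode ports seen in the log:
// 8020 (IPC/RPC) and 50070 (web UI). A result of 'false' means nothing
// accepted the connection within the timeout (refused, filtered, or the
// host name did not resolve).
public class PortProbe {
    public static boolean canConnect(String host, int port, int timeoutMs) {
        Socket socket = new Socket();
        try {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        } finally {
            try { socket.close(); } catch (IOException ignored) { }
        }
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "cluma.gbif.clu";
        System.out.println("rpc  8020 reachable: " + canConnect(host, 8020, 2000));
        System.out.println("web 50070 reachable: " + canConnect(host, 50070, 2000));
    }
}
```

If jps still shows the NameNode process but 50070 never answers, sending the JVM a `kill -QUIT <pid>` should print a thread dump to its stdout log, which usually shows what startup is blocked on.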

    -Todd


  • Todd Lipcon at Oct 15, 2009 at 8:23 am
    Hi Tim,

    jps should show it running if it's "hung".

    In that case, see if you can get a stack trace using "jstack <pid>".
    Paste that trace and we may be able to figure out where it's hung up.
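    As a minimal sketch of that workflow (the pid, process listing, and output
    filename below are made up for illustration; on the live master you would
    pipe real `jps` output instead of the sample text):

    ```shell
    # Sample of what `jps` prints on a NameNode host; substitute the real
    # command output on your cluster.
    sample_jps='12345 NameNode
    23456 DataNode
    34567 Jps'

    # Pull out the pid of the NameNode process from the listing.
    pid=$(printf '%s\n' "$sample_jps" | awk '$2 == "NameNode" {print $1}')

    # Build the jstack command to dump its thread stacks to a file
    # (run it as the same user that owns the daemon).
    echo "jstack $pid > namenode-stack.txt"
    ```
    
    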

    -Todd

  • Eason.Lee at Oct 15, 2009 at 9:07 am
    2009-10-14 11:09:54,232 INFO org.apache.hadoop.ipc.Server: Error
    register getProtocolVersion
    java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion
    at org.apache.hadoop.metrics.util.MetricsRegistry.add(MetricsRegistry.java:53)
    at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:89)
    at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:99)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    I got this error when I started my MapReduce job,
    but everything seems OK
    and the job still runs to completion.
    Any suggestions for dealing with this error?

    2009-10-14 11:10:04,289 INFO org.apache.hadoop.net.NetworkTopology:
    Adding a new node: /default-rack/192.168.76.5:50010
    2009-10-14 11:10:05,961 INFO org.apache.hadoop.hdfs.StateChange:
    BLOCK* NameSystem.registerDatanode: node registration from
    192.168.76.8:50010 storage
    DS-1138617755-192.168.76.8-50010-1255511405960
    2009-10-14 11:10:05,961 INFO org.apache.hadoop.net.NetworkTopology:
    Adding a new node: /default-rack/192.168.76.8:50010
    2009-10-14 11:10:06,154 INFO org.apache.hadoop.hdfs.StateChange:
    BLOCK* NameSystem.registerDatanode: node registration from
    192.168.76.4:50010 storage
    DS-569548170-192.168.76.4-50010-1255511406153
    2009-10-14 11:10:06,154 INFO org.apache.hadoop.net.NetworkTopology:
    Adding a new node: /default-rack/192.168.76.4:50010
    2009-10-14 11:10:14,480 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 0
    time(s).
    2009-10-14 11:10:35,483 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 1
    time(s).
    2009-10-14 11:10:56,486 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 2
    time(s).
    2009-10-14 11:11:17,489 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 3
    time(s).
    2009-10-14 11:11:38,492 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 4
    time(s).
    2009-10-14 11:11:59,495 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 5
    time(s).
    2009-10-14 11:12:20,498 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 6
    time(s).
    2009-10-14 11:12:41,501 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 7
    time(s).
    2009-10-14 11:13:02,504 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 8
    time(s).
    2009-10-14 11:13:23,506 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 9
    time(s).
    etc.
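    [Editor's note: the "Retrying connect" lines above come from Hadoop's IPC client, which retries a failed connection a fixed number of times (10 by default in 0.20.x, controlled by ipc.client.connect.max.retries) with a pause between attempts. A minimal sketch of that retry pattern follows; it is an illustration with invented names, not Hadoop's actual code.]

    ```python
    import time

    def connect_with_retry(connect, max_retries=10, delay_secs=1.0, log=print):
        """Call connect() until it succeeds, retrying up to max_retries times.

        Mirrors the log pattern above: each failure reports how many times
        we have already tried, then sleeps and tries again.
        """
        tried = 0
        while True:
            try:
                return connect()
            except ConnectionError:
                if tried >= max_retries:
                    raise  # out of retries: surface the failure to the caller
                log("Retrying connect to server. Already tried %d time(s)." % tried)
                tried += 1
                time.sleep(delay_secs)
    ```

    With a server that never answers (as in the log above), this prints "Already tried 0 time(s)", "Already tried 1 time(s)", and so on until the retry budget is exhausted.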
  • Tim robertson at Oct 15, 2009 at 1:16 pm
    Hi Todd, Brian

    Thanks for your comments.
    I got to the bottom of the problem. A netstat showed me that all the
    DNs could connect to the NN, but the firewall was stopping the NN
    from completing the HTTP handshake to itself.

    It's all up and running now.

    Thanks,
    Tim
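    [Editor's note: a quick way to reproduce Tim's netstat finding is to test, from the namenode host itself, whether its own RPC (8020) and web UI (50070) ports accept connections; a local firewall dropping those connections would explain the web interface never coming up. A hedged sketch using plain sockets, not any Hadoop API:]

    ```python
    import socket

    def port_reachable(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Example: run on the namenode host itself; ports taken from this thread.
    # for port in (8020, 50070):
    #     print(port, port_reachable("127.0.0.1", port))
    ```

    If this reports False for a port the daemon is listening on, a host firewall rule is the likely culprit, as it was here.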

    On Thu, Oct 15, 2009 at 10:23 AM, Todd Lipcon wrote:
    Hi Tim,

    jps should show it running if it's "hung".

    In that case, see if you can get a stack trace using "jstack <pid>".
    Paste that trace and we may be able to figure out where it's hung up.

    -Todd

    On Thu, Oct 15, 2009 at 12:12 AM, tim robertson
    wrote:
    Thanks for the info Todd,
    > I have also seen this error, but it's not fatal.
    Ok, I'll ignore it for the time being.
    > Is this log from just the NameNode or did you tail multiple logs? It
    > seems odd that your namenode would be trying to make an IPC client to
    > itself (port 8020).
    Just the name node.
    > After you see these messages, does your namenode shut down?
    It kind of just hangs really (Already tried 9 times, Already tried 10 times etc)
    > Does jps show it running? Is the web interface available at port 50070?
    No, it never is available.

    I'll keep hunting for some config error.

    Cheers
    Tim

    -Todd


    On Wed, Oct 14, 2009 at 2:33 AM, tim robertson
    wrote:
    Hi all,

    Using hadoop 0.20.1 I am seeing the following on my namenode startup.

    2009-10-14 11:09:54,232 INFO org.apache.hadoop.ipc.Server: Error
    register getProtocolVersion
    java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion

    Could someone please point me in the right direction for diagnosing
    where I have gone wrong?

    Thanks
    Tim



    <snip: full namenode startup log, identical to the log quoted earlier in this thread>
  • Yibo820217 at Oct 23, 2009 at 2:56 am
    hi,
    I configured HDFS (conf/masters, slaves, core-site.xml, hdfs-site.xml,
    mapred-site.xml, and hadoop-env.sh), then started it using
    bin/hadoop start-all.sh, but the namenode log shows this:

    ....
    2009-10-23 10:10:31,071 INFO org.mortbay.log: Started
    SelectChannelConnector@0.0.0.0:50070
    2009-10-23 10:10:31,071 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
    0.0.0.0:50070
    2009-10-23 10:10:31,072 INFO org.apache.hadoop.ipc.Server: IPC Server
    Responder: starting
    2009-10-23 10:10:31,072 INFO org.apache.hadoop.ipc.Server: IPC Server
    listener on 9000: starting
    2009-10-23 10:10:31,074 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9000: starting
    2009-10-23 10:10:31,074 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 1 on 9000: starting
    2009-10-23 10:10:31,074 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9000: starting
    2009-10-23 10:10:31,074 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 3 on 9000: starting
    2009-10-23 10:10:31,074 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 4 on 9000: starting
    2009-10-23 10:10:31,074 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9000: starting
    2009-10-23 10:10:31,075 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 6 on 9000: starting
    2009-10-23 10:10:31,075 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9000: starting
    2009-10-23 10:10:31,075 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 8 on 9000: starting
    2009-10-23 10:10:31,075 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9000: starting
    2009-10-23 10:10:31,111 INFO org.apache.hadoop.ipc.Server: Error register getProtocolVersion
    java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion
            at org.apache.hadoop.metrics.util.MetricsRegistry.add(MetricsRegistry.java:53)
            at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:89)
            at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:99)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:396)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    ....

    and the namenode, datanode, and secondary namenode all appear to be running normally.


    I don't know the reason, can anybody tell me?
    Thanks!
    --
    View this message in context: http://www.nabble.com/Error-register-getProtocolVersion-tp26020226p26020226.html
    Sent from the Hadoop core-user mailing list archive at Nabble.com.
  • Brian Bockelman at Oct 23, 2009 at 12:03 pm
    Hey,

    Search the archives. I think someone else (Tim Robertson maybe?) ran
    into this.

    IIRC, it was a harmless message and the true error was elsewhere.

    Brian
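    [Editor's note: for context, the message itself comes from Hadoop's metrics registry refusing to register a second metric under a name that is already taken ("getProtocolVersion"), which is why it is noisy but harmless here. A minimal illustration of that registry pattern (a toy with invented names, not Hadoop's MetricsRegistry):]

    ```python
    class ToyMetricsRegistry:
        """Toy registry that, like the one in the trace above, rejects duplicates."""

        def __init__(self):
            self._metrics = {}

        def add(self, name, metric):
            # Registering the same metric name twice raises, producing a
            # message shaped like the one in the namenode log.
            if name in self._metrics:
                raise ValueError("Duplicate metricsName:" + name)
            self._metrics[name] = metric

    registry = ToyMetricsRegistry()
    registry.add("getProtocolVersion", object())   # first registration: fine
    # registry.add("getProtocolVersion", object()) # a second one would raise ValueError
    ```

    The point, as Brian says, is that the exception reflects a double registration inside the server, not a problem with the cluster itself.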
    On Oct 22, 2009, at 9:55 PM, yibo820217 wrote:


    <snip: yibo820217's message, quoted in full above>

Discussion Overview
group: common-user
categories: hadoop
posted: Oct 14, '09 at 9:33a
active: Oct 23, '09 at 12:03p
posts: 11
users: 5
website: hadoop.apache.org...
irc: #hadoop
