Starting namenode
Hi,

When I start the Hadoop daemons with start-dfs.sh, the namenode fails to run;
only the secondarynamenode and the datanode come up.

What is the problem, and how can I get the namenode running?

Thanks a lot for any help :)

khaled


  • Sagar Shukla at Jun 2, 2010 at 12:44 pm
    Hi Khaled,
    Errors are usually written to the logs/hadoop-hadoop-namenode.log file. Please check it for the cause of the namenode startup failure.

    If you face problems debugging, you can post it here and we can help you debug.
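    A quick way to do that check is to pull the first ERROR/FATAL entry out of the log, which is usually the root cause. The sketch below uses a hypothetical sample file under /tmp; on a default tarball install the real file is $HADOOP_HOME/logs/hadoop-<user>-namenode-<hostname>.log.

```shell
# Hypothetical sample log; substitute the real namenode log path.
LOG=/tmp/namenode-sample.log
cat > "$LOG" <<'EOF'
2010-06-02 14:13:20,000 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2010-06-02 14:13:21,079 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use
EOF
# Print only the first ERROR or FATAL line -- typically the root cause.
grep -m 1 -E 'ERROR|FATAL' "$LOG"
```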

    Thanks,
    Sagar


    DISCLAIMER
    ==========
    This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.
  • Khaled BEN BAHRI at Jun 2, 2010 at 1:23 pm
    Hi,
    When I start the namenode, it writes these errors to the log file. When I try to enter safe mode, it also fails, saying it cannot connect to the server:

    org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
    java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:619)
    2010-06-02 14:13:21,079 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
    2010-06-02 14:13:21,079 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer.start(HttpServer.java:425)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:246)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:202)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)



    Thanks in advance
    Khaled


  • Sagar Shukla at Jun 2, 2010 at 2:28 pm
    The error "Address already in use" indicates that a process is already listening on port 9000, which prevents the namenode from starting.

    It looks like the previous namenode shutdown was not clean, so the old process is still running and blocking the new one.

    You can kill the old process with:
    # fuser -k 9000/tcp

    Starting the namenode should then work.
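    Before killing anything blindly, it can be worth confirming that something really is listening on 9000 and seeing which PID owns it. On most Linux systems any of `fuser -v 9000/tcp`, `lsof -i TCP:9000 -sTCP:LISTEN`, or `netstat -tlnp | grep ':9000 '` will name the process. A dependency-free probe (a sketch using bash's /dev/tcp; prints "busy" if a listener is present, "free" otherwise):

```shell
# Probe localhost:9000 without external tools (bash-only /dev/tcp).
if (exec 3<>/dev/tcp/127.0.0.1/9000) 2>/dev/null; then
  echo busy
else
  echo free
fi
```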

    Thanks,
    Sagar

  • Khaled BEN BAHRI at Jun 2, 2010 at 3:07 pm
    Hi,

    The problem still persists: the namenode doesn't start and gives the same error.

    Thanks for the help,
    khaled

  • Sagar Shukla at Jun 3, 2010 at 4:59 am
    Hi Khaled,
    After killing the zombie hadoop process on port 9000, the next attempt to start the namenode will log a different error message in logs/hadoop-hadoop-namenode.log, which should give more detail on why startup is failing.

    Could you follow the same process again and paste the error message?

    Thanks,
    Sagar

  • Khaled BEN BAHRI at Jun 3, 2010 at 8:14 am
    Hello,

    After killing the process on port 9000 and trying to start the namenode with start-dfs.sh, the error message is below. I am also sending the core-site.xml as an attachment.



    2010-06-03 10:06:37,542 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = node004/x.x.x.x
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 0.20.2
    STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
    ************************************************************/
    2010-06-03 10:06:37,633 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
    2010-06-03 10:06:37,639 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: node004.ib.cluster/x.x.x.x:9000
    2010-06-03 10:06:37,641 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
    2010-06-03 10:06:37,644 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
    2010-06-03 10:06:37,691 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=khaled-b,khaled-b
    2010-06-03 10:06:37,691 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
    2010-06-03 10:06:37,691 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
    2010-06-03 10:06:37,698 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
    2010-06-03 10:06:37,699 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
    2010-06-03 10:06:37,728 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
    2010-06-03 10:06:37,732 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
    2010-06-03 10:06:37,732 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 98 loaded in 0 seconds.
    2010-06-03 10:06:37,732 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /usr/local/hadoop-0.20.2/namespace/current/edits of size 4 edits # 0 loaded in 0 seconds.
    2010-06-03 10:06:37,736 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 98 saved in 0 seconds.
    2010-06-03 10:06:37,747 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 77 msecs
    2010-06-03 10:06:37,748 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
    2010-06-03 10:06:37,748 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
    2010-06-03 10:06:37,748 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
    2010-06-03 10:06:37,748 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
    2010-06-03 10:06:37,748 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
    2010-06-03 10:06:37,748 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
    2010-06-03 10:06:37,749 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
    2010-06-03 10:06:37,843 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
    2010-06-03 10:06:37,893 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
    2010-06-03 10:06:37,894 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    2010-06-03 10:06:37,894 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
    2010-06-03 10:06:37,895 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
    java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:619)
    2010-06-03 10:06:37,896 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
    2010-06-03 10:06:37,897 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer.start(HttpServer.java:425)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:246)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:202)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

    2010-06-03 10:06:37,897 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at node004/x.x.x.x
    ************************************************************/


    Thanks a lot
    khaled
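    One detail in the log above seems worth noting: the RPC server binds port 9000 cleanly ("Namenode up at: ...:9000"), and the BindException is only thrown while the web UI listener is being opened on 50070. So the conflicting port may be 50070 (the namenode web interface), not 9000 -- for example a leftover Jetty listener from an earlier start. A dependency-free way to probe both ports (a sketch using bash's /dev/tcp; "busy" means some process is already listening there):

```shell
# Probe both namenode ports on localhost; prints "<port> busy" or "<port> free".
for p in 9000 50070; do
  if (exec 3<>/dev/tcp/127.0.0.1/"$p") 2>/dev/null; then
    echo "$p busy"
  else
    echo "$p free"
  fi
done
```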


Discussion Overview
group: common-user @ hadoop
posted: Jun 2, '10 at 12:19p
active: Jun 3, '10 at 8:14a
posts: 7
users: 2
website: hadoop.apache.org...
irc: #hadoop

2 users in discussion: Khaled BEN BAHRI (4 posts), Sagar Shukla (3 posts)
