FAQ

[Hadoop-common-user] DataNode is shutting down

Yibo820217
Oct 14, 2009 at 6:14 am
Hi all, here is my problem.
When I add a datanode to Hadoop, I do the following:
1. On the namenode, add the new datanode to conf/slaves.

2. On the new datanode, cd $HADOOP_HOME and run:
$ bin/hadoop-daemon.sh start datanode
$ bin/hadoop-daemon.sh start tasktracker

3. On the namenode:
$ bin/hadoop balancer
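
(As a side note: one way to see which datanodes the namenode currently considers live is the standard dfsadmin report, sketched below; the exact output layout may differ between versions.
$ bin/hadoop dfsadmin -report
It prints the cluster capacity followed by one entry per registered datanode with its state.)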

The new datanode is added to the cluster, but an old datanode shuts down.
Here are the logs from the datanode that shut down:

2009-10-14 13:16:30,604 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 3 blocks got
processed in 5 msecs
2009-10-14 13:48:44,395 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action:
DNA_REGISTER
2009-10-14 13:48:47,402 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action:
DNA_REGISTER
2009-10-14 13:48:50,403 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action:
DNA_REGISTER
2009-10-14 13:48:53,407 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action:
DNA_REGISTER
2009-10-14 13:48:56,418 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action:
DNA_REGISTER
2009-10-14 13:48:59,415 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action:
DNA_REGISTER
2009-10-14 13:49:02,420 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down:
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node
100.207.100.33:50010 is attempting to report storage ID
DS-1277539940-100.207.100.33-50010-1255486116525. Node 100.207.100.25:50010
is expected to serve this storage.
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:3914)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:2885)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:715)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:739)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.blockReport(Unknown Source)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:756)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1186)
at java.lang.Thread.run(Thread.java:619)

2009-10-14 13:49:02,527 INFO org.apache.hadoop.ipc.Server: Stopping server
on 50020
2009-10-14 13:49:02,528 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 50020: exiting
2009-10-14 13:49:02,529 INFO org.apache.hadoop.ipc.Server: Stopping IPC
Server Responder
2009-10-14 13:49:02,529 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
exit, active threads is 1
2009-10-14 13:49:02,528 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 0 on 50020: exiting
2009-10-14 13:49:02,528 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 50020: exiting
2009-10-14 13:49:02,529 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(10.207.0.33:50010,
storageID=DS-1277539940-100.207.100.33-50010-1255486116525, infoPort=50075,
ipcPort=50020):DataXceiveServer:
java.nio.channels.AsynchronousCloseException
at
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
at
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
at
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
at java.lang.Thread.run(Thread.java:619)

2009-10-14 13:49:02,530 INFO org.apache.hadoop.ipc.Server: Stopping IPC
Server listener on 50020
2009-10-14 13:49:03,267 INFO
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting
DataBlockScanner thread.
2009-10-14 13:49:03,530 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
exit, active threads is 0
2009-10-14 13:49:03,635 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode:
DatanodeRegistration(10.207.0.33:50010,
storageID=DS-1277539940-100.207.100.33-50010-1255486116525, infoPort=50075,
ipcPort=50020):Finishing DataNode in:
FSDataset{dirpath='/data0/hadoop/hadoopfs/data/current'}
2009-10-14 13:49:03,635 INFO org.apache.hadoop.ipc.Server: Stopping server
on 50020
2009-10-14 13:49:03,635 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
exit, active threads is 0
2009-10-14 13:49:03,636 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at cent52ip33/100.207.100.33
************************************************************/

----hdfs-core.xml----
...
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
...

Can anybody help me, please?

Thanks!

Darren.




2 responses

  • Sudha sadhasivam at Oct 14, 2009 at 11:30 am
    Maybe the masters/slaves file in the conf directory was overwritten with the new datanode's address.
    Instead, the new address should be appended.
    G Sudha Sadasivam
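
    A minimal sketch of that fix (the hostname "new-datanode" is a placeholder; the existing entries must stay in place):

    $ cat conf/slaves                      # should still list all of the old datanodes
    $ echo "new-datanode" >> conf/slaves   # ">>" appends; a single ">" would replace the file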

  • Brian Bockelman at Oct 14, 2009 at 12:50 pm
    Hey,

    Another possibility is that you have inadvertently put some of the
    datanode files on a shared directory, such as an NFS mount. I've seen
    the same problem reported on this mailing list before (did you search
    the list archives?).

    Brian
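
    A rough way to check for that, using the data directory shown in the log above (adjust the path to your own dfs.data.dir):

    # Each datanode's storageID must be unique; if two nodes print the same ID
    # (for example because the directory was copied or lives on a shared mount),
    # the namenode rejects one of them, as in the UnregisteredDatanodeException above.
    $ cat /data0/hadoop/hadoopfs/data/current/VERSION

    # The data directory should be on a local filesystem, not an NFS mount.
    $ df -T /data0/hadoop/hadoopfs/data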

