See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/43/changes

Changes:

[szetszwo] HDFS-451. Add fault injection tests, Pipeline_Fi_06,07,14,15, for DataTransferProtocol.

[szetszwo] Update hadoop-core-0.21.0-dev.jar and hadoop-core-test-0.21.0-dev.jar.

------------------------------------------
[...truncated 312630 lines...]
[junit] 2009-08-08 17:14:19,643 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
[junit] 2009-08-08 17:14:19,927 INFO datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
[junit] 2009-08-08 17:14:19,928 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 54191
[junit] 2009-08-08 17:14:19,928 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-08-08 17:14:19,928 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1249762895928 with interval 21600000
[junit] 2009-08-08 17:14:19,930 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
[junit] 2009-08-08 17:14:19,930 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 42641 webServer.getConnectors()[0].getLocalPort() returned 42641
[junit] 2009-08-08 17:14:19,930 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 42641
[junit] 2009-08-08 17:14:19,930 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
[junit] 2009-08-08 17:14:20,005 INFO mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:42641
[junit] 2009-08-08 17:14:20,006 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-08-08 17:14:20,007 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=54116
[junit] 2009-08-08 17:14:20,007 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
[junit] 2009-08-08 17:14:20,007 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:54191, storageID=, infoPort=42641, ipcPort=54116)
[junit] 2009-08-08 17:14:20,007 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 54116: starting
[junit] 2009-08-08 17:14:20,007 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 54116: starting
[junit] 2009-08-08 17:14:20,009 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:54191 storage DS-1647170729-67.195.138.9-54191-1249751660008
[junit] 2009-08-08 17:14:20,009 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:54191
[junit] 2009-08-08 17:14:20,054 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-1647170729-67.195.138.9-54191-1249751660008 is assigned to data-node 127.0.0.1:54191
[junit] 2009-08-08 17:14:20,054 INFO datanode.DataNode (DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:54191, storageID=DS-1647170729-67.195.138.9-54191-1249751660008, infoPort=42641, ipcPort=54116)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
[junit] Starting DataNode 2 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
[junit] 2009-08-08 17:14:20,055 INFO datanode.DataNode (DataNode.java:offerService(739)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
[junit] 2009-08-08 17:14:20,062 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5 is not formatted.
[junit] 2009-08-08 17:14:20,062 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
[junit] 2009-08-08 17:14:20,092 INFO datanode.DataNode (DataNode.java:blockReport(974)) - BlockReport of 0 blocks got processed in 1 msecs
[junit] 2009-08-08 17:14:20,092 INFO datanode.DataNode (DataNode.java:offerService(782)) - Starting Periodic block scanner.
[junit] 2009-08-08 17:14:20,244 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6 is not formatted.
[junit] 2009-08-08 17:14:20,245 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
[junit] 2009-08-08 17:14:20,533 INFO datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
[junit] 2009-08-08 17:14:20,534 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 34975
[junit] 2009-08-08 17:14:20,534 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-08-08 17:14:20,535 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1249761925535 with interval 21600000
[junit] 2009-08-08 17:14:20,536 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
[junit] 2009-08-08 17:14:20,536 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 47612 webServer.getConnectors()[0].getLocalPort() returned 47612
[junit] 2009-08-08 17:14:20,536 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 47612
[junit] 2009-08-08 17:14:20,537 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
[junit] 2009-08-08 17:14:20,601 INFO mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:47612
[junit] 2009-08-08 17:14:20,602 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-08-08 17:14:20,603 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=44352
[junit] 2009-08-08 17:14:20,604 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
[junit] 2009-08-08 17:14:20,604 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 44352: starting
[junit] 2009-08-08 17:14:20,604 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 44352: starting
[junit] 2009-08-08 17:14:20,605 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:34975, storageID=, infoPort=47612, ipcPort=44352)
[junit] 2009-08-08 17:14:20,606 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:34975 storage DS-123548898-67.195.138.9-34975-1249751660605
[junit] 2009-08-08 17:14:20,606 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:34975
[junit] 2009-08-08 17:14:20,647 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-123548898-67.195.138.9-34975-1249751660605 is assigned to data-node 127.0.0.1:34975
[junit] 2009-08-08 17:14:20,647 INFO datanode.DataNode (DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:34975, storageID=DS-123548898-67.195.138.9-34975-1249751660605, infoPort=47612, ipcPort=44352)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
[junit] 2009-08-08 17:14:20,648 INFO datanode.DataNode (DataNode.java:offerService(739)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
[junit] 2009-08-08 17:14:20,687 INFO datanode.DataNode (DataNode.java:blockReport(974)) - BlockReport of 0 blocks got processed in 1 msecs
[junit] 2009-08-08 17:14:20,687 INFO datanode.DataNode (DataNode.java:offerService(782)) - Starting Periodic block scanner.
[junit] 2009-08-08 17:14:20,747 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=create src=/testPipelineFi15/foo dst=null perm=hudson:supergroup:rw-r--r--
[junit] 2009-08-08 17:14:20,749 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: /testPipelineFi15/foo. blk_-5657119858028835158_1001
[junit] 2009-08-08 17:14:20,781 INFO protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(32)) - FI: addBlock Pipeline[127.0.0.1:54191, 127.0.0.1:42259, 127.0.0.1:34975]
[junit] 2009-08-08 17:14:20,782 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:54191
[junit] 2009-08-08 17:14:20,783 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
[junit] 2009-08-08 17:14:20,783 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-5657119858028835158_1001 src: /127.0.0.1:37959 dest: /127.0.0.1:54191
[junit] 2009-08-08 17:14:20,784 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:42259
[junit] 2009-08-08 17:14:20,785 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
[junit] 2009-08-08 17:14:20,785 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-5657119858028835158_1001 src: /127.0.0.1:46736 dest: /127.0.0.1:42259
[junit] 2009-08-08 17:14:20,787 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:34975
[junit] 2009-08-08 17:14:20,787 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
[junit] 2009-08-08 17:14:20,787 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-5657119858028835158_1001 src: /127.0.0.1:58987 dest: /127.0.0.1:34975
[junit] 2009-08-08 17:14:20,788 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:42259
[junit] 2009-08-08 17:14:20,788 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:54191
[junit] 2009-08-08 17:14:20,790 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
[junit] 2009-08-08 17:14:20,790 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
[junit] 2009-08-08 17:14:20,790 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
[junit] 2009-08-08 17:14:20,790 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
[junit] 2009-08-08 17:14:20,791 INFO fi.FiTestUtil (DataTransferTestUtil.java:run(158)) - FI: testPipelineFi15, index=1, datanode=127.0.0.1:42259
[junit] 2009-08-08 17:14:20,791 INFO datanode.DataNode (BlockReceiver.java:handleMirrorOutError(185)) - DatanodeRegistration(127.0.0.1:42259, storageID=DS-1381817528-67.195.138.9-42259-1249751659354, infoPort=36286, ipcPort=41586):Exception writing block blk_-5657119858028835158_1001 to mirror 127.0.0.1:34975
[junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:42259
[junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
[junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
[junit] at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-08-08 17:14:20,792 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(566)) - Exception in receiveBlock for block blk_-5657119858028835158_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:42259
[junit] 2009-08-08 17:14:20,792 INFO datanode.DataNode (BlockReceiver.java:run(907)) - PacketResponder blk_-5657119858028835158_1001 1 Exception java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/127.0.0.1:58987 remote=/127.0.0.1:34975]. 59997 millis timeout left.
[junit] at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
[junit] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
[junit] at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
[junit] at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
[junit] at java.io.DataInputStream.readFully(DataInputStream.java:178)
[junit] at java.io.DataInputStream.readLong(DataInputStream.java:399)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-08-08 17:14:20,793 INFO datanode.DataNode (BlockReceiver.java:run(922)) - PacketResponder blk_-5657119858028835158_1001 1 : Thread is interrupted.
[junit] 2009-08-08 17:14:20,793 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 1 for block blk_-5657119858028835158_1001 terminating
[junit] 2009-08-08 17:14:20,793 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_-5657119858028835158_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:42259
[junit] 2009-08-08 17:14:20,793 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:42259, storageID=DS-1381817528-67.195.138.9-42259-1249751659354, infoPort=36286, ipcPort=41586):DataXceiver
[junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:42259
[junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
[junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
[junit] at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 2009-08-08 17:14:20,793 INFO datanode.DataNode (BlockReceiver.java:run(907)) - PacketResponder blk_-5657119858028835158_1001 2 Exception java.io.EOFException
[junit] at java.io.DataInputStream.readFully(DataInputStream.java:180)
[junit] at java.io.DataInputStream.readLong(DataInputStream.java:399)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-08-08 17:14:20,793 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(566)) - Exception in receiveBlock for block blk_-5657119858028835158_1001 java.io.EOFException: while trying to read 65557 bytes
[junit] 2009-08-08 17:14:20,794 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 2 for block blk_-5657119858028835158_1001 terminating
[junit] 2009-08-08 17:14:20,794 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(779)) - PacketResponder 0 for block blk_-5657119858028835158_1001 Interrupted.
[junit] 2009-08-08 17:14:20,794 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(843)) - PacketResponder 0 for block blk_-5657119858028835158_1001 terminating
[junit] 2009-08-08 17:14:20,794 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_-5657119858028835158_1001 received exception java.io.EOFException: while trying to read 65557 bytes
[junit] 2009-08-08 17:14:20,794 WARN hdfs.DFSClient (DFSClient.java:run(2593)) - DFSOutputStream ResponseProcessor exception for block blk_-5657119858028835158_1001java.io.IOException: Bad response ERROR for block blk_-5657119858028835158_1001 from datanode 127.0.0.1:42259
[junit] at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2573)
[junit]
[junit] 2009-08-08 17:14:20,795 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2622)) - Error Recovery for block blk_-5657119858028835158_1001 bad datanode[1] 127.0.0.1:42259
[junit] 2009-08-08 17:14:20,795 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:34975, storageID=DS-123548898-67.195.138.9-34975-1249751660605, infoPort=47612, ipcPort=44352):DataXceiver
[junit] java.io.EOFException: while trying to read 65557 bytes
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:271)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:315)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:379)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 2009-08-08 17:14:20,795 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2666)) - Error Recovery for block blk_-5657119858028835158_1001 in pipeline 127.0.0.1:54191, 127.0.0.1:42259, 127.0.0.1:34975: bad datanode 127.0.0.1:42259
[junit] 2009-08-08 17:14:20,798 INFO datanode.DataNode (DataNode.java:logRecoverBlock(1700)) - Client calls recoverBlock(block=blk_-5657119858028835158_1001, targets=[127.0.0.1:54191, 127.0.0.1:34975])
[junit] 2009-08-08 17:14:20,802 INFO datanode.DataNode (DataNode.java:updateBlock(1510)) - oldblock=blk_-5657119858028835158_1001(length=1), newblock=blk_-5657119858028835158_1002(length=0), datanode=127.0.0.1:54191
[junit] 2009-08-08 17:14:20,803 INFO datanode.DataNode (DataNode.java:updateBlock(1510)) - oldblock=blk_-5657119858028835158_1001(length=0), newblock=blk_-5657119858028835158_1002(length=0), datanode=127.0.0.1:34975
[junit] 2009-08-08 17:14:20,804 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613)) - commitBlockSynchronization(lastblock=blk_-5657119858028835158_1001, newgenerationstamp=1002, newlength=0, newtargets=[127.0.0.1:54191, 127.0.0.1:34975], closeFile=false, deleteBlock=false)
[junit] 2009-08-08 17:14:20,804 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677)) - commitBlockSynchronization(blk_-5657119858028835158_1002) successful
[junit] 2009-08-08 17:14:20,805 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:54191
[junit] 2009-08-08 17:14:20,806 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
[junit] 2009-08-08 17:14:20,806 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-5657119858028835158_1002 src: /127.0.0.1:37964 dest: /127.0.0.1:54191
[junit] 2009-08-08 17:14:20,806 INFO datanode.DataNode (FSDataset.java:writeToBlock(1011)) - Reopen already-open Block for append blk_-5657119858028835158_1002
[junit] 2009-08-08 17:14:20,807 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:34975
[junit] 2009-08-08 17:14:20,807 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
[junit] 2009-08-08 17:14:20,807 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-5657119858028835158_1002 src: /127.0.0.1:58991 dest: /127.0.0.1:34975
[junit] 2009-08-08 17:14:20,807 INFO datanode.DataNode (FSDataset.java:writeToBlock(1011)) - Reopen already-open Block for append blk_-5657119858028835158_1002
[junit] 2009-08-08 17:14:20,808 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:54191
[junit] 2009-08-08 17:14:20,809 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
[junit] 2009-08-08 17:14:20,809 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
[junit] 2009-08-08 17:14:20,809 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
[junit] 2009-08-08 17:14:20,809 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
[junit] 2009-08-08 17:14:20,809 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
[junit] 2009-08-08 17:14:20,811 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(819)) - src: /127.0.0.1:58991, dest: /127.0.0.1:34975, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-1504818771, offset: 0, srvID: DS-123548898-67.195.138.9-34975-1249751660605, blockid: blk_-5657119858028835158_1002, duration: 2370502
[junit] 2009-08-08 17:14:20,811 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(843)) - PacketResponder 0 for block blk_-5657119858028835158_1002 terminating
[junit] 2009-08-08 17:14:20,851 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:34975 is added to blk_-5657119858028835158_1002 size 1
[junit] 2009-08-08 17:14:20,852 INFO DataNode.clienttrace (BlockReceiver.java:run(945)) - src: /127.0.0.1:37964, dest: /127.0.0.1:54191, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-1504818771, offset: 0, srvID: DS-1647170729-67.195.138.9-54191-1249751660008, blockid: blk_-5657119858028835158_1002, duration: 3023379
[junit] 2009-08-08 17:14:20,852 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:54191 is added to blk_-5657119858028835158_1002 size 1
[junit] 2009-08-08 17:14:20,852 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 1 for block blk_-5657119858028835158_1002 terminating
[junit] 2009-08-08 17:14:20,854 INFO hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: file /testPipelineFi15/foo is closed by DFSClient_-1504818771
[junit] 2009-08-08 17:14:20,863 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=open src=/testPipelineFi15/foo dst=null perm=null
[junit] 2009-08-08 17:14:20,865 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:54191
[junit] Shutting down the Mini HDFS Cluster
[junit] Shutting down DataNode 2
[junit] 2009-08-08 17:14:20,866 INFO DataNode.clienttrace (BlockSender.java:sendBlock(417)) - src: /127.0.0.1:54191, dest: /127.0.0.1:37966, bytes: 5, op: HDFS_READ, cliID: DFSClient_-1504818771, offset: 0, srvID: DS-1647170729-67.195.138.9-54191-1249751660008, blockid: blk_-5657119858028835158_1002, duration: 234440
[junit] 2009-08-08 17:14:20,867 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:54191
[junit] 2009-08-08 17:14:20,968 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 44352
[junit] 2009-08-08 17:14:20,968 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 44352: exiting
[junit] 2009-08-08 17:14:20,969 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
[junit] 2009-08-08 17:14:20,969 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-08-08 17:14:20,969 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:34975, storageID=DS-123548898-67.195.138.9-34975-1249751660605, infoPort=47612, ipcPort=44352):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-08-08 17:14:20,970 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 44352
[junit] 2009-08-08 17:14:20,972 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-08-08 17:14:20,972 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
[junit] 2009-08-08 17:14:20,972 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:34975, storageID=DS-123548898-67.195.138.9-34975-1249751660605, infoPort=47612, ipcPort=44352):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
[junit] 2009-08-08 17:14:20,972 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 44352
[junit] 2009-08-08 17:14:20,973 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
[junit] Shutting down DataNode 1
[junit] 2009-08-08 17:14:21,075 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 54116
[junit] 2009-08-08 17:14:21,075 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 54116
[junit] 2009-08-08 17:14:21,075 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
[junit] 2009-08-08 17:14:21,076 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 54116: exiting
[junit] 2009-08-08 17:14:21,076 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-08-08 17:14:21,075 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:54191, storageID=DS-1647170729-67.195.138.9-54191-1249751660008, infoPort=42641, ipcPort=54116):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-08-08 17:14:21,077 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
[junit] 2009-08-08 17:14:21,077 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:54191, storageID=DS-1647170729-67.195.138.9-54191-1249751660008, infoPort=42641, ipcPort=54116):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
[junit] 2009-08-08 17:14:21,077 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 54116
[junit] 2009-08-08 17:14:21,077 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
[junit] Shutting down DataNode 0
[junit] 2009-08-08 17:14:21,116 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 41586
[junit] 2009-08-08 17:14:21,116 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 41586: exiting
[junit] 2009-08-08 17:14:21,117 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 41586
[junit] 2009-08-08 17:14:21,117 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-08-08 17:14:21,117 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
[junit] 2009-08-08 17:14:21,117 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:42259, storageID=DS-1381817528-67.195.138.9-42259-1249751659354, infoPort=36286, ipcPort=41586):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-08-08 17:14:21,119 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-08-08 17:14:21,120 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
[junit] 2009-08-08 17:14:21,120 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:42259, storageID=DS-1381817528-67.195.138.9-42259-1249751659354, infoPort=36286, ipcPort=41586):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
[junit] 2009-08-08 17:14:21,120 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 41586
[junit] 2009-08-08 17:14:21,120 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-08-08 17:14:21,223 WARN namenode.FSNamesystem (FSNamesystem.java:run(2077)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2009-08-08 17:14:21,223 INFO namenode.FSNamesystem (FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 43 37
[junit] 2009-08-08 17:14:21,223 WARN namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2009-08-08 17:14:21,232 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 45914
[junit] 2009-08-08 17:14:21,232 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 45914: exiting
[junit] 2009-08-08 17:14:21,232 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 2 on 45914: exiting
[junit] 2009-08-08 17:14:21,232 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 1 on 45914: exiting
[junit] 2009-08-08 17:14:21,232 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 3 on 45914: exiting
[junit] 2009-08-08 17:14:21,233 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 9 on 45914: exiting
[junit] 2009-08-08 17:14:21,233 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 4 on 45914: exiting
[junit] 2009-08-08 17:14:21,233 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 6 on 45914: exiting
[junit] 2009-08-08 17:14:21,233 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 5 on 45914: exiting
[junit] 2009-08-08 17:14:21,233 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 8 on 45914: exiting
[junit] 2009-08-08 17:14:21,233 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 7 on 45914: exiting
[junit] 2009-08-08 17:14:21,234 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
[junit] 2009-08-08 17:14:21,234 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 45914
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 97.775 sec

checkfailure:

BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:725: Tests failed!

Total time: 80 minutes 54 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...


  • Apache Hudson Server at Aug 9, 2009 at 12:50 pm
    See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/44/

    ------------------------------------------
    [...truncated 351784 lines...]
    [junit] 2009-08-09 12:39:59,468 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4 is not formatted.
    [junit] 2009-08-09 12:39:59,468 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-09 12:39:59,718 INFO datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-09 12:39:59,719 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 52546
    [junit] 2009-08-09 12:39:59,719 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-09 12:39:59,720 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1249823953720 with interval 21600000
    [junit] 2009-08-09 12:39:59,721 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-09 12:39:59,721 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 48158 webServer.getConnectors()[0].getLocalPort() returned 48158
    [junit] 2009-08-09 12:39:59,721 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 48158
    [junit] 2009-08-09 12:39:59,722 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-09 12:39:59,787 INFO mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:48158
    [junit] 2009-08-09 12:39:59,788 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-09 12:39:59,789 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=49960
    [junit] 2009-08-09 12:39:59,790 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-09 12:39:59,790 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:52546, storageID=, infoPort=48158, ipcPort=49960)
    [junit] 2009-08-09 12:39:59,790 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 49960: starting
    [junit] 2009-08-09 12:39:59,790 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 49960: starting
    [junit] 2009-08-09 12:39:59,791 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:52546 storage DS-431642151-67.195.138.9-52546-1249821599791
    [junit] 2009-08-09 12:39:59,792 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:52546
    [junit] 2009-08-09 12:39:59,850 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-431642151-67.195.138.9-52546-1249821599791 is assigned to data-node 127.0.0.1:52546
    [junit] 2009-08-09 12:39:59,851 INFO datanode.DataNode (DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:52546, storageID=DS-431642151-67.195.138.9-52546-1249821599791, infoPort=48158, ipcPort=49960)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] Starting DataNode 2 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
    [junit] 2009-08-09 12:39:59,851 INFO datanode.DataNode (DataNode.java:offerService(739)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-09 12:39:59,860 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5 is not formatted.
    [junit] 2009-08-09 12:39:59,860 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-09 12:39:59,891 INFO datanode.DataNode (DataNode.java:blockReport(974)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-09 12:39:59,891 INFO datanode.DataNode (DataNode.java:offerService(782)) - Starting Periodic block scanner.
    [junit] 2009-08-09 12:40:00,022 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6 is not formatted.
    [junit] 2009-08-09 12:40:00,022 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-09 12:40:00,288 INFO datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-09 12:40:00,288 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 36519
    [junit] 2009-08-09 12:40:00,289 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-09 12:40:00,289 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1249834820289 with interval 21600000
    [junit] 2009-08-09 12:40:00,290 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-09 12:40:00,291 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 35828 webServer.getConnectors()[0].getLocalPort() returned 35828
    [junit] 2009-08-09 12:40:00,291 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 35828
    [junit] 2009-08-09 12:40:00,291 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-09 12:40:00,367 INFO mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:35828
    [junit] 2009-08-09 12:40:00,368 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-09 12:40:00,369 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=42704
    [junit] 2009-08-09 12:40:00,370 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-09 12:40:00,370 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 42704: starting
    [junit] 2009-08-09 12:40:00,370 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 42704: starting
    [junit] 2009-08-09 12:40:00,370 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:36519, storageID=, infoPort=35828, ipcPort=42704)
    [junit] 2009-08-09 12:40:00,372 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:36519 storage DS-307716393-67.195.138.9-36519-1249821600371
    [junit] 2009-08-09 12:40:00,372 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:36519
    [junit] 2009-08-09 12:40:00,408 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-307716393-67.195.138.9-36519-1249821600371 is assigned to data-node 127.0.0.1:36519
    [junit] 2009-08-09 12:40:00,408 INFO datanode.DataNode (DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:36519, storageID=DS-307716393-67.195.138.9-36519-1249821600371, infoPort=35828, ipcPort=42704)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-09 12:40:00,408 INFO datanode.DataNode (DataNode.java:offerService(739)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-09 12:40:00,444 INFO datanode.DataNode (DataNode.java:blockReport(974)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-09 12:40:00,444 INFO datanode.DataNode (DataNode.java:offerService(782)) - Starting Periodic block scanner.
    [junit] 2009-08-09 12:40:00,504 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=create src=/testPipelineFi15/foo dst=null perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-09 12:40:00,506 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: /testPipelineFi15/foo. blk_-4004277172538634555_1001
    [junit] 2009-08-09 12:40:00,541 INFO protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(32)) - FI: addBlock Pipeline[127.0.0.1:52546, 127.0.0.1:48078, 127.0.0.1:36519]
    [junit] 2009-08-09 12:40:00,542 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:52546
    [junit] 2009-08-09 12:40:00,542 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-09 12:40:00,542 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-4004277172538634555_1001 src: /127.0.0.1:38436 dest: /127.0.0.1:52546
    [junit] 2009-08-09 12:40:00,544 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:48078
    [junit] 2009-08-09 12:40:00,544 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-09 12:40:00,544 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-4004277172538634555_1001 src: /127.0.0.1:42417 dest: /127.0.0.1:48078
    [junit] 2009-08-09 12:40:00,545 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:36519
    [junit] 2009-08-09 12:40:00,545 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-09 12:40:00,546 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-4004277172538634555_1001 src: /127.0.0.1:60263 dest: /127.0.0.1:36519
    [junit] 2009-08-09 12:40:00,546 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:48078
    [junit] 2009-08-09 12:40:00,547 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:52546
    [junit] 2009-08-09 12:40:00,548 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-09 12:40:00,548 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-09 12:40:00,548 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-09 12:40:00,548 INFO fi.FiTestUtil (DataTransferTestUtil.java:run(158)) - FI: testPipelineFi15, index=1, datanode=127.0.0.1:48078
    [junit] 2009-08-09 12:40:00,548 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-09 12:40:00,549 INFO datanode.DataNode (BlockReceiver.java:handleMirrorOutError(185)) - DatanodeRegistration(127.0.0.1:48078, storageID=DS-36780648-67.195.138.9-48078-1249821599228, infoPort=42944, ipcPort=58112):Exception writing block blk_-4004277172538634555_1001 to mirror 127.0.0.1:36519
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:48078
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-09 12:40:00,549 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(566)) - Exception in receiveBlock for block blk_-4004277172538634555_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:48078
    [junit] 2009-08-09 12:40:00,549 INFO datanode.DataNode (BlockReceiver.java:run(907)) - PacketResponder blk_-4004277172538634555_1001 1 Exception java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/127.0.0.1:60263 remote=/127.0.0.1:36519]. 59998 millis timeout left.
    [junit] at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    [junit] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    [junit] at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    [junit] at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    [junit] at java.io.DataInputStream.readFully(DataInputStream.java:178)
    [junit] at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-09 12:40:00,549 INFO datanode.DataNode (BlockReceiver.java:run(922)) - PacketResponder blk_-4004277172538634555_1001 1 : Thread is interrupted.
    [junit] 2009-08-09 12:40:00,549 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 1 for block blk_-4004277172538634555_1001 terminating
    [junit] 2009-08-09 12:40:00,549 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_-4004277172538634555_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:48078
    [junit] 2009-08-09 12:40:00,550 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:48078, storageID=DS-36780648-67.195.138.9-48078-1249821599228, infoPort=42944, ipcPort=58112):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:48078
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-09 12:40:00,550 INFO datanode.DataNode (BlockReceiver.java:run(907)) - PacketResponder blk_-4004277172538634555_1001 2 Exception java.io.EOFException
    [junit] at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit] at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-09 12:40:00,550 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(566)) - Exception in receiveBlock for block blk_-4004277172538634555_1001 java.io.EOFException: while trying to read 65557 bytes
    [junit] 2009-08-09 12:40:00,551 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(779)) - PacketResponder 0 for block blk_-4004277172538634555_1001 Interrupted.
    [junit] 2009-08-09 12:40:00,551 WARN hdfs.DFSClient (DFSClient.java:run(2593)) - DFSOutputStream ResponseProcessor exception for block blk_-4004277172538634555_1001java.io.IOException: Bad response ERROR for block blk_-4004277172538634555_1001 from datanode 127.0.0.1:48078
    [junit] at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2573)
    [junit]
    [junit] 2009-08-09 12:40:00,551 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 2 for block blk_-4004277172538634555_1001 terminating
    [junit] 2009-08-09 12:40:00,551 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(843)) - PacketResponder 0 for block blk_-4004277172538634555_1001 terminating
    [junit] 2009-08-09 12:40:00,551 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2622)) - Error Recovery for block blk_-4004277172538634555_1001 bad datanode[1] 127.0.0.1:48078
    [junit] 2009-08-09 12:40:00,552 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_-4004277172538634555_1001 received exception java.io.EOFException: while trying to read 65557 bytes
    [junit] 2009-08-09 12:40:00,552 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2666)) - Error Recovery for block blk_-4004277172538634555_1001 in pipeline 127.0.0.1:52546, 127.0.0.1:48078, 127.0.0.1:36519: bad datanode 127.0.0.1:48078
    [junit] 2009-08-09 12:40:00,552 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:36519, storageID=DS-307716393-67.195.138.9-36519-1249821600371, infoPort=35828, ipcPort=42704):DataXceiver
    [junit] java.io.EOFException: while trying to read 65557 bytes
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:271)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:315)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:379)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-09 12:40:00,554 INFO datanode.DataNode (DataNode.java:logRecoverBlock(1700)) - Client calls recoverBlock(block=blk_-4004277172538634555_1001, targets=[127.0.0.1:52546, 127.0.0.1:36519])
    [junit] 2009-08-09 12:40:00,558 INFO datanode.DataNode (DataNode.java:updateBlock(1510)) - oldblock=blk_-4004277172538634555_1001(length=1), newblock=blk_-4004277172538634555_1002(length=0), datanode=127.0.0.1:52546
    [junit] 2009-08-09 12:40:00,560 INFO datanode.DataNode (DataNode.java:updateBlock(1510)) - oldblock=blk_-4004277172538634555_1001(length=0), newblock=blk_-4004277172538634555_1002(length=0), datanode=127.0.0.1:36519
    [junit] 2009-08-09 12:40:00,560 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613)) - commitBlockSynchronization(lastblock=blk_-4004277172538634555_1001, newgenerationstamp=1002, newlength=0, newtargets=[127.0.0.1:52546, 127.0.0.1:36519], closeFile=false, deleteBlock=false)
    [junit] 2009-08-09 12:40:00,561 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677)) - commitBlockSynchronization(blk_-4004277172538634555_1002) successful
    [junit] 2009-08-09 12:40:00,562 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:52546
    [junit] 2009-08-09 12:40:00,562 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-09 12:40:00,562 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-4004277172538634555_1002 src: /127.0.0.1:38441 dest: /127.0.0.1:52546
    [junit] 2009-08-09 12:40:00,563 INFO datanode.DataNode (FSDataset.java:writeToBlock(1011)) - Reopen already-open Block for append blk_-4004277172538634555_1002
    [junit] 2009-08-09 12:40:00,563 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:36519
    [junit] 2009-08-09 12:40:00,563 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-09 12:40:00,564 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-4004277172538634555_1002 src: /127.0.0.1:60267 dest: /127.0.0.1:36519
    [junit] 2009-08-09 12:40:00,564 INFO datanode.DataNode (FSDataset.java:writeToBlock(1011)) - Reopen already-open Block for append blk_-4004277172538634555_1002
    [junit] 2009-08-09 12:40:00,564 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:52546
    [junit] 2009-08-09 12:40:00,565 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-09 12:40:00,565 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-09 12:40:00,565 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-09 12:40:00,566 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-09 12:40:00,566 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-09 12:40:00,567 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(819)) - src: /127.0.0.1:60267, dest: /127.0.0.1:36519, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-1303102380, offset: 0, srvID: DS-307716393-67.195.138.9-36519-1249821600371, blockid: blk_-4004277172538634555_1002, duration: 2049711
    [junit] 2009-08-09 12:40:00,567 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(843)) - PacketResponder 0 for block blk_-4004277172538634555_1002 terminating
    [junit] 2009-08-09 12:40:00,568 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:36519 is added to blk_-4004277172538634555_1002 size 1
    [junit] 2009-08-09 12:40:00,568 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:52546 is added to blk_-4004277172538634555_1002 size 1
    [junit] 2009-08-09 12:40:00,569 INFO DataNode.clienttrace (BlockReceiver.java:run(945)) - src: /127.0.0.1:38441, dest: /127.0.0.1:52546, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-1303102380, offset: 0, srvID: DS-431642151-67.195.138.9-52546-1249821599791, blockid: blk_-4004277172538634555_1002, duration: 2646397
    [junit] 2009-08-09 12:40:00,569 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 1 for block blk_-4004277172538634555_1002 terminating
    [junit] 2009-08-09 12:40:00,570 INFO hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: file /testPipelineFi15/foo is closed by DFSClient_-1303102380
    [junit] 2009-08-09 12:40:00,585 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=open src=/testPipelineFi15/foo dst=null perm=null
    [junit] 2009-08-09 12:40:00,586 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:36519
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-09 12:40:00,587 INFO DataNode.clienttrace (BlockSender.java:sendBlock(417)) - src: /127.0.0.1:36519, dest: /127.0.0.1:60268, bytes: 5, op: HDFS_READ, cliID: DFSClient_-1303102380, offset: 0, srvID: DS-307716393-67.195.138.9-36519-1249821600371, blockid: blk_-4004277172538634555_1002, duration: 232839
    [junit] 2009-08-09 12:40:00,588 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:36519
    [junit] 2009-08-09 12:40:00,690 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 42704
    [junit] 2009-08-09 12:40:00,690 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 42704: exiting
    [junit] 2009-08-09 12:40:00,690 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 42704
    [junit] 2009-08-09 12:40:00,691 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-09 12:40:00,691 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:36519, storageID=DS-307716393-67.195.138.9-36519-1249821600371, infoPort=35828, ipcPort=42704):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
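    The AsynchronousCloseException traces above (and the identical ones for the other datanodes below) are benign shutdown noise: DataXceiverServer sits in a blocking ServerSocketChannel.accept(), and when MiniDFSCluster shutdown closes that channel from another thread, the JDK unblocks accept() by throwing AsynchronousCloseException. A minimal standalone sketch of that JDK behavior (names here are illustrative, not Hadoop code):

    ```java
    import java.net.InetSocketAddress;
    import java.nio.channels.AsynchronousCloseException;
    import java.nio.channels.ServerSocketChannel;

    public class AcceptCloseDemo {
        public static void main(String[] args) throws Exception {
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(0));
            // Close the channel from another thread while accept() blocks,
            // mirroring what cluster shutdown does to DataXceiverServer.
            Thread closer = new Thread(() -> {
                try {
                    Thread.sleep(200);
                    server.close();
                } catch (Exception ignored) {
                }
            });
            closer.start();
            try {
                server.accept();  // blocks until the closer thread closes the channel
            } catch (AsynchronousCloseException e) {
                // This is the exception logged by DataXceiverServer.run above.
                System.out.println("accept() interrupted by close: benign during shutdown");
            }
            closer.join();
        }
    }
    ```

    This is why the WARN lines appear on every datanode teardown even though all tests in the run passed.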
    [junit] 2009-08-09 12:40:00,691 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-09 12:40:00,692 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-09 12:40:00,692 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:36519, storageID=DS-307716393-67.195.138.9-36519-1249821600371, infoPort=35828, ipcPort=42704):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-09 12:40:00,692 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 42704
    [junit] 2009-08-09 12:40:00,693 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-09 12:40:00,795 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 49960
    [junit] 2009-08-09 12:40:00,795 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 49960: exiting
    [junit] 2009-08-09 12:40:00,796 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 49960
    [junit] 2009-08-09 12:40:00,796 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:52546, storageID=DS-431642151-67.195.138.9-52546-1249821599791, infoPort=48158, ipcPort=49960):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-09 12:40:00,796 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-09 12:40:00,796 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-09 12:40:00,797 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-09 12:40:00,797 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:52546, storageID=DS-431642151-67.195.138.9-52546-1249821599791, infoPort=48158, ipcPort=49960):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] 2009-08-09 12:40:00,797 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 49960
    [junit] 2009-08-09 12:40:00,797 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-09 12:40:00,899 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 58112
    [junit] 2009-08-09 12:40:00,899 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 58112: exiting
    [junit] 2009-08-09 12:40:00,900 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 58112
    [junit] 2009-08-09 12:40:00,900 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-09 12:40:00,900 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:48078, storageID=DS-36780648-67.195.138.9-48078-1249821599228, infoPort=42944, ipcPort=58112):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-09 12:40:00,900 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-09 12:40:00,902 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-09 12:40:00,903 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-09 12:40:00,903 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:48078, storageID=DS-36780648-67.195.138.9-48078-1249821599228, infoPort=42944, ipcPort=58112):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
    [junit] 2009-08-09 12:40:00,903 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 58112
    [junit] 2009-08-09 12:40:00,904 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-09 12:40:01,005 WARN namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-09 12:40:01,005 WARN namenode.FSNamesystem (FSNamesystem.java:run(2077)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-09 12:40:01,005 INFO namenode.FSNamesystem (FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 49 32
    [junit] 2009-08-09 12:40:01,015 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 41708
    [junit] 2009-08-09 12:40:01,015 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 41708: exiting
    [junit] 2009-08-09 12:40:01,015 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 8 on 41708: exiting
    [junit] 2009-08-09 12:40:01,015 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 7 on 41708: exiting
    [junit] 2009-08-09 12:40:01,016 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 6 on 41708: exiting
    [junit] 2009-08-09 12:40:01,015 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 2 on 41708: exiting
    [junit] 2009-08-09 12:40:01,015 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-09 12:40:01,015 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 1 on 41708: exiting
    [junit] 2009-08-09 12:40:01,015 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 5 on 41708: exiting
    [junit] 2009-08-09 12:40:01,015 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 4 on 41708: exiting
    [junit] 2009-08-09 12:40:01,016 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 9 on 41708: exiting
    [junit] 2009-08-09 12:40:01,016 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 3 on 41708: exiting
    [junit] 2009-08-09 12:40:01,016 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 41708
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 43.435 sec

    checkfailure:

    BUILD FAILED
    /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:725: Tests failed!

    Total time: 66 minutes 2 seconds
    Publishing Javadoc
    Recording test results
    Recording fingerprints
    Publishing Clover coverage report...
  • Apache Hudson Server at Aug 10, 2009 at 3:42 pm
    See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/45/

    ------------------------------------------
    [...truncated 317708 lines...]
    [junit] 2009-08-10 15:21:36,540 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-10 15:21:36,943 INFO datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-10 15:21:36,944 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 49054
    [junit] 2009-08-10 15:21:36,944 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-10 15:21:36,944 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1249929725944 with interval 21600000
    [junit] 2009-08-10 15:21:36,946 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-10 15:21:36,946 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 46168 webServer.getConnectors()[0].getLocalPort() returned 46168
    [junit] 2009-08-10 15:21:36,946 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 46168
    [junit] 2009-08-10 15:21:36,946 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-10 15:21:37,070 INFO mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:46168
    [junit] 2009-08-10 15:21:37,071 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-10 15:21:37,072 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=34492
    [junit] 2009-08-10 15:21:37,073 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-10 15:21:37,073 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:49054, storageID=, infoPort=46168, ipcPort=34492)
    [junit] 2009-08-10 15:21:37,073 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 34492: starting
    [junit] 2009-08-10 15:21:37,073 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 34492: starting
    [junit] 2009-08-10 15:21:37,094 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:49054 storage DS-1109244666-67.195.138.9-49054-1249917697094
    [junit] 2009-08-10 15:21:37,095 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,155 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-1109244666-67.195.138.9-49054-1249917697094 is assigned to data-node 127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,155 INFO datanode.DataNode (DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:49054, storageID=DS-1109244666-67.195.138.9-49054-1249917697094, infoPort=46168, ipcPort=34492)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] Starting DataNode 2 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
    [junit] 2009-08-10 15:21:37,156 INFO datanode.DataNode (DataNode.java:offerService(739)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-10 15:21:37,164 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5 is not formatted.
    [junit] 2009-08-10 15:21:37,164 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-10 15:21:37,193 INFO datanode.DataNode (DataNode.java:blockReport(974)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-10 15:21:37,194 INFO datanode.DataNode (DataNode.java:offerService(782)) - Starting Periodic block scanner.
    [junit] 2009-08-10 15:21:37,350 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6 is not formatted.
    [junit] 2009-08-10 15:21:37,350 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-10 15:21:37,605 INFO datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-10 15:21:37,605 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 38273
    [junit] 2009-08-10 15:21:37,605 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-10 15:21:37,606 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1249929966606 with interval 21600000
    [junit] 2009-08-10 15:21:37,607 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-10 15:21:37,607 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 47864 webServer.getConnectors()[0].getLocalPort() returned 47864
    [junit] 2009-08-10 15:21:37,608 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 47864
    [junit] 2009-08-10 15:21:37,608 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-10 15:21:37,673 INFO mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:47864
    [junit] 2009-08-10 15:21:37,673 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-10 15:21:37,675 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=35997
    [junit] 2009-08-10 15:21:37,675 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-10 15:21:37,675 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 35997: starting
    [junit] 2009-08-10 15:21:37,676 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 35997: starting
    [junit] 2009-08-10 15:21:37,676 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:38273, storageID=, infoPort=47864, ipcPort=35997)
    [junit] 2009-08-10 15:21:37,677 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:38273 storage DS-124577835-67.195.138.9-38273-1249917697677
    [junit] 2009-08-10 15:21:37,677 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,726 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-124577835-67.195.138.9-38273-1249917697677 is assigned to data-node 127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,726 INFO datanode.DataNode (DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:38273, storageID=DS-124577835-67.195.138.9-38273-1249917697677, infoPort=47864, ipcPort=35997)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-10 15:21:37,727 INFO datanode.DataNode (DataNode.java:offerService(739)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-10 15:21:37,768 INFO datanode.DataNode (DataNode.java:blockReport(974)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-10 15:21:37,768 INFO datanode.DataNode (DataNode.java:offerService(782)) - Starting Periodic block scanner.
    [junit] 2009-08-10 15:21:37,808 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=create src=/testPipelineFi15/foo dst=null perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-10 15:21:37,811 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: /testPipelineFi15/foo. blk_6077790398764684064_1001
    [junit] 2009-08-10 15:21:37,843 INFO protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(32)) - FI: addBlock Pipeline[127.0.0.1:38273, 127.0.0.1:49054, 127.0.0.1:52882]
    [junit] 2009-08-10 15:21:37,845 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,845 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-10 15:21:37,846 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_6077790398764684064_1001 src: /127.0.0.1:35698 dest: /127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,847 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,847 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-10 15:21:37,848 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_6077790398764684064_1001 src: /127.0.0.1:59555 dest: /127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,849 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:52882
    [junit] 2009-08-10 15:21:37,849 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-10 15:21:37,849 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_6077790398764684064_1001 src: /127.0.0.1:38921 dest: /127.0.0.1:52882
    [junit] 2009-08-10 15:21:37,850 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,850 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,852 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,853 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,853 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,854 INFO fi.FiTestUtil (DataTransferTestUtil.java:run(158)) - FI: testPipelineFi15, index=1, datanode=127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,854 INFO datanode.DataNode (BlockReceiver.java:handleMirrorOutError(185)) - DatanodeRegistration(127.0.0.1:49054, storageID=DS-1109244666-67.195.138.9-49054-1249917697094, infoPort=46168, ipcPort=34492):Exception writing block blk_6077790398764684064_1001 to mirror 127.0.0.1:52882
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:49054
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
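    The DiskOutOfSpaceException above is not a real disk failure: the HDFS-451 fault-injection framework uses an AspectJ before-advice on BlockReceiver.receivePacket (visible as BlockReceiverAspects.aj in the trace) to run a test-installed action that throws a synthetic fault at a chosen pipeline index. A hedged plain-Java sketch of that probe-point pattern — the names here are illustrative, not the real org.apache.hadoop.fi API:

    ```java
    import java.io.IOException;

    public class FiSketch {
        // A pluggable action run at a probe point; tests install one,
        // production code installs none.
        interface Action {
            void run(String datanode) throws IOException;
        }

        // Test-installed action: fault only the targeted pipeline node,
        // analogous to DoosAction targeting index=1 in the log above.
        static Action installed = dn -> {
            if (dn.endsWith(":49054")) {
                throw new IOException("FI: testPipelineFi15, datanode=" + dn);
            }
        };

        static void receivePacket(String datanode) throws IOException {
            installed.run(datanode);  // the "before advice" fires here
            // ... real packet handling would follow ...
        }

        public static void main(String[] args) throws IOException {
            for (String dn : new String[]{"127.0.0.1:38273", "127.0.0.1:49054"}) {
                try {
                    receivePacket(dn);
                    System.out.println(dn + " ok");
                } catch (IOException e) {
                    System.out.println(dn + " fault: " + e.getMessage());
                }
            }
        }
    }
    ```

    Because the fault fires inside the targeted datanode's receive path, the rest of the log shows the genuine downstream effects: the mirror write fails, responders are interrupted, and the client begins pipeline recovery.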
    [junit] 2009-08-10 15:21:37,854 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(566)) - Exception in receiveBlock for block blk_6077790398764684064_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,855 INFO datanode.DataNode (BlockReceiver.java:run(907)) - PacketResponder blk_6077790398764684064_1001 1 Exception java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/127.0.0.1:38921 remote=/127.0.0.1:52882]. 59997 millis timeout left.
    [junit] at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    [junit] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    [junit] at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    [junit] at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    [junit] at java.io.DataInputStream.readFully(DataInputStream.java:178)
    [junit] at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-10 15:21:37,855 INFO datanode.DataNode (BlockReceiver.java:run(922)) - PacketResponder blk_6077790398764684064_1001 1 : Thread is interrupted.
    [junit] 2009-08-10 15:21:37,855 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 1 for block blk_6077790398764684064_1001 terminating
    [junit] 2009-08-10 15:21:37,855 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_6077790398764684064_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,855 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:49054, storageID=DS-1109244666-67.195.138.9-49054-1249917697094, infoPort=46168, ipcPort=34492):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:49054
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-10 15:21:37,856 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(566)) - Exception in receiveBlock for block blk_6077790398764684064_1001 java.io.EOFException: while trying to read 65557 bytes
    [junit] 2009-08-10 15:21:37,856 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(779)) - PacketResponder 0 for block blk_6077790398764684064_1001 Interrupted.
    [junit] 2009-08-10 15:21:37,857 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(843)) - PacketResponder 0 for block blk_6077790398764684064_1001 terminating
    [junit] 2009-08-10 15:21:37,857 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_6077790398764684064_1001 received exception java.io.EOFException: while trying to read 65557 bytes
    [junit] 2009-08-10 15:21:37,857 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:52882, storageID=DS-593760917-67.195.138.9-52882-1249917696278, infoPort=35631, ipcPort=41464):DataXceiver
    [junit] java.io.EOFException: while trying to read 65557 bytes
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:271)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:315)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:379)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-10 15:21:37,854 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,857 INFO datanode.DataNode (BlockReceiver.java:run(907)) - PacketResponder blk_6077790398764684064_1001 2 Exception java.io.EOFException
    [junit] at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit] at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-10 15:21:37,859 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 2 for block blk_6077790398764684064_1001 terminating
    [junit] 2009-08-10 15:21:37,859 WARN hdfs.DFSClient (DFSClient.java:run(2593)) - DFSOutputStream ResponseProcessor exception for block blk_6077790398764684064_1001java.io.IOException: Bad response ERROR for block blk_6077790398764684064_1001 from datanode 127.0.0.1:49054
    [junit] at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2573)
    [junit]
    [junit] 2009-08-10 15:21:37,860 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2622)) - Error Recovery for block blk_6077790398764684064_1001 bad datanode[1] 127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,860 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2666)) - Error Recovery for block blk_6077790398764684064_1001 in pipeline 127.0.0.1:38273, 127.0.0.1:49054, 127.0.0.1:52882: bad datanode 127.0.0.1:49054
    [junit] 2009-08-10 15:21:37,862 INFO datanode.DataNode (DataNode.java:logRecoverBlock(1700)) - Client calls recoverBlock(block=blk_6077790398764684064_1001, targets=[127.0.0.1:38273, 127.0.0.1:52882])
    [junit] 2009-08-10 15:21:37,865 INFO datanode.DataNode (DataNode.java:updateBlock(1510)) - oldblock=blk_6077790398764684064_1001(length=1), newblock=blk_6077790398764684064_1002(length=0), datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,867 INFO datanode.DataNode (DataNode.java:updateBlock(1510)) - oldblock=blk_6077790398764684064_1001(length=0), newblock=blk_6077790398764684064_1002(length=0), datanode=127.0.0.1:52882
    [junit] 2009-08-10 15:21:37,868 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613)) - commitBlockSynchronization(lastblock=blk_6077790398764684064_1001, newgenerationstamp=1002, newlength=0, newtargets=[127.0.0.1:38273, 127.0.0.1:52882], closeFile=false, deleteBlock=false)
    [junit] 2009-08-10 15:21:37,868 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677)) - commitBlockSynchronization(blk_6077790398764684064_1002) successful
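    [Editor's note] The recovery sequence logged above — the client drops the bad datanode from the pipeline, then the block is resynchronized under a new generation stamp (1001 → 1002) so stale replicas on the excluded node can be rejected — can be sketched roughly as below. This is a hypothetical simplification, not the actual DFSClient/NameNode code: in real HDFS the NameNode issues the next generation stamp, shown here as a plain increment for illustration.

    ```java
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class PipelineRecoverySketch {
        // Drop the bad datanode; the surviving nodes become the recovery targets
        // (compare: targets=[127.0.0.1:38273, 127.0.0.1:52882] in the log above).
        public static List<String> excludeBadDatanode(List<String> pipeline, String bad) {
            List<String> targets = new ArrayList<>(pipeline);
            targets.remove(bad);
            return targets;
        }

        // Illustration only: in HDFS the NameNode hands out the next generation
        // stamp; here it is modeled as a simple increment (1001 -> 1002).
        public static long newGenerationStamp(long current) {
            return current + 1;
        }

        public static void main(String[] args) {
            List<String> pipeline = Arrays.asList(
                "127.0.0.1:38273", "127.0.0.1:49054", "127.0.0.1:52882");
            System.out.println(excludeBadDatanode(pipeline, "127.0.0.1:49054"));
            System.out.println(newGenerationStamp(1001));
        }
    }
    ```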
    [junit] 2009-08-10 15:21:37,870 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,870 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-10 15:21:37,870 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_6077790398764684064_1002 src: /127.0.0.1:35703 dest: /127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,870 INFO datanode.DataNode (FSDataset.java:writeToBlock(1011)) - Reopen already-open Block for append blk_6077790398764684064_1002
    [junit] 2009-08-10 15:21:37,871 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:52882
    [junit] 2009-08-10 15:21:37,871 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-10 15:21:37,871 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_6077790398764684064_1002 src: /127.0.0.1:38925 dest: /127.0.0.1:52882
    [junit] 2009-08-10 15:21:37,871 INFO datanode.DataNode (FSDataset.java:writeToBlock(1011)) - Reopen already-open Block for append blk_6077790398764684064_1002
    [junit] 2009-08-10 15:21:37,872 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,873 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,873 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,873 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,873 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,873 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-10 15:21:37,874 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(819)) - src: /127.0.0.1:38925, dest: /127.0.0.1:52882, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-1214791587, offset: 0, srvID: DS-593760917-67.195.138.9-52882-1249917696278, blockid: blk_6077790398764684064_1002, duration: 1809582
    [junit] 2009-08-10 15:21:37,875 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:52882 is added to blk_6077790398764684064_1002 size 1
    [junit] 2009-08-10 15:21:37,875 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(843)) - PacketResponder 0 for block blk_6077790398764684064_1002 terminating
    [junit] 2009-08-10 15:21:37,876 INFO DataNode.clienttrace (BlockReceiver.java:run(945)) - src: /127.0.0.1:35703, dest: /127.0.0.1:38273, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-1214791587, offset: 0, srvID: DS-124577835-67.195.138.9-38273-1249917697677, blockid: blk_6077790398764684064_1002, duration: 2971944
    [junit] 2009-08-10 15:21:37,876 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:38273 is added to blk_6077790398764684064_1002 size 1
    [junit] 2009-08-10 15:21:37,877 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 1 for block blk_6077790398764684064_1002 terminating
    [junit] 2009-08-10 15:21:37,878 INFO hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: file /testPipelineFi15/foo is closed by DFSClient_-1214791587
    [junit] 2009-08-10 15:21:37,889 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=open src=/testPipelineFi15/foo dst=null perm=null
    [junit] 2009-08-10 15:21:37,890 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:38273
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-10 15:21:37,891 INFO DataNode.clienttrace (BlockSender.java:sendBlock(417)) - src: /127.0.0.1:38273, dest: /127.0.0.1:35705, bytes: 5, op: HDFS_READ, cliID: DFSClient_-1214791587, offset: 0, srvID: DS-124577835-67.195.138.9-38273-1249917697677, blockid: blk_6077790398764684064_1002, duration: 233320
    [junit] 2009-08-10 15:21:37,891 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:38273
    [junit] 2009-08-10 15:21:37,993 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 35997
    [junit] 2009-08-10 15:21:37,993 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 35997: exiting
    [junit] 2009-08-10 15:21:37,993 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-10 15:21:37,994 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:38273, storageID=DS-124577835-67.195.138.9-38273-1249917697677, infoPort=47864, ipcPort=35997):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
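    [Editor's note] The AsynchronousCloseException traces above are expected during shutdown: DataXceiverServer blocks in ServerSocketChannel.accept(), and when another thread closes the channel, NIO delivers AsynchronousCloseException to the blocked acceptor. A minimal standalone demonstration of that NIO behavior (not Hadoop code; class and method names are made up):

    ```java
    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.AsynchronousCloseException;
    import java.nio.channels.ServerSocketChannel;

    public class AsyncCloseSketch {
        public static String demo() throws Exception {
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            final String[] result = {"no exception"};
            Thread acceptor = new Thread(() -> {
                try {
                    server.accept();          // blocks; no client ever connects
                } catch (AsynchronousCloseException e) {
                    result[0] = "AsynchronousCloseException";
                } catch (IOException e) {
                    result[0] = e.getClass().getSimpleName();
                }
            });
            acceptor.start();
            Thread.sleep(200);                // let the acceptor block in accept()
            server.close();                   // close from another thread
            acceptor.join(2000);
            return result[0];
        }

        public static void main(String[] args) throws Exception {
            System.out.println(demo());
        }
    }
    ```

    This is why the shutdown log treats the exception as a WARN and proceeds: it is the normal way an interruptible channel unblocks its acceptor thread.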
    [junit] 2009-08-10 15:21:37,994 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-10 15:21:37,993 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 35997
    [junit] 2009-08-10 15:21:38,043 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-10 15:21:38,043 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:38273, storageID=DS-124577835-67.195.138.9-38273-1249917697677, infoPort=47864, ipcPort=35997):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-10 15:21:38,043 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 35997
    [junit] 2009-08-10 15:21:38,044 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-10 15:21:38,146 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 34492
    [junit] 2009-08-10 15:21:38,146 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 34492: exiting
    [junit] 2009-08-10 15:21:38,146 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 34492
    [junit] 2009-08-10 15:21:38,147 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-10 15:21:38,147 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-10 15:21:38,147 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:49054, storageID=DS-1109244666-67.195.138.9-49054-1249917697094, infoPort=46168, ipcPort=34492):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-10 15:21:38,149 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-10 15:21:38,150 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-10 15:21:38,150 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:49054, storageID=DS-1109244666-67.195.138.9-49054-1249917697094, infoPort=46168, ipcPort=34492):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] 2009-08-10 15:21:38,150 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 34492
    [junit] 2009-08-10 15:21:38,150 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-10 15:21:38,252 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 41464
    [junit] 2009-08-10 15:21:38,253 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 41464: exiting
    [junit] 2009-08-10 15:21:38,253 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 41464
    [junit] 2009-08-10 15:21:38,253 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-10 15:21:38,253 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-10 15:21:38,253 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:52882, storageID=DS-593760917-67.195.138.9-52882-1249917696278, infoPort=35631, ipcPort=41464):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-10 15:21:38,255 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-10 15:21:38,256 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-10 15:21:38,256 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:52882, storageID=DS-593760917-67.195.138.9-52882-1249917696278, infoPort=35631, ipcPort=41464):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
    [junit] 2009-08-10 15:21:38,256 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 41464
    [junit] 2009-08-10 15:21:38,256 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-10 15:21:38,395 WARN namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-10 15:21:38,395 INFO namenode.FSNamesystem (FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 39 29
    [junit] 2009-08-10 15:21:38,395 WARN namenode.FSNamesystem (FSNamesystem.java:run(2077)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-10 15:21:38,403 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 60444
    [junit] 2009-08-10 15:21:38,404 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 60444: exiting
    [junit] 2009-08-10 15:21:38,404 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 5 on 60444: exiting
    [junit] 2009-08-10 15:21:38,405 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 7 on 60444: exiting
    [junit] 2009-08-10 15:21:38,405 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 1 on 60444: exiting
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 44.091 sec
    [junit] 2009-08-10 15:21:38,404 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 4 on 60444: exiting
    [junit] 2009-08-10 15:21:38,404 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 60444
    [junit] 2009-08-10 15:21:38,404 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 2 on 60444: exiting
    [junit] 2009-08-10 15:21:38,404 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-10 15:21:38,404 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 9 on 60444: exiting
    [junit] 2009-08-10 15:21:38,405 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 6 on 60444: exiting
    [junit] 2009-08-10 15:21:38,405 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 3 on 60444: exiting
    [junit] 2009-08-10 15:21:38,404 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 8 on 60444: exiting

    checkfailure:

    BUILD FAILED
    /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:725: Tests failed!

    Total time: 56 minutes 23 seconds
    Publishing Javadoc
    Recording test results
    Recording fingerprints
    Publishing Clover coverage report...
  • Apache Hudson Server at Aug 11, 2009 at 2:58 pm
    See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/46/changes

    Changes:

    [szetszwo] Move HDFS-525 from 0.21.0 to 0.20.1 in CHANGES.txt.

    [szetszwo] HDFS-525. The SimpleDateFormat object in ListPathsServlet is not thread safe. Contributed by Suresh Srinivas

    ------------------------------------------
    [...truncated 284574 lines...]
    [junit] 2009-08-11 14:55:13,995 INFO datanode.DataNode (DataNode.java:offerService(782)) - Starting Periodic block scanner.
    [junit] 2009-08-11 14:55:14,177 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4 is not formatted.
    [junit] 2009-08-11 14:55:14,177 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-11 14:55:14,418 INFO datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-11 14:55:14,418 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 50135
    [junit] 2009-08-11 14:55:14,419 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-11 14:55:14,419 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1250019415419 with interval 21600000
    [junit] 2009-08-11 14:55:14,420 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-11 14:55:14,421 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 33128 webServer.getConnectors()[0].getLocalPort() returned 33128
    [junit] 2009-08-11 14:55:14,421 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 33128
    [junit] 2009-08-11 14:55:14,421 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-11 14:55:14,484 INFO mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:33128
    [junit] 2009-08-11 14:55:14,485 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-11 14:55:14,486 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=47342
    [junit] 2009-08-11 14:55:14,487 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-11 14:55:14,487 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:50135, storageID=, infoPort=33128, ipcPort=47342)
    [junit] 2009-08-11 14:55:14,487 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 47342: starting
    [junit] 2009-08-11 14:55:14,487 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 47342: starting
    [junit] 2009-08-11 14:55:14,489 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50135 storage DS-1031587713-67.195.138.9-50135-1250002514488
    [junit] 2009-08-11 14:55:14,489 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:50135
    [junit] 2009-08-11 14:55:14,530 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-1031587713-67.195.138.9-50135-1250002514488 is assigned to data-node 127.0.0.1:50135
    [junit] 2009-08-11 14:55:14,531 INFO datanode.DataNode (DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:50135, storageID=DS-1031587713-67.195.138.9-50135-1250002514488, infoPort=33128, ipcPort=47342)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] Starting DataNode 2 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
    [junit] 2009-08-11 14:55:14,531 INFO datanode.DataNode (DataNode.java:offerService(739)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-11 14:55:14,540 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5 is not formatted.
    [junit] 2009-08-11 14:55:14,541 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-11 14:55:14,563 INFO datanode.DataNode (DataNode.java:blockReport(974)) - BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-11 14:55:14,563 INFO datanode.DataNode (DataNode.java:offerService(782)) - Starting Periodic block scanner.
    [junit] 2009-08-11 14:55:14,735 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6 is not formatted.
    [junit] 2009-08-11 14:55:14,736 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
    [junit] 2009-08-11 14:55:15,042 INFO datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
    [junit] 2009-08-11 14:55:15,043 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 51008
    [junit] 2009-08-11 14:55:15,043 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-11 14:55:15,043 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1250020223043 with interval 21600000
    [junit] 2009-08-11 14:55:15,045 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
    [junit] 2009-08-11 14:55:15,045 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 49706 webServer.getConnectors()[0].getLocalPort() returned 49706
    [junit] 2009-08-11 14:55:15,045 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 49706
    [junit] 2009-08-11 14:55:15,045 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-11 14:55:15,144 INFO mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:49706
    [junit] 2009-08-11 14:55:15,145 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-11 14:55:15,146 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=42341
    [junit] 2009-08-11 14:55:15,146 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
    [junit] 2009-08-11 14:55:15,146 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:51008, storageID=, infoPort=49706, ipcPort=42341)
    [junit] 2009-08-11 14:55:15,146 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 42341: starting
    [junit] 2009-08-11 14:55:15,146 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 42341: starting
    [junit] 2009-08-11 14:55:15,148 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:51008 storage DS-869684786-67.195.138.9-51008-1250002515148
    [junit] 2009-08-11 14:55:15,149 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:51008
    [junit] 2009-08-11 14:55:15,219 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-869684786-67.195.138.9-51008-1250002515148 is assigned to data-node 127.0.0.1:51008
    [junit] 2009-08-11 14:55:15,219 INFO datanode.DataNode (DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:51008, storageID=DS-869684786-67.195.138.9-51008-1250002515148, infoPort=49706, ipcPort=42341)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-11 14:55:15,220 INFO datanode.DataNode (DataNode.java:offerService(739)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-11 14:55:15,253 INFO datanode.DataNode (DataNode.java:blockReport(974)) - BlockReport of 0 blocks got processed in 0 msecs
    [junit] 2009-08-11 14:55:15,254 INFO datanode.DataNode (DataNode.java:offerService(782)) - Starting Periodic block scanner.
    [junit] 2009-08-11 14:55:15,316 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=create src=/testPipelineFi15/foo dst=null perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-11 14:55:15,318 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: /testPipelineFi15/foo. blk_-9169371959972368887_1001
    [junit] 2009-08-11 14:55:15,319 INFO protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(32)) - FI: addBlock Pipeline[127.0.0.1:50135, 127.0.0.1:51008, 127.0.0.1:35769]
    [junit] 2009-08-11 14:55:15,320 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:50135
    [junit] 2009-08-11 14:55:15,320 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-11 14:55:15,320 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-9169371959972368887_1001 src: /127.0.0.1:36686 dest: /127.0.0.1:50135
    [junit] 2009-08-11 14:55:15,322 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:51008
    [junit] 2009-08-11 14:55:15,322 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-11 14:55:15,322 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-9169371959972368887_1001 src: /127.0.0.1:48148 dest: /127.0.0.1:51008
    [junit] 2009-08-11 14:55:15,324 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:35769
    [junit] 2009-08-11 14:55:15,324 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-11 14:55:15,324 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-9169371959972368887_1001 src: /127.0.0.1:50935 dest: /127.0.0.1:35769
    [junit] 2009-08-11 14:55:15,325 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:51008
    [junit] 2009-08-11 14:55:15,325 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:50135
    [junit] 2009-08-11 14:55:15,327 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-11 14:55:15,327 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-11 14:55:15,327 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-11 14:55:15,327 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-11 14:55:15,327 INFO fi.FiTestUtil (DataTransferTestUtil.java:run(158)) - FI: testPipelineFi15, index=1, datanode=127.0.0.1:51008
    [junit] 2009-08-11 14:55:15,328 INFO datanode.DataNode (BlockReceiver.java:handleMirrorOutError(185)) - DatanodeRegistration(127.0.0.1:51008, storageID=DS-869684786-67.195.138.9-51008-1250002515148, infoPort=49706, ipcPort=42341):Exception writing block blk_-9169371959972368887_1001 to mirror 127.0.0.1:35769
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:51008
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
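    [Editor's note] The DiskOutOfSpaceException above is the injected fault itself: an AspectJ before-advice on receivePacket() (BlockReceiverAspects) runs a test action that throws for one chosen pipeline position. A hypothetical, much-simplified analog of that action pattern — these class and method names are made up, not the real FiTestUtil/DataTransferTestUtil API:

    ```java
    public class FaultInjectionSketch {
        // A test hook invoked before each packet is received; it may throw to
        // simulate a failure on a specific datanode in the pipeline.
        interface PipelineAction {
            void run(int datanodeIndex) throws Exception;
        }

        // Simulates a disk-full error on one pipeline position, like the
        // "FI: testPipelineFi15, index=1" action in the log.
        static class DiskOutOfSpaceAction implements PipelineAction {
            private final int targetIndex;
            DiskOutOfSpaceAction(int targetIndex) { this.targetIndex = targetIndex; }
            @Override
            public void run(int datanodeIndex) throws Exception {
                if (datanodeIndex == targetIndex) {
                    throw new Exception("FI: simulated disk out of space, index=" + datanodeIndex);
                }
            }
        }

        // Returns true if the action fired, i.e. the fault was injected.
        public static boolean fires(PipelineAction action, int datanodeIndex) {
            try {
                action.run(datanodeIndex);
                return false;
            } catch (Exception e) {
                return true;
            }
        }

        public static void main(String[] args) {
            PipelineAction action = new DiskOutOfSpaceAction(1);
            System.out.println(fires(action, 0));  // first datanode unaffected
            System.out.println(fires(action, 1));  // second datanode gets the fault
        }
    }
    ```

    The rest of the log then shows exactly what the test wants to observe: the faulted datanode aborts the write, its neighbors see EOF/interrupts, and the client runs pipeline recovery around it.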
    [junit] 2009-08-11 14:55:15,328 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(566)) - Exception in receiveBlock for block blk_-9169371959972368887_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:51008
    [junit] 2009-08-11 14:55:15,329 INFO datanode.DataNode (BlockReceiver.java:run(907)) - PacketResponder blk_-9169371959972368887_1001 1 Exception java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/127.0.0.1:50935 remote=/127.0.0.1:35769]. 59997 millis timeout left.
    [junit] at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    [junit] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    [junit] at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    [junit] at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    [junit] at java.io.DataInputStream.readFully(DataInputStream.java:178)
    [junit] at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-11 14:55:15,329 INFO datanode.DataNode (BlockReceiver.java:run(922)) - PacketResponder blk_-9169371959972368887_1001 1 : Thread is interrupted.
    [junit] 2009-08-11 14:55:15,329 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 1 for block blk_-9169371959972368887_1001 terminating
    [junit] 2009-08-11 14:55:15,329 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_-9169371959972368887_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:51008
    [junit] 2009-08-11 14:55:15,330 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(566)) - Exception in receiveBlock for block blk_-9169371959972368887_1001 java.io.EOFException: while trying to read 65557 bytes
    [junit] 2009-08-11 14:55:15,330 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(779)) - PacketResponder 0 for block blk_-9169371959972368887_1001 Interrupted.
    [junit] 2009-08-11 14:55:15,330 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:51008, storageID=DS-869684786-67.195.138.9-51008-1250002515148, infoPort=49706, ipcPort=42341):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15, index=1, datanode=127.0.0.1:51008
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
    [junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-11 14:55:15,330 INFO datanode.DataNode (BlockReceiver.java:run(907)) - PacketResponder blk_-9169371959972368887_1001 2 Exception java.io.EOFException
    [junit] at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit] at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-11 14:55:15,330 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(843)) - PacketResponder 0 for block blk_-9169371959972368887_1001 terminating
    [junit] 2009-08-11 14:55:15,331 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 2 for block blk_-9169371959972368887_1001 terminating
    [junit] 2009-08-11 14:55:15,331 WARN hdfs.DFSClient (DFSClient.java:run(2593)) - DFSOutputStream ResponseProcessor exception for block blk_-9169371959972368887_1001java.io.IOException: Bad response ERROR for block blk_-9169371959972368887_1001 from datanode 127.0.0.1:51008
    [junit] at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2573)
    [junit]
    [junit] 2009-08-11 14:55:15,331 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(358)) - writeBlock blk_-9169371959972368887_1001 received exception java.io.EOFException: while trying to read 65557 bytes
    [junit] 2009-08-11 14:55:15,332 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:35769, storageID=DS-78635520-67.195.138.9-35769-1250002513915, infoPort=33783, ipcPort=43062):DataXceiver
    [junit] java.io.EOFException: while trying to read 65557 bytes
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:271)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:315)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:379)
    [junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-11 14:55:15,333 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2622)) - Error Recovery for block blk_-9169371959972368887_1001 bad datanode[1] 127.0.0.1:51008
    [junit] 2009-08-11 14:55:15,333 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2666)) - Error Recovery for block blk_-9169371959972368887_1001 in pipeline 127.0.0.1:50135, 127.0.0.1:51008, 127.0.0.1:35769: bad datanode 127.0.0.1:51008
    [junit] 2009-08-11 14:55:15,334 INFO datanode.DataNode (DataNode.java:logRecoverBlock(1700)) - Client calls recoverBlock(block=blk_-9169371959972368887_1001, targets=[127.0.0.1:50135, 127.0.0.1:35769])
    [junit] 2009-08-11 14:55:15,338 INFO datanode.DataNode (DataNode.java:updateBlock(1510)) - oldblock=blk_-9169371959972368887_1001(length=1), newblock=blk_-9169371959972368887_1002(length=0), datanode=127.0.0.1:50135
    [junit] 2009-08-11 14:55:15,339 INFO datanode.DataNode (DataNode.java:updateBlock(1510)) - oldblock=blk_-9169371959972368887_1001(length=0), newblock=blk_-9169371959972368887_1002(length=0), datanode=127.0.0.1:35769
    [junit] 2009-08-11 14:55:15,339 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613)) - commitBlockSynchronization(lastblock=blk_-9169371959972368887_1001, newgenerationstamp=1002, newlength=0, newtargets=[127.0.0.1:50135, 127.0.0.1:35769], closeFile=false, deleteBlock=false)
    [junit] 2009-08-11 14:55:15,340 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677)) - commitBlockSynchronization(blk_-9169371959972368887_1002) successful
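The log above shows the HDFS pipeline recovery that the fault injection triggers: the client drops the bad datanode (127.0.0.1:51008) from the three-node pipeline, the block's generation stamp is bumped from 1001 to 1002 via updateBlock, and the namenode commits the synchronization. A minimal sketch of that client-side step is below; the names are illustrative, not the real DFSClient API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the recovery visible in the log: remove the bad
// datanode from the write pipeline and bump the block's generation stamp
// before retrying the write against the surviving datanodes.
public class PipelineRecoverySketch {

    // Drop the datanode at badIndex (the "bad datanode[1]" in the log).
    static List<String> recoverPipeline(List<String> pipeline, int badIndex) {
        List<String> survivors = new ArrayList<>(pipeline);
        survivors.remove(badIndex); // e.g. drop 127.0.0.1:51008
        return survivors;
    }

    public static void main(String[] args) {
        List<String> pipeline = Arrays.asList(
            "127.0.0.1:50135", "127.0.0.1:51008", "127.0.0.1:35769");
        long genStamp = 1001;

        List<String> survivors = recoverPipeline(pipeline, 1);
        long newGenStamp = genStamp + 1; // 1002, as in the updateBlock lines

        System.out.println(survivors);   // [127.0.0.1:50135, 127.0.0.1:35769]
        System.out.println(newGenStamp); // 1002
    }
}
```

After this step the client reopens the block for append on the two survivors (the "Reopen already-open Block for append" lines that follow) and the write completes with the new generation stamp.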
    [junit] 2009-08-11 14:55:15,341 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:50135
    [junit] 2009-08-11 14:55:15,341 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-11 14:55:15,341 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-9169371959972368887_1002 src: /127.0.0.1:36691 dest: /127.0.0.1:50135
    [junit] 2009-08-11 14:55:15,341 INFO datanode.DataNode (FSDataset.java:writeToBlock(1011)) - Reopen already-open Block for append blk_-9169371959972368887_1002
    [junit] 2009-08-11 14:55:15,342 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:35769
    [junit] 2009-08-11 14:55:15,342 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70)) - FI: receiverOpWriteBlock
    [junit] 2009-08-11 14:55:15,342 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-9169371959972368887_1002 src: /127.0.0.1:50939 dest: /127.0.0.1:35769
    [junit] 2009-08-11 14:55:15,342 INFO datanode.DataNode (FSDataset.java:writeToBlock(1011)) - Reopen already-open Block for append blk_-9169371959972368887_1002
    [junit] 2009-08-11 14:55:15,342 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead SUCCESS, datanode=127.0.0.1:50135
    [junit] 2009-08-11 14:55:15,343 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-11 14:55:15,344 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-11 14:55:15,344 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-11 14:55:15,344 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-11 14:55:15,344 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46)) - FI: callReceivePacket
    [junit] 2009-08-11 14:55:15,346 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(819)) - src: /127.0.0.1:50939, dest: /127.0.0.1:35769, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_196313700, offset: 0, srvID: DS-78635520-67.195.138.9-35769-1250002513915, blockid: blk_-9169371959972368887_1002, duration: 2473681
    [junit] 2009-08-11 14:55:15,346 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:35769 is added to blk_-9169371959972368887_1002 size 1
    [junit] 2009-08-11 14:55:15,346 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(843)) - PacketResponder 0 for block blk_-9169371959972368887_1002 terminating
    [junit] 2009-08-11 14:55:15,383 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50135 is added to blk_-9169371959972368887_1002 size 1
    [junit] 2009-08-11 14:55:15,383 INFO DataNode.clienttrace (BlockReceiver.java:run(945)) - src: /127.0.0.1:36691, dest: /127.0.0.1:50135, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_196313700, offset: 0, srvID: DS-1031587713-67.195.138.9-50135-1250002514488, blockid: blk_-9169371959972368887_1002, duration: 38915582
    [junit] 2009-08-11 14:55:15,384 INFO datanode.DataNode (BlockReceiver.java:run(1009)) - PacketResponder 1 for block blk_-9169371959972368887_1002 terminating
    [junit] 2009-08-11 14:55:15,385 INFO hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: file /testPipelineFi15/foo is closed by DFSClient_196313700
    [junit] 2009-08-11 14:55:15,410 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=open src=/testPipelineFi15/foo dst=null perm=null
    [junit] 2009-08-11 14:55:15,411 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50)) - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:50135
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-11 14:55:15,412 INFO DataNode.clienttrace (BlockSender.java:sendBlock(417)) - src: /127.0.0.1:50135, dest: /127.0.0.1:36693, bytes: 5, op: HDFS_READ, cliID: DFSClient_196313700, offset: 0, srvID: DS-1031587713-67.195.138.9-50135-1250002514488, blockid: blk_-9169371959972368887_1002, duration: 229116
    [junit] 2009-08-11 14:55:15,413 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60)) - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:50135
    [junit] 2009-08-11 14:55:15,514 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 42341
    [junit] 2009-08-11 14:55:15,515 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 42341: exiting
    [junit] 2009-08-11 14:55:15,515 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 42341
    [junit] 2009-08-11 14:55:15,515 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-11 14:55:15,515 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:51008, storageID=DS-869684786-67.195.138.9-51008-1250002515148, infoPort=49706, ipcPort=42341):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-11 14:55:15,516 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-11 14:55:15,516 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-11 14:55:15,517 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:51008, storageID=DS-869684786-67.195.138.9-51008-1250002515148, infoPort=49706, ipcPort=42341):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
    [junit] 2009-08-11 14:55:15,517 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 42341
    [junit] 2009-08-11 14:55:15,517 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-11 14:55:15,620 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 47342
    [junit] 2009-08-11 14:55:15,620 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 47342: exiting
    [junit] 2009-08-11 14:55:15,620 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 47342
    [junit] 2009-08-11 14:55:15,620 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-11 14:55:15,620 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:50135, storageID=DS-1031587713-67.195.138.9-50135-1250002514488, infoPort=33128, ipcPort=47342):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-11 14:55:15,621 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-11 14:55:15,621 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-11 14:55:15,622 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:50135, storageID=DS-1031587713-67.195.138.9-50135-1250002514488, infoPort=33128, ipcPort=47342):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
    [junit] 2009-08-11 14:55:15,622 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 47342
    [junit] 2009-08-11 14:55:15,622 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-11 14:55:15,724 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 43062
    [junit] 2009-08-11 14:55:15,725 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 43062: exiting
    [junit] 2009-08-11 14:55:15,725 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 43062
    [junit] 2009-08-11 14:55:15,725 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-11 14:55:15,725 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:35769, storageID=DS-78635520-67.195.138.9-35769-1250002513915, infoPort=33783, ipcPort=43062):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] at java.lang.Thread.run(Thread.java:619)
    [junit]
    [junit] 2009-08-11 14:55:15,725 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-11 14:55:15,726 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
    [junit] 2009-08-11 14:55:15,726 INFO datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:35769, storageID=DS-78635520-67.195.138.9-35769-1250002513915, infoPort=33783, ipcPort=43062):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
    [junit] 2009-08-11 14:55:15,726 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 43062
    [junit] 2009-08-11 14:55:15,726 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-11 14:55:15,829 WARN namenode.FSNamesystem (FSNamesystem.java:run(2077)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-11 14:55:15,829 INFO namenode.FSNamesystem (FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 61 46
    [junit] 2009-08-11 14:55:15,829 WARN namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-11 14:55:15,851 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 51185
    [junit] 2009-08-11 14:55:15,851 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 1 on 51185: exiting
    [junit] 2009-08-11 14:55:15,851 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 4 on 51185: exiting
    [junit] 2009-08-11 14:55:15,852 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 9 on 51185: exiting
    [junit] 2009-08-11 14:55:15,852 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 6 on 51185: exiting
    [junit] 2009-08-11 14:55:15,852 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 3 on 51185: exiting
    [junit] 2009-08-11 14:55:15,852 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 5 on 51185: exiting
    [junit] 2009-08-11 14:55:15,852 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 2 on 51185: exiting
    [junit] 2009-08-11 14:55:15,852 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
    [junit] 2009-08-11 14:55:15,851 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 51185
    [junit] 2009-08-11 14:55:15,851 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 51185: exiting
    [junit] 2009-08-11 14:55:15,851 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 7 on 51185: exiting
    [junit] 2009-08-11 14:55:15,852 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 8 on 51185: exiting
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 43.55 sec

    checkfailure:

    BUILD FAILED
    http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/ws/trunk/build.xml:725: Tests failed!

    Total time: 59 minutes 38 seconds
    Publishing Javadoc
    Recording test results
    Recording fingerprints
    Publishing Clover coverage report...
  • Apache Hudson Server at Aug 12, 2009 at 3:21 pm
