See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/61/changes
Changes:
[hairong] HDFS-553. BlockSender reports wrong failed position in ChecksumException. Contributed by Hairong Kuang.
[szetszwo] HDFS-561. Fix write pipeline READ_TIMEOUT in DataTransferProtocol. Contributed by Kan Zhang
[szetszwo] HDFS-549. Allow a non-fault-inject test, which is specified by -Dtestcase, to be executed by the run-test-hdfs-fault-inject target. Contributed by Konstantin Boudnik
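A hedged usage sketch of the HDFS-549 change above: the run-test-hdfs-fault-inject target and the -Dtestcase property are named in the commit message, while the test class shown here is only an illustrative non-fault-inject test, not one exercised in this build.

  ant run-test-hdfs-fault-inject -Dtestcase=TestFileCreation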
------------------------------------------
[...truncated 221189 lines...]
[junit] 2009-08-25 16:23:50,981 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
[junit] 2009-08-25 16:23:51,037 INFO mortbay.log (?:invoke(?)) - Started [email protected]:53933
[junit] 2009-08-25 16:23:51,038 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-08-25 16:23:51,039 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=36715
[junit] 2009-08-25 16:23:51,040 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
[junit] 2009-08-25 16:23:51,040 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 36715: starting
[junit] 2009-08-25 16:23:51,040 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:56241, storageID=, infoPort=53933, ipcPort=36715)
[junit] 2009-08-25 16:23:51,040 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 36715: starting
[junit] 2009-08-25 16:23:51,041 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:56241 storage DS-98040998-67.195.138.9-56241-1251217431041
[junit] 2009-08-25 16:23:51,042 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:56241
[junit] 2009-08-25 16:23:51,086 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-98040998-67.195.138.9-56241-1251217431041 is assigned to data-node 127.0.0.1:56241
[junit] 2009-08-25 16:23:51,087 INFO datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:56241, storageID=DS-98040998-67.195.138.9-56241-1251217431041, infoPort=53933, ipcPort=36715)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
[junit] Starting DataNode 1 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4
[junit] 2009-08-25 16:23:51,087 INFO datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
[junit] 2009-08-25 16:23:51,096 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3 is not formatted.
[junit] 2009-08-25 16:23:51,097 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
[junit] 2009-08-25 16:23:51,123 INFO datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 0 msecs
[junit] 2009-08-25 16:23:51,124 INFO datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
[junit] 2009-08-25 16:23:51,304 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4 is not formatted.
[junit] 2009-08-25 16:23:51,304 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
[junit] 2009-08-25 16:23:51,565 INFO datanode.DataNode (FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
[junit] 2009-08-25 16:23:51,565 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 44609
[junit] 2009-08-25 16:23:51,566 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-08-25 16:23:51,566 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1251238139566 with interval 21600000
[junit] 2009-08-25 16:23:51,567 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
[junit] 2009-08-25 16:23:51,567 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 51589 webServer.getConnectors()[0].getLocalPort() returned 51589
[junit] 2009-08-25 16:23:51,568 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 51589
[junit] 2009-08-25 16:23:51,568 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
[junit] 2009-08-25 16:23:51,634 INFO mortbay.log (?:invoke(?)) - Started [email protected]:51589
[junit] 2009-08-25 16:23:51,634 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-08-25 16:23:51,635 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=33485
[junit] 2009-08-25 16:23:51,636 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
[junit] 2009-08-25 16:23:51,636 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:44609, storageID=, infoPort=51589, ipcPort=33485)
[junit] 2009-08-25 16:23:51,636 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 33485: starting
[junit] 2009-08-25 16:23:51,636 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 33485: starting
[junit] 2009-08-25 16:23:51,638 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:44609 storage DS-1943273741-67.195.138.9-44609-1251217431637
[junit] 2009-08-25 16:23:51,638 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:44609
[junit] 2009-08-25 16:23:51,680 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-1943273741-67.195.138.9-44609-1251217431637 is assigned to data-node 127.0.0.1:44609
[junit] 2009-08-25 16:23:51,680 INFO datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:44609, storageID=DS-1943273741-67.195.138.9-44609-1251217431637, infoPort=51589, ipcPort=33485)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
[junit] Starting DataNode 2 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
[junit] 2009-08-25 16:23:51,681 INFO datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
[junit] 2009-08-25 16:23:51,683 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5 is not formatted.
[junit] 2009-08-25 16:23:51,683 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
[junit] 2009-08-25 16:23:51,716 INFO datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 1 msecs
[junit] 2009-08-25 16:23:51,717 INFO datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
[junit] 2009-08-25 16:23:51,859 INFO common.Storage (DataStorage.java:recoverTransitionRead(122)) - Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6 is not formatted.
[junit] 2009-08-25 16:23:51,860 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Formatting ...
[junit] 2009-08-25 16:23:52,125 INFO datanode.DataNode (FSDataset.java:registerMBean(1547)) - Registered FSDatasetStatusMBean
[junit] 2009-08-25 16:23:52,126 INFO datanode.DataNode (DataNode.java:startDataNode(326)) - Opened info server at 53608
[junit] 2009-08-25 16:23:52,127 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-08-25 16:23:52,127 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1251231381127 with interval 21600000
[junit] 2009-08-25 16:23:52,128 INFO http.HttpServer (HttpServer.java:start(425)) - Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
[junit] 2009-08-25 16:23:52,129 INFO http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort() returned 56130 webServer.getConnectors()[0].getLocalPort() returned 56130
[junit] 2009-08-25 16:23:52,129 INFO http.HttpServer (HttpServer.java:start(463)) - Jetty bound to port 56130
[junit] 2009-08-25 16:23:52,129 INFO mortbay.log (?:invoke(?)) - jetty-6.1.14
[junit] 2009-08-25 16:23:52,185 INFO mortbay.log (?:invoke(?)) - Started [email protected]:56130
[junit] 2009-08-25 16:23:52,186 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-08-25 16:23:52,187 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=36493
[junit] 2009-08-25 16:23:52,187 INFO ipc.Server (Server.java:run(474)) - IPC Server Responder: starting
[junit] 2009-08-25 16:23:52,188 INFO datanode.DataNode (DataNode.java:startDataNode(404)) - dnRegistration = DatanodeRegistration(vesta.apache.org:53608, storageID=, infoPort=56130, ipcPort=36493)
[junit] 2009-08-25 16:23:52,188 INFO ipc.Server (Server.java:run(939)) - IPC Server handler 0 on 36493: starting
[junit] 2009-08-25 16:23:52,188 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 36493: starting
[junit] 2009-08-25 16:23:52,190 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(1774)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:53608 storage DS-651771013-67.195.138.9-53608-1251217432189
[junit] 2009-08-25 16:23:52,190 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:53608
[junit] 2009-08-25 16:23:52,240 INFO datanode.DataNode (DataNode.java:register(571)) - New storage id DS-651771013-67.195.138.9-53608-1251217432189 is assigned to data-node 127.0.0.1:53608
[junit] 2009-08-25 16:23:52,241 INFO datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:53608, storageID=DS-651771013-67.195.138.9-53608-1251217432189, infoPort=56130, ipcPort=36493)In DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
[junit] 2009-08-25 16:23:52,241 INFO datanode.DataNode (DataNode.java:offerService(763)) - using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
[junit] 2009-08-25 16:23:52,279 INFO datanode.DataNode (DataNode.java:blockReport(998)) - BlockReport of 0 blocks got processed in 1 msecs
[junit] 2009-08-25 16:23:52,279 INFO datanode.DataNode (DataNode.java:offerService(806)) - Starting Periodic block scanner.
[junit] 2009-08-25 16:23:52,421 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=create src=/pipeline_Fi_16/foo dst=null perm=hudson:supergroup:rw-r--r--
[junit] 2009-08-25 16:23:52,424 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1303)) - BLOCK* NameSystem.allocateBlock: /pipeline_Fi_16/foo. blk_-7389658038059638367_1001
[junit] 2009-08-25 16:23:52,424 INFO protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(35)) - FI: addBlock Pipeline[127.0.0.1:53608, 127.0.0.1:56241, 127.0.0.1:44609]
[junit] 2009-08-25 16:23:52,425 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:53608
[junit] 2009-08-25 16:23:52,426 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
[junit] 2009-08-25 16:23:52,426 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-7389658038059638367_1001 src: /127.0.0.1:59613 dest: /127.0.0.1:53608
[junit] 2009-08-25 16:23:52,427 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:56241
[junit] 2009-08-25 16:23:52,427 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
[junit] 2009-08-25 16:23:52,427 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-7389658038059638367_1001 src: /127.0.0.1:50548 dest: /127.0.0.1:56241
[junit] 2009-08-25 16:23:52,428 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:44609
[junit] 2009-08-25 16:23:52,428 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
[junit] 2009-08-25 16:23:52,429 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-7389658038059638367_1001 src: /127.0.0.1:45529 dest: /127.0.0.1:44609
[junit] 2009-08-25 16:23:52,429 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:56241
[junit] 2009-08-25 16:23:52,429 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:53608
[junit] 2009-08-25 16:23:52,430 INFO hdfs.DFSClientAspects (DFSClientAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_DFSClientAspects$2$9396d2df(47)) - FI: after pipelineInitNonAppend: hasError=false errorIndex=0
[junit] 2009-08-25 16:23:52,430 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,431 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,431 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,431 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,431 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,431 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,431 INFO fi.FiTestUtil (DataTransferTestUtil.java:run(170)) - FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:44609
[junit] 2009-08-25 16:23:52,431 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,431 WARN datanode.DataNode (DataNode.java:checkDiskError(702)) - checkDiskError: exception:
[junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:44609
[junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:171)
[junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
[junit] at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:340)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 2009-08-25 16:23:52,432 INFO mortbay.log (?:invoke(?)) - Completed FSVolumeSet.checkDirs. Removed=0volumes. List of current volumes: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current
[junit] 2009-08-25 16:23:52,432 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(569)) - Exception in receiveBlock for block blk_-7389658038059638367_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:44609
[junit] 2009-08-25 16:23:52,432 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(782)) - PacketResponder 0 for block blk_-7389658038059638367_1001 Interrupted.
[junit] 2009-08-25 16:23:52,432 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_-7389658038059638367_1001 terminating
[junit] 2009-08-25 16:23:52,453 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(359)) - writeBlock blk_-7389658038059638367_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:44609
[junit] 2009-08-25 16:23:52,453 ERROR datanode.DataNode (DataXceiver.java:run(112)) - DatanodeRegistration(127.0.0.1:44609, storageID=DS-1943273741-67.195.138.9-44609-1251217431637, infoPort=51589, ipcPort=33485):DataXceiver
[junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:44609
[junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:171)
[junit] at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
[junit] at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:340)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
[junit] at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit] 2009-08-25 16:23:52,453 INFO datanode.DataNode (BlockReceiver.java:run(917)) - PacketResponder blk_-7389658038059638367_1001 1 Exception java.io.EOFException
[junit] at java.io.DataInputStream.readFully(DataInputStream.java:180)
[junit] at java.io.DataInputStream.readLong(DataInputStream.java:399)
[junit] at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:879)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-08-25 16:23:52,478 INFO datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_-7389658038059638367_1001 terminating
[junit] 2009-08-25 16:23:52,479 INFO datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 2 for block blk_-7389658038059638367_1001 terminating
[junit] 2009-08-25 16:23:52,479 WARN hdfs.DFSClient (DFSClient.java:run(2601)) - DFSOutputStream ResponseProcessor exception for block blk_-7389658038059638367_1001java.io.IOException: Bad response ERROR for block blk_-7389658038059638367_1001 from datanode 127.0.0.1:44609
[junit] at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2581)
[junit]
[junit] 2009-08-25 16:23:52,479 INFO hdfs.DFSClientAspects (DFSClientAspects.aj:ajc$before$org_apache_hadoop_hdfs_DFSClientAspects$4$1f7d37b0(77)) - FI: before pipelineErrorAfterInit: errorIndex=2
[junit] 2009-08-25 16:23:52,479 INFO fi.FiTestUtil (DataTransferTestUtil.java:run(235)) - pipeline_Fi_16, errorIndex=2, successfully verified.
[junit] 2009-08-25 16:23:52,479 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2630)) - Error Recovery for block blk_-7389658038059638367_1001 bad datanode[2] 127.0.0.1:44609
[junit] 2009-08-25 16:23:52,480 WARN hdfs.DFSClient (DFSClient.java:processDatanodeError(2674)) - Error Recovery for block blk_-7389658038059638367_1001 in pipeline 127.0.0.1:53608, 127.0.0.1:56241, 127.0.0.1:44609: bad datanode 127.0.0.1:44609
[junit] 2009-08-25 16:23:52,482 INFO datanode.DataNode (DataNode.java:logRecoverBlock(1727)) - Client calls recoverBlock(block=blk_-7389658038059638367_1001, targets=[127.0.0.1:53608, 127.0.0.1:56241])
[junit] 2009-08-25 16:23:52,485 INFO datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_-7389658038059638367_1001(length=1), newblock=blk_-7389658038059638367_1002(length=1), datanode=127.0.0.1:53608
[junit] 2009-08-25 16:23:52,487 INFO datanode.DataNode (DataNode.java:updateBlock(1537)) - oldblock=blk_-7389658038059638367_1001(length=1), newblock=blk_-7389658038059638367_1002(length=1), datanode=127.0.0.1:56241
[junit] 2009-08-25 16:23:52,488 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613)) - commitBlockSynchronization(lastblock=blk_-7389658038059638367_1001, newgenerationstamp=1002, newlength=1, newtargets=[127.0.0.1:53608, 127.0.0.1:56241], closeFile=false, deleteBlock=false)
[junit] 2009-08-25 16:23:52,488 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677)) - commitBlockSynchronization(blk_-7389658038059638367_1002) successful
[junit] 2009-08-25 16:23:52,489 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:53608
[junit] 2009-08-25 16:23:52,489 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
[junit] 2009-08-25 16:23:52,490 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-7389658038059638367_1002 src: /127.0.0.1:59618 dest: /127.0.0.1:53608
[junit] 2009-08-25 16:23:52,490 INFO datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_-7389658038059638367_1002
[junit] 2009-08-25 16:23:52,490 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:56241
[junit] 2009-08-25 16:23:52,491 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72)) - FI: receiverOpWriteBlock
[junit] 2009-08-25 16:23:52,491 INFO datanode.DataNode (DataXceiver.java:opWriteBlock(222)) - Receiving block blk_-7389658038059638367_1002 src: /127.0.0.1:50553 dest: /127.0.0.1:56241
[junit] 2009-08-25 16:23:52,491 INFO datanode.DataNode (FSDataset.java:writeToBlock(1085)) - Reopen already-open Block for append blk_-7389658038059638367_1002
[junit] 2009-08-25 16:23:52,491 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead SUCCESS, datanode=127.0.0.1:53608
[junit] 2009-08-25 16:23:52,492 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,492 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,492 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,492 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,492 INFO datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47)) - FI: callReceivePacket
[junit] 2009-08-25 16:23:52,494 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(822)) - src: /127.0.0.1:50553, dest: /127.0.0.1:56241, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_1922674027, offset: 0, srvID: DS-98040998-67.195.138.9-56241-1251217431041, blockid: blk_-7389658038059638367_1002, duration: 1825731
[junit] 2009-08-25 16:23:52,494 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853)) - PacketResponder 0 for block blk_-7389658038059638367_1002 terminating
[junit] 2009-08-25 16:23:52,494 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56241 is added to blk_-7389658038059638367_1002 size 1
[junit] 2009-08-25 16:23:52,495 INFO DataNode.clienttrace (BlockReceiver.java:run(955)) - src: /127.0.0.1:59618, dest: /127.0.0.1:53608, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_1922674027, offset: 0, srvID: DS-651771013-67.195.138.9-53608-1251217432189, blockid: blk_-7389658038059638367_1002, duration: 2850663
[junit] 2009-08-25 16:23:52,495 INFO datanode.DataNode (BlockReceiver.java:run(1025)) - PacketResponder 1 for block blk_-7389658038059638367_1002 terminating
[junit] 2009-08-25 16:23:52,535 INFO hdfs.StateChange (BlockManager.java:addStoredBlock(950)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:53608 is added to blk_-7389658038059638367_1002 size 1
[junit] 2009-08-25 16:23:52,536 INFO hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269)) - DIR* NameSystem.completeFile: file /pipeline_Fi_16/foo is closed by DFSClient_1922674027
[junit] 2009-08-25 16:23:52,558 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=open src=/pipeline_Fi_16/foo dst=null perm=null
[junit] 2009-08-25 16:23:52,559 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51)) - FI: receiverOp READ_BLOCK, datanode=127.0.0.1:53608
[junit] Shutting down the Mini HDFS Cluster
[junit] Shutting down DataNode 2
[junit] 2009-08-25 16:23:52,560 INFO DataNode.clienttrace (BlockSender.java:sendBlock(418)) - src: /127.0.0.1:53608, dest: /127.0.0.1:59620, bytes: 5, op: HDFS_READ, cliID: DFSClient_1922674027, offset: 0, srvID: DS-651771013-67.195.138.9-53608-1251217432189, blockid: blk_-7389658038059638367_1002, duration: 223811
[junit] 2009-08-25 16:23:52,561 INFO datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61)) - FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:53608
[junit] 2009-08-25 16:23:52,662 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 36493
[junit] 2009-08-25 16:23:52,662 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 36493
[junit] 2009-08-25 16:23:52,662 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
[junit] 2009-08-25 16:23:52,662 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 36493: exiting
[junit] 2009-08-25 16:23:52,662 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-08-25 16:23:52,662 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:53608, storageID=DS-651771013-67.195.138.9-53608-1251217432189, infoPort=56130, ipcPort=36493):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-08-25 16:23:52,663 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
[junit] 2009-08-25 16:23:52,664 INFO datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:53608, storageID=DS-651771013-67.195.138.9-53608-1251217432189, infoPort=56130, ipcPort=36493):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}
[junit] 2009-08-25 16:23:52,664 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 36493
[junit] 2009-08-25 16:23:52,664 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
[junit] Shutting down DataNode 1
[junit] 2009-08-25 16:23:52,765 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 33485
[junit] 2009-08-25 16:23:52,766 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 33485: exiting
[junit] 2009-08-25 16:23:52,766 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 33485
[junit] 2009-08-25 16:23:52,766 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
[junit] 2009-08-25 16:23:52,766 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:44609, storageID=DS-1943273741-67.195.138.9-44609-1251217431637, infoPort=51589, ipcPort=33485):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-08-25 16:23:52,766 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-08-25 16:23:52,767 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
[junit] 2009-08-25 16:23:52,767 INFO datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:44609, storageID=DS-1943273741-67.195.138.9-44609-1251217431637, infoPort=51589, ipcPort=33485):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}
[junit] 2009-08-25 16:23:52,767 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 33485
[junit] 2009-08-25 16:23:52,767 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
[junit] Shutting down DataNode 0
[junit] 2009-08-25 16:23:52,869 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 36715
[junit] 2009-08-25 16:23:52,870 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 36715: exiting
[junit] 2009-08-25 16:23:52,870 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-08-25 16:23:52,870 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:56241, storageID=DS-98040998-67.195.138.9-56241-1251217431041, infoPort=53933, ipcPort=36715):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-08-25 16:23:52,870 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
[junit] 2009-08-25 16:23:52,870 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 36715
[junit] 2009-08-25 16:23:52,872 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-08-25 16:23:52,872 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(616)) - Exiting DataBlockScanner thread.
[junit] 2009-08-25 16:23:52,873 INFO datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:56241, storageID=DS-98040998-67.195.138.9-56241-1251217431041, infoPort=53933, ipcPort=36715):Finishing DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}
[junit] 2009-08-25 16:23:52,873 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 36715
[junit] 2009-08-25 16:23:52,873 INFO datanode.DataNode (DataNode.java:shutdown(643)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-08-25 16:23:52,975 WARN namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2009-08-25 16:23:52,975 INFO namenode.FSNamesystem (FSEditLog.java:printStatistics(884)) - Number of transactions: 5 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 45 43
[junit] 2009-08-25 16:23:52,975 WARN namenode.FSNamesystem (FSNamesystem.java:run(2077)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2009-08-25 16:23:52,981 INFO ipc.Server (Server.java:stop(1103)) - Stopping server on 54008
[junit] 2009-08-25 16:23:52,981 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 0 on 54008: exiting
[junit] 2009-08-25 16:23:52,981 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 5 on 54008: exiting
[junit] 2009-08-25 16:23:52,982 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 1 on 54008: exiting
[junit] 2009-08-25 16:23:52,982 INFO ipc.Server (Server.java:run(539)) - Stopping IPC Server Responder
[junit] 2009-08-25 16:23:52,982 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 6 on 54008: exiting
[junit] 2009-08-25 16:23:52,981 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 3 on 54008: exiting
[junit] 2009-08-25 16:23:52,981 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 4 on 54008: exiting
[junit] 2009-08-25 16:23:52,981 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 2 on 54008: exiting
[junit] 2009-08-25 16:23:52,982 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 7 on 54008: exiting
[junit] 2009-08-25 16:23:52,982 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 54008
[junit] 2009-08-25 16:23:52,982 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 9 on 54008: exiting
[junit] 2009-08-25 16:23:52,982 INFO ipc.Server (Server.java:run(997)) - IPC Server handler 8 on 54008: exiting
[junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 228.82 sec
checkfailure:
BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:727: Tests failed!
Total time: 71 minutes 16 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...