See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/823/changes

Changes:

[tomwhite] HADOOP-5613. Change S3Exception to checked exception. Contributed by Andrew Hitchcock.

[tomwhite] HADOOP-5656. Counter for S3N Read Bytes does not work. Contributed by Ian Nowland.

[tomwhite] HADOOP-5592. Fix typo in Streaming doc in reference to GzipCodec. Contributed via Corinne Chandel.

[dhruba] HADOOP-5213. Fix NullPointerException caused when bzip2 compression was used and the user closed an output stream without writing any data. (Zheng Shao via dhruba)

[tomwhite] HADOOP-5611. Fix C++ libraries to build on Debian Lenny. Contributed by Todd Lipcon.

[tomwhite] HADOOP-5612. Some c++ scripts are not chmodded before ant execution. Contributed by Todd Lipcon.

------------------------------------------
[...truncated 468038 lines...]
[junit] 2009-05-01 18:23:23,633 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-05-01 18:23:23,633 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1241214037633 with interval 21600000
[junit] 2009-05-01 18:23:23,635 INFO http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 37642
[junit] 2009-05-01 18:23:23,635 INFO mortbay.log (?:invoke0(?)) - jetty-6.1.14
[junit] 2009-05-01 18:23:23,712 INFO mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:37642
[junit] 2009-05-01 18:23:23,713 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-05-01 18:23:23,714 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=47145
[junit] 2009-05-01 18:23:23,715 INFO ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
[junit] 2009-05-01 18:23:23,715 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 47145: starting
[junit] 2009-05-01 18:23:23,715 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 47145: starting
[junit] 2009-05-01 18:23:23,715 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 47145: starting
[junit] 2009-05-01 18:23:23,716 INFO datanode.DataNode (DataNode.java:startDataNode(398)) - dnRegistration = DatanodeRegistration(vesta.apache.org:45260, storageID=, infoPort=37642, ipcPort=47145)
[junit] 2009-05-01 18:23:23,715 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 47145: starting
[junit] 2009-05-01 18:23:23,718 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(2082)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:45260 storage DS-2036803848-67.195.138.9-45260-1241202203717
[junit] 2009-05-01 18:23:23,718 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:45260
[junit] 2009-05-01 18:23:23,721 INFO datanode.DataNode (DataNode.java:register(556)) - New storage id DS-2036803848-67.195.138.9-45260-1241202203717 is assigned to data-node 127.0.0.1:45260
[junit] 2009-05-01 18:23:23,721 INFO datanode.DataNode (DataNode.java:run(1216)) - DatanodeRegistration(127.0.0.1:45260, storageID=DS-2036803848-67.195.138.9-45260-1241202203717, infoPort=37642, ipcPort=47145)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
[junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4
[junit] 2009-05-01 18:23:23,722 INFO datanode.DataNode (DataNode.java:offerService(698)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
[junit] 2009-05-01 18:23:23,731 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3 is not formatted.
[junit] 2009-05-01 18:23:23,731 INFO common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
[junit] 2009-05-01 18:23:23,739 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data4 is not formatted.
[junit] 2009-05-01 18:23:23,740 INFO common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
[junit] 2009-05-01 18:23:23,757 INFO datanode.DataNode (DataNode.java:blockReport(927)) - BlockReport of 0 blocks got processed in 3 msecs
[junit] 2009-05-01 18:23:23,757 INFO datanode.DataNode (DataNode.java:offerService(741)) - Starting Periodic block scanner.
[junit] 2009-05-01 18:23:23,780 INFO datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
[junit] 2009-05-01 18:23:23,780 INFO datanode.DataNode (DataNode.java:startDataNode(319)) - Opened info server at 44319
[junit] 2009-05-01 18:23:23,781 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-05-01 18:23:23,781 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1241212230781 with interval 21600000
[junit] 2009-05-01 18:23:23,783 INFO http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 57041
[junit] 2009-05-01 18:23:23,783 INFO mortbay.log (?:invoke0(?)) - jetty-6.1.14
[junit] 2009-05-01 18:23:23,851 INFO mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:57041
[junit] 2009-05-01 18:23:23,851 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-05-01 18:23:23,853 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=58833
[junit] 2009-05-01 18:23:23,853 INFO ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
[junit] 2009-05-01 18:23:23,854 INFO datanode.DataNode (DataNode.java:startDataNode(398)) - dnRegistration = DatanodeRegistration(vesta.apache.org:44319, storageID=, infoPort=57041, ipcPort=58833)
[junit] 2009-05-01 18:23:23,854 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 58833: starting
[junit] 2009-05-01 18:23:23,854 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 58833: starting
[junit] 2009-05-01 18:23:23,853 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 58833: starting
[junit] 2009-05-01 18:23:23,855 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 58833: starting
[junit] 2009-05-01 18:23:23,856 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(2082)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:44319 storage DS-1228759262-67.195.138.9-44319-1241202203855
[junit] 2009-05-01 18:23:23,857 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:44319
[junit] 2009-05-01 18:23:23,874 INFO datanode.DataNode (DataNode.java:register(556)) - New storage id DS-1228759262-67.195.138.9-44319-1241202203855 is assigned to data-node 127.0.0.1:44319
[junit] 2009-05-01 18:23:23,875 INFO datanode.DataNode (DataNode.java:run(1216)) - DatanodeRegistration(127.0.0.1:44319, storageID=DS-1228759262-67.195.138.9-44319-1241202203855, infoPort=57041, ipcPort=58833)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
[junit] 2009-05-01 18:23:23,880 INFO datanode.DataNode (DataNode.java:offerService(698)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
[junit] 2009-05-01 18:23:23,907 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-05-01 18:23:23,908 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-05-01 18:23:23,912 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-05-01 18:23:23,913 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(110)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=create src=/test dst=null perm=hudson:supergroup:rw-r--r--
[junit] 2009-05-01 18:23:23,918 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1477)) - BLOCK* NameSystem.allocateBlock: /test. blk_4008157670003692169_1001
[junit] 2009-05-01 18:23:23,920 INFO datanode.DataNode (DataNode.java:blockReport(927)) - BlockReport of 0 blocks got processed in 1 msecs
[junit] 2009-05-01 18:23:23,920 INFO datanode.DataNode (DataNode.java:offerService(741)) - Starting Periodic block scanner.
[junit] 2009-05-01 18:23:23,921 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_4008157670003692169_1001 src: /127.0.0.1:41989 dest: /127.0.0.1:45260
[junit] 2009-05-01 18:23:23,922 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_4008157670003692169_1001 src: /127.0.0.1:44836 dest: /127.0.0.1:44319
[junit] 2009-05-01 18:23:23,925 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(807)) - src: /127.0.0.1:44836, dest: /127.0.0.1:44319, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-1446114542, offset: 0, srvID: DS-1228759262-67.195.138.9-44319-1241202203855, blockid: blk_4008157670003692169_1001, duration: 1172596
[junit] 2009-05-01 18:23:23,925 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(831)) - PacketResponder 0 for block blk_4008157670003692169_1001 terminating
[junit] 2009-05-01 18:23:23,925 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3084)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:44319 is added to blk_4008157670003692169_1001 size 4096
[junit] 2009-05-01 18:23:23,925 INFO DataNode.clienttrace (BlockReceiver.java:run(933)) - src: /127.0.0.1:41989, dest: /127.0.0.1:45260, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-1446114542, offset: 0, srvID: DS-2036803848-67.195.138.9-45260-1241202203717, blockid: blk_4008157670003692169_1001, duration: 1565148
[junit] 2009-05-01 18:23:23,927 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3084)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:45260 is added to blk_4008157670003692169_1001 size 4096
[junit] 2009-05-01 18:23:23,927 INFO datanode.DataNode (BlockReceiver.java:run(997)) - PacketResponder 1 for block blk_4008157670003692169_1001 terminating
[junit] 2009-05-01 18:23:23,928 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1477)) - BLOCK* NameSystem.allocateBlock: /test. blk_3485936179055972990_1001
[junit] 2009-05-01 18:23:23,930 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_3485936179055972990_1001 src: /127.0.0.1:41991 dest: /127.0.0.1:45260
[junit] 2009-05-01 18:23:23,931 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_3485936179055972990_1001 src: /127.0.0.1:44838 dest: /127.0.0.1:44319
[junit] 2009-05-01 18:23:23,933 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(807)) - src: /127.0.0.1:44838, dest: /127.0.0.1:44319, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-1446114542, offset: 0, srvID: DS-1228759262-67.195.138.9-44319-1241202203855, blockid: blk_3485936179055972990_1001, duration: 879499
[junit] 2009-05-01 18:23:23,933 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(831)) - PacketResponder 0 for block blk_3485936179055972990_1001 terminating
[junit] 2009-05-01 18:23:23,933 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3084)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:44319 is added to blk_3485936179055972990_1001 size 4096
[junit] 2009-05-01 18:23:23,934 INFO DataNode.clienttrace (BlockReceiver.java:run(933)) - src: /127.0.0.1:41991, dest: /127.0.0.1:45260, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_-1446114542, offset: 0, srvID: DS-2036803848-67.195.138.9-45260-1241202203717, blockid: blk_3485936179055972990_1001, duration: 1339814
[junit] 2009-05-01 18:23:23,934 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3084)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:45260 is added to blk_3485936179055972990_1001 size 4096
[junit] 2009-05-01 18:23:23,934 INFO datanode.DataNode (BlockReceiver.java:run(997)) - PacketResponder 1 for block blk_3485936179055972990_1001 terminating
[junit] 2009-05-01 18:23:23,935 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-05-01 18:23:23,935 INFO hdfs.StateChange (FSNamesystem.java:completeFileInternal(1436)) - DIR* NameSystem.completeFile: file /test is closed by DFSClient_-1446114542
[junit] 2009-05-01 18:23:23,940 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
[junit]
[junit] Domains:
[junit] Domain = JMImplementation
[junit] Domain = com.sun.management
[junit] Domain = hadoop
[junit] Domain = java.lang
[junit] Domain = java.util.logging
[junit]
[junit] MBeanServer default domain = DefaultDomain
[junit]
[junit] MBean count = 26
[junit]
[junit] Query MBeanServer MBeans:
[junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-1159908103
[junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-152543741
[junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2014849582
[junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId371396766
[junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort47145
[junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort58833
[junit] Info: key = bytes_written; val = 0
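The domain and MBean listing above comes from the test querying the JVM's platform MBeanServer over JMX. A minimal, self-contained Java sketch of the same query follows; note that the `hadoop:service=DataNode,...` MBeans appear only while a DataNode is running, so a plain JVM lists just the standard domains:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;

public class JmxDomainQuery {
    public static void main(String[] args) {
        // The platform MBeanServer is where a DataNode registers its
        // hadoop:service=DataNode,... MBeans when it is running.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        System.out.println("MBeanServer default domain = " + server.getDefaultDomain());
        for (String domain : server.getDomains()) {
            System.out.println("Domain = " + domain);
        }
        System.out.println("MBean count = " + server.getMBeanCount());
    }
}
```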
[junit] Shutting down the Mini HDFS Cluster
[junit] Shutting down DataNode 1
[junit] 2009-05-01 18:23:24,043 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 58833
[junit] 2009-05-01 18:23:24,043 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 58833: exiting
[junit] 2009-05-01 18:23:24,044 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 58833: exiting
[junit] 2009-05-01 18:23:24,044 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:44319, storageID=DS-1228759262-67.195.138.9-44319-1241202203855, infoPort=57041, ipcPort=58833):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-05-01 18:23:24,043 INFO datanode.DataNode (DataNode.java:shutdown(606)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-05-01 18:23:24,043 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 58833: exiting
[junit] 2009-05-01 18:23:24,043 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-05-01 18:23:24,043 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 58833
[junit] 2009-05-01 18:23:24,044 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
[junit] 2009-05-01 18:23:24,045 INFO datanode.DataNode (DataNode.java:run(1236)) - DatanodeRegistration(127.0.0.1:44319, storageID=DS-1228759262-67.195.138.9-44319-1241202203855, infoPort=57041, ipcPort=58833):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
[junit] 2009-05-01 18:23:24,045 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 58833
[junit] 2009-05-01 18:23:24,045 INFO datanode.DataNode (DataNode.java:shutdown(606)) - Waiting for threadgroup to exit, active threads is 0
[junit] Shutting down DataNode 0
[junit] 2009-05-01 18:23:24,047 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 47145
[junit] 2009-05-01 18:23:24,047 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 47145: exiting
[junit] 2009-05-01 18:23:24,047 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 47145: exiting
[junit] 2009-05-01 18:23:24,047 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 47145: exiting
[junit] 2009-05-01 18:23:24,047 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:45260, storageID=DS-2036803848-67.195.138.9-45260-1241202203717, infoPort=37642, ipcPort=47145):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-05-01 18:23:24,047 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-05-01 18:23:24,047 INFO datanode.DataNode (DataNode.java:shutdown(606)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-05-01 18:23:24,047 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 47145
[junit] 2009-05-01 18:23:24,048 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
[junit] 2009-05-01 18:23:24,049 INFO datanode.DataNode (DataNode.java:run(1236)) - DatanodeRegistration(127.0.0.1:45260, storageID=DS-2036803848-67.195.138.9-45260-1241202203717, infoPort=37642, ipcPort=47145):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
[junit] 2009-05-01 18:23:24,049 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 47145
[junit] 2009-05-01 18:23:24,049 INFO datanode.DataNode (DataNode.java:shutdown(606)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-05-01 18:23:24,200 WARN namenode.FSNamesystem (FSNamesystem.java:run(2357)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2009-05-01 18:23:24,200 WARN namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2009-05-01 18:23:24,200 INFO namenode.FSNamesystem (FSEditLog.java:printStatistics(1082)) - Number of transactions: 3 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 7 1
[junit] 2009-05-01 18:23:24,201 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-05-01 18:23:24,201 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 59514
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 59514: exiting
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 8 on 59514: exiting
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 9 on 59514: exiting
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 7 on 59514: exiting
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 6 on 59514: exiting
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 5 on 59514: exiting
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 4 on 59514: exiting
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 59514: exiting
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 3 on 59514: exiting
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 59514
[junit] 2009-05-01 18:23:24,202 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 59514: exiting
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 3.188 sec
[junit] Running org.apache.hadoop.util.TestCyclicIteration
[junit]
[junit]
[junit] integers=[]
[junit] map={}
[junit] start=-1, iteration=[]
[junit]
[junit]
[junit] integers=[0]
[junit] map={0=0}
[junit] start=-1, iteration=[0]
[junit] start=0, iteration=[0]
[junit] start=1, iteration=[0]
[junit]
[junit]
[junit] integers=[0, 2]
[junit] map={0=0, 2=2}
[junit] start=-1, iteration=[0, 2]
[junit] start=0, iteration=[2, 0]
[junit] start=1, iteration=[2, 0]
[junit] start=2, iteration=[0, 2]
[junit] start=3, iteration=[0, 2]
[junit]
[junit]
[junit] integers=[0, 2, 4]
[junit] map={0=0, 2=2, 4=4}
[junit] start=-1, iteration=[0, 2, 4]
[junit] start=0, iteration=[2, 4, 0]
[junit] start=1, iteration=[2, 4, 0]
[junit] start=2, iteration=[4, 0, 2]
[junit] start=3, iteration=[4, 0, 2]
[junit] start=4, iteration=[0, 2, 4]
[junit] start=5, iteration=[0, 2, 4]
[junit]
[junit]
[junit] integers=[0, 2, 4, 6]
[junit] map={0=0, 2=2, 4=4, 6=6}
[junit] start=-1, iteration=[0, 2, 4, 6]
[junit] start=0, iteration=[2, 4, 6, 0]
[junit] start=1, iteration=[2, 4, 6, 0]
[junit] start=2, iteration=[4, 6, 0, 2]
[junit] start=3, iteration=[4, 6, 0, 2]
[junit] start=4, iteration=[6, 0, 2, 4]
[junit] start=5, iteration=[6, 0, 2, 4]
[junit] start=6, iteration=[0, 2, 4, 6]
[junit] start=7, iteration=[0, 2, 4, 6]
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.109 sec
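The TestCyclicIteration output above implies the iteration contract: given a sorted set of keys and a start key, visit every element once, beginning at the first element strictly greater than start and wrapping past the end. A standalone Java sketch of that behavior, reconstructed from the printed iterations rather than taken from Hadoop's actual implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CyclicIterationSketch {
    /** Elements of a sorted list in cyclic order, starting at the first
     *  element strictly greater than start and wrapping past the end. */
    static List<Integer> cyclicIteration(List<Integer> sorted, int start) {
        List<Integer> out = new ArrayList<>(sorted.size());
        if (sorted.isEmpty()) {
            return out;                       // e.g. start=-1, iteration=[]
        }
        int i = 0;
        while (i < sorted.size() && sorted.get(i) <= start) {
            i++;                              // skip elements <= start
        }
        for (int k = 0; k < sorted.size(); k++) {
            out.add(sorted.get((i + k) % sorted.size()));
        }
        return out;
    }

    public static void main(String[] args) {
        // Matches the log above: integers=[0, 2, 4], start=2, iteration=[4, 0, 2]
        System.out.println(cyclicIteration(Arrays.asList(0, 2, 4), 2));
    }
}
```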
[junit] Running org.apache.hadoop.util.TestGenericsUtil
[junit] 2009-05-01 18:23:25,182 WARN conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
[junit] 2009-05-01 18:23:25,195 WARN util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed: Missing argument for option:jt
[junit] usage: general options are:
[junit] -archives <paths> comma separated archives to be unarchived
[junit] on the compute machines.
[junit] -conf <configuration file> specify an application configuration file
[junit] -D <property=value> use value for given property
[junit] -files <paths> comma separated files to be copied to the
[junit] map reduce cluster
[junit] -fs <local|namenode:port> specify a namenode
[junit] -jt <local|jobtracker:port> specify a job tracker
[junit] -libjars <paths> comma separated jar files to include in the
[junit] classpath.
[junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.185 sec
[junit] Running org.apache.hadoop.util.TestIndexedSort
[junit] sortRandom seed: -2806234358591427219(org.apache.hadoop.util.QuickSort)
[junit] testSorted seed: 5921049797330572392(org.apache.hadoop.util.QuickSort)
[junit] testAllEqual setting min/max at 373/106(org.apache.hadoop.util.QuickSort)
[junit] sortWritable seed: 5292356891873061095(org.apache.hadoop.util.QuickSort)
[junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
[junit] sortRandom seed: -4002401274184521732(org.apache.hadoop.util.HeapSort)
[junit] testSorted seed: -3304166714110103333(org.apache.hadoop.util.HeapSort)
[junit] testAllEqual setting min/max at 456/39(org.apache.hadoop.util.HeapSort)
[junit] sortWritable seed: -1969382330031265178(org.apache.hadoop.util.HeapSort)
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.004 sec
[junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
[junit] 2009-05-01 18:23:26,977 INFO util.ProcessTree (ProcessTree.java:isSetsidSupported(51)) - setsid exited with exit code 0
[junit] 2009-05-01 18:23:27,489 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(142)) - Root process pid: 12833
[junit] 2009-05-01 18:23:27,537 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(147)) - ProcessTree: [ 12836 12833 12835 ]
[junit] 2009-05-01 18:23:34,061 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(160)) - ProcessTree: [ 12837 12839 12833 12849 12835 12851 12845 12847 12843 ]
[junit] 2009-05-01 18:23:34,084 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(65)) - Shell Command exit with a non-zero exit code. This is expected as we are killing the subprocesses of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException:
[junit] 2009-05-01 18:23:34,084 INFO util.ProcessTree (ProcessTree.java:destroyProcessGroup(168)) - Killing all processes in the process group 12833 with SIGTERM. Exit code 0
[junit] 2009-05-01 18:23:34,084 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(71)) - Exit code: 143
[junit] 2009-05-01 18:23:34,172 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(174)) - RogueTaskThread successfully joined.
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.282 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] 2009-05-01 18:23:35,075 WARN conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.602 sec
[junit] Running org.apache.hadoop.util.TestShell
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.184 sec
[junit] Running org.apache.hadoop.util.TestStringUtils
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.094 sec

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build.xml:772: Tests failed!

Total time: 212 minutes 3 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...

Discussion Overview
group: common-dev @
categories: hadoop
posted: May 1, '09 at 6:10p
active: May 2, '09 at 3:40p
posts: 2
users: 1
website: hadoop.apache.org...
irc: #hadoop

1 user in discussion

Apache Hudson Server: 2 posts
