+1 (non-binding)
I downloaded the patch and installed it on a 10-node cluster.
I successfully ran randomwriter twice and the following two SLive tests (a mixed-operation run and a create-only run):
hadoop --config $HADOOP_CONF_DIR org.apache.hadoop.fs.slive.SliveTest \
-appendSize 1,67108864 -append 0,uniform -baseDir /user/$USER/S-Live \
-blockSize 67108864,67108864 -create 0,uniform -delete 20,uniform -dirSize 16 \
-duration 300 -files 1024 -ls 20,uniform -maps 20 -mkdir 20,uniform -ops 10000 \
-packetSize 65536 -readSize 1,4294967295 -read 20,uniform -reduces 5 \
-rename 20,uniform -replication 1,3 -resFile $RESFILE \
-seed 12345678 -sleep 100,1000 -writeSize 1,67108864
hadoop --config $HADOOP_CONF_DIR org.apache.hadoop.fs.slive.SliveTest \
-appendSize 1,67108864 -append 0,uniform -baseDir /user/$USER/S-Live \
-blockSize 67108864,67108864 -create 100,uniform -delete 0,uniform -dirSize 16 \
-duration 300 -files 1024 -ls 0,uniform -maps 20 -mkdir 0,uniform -ops 10000 \
-packetSize 65536 -readSize 1,4294967295 -read 0,uniform -reduces 5 \
-rename 0,uniform -replication 1,3 -resFile $RESFILE -seed 12345678 \
-sleep 100,1000 -writeSize 1,67108864
Thanks,
-Eric Payne
----------------------
From: Owen O'Malley [[email protected]]
Sent: Thu 8/25/2011 7:12 PM
To: [email protected]
Subject: [VOTE] Should we release 0.20.204.0-rc3?
All,
I've fixed the issues that Allen observed in the previous rc for 0.20.204 and rolled the new bundle up at
http://people.apache.org/~omalley/hadoop-0.20.204.0-rc3. Please download the tarball, compile it, and try it out. All of the tests pass, and I've run several 1TB sorts with 15,000 maps and 110 reduces, with only one task failure across the 3 runs.
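For anyone else validating the rc, a 1TB sort like the one described can be reproduced with the bundled examples jar. This is a sketch, not Owen's exact invocation: the jar name, HDFS paths, and tuning flags below are assumptions. TeraGen writes 100-byte rows, so a 1TB input works out to 10^12 / 100 = 10^10 rows:

```shell
# Sketch only: jar name and HDFS paths are assumed, not taken from this thread.
# TeraGen rows are 100 bytes each, so 1TB of input is 10^12 / 100 = 10^10 rows.
ROWS=$((1000000000000 / 100))
echo "rows=$ROWS"

# hadoop jar hadoop-examples-0.20.204.0.jar teragen \
#     -Dmapred.map.tasks=15000 $ROWS /user/$USER/terasort-in
# hadoop jar hadoop-examples-0.20.204.0.jar terasort \
#     -Dmapred.reduce.tasks=110 /user/$USER/terasort-in /user/$USER/terasort-out
# hadoop jar hadoop-examples-0.20.204.0.jar teravalidate \
#     /user/$USER/terasort-out /user/$USER/terasort-report
```

TeraValidate confirms the output is globally sorted, which is a stronger check than just watching for task failures.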
Thanks,
Owen