On Fri, Oct 05, 2007 at 08:46:13AM -0400, David Golden wrote:
On 10/5/07, David Landgren wrote:
David Cantrell wrote:
From a quick look over the reports I've sent this month, 40K looks like
a good cutoff point. Add the introductory text, perl -V, and any
comments I might add by hand, and it'll still be well under 50K.
I've set it at 100K in the devel version in the repo. (Engineering
training instinctively says 'safety factor of 2'!)
But I could be argued down. What's the basis for 40K? Is that a max?
99th percentile? Twice max already?
It might be interesting to run a script that can do a check on that sort
of thing. I should be able to use a cannibalised version of the current
daily cpanstats script. If I get time over the weekend, I'll see what
stats I get for the year so far.
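For what it's worth, that size check could be sketched with something like this (a rough sketch, not the cpanstats script itself: it assumes one report per file under a hypothetical `reports/` directory, and the nearest-rank percentile helper is mine):

```perl
use strict;
use warnings;

# Return the value at a given percentile of a list of report sizes,
# using the nearest-rank method.
sub percentile {
    my ($p, @sizes) = @_;
    my @sorted = sort { $a <=> $b } @sizes;
    my $rank = int($p / 100 * @sorted + 0.5);
    $rank = @sorted if $rank > @sorted;
    $rank = 1       if $rank < 1;
    return $sorted[ $rank - 1 ];
}

# Hypothetical layout: one report per file under ./reports
my @sizes = map { -s $_ } glob('reports/*.txt');
printf "max: %d  99th: %d  95th: %d\n",
    percentile(100, @sizes), percentile(99, @sizes), percentile(95, @sizes)
    if @sizes;
```

Comparing the max against the 99th percentile would show whether 40K is "max" or "twice max already".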
Can I also suggest - probably for the future as it would require more
work - some way of automatically killing off a test that has done
nothing but spew megabyte after megabyte after megabyte of errors and
warnings to the console, and just make it a FAIL? Spewing a gazillion
warnings is generally considered to be a Bad Thing even if they're
harmless and the code actually works.
That's actually really hard -- the output stream isn't being read
interactively by a process. It's all batch, executed in a separate
command line process and picked up from teed output later.
I might be able to build that kind of a kill as a parameter to Tee --
if the tee file exceeds a certain size, then kill the process. But
even that might require some real work to be portable. Forks and
timeouts are not the nicest stuff to work with on Win32.
Don't go there, there be dragons!
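The size-capped kill could look roughly like this (a Unix-flavoured sketch only; `run_capped` is a hypothetical helper, and as noted above, making anything like this portable to Win32 is the hard part):

```perl
use strict;
use warnings;

# Sketch: run a command, tee its output to a log file, and kill it
# if the output grows past a byte cap. Unix-only as written.
sub run_capped {
    my ($cap, $logfile, @cmd) = @_;
    open my $log, '>', $logfile or die "open $logfile: $!";
    my $pid = open my $out, '-|', @cmd or die "fork: $!";
    my $bytes = 0;
    while ( my $line = <$out> ) {
        print {$log} $line;
        $bytes += length $line;
        if ( $bytes > $cap ) {
            kill 'TERM', $pid;    # runaway output: stop the test
            close $out;
            return 0;             # capped/aborted run
        }
    }
    close $out;
    return 1;                     # completed within the cap
}
```

Even this simple version shows the problem: the cap only triggers when output is actually flowing, so it does nothing for a test that hangs silently.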
And arguably, CPAN Testers isn't supposed to be judging cleanliness of
tests. Just whether tests work. If it's a test that's spewing
"uninitialized variable" warnings, but the code works, then it should
PASS, not FAIL. At most, the test could be aborted and the report
truncated.
This was another report type I suggested several years ago: WARNING.
Although the distribution passes, a report is produced and sent to the
author, as per a FAIL report. It would still count as a PASS, but the
author would get the benefit of being alerted to any warnings that they
may not have been aware of.
Unfortunately this would require a change to CPANPLUS (and probably
CPAN/CPAN-Reporter) to capture the output rather than just create a
simple PASS report.
When I was testing I did get a few distributions that produced plenty
of output, mostly from annoying distributions that insisted you enter a
value interactively, and thus left the run stuck in an endless loop
until I came in in the morning to find I'd only tested a couple of
distributions instead of the few hundred I'd expected! (sorry, it's
been a long day :)). But those are the more annoying cases; I don't
expect you're going to get a decent cross-platform way of spotting or
stopping those. It was bad enough dealing with them manually on Win32 :(
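For the hung-at-a-prompt case, the usual Unix answer is a wall-clock budget via alarm() (a minimal sketch; `run_with_timeout` is hypothetical, and since it relies on alarm() it is exactly the kind of thing that doesn't carry over to Win32):

```perl
use strict;
use warnings;

# Sketch: give a test run a wall-clock budget, so a prompt waiting
# overnight for input gets aborted instead. Unix-only: relies on
# SIGALRM delivery interrupting the blocked code.
sub run_with_timeout {
    my ($secs, $code) = @_;
    my $ok = eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm $secs;
        $code->();
        alarm 0;
        1;
    };
    alarm 0;    # belt and braces: cancel any pending alarm
    return $ok ? 1 : 0;
}
```

A timeout like this catches the silent hang that an output-size cap never sees, so the two guards complement each other.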