Hi.

Thank you for your comments and suggestions on CPANTS. I'm going to
talk about some CPANTS stuff that I'm working on at YAPC::EU. I'm not
sure whether I can implement any of your suggestions before the
conference, but maybe we can discuss them there.

Regards,

Kenichi Ishigaki, aka charsbar

2012/8/8 David Golden <xdaveg@gmail.com>:
I understand how frustrating it is to encounter these sorts of issues
and I applaud your desire to get more transparency about them.
However, I have several reservations about trying to address them with
CPAN Testers.

My first reservation is philosophical. CPAN Testers is primarily
about *tests* -- whether a distribution's tests pass or fail. It is
not a
general distribution quality (Kwalitee) reporting system like CPANTS
(http://cpants.charsbar.org/index.html). (Nor is it a general test
for "works with toolchain".)

My next reservation is about the relation between "failure" and
"blame". Consider circular dependencies and assume that the
circularity spans two or more authors. Which distribution is to
blame? Should they all start getting "fail" reports? I don't think
so. That's just likely to piss people off and lead to Barbie getting
more hate mail.

It also would not be consistent with our policy for dependencies.
Because we focus on whether tests pass, we want to provide a fair
chance of passage. IMO, a fair test is one in which all specified
prerequisites are satisfied. We don't send reports when dependencies
are missing and we encourage the "exit(0)" trick if external
dependencies are not available.
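
For illustration, here is a minimal sketch of that trick in a
Makefile.PL, using Devel::CheckLib to probe for a hypothetical
external library "foo" (the names are made up; the pattern is the
point):

    use ExtUtils::MakeMaker;
    use Devel::CheckLib;

    # If the external library is missing, stop before WriteMakefile().
    # Exiting 0 signals "cannot build here" without raising an error,
    # so testers stop quietly instead of sending a FAIL report.
    unless (check_lib(lib => 'foo')) {
        warn "libfoo not found; aborting build without error\n";
        exit 0;
    }

    WriteMakefile(
        NAME    => 'Foo::Bar',
        VERSION => '0.01',
    );

(Devel::CheckLib's check_lib_or_exit() wraps this same pattern in one
call.)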

Reporting circular dependencies as a failure would be a special case
and I don't think it's justified. There may be reasons why someone
has created co-dependencies. That's an installer-level issue, just
like external library dependencies. From a prerequisites
specification standpoint, the distribution is asserting "if you
install these modules, my tests will pass". If we can't meet the
preconditions -- for whatever reason -- then we shouldn't be reporting
a failure.

Even if we were going to change our approach to report on this, I
would favor the existing "UNKNOWN" grade we use for build failures.
The meaning of this in the testing context is clear: we don't know if
your tests would pass because we never got that far.

To your specific list of issues, here are my recommendations for
addressing them with CPANTS, which I think is the right tool for most
of them, particularly since most can be tested once, centrally,
rather than in a distributed manner.

- trying to unpack an archive for which no packer is available on the
  system

There is no specification for which archive formats are acceptable,
only historical practice. Again, I consider it the tester's
responsibility to have a sane toolchain, rather than the author's.
E.g. what about broken tar on Solaris? I don't think it's the
author's fault if a tester can't untar there. These are real issues.
CPAN.pm actually has special code to deal with this:
https://github.com/andk/cpanpm/blob/master/lib/CPAN/Tarzip.pm#L278

CPANTS already tests "extractable" -- meaning tar/gzip or zip, so I
think that's sufficient.
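
As a rough sketch, that kind of check only needs to ask whether the
member list is readable at all -- something like this, using
Archive::Tar and Archive::Zip (the extension dispatch is simplified):

    use Archive::Tar;
    use Archive::Zip qw(:ERROR_CODES);

    # True if the archive can at least be parsed by a sane toolchain.
    sub is_extractable {
        my ($file) = @_;
        if ($file =~ /\.(?:tar\.gz|tgz|tar\.bz2|tar)\z/) {
            my $tar = Archive::Tar->new;
            return defined $tar->read($file);   # undef on a broken archive
        }
        if ($file =~ /\.zip\z/) {
            my $zip = Archive::Zip->new;
            return $zip->read($file) == AZ_OK;  # AZ_OK means read OK
        }
        return 0;   # anything else fails the metric
    }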

- trying to unpack an archive that contains characters that cannot be
  used in paths on the system

I don't think there is a CPANTS test for portable filenames, and I
would encourage you to write one.
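
A first cut at such a metric might just scan the member names for
patterns known to break on common filesystems. This is only a sketch
(the member list is assumed to come from, e.g.,
Archive::Tar->list_archive, and the pattern list would need more
thought):

    # Returns the archive members whose names are not portable.
    sub nonportable_filenames {
        my (@members) = @_;
        return grep {
               /[<>:"|?*\\]/    # characters illegal on Windows
            || /[. ]\z/         # trailing dot or space breaks on Windows
            || /[^\x20-\x7E]/   # control or non-ASCII bytes
        } @members;
    }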

- invalid yml/json files

I think CPANTS tests for valid YAML, but I think it does so poorly.
These days, it should probably check whether
CPAN::Meta->load_file($file) works for all META.* files found.
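
Concretely, that check could be as small as this; CPAN::Meta's
load_file() dies on anything it cannot parse or validate:

    use CPAN::Meta;

    # Load each META file the same way the toolchain would.
    for my $file (grep { -f } qw(META.json META.yml)) {
        my $meta = eval { CPAN::Meta->load_file($file) };
        warn "$file fails CPAN::Meta: $@" unless $meta;
    }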

- circular dependencies

This is, of course, tricky. I wouldn't mind seeing an 'optional'
CPANTS analysis that statically analyzes dependencies to try to
detect cycles. That's not perfect (it misses dynamic deps), but it
would be a reasonable approximation, since dynamic deps are pretty
rare. Making it 'optional' also defuses the "blame" issue, so
someone's core Kwalitee score isn't diminished when someone else
creates a cycle.
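
For what it's worth, the cycle detection itself is simple once you
have a dependency graph; the hard part is building that graph from
the index and META files, which this sketch just fakes with a
hard-coded %deps:

    use strict;
    use warnings;

    # Maps a distribution to the distributions it requires.
    my %deps = (
        'Dist-A' => ['Dist-B'],
        'Dist-B' => ['Dist-C'],
        'Dist-C' => ['Dist-A'],   # completes the cycle
    );

    # Depth-first search; returns the cycle path if one is found.
    sub find_cycle {
        my ($node, $seen, $stack) = @_;
        return [@$stack, $node] if grep { $_ eq $node } @$stack;
        return if $seen->{$node}++;   # already fully explored, no cycle
        for my $dep (@{ $deps{$node} || [] }) {
            my $cycle = find_cycle($dep, $seen, [@$stack, $node]);
            return $cycle if $cycle;
        }
        return;
    }

    for my $dist (sort keys %deps) {
        if (my $cycle = find_cycle($dist, {}, [])) {
            print "cycle: @$cycle\n";   # e.g. "cycle: Dist-A Dist-B Dist-C Dist-A"
            last;
        }
    }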

I think CPANTS fell out of favor when it stopped working for so long.
If you're really looking to address these things, I would encourage
you to do several things:

- contact the current maintainer and offer to help/enhance it (maybe
get it back on cpants.perl.org instead of just a redirect)

- patch MetaCPAN to actually show a CPANTS score instead of just
having a link to a CPANTS report. For CPAN Testers, I think the
snapshot of pass/fail/etc. reports helps raise visibility, and doing
something similar for CPANTS could help as well.

- build a Kwalitee notification system for CPANTS, so that authors get
their Kwalitee score after upload (but consider the opt-in/opt-out
issues carefully)

- rebuild the visibility of Kwalitee in the Perl community so people
take it seriously again. Blog about it monthly? Talk about who's
going up and down in the stats? Pick over bad dists as examples? I
don't know what exactly, but more attention to Kwalitee might help
(assuming authors care).

That said, I think the current CPANTS overall Kwalitee metric is not
very helpful because everything clusters so closely. See
http://cpants.charsbar.org/graphs.html -- figuring out a better way to
report Kwalitee would help make it more actionable. Maybe it's a
subset of the current 'core' that matters. Maybe it's reporting
Kwalitee deciles instead of raw scores. Something.

So... if you've read this far: I think you're on to problems that we do
want to detect, but I think CPANTS is a much better place to spend
your round tuits.

-- David
