I've got a suggestion about refactoring our test suite: I'd like to
remove the XFAIL mechanism and mark all failing tests simply as FAIL.
The problem with XFAIL is that it diverts attention from tests that
fail because of not-yet-fixed bugs (most important) or
not-yet-implemented features (less important).
Failing tests should hurt. They should bug you every day until you
go and fix them.
Right now XFAILs serve as painkillers. We have about 50 of them in the
repo, so devs (I assume) think this way: "It's failing, but it's
EXPECTED to fail, so let's leave it as is."
That's the wrong way to think about it. Either the tests are correct,
in which case you should fix the code and leave them failing until it
is fixed, or the tests are incorrect, in which case you should fix the
tests or remove them completely.
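For context, a .phpt test is marked expected-to-fail by adding an
--XFAIL-- section; under this proposal that section would simply be
deleted, so the test shows up as a plain FAIL. A minimal sketch (the
bug number and function name are hypothetical):

```
--TEST--
Bug #12345 (str_example() mishandles empty input)
--XFAIL--
Fails until bug #12345 is fixed; remove this section under the proposal.
--FILE--
<?php
var_dump(str_example(""));
?>
--EXPECT--
string(0) ""
```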
The reasons for introducing XFAILs were described in this article:
I'll quote some of the thoughts from there:
> The intention of XFAIL is to help people working on developing PHP. Consider first the situation where you (as a PHP implementer) are working through a set of failing tests. You do some analysis on one test but you can't fix the implementation until something else is fixed – however – you don't want to lose the analysis and it might be some time before you can get back to the failing test. In this case I think it's reasonable to add an XFAIL section with a brief description of the analysis. This takes the test out of the list of reported failures making it easier for you to see what is really on your priority list but still leaving the test as a failing test.

If you need something else to be fixed, leave your test failing; it
will annoy everybody, so that something else will get fixed faster. If
you don't want to lose the analysis, keep a list of the tests you need
in a file (run-tests.php has a -l option that can read/write test
filenames from a file) and use that. Failing tests should not be hidden.
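For example, the list-file workflow could look like this (the .phpt
paths below are hypothetical):

```shell
# Keep the analysis out of the test suite: track the tests you care
# about in a plain list file, one .phpt path per line.
printf '%s\n' \
  'Zend/tests/bug12345.phpt' \
  'ext/standard/tests/strings/bug67890.phpt' \
  > my_open_bugs.txt

# Later, re-run only those tests; run-tests.php's -l option reads the
# test filenames from the given file:
#   php run-tests.php -l my_open_bugs.txt

# Show the tracked list.
cat my_open_bugs.txt
```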
> The second place that I can see that XFAIL might be useful is when a group of people are working on the same development project. Essentially one person on the project finds a missing feature or capability but it isn't something they can add immediately, or maybe another person has agreed to implement it. A really good way to document the omission is to write a test case which is expected to fail but which will pass if the feature is implemented. This assumes that there is general agreement that implementation is a good idea and needs to be done at some stage.

These "feature tests" can live in a separate branch so they don't
pollute the main release branches until the feature is ready. Such
tests are usually called "acceptance tests": they mark whether a
feature fully implements its functionality or not. We could also
introduce an "Incomplete" state for these tests, like PHPUnit does.
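To illustrate, here is a sketch of how PHPUnit models that state with
markTestIncomplete() (the test class and feature are hypothetical):

```php
<?php
use PHPUnit\Framework\TestCase;

class NewParserTest extends TestCase
{
    public function testUnicodeIdentifiers(): void
    {
        // Reported separately from failures: the runner counts this
        // test as "incomplete" rather than FAIL or PASS.
        $this->markTestIncomplete(
            'Unicode identifiers are not implemented yet.'
        );
        // Real assertions would go here once the feature lands.
    }
}
```

An "Incomplete" state in run-tests.php could work the same way:
visible in every run, but clearly separated from genuine failures.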
What do you think?