FAQ
Test::More/Test::Builder work VERY hard to ensure nothing inside them
alters $! or $@. This is for things like this:

     ok(do_something_scary());
     is($!, 0, "expected $! val");
     is($@, undef, '$@ not changed');

Without Test::More/Builder being careful to support this, the last two
assertions could fail because something inside ok() modifies $! or $@.

*I cannot change Test::Builder/More:* they must continue to protect $! and
$@, otherwise things break (I have a few downstream examples showing that
they do).

However, it is easy enough to have Test::Builder/More protect $! and $@
without requiring Test2 to do so.

Since Test2 is new and nothing depends on any of its behaviors yet, I am
considering having Test2 not care about modifying $! and $@. So far I have
been taking care to preserve $! and $@, but doing so adds significant
complexity (and a minor, but noticeable, slowdown) to Test2.

*Reasons for dropping this promise from Test2:*

    - Simplifies code
    - $! and $@ are altered by many, many things; adding { local ... } around
    all of them is a pain
    - Sometimes internals you don't expect to set $! will
    - Perl itself documents that you cannot depend on $! and $@ being
    unchanged beyond the statement immediately after they are set.
    - Test::Builder will continue to protect $! and $@, so nothing will break
    - Test2 is new; nothing depends on it preserving these

*Reasons not to drop it:*

    - It is helpful to people who might not realize $! and $@ are often
    altered unexpectedly.

I am asking for input in case I missed any reasons/arguments for either
side. In either case, backwards compatibility will be preserved.

-Chad


  • Kent Fredric at Jan 12, 2016 at 1:34 am

    On 12 January 2016 at 13:53, Chad Granum wrote:
    $! and $@ are altered by many many things, adding { local ... } around all
    of them is a pain
    As much as I agree, and as much as this is a "thing in all of perl, so we
    should expect this problem from every module":

    If I were to offer a counter-example, consider the effort of a `*.t`
    file needing to preserve these things itself.
    ok(do_something_scary());
    is($!, 0, "expected $! val");
    is($@, undef, '$@ not changed');

    vs

    my ( $error, $exception );
    ok(do {
        local $@;
        local $!;
        my $ret = do_something_scary();
        ( $error, $exception ) = ($!, $@);
        $ret;
    });
    is($error, 0, "expected $! val");
    is($exception, undef, '$@ not changed');
    I'm not sure I'd consider the second of these an "improvement".
  • Chris Nandor at Jan 12, 2016 at 2:52 am
    I agree, Kent. Shouldn't it be simple to wrap ok() and is() accordingly,
    to simply save-and-restore $! and $@, without going deep into anything?
    There's already a working example of how to do that right here.

    sub ok {
       my($err, $exception) = ($!, $@);
       my $ret = ...; # the actual test logic goes here
       ($!, $@) = ($err, $exception);
       return $ret;
    }

    And if ok() accepts a code reference to execute, fine ... same principle,
    for whenever you execute the code. It doesn't seem like it's a lot of
    work, though I imagine I am probably missing something. That said, I
    generally do something like you have, except I don't put it in a do(), I
    just execute it external to the test function:

       local($@, $!);
       my $ret = do_something_scary();
       my($error, $exception) = ($!, $@);
       ok($ret);
       is($error, 0);
       is($exception, undef);

    It's not as nice, but it's safe. And if I call it multiple times, I
    usually put it in my own wrapper:

       sub my_test {
         local($@, $!);
         my($ref) = @_;
         my $ret = $ref->();
         my($error, $exception) = ($!, $@);
         ok($ret);
         is($error, 0);
         is($exception, undef);
       }

    or whatever. Again ... it's not ideal, but if I am calling it multiple
    times, and I want to check for no errors, even if $@ and $! are protected,
    I'd still want to have a separate function ... and then the overhead isn't
    a big deal. So, I guess my point is "whatever." :-)

  • Ricardo Signes at Jan 12, 2016 at 3:04 am
    * Chad Granum [2016-01-11T19:53:15]
    However. It is easy enough to have Test::Builder/More protect $! and $@
    without requiring Test2 to do so.
    I don't have really strong feelings here, but the general reaction on p5p over
    the years to "some core library changes $! needlessly" is: $! is only reliable
    (a) *immediately* after a routine that uses $! to communicate and (b) if that
    routine returns in error. In some rare cases it's worth maintaining that.
    Mostly, we say "too bad."

    --
    rjbs
  • Chad Granum at Jan 12, 2016 at 3:14 am
    Some things I forgot to mention:

    Test2, the dist, is just internals. It provides no tools. It does not have
    ok(), is(), etc. What I am talking about is not the simple task of putting
    local $! in some exports. This is localizing $! around any call to use,
    require, open, close, etc. Anything that can alter a handle, write a file,
    etc.

    The reason for these extreme measures is that this is what Test::Builder
    does. Test::More does not localize $! in ok() and is(); it instead has
    protections scattered about its internals.

    So my proposal is not to write tools that modify $!; it is to spare the
    internals from all these hurdles and let the tools do the $! protection if
    they need it.

    Now, lots of things depend on Test::Builder jumping through these hoops, so
    I will ensure Test::Builder still protects $!.

    The question is, do I accomplish this by wrapping everything that can
    change $! in all the Test2 internals, or do I do it by having Test::Builder
    protect them when it calls out to Test2? The latter option would reduce
    the complexity of Test2, but not break things.

    As for tools, yes, preserving $! is a valuable behavior, but I think it
    belongs at the tool level, not the internals.

    That said, it just occurred to me that this can possibly be accomplished by
    having a context store $! and $@ when it is obtained, then restore them
    when it is released, which would avoid needing to use local everywhere, and
    still preserve them for all tools automatically...

  • Chad Granum at Jan 12, 2016 at 3:19 am
    https://github.com/Test-More/Test2/issues/9

    Issue created to do this the easy/efficient way.

  • Kent Fredric at Jan 12, 2016 at 3:21 am

    On 12 January 2016 at 16:14, Chad Granum wrote:
    That said, it just occurred to me that this can possibly be accomplished by
    having a context store $! and $@ when it is obtained, then restore them when
    it is released, which would avoid needing to use local everywhere, and still
    preserve them for all tools automatically...

    As written, that suggestion scares me slightly, because it wouldn't be
    entirely obvious that it behaves that way.

    I'd be fine with a system like:

         my $error_state = save_errors();
         ...
         restore_errors($error_state);
         return $value;

    Or

        return preserve_errors(sub {
            # code that can generate errors that shouldn't leak
        });

    Just lumping it in with the "Context" object seems too magical.
  • Chad Granum at Jan 12, 2016 at 3:26 am
    The magic is already there, but it is there by localizing the vars in a ton
    of non-obvious places. Between that and having it done in one obvious,
    documented place, the latter seems like a much better option.
  • Karen Etheridge at Jan 12, 2016 at 3:25 am
    Test2, the dist, is just internals. It provides no tools. It does not
    have ok(), is(), etc.

    Um, so... what *is* Test2 then? (And the second question would be: what
    does it have to do with Test::More?) Without that context, your first
    question is equivalent to "should Foo::Bar maintain $! and $@?".


  • Aristotle Pagaltzis at Jan 18, 2016 at 5:41 am

    * Chad Granum [2016-01-12 04:20]:
    That said, it just occurred to me that this can possibly be
    accomplished by having a context store $! and $@ when it is obtained,
    then restore them when it is released, which would avoid needing to
    use local everywhere, and still preserve them for all tools
    automatically...
    I actually like the magic Kent is wary about in this instance, as it
    makes it easier for test functions to get this right without each of
    them having to carry extra boilerplate. But this also means that a test
    function which explicitly *wants* to change these variables has to fight
    the framework for it. So maybe there ought to be a mechanism to request
    that they not be restored on context release.

    Regards,
    --
    Aristotle Pagaltzis // <http://plasmasturm.org/>
  • Chad Granum at Jan 18, 2016 at 5:53 am
    I can add a complex mechanism, or someone can just do this:

    sub modifies_err {
        my $ctx = context();
        ...
        my $err = $@;
        $ctx->release();
        $@ = $err;
        return ...;
    }

    It is unlikely anyone would want to preserve $@, $!, and $? all at once,
    but even then it is just as easy to maintain them manually. Wanting to
    preserve any, let alone all 3, is rare enough, and easy enough, that I
    think adding a mechanism into release() or context() would be noise. Add
    to this the fact that context() and release() are called so often, and
    have been written so very carefully to be as fast as possible, and I am
    not convinced this mechanism is necessary.

    Then again, if you /really/ want the mechanism in $ctx, I can add
    $ctx->release_preserving (naming is hard, give me a better one) which does
    have the behavior... but at that point, which behavior do you want:
    preserve one, preserve all, or preserve what is requested in arguments?
    Once again, this seems over-complicated for something done so rarely, and
    so easy to do without a mechanism.

    rjbs and I actually talked about this Friday. He asked me how a function
    could set $@ if it wanted to, I showed him a sample like the one above and
    his response was:

    <Exodist> ->release; $@ = $err;
    <rjbs> oh, of course
    <rjbs> Ha, right. I'm dumb.
    <rjbs> Thanks!

    -Chad
  • Chad Granum at Jan 18, 2016 at 5:59 am
    Just realized my response may not have been clear. It assumes that
    context() DOES store $!, $@ and $?, and that release() restores them. I
    was saying that it is easy to get around this magic in the very rare case
    you want to. Most cases will not need to work around it, as it is
    typically the desired behavior.
  • Kent Fredric at Jan 18, 2016 at 6:08 am

    On 18 January 2016 at 18:53, Chad Granum wrote:
    Then again, if you /really/ want the mechanism in $ctx, I can add
    $ctx->release_preserving (naming is hard, give me a better one) which does
    have the behavior... but at that point, which behavior do you want, preserve
    one, preserve all, preserve what is requested in arguments? Once again,
    seems over-complicated for something done so rarely, and so easy to just do
    without a mechanism.

    You could possibly make it a parameter to ->release

    ->release({ no_restore => 1 }) # don't restore anything
    ->release({ no_restore => [qw( $@ )] }) # only avoid restoring $@

    But then you might be slowing down the code-path of "release" by
    having an additional condition.

    Though I think, given how infrequently you'll need nuanced control over
    individual variables, "no_restore => 1" is the only thing you need short
    term, as "preserve everything" and "preserve nothing" are both simple
    enough to be useful.

    Either way, if preserve/restore are to be done by the context without any
    user-side control, the simplest way of avoiding the undesired side
    effects should be documented, to discourage users from resorting to cheap
    tricks that cause future maintenance headaches.
  • Kent Fredric at Jan 18, 2016 at 6:11 am

    On 18 January 2016 at 19:08, Kent Fredric wrote:
    You could possibly make it a parameter to ->release

    ->release({ no_restore => 1 }) # don't restore anything
    ->release({ no_restore => [qw( $@ )] }) # only avoid restoring $@

    Aaaactually it might make more sense on `context()`

    my $ctx = context(); # default, preserves all
    my $ctx = context({ preserve => undef }); # preserve nothing
    my $ctx = context({ preserve => [] }); # preserve nothing
    my $ctx = context({ preserve => [qw( $@ )] }); # preserve $@

    $ctx->release(); # restores preserved values

    Though which is more effective would depend on how it's implemented and
    how those vars are stored/preserved.

    I'll have to dig into the code later maybe.
  • Aristotle Pagaltzis at Jan 18, 2016 at 8:40 am

    * Chad Granum [2016-01-18 06:55]:
    Which behavior do you want, preserve one, preserve all, preserve what
    is requested in arguments?
    I didn’t consider that. I was thinking ->release would take an optional
    bool flag and that’s all. But you’re right that this would not be enough
    and that providing enough interface to cover all the cases makes it too
    complicated to be worth it.
    my $err = $@;
    $ctx->release();
    $@ = $err;
    Heh, which implies an obvious and amusing improvement:

         { local $@; $ctx->release }

    Which is *easier* than any special interface, too. So, scrap that.

    Regards,
    --
    Aristotle Pagaltzis // <http://plasmasturm.org/>
  • Chad Granum at Jan 18, 2016 at 9:13 pm
    So, new problem I have found based on the change to restoring $@, $!, and
    $?.

    perl-blead's ext/XS-APItest/t/svpeek.t has 2 test failures.

    One tests $! and it *looks like* it assumes something recently looked at $!
    in a string context:

    like (DPeek ($!), qr'^PVMG\("', '$!');


    I can fix the test like this:

    # This test expects that $! will have been used as a string recently.
    my $foo = "$!";
    like (DPeek ($!), qr'^PVMG\("', '$!');

    The other checks $? and *looks like* it verifies it has not been looked at
    by anything:

    if ($^O eq 'VMS') {
        # VMS defines COMPLEX_STATUS and upgrades $? to PVLV
        is (DPeek ($?), 'PVLV()', '$?');
    } else {
        is (DPeek ($?), 'PVMG()', '$?');
    }

    This can be fixed by a similar trick (not tested on VMS as I have no VMS
    system):

    if ($^O eq 'VMS') {
        local $?; # Reset anything Test::* has done to it.
        # VMS defines COMPLEX_STATUS and upgrades $? to PVLV
        is (DPeek ($?), 'PVLV()', '$?');
    } else {
        local $?; # Reset anything Test::* has done to it.
        is (DPeek ($?), 'PVMG()', '$?');
    }

    This was never a problem when we localized $! and $@. But now that we are
    reading the current value, then restoring it, we no longer preserve the
    internals of the variables, just the surface values.

    This is the only test I have found broken by this, and it is a test in
    the perl core test suite itself. I have not found anything on CPAN broken
    by this, but my testing is not a full smoke.

    I am just reaching out to see if anyone predicts this is an *actual*
    problem. Is it as simple as updating the one failing test and calling it
    good, or is this actually a scary thing? If it is scary, then I would love
    pointers on how I can preserve the internal state of the variables as well
    as their values, but I would really prefer not to do that if I do not have
    to.

    -Chad
  • Chad Granum at Jan 18, 2016 at 10:52 pm
    I have been talking about this in #p5p; so far it seems the problem is
    that the test is fragile, and patching the test is fine.
  • Michael G Schwern at Jan 12, 2016 at 7:41 pm

    On 1/11/16 4:53 PM, Chad Granum wrote:
    Test::More/Test::Builder work VERY hard to ensure nothing inside them alters $! or $@. This is for
    thing like this:

    ok(do_something_scary());
    is($!, 0, "expected $! val");
    is($@, undef, '$@ not changed');

    Without Test::More/Builder being careful to support this, the second 2 assertions could fail because
    something inside ok() modifies $! or $@.
    If your test functions modify the really common parts of your global environment then it becomes
    very difficult to test them... and testing error reporting is a very common thing you'd want to do!

    Kent already pointed out why changing this makes testing $! and $@ very, very awkward. Let's see
    that again.

         my ( $error, $exception );
         ok(do {
             local $@;
             local $!;
             my $ret = do_something_scary();
             ( $error, $exception ) = ($!, $@);
             $ret;
         });
         is($error, 0, "expected $! val");
         is($exception, undef, '$@ not changed');

    Gross. I don't know how many times I've written code like this. I hate it. I always encapsulate
    it somehow. And when a library doesn't have good global discipline it makes it even harder for me
    to have good global discipline.
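One common shape for that encapsulation (a hypothetical helper, not anything Test::More ships):

```perl
# Hypothetical helper: run code with the caller's $! and $@ shielded,
# capturing both so they can be tested afterwards.
sub capture_errors (&) {
    my ($code) = @_;
    local ($!, $@);                 # shield the caller's globals
    my $ret = $code->();
    return ($ret, $! + 0, "$@");    # return value plus captured errno/exception
}

my ($ret, $errno, $exception) = capture_errors { do_something_scary() };
ok($ret);
is($errno, 0, 'expected $! val');
is($exception, '', '$@ not changed');
```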

    We tell the users that $! and $@ are only safe per function call. Then we encourage them all over
    the docs and interface to pass function return values directly into test functions. Then we tell
them they should be testing their error cases... but to do so safely requires gross scaffolding.
    That's not fair to the user. The result will be that people won't test their error conditions,
    library quality will drop, and you'll waste a lot of time on a bug that should have been tested.

    The argument that $! is only reliable per function call, that's a lowest common denominator
    thinking. One of the fundamental design principles of Test::Builder was that it had to be the
    GREATEST common denominator!

    I don't write libraries to the lowest common denominator. I write libraries that raise the bar and
    encourage others to do so as well. Perl 5's error handling is bad, but that doesn't mean my
    library's error handling also has to be bad. As library authors we've been spending decades working
    around Perl 5's bad parts. We know how to do it better. This is extraordinarily important for a
    language's testing library: if you can't test it (or it's gross and annoying) then it won't happen.
      If you want to see better global discipline in Perl libraries, then the testing library has to
    start first, because everything will use it.

    Test libraries are first and foremost about the user. They encourage the user to do more and better
    testing! Implementation effort is a much, much lesser concern. And all the positive reasons below
    not to do it are implementation reasons. There are no positives for the Test2 user.
    *Reasons for dropping this promise from Test2:*

    * Simplifies code
    * $! and $@ are altered by many many things, adding { local ... } around all of them is a pain
    * Sometimes internals you don't expect to set $! will
    * Perl itself documents that you cannot depend on $! and $@ being unchanged past the immediate
    line after you set it.
    * Test::Builder will continue to protect $! and $@, so nothing will break
    * Test2 is new, nothing depends on it preserving these
    The one below is the only reason that talks about making Test2 better for the user.
    *Reasons not to drop it:*

    * It is helpful to people who might not realize $! and $@ are often altered unexpectedly.

    I am asking for input in case I missed any reasons/arguments for either side. In either case
    backwards compatibility will be preserved.

    -Chad
  • Chad Granum at Jan 12, 2016 at 7:56 pm
    Thanks for the input.

    My question was not very well formed.

    What I should have asked is this:

    Preserving $! and $@ is a valuable behavior, and one I won't get rid of.
    However, I am questioning the decision to make the test library jump
    through hoops to preserve them, as opposed to having the test functions be
    responsible for doing it.

    Now then, having the test library do it means the tools usually do not have
    to worry about it (though it would be easy for them to still damage $! or
    $@). On the other hand, the protection in-library gets very complicated,
    and hard to maintain, and will not solve all cases.

    It was not a question of if we should do it, but how we should do it. Test2
    is just the back-end, but asking if Test2 should do it sounded like I was
    asking if we should do it at all, which was not the intent, the front end
    was always gonna do it.

    That said, the discussion served as rubber ducking and I was able to find a
    solution that lets the library do the heavy lifting, but in a single spot
    that avoids the complexity and un-maintainability of the way it has done it
    so far. So the discussion is no longer necessary, the library will continue
    to do the protection, and it will do it in a better way. New tools will not
    need to start concerning themselves with this. Old tools never had to worry
    as I was not going to break backcompat.

    -Chad
  • Sawyer X at Jan 12, 2016 at 9:49 pm
    [Top-posting]

    Chad, I think I understand what you mean now. You were referring to
    whether the underpinnings (Test2) should take care of it or
    whether the chrome (testing functions) around them should do so. Yes?

    If so, I think you should probably clarify what Test2 *does* do. It
    doesn't provide the functions - alright. What *does* it provide then?

    Also, since the discussion was "Should I - the testing functions' user
    - write code around my testing functions to accommodate the testing
    framework not preserving `$!` and `$@`?" instead of "Should the
    testing functions take care of it rather than the gory details
    underneath them?", it might be useful (besides explaining the
    difference between the testing functions and the gory details, as
    recommended in the previous paragraph) to explain what such an
    implementation would look like, or how it would differ, in the
    testing functions vs. in the gory details.

    I think then it would be simple to understand your intent and to help
    you with some useful commentary.

    If I didn't understand what you meant, then... this might need
    additional clarification.

    S. :)

  • Kent Fredric at Jan 12, 2016 at 9:55 pm

    On 13 January 2016 at 10:48, Sawyer X wrote:
    If so, I think you should probably clarify what Test2 *does* do. It
    doesn't provide the functions - alright. What *does* it provide then?

    Oh, and a thought: It may help to consider what testing /testing tools/
    looks like here, and whether the tools themselves need to trap $! and
    $@ and test for their changes.

    It's probably immaterial and indifferent from the "handle it at the
    chrome layer", but it may have implications in internals that make
    things more difficult if they're suppressed/not-suppressed.
  • Sawyer X at Jan 12, 2016 at 9:59 pm

    On Tue, Jan 12, 2016 at 10:55 PM, Kent Fredric wrote:
    On 13 January 2016 at 10:48, Sawyer X wrote:

    If so, I think you should probably clarify what Test2 *does* do. It
    doesn't provide the functions - alright. What *does* it provide then?

    Oh, and a thought: It may help to consider what testing /testing tools/
    looks like here, and whether the tools themselves need to trap $! and
    $@ and test for their changes.

    It's probably immaterial and indifferent from the "handle it at the
    chrome layer", but it may have implications in internals that make
    things more difficult if they're suppressed/not-suppressed.
    Good point!
  • Chad Granum at Jan 12, 2016 at 10:26 pm
    Yes, your understanding appears correct. And I can make it more clear.

    This is a very simple test tool in Test::Builder (the chrome):

    my $TB = Test::Builder->new; # Call to internals/guts (singleton)

    sub ok($;$) {
        my ($bool, $name) = @_;
        $TB->ok($bool, $name); # Call to internals/guts
        return $bool;
    }

    Here it is again using Test2 instead of Test::Builder (the chrome):

    sub ok($;$) {
        my ($bool, $name) = @_;
        my $ctx = context();    # Call to internals/guts (A)
        $ctx->ok($bool, $name); # another one (B)
        $ctx->release;          # another one (C)
        return $bool;
    }

    The lines marked with 'Call to internals/guts' kick off a series of things
    that read/write from filehandles, possibly opens them, evals code/catches
    exceptions, and any number of other things that can squash $! and $@. It
    should be noted that 'ok' is not the only method ever called on either
    Test::Builder or $ctx, this is just a very simple illustration.

    Now for starters, Test::Builder uses a singleton, so it can do all its
    initialization at load time, which allows it to leave several things
    unprotected. The singleton was bad, so Test2 does not use one, which means
    it has to be more protective of $! and $@ in more places to accomplish the
    same end goal.

    *History, what Test::Builder does:* It localizes $! and $@ in an eval
    wrapper called _try() that it uses to wrap things it expects could squash
    $! and $@. It also localizes $SIG{__DIE__} for various reasons. In some
    places where $SIG{__DIE__} should not be localized it will instead use
    local independently of _try(). There is also extra logic for subtests to
    ensure $? from the end of the subtest is carried-over into the regular
    testing outside of the subtest. Some places also need to be careful of $?
    because they run in an end block where squashing $? unintentionally is bad.
    (Yeah, $? is involved in all this as well, but much less so)
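    The real _try() has more to it, but the pattern being described is
    roughly this (a sketch based on the description above, not the actual
    Test::Builder source):

```perl
# Eval wrapper that shields the test library's effects on the
# caller's globals while still reporting any trapped error.
sub _try {
    my ($code) = @_;
    my ($ret, $error);
    {
        local ($!, $@);        # protect errno and the last exception
        local $SIG{__DIE__};   # keep user die handlers out of the internals
        $ret   = eval { $code->() };
        $error = $@;
    }
    return wantarray ? ($ret, $error) : $ret;
}
```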

    This results in a lot of places where things are localized, and several
    places that run through an eval. People simply looking at the code may
    overlook these things, and not know that the protection is happening. The
    first time a new-dev will notice it is when tests start breaking because
    they added an open/close/eval/etc in the wrong place. Thankfully there are
    some tests for this, but not enough as I have found downstream things
    (PerlIO::via::Timeout as an example) that break when $! is squashed in a
    way Test::Builder never tests for.

    Test::Builder does not localize $! and $@ in all its public methods.
    Realistically it cannot for 2 reasons:

        - Performance hit
        - Can mask real exceptions being thrown that are not caught by
        Test::Builder itself.

    *In short, this is a significant maintenance burden, with insufficient
    testing, and no catch-all solution.*

    ------------------

    *Up until this email thread, Test2 was doing the same thing as
    Test::Builder.* The problem is that Test2 does lots of other things
    differently for good reasons; unfortunately, it provides a lot more
    opportunity to squash $! and $@. As with Test::Builder, it is not
    reasonable to localize $! and $@ at every entry point.

    I decided to start this thread after a few downstream breakages were detected
    due to $! being squashed. One because perl changes $! when you clone a
    filehandle, even when no errors happen. Another because of a shared memory
    read that was added in an optional concurrency driver used by
    Test::SharedFork. I realized this would be an eternal whack-a-mole for all
    future maintainers of both projects, and one that is hard to write
    sufficient testing for.

    *The solution:*
    Go back to my second example. Notice there are 3 calls to the guts, marked
    (A), (B), and (C). (A) and (C) are universal to all Test2 based tools, and
    are also universally used in the Test::Builder dev releases when it calls
    out to Test2. No tool will function properly if it does not use both of
    those when it uses Test2. Calling context() should already always be done
    as soon as possible; the call to release() should come as late as
    possible.

    I realized I could make the call to context() store $! and $@ in the $ctx
    object it returns. I could then also have release() restore them. This is
    similar to localizing except it bypasses the flaws:

        - There is very little performance hit to this, in my 100k ok test,
        which takes just under 2 seconds on my machine, it added ~100ms. That is
        peanuts. There is more random variation in performance from run to run than
        the increase itself.
        - It will not prevent $@/$! from being set by true uncaught exceptions,
        $ctx does not restore the vars when it is destroyed, so if an exception
        occurs release() is never called.
        - It is a lot less magic than scattering random locals throughout the
        codebase
        - It can be documented easily in one place
        - It is easy to test and maintain
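    In other words, the save/restore lives in exactly two places. A minimal
    sketch of the idea (illustrative only, not the shipped Test2 internals):

```perl
package My::Context;   # hypothetical stand-in for the real context object

sub context {
    my $self = bless {}, __PACKAGE__;
    # Snapshot the globals when the tool acquires its context...
    @$self{qw(errno eval_error child_error)} = ($! + 0, $@, $?);
    return $self;
}

sub release {
    my $self = shift;
    # ...and put them back when the tool releases it. If an exception
    # escapes, release() never runs, so a real $@ is left intact.
    ($!, $@, $?) = @$self{qw(errno eval_error child_error)};
}
```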

    -Chad
  • Sawyer X at Jan 13, 2016 at 2:13 am
    [Top-posted]

    Chad, thank you for the detailed response. I think I now understand
    the scope of the problem and your solutions.

    I think it makes sense to put this in the guts inside the construction
    of a new context (or retrieval of current context) and in the release
    of that context. Kent, am I right to believe this also addresses the
    concerns you raised regarding testing of testing modules?

    I'm sorry to say I'm quite ignorant [also] when it comes to the
    testing underpinnings, so I'm not sure if there are additional
    concerns this does not address, but otherwise it sounds very
    reasonable to me.

    Thanks again for taking the time to clarify in detail. :)
    (It might be useful to start working on a document for the internals
    for anyone who wants to hack on it. This should at least be in the
    commit messages so it could be tracked down.)

    S.


  • Chad Granum at Jan 13, 2016 at 6:39 pm
    One remaining question.

    Should the context maintain a stack, where each time context() is called
    (nested) it pushes the vars to the stack, and each call to release pops it,
    or should it just store them on creation, do nothing on retrieval, and only
    restore them on the final release?

    Right now the version I have up on cpan just stores them on creation, and
    restores them on final release. Nothing happens for nested calls to
    context()/release(), all my downstream testing shows no breakages (not a
    full smoke, but does include several modules sensitive to $! and $@
    changes).

    This question is not about backwards compatibility, but rather about what
    test tool authors might/should expect the behavior to be. Both maintain
    backwards compatibility equally. However, the stack form will have more of
    a performance impact; probably minor, but it would need benchmarking and
    profiling to confirm.
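    For illustration, the non-stack form could be pictured as a depth
    counter: snapshot only on the outermost context() and restore only on the
    final release() (hypothetical code, not the shipped API):

```perl
my $depth = 0;
my @saved;

sub context {
    # Snapshot once, at the outermost call only.
    push @saved, [$! + 0, $@] if $depth == 0;
    $depth++;
    # ... create or fetch the actual context object here ...
}

sub release {
    $depth--;
    if ($depth == 0) {
        # Restore only on the final release.
        ($!, $@) = @{ pop @saved };
    }
}
```

    The stack form would push and pop on every nested call instead, which is
    where the extra cost would come from.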

    -Chad
    On Tue, Jan 12, 2016 at 6:12 PM, Sawyer X wrote:

    [Top-posted]

    Chad, thank you for the detailed response. I think I now understand
    the scope of the problem and your solutions.

    I think it makes sense to put this in the guts inside the construction
    of a new context (or retrieval of current context) and in the release
    of that context. Kent, am I right to believe this also addresses the
    concerns you raised regarding testing of testing modules?

    I'm sorry to say I'm quite ignorant [also] when it comes to the
    testing underpinnings, so I'm not sure if there are additional
    concerns this does not address, but otherwise it sounds very
    reasonable to me.

    Thanks again for taking the time to clarify in detail. :)
    (It might be useful to start working on a document for the internals
    for anyone who wants to hack on it. This should at least be in the
    commit messages so it could be tracked down.)

    S.


    On Tue, Jan 12, 2016 at 11:26 PM, Chad Granum wrote:
    Yes, your understanding appears correct. And I can make it more clear.

    This is a very simple test tool in Test::Builder (the chrome):
    my $TB = Test::Builder->new; # Call to internals/guts (singleton)

    sub ok($;$) {
    my ($bool, $name) = @_;
    $TB->ok($bool, $name); # Call to internals/guts
    return $bool
    }

    Here it is again using Test2 instead of Test::Builder (the chrome):
    sub ok($;$) {
        my ($bool, $name) = @_;
        my $ctx = context();    # Call to internals/guts (A)
        $ctx->ok($bool, $name); # another one (B)
        $ctx->release;          # another one (C)
        return $bool;
    }

    The lines marked with 'Call to internals/guts' kick off a series of things
    that read/write from filehandles, possibly opens them, evals code/catches
    exceptions, and any number of other things that can squash $! and $@. It
    should be noted that 'ok' is not the only method ever called on either
    Test::Builder or $ctx, this is just a very simple illustration.

    Now for starters, Test::Builder uses a singleton, so it can do all its
    initialization at load time, which allows it to leave several things
    unprotected. The singleton was bad, so Test2 does not use one, which means
    it has to be more protective of $! and $@ in more places to accomplish the
    same end goal.

    History, what Test::Builder does: It localizes $! and $@ in an eval wrapper
    called _try() that it uses to wrap things it expects could squash $! and $@.
    It also localizes $SIG{__DIE__} for various reasons. In some places where
    $SIG{__DIE__} should not be localized, it will instead use local
    independently of _try(). There is also extra logic for subtests to ensure $?
    from the end of the subtest is carried over into the regular testing outside
    of the subtest. Some places also need to be careful of $? because they run
    in an end block where squashing $? unintentionally is bad. (Yeah, $? is
    involved in all this as well, but much less so)

    This results in a lot of places where things are localized, and several
    places that run through an eval. People simply looking at the code may
    overlook these things, and not know that the protection is happening. The
    first time a new-dev will notice it is when tests start breaking because
    they added an open/close/eval/etc in the wrong place. Thankfully there are
    some tests for this, but not enough as I have found downstream things
    (PerlIO::via::Timeout as an example) that break when $! is squashed in a way
    Test::Builder never tests for.

    Test::Builder does not localize $! and $@ in all its public methods.
    Realistically it cannot, for two reasons:

    1. The performance hit.
    2. It can mask real exceptions being thrown that are not caught by
    Test::Builder itself.

    In short, this is a significant maintenance burden, with insufficient
    testing, and no catch-all solution.

    ------------------

    Up until this email thread, Test2 was doing the same thing as
    Test::Builder.
    The problem is that Test2 does lots of other things differently, for good
    reasons; unfortunately that provides a lot more opportunity to squash $!
    and $@. As with Test::Builder, it is not reasonable to localize $! and $@
    at every entry point.

    I decided to start this thread after a few downstream breakages were
    detected
    due to $! being squashed. One because perl changes $! when you clone a
    filehandle, even when no errors happen. Another because of a shared memory
    read that was added in an optional concurrency driver used by
    Test::SharedFork. I realized this would be an eternal whack-a-mole for all
    future maintainers of both projects, and one that is hard to write
    sufficient testing for.

    The solution:
    Go back to my second example. Notice there are 3 calls to the guts, marked
    (A), (B), and (C). (A) and (C) are universal to all Test2 based tools, and
    are also universally used in the Test::Builder dev releases when it calls
    out to Test2. No tool will function properly if it does not use both of
    those when it uses Test2. Calling context() should always be done as soon
    as possible, and release() should be called as late as possible.

    I realized I could make the call to context() store $! and $@ in the $ctx
    object it returns. I could then also have release() restore them. This is
    similar to localizing except it bypasses the flaws:

    - There is very little performance hit to this: in my 100k-ok test, which
    takes just under 2 seconds on my machine, it added ~100ms. That is peanuts;
    there is more random variation in performance from run to run than the
    increase itself.
    - It will not prevent $@/$! from being set by true uncaught exceptions:
    $ctx does not restore the vars when it is destroyed, so if an exception
    occurs release() is never called.
    - It is a lot less magic than scattering random locals throughout the
    codebase.
    - It can be documented easily in one place.
    - It is easy to test and maintain.
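    As a rough sketch of the idea (hypothetical package and method names, not
    Test2's actual internals), the pattern amounts to:

```perl
package My::Context;   # hypothetical; not Test2's real implementation

sub context {
    my $self = bless {}, __PACKAGE__;
    # Snapshot the error vars when the tool acquires its context.
    @{$self}{qw(errno eval_error)} = ($! + 0, $@);
    return $self;
}

sub ok {
    my ($self, $bool, $name) = @_;
    # Stand-in for real tool work that happens to squash $! and $@.
    eval { die "internal exception\n" };
    open my $fh, '<', '/no/such/file';
    return $bool;
}

sub release {
    my $self = shift;
    # Restore on release. If the tool dies first, release() never runs,
    # so a genuine exception in $@ is left alone.
    ($!, $@) = @{$self}{qw(errno eval_error)};
}

package main;

sub ok($;$) {
    my ($bool, $name) = @_;
    my $ctx = My::Context::context();
    $ctx->ok($bool, $name);
    $ctx->release;
    return $bool;
}

$! = 7;
$@ = '';
ok(1, "demo");
# $! is 7 again and $@ is empty, despite the squashing inside ok().
```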

    -Chad
  • Kent Fredric at Jan 13, 2016 at 6:50 pm

    On 14 January 2016 at 07:39, Chad Granum wrote:
    Right now the version I have up on cpan just stores them on creation, and
    restores them on final release. Nothing happens for nested calls to
    context()/release(), all my downstream testing shows no breakages (not a
    full smoke, but does include several modules sensitive to $! and $@
    changes).

    In the event some code like this dies:

          sub foo {
               my $context = context();
               die "Bar";
          }

    What will happen with regards to $@ auto-stacking?

    If somebody catches the die in a higher context, what will $@ be?
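    Sketching that scenario with a minimal, hypothetical context that only
    restores the vars on an explicit release():

```perl
{
    package Ctx;   # hypothetical minimal context; illustrative only
    sub context { bless { errno => $! + 0, err => $@ }, shift }
    sub release { my $s = shift; ($!, $@) = ($s->{errno}, $s->{err}) }
}

sub foo {
    my $context = Ctx->context;   # snapshot taken here
    die "Bar\n";                  # release() is never reached
}

eval { foo() };
# Because restoration only happens in release(), nothing runs during
# stack unwinding that touches $@, so the caller sees the real "Bar"
# exception in $@ rather than a restored stale value.
```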
  • Sawyer X at Jan 13, 2016 at 7:59 pm
    [Top-posted]

    The extra cost would be:
    1. Array storage
    2. Push
    3. Pop

    At the cost of supporting any level of nesting, I think it's a negligible
    cost, but I would profile it.
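    The stack-based variant could look roughly like this (an illustrative
    sketch with made-up function names, not Test2's actual code):

```perl
# Push the error vars on every context() and pop/restore on every
# release(), so nested tool calls each restore what they saw.
my @saved;

sub context { push @saved, [ $! + 0, $@ ] }   # save on create
sub release { ($!, $@) = @{ pop @saved } }    # restore on release

$! = 5;
context();      # outer tool starts
$! = 9;         # internals squash errno
context();      # nested tool starts
$! = 11;        # more squashing
release();      # nested release: $! is 9 again
release();      # outer release:  $! is 5 again
```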

    On Wed, Jan 13, 2016 at 7:50 PM, Kent Fredric wrote:
    On 14 January 2016 at 07:39, Chad Granum wrote:
    Right now the version I have up on cpan just stores them on creation, and
    restores them on final release. Nothing happens for nested calls to
    context()/release(), all my downstream testing shows no breakages (not a
    full smoke, but does include several modules sensitive to $! and $@
    changes).

    In the event some code like this dies:

    sub foo {
        my $context = context();
        die "Bar";
    }

    What will happen with regards to $@ auto-stacking?

    If somebody catches the die in a higher context, what will $@ be?


    --
    Kent

    KENTNL - https://metacpan.org/author/KENTNL
  • Chad Granum at Jan 13, 2016 at 8:15 pm
    yeah, I am really not worried about the performance on this one, just noted
    it for completeness.

    What is nagging at me is what kentnl mentioned where something nested
    catches an exception from something that failed to release its context due
    to the exception. Not because of these vars, but because it can cause other
    problems, I need to look into it.
    On Wed, Jan 13, 2016 at 11:58 AM, Sawyer X wrote:

    [Top-posted]

    The extra cost would be:
    1. Array storage
    2. Push
    3. Pop

    At the cost of supporting any level of nesting, I think it's a negligible
    cost, but I would profile it.

    On Wed, Jan 13, 2016 at 7:50 PM, Kent Fredric wrote:
    On 14 January 2016 at 07:39, Chad Granum wrote:
    Right now the version I have up on cpan just stores them on creation, and
    restores them on final release. Nothing happens for nested calls to
    context()/release(), all my downstream testing shows no breakages (not a
    full smoke, but does include several modules sensitive to $! and $@
    changes).

    In the event some code like this dies:

    sub foo {
        my $context = context();
        die "Bar";
    }

    What will happen with regards to $@ auto-stacking?

    If somebody catches the die in a higher context, what will $@ be?


    --
    Kent

    KENTNL - https://metacpan.org/author/KENTNL

Discussion Overview
group: cpan-workers @ perl
posted: Jan 12, '16 at 12:53a
active: Jan 18, '16 at 10:52p
posts: 28
users: 8
website: cpan.org