sub foo {
    local $@;

    eval { # a generic wrapper, doesn't know about bar()'s details
        bar();
    };

    if ( $@ ) {
        # do something meaningful
        die $@;
    }
}

sub bar {
    die "blah";
}

eval { foo() };

warn "Error: $@";

In the 'foo' subroutine $@ is localized to prevent clobbering it in
cases such as:

eval { ... };
foo();
if ( $@ ) { ... } # for the prev eval

From a control flow POV everything works correctly here, but in the
outermost eval { } the value of $@ is not preserved (the exception
still jumps, though); the value is just ''.

Since there is no other way to know if the eval actually failed
without inspecting $@, this makes it fairly useless; furthermore,
the documentation of eval implies this should not be the case (but
doesn't mention local).

I believe this is an implementation detail: likely die() in the
context of an eval assigns to $@ with its localization stack still
in effect, instead of the assignment happening in the scope of the
eval { } that is actually trapping the error, so in effect the error
that it trapped never ends up in $@.

The workaround, in foo(), is:

sub foo {
    my $e;
    {
        local $@;
        eval { bar() };
        $e = $@;
    }

    if ( $e ) { die $e }
}

but that kinda sucks.
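
(An editorial sketch, not from the original mail: combining this
workaround with the return-value check advocated later in the thread
gives a foo() that both preserves the caller's $@ and does not rely
on inspecting $@ to detect failure; bar() is the same sub as above.)

sub foo {
    my ( $ok, $error );
    {
        local $@;                    # protect the caller's $@
        $ok    = eval { bar(); 1 };  # true only if bar() did not die
        $error = $@;                 # copy the error out before the local unwinds
    }
    die $error unless $ok;           # re-throw outside the local's scope
}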

Every time this happened to me it took a really long while to
figure out; a conservative guess is that I've lost about 2-3 days of
my life to this behavior over the past few years. This is because
it's action at a distance on several levels.

--
Yuval Kogman <nothingmuch@woobling.org>
http://nothingmuch.woobling.org 0xEBD27418


  • Abigail at Mar 21, 2008 at 4:27 pm

    On Fri, Mar 21, 2008 at 05:44:33PM +0200, Yuval Kogman wrote:
    sub foo {
        local $@;

        eval { # a generic wrapper, doesn't know about bar()'s details
            bar();
        };

        if ( $@ ) {
            # do something meaningful
            die $@;
        }
    }

    sub bar {
        die "blah";
    }

    eval { foo() };

    warn "Error: $@";

    In the 'foo' subroutine $@ is localized to prevent clobbering it in
    cases such as:

    eval { ... };
    foo();
    if ( $@ ) { ... } # for the prev eval

    From a control flow POV everything works correctly here, but in the
    outermost eval { } the value of $@ is not preserved (the exception
    still jumps, though); the value is just ''.
    Well, that's what you want, isn't it? In:

    eval { ... }
    foo ();
    if ($@) { ... }

    you want $@ to be the result of the first eval {} (at least, that's what
    I understand from your comment). How else do you want to achieve that than
    by "ignoring" whatever foo() does with $@?
    Since there is no other way to know if the eval actually failed
    without inspecting $@, this makes it fairly useless; furthermore,
    the documentation of eval implies this should not be the case (but
    doesn't mention local).

    Inspecting $@ to check whether an eval die()d is wrong. It can
    trigger both false positives, and false negatives:

    sub Foo::DESTROY {die "Hello"}
    sub Bar::DESTROY {eval ""}

    #
    # No die() here.
    #
    eval {my $o = bless [] => 'Foo';};
    if ($@) {warn "Triggered wrongly; the previous eval did NOT die.\n"}

    #
    # There's a die() here.
    #
    eval {my $o = bless [] => 'Bar'; die "Eeep"};
    unless ($@) {warn "Triggered wrongly; the previous eval DID die.\n"}

    __END__
    Triggered wrongly; the previous eval did NOT die.
    Triggered wrongly; the previous eval DID die.


    The correct way of checking whether an eval failed is to check its return
    value:

    sub Foo::DESTROY {die "Hello"}
    sub Bar::DESTROY {eval ""}

    #
    # No die() here.
    #
    eval {my $o = bless [] => 'Foo'; 1} or do {
        warn "Triggered wrongly; the previous eval did NOT die.\n"
    };

    #
    # There's a die() here.
    #
    eval {my $o = bless [] => 'Bar'; die "Eeep"; 1} and do {
        warn "Triggered wrongly; the previous eval DID die.\n"
    };

    __END__

    I believe this is an implementation detail: likely die() in the
    context of an eval assigns to $@ with its localization stack still
    in effect, instead of the assignment happening in the scope of the
    eval { } that is actually trapping the error, so in effect the error
    that it trapped never ends up in $@.

    The workaround, in foo(), is:

    sub foo {
        my $e;
        {
            local $@;
            eval { bar() };
            $e = $@;
        }

        if ( $e ) { die $e }
    }

    but that kinda sucks.
    But that's how it ought to be done.

    While I agree that people might get bitten by 'local $@' (I have myself),
    it *is* consistent. The eval fails (due to die), which sets $@. Then eval
    is done, after which the scope is exited, triggering DESTROYs, and unrolling
    the effects of local. IMO, you are calling for an exception.
    Every time this happened to me it took a really long while to
    figure out; a conservative guess is that I've lost about 2-3 days of
    my life to this behavior over the past few years. This is because
    it's action at a distance on several levels.
    But that's what local *IS* all about.

    If you don't want the effects of local(), by all means, don't use it.
    Don't make an exception for a specific case. Especially not for a case
    where you were using it wrongly in the first place.



    Abigail
  • David Nicol at Mar 21, 2008 at 4:41 pm
    So should we introduce an automatic localization of $@ before destruction?

    --
    ... but world war IV will be fought with rocks
  • Abigail at Mar 21, 2008 at 4:52 pm

    On Fri, Mar 21, 2008 at 11:41:41AM -0500, David Nicol wrote:
    So should we introduce an automatic localization of $@ before destruction?

    Well, it might have been a good idea to do this when DESTROY was first
    introduced.

    I know that I've written code in the past which would die() in a DESTROY,
    which was triggered from an eval, and I'd expect $@ afterwards.

    Such code would fail if we'd introduce automatic localization of $@.

    Is that worth it?


    Abigail
  • David Nicol at Mar 21, 2008 at 4:58 pm

    On Fri, Mar 21, 2008 at 11:52 AM, Abigail wrote:
    Such code would fail if we'd introduce automatic localization of $@.

    Is that worth it?
    no.

    What code would fail if the order of localization restoration and
    assignment to $@ were switched?


    --
    ... but world war IV will be fought with rocks
  • Mark Mielke at Mar 21, 2008 at 5:46 pm

    David Nicol wrote:
    On Fri, Mar 21, 2008 at 11:52 AM, Abigail wrote:

    Such code would fail if we'd introduce automatic localization of $@.

    Is that worth it?
    no.

    What code would fail if the order of localization restoration and
    assignment to $@ were switched?
    Any code that is exhibiting an error that is being silenced? :-)

    I believe I have such code. I choose to local $@ in some DESTROY
    methods, because at DESTROY time (and especially at end-of-program)
    there is no reasonable way of trapping certain errors. Now, I suspect I
    use eval to trap all errors in this case, but if my code has any bugs
    that were silenced by this particular consistent-but-broken behaviour
    being discussed, then they would become exposed. My code is known to
    behave acceptably at present (at least in terms of how the user sees
    it), but if it suddenly started dying during exit where it previously
    did not, users might be surprised and upset.

    Perl's exception model sucks - it always has. Tweaking it for cases like
    this isn't the answer. It needs a replacement. I haven't followed Perl 6
    in a few years due to a loss of interest, but I think that is where it
    needs to be fixed. Fixing Perl 5 is almost a dead goal.

    Cheers,
    mark

    --
    Mark Mielke <mark@mielke.cc>
  • Dr.Ruud at Mar 21, 2008 at 10:00 pm

    "David Nicol" schreef:

    So should we introduce an automatic localization of $@ before
    destruction?
    Just leave $@ as is, and introduce @@.

    --
    Affijn, Ruud

    "Gewoon is een tijger."
  • Chromatic at Mar 21, 2008 at 11:38 pm

    On Friday 21 March 2008 14:55:27 Dr.Ruud wrote:

    "David Nicol" schreef:
    So should we introduce an automatic localization of $@ before
    destruction?
    Just leave $@ as is, and introduce @@.
    Rubyometer++

    Wait, is this the Perl *5* list? Sorry!

    -- c
  • Dr.Ruud at Mar 22, 2008 at 11:17 am

    chromatic schreef:
    Dr.Ruud:
    David Nicol:
    So should we introduce an automatic localization of $@ before
    destruction?
    Just leave $@ as is, and introduce @@.
    Rubyometer++

    Wait, is this the Perl *5* list? Sorry!
    <g>

    #!/usr/bin/perl
    # dieritis-@@.pl

    use strict;
    use warnings;
    use Time::HiRes qw/time/;
    use Data::Dumper;

    sub Q::DESTROY {
        $@ and push @@, [ $@, time, $$ ];
        eval {
            die "died in INNER eval";
        };
    }

    push @@, [ "__start", time, $$ ];

    eval {
        my $x = bless [], "Q";
        die "died in OUTER eval";
    };
    $@ and push @@, [ $@, time, $$ ];

    chomp $_->[0] for @@;
    print Dumper(\@@);


    Output:

    $VAR1 = [
              [
                '__start',
                '1206184429.74886',
                22961
              ],
              [
                'died in OUTER eval at ./dieritis-@@.pl line 20.',
                '1206184429.74889',
                22961
              ],
              [
                'died in INNER eval at ./dieritis-@@.pl line 12.',
                '1206184429.7489',
                22961
              ]
            ];

    --
    Affijn, Ruud

    "Gewoon is een tijger."
  • Rafael Garcia-Suarez at Mar 22, 2008 at 11:12 pm

    On 21/03/2008, Dr.Ruud wrote:
    So should we introduce an automatic localization of $@ before
    destruction?

    Just leave $@ as is, and introduce @@.
    I agree that the behaviour of $@ is very hard to modify right now in
    Perl 5. It has many complications and many people have worked around
    features or misfeatures in many ways. Introducing a parallel system
    might work.
  • Aristotle Pagaltzis at Mar 23, 2008 at 12:00 am

    * Rafael Garcia-Suarez [2008-03-23 00:15]:
    I agree that the behaviour of $@ is very hard to modify right
    now in Perl 5. It has many complications and many people have
    worked around features or misfeatures in many ways. Introducing
    a parallel system might work.
    What does Perl 6 do in that respect? Maybe semantics could be
    borrowed from there?

    Regards,
    --
    Aristotle Pagaltzis // <http://plasmasturm.org/>
  • Mark J. Reed at Mar 23, 2008 at 5:28 am

    On Sat, Mar 22, 2008 at 8:00 PM, Aristotle Pagaltzis wrote:
    What does Perl 6 do in that respect? Maybe semantics could be
    borrowed from there?
    In which respect?

    TTBOMK, both eval's role as pseudo-"try" and the $@ variable are gone
    in Perl6, which has a real "try" instead. If the eval'ed code fails,
    the eval itself just fails right along with it; so there's no need for
    a split along the lines of $! vs $@.

    --
    Mark J. Reed <markjreed@mail.com>
  • Larry Wall at Mar 26, 2008 at 4:56 pm

    On Sun, Mar 23, 2008 at 01:28:06AM -0400, Mark J. Reed wrote:
    : On Sat, Mar 22, 2008 at 8:00 PM, Aristotle Pagaltzis wrote:
    : > What does Perl 6 do in that respect? Maybe semantics could be
    : > borrowed from there?
    :
    : In which respect?
    :
    : TTBOMK, both eval's role as pseudo-"try" and the $@ variable are gone
    : in Perl6, which has a real "try" instead. If the eval'ed code fails,
    : the eval itself just fails right along with it; so there's no need for
    : a split along the lines of $! vs $@.

    Actually, we haven't gone that far with eval-string yet. It still traps
    errors, but puts them into $!, which now holds all exception values. $@
    and $? and $^E are all gone.

    The main difference semantically is that $! is always a lexically scoped
    variable, and when you call a function and it fails, the failure code is
    authorized to set its parent's $!, which happens to be your $!.

    We also distinguish die (which always throws an exception) from fail
    (which only throws an exception under "use fatal", and otherwise
    returns an unthrown exception, which is an interesting value
    of undef). Unthrown exceptions are an important way forward into
    parallel programming. You really can't afford to have the vector
    operations on your rocket blowing up (literally, in this case) merely
    because one of your sensors is reporting an impossible 0 value.

    Of course, if you try to use such an unthrown exception as an actual
    value, then the exception gets thrown for real. (But it has all
    the original exception info in it, so you can still figure out why
    the rocket blew up, hopefully, assuming your rocket is smart enough
    to transmit the exception info before it triggers a self-destruct.)

    Another important semantic difference is that exception handlers are
    run before the stack is unwound, not after. This has two big benefits.
    An exception handler can turn a "fatal" exception into a mere warning
    and continue. But the bigger win is that this unifies warnings with
    exceptions, so the dynamic scope handles both the same way, and can
    also go the other way and turn a warning into a fatal exception.
    A warning is simply an exception that is expected to resume, but
    might not.

    Another divergence from Perl 5 is that you can turn any block into
    a "try" block merely by putting a CATCH block inside it. And the
    exception handlers within a CATCH are merely a switch statement where
    the topic is $!. No special exception matching syntax--I'm too lazy
    for that. :)

    The other aspects of exception handling in Perl 6 are discussed in

    http://perlcabal.org/syn/S04.html

    if you're interested. (Or even if you're not...)

    Anyway, a lot of these design changes work together to produce a
    smoother result. I confess I haven't thought much about whether they
    could be borrowed piecemeal; I have a hard enough time keeping one
    fantasy language in my head at a time. :)

    One place we have not yet completed the Perl 6 design is to create
    a hierarchy of exception object types. We're kinda waiting to
    see what the various implementations need on that, but if anyone
    wants to contribute, a preliminary design of that would be useful.
    Probably even a discussion of the current state of the art in Perl 5
    would be useful there.

    [Note, this is intentionally cross-posted, so please be careful with
    your followups if they would be of interest to only one list or
    the other.]

    Larry
  • Jim Cromie at Mar 23, 2008 at 12:26 pm

    Rafael Garcia-Suarez wrote:
    On 21/03/2008, Dr.Ruud wrote:

    So should we introduce an automatic localization of $@ before
    destruction?

    Just leave $@ as is, and introduce @@.
    I agree that the behaviour of $@ is very hard to modify right now in
    Perl 5. It has many complications and many people have worked around
    features or misfeatures in many ways. Introducing a parallel system
    might work.
    dare we add try { stuff } or handle_exception.. ?
  • Yuval Kogman at Mar 21, 2008 at 4:42 pm

    On Fri, Mar 21, 2008 at 17:27:04 +0100, Abigail wrote:
    On Fri, Mar 21, 2008 at 05:44:33PM +0200, Yuval Kogman wrote:
    sub foo {
        local $@;

        eval { # a generic wrapper, doesn't know about bar()'s details
            bar();
        };

        if ( $@ ) {
            # do something meaningful
            die $@;
        }
    }

    sub bar {
        die "blah";
    }

    eval { foo() };

    warn "Error: $@";

    In the 'foo' subroutine $@ is localized to prevent clobbering it in
    cases such as:

    eval { ... };
    foo();
    if ( $@ ) { ... } # for the prev eval

    From a control flow POV everything works correctly here, but in the
    outermost eval { } the value of $@ is not preserved (the exception
    still jumps, though); the value is just ''.
    Well, that's what you want, isn't it? In:

    eval { ... }
    foo ();
    if ($@) { ... }

    you want $@ to be the result of the first eval {} (at least, that's what
    I understand from your comment). How else do you want to achieve that than
    by "ignoring" whatever foo() does with $@?
    Umm, that's exactly what foo is trying to achieve with the local, in
    an attempt to not interfere with its caller's environment...
    Inspecting $@ to check whether an eval die()d is wrong. It can
    trigger both false positives, and false negatives:
    Fair enough, but that goes against the idiomatic usage encouraged by
    perlfunc, which is widely used today.
    But that's how it ought to be done.

    While I agree that people might get bitten by 'local $@' (I have myself),
    it *is* consistent. The eval fails (due to die), which sets $@. Then eval
    is done, after which the scope is exited, triggering DESTROYs, and unrolling
    the effects of local. IMO, you are calling for an exception.
    Why? The docs don't say this. The fact that it is die that actually sets
    $@, and not the outer eval { } that does the assignment, is not mentioned
    or even hinted at in the docs, and there is little benefit from this
    behavior.
    But that's what local *IS* all about. ?
    If you don't want the effects of local(), by all means, don't use it.
    Don't make an exception for a specific case. Especially not for a case
    where you were using it wrongly in the first place.
    Again, there is no evidence to support that my usage was any more
    wrong than the workaround is right - I understand how local works,
    and I conjectured that die/eval work independently of it, when it is
    in fact done by actual assignment. That doesn't mean that my usage
    is wrong by design.

    --
    Yuval Kogman <nothingmuch@woobling.org>
    http://nothingmuch.woobling.org 0xEBD27418
  • David Nicol at Mar 21, 2008 at 4:56 pm
    $ perl -le 'sub localdie { local $@; die "DIED\n" }; eval {
    localdie(); }; print $@||"cleared\n"'
    cleared
    Why? The docs don't say this. The fact that it is die that actually sets
    $@, and not the outer eval { } that does the assignment, is not mentioned
    or even hinted at in the docs, and there is little benefit from this
    behavior.
    I don't think it is accurate to say "die sets $@".
    I believe eval sets $@ to whatever the "death rattle" of the contents
    was, and die is a way of forcing such.

    I believe local sets aside the current value of something until its
    scope is torn down,
    then it assigns it back.

    I believe the confusion arises from lack of explicit documentation of
    the order of the two operations in question here, which are, in the
    order they appear to be happening: 1) assignment of "DIED\n" to $@,
    2) restoring the old value.

    I think it is reasonable to expect the assignment of the death rattle to $@ by
    the eval framework would occur _after_ the stack frame is torn down including
    the restoration of local variables.

    I think Abigail can be something of a show-off.

    --
    ... but world war IV will be fought with rocks
  • Abigail at Mar 21, 2008 at 5:03 pm

    On Fri, Mar 21, 2008 at 06:42:26PM +0200, Yuval Kogman wrote:
    On Fri, Mar 21, 2008 at 17:27:04 +0100, Abigail wrote:
    On Fri, Mar 21, 2008 at 05:44:33PM +0200, Yuval Kogman wrote:
    sub foo {
        local $@;

        eval { # a generic wrapper, doesn't know about bar()'s details
            bar();
        };

        if ( $@ ) {
            # do something meaningful
            die $@;
        }
    }

    sub bar {
        die "blah";
    }

    eval { foo() };

    warn "Error: $@";

    In the 'foo' subroutine $@ is localized to prevent clobbering it in
    cases such as:

    eval { ... };
    foo();
    if ( $@ ) { ... } # for the prev eval

    From a control flow POV everything works correctly here, but in the
    outermost eval { } the value of $@ is not preserved (the exception
    still jumps, though); the value is just ''.
    Well, that's what you want, isn't it? In:

    eval { ... }
    foo ();
    if ($@) { ... }

    you want $@ to be the result of the first eval {} (at least, that's what
    I understand from your comment). How else do you want to achieve that than
    by "ignoring" whatever foo() does with $@?
    Umm, that's exactly what foo is trying to achieve with the local, in
    an attempt to not interfere with its caller's environment...

    Now, I'm getting confused. To me it seems you are saying that you are
    using 'local $@' inside 'foo' so as not to clobber an outside $@, and then
    you're complaining that $@ isn't set when returning from 'foo'. You can't
    have it both ways.
    Inspecting $@ to check whether an eval die()d is wrong. It can
    trigger both false positives, and false negatives:
    Fair enough, but that goes against the idiomatic usage encouraged by
    perlfunc, which is widely used today.
    But that's how it ought to be done.

    While I agree that people might get bitten by 'local $@' (I have myself),
    it *is* consistent. The eval fails (due to die), which sets $@. Then eval
    is done, after which the scope is exited, triggering DESTROYs, and unrolling
    the effects of local. IMO, you are calling for an exception.
    Why? The docs don't say this. The fact that it is die that actually sets
    $@, and not the outer eval { } that does the assignment, is not mentioned
    or even hinted at in the docs, and there is little benefit from this
    behavior.
    I never claimed that die sets $@. Implementation-wise, it might do so (I
    have no idea), but from a language point of view, it doesn't. Due to the
    die, eval sets $@.

    As for the docs, "perldoc -f eval" says:

    ... If there is a syntax error or runtime error, or a "die" statement
    is executed, an undefined value is returned by "eval", and $@ is
    set to the error message. ...

    Note that the docs do mention the possibility of $@ being something else
    than the argument of die(), giving the example of a __DIE__ hook die()ing,
    and hence setting $@.
    But that's what local *IS* all about. ?
    If you don't want the effects of local(), by all means, don't use it.
    Don't make an exception for a specific case. Especially not for a case
    where you were using it wrongly in the first place.
    Again, there is no evidence to support that my usage was any more
    wrong than the work around is right - I understand how local works,
    and I conjectured that die/eval work independently of it, when it is
    in fact done by actual assignment. That doesn't mean that my usage
    is wrong by design.

    --
    Yuval Kogman <nothingmuch@woobling.org>
    http://nothingmuch.woobling.org 0xEBD27418
  • David Nicol at Mar 21, 2008 at 5:56 pm

    On Fri, Mar 21, 2008 at 12:03 PM, Abigail wrote:

    Now, I'm getting confused. To me it seems you are saying that you are
    using 'local $@' inside 'foo' as to not clobber an outside $@, and then
    you're complaining that $@ isn't set when returning from 'foo'. You can't
    have it both ways.
    The OP's example is too long. He wants to hide $@, but set it again in
    certain circumstances, without using intermediate variables (which would
    make code that does this not work with "older" perls if the change
    were made).

    $ perl -le 'sub localdie { local $@; die "DIED\n" }; eval {
    localdie(); }; print $@||"cleared\n"'
    cleared

    $ perl -le 'sub localdie { { local $@; }; die "DIED\n" }; eval {
    localdie(); }; print $@||"cleared\n"'
    DIED

    The issue is that in the first case, the assignment to $@, which is
    supposed to be done by the eval, is affected by the local within
    localdie, which should have completed before eval sets $@.

    How can this be documented?

    $ diff -u /usr/lib/perl5/5.8/pods/perlfunc.pod.orig /usr/lib/perl5/5.8/pods/perlfunc.pod
    --- /usr/lib/perl5/5.8/pods/perlfunc.pod.orig  2008-03-21 12:36:39.263371300 -0500
    +++ /usr/lib/perl5/5.8/pods/perlfunc.pod       2008-03-21 12:53:19.812144700 -0500
    @@ -1568,6 +1568,22 @@
    particular situation, you can just use symbolic references instead, as
    in case 6.

    +The assignment to C<$@> occurs before restoration of localised variables,
    +which means a temporary is required to mask some but not all errors:
    +
    + # alter $@ on nefarious repugnancy only
    + {
    +     my $e;
    +     {
    +         local $@; # hide outer $@
    +         eval { test_repugnancy() };
    +         # $@ =~ /nefarious/ and die $@; # DOES NOT WORK
    +         $@ =~ /nefarious/ and $e = $@;
    +     }
    +     die $e if defined $e
    + }
    +
    +
    C<eval BLOCK> does I<not> count as a loop, so the loop control statements
    C<next>, C<last>, or C<redo> cannot be used to leave or restart the block.
  • Tim Bunce at Mar 24, 2008 at 1:18 pm

    On Fri, Mar 21, 2008 at 12:56:12PM -0500, David Nicol wrote:
    The issue is that in the first case, the assignment to $@, which is
    supposed to be done by the eval, is affected by the local within
    localdie, which should have completed before eval sets $@.
    How can this be documented?

    $ diff -u /usr/lib/perl5/5.8/pods/perlfunc.pod.orig
    /usr/lib/perl5/5.8/pods/perlfunc.pod
    --- /usr/lib/perl5/5.8/pods/perlfunc.pod.orig 2008-03-21 12:36:39.263371300 -0500
    +++ /usr/lib/perl5/5.8/pods/perlfunc.pod 2008-03-21 12:53:19.812144700 -0500
    @@ -1568,6 +1568,22 @@
    particular situation, you can just use symbolic references instead, as
    in case 6.

    +The assignment to C<$@> occurs before restoration of localised variables,
    +which means a temporary is required to mask some but not all errors:
    +
    + # alter $@ on nefarious repugnancy only
    + {
    +     my $e;
    +     {
    +         local $@; # hide outer $@
    +         eval { test_repugnancy() };
    +         # $@ =~ /nefarious/ and die $@; # DOES NOT WORK
    +         $@ =~ /nefarious/ and $e = $@;
    +     }
    +     die $e if defined $e
    + }
    +
    +
    C<eval BLOCK> does I<not> count as a loop, so the loop control statements
    C<next>, C<last>, or C<redo> cannot be used to leave or restart the block.
    Nobody commented on this, so I'll add a +1

    Tim.

    p.s. I'd also make a couple of minor changes:
    Change "is required to mask" to "is required if you want to mask", and
    change "# hide outer $@" to "# protect existing $@".
  • Rafael Garcia-Suarez at Mar 25, 2008 at 9:23 am

    On 21/03/2008, David Nicol wrote:
    The issue is that in the first case, the assignment to $@, which is
    supposed to be done by the eval, is affected by the local within
    localdie, which should have completed before eval sets $@.

    How can this be documented?
    Thanks, applied (with Tim Bunce's amendments) as change #33558.
  • Ronald J Kimball at Mar 21, 2008 at 4:48 pm

    On Fri, Mar 21, 2008 at 05:27:04PM +0100, Abigail wrote:
    Inspecting $@ to check whether an eval die()d is wrong. It can
    trigger both false positives, and false negatives:

    [snip]

    The correct way of checking whether an eval failed is to check its return
    value:

    [snip]
    The documentation for eval should probably be updated...

    perldoc -f eval

    [...]

    If there is a syntax error or runtime error, or a die statement is
    executed, an undefined value is returned by eval, and $@ is set to the
    error message. If there was no error, $@ is guaranteed to be a null
    string. [...]

    [...]

    # make divide-by-zero nonfatal
    eval { $answer = $a / $b; }; warn $@ if $@;

    [etc.]

    Ronald
  • Zefram at Mar 21, 2008 at 4:57 pm

    Abigail wrote:
    Inspecting $@ to check whether an eval die()d is wrong. It can
    trigger both false positives, and false negatives:
    By that logic, there is no way at all to determine what exception
    was thrown. It's not a practical approach.

    The possibility of DESTROY functions clobbering $@ is a design bug.
    I suggest that $@ should be automatically preserved across (localised
    to) DESTROY functions, and in the absence of such a change to the core
    I have argued for all DESTROY functions to explicitly localise $@ and
    the other global status variables.
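
    (A minimal editorial sketch, not from the original mail, of the explicit
    localisation argued for above; the class and field names are made up:)

        sub My::Handle::DESTROY {
            my $self = shift;
            # keep cleanup from clobbering the caller's status variables
            local ( $@, $!, $?, $^E );
            close $self->{fh} or warn "close failed: $!";
        }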

    -zefram
  • Jan Dubois at Mar 21, 2008 at 5:50 pm

    On Fri, 21 Mar 2008, Zefram wrote:
    Abigail wrote:
    Inspecting $@ to check whether an eval die()d is wrong. It can
    trigger both false positives, and false negatives:
    By that logic, there is no way at all to determine what exception
    was thrown. It's not a practical approach.

    The possibility of DESTROY functions clobbering $@ is a design bug.
    I agree with that.
    I suggest that $@ should be automatically preserved across (localised
    to) DESTROY functions, and in the absence of such a change to the core
    I have argued for all DESTROY functions to explicitly localise $@ and
    the other global status variables.
    I would find this an acceptable change for 5.12 (with a corresponding
    warning in 5.10.1 docs).

    If we can't agree on doing that, then a somewhat reasonable compromise
    might be to only localize $@ during the processing of a die() call
    itself. That way any additional exceptions thrown by destructors won't
    overwrite the original exception.
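
    (An editorial sketch, not from the original mail, of the overwriting
    described above; the class name is illustrative, and the exact contents
    of $@ vary by perl version:)

        sub Guard::DESTROY { die "destructor failed\n" }

        eval {
            my $g = bless {}, 'Guard';
            die "original error\n";
            # while "original error" unwinds, $g is destroyed and its DESTROY
            # dies, so the destructor's error can end up in $@ instead of,
            # or mixed in with, the original one
        };
        print $@;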

    I wonder if for consistency eval() should then check $@ *after* the
    stack-unwinding and change the return value to undef if $@ is true?
    Or why should the eval() be considered successful if the failure of
    a destructor is deemed important enough to change $@?

    I would still prefer to always localize $@ around DESTROY calls.

    Cheers,
    -Jan

    PS: This is another can of worms, but I would even argue that any
    exception thrown that would unwind outside the DESTROY call frame
    while we are already unwinding the stack for an earlier exception
    should be a fatal error. This would essentially mean that DESTROY
    should not throw exceptions at all (unless we make the fact that
    we are unwinding the stack already available in something like $^S).
  • Abigail at Mar 21, 2008 at 5:58 pm

    On Fri, Mar 21, 2008 at 10:49:28AM -0700, Jan Dubois wrote:

    PS: This is another can of worms, but I would even argue that any
    exception thrown that would unwind outside the DESTROY call frame
    while we are already unwinding the stack for an earlier exception
    should be a fatal error. This would essentially mean that DESTROY
    should not throw exceptions at all (unless we make the fact that
    we are unwinding the stack already available in something like $^S).

    But DESTROY is also the place to shut down resources. And

        close $handle or die "...";

    is a very common idiom.



    Abigail
  • Mark Mielke at Mar 21, 2008 at 6:00 pm

    Jan Dubois wrote:
    I suggest that $@ should be automatically preserved across (localised
    to) DESTROY functions, and in the absence of such a change to the core
    I have argued for all DESTROY functions to explicitly localise $@ and
    the other global status variables.
    I would find this an acceptable change for 5.12 (with a corresponding
    warning in 5.10.1 docs).

    If we can't agree on doing that, then a somewhat reasonable compromise
    might be to only localize $@ during the processing of a die() call
    itself. That way any additional exceptions thrown by destructors won't
    overwrite the original exception
    If the behaviour absolutely must change in Perl 5 (perhaps
    insignificant, but I do not agree - small border cases that go
    undiscovered, or at least uncomplained about, for over a decade do not
    seem to be something that absolutely must change), I believe the true
    problem here is that die seems to be pass by reference, and that local
    is restoring the value to undef (or whatever it was before) before $@ is
    made available to the caller.

    Talking about automatically localizing of $@ during DESTROY, although
    useful, is not related to the original complaint. The original complaint
    was that local causes a problem. Automatic localization does not fix the
    problem - it propagates it. Other people are bringing up other failures
    of the Perl 5 exception model (their own peeves?) and twisting them
    together, in what can only be summarized as "why Perl 5 exceptions
    suck." I've known they sucked for a long time, and I accept it. Fixing
    one place here, fixing one place there, and eventually you have
    something that WILL break existing Perl 5 code. It might be great - but
    if existing programs start to fail, whose fault is it?

    Cheers,
    mark

    --
    Mark Mielke <mark@mielke.cc>
  • David Nicol at Mar 21, 2008 at 9:20 pm

    On Fri, Mar 21, 2008 at 12:59 PM, Mark Mielke wrote:
    seem to be something that absolutely must change), I believe the true
    problem here is that die seems to be pass by reference, and that local
    is restoring the value to undef (or whatever it was before) before $@ is
    made available to the caller.
    No, that's not it, but it is a red herring -- I would like to revise the
    proposed addition of this issue to the eval documentation to put
    double-quotes around the $@ in the commented-out line in the example code.


    --
    ... but world war IV will be fought with rocks
  • Mark Mielke at Mar 21, 2008 at 10:37 pm

    David Nicol wrote:
    On Fri, Mar 21, 2008 at 12:59 PM, Mark Mielke wrote:

    seem to be something that absolutely must change), I believe the true
    problem here is that die seems to be pass by reference, and that local
    is restoring the value to undef (or whatever it was before) before $@ is
    made available to the caller.
    No, that's not it, but it is a red herring -- I would like to revise the
    proposed addition of this issue to the eval documentation to put
    double-quotes around the $@ in the commented-out line in the example code.

    That's not a real solution - what about if $@ is an object?

    Effectively you are changing pass by reference to pass by string copy -
    which is good for probably 95%+ cases. As I said, the problem is pass by
    reference.

    Cheers,
    mark

    --
    Mark Mielke <mark@mielke.cc>
  • Mark Mielke at Mar 21, 2008 at 11:13 pm

    Mark Mielke wrote:
    David Nicol wrote:
    On Fri, Mar 21, 2008 at 12:59 PM, Mark Mielke <mark@mark.mielke.cc>
    wrote:
    seem to be something that absolutely must change), I believe the true
    problem here is that die seems to be pass by reference, and that local
    is restoring the value to undef (or whatever it was before) before
    $@ is
    made available to the caller.
    No, that's not it, but it is a red herring -- I would like to revise the
    proposed addition of this issue to the eval documentation to put
    double-quotes around the $@ in the commented-out line in the example code.

    That's not a real solution - what about if $@ is an object?

    Effectively you are changing pass by reference to pass by string copy
    - which is good for probably 95%+ cases. As I said, the problem is
    pass by reference.
    Just had a thought - perhaps some Perl people don't know what I mean by
    pass by reference. That's the only reason I can think of why Dave might
    have called my claim that pass by reference was the real problem a "red
    herring", but then suggest pass by copy as a solution (thereby agreeing
    with me).

    Pass by reference doesn't mean the Perl \ operator or Perl language level
    references. I mean that $@ is bound to an SV, and this SV is passed
    by C reference to the caller; however, local replaces the value of the C
    reference with the old value. David's suggestion of "$@" causes the $@
    to be evaluated as a string, and a COPY of it to be taken, returning it to
    the caller as a new string.

    Does this explain it Dave?

    Cheers,
    mark

    --
    Mark Mielke <mark@mielke.cc>
  • David Nicol at Mar 21, 2008 at 11:19 pm

    On Fri, Mar 21, 2008 at 6:13 PM, Mark Mielke wrote:
    Does this explain it Dave?
    but that doesn't work either; the problem is that $@ gets unlocalized after
    eval assigns to it, even when the argument to die has nothing to do with $@.


    perl -le 'eval { local $@; die "yabbazabba" }; print $@||"nothing there"'

    So including an attempt at pass-by-copy in the documentation of what
    doesn't work (1) indicates that thinking it might be a pass-by-copy problem
    is reasonable and (2) documents by example quoting as a method to force
    pass-by-copy.
  • Mark Mielke at Mar 22, 2008 at 12:21 am

    David Nicol wrote:
    On Fri, Mar 21, 2008 at 6:13 PM, Mark Mielke wrote:

    Does this explain it Dave?
    but that doesn't work either; the problem is that $@ gets unlocalized after
    eval assigns to it, even when the argument to die has nothing to do with $@.


    perl -le 'eval { local $@; die "yabbazabba" }; print $@||"nothing there"'

    So including an attempt at pass-by-copy in the documentation of what
    doesn't work (1) indicates that thinking it might be a pass-by-copy problem
    is reasonable and (2) documents by example quoting as a method to force
    pass-by-copy.
    I see what you mean now. Yes, I've hit this too. I may have
    misunderstood the original problem as well. The issue being that die
    stores to $@ before $@ is restored to its original value, making the die
    unavailable. If this is the original problem, then yes, it's not a
    pass-by-reference problem, but a local vs my problem. When I hit the
    above issue before, my conclusion was that local was the wrong choice
    for $@ and that my would have worked a lot better(?). But again, a "too
    late now, don't complain, just move on" is the route I took.

    Cheers,
    mark

    --
    Mark Mielke <mark@mielke.cc>
  • Mark Mielke at Mar 21, 2008 at 5:51 pm

    Zefram wrote:
    Abigail wrote:
    Inspecting $@ to check whether an eval die()d is wrong. It can
    trigger both false positives, and false negatives:
    By that logic, there is no way at all to determine what exception
    was thrown. It's not a practical approach.
    That's not correct.

    I find Abigail's comment a little surprising given the way that eval{}
    and exception handling is documented. I think Abigail may mean "if you
    require local $@, then inspecting $@ is wrong; otherwise, if you do not
    require local $@, I have found in my personal experience that there are
    other techniques for handling $@ than checking if ($@) that seem to work
    better."

    In my own experience, I find if ($@) to not always be the best approach.
    For example, I'll often do:

    my $variable_expected_to_be_truthful = eval {...}
    or die "....\n";

    I may or may not make use of $@. I find the above to be cleaner to read
    than the attempt at mapping eval{}/if($@){} to try/catch, and I find it
    to be more exact. After all, what happens if:

    my $variable;
    eval {
        $variable = ...;
    };
    if ($@) {
        die "...\n";
    }

    $variable->...;

    What if $variable is undef by the time it reaches the bottom? I believe
    I agree with Abigail, that if you really want to know whether $variable
    contains a usable value, checking $@ is a backwards way to do it, and in
    cases such as described in this thread, it's a broken way to do it.
    The possibility of DESTROY functions clobbering $@ is a design bug.
    I suggest that $@ should be automatically preserved across (localised
    to) DESTROY functions, and in the absence of such a change to the core
    I have argued for all DESTROY functions to explicitly localise $@ and
    the other global status variables
    What might this break?

    To me, it's a major change in behaviour. Perhaps a good one - but it
    would have been better suggested before Perl-5.000 was released.

    Cheers,
    mark

    --
    Mark Mielke <mark@mielke.cc>
