On 6/14/06, Rafael Garcia-Suarez wrote:
demerphq wrote:
This appears to fix :
--- regcomp.h (revision 7997)
+++ regcomp.h (working copy)
@@ -570,7 +570,7 @@
/* Compile */
#define DEBUG_COMPILE_r(x) DEBUG_r( \
- if (SvIV(re_debug_flags) & RE_DEBUG_COMPILE_MASK) x )
+ if (re_debug_flags && SvIV(re_debug_flags) & RE_DEBUG_COMPILE_MASK) x )
#define DEBUG_PARSE_r(x) DEBUG_r( \
if (SvIV(re_debug_flags) & RE_DEBUG_COMPILE_PARSE) x )
#define DEBUG_OPTIMISE_r(x) DEBUG_r( \
End of patch
(other DEBUG_.*_r macros should probably be adapted too)
I wonder what set re_debug_flags to NULL. Any idea?
demerphq wrote:
Hmm, I can't recreate this. The line in question is debugging-related,
and I don't see what could result in a segfault after the "Freeing REx"
message has been emitted...
I wonder what compile options you are building under...
The default ones.
Nicholas spotted this and applied a small patch that resolved part of
the problem, but unfortunately I didn't connect the two together until
your reply illuminated the source of the problem.
Anyway, the solution is to convert the macros to operate on a UV and
not on the SV directly, and let GET_RE_DEBUG_FLAGS handle setting
up the UV correctly, assuming get_sv returns something useful.
Also, I noticed that Benchmark sometimes goes into an infinite loop on
test 67 of lib/Benchmark.t. The problem is that it's possible for the
empty loop to take the same time as, or longer than, the item being
benchmarked. In this case there are at least two loops in countit()
that go infinite, as they assume that timeit() will return positive
results. Here is the test code in question:
my $start = times;
my $chart = cmpthese( -0.1, { a => "++\$i", b => "\$i = sqrt(\$i++)" } ) ;
my $end = times;
select(STDOUT);
ok (($end - $start) > 0.05, "benchmarked code ran for over 0.05 seconds");
The timing of ++$i is so close to the time of an empty loop that we
can end up in an infinite loop.
As a guard against this I have added some heuristics to prevent an
infinite loop. If the timing loops get a negative or zero timing from
timeit() 16 times in a row, they will bail with an error saying the
benchmark is impossible to complete.
I will say, however, that personally I think this is the wrong
approach: timing the empty loop and subtracting it from the run time
of the benchmarked code is fraught with errors. If the load average
changes during the timing of the empty loop, the benchmark is basically
garbage; if the run time of the empty loop is very close to, or slower
than, the timing of the actual code due to load, then there is the
possibility of an infinite loop. And if somebody ever hacks perl to
optimise away useless code such as empty loops or calls to
contentless subs, then the empty-loop timings are toast anyway.
I'd say that the best thing to do would be to remove the empty-loop
timing stuff from the benchmark outright. It would eliminate the pesky
infinite loop, it would improve confidence in the benchmarks, and it
would have a minimal cost, as most benchmarking is comparative anyway,
so removing the empty loop is irrelevant to it. The only thing that
would be affected is that the timing of a routine would include the
cost of executing the timing loop itself, something that I don't think
many people would care about.
Yves
--
perl -Mre=debug -e "/just|another|perl|hacker/"