I have a job that runs from a crontab and is hitting a strange error
(error 139: segmentation fault). It starts as a shell script wrapper,
which calls a Perl script, which is itself a wrapper for another Perl
script that runs with various ini files. The segmentation fault doesn't
happen every time, and it has been impossible to reproduce in a
development environment. I don't want to post all of the code here, but
I'll post some of it:

DeltaWrapper.sh

ulimit -c unlimited

. /usr/oracle/oraenv.sh
. /dmi/projects/autoload/legacy/autoloadenv.sh

/xxxxDeltaExport.pl -f conf/xxxxDeltaExport.ini
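Since DeltaWrapper.sh already raises the core limit, one cheap addition (a
sketch, not taken from the original wrapper) is to make the wrapper report
how the exporter died: in the shell, an exit status of 128+N means the
process was killed by signal N, so status 139 is 128 + 11, i.e. SIGSEGV.

```shell
#!/bin/sh
# Hypothetical helper for the wrapper: run a command and report the
# signal if the exit status shows the child was killed by one.
run_and_report() {
    "$@"
    rc=$?
    if [ "$rc" -ge 128 ]; then
        echo "$1: killed by signal $((rc - 128)) (exit status $rc)" >&2
    fi
    return $rc
}

# A segfault surfaces as exit status 139 = 128 + 11 (SIGSEGV):
run_and_report sh -c 'kill -s SEGV $$' || true
```

Logging this on every run would at least tell you, without a debugger
attached, whether each failure is the same signal.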

xxxxDeltaExport.pl:

CheckLock();
foreach my $IniFile ( @{$INI->{IniFiles}} ) {
    TRACE2( "$IniFile...\n" );
    my $res;

    if ( $opts{n} ) {
        $res = `$Exporter -n -f $IniFile 2>&1`;
    } else {
        $res = `$Exporter -f $IniFile 2>&1`;
    }
}

$Exporter is the individual script that runs with the different ini
files. Here's some of it:

TRACE0("Start Time: $INI->{LogTime}");
my $fh = new FileHandle;

if ($fh->open("> $INI->{Exportfile}")) {
    eval {
        WriteHeader($fh,
                    $INI->{Header},
                    $count,
                    $INI->{Delimiter},
                    $INI->{HFTimeFormat})
            if ($INI->{Header});

        my $exp = new DMI::Customers::Ford::Export::FORDExporter($opts{f})
            or die "Can't get FORDExporter";

        $count = $exp->WriteFile($fh);

        WriteFooter($fh,
                    $INI->{Footer},
                    $count,
                    $INI->{Delimiter},
                    $INI->{HFTimeFormat})
            if ($INI->{Footer});

        $fh->close();
    };

    if ($@) {
        TRACE0("Error Encountered While Creating Export:\n$@\n");
        undef $success;
        push @messages, $@;

It fails within the WriteFile process: it writes steadily along and
then, for no apparent reason, stops with a SEGFAULT. Here's part of
WriteFile, from another .pm:

eval {
    $this->{DB}->_CheckConnection()
        or die("No db connection, can't continue");
    $dbh = $this->{DB}->{m_DBH};

    $dbh->{PrintError} = 1;

    # Set the row cache size (LATER: make a setting)
    $dbh->{RowCacheSize} = 256;
    TRACE2("Fetch size: $dbh->{RowCacheSize}");

    # Perform the PreSQL
    if ($this->{PreSQL}) {
        TRACE2("Begin PreSQL");
        TRACE2($this->{PreSQL});
        TRACE2("End PreSQL");

        $dbh->do($this->{PreSQL}) or db_error();
    }

    my $sth = $dbh->prepare($this->{Query}) or db_error();
    my $rv = $sth->execute(@bind) or db_error();

    $count = $this->_DelimitedFile($fh, $sth);

    # Perform the PostSQL
    if ($this->{PostSQL}) {
        TRACE2("Begin PostSQL");
        TRACE2($this->{PostSQL});
        TRACE2("End PostSQL");

        $dbh->do($this->{PostSQL}) or db_error();
    }

    # Commit any changes that may have occurred
    $dbh->commit() or db_error();
};
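One observation on the block above (a suggestion, not part of the original
code): everything already runs inside eval {}, but errors are only caught
where an `or db_error()` check exists. Enabling RaiseError on the handle
makes DBI throw an exception on every failed call, so nothing can slip
through silently. It won't stop a segfault inside the driver, but it rules
out an unnoticed error path leading up to one:

```perl
# Sketch: attribute settings on the existing $dbh (names from the code above).
$dbh->{RaiseError} = 1;   # any failed DBI call dies, and the eval {} catches it
$dbh->{PrintError} = 0;   # optional: avoid reporting each error twice
```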

Within the _DelimitedFile portion, when it reaches a certain point, it
dies. Here's the error I saw in the Perl debugger:


exporter::WriteFile(Exporter.pm:336):
336: $count = $this->_DelimitedFile($fh, $sth);
DB<5> n
Signal SEGV at Exporter.pm line 693
Exporter::_DelimitedFile('Exporter=HASH(0x8767dd8)',
'FileHandle=GLOB(0x8769a6c)', 'DBI::st=HASH(0x87e74c4)') called at
Exporter.pm line 336
eval {...} called at Exporter.pm line 313
Exporter::WriteFile('Exporter=HASH(0x8767dd8)',
'FileHandle=GLOB(0x8769a6c)') called at exporter.pl line 300
eval {...} called at exporter.pl line 289
Aborted
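Since DeltaWrapper.sh sets `ulimit -c unlimited`, a crash like the trace
above should leave a core file in the job's working directory. Loading it
into gdb (a sketch of a session, assuming gdb and an unstripped perl are
available; the core file's name and location vary by system) usually shows
whether the fault is in perl itself, in DBD::Oracle, or in the Oracle
client library:

```
$ gdb /usr/bin/perl core
(gdb) bt                   # C-level backtrace at the moment of the SIGSEGV
(gdb) info sharedlibrary   # confirm which DBD::Oracle/libclntsh got loaded
```

A backtrace ending inside the Oracle client library would point at a
client/driver version mismatch rather than at the Perl code.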



I traced the error further with the Perl debugger and got down to this
procedure, which makes a DBI call and retrieves data with
fetchrow_arrayref:



#my $rv = $sth->execute($year, $div, $vehline, $code) or db_error(1),
#    die $dbh->errstr;

my $rv = $sth->execute($year, $div, $vehline, $code);
die "Execute failed due to $DBI::errstr" if $DBI::err;

if ($DBI::errstr) {
    TRACE0("ERROR: " . $DBI::errstr);
    $OraError = 1;
    return undef;
}

TRACE0("_getOptDesc: rv = $rv\n");
TRACE0("_getOptDesc: SQL executed successfully!\n");

my $row = $sth->fetchrow_arrayref();
die "Fetch failed due to $DBI::errstr" if $DBI::err;

if ($DBI::errstr) {
    TRACE0("ERROR: " . $DBI::errstr);
    $OraError = 1;
    return undef;
}

TRACE0("_getOptDesc: row = $row\n");



I put in those TRACE statements to see how far it gets. The odd thing
is that it sometimes dies with error 139 (segmentation fault) on the
$sth->execute and sometimes on the fetchrow_arrayref. This doesn't
happen every time, but it usually happens once a day when this job
runs.

My initial thought was that too many filehandles were somehow being
opened and never closed, but I couldn't verify that with lsof. I also
checked the database handles, and they appear to be fine; the queries
run without error. Does anyone have any thoughts as to why this is
happening? To be honest, I've never seen a segmentation fault in a Perl
program like this before. It would be a lot easier if it always
occurred in the same place, but it doesn't. I'm stumped.
Thanks!
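[Editorial note on the filehandle theory above: on Linux, /proc gives a
quick way to watch a process's descriptor count over time; if it climbs
toward the `ulimit -n` limit while the job runs, there is a leak. Note that
exhausting descriptors normally produces failed opens (EMFILE), not a
SIGSEGV, so this check is more likely to rule the theory out than confirm
it. A sketch, with the exporter's PID as an assumed input:]

```shell
#!/bin/sh
# Hypothetical monitor (Linux): count open file descriptors for a PID.
pid=${1:-$$}                        # PID of the exporter; defaults to this shell
nfds=$(ls "/proc/$pid/fd" | wc -l)
limit=$(ulimit -n)
echo "pid $pid: $nfds open descriptors (soft limit $limit)"
```

Run it from cron alongside the job, or in a watch loop, and compare counts
early and late in a run.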



Ken




Posted to dbi-users (dbi.perl.org) on Jul 23, '07 by Gerdes, Ken.