I'm running master-master replication with MySQL 5.1.23-rc built from
source, on Solaris x86 (5.10 Generic_120012-14 i86pc i386 i86pc).
If I run both servers with transaction-isolation REPEATABLE-READ or
SERIALIZABLE the setup works fine and has worked for several weeks.
If I change to READ-COMMITTED or READ-UNCOMMITTED and then run an
INSERT or UPDATE on one server, the other one dies with a segmentation
fault. If I start it back up it tries to recover by replaying the
insert or update (I think), then dies and restarts in a loop.
Initially I thought this might be down to the binlog_format, so I set
it to ROW, bounced both servers, and the problem remains. Is changing
the entry in my.cnf all I need to do to change the binlog format?
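For completeness, this is all I changed, plus how I checked it after
the restart; if some additional step is needed, that may well be what
I'm missing:

    # my.cnf, under [mysqld]
    binlog-format = ROW

    -- then, after restarting, on each server:
    mysql> SHOW VARIABLES LIKE 'binlog_format';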
The relevant lines from the log are:
080328 15:53:26 [Note] Slave SQL thread initialized, starting
replication in log 'mysql-bin.015834' at position 3400, relay log
'/prod/webgroup/local-root/var/slave-relay.018869' position: 580
080328 15:53:26 [Note] Slave I/O thread: connected to master
'firstname.lastname@example.org:3306',replication started in log
'mysql-bin.015834' at position 3862
080328 15:53:26 - mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 49232 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Unfortunately there's no stack trace on Solaris. I do have a
mysqld.trace (about 1.5 MB) of the server starting up and dying in a
loop, available on request.
Has anyone come across this? Is there something in my.cnf that I've
missed? Any and all help much appreciated.
Thanks in advance,