FAQ
Hi!

Mailman qrunners eat a lot of memory on my site:

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
mailman 30263 0.0 0.6 80108 13520 ? S 02:20 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=VirginRunner:0:1
mailman 30584 0.0 0.7 82252 15640 ? S 02:20 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:5:6
mailman 30616 0.0 0.7 81940 15524 ? S 02:20 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:4:6
mailman 30646 0.0 0.8 84020 17548 ? S 02:20 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:3:6
mailman 30672 0.0 0.7 82192 15704 ? S 02:20 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:2:6
mailman 31774 0.0 0.6 79592 13108 ? S 02:20 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=CommandRunner:0:1
mailman 31902 0.0 0.7 88520 15996 ? S 02:20 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=ArchRunner:0:1
mailman 17971 0.0 1.0 87656 21112 ? S 02:36 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=BounceRunner:2:3
mailman 18106 0.0 1.1 90700 24164 ? S 02:36 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=BounceRunner:0:3
mailman 32494 0.0 1.2 92020 25632 ? S 02:52 0:00 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=RetryRunner:0:3
mailman 32571 0.0 1.2 93360 26984 ? S 02:52 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=OutgoingRunner:5:6
mailman 32739 0.0 0.7 81420 15040 ? S 02:52 0:00 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=BounceRunner:1:3
mailman 1875 0.0 1.2 91780 25388 ? S 02:53 0:00 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=RetryRunner:2:3
mailman 5216 0.0 1.3 94492 28124 ? S 02:56 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=OutgoingRunner:2:6
mailman 5779 0.0 0.6 80296 13740 ? S 02:57 0:00 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:0:6
mailman 25687 0.1 1.1 90904 24532 ? S 03:13 0:00 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=OutgoingRunner:3:6
mailman 26274 0.0 0.8 83828 17548 ? S 03:14 0:00 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=RetryRunner:1:3
mailman 5314 0.3 0.7 81504 14984 ? S 03:21 0:00 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=OutgoingRunner:4:6
mailman 5382 0.2 0.5 78340 11968 ? S 03:21 0:00 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=OutgoingRunner:1:6
mailman 5500 0.2 0.4 75856 9520 ? S 03:22 0:00 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=OutgoingRunner:0:6

Sometimes their memory usage even grows to 35 MB each.

I would like to know: is it possible to reduce the qrunners' memory usage, and how?
There are 150 lists on the site and about 800 subscribers maximum.

--
Grigory Batalov


  • Mark Sapiro at Dec 4, 2007 at 4:14 am

    Grigory Batalov wrote:
    I would like to know: is it possible to reduce the qrunners' memory usage, and how?
    There are 150 lists on the site and about 800 subscribers maximum.

    See
    <http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq04.056.htp>.
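
    Beyond what that FAQ covers, the one knob that directly controls how
    many qrunner processes stay resident is the slice count per queue,
    set by QRUNNERS in mm_cfg.py. A minimal sketch, assuming stock
    Mailman 2.1 runner names (the counts here are illustrative, not a
    recommendation):

    # mm_cfg.py -- fewer slices means fewer resident qrunner processes.
    # Defaults.py notes that slice counts should be a power of 2.
    QRUNNERS = [
        ('ArchRunner',     1),
        ('BounceRunner',   1),
        ('CommandRunner',  1),
        ('IncomingRunner', 2),   # the site above runs 6
        ('NewsRunner',     1),
        ('OutgoingRunner', 2),   # the site above runs 6
        ('VirginRunner',   1),
        ('RetryRunner',    1),
        ]

    Restart Mailman (bin/mailmanctl restart) for the change to take effect.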

    --
    Mark Sapiro <mark at msapiro.net> The highway is for gamblers,
    San Francisco Bay Area, California better use your sense - B. Dylan
  • Grigory Batalov at Dec 4, 2007 at 5:56 pm

    On Mon, 3 Dec 2007 20:14:14 -0800, Mark Sapiro wrote:

    I would like to know: is it possible to reduce the qrunners' memory usage, and how?
    There are 150 lists on the site and about 800 subscribers maximum.
    See
    <http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq04.056.htp>.
    Sorry, that's not much help.
    Can you explain to me why the qrunners take more and more memory (RES)?
    It was 25 MB maximum in my previous letter; now it is 36 MB:

    $ top
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    17660 mailman 15 0 101M 36M 2668 S 0.0 1.8 0:41.33 qrunner
    32356 mailman 15 0 100M 35M 2668 S 0.0 1.7 0:38.30 qrunner
    17584 mailman 15 0 100M 35M 2668 S 0.0 1.7 0:40.04 qrunner
    32739 mailman 18 0 99.7M 34M 2660 S 0.0 1.7 0:33.94 qrunner
    3182 mailman 15 0 99.5M 34M 2668 S 0.0 1.7 0:39.10 qrunner
    ...

    Some of them took up to 200 MB (!) before I had to restart them.
    All this looks like a memory leak, sometimes slow and sometimes fast.

    My vmstat output, if you are interested:

    $ vmstat -a 1 3
    procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
    r b swpd free inact active si so bi bo in cs us sy id wa
    1 0 0 1302408 0 0 0 0 0 0 0 1 1 0 94 5
    0 0 0 1302408 0 0 0 0 0 0 0 1625 0 0 99 1
    0 0 0 1303084 0 0 0 0 0 0 0 1800 0 0 88 12

    This is an OpenVZ VE on Linux.

    $ uname -a
    Linux lists.altlinux.org 2.6.18-ovz-smp-alt17 #1 SMP Sat Oct 6 04:41:41 MSD 2007 x86_64 GNU/Linux


    --
    Grigory Batalov,
    ALT Linux Team
  • Brad Knowles at Dec 4, 2007 at 7:19 pm

    Sorry, that's not much help.
    Can you explain to me why the qrunners take more and more memory (RES)?
    You seem to have fewer lists and fewer members per list than some
    of the sites I'm familiar with, but your lists may be higher in
    hourly or daily traffic.
    It was 25 MB maximum in my previous letter; now it is 36 MB:

    $ top
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    17660 mailman 15 0 101M 36M 2668 S 0.0 1.8 0:41.33 qrunner
    32356 mailman 15 0 100M 35M 2668 S 0.0 1.7 0:38.30 qrunner
    17584 mailman 15 0 100M 35M 2668 S 0.0 1.7 0:40.04 qrunner
    32739 mailman 18 0 99.7M 34M 2660 S 0.0 1.7 0:33.94 qrunner
    3182 mailman 15 0 99.5M 34M 2668 S 0.0 1.7 0:39.10 qrunner
    ....

    Some of them took up to 200 MB (!) before I had to restart them.
    All this looks like a memory leak, sometimes slow and sometimes fast.
    That's not so different from what we've got on python.org (see
    <http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq01.015.htp>),
    and the RSS for our qrunners is between 11MB and 41MB, depending on
    the specific runner. Note that neither yours nor ours are sucking up
    any CPU time, so they're primed for being paged or swapped out if you
    do run into any memory pressure. Also note that all the Linux stats
    quoted on
    <http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq04.056.htp>
    are from the python.org machines.

    I'm not a Linux performance tuning expert, but I'm not seeing any
    real problems in what you've shown us so far. If you are seeing
    problems, then you might want to consult a Linux performance tuning
    expert.
    My vmstat output, if you are interested:

    $ vmstat -a 1 3
    procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
    r b swpd free inact active si so bi bo in cs us sy id wa
    1 0 0 1302408 0 0 0 0 0 0 0 1 1 0 94 5
    0 0 0 1302408 0 0 0 0 0 0 0 1625 0 0 99 1
    0 0 0 1303084 0 0 0 0 0 0 0 1800 0 0 88 12

    This is an OpenVZ VE on Linux.
    You've got over 1GB of memory that is marked as "free". I'm not
    seeing any memory pressure here.

    However, I would wonder why your command isn't showing you how much
    memory is inactive or active. This would seem to me to be a system
    problem that you probably want to get resolved, although it doesn't
    have anything to do with Mailman.


    That said, MTAs and mailing lists really, really want direct access
    to their disk subsystems where they handle all their messages, and
    they are likely to perform much worse in a virtual server environment
    than most other types of applications.

    The types of applications that will tend to perform well under
    virtualization are those which are CPU-bound, but are infrequently
    used. The I/O-bound systems, especially those that are disk
    I/O-bound on very specific issues like synchronous meta-data updates
    (which also involves lots of filesystem overhead, as well as physical
    disk I/O), will tend to perform poorly under virtualization.

    --
    Brad Knowles <brad at shub-internet.org>
    LinkedIn Profile: <http://tinyurl.com/y8kpxu>
  • Grigory Batalov at Dec 5, 2007 at 6:36 pm

    On Tue, 4 Dec 2007 13:19:16 -0600, Brad Knowles wrote:

    It was 25 MB maximum in my previous letter; now it is 36 MB:

    $ top
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    17660 mailman 15 0 101M 36M 2668 S 0.0 1.8 0:41.33 qrunner
    32356 mailman 15 0 100M 35M 2668 S 0.0 1.7 0:38.30 qrunner
    17584 mailman 15 0 100M 35M 2668 S 0.0 1.7 0:40.04 qrunner
    32739 mailman 18 0 99.7M 34M 2660 S 0.0 1.7 0:33.94 qrunner
    3182 mailman 15 0 99.5M 34M 2668 S 0.0 1.7 0:39.10 qrunner
    ....

    Some of them took up to 200 MB (!) before I had to restart them.
    All this looks like a memory leak, sometimes slow and sometimes fast.
    That's not so different from what we've got on python.org (see
    <http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq01.015.htp>),
    and the RSS for our qrunners is between 11MB and 41MB, depending on
    the specific runner. Note that neither yours nor ours are sucking up
    any CPU time, so they're primed for being paged or swapped out if you
    do run into any memory pressure. Also note that all the Linux stats
    quoted on
    <http://www.python.org/cgi-bin/faqw-mm.py?req=show&file=faq04.056.htp>
    are from the python.org machines.

    I'm not a Linux performance tuning expert, but I'm not seeing any
    real problems in what you've shown us so far. If you are seeing
    problems, then you might want to consult a Linux performance tuning
    expert.
    The problem is that some qrunners quickly eat memory. Most of them
    use 20-37 MB after 13 hours of running. But today several qrunners
    grew above 200 MB six times! Fortunately I now have Monit, which
    checks memory usage and kills such runners.

    I wrote my previous letter after a server failure, when two greedy
    qrunners took 249 and 235 MB. At that moment even crond couldn't
    fork, and mail delivery was aborted.

    After that I increased the memory limit to 2 GB and started the
    Monit daemon to prevent such a failure.
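
    The check amounts to something like this sketch (illustrative only;
    Monit itself does the real work, and the 200 MB threshold mirrors
    the failures above):

    import os

    LIMIT_KB = 200 * 1024   # 200 MB, the size the runaway runners reached

    def oversized_qrunners():
        # Scan /proc for qrunner processes whose resident size (VmRSS)
        # exceeds the limit; the watchdog then kills whatever is returned.
        bad = []
        for pid in filter(str.isdigit, os.listdir('/proc')):
            try:
                if 'qrunner' not in open('/proc/%s/cmdline' % pid).read():
                    continue
                for line in open('/proc/%s/status' % pid):
                    if line.startswith('VmRSS:') and int(line.split()[1]) > LIMIT_KB:
                        bad.append(int(pid))
            except IOError:
                pass    # the process exited while we were reading /proc
        return bad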

    --
    Grigory Batalov,
    ALT Linux Team
  • Brad Knowles at Dec 6, 2007 at 2:27 am

    On 12/5/07, Grigory Batalov wrote:

    The problem is that some qrunners quickly eat memory. Most of them
    use 20-37 MB after 13 hours of running. But today several qrunners
    grew above 200 MB six times! Fortunately I now have Monit, which
    checks memory usage and kills such runners.
    I'm reasonably sure there aren't any memory leaks in the Mailman or
    Python code, but unless someone who is an expert in locating memory
    leaks in the code can step forward and give us a complete
    stem-to-stern audit and give us hard confirmation one way or the
    other, we're not likely to get any further down this road.

    If you can look at the code and tell us that you're definitely
    finding memory leaks, then I'm sure that the core developers will
    look very closely at that. Otherwise, I know that this is one of
    the things they're always on the lookout for, and they eliminate
    them as soon as they find them.


    If you are, or you can get, a Linux performance tuning expert to look
    closely at your system and tell you exactly what is going on, we'd
    love to find out what they have to say. But we're not Linux
    performance tuning experts ourselves, and it's hard for us to
    guess why you're seeing such strange behaviour when I certainly
    don't recall hearing any such reports from anyone else in a very long
    time.

    The last time we had such reports, it was because someone didn't
    understand the nature of how Unix-like OSes work and how they
    aggressively try to cache everything in memory, which is why I wrote
    the FAQ entry that you do not find to be of any use.


    I am not a Linux performance tuning expert, but I have a fair amount
    of experience in doing general purpose Unix performance tuning, and I
    have a certain amount of lower-level kernel knowledge of how the
    various components within most Unix-like OSes interact with each
    other.

    My problem is that I don't fully understand how this knowledge could
    be transferred or translated into a Linux environment.
    I wrote my previous letter after a server failure, when two greedy
    qrunners took 249 and 235 MB. At that moment even crond couldn't
    fork, and mail delivery was aborted.
    I don't know what's going on. I didn't see it happen. From what I
    have seen of what your tools are reporting, there's definitely some
    very strange stuff going on, but I can't tell if the problem is that
    the tool is broken and therefore it's not reporting useful
    information, or if there is something else going on.

    Certainly, your tools should not be saying that there is literally
    zero active memory and literally zero inactive memory, with over a
    gigabyte of RAM marked as free. That's about as far as possible
    from the situation we would expect to see, given how much memory
    you're reporting the queue runners as using.
    After that I increased the memory limit to 2 GB and started the
    Monit daemon to prevent such a failure.
    That may help, but until you figure out why vmstat is reporting such
    totally and completely bogus numbers, I really don't think you're
    going to get anywhere very useful.

    I suspect, but I have no evidence to back up this claim, that the
    problem may be related to the fact that you're running under a
    virtualization system.

    I would suggest trying to run Mailman and the MTA directly underneath
    the primary OS on the machine (frequently called "domain zero" or
    "dom0" in virtualization parlance), and see if that at least helps
    the tools produce information that makes more sense.

    Running under dom0 may not solve the actual underlying problem of the
    Mailman queue runners sucking up so much RAM, but at the very least
    it would help reduce the complexity of the system we're trying to
    help you debug.

    --
    Brad Knowles <brad at shub-internet.org>
    LinkedIn Profile: <http://tinyurl.com/y8kpxu>
  • Grigory Batalov at Dec 4, 2007 at 5:26 am

    On Tue, 4 Dec 2007 07:01:05 +0300, Grigory Batalov wrote:

    USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND ...
    mailman 30584 0.0 0.7 82252 15640 ? S 02:20 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:5:6
    mailman 30616 0.0 0.7 81940 15524 ? S 02:20 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:4:6
    mailman 30646 0.0 0.8 84020 17548 ? S 02:20 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:3:6
    mailman 30672 0.0 0.7 82192 15704 ? S 02:20 0:01 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:2:6
    mailman 5779 0.0 0.6 80296 13740 ? S 02:57 0:00 /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:0:6
    Also, as you can see, I have no IncomingRunner:1:6, and it fails
    every time I try to start it:

    $ sudo -u mailman /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:1:6

    Traceback (most recent call last):
      File "/usr/share/mailman/bin/qrunner", line 278, in ?
        main()
      File "/usr/share/mailman/bin/qrunner", line 238, in main
        qrunner.run()
      File "/usr/share/mailman/Mailman/Queue/Runner.py", line 71, in run
        filecnt = self._oneloop()
      File "/usr/share/mailman/Mailman/Queue/Runner.py", line 100, in _oneloop
        msg, msgdata = self._switchboard.dequeue(filebase)
      File "/usr/share/mailman/Mailman/Queue/Switchboard.py", line 159, in dequeue
        msg = cPickle.load(fp)
    ValueError: insecure string pickle

    Using "strace" I have found which file it tries to load:

    $ python
    >>> file = open("/var/spool/mailman/in/1196682806.813381+4ffeef3dcbdc578279784fb47aa271ad8f6462f7.bak", "r")
    >>> import cPickle
    >>> msg = cPickle.load(file)
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    ValueError: insecure string pickle

    This file in the spool looks normal except for a "^@" (null byte)
    at the end of the first (long) line:

    ...bYQlIS1Mprnphtkfp4Urlx28fbCEAVsvLFc9KCkIQgH//2Q==\n\n------=_NextPart_000_0003_01C8359F.078C8BD9--\n\n\n^@
    p1
    .(dp1
    S'listname'
    ...

    After I changed the ^@ to a quote (') I could load this file and print the msg.
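
    Scripted, the repair looks roughly like this (a sketch of the manual
    edit; it assumes the stray NUL is the only one in the file, and that
    the entry holds the usual two pickles, message text then metadata):

    import cPickle

    path = ('/var/spool/mailman/in/'
            '1196682806.813381+4ffeef3dcbdc578279784fb47aa271ad8f6462f7.bak')
    data = open(path, 'rb').read()
    assert data.count('\0') == 1          # only the clobbered closing quote
    open(path, 'wb').write(data.replace('\0', "'"))

    fp = open(path, 'rb')
    msg = cPickle.load(fp)                # the raw message text
    msgdata = cPickle.load(fp)            # the metadata dict stored after it
    fp.close()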

    How could a pickle in the queue have become broken?

    P.S. This is mailman-2.1.9.

    --
    Grigory Batalov,
    ALT Linux Team
  • Mark Sapiro at Dec 5, 2007 at 1:06 am

    Grigory Batalov wrote:
    Also, as you can see, I have no IncomingRunner:1:6, and it fails
    every time I try to start it:

    $ sudo -u mailman /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:1:6

    Note that the documentation states that the number of slices must be a
    power of two. I don't think it really matters, and I'm sure it has
    nothing to do with the corrupt pickle, but that's what it says.

    Traceback (most recent call last):
      File "/usr/share/mailman/bin/qrunner", line 278, in ?
        main()
      File "/usr/share/mailman/bin/qrunner", line 238, in main
        qrunner.run()
      File "/usr/share/mailman/Mailman/Queue/Runner.py", line 71, in run
        filecnt = self._oneloop()
      File "/usr/share/mailman/Mailman/Queue/Runner.py", line 100, in _oneloop
        msg, msgdata = self._switchboard.dequeue(filebase)
      File "/usr/share/mailman/Mailman/Queue/Switchboard.py", line 159, in dequeue
        msg = cPickle.load(fp)
    ValueError: insecure string pickle

    Using "strace" I have found which file it tries to load:

    $ python
    file=open("/var/spool/mailman/in/1196682806.813381+4ffeef3dcbdc578279784fb47aa271ad8f6462f7.bak", "r")

    Actually, I hope it was trying to open
    1196682806.813381+4ffeef3dcbdc578279784fb47aa271ad8f6462f7.pck
    and had then renamed it to
    1196682806.813381+4ffeef3dcbdc578279784fb47aa271ad8f6462f7.bak

    dequeue should never be trying to open a .bak file.
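
    In outline, dequeue does something like the following (a simplified
    sketch, not the actual Mailman/Queue/Switchboard.py code):

    import os, cPickle

    def dequeue_sketch(pckpath):
        # Rename first, so a crash in mid-load leaves a .bak file
        # behind for recovery instead of losing the queue entry.
        bakpath = os.path.splitext(pckpath)[0] + '.bak'
        os.rename(pckpath, bakpath)
        fp = open(bakpath, 'rb')
        try:
            msg = cPickle.load(fp)        # the message
            msgdata = cPickle.load(fp)    # the metadata dictionary
        finally:
            fp.close()
        os.unlink(bakpath)                # discard only after a clean load
        return msg, msgdata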

    In any case, the problem file would have been the first one in the
    /var/spool/mailman/in/ directory in sequence by the part of the name
    up to the '+'.

    >>> import cPickle
    >>> msg = cPickle.load(file)
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    ValueError: insecure string pickle

    This file in the spool looks normal except for a "^@" (null byte)
    at the end of the first (long) line:

    ...bYQlIS1Mprnphtkfp4Urlx28fbCEAVsvLFc9KCkIQgH//2Q==\n\n------=_NextPart_000_0003_01C8359F.078C8BD9--\n\n\n^@

    This is the string containing the raw message text and somehow the
    quote at the end of the string has been changed to a null byte (^@). I
    haven't got a clue as to how this could happen unless it's a Python
    cPickle bug of some sort.

    I just tried to make a queue entry of this type from a message that
    had a trailing null byte and, with Python 2.5.1 at least, the null
    byte was properly escaped as '\x00' in the pickle.
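
    The test was essentially this (a minimal reconstruction; the real
    test went through Mailman's queue machinery):

    import cPickle

    s = 'raw message text\n\n\n\x00'          # trailing null, like the bad file
    p = cPickle.dumps(s, 0)                   # protocol 0, as in the queue files
    assert '\\x00' in p and '\x00' not in p   # the null is escaped, not raw
    assert cPickle.loads(p) == s              # and it round-trips cleanly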

    --
    Mark Sapiro <mark at msapiro.net> The highway is for gamblers,
    San Francisco Bay Area, California better use your sense - B. Dylan
  • Grigory Batalov at Dec 5, 2007 at 6:46 am

    On Tue, 4 Dec 2007 17:06:36 -0800, Mark Sapiro wrote:

    Also, as you can see, I have no IncomingRunner:1:6, and it fails
    every time I try to start it:

    $ sudo -u mailman /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:1:6
    Note that the documentation states that the number of slices must be a
    power of two.
    Which documentation?

    Mailman/Defaults.py.in:677
    # BAW: Although not enforced, the # of slices must be a power of 2

    $ /usr/share/mailman/bin/qrunner --help
    ...
    -r runner[:slice:range]
    --runner=runner[:slice:range]
    Run the named qrunner, which must be one of the strings returned by
    the -l option. Optional slice:range if given, is used to assign
    multiple qrunner processes to a queue. range is the total number of
    qrunners for this queue while slice is the number of this qrunner from
    [0..range).

    If using the slice:range form, you better make sure that each qrunner
    for the queue is given the same range value. If slice:runner is not
    given, then 1:1 is used.

    Multiple -r options may be given, in which case each qrunner will run
    once in round-robin fashion. The special runner `All' is shorthand
    for a qrunner for each listed by the -l option.
    ...

    If a power of 2 is important, it should be noted in qrunner's help too.
    (Also "then 1:1 is used" is wrong; usually 0:1 is used.)
    I don't think it really matters, and I'm sure it has
    nothing to do with the corrupt pickle, but that's what it says.
    ...
    Using "strace" I have found which file it tries to load:

    $ python
    file=open("/var/spool/mailman/in/1196682806.813381+4ffeef3dcbdc578279784fb47aa271ad8f6462f7.bak", "r")
    Actually, I hope it was trying to open
    1196682806.813381+4ffeef3dcbdc578279784fb47aa271ad8f6462f7.pck
    and had then renamed it to
    1196682806.813381+4ffeef3dcbdc578279784fb47aa271ad8f6462f7.bak

    dequeue should never be trying to open a .bak file.
    Sure, I just took the last one mentioned in the strace log.

    --
    Grigory Batalov,
    ALT Linux Team

  • Mark Sapiro at Dec 6, 2007 at 3:35 am

    Grigory Batalov wrote:
    On Tue, 4 Dec 2007 17:06:36 -0800, Mark Sapiro wrote:

    Also, as you can see, I have no IncomingRunner:1:6, and it fails
    every time I try to start it:

    $ sudo -u mailman /usr/bin/python /usr/share/mailman/bin/qrunner --runner=IncomingRunner:1:6
    Note that the documentation states that the number of slices must be a
    power of two.
    Which documentation?

    Mailman/Defaults.py.in:677
    # BAW: Although not enforced, the # of slices must be a power of 2

    That's the one I was referring to.

    $ /usr/share/mailman/bin/qrunner --help
    ...
    -r runner[:slice:range]
    --runner=runner[:slice:range]
    Run the named qrunner, which must be one of the strings returned by
    the -l option. Optional slice:range if given, is used to assign
    multiple qrunner processes to a queue. range is the total number of
    qrunners for this queue while slice is the number of this qrunner from
    [0..range).

    If using the slice:range form, you better make sure that each qrunner
    for the queue is given the same range value. If slice:runner is not
    given, then 1:1 is used.

    Multiple -r options may be given, in which case each qrunner will run
    once in round-robin fashion. The special runner `All' is shorthand
    for a qrunner for each listed by the -l option.
    ...

    If a power of 2 is important, it should be noted in qrunner's help too.
    (Also "then 1:1 is used" is wrong; usually 0:1 is used.)
    I don't think it really matters, and I'm sure it has
    nothing to do with the corrupt pickle, but that's what it says.

    As I said, I don't think it is important. I just mentioned it in the
    first place because I'd never seen a non-power-of-two number of slices
    used before.

    As for the default being 1:1: if you look in the code, that actually is
    the default, and it doesn't matter, because if the range is 1 the slice
    number is ignored.
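
    For what it's worth, slices just partition the hash space of queue
    entries by integer division, which is why a non-power-of-two count
    happens to work; roughly (simplified from Mailman/Queue/Switchboard.py):

    shamax = 0xffffffffffffffffffffffffffffffffffffffffL   # 160-bit SHA max

    def slice_bounds(slice, numslices):
        # Each runner handles queue entries whose hash falls in its range.
        lower = ((shamax + 1) * slice) / numslices
        upper = (((shamax + 1) * (slice + 1)) / numslices) - 1
        return lower, upper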

    I agree that it's inconsistent, but I don't think it's worth breaking all the
    translations to change it.

    --
    Mark Sapiro <mark at msapiro.net> The highway is for gamblers,
    San Francisco Bay Area, California better use your sense - B. Dylan
