Is Unladen Swallow dead?
There has been little or no activity in this project in recent
months, and the last comments on its mailing list seem to
confirm that its future is uncertain.
The lack of updates, news, or discussion is also very strange,
especially considering that the merging plan has been approved. Or
hasn't it?


  • Dan Stromberg at Nov 16, 2010 at 11:40 pm
    I'm not a contributor to the U-S project, but I have been monitoring
    alternative python implementations' progress some, and a lot seems to be
    happening around pypy all of a sudden.

    I think unladen swallow has produced performance improvements, and they are
    likely to be merged into cpython 3.3.

    However, the improvements are not quite as substantial as were hoped for,
    and pypy seems to be getting some steam behind it. With pypy looking likely
    to even be able to provide source-level compatibility with C extension
    modules, the need for unladen swallow is perhaps somewhat lessened. At the
    outset, the U-S people looked at pypy and got the impression it was years
    from being a fast alternative to cpython for production code, but it sounds
    like pypy's coming along nicely now.

    Then again, the pypy people don't seem to feel that C extensions are going
    to perform that well in pypy (it sounds like an issue of initialization
    overhead and infeasibility of JIT compiling C extensions).

    I've been testing some of my own code in cpython 2.6, cpython 3.1 and pypy
    1.3; with this specific program pypy seems to be about 4x faster than
    cpython 2.6, and almost that much faster than cpython 3.1 (I have the same -
    albeit unfinished - code running on all 3). This code is I/O intensive and
    somewhat math-intensive.
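
    A quick way to reproduce this kind of comparison is to time the same
    script under each interpreter. The harness below is a generic sketch
    (the workload is invented, not the poster's program); run the identical
    file under python2.6, python3.1, and pypy and compare the times.

```python
# Generic cross-interpreter timing harness (illustrative workload only).
# Run the identical file under CPython 2.6, CPython 3.1, and PyPy.
import time

def workload(n=1000000):
    # Math-flavored inner loop standing in for the real program.
    total = 0
    for i in range(n):
        total += i * i % 7
    return total

start = time.time()
result = workload()
elapsed = time.time() - start
print("result=%d elapsed=%.3fs" % (result, elapsed))
```

    It uses only %-formatting and range(), so it runs unchanged on both
    Python 2 and Python 3.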
    On Tue, Nov 16, 2010 at 2:30 PM, laspi wrote:

    There has been little or no activity in this project in recent
    months, and the last comments on its mailing list seem to
    confirm that its future is uncertain.
    The lack of updates, news, or discussion is also very strange,
    especially considering that the merging plan has been approved. Or
    hasn't it?
    --
    http://mail.python.org/mailman/listinfo/python-list
  • Swapnil at Nov 17, 2010 at 6:24 am

    On Nov 17, 3:30 am, laspi wrote:
    There has been little or no activity in this project in recent
    months, and the last comments on its mailing list seem to
    confirm that its future is uncertain.
    The lack of updates, news, or discussion is also very strange,
    especially considering that the merging plan has been approved. Or
    hasn't it?
    AFAIK, the merging plan was approved by Guido early this year. I guess
    Google is expecting the community to drive the project from here on.
    That was the whole idea for merging it to mainline. From my last
    conversation with Collin, they are targeting Python 3.3
  • John Nagle at Nov 17, 2010 at 7:02 am

    On 11/16/2010 10:24 PM, swapnil wrote:
    On Nov 17, 3:30 am, laspi wrote:
    There has been little or no activity in this project in recent
    months, and the last comments on its mailing list seem to
    confirm that its future is uncertain.
    The lack of updates, news, or discussion is also very strange,
    especially considering that the merging plan has been approved. Or
    hasn't it?
    AFAIK, the merging plan was approved by Guido early this year. I guess
    Google is expecting the community to drive the project from here on.
    That was the whole idea for merging it to mainline. From my last
    conversation with Collin, they are targeting Python 3.3
    I think it's dead. They're a year behind on quarterly releases.
    The last release was Q3 2009. The project failed to achieve its
    stated goal of a 5x speedup. Not even close. More like 1.5x
    (http://www.python.org/dev/peps/pep-3146)

    The Google blog at
    "http://groups.google.com/group/unladen-swallow/browse_thread/thread/f2011129c4414d04"
    says, as of November 8, 2010:

    "Jeffrey and I have been pulled on to other projects of higher
    importance to Google. Unfortunately, no-one from the Python
    open-source community has been interested in picking up the merger
    work, and since none of the original team is still full-time on the
    project, it's moving very slowly. Finishing up the merger into the
    py3k-jit branch is a high priority for me this quarter, but what
    happens then is an open question."

    So Google has pulled the plug on Unladen Swallow. It looks
    like they underestimated the difficulty of speeding up the CPython
    model. The performance improvement achieved was so low
    that cluttering up CPython with a JIT system and LLVM probably is
    a lose.

    John Nagle
  • BartC at Nov 18, 2010 at 12:24 pm
    "John Nagle" <nagle at animats.com> wrote in message
    news:4ce37e01$0$1666$742ec2ed at news.sonic.net...
    On 11/16/2010 10:24 PM, swapnil wrote:

    AFAIK, the merging plan was approved by Guido early this year. I guess
    Google is expecting the community to drive the project from here on.
    That was the whole idea for merging it to mainline. From my last
    conversation with Collin, they are targeting Python 3.3
    I think it's dead. They're a year behind on quarterly releases.
    The last release was Q3 2009. The project failed to achieve its
    stated goal of a 5x speedup. Not even close. More like 1.5x
    (http://www.python.org/dev/peps/pep-3146)
    There must have been good reasons to predict a 5x increase. But why did it
    take so long to find out the approach wasn't going anywhere?

    Assuming the 5x speedup was shown to be viable (ie. performing the same
    benchmarks, on the same data, can be done that quickly in any other
    language, and allowing for the overheads associated with Python's dynamic
    nature), then what went wrong?

    (I've had a look at the benchmarks, with a view to trying some on other
    languages, and they seem an extraordinarily difficult bunch to work with.)
    The Google blog at
    "http://groups.google.com/group/unladen-swallow/browse_thread/thread/f2011129c4414d04"
    says, as of November 8, 2010:
    So Google has pulled the plug on Unladen Swallow. It looks
    like they underestimated the difficulty of speeding up the CPython
    model. The performance improvement achieved was so low
    that cluttering up CPython with a JIT system and LLVM probably is
    a lose.
    LLVM. Ok, that explains a lot. (LLVM is a huge, complex system).

    --
    Bartc
  • Dan Stromberg at Nov 18, 2010 at 5:45 pm

    On Thu, Nov 18, 2010 at 4:24 AM, BartC wrote:

    "John Nagle" <nagle at animats.com> wrote in message
    news:4ce37e01$0$1666$742ec2ed at news.sonic.net...
    On 11/16/2010 10:24 PM, swapnil wrote:
    AFAIK, the merging plan was approved by Guido early this year. I guess
    Google is expecting the community to drive the project from here on.
    That was the whole idea for merging it to mainline. From my last
    conversation with Collin, they are targeting Python 3.3
    I think it's dead. They're a year behind on quarterly releases.
    The last release was Q3 2009. The project failed to achieve its
    stated goal of a 5x speedup. Not even close. More like 1.5x
    (http://www.python.org/dev/peps/pep-3146)
    There must have been good reasons to predict a 5x increase. But why did it
    take so long to find out the approach wasn't going anywhere?
    It's not that it wasn't going anywhere. Improvements were accomplished.

    Also, LLVM needed to be modified more than a little bit for the purpose.

    Also, the developers were constrained a bit by the need to preserve
    compatibility with some pretty weird code - like methods being replaced
    mid-loop.

    It's not a small project.
  • John Nagle at Nov 18, 2010 at 6:31 pm

    On 11/18/2010 4:24 AM, BartC wrote:
    "John Nagle" <nagle at animats.com> wrote in message
    news:4ce37e01$0$1666$742ec2ed at news.sonic.net...
    On 11/16/2010 10:24 PM, swapnil wrote:

    AFAIK, the merging plan was approved by Guido early this year. I
    guess Google is expecting the community to drive the project
    from here on. That was the whole idea for merging it to mainline.
    From my last conversation with Collin, they are targeting Python
    3.3
    I think it's dead. They're a year behind on quarterly releases.
    The last release was Q3 2009. The project failed to achieve its
    stated goal of a 5x speedup. Not even close. More like 1.5x
    (http://www.python.org/dev/peps/pep-3146)
    There must have been good reasons to predict a 5x increase.
    For Java, adding a JIT improved performance by much more than that.
    Hard-code compilers for LISP have done much better than 5x. The
    best Java and LISP compilers approach the speed of C, while CPython
    is generally considered to be roughly 60 times slower than C. So
    5x probably looked like a conservative goal. For Google, a company
    which buys servers by the acre, a 5x speedup would have a big payoff.
    Assuming the 5x speedup was shown to be viable (ie. performing the
    same benchmarks, on the same data, can be done that quickly in any
    other language, and allowing for the overheads associated with
    Python's dynamic nature), then what went wrong?
    Python is defined by what a naive interpreter with late binding
    and dynamic name lookups, like CPython, can easily implement. Simply
    emulating the semantics of CPython with generated code doesn't help
    all that much.

    Because you can "monkey patch" Python objects from outside the
    class, a local compiler, like a JIT, can't turn name lookups into hard
    bindings. Nor can it make reliable decisions about the types of
    objects. That adds a sizable performance penalty. Short of global
    program analysis, the compiler can't tell when code for the hard cases
    needs to be generated. So the hard-case code, where you figure out at
    run-time, for every use of "+", whether "+" is addition or concatenation,
    has to be generated every time. Making that decision is far slower
    than doing an add.
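
    The dynamism Nagle describes is easy to demonstrate: a method can be
    swapped out from outside the class, even mid-loop, so a compiler cannot
    bind the name to a fixed address. The class and method names below are
    invented for illustration.

```python
# Monkey patching from outside the class: every later lookup must notice
# the change, so name lookups can't be compiled into hard bindings.

class Point:
    def __init__(self, x):
        self.x = x

    def describe(self):
        return "point at %d" % self.x

p = Point(3)
results = []
for i in range(4):
    results.append(p.describe())
    if i == 1:
        # Replace the method mid-loop, from outside the class.
        Point.describe = lambda self: "patched!"

print(results)  # the last two calls pick up the patched method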

    Shed Skin, which analyzes the entire program, including libraries,
    on every compilation, can figure out the types of objects and generate
    much faster code. Shed Skin has some heavy restrictions, many of which
    could be lifted if more work went into that effort. That's one
    approach that might work.

    I've referred to this problem as "gratuitous hidden dynamism".
    Most things which could be changed dynamically in a Python program
    usually aren't.

    This has been pointed out many times by many people. There's
    even a PhD thesis on the topic. Without a few restrictions, so
    that a compiler can at least tell when support for the hard cases
    is needed, Python cannot be compiled well.

    John Nagle
  • Jean-Paul Calderone at Nov 18, 2010 at 10:59 pm

    On Nov 18, 1:31 pm, John Nagle wrote:
    On 11/18/2010 4:24 AM, BartC wrote:
    "John Nagle" <na... at animats.com> wrote in message
    news:4ce37e01$0$1666$742ec2ed at news.sonic.net...
    On 11/16/2010 10:24 PM, swapnil wrote:

    AFAIK, the merging plan was approved by Guido early this year. I
    guess Google is expecting the community to drive the project
    from here on. That was the whole idea for merging it to mainline.
    From my last conversation with Collin, they are targeting Python
    3.3
    I think it's dead. They're a year behind on quarterly releases.
    The last release was Q3 2009. The project failed to achieve its
    stated goal of a 5x speedup. Not even close. More like 1.5x
    (http://www.python.org/dev/peps/pep-3146)
    There must have been good reasons to predict a 5x increase.
    For Java, adding a JIT improved performance by much more than that.
    Hard-code compilers for LISP have done much better than 5x. The
    best Java and LISP compilers approach the speed of C, while CPython
    is generally considered to be roughly 60 times slower than C. So
    5x probably looked like a conservative goal. For Google, a company
    which buys servers by the acre, a 5x speedup would have a big payoff.
    Assuming the 5x speedup was shown to be viable (ie. performing the
    same benchmarks, on the same data, can be done that quickly in any
    other language, and allowing for the overheads associated with
    Python's dynamic nature), then what went wrong?
    Python is defined by what a naive interpreter with late binding
    and dynamic name lookups, like CPython, can easily implement. Simply
    emulating the semantics of CPython with generated code doesn't help
    all that much.

    Because you can "monkey patch" Python objects from outside the
    class, a local compiler, like a JIT, can't turn name lookups into hard
    bindings. Nor can it make reliable decisions about the types of
    objects. That adds a sizable performance penalty. Short of global
    program analysis, the compiler can't tell when code for the hard cases
    needs to be generated. So the hard-case code, where you figure out at
    run-time, for every use of "+", whether "+" is addition or concatenation,
    has to be generated every time. Making that decision is far slower
    than doing an add.
    This isn't completely accurate. It *is* possible to write a JIT
    compiler for a Python runtime which has fast path code for the common
    case, the case where the meaning of "+" doesn't change between every
    opcode. PyPy has produced some pretty good results with this approach.

    For those who haven't seen it yet, http://speed.pypy.org/ has some
    graphs which reflect fairly well on PyPy's performance for benchmarks
    that are not entirely dissimilar to real world code.

    Jean-Paul
  • Carl Banks at Nov 19, 2010 at 2:23 am

    On Nov 18, 2:59 pm, Jean-Paul Calderone wrote:
    On Nov 18, 1:31?pm, John Nagle wrote:
    On 11/18/2010 4:24 AM, BartC wrote:

    "John Nagle" <na... at animats.com> wrote in message
    news:4ce37e01$0$1666$742ec2ed at news.sonic.net...
    On 11/16/2010 10:24 PM, swapnil wrote:

    AFAIK, the merging plan was approved by Guido early this year. I
    guess Google is expecting the community to drive the project
    from here on. That was the whole idea for merging it to mainline.
    From my last conversation with Collin, they are targeting Python
    3.3
    I think it's dead. They're a year behind on quarterly releases.
    The last release was Q3 2009. The project failed to achieve its
    stated goal of a 5x speedup. Not even close. More like 1.5x
    (http://www.python.org/dev/peps/pep-3146)
    There must have been good reasons to predict a 5x increase.
    For Java, adding a JIT improved performance by much more than that.
    Hard-code compilers for LISP have done much better than 5x. The
    best Java and LISP compilers approach the speed of C, while CPython
    is generally considered to be roughly 60 times slower than C. So
    5x probably looked like a conservative goal. For Google, a company
    which buys servers by the acre, a 5x speedup would have a big payoff.
    Assuming the 5x speedup was shown to be viable (ie. performing the
    same benchmarks, on the same data, can be done that quickly in any
    other language, and allowing for the overheads associated with
    Python's dynamic nature), then what went wrong?
    Python is defined by what a naive interpreter with late binding
    and dynamic name lookups, like CPython, can easily implement. Simply
    emulating the semantics of CPython with generated code doesn't help
    all that much.
    Because you can "monkey patch" Python objects from outside the
    class, a local compiler, like a JIT, can't turn name lookups into hard
    bindings. Nor can it make reliable decisions about the types of
    objects. That adds a sizable performance penalty. Short of global
    program analysis, the compiler can't tell when code for the hard cases
    needs to be generated. So the hard-case code, where you figure out at
    run-time, for every use of "+", whether "+" is addition or concatenation,
    has to be generated every time. Making that decision is far slower
    than doing an add.
    This isn't completely accurate. It *is* possible to write a JIT
    compiler for a Python runtime which has fast path code for the common
    case, the case where the meaning of "+" doesn't change between every
    opcode. PyPy has produced some pretty good results with this approach.
    Right. The key is to be able to dispatch on the type of object once
    for a given chunk of code, which is possible if you do some kind of
    flow path analysis on the function/chunk.

    PyPy is starting to look much better of late, I kind of thought their
    first approach was wrong (or at least too much) but they seem to have
    pushed through it.


    Carl Banks
  • Mark Wooding at Nov 19, 2010 at 1:35 am

    John Nagle <nagle at animats.com> writes:

    Python is defined by what a naive interpreter with late binding
    and dynamic name lookups, like CPython, can easily implement. Simply
    emulating the semantics of CPython with generated code doesn't help
    all that much.
    Indeed.
    Because you can "monkey patch" Python objects from outside the
    class, a local compiler, like a JIT, can't turn name lookups into hard
    bindings. Nor can it make reliable decisions about the types of
    objects.
    But it /can/ make guesses. A dynamic runtime doesn't have to predict
    everything right in advance; it only has to predict most things sort of
    well enough, and fix up the things it got wrong before anyone notices.
    For example, a Python compiler could inline a function call if it makes
    a note to recompile the calling function if the called function is
    modified. Most functions aren't redefined, so this is probably a pretty
    good guess.
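
    The "inline, but note a dependency" idea can be sketched as a call-site
    cache guarded by a redefinition check. Everything below (CallSite,
    add_one) is an invented illustration; a real system would invalidate
    compiled code via callbacks rather than re-checking on every call.

```python
# Sketch of guarded inlining: cache the callee at a call site and
# treat a redefinition as a cache miss ("recompile").

def add_one(x):
    return x + 1

class CallSite:
    def __init__(self, get_callee):
        self.get_callee = get_callee   # how to look the name up again
        self.cached = get_callee()     # the "inlined" callee

    def call(self, x):
        current = self.get_callee()
        if current is not self.cached:  # guard: was the callee redefined?
            self.cached = current       # refresh, i.e. "recompile"
        return self.cached(x)

site = CallSite(lambda: add_one)
print(site.call(1))    # 2

def add_one(x):        # redefinition: most code never does this
    return x + 100

print(site.call(1))    # 101: the guard caught the change
```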
    That adds a sizable performance penalty. Short of global program
    analysis, the compiler can't tell when code for the hard cases needs
    to be generated.
    The right approach is to guess that things are going to be done the easy
    way, and then detect when the guess is wrong.
    So the hard-case code, where you figure out at run-time, for every use
    of "+", whether "+" is addition or concatenation, has to be generated
    every time. Making that decision is far slower than doing an add.
    There's an old trick here called `inline caching'. The first time a
    function is called, compile it so as to assume that types of things are
    as you found this time: inline simple methods, and so on. Insert some
    quick type checks at the top: is this going to work next time? If not,
    take a trap back into the compiler. The traditional approach is to
    replace the mispredictions with full dispatches (`monomorphic inline
    caching'); the clever approach tolerates a few different types,
    dispatching to optimized code for each (`polymorphic inline caching'),
    unless there are just too many decision points and you give up.
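
    A monomorphic inline cache for the "+" dispatch discussed upthread can
    be sketched in a few lines. PlusSite and its fields are invented names,
    and a real JIT would emit machine code rather than call through Python
    objects.

```python
# Monomorphic inline cache for "+": specialize on the operand type seen
# last time, guard it with a quick check, and re-specialize on a miss.

class PlusSite:
    def __init__(self):
        self.cached_type = None
        self.fast_op = None
        self.misses = 0

    def apply(self, a, b):
        if type(a) is self.cached_type:          # quick check at the top
            return self.fast_op(a, b)            # fast path: no dispatch
        self.misses += 1                         # trap back into the "compiler"
        self.cached_type = type(a)
        self.fast_op = self.cached_type.__add__  # e.g. int.__add__ or str.__add__
        return self.fast_op(a, b)

site = PlusSite()
print(site.apply(1, 2))      # 3    (miss: specialize for int)
print(site.apply(3, 4))      # 7    (hit: fast path)
print(site.apply("a", "b"))  # 'ab' (miss: re-specialize for str)
print(site.misses)           # 2
```

    A polymorphic cache would keep a small table of (type, operation)
    pairs instead of a single slot.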

    There are time/space tradeoffs to be made here too. Fortunately, you
    don't have to compile everything super-optimized from the get-go: you
    can dynamically identify the inner loops which need special attention,
    and get the compiler to really stare hard at them. The rest of the
    program might plausibly be left interpreted much of the time for all
    anyone will care.
    I've referred to this problem as "gratuitous hidden dynamism".
    Most things which could be changed dynamically in a Python program
    usually aren't.
    This is one of the crucial observations for making a dynamic language go
    fast; the other is that you still have the compiler around if you
    guessed wrong.

    An aggressively dynamic runtime has two enormous advantages over batch
    compilers such as are traditionally used for C: it gets the entire
    program in one go, and it gets to see the real live data that the
    program's meant to run against. Given that, I'd expect it to be able to
    /beat/ a batch compiler in terms of performance.
    This has been pointed out many times by many people. There's
    even a PhD thesis on the topic. Without a few restrictions, so
    that a compiler can at least tell when support for the hard cases
    is needed, Python cannot be compiled well.
    This assumes static compilation. It's the wrong approach for a dynamic
    language like Python.

    -- [mdw]
  • John Nagle at Nov 19, 2010 at 3:57 am

    On 11/18/2010 5:35 PM, Mark Wooding wrote:
    John Nagle <nagle at animats.com> writes:
    Python is defined by what a naive interpreter with late binding
    and dynamic name lookups, like CPython, can easily implement. Simply
    emulating the semantics of CPython with generated code doesn't help
    all that much.
    Indeed.
    Because you can "monkey patch" Python objects from outside the
    class, a local compiler, like a JIT, can't turn name lookups into hard
    bindings. Nor can it make reliable decisions about the types of
    objects.
    But it /can/ make guesses. A dynamic runtime doesn't have to predict
    everything right in advance; it only has to predict most things sort of
    well enough, and fix up the things it got wrong before anyone notices.
    For example, a Python compiler could inline a function call if it makes
    a note to recompile the calling function if the called function is
    modified. Most functions aren't redefined, so this is probably a pretty
    good guess.
    That adds a sizable performance penalty. Short of global program
    analysis, the compiler can't tell when code for the hard cases needs
    to be generated.
    The right approach is to guess that things are going to be done the easy
    way, and then detect when the guess is wrong.
    That's been done successfully for Self and JavaScript. It's not
    easy. See this talk on JaegerMonkey:

    http://blog.cdleary.com/2010/09/picing-on-javascript-for-fun-and-profit/

    The effort needed to do that for Javascript is justified by the size
    of the installed base.

    The Unladen Swallow people had plans to go in that direction, but they
    underestimated the size of the job.

    John Nagle
  • Stefan Behnel at Nov 20, 2010 at 8:02 pm

    Mark Wooding, 19.11.2010 02:35:
    John Nagle writes:
    This has been pointed out many times by many people. There's
    even a PhD thesis on the topic. Without a few restrictions, so
    that a compiler can at least tell when support for the hard cases
    is needed, Python cannot be compiled well.
    This assumes static compilation. It's the wrong approach for a dynamic
    language like Python.
    Cython does a pretty good job at that, though. It also optimistically
    optimises a couple of things even during static compilation; e.g.
    "x.append(y)" hints at "x" being a list, even if static analysis
    can't prove that.
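
    That optimistic guess can be mimicked in plain Python: assume x is a
    list because the code appends to it, guard the assumption cheaply, and
    fall back otherwise. This is a sketch of the idea, not Cython's actual
    generated code; append_specialized is an invented name.

```python
# Sketch of optimistic type specialization: guess that x is a list,
# guard the guess cheaply, and fall back to generic dispatch if wrong.

def append_specialized(x, y):
    if type(x) is list:      # guard on the optimistic guess
        list.append(x, y)    # would be a direct C-level call in Cython
    else:
        x.append(y)          # generic fallback for other list-likes

items = []
append_specialized(items, 1)
append_specialized(items, 2)
print(items)  # [1, 2]
```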

    Stefan
  • John Ladasky at Nov 17, 2010 at 8:49 pm

    On Nov 16, 2:30 pm, laspi wrote:
    Is Unladen Swallow dead?
    No, it's just resting.
  • John Nagle at Nov 17, 2010 at 10:09 pm

    On 11/17/2010 12:49 PM, John Ladasky wrote:
    On Nov 16, 2:30 pm, laspi wrote:
    Is Unladen Swallow dead?
    No, it's just resting.
    For those who don't get that, the Monty Python reference:
    "http://www.mtholyoke.edu/~ebarnes/python/dead-parrot.htm"

    Owner: Oh yes, the, uh, the Norwegian Blue...What's,uh...What's wrong
    with it?

    Mr. Praline: I'll tell you what's wrong with it, my lad. 'E's dead,
    that's what's wrong with it!

    Owner: No, no, 'e's uh,...he's resting.

    Mr. Praline: Look, matey, I know a dead parrot when I see one, and I'm
    looking at one right now.

    Owner: No no he's not dead, he's, he's restin'! Remarkable bird, the
    Norwegian Blue, idn'it, ay? Beautiful plumage!

    Mr. Praline: The plumage don't enter into it. It's stone dead.

    Owner: Nononono, no, no! 'E's resting!

    Mr. Praline: All right then, if he's restin', I'll wake him up!
    (shouting at the cage) 'Ello, Mister Polly Parrot! I've got a lovely
    fresh cuttle fish for you if you show...

    (owner hits the cage)

    Owner: There, he moved!

    Mr. Praline: No, he didn't, that was you hitting the cage!

    Owner: I never!!

    Mr. Praline: Yes, you did!

    Owner: I never, never did anything...

    Mr. Praline: (yelling and hitting the cage repeatedly) 'ELLO POLLY!!!!!
    Testing! Testing! Testing! Testing! This is your nine o'clock alarm call!

    (Takes parrot out of the cage and thumps its head on the counter. Throws
    it up in the air and watches it plummet to the floor.)

    Mr. Praline: Now that's what I call a dead parrot.

    (There's more, but you get the idea.)

    John Nagle
  • Philip Semanchuk at Nov 17, 2010 at 10:30 pm

    On Nov 17, 2010, at 5:09 PM, John Nagle wrote:
    On 11/17/2010 12:49 PM, John Ladasky wrote:
    On Nov 16, 2:30 pm, laspi wrote:
    Is Unladen Swallow dead?
    No, it's just resting.
    For those who don't get that, The Monty Python reference:
    "http://www.mtholyoke.edu/~ebarnes/python/dead-parrot.htm"
    A link to the source material:
    http://www.youtube.com/user/montypython?blend=1&ob=4#p/c/6FD5A97331C1B802/0/npjOSLCR2hE


    --
    Cheers
    Philip
  • Alexander Kapps at Nov 17, 2010 at 10:51 pm

    On 17.11.2010 23:09, John Nagle wrote:
    On 11/17/2010 12:49 PM, John Ladasky wrote:
    On Nov 16, 2:30 pm, laspi wrote:
    Is Unladen Swallow dead?
    No, it's just resting.
    For those who don't get that, The Monty Python reference:
    "http://www.mtholyoke.edu/~ebarnes/python/dead-parrot.htm"
    Thank you, John, for making my light-enough wallet even lighter; now I
    have to go and buy the original English version. It seems the German
    translation sucks (misses a lot) and my copy lacks the original dub.

    Damned.

    :-)
  • Robert Kern at Nov 17, 2010 at 11:39 pm

    On 11/17/10 4:51 PM, Alexander Kapps wrote:
    On 17.11.2010 23:09, John Nagle wrote:
    On 11/17/2010 12:49 PM, John Ladasky wrote:
    On Nov 16, 2:30 pm, laspi wrote:
    Is Unladen Swallow dead?
    No, it's just resting.
    For those who don't get that, The Monty Python reference:
    "http://www.mtholyoke.edu/~ebarnes/python/dead-parrot.htm"
    Thank you John for making my light enough Wallet even lighter, now I have to go
    and buy the original English version. Seems the German translation sucks (misses
    a lot) and my copy lacks the original dub.
    They're all (legitimately) on Youtube now.

    http://www.youtube.com/watch?v=npjOSLCR2hE

    --
    Robert Kern

    "I have come to believe that the whole world is an enigma, a harmless enigma
    that is made terrible by our own mad attempt to interpret it as though it had
    an underlying truth."
    -- Umberto Eco
  • Martin Gregorie at Nov 17, 2010 at 11:43 pm

    On Wed, 17 Nov 2010 23:51:01 +0100, Alexander Kapps wrote:
    On 17.11.2010 23:09, John Nagle wrote:
    On 11/17/2010 12:49 PM, John Ladasky wrote:
    On Nov 16, 2:30 pm, laspi wrote:
    Is Unladen Swallow dead?
    No, it's just resting.
    For those who don't get that, The Monty Python reference:
    "http://www.mtholyoke.edu/~ebarnes/python/dead-parrot.htm"
    Thank you John for making my light enough Wallet even lighter, now I
    have to go and buy the original English version. Seems the German
    translation sucks (misses a lot) and my copy lacks the original dub.
    While you're at it, pick up the video of "Monty Python and the Holy
    Grail". The project name, Unladen Swallow, is a reference to the film.


    --
    martin@ | Martin Gregorie
    gregorie. | Essex, UK
    org |
  • Brf256 at Nov 18, 2010 at 5:50 pm
    I do hope that it isn't dead, but it unfortunately seems that way.

    - Braden Faulkner
  • Ben James at Nov 18, 2010 at 7:13 pm

    On 18/11/2010 17:50, brf256 at gmail.com wrote:
    I do hope that it isn't dead, but it unfortunately seems that way.

    - Braden Faulkner
    It looks like we'll have to get used to a laden one.

    Maybe if we had two, and a strand of creeper...

Discussion Overview
group: python-list
categories: python
posted: Nov 16, '10 at 10:30p
active: Nov 20, '10 at 8:02p
posts: 20
users: 16
website: python.org
