  • Andrew Dalke at Aug 20, 2003 at 10:00 pm

    dan:
    However, there are definitely cases where a lot of code would need to
    be optimized, and so I ask the question: How fast is Python, compared
    to say a typical optimizing C/C++ compiler?
    Highly dependent on context. I use factor of 10-20 as a ballpark,
    with factor of 100 for some things like low-level string processing.
    Eg, I've got a pure Python regexp engine which clocks at about x80
    slower than sre.
    what could be done to optimize the interpreter? Are any parts written
    in assembly? Could things like hash tables be optimized with parallel
    units such as MMX? Etc.
    Spend a few tens of millions on developing just-in-time compilers
    and program analysis. That worked for Java.

    Nothing is written in assembly, except that C can be considered
    a portable assembly language. Otherwise ports to different platforms
    would be a lot more difficult.

    I would hope that the C compiler could optimize the C code
    sufficiently well for the hardware, rather than tweaking the
    code by hand. (Though I know of at least one person who sent
    in a patch to gcc to optimize poorly written in-house code.
    Rather circuitous way to fix things, but it worked.)

    Andrew
    dalke at dalkescientific.com
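    A rough sketch of the kind of gap described above for low-level string
    processing: the same counting job done in a pure-Python loop and via the
    C-implemented str.count. The names and sizes are illustrative only, and
    the figures will vary by machine.

    import time

    def count_python(text, ch):
        # pure-Python character scan
        n = 0
        for c in text:
            if c == ch:
                n += 1
        return n

    def count_c(text, ch):
        # the inner loop runs in C
        return text.count(ch)

    if __name__ == '__main__':
        data = "abcdefghij" * 100000
        for func in (count_python, count_c):
            start = time.time()
            result = func(data, 'e')
            elapsed = time.time() - start
            print "%-13s %d matches in %.3f seconds" % (func.__name__, result, elapsed)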
  • Steven Taschuk at Aug 27, 2003 at 9:08 pm

    Quoth Andrew Dalke:
    [...] (Though I know of at least one person who sent
    in a patch to gcc to optimize poorly written in-house code.
    Rather circuitous way to fix things, but it worked.)
    A bit off-topic perhaps, but I'd be interested in the details of
    this anecdote.

    --
    Steven Taschuk o- @
    staschuk at telusplanet.net 7O )
    " (
  • Andrew Dalke at Aug 27, 2003 at 11:00 pm

    Steven Taschuk:
    A bit off-topic perhaps, but I'd be interested in the details of
    [your] anecdote.
    Okay. I know someone who really likes optimized programming.
    The kind of person who will develop an in-memory compiler
    to generate specialized assembly for the exact parameters used,
    thus squeezing out a few extra cycles. He works in a C++ company.
    They used an idiom, the details of which I don't know. Most
    people wouldn't use that idiom because it didn't translate well
    to assembly, but the compiler in theory could figure it out. He
    submitted a patch to do that optimization. It was originally
    rejected because they couldn't see that anyone would write
    code that way. He dug around in gcc itself to find some place
    which used that code, to show that it is used. It was accepted.

    Moral: it's easier to change the technical details (gcc) than
    the social ones (getting people to use a better idiom).

    That's about all I know of the story.

    Andrew
    dalke at dalkescientific.com
  • Steve Horsley at Aug 28, 2003 at 7:40 pm

    On Wed, 20 Aug 2003 22:00:19 +0000, Andrew Dalke wrote:


    Spend a few tens of millions on developing just-in-time compilers
    and program analysis. That worked for Java.
    Have you heard of Jython - python language running on a java VM? It's kind
    of double interpreted - the python source is converted to JVM bytecode,
    and then the JVM runs it however that JVM runs bytecode. I guess it should
    be many times faster than python because of the JVM performance, and
    would be interested to hear any comparisons.

    Steve
  • Lawrence Oluyede at Aug 28, 2003 at 7:51 pm

    "Steve Horsley" <steve.horsley1 at virgin.NO_SPAM.net> writes:

    Have you heard of Jython - python language running on a java VM? It's kind
    of double interpreted - the python source is converted to JVM bytecode,
    and then the JVM runs it however that JVM runs bytecode. I guess it should
    be many times faster than python because of the JVM performance, and
    would be interested to hear any comparisons.
    Jython faster than Python? We did a little test and it doesn't seem so; look:
    http://tinyurl.com/liix

    --
    Lawrence "Rhymes" Oluyede
    http://loluyede.blogspot.com
    rhymes at NOSPAMmyself.com
  • Alan Kennedy at Aug 29, 2003 at 2:31 pm
    [Steve Horsley]
    Have you heard of Jython - python language running on a java VM?
    It's kind of double interpreted - the python source is converted
    to JVM bytecode, and then the JVM runs it however that JVM runs
    bytecode. I guess it should be many times faster than python
    because of the JVM performance, and would be interested to hear
    any comparisons.
    [Lawrence Oluyede]
    Jython faster than Python? We did a little test and it doesn't seem so; look:
    http://tinyurl.com/liix
    Please bear in mind that the test code included the start-up time for
    the interpreter. For Jython, this is a high cost, because starting a JVM
    often takes up to 10 seconds or more.

    It would probably be fairer to run timings after the VM has already
    been through the startup phase. I think that is a more valid
    reflection of real-world scenarios where a VM gets started once and
    left running for a long time.

    regards,

    --
    alan kennedy
    -----------------------------------------------------
    check http headers here: http://xhaus.com/headers
    email alan: http://xhaus.com/mailto/alan
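    One way to keep interpreter or JVM start-up out of the measurement, as
    suggested above, is to time the work inside the already-running process.
    The script below is only a sketch (the workload is arbitrary) and runs
    unchanged under CPython and Jython.

    import time

    def work(n):
        # arbitrary integer workload
        total = 0
        for i in xrange(n):
            total += i % 7
        return total

    if __name__ == '__main__':
        start = time.time()
        work(1000000)
        print "first run:  %.2f seconds (includes any warm-up)" % (time.time() - start)

        start = time.time()
        work(1000000)
        print "second run: %.2f seconds" % (time.time() - start)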
  • Lawrence Oluyede at Aug 29, 2003 at 3:14 pm

    Alan Kennedy <alanmk at hotmail.com> writes:

    Please bear in mind that the test code included the start-up time for
    the interpreter. For Jython, this is a high cost, because starting a JVM
    often takes up to 10 seconds or more.
    Yeah, you're right. But here comes a question: why do you think that Jython
    (and the JVM) are faster than Python (and its VM)? In my own little tests
    Jython is always slower, and the GUI (with Swing) is not as responsive as
    GTK, for example. I think Jython is an amazing and awesome "tool" for Python
    and Java developers, but I'm not so sure that it is also faster than CPython.

    Bye!


    --
    Lawrence "Rhymes" Oluyede
    http://loluyede.blogspot.com
    rhymes at NOSPAMmyself.com
  • Irmen de Jong at Aug 29, 2003 at 4:15 pm

    Alan Kennedy wrote:
    Please bear in mind that the test code included the start-up time for
    the interpreter. For Jython, this is a high cost, because starting a JVM
    often takes up to 10 seconds or more.
    10?! On my machine, starting one of the swing demos from the 1.4.2
    JDK takes anywhere between zero and two seconds.

    While (C)Python is quicker than that, it can only be quicker by
    two seconds, which can be ignored IMHO.

    Maybe you mean not the startup time, but the time it takes
    until the (server) HotSpot JIT reaches full speed. That sometimes
    takes a while because it compiles the bytecodes incrementally.

    --Irmen
  • Robin Becker at Aug 29, 2003 at 9:06 am
    In article <pan.2003.08.28.19.40.59.802803 at virgin.NO_SPAM.net>, Steve
    Horsley <steve.horsley1 at virgin.NO_SPAM.net> writes
    On Wed, 20 Aug 2003 22:00:19 +0000, Andrew Dalke wrote:

    Spend a few tens of millions on developing just-in-time compilers
    and program analysis. That worked for Java.
    Have you heard of Jython - python language running on a java VM? It's kind
    of double interpreted - the python source is converted to JVM bytecode,
    and then the JVM runs it however that JVM runs bytecode. I guess it should
    be many times faster than python because of the JVM performance, and
    would be interested to hear any comparisons.

    Steve
    experience with ReportLab suggests jython can be fairly slow compared to
    CPython although it does have advantages.
    --
    Robin Becker
  • Andrew MacIntyre at Aug 29, 2003 at 9:50 am

    On Fri, 29 Aug 2003, Robin Becker wrote:

    experience with ReportLab suggests jython can be fairly slow compared to
    CPython although it does have advantages.
    The advantages being?

    Regards,
    Andrew.

    --
    Andrew I MacIntyre "These thoughts are mine alone..."
    E-mail: andymac at bullseye.apana.org.au (pref) | Snail: PO Box 370
    andymac at pcug.org.au (alt) | Belconnen ACT 2616
    Web: http://www.andymac.org/ | Australia
  • Lawrence Oluyede at Aug 29, 2003 at 2:28 pm

    Andrew MacIntyre <andymac at bullseye.apana.org.au> writes:

    The advantages being?
    I think gaining access to Java stuff is an advantage in some situations,
    isn't it?

    --
    Lawrence "Rhymes" Oluyede
    http://loluyede.blogspot.com
    rhymes at NOSPAMmyself.com
  • Robin Becker at Aug 29, 2003 at 3:47 pm
    In article <mailman.1062163185.5091.python-list at python.org>, Andrew
    MacIntyre <andymac at bullseye.apana.org.au> writes
    On Fri, 29 Aug 2003, Robin Becker wrote:

    experience with ReportLab suggests jython can be fairly slow compared to
    CPython although it does have advantages.
    The advantages being?

    Regards,
    Andrew.
    Well, I guess you can package it up into a single jar file and then ship
    it to those big blue iron JVM environments, i.e. it's another market.
    --
    Robin Becker
  • Michael Peuser at Aug 21, 2003 at 4:40 am
    "Irmen de Jong" <irmen at -NOSPAM-REMOVETHIS-xs4all.nl> wrote in message
    news:3f43e5cf$0$49115$e4fe514c at news.xs4all.nl...
    Python is fast enough for me, especially 2.3.

    Profile & code slow parts as C extensions.
    Include your own assembly there if so desired.
    Investigate Psyco. There was one example on this
    newsgroup that showed that Python+psyco actually
    outperformed the same program in compiled C.

    --Irmen
    This is my advice as well. Especially, use the profiler and change your
    high-level algorithms. You will find a lot of hidden quadratic behaviour
    which slows down your program when it comes to high volume.

    Psyco will generally give a speed-up of about 2. This is fine (I use it!)
    but not a breakthrough. There may be cases where it performs better.

    A bottleneck can be Tkinter. Use something different then (Qt, wx).

    Kindly
    Michael P
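    As an illustration of the hidden-quadratic point, assuming the classic
    string-building example (not necessarily the case Michael has in mind):
    repeated concatenation copies the growing string on every pass, while
    ''.join does a single pass, and the standard profile module shows where
    the time goes.

    import profile

    def build_quadratic(n):
        s = ""
        for i in xrange(n):
            s = s + "x"        # copies the whole string each time
        return s

    def build_linear(n):
        return "".join(["x" for i in xrange(n)])

    if __name__ == '__main__':
        profile.run('build_quadratic(20000)')
        profile.run('build_linear(20000)')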
  • David McNab at Aug 21, 2003 at 9:07 am
    On Thu, 21 Aug 2003 06:40:04 +0200, Michael Peuser paused, took a deep
    breath, then came out with:
    A bottleneck can be Tkinter. Use something different then (Qt, wx)..
    Wow!

    I've found wx to be way slower than Tkinter.

    On a P133 running Win98, a McMillan-compiled prog using wx took twice as
    long to start up as a similar prog implemented in Tkinter.
  • Michael Peuser at Aug 21, 2003 at 6:56 pm
    "David McNab" <postmaster at 127.0.0.1> wrote in message
    news:pan.2003.08.21.09.07.20.100414 at 127.0.0.1...
    On Thu, 21 Aug 2003 06:40:04 +0200, Michael Peuser paused, took a deep
    breath, then came out with:
    A bottleneck can be Tkinter. Use something different then (Qt, wx)..
    Wow!

    I've found wx to be way slower than Tkinter.

    On a P133 running Win98, a McMillan-compiled prog using wx took twice as
    long to start up as a similar prog implemented in Tkinter.
    Of course! The wx DLL is more than 6 MB while Tcl/Tk still keeps to around 1.
    I am not talking about start-up. When you have ever used a Canvas with a
    600x800 image or with a thousand items, or a Tix HList with a dozen
    differently styled columns, you might know WHAT I am talking about.

    Even with less heavily filled widgets, most of what you perceive as "lazy"
    with e.g. games is generally not the Python but the Tcl interpreter. Pygame
    shows that you can do fast visualisation with Python.

    Kindly
    Michael P
  • Raymond Hettinger at Aug 21, 2003 at 5:15 am
    "dan"
    However, there are definitely cases where a lot of code would need to
    be optimized, and so I ask the question: How fast is Python, compared
    to say a typical optimizing C/C++ compiler?
    The extension modules run at optimized C speed because they *are*
    optimized C.

    For pure python applications, Psyco can provide just-in-time native
    compilation.

    Tim once said that anything written using Python's dictionaries is
    zillions of times faster than anything else. There is a grain of truth
    in that, because Python makes it so easy to create efficient data
    structures that their performance can surpass less carefully designed
    data structures written in assembly or C.

    All that being said, Python is designed for those who value
    programmer time more than they value clock cycles.


    Raymond Hettinger
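    A small sketch of the dictionary point, with arbitrary sizes: a dict
    lookup is a constant-time hash probe implemented in C, while the obvious
    alternative scans a list of pairs.

    import time

    pairs = [("key%d" % i, i) for i in xrange(50000)]
    as_list = pairs
    as_dict = dict(pairs)

    def find_in_list(key):
        for k, v in as_list:
            if k == key:
                return v

    def find_in_dict(key):
        return as_dict[key]

    if __name__ == '__main__':
        for func in (find_in_list, find_in_dict):
            start = time.time()
            for i in xrange(1000):
                func("key49999")    # worst case for the list scan
            print "%-13s %.4f seconds" % (func.__name__, time.time() - start)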
  • Mark Carter at Aug 21, 2003 at 12:13 pm

    However, there are definitely cases where a lot of code would need to
    be optimized, and so I ask the question: How fast is Python, compared
    to say a typical optimizing C/C++ compiler?
    I did a benchmark some time ago (nothing optimised):

    PURPOSE:
    The purpose of this technical report is to gauge the relative speed of
    the languages: VB, VBA, Python 2.2, C++, and Fortran.


    SUMMARY RESULTS:
    It was discovered that uncompiled VB code in VB 6.0 ran at the same
    speed as VBA code in Excel. It was half the speed of compiled VB code,
    5 times the speed of Python, and 1/20th the speed of C++/Fortran.

    METHOD:

    The following algorithm was implemented in each of the target
    languages:

    X = 0.5
    For I = 1 to 108
        X = 1 - X * X
    Next

    Timings were made for the execution. The following results were
    obtained:

    Language                Timing (seconds)
    VB - uncompiled         74
    VB - compiled           37
    VBA - Excel             75
    Python                  401
    C++ - debug version     4
    C++ - release version   3
    Fortran                 3


    The timings for Fortran are approximate. The execution time had to be
    timed with a stopwatch because timing functions could not be
    discovered.
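    For comparison, a Python rendition of the loop above, timed in-process.
    The iteration count is left as a parameter: the "108" in the posting reads
    like a flattened 10**8, which would match the reported timings, but that
    is an assumption.

    import time

    def bench(iterations):
        x = 0.5
        for i in xrange(iterations):
            x = 1 - x * x
        return x

    if __name__ == '__main__':
        n = 10 ** 7            # 10**8 takes minutes in pure Python
        start = time.time()
        bench(n)
        print "%d iterations in %.2f seconds" % (n, time.time() - start)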
  • Alex Martelli at Aug 21, 2003 at 4:25 pm
    Mark Carter wrote:
    ...
    The following algorithm was implemented in each of the target
    languages:

    X = 0.5
    For I = 1 to 108
        X = 1 - X * X
    Next

    Timings were made for the execution. The following results were
    obtained:

    Language                Timing (seconds)
    VB - uncompiled         74
    VB - compiled           37
    VBA - Excel             75
    Python                  401
    C++ - debug version     4
    C++ - release version   3
    Fortran                 3
    Interesting. One wonders what and where you measured, e.g:

    [alex at lancelot gmpy]$ cat a.cpp
    int main()
    {
        double X = 0.5;
        for(int i = 0; i < 108; i++)
            X = 1 + X * X;
        return 0;
    }

    [alex at lancelot gmpy]$ g++ -O3 a.cpp
    [alex at lancelot gmpy]$ time ./a.out
    0.01user 0.00system 0:00.00elapsed 333%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (186major+21minor)pagefaults 0swaps

    i.e., it's just too fast to measure. Not much better w/Python...:

    [alex at lancelot gmpy]$ cat a.py

    def main():
        X = 0.5
        for i in xrange(108):
            X = 1 + X*X

    main()
    [alex at lancelot gmpy]$ time python -O a.py
    0.03user 0.01system 0:00.15elapsed 26%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (452major+260minor)pagefaults 0swaps

    i.e., for all we can tell, the ratio COULD be 100:1 -- or just about
    anything else! Perhaps more details are warranted...


    Alex
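    For a body this small, the timeit module that ships with Python 2.3 gives
    more repeatable numbers than wall-clock timing of a whole script, since it
    repeats the statement many times and leaves interpreter start-up out of
    the figure. A minimal sketch, using the bounded form 1 - x*x from the
    original benchmark so the value stays in range:

    import timeit

    t = timeit.Timer("x = 1 - x * x", setup="x = 0.5")
    # each repeat runs the statement a million times, so the total in
    # seconds is also the per-iteration cost in microseconds
    print "%.3f usec per iteration" % min(t.repeat(repeat=3, number=1000000))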
  • Jeff Epler at Aug 21, 2003 at 6:58 pm
    I don't know what Mark Carter wanted to measure either, but I'd like to
    mention that when I compile "a.cpp" on my system with the flags Alex
    used, the generated code doesn't even include any floating-point
    arithmetic. The compiler was able to deduce that X was dead after the
    loop, and that its computation had no side-effects. I'm a little
    surprised that the compiler didn't completely remove the loop, but it's
    still there. "i" isn't there either; instead there's a counter that
    begins at 107, decrements and terminates the loop when it reaches -1.
    And if I use the directive to unroll loops, the logic is the same except
    that the counter decreases by 18 each time instead of 1.

    ... and anyway, this modified code (which does actually compute X when
    compiled on my system) aborts with a floating point overflow error.
    As far as I can tell, your program would be computing a value on the
    order of 10^(3x10^31)...

    Ah, the joy of writing the proverbial good benchmark.

    Jeff

    #include <fpu_control.h>
    fpu_control_t __fpu_control = _FPU_IEEE &~ _FPU_MASK_OM;

    double Y;

    int main()
    {
        double X = 0.5;
        for(int i = 0; i < 108; i++)
            X = 1 + X * X;
        Y = X;
        return 0;
    }
  • Andrew Dalke at Aug 22, 2003 at 5:23 am

    Jeff Epler:
    As far as I can tell, your program would be computing a value on the
    order of 10^(3x10^31)...
    for(int i = 0; i < 108; i++)
    X = 1 + X * X;
    He had 1 - X*X. Since X starts at 0.5, this will never leave
    the range 0 to 1.

    Andrew
    dalke at dalkescientific.com
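    A quick check of that claim, for the curious: starting from 0.5, the
    iteration never leaves the range 0 to 1 (in floating point it eventually
    settles onto the endpoints 0.0 and 1.0).

    x = 0.5
    lo = hi = x
    for i in xrange(1000):
        x = 1 - x * x
        lo = min(lo, x)
        hi = max(hi, x)
    print "after 1000 steps: min=%r max=%r" % (lo, hi)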
  • Alex Martelli at Aug 21, 2003 at 6:19 am

    Irmen de Jong wrote:

    Python is fast enough for me, especially 2.3.

    Profile & code slow parts as C extensions.
    Include your own assembly there if so desired.

    Investigate Psyco. There was one example on this
    newsgroup that showed that Python+psyco actually
    outperformed the same program in compiled C.
    I think (but will gladly stand corrected if I'm wrong!) that
    this is a misinterpretation of some code I posted -- the
    C code (crazily) used pow(x,2.0), the Python one (sanely)
    x*x -- within a complicated calculation of erf, and that
    one malapropism in the C code was what let psyco make
    faster code than C did. With C fixed to use x*x -- as any
    performance-aware programmer will always code -- the
    two ran neck and neck, no advantage to either side.


    Alex
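    The pow-versus-multiply gap is easy to see from Python as well, even
    though the malapropism Alex describes was in the C code. A rough
    measurement with timeit (figures are machine dependent):

    import timeit

    for stmt in ("y = x * x", "y = pow(x, 2.0)", "y = math.pow(x, 2.0)"):
        t = timeit.Timer(stmt, setup="import math; x = 1.2345")
        print "%-22s %.3f usec" % (stmt, min(t.repeat(3, 1000000)))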
  • Irmen de Jong at Aug 21, 2003 at 8:14 am

    Alex Martelli wrote:
    Irmen de Jong wrote:
    Investigate Psyco. There was one example on this
    newsgroup that showed that Python+psyco actually
    outperformed the same program in compiled C.

    I think (but will gladly stand corrected if I'm wrong!) that
    this is a misinterpretation of some code I posted -- the
    C code (crazily) used pow(x,2.0), the Python one (sanely)
    x*x -- within a complicated calculation of erf, and that
    one malapropism in the C code was what let psyco make
    faster code than C did. With C fixed to use x*x -- as any
    performance-aware programmer will always code -- the
    two ran neck and neck, no advantage to either side.
    Whoops, I missed that :) Thanks for the clarification.

    Nevertheless, a Psyco-optimized piece of Python code
    that runs as fast as compiled C is still very impressive
    to me. I know that JIT compiler technology theoretically
    could produce better optimized code than a static optimizing
    compiler, but am happy already if it reaches equal level :-)

    --Irmen de Jong
  • Alex Martelli at Aug 21, 2003 at 8:50 am
    Irmen de Jong wrote:
    ...
    Nevertheless, a Psyco-optimized piece of Python code
    that runs as fast as compiled C is still very impressive
    to me. I know that JIT compiler technology theoretically
    could produce better optimized code than a static optimizing
    compiler, but am happy already if it reaches equal level :-)
    If anybody does have an actual example (ideally toy-sized :-)
    where psyco's JIT does make repeatably faster code than a
    C compiler (well-used, e.g. -O3 for gcc, NOT just -O...!-)
    I'd be overjoyed to see it, by the way.


    Alex
  • Michele Simionato at Aug 23, 2003 at 11:18 am
    Alex Martelli <aleax at aleax.it> wrote in message news:<qJ%0b.116361$cl3.3506646 at news2.tin.it>...
    Irmen de Jong wrote:
    ...
    Nevertheless, a Psyco-optimized piece of Python code
    that runs as fast as compiled C is still very impressive
    to me. I know that JIT compiler technology theoretically
    could produce better optimized code than a static optimizing
    compiler, but am happy already if it reaches equal level :-)
    If anybody does have an actual example (ideally toy-sized :-)
    where psyco's JIT does make repeatably faster code than a
    C compiler (well-used, e.g. -O3 for gcc, NOT just -O...!-)
    I'd be overjoyed to see it, by the way.


    Alex
    Actually, as I posted in the C# thread of a few weeks ago, on my
    machine Python+psyco was FASTER than C. The numbers quoted are for C
    with option -O, but even with -O3 psyco was still faster and, notice,
    with pow(x,2) replaced by x*x in C too. I would be happy if somebody
    could reproduce that. Here is the link:

    http://groups.google.it/groups?hl=it&lr=&ie=UTF-8&threadm"59b0e2.0308041106.7ac111cc%40posting.google.com&rnum=1&prev=/groups%3Fhl%3Dit%26lr%3D%26ie%3DISO-8859-1%26q%3Dsimionato%2Bspeed%2Bgroup%253Acomp.lang.python.*%2Bgroup%253Acomp.lang.python.*%26meta%3Dgroup%253Dcomp.lang.python.*

    Michele Simionato, Ph. D.
    MicheleSimionato at libero.it
    http://www.phyast.pitt.edu/~micheles
    --- Currently looking for a job ---
  • Jimmy Retzlaff at Aug 21, 2003 at 11:30 am

    Alex Martelli wrote:
    Irmen de Jong wrote:
    ...
    Nevertheless, a Psyco-optimized piece of Python code
    that runs as fast as compiled C is still very impressive
    to me. I know that JIT compiler technology theoretically
    could produce better optimized code than a static optimizing
    compiler, but am happy already if it reaches equal level :-)
    If anybody does have an actual example (ideally toy-sized :-)
    where psyco's JIT does make repeatably faster code than a
    C compiler (well-used, e.g. -O3 for gcc, NOT just -O...!-)
    I'd be overjoyed to see it, by the way.
    In a way this comes back to practicality vs. purity. In a synthetic
    benchmark where one function is called repeatedly with homogeneous data,
    it's hard to imagine that a JIT compiler could ever outperform a good
    optimizing C compiler. But that's the pure side of a performance
    analysis. The practical side is how that function performs in a real
    application where a JIT for a dynamically typed language has much more
    information to work with than a C compiler does.

    For example, a C compiler might only know that you declared a parameter
    as a double and so it can only optimize for that. If you happen to call
    the function more often than not with an int (that gets promoted to a
    double on the way in) then the compiler generated code may waste a good
    deal of time doing floating point arithmetic rather than integer
    arithmetic. Now the programmer might take the time to profile their C
    program and hopefully notice that time is being wasted doing floating
    point arithmetic and then create an int version of their function, but
    often practical constraints will get in the way of this happening.

    This brings back memories of the old arguments about optimizing C
    compilers being able to generate faster code than hand-written assembly.
    Of course an expert at assembly could write a faster program given
    enough time, but most people didn't have the time or expertise to write
    assembly code that could perform as well as optimized C once the
    compilers attained a certain level of sophistication. In many practical
    situations C is faster than assembly. PyPy is exciting because it
    presents hope of providing the JIT with enough extra information and
    flexibility that it may be able to make practical Python code outperform
    practical C code in many cases.

    The example above involving double/int was not just an example. It
    happened in an application of mine a while back. I replaced a function
    in a C extension with a Psyco-compiled Python version of the same
    function and the performance of the part of my application that used the
    function doubled. I posted a couple of notes about it. The first is
    about a test in isolation (the pure test) and C was faster:

    http://tinyurl.com/kpgd (includes Python code and links to C code)

    The second note came later after I decided the slightly slower
    Python/Psyco version was fast enough to eliminate the headaches of
    maintaining the C extension. After replacing the C code I was startled
    by a performance improvement in my application. This is about the
    practical test:

    http://tinyurl.com/kpg6

    And finally a bit of a caveat:

    http://tinyurl.com/kpid

    Jimmy
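    A sketch of the int/double point, with an illustrative function rather
    than the extension Jimmy describes: the same Python code is handed
    whichever type actually arrives, and a specializing compiler such as
    psyco can generate an integer version and a float version as needed.

    import time

    def accumulate(values):
        # same source, but the arithmetic is int or float depending on input
        total = 0
        for v in values:
            total = total + v * v
        return total

    def timed(label, values):
        start = time.time()
        accumulate(values)
        print "%-12s %.3f seconds" % (label, time.time() - start)

    if __name__ == '__main__':
        ints = range(500000)
        floats = [float(i) for i in ints]
        timed("plain int", ints)
        timed("plain float", floats)
        try:
            import psyco
            psyco.bind(accumulate)
            timed("psyco int", ints)
            timed("psyco float", floats)
        except ImportError:
            print "psyco not installed; skipping the compiled runs"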
  • Neil Padgen at Aug 21, 2003 at 2:22 pm

    On Wednesday 20 August 2003 21:19, Irmen de Jong wrote:
    Investigate Psyco.
    On the strength of this thread, I investigated Psyco. Results of a
    very quick investigation with the following program:

    -----------------------------------------
    def calcPi(iterations):
        pi4 = 1.0
        for i in xrange(1, iterations):
            denominator = (4*i)-1
            pi4 = pi4 - 1.0/denominator + 1.0/(denominator+2)
        return pi4 * 4.0

    def timethis(func, funcName):
        import sys
        try:
            i = int(sys.argv[1])
        except:
            i = 1000000
        import time
        start = time.time()
        pi = func(i)
        end = time.time()
        print "%s calculated pi as %s in %s seconds" % (funcName, pi, end - start)

    def main():
        timethis(calcPi, 'calcPi')
        timethis(speedyPi, 'speedyPi')

    import psyco
    speedyPi = psyco.proxy(calcPi)

    if __name__ == '__main__':
        main()
    -----------------------------------------

    produced the following results on a 1.7GHz P4 running FreeBSD 4.8:
    python2.2 pi.py
    calcPi calculated pi as 3.14159315359 in 3.87623202801 seconds
    speedyPi calculated pi as 3.14159315359 in 0.790405035019 seconds

    -- Neil
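    A usage note on the entry points, assuming Psyco is installed:
    psyco.proxy() returns a compiled copy and leaves the original function
    alone, which is what makes the side-by-side timing above possible. For
    whole programs the usual approach is to bind in place:

    import psyco
    psyco.full()             # compile as much as possible, everywhere
    # or, more selectively:
    # psyco.bind(calcPi)     # compile just this one function, in place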
  • Michael Peuser at Aug 21, 2003 at 7:02 pm
    "Neil Padgen" <neil.padgen at mon.bbc.co.uk> wrote in message
    news:bi2ki9$pai$1 at nntp0.reith.bbc.co.uk...
    On Wednesday 20 August 2003 21:19, Irmen de Jong wrote:
    Investigate Psyco.
    [...]
    produced the following results on a 1.7GHz P4 running FreeBSD 4.8:
    python2.2 pi.py
    calcPi calculated pi as 3.14159315359 in 3.87623202801 seconds
    speedyPi calculated pi as 3.14159315359 in 0.790405035019 seconds

    -- Neil
    This is certainly correct. My experience with more general programs running
    for a few minutes shows that you can expect a speed-up of two. This is still
    impressive when you have your results in 5 minutes instead of 10.

    Kindly
    Michael P
  • Travis Whitton at Aug 21, 2003 at 2:42 pm
    If you haven't seen this, you should check it out. It compares a variety
    of languages to each other, and it's probably the best site on the
    internet for side-by-side language comparisons:

    http://www.bagley.org/~doug/shootout/

    -Travis

    In article <fbf8d8f2.0308201208.3b9c2d01 at posting.google.com>, dan wrote:
    It would be an understatement to say I love this language. What used
    to take me all day now takes 3 hours, and I can spend the rest of the
    time on my bike thinking about the problems from a high level instead
    of wrestling with arcane compiler problems, etc.

    Back in the day, when looking at an interpreted language (or even
    compiled ones) the first thing I would ask is, "how fast is it?"
    These days, with 1ghz processor machines selling for < $500, it seldom
    comes up as an issue. And of course in Py's case you can always
    'extend and embed' your core routines for fun & profit.

    However, there are definitely cases where a lot of code would need to
    be optimized, and so I ask the question: How fast is Python, compared
    to say a typical optimizing C/C++ compiler?

    I realize this is a more complex question than one might think. There
    are various types of code constructs that might end up with different
    efficiency issues. I guess what I'm asking is, in a general sense,
    how fast is it now for typical code sequences, and -- importantly --
    what could be done to optimize the interpreter? Are any parts written
    in assembly? Could things like hash tables be optimized with parallel
    units such as MMX? Etc.

    Please advise.
  • Andrew Dalke at Aug 21, 2003 at 5:19 pm
    Travis Whitton
    [the shootout] is probably the best site on the
    internet for side-by-side language comparisons:
    Though there's also pleac.sf.net which isn't for timings
    but does show how the different languages would be
    used to do the same thing.

    And I see my Python contribution still leads the
    pack in % done.

    Andrew
    dalke at dalkescientific.com
  • Peter Hansen at Aug 21, 2003 at 3:06 pm

    dan wrote:
    However, there are definitely cases where a lot of code would need to
    be optimized, and so I ask the question: How fast is Python, compared
    to say a typical optimizing C/C++ compiler?
    C is roughly 10 to 100 times faster than Python, though of course it's
    easy to find cases outside of this range, on either side.

    I use 30 as a general overall rule of thumb, in the exceptionally
    few cases where it seems relevant how much faster C would be.

    And in those very few cases, so far, I have consistently concluded
    I'm happy enough with the speed of Python given that the speed of
    *development* in Python is easily 5 to 10 times faster than the
    speed of development in C. (And again, it's easy to find cases
    outside of this range, on either side...)

    -Peter
  • Dan at Aug 22, 2003 at 1:27 am
    Peter Hansen <peter at engcorp.com> wrote in message news:<3F44DFEB.60AB6492 at engcorp.com>...
    ...
    And in those very few cases, so far, I have consistently concluded
    I'm happy enough with the speed of Python given that the speed of
    *development* in Python is easily 5 to 10 times faster than the
    speed of development in C. (And again, it's easy to find cases
    outside of this range, on either side...)
    I pretty much agree. The point of my question was not to knock Python
    -- I'm simply curious how fast, _in_principle_, a language like Python
    could be made to run.

    I've looked at Psyco and Pyrex; I think both are interesting projects,
    but I doubt anything in the Py world has had nearly the kind of
    man-hours devoted to optimization that Java, C++, and probably C# have
    had.
  • Peter Hansen at Aug 22, 2003 at 3:37 pm

    dan wrote:
    Peter Hansen <peter at engcorp.com> wrote in message news:<3F44DFEB.60AB6492 at engcorp.com>...
    ...
    And in those very few cases, so far, I have consistently concluded
    I'm happy enough with the speed of Python given that the speed of
    *development* in Python is easily 5 to 10 times faster than the
    speed of development in C. (And again, it's easy to find cases
    outside of this range, on either side...)
    I pretty much agree. The point of my question was not to knock Python
    -- I'm simply curious how fast, _in_principle_, a language like Python
    could be made to run.

    I've looked at Psyco and Pyrex; I think both are interesting projects,
    but I doubt anything in the Py world has had nearly the kind of
    man-hours devoted to optimization that Java, C++, and probably C# have
    had.
    Oh, I completely misinterpreted the question then. I thought you wanted
    practical information.

    _In principle_, (which I'll interpret as "in theory"), Python can be made
    to run even faster than C or C++.

    In practice, nobody has been able to prove or disprove that theory yet...

    ;-)

    -Peter
  • Cameron Laird at Aug 24, 2003 at 3:09 pm
    In article <3F44DFEB.60AB6492 at engcorp.com>,
    Peter Hansen wrote:
    dan wrote:
    However, there are definitely cases where a lot of code would need to
    be optimized, and so I ask the question: How fast is Python, compared
    to say a typical optimizing C/C++ compiler?
    C is roughly 10 to 100 times faster than Python, though of course it's
    easy to find cases outside of this range, on either side.

    I use 30 as a general overall rule of thumb, in the exceptionally
    few cases where it seems relevant how much faster C would be.

    And in those very few cases, so far, I have consistently concluded
    I'm happy enough with the speed of Python given that the speed of
    *development* in Python is easily 5 to 10 times faster than the
    speed of development in C. (And again, it's easy to find cases
    outside of this range, on either side...)
    .
    .
    .
    I just think Peter's wise counsel bears repeating.

    Andrew gave the same quantities, incidentally. Myself,
    I use ten "as a general over-all rule of thumb", and
    expect generally to be in the three-to-thirty range. I
    know other programmers whose Python work consistently
    runs about one-one-hundredth as fast as the C equivalent.
    As near as I can tell, that reflects on the kinds of
    programming we do (how numeric, and so on), rather than
    the quality of our coding.
    --

    Cameron Laird <Cameron at Lairds.com>
    Business: http://www.Phaseit.net
    Personal: http://phaseit.net/claird/home.html
  • Graham Fawcett at Aug 28, 2003 at 2:18 pm
    claird at lairds.com (Cameron Laird) wrote in message news:<vkhl98pfvs6r34 at corp.supernews.com>...
    In article <3F44DFEB.60AB6492 at engcorp.com>,
    Peter Hansen wrote:
    dan wrote:
    However, there are definitely cases where a lot of code would need to
    be optimized, and so I ask the question: How fast is Python, compared
    to say a typical optimizing C/C++ compiler?
    C is roughly 10 to 100 times faster than Python, though of course it's
    easy to find cases outside of this range, on either side.

    I use 30 as a general overall rule of thumb, in the exceptionally
    few cases where it seems relevant how much faster C would be.

    And in those very few cases, so far, I have consistently concluded
    I'm happy enough with the speed of Python given that the speed of
    *development* in Python is easily 5 to 10 times faster than the
    speed of development in C. (And again, it's easy to find cases
    outside of this range, on either side...)
    .
    .
    I just think Peter's wise counsel bears repeating.

    My comment is completely off-topic, but I enjoyed a lyrical moment
    when I mis-read Cameron's statement, and found myself imagining what
    "Peter's wise counsel bears" looked like. I am envious of Peter,
    having never made any magical forest-friends myself.

    If we each had at least /one/ wise counsel bear, then c.l.py would
    certainly reap the benefits of our enhanced posts!

    Yours,

    -- Graham
  • Juha Autero at Aug 30, 2003 at 10:38 am

    graham__fawcett at hotmail.com (Graham Fawcett) writes:

    If we each had at least /one/ wise counsel bear, then c.l.py would
    certainly reap the benefits of our enhanced posts!
    That reminds me of a story I probably read from The Practice of
    Programming by Brian W. Kernighan and Rob Pike. In some university
    (I've forgotten the name) students doing programming exercises had to
    explain their problem to a teddy bear before they could talk to course
    staff. This was because often just explaining the problem helped you
    to understand the problem and then you could fix it.

    --
    Juha Autero
    http://www.iki.fi/jautero/
    Eschew obscurity!
  • Peter Hansen at Sep 2, 2003 at 4:53 pm

    Marc Wilson wrote:
    In comp.lang.python, Juha Autero (Juha Autero) wrote
    in <mailman.1062240014.19619.python-list at python.org>::
    graham__fawcett at hotmail.com (Graham Fawcett) writes:
    If we each had at least /one/ wise counsel bear, then c.l.py would
    certainly reap the benefits of our enhanced posts!
    That reminds me of a story I probably read from The Practice of
    Programming by Brian W. Kernighan and Rob Pike. In some university
    (I've forgotten the name) students doing programming exercises had to
    explain their problem to a teddy bear before they could talk to course
    staff. This was because often just explaining the problem helped you
    to understand the problem and then you could fix it.
    The term I coined for this is "echo debugging". :)
    I once spent about two hours in a debugging session with a friend.
    We were away from the computer, discussing the problem, with a
    whiteboard, diagrams, lots of talking.... after we found the
    solution I said something about wow, that's great, we solved it.

    My friend said, "Peter... _I_ didn't even say anything!". :-)

    -Peter
  • Mark Carter at Aug 21, 2003 at 4:11 pm
    cartermark46 at ukmail.com (Mark Carter) wrote in message
    It was discovered that uncompiled VB code in VB 6.0 ran at the same
    speed as VBA code in Excel. It was half the speed of compiled VB code,
    5 times the speed of Python, and 1/20th the speed of C++/Fortran.
    Although, as the saying goes, there's no such thing as a slow language
    - only slow implementations.
  • Achrist at Aug 21, 2003 at 4:30 pm

    dan wrote:
    I realize this is a more complex question than one might think. >
    Please advise.
    Consider the percentage of software projects for which the total
    number of hours of developer time over the life of the project
    exceeds the total number of hours of CPU run time during productive
    use of the software produced. This percentage is abysmally high.
    Python works on improving it on both ends, by both reducing the
    developer time and increasing the number of hours of productive
    use. What more could you want?


    Al
  • Christos TZOTZIOY Georgiou at Aug 23, 2003 at 12:38 am
    On 20 Aug 2003 13:08:20 -0700, rumours say that danbmil99 at yahoo.com
    (dan) might have written:
    How fast is Python, compared
    to say a typical optimizing C/C++ compiler?
    The most important time for me is the time *I* invest in a program,
    since when it's run-time, I can always do other stuff while some slave
    computer follows my orders. So, I'll reply only about development time
    and I'll quote the Smiths: "How Soon Is Now?" :)
    --
    TZOTZIOY, I speak England very best,
    Microsoft Security Alert: the Matrix began as open source.
  • Steve Lamb at Aug 29, 2003 at 3:25 pm

    On 2003-08-20, dan wrote:
    However, there are definitely cases where a lot of code would need to
    be optimized, and so I ask the question: How fast is Python, compared
    to say a typical optimizing C/C++ compiler?
    Recently on the debian-user list someone asked why C was as popular
    as it was today with so many other languages around. Many people cited
    that C runs faster. A bunch of Python people (myself included) pointed
    out we could develop faster. Finally someone else asked why a large
    program (in this case Evolution) couldn't be written in Python since it
    isn't processor intensive. My answer hit it in one: "It wouldn't idle
    as fast?"

    Except for some rare instances, in most cases any program waiting on
    user input is going to be sitting idle almost the entire time that it is
    running. Given that, is there anyone who really cares how fast it idles?
    As others have pointed out, for the few cases where the program isn't
    idling and the user is actually waiting on something, clearly
    stepping down to C and wrapping that in Python will help.

    --
    Steve C. Lamb | I'm your priest, I'm your shrink, I'm your
    PGP Key: 8B6E99C5 | main connection to the switchboard of souls.
    -------------------------------+---------------------------------------------
