FAQ
import os

def fib(n):
    if n == 1:
        return(n)
    else:
        return (fib(n-1)+fib(n-2))

list=fib(20)
print(list)

The above function fails at the line
    return (fib(n-1)+fib(n-2))
with:


RuntimeError: maximum recursion depth exceeded in comparison
[36355 refs]

Can anyone help?


  • Gary Herron at Apr 30, 2011 at 3:41 am

    On 04/29/2011 08:22 PM, lalit wrote:
    import os

    def fib(n):
        if n == 1:
            return(n)
        else:
            return (fib(n-1)+fib(n-2))

    list=fib(20)
    print(list)

    The above function fails at the line
        return (fib(n-1)+fib(n-2))
    with:


    RuntimeError: maximum recursion depth exceeded in comparison
    [36355 refs]

    Can anyone help?
    You correctly test for n==1, but what about when n==2? When the
    recursion works its way down to fib(2), you call both fib(1) and fib(0),
    but the latter starts an infinite sequence of calls to fib(-1), fib(-2)
    and so on without end.

    Gary Herron
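
    A minimal sketch of the fix Gary describes, assuming the 1, 1, 2, 3, ...
    convention: widen the base case so the recursion bottoms out before it
    can reach fib(0) or negative arguments.

    def fib(n):
        if n <= 2:
            # covers both fib(1) and fib(2), so fib(0) is never reached
            return 1
        return fib(n - 1) + fib(n - 2)

    print(fib(20))   # 6765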
  • Jayme Proni Filho at May 1, 2011 at 12:10 am
    I tested your function. So you must remember:

    By definition, the first two Fibonacci numbers are 0 and 1, and each
    subsequent number is the sum of the previous two. Some sources omit the
    initial 0, instead beginning the sequence with two 1s.

    In math terms it means: F(n) = F(n - 1) + F(n - 2)

    I did a recursive form for you here http://pastebin.com/SN9Qheqn
    Try to do an iterative form to improve your skills, and look at the docs
    here:
    http://docs.python.org/tutorial/modules.html

    Good ideas man. Bye bye.

    ---------------------------------------------------------------------------------------
    Jayme Proni Filho
    Skype: jaymeproni
    Twitter: @jaymeproni
    Phone: +55 - 17 - 3631 - 6576
    Mobile: +55 - 17 - 9605 - 3560
    e-Mail: jaymeproni at yahoo dot com dot br
    Blogs (under construction)
    Programandor: http://jaymeproni.wordpress.com
    Sociedade em Ação: http://sociedaemacao.wordpress.com
    ---------------------------------------------------------------------------------------
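
    A minimal iterative sketch of the exercise Jayme suggests, assuming the
    F(0) = 0, F(1) = 1 convention he quotes:

    def fib_iter(n):
        a, b = 0, 1            # F(0), F(1)
        for _ in range(n):
            a, b = b, a + b    # slide the window one step
        return a

    print(fib_iter(20))   # 6765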
  • Jason Friedman at Apr 30, 2011 at 3:57 am

    import os

    def fib(n):
        if n == 1:
            return(n)
        else:
            return (fib(n-1)+fib(n-2))

    list=fib(20)
    print(list)

    The above function fails at the line
        return (fib(n-1)+fib(n-2))
    with:

    RuntimeError: maximum recursion depth exceeded in comparison
    [36355 refs]

    Can anyone help?
    The first call to fib() recursively calls fib() twice. Each of those
    will call fib() twice. Each of those will call fib() twice. Pretty
    soon, you've got a lot of calls.

    Have a look at: http://en.literateprograms.org/Fibonacci_numbers_(Python).
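
    A quick way to see the blow-up Jason describes is to count the calls
    directly; a sketch with a module-level counter (using the base cases
    fib(1) = fib(2) = 1 so that it terminates):

    calls = 0

    def fib(n):
        global calls
        calls += 1
        if n <= 2:
            return 1
        return fib(n - 1) + fib(n - 2)

    fib(20)
    print(calls)   # 13529 calls just to compute fib(20) = 6765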
  • Ian Kelly at Apr 30, 2011 at 5:27 am

    On Fri, Apr 29, 2011 at 9:57 PM, Jason Friedman wrote:
    The first call to fib() recursively calls fib() twice. Each of those
    will call fib() twice. Each of those will call fib() twice. Pretty
    soon, you've got a lot of calls.
    Which is hell for the running time, but doesn't answer the question of
    why the maximum recursion depth is exceeded, since the fact is that if
    the function were properly coded, the call stack for fib(20) would
    never be more than 20 entries deep at any one time.

    The actual problem, as Gary pointed out, is that the base case is incomplete.
  • Harrismh777 at Apr 30, 2011 at 5:35 am

    Ian Kelly wrote:
    since the fact is that if
    the function were properly coded, the call stack for fib(20) would
    never be more than 20 entries deep at any one time.
    Not so much... and much more !....


    ... because each recursion level 'return' calls fib() twice, and each
    of those calls fib() twice, and you get the point...


    (not to mention, it's not properly coded)
  • Peter Otten at Apr 30, 2011 at 6:32 am

    harrismh777 wrote:

    Ian Kelly wrote:
    since the fact is that if
    the function were properly coded, the call stack for fib(20) would
    never be more than 20 entries deep at any one time.
    Not so much... and much more !....


    ... because each recursion level 'return' calls fib() twice, and each
    of those calls fib() twice, and you get the point...
    I don't understand what you are trying to say -- but it's wrong ;)
  • Chris Angelico at Apr 30, 2011 at 6:37 am

    On Sat, Apr 30, 2011 at 4:32 PM, Peter Otten wrote:
    ... because each recursion level 'return' calls fib() twice, and each
    of those calls fib() twice, and you get the point...
    I don't understand what you are trying to say -- but it's wrong ;)
    Fortunately, most Python interpreters will not implement
    double-tail-recursion as forking.

    Chris Angelico
  • Steven D'Aprano at Apr 30, 2011 at 7:18 am

    On Sat, 30 Apr 2011 08:32:55 +0200, Peter Otten wrote:

    harrismh777 wrote:
    Ian Kelly wrote:
    since the fact is that if
    the function were properly coded, the call stack for fib(20) would
    never be more than 20 entries deep at any one time.
    Not so much... and much more !....


    ... because each recursion level 'return' calls fib() twice, and each
    of those calls fib() twice, and you get the point...
    I don't understand what you are trying to say -- but it's wrong ;)

    I don't know what M Harris thinks he is trying to say either, but the
    *naive* Fibonacci recursive algorithm is particularly badly performing
    and should be avoided. It's ironic that of the two classic algorithms
    used for teaching beginner programmers about recursion, neither is useful
    in practice.

    Except for fib(0) and fib(1), each call to fib() results in multiple
    recursive calls. E.g. calling fib(4) results in the following sequence of
    calls:

    fib(4) # first call
    => fib(3) + fib(2) # three calls
    => fib(2) + fib(1) + fib(1) + fib(0) # seven calls
    => fib(1) + fib(0) + 1 + 1 + 0 # nine calls
    => 1 + 0 + 1 + 1 + 0 = 3

    Thus requiring nine function calls to calculate fib(4) and a maximum
    stack depth of four. As n increases, the number of calls increases at a
    rate significantly faster than the Fibonacci sequence itself, making it
    much worse than O(N**2) but not quite as bad as O(2**N):

    n = 0 1 2 3 4 5 6 ... 35 ...
    fib(n) = 0 1 1 2 3 5 8 ... 9227465 ...
    calls = 1 1 3 5 9 15 25 ... 29860703 ...

    The number of calls is given by a recursive function with a similar form
    as that of Fibonacci. As far as I know, it doesn't have a standard name,
    but I'll call it R(n):

    R(n) = R(n-1) + R(n-2) + 1, where R(0) = R(1) = 1

    Consequently, the naive recursive function is ridiculously slow and
    memory-hungry.

    This seems to have given rise to a myth that recursion should be avoided.
    What needs to be avoided is *badly thought out* recursion. More sensible
    recursive forms perform much better. I leave them as an exercise for
    those who care, or an exercise in googling for those who care a little
    less *wink*



    --
    Steven
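
    One possible answer to Steven's exercise, a sketch of a linear-time
    recursive form that passes the running pair down instead of recomputing
    it (one call, and one stack frame, per step):

    def fib(n, a=0, b=1):
        # a, b start as fib(0), fib(1) and are shifted along n times
        if n == 0:
            return a
        return fib(n - 1, b, a + b)

    print(fib(20))   # 6765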
  • BartC at May 1, 2011 at 11:16 pm
    "Steven D'Aprano" <steve+comp.lang.python at pearwood.info> wrote in message
    news:4dbbb7b6$0$29991$c3e8da3$5496439d at news.astraweb.com...
    On Sat, 30 Apr 2011 08:32:55 +0200, Peter Otten wrote:

    harrismh777 wrote:
    Ian Kelly wrote:
    since the fact is that if
    the function were properly coded, the call stack for fib(20) would
    never be more than 20 entries deep at any one time.
    Not so much... and much more !....


    ... because each recursion level 'return' calls fib() twice, and each
    of those calls fib() twice, and you get the point...
    I don't understand what you are trying to say -- but it's wrong ;)

    I don't know what M Harris thinks he is trying to say either, but the
    *naive* Fibonacci recursive algorithm is particularly badly performing
    and should be avoided. It's ironic that of the two classic algorithms
    used for teaching beginner programmers about recursion, neither is useful
    in practice.
    Yes, it generates lots of calls.

    About 22000 for fib(20), and 330 million for fib(40).

    That's why it's popular for benchmarks that measure performance of function
    calls. Using an iterative algorithm wouldn't work quite so well...

    --
    Bartc
  • Terry Reedy at May 2, 2011 at 2:30 am

    On 5/1/2011 7:16 PM, BartC wrote:

    Yes, it generates lots of calls.

    About 22000 for fib(20), and 330 million for fib(40).
    Using the standard double recursion implementation of fib, ncf(n)
    (number of calls to fib() for fib(n)) requires ncf(n-1) + ncf(n-2) + 1
    calls. (The +1 is for the call to fib(n) itself.) So it grows a bit
    faster than fib(n).

    Fib(n) is actually calculated as the sum of fib(n) 1s: 1+1+...+1,
    fib(n) times, since 1 is the only non-zero base case and nothing else
    is added or multiplied in non-base cases. To put it another way, there are
    fib(n) leaf nodes labeled 1 in the unbalanced tree generated by the
    recursion.

    --
    Terry Jan Reedy
  • Rusi at May 2, 2011 at 8:27 am

    On Apr 30, 12:18 pm, Steven D'Aprano <steve+comp.lang.pyt... at pearwood.info> wrote:

    The number of calls is given by a recursive function with a similar form
    as that of Fibonacci. As far as I know, it doesn't have a standard name,
    but I'll call it R(n):

    R(n) = R(n-1) + R(n-2) + 1, where R(0) = R(1) = 1
    Changing your definition slightly to the following:

    def r(n):
        if n==0 or n==1: return 0
        else: return r(n-1) + r(n-2) + 1


    This r counts the number of times the '+' is done in the original
    (naive) fib.

    We see it is the same as fib (off by 1):

    >>> [fib(n) for n in range(10)]
    [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
    >>> [r(n) for n in range(10)]
    [0, 0, 1, 2, 4, 7, 12, 20, 33, 54]

    So it does not need a new name :-)
  • Steven D'Aprano at May 2, 2011 at 8:56 am

    On Mon, 02 May 2011 01:27:39 -0700, rusi wrote:

    On Apr 30, 12:18 pm, Steven D'Aprano
    <steve+comp.lang.pyt... at pearwood.info> wrote:
    The number of calls is given by a recursive function with a similar
    form as that of Fibonacci. As far as I know, it doesn't have a standard
    name, but I'll call it R(n):

    R(n) = R(n-1) + R(n-2) + 1, where R(0) = R(1) = 1
    Changing your definition slightly to the following:

    def r(n):
        if n==0 or n==1: return 0
        else: return r(n-1) + r(n-2) + 1
    Except that's not the same as my "R(n)". The base cases should be 1, not
    0. That makes rather a big difference to the value: by n = 35, you have

    R(35) = 29860703
    fib(35) = 9227465

    or nearly 2.25 times greater. And the difference just keeps getting
    bigger...

    We see it is the same as fib (off by 1) [...]
    So it does not need a new name :-)
    I see your smiley, but there are a number of similar series as Fibonacci,
    with the same recurrence but different starting values, or similar but
    slightly different recurrences. E.g. Lucas, primefree, Pell, Padovan and
    Perrin numbers.

    (The fact that most of those start with P is almost certainly a
    coincidence.)



    --
    Steven
  • Chris Angelico at May 2, 2011 at 9:03 am

    On Mon, May 2, 2011 at 6:56 PM, Steven D'Aprano wrote:
    (The fact that most of those start with P is almost certainly a
    coincidence.)
    There's definitely something attractive about that letter. Lots of
    programming languages' names start with P. I smell a conspiracy. The
    word "programming" has been secretly attracting related words to its
    corner of the alphabet... and the government's *covering it up* and
    pretending there's nothing happening!

    Chris Angelico
  • Hans Georg Schaathun at May 2, 2011 at 9:53 am

    On 02 May 2011 08:56:57 GMT, Steven D'Aprano wrote:
    : I see your smiley, but there are a number of similar series as Fibonacci,
    : with the same recurrence but different starting values, or similar but
    : slightly different recurrences. E.g. Lucas, primefree, Pell, Padovan and
    : Perrin numbers.

    Well, Fibonacci isn't one unique sequence. Any sequence satisfying
    f(n) = f(n-1) + f(n-2) is /a/ Fibonacci sequence. Regardless of
    starting values. At least according to some authors.

    Ian Anderson (A First Course in Combinatorial Mathematics) prefers
    the sequence 1, 2, 3, 5, ...

    Cormen, Leiserson, Rivest (Introduction to Algorithms) prefer
    0,1,1,2, ... (although they also start the indexing at 0).

    The Penguin Dictionary of Mathematics prefers 1, 1, 2, 3, 5, but also
    suggests 0, 1, 1, 2, 3, ...

    In short, don't assume that a person talking about Fibonacci
    numbers assumes the same base cases as you do.

    --
    :-- Hans Georg
  • Rusi at May 2, 2011 at 10:40 am

    On May 2, 2:53 pm, Hans Georg Schaathun wrote:
    On 02 May 2011 08:56:57 GMT, Steven D'Aprano wrote:

    : I see your smiley, but there are a number of similar series as Fibonacci,
    : with the same recurrence but different starting values, or similar but
    : slightly different recurrences. E.g. Lucas, primefree, Pell, Padovan and
    : Perrin numbers.

    Well, Fibonacci isn't one unique sequence. Any sequence satisfying
    f(n) = f(n-1) + f(n-2) is /a/ Fibonacci sequence. Regardless of
    starting values. At least according to some authors.
    And they all will, when expressed in closed form, be of the form
    f(n) = c*phi^n + something of a smaller order
    where phi = (1 + sqrt(5))/2

    Since phi > 1 they are exponential, ignoring lower order terms.
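
    Rusi's closed form is Binet's formula; a rough floating-point sketch,
    exact only while float precision holds (up to about n = 70):

    from math import sqrt

    def fib_binet(n):
        phi = (1 + sqrt(5)) / 2
        psi = (1 - sqrt(5)) / 2    # the smaller-order term, |psi| < 1
        return int(round((phi ** n - psi ** n) / sqrt(5)))

    print(fib_binet(20))   # 6765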
  • Steven D'Aprano at May 2, 2011 at 4:24 pm

    On Mon, 02 May 2011 10:53:52 +0100, Hans Georg Schaathun wrote:

    On 02 May 2011 08:56:57 GMT, Steven D'Aprano wrote:
    : I see your smiley, but there are a number of similar series as Fibonacci,
    : with the same recurrence but different starting values, or similar but
    : slightly different recurrences. E.g. Lucas, primefree, Pell, Padovan and
    : Perrin numbers.

    Well, Fibonacci isn't one unique sequence. Any sequence satisfying
    f(n) = f(n-1) + f(n-2) is /a/ Fibonacci sequence. Regardless of
    starting values. At least according to some authors.
    According to the references I have, there is one and only one Fibonacci
    sequence, the usual one invented by Fibonacci to talk about rabbits.
    (Actually, we should talk about Fibonacci *numbers*, rather than
    sequence.)

    Wolfram Mathworld is typical, defining *the* Fibonacci numbers as those
    with starting conditions f(1)=1, f(2)=1 (or f(0)=0 if you prefer).

    http://mathworld.wolfram.com/FibonacciNumber.html

    The Collins Dictionary of Mathematics agrees with Mathworld.

    The Lucas numbers (related to, but not the same as, the Lucas sequence)
    obey the same recurrence as the Fibonacci numbers, but with a different
    set of starting values. So there is certainly precedent in the
    mathematical literature for giving different names to the same recurrence
    with different starting values.

    In any case, whatever you call them, what I call R(n) is not one, as the
    recurrence is different:

    R(n) = R(n-1) + R(n-2) + 1

    The Penguin Dictionary of Mathematics prefers 1, 1, 2, 3, 5, but also
    suggests 0, 1, 1, 2, 3, ...
    This depends on whether you start counting from n=0 or n=1.

    Even the sequence you quote, from Anderson:

    1,2,3,5...

    is just the usual Fibonacci numbers starting at n=2 instead of 1 or 0.
    (No matter how confusing something is, there's always at least one person
    who will try to make it *more* confusing.)

    In short, don't assume that a person talking about Fibonacci numbers
    assumes the same base cases as you do.
    As far as I'm concerned, there are only two definitions of Fibonacci
    numbers that have widespread agreement in the mathematical community:

    0,1,1,2,3 ... (starting from n=0)
    1,1,2,3,5 ... (starting from n=1)

    Any other definition is rather, shall we say, idiosyncratic.



    --
    Steven
  • Mark Dickinson at May 2, 2011 at 9:18 pm

    On May 2, 5:24 pm, Steven D'Aprano <steve+comp.lang.pyt... at pearwood.info> wrote:

    As far as I'm concerned, there are only two definitions of Fibonacci
    numbers that have widespread agreement in the mathematical community:

    0,1,1,2,3 ... (starting from n=0)
    1,1,2,3,5 ... (starting from n=1)

    Any other definition is rather, shall we say, idiosyncratic.
    And a concrete reason for preferring the above definition (in either
    form) is that divisibility properties of the sequence are much neater
    with this choice:

    gcd(F_m, F_n) = F_{gcd(m, n)}

    --
    Mark
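
    A quick sketch to spot-check the identity Mark quotes, using
    fractions.gcd (where gcd lived in the Pythons of this era):

    from fractions import gcd

    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    m, n = 12, 18
    print(gcd(fib(m), fib(n)) == fib(gcd(m, n)))   # True: gcd(144, 2584) == 8 == fib(6)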
  • Gregory Ewing at May 5, 2011 at 2:35 am

    Chris Angelico wrote:

    There's definitely something attractive about that letter. Lots of
    programming languages' names start with P.
    Including one I invented some years ago, that (in the spirit
    of C and its derivatives) was called simply "P".

    (I was playing around with Sun's NeWS window server, which
    was programmed in Postscript, and I got fed up with Forth-like
    syntax, so I wrote a translator that took an infix language and
    generated Postscript from it.)

    --
    Greg
  • Harrismh777 at Apr 30, 2011 at 7:23 am

    Peter Otten wrote:
    I don't understand what you are trying to say -- but it's wrong ;)
    ... that was the point! :-))
  • Thomas Rachel at Apr 30, 2011 at 9:37 am

    On 30.04.2011 07:35, harrismh777 wrote:
    Ian Kelly wrote:
    since the fact is that if
    the function were properly coded, the call stack for fib(20) would
    never be more than 20 entries deep at any one time.
    Not so much... and much more !....

    ... because each recursion level 'return' calls fib() twice, and each of
    those calls fib() twice, and you get the point...
    yes - but they are called one after the other, so the "twice" call
    counts only for execution speed, not for recursion depth.


    Thomas
  • Harrismh777 at May 2, 2011 at 9:50 pm

    Thomas Rachel wrote:
    ... because each recursion level 'return' calls fib() twice, and each of
    those calls fib() twice, and you get the point...
    yes - but they are called one after the other, so the "twice" call
    counts only for execution speed, not for recursion depth.
    def fib(n):
        if n == 1:
            return(n)
        else:
            return (fib(n-1)+fib(n-2))
    They *are not* called one after the other in the sense that there is
    ever only one level of recursion with each call (not at all). Take a
    closer look. Each call to fib() forces a double head, and each of those
    forces another double head (now four), and so on... the recursion depth
    exception of the OP testifies against your theory.

    The thing about this problem that puzzles me, is why we might consider
    recursion for a possible solution in the first place. In other words,
    normally we consider using recursion when we need information further
    down the stack then we have now... so that recursion is necessary
    because each head call will not have the information it needs for
    completion until the tail clean-up (coming back up the stack) provides
    that information.

    In the case of the fib() numbers (or the fib sequence) recursion is not
    necessary for correctly handling of the problem. The simple
    straight-forward obvious way to handle the problem is to calculate the
    sequence outright in a straight-forward way. On the other hand, Otten's
    use of yield (making a generator) makes some sense too, in the case that
    we want to get the fib numbers over time without taking the time to
    calculate the entire sequence up-front.
    Just saying...

    kind regards,
    m harris
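
    A minimal sketch of the generator approach harris alludes to, yielding
    the numbers over time without computing the whole sequence up-front:

    def fib_gen():
        a, b = 0, 1
        while True:
            yield a
            a, b = b, a + b

    g = fib_gen()
    print([next(g) for _ in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]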
  • Chris Angelico at May 2, 2011 at 10:17 pm

    On Tue, May 3, 2011 at 7:50 AM, harrismh777 wrote:
    They *are not* called one after the other in the sense that there is ever
    only one level of recursion with each call (not at all). Take a closer look.
    Each call to fib() forces a double head, and each of those forces another
    double head (now four), and so on... the recursion depth exception of the
    OP testifies against your theory.
    def fib(n):
        if n==1:
            return n
        return fib(n-1)+fib(n-2)


    dis.dis(fib)
      2           0 LOAD_FAST                0 (n)
                  3 LOAD_CONST               1 (1)
                  6 COMPARE_OP               2 (==)
                  9 JUMP_IF_FALSE            5 (to 17)
                 12 POP_TOP

      3          13 LOAD_FAST                0 (n)
                 16 RETURN_VALUE
                 17 POP_TOP

      4          18 LOAD_GLOBAL              0 (fib)
                 21 LOAD_FAST                0 (n)
                 24 LOAD_CONST               1 (1)
                 27 BINARY_SUBTRACT
                 28 CALL_FUNCTION            1
                 31 LOAD_GLOBAL              0 (fib)
                 34 LOAD_FAST                0 (n)
                 37 LOAD_CONST               2 (2)
                 40 BINARY_SUBTRACT
                 41 CALL_FUNCTION            1
                 44 BINARY_ADD
                 45 RETURN_VALUE

    The recursion is in the last block. Note that it calls a function,
    then keeps going. It doesn't fork. There are two CALL_FUNCTION
    opcodes, called *sequentially*. In terms of the call stack, there is
    only ever one of those two calls active at any given time. Also, as
    earlier pointed out, the reason for the stack overflow is that fib(2)
    will call fib(1) and fib(0), and fib(0) will call fib(-1) and fib(-2),
    etc. Changing the operator from == to <= in the conditional return
    will solve this.

    Chris Angelico
  • Ian Kelly at May 2, 2011 at 10:22 pm

    On Mon, May 2, 2011 at 3:50 PM, harrismh777 wrote:
    Thomas Rachel wrote:
    ... because each recursion level 'return' calls fib() twice, and each of
    those calls fib() twice, and you get the point...
    yes - but they are called one after the other, so the "twice" call
    counts only for execution speed, not for recursion depth.
    def fib(n):
        if n == 1:
            return(n)
        else:
            return (fib(n-1)+fib(n-2))
    They *are not* called one after the other in the sense that there is ever
    only one level of recursion with each call (not at all). Take a closer look.
    Each call to fib() forces a double head, and each of those forces another
    double head (now four), and so on...
    The branching factor has nothing to do with the maximum stack depth.
    If you don't believe me, consider this little script:

    import inspect

    def maxstack(n):
        if n <= 0:
            return len(inspect.stack())
        else:
            return max(maxstack(n-1), maxstack(n-2))

    print maxstack(15)

    This prints "17".

    Now consider this script, which is similar but singly-recursive:

    import inspect

    def maxstack(n):
        if n <= 0:
            return len(inspect.stack())
        else:
            return maxstack(n-1)

    print maxstack(15)

    This also prints "17". Note: they're the same.
    the recursion depth exception of the
    OP testifies against your theory.
    The OP's recursion depth exception was because a malformed base case
    made the recursion infinite. It had nothing to do with the branching
    factor.
  • Chris Angelico at May 2, 2011 at 10:42 pm

    On Tue, May 3, 2011 at 8:22 AM, Ian Kelly wrote:
    They *are not* called one after the other in the sense that there is ever
    only one level of recursion with each call (not at all). Take a closer look.
    Each call to fib() forces a double head, and each of those forces another
    double head (now four), and so on...
    The branching factor has nothing to do with the maximum stack depth.
    Side point: In a massively parallel environment, branching like this
    COULD be implemented as a fork. I just don't know of any
    implementations of Python that do so. (And of course, it works for
    fib() because it needs/uses no global state, which makes the
    recursions completely independent. Not all functions are so courteous,
    and the compiler can't necessarily know the difference.)

    Chris Angelico
  • Rusi at May 3, 2011 at 1:48 am

    On May 3, 2:50 am, harrismh777 wrote:
    The thing about this problem that puzzles me, is why we might consider
    recursion for a possible solution in the first place....
    This can be answered directly but a bit lengthily.
    Instead let me ask a seemingly unrelated (but actually much the same)
    question.

    Here are two power functions:

    def powI(x,n):
        result = 1
        for i in range(0,n):
            result = result * x
        return result

    def powR(x,n): return 1 if (n==0) else (x * powR(x, n-1))

    What are their space/time complexities?
    Which do you prefer?
  • Chris Angelico at May 3, 2011 at 2:13 am

    On Tue, May 3, 2011 at 11:48 AM, rusi wrote:
    What are their space/time complexities?
    Which do you prefer?
    They're pretty similar actually. If you rework the first one to not
    use range() but instead have a more classic C style of loop, they'll
    be almost identical:

    def powI(x,n):
        result = 1
        while n > 0:
            result *= x
            n -= 1
        return result

    There's a subtle difference in that powR as you've coded it will
    infinite-loop if given a negative exponent, while powI in any form
    will simply return 1 (neither is technically correct, fwiw). Other
    than that, the only difference is that one keeps its state on the call
    stack, the other uses a temporary variable.

    Performance would probably benefit from a special syntax meaning
    "recurse", which would avoid the LOAD_GLOBAL for the function's name
    (plus, things break if you pass recursive functions around after
    compiling them - for instance, powR2=powR; def powR(x,n): os.exit() -
    but if you do that, then you should expect to see nice bullet-holes in
    your foot). However, I do not believe that Python would overall
    benefit from any such thing.

    Chris Angelico
  • Dan Stromberg at May 3, 2011 at 5:27 am

    On Mon, May 2, 2011 at 7:13 PM, Chris Angelico wrote:
    On Tue, May 3, 2011 at 11:48 AM, rusi wrote:
    What are their space/time complexities?
    Which do you prefer?
    They're pretty similar actually. If you rework the first one to not
    use range() but instead have a more classic C style of loop, they'll
    be almost identical:

    def powI(x,n):
        result = 1
        while n > 0:
            result *= x
            n -= 1
        return result

    There's a subtle difference in that powR as you've coded it will
    infinite-loop if given a negative exponent, while powI in any form
    will simply return 1 (neither is technically correct, fwiw). Other
    than that, the only difference is that one keeps its state on the call
    stack, the other uses a temporary variable.
    The recursive solution uses more stack. That is true.

    But the recursive solution has time complexity of O(logn). The iterative
    solution has time complexity of O(n). That's a significant difference for
    large n - a significant benefit of the recursive version.

    However, an iterative version of a function can always be created from a
    recursive version. In this case, an iterative version can be constructed
    that is O(logn) time and O(1) stack.
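
    A sketch of the O(log n)-time, O(1)-stack iterative version Dan says can
    always be constructed, using the usual binary-exponentiation loop:

    def pow_iter(x, n):
        # assumes n >= 0; square x while halving n
        result = 1
        while n:
            if n & 1:
                result *= x
            x *= x
            n >>= 1
        return result

    print(pow_iter(3, 10))   # 59049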
  • Chris Angelico at May 3, 2011 at 5:29 am

    On Tue, May 3, 2011 at 3:27 PM, Dan Stromberg wrote:
    But the recursive solution has time complexity of O(logn). The iterative
    solution has time complexity of O(n). That's a significant difference for
    large n - a significant benefit of the recursive version.
    Are you sure? It will produce n function calls and n multiplications,
    how is that O(log n)?

    Chris Angelico
  • Dan Stromberg at May 3, 2011 at 5:51 am

    On Mon, May 2, 2011 at 10:29 PM, Chris Angelico wrote:
    On Tue, May 3, 2011 at 3:27 PM, Dan Stromberg wrote:

    But the recursive solution has time complexity of O(logn). The iterative
    solution has time complexity of O(n). That's a significant difference for
    large n - a significant benefit of the recursive version.
    Are you sure? It will produce n function calls and n multiplications,
    how is that O(log n)?
    Doh.

    Usually when someone gives a recursive solution to this problem, it's
    O(logn), but not this time.

    Here's a logn one:

    def power(x, n):
        if n == 0:
            return 1
        else:
            half, remainder = divmod(n, 2)
            if remainder == 1:
                return power(x, half) ** 2 * x
            else:
                return power(x, half) ** 2
  • Ian Kelly at May 3, 2011 at 5:46 am

    On Mon, May 2, 2011 at 11:27 PM, Dan Stromberg wrote:
    But the recursive solution has time complexity of O(logn). The iterative
    solution has time complexity of O(n). That's a significant difference for
    large n - a significant benefit of the recursive version.

    It's linear as written. I think you're talking about an
    implementation like this one:

    def powR(x,n):
        if n < 0:
            return 1.0 / powR(x, -n)
        elif n == 0:
            return 1
        else:
            half_n = n // 2
            root = powR(x, half_n)
            result = root * root
            if half_n + half_n == n - 1:
                result = result * x
            return result

    That's O(log n), but it was harder to write, and it could just as
    easily be iterative. In fact it might actually have been a bit
    simpler to write it iteratively.
  • Rusi at May 3, 2011 at 7:52 am

    On May 3, 10:29 am, Chris Angelico wrote:
    On Tue, May 3, 2011 at 3:27 PM, Dan Stromberg wrote:
    Doh.
    Usually when someone gives a recursive solution to this problem, it's
    O(logn), but not this time.
    Here's a logn one:
    :-) Ok so you beat me to it :D

    I was trying to arrive at an answer to the question (by M Harris)
    about why anyone would do something like fib recursively. I used power
    since that illustrates the case more easily, viz:
    A trivial power -- recursive or iterative -- is linear time -- O(n)
    A more clever power can be O(log(n))
    This more clever power can be trivially derived from the recursive one
    as follows:

    The recursive O(n) function:
    def powR(x,n): return 1 if (n==0) else (x * powR(x, n-1))

    is just python syntax for the school algebra (or Haskell) equations:

    x^0 = 1
    x^(n+1) = x * x^n

    Adding one more equation:
    x^(2*n) = (x^2)^n = (x*x)^n
    Re-python-ifying we get:


    def pow(x,n):
        return 1 if (n==0) else pow(x*x, n/2) if even(n) else x*pow(x,n-1)

    def even(n): return n % 2 == 0

    Can this be done iteratively? Of course, but it's not so trivial.

    [If you believe it is, then try writing a log(n) fib iteratively :D ]
  • Dave Angel at May 3, 2011 at 10:32 am

    rusi wrote:
    On May 3, 10:29 am, Chris Angelico wrote:
    On Tue, May 3, 2011 at 3:27 PM, Dan Stromberg wrote:
    Doh.
    Usually when someone gives a recursive solution to this problem, it's
    O(logn), but not this time.
    Here's a logn one:
    :-) Ok so you beat me to it :D

    I was trying to arrive at an answer to the question (by M Harris)
    about why anyone would do something like fib recursively. I used power
    since that illustrates the case more easily, viz:
    A trivial power -- recursive or iterative -- is linear time -- O(n)
    A more clever power can be O(log(n))
    This more clever power can be trivially derived from the recursive one
    as follows:

    The recursive O(n) function:
    def powR(x,n): return 1 if (n==0) else (x * powR(x, n-1))

    is just python syntax for the school algebra (or Haskell) equations:

    x^0 = 1
    x^(n+1) = x * x^n

    Adding one more equation:
    x^(2*n) = (x^2)^n = (x*x)^n
    Re-python-ifying we get:

    def pow(x,n):
        return 1 if (n==0) else pow(x*x, n/2) if even(n) else x*pow(x,n-1)

    def even(n): return n % 2 == 0

    Can this be done iteratively? Of course but its not so trivial.

    [If you believe it is, then try writing a log(n) fib iteratively :D ]
    What I'm surprised at is that nobody has pointed out that the logn
    version is also generally more accurate, given traditional floats.
    Usually getting the answer accurate (given the constraints of finite
    precision intermediates) is more important than performance, but in this
    case they go hand in hand.

    DaveA
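
    A small harness to compare the drift Dave mentions against Python's
    builtin x ** n; which version drifts more depends on x and n, but the
    log(n) version performs far fewer rounded multiplications:

    def pow_linear(x, n):
        result = 1.0
        for _ in range(n):       # n rounded multiplications
            result *= x
        return result

    def pow_log(x, n):
        result = 1.0
        while n:                 # about log2(n) squarings
            if n & 1:
                result *= x
            x *= x
            n >>= 1
        return result

    x, n = 1.0001, 10 ** 5
    print(abs(pow_linear(x, n) - x ** n), abs(pow_log(x, n) - x ** n))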
  • Chris Angelico at May 3, 2011 at 11:04 am

    On Tue, May 3, 2011 at 8:32 PM, Dave Angel wrote:
    What I'm surprised at is that nobody has pointed out that the logn version
    is also generally more accurate, given traditional floats. Usually getting
    the answer accurate (given the constraints of finite precision
    intermediates) is more important than performance, but in this case they go
    hand in hand.
    And that, Your Honour, is why I prefer bignums (especially for
    integers) to floating point. Precision rather than performance.

    Chris Angelico
  • Steven D'Aprano at May 3, 2011 at 12:49 pm

    On Tue, 03 May 2011 21:04:07 +1000, Chris Angelico wrote:

    And that, Your Honour, is why I prefer bignums (especially for integers)
    to floating point. Precision rather than performance.
    I'm intrigued by your comment "especially for integers", which implies
    that you might use bignums for non-integers. Out of curiosity, how would
    you calculate:

    sin(sqrt(7)*pi/31)

    using bignums?



    --
    Steven
  • Chris Angelico at May 3, 2011 at 1:02 pm

    On Tue, May 3, 2011 at 10:49 PM, Steven D'Aprano wrote:
    On Tue, 03 May 2011 21:04:07 +1000, Chris Angelico wrote:

    And that, Your Honour, is why I prefer bignums (especially for integers)
    to floating point. Precision rather than performance.
    I'm intrigued by your comment "especially for integers", which implies
    that you might use bignums for non-integers. Out of curiosity, how would
    you calculate:

    sin(sqrt(7)*pi/31)

    using bignums?
    REXX uses decimal bignums, although I don't have a high-performance
    math library (for sin) that uses anything more than IEEE double
    precision. If I coded my own sin algorithm in REXX, I could go to
    whatever precision memory (and patience) would allow.

    Chris Angelico
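
    Python's decimal module allows the same kind of thing Chris describes;
    a sketch of arbitrary-precision sin via its Taylor series (a similar
    recipe appears in the decimal documentation):

    from decimal import Decimal, getcontext

    def dec_sin(x):
        getcontext().prec += 2          # guard digits for intermediates
        i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
        while s != lasts:               # stop when new terms stop mattering
            lasts = s
            i += 2
            fact *= i * (i - 1)
            num *= x * x
            sign *= -1
            s += num / fact * sign      # next Taylor term: (-1)^k * x^i / i!
        getcontext().prec -= 2
        return +s                       # unary plus rounds to the restored precision

    getcontext().prec = 50
    print(dec_sin(Decimal(1)))   # sin(1) to about 50 digits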
  • Dan Stromberg at May 3, 2011 at 4:10 pm

    On Tue, May 3, 2011 at 5:49 AM, Steven D'Aprano wrote:
    On Tue, 03 May 2011 21:04:07 +1000, Chris Angelico wrote:

    And that, Your Honour, is why I prefer bignums (especially for integers)
    to floating point. Precision rather than performance.
    I'm intrigued by your comment "especially for integers", which implies
    that you might use bignums for non-integers. Out of curiosity, how would
    you calculate:

    sin(sqrt(7)*pi/31)

    using bignums?
    It's a bit of a stretch, but you could wrap a fixed slash implementation
    around bignums to do this.
    http://portal.acm.org/citation.cfm?id09566

    Fixed slash is like a rational, but you never let the numerator or
    denominator grow huge, instead reducing the precision of the fraction now
    and then to keep things manageable.
    -------------- next part --------------
    An HTML attachment was scrubbed...
    URL: <http://mail.python.org/pipermail/python-list/attachments/20110503/418a6060/attachment.html>
  • Rusi at May 3, 2011 at 10:51 am

    On May 3, 3:32 pm, Dave Angel wrote:

    What I'm surprised at is that nobody has pointed out that the logn
    version is also generally more accurate, given traditional floats.
    Usually getting the answer accurate (given the constraints of finite
    precision intermediates) is more important than performance, but in this
    case they go hand in hand.
    Well!!
    That's a slant that I was not aware of -- though fairly obvious on
    second thought.

    Thanks Dave.
  • Hans Mulder at May 11, 2011 at 10:06 pm

    On 03/05/2011 09:52, rusi wrote:

    [If you believe it is, then try writing a log(n) fib iteratively :D ]
    It took me a while, but this one seems to work:

    from collections import namedtuple

    Triple = namedtuple('Triple', 'hi mid lo')
    Triple.__mul__ = lambda self, other: Triple(
        self.hi * other.hi + self.mid * other.mid,
        self.hi * other.mid + self.mid * other.lo,
        self.mid * other.mid + self.lo * other.lo,
    )

    def fib(n):
        f = Triple(1, 1, 0)
        a = Triple(1, 0, 1)
        while n:
            if n & 1:
                a *= f
            f *= f
            n >>= 1
        return a.mid


    -- HansM
  • Rusi at May 13, 2011 at 11:11 am

    On May 12, 3:06 am, Hans Mulder wrote:
    On 03/05/2011 09:52, rusi wrote:

    [If you believe it is, then try writing a log(n) fib iteratively :D ]
    It took me a while, but this one seems to work:

    from collections import namedtuple

    Triple = namedtuple('Triple', 'hi mid lo')
    Triple.__mul__ = lambda self, other: Triple(
        self.hi * other.hi + self.mid * other.mid,
        self.hi * other.mid + self.mid * other.lo,
        self.mid * other.mid + self.lo * other.lo,
    )

    def fib(n):
        f = Triple(1, 1, 0)
        a = Triple(1, 0, 1)
        while n:
            if n & 1:
                a *= f
            f *= f
            n >>= 1
        return a.mid

    -- HansM
    Bravo! Can you explain this?

    The tightest way I knew so far was this:
    The 2x2 matrix
    0 1
    1 1
    raised to the nth power gives the nth fibonacci number. [And then use
    a logarithmic matrix mult]
    Your version is probably tighter than this.

    Yet one could argue that this is 'cheating' because you (and I) are
    still solving the power problem.

    What I had in mind was to use fib results like:
    f_(2n+1) = f_n^2 + f_(n+1)^2
    and use these in the same way (from first principles) like we use the
    equation
    x^2n = (x*x)^n
    to arrive at a logarithmic power algo.
  • Ian Kelly at May 13, 2011 at 1:55 pm

    On Fri, May 13, 2011 at 5:11 AM, rusi wrote:
    The tightest way I knew so far was this:
    The 2x2 matrix
    0 1
    1 1
    raised to the nth power gives the nth fibonacci number. [And then use
    a logarithmic matrix mult]
    Your version is probably tighter than this.
    Oh, nice! I did it this way once:

    V = [0 1]

    M =
    [0 1]
    [1 1]

    fib(n) = (V * M ** n)[0]

    Since I viewed M as operating on V, it never occurred to me that
    multiplying by V is actually unnecessary, but it is obvious in
    retrospect. I think it's just a fortuitous coincidence that it works,
    since V sets up the base case and M describes the recursive case. For
    a FIbonacci sequence starting with any other pair of numbers, V would
    change, but M would not, and so V would no longer happen to be
    identical to the top row of M.

    Ian
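
    For concreteness, a plain-Python sketch of the matrix method rusi and
    Ian describe, squaring M = [[0, 1], [1, 1]] logarithmically; the
    top-right entry of M**n is fib(n):

    def mat_mul(A, B):
        return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
                [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

    def fib(n):
        result = [[1, 0], [0, 1]]       # 2x2 identity
        M = [[0, 1], [1, 1]]
        while n:
            if n & 1:
                result = mat_mul(result, M)
            M = mat_mul(M, M)
            n >>= 1
        return result[0][1]

    print(fib(20))   # 6765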
  • Hans Mulder at May 13, 2011 at 7:46 pm

    On 13/05/2011 13:11, rusi wrote:
    On May 12, 3:06 am, Hans Mulder wrote:
    On 03/05/2011 09:52, rusi wrote:

    [If you believe it is, then try writing a log(n) fib iteratively :D ]
    It took me a while, but this one seems to work:

    from collections import namedtuple

    Triple = namedtuple('Triple', 'hi mid lo')
    Triple.__mul__ = lambda self, other: Triple(
        self.hi * other.hi + self.mid * other.mid,
        self.hi * other.mid + self.mid * other.lo,
        self.mid * other.mid + self.lo * other.lo,
    )

    def fib(n):
        f = Triple(1, 1, 0)
        a = Triple(1, 0, 1)
        while n:
            if n & 1:
                a *= f
            f *= f
            n >>= 1
        return a.mid

    -- HansM
    Bravo! Can you explain this?

    The tightest way I knew so far was this:
    The 2x2 matrix
    0 1
    1 1
    raised to the nth power gives the nth fibonacci number. [And then use
    a logarithmic matrix mult]
    Your version is probably tighter than this.
    My method is just a thinly disguised version of your method: your 2x2
    matrices are symmetrical, i.e. the number in the upper right is equal
    to the number in the lower left. So I can save some memory and some
    CPU time by working with only three numbers.
    Yet one could argue that this is 'cheating' because you (and I) are
    still solving the power problem.
    That's true.
    What I had in mind was to use fib results like:
    f_(2n) = f_n^2 + f_(n+1)^2
    and use these in the same way (from first principles) like we use the
    equation
    x^2n = (x*x)^n
    to arrive at a logarithmic power algo.
    To compute f(4n) this way, you need to compute both f(2n) and f(2n+1)
    first, and to compute those, you need f(n) and f(n+1) and f(n+2)....

    I think I can construct an O(log(n)**2) algorithm this way.

    And it would still be 'cheating', because we'd still use some special
    property of the Fibonacci sequence to reduce our problem to the power
    problem. I think this sort of cheating can't be avoided: there is no
    general method to compute recurrent sequences faster than O(n);
    Fibonacci is just a special case.

    -- HansM
  • Mark Dickinson at May 13, 2011 at 9:48 pm

    On May 11, 11:06 pm, Hans Mulder wrote:
    On 03/05/2011 09:52, rusi wrote:

    [If you believe it is, then try writing a log(n) fib iteratively :D ]
    It took me a while, but this one seems to work:

    from collections import namedtuple

    Triple = namedtuple('Triple', 'hi mid lo')
    Triple.__mul__ = lambda self, other: Triple(
        self.hi * other.hi + self.mid * other.mid,
        self.hi * other.mid + self.mid * other.lo,
        self.mid * other.mid + self.lo * other.lo,
    )
    [...]
    You can even get away with pairs rather than triples:

    ----

    from collections import namedtuple

    Pair = namedtuple('Pair', 'z o')
    Pair.__mul__ = lambda self, other: Pair(
        self.z * other.z + self.o * other.o,
        self.z * other.o + self.o * other.z + self.o * other.o,
    )

    def fib(n):
        f = Pair(0, 1)
        a = Pair(1, 0)
        while n:
            if n & 1:
                a *= f
            f *= f
            n >>= 1
        return a.o

    ----

    I don't see this (or Hans' version) as cheating at all. This really
    *is* the power algorithm, just in a different number system from the
    usual one. For those with a bit of abstract algebra, the above
    algorithm is just computing x^n in the ring Z[x] / (x^2 - x - 1). A
    pair 'Pair(a, b)' represents the element 'a + bx' (more precisely, the
    image of 'a + bx' under the natural quotient map Z[x] -> Z[x] / (x^2 -
    x - 1)) of that ring. And this *can* be generalised to other
    sequences given by a linear recurrence.

    Mark
  • Rusi at May 14, 2011 at 5:24 pm

    On May 14, 2:48 am, Mark Dickinson wrote:

    I don't see this (or Hans' version) as cheating at all.
    Yeah sure -- cheating is a strong word :-)
    This really *is* the power algorithm, just in a different number system from the
    usual one.
    Yes that was my point. If we take the standard logarithmic power algo
    as trivial (in the sense that it is well known) then all these
    solutions do heavy-lifting to transform fib to power and then use the
    'trivial' algo.

    A more direct approach would be to use the identities:

    f_2n = f_n ^ 2 + 2*f_n * f_(n-1)
    f_(2n-1) = f_n ^ 2 + f_(n-1) ^ 2


    The naive python implementation of which is:

    def even(n): return n % 2 == 0
    def sq(x): return x * x

    def fib(n):
        if n==1 or n==2:
            return 1
        elif even(n):
            return sq(fib(n//2)) + 2 * fib(n//2) * fib(n//2 - 1)
        else:
            return sq(fib(n//2 + 1)) + sq(fib(n // 2))

    This is a strange algo -- logarithmic because it halves the n,
    exponential because of the double (triple) calls. [I cannot say I
    know how to work out its exact complexity but I would guess it's about
    linear]

    --------------
    BTW How do I parse "the ring Z[x] / (x^2 - x - 1)"?
    Is this a division ring?
  • Ian Kelly at May 14, 2011 at 9:19 pm

    On Sat, May 14, 2011 at 11:24 AM, rusi wrote:
    def fib(n):
        if n==1 or n==2:
            return 1
        elif even(n):
            return sq(fib(n//2)) + 2 * fib(n//2) * fib(n//2 - 1)
        else:
            return sq(fib(n//2 + 1)) + sq(fib(n // 2))

    This is a strange algo -- logarithmic because it halves the n,
    exponential because of the double (triple) calls. [I cannot say I
    know how to work out its exact complexity but I would guess it's about
    linear]
    Yup, linear. Assuming you optimize the even case so that it doesn't
    actually call fib(n//2) twice, the call tree can be approximated as a
    balanced binary tree with height log(n). The total number of nodes in
    the tree is thus O(2 ** log(n)) = O(n).
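
    A sketch of the optimization Ian assumes, computing fib(n//2) once and
    reusing it, so the call tree is the roughly balanced binary one he counts:

    def fib(n):
        if n == 1 or n == 2:
            return 1
        m = n // 2
        fm = fib(m)                     # computed once, used twice
        if n % 2 == 0:
            return fm * fm + 2 * fm * fib(m - 1)
        else:
            return fib(m + 1) ** 2 + fm * fm

    print(fib(20))   # 6765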
  • Rusi at May 15, 2011 at 3:32 am

    On May 15, 2:19 am, Ian Kelly wrote:
    On Sat, May 14, 2011 at 11:24 AM, rusi wrote:
    def fib(n):
        if n==1 or n==2:
            return 1
        elif even(n):
            return sq(fib(n//2)) + 2 * fib(n//2) * fib(n//2 - 1)
        else:
            return sq(fib(n//2 + 1)) + sq(fib(n // 2))
    This is a strange algo -- logarithmic because it halves the n,
    exponential because of the double (triple) calls. [I cannot say I
    know how to work out its exact complexity but I would guess it's about
    linear]
    Yup, linear. Assuming you optimize the even case so that it doesn't
    actually call fib(n//2) twice, the call tree can be approximated as a
    balanced binary tree with height log(n). The total number of nodes in
    the tree is thus O(2 ** log(n)) = O(n).
    It would be linear if the base of the log were 2.
    I am not sure it is.
    You see the naive fib has a complexity which is fib itself. [Earlier
    discussion with Steven]
    fib is exponential but with radix < 2 [phi = (1 + sqrt(5))/2 ]
    This would suggest that this algo is slightly better than linear.

    But I have no idea of the exact complexity.
  • Mark Dickinson at May 15, 2011 at 7:20 pm

    On May 15, 4:32 am, rusi wrote:
    On May 15, 2:19 am, Ian Kelly wrote:
    Yup, linear. Assuming you optimize the even case so that it doesn't
    actually call fib(n//2) twice, the call tree can be approximated as a
    balanced binary tree with height log(n). The total number of nodes in
    the tree is thus O(2 ** log(n)) = O(n).
    It would be linear if the base of the log were 2.
    I am not sure it is.
    You see the naive fib has a complexity which is fib itself. [Earlier
    discussion with Steven]
    fib is exponential but with radix < 2 [phi = (1 + sqrt(5))/2 ]
    This would suggest that this algo is slightly better than linear.
    Nope. It's linear, just as Ian Kelly said. If g(n) is the total
    number of fib calls made for fib(n), then it's easy to show (e.g., by
    induction) that:

    (a) g(n) is an increasing function of n, and
    (b) g(2^k) = 2^k - 1 for all k >= 1.

    Hence g(n) is O(n).

    Mark
  • Mark Dickinson at May 15, 2011 at 7:29 pm

    On May 15, 8:20 pm, Mark Dickinson wrote:
    On May 15, 4:32 am, rusi wrote:
    On May 15, 2:19 am, Ian Kelly wrote:
    Yup, linear. Assuming you optimize the even case so that it doesn't
    actually call fib(n//2) twice, the call tree can be approximated as a
    balanced binary tree with height log(n). The total number of nodes in
    the tree is thus O(2 ** log(n)) = O(n).
    It would be linear if the base of the log were 2.
    I am not sure it is.
    You see the naive fib has a complexity which is fib itself. [Earlier
    discussion with Steven]
    fib is exponential but with radix < 2 [phi = (1 + sqrt(5))/2 ]
    This would suggest that this algo is slightly better than linear.
    Nope. It's linear, just as Ian Kelly said. If g(n) is the total
    number of fib calls made for fib(n), then it's easy to show (e.g., by
    induction) that:

    (a) g(n) is an increasing function of n, and
    (b) g(2^k) = 2^k - 1 for all k >= 1.

    Hence g(n) is O(n).
    Hmm. It's even easier: g(n) is:

    * 1 if n == 1
    * n if n > 1, n odd
    * n-1 if n > 1, n even

    So definitely linear. :-)

    --
    Mark
  • Harrismh777 at May 3, 2011 at 5:10 pm

    Chris Angelico wrote:
    The recursion is in the last block. Note that it calls a function,
    then keeps going. It doesn't fork. There are two CALL_FUNCTION
    opcodes, called *sequentially*. In terms of the call stack, there is
    only ever one of those two calls active at any given time.
    RuntimeError: maximum recursion depth exceeded in comparison
    [36355 refs]

    Can anyone help?

    [ @ Ian, Chris, Thomas ]

    Thanks, guys, but I think y'all are still missing my point/question, as
    interesting as these discussions are... something is wrong (might be my
    understanding)...

    ... CPython must be making function calls by placing stack-frames on the
    call stack... which has a relatively small limit. Python does crappy
    tail optimization (one of the reasons to avoid recursion in Python) and
    yes, this is an infinite recursion... doh... but the main point for this
    part of the discussion is that the "maximum recursion depth exceeded in
    comparison" runtime error is posted because there are more stack-frames
    being posted to the call stack than there is call-stack to hold them!
    We might change this with sys.setrecursionlimit, but that's dangerous;
    but the bottom line is that in order for the recursion depth runtime
    error to be posted, somebody placed too many stack-frames on the call
    stack... in this case about 36,355 of them... yes, the base-case is
    coded wrong and the process is infinite, the point is that tail
    processing doesn't even get to run... the silly thing pummels the call
    stack with stack-frames (obviously exceeding 20) and the runtime error
    is posted. If your point is that the infinite process is the problem, I
    agree. But my point is that the cpu crunch and the rate at which the
    call stack is filled has to do with the double call (which never finds
    tail processing).

    What am I missing here?
  • Chris Angelico at May 3, 2011 at 9:41 pm

    On Wed, May 4, 2011 at 3:10 AM, harrismh777 wrote:
    If your point is that the infinite process is the problem, I agree. But my
    point is that the cpu crunch and the rate at which the call stack is filled
    has to do with the double call (which never finds tail processing).
    The double call _does not_ affect it. Failing to terminate recursion
    _does_. I don't know what you mean by "cpu crunch" but the call stack
    is going to get n entries. On the Python 2.6 on this system,
    sys.getrecursionlimit() returns 1000, meaning that you can calculate
    fib(1000) safely (okay, probably not quite as there'll be a few used
    for other things, but fib(900) would be safe).

    Chris Angelico
  • Ian Kelly at May 3, 2011 at 10:31 pm

    On Tue, May 3, 2011 at 3:41 PM, Chris Angelico wrote:
    On Wed, May 4, 2011 at 3:10 AM, harrismh777 wrote:
    If your point is that the infinite process is the problem, I agree. But my
    point is that the cpu crunch and the rate at which the call stack is filled
    has to do with the double call (which never finds tail processing).
    The double call _does not_ affect it. Failing to terminate recursion
    _does_. I don't know what you mean by "cpu crunch" but the call stack
    is going to get n entries. On the Python 2.6 on this system,
    sys.getrecursionlimit() returns 1000, meaning that you can calculate
    fib(1000) safely (okay, probably not quite as there'll be a few used
    for other things, but fib(900) would be safe).
    Depends what you mean by "safe". A back-of-the-envelope calculation
    shows that with modern technology it would take more than 10 ** 257
    years to complete. That puts it well into the Dark Era of the
    universe, long after the black holes will have decayed, when I suspect
    it will be difficult to find a continued power source for the computer
    to run. And even if it somehow is still running, the process memory
    will have been so thoroughly muddled by cosmic rays that the final
    result of the calculation will be utterly worthless.
