Math errors in python
In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
following types of errors whenever I do simple arithmetic:

1st example:
12.10 + 8.30
20.399999999999999
1.1 - 0.2
0.90000000000000013


2nd example (no errors here):
bool(130.0 - 129.0 == 1.0)
True


3rd example:
a = 0.013
b = 0.0129
c = 0.0001
[a, b, c]
[0.012999999999999999, 0.0129, 0.0001]
bool((a - b) == c)
False


This sort of error is no big deal in most cases, but I'm sure it could
become a problem under certain conditions, particularly the 3rd
example, where I'm using truth testing. The same results occur in all
cases whether I define variables a, b, and c, or enter the values
directly into the bool statement. Also, it doesn't make a difference
whether "a = 0.013" or "a = 0.0130".

I haven't checked this under windows 2000 or XP, but I expect the same
thing would happen. Any suggestions for a way to fix this sort of
error?
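
A common workaround for the truth-testing pitfall in the 3rd example is to
compare against a small tolerance instead of using == directly. A minimal
sketch (the tolerance value is an illustrative choice, not something from
this thread):

    # Instead of (a - b) == c, test that the difference is tiny.
    a = 0.013
    b = 0.0129
    c = 0.0001
    tolerance = 1e-9    # illustrative; scale it to your data
    print abs((a - b) - c) < tolerance    # prints True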


  • DogWalker at Sep 18, 2004 at 4:58 pm
    Have a look at the FAQ (before the response to your message builds).

  • Tim Peters at Sep 18, 2004 at 5:08 pm
    [Radioactive Man]
    In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
    following types of errors whenever I do simple arithmetic:

    1st example:
    12.10 + 8.30
    20.399999999999999
    ...

    Please read the Tutorial appendix on floating-point issues:

    http://docs.python.org/tut/node15.html
  • Gary Herron at Sep 18, 2004 at 5:20 pm

    On Saturday 18 September 2004 09:50 am, Radioactive Man wrote:
    In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
    following types of errors whenever I do simple arithmetic:

    1st example:
    12.10 + 8.30
    20.399999999999999

    It's not a bug, it's a feature of binary arithmetic on ALL computers
    in ALL languages. (But perhaps Python is the first time it has not
    been hidden from you.)

    See the Python FAQ entry 1.4.2:

    http://www.python.org/doc/faq/general.html#why-are-floating-point-calculations-so-inaccurate


    Gary Herron
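
    To see the point concretely, ask for more digits than the interactive
    prompt's repr() shows -- a small illustration (an editorial sketch, not
    Gary's code) of the binary approximation at work:

        print '%.25f' % 0.1        # 0.1000000000000000055511151
        print str(12.10 + 8.30)    # str() rounds for display: 20.4
        print repr(12.10 + 8.30)   # repr() shows more: 20.399999999999999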
  • Jeremy Bowers at Sep 18, 2004 at 5:29 pm

    On Sat, 18 Sep 2004 16:50:16 +0000, Radioactive Man wrote:

    In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
    following types of errors whenever I do simple arithmetic:
    Specifically (building on DogWalker's reply),
    http://www.python.org/doc/faq/general.html#why-are-floating-point-calculations-so-inaccurate
  • Chris S. at Sep 19, 2004 at 7:18 am

    Jeremy Bowers wrote:

    On Sat, 18 Sep 2004 16:50:16 +0000, Radioactive Man wrote:

    In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
    following types of errors whenever I do simple arithmetic:

    Specifically (building on DogWalker's reply),
    http://www.python.org/doc/faq/general.html#why-are-floating-point-calculations-so-inaccurate
    Perhaps there's a simple explanation for this, but why do we go to the
    trouble of computing fractions when our hardware can't handle the
    result? If the decimal value of 1/3 can't be represented in binary,
    then don't. We should use an internal representation that stores the
    numerator and denominator as separate integers.
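
    The idea is easy to sketch -- here is a toy version (an illustration,
    not code from the thread) that stores numerator and denominator as
    integers, normalized by their gcd, so 12.10 + 8.30 comes out exact:

        def gcd(m, n):
            # Euclid's algorithm; positive integers assumed for simplicity.
            while n:
                m, n = n, m % n
            return m

        class Rational:
            def __init__(self, num, den):
                g = gcd(num, den)
                self.num, self.den = num // g, den // g
            def __add__(self, other):
                return Rational(self.num * other.den + other.num * self.den,
                                self.den * other.den)
            def __eq__(self, other):
                return self.num == other.num and self.den == other.den
            def __repr__(self):
                return '%d/%d' % (self.num, self.den)

        print Rational(1210, 100) + Rational(830, 100)   # 102/5, exactly 20.4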
  • Gary Herron at Sep 19, 2004 at 7:39 am

    On Sunday 19 September 2004 12:18 am, Chris S. wrote:
    Jeremy Bowers wrote:
    On Sat, 18 Sep 2004 16:50:16 +0000, Radioactive Man wrote:
    In python 2.3 (IDLE 1.0.3) running under windows 95, I get the
    following types of errors whenever I do simple arithmetic:
    Specifically (building on DogWalker's reply),
    http://www.python.org/doc/faq/general.html#why-are-floating-point-calculations-so-inaccurate
    Perhaps there's a simple explanation for this, but why do we go to the
    trouble of computing fractions when our hardware can't handle the
    result? If the decimal value of 1/3 can't be represented in binary,
    then don't. We should use an internal representation that stores the
    numerator and denominator as separate integers.
    That's called rational arithmetic, and I'm sure you can find a package
    that implements it for you. However what would you propose for
    irrational numbers like sqrt(2) and transcendental numbers like PI?

    While I'd love to compute with all those numbers in infinite
    precision, we're all stuck with FINITE sized computers, and hence with
    the inaccuracies of finite representations of numbers.

    Dr. Gary Herron
  • Heiko Wundram at Sep 19, 2004 at 12:07 pm

    On Sunday, 19 September 2004 09:39, Gary Herron wrote:
    That's called rational arithmetic, and I'm sure you can find a package
    that implements it for you. However what would you propose for
    irrational numbers like sqrt(2) and transcendental numbers like PI?
    Just as an example, try gmpy. Unlimited precision integer and rational
    arithmetic. But don't think that they implement anything more than the four
    basic operations on rationals, because algorithms like sqrt and pow become so
    slow that nobody sensible would use them, but would rather just stick to the
    binary arithmetic the computer uses (although this might have some minor
    effects on precision, but these can be bounded).

    Heiko.
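
    For instance, assuming gmpy is installed, its mpq type makes the
    original post's failing truth test come out exact (a small sketch; mpq
    values print as numerator/denominator):

        import gmpy
        a = gmpy.mpq(13, 1000)      # 0.013
        b = gmpy.mpq(129, 10000)    # 0.0129
        c = gmpy.mpq(1, 10000)      # 0.0001
        print a - b == c            # True: exact, unlike the float version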
  • Alex Martelli at Sep 19, 2004 at 5:41 pm

    Heiko Wundram wrote:

    On Sunday, 19 September 2004 09:39, Gary Herron wrote:
    That's called rational arithmetic, and I'm sure you can find a package
    that implements it for you. However what would you propose for
    irrational numbers like sqrt(2) and transcendental numbers like PI?
    Just as an example, try gmpy. Unlimited precision integer and rational
    arithmetic. But don't think that they implement anything more than the four
    basic operations on rationals, because algorithms like sqrt and pow become so
    slow that nobody sensible would use them, but would rather just stick to the
    binary arithmetic the computer uses (although this might have some minor
    effects on precision, but these can be bounded).
    Guilty as charged, but with a different explanation. I don't support
    raising a rational to a rational exponent, not because it would "become
    slow", but because it could not return a rational result in general.
    When it CAN return a rational result, I'm happy as a lark to support it:
    x = gmpy.mpq(4,9)
    x ** gmpy.mpq(1,2)
    mpq(2,3)

    I.e. raising to the power 1/2 (which is the same as saying, taking the
    square root) is supported in gmpy only when the base is a rational which
    IS the square of some other rational -- and similarly for other
    fractional exponents.

    Say you're content with finite precision, and your problem is that
    getting only a few dozen bits' worth falls far short of your ambition,
    as you want _thousands_. Well, you don't have to "stick to the
    arithmetic your computer uses", with its paltry dozens of bits' worth of
    precision -- you can have just as many as you wish. For example...:
    x=gmpy.mpf(2, 2222)
    x
    mpf('2.e0',2222)
    y=gmpy.fsqrt(x)
    y
    mpf('1.41421356237309504880168872420969807856967187537694807317667973799
    073247846210703885038753432764157273501384623091229702492483605585073721
    264412149709993583141322266592750559275579995050115278206057147010955997
    160597027453459686201472851741864088919860955232923048430871432145083976
    260362799525140798968725339654633180882964062061525835239505474575028775
    996172983557522033753185701135437460340849884716038689997069900481503054
    402779031645424782306849293691862158057846311159666871301301561856898723
    723528850926486124949771542183342042856860601468247207714358548741556570
    696776537202264854470158588016207584749226572260020855844665214583988939
    4437092659180031138824646815708263e0',2222)

    Of course, this still has bounded accuracy (gmpy doesn't do constructive
    reals...):
    x-(y*y)
    mpf('1.21406321925474744732602075007044436621136403661789690072865954475
    776298522118244419272674806546441529118557492550101271984681584381130555
    892259118178248950179953390159664508815540959644741794226362686473376767
    055696411211498987561487078708187675060063022704148995680107509652317604
    479364576039827518913272446772069713871266672454279184421635785339332972
    791970690781583948212784883346298572710476658954707852342842150889381157
    563045936231138515406709376167997169879900784347146377935422794796191261
    624849740964942283842868779082292557869166024095318326003777296248197487
    885858223175591943112711481319695526039760318353849240080721341697065981
    8471278600062647147473105883272095e-674',2222)

    i.e., there IS an error of about 10 to the minus 674 power, i.e. a
    precision of barely more than a couple thousand bits -- but then,
    that IS what you asked for, with that '2222'!-)

    Computing square roots (or whatever) directly on rationals would be no
    big deal, were there demand -- you'd still have to tell me what kind of
    loss of accuracy you're willing to tolerate, though. I personally find
    it handier to compute with mpf's (high-precision floats) and then turn
    the result into rationals with a Stern-Brocot algorithm...:
    z=gmpy.f2q(y,-2000)
    z
    mpq(87787840362947056221389837099888119784184900622573984346903816053706
    510038371862119498502008227696594958892073744394524220336403937617412073
    521953746033135074321986796669379393248887099312745495535792954890191437
    233230436746927180393035328284490481153041398619700720943077149557439382
    34750528988254439L,62075377226361968890337286609853165704271551096494666
    544033975362265504696870569409265091955693911548812764050925469857560059
    623789287922026078165497542602022607603900854658753038808290787475128940
    694806084715129308978288523742413573494221901565588452667869917019091383
    93125847495825105773132566685269L)

    If you need the square root of two as a rational number with an error of
    less than 1 in 2**2000, I think this is a reasonable approach. As for
    speed, this is quite decently usable in an interactive session in my
    cheap and cheerful Mac iBook 12" portable (not the latest model, which
    is quite a bit faster, much less the "professional" Powerbooks -- I'm
    talking about an ageing, though good-quality, consumer-class machine!).

    gmpy (or to be more precise the underlying GMP library) runs optimally
    on AMD Athlon 32-bit processors, which happen to be dirt cheap these
    days, so a cleverly-purchased 300-dollars desktop Linux PC using such an
    Athlon chip would no doubt let you use way more than these humble couple
    thousand bits for such interactive computations while maintaining a
    perfectly acceptable interactive response time.


    Alex
  • Heiko Wundram at Sep 20, 2004 at 9:07 am

    On Sunday, 19 September 2004 19:41, Alex Martelli wrote:
    gmpy (or to be more precise the underlying GMP library) runs optimally
    on AMD Athlon 32-bit processors, which happen to be dirt cheap these
    days, so a cleverly-purchased 300-dollars desktop Linux PC using such an
    Athlon chip would no doubt let you use way more than these humble couple
    thousand bits for such interactive computations while maintaining a
    perfectly acceptable interactive response time.
    But still, no algorithm implemented in software will ever beat the
    FADD/FMUL/FDIV/FPOW/FSIN/FCOS etc. instructions in runtime, that was my
    point... And error calculation is always possible, so that you can give
    bounds to your result, even when using normal floating point arithmetic. And,
    even when using GMPy, you have to know about the underlying limitations of
    binary floating point so that you can reorganize your code if need be to add
    precision (because one calculation might be much less precise if done in some
    way than in another).

    Heiko.
  • Alex Martelli at Sep 20, 2004 at 11:57 am

    Heiko Wundram wrote:

    On Sunday, 19 September 2004 19:41, Alex Martelli wrote:
    gmpy (or to be more precise the underlying GMP library) runs optimally
    on AMD Athlon 32-bit processors, which happen to be dirt cheap these
    days, so a cleverly-purchased 300-dollars desktop Linux PC using such an
    Athlon chip would no doubt let you use way more than these humble couple
    thousand bits for such interactive computations while maintaining a
    perfectly acceptable interactive response time.
    But still, no algorithm implemented in software will ever beat the
    FADD/FMUL/FDIV/FPOW/FSIN/FCOS etc. instructions in runtime, that was my
    Yep, the hardware would have to be designed in a very lousy way for its
    instructions to run slower than software running on the same CPU;-).

    If you're not using some "vectorized" package such as Numeric or
    numarray, though, it's unlikely that you care about speed -- and if you
    _are_ using Numeric or numarray, it doesn't matter to you what type
    Python itself uses for some literal such as 3.17292 -- it only matters
    (speedwise) what your computational package is using (single precision,
    double precision, whatever).
    point... And error calculation is always possible, so that you can give
    bounds to your result, even when using normal floating point arithmetic. And,
    Sure! Your problems come when the bounds you compute are not good
    enough for your purposes (given how deucedly loose error-interval
    computations tend to be, that's going to happen more often than actual
    accuracy loss in your computations... try an interval-arithmetic package
    some day, to see what I mean...).
    even when using GMPy, you have to know about the underlying limitations of
    binary floating point so that you can reorganize your code if need be to add
    precision (because one calculation might be much less precise if done in some
    way than in another).
    Sure. Throwing more precision at a badly analyzed and structured
    algorithm is putting a band-aid on a wound. I _have_ taught numeric
    analysis to undergrads and nobody could have passed my course unless
    they had learned to quote that "party line" back at me, obviously.

    In the real world, the band-aid stops the blood loss often enough that
    few practising engineers and scientists are seriously motivated to
    remember and apply all they've learned in their numeric analysis courses
    (assuming they HAVE taken some: believe it or not, it IS quite possible
    to get a degree in engineering, physics, etc, in most places, without
    even getting ONE course in numeric analysis! the university where I
    taught was an exception only for _some_ of the degrees they granted --
    you couldn't graduate in _materials_ engineering without that course,
    for example, but you COULD graduate in _buildings_ engineering while
    bypassing it...).

    Yes, this IS a problem. But I don't know what to do about it -- after
    all, I _am_ quite prone to taking such shortcuts myself... if some
    computation is giving me results that smell wrong, I just do it over
    with 10 or 100 times more bits... yeah, I _do_ know that will only work
    99.99% of the time, leaving a serious problem, possibly hidden and
    unsuspected, more often than one can be comfortable with. In my case, I
    have excuses -- I'm more likely to have fallen into some subtle trap of
    _statistics_, making my precise computations pretty meaningless anyway,
    than to be doing perfectly correct statistics in numerically smelly ways
    (hey, I _have_ been brought up, as an example of falling into traps, in
    "American Statistician", but not yet, AFAIK, in any journal dealing with
    numerical analysis...:-).


    Alex
  • Chris S. at Sep 19, 2004 at 8:00 am

    Gary Herron wrote:
    That's called rational arithmetic, and I'm sure you can find a package
    that implements it for you. However what would you propose for
    irrational numbers like sqrt(2) and transcendental numbers like PI?
    Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
    arithmetic is meant for. Any decimal can be represented by a fraction,
    yet not all fractions can be represented by decimals. My point is that
    such simple accuracy should be supported out of the box.
    While I'd love to compute with all those numbers in infinite
    precision, we're all stuck with FINITE sized computers, and hence with
    the inaccuracies of finite representations of numbers.
    So are our brains, yet we somehow manage to compute 12.10 + 8.30
    correctly using nothing more than simple skills developed in
    grade-school. You could theoretically compute an infinitely long
    equation by simply operating on single digits, yet Python, with all of
    its resources, can't overcome this hurdle?

    However, I understand Python's limitation in this regard. This
    inaccuracy stems from the traditional C mindset, which typically
    dismisses any approach not directly supported in hardware. As the FAQ
    states, this problem is due to the "underlying C platform". I just find
    it funny how a $20 calculator can be more accurate than Python running
    on a $1000 Intel machine.
  • Richard Townsend at Sep 19, 2004 at 8:16 am

    On Sun, 19 Sep 2004 08:00:03 GMT, Chris S. wrote:
    Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
    arithmetic is meant for. Any decimal can be represented by a fraction,
    yet not all fractions can be represented by decimals. My point is that
    such simple accuracy should be supported out of the box.
    Do you really think Pi equals 22/7 ?
    import math
    print math.pi
    3.14159265359
    print 22.0/7.0
    3.14285714286

    What do you get on your $20 calculator ?

    --
    Richard
  • Chris S. at Sep 19, 2004 at 8:52 am

    Richard Townsend wrote:
    On Sun, 19 Sep 2004 08:00:03 GMT, Chris S. wrote:

    Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
    arithmetic is meant for. Any decimal can be represented by a fraction,
    yet not all fractions can be represented by decimals. My point is that
    such simple accuracy should be supported out of the box.

    Do you really think Pi equals 22/7 ?
    Of course not. That's just a common approximation. Irrational numbers
    are an obvious exception, but we shouldn't sacrifice the accuracy of
    common decimal math just for their sake.
    import math
    print math.pi
    3.14159265359
    print 22.0/7.0
    3.14285714286

    What do you get on your $20 calculator ?
    The same thing actually.
  • Gary Herron at Sep 19, 2004 at 9:34 am

    On Sunday 19 September 2004 01:00 am, Chris S. wrote:
    Gary Herron wrote:
    That's called rational arithmetic, and I'm sure you can find a package
    that implements it for you. However what would you propose for
    irrational numbers like sqrt(2) and transcendental numbers like PI?
    Sqrt is a fair criticism, but Pi equals 22/7,
    What? WHAT? Are you nuts? Pi and 22/7 are most certainly not equal.
    They don't even share three digits beyond the decimal point. (Can you
    really be that ignorant about numbers and expect to contribute
    intelligently to a discussion about numbers? Pi is a non-repeating
    and non-ending number in base 10 or any other base.)

    exactly the form this
    arithmetic is meant for. Any decimal can be represented by a fraction,
    yet not all fractions can be represented by decimals. My point is that
    such simple accuracy should be supported out of the box.
    While I'd love to compute with all those numbers in infinite
    precision, we're all stuck with FINITE sized computers, and hence with
    the inaccuracies of finite representations of numbers.
    So are our brains, yet we somehow manage to compute 12.10 + 8.30
    correctly using nothing more than simple skills developed in
    grade-school. You could theoretically compute an infinitely long
    equation by simply operating on single digits, yet Python, with all of
    its resources, can't overcome this hurdle?

    However, I understand Python's limitation in this regard. This
    inaccuracy stems from the traditional C mindset, which typically
    dismisses any approach not directly supported in hardware. As the FAQ
    states, this problem is due to the "underlying C platform". I just find
    it funny how a $20 calculator can be more accurate than Python running
    on a $1000 Intel machine.
    If you are happy doing calculations with decimal numbers like 12.10 +
    8.30, then the Decimal package may be what you want, but that fails as
    soon as you want 1/3. But then you could use a rational arithmetic
    package and get 1/3, but that would fail as soon as you needed sqrt(2)
    or Pi. But then you could try ... what? Can you see the pattern
    here? Any representation of the infinity of numbers on a finite
    computer *must* necessarily be unable to represent some (actually
    infinitely many) of those numbers. The inaccuracies stem from that
    fact.

    Hardware designers have settled on a binary representation of floating
    point numbers, and both C and Python use the underlying hardware
    implementation. (Try your calculation in C -- you'll get the same
    result if you choose to print out enough digits.)

    And BTW, your calculator is not, in general, more accurate than the
    modern IEEE binary hardware representation of numbers used on most of
    today's computers. It is more accurate on only a select subset of all
    numbers, and it does a good job of fooling you in those cases where it
    loses accuracy, by doing calculations on more digits than it displays,
    and rounding off to the on-screen digits.

    So while a calculator will fool you into believing it is accurate when
    it is not, it is Python's design decision to not cater to fools.

    Dr Gary Herron
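
    For completeness, a brief sketch of the Decimal option mentioned above,
    using the Python 2.4 decimal module: built from strings, the original
    post's sums come out exact, while 1/3 still gets cut off at the
    context's precision, just as described:

        from decimal import Decimal
        print Decimal("12.10") + Decimal("8.30")   # 20.40
        print Decimal("1.1") - Decimal("0.2")      # 0.9
        print Decimal(1) / Decimal(3)              # 0.3333333333333333333333333333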
  • Alex Martelli at Sep 19, 2004 at 4:41 pm
    Chris S. wrote:
    ...
    Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
    Of course it doesn't. What a silly assertion.
    arithmetic is meant for. Any decimal can be represented by a fraction,
    And pi can't be represented by either (if you mean _finite_ decimals and
    fractions).
    yet not all fractions can be represented by decimals. My point is that
    such simple accuracy should be supported out of the box.
    In Python 2.4, decimal computations are indeed "supported out of the
    box", although you do explicitly have to request them (the default
    remains floating-point). In 2.3, you have to download and use any of
    several add-on packages (decimal computations and rational ones have
    very different characteristics, so you do have to choose) -- big deal.

    While I'd love to compute with all those numbers in infinite
    precision, we're all stuck with FINITE sized computers, and hence with
    the inaccuracies of finite representations of numbers.
    So are our brains, yet we somehow manage to compute 12.10 + 8.30
    correctly using nothing more than simple skills developed in
    Using base 10, sure. Or, using fractions, even something that decimals
    would not let you compute finitely, such as 1/7+1/6.
    grade-school. You could theoretically compute an infinitely long
    equation by simply operating on single digits,
    Not in finite time, you couldn't (excepting a few silly cases where the
    equation is "infinitely long" only because of some rule that _can_ be
    finitely expressed, so you don't even have to LOOK at all the equation
    to solve [which is what I guess you mean by "compute"...?] it -- if you
    have to LOOK at all of the equation, and it's infinite, you can't get
    done in finite time).
    yet Python, with all of
    its resources, can't overcome this hurdle?
    The hurdle of decimal arithmetic, you mean? Download Python 2.4 and
    play with decimal to your heart's content. Or do you mean fractions?
    Then download gmpy and ditto. There are also packages for symbolic
    computation and even more exotic kinds of arithmetic.

    In practice, with the sole exception of monetary computations (which may
    often be constrained by law, or at the very least by customary
    practice), there is no real-life use in which the _accuracy_ of floating
    point isn't ample. There are nevertheless lots of traps in arithmetic,
    but switching to forms of arithmetic different from float doesn't really
    make all the traps magically disappear, of course.

    However, I understand Python's limitation in this regard. This
    inaccuracy stems from the traditional C mindset, which typically
    dismisses any approach not directly supported in hardware. As the FAQ
    Ah, I see, a case of "those who can't be bothered to learn a LITTLE
    history before spouting off" etc etc. Python's direct precursor, the
    ABC language, used unbounded-precision rationals. As a result (obvious
    to anybody who bothers to learn a little about the inner workings of
    arithmetic), the simplest-looking string of computations could easily
    consume all the memory at your computer's disposal, and then some, and
    apparently unbounded amounts of time. It turned out that users object,
    most of the time, to having some apparently trivial computation take
    hours, rather than seconds, in order to be unboundedly precise rather
    than, say, precise to "just" a couple hundred digits (far more digits
    than you need to count the number of atoms in the Galaxy). So,
    unbounded rationals as a default are out -- people may sometimes SAY
    they want them, but in fact, in an overwhelming majority of the cases,
    they actually do not (oh yes, people DO lie, first of all to
    themselves:-).

    As for decimals, that's what a very-high level language aiming for a
    niche very close to Python used from the word go. It got started WAY
    before Python -- I was productively using it over 20 years ago -- and
    had the _IBM_ brand on it, which at the time pretty much meant the
    thousand-pound gorilla of computers. So where is it now, having had
    all of these advantages (started years before, had IBM behind it, AND
    was totally free of "the traditional C mindset", which was very far from
    traditional at the time, particularly within IBM...!)...?

    Googlefight is a good site for this kind of comparisons... try:

    <http://www.googlefight.com/cgi-bin/compare.pl?q1=python&q2=rexx&B1=Make+a+fight%21&compare=1&langue=us>

    and you'll see...:
    """
    Number of results on Google for the keywords python and rexx:

    python
    (10 300 000 results)
    versus
    rexx
    ( 419 000 results)

    The winner is: python
    """

    Not just "the winner", an AMAZING winner -- over TWENTY times more
    popular, despite all of Rexx's advantages! And while there are no doubt
    many fascinating components to this story, a key one is among the pearls
    of wisdom you can read by doing, at any Python interactive prompt:
    import this
    and it is: "practicality beats purity". Rexx has always been rather
    puristic in its adherence to its principles; Python is more pragmatic.
    It turns out that this is worth a lot in the real world. Much the same
    way, say, C ground PL/I into the dust. Come to think of it, Python's
    spirit is VERY close to C (4 and 1/2 of the 5 principles listed as
    "the spirit of C" in the C ANSI Standard's introduction are more closely
    followed by Python than by other languages which borrowed C's syntax,
    such as C++ or Java), while Rexx does show some PL/I influence (not
    surprising for an IBM-developed language, I guess).

    Richard Gabriel's famous essay on "Worse is Better", e.g. at
    <http://www.jwz.org/doc/worse-is-better.html>, has more, somewhat bitter
    reflections in the same vein.

    Python never had any qualms in getting outside the "directly supported
    in hardware" boundaries, mind you. Dictionaries and unbounded precision
    integers are (and have long been) Python mainstays, although neither the
    hardware nor the underlying C platform has any direct support for
    either. For non-integer computations, though, Python has long been well
    served by relying on C, and nowadays typically the HW too, to handle
    them, which implied the use of floating-point; and leaving the messy
    business of implementing the many other possibly useful kinds of
    non-integer arithmetic to third-party extensions (many in fact written
    in Python itself -- if you're not in a hurry, they're fine, too).

    With Python 2.4, somebody finally felt enough of an itch regarding the
    issue of getting support for decimal arithmetic in the Python standard
    library to go to the trouble of scratching it -- as opposed to just
    spouting off on a mailing list, or even just implementing what they
    personally needed as just a third-party extension (there are _very_ high
    hurdles to jump, to get your code into the Python standard library, so
    it needs strong motivation to do so as opposed to just releasing your
    own extension to the public).
    states, this problem is due to the "underlying C platform". I just find
    it funny how a $20 calculator can be more accurate than Python running
    on a $1000 Intel machine.
    You can get a calculator much cheaper than that these days (and "intel
    machines" not too out of the mainstream for well less than half, as well
    as several times, your stated price). It's pretty obvious that the
    price of the hardware has nothing to do with that "_CAN_ be more
    accurate" issue (my emphasis) -- which, incidentally, remains perfectly
    true even in Python 2.4: it can be less, more, or just as accurate as
    whatever calculator you're targeting, since the precision of decimal
    computation is one of the aspects you can customize specifically...


    Alex
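
    A quick sketch of that last point (an added example, not Alex's code):
    the decimal module's context lets you dial the precision up or down to
    match whatever calculator you are targeting:

        from decimal import Decimal, getcontext
        getcontext().prec = 50      # fifty significant digits
        print Decimal(1) / Decimal(7)
        # 0.14285714285714285714285714285714285714285714285714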
  • Gary Herron at Sep 19, 2004 at 5:16 pm
    A nice thoughtful answer Alex, but possibly wasted, as it's been
    suggested that he is just a troll. (Note his assertion that Pi equals 22/7
    in one post and the assertion that it is just a common approximation
    in another, and this in a thread about numeric imprecision.)

    Gary Herron

  • Alex Martelli at Sep 19, 2004 at 5:51 pm

    Gary Herron wrote:

    A nice thoughtful answer Alex, but possibly wasted, as it's been
    suggested that he is just a troll. (Note his assertion that Pi equals 22/7
    in one post and the assertion that it is just a common approximation
    in another, and this in a thread about numeric imprecision.)
    If he's not a troll, he _should_ be -- it's just too sad to consider the
    possibility that somebody is really that ignorant and arrogant at the
    same time (although, tragically, human nature is such as to make that
    entirely possible). Nevertheless, newsgroups and mailing lists have an
    interesting characteristic: no "thoughtful answer" need ever be truly
    wasted, even if the person you're answering is not just a troll, but a
    robotized one, _because there are other readers_ who may find
    interest, amusement, or both, in that answer. On a newsgroup, or
    very-large-audience mailing list, one doesn't really write just for the
    person you're nominally answering, but for the public at large.


    Alex
  • Chris S. at Sep 19, 2004 at 10:24 pm

    Alex Martelli wrote:
    If he's not a troll, he _should_ be -- it's just too sad to consider the
    possibility that somebody is really that ignorant and arrogant at the
    same time (although, tragically, human nature is such as to make that
    entirely possible). Nevertheless, newsgroups and mailing lists have an
    interesting characteristic: no "thoughtful answer" need ever be truly
    wasted, even if the person you're answering is not just a troll, but a
    robotized one, _because there are other readers_ who may find
    interest, amusement, or both, in that answer. On a newsgroup, or
    very-large-audience mailing list, one doesn't really write just for the
    person you're nominally answering, but for the public at large.
    Exactly. One could wonder if more timid accusations would have
    engendered such insightful and accurate responses. However, I do
    apologize if I appeared trollish. Thank you for your contributions.
  • Grant Edwards at Sep 19, 2004 at 5:00 pm

    On 2004-09-19, Chris S. wrote:

    Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
    arithmetic is meant for.
    <boggle>
    Any decimal can be represented by a fraction, yet not all
    fractions can be represented by decimals. My point is that
    such simple accuracy should be supported out of the box.
    It is. Just not with floating point.
    So are our brains, yet we somehow manage to compute 12.10 + 8.30
    correctly using nothing more than simple skills developed in
    grade-school. You could theoretically compute an infinitely long
    equation by simply operating on single digits, yet Python, with all of
    its resources, can't overcome this hurdle?
    Sure it can.
    However, I understand Python's limitation in this regard. This
    inaccuracy stems from the traditional C mindset, which
    typically dismisses any approach not directly supported in
    hardware. As the FAQ states, this problem is due to the
    "underlying C platform". I just find it funny how a $20
    calculator can be more accurate than Python running on a $1000
    Intel machine.
    You're clueless on so many different points, I don't even know
    where to start...

    --
    Grant Edwards   grante at visi.com
    Yow! I'm also pre-POURED, pre-MEDITATED and pre-RAPHAELITE!!
  • Alex Martelli at Sep 19, 2004 at 5:21 pm
    Gary Herron wrote:
    ...
    irrational numbers like sqrt(2) and transcendental numbers like PI?
    Sqrt is a fair criticism, but Pi equals 22/7,
    What? WHAT? Are you nuts? Pi and 22/7 are most certainly not equal.
    They don't even share three digits beyond the decimal point. (Can you
    really be that ignorant about numbers and expect to contribute
    intelligently to a discussion about numbers? Pi is a non-repeating
    and non-ending number in base 10 or any other base.)
    Any _integer_ base -- you can find infinitely many irrational bases in
    which pi has repeating or terminating expansion (for example, you could
    use pi itself as a base;-). OK, OK, I _am_ being silly!-)
    If you are happy doing calculations with decimal numbers like 12.10 +
    8.30, then the Decimal package may be what you want, but that fails as
    soon as you want 1/3.
    But it fails in exactly the same way as a cheap calculator of the same
    precision, and some people just have a fetish for that.
    But then you could use a rational arithmetic
    package and get 1/3, but that would fail as soon as you needed sqrt(2)
    or Pi. But then you could try ... what? Can you see the pattern
    Uh, "constructive reals", such as those you can find at
    <http://www.hpl.hp.com/personal/Hans_Boehm/crcalc/> ...?

    "Numbers are represented exactly internally to the calculator, and then
    evaluated on demand to guarantee an error in the displayed result that
    is strictly less than one in the least significant displayed digit. It
    is possible to scroll the display to the right to generate essentially
    arbitrary precision in the result." It has trig, logs, etc.
    here? Any representation of the infinity of numbers on a finite
    computer *must* necessarily be unable to represent some (actually
    infinitely many) of those numbers. The inaccuracies stem from that
    fact.
    Yes, _but_. There is after all a *finite* set of reals you can describe
    (constructively and univocally) by equations that you can write finitely
    with a given finite alphabet, right? So, infinitely many (and indeed
    infinitely many MORE, since reals overall are _uncountably_ infinite;-)
    reals are of no possible constructive interest -- if we were somehow
    given one, we would have no way to verify that it is what it is claimed
    to be, anyway, since no claim for it can be written finitely over
    whatever finite alphabet we previously agreed to use. So, I think we
    can safely restrict discourse by ignoring, at least, the _uncountably_
    infinite aspects of reals and sticking to some "potentially
    constructively interesting" subset that is _countably_ infinite.

    At this point, the theoretical problems aren't much worse than those you
    meet with, say, integers, or just rationals, etc. Sure, you can't
    represent any but a finite subset of integers (or rationals, etc) in a
    finite computer _in a finite time_, yet that implies no _inaccuracy_
    whatsoever -- specify your finite alphabet and the maximum size of
    equation you want to be able to write, and I'll give you the specs for
    how big a computer I will need to serve your needs. Easy!

    A "constructive reals" library able to hold and manipulate all reals
    that can be described as the sum of convergent series such that the Nth
    term of the series is a ratio of polynomials in N whose tuples of
    coefficients fit comfortably in memory (with space left over for some
    computation), for example, would amply suffice to deal with all commonly
    used 'transcendentals', such as the ones arising from trigonometry,
    logarithms, etc, and many more besides. (My memories of arithmetic are
    SO rusty I don't even recall if adding similarly constrained continued
    fractions to the mix would make any substantial difference, sigh...).

    If you ask for some sufficiently big computation you may happen to run
    out of memory -- not different from what happens if you ask for a
    raising-to-power between two Python long's which happen to be too big
    for your computer's memory. Buy more memory, move to a 64-bit CPU (and
    a good OS for it), whatever: it's not a problem of _accuracy_, anyway.

    It MAY be a problem of TIME -- if you're in any hurry, and have upgraded
    your computer to have a few hundred terabytes of memory, you MAY be
    disappointed at how deucedly long it takes to get that multiplication
    between longs that just happened to overflow the memory resources of
    your previous machine which had just 200 TB. If you ask for an infinite
    representation of whatever, it will take an infinite time for you to see
    it, of course -- your machine will keep emitting digits at whatever
    rate, even very fast, but if the digits never stop coming then you'll
    never stop staring at them, never able to truthfully say "I've seen them ALL".
    But that's an effect that's easy to get even with such a simple
    computation as 1/3... it may easily be held with perfect accuracy inside
    the machine, just by using rationals, but if you want to see it as a
    decimal number you'll never be done. Similarly for sqrt(2) and so on.

    But again it's not a problem of _accuracy_, just one of patience;-). If
    the machine is well programmed you'll never see even one wrong digit, no
    matter how long you keep staring and hoping to catch an accuracy issue.

    The reason we tend to use limited accuracy more often than strictly
    needed is that we typically ARE in a hurry. E.g., I have measured the
    radius of a semispherical fishbowl at 98.13 cm and want to know how much
    water I need to fetch to fill it: I do NOT want to spend eons checking
    out the millionth digit -- I started with a measurement that has four or
    so significant digits (way more than _typical_ real-life measurements in
    most cases, btw), it's obvious that I'll be satisfied with just a few
    more significant digits in the answer. In fact, Python's floats are
    _just fine_ for just about any real-life computation, excluding ones
    involving money (which may often be constrained by law or at least by
    common practice) and some involving combinatorial arithmetic (and thus,
    typically, ratios between very large integers), but the latter only
    apply to certain maniacs trying to compute stuff about games (such as,
    yours truly;-).

    So while a calculator will fool you into believing it is accurate when
    it is not, it is Python's design decision to not cater to fools.
    Well put (+1 QOTW). But constructive reals are still COOL, even if
    they're not of much practical use in real life;-).


    Alex
  • Alex Martelli at Sep 19, 2004 at 5:21 pm
    Paul Rubin wrote:
    ...
    The issue here is that Python's behavior confuses the hell out of some
    new users. There is a separate area of confusion, that

    a = 2 / 3

    sets a to 0, and to clear that up, the // operator was introduced and
    Python 3.0 will supposedly treat / as floating-point division even
    when both operands are integers. That doesn't solve the also very
    common confusion that (1.0/3.0)*3.0 = 0.99999999. Rational arithmetic
    can solve that.
    Yes, but applying rational arithmetic by default might slow some
    computations far too much for beginners' liking! My favourite for
    Python 3.0 would be to have decimals by default, with special notations
    to request floats and rationals (say '1/3r' for a rational, '1/3f' for a
    float, '1/3' or '1/3d' for a decimal with some default parameters such
    as number of digits). This is because my guess is that most naive users
    would _expect_ decimals by default...

    Yes, with rational arithmetic, it will still be true that
    sqrt(5.)**2.0 doesn't quite equal 5, but hardly anyone ever complains
    about that.

    And yes, there are languages that can do exact arithmetic on arbitrary
    algebraic numbers, but they're normally not used for general-purpose
    programming.
    Well, you can pretty easily use constructive reals with Python, see for
    example <http://more.btexact.com/people/briggsk2/XR.html> -- that's a
    vastly vaster set than just algebraic numbers. If we DO want precision,
    after all, why should sqrt(5) be more important than log(3)?


    Alex
  • Alex Martelli at Sep 19, 2004 at 10:33 pm

    Paul Rubin wrote:

    aleaxit at yahoo.com (Alex Martelli) writes:
    Yes, but applying rational arithmetic by default might slow some
    computations far too much for beginners' liking!
    I dunno, lots of Lisp dialects do rational arithmetic by default.
    And...? What fraction of beginners exposed to Lisp as their first
    language just love the resulting precision/speed tradeoff...? I think
    Richard Gabriel's "Worse is Better" article applies quite well here...

    Well, you can pretty easily use constructive reals with Python, see for
    example <http://more.btexact.com/people/briggsk2/XR.html> -- that's a
    vastly vaster set than just algebraic numbers. If we DO want precision,
    after all, why should sqrt(5) be more important than log(3)?
    I don't know that it's generally tractable to do exact computation on
    constructive reals. How do you implement comparison (<, >, ==)?
    Well, if you can generate decimal representations on demand (and you'd
    better, as the user might ask for such output at any time with any
    a-priori unpredictable number of digits), worst case you can compare
    them lexicographically, one digit at a time, until you find a different
    digit (assuming identical signs and integer parts) -- except that equal
    numbers would not terminate by this procedure. Briggs' implementation
    finesses the issue by comparing no more than k significant digits, 1000
    by default;-)


    Alex
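
    A rough sketch of that comparison scheme (an illustration only; real
    packages such as Briggs' are considerably more careful): walk two
    streams of decimal digits and stop at the first difference, giving up
    after k digits since equal numbers would never terminate:

        def compare(digits_a, digits_b, k=1000):
            # digits_a, digits_b: iterators yielding successive decimal
            # digits; equal signs and integer parts assumed for simplicity.
            for _ in range(k):
                da, db = digits_a.next(), digits_b.next()
                if da != db:
                    return cmp(da, db)   # -1 or 1 at first differing digit
            return 0    # no provable difference in the first k digits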
  • Johan Ur Riise at Sep 20, 2004 at 12:08 am

    aleaxit at yahoo.com (Alex Martelli) writes:

    Paul Rubin wrote:
    aleaxit at yahoo.com (Alex Martelli) writes:
    Yes, but applying rational arithmetic by default might slow some
    computations far too much for beginners' liking!
    I dunno, lots of Lisp dialects do rational arithmetic by default.
    And...? What fraction of beginners exposed to Lisp as their first
    language just love the resulting precision/speed tradeoff...? I think
    Richard Gabriel's "Worse is Better" article applies quite well here...
    There is not much of a precision/speed tradeoff in Common Lisp: you can
    use fractional numbers (which give you exact results with operations
    +, -, * and /) internally and round them off to decimal before
    display. With the OP's example:

    (+ 1210/100 830/100)
    102/5

    (coerce * 'float)
    20.4

    Integers can have an unlimited number of digits, but the precision of
    floats and reals is still limited to what the hardware can do, so if
    you want to display for instance 2/3 with lots of decimals, you have
    to multiply it first and insert the decimal point yourself, like in

    (format t ".~d" (round (* 2/3 10000000000000000000)))
    .6666666666666666667

    Of course, long integers (bignums) are slower than short (fixnums), but
    with automatic conversion to and fro, you pay the penalty only when
    you need it.
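
    The same computation reads almost identically in Python with the
    fractions module (a parallel sketch; note that fractions only appeared
    in Python 2.6, after this thread):

        from fractions import Fraction
        total = Fraction(1210, 100) + Fraction(830, 100)
        print total           # 102/5
        print float(total)    # 20.4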
  • Tim Peters at Sep 20, 2004 at 4:51 am
    [Paul Rubin]
    I don't know that it's generally tractable to do exact computation on
    constructive reals. How do you implement comparison (<, >, ==)?
    Equality of constructive reals is undecidable. In practice, CR
    packages allow specifying a "number of digits of evidence" parameter
    N, so that equality is taken to mean "provably don't differ by more
    than a unit in the N'th digit".
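
    In code, that convention amounts to something like this minimal helper
    (a sketch, not from any particular CR package):

        def agree_to_n_digits(x, y, n):
            # "equal" = don't differ by more than a unit in the n'th digit
            return abs(x - y) <= 10.0 ** (-n)

        print 0.1 + 0.2 == 0.3                       # False: exact comparison
        print agree_to_n_digits(0.1 + 0.2, 0.3, 12)  # True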
  • Anna Martelli Ravenscroft at Sep 26, 2004 at 12:17 pm
    Note: I posted a response yesterday, but it apparently never appeared (I
    was having some trouble with my newsreader) so I'm posting this now. My
    apologies if it is a duplicate.

    Alex Martelli wrote:
    Paul Rubin wrote:
    ...
    The issue here is that Python's behavior confuses the hell out of some
    new users. There is a separate area of confusion, that

    a = 2 / 3

    sets a to 0, and to clear that up, the // operator was introduced and
    Python 3.0 will supposedly treat / as floating-point division even
    when both operands are integers. That doesn't solve the also very
    common confusion that (1.0/3.0)*3.0 = 0.99999999. Rational arithmetic
    can solve that.

    Yes, but applying rational arithmetic by default might slow some
    computations far too much for beginners' liking! My favourite for
    Python 3.0 would be to have decimals by default, with special notations
    to request floats and rationals (say '1/3r' for a rational, '1/3f' for a
    float, '1/3' or '1/3d' for a decimal with some default parameters such
    as number of digits). This is because my guess is that most naive users
    would _expect_ decimals by default...
    I agree. Naive (e.g., non-CS, non-Mathematician/Engineer) users who grew
    up with calculators and standard math courses in school may have never
    even heard of floats! (I made it as far as Calculus 2 in college, but
    still had never heard of them.)

    This brings me to another issue. Often c.l.py folks seem surprised that
    people don't RTFM about floats before they ask about why their math
    calculations aren't working. Most of the folks asking have no idea they
    are *doing* float arithmetic, so when they try to google for the answer,
    or look in the docs for the answer, and skip right past the "Float
    Arithmetic" section of the FAQ and the Tutorial, it's because they're
    not DOING float arithmetic - that they know of... So, of course they
    won't read those sections to look for their answer, any more than they'd
    read the Complex Number calculations section... People who know about
    floats don't need that section - the ones who do need it don't know
    they need it.

    If you want people to find those sections when they are looking for
    answers to why their math calculations aren't working - I suggest you
    remove the "FLOAT" from the title. Something in the FAQ like: "Why are
    my math calculations giving weird or unexpected results?" would attract
    a lot more of the people you WANT to read it. Once you've roped them in,
    *then* you can explain to them about floats...

    Anna Martelli Ravenscroft
  • Richard Hanson at Sep 26, 2004 at 7:52 pm

    Anna Martelli Ravenscroft wrote:

    If you want people to find those sections when they are looking for
    answers to why their math calculations aren't working - I suggest you
    remove the "FLOAT" from the title. Something in the FAQ like: "Why are
    my math calculations giving weird or unexpected results?" would attract
    a lot more of the people you WANT to read it. Once you've roped them in,
    *then* you can explain to them about floats...
    Excellent point.

    (Or, "+1" as the "oldbies" say. ;-) )

    Nice to "meet" you, too -- welcome! (Even if I'm primarily only a
    lurker.)

    (Alex mentioned you have a Fujitsu LifeBook -- I do, too, and like it
    very much!)

    ---

    [Note: I am having equipment and connectivity problems. I'll be back
    as I can when I get things sorted out better, and as appropriate (or
    inappropriate ;-) ). Thanks to you and to all for the civil
    and fun discussions!]


    Richard Hanson

    --
    sick<PERI0D>old<P0INT>fart<PIE-DEC0-SYMB0L>newsguy<MARK>com
  • Alex Martelli at Sep 26, 2004 at 8:13 pm
    Richard Hanson wrote:
    ...
    (Alex mentioned you have a Fujitsu LifeBook -- I do, too, and like it
    very much!)
    There are many 'series' of such "Lifebooks" nowadays -- it's become as
    un-descriptive as Sony's "Vaio" brand or IBM's "Thinkpad". Anna's is a
    P-Series -- 10.5" wide-form screen, incredibly tiny, light, VERY
    long-lasting batteries. It was the _only_ non-Apple computer around at
    the local MacDay (I'm a Mac fan, and she attended too, to keep an eye on
    me I suspect...;-), yet it got nothing but admiring "ooh!"s from the
    crowd of design-obsessed Machies (Apple doesn't make any laptop smaller
    than 12", sigh...).

    OBCLPY: Python runs just as wonderfully on her tiny P-Series as on my
    iBook, even though only Apple uses it within the OS itself;-)


    Alex
  • Richard Hanson at Sep 26, 2004 at 10:49 pm
    [Connection working again...?]

    Alex Martelli wrote:
    Richard Hanson wrote:
    ...
    (Alex mentioned you have a Fujitsu LifeBook -- I do, too, and like it
    very much!)
    There are many 'series' of such "Lifebooks" nowadays -- it's become as
    un-descriptive as Sony's "Vaio" brand or IBM's "Thinkpad". Anna's is a
    P-Series -- 10.5" wide-form screen, incredibly tiny, light, VERY
    long-lasting batteries.
    Ahem. As I said ;-) in my reply to your post mentioning Anna's P2000
    (in my MID: <lgebl0182hk6t0809ar3dh9925ptj5um5b at 4ax.com>), and in
    earlier postings re 2.4x installation difficulties, mine is a Fujitsu
    LifeBook P1120. (Sorry, Alex! I definitely *should* have mentioned the
    model again -- I'm just beginning to appreciate the difficulty of even
    *partially* keeping up with c.l.py. I'm learning, though. :-) )

    In any event, the Fujitsu LifeBook P1120 has an 8.9" wide-format
    screen, is 2.2lbs.-light with the smaller *very* long-lasting battery
    and 2.5lbs.-light with the very, *very* long-lasting battery, and has
    -- what tipped the scales, as it were, for my needs -- a touchscreen
    and stylus.
    It was the _only_ non-Apple computer around at
    the local MacDay (I'm a Mac fan, and she attended too, to keep an eye on
    me I suspect...;-), yet it got nothing but admiring "ooh!"s from the
    crowd of design-obsessed Machies (Apple doesn't make any laptop smaller
    than 12", sigh...).
    I can feel your pain. I would switch to Apple in a second if they had
    such light models (and if I had the bucks ;-) ). I need a very light
    machine for reasons specified earlier. (Okay, slightly reluctantly:
    Explicit may be better even with *this* particular info -- I have
    arthritis [ankylosing spondylitis] and need very light laptops to read
    and write with. :-) )
    OBCLPY: Python runs just as wonderfully on her tiny P-Series as on my
    iBook, even though only Apple uses it within the OS itself;-)
    ObC.l.pyFollow-up: Python also runs very well on my tinier ;-) P1120
    with the Transmeta Crusoe TM5800 processor running at 800MHz and with
    256MB RAM and a 256KB L2 on-chip cache -- even using Win2k. :-) It's
    really nice not needing a fan on a laptop, as well -- even when
    calculating Decimal's sqrt() to thousands of decimal places. ;-)

    ObExplicit-metacomment: I'm only attempting a mixture of info *and*
    levity. :-)


    what?-men-arguing-about-whose-is-*tinier*?!'ly y'rs,
    Richard Hanson

    --
    sick<PERI0D>old<P0INT>fart<PIE-DEC0-SYMB0L>newsguy<MARK>com
  • Cameron Laird at Sep 27, 2004 at 1:08 am
    In article <adeel054s0qvd0qas7nk1v9i93vk3tdlv1 at 4ax.com>,
    Richard Hanson wrote:
    [Connection working again...?]

    Alex Martelli wrote:
    Richard Hanson wrote:
    ...
    (Alex mentioned you have a Fujitsu LifeBook -- I do, too, and like it
    very much!)
    .
    .
    .
    Ahem. As I said ;-) in my reply to your post mentioning Anna's P2000
    (in my MID: <lgebl0182hk6t0809ar3dh9925ptj5um5b at 4ax.com>), and in
    earlier postings re 2.4x installation difficulties, mine is a Fujitsu
    LifeBook P1120. (Sorry, Alex! I definitely *should* have mentioned the
    model again -- I'm just beginning to appreciate the difficulty of even
    *partially* keeping up with c.l.py. I'm learning, though. :-) )

    In any event, the Fujitsu LifeBook P1120 has an 8.9" wide-format
    screen, is 2.2lbs.-light with the smaller *very* long-lasting battery
    and 2.5lbs.-light with the very, *very* long-lasting battery, and has
    -- what tipped the scales, as it were, for my needs -- a touchscreen
    and stylus.
    .
    .
    .
    I can feel your pain. I would switch to Apple in a second if they had
    such light models (and if I had the bucks ;-) ). I need a very light
    machine for reasons specified earlier. (Okay, slightly reluctantly:
    Explicit may be better even with *this* particular info -- I have
    arthritis [ankylosing spondylitis] and need very light laptops to read
    and write with. :-) )
    OBCLPY: Python runs just as wonderfully on her tiny P-Series as on my
    iBook, even though only Apple uses it within the OS itself;-)
    ObC.l.pyFollow-up: Python also runs very well on my tinier ;-) P1120
    with the Transmeta Crusoe TM5800 processor running at 800MHz and with
    256MB RAM and a 256KB L2 on-chip cache -- even using Win2k. :-) It's
    really nice not needing a fan on a laptop, as well -- even when
    calculating Decimal's sqrt() to thousands of decimal places. ;-)
    .
    .
    .
    Is Linux practical on these boxes? How do touch-typists like them?
  • Richard Hanson at Sep 27, 2004 at 4:14 am

    Cameron Laird wrote:

    In article <adeel054s0qvd0qas7nk1v9i93vk3tdlv1 at 4ax.com>,
    Richard Hanson <me at privacy.net> wrote [comparing
    Anna Martelli Ravenscroft's Fujitsu LifeBook P2000 to my
    (Richard Hanson's) Fujitsu LifeBook P1120]:
    [...]

    In any event, the Fujitsu LifeBook P1120 has an 8.9" wide-format
    screen, is 2.2lbs.-light with the smaller *very* long-lasting battery
    and 2.5lbs.-light with the very, *very* long-lasting battery, and has
    -- what tipped the scales, as it were, for my needs -- a touchscreen
    and stylus.

    [...]

    Alex Martelli wrote:
    OBCLPY: Python runs just as wonderfully on her tiny P-Series as on my
    iBook, even though only Apple uses it within the OS itself;-)
    ObC.l.pyFollow-up: Python also runs very well on my tinier ;-) P1120
    with the Transmeta Crusoe TM5800 processor running at 800MHz and with
    256MB RAM and a 256KB L2 on-chip cache -- even using Win2k. :-) It's
    really nice not needing a fan on a laptop, as well -- even when
    calculating Decimal's sqrt() to thousands of decimal places. ;-)
    .
    .
    .
    Is Linux practical on these boxes?
    I've found on the web accounts of two people, at least, getting the
    P1120 working with Linux and with at least partial functionality of
    the touchscreen -- one individual claimed full functionality. (I found
    some accounts of success with getting Linux working on the P2000, as
    well.) I'm currently waiting to purchase a new harddrive for my P1120
    to see for myself if I can get Linux installed with the touchscreen
    fully functioning -- which, as I mentioned in my post, is particularly
    important to me.
    How do touch-typists like them?
    I've been touch-typing since I was about nine-years-old. When I was
    looking for a very light laptop for reasons mentioned in my post, I
    was concerned that I wouldn't be able to touch-type on the ~85% (16mm
    pitch) keyboard. I went to a local "big box" computer store (who shall
    remain nameless) and tried one of the P1120s -- within seconds I
    realized I could easily adapt and subsequently ordered one from
    Fujitsu.

    I would estimate that I was typing *faster* and with substantially
    *fewer* errors inside of several weeks -- and occasional uses of the
    standard-sized keyboard on my HP Omnibook 900B made me feel like a
    Munchkin. :-)

    Now that I'm temporarily back on the standard-pitch Omnibook 900B, I
    have adapted to the what-had-come-to-seem-a-humongous keyboard, once
    again. I most definitely prefer the P1120's keyboard.

    I note that on the P1120, I could reach difficult key-combinations
    much easier, and also, that I could often hold down two keys of a
    three-key combo, say, with one finger or thumb.

    Your mileage may vary, as they say, but I now prefer smaller
    keyboards.

    The "instant on-off" works very well, too. I highly recommend the
    P1120 for anyone who isn't put off by the smaller keyboard. (Drawing
    on the screen with the stylus is pretty trick, as well.)


    Richard Hanson

    --
    sick<PERI0D>old<P0INT>fart<PIE-DEC0-SYMB0L>newsguy<MARK>com
  • Alex Martelli at Sep 27, 2004 at 6:59 am
    Cameron Laird wrote:
    ...
    Is Linux practical on these boxes?
    Never got 'sleep' to work (there's supposed to be a 'hibernate' thingy,
    but I haven't found it to work reliably either). AFAIMC, that's the
    biggie; everything else is fine.
    How do touch-typists like them?
    Just fine (the 10.5" P2000 -- can't speak for the even-smaller P1000s).


    Alex
  • Richard Hanson at Sep 27, 2004 at 5:49 pm

    Alex Martelli wrote:

    Cameron Laird wrote:
    How do touch-typists like them?
    Just fine (the 10.5" P2000 -- can't speak for the even-smaller P1000s).
    I commented on my P1120 -- works better for me than the standard-sized
    keyboards. See my MID:

    <141fl01ecvoobq2h33ud827p6mefvu3183 at 4ax.com>


    Richard Hanson

    --
    sick<PERI0D>old<P0INT>fart<PIE-DEC0-SYMB0L>newsguy<MARK>com
  • Anna Martelli Ravenscroft at Sep 27, 2004 at 7:29 am

    Cameron Laird wrote:
    In article <adeel054s0qvd0qas7nk1v9i93vk3tdlv1 at 4ax.com>,
    Richard Hanson wrote:
    [Connection working again...?]

    Alex Martelli wrote:

    Richard Hanson wrote:
    ...
    (Alex mentioned you have a Fujitsu LifeBook -- I do, too, and like it
    very much!)
    .
    .
    .
    Ahem. As I said ;-) in my reply to your post mentioning Anna's P2000
    (in my MID: <lgebl0182hk6t0809ar3dh9925ptj5um5b at 4ax.com>), and in
    earlier postings re 2.4x installation difficulties, mine is a Fujitsu
    LifeBook P1120. (Sorry, Alex! I definitely *should* have mentioned the
    model again -- I'm just beginning to appreciate the difficulty of even
    *partially* keeping up with c.l.py. I'm learning, though. :-) )

    In any event, the Fujitsu LifeBook P1120 has an 8.9" wide-format
    screen, is 2.2lbs.-light with the smaller *very* long-lasting battery
    and 2.5lbs.-light with the very, *very* long-lasting battery, and has
    -- what tipped the scales, as it were, for my needs -- a touchscreen
    and stylus.
    .
    .
    .
    I can feel your pain. I would switch to Apple in a second if they had
    such light models (and if I had the bucks ;-) ). I need a very light
    machine for reasons specified earlier. (Okay, slightly reluctantly:
    Explicit may be better even with *this* particular info -- I have
    arthritis [ankylosing spondylitis] and need very light laptops to read
    and write with. :-) )

    OBCLPY: Python runs just as wonderfully on her tiny P-Series as on my
    iBook, even though only Apple uses it within the OS itself;-)
    ObC.l.pyFollow-up: Python also runs very well on my tinier ;-) P1120
    with the Transmeta Crusoe TM5800 processor running at 800MHz and with
    256MB RAM and a 256KB L2 on-chip cache -- even using Win2k. :-) It's
    really nice not needing a fan on a laptop, as well -- even when
    calculating Decimal's sqrt() to thousands of decimal places. ;-)
    .
    .
    .
    Is Linux practical on these boxes? How do touch-typists like them?
    Well, mine is dual boot. I'm currently experimenting with Ubuntu on my
    Linux partition... I'm really REALLY hoping for a linux kernel with a
    decent 'sleep' function to come up RSN because I despise having to work
    in Windoze XP instead of Linux. Ah well, at least the XP hasn't been too
    terrible to work on - it runs surprisingly smoothly, particularly with
    Firefox and Thunderbird for browsing and email...

    And I can touch type just fine - except for the damn capslock key (there
    is NO purpose whatsoever for a capslock key as a standalone key on a
    modern keyboard, imho). I've had only minor problems with the touch
    typing that I do - and that, only due to the slightly different layout
    of the SHIFT key on the right side compared to where I'd normally expect
    to find it: keyboard layout is a common bugbear on laptops though,
    regardless of size....

    Anna
  • Richard Hanson at Sep 27, 2004 at 5:42 pm
    Anna Martelli Ravenscroft wrote:

    [This post primarily contains solutions to Anna's problem with the
    Fujitsu LifeBook P2000's key locations. But, there's also some 2.4x
    MSI Installer anecdotal info in my footnote.]
    Cameron Laird wrote:
    Is Linux practical on these boxes? How do touch-typists like them?
    Well, mine is dual boot. I'm currently experimenting with Ubuntu on my
    Linux partition... I'm really REALLY hoping for a linux kernel with a
    decent 'sleep' function to come up RSN because I despise having to work
    in Windoze XP instead of Linux. Ah well, at least the XP hasn't been too
    terrible to work on - it runs surprisingly smoothly, particularly with
    Firefox and Thunderbird for browsing and email...
    My Fujitsu LifeBook P1120 is (was) only single-booting Win2k, so I
    can't help with the Linux "sleep" function as yet -- I'll be working
    on dual-booting Win2k and Linux on the P1120 as soon as I get the
    requisite hardware to rebuild things. The "sleep" function is a *very*
    high priority for me, so if and when I find a solution, I'll post it
    if you're still needing such -- may well work for your P2000 as well.
    And I can touch type just fine - except for the damn capslock key (there
    is NO purpose whatsoever for a capslock key as a standalone key on a
    modern keyboard, imho).
    It seems *many* folks agree; read below.
    I've had only minor problems with the touch
    typing that I do - and that, only due to the slightly different layout
    of the SHIFT key on the right side compared to where I'd normally expect
    to find it: keyboard layout is a common bugbear on laptops though,
    regardless of size....
    [I lost all my recent archives in a recent series of "crashes" -- so I
    regoogled this morning for the info herein.]

    On Win2k, and claimed for WinXP, one can manually edit the registry to
    remap any of the keys. I originally did this on my P1120 with Win2k.
    Worked just fine.
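
    (For the curious, here is roughly what that registry edit amounts to,
    scripted as a hedged sketch with Python's standard _winreg module. The
    "Scancode Map" byte layout below -- remapping CapsLock, scancode 0x3a,
    to left Ctrl, scancode 0x1d -- follows the widely documented format,
    but treat the exact bytes as an assumption to verify against your own
    sources, back up the registry first, and note that the change only
    takes effect after a reboot:

    import _winreg

    data = ('\x00' * 8                # version + flags fields, all zero
            + '\x02\x00\x00\x00'      # 2 entries: 1 mapping + terminator
            + '\x1d\x00\x3a\x00'      # new=LeftCtrl 0x1d, old=CapsLock 0x3a
            + '\x00' * 4)             # terminator entry
    key = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE,
                          r'SYSTEM\CurrentControlSet\Control\Keyboard Layout',
                          0, _winreg.KEY_SET_VALUE)
    _winreg.SetValueEx(key, 'Scancode Map', 0, _winreg.REG_BINARY, data)
    _winreg.CloseKey(key)

    This is a sketch, not a tested tool; the point is only that the whole
    remap is a single binary registry value.)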

    (I had saved to disc before a Win98SE crash just a few minutes ago
    ;-), the manual regedit values. If you're interested in 'em you may
    post here or contact me off-group. The email addie below works if
    unmunged -- ObExplicit: replace the angle-bracketed items with the
    appropriate symbol.)

    Also, there are tools available from both MS, and for those who don't
    like to visit MS ;-), free from many other helpful folks.

    If my memory serves, I liked best the (freeware, I believe) tool
    KeyTweak:

    <http://webpages.charter.net/krumsick/KeyTweak_install.exe>

    available from this page:

    <http://webpages.charter.net/krumsick>

    ---

    MS's tool is Remapkey.exe. (NB: I have not tried this tool --
    *usually* my firewall blocks MS :-) [which required an unblocking to
    install 2.4ax because of the new MSI Installer[1] :-) ].) This tool
    may already be on one of your MS CDs in the reskit dirs (I haven't
    looked in mine).

    In any event, one webpage:

    <http://www.annoyances.org/exec/forum/winxp/t1014389848>

    describes Remapkey.exe as:

    "... a nifty tool put out by microsoft (sic). Make sure you get the
    correct version for your OS. Not resource intensive like other dll
    apps."

    The page has these links (quoted herein):

    For individual downloads:
    <http://www.dynawell.com/support/ResKit/winxp.asp>

    Free from Microsoft site, for full downloads
    <http://www.microsoft.com/downloads/details.aspx?familyid=9d467a69-57ff-4ae7-96ee-b18c4790cffd&displaylang=en>

    or shorter link:
    <http://www.petri.co.il/download_windows_2003_reskit_tools.htm>

    ---

    I also have links to a few other freeware (some open-source) tools for
    all versions of Win32. I won't add them now, but repost or contact me
    if you want more info from my research.

    ---

    Additionally, I found many solutions for Linux, but haven't
    investigated those as (as I said) I have not yet installed Linux on my
    Fujitsu LifeBook P1120. Again, if you have trouble locating a Linux
    key-remapping method, let me know as I found lots of links for the
    better OS :-), as well.

    (I do note that after several reinstalls on the P1120, I was
    finally used to the capslock and shift key locations well enough to
    avoid wrongly hitting them very often. As they say, though, your
    mileage may vary.)


    Richard Hanson
    ___________________________________________
    [1] On this HP Omnibook 900B even after downloading the requisite MSI
    Install file, I experienced multiple errors trying to install 2.4a3.2
    on Win98SE. I finally got 2.4x working, but I note that the helpfiles
    are still missing the navigation icons. I have the MSI Installer error
    messages if Martin or anyone is interested.

    --
    sick<PERI0D>old<P0INT>fart<PIE-DEC0-SYMB0L>newsguy<MARK>com
  • Dan Bishop at Sep 19, 2004 at 10:24 pm
    Gary Herron <gherron at islandtraining.com> wrote in message news:<mailman.3494.1095586574.5135.python-list at python.org>...
    On Sunday 19 September 2004 01:00 am, Chris S. wrote:
    Gary Herron wrote:
    That's called rational arithmetic, and I'm sure you can find a package
    that implements it for you. However what would you propose for
    irrational numbers like sqrt(2) and transcendental numbers like PI?
    Sqrt is a fair criticism, but Pi equals 22/7,
    What? WHAT? Are you nuts? Pi and 22/7 are most certainly not equal.
    They don't even share three digits beyond the decimal point.
    There are, of course, reasonably accurate rational approximations of
    pi. For example, 355/113 (accurate to 6 decimal places), 312689/99532
    (9 decimal places), or 3126535/995207 (11 decimal places). Also, the
    IEEE 754 double-precision representation of pi is equal to the
    rational number 4503599627370496/281474976710656.
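
    The first three of those are easy to check against math.pi (a quick
    sketch; math.pi is itself just the IEEE double approximation):

    import math
    for num, den in [(355, 113), (312689, 99532), (3126535, 995207)]:
        err = abs(float(num) / den - math.pi)
        print '%d/%d is off by about %.1e' % (num, den, err)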
    ...Pi is a non-repeating and non-ending number in base 10 or any other base.)
    It has a terminating representation in base pi ;-)

    But you're right that it has a non-repeating and non-ending
    representation in any _useful_ base.
    If you are happy doing calculations with decimal numbers like 12.10 +
    8.30, then the Decimal package may be what you want, but that fails as
    soon as you want 1/3. But then you could use a rational arithmetic
    package and get 1/3, but that would fail as soon as you needed sqrt(2)
    or Pi.
    True, but who says we need to use the same representation for all
    numbers? Python _could_ use rationals in situations where they'd work
    (like int/int division), and only revert to floating-point when
    necessary (like math.sqrt and math.pi).
    And BTW, your calculator is not, in general, more accurate than the
    modern IEEE binary hardware representation of numbers used on most of
    today's computers.
    In general, it's _less_ accurate. In IEEE 754 double-precision,
    machine epsilon is 2**-53 (about 1e-16), but TI's calculators have a
    machine epsilon of 1e-14. Thus, in general, IEEE 754 gives you about
    2 more digits of precision than a calculator.
    It is more accurate on only a select subset of all numbers,
    Right. In most cases, base 10 has no inherent advantage. The number
    1.41 is a _less_ accurate representation of sqrt(2) than 0x1.6A. The
    number 3.14 is a less accurate representation of pi than 0x3.24. And
    it's not inherently more accurate to say that my height is 1.80 meters
    rather than 0x1.CD meters or 5'11".

    Base 10 _is_ more accurate for monetary amounts, and for this reason I
    agreed with the addition of a decimal class. But it would be a
    mistake to use decimal arithmetic, which has a performance
    disadvantage with no accuracy advantage, in the general case.
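
    The machine-epsilon figures above can be reproduced with the classic
    halving loop (a sketch, assuming IEEE 754 doubles and round-to-nearest):

    eps = 1.0
    while 1.0 + eps / 2 != 1.0:
        eps = eps / 2
    # eps is now 2**-52, the gap from 1.0 up to the next double; the
    # 2**-53 quoted above is the unit roundoff, half of this gap
    print eps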
  • Bengt Richter at Sep 20, 2004 at 1:32 am
    On 19 Sep 2004 15:24:31 -0700, danb_83 at yahoo.com (Dan Bishop) wrote:
    [...]
    There are, of course, reasonably accurate rational approximations of
    pi. For example, 355/113 (accurate to 6 decimal places), 312689/99532
    (9 decimal places), or 3126535/995207 (11 decimal places). Also, the
    IEEE 754 double-precision representation of pi is equal to the
    rational number 4503599627370496/281474976710656.
    divmod(4503599627370496,281474976710656)
    (16L, 0L)

    a little glitch somewhere ? ;-)

    Others are nice though, but the last one shows up same way:
    print '%s\n%s' %(ED('312689/99532').round(11), ED(math.pi,11))
    ED('3.14159265362')
    ED('3.14159265359')
    print '%s\n%s' %(ED('3126535/995207').round(13), ED(math.pi,13))
    ED('3.1415926535887')
    ED('3.1415926535898')
    print '%s\n%s' %(ED('4503599627370496/281474976710656'), ED(math.pi,'all'))
    ED('16')
    ED('3.141592653589793115997963468544185161590576171875')

    Regards,
    Bengt Richter
  • Dan Bishop at Sep 20, 2004 at 10:02 am
    bokr at oz.net (Bengt Richter) wrote in message news:<cilbv7$sh7$0$216.39.172.122 at theriver.com>...
    On 19 Sep 2004 15:24:31 -0700, danb_83 at yahoo.com (Dan Bishop) wrote:
    [...]
    There are, of course, reasonably accurate rational approximations of
    pi. For example, 355/113 (accurate to 6 decimal places), 312689/99532
    (9 decimal places), or 3126535/995207 (11 decimal places). Also, the
    IEEE 754 double-precision representation of pi is equal to the
    rational number 4503599627370496/281474976710656.
    divmod(4503599627370496,281474976710656)
    (16L, 0L)

    a little glitch somewhere ? ;-)
    Oops. I meant 884279719003555/281474976710656.
  • Paul Foley at Sep 20, 2004 at 3:16 am

    On 19 Sep 2004 15:24:31 -0700, Dan Bishop wrote:

    There are, of course, reasonably accurate rational approximations of
    pi. For example, 355/113 (accurate to 6 decimal places), 312689/99532
    (9 decimal places), or 3126535/995207 (11 decimal places). Also, the
    IEEE 754 double-precision representation of pi is equal to the
    rational number 4503599627370496/281474976710656.
    I hope not! That's equal to 16. (The double float closest to) pi is
    884279719003555/281474976710656

    --
    Don't worry about people stealing your ideas. If your ideas are any good,
    you'll have to ram them down people's throats.
    -- Howard Aiken
    (setq reply-to
    (concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))
  • Bengt Richter at Sep 20, 2004 at 5:42 am

    On Mon, 20 Sep 2004 15:16:07 +1200, Paul Foley wrote:
    On 19 Sep 2004 15:24:31 -0700, Dan Bishop wrote:

    There are, of course, reasonably accurate rational approximations of
    pi. For example, 355/113 (accurate to 6 decimal places), 312689/99532
    (9 decimal places), or 3126535/995207 (11 decimal places). Also, the
    IEEE 754 double-precision representation of pi is equal to the
    rational number 4503599627370496/281474976710656.
    I hope not! That's equal to 16. (The double float closest to) pi is
    884279719003555/281474976710656
    Amazingly, that is _exactly_ equal to math.pi
    from ut.exactdec import ED
    import math
    ED('884279719003555/281474976710656')
    ED('3.141592653589793115997963468544185161590576171875')
    ED(math.pi,'all')
    ED('3.141592653589793115997963468544185161590576171875')
    ED('884279719003555/281474976710656') == ED(math.pi,'all')
    True
    ED('884279719003555/281474976710656').astuple()
    (3141592653589793115997963468544185161590576171875L, 1L, -48)
    ED(math.pi,'all').astuple()
    (3141592653589793115997963468544185161590576171875L, 1L, -48)

    So it's also equal to the rational number
    3141592653589793115997963468544185161590576171875 / 10**48
    ED('3141592653589793115997963468544185161590576171875'
    ... '/1000000000000000000000000000000000000000000000000')
    ED('3.141592653589793115997963468544185161590576171875')

    or
    ED('3141592653589793115997963468544185161590576171875') / ED(10**48)
    ED('3.141592653589793115997963468544185161590576171875')

    Regards,
    Bengt Richter
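
    The exact rational behind math.pi can also be recovered with nothing
    but the standard library (a sketch; it relies on IEEE doubles being
    dyadic rationals with 53-bit significands):

    import math
    m, e = math.frexp(math.pi)   # math.pi == m * 2**e, with 0.5 <= m < 1
    num = long(m * 2 ** 53)      # scale the significand to an exact integer
    den = 2L ** (53 - e)
    while num % 2 == 0:          # reduce to lowest terms
        num, den = num // 2, den // 2
    print num, den               # 884279719003555 281474976710656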
  • Andrea Griffini at Sep 20, 2004 at 5:56 am

    On 19 Sep 2004 15:24:31 -0700, danb_83 at yahoo.com (Dan Bishop) wrote:
    Also, the
    IEEE 754 double-precision representation of pi is equal to the
    rational number 4503599627370496/281474976710656.
    I know the real uses of a precise pi are not that many... but
    isn't that quite a raw approximation? That fraction equals 16...
    Base 10 _is_ more accurate for monetary amounts, and for this reason I
    agreed with the addition of a decimal class. But it would be a
    mistake to use decimal arithmetic, which has a performance
    disadvantage with no accuracy advantage, in the general case.
    For monetary computation, why not use fixed point instead
    (i.e. integers representing the number of thousandths of a cent,
    for example)? IMO using floating point instead of something
    like arbitrary-precision integers is looking for trouble in
    that area, as often what is required is accuracy up to a
    specified fraction of the unit.

    Andrea

    PS: From a study seems that 75.7% of people tends to believe
    more in messages that contain precise numbers (like 75.7%).
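
    A minimal sketch of that fixed-point idea, using the OP's figures and
    plain integer cents (scale by a further factor of 1000 the same way if
    fractions of a cent matter):

    a = 1210   # $12.10, stored as an integer number of cents
    b = 830    # $8.30
    dollars, cents = divmod(a + b, 100)
    print '$%d.%02d' % (dollars, cents)   # $20.40, exactly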
  • Dan Bishop at Sep 20, 2004 at 4:07 am
    Paul Rubin <http://phr.cx at NOSPAM.invalid> wrote in message news:<7xfz5ein0h.fsf at ruckus.brouhaha.com>...
    Gary Herron <gherron at islandtraining.com> writes:
    Any representation of the infinity of numbers on a finite computer
    *must* necessarily be unable to represent some (actually infinitely
    many) of those numbers. The inaccuracies stem from that fact.
    Well, finite computers can't even represent all the integers, but
    we can reasonably think of Python as capable of doing exact integer
    arithmetic.

    The issue here is that Python's behavior confuses the hell out of some
    new users. There is a separate area of confusion, that

    a = 2 / 3

    sets a to 0,
    That may be confusing for non-C programmers, but it's easy to explain.
    The real flaw of old-style division is that code like

    def mean(seq):
    return sum(seq) / len(seq)

    subtly fails when seq happens to contain all integers, and you can't
    even correctly use:

    def mean(seq):
    return 1.0 * sum(seq) / len(seq)

    because it could lose accuracy if seq's elements were of a custom
    high-precision numeric type that is closed under integer division but
    gets coerced to float when multiplied by a float.
    That doesn't solve the also very
    common confusion that (1.0/3.0)*3.0 = 0.99999999.
    What problem?
    (1.0 / 3.0) * 3.0
    1.0

    The rounding error of multiplying 1/3 by 3 happens to exactly cancel
    out that of dividing 1 by 3. It's an accident, but you can use it as
    a quick argument against the "decimal arithmetic is always more
    acurate" crowd.
    Rational arithmetic can solve that.
    Yes, it can, and imho it would be a good idea to use rational
    arithmetic as the default for integer division (but _not_ as a general
    replacement for float).
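
    For completeness, the per-module opt-in that fixes the mean() example
    above without the 1.0* coercion trick (available since Python 2.2; a
    sketch, and custom numeric types keep their own precision as long as
    they define __truediv__):

    from __future__ import division   # '/' becomes true division

    def mean(seq):
        return sum(seq) / len(seq)    # no silent truncation for all ints

    print mean([1, 2])                # 1.5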
  • Tim Peters at Sep 20, 2004 at 5:07 am
    [Chris S.]
    Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
    arithmetic is meant for.
    That's absurd. pi is 3, and nothing but grief comes from listening to
    fancy-pants so-called "mathematicians" trying to convince you that
    their inability to find integer results is an intellectual failing you
    should share <wink>.
  • Andrea Griffini at Sep 20, 2004 at 6:06 am

    On Mon, 20 Sep 2004 01:07:03 -0400, Tim Peters wrote:
    [Chris S.]
    Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
    arithmetic is meant for.
    That's absurd. pi is 3, and nothing but grief comes from listening to
    fancy-pants so-called "mathematicians" trying to convince you that
    their inability to find integer results is an intellectual failing you
    should share <wink>.
    This is from the Bible...

    007:023 And he made a molten sea, ten cubits from the one brim to the
    other: it was round all about, and his height was five cubits:
    and a line of thirty cubits did compass it round about.

    So it's clear that pi must be 3

    Andrea
  • Andrew Dalke at Sep 20, 2004 at 6:38 am

    Andrea:
    This is from the Bible...

    007:023 And he made a molten sea, ten cubits from the one brim to the
    other: it was round all about, and his height was five cubits:
    and a line of thirty cubits did compass it round about.

    So it's clear that pi must be 3
    Or that the walls were 0.25 cubits thick, if you're talking
    inner diameter vs. outer. ;)

    Andrew
    dalke at dalkescientific.com
  • Dan Bishop at Sep 20, 2004 at 12:11 pm
    Andrew Dalke <adalke at mindspring.com> wrote in message news:<QVu3d.5383$gG4.1881 at newsread1.news.pas.earthlink.net>...
    Andrea:
    This is from the Bible...

    007:023 And he made a molten sea, ten cubits from the one brim to the
    other: it was round all about, and his height was five cubits:
    and a line of thirty cubits did compass it round about.

    So it's clear that pi must be 3
    Or that the walls were 0.25 cubits thick, if you're talking
    inner diameter vs. outer. ;)
    Or it could be 9.60 cubits across and 30.16 cubits around, and the
    numbers are rounded to the nearest cubit.

    Also, I've heard that the original Hebrew uses an uncommon spelling of
    the word for "line" or "circumference". Perhaps that affects the
    meaning.
  • Grant Edwards at Sep 20, 2004 at 3:48 pm

    On 2004-09-20, Andrea Griffini wrote:

    This is from the Bible...

    007:023 And he made a molten sea, ten cubits from the one brim to the
    other: it was round all about, and his height was five cubits:
    and a line of thirty cubits did compass it round about.

    So it's clear that pi must be 3
    If you've only got 1 significant digit in your measured values,
    then Pi == 3 is a perfectly reasonable value to use.

    --
    Grant Edwards grante Yow! Why is everything
    at made of Lycra Spandex?
    visi.com
  • Andrew Dalke at Sep 20, 2004 at 6:36 am

    Uncle Tim:
    That's absurd. pi is 3
    Personally I've found that pie is usually round, though
    if you're talking price I agree -- I can usually get a
    slice for about $3, more like $3.14 with tax. I like
    mine apple, with a bit of ice cream.

    Strange spelling though.

    Andrew
    dalke at dalkescientific.com
  • Alex Martelli at Sep 20, 2004 at 7:02 am

    Andrew Dalke wrote:

    Uncle Tim:
    That's absurd. pi is 3
    Personally I've found that pie is usually round, though
    if you're talking price I agree -- I can usually get a
    slice for about $3, more like $3.14 with tax. I like
    mine apple, with a bit of ice cream.

    Strange spelling though.
    Yeah, everybody knows it's spelled "py"!


    Alex
  • Carl Banks at Sep 21, 2004 at 8:07 pm
    "Chris S." <chrisks at NOSPAM.udel.edu> wrote in message news:<70b3d.1822$uz1.747 at trndny03>...
    I just find
    it funny how a $20 calculator can be more accurate than Python running
    on a $1000 Intel machine.
    Actually, if you look at Intel's track record, it isn't that surprising.

    How many Intel Pentium engineers does it take to change a light bulb?
    Three. One to screw in the bulb, and one to hold the ladder.

    --
    CARL BANKS
  • Grant Edwards at Sep 21, 2004 at 8:12 pm

    On 2004-09-21, Carl Banks wrote:
    "Chris S." <chrisks at NOSPAM.udel.edu> wrote in message news:<70b3d.1822$uz1.747 at trndny03>...
    I just find
    it funny how a $20 calculator can be more accurate than Python running
    on a $1000 Intel machine.
    Actually, if you look at Intel's track record, it isn't that surprising.

    How many Intel Pentium engineers does it take to change a light bulb?
    Three. One to screw in the bulb, and one to hold the ladder.
    Intel, where quality is Job 0.9999999997.

    --
    Grant Edwards grante Yow! My CODE of ETHICS
    at is vacationing at famed
    visi.com SCHROON LAKE in upstate
    New York!!
