FAQ

I'm going to show a few examples of how Decimals violate the fundamental
laws of mathematics just as floats do.

Decimal also uses a sign and mantissa, except in base 10. I think
Decimal should use numerators and denominators, because they are more
accurate. That's why even Decimal defies the laws of mathematics.
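The difference under discussion can be sketched with Python's standard-library decimal and fractions modules (1/3 stands in for any value whose decimal expansion does not terminate):

```python
from decimal import Decimal
from fractions import Fraction

# Decimal rounds to a fixed precision (28 significant digits by default),
# so exact algebraic identities can fail:
d = Decimal(1) / Decimal(3)
print(d * 3)    # 0.9999999999999999999999999999, not 1

# Fraction stores an exact numerator and denominator, so they hold:
f = Fraction(1, 3)
print(f * 3)    # 1
```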


  • Andrew Barnert at Jun 3, 2015 at 10:35 pm

    On Jun 3, 2015, at 15:17, u8y7541 The Awesome Person wrote:


    I'm going to show a few examples of how Decimals violate the fundamental
    laws of mathematics just as floats do.
    Decimal also uses a sign and mantissa, except in base 10. I think
    Decimal should use numerators and denominators, because they are more
    accurate.

    So sqrt(2) should be represented as an exact fraction? Do you have infinite RAM?

    That's why even Decimal defies the laws of mathematics.
  • u8y7541 The Awesome Person at Jun 3, 2015 at 10:46 pm

    On Wed, Jun 3, 2015 at 3:35 PM, Andrew Barnert wrote:
    On Jun 3, 2015, at 15:17, u8y7541 The Awesome Person wrote:

    I'm going to show a few examples of how Decimals violate the fundamental
    laws of mathematics just as floats do.
    Decimal also uses a sign and mantissa, except in base 10. I think
    Decimal should use numerators and denominators, because they are more
    accurate.
    So sqrt(2) should be represented as an exact fraction? Do you have infinite RAM?

    You can't represent sqrt(2) exactly with sign and mantissa either.
    When Decimal detects a non-repeating decimal, it should round it, and
    assign it a numerator and denominator something like 14142135623730951
    / 10000000000000000 simplified. That's better than sign and mantissa
    errors.


    Or an alternative could be a hybrid of sign and mantissa and fraction
    representation... I don't think that's a good idea though.




    --
    -Surya Subbarao
  • Andrew Barnert at Jun 3, 2015 at 11:20 pm

    On Jun 3, 2015, at 15:46, u8y7541 The Awesome Person wrote:
    On Wed, Jun 3, 2015 at 3:35 PM, Andrew Barnert wrote:
    On Jun 3, 2015, at 15:17, u8y7541 The Awesome Person wrote:

    I'm going to show a few examples of how Decimals violate the fundamental
    laws of mathematics just as floats do.
    Decimal also uses a sign and mantissa, except in base 10. I think
    Decimal should use numerators and denominators, because they are more
    accurate.
    So sqrt(2) should be represented as an exact fraction? Do you have infinite RAM?
    You can't represent sqrt(2) exactly with sign and mantissa either.

    That's exactly the point: Decimal never _pretends_ to be exact, and therefore there's no problem when it can't be.


    By the way, it's not just "sign and mantissa" (that just gives you an integer, or maybe a fixed-point number), it's sign, mantissa, _and exponent_.

    When Decimal detects a non-repeating decimal, it should round it, and
    assign it a numerator and denominator something like 14142135623730951
    / 10000000000000000 simplified.
    That's better than sign and mantissa
    errors.

    No, that's exactly the same value as mantissa 1.4142135623730951 and exponent 0, and therefore it has exactly the same error. You haven't gained anything over using Decimal.


    And meanwhile, you've lost some efficiency (it takes twice as much memory because you have to store all those zeroes, where in Decimal they're implied by the exponent), and you've lost the benefit of a well-designed standard to follow (how many digits should you keep? what rounding rule should you use? should there be some way to optionally signal the user that rounding has occurred? and so on...).


    And, again, you've made things more surprising, not less, because now you have a type that's always exact, except when it isn't.


    Meanwhile, when you asked about the problems, I gave you a whole list of them. Have you thought about the others, or only the third one on the list? For example, do you really want adding up a long string of simple numbers to give you a value that takes 500x as much memory to store and 500x as long to calculate with if you don't need the exactness? Or is there going to be another rounding rule that when the fraction gets "too big" you truncate it to a smaller approximation?


    And meanwhile, if you do need the exactness, why don't you need to be able to carry around exact rational multiples of pi or an exact representation of 2 ** 0.5 (both of which SymPy can do for you, by representing numbers symbolically, the way humans do when they need to)?
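The "exactly the same value" claim above is easy to check with the standard library (assuming the 16-digit rounding the proposal describes):

```python
from decimal import Decimal
from fractions import Fraction

proposed = Fraction(14142135623730951, 10**16)   # the proposed rounded fraction
decimal_version = Decimal("1.4142135623730951")  # the same digits as a Decimal

# Identical value, hence identical rounding error:
assert proposed == Fraction(decimal_version)
print(proposed ** 2 == 2)    # False: still not sqrt(2)
```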


  • u8y7541 The Awesome Person at Jun 4, 2015 at 12:19 am
    (it takes twice as much memory because you have to store all those zeroes, where in Decimal they're implied by the exponent), and you've lost the benefit of a well-designed standard to follow (how many digits should you keep? what rounding rule should you use? should there be some way to optionally signal the user that rounding has occurred? and so on...).

    You are right about memory...
    LOL, I just thought about having something like representing it as a
    float / float for numerator / denominator! But that would be slower...


    There's got to be a workaround for those zeros. Especially if I'm
    dealing with stuff like 57 / 10^100 (57 is prime!).


    --
    -Surya Subbarao
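For the 57 / 10^100 example, Decimal's exponent already provides that workaround: the zeros are never stored. A small stdlib comparison:

```python
from decimal import Decimal
from fractions import Fraction

x = Decimal(57).scaleb(-100)    # 5.7E-99: the exponent encodes the zeros
y = Fraction(57, 10**100)       # the denominator stores all ~100 digits

print(x)                        # 5.7E-99
print(len(str(y.denominator)))  # 101
```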
  • Chris Angelico at Jun 4, 2015 at 12:24 am

    On Thu, Jun 4, 2015 at 10:19 AM, u8y7541 The Awesome Person wrote:
    You are right about memory...
    LOL, I just thought about having something like representing it as a
    float / float for numerator / denominator! But that would be slower...

    How would that even help?


    ChrisA
  • Guido van Rossum at Jun 4, 2015 at 1:01 am
    At this point I feel compelled to explain why I'm against using
    fractions/rationals to represent numbers given as decimals.

    From 1982 till 1886 I participated in the implementation of ABC (
    http://homepages.cwi.nl/~steven/abc/) which did implement numbers as
    arbitrary precision fractions. (An earlier prototype implemented them as
    fractions of two floats, but that was wrong for many other reasons -- two
    floats are not better than one. :-)


    The design using arbitrary precision fractions was intended to avoid newbie
    issues with decimal numbers (these threads have elaborated plenty on those
    newbie issues). For reasons that should also be obvious by now, we
    converted these fractions back to decimal before printing them.


    But there was a big issue that we didn't anticipate. During the course of a
    simple program it was quite common for calculations to slow down
    dramatically, because numbers with ever-larger numerators and denominators
    were being computed (and rational arithmetic quickly slows down as those
    get bigger). So e.g. you might be computing your taxes with a precision of
    a million digits -- only to be rounding them down to dollars for display.


    These issues were quite difficult to debug because the normal approach to
    debugging ("just use print statements") didn't work -- unless you came up
    with the idea of printing the numbers as a fraction.


    For this reason I think that it's better not to use rational arithmetic by
    default.


    FWIW the same reasoning does *not* apply to using Decimal or something like
    decimal128. But then again those don't really address most issues with
    floating point -- the rounding issue exists for decimal as well as for
    binary. Anyway, that's a separate discussion to have.


    --
    --Guido van Rossum (python.org/~guido)
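The slowdown Guido describes is easy to reproduce with fractions.Fraction; in this sketch, harmonic-series terms stand in for a program's "simple" inputs:

```python
from fractions import Fraction

# Sum "simple" terms exactly; the denominator (and the cost of every
# later operation) grows as incompatible denominators accumulate.
total = Fraction(0)
for n in range(1, 101):
    total += Fraction(1, n)

print(float(total))                 # ~5.187, the part you'd actually display
print(len(str(total.denominator)))  # dozens of digits of exact "baggage"
```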
  • MRAB at Jun 4, 2015 at 1:06 am

    On 2015-06-04 02:01, Guido van Rossum wrote:
    At this point I feel compelled to explain why I'm against using
    fractions/rationals to represent numbers given as decimals.

    From 1982 till 1886 I participated in the implementation of ABC
    (http://homepages.cwi.nl/~steven/abc/) which did implement numbers as
    arbitrary precision fractions. (An earlier prototype implemented them as
    fractions of two floats, but that was wrong for many other reasons --
    two floats are not better than one. :-)
    Was that when the time machine was first used? :-)


    [snip]
  • Greg Ewing at Jun 4, 2015 at 11:44 pm

    MRAB wrote:
    On 2015-06-04 02:01, Guido van Rossum wrote:

    From 1982 till 1886 I participated in the implementation of ABC
    Was that when the time machine was first used? :-)

    Must have been a really big project if you had to
    give yourself nearly 100 years of development time!


    --
    Greg
  • Alexander Belopolsky at Jun 4, 2015 at 10:40 pm

    On Wed, Jun 3, 2015 at 9:01 PM, Guido van Rossum wrote:


    But there was a big issue that we didn't anticipate. During the course of
    a simple program it was quite common for calculations to slow down
    dramatically, because numbers with ever-larger numerators and denominators
    were being computed (and rational arithmetic quickly slows down as those
    get bigger).



    The problem of unlimited growth can be solved by rounding, but the result
    is in many ways worse than floating point
    numbers. One obvious problem is that unlike binary floating point where
    all bit patterns represent different numbers,
    only about 60% of fractions with limited numerators and denominators
    represent unique values. The rest are
    reducible by dividing the numerator and denominator by the GCD.


    Furthermore, the fractions with limited numerators are distributed very
    unevenly on the number line. This problem
    is present in binary floats as well: floats between 1 and 2 are twice as
    dense as floats between 2 and 4, but with
    fractions it is much worse. Since a/b - c/d = (ad-bc)/(bd), a fraction
    nearest to a/b is at a distance of 1/(bd) from it.
    So if the denominators are limited by D (|b| < D and |d| < D), for small
    b's the nearest fraction to a/b is at distance
    ~ 1/D, but if b ~ D, it is at a distance of 1/D^2. For example, if we
    limit denominators to 10 decimal digits, the gaps
    between fractions can vary from ~ 10^(-10) to ~ 10^(-20) even if the
    fractions are of similar magnitude - say between
    1 and 2.


    These two problems rule out the use of fractions as a general purpose
    number.
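The uneven-spacing argument can be checked by brute force for a small denominator bound; this sketch enumerates every fraction in [1, 2] with denominator at most D = 10:

```python
from fractions import Fraction

D = 10
# Every distinct fraction p/q in [1, 2] with q <= D (a Farey-like set).
vals = sorted({Fraction(p, q)
               for q in range(1, D + 1)
               for p in range(q, 2 * q + 1)})
gaps = [hi - lo for lo, hi in zip(vals, vals[1:])]
print(min(gaps), max(gaps))    # 1/90 vs 1/10: roughly a D-fold spread in density
```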

Discussion Overview
group: python-ideas
categories: python
posted: Jun 3, '15 at 10:17p
active: Jun 4, '15 at 11:44p
posts: 10
users: 7
website: python.org
