Python fails on math
Hey guys,

I just found out how much Python fails on simple math. I checked a
simple equation for a friend.

[code]
from math import e, sqrt
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
False
[/code]

So WTF? The equation is definitively equivalent. (See http://mathbin.net/59158)

PS:

#1:
2.0 * e * sqrt(3.0) - 2.0 * e
3.9798408154464964

#2:
2.0 * e * (sqrt(3.0) -1.0)
3.979840815446496

I was wondering what exactly is failing here. The math module? Python,
or the IEEE specifications?
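
For what it's worth, the exact values the two sides hold can be displayed
with the standard decimal module; a minimal sketch:

[code]
from decimal import Decimal
from math import e, sqrt

# Decimal(float) shows the exact binary value stored in each float,
# not the shortened repr; the two results differ in their last bits.
print(Decimal(2*e*sqrt(3) - 2*e))
print(Decimal(2*e*(sqrt(3) - 1)))
[/code]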

--


•  at Feb 22, 2011 at 1:29 pm

christian schulze wrote:

#1:
2.0 * e * sqrt(3.0) - 2.0 * e
3.9798408154464964

#2:
2.0 * e * (sqrt(3.0) -1.0)
3.979840815446496

I was wondering what exactly is failing here. The math module? Python,
or the IEEE specifications?
Limited-precision calculation, computer floating point for example, will do
this. Intermediate results get rounded off in different ways on different
paths through the "same" calculation. The whole truth will be revealed in a
Numerical Analysis textbook, e.g. James B. Scarborough, _Numerical
Mathematical Analysis_.
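
A minimal illustration of that path dependence, using nothing but plain floats:

[code]
# Floating-point addition is not associative: the grouping decides
# which intermediate results get rounded, and the roundings differ.
print((0.1 + 0.2) + 0.3)                       # 0.6000000000000001
print(0.1 + (0.2 + 0.3))                       # 0.6
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
[/code]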

Mel.
•  at Feb 22, 2011 at 1:32 pm
You may want to round the results to a certain number of decimal
places before comparing.

Each system has its own floating-point precision, and even a tiny
difference is enough to return False.
On Tue, Feb 22, 2011 at 6:50 PM, christian schulze wrote:

Hey guys,

I just found out how much Python fails on simple math. I checked a
simple equation for a friend.

[code]
from math import e, sqrt
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
False
[/code]

So WTF? The equation is definitively equivalent. (See
http://mathbin.net/59158)

PS:

#1:
2.0 * e * sqrt(3.0) - 2.0 * e
3.9798408154464964

#2:
2.0 * e * (sqrt(3.0) -1.0)
3.979840815446496

I was wondering what exactly is failing here. The math module? Python,
or the IEEE specifications?

--

--
Nitin Pawar
•  at Feb 22, 2011 at 1:33 pm

---------- Forwarded message ----------
Date: Tue, Feb 22, 2011 at 5:32 PM
Subject: Re: Python fails on math
To: christian schulze <xcr4cx at googlemail.com>
Everybody knows you can't just compare floating point values for
equality with a simple ==.
Instead, check that the difference between them is less than some
predefined epsilon (0.0000001 for example, depends on how much
precision you want).
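
A minimal sketch of such a check (approx_equal is just an illustrative
name here; modern Python later standardized the same pattern as
math.isclose):

[code]
from math import e, sqrt

def approx_equal(a, b, rel_tol=1e-9, abs_tol=0.0):
    # True when a and b agree to within a relative or absolute tolerance.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

print(approx_equal(2*e*sqrt(3) - 2*e, 2*e*(sqrt(3) - 1)))  # True
[/code]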
•  at Feb 22, 2011 at 1:37 pm

On Tue, 2011-02-22 at 05:20 -0800, christian schulze wrote:
[code]
from math import e, sqrt
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
False
[/code]
I was wondering what exactly is failing here. The math module? Python,
or the IEEE specifications?
I'm not sure anything is failing as such - the "A==B" operator checks if
values computed by the expressions A and B are equivalent - it doesn't
check if the expressions are equivalent (which you'd obviously need
algebra software to attempt).

(from the rest of your email I'm assuming you know what's actually
happening)

Tim Wintle
•  at Feb 22, 2011 at 1:37 pm

On Tue, Feb 22, 2011 at 8:20 AM, christian schulze wrote:
Hey guys,

I just found out how much Python fails on simple math. I checked a
simple equation for a friend.

[code]
from math import e, sqrt
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
False
[/code]

So WTF? The equation is definitively equivalent. (See http://mathbin.net/59158)

PS:

#1:
2.0 * e * sqrt(3.0) - 2.0 * e
3.9798408154464964

#2:
2.0 * e * (sqrt(3.0) -1.0)
3.979840815446496

I was wondering what exactly is failing here. The math module? Python,
or the IEEE specifications?
1 / 3 = 0.33333333333
1 / 3 * 3 = 0.333333333 * 3 = 0.999999999 != 1
OMG MATH IS BROKEN!!!!!!!!!!!!!!!

Unless you're doing symbolic manipulation or have infinite precision,
it's impossible to accurately represent most values. In fact, e is not
really e, it's just the closest approximation to e we can get using 64
bits. So the exact amount it's off is a result of IEEE. The fact that
it's off at all is a result of us not having infinite memory.
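
In fact you can display the exact rational number that the float bound
to math.e stores; a small illustration with the standard fractions module:

[code]
from fractions import Fraction
from math import e

# Every finite float is exactly a ratio of two integers; this prints
# the rational that is the closest 64-bit double to the true e.
print(Fraction(e))
[/code]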
•  at Feb 22, 2011 at 1:48 pm

On 22/02/2011 13:20, christian schulze wrote:
Hey guys,

I just found out how much Python fails on simple math. I checked a
simple equation for a friend.

[code]
from math import e, sqrt
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
False
[/code]

So WTF? The equation is definitively equivalent. (See http://mathbin.net/59158)

PS:

#1:
2.0 * e * sqrt(3.0) - 2.0 * e
3.9798408154464964

#2:
2.0 * e * (sqrt(3.0) -1.0)
3.979840815446496

I was wondering what exactly is failing here. The math module? Python,
or the IEEE specifications?

--
What has failed you is your understanding of what floating point means.
Both sides of your equation contain e, which is an irrational number.

No irrational number, and many rational ones (1/3 and 1/7, for example),
can be expressed exactly in IEEE format.

All that has happened is that the two sides have come out with very
slightly different approximations to numbers that they cannot express
exactly.

Regards

Ian
•  at Feb 22, 2011 at 2:24 pm

christian schulze wrote:
Hey guys,

I just found out how much Python fails on simple math. I checked a
simple equation for a friend.

[code]
from math import e, sqrt
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
e has no exact representation on a computer. Neither does it have one
in classic decimal notation. Same for pi, if you remember.

Quoting Wikipedia, known digits of e:

Date        | Decimal digits    | Computation performed by
2010 July 5 | 1,000,000,000,000 | Shigeru Kondo & Alexander J. Yee

Anyway, no one proves an equation by numerical evaluation. If you want
to illustrate that ab - ac = a(b - c), do it with integers, as their
computer representation is exact within a given range.
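
For instance, the same identity checked with small integers holds exactly:

[code]
# Integer arithmetic is exact, so the distributive law survives.
a, b, c = 2, 3, 5
print(a*b - a*c == a*(b - c))  # True
[/code]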

JM
•  at Feb 22, 2011 at 2:25 pm

christian schulze wrote:

Hey guys,

I just found out how much Python fails on simple math. I checked a
simple equation for a friend.

[code]
from math import e, sqrt
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
False
[/code]
Try the same in other languages and you'll get the same result. Here's
JavaScript (checked on Chrome and Firefox; IE crashed):
2*Math.E*Math.sqrt(3)-2*Math.E==2*Math.E*(Math.sqrt(3)-1)
false

See the Python FAQ:
http://docs.python.org/faq/design.html#why-are-floating-point-calculations-so-inaccurate
•  at Feb 22, 2011 at 4:54 pm
christian schulze wrote:
Hey guys,

I just found out how much Python fails on simple math. I checked a
simple equation for a friend.

[code]
from math import e, sqrt
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
False
[/code]

So WTF? The equation is definitively equivalent. (See http://mathbin.net/59158)
An amusing aspect of this is that the equations posted at that

Anyway, I don't know why you're jumping to the conclusion that it's
Python that's wrong here. Could be the math you learned in school
is wrong. I mean you're assuming that

(*) a(b+c) = ab + ac

but what makes you so certain (*) is correct? Have you tried it with
every possible value of a, b, and c? Or do you just blindly believe
everything your teacher told you or what?

Seems to me you've stumbled on a counterexample to (*). I'm
gonna have to take this up with the mathematicians...
PS:

#1:
2.0 * e * sqrt(3.0) - 2.0 * e
3.9798408154464964

#2:
2.0 * e * (sqrt(3.0) -1.0)
3.979840815446496

I was wondering what exactly is failing here. The math module? Python,
or the IEEE specifications?

--
David C. Ullrich
•  at Feb 22, 2011 at 6:29 pm

On Tue, Feb 22, 2011 at 9:54 AM, David C. Ullrich wrote:
Anyway, I don't know why you're jumping to the conclusion that it's
Python that's wrong here. Could be the math you learned in school
is wrong. I mean you're assuming that

(*)      a(b+c) = ab + ac

but what makes you so certain (*) is correct? Have you tried it with
every possible value of a, b, and c? Or do you just blindly believe
everything your teacher told you or what?

Seems to me you've stumbled on a counterexample to (*). I'm
gonna have to take this up with the mathematicians...
Or you could, you know, just check the proof:

•  at Feb 22, 2011 at 7:42 pm

On 2011-02-22, Ian Kelly wrote:
On Tue, Feb 22, 2011 at 9:54 AM, David C. Ullrich wrote:
Anyway, I don't know why you're jumping to the conclusion that it's
Python that's wrong here. Could be the math you learned in school
is wrong. I mean you're assuming that

(*)      a(b+c) = ab + ac

but what makes you so certain (*) is correct? Have you tried it with
every possible value of a, b, and c? Or do you just blindly believe
everything your teacher told you or what?

Seems to me you've stumbled on a counterexample to (*). I'm
gonna have to take this up with the mathematicians...
Or you could, you know, just check the proof:

Except that Python (and computer languages in general) don't deal with
real numbers. They deal with floating point numbers, which aren't the
same thing. [In case anybody is still fuzzy about that.]

FP multiplication distributes over addition, close enough for most
purposes, except when it doesn't quite.

--
Grant Edwards <grant.b.edwards at gmail.com>   Yow!
•  at Feb 23, 2011 at 1:06 am

On 2/22/2011 2:42 PM, Grant Edwards wrote:

Except that Python (and computer languages in general) don't deal with
real numbers. They deal with floating point numbers, which aren't the
same thing. [In case anybody is still fuzzy about that.]
In particular, floats are a fixed finite set of rationals with adjusted
definitions of the arithmetic operators. The adjustment is necessary
because the 'proper' answer to an operation may not be one of the
allowed answers. In other words, f1 float-op f2 may not be the same as
f1 rat-op f2, and hence float-ops do not always obey the rules of
rational (or real) operations.
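
A quick way to see exactly which rational a given float is:

[code]
# Every finite float is a rational with a power-of-two denominator;
# this is the value actually stored for the literal 0.1.
print((0.1).as_integer_ratio())  # (3602879701896397, 36028797018963968)
[/code]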

--
Terry Jan Reedy
•  at Feb 23, 2011 at 3:24 pm

On 2011-02-23, Terry Reedy wrote:
On 2/22/2011 2:42 PM, Grant Edwards wrote:

Except that Python (and computer languages in general) don't deal with
real numbers. They deal with floating point numbers, which aren't the
same thing. [In case anybody is still fuzzy about that.]
In particular, floats are a fixed finite set of rationals with adjusted
definitions of the arithmetic operators. The adjustment is necessary
because the 'proper' answer to an operation may not be one of the
allowed answers. In other words, f1 float-op f2 may not be the same as
f1 rat-op f2, and hence float-ops do not always obey the rules of
rational (or real) operations.
On some (increasingly rare) systems they don't always obey the rules
of base-two float-ops either, but that's a whole different can of
worms.

--
Grant Edwards <grant.b.edwards at gmail.com>   Yow! I want a VEGETARIAN BURRITO to go ... with EXTRA MSG!!
•  at Feb 22, 2011 at 5:27 pm

On 2011-02-22, christian schulze wrote:
Hey guys,

I just found out how much Python fails on simple math.
Python doesn't do math.

It does floating point operations.

They're different. Seriously.

On all of the platforms I know of, it's IEEE 754 (base-2) floating
point.
I checked a simple equation for a friend.

[code]
from math import e, sqrt
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
False
[/code]

So WTF?
Python doesn't do equations. Python does floating point operations.

[And it does them in _base_2_ -- which is important, because it makes
things even more difficult.]

The equation is definitively equivalent. (See http://mathbin.net/59158)
But, the two floating point expressions you provided are not
equivalent.

Remember, you're not doing math with Python.

You're doing binary floating point operations.
#1:
2.0 * e * sqrt(3.0) - 2.0 * e
3.9798408154464964

#2:
2.0 * e * (sqrt(3.0) -1.0)
3.979840815446496

I was wondering what exactly is failing here. The math module?
Python, or the IEEE specifications?
I'm afraid it's the user that's failing. Unfortunately, in many
situations using floating point is neither intuitive nor easy to get
right.

http://docs.python.org/tutorial/floatingpoint.html
http://en.wikipedia.org/wiki/Floating_point
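
float.hex shows that base-2 value directly; a small illustration (the
exact digits assume IEEE 754 doubles):

[code]
from math import e

# Exact base-2 significand and exponent of each float.
print(e.hex())      # '0x1.5bf0a8b145769p+1'
print((0.1).hex())  # '0x1.999999999999ap-4': 0.1 repeats forever in
                    # base 2, so the stored value is only the nearest double.
[/code]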

--
Grant Edwards <grant.b.edwards at gmail.com>   Yow! I like your SNOOPY POSTER!!
•  at Feb 22, 2011 at 5:59 pm
Grant Edwards wrote:
Python doesn't do equations. Python does floating point operations.
More generally, all general-purpose programming languages have the same
problem. You'll see the same issues in Fortran, C, Java, Ruby, Pascal,
etc, etc. You'll see the same problem if you punch the numbers into a
hand calculator. It's just the nature of how digital computers do
floating point calculations.

If you really want something that understands that:
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
you need to be looking at specialized math packages like Mathematica and
things of that ilk.
•  at Feb 22, 2011 at 7:38 pm

On 2011-02-22, Roy Smith wrote:
Grant Edwards wrote:
Python doesn't do equations. Python does floating point operations.
More generally, all general-purpose programming languages have the same
problem. You'll see the same issues in Fortran, C, Java, Ruby, Pascal,
etc, etc. You'll see the same problem if you punch the numbers into a
hand calculator.
Some hand calculators use base-10 (BCD) floating point, so the
problems aren't exactly the same, but they're very similar.

--
Grant Edwards <grant.b.edwards at gmail.com>   Yow! YOU PICKED KARL MALDEN'S NOSE!!
•  at Feb 23, 2011 at 9:26 pm

On 2/22/2011 9:59 AM, Roy Smith wrote:
Grant Edwards wrote:
Python doesn't do equations. Python does floating point operations.
More generally, all general-purpose programming languages have the same
problem. You'll see the same issues in Fortran, C, Java, Ruby, Pascal,
etc, etc.
Not quite. CPython has the problem that it "boxes" its floating
point numbers. After each operation, the value is stored back into
a 64-bit space.

The IEEE 754 compliant FPU on most machines today, though, has
an 80-bit internal representation. If you do a sequence of
operations that retain all the intermediate results in the FPU
registers, you get 16 more bits of precision than if you store
after each operation. Rounding occurs when the 80-bit value is
forced back to 64 bits.

So it's quite possible that this would look like an equality
in C, or ShedSkin, or maybe PyPy (which has some unboxing
optimizations) but not in CPython.

(That's not the problem here, of course. The problem is that
the user doesn't understand floating point. The issues I'm talking
about are subtle, and affect few people. Those of us who've worked on
simulation systems, where cumulative error can be a problem, have had
to deal with them.
In the 1990s, I had to put a lot of work into this for collision
detection algorithms for a physics engine. As two objects settle
into contact, issues with tiny differences between large numbers
start to dominate. It takes careful handling to prevent that from
causing high frequency simulated vibration in the simulation,
chewing up CPU time at best and causing simulations to fly apart
at worst. The problems are understood now, but they weren't in
the mid-1990s. The licensed Jurassic Park game "Trespasser" was a flop
for that reason.)

John Nagle
•  at Feb 24, 2011 at 11:55 am

On Wed, 23 Feb 2011 13:26:05 -0800, John Nagle wrote:

The IEEE 754 compliant FPU on most machines today, though, has an 80-bit
internal representation. If you do a sequence of operations that retain
all the intermediate results in the FPU registers, you get 16 more bits
of precision than if you store after each operation.
That's a big if though. Which languages support such a thing? C doubles
are 64 bit, same as Python.

--
Steven
•  at Feb 24, 2011 at 12:56 pm

Steven D'Aprano wrote:
On Wed, 23 Feb 2011 13:26:05 -0800, John Nagle wrote:

The IEEE 754 compliant FPU on most machines today, though, has an 80-bit
internal representation. If you do a sequence of operations that retain
all the intermediate results in the FPU registers, you get 16 more bits
of precision than if you store after each operation.
That's a big if though. Which languages support such a thing? C doubles
are 64 bit, same as Python.
Assembly! :)

~Ethan~
•  at Feb 24, 2011 at 2:17 pm

On Thu, 24 Feb 2011 04:56:46 -0800 Ethan Furman wrote:
The IEEE 754 compliant FPU on most machines today, though, has an 80-bit
internal representation. If you do a sequence of operations that retain
all the intermediate results in the FPU registers, you get 16 more bits
of precision than if you store after each operation.
That's a big if though. Which languages support such a thing? C doubles
are 64 bit, same as Python.
Assembly! :)
Really? Why would you need that level of precision just to gather all
the students into the auditorium?

--
D'Arcy J.M. Cain <darcy at druid.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.
•  at Feb 24, 2011 at 3:34 pm

D'Arcy J.M. Cain wrote:
On Thu, 24 Feb 2011 04:56:46 -0800
Ethan Furman wrote:
That's a big if though. Which languages support such a thing? C doubles
are 64 bit, same as Python.
Assembly! :)
Really? Why would you need that level of precision just to gather all
the students into the auditorium?
You would think so, but darned if some of them don't wind up in a
*different* *auditorium*!

Mel.
•  at Feb 24, 2011 at 4:40 pm

On 2/24/11 5:55 AM, Steven D'Aprano wrote:
On Wed, 23 Feb 2011 13:26:05 -0800, John Nagle wrote:

The IEEE 754 compliant FPU on most machines today, though, has an 80-bit
internal representation. If you do a sequence of operations that retain
all the intermediate results in the FPU registers, you get 16 more bits
of precision than if you store after each operation.
That's a big if though. Which languages support such a thing? C doubles
are 64 bit, same as Python.
C double *variables* are, but as John suggests, C compilers are allowed (to my
knowledge) to keep intermediate results of an expression in the larger-precision
FPU registers. The final result does get shoved back into a 64-bit double when
it is at last assigned back to a variable or passed to a function that takes a
double.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
an underlying truth."
-- Umberto Eco
•  at Feb 25, 2011 at 12:33 am

On Thu, 24 Feb 2011 10:40:45 -0600, Robert Kern wrote:
On 2/24/11 5:55 AM, Steven D'Aprano wrote:
On Wed, 23 Feb 2011 13:26:05 -0800, John Nagle wrote:

The IEEE 754 compliant FPU on most machines today, though, has an
80-bit internal representation. If you do a sequence of operations
that retain all the intermediate results in the FPU registers, you get
16 more bits of precision than if you store after each operation.
That's a big if though. Which languages support such a thing? C doubles
are 64 bit, same as Python.
C double *variables* are, but as John suggests, C compilers are allowed
(to my knowledge) to keep intermediate results of an expression in the
larger-precision FPU registers. The final result does get shoved back
into a 64-bit double when it is at last assigned back to a variable or
passed to a function that takes a double.
So...

(1) you can't rely on it, because it's only "allowed" and not mandatory;

(2) you may or may not have any control over whether or not it happens;

(3) it only works for calculations that are simple enough to fit in a
single expression; and

(4) we could say the same thing about Python -- there's no prohibition on
Python using extended precision when performing intermediate results, so
it too could be said to be "allowed".

It seems rather unfair to me to single Python out as somehow lacking
(compared to which other languages?) and to gloss over the difficulties
in "If you do a sequence of operations that retain all the intermediate
results..." Yes, *if* you do so, you get more precision, but *how* do you
do so? Such a thing will be language or even implementation dependent,
and the implication that it just automatically happens without any effort
seems dubious to me.

But I could be wrong, of course. It may be that Python, alone of all
modern high-level languages, fails to take advantage of 80-bit registers
in FPUs *wink*
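
For what it's worth, the precision CPython actually carries is easy to
inspect; a small aside (sys.float_info reflects the platform's C double):

[code]
import sys

# mant_dig is 53 on IEEE 754 platforms: CPython floats are 64-bit
# doubles regardless of what the FPU does internally.
print(sys.float_info.mant_dig)  # 53
print(sys.float_info.dig)       # 15
[/code]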

--
Steven
•  at Feb 25, 2011 at 12:45 am

On Fri, 2011-02-25 at 00:33 +0000, Steven D'Aprano wrote:
On Thu, 24 Feb 2011 10:40:45 -0600, Robert Kern wrote:
On 2/24/11 5:55 AM, Steven D'Aprano wrote:
On Wed, 23 Feb 2011 13:26:05 -0800, John Nagle wrote:

The IEEE 754 compliant FPU on most machines today, though, has an
80-bit internal representation. If you do a sequence of operations
that retain all the intermediate results in the FPU registers, you get
16 more bits of precision than if you store after each operation.
That's a big if though. Which languages support such a thing? C doubles
are 64 bit, same as Python.
C double *variables* are, but as John suggests, C compilers are allowed
(to my knowledge) to keep intermediate results of an expression in the
larger-precision FPU registers. The final result does get shoved back
into a 64-bit double when it is at last assigned back to a variable or
passed to a function that takes a double.
So...

(1) you can't rely on it, because it's only "allowed" and not mandatory;

(2) you may or may not have any control over whether or not it happens;

(3) it only works for calculations that are simple enough to fit in a
single expression; and

(4) we could say the same thing about Python -- there's no prohibition on
Python using extended precision when performing intermediate results, so
it too could be said to be "allowed".

It seems rather unfair to me to single Python out as somehow lacking
(compared to which other languages?) and to gloss over the difficulties
in "If you do a sequence of operations that retain all the intermediate
results..." Yes, *if* you do so, you get more precision, but *how* do you
do so? Such a thing will be language or even implementation dependent,
and the implication that it just automatically happens without any effort
seems dubious to me.

But I could be wrong, of course. It may be that Python, alone of all
modern high-level languages, fails to take advantage of 80-bit registers
in FPUs *wink*

--
Steven
Maybe I'm wrong, but wouldn't compiling Python with a compiler that
supports extended precision for intermediates allow Python to use
extended precision for its immediates? Or does Python use its own
floating-point math?
•  at Feb 25, 2011 at 12:52 am

On 2011-02-25, Steven D'Aprano wrote:

C double *variables* are, but as John suggests, C compilers are allowed
(to my knowledge) to keep intermediate results of an expression in the
larger-precision FPU registers. The final result does get shoved back
into a 64-bit double when it is at last assigned back to a variable or
passed to a function that takes a double.
So...

(1) you can't rely on it, because it's only "allowed" and not mandatory;

(2) you may or may not have any control over whether or not it happens;

(3) it only works for calculations that are simple enough to fit in a
single expression; and
(3) is sort of an interesting one.

If a C compiler could eliminate stores to temporary variables (let's
say inside a MAC loop) it might get a more accurate result by leaving
temporary results in an FP register. But, IIRC the C standard says
the compiler can only eliminate stores to variables if it doesn't
change the output of the program. So I think the C standard actually
forces the compiler to convert results to 64-bits at the points where
a store to a temporary variable happens. It's still free to leave the
result in an FP register, but it has to toss out the extra bits so
that it gets the same result as it would have if the store/load took
place.
(4) we could say the same thing about Python -- there's no
prohibition on Python using extended precision when performing
intermediate results, so it too could be said to be "allowed".
Indeed. Though C-python _will_ (AFAIK) store results to variables
everywhere the source code says to, and C is allowed to skip those
stores, C is still required to produce the same results as if the
stores had taken place.

IOW, I don't see that there's any difference between Python and C
either.

--
Grant
•  at Feb 25, 2011 at 12:29 pm

On 2011-02-25, Grant Edwards wrote:
On 2011-02-25, Steven D'Aprano wrote:
C double *variables* are, but as John suggests, C compilers are allowed
(to my knowledge) to keep intermediate results of an expression in the
larger-precision FPU registers. The final result does get shoved back
into a 64-bit double when it is at last assigned back to a variable or
passed to a function that takes a double.
So...

(1) you can't rely on it, because it's only "allowed" and not mandatory;

(2) you may or may not have any control over whether or not it happens;

(3) it only works for calculations that are simple enough to fit in a
single expression; and
<snip>
In 1975, I was writing the arithmetic and expression handling for an
interpreter. My instruction primitives could add two bytes; anything
more complex was done in my code. So I defined a floating point format
(decimal, of course) and had extended versions of it available for
intermediate calculations. I used those extended versions, in logs for
example, whenever the user of our language could not see the
intermediate results.

When faced with the choice of whether to do the same inside explicit
expressions, like (a*b) - (c*d), I deliberately chose *not* to do such
optimizations, in spite of the fact that it would improve both
performance and (sometimes) accuracy.

I wrote down my reasons at the time, and they had to do with "least
surprise." If a computation for an expression gave a different result
than the same one decomposed into separate variables, the developer
would have a hard time knowing when results might change, and when they
might not. Incidentally, the decimal format also assured "least
surprise," since the times when quantization error entered in were
exactly the same times as if one were doing the calculation by hand.

I got feedback from a customer who was getting errors in a complex
calculation (involving trig), and wanted help in understanding why.
While his case might have been helped by intermediate values having
higher accuracy, the real solution was to reformulate the calculation to
avoid subtracting two large numbers that differed by very little. By
applying a little geometry before writing the algorithm, I was able to
change his accuracy from maybe a millionth of an inch to something
totally unmeasurable.

I still think the choice was appropriate for a business language, if not
for scientific use.

DaveA
•  at Mar 9, 2011 at 4:26 pm

On Feb 25, 12:52 am, Grant Edwards wrote:
So I think the C standard actually
forces the compiler to convert results to 64-bits at the points where
a store to a temporary variable happens.
I'm not sure that this is true. IIRC, C99 + Annex F forces this, but
C99 by itself doesn't.
Indeed. Though C-python _will_ (AFAIK) store results to variables
everywhere the source code says to
Agreed.

That doesn't rescue Python from the pernicious double-rounding
problem, though: it still bugs me that you get different results for
e.g.,
1e16 + 2.99999
1.0000000000000002e+16

depending on the platform. OS X, Windows, 64-bit Linux give the
above; 32-bit Linux generally gives 1.0000000000000004e+16 instead,
thanks to using the x87 FPU with its default 64-bit precision.
(Windows uses the x87 too, but changes the precision to 53-bit
precision.)

In theory this is prohibited too, under C99 + Annex F.

--
Mark
•  at Feb 25, 2011 at 12:57 am

On 2011-02-25, Westley Martínez wrote:

Maybe I'm wrong, but wouldn't compiling Python with a compiler that
supports extended precision for intermediates allow Python to use
extended precision for its immediates?
I'm not sure what you mean by "immediates", but I don't think so. For
the C compiler to do an optimization like we're talking about, you
have to give it the entire expression in C for it to compile. From
the POV of the C compiler, C-Python never does more than one FP
operation at a time when evaluating Python bytecode, and there aren't
any intermediate values to store.
Or does Python use its own floating-point math?
No, but the C compiler has no way of knowing what the Python
expression is.

--
Grant
•  at Feb 25, 2011 at 1:16 am

On Fri, 2011-02-25 at 00:57 +0000, Grant Edwards wrote:
On 2011-02-25, Westley Martínez wrote:

Maybe I'm wrong, but wouldn't compiling Python with a compiler that
supports extended precision for intermediates allow Python to use
extended precision for its immediates?
I'm not sure what you mean by "immediates", but I don't think so. For
the C compiler to do an optimization like we're talking about, you
have to give it the entire expression in C for it to compile. From
the POV of the C compiler, C-Python never does more than one FP
operation at a time when evaluating Python bytecode, and there aren't
any intermediate values to store.
Or does Python use its own floating-point math?
No, but the C compiler has no way of knowing what the Python
expression is.

--
Grant

I meant to say intermediate. I think I understand what you're saying.
Regardless, the point is the same; floating-point numbers are different
from real numbers and their limitations have to be taken into account
when operating on them.
•  at Mar 9, 2011 at 10:36 am

On Feb 25, 12:33 am, Steven D'Aprano <steve +comp.lang.pyt... at pearwood.info> wrote:
On Thu, 24 Feb 2011 10:40:45 -0600, Robert Kern wrote:
On 2/24/11 5:55 AM, Steven D'Aprano wrote:
On Wed, 23 Feb 2011 13:26:05 -0800, John Nagle wrote:

The IEEE 754 compliant FPU on most machines today, though, has an
80-bit internal representation. If you do a sequence of operations
that retain all the intermediate results in the FPU registers, you get
16 more bits of precision than if you store after each operation.
That's a big if though. Which languages support such a thing? C doubles
are 64 bit, same as Python.
C double *variables* are, but as John suggests, C compilers are allowed
(to my knowledge) to keep intermediate results of an expression in the
larger-precision FPU registers. The final result does get shoved back
into a 64-bit double when it is at last assigned back to a variable or
passed to a function that takes a double.
So...

(1) you can't rely on it, because it's only "allowed" and not mandatory;

(2) you may or may not have any control over whether or not it happens;

(3) it only works for calculations that are simple enough to fit in a
single expression; and

(4) we could say the same thing about Python -- there's no prohibition on
Python using extended precision when performing intermediate results, so
it too could be said to be "allowed".

It seems rather unfair to me to single Python out as somehow lacking
(compared to which other languages?) and to gloss over the difficulties
in "If you do a sequence of operations that retain all the intermediate
results..." Yes, *if* you do so, you get more precision, but *how* do you
do so? Such a thing will be language or even implementation dependent,
and the implication that it just automatically happens without any effort
seems dubious to me.

But I could be wrong, of course. It may be that Python, alone of all
modern high-level languages, fails to take advantage of 80-bit registers
in FPUs *wink*

--
Steven
And note that x64 machines use SSE for all their floating-point maths,
which is 64-bit max precision anyway.

Ben
•  at Feb 22, 2011 at 8:18 pm

On 2/22/11 5:20 AM, christian schulze wrote:
I just found out how much Python fails on simple math.
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
Everyone else has answered very well, so I won't comment on the actual
question at hand-- it seems to have been answered completely.

But! I shall go all o.O and headscratch at you and our definition of
"simple" when you go write an equation which has a number that is
described both as Irrational and Transcendental in it.

Irrational, transcendental numbers so don't get to be grouped under the
"simple" classification. (That said, you'd run into problems with many
entirely non-Transcendental floating point numbers that have not yet
meditated enough to reach nirvana-- but still).

--

Stephen Hansen
... Also: Ixokai
... Mail: me+list/python (AT) ixokai (DOT) io
... Blog: http://meh.ixokai.io/

•  at Feb 22, 2011 at 8:54 pm

On 22 Feb, 14:20, christian schulze wrote:
Hey guys,

I just found out how much Python fails on simple math. I checked a
simple equation for a friend.
Python does not fail. Floating-point arithmetic and numerical
approximation will do this. If you need symbolic maths, consider
using the sympy package. If you want to prove it to yourself, try the
same thing numerically with Matlab first, and then symbolically with
Maple or Mathematica.
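
For example, with sympy the identity can be verified exactly, since
sympy's E and sqrt are symbolic; a minimal sketch:

[code]
from sympy import E, sqrt, simplify

lhs = 2*E*sqrt(3) - 2*E
rhs = 2*E*(sqrt(3) - 1)

# Symbolic arithmetic: the difference simplifies to exactly zero.
print(simplify(lhs - rhs) == 0)  # True
[/code]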

Sturla
•  at Feb 22, 2011 at 10:40 pm

On 22 Feb., 21:18, Stephen Hansen wrote:
On 2/22/11 5:20 AM, christian schulze wrote:

I just found out how much Python fails on simple math.
2*e*sqrt(3) - 2*e == 2*e*(sqrt(3) - 1)
Everyone else has answered very well, so I won't comment on the actual
question at hand-- it seems to have been answered completely.

But! I shall go all o.O and headscratch at you and our definition of
"simple" when you go write an equation which has a number that is
described both as Irrational and Transcendental in it.

Irrational, transcendental numbers so don't get to be grouped under the
"simple" classification. (That said, you'd run into problems with many
entirely non-Transcendental floating point numbers that have not yet
meditated enough to reach nirvana-- but still).

--

Stephen Hansen
... Also: Ixokai
... Mail: me+list/python (AT) ixokai (DOT) io
... Blog: http://meh.ixokai.io/

I'd rather say not trivial but simple.
I looked at "e" as a simple variable with a finite floating-point
value.

BTW, shame on me: e wasn't supposed to be THE e, but just a random
number. (The exercise was a geometry problem, as I was told later.)

The problem I had with the output of Python was that both expressions
evaluated to slightly different values.
