At this point I feel compelled to explain why I'm against using
fractions/rationals to represent numbers given as decimals.

From 1982 till 1986 I participated in the implementation of ABC
(http://homepages.cwi.nl/~steven/abc/), which implemented numbers as
arbitrary-precision fractions. (An earlier prototype implemented them as
fractions of two floats, but that was wrong for many other reasons -- two
floats are not better than one. :-)

The design using arbitrary-precision fractions was intended to avoid newbie
issues with decimal numbers (these threads have elaborated plenty on those
newbie issues). For reasons that should also be obvious by now, we
converted these fractions back to decimal before printing them.

But there was a big issue that we didn't anticipate. During the course of a
simple program it was quite common for calculations to slow down
dramatically, because numbers with ever-larger numerators and denominators
were being computed (and rational arithmetic quickly slows down as those
get bigger). So e.g. you might be computing your taxes with a precision of
a million digits -- only to round them down to whole dollars for display.
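ABC itself is long gone, but modern Python's fractions.Fraction (a stand-in
here, not ABC code) shows the effect with a toy example -- summing just
fifty small fractions exactly:

```python
from fractions import Fraction

# Toy example: compute 1/1 + 1/2 + ... + 1/50 exactly.
total = Fraction(0)
for i in range(1, 51):
    total += Fraction(1, i)

# The value itself is modest (about 4.5), but the exact representation
# already carries a denominator of more than 20 digits -- and every
# further operation on it pays for those digits.
print(float(total))
print(total.denominator)
```

Printing `total` the "friendly" way (as a decimal, or here as a float)
hides all of that bulk, which is exactly why the slowdowns were so hard to
spot.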

These issues were quite difficult to debug, because the normal approach to
debugging ("just use print statements") didn't work -- unless you came up
with the idea of printing the numbers as fractions.

For this reason I think that it's better not to use rational arithmetic by
default.

FWIW the same reasoning does *not* apply to using Decimal or something like
decimal128. But then again, those don't really address most issues with
floating point -- the rounding issue exists for decimal as well as for
binary. Anyway, that's a separate discussion to have.
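For instance, with Python's decimal module (again just an illustration):
one third still can't be represented exactly in base 10, so the rounding
surprise survives -- only the class of inputs that happen to be exact
changes.

```python
from decimal import Decimal

# 1/3 is rounded to the context precision (28 significant digits by
# default), so multiplying back by 3 does not recover exactly 1.
third = Decimal(1) / Decimal(3)
print(third * 3)        # 0.999... rather than 1
print(third * 3 == 1)   # False

# What decimal *does* fix is the newbie surprise with decimal literals,
# since those inputs are exact in base 10:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```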