Floating point is a wonderful area to study if you're interested in numerical analysis, or "how to come up with garbage using generally accepted mathematics".

Floating point should probably be referred to as "scientific notation", since the older meaning of the term was simply a fixed-length field in which the decimal point could move; e.g. 1.234, 12.34, 123.4. But things are what they are.

Some older computer systems let you select a "noise digit" that was shifted in when a value was normalized. On the IBM 1620 this could be any decimal value; on the CDC mainframes it was one-half (i.e. every other bit set) for addition and subtraction, and every third bit set for division, instead of the usual zero fill. You could get a fair idea of how good your calculations were by running them with different values of this fill digit. I don't recall whether IEEE 754 makes any provision for this; I don't think so.
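The idea can be sketched in a toy decimal model (hypothetical; not the exact behavior of the 1620 or any CDC machine): when normalization shifts the mantissa left, a chosen noise digit is shifted in on the right instead of zero, and running the same computation with different fill digits brackets how many trailing digits are garbage.

```python
PRECISION = 4  # mantissa digits in this toy format

def normalize(mantissa, exponent, noise_digit=0):
    """Left-shift until the leading digit is nonzero, filling with noise_digit."""
    if mantissa == 0:
        return 0, 0
    while mantissa < 10 ** (PRECISION - 1):
        mantissa = mantissa * 10 + noise_digit  # shift in the noise digit
        exponent -= 1
    return mantissa, exponent

def subtract(a, b, noise_digit=0):
    """Subtract two (mantissa, exponent) values sharing the same exponent."""
    mantissa, exponent = a[0] - b[0], a[1]
    return normalize(mantissa, exponent, noise_digit)

# 1.234 - 1.233 suffers catastrophic cancellation: only one
# significant digit survives, and normalization invents the rest.
lo = subtract((1234, -3), (1233, -3), noise_digit=0)  # (1000, -6): 1.000e-3
hi = subtract((1234, -3), (1233, -3), noise_digit=9)  # (1999, -6): 1.999e-3
```

The spread between `lo` and `hi` tells you the last three digits of the result carry no information, which is exactly what the selectable fill digit let programmers discover.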

## Bookmarks