Decimal Floating Point Types in my VM
A friend of mine asked me why I included decimal floating point types in my VM specification. The short answer: I believe decimal floating point has a good chance of becoming ubiquitous as a hardware standard in the future.
I like the idea of decimal floating point, or perhaps it is more the case that I dislike binary floating point. Many of us tend to think in base 10 when we hear floating point, but most hardware uses a binary representation, and that leads to some subtle but important differences: many finite decimal fractions, such as 0.1, have no exact binary representation at all.
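To make that concrete, here is a small sketch in Python (the `decimal` module is a software decimal type, standing in for the hardware support discussed below):

```python
from decimal import Decimal

# 0.1 has no finite binary expansion, so the stored double is only close:
print(f"{0.1:.20f}")  # shows the trailing error past the 17th digit

# Summing the approximation ten times drifts away from 1.0:
print(sum([0.1] * 10) == 1.0)  # False

# A decimal type stores the digits exactly, so the identity holds:
print(sum([Decimal("0.1")] * 10) == Decimal("1.0"))  # True
```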
The challenges of working with binary floating point become especially apparent in financial applications, where precise representation of decimal quantities is essential.
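A hypothetical example of the kind of drift that bites financial code, again using Python's `decimal` module as a stand-in for a decimal floating point type:

```python
from decimal import Decimal

# Three items at $1.10 each. Binary doubles drift even on this:
print(1.10 * 3)             # 3.3000000000000003

# The decimal type gives the answer a bookkeeper expects:
print(Decimal("1.10") * 3)  # 3.30
```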
That was a bit of an emotional argument on my part. A more practical point is the following quote from Intel:
In planning to improve on how decimal calculations are carried out and at the same time to make Intel Corporation one of the early adopters of the decimal floating-point arithmetic from the IEEE Standard 754R draft, we have implemented in software a decimal floating-point library, posted here. Standardization committees for high-level languages such as C and C++ are already developing plans to add decimal floating-point support to these languages.
This alone is sufficient for me to believe that there is a strong likelihood of many architectures supporting decimal floating-point math in the future.
For those interested in a more detailed argument, I refer you to Decimal Floating-Point: Algorism for Computers. Note: for those unfamiliar with the term algorism, the Wikipedia definition (sorry, I'm lazy) is the technique of performing basic arithmetic by writing numbers in place value form and applying a set of memorized rules and facts to the digits.