The floating point rules are such that transforming cast(real)cast(float) into cast(real) is a valid transformation. This is because the rules are written with the following principle in mind:
An algorithm is invalid if it breaks when the floating point precision is increased. Floating point precision is always a minimum, not a maximum.
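As an illustrative sketch of what this means in practice (not part of the original text), the following D snippet compares the two expressions. Since the rules permit eliding the intermediate rounding to float, a program whose correctness depends on the comparison printing false is invalid: it would break if the intermediate precision were increased.

```d
import std.stdio;

void main()
{
    double d = 0.1;  // 0.1 is not exactly representable in binary

    // The rules allow rewriting the first expression into the second,
    // i.e. the intermediate rounding to float precision may be elided.
    real viaFloat = cast(real) cast(float) d;
    real direct   = cast(real) d;

    // May print true or false depending on whether the compiler
    // performed the transformation; valid code must accept either.
    writeln(viaFloat == direct);
}
```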
Programs that legitimately depend on maximum precision are:
- compiler/library validation test suites, which are not of value to user programming
- ones trying to programmatically test the precision, for which there are alternate ways; D has .properties that take care of that (see the sketch after this list)

Programs that rely on a maximum accuracy need to be rethought and reengineered.
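For example, D's built-in floating point properties report the precision of each type directly, so there is no need to probe it at runtime:

```d
import std.stdio;

void main()
{
    // Each floating point type carries compile-time properties:
    // .mant_dig is the number of bits in the mantissa, and
    // .epsilon is the smallest increment above 1.0.
    writefln("float:  mant_dig = %s, epsilon = %s", float.mant_dig, float.epsilon);
    writefln("double: mant_dig = %s, epsilon = %s", double.mant_dig, double.epsilon);
    writefln("real:   mant_dig = %s, epsilon = %s", real.mant_dig, real.epsilon);
}
```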