A floating-point value, if no explicit initializer is given, is initialized to NaN (Not a Number):
double d; // d is set to double.nan
NaNs have the interesting property that whenever a NaN is used as an operand in a computation, the result is a NaN. NaNs therefore propagate through to the output of any computation that used one, which means a NaN appearing in the output is an unambiguous indication that an uninitialized variable was used.
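For example, a minimal sketch of the propagation (writeln is from std.stdio):

import std.stdio;

void main()
{
    double d;                      // no initializer, so d is double.nan
    double result = d * 2.0 + 1.0; // the NaN propagates through both operations
    writeln(result);               // prints "nan", flagging the uninitialized variable
}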
If 0.0 were used as the default initializer for floating-point values, its effect could easily go unnoticed in the output, and so if the default initializer was unintended, the bug might go unrecognized.
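For contrast, a hypothetical sketch that simulates such a 0.0 default by initializing explicitly:

import std.stdio;

void main()
{
    double d = 0.0;                // simulating a hypothetical 0.0 default
    double result = d * 2.0 + 1.0;
    writeln(result);               // prints "1", a plausible value that hides the bug
}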
The default initializer is not meant to be a useful value; it is meant to expose bugs. NaN fills that role well.
But surely the compiler can detect and issue an error message for variables that are used without being initialized? Most of the time it can, but not always, and what it can do depends on the sophistication of the compiler's internal data flow analysis. Hence, relying on such detection is unportable and unreliable.
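As an assumed illustration (the function lookup is hypothetical), whether a variable gets assigned can depend on a runtime value the compiler cannot see, so flow analysis cannot always decide the question:

import std.stdio;

double lookup(int key)
{
    double value;              // default-initialized to double.nan
    if (key > 0)
        value = key * 1.5;     // assigned only on this path
    // If key <= 0, value is never assigned. A compiler may or may not
    // warn here, depending on how thorough its flow analysis is.
    return value;
}

void main()
{
    writeln(lookup(4));        // prints "6"
    writeln(lookup(-1));       // prints "nan": the default initializer exposes the bug
}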
Because of the way CPUs are designed, there is no NaN value for integers, so D uses 0 instead. Zero does not have the error-detection advantage that NaN has, but at least errors resulting from unintended default initialization will be consistent and therefore more debuggable.
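For example:

int i; // i is set to 0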