I need to find out the precision of the floating-point data types Single, Double, and Decimal, i.e. the maximum number of decimal (base 10) digits that may be specified in their literals. As this is not documented, I thought I'd keep entering ever larger numbers in the Immediate Window until I got an overflow error and then count the digits. However, to my frustration, as soon as I exceed fifteen 9s, i.e. enter 9999999999999999, it gets converted to 1E+16.
How can I find out the precision and scale of these three data types?
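
For what it's worth, here is a rough sketch of how I imagine automating the same experiment in code instead of typing literals (this assumes VB.NET; the PrecisionProbe and DigitsFor names are made up, and I'm not sure a Parse/ToString round trip is the right test, which is partly why I'm asking):

```vb
' Rough sketch, assuming VB.NET: grow a string of 9s and count how many
' digits survive a Parse/ToString round trip for each type, instead of
' typing literals into the Immediate Window.
Module PrecisionProbe
    Sub Main()
        Console.WriteLine("Single:  {0}", DigitsFor(Function(s) Single.Parse(s).ToString("R")))
        Console.WriteLine("Double:  {0}", DigitsFor(Function(s) Double.Parse(s).ToString("R")))
        Console.WriteLine("Decimal: {0}", DigitsFor(Function(s) Decimal.Parse(s).ToString()))
    End Sub

    ' Largest run of 9s that round-trips unchanged through the given type.
    Private Function DigitsFor(roundTrip As Func(Of String, String)) As Integer
        Dim digits As Integer = 0
        Try
            Do
                Dim candidate As String = New String("9"c, digits + 1)
                If roundTrip(candidate) <> candidate Then Exit Do
                digits += 1
            Loop
        Catch ex As OverflowException
            ' Decimal.Parse throws once the value exceeds Decimal's range.
        End Try
        Return digits
    End Function
End Module
```

If my reasoning is right this should report something like 7, 15 and 28, but I don't know whether counting 9s that survive a round trip is really the same thing as the documented precision and scale of each type.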