I'm not that good at understanding the format of the IEEE floating point itself, but I can give an analogy...
Here's how we humans write a floating point number in what we call SCIENTIFIC NOTATION:
1.234 E -2
(which is 0.01234 )
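For what it's worth, C's %e format prints a double in exactly this kind of notation, so you can play with it yourself:

    #include <stdio.h>

    int main(void) {
        printf("%e\n", 0.01234);   /* prints 1.234000e-02, i.e. 1.234 E -2 */
        return 0;
    }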
Now floating point is similar. If you look at the format, you'll see a sign bit, a group of bits for a so-called exponent, and a group of bits for a so-called mantissa. One gotcha: the exponent portion is not an ordinary two's complement integer. It's stored in biased form, meaning a fixed bias (127 for a float, 1023 for a double) is added to the real exponent, so the field always reads as a plain unsigned number. This portion corresponds to the right part of the SciNotation, the exponent. The left part of the SciNotation corresponds to the mantissa.
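Just to make those fields concrete, here's a rough C sketch, assuming the usual IEEE 754 single-precision layout (1 sign bit, 8 exponent bits, 23 mantissa bits), that pulls them out of a float by reinterpreting its bits:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        float f = 0.01234f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);     /* reinterpret the float's raw bits */

        uint32_t sign     = bits >> 31;            /* 1 bit  */
        uint32_t exponent = (bits >> 23) & 0xFF;   /* 8 bits, biased by 127 */
        uint32_t mantissa = bits & 0x7FFFFF;       /* 23 bits, implied leading 1 */

        printf("sign=%u exponent=%u (unbiased %d) mantissa=0x%06X\n",
               sign, exponent, (int)exponent - 127, mantissa);
        return 0;
    }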
Now, in SciNotation the mantissa is normalized: leading zeros are absorbed into the exponent and trailing zeros add nothing, so they aren't written; they don't change the value of the number. The mantissa the computer deals with, though, is a binary mantissa. After the binary point, each digit represents 1/2, then 1/4, then 1/8, and so on, the same way decimal digits after the decimal point represent 1/10, then 1/100, etc. So for example, you have:
0.25 decimal
= 0.01 binary
0.125 decimal
= 0.001 binary
0.375 decimal
= 0.011 binary
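Here's a trivial C check of those conversions, just adding up what each binary place is worth:

    #include <stdio.h>

    int main(void) {
        /* places after the binary point are worth 1/2, 1/4, 1/8, ... */
        double quarter       = 0.0 / 2 + 1.0 / 4;             /* 0.01  binary */
        double eighth        = 0.0 / 2 + 0.0 / 4 + 1.0 / 8;   /* 0.001 binary */
        double three_eighths = 0.0 / 2 + 1.0 / 4 + 1.0 / 8;   /* 0.011 binary */

        printf("%g %g %g\n", quarter, eighth, three_eighths); /* 0.25 0.125 0.375 */
        return 0;
    }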
Now one thing that the IEEE designers noticed about binary floating point is that since there are only two possible digits, every normalized binary mantissa (except in the case of 0.0) starts with a 1. So they decided to drop that 1 from the format, making it an implied 1. Which complicates the matter: with the leading 1 implied, i.e. always treated as set, there was no obvious way of encoding 0.0...
So IEEE decided that a float or double containing all 0's was called 0.0.
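A tiny C check of that, again assuming IEEE 754 single precision:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        uint32_t bits = 0x00000000;   /* every bit cleared */
        float f;
        memcpy(&f, &bits, sizeof f);  /* reinterpret the bits as a float */
        printf("%f\n", f);            /* prints 0.000000 */
        return 0;
    }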
At first glance that just shifts the problem, though. What about 1.0? In binary floating point, 1.0 is:
1.00 E 0
Since that leading 1 is thrown away, the stored mantissa is 0, and the exponent is 0... so wouldn't 1.0 get the same all-zeros bit pattern as 0.0? Now what???
This is where the biased exponent saves the day. An exponent of zero isn't stored as 0 in the exponent field; it's stored as the bias itself (127 for a float, 1023 for a double). So 1.0 is represented exactly, with a sign bit of 0, an exponent field of 127, and a mantissa of all zeros, while the all-zeros exponent field stays reserved for 0.0 (and the tiny "denormal" numbers).
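To see it in actual bits, here's a small C sketch (same IEEE 754 single-precision assumption) that prints a float's raw pattern and rebuilds its value from the stored fields, putting the implied 1 back:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <math.h>

    static void show(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);

        uint32_t exponent = (bits >> 23) & 0xFF;   /* 8 bits, biased by 127 */
        uint32_t mantissa = bits & 0x7FFFFF;       /* 23 bits, leading 1 not stored */

        /* put the implied 1 back and undo the bias (sign bit ignored; both
           examples below are positive) */
        double value = ldexp(1.0 + mantissa / 8388608.0, (int)exponent - 127);

        printf("%-8g bits=0x%08X exponent field=%3u mantissa=0x%06X rebuilt=%g\n",
               f, bits, exponent, mantissa, value);
    }

    int main(void) {
        show(1.0f);    /* 0x3F800000: exponent field 127 (the bias), mantissa 0 -- exact */
        show(0.375f);  /* 0x3EC00000: exponent field 125 (i.e. -2), mantissa 0x400000 */
        return 0;
    }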
Got that?
Anyway, the explanations aren't perfectly accurate, but they're hopefully good enough to demonstrate how tricky floating point is.