If I run the following piece of Go code:

```go
fmt.Println(float32(0.1) + float32(0.2))
fmt.Println(float64(0.1) + float64(0.2))
```

the output is:

```
0.3
0.30000000000000004
```

It appears the result of the float32 sum is more exact than the result of the float64 sum. Why? I thought float64 was always more precise than float32. How do I decide which one to pick to get the most accurate result?
It isn't. `fmt.Println` is just making it look more precise. `Println` uses `%g` for floating-point and complex numbers. The docs say:

> The default precision ... is the smallest number of digits necessary to represent the value uniquely.

`0.3` is sufficient to identify a float32. But a float64, being much more precise, needs more digits.
We can use `fmt.Printf` and `%0.20g` to force both numbers to display with the same precision.

```go
f32 := float32(0.1) + float32(0.2)
f64 := float64(0.1) + float64(0.2)
fmt.Printf("%0.20g\n", f32)
fmt.Printf("%0.20g\n", f64)
```

```
0.30000001192092895508
0.30000000000000004441
```
`float64` is more precise. Neither is exact; that is the nature of floating-point numbers.
We can use `strconv.FormatFloat` to see what these numbers really are.

```go
fmt.Println(strconv.FormatFloat(float64(f32), 'b', -1, 32))
fmt.Println(strconv.FormatFloat(f64, 'b', -1, 64))
```

```
10066330p-25
5404319552844596p-54
```

That is 10066330 * 2^-25 and 5404319552844596 * 2^-54.