If you parse a string into a big.Float like f.SetString("0.001"), then multiply it, I'm seeing a loss of precision. If I use f.SetFloat64(0.001), I don't lose precision. Even doing a strconv.ParseFloat("0.001", 64), then calling f.SetFloat64() on the result, works.
Full example of what I'm seeing here: https://play.golang.org/p/_AyTHJJBUeL
Expanded from this question: https://stackoverflow.com/a/47546136/105562
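For reference, here is a minimal sketch of the three paths described above. The toInt helper is an assumption standing in for the playground's BigFloatToBigInt: it scales by 10^18 and truncates, which matches the integer output shown in the answer below.

package main

import (
    "fmt"
    "math/big"
    "strconv"
)

// toInt stands in for the BigFloatToBigInt helper from the linked playground
// (an assumption, consistent with the output shown below): it scales the
// value by 10^18 and truncates to an integer.
func toInt(f *big.Float) *big.Int {
    scaled := new(big.Float).Mul(f, big.NewFloat(1e18))
    i, _ := scaled.Int(nil)
    return i
}

func main() {
    // 1. Parse the string directly into a big.Float (default precision: 64 bits).
    fromString, _ := new(big.Float).SetString("0.001")

    // 2. Set the big.Float from a float64 literal (stored exactly, 53-bit precision).
    fromFloat64 := new(big.Float).SetFloat64(0.001)

    // 3. Parse with strconv first, then set the big.Float from the result.
    parsed, _ := strconv.ParseFloat("0.001", 64)
    fromParsed := new(big.Float).SetFloat64(parsed)

    fmt.Println(toInt(fromString))  // the path the question reports as losing precision
    fmt.Println(toInt(fromFloat64)) // 1000000000000000
    fmt.Println(toInt(fromParsed))  // 1000000000000000
}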
The difference in output is due to the imprecise representation of base-10 decimal fractions in the binary float64 format (IEEE 754), and to the default precision and rounding of big.Float.
See this simple code to verify:
fmt.Printf("%.30f\n", 0.001)
f, ok := new(big.Float).SetString("0.001")
fmt.Println(f.Prec(), ok)
Output of the above (try it on the Go Playground):
0.001000000000000000020816681712
64 true
So what we see is that the float64 value 0.001 is not exactly 0.001, and that the default precision of big.Float is 64 bits.
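As a side note (not shown in the snippet above, but documented for math/big), the default precision depends on how the zero big.Float is first set: SetString rounds to 64 bits, while SetFloat64 uses 53 bits and stores the float64 value exactly:

fs, _ := new(big.Float).SetString("0.001") // precision defaults to 64 bits
ff := new(big.Float).SetFloat64(0.001)     // precision defaults to 53 bits, value stored exactly
fmt.Println(fs.Prec(), ff.Prec())          // 64 53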
If you increase the precision of the big.Float before you set its value from the string, you will get the same output as with the float64-based approaches:
s := "0.001"
f := new(big.Float)
f.SetPrec(100)
f.SetString(s)
fmt.Println(s)
fmt.Println(BigFloatToBigInt(f))
Now the output will also be the same (try it on the Go Playground):
0.001
1000000000000000
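A related option, not used above but available in math/big, is to pick the precision at parse time with big.ParseFloat instead of calling SetPrec separately; for this input it is equivalent to the SetPrec(100) plus SetString combination:

// Parse "0.001" in base 10 with a 100-bit mantissa and to-nearest-even
// rounding (big.Float's default mode).
f, _, err := big.ParseFloat("0.001", 10, 100, big.ToNearestEven)
if err != nil {
    // handle the malformed-input error
}
fmt.Println(BigFloatToBigInt(f)) // should print 1000000000000000, as above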