Why are the following unequal in Go? Is this a bug, or is it by design? If it's by design, why does this occur and is this type of behavior documented anywhere?
https://play.golang.org/p/itEV9zwV2a
package main

import (
    "fmt"
)

func main() {
    x := 10.1
    fmt.Println("x == 10.1: ", x == 10.1)
    fmt.Println("x*3.0 == 10.1*3.0:", x*3.0 == 10.1*3.0)
    fmt.Println("x*3.0: ", x*3.0)
    fmt.Println("10.1*3.0: ", 10.1*3.0)
}
Produces:
x == 10.1: true
x*3.0 == 10.1*3.0: false
x*3.0: 30.299999999999997
10.1*3.0: 30.3
Note that the same floating point math is being performed, just with different syntax. So why is the result different? I would expect 10.1*3.0 to equal 30.29999..., as in the x*3.0 example.
Constants and number literals in Go are untyped and have unlimited precision. The moment a value has to be stored as a specific type, the limits of that type apply. So when you declare x := 10.1, the literal is converted to a float64 and loses some precision. But when you write 10.1*3.0 directly, both operands are untyped constants, so the multiplication is carried out at full precision and only the result is rounded to float64.
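To see that the early rounding of x is what makes the difference, you can force the same rounding on the right-hand side with an explicit conversion. This is a small sketch building on the question's code; the float64(...) conversion is my addition, not part of the original example:

package main

import "fmt"

func main() {
    x := 10.1 // the untyped constant 10.1 is rounded to float64 right here

    // Forcing the same early rounding on the right-hand side makes the two
    // sides equal again: both multiply the float64 nearest to 10.1 by 3.
    fmt.Println(x*3.0 == float64(10.1)*3.0) // true

    // Without the conversion, 10.1*3.0 is constant arithmetic: the product
    // is computed at full precision (30.3) and rounded to float64 only once.
    fmt.Println(x*3.0 == 10.1*3.0) // false
}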
See the "Floats" header in this article: https://blog.golang.org/constants
Numeric constants live in an arbitrary-precision numeric space; they are just regular numbers. But when they are assigned to a variable the value must be able to fit in the destination. We can declare a constant with a very large value:
const Huge = 1e1000
—that's just a number, after all—but we can't assign it or even print it. This statement won't even compile:
fmt.Println(Huge)
The error is, "constant 1.00000e+1000 overflows float64", which is true. But Huge might be useful: we can use it in expressions with other constants and use the value of those expressions if the result can be represented in the range of a float64.
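For example, the following compiles and prints 10, because the division is carried out on the arbitrary-precision constants and only the result has to fit in a float64. This is a minimal sketch; the Huge / 1e999 expression is the one used in that blog post:

package main

import "fmt"

const Huge = 1e1000 // fine: constants have arbitrary precision and do not overflow

func main() {
    // fmt.Println(Huge)     // would not compile: constant 1e+1000 overflows float64
    fmt.Println(Huge / 1e999) // prints 10: the constant expression is evaluated
                              // first, and 10 fits comfortably in a float64
}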
How it actually does this, especially in the Huge case given above, I do not know.
The Go Programming Language Specification
Numeric constants represent exact values of arbitrary precision and do not overflow. Consequently, there are no constants denoting the IEEE-754 negative zero, infinity, and not-a-number values.
Implementation restriction: Although numeric constants have arbitrary precision in the language, a compiler may implement them using an internal representation with limited precision. That said, every implementation must:
Represent integer constants with at least 256 bits.
Represent floating-point constants, including the parts of a complex constant, with a mantissa of at least 256 bits and a signed binary exponent of at least 16 bits.
Give an error if unable to represent an integer constant precisely.
Give an error if unable to represent a floating-point or complex constant due to overflow.
Round to the nearest representable constant if unable to represent a floating-point or complex constant due to limits on precision.
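A small sketch of my own that exercises these rules with an integer constant: the 201-bit constant itself is fine, printing it directly would overflow int, but a constant expression that shrinks it back down works:

package main

import "fmt"

const Big = 1 << 200 // a 201-bit integer constant: fine, constants don't overflow

func main() {
    // fmt.Println(Big)     // compile error: constant overflows int
    fmt.Println(Big >> 198) // prints 4: the constant expression is reduced to a
                            // small value before it is ever given a type
}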
A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent floating-point numeric types are:
float32    the set of all IEEE-754 32-bit floating-point numbers
float64    the set of all IEEE-754 64-bit floating-point numbers
Constants are evaluated using package math/big at compile time, which provides arbitrary-precision arithmetic. Variables use IEEE-754 floating-point arithmetic, which is often provided by the hardware.
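As a rough illustration of that claim (my own sketch, not how the compiler is actually structured internally), you can redo 10.1*3.0 with math/big at 256-bit precision, the minimum mantissa width the spec requires, and round to float64 only at the end:

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // Parse 10.1 with a 256-bit mantissa, like a compile-time constant.
    x, _, err := big.ParseFloat("10.1", 10, 256, big.ToNearestEven)
    if err != nil {
        panic(err)
    }

    // Multiply by 3 at the same high precision.
    product := new(big.Float).SetPrec(256).Mul(x, big.NewFloat(3))

    // Round to float64 only once, at the very end.
    asFloat64, _ := product.Float64()
    fmt.Println(asFloat64) // 30.3

    // Compare with rounding 10.1 to float64 first and then multiplying.
    y := 10.1
    fmt.Println(y * 3.0) // 30.299999999999997
}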