I have been playing around with Go and ran into a (non?)feature while trying to run the following code:
a := 1 //int
b := 1.0 //float64
c := a/b //should be float64
When I ran this I got the following compile-time error:
invalid operation: a / b (mismatched types int and float64)
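For completeness, here is a minimal program that reproduces it:
package main

func main() {
    a := 1     // int
    b := 1.0   // float64
    c := a / b // does not compile: mismatched types int and float64
    _ = c
}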
I thought Go was supposed to be pretty good at type inference. Why should it be necessary for me to write:
c := float64(a)/b //float64
In general, given two numeric types, c should be inferred as the smallest type that contains both. I can't see this being an oversight, so I am just trying to figure out why this behavior was decided upon. Was it for readability reasons only? Or would my suggested behavior cause some kind of logical inconsistency in the language?
This is mentioned in the FAQ: Why does Go not provide implicit numeric conversions?
The convenience of automatic conversion between numeric types in C is outweighed by the confusion it causes. When is an expression unsigned? How big is the value? Does it overflow? Is the result portable, independent of the machine on which it executes?
It also complicates the compiler.
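Note that Go's untyped constants are the exception: a constant expression adapts to whatever type its context requires, so mixing literal kinds compiles fine. It is only typed values, like a and b above whose types were fixed by :=, that require an explicit conversion. A small sketch:
c := 1 / 2.0 // fine: both operands are untyped constants, c is float64 0.5
d := 1.0     // d is a typed float64
e := d * 2   // fine: the untyped constant 2 adapts to float64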
That is why you need either an explicit type conversion:
c := float64(a)/b
or to declare both variables as float64 from the start:
var a, b float64
a = 1 //float64
b = 1.0 //float64
c := a/b
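Put together as a runnable program (a minimal sketch combining both fixes, using fresh names x and y for the second approach):
package main

import "fmt"

func main() {
    a := 1   // int
    b := 1.0 // float64
    fmt.Println(float64(a) / b) // explicit conversion: prints 1

    var x, y float64 = 1, 1.0 // both declared as float64 up front
    fmt.Println(x / y)        // prints 1
}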