Type inference and confusion with literals in Go

I have started learning Go and am currently reading about its type inference system and the short variable declaration syntax.

Here is a simple program that caught my attention and is causing me some difficulty in understanding:

package main

import (
    "fmt"
    "sort"
)

type statistics struct {
    numbers []float64
    mean    float64
    median  float64
}

// Performs analytics on a slice of floating-point numbers
func GenerateStats(numbers []float64) (stats statistics) {
    stats.numbers = numbers
    stats.mean = sum(numbers) / float64(len(numbers))
    sort.Float64s(numbers)
    stats.median = median(numbers)
    return stats
}

// Helper function to sum up a slice of floats
func sum(numbers []float64) (total float64) {
    for _, num := range numbers {
        total += num
    }
    return total
}

// Helper to find the median of a sorted slice of floats
func median(numbers []float64) (med float64) {
    n := len(numbers)
    if n%2 == 0 {
        med = (numbers[n/2] + numbers[(n-1)/2]) / 2 // Infer 01
    } else {
        med = numbers[n/2]
    }
    return med
}

func main() {
    nums := []float64{1, 2, 3, 3, 4}
    result := GenerateStats(nums)
    fmt.Println(result.numbers)
    fmt.Println(result.mean)
    fmt.Println(result.median)

    b := "This is Go" + 1.9 // Infer 02
    fmt.Println(b)
}

When I execute this program using: $ go run <path>/statistics.go,
I get the following error message:

# command-line-arguments
<path>/statistics.go:47:20: cannot convert "This is Go" to type float64
<path>/statistics.go:47:20: invalid operation: "This is Go" + 1.9 (mismatched types string and float64)

Here is my reasoning for the two different behaviours:

Infer 01: The type of the numeric literal 2 is inferred from the expression it is used in. Since the numerator has type float64, Go performs the division successfully by treating 2 as a float64 as well. Hence, the type of the variable on the LHS is float64.

I applied the same reasoning to Infer 02: the type of the float literal 1.9 should be inferred from the declaration. However, a float literal cannot be added to a string unless it is implicitly converted to a string, so I am unsure what type the variable b could have. Hence, an error should be raised.

Now, I am confused by the error message.

Why does the compiler try to implicitly convert the string literal to the float64 type?

In a general sense: How does the compiler infer types when both the operands are literals? What are some good resources that can help me understand Go's type inference system better?

It does not try to convert it to float64. It just complains that the string literal is not a float64.

It could say either that the string is not a float or that the float is not a string, but the second line covers both cases with the message mismatched types string and float64.

Literals and constants in Go do not have a concrete Go type by default. For example:

  • think of 1.1234 as a float constant (and not a float32 or float64)
  • 2 as an int constant
  • "hi" as a string constant.

When you assign a constant to a variable, Go tries to fit the constant into the variable based on the variable's type:

type DayCount int
const i = 123456

var a int32 = i
var b uint64 = i
var c int = i
var d int8 = i  // compile error: constant 123456 overflows int8
var e DayCount = i
var f DayCount = c  // compile error: cannot use c (type int) as type DayCount in assignment

If the variable itself doesn't have a declared type and the right-hand side of the assignment is a constant, the compiler will assume the "default type" of the constant: int for integer constants, float64 for float constants, and string for string constants.

var a = 1234          // a type will be int
var b = 1.1234567890  // b type will be float64
var c = "gogogo"      // c type will be string

You can also perform operations on constants (before they are converted to normal Go variables); these are evaluated at compile time:

var a int8 = 25 * 87  // compile error: constant 2175 overflows int8

Since both 25 and 87 are constants, the compiler multiplies them while compiling your program and then tries to fit the result into the variable a, which fails. The point is that the multiplication is not done while your program is running.
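Constant arithmetic is evaluated at compile time with arbitrary precision; only the final result has to fit the target type. A minimal sketch (the values are my own, chosen for illustration):

```go
package main

import "fmt"

func main() {
	var a int8 = 25 * 5 // 125 fits into int8, so this compiles

	// Intermediate constant values may exceed any machine type:
	const big = 1 << 62 // fine as an untyped constant

	fmt.Println(a, big>>60) // the shifted result, 4, fits the default int type
}
```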

If you mix normal variables and constants, the compiler will try to convert the constants to the variable's type:

var a uint64 = 87
var b = 25 * a
// Is the same as
var b = uint64(25) * a
// so b type is uint64

Now about the first case in your question

med = (numbers[n/2] + numbers[(n-1)/2]) / 2 // Infer 01

is the same as

med = (numbers[n/int(2)] + numbers[(n-int(1))/int(2)]) / float64(2)

The compiler tries to convert each constant to the type of the nearby variable. The first and second 2 are used in division operations with an int and are converted to int. The third one is used in a division with a float64 and is converted to float64. I don't think "type inference" is a fitting name for this.
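The same untyped constant can therefore take different types within a single expression, depending on what it is combined with. A minimal illustration using the slice from the question:

```go
package main

import "fmt"

func main() {
	numbers := []float64{1, 2, 3, 3, 4}
	n := len(numbers) // n is int

	// The 2s in the index expressions become int; the final 2 becomes float64.
	med := (numbers[n/2] + numbers[(n-1)/2]) / 2
	fmt.Println(med) // 3
}
```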

In the second case both operands of + are constants. The compiler uses a heuristic approach: since one of the constants is a float, it assumes both constants should fit into a float64 (which is faulty here) and tries to put a string constant into a float64, which fails (hence the cannot convert to float64 error).

The exact wording of the error in the second case is not dictated by the spec, and I don't think it matters much. IMO a valid compiler could just as well try to fit 1.9 into a string and report a different error.

See also the Go blog post on constants, the spec section on constants, and the spec section on constant expressions.