Why do these two Golang integer conversion functions give different results?

I wrote a function to convert a byte slice to an integer.

The function I created is actually a loop-based implementation of what Rob Pike published here:

http://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html

Here is Rob's code:

i = (data[0]<<0) | (data[1]<<8) | (data[2]<<16) | (data[3]<<24);

My first implementation (toInt2 in the playground) doesn't work as I expected: it appears to treat the int value as if it were a uint. This seems really strange, but it must be platform specific, because the Go playground reports a different result than my machine (a Mac).

Can anyone explain why these functions behave differently on my mac?

Here's the link to the playground with the code: http://play.golang.org/p/FObvS3W4UD

Here's the code from the playground (for convenience):

/*

Output on my machine:

    amd64 darwin go1.3 input: [255 255 255 255]
    -1
    4294967295

Output on the go playground:

    amd64p32 nacl go1.3 input: [255 255 255 255]
    -1
    -1

*/

package main

import (
    "fmt"
    "runtime"
)

func main() {
    input := []byte{255, 255, 255, 255}
    fmt.Println(runtime.GOARCH, runtime.GOOS, runtime.Version(), "input:", input)
    fmt.Println(toInt(input))
    fmt.Println(toInt2(input))
}

func toInt(bytes []byte) int {
    var value int32 = 0 // initialized with int32

    for i, b := range bytes {
        value |= int32(b) << uint(i*8)
    }
    return int(value) // converted to int
}

func toInt2(bytes []byte) int {
    var value int = 0 // initialized with plain old int

    for i, b := range bytes {
        value |= int(b) << uint(i*8)
    }
    return value
}

This is an educated guess, but the int type can be 64-bit or 32-bit depending on the platform. On my system and on yours it's 64-bit; since the playground runs on nacl, there it's 32-bit.

If you change the second function to use uint throughout, it will work fine.
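
A minimal sketch of that change, assuming you're happy returning an unsigned value (the name toUint is just for illustration):

    // toUint is toInt2 with the accumulator and return type switched to uint.
    // With an unsigned type the bit pattern 0xFFFFFFFF prints as 4294967295
    // whether uint happens to be 32 or 64 bits wide.
    func toUint(bytes []byte) uint {
        var value uint
        for i, b := range bytes {
            value |= uint(b) << uint(i*8)
        }
        return value // [255 255 255 255] -> 4294967295 on both platforms
    }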

From the spec:

uint     either 32 or 64 bits 
int      same size as uint
uintptr  an unsigned integer large enough to store the uninterpreted bits of a pointer value

int is allowed to be 32 or 64 bits, depending on the platform/implementation. When it is 64 bits, it can hold 4294967295 (2^32 − 1) as a positive signed integer, which is what happens on your machine. When it is 32 bits (the playground), the final OR sets the sign bit, so the value comes out as -1, just like toInt.
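
To make the width dependence explicit, here's a small sketch (my own illustration, not code from the question) that forces both widths regardless of GOARCH:

    package main

    import "fmt"

    func main() {
        input := []byte{255, 255, 255, 255}

        // Accumulate into explicitly sized integers instead of plain int,
        // so the result no longer depends on the platform's int width.
        var v32 int32
        var v64 int64
        for i, b := range input {
            v32 |= int32(b) << uint(i*8) // last byte sets the sign bit
            v64 |= int64(b) << uint(i*8) // fits comfortably in 63 value bits
        }
        fmt.Println(v32) // -1 on every platform
        fmt.Println(v64) // 4294967295 on every platform
    }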