What is the fastest way to concatenate several []byte together?

Right now I'm using the code below (as in BenchmarkEncoder()) and it's fast, but I'm wondering if there is a faster, more efficient way. I benchmark with GOMAXPROCS=1 and:

sudo -E nice -n -20 go test -bench . -benchmem -benchtime 3s

package blackbird

import (
    "bytes"
    "encoding/hex"
    "encoding/json"
    "log"
    "testing"
)

var (
    d1, d2, d3, d4, outBytes []byte
    toEncode [][]byte
)

// mustDecode decodes a hex string, aborting the test binary on failure.
// (The original checked err only once, so decode errors in all but the
// last call were silently dropped; it also used log.Fatal with a format
// string instead of log.Fatalf.)
func mustDecode(s string) []byte {
    b, err := hex.DecodeString(s)
    if err != nil {
        log.Fatalf("hex decoding failed: %v", err)
    }
    return b
}

func init() {
    d1 = mustDecode("6e5438fd9c3748868147d7a4f6d355dd")
    d2 = mustDecode("0740e2dfa4b049f2beeb29cc304bdb5f")
    d3 = mustDecode("ab6743272358467caff7d94c3cc58e8c")
    d4 = mustDecode("7411c080762a47f49e5183af12d87330e6d0df7dd63a44808db4e250cdea0a36182fce4a309842e49f4202eb90184dd5b621d67db4a04940a29e981a5aea59be")
    toEncode = [][]byte{d1, d2, d3, d4}
}

func Encode(stuff [][]byte) []byte {
    return bytes.Join(stuff, nil)
}

func BenchmarkEncoderDirect(b *testing.B) {
    for i := 0; i < b.N; i++ {
        // Assign to a package-level sink so the call can't be optimized away.
        outBytes = bytes.Join(toEncode, nil)
    }
}

func BenchmarkEncoder(b *testing.B) {
    for i := 0; i < b.N; i++ {
        outBytes = Encode(toEncode)
    }
}

func BenchmarkJsonEncoder(b *testing.B) {
    for i := 0; i < b.N; i++ {
        outBytes, _ = json.Marshal(toEncode)
    }
}

bytes.Join() is pretty fast, but it does some extra work appending separators between the byte slices, and it does so even if the separator is an empty or nil slice.

So if you care about the best performance (although it will be a slight improvement), you may do what bytes.Join() does without appending (empty) separators: allocate a big-enough byte slice, and copy each slice into the result using the built-in copy() function.

Try it on the Go Playground:

// Join concatenates the given byte slices without any separator.
func Join(s ...[]byte) []byte {
    // First pass: compute the total length so we allocate exactly once.
    n := 0
    for _, v := range s {
        n += len(v)
    }

    // Second pass: copy each slice into place, advancing the write offset.
    b, i := make([]byte, n), 0
    for _, v := range s {
        i += copy(b[i:], v)
    }
    return b
}

Using it:

concatenated := Join(d1, d2, d3, d4)

Improvements:

If you know the total size in advance (or you can calculate it faster than looping over the slices), provide it and you can skip the counting loop entirely:

// JoinSize is like Join, but takes the total size up front
// instead of computing it from the slices.
func JoinSize(size int, s ...[]byte) []byte {
    b, i := make([]byte, size), 0
    for _, v := range s {
        i += copy(b[i:], v)
    }
    return b
}

Using it in your case (d1, d2 and d3 each decode to 16 bytes, hence the fixed 48):

concatenated := JoinSize(48 + len(d4), d1, d2, d3, d4)

Notes:

But if your goal in the end is to write the concatenated byte slice into an io.Writer, then performance-wise it is better not to concatenate them at all, but to write each slice into it separately.
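
For example, here is a minimal sketch of that approach (the writeAll helper and its name are illustrative, not part of the original answer; it assumes the io package is imported):

func writeAll(w io.Writer, parts ...[]byte) error {
    for _, p := range parts {
        // Write each part directly; no intermediate concatenated slice is built.
        if _, err := w.Write(p); err != nil {
            return err
        }
    }
    return nil
}

If the destination makes many small writes expensive (a file or a network connection), wrapping it in a bufio.Writer keeps this approach cheap.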

In general, @icza's answer is right. For your specific use case, however, you can allocate once and decode directly into that buffer, which is much more efficient. Like this:

package main

import (
    "encoding/hex"
)

func main() {
    h1 := []byte("6e5438fd9c3748868147d7a4f6d355dd")
    h2 := []byte("0740e2dfa4b049f2beeb29cc304bdb5f")
    h3 := []byte("ab6743272358467caff7d94c3cc58e8c")
    h4 := []byte("7411c080762a47f49e5183af12d87330e6d0df7dd63a44808db4e250cdea0a36182fce4a309842e49f4202eb90184dd5b621d67db4a04940a29e981a5aea59be")

    // Allocate enough space for the three 16-byte IDs and up to 1MB of extra data.
    tg := make([]byte, 16+16+16+(1024*1024))

    // Decode each part directly into its slot of the shared buffer
    // (error handling omitted for brevity).
    hex.Decode(tg[:16], h1)
    hex.Decode(tg[16:32], h2)
    hex.Decode(tg[32:48], h3)
    l, _ := hex.Decode(tg[48:], h4)

    // Re-slice to the actual decoded length: 48 fixed bytes plus l variable bytes.
    tg = tg[:48+l]
}

At the end of that code, tg holds the 3 IDs plus the variable-length 4th chunk of data, decoded, contiguously.
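
If you want to measure this against the Join-based encoders, a benchmark in the style of the question's code could look like the sketch below (BenchmarkDecodeInto is a made-up name; h1 through h4 are assumed to be the hex-encoded inputs from the snippet above, moved to package level):

func BenchmarkDecodeInto(b *testing.B) {
    // Allocate the destination once, outside the timed loop;
    // each decoded value lands in its own slot of the shared buffer.
    tg := make([]byte, 16+16+16+len(h4)/2)
    for i := 0; i < b.N; i++ {
        hex.Decode(tg[:16], h1)
        hex.Decode(tg[16:32], h2)
        hex.Decode(tg[32:48], h3)
        hex.Decode(tg[48:], h4)
    }
}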