Is Go's sync.Pool much slower than make?

I tried to use sync.Pool to reuse []byte buffers, but it turned out to be slower than just calling make. Code:

package main

import (
    "sync"
    "testing"
)

func BenchmarkMakeStack(b *testing.B) {
    for N := 0; N < b.N; N++ {
        obj := make([]byte, 1024)
        _ = obj
    }
}

var bytePool = sync.Pool{
    New: func() interface{} {
        b := make([]byte, 1024)
        return &b
    },
}

func BenchmarkBytePool(b *testing.B) {
    for N := 0; N < b.N; N++ {
        obj := bytePool.Get().(*[]byte)
        _ = obj
        bytePool.Put(obj)
    }
}

Result:

$ go test pool_test.go -bench=. -benchmem
BenchmarkMakeStack-4    2000000000      0.29 ns/op       0 B/op    0 allocs/op
BenchmarkBytePool-4      100000000     17.2 ns/op        0 B/op    0 allocs/op

According to the Go docs, sync.Pool should be faster, but my test showed otherwise. Can anybody explain this?

Update: 1. Updated the code in the question to use the Go benchmark framework. 2. The answer lies in stack vs. heap allocation; see peterSO's answer.

First Law of Benchmarks: Meaningless microbenchmarks produce meaningless results.

Your unrealistic microbenchmarks are not meaningful.


Package sync

import "sync"

type Pool

A Pool is a set of temporary objects that may be individually saved and retrieved.

Any item stored in the Pool may be removed automatically at any time without notification. If the Pool holds the only reference when this happens, the item might be deallocated.

A Pool is safe for use by multiple goroutines simultaneously.

Pool's purpose is to cache allocated but unused items for later reuse, relieving pressure on the garbage collector. That is, it makes it easy to build efficient, thread-safe free lists. However, it is not suitable for all free lists.

An appropriate use of a Pool is to manage a group of temporary items silently shared among and potentially reused by concurrent independent clients of a package. Pool provides a way to amortize allocation overhead across many clients.

An example of good use of a Pool is in the fmt package, which maintains a dynamically-sized store of temporary output buffers. The store scales under load (when many goroutines are actively printing) and shrinks when quiescent.

On the other hand, a free list maintained as part of a short-lived object is not a suitable use for a Pool, since the overhead does not amortize well in that scenario. It is more efficient to have such objects implement their own free list.
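To make "amortize allocation overhead across many clients" concrete, here is a minimal sketch of the usual Get / defer Put pattern. The package name, copyWithPool, and the 4 KB buffer size are illustrative assumptions, not from the question:

package bufpool

import (
    "io"
    "sync"
)

// bufPool hands out reusable 4 KB buffers to concurrent callers.
var bufPool = sync.Pool{
    New: func() interface{} {
        b := make([]byte, 4096)
        return &b
    },
}

// copyWithPool copies r to w using a pooled buffer instead of
// allocating a fresh one on every call. Many concurrent callers
// share a small set of buffers, which is what amortizes the
// allocation cost and relieves GC pressure.
func copyWithPool(w io.Writer, r io.Reader) (int64, error) {
    bufp := bufPool.Get().(*[]byte)
    defer bufPool.Put(bufp)
    return io.CopyBuffer(w, r, *bufp)
}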

Is sync.Pool appropriate for your use case? Is sync.Pool appropriate for your benchmark? Are your use case and your benchmark the same? Is your use case a microbenchmark?


Using the Go testing package for your artificial benchmarks, with separate benchmarks for stack and heap make allocations, make is both faster (stack) and slower (heap) than sync.Pool.

Output:

$ go test pool_test.go -bench=. -benchmem
BenchmarkMakeStack-4    2000000000      0.29 ns/op       0 B/op    0 allocs/op
BenchmarkMakeHeap-4       10000000    136 ns/op       1024 B/op    1 allocs/op
BenchmarkBytePool-4      100000000     17.2 ns/op        0 B/op    0 allocs/op
$
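To see why BenchmarkMakeStack reports zero allocations while BenchmarkMakeHeap reports one, you can ask the compiler for its escape-analysis decisions (a sketch; the exact diagnostic lines vary between Go versions):

$ go test -gcflags='-m' pool_test.go

It should report that the make in BenchmarkMakeStack does not escape, while the slice assigned to the package-level obj escapes to the heap.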

pool_test.go:

package main

import (
    "sync"
    "testing"
)

func BenchmarkMakeStack(b *testing.B) {
    for N := 0; N < b.N; N++ {
        // obj does not escape, so it is allocated on the stack
        // (or optimized away entirely): 0 B/op, 0 allocs/op.
        obj := make([]byte, 1024)
        _ = obj
    }
}

// Package-level sink: assigning to it forces the slice to escape
// to the heap, so each iteration pays for a real allocation.
var obj []byte

func BenchmarkMakeHeap(b *testing.B) {
    for N := 0; N < b.N; N++ {
        obj = make([]byte, 1024)
        _ = obj
    }
}

var bytePool = sync.Pool{
    // New is only called when the pool has no free buffer;
    // it allocates a fresh 1 KB slice on the heap.
    New: func() interface{} {
        b := make([]byte, 1024)
        return &b
    },
}

func BenchmarkBytePool(b *testing.B) {
    for N := 0; N < b.N; N++ {
        // Get and Put with no work in between measures only the
        // pool's fixed overhead, not any saved allocations.
        obj := bytePool.Get().(*[]byte)
        _ = obj
        bytePool.Put(obj)
    }
}
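
The benchmarks above still never use the buffer and never run concurrently. As a rough sketch (not from the original answer), benchmarks closer to Pool's intended use would actually write to the buffer and share the pool across goroutines via b.RunParallel. Appended to pool_test.go, something like:

// fill does a token amount of work so the compiler cannot
// optimize the buffer away.
func fill(p []byte) {
    for i := range p {
        p[i] = byte(i)
    }
}

// The pool is shared by all goroutines; buffers are reused
// across iterations instead of being reallocated.
func BenchmarkBytePoolParallel(b *testing.B) {
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            bufp := bytePool.Get().(*[]byte)
            fill(*bufp)
            bytePool.Put(bufp)
        }
    })
}

// A fresh slice every iteration; whether it lands on the stack or
// the heap is up to escape analysis, and sync.Pool only pays off
// against the heap-allocated, garbage-collected case.
func BenchmarkMakeParallel(b *testing.B) {
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            buf := make([]byte, 1024)
            fill(buf)
        }
    })
}

Even then, the result only matters if it resembles how your real program allocates, uses, and discards these buffers.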