I wrote a benchmark to compare the speed of two Fibonacci number generators; the source code is here on GitHub.
func BenchmarkFib(b *testing.B) {
	fibFuncs := []struct {
		name string
		f    func(int) int
	}{
		{"recursive", fibRecu},
		{"iterative", fibIter},
	}
	for _, fibFunc := range fibFuncs {
		// calculate the k'th Fibonacci number for k = 10, 100, 1000
		for k := 10; k < 1001; k *= 10 {
			b.Run(fmt.Sprintf("%s Fib %v", fibFunc.name, k), func(b *testing.B) {
				for n := 0; n < b.N; n++ {
					// b.StopTimer()
					// reset the memo
					memo = map[int]int{0: 0, 1: 1, 2: 1}
					// b.StartTimer()
					fibFunc.f(k)
				}
			})
		}
	}
}
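The generators and the memo are not shown in the question; a minimal sketch of what they presumably look like, given the benchmark's reset of a package-level memo, is:

var memo = map[int]int{0: 0, 1: 1, 2: 1}

// fibRecu computes the k'th Fibonacci number top-down,
// caching each result in the package-level memo.
func fibRecu(k int) int {
	if v, ok := memo[k]; ok {
		return v
	}
	memo[k] = fibRecu(k-1) + fibRecu(k-2)
	return memo[k]
}

// fibIter fills the memo bottom-up through the k'th entry.
func fibIter(k int) int {
	for i := 3; i <= k; i++ {
		memo[i] = memo[i-1] + memo[i-2]
	}
	return memo[k]
}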
As written (with the timer calls commented out), the benchmark works, and the output is:
nos (master) fibonacci $ go test -bench .
goos: linux
goarch: amd64
pkg: github.com/nosarthur/dynamicP/fibonacci
BenchmarkFib/recursive_Fib_10-4      1000000      1256 ns/op
BenchmarkFib/recursive_Fib_100-4      100000     18256 ns/op
BenchmarkFib/recursive_Fib_1000-4      10000    206109 ns/op
BenchmarkFib/iterative_Fib_10-4     10000000       218 ns/op
BenchmarkFib/iterative_Fib_100-4     5000000       292 ns/op
BenchmarkFib/iterative_Fib_1000-4    2000000       881 ns/op
PASS
ok github.com/nosarthur/dynamicP/fibonacci 12.208s
However, I added b.StopTimer() and b.StartTimer() to exclude the time spent resetting the memo. With those two lines uncommented, the benchmark hangs, and the partial output is:
nos (master *) fibonacci $ go test -bench .
goos: linux
goarch: amd64
pkg: github.com/nosarthur/dynamicP/fibonacci
BenchmarkFib/recursive_Fib_10-4      1000000      2139 ns/op
BenchmarkFib/recursive_Fib_100-4      100000     24775 ns/op
BenchmarkFib/recursive_Fib_1000-4       5000    239197 ns/op
BenchmarkFib/iterative_Fib_10-4 ^Csignal: interrupt
FAIL github.com/nosarthur/dynamicP/fibonacci 269.067s
What is the proper way to exclude the memo reset from the measurement? My Go version is 1.10.1.
What's happening is that your functions are really fast, particularly the iterative one, while your map reset (and the StartTimer and StopTimer calls themselves, which allocate runtime stats) is much, much slower.
When you call StopTimer, the internally tracked duration of the benchmark ends up covering only the time spent in your function. Guess what the framework uses to estimate how many iterations it can run within the designated benchmark time? Yep, you guessed it: that same internal duration.
So your iterative function takes about 10ns to run, the map reset takes about 250ns, and the timer calls take considerably longer still, but the benchmark believes each iteration costs only ~20ns and sets b.N accordingly. Every one of those vastly overestimated iterations still pays the untimed reset and timer overhead in wall-clock time, which is why the run blows far past the deadline and appears to hang.
My suggestion: don't use the StartTimer/StopTimer functions in this case; instead, add a third, no-op entry to your table of functions -- basically:
{"baseline", func(int) int {return 0}},
Then just subtract this function's times from the other two sets to estimate how much of each ns/op came from the allocation versus the Fibonacci computation itself.
Here are the results when I ran this:
BenchmarkFib/baseline_Fib_10-2       5000000       357 ns/op
BenchmarkFib/baseline_Fib_100-2      5000000       327 ns/op
BenchmarkFib/baseline_Fib_1000-2     5000000       310 ns/op
BenchmarkFib/recursive_Fib_10-2      1000000      1659 ns/op
BenchmarkFib/recursive_Fib_100-2       50000     24898 ns/op
BenchmarkFib/recursive_Fib_1000-2       5000    301771 ns/op
BenchmarkFib/iterative_Fib_10-2      5000000       333 ns/op
BenchmarkFib/iterative_Fib_100-2     3000000       394 ns/op
BenchmarkFib/iterative_Fib_1000-2    1000000      1052 ns/op
PASS
ok _/tmp/dynamicP/fibonacci 15.305s
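For example, at k = 1000 the subtraction attributes roughly 1052 - 310 = 742 ns/op to the iterative function itself and 301771 - 310 = 301461 ns/op to the recursive one; at k = 10 the baseline (357 ns/op) is comparable to the full iterative run (333 ns/op), meaning essentially all of the measured time there is the map reset.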