I don't understand how Go is outperforming C++ in this operation by 10 times; even the map lookup is 3 times faster in Go than in C++.
This is the C++ snippet:
#include <iostream>
#include <unordered_map>
#include <chrono>

std::chrono::nanoseconds elapsed(std::chrono::steady_clock::time_point start) {
    std::chrono::steady_clock::time_point now = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(now - start);
}

void make_map(int times) {
    std::unordered_map<double, double> hm;
    double c = 0.0;
    for (int i = 0; i < times; i++) {
        hm[c] = c + 10.0;
        c += 1.0;
    }
}

int main() {
    std::chrono::steady_clock::time_point start_time = std::chrono::steady_clock::now();
    make_map(10000000);
    std::cout << "elapsed " << elapsed(start_time).count() << '\n';
}
This is the Go snippet:
func makeMap() {
    o := make(map[float64]float64)
    var i float64 = 0
    x := time.Now()
    for ; i <= 10000000; i++ {
        o[i] = i + 10
    }
    TimeTrack(x)
}
func TimeTrack(start time.Time) {
    elapsed := time.Since(start)
    // Skip this function, and fetch the PC and file for its parent.
    pc, _, _, _ := runtime.Caller(1)
    // Retrieve a function object for this function's parent.
    funcObj := runtime.FuncForPC(pc)
    // Regex to extract just the function name (and not the module path).
    runtimeFunc := regexp.MustCompile(`^.*\.(.*)$`)
    name := runtimeFunc.ReplaceAllString(funcObj.Name(), "$1")
    log.Printf("%s took %s", name, elapsed)
}
What I'd like to know is how to optimize the C++ to achieve better performance.
Updated to measure similar operations for both cpp and go. It starts measurement before calling the map-making function and ends it when the function returns. Both versions reserve space in the map and return the created map (from which a couple of numbers are printed).
Slightly modified cpp:
#include <iostream>
#include <unordered_map>
#include <chrono>
std::unordered_map<double, double> make_map(double times) {
    std::unordered_map<double, double> m(static_cast<size_t>(times));
    for (double c = 0; c < times; ++c) {
        m[c] = c + 10.0;
    }
    return m;
}
int main() {
    std::chrono::high_resolution_clock::time_point start_time = std::chrono::high_resolution_clock::now();
    auto m = make_map(10000000);
    std::chrono::high_resolution_clock::time_point end_time = std::chrono::high_resolution_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end_time - start_time);
    std::cout << elapsed.count() / 1000000000. << "s\n";
    std::cout << m[10] << "\n" << m[9999999] << "\n";
}
% g++ -DNDEBUG -std=c++17 -Ofast -o perf perf.cpp
% ./perf
2.81886s
20
1e+07
Slightly modified go version:
package main

import (
    "fmt"
    "time"
)

func make_map(elem float64) map[float64]float64 {
    m := make(map[float64]float64, int(elem))
    var i float64 = 0
    for ; i < elem; i++ {
        m[i] = i + 10
    }
    return m
}

func main() {
    start_time := time.Now()
    r := make_map(10000000)
    end_time := time.Now()
    fmt.Println(end_time.Sub(start_time))
    fmt.Println(r[10])
    fmt.Println(r[9999999])
}
% go build -a perf.go
% ./perf
1.967707381s
20
1.0000009e+07
It doesn't look like a tie as it did before the update. One thing slowing the cpp version down is the default hashing function for double. When I replaced it with a really bad (but fast) hasher, the time dropped to 1.89489s.
#include <cstring>

struct bad_hasher {
    size_t operator()(const double& d) const {
        static_assert(sizeof(double) == sizeof(size_t));
        // Reuse the double's bit pattern as the hash value.
        size_t h;
        std::memcpy(&h, &d, sizeof h); // well-defined, unlike a reinterpret_cast of the bytes
        return h;
    }
};
It's a bit hard to pin down "the speed of C++" (for almost any particular thing) because it can depend on quite a few variables, such as the compiler you use. For example, I'm typically seeing a difference of 2:1 or so between gcc and msvc for the C++ version of this code.
As far as differences between C++ and Go, I'd guess it's mostly down to differences in how the hash tables are implemented. One obvious point is that Go's map implementation allocates data space in blocks of 8 elements at a time, while in the standard library implementations I've seen, std::unordered_map places only one item per node.
We'd expect this to mean that, in a typical case, the C++ code will do a much larger number of individual allocations from the heap/free store, so its speed will depend much more heavily on the speed of the heap manager. The Go version should also have substantially higher locality of reference, so it makes better use of the cache.
Given those differences, I'm a little surprised that you're only seeing a 10:1 difference. My immediate guess would have been (somewhat) higher than that--but as we all know, one measurement is worth more than 100 guesses.
Meaningless microbenchmarks produce meaningless results.
Continuing @mrclx's and @TedLyngmo's microbenchmark thread, here is a fix for the bug in @TedLyngmo's Go microbenchmark:
perf.go:
package main

import (
    "fmt"
    "time"
)

func makeMap(elem float64) time.Duration {
    x := time.Now()
    o := make(map[float64]float64, int(elem))
    var i float64 = 0
    for ; i < elem; i++ {
        o[i] = i + 10
    }
    t := time.Now()
    return t.Sub(x)
}

func main() {
    r := makeMap(10000000)
    fmt.Println(r)
}
Output:
$ go version
go version devel +11af353531 Tue Feb 12 14:48:26 2019 +0000 linux/amd64
$ go build -a perf.go
$ ./perf
1.649880112s
$
perf.cpp:
#include <iostream>
#include <unordered_map>
#include <chrono>
void make_map(double times) {
    std::unordered_map<double, double> hm;
    hm.reserve(static_cast<size_t>(times)); // <- good stuff
    for (double c = 0; c < times; ++c) {
        hm[c] = c + 10.0;
    }
}
int main() {
    std::chrono::high_resolution_clock::time_point start_time = std::chrono::high_resolution_clock::now();
    make_map(10000000);
    std::chrono::high_resolution_clock::time_point end_time = std::chrono::high_resolution_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end_time - start_time);
    std::cout << elapsed.count() / 1000000000. << "s\n";
}
Output:
$ g++ --version
g++ (Ubuntu 8.2.0-7ubuntu1) 8.2.0
$ g++ -DNDEBUG -std=c++17 -Ofast -o perf perf.cpp
$ ./perf
3.09203s
$
Go leads!