Concurrent memory allocation with `make`?

I am going to read a large CSV file and return a slice of structs. So, I decided to split the large file into multiple smaller files of 1 million lines each and use goroutines to process them in parallel.

Inside each worker, I create a slice to hold the file's lines:

for i := 0; i < 10; i++ {
    go func(index int) {
        lines := make([]MyStruct, 1000000)
        _ = lines // placeholder: the real worker fills lines from its file
    }(i)
}

It seems like the goroutines wait for each other on this line. So, if allocating the slice takes 1 second, 10 concurrent goroutines doing it take 10 seconds instead of 1!

Could you please help me understand why? If this is so, I guess I will allocate the memory before starting the goroutines and pass the slice to each of them, plus the index of the element each one should start at while reading lines and setting values.

You need to call runtime.GOMAXPROCS(runtime.NumCPU()), or set the GOMAXPROCS environment variable, for the program to actually use multiple cores; before Go 1.5 it defaults to 1.

ref: http://golang.org/pkg/runtime/#GOMAXPROCS

And to quote @siritinga:

And of course, you need to do something with lines. Right now, they are allocated and then lost to the garbage collector.

A different approach is to preallocate the slice and then pass parts of it to the goroutines, for example:

N := 1000000
lines := make([]MyStruct, 10*N)
for i := 0; i < 10; i++ {
    idx := i * N
    go func(chunk []MyStruct) {
        // do stuff with this worker's chunk of lines
    }(lines[idx : idx+N])
}