Combining goroutines with buffered reading to optimize reading a large file

Given a requirement where a large CSV file (lines of about 300 bytes, each ending in \n) needs to be processed in a typical ETL (Extract, Transform, Load) fashion: each line is read, split, and composed into a JSON document that is inserted into a DB. Would it be beneficial to spawn one (or more) goroutines that work together processing the file? What would need to be done to create a bufio.Scanner that starts reading from a random position of the file?

Would it be beneficial to spawn one (or more) goroutines?

Yes, absolutely. In general, you could run one (or more) goroutines for each of the E, T, and L stages, coordinated via channels, so that each stage processes different lines concurrently.
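For illustration, here is a minimal sketch of such a pipeline: one goroutine per stage, connected by channels. The file name data.csv, the record type, and splitting on commas are assumptions; the Load stage just prints the JSON instead of inserting it into a DB.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strings"
)

// record is a hypothetical shape for one parsed CSV line; adapt it to the real schema.
type record struct {
	Fields []string `json:"fields"`
}

func main() {
	f, err := os.Open("data.csv") // assumed file name
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	lines := make(chan string, 64) // Extract -> Transform
	docs := make(chan []byte, 64)  // Transform -> Load

	// Extract: read the file line by line with a buffered scanner.
	go func() {
		defer close(lines)
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			lines <- sc.Text()
		}
		if err := sc.Err(); err != nil {
			log.Println("scan:", err)
		}
	}()

	// Transform: split each line and marshal it into JSON.
	go func() {
		defer close(docs)
		for line := range lines {
			doc, err := json.Marshal(record{Fields: strings.Split(line, ",")})
			if err != nil {
				log.Println("marshal:", err)
				continue
			}
			docs <- doc
		}
	}()

	// Load: stand-in for the DB insert; here we just print each document.
	for doc := range docs {
		fmt.Println(string(doc))
	}
}
```

The buffered channels let the stages overlap; whether this yields a real speedup depends on where the bottleneck is (often the DB insert, which you could fan out across several Load goroutines).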

For more insights, check out this awesome talk from Rob Pike himself:

Concurrency is not Parallelism: https://goo.gl/cp8xgF (slides: http://talks.golang.org/2012/waza.slide#1)
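
As for the second part of the question (starting a bufio.Scanner from a random position): a common approach is to Seek the underlying *os.File to the desired offset and then discard the first, likely partial, line so that scanning resumes at the next line boundary. A minimal sketch, with a hypothetical helper name:

```go
package etl

import (
	"bufio"
	"io"
	"os"
)

// startScannerAt is a hypothetical helper: it seeks f to offset and returns a
// scanner positioned at the next full line. Since we cannot know whether the
// offset falls exactly on a line boundary, the first (possibly partial) line
// is always discarded when offset > 0; whoever reads the preceding chunk of
// the file is responsible for that line.
func startScannerAt(f *os.File, offset int64) (*bufio.Scanner, error) {
	if _, err := f.Seek(offset, io.SeekStart); err != nil {
		return nil, err
	}
	sc := bufio.NewScanner(f)
	if offset > 0 {
		sc.Scan() // drop the partial line
	}
	return sc, sc.Err()
}
```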