Trying to improve insert speed into MySQL

Hi, I need to upload an enormous amount of small text records into MySQL. Unfortunately there is no BulkOp available to me with MySQL, so what I am trying is to use goroutines to parallelize the transactions. The problem is that all this concurrency and racing stuff drives me a bit crazy.

And I am not sure whether what I have come up with is any good.

A simplified version of the code looks like this: the huge file is scanned line by line, each line is appended to a slice, and once the slice holds 1000 lines a transaction is kicked off.

sem := make(chan int, 10) // transaction pool: at most 10 concurrent transactions
sem2 := make(chan int)    // auxiliary blocking semaphore
for scanner.Scan() {
    line := scanner.Text()
    lines = append(lines, line)
    if len(lines) > 1000 {
        sem <- 1 // keep at most 10 transactions in flight
        go func(mylines ...lineT) {
            // I use a variadic parameter to avoid issues with pointers;
            // I want to pass the data by value.
            <-sem2 // all lines of the slice have been handed over, release the lock
            gopher(mylines...) // gopher runs the transaction, iterating over
                               // every line. Here I could probably use a
                               // plain slice instead, I think.
            <-sem // transaction done, release the slot
        }(lines...)
        sem2 <- 1 // make sure the slice is only reset after the values have been
                  // handed to the func; otherwise lines could be nil before the
                  // goroutine fires.
        lines = nil // reset the slice
    }
}
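
For comparison, this is roughly the shape I keep arriving at instead (only a sketch, not tested against my real schema): batches are sent over a channel to a fixed pool of workers, and each batch is copied explicitly before it is sent, so the sem2 handshake would not be needed. I am assuming here that lineT is just a string alias and that gopher is the same function as above; bufio and sync would have to be imported.

batchCh := make(chan []lineT) // batches travel over a channel instead of a semaphore
var wg sync.WaitGroup

// fixed pool of 10 workers, each running one transaction at a time
for i := 0; i < 10; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        for batch := range batchCh {
            gopher(batch...) // same transaction logic as before
        }
    }()
}

var lines []lineT
for scanner.Scan() {
    lines = append(lines, lineT(scanner.Text()))
    if len(lines) >= 1000 {
        batch := make([]lineT, len(lines))
        copy(batch, lines) // explicit copy, so resetting lines is safe
        batchCh <- batch
        lines = lines[:0] // reuse the backing array for the next batch
    }
}
if len(lines) > 0 {
    batchCh <- lines // don't lose the final partial batch
}
close(batchCh)
wg.Wait()

But I am not sure this is actually any more correct than the semaphore version.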

How can I solve this better? I know I could bulk import via the MySQL utilities, but that is not possible here. I also can't build a single INSERT with many values like VALUES ("1", "2"), ("3", "4") by concatenating strings, because the values are not properly escaped (a line containing a quote breaks the statement) and I just get errors.

The following way looks a tad weird, but not as weird as my first approach:

func gopher2(lines []lineT) {
    q := "INSERT INTO main_data(text) VALUES "
    var aur []string
    var inter []interface{}
    for _, l := range lines {
        aur = append(aur, "(?)")  // one placeholder per line
        inter = append(inter, l)  // and the matching argument
    }
    q = q + strings.Join(aur, ", ")

    if _, err := db.Exec(q, inter...); err != nil {
        log.Println(err)
    }
}
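
Another variant I have been considering is to commit several of these multi-value inserts inside one explicit transaction, roughly like the sketch below (gopher3 is just a made-up name; it assumes the same db handle and lineT as above, and that database/sql and strings are imported).

func gopher3(batches [][]lineT) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    for _, lines := range batches {
        placeholders := make([]string, len(lines))
        args := make([]interface{}, len(lines))
        for i, l := range lines {
            placeholders[i] = "(?)" // one placeholder tuple per line
            args[i] = l             // and the matching argument
        }
        q := "INSERT INTO main_data(text) VALUES " + strings.Join(placeholders, ", ")
        if _, err := tx.Exec(q, args...); err != nil {
            tx.Rollback()
            return err
        }
    }
    return tx.Commit()
}

But I don't know whether that actually buys anything over gopher2 plus the goroutine pool, or whether I am just moving the problem around.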