How to optimize a for loop that makes requests to an API? [closed]

I have a for-loop in my Go code. Each iteration makes a request to a certain API and then saves its result in a map. How do I optimise the performance so that the iterations run asynchronously?

I'm currently diving into goroutines and channels and all that, but I'm still having trouble applying it in the wild :)

results := make(map[string]Result)

for ID, person := range people {
    result := someApiCall(person)
    results[ID] = result
}

// And do something with all the results once completed

There are many ways to execute each iteration asynchronously. One of them is to take advantage of goroutines and channels (as you intended).

Please take a look at the example below. I think it'll be easier if I put the explanations as comments on each part of the code.

// prepare the channel for transporting data between the goroutines and the main routine
resChan := make(chan []interface{})

for ID, person := range people {

    // dispatch an IIFE as a goroutine, so there is no need to change `someApiCall()`
    go func(id string, person Person) {
        result := someApiCall(person)

        // send both the id and the result to the channel.
        // it would be better to define a new type holding the id and the result, but in this example I'll use a channel of type []interface{}
        resChan <- []interface{}{id, result}
    }(ID, person)
}

// NOTE: do not close the channel here; the goroutines above are still sending,
// and sending on a closed channel panics. Instead, receive exactly one value
// per goroutine, as done below.

// prepare a variable to hold all results
results := make(map[string]Result)

// receive one value per person from the channel
for range people {
    res := <-resChan
    id := res[0].(string)
    result := res[1].(Result)

    // store it in the map
    results[id] = result
}

// And do something with all the results once completed

Another way is to use a few sync APIs, such as sync.Mutex and sync.WaitGroup, to achieve the same goal.

// prepare a variable to hold all results
results := make(map[string]Result)

// prepare a mutex object to lock and unlock operations on the `results` variable, to avoid a data race.
mtx := new(sync.Mutex)

// prepare a waitgroup object to easily wait for all goroutines to finish
wg := new(sync.WaitGroup)

// tell the waitgroup object how many goroutines need to finish
wg.Add(len(people))

for ID, person := range people {

    // dispatch an IIFE as a goroutine, so there is no need to change `someApiCall()`
    go func(id string, person Person) {
        result := someApiCall(person)

        // lock the write to the `results` map to avoid a data race
        mtx.Lock()
        results[id] = result
        mtx.Unlock()

        // tell the waitgroup object that one goroutine has just finished
        wg.Done()
    }(ID, person)
}

// block until all goroutines have finished,
// then continue with the code below
wg.Wait()

// And do something with all the results once completed

A warning: both approaches above are fine when there are only a few items to iterate over. If there are many, they are not a good idea, because tons of goroutines will be dispatched at nearly the same time, causing very high memory usage. I suggest looking into the worker pool technique to improve the code; a rough sketch follows below.
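For reference, here is a minimal worker pool sketch built on the same assumptions as above (the `people` map, `Person`, `Result`, and `someApiCall` from the question), with a hypothetical worker count of 5. A fixed number of goroutines read jobs from one channel and write results to another, so the number of concurrent API calls stays bounded no matter how many people there are:

type job struct {
    id     string
    person Person
}

type item struct {
    id  string
    res Result
}

const numWorkers = 5 // hypothetical cap on concurrent API calls

jobs := make(chan job)
out := make(chan item)

// start a fixed pool of workers; each worker processes jobs until the jobs channel is closed
var wg sync.WaitGroup
wg.Add(numWorkers)
for w := 0; w < numWorkers; w++ {
    go func() {
        defer wg.Done()
        for j := range jobs {
            out <- item{id: j.id, res: someApiCall(j.person)}
        }
    }()
}

// feed the jobs, then close the jobs channel so the workers can exit
go func() {
    for ID, person := range people {
        jobs <- job{id: ID, person: person}
    }
    close(jobs)
}()

// close the output channel once every worker has finished
go func() {
    wg.Wait()
    close(out)
}()

// collect the results in the main routine; no mutex is needed because only this routine writes to the map
results := make(map[string]Result)
for it := range out {
    results[it.id] = it.res
}

The key design point is that concurrency is controlled by the size of the pool rather than by the size of the input, and since only the main routine writes to the map, no mutex is needed.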

You can use goroutines to call the API in parallel:

type Item struct {
    id string
    res Result
}

func callApi(id string, person Person, resultChannel chan Item) {
    res := someApiCall(person)
    resultChannel <- Item{id, res}
}

resultChannel := make(chan Item)
for id, person := range people {
    go callApi(id, person, resultChannel)
}

result := make(map[string]Result)
for range people {
    item := <- resultChannel
    result[item.id] = item.res
}

However, the above code ignores error handling (someApiCall might fail or panic), and if there are too many persons there will be too many API calls in parallel; normally you should limit the number of concurrent API calls. I'll leave those problems mostly as an exercise for you, but a rough sketch of one way to handle both follows.
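This sketch assumes a hypothetical someApiCallWithError variant of someApiCall that returns (Result, error), and an arbitrary cap of 10 concurrent calls. A buffered channel acts as a semaphore to limit the number of in-flight API calls, and errors travel back on the same channel as the results:

type outcome struct {
    id  string
    res Result
    err error
}

const maxInFlight = 10                  // hypothetical cap on concurrent API calls
sem := make(chan struct{}, maxInFlight) // buffered channel used as a semaphore
outcomes := make(chan outcome)

for id, person := range people {
    go func(id string, person Person) {
        sem <- struct{}{}        // acquire a slot; blocks while maxInFlight calls are running
        defer func() { <-sem }() // release the slot when this call finishes

        // hypothetical variant of someApiCall that also reports an error
        res, err := someApiCallWithError(person)
        outcomes <- outcome{id: id, res: res, err: err}
    }(id, person)
}

results := make(map[string]Result)
for range people {
    o := <-outcomes
    if o.err != nil {
        // handle or collect the error; here failed entries are simply skipped
        continue
    }
    results[o.id] = o.res
}

A worker pool (as described in the other answer) or a package such as golang.org/x/sync/errgroup can achieve the same thing in a more structured way.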