lookup [host]: no such host error in Go

I have this test program which fetches URLs in parallel, but when I increase the parallel number to about 1040, I start to get lookup www.httpbin.org: no such host errors.

After some googling, I found others saying that failing to close the response body can cause this problem, but I do close it with res.Body.Close().

What's the problem here? Thanks very much.

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

func get(url string) ([]byte, error) {

    client := &http.Client{}
    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        return nil, err
    }

    res, err := client.Do(req)
    if err != nil {
        fmt.Println(err)
        return nil, err
    }

    // The body is read in full and closed, so the connection
    // should be released.
    bytes, readErr := ioutil.ReadAll(res.Body)
    res.Body.Close()

    fmt.Println(bytes)

    return bytes, readErr
}

func main() {
    for i := 0; i < 1040; i++ {
        go get(fmt.Sprintf("http://www.httpbin.org/get?a=%d", i))
    }
}

That's because you may have up to 1040 concurrent calls in your code, so you may very well end up in a state with 1040 bodies open and none yet closed. Each open body holds a socket, i.e. a file descriptor, and once the process hits its descriptor limit even the DNS lookup fails, which is what the no such host error reports.

You need to limit the number of goroutines used.

Here's one possible solution with a limit of 100 concurrent calls:

func getThemAll() {
    nbConcurrentGet := 100
    urls := make(chan string, nbConcurrentGet)
    // Start a fixed pool of workers, each consuming URLs from the channel.
    for i := 0; i < nbConcurrentGet; i++ {
        go func() {
            for url := range urls {
                get(url)
            }
        }()
    }
    for i := 0; i < 1040; i++ {
        urls <- fmt.Sprintf("http://www.httpbin.org/get?a=%d", i)
    }
    // Close the channel so the workers' range loops terminate.
    close(urls)
}

If you call this in the main function of your program, it may return before all tasks are finished, since main doesn't wait for the goroutines. You can use a sync.WaitGroup to prevent that:

// Needs "sync" added to the imports of the program above.
func main() {
    nbConcurrentGet := 100
    urls := make(chan string, nbConcurrentGet)
    var wg sync.WaitGroup
    for i := 0; i < nbConcurrentGet; i++ {
        go func() {
            for url := range urls {
                get(url)
                wg.Done()
            }
        }()
    }
    for i := 0; i < 1040; i++ {
        wg.Add(1)
        urls <- fmt.Sprintf("http://www.httpbin.org/get?a=%d", i)
    }
    close(urls) // let the workers exit once the queue is drained
    wg.Wait()
    fmt.Println("Finished")
}
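An equivalent way to cap concurrency, shown here only as a sketch and not part of the original answer, is a buffered channel used as a counting semaphore. It reuses the get function from the question, assumes the same limit of 100, and also needs "sync" imported:

func getThemAllSemaphore() {
    // A buffered channel used as a semaphore: capacity = max concurrency.
    sem := make(chan struct{}, 100)
    var wg sync.WaitGroup
    for i := 0; i < 1040; i++ {
        wg.Add(1)
        sem <- struct{}{} // blocks while 100 requests are already in flight
        go func(url string) {
            defer wg.Done()
            defer func() { <-sem }() // free the slot when this request is done
            get(url)
        }(fmt.Sprintf("http://www.httpbin.org/get?a=%d", i))
    }
    wg.Wait()
}

The trade-off versus the worker pool above is that a goroutine is spawned per URL, but each one is short-lived and at most 100 run at once.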

Well, technically your process is limited (by the kernel) to about 1000 open file descriptors by default, and every open connection uses one. Depending on the context you might need to increase this number.

In your shell, run (note the last line):

$ ulimit -a
-t: cpu time (seconds)         unlimited
-f: file size (blocks)         unlimited
-d: data seg size (kbytes)     unlimited
-s: stack size (kbytes)        8192
-c: core file size (blocks)    0
-v: address space (kb)         unlimited
-l: locked-in-memory size (kb) unlimited
-u: processes                  709
-n: file descriptors           2560

To increase it (temporarily):

$ ulimit -n 5000
(no output)

Then verify the fd limit:

$ ulimit -n
5000
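If you'd rather inspect or raise the limit from inside the Go process itself rather than in the shell, here is a minimal sketch using the syscall package. It assumes a Unix-like system; raising the soft limit beyond the hard limit requires elevated privileges, and some systems cap it lower than the reported hard limit:

package main

import (
    "fmt"
    "syscall"
)

func main() {
    var rl syscall.Rlimit
    // Read the current soft/hard file descriptor limits.
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
        fmt.Println("Getrlimit:", err)
        return
    }
    fmt.Printf("soft=%d hard=%d\n", rl.Cur, rl.Max)

    // Raise the soft limit to the hard limit.
    rl.Cur = rl.Max
    if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
        fmt.Println("Setrlimit:", err)
    }
}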