I have the following http client/server code:
Server
package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Println("Req: ", r.URL)
        w.Write([]byte("OK")) // <== PROBLEMATIC LINE
        // w.WriteHeader(200) // Works as expected
    })
    log.Fatal(http.ListenAndServe(":5008", nil))
}
Client
package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{}
    for i := 0; i < 500; i++ {
        url := fmt.Sprintf("http://localhost:5008/%02d", i)
        req, _ := http.NewRequest("GET", url, nil)
        _, err := client.Do(req)
        if err != nil {
            fmt.Println("error: ", err)
        } else {
            fmt.Println("success: ", i)
        }
        time.Sleep(10 * time.Millisecond)
    }
}
When I run the client above against the server, after 250 connections I get the following error from client.Do:

error: Get http://localhost:5008/250: dial tcp: lookup localhost: no such host

and no more connections succeed.
If I change the line in the server from w.Write([]byte("OK")) to w.WriteHeader(200), however, then there is no limit to the number of connections and everything works as expected.
What am I missing here?
You are not closing the response body. When the server writes a body, the connection is left open because the response has not been read yet. When the server only calls WriteHeader, the response is complete and the connection can be reused or closed.
To be completely honest, I do not know why leaving connections open causes domain lookups to fail. Given that 250 is awfully close to the round number 256, I would guess there is an artificial limit imposed by the OS that you are hitting. Perhaps the maximum number of file descriptors allowed is 256? That seems low, but it would explain the problem.
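If you want to check that guess, here is a minimal diagnostic sketch that prints the process's file-descriptor limit. It assumes a Unix-like system where the syscall package exposes RLIMIT_NOFILE; it is an aid for verifying the theory, not part of the fix:

package main

import (
    "fmt"
    "log"
    "syscall"
)

func main() {
    var rlim syscall.Rlimit
    // RLIMIT_NOFILE is the cap on open file descriptors for this process.
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rlim); err != nil {
        log.Fatal(err)
    }
    fmt.Println("soft limit:", rlim.Cur, "hard limit:", rlim.Max)
}

Whatever the limit turns out to be, closing the response body fixes the client: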
package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{}
    for i := 0; i < 500; i++ {
        url := fmt.Sprintf("http://localhost:5008/%02d", i)
        req, _ := http.NewRequest("GET", url, nil)
        resp, err := client.Do(req)
        if err != nil {
            fmt.Println("error: ", err)
        } else {
            fmt.Println("success: ", i)
            resp.Body.Close()
        }
        time.Sleep(10 * time.Millisecond)
    }
}
The application must close the response body on the client, as described at the beginning of the net/http package documentation.
package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{}
    for i := 0; i < 500; i++ {
        url := fmt.Sprintf("http://localhost:5008/%02d", i)
        req, _ := http.NewRequest("GET", url, nil)
        resp, err := client.Do(req)
        if err != nil {
            fmt.Println("error: ", err)
        } else {
            resp.Body.Close() // <---- close is required
            fmt.Println("success: ", i)
        }
        time.Sleep(10 * time.Millisecond)
    }
}
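In a larger program, the usual way to guarantee the close on every return path is a defer inside a helper function. This is a sketch of that pattern; fetch is a hypothetical name, not something from the code above:

import (
    "io/ioutil"
    "net/http"
)

// fetch reads and returns the response body. The deferred Close runs
// on every return path once Get has succeeded, including error paths.
func fetch(client *http.Client, url string) ([]byte, error) {
    resp, err := client.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return ioutil.ReadAll(resp.Body)
}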
If the application does not close the response body, then the underlying network connection may not be closed or returned to the client's connection pool. In that case, each new request creates a new network connection. The process eventually hits the file descriptor limit, and anything that requires a file descriptor will fail, including name lookups and opening new connections.
The default limit on the number of open file descriptors on OS X is 256. I'd expect the client application to fail just short of that limit.
Because each connection to the server uses a file descriptor on the server, the server may also have reached its file descriptor limit.
When w.Write([]byte("OK")) is removed from the server code, the response body has zero length. This triggers an optimization in the client for zero-length response bodies: the connection is closed or returned to the pool before the application closes the response body.
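That optimization only applies when the body is empty. For non-empty responses, the connection goes back to the pool only after the body has been read to EOF and closed. A sketch of that drain-and-close pattern follows; drainAndClose is my name for it, not something from the standard library:

import (
    "io"
    "io/ioutil"
    "net/http"
)

// drainAndClose reads any unread remainder of the body, then closes it,
// so the underlying connection can be reused instead of torn down.
func drainAndClose(resp *http.Response) {
    io.Copy(ioutil.Discard, resp.Body)
    resp.Body.Close()
}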
I hit the same problem on Mac OS X when making POST requests concurrently: after 250 requests the client failed with that same error. This was with go1.8.3.
The fix for my problem was to close both the request body and the response body:
for i := 0; i < 10; i++ {
    res, err := client.Do(req)
    if err == nil {
        globalCounter.Add(1)
        res.Body.Close()
        req.Body.Close()
        break
    } else {
        log.Println("Error:", err, "retrying...", i)
    }
}
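One caveat about this retry loop (my note, not the original poster's): a request body is consumed the first time client.Do reads it, so a retried req would send an empty body. A sketch that rebuilds the request on each attempt, assuming the payload is available as a byte slice; postWithRetry is a hypothetical helper:

import (
    "bytes"
    "log"
    "net/http"
)

// postWithRetry rebuilds the request on every attempt, because a
// request body can only be read once. The caller must still close
// the returned response body.
func postWithRetry(client *http.Client, url string, payload []byte, attempts int) (*http.Response, error) {
    var lastErr error
    for i := 0; i < attempts; i++ {
        req, err := http.NewRequest("POST", url, bytes.NewReader(payload))
        if err != nil {
            return nil, err
        }
        resp, err := client.Do(req)
        if err == nil {
            return resp, nil
        }
        lastErr = err
        log.Println("Error:", err, "retrying...", i)
    }
    return nil, lastErr
}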
I am using go version go1.9.4 linux/amd64. I tried different ways to solve this problem and nothing helped except http://craigwickesser.com/2015/01/golang-http-to-many-open-files/. Along with resp.Body.Close(), I had to set req.Header.Set("Connection", "close"):
package main

import (
    "crypto/tls"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net"
    "net/http"
    "time"
)

func PrettyPrint(v interface{}) (err error) {
    b, err := json.MarshalIndent(v, "", " ")
    if err == nil {
        fmt.Println(string(b))
    }
    return
}

func Json(body []byte, v interface{}) error {
    return json.Unmarshal(body, v)
}

func GetRequests(hostname string, path string) []map[string]interface{} {
    transport := &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        Dial: (&net.Dialer{
            Timeout:   0,
            KeepAlive: 0,
        }).Dial,
        TLSHandshakeTimeout: 10 * time.Second,
    }
    httpClient := &http.Client{Transport: transport}

    req, reqErr := http.NewRequest("GET", "https://"+hostname+path, nil)
    if reqErr != nil {
        // return early: req is nil if NewRequest failed
        fmt.Println("error in making request", reqErr)
        return nil
    }
    req.SetBasicAuth("user", "pwd")
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Connection", "close")

    resp, err := httpClient.Do(req)
    if err != nil {
        // return early: resp is nil on error, so it must not be closed
        fmt.Println("error in calling request", err)
        return nil
    }
    defer resp.Body.Close()

    content, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        fmt.Println("error reading body", err)
        return nil
    }
    // fmt.Println(string(content))

    var json_resp_l []map[string]interface{}
    Json(content, &json_resp_l)
    if len(json_resp_l) == 0 {
        var json_resp map[string]interface{}
        Json(content, &json_resp)
        if json_resp != nil {
            json_resp_l = append(json_resp_l, json_resp)
        }
    }
    PrettyPrint(json_resp_l)
    return json_resp_l
}

func main() {
    GetRequests("server.com", "/path")
}
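If you want every request on a fresh connection without setting the header each time, http.Transport has a DisableKeepAlives field with the same effect as the Connection: close header above. A sketch:

import "net/http"

// A client whose transport closes each connection after its request
// completes, equivalent to sending "Connection: close" every time.
var client = &http.Client{
    Transport: &http.Transport{DisableKeepAlives: true},
}

Note that disabling keep-alives trades connection reuse for predictable descriptor usage; closing response bodies properly is usually the better fix.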
I think the key point is that you should share one http.Transport across your http.Client values: the http.Transport pools connections and reuses them for better performance.
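A sketch of what that sharing looks like; the field values here are arbitrary illustrations, not recommendations:

import (
    "net/http"
    "time"
)

// One Transport for the whole process; it owns the connection pool.
var sharedTransport = &http.Transport{
    MaxIdleConnsPerHost: 20,
    IdleConnTimeout:     90 * time.Second,
}

// Clients are lightweight and can all share the same Transport.
var client = &http.Client{
    Transport: sharedTransport,
    Timeout:   10 * time.Second,
}

As long as every response body is drained and closed, this client keeps reusing a small, stable set of connections instead of leaking one per request.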