I'm trying to understand Go's behavior when an HTTP response body (an io.Reader) is used as the body of another request. An overview of the program:
req1, err := http.NewRequest("GET", url1, nil)
resp1, err := httpClient.Do(req1)
req2, err := http.NewRequest("PUT", url2, resp1.Body)
httpClient.Do(req2)
It's expected that resp1.Body could be very big (on the order of GBs). I'm wondering: will Go read resp1.Body entirely and store it in RAM (or on disk?) before opening another reader for the second request? Or is Go smart enough to stream the body from resp1 directly into the second request as the data flows?
In other words, I'm trying to understand whether this code will put pressure on memory, or incur heavy I/O, for big files.
The behavior I'd expect is the second scenario: Go streams the data directly without buffering the entire body. In that case, each stream should only need a small fixed-size buffer (32 KB, io.Copy's default buffer size?).