For a program I'm making, this function is run as a goroutine in a for loop, depending on how many URLs are passed in (no set amount).
func makeRequest(url string, ch chan<- string, errors map[string]error) {
    res, err := http.Get(url)
    if err != nil {
        errors[url] = err
        close(ch)
        return
    }
    defer res.Body.Close()
    body, _ := ioutil.ReadAll(res.Body)
    ch <- string(body)
}
The entire body of the response has to be used, so ioutil.ReadAll seemed like the perfect fit. But with no restriction on the number of URLs that can be passed in, and with ReadAll keeping everything in memory, it's starting to feel less like the golden ticket. I'm fairly new to Go, so if you do decide to answer, some explanation behind your solution would be greatly appreciated!
In order to bound the amount of memory that your application is using, the common approach is to read into a fixed-size buffer, which directly addresses your ioutil.ReadAll problem.
Go's bufio package offers utilities (such as Scanner) that support reading until a delimiter or reading a line from the input, which is highly related to @Howl's question.
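For example, here is a minimal sketch of how your makeRequest could stream the body line by line with bufio.Scanner, assuming the bodies are line-oriented text (closing the channel and synchronizing the errors map are left out here, just like the concurrency questions they raise):

// Sketch only: stream the body line by line instead of buffering all of it.
// Needs "bufio" and "net/http" imported. The scanner holds at most one line
// in memory at a time (up to its default 64 KiB token limit).
func makeRequest(url string, ch chan<- string, errors map[string]error) {
    res, err := http.Get(url)
    if err != nil {
        errors[url] = err
        return
    }
    defer res.Body.Close()

    scanner := bufio.NewScanner(res.Body)
    for scanner.Scan() {
        ch <- scanner.Text() // send each line as it is read
    }
    if err := scanner.Err(); err != nil {
        errors[url] = err
    }
}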
One insight I got as I learned Go is that ReadAll is often inefficient for large readers, and, as in your case, arbitrary input can be very big and end up consuming a lot of memory. When I started out, I used to do JSON parsing like this:
data, err := ioutil.ReadAll(r)
if err != nil {
    return err
}
err = json.Unmarshal(data, &v)
if err != nil {
    return err
}
Then, I learned of a much more efficient way of parsing JSON, which is to simply use the Decoder type.
err := json.NewDecoder(r).Decode(&v)
if err != nil {
    return err
}
Not only is this more concise, it is much more efficient, both memory-wise and time-wise: the decoder doesn't need one huge byte slice holding the entire payload; it reuses a small buffer against the reader's Read method to pull in data and parse it as it goes. This saves a lot of time in allocations and removes stress from the GC.
Now, of course your question has nothing to do with JSON, but this example is useful to illustrate that if you can use Read directly and parse data a chunk at a time, do it. Especially with HTTP requests, parsing is faster than reading/downloading, so this can lead to the parsed data being ready almost the moment the request body finishes arriving.
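For instance, when the body happens to be JSON, a sketch like the following decodes straight from resp.Body while the response is still streaming in (url and Item are placeholders, not anything from your code):

// Sketch: decode JSON directly from the response body as it arrives,
// without buffering the whole payload with ReadAll first.
resp, err := http.Get(url)
if err != nil {
    return err
}
defer resp.Body.Close()

var items []Item // Item is a placeholder for whatever the API returns
if err := json.NewDecoder(resp.Body).Decode(&items); err != nil {
    return err
}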
In your case, you don't seem to be doing any handling of the data yet, so there's not much to suggest that would help you specifically. But the io.Reader and io.Writer interfaces are the Go equivalent of UNIX pipes, and so you can use them in many different places:
Writing data to a file:
f, err := os.Create("file")
if err != nil {
    return err
}
defer f.Close()

// Copy will put all the data from Body into f, without creating a huge buffer in memory
// (it moves a chunk at a time)
io.Copy(f, resp.Body)
Printing everything to stdout:
io.Copy(os.Stdout, resp.Body)
Pipe a response's body to a request's body:
req, err := http.NewRequest("POST", "https://example.com", resp.Body)
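A slightly fuller sketch of that last case (both URLs are placeholders): fetch one resource and stream its body straight into a POST to another endpoint, so the payload never sits fully in memory.

// Sketch: pipe one response's body directly into another request's body.
src, err := http.Get("https://example.com/source")
if err != nil {
    return err
}
defer src.Body.Close()

req, err := http.NewRequest("POST", "https://example.com/sink", src.Body)
if err != nil {
    return err
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
    return err
}
defer resp.Body.Close()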
That is pretty simple to do in Go by reading the response body a chunk at a time.
Here is the client program:
package main

import (
    "fmt"
    "net/http"
)

func main() {
    ch := make(chan string)
    go makeRequest("http://localhost:8080", ch)
    for v := range ch {
        fmt.Println(v)
    }
}

func makeRequest(url string, ch chan<- string) {
    res, err := http.Get(url)
    if err != nil {
        close(ch)
        return
    }
    defer res.Body.Close()
    defer close(ch) // don't forget to close the channel as well

    // Read the body into a small fixed-size buffer, one chunk at a time.
    // Keeping the buffer local gives each goroutine its own, so there is no data race.
    buf := make([]byte, 128)
    for {
        n, err := res.Body.Read(buf)
        if n > 0 {
            ch <- string(buf[:n]) // send the chunk before checking err, so the final bytes at EOF aren't lost
        }
        if err != nil {
            return
        }
    }
}
Here is the server program:
package main

import (
    "net/http"
)

func main() {
    http.HandleFunc("/", hello)
    http.ListenAndServe("localhost:8080", nil)
}

func hello(w http.ResponseWriter, r *http.Request) {
    http.ServeFile(w, r, "movie.mkv")
}