Let's say I'm using a fictional package in my webserver called github.com/john/jupiterDb
that I'm using to connect to my database hosted on Jupiter.
When someone makes a request to my server, I want to store the body of the request in my Jupiter DB. So I have some code like this:
http.HandleFunc("/SomeEvent", registerSomeEvent)
And in my registerSomeEvent
handler I want to do this:
func registerSomeEvent(w http.ResponseWriter, r *http.Request) {
	jupiterDb.Insert(r.Body) // Takes a while!
	fmt.Fprint(w, "Thanks!")
}
Now obviously I don't want to wait for the round trip to Jupiter to thank my user. So the obvious Go thing to do is to wrap that Insert
call in a goroutine.
But oftentimes creators of packages that do lengthy IO will use goroutines inside the package to ensure these functions return immediately and are non-blocking. Does this mean I need to check the source of every package I use to make sure I'm using concurrency correctly?
Should I wrap it in an extra goroutine anyway, or should I trust that the maintainer has already done the work for me? This feels to me like I have less ability to treat a package as a black box, or am I missing something?
I would just read the body and send it to a channel. A group of worker goroutines reads from the channel and sends each payload to Jupiter.
var reqPayloadChannel = make(chan string, 100)
func jupiterWorker() {
	for payload := range reqPayloadChannel {
		jupiterDb.Insert(payload) // Takes a while!
	}
}
func registerSomeEvent(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body) // r.Body has no ReadAll method; use io.ReadAll
	if err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	reqPayloadChannel <- string(body)
	fmt.Fprint(w, "Thanks!")
}
Next steps are to set up the worker group and to handle the case where the Jupiter channel is full because the workers are draining it too slowly.
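One way to sketch those next steps (the insert stub and function names are mine, since jupiterDb is fictional): a small worker group, plus a non-blocking send that reports when the buffer is full so the handler can shed load instead of stalling the client.

```go
package main

import (
	"fmt"
	"sync"
)

// insert stands in for the fictional jupiterDb.Insert.
func insert(payload string) {
	_ = payload // the slow round trip to Jupiter would happen here
}

var reqPayloadChannel = make(chan string, 100)

// startWorkers launches n goroutines that drain the channel; closing
// reqPayloadChannel and waiting on the returned WaitGroup shuts them down.
func startWorkers(n int) *sync.WaitGroup {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for payload := range reqPayloadChannel {
				insert(payload)
			}
		}()
	}
	return &wg
}

// enqueue does a non-blocking send: if the buffer is full it returns
// false, letting the caller decide how to shed the load.
func enqueue(payload string) bool {
	select {
	case reqPayloadChannel <- payload:
		return true
	default:
		return false
	}
}

func main() {
	startWorkers(4)
	fmt.Println(enqueue("event-1")) // true: the buffer has room
}
```

Whether to drop the payload, return a 503, or block anyway when enqueue reports a full buffer is a policy decision; the select/default pattern just makes the choice explicit.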