I'm learning Go and am writing a simple web server that uses a channel to limit the number of concurrent requests. The server prints log entries to the console showing that it's receiving and processing the requests; however, the client browser doesn't show any output. I've tried adding a flush of the response writer, but that didn't help.
As a noob, what am I missing? Thanks for any tips/pointers.
Here's the code:
package main

import (
	"fmt"
	"html"
	"net/http"
	"time"
)

// define a type to be used with our request channel
type clientRequest struct {
	r *http.Request
	w http.ResponseWriter
}

const (
	MaxRequests int = 10
)

// the request channel, to limit the number of simultaneous requests being processed
var reqChannel chan *clientRequest

func init() {
	reqChannel = make(chan *clientRequest, MaxRequests)
}

func main() {
	// create the server's handler
	var ServeMux = http.NewServeMux()
	ServeMux.HandleFunc("/", serveHandler)

	// start a pool of request handlers, all reading from the same channel
	for i := 0; i < MaxRequests; i++ {
		go processRequest(i)
	}

	// create the server object
	s := &http.Server{
		Addr:           ":8080",
		Handler:        ServeMux,         // handler to invoke, http.DefaultServeMux if nil
		ReadTimeout:    10 * time.Second, // maximum duration before timing out read of the request
		WriteTimeout:   10 * time.Second, // maximum duration before timing out write of the response
		MaxHeaderBytes: 1 << 20,          // maximum size of request headers, 1048576 bytes
	}

	// start the server
	err := s.ListenAndServe()
	if err != nil {
		fmt.Println("Server failed to start: ", err)
	}
}

func serveHandler(w http.ResponseWriter, r *http.Request) {
	var newRequest = new(clientRequest)
	newRequest.r = r
	newRequest.w = w
	reqChannel <- newRequest // send the new request to the request channel
	fmt.Printf("Sent request to reqChannel for URL: %q\n", html.EscapeString(r.URL.Path))
}

func processRequest(instanceNbr int) {
	fmt.Printf("processRequest started for instance #%d\n", instanceNbr)
	for theRequest := range reqChannel { // receive requests from the channel until it is closed
		fmt.Printf("Got request from reqChannel for URL: %q\n", html.EscapeString(theRequest.r.URL.Path))
		// xxx this isn't working:
		fmt.Fprintf(theRequest.w, "processRequest instance #%d: URL is %q", instanceNbr, html.EscapeString(theRequest.r.URL.Path))
		if f, ok := theRequest.w.(http.Flusher); ok {
			f.Flush()
		}
	}
}
The server closes the response when serveHandler returns.
One fix is to block serveHandler until the request is processed. In the following code, the worker closes done to signal that the request is complete, and the handler waits for done to close.
type clientRequest struct {
	r    *http.Request
	w    http.ResponseWriter
	done chan struct{} // <-- add this line
}

func serveHandler(w http.ResponseWriter, r *http.Request) {
	var newRequest = new(clientRequest)
	newRequest.r = r
	newRequest.w = w
	newRequest.done = make(chan struct{})
	reqChannel <- newRequest // send the new request to the request channel
	fmt.Printf("Sent request to reqChannel for URL: %q\n", html.EscapeString(r.URL.Path))
	<-newRequest.done // wait for worker goroutine to complete
}

func processRequest(instanceNbr int) {
	fmt.Printf("processRequest started for instance #%d\n", instanceNbr)
	for theRequest := range reqChannel { // receive requests from the channel until it is closed
		fmt.Printf("Got request from reqChannel for URL: %q\n", html.EscapeString(theRequest.r.URL.Path))
		fmt.Fprintf(theRequest.w, "processRequest instance #%d: URL is %q", instanceNbr, html.EscapeString(theRequest.r.URL.Path))
		if f, ok := theRequest.w.(http.Flusher); ok {
			f.Flush()
		}
		close(theRequest.done) // signal handler that request is complete
	}
}
If the goal is simply to limit the number of concurrently executing handlers, you can use a buffered channel as a counting semaphore:
var reqChannel = make(chan struct{}, MaxRequests)

func serveHandler(w http.ResponseWriter, r *http.Request) {
	reqChannel <- struct{}{} // acquire a slot; blocks when MaxRequests handlers are active
	// handle the request
	<-reqChannel // release the slot
}
Note that the server already runs each handler in a per-connection goroutine, so the semaphore only adds the limit.
Even simpler is to just write a plain handler: most servers do not need to limit request handler concurrency.
Your answer is in this part of the net/http source:
// HTTP cannot have multiple simultaneous active requests.[*]
// Until the server replies to this request, it can't read another,
// so we might as well run the handler in this goroutine.
// [*] Not strictly true: HTTP pipelining. We could let them all process
// in parallel even if their responses need to be serialized.
serverHandler{c.server}.ServeHTTP(w, w.req)
if c.hijacked() {
	return
}
w.finishRequest()
After ServeHTTP returns, the request is finished.
So you have a few solutions:

- drop your worker pattern and do the job in serveHandler
- wait for the request to be fully processed before serveHandler returns, with something like this (tested on my local machine):
type clientRequest struct {
	r    *http.Request
	w    http.ResponseWriter
	done chan struct{}
}

func serveHandler(w http.ResponseWriter, r *http.Request) {
	var newRequest = new(clientRequest)
	newRequest.r = r
	newRequest.w = w
	newRequest.done = make(chan struct{})
	reqChannel <- newRequest // send the new request to the request channel
	fmt.Printf("Sent request to reqChannel for URL: %q\n", html.EscapeString(r.URL.Path))
	<-newRequest.done // wait for the worker to finish
}

func processRequest(instanceNbr int) {
	fmt.Printf("processRequest started for instance #%d\n", instanceNbr)
	for theRequest := range reqChannel { // receive requests from the channel until it is closed
		fmt.Printf("Got request from reqChannel for URL: %q\n", html.EscapeString(theRequest.r.URL.Path))
		fmt.Fprintf(theRequest.w, "processRequest instance #%d: URL is %q", instanceNbr, html.EscapeString(theRequest.r.URL.Path))
		if f, ok := theRequest.w.(http.Flusher); ok {
			f.Flush()
		}
		theRequest.done <- struct{}{} // signal the handler that the request is complete
	}
}