I am studying the Go source code. In net/http, the server allocates a new conn object for each incoming connection, like this:
// Create new connection from rwc.
func (srv *Server) newConn(rwc net.Conn) (c *conn, err error) {
    c = new(conn)
    c.remoteAddr = rwc.RemoteAddr().String()
    c.server = srv
    c.rwc = rwc
    c.w = rwc
    if debugServerConnections {
        c.rwc = newLoggingConn("server", c.rwc)
    }
    c.sr.r = c.rwc
    c.lr = io.LimitReader(&c.sr, noLimit).(*io.LimitedReader)
    br := newBufioReader(c.lr)
    bw := newBufioWriterSize(checkConnErrorWriter{c}, 4<<10)
    c.buf = bufio.NewReadWriter(br, bw)
    return c, nil
}
Why is new(conn) used here? Could performance be improved by getting the conn from a sync.Pool instead?
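To make the idea concrete, this is roughly what I have in mind. It is only a stand-alone sketch: conn is unexported in net/http, so the stub type below and the names connPool, newConnFromPool and putConn are made up for illustration.

package main

import (
    "fmt"
    "net"
    "sync"
)

// Stub standing in for net/http's unexported conn type; the real one
// has many more fields.
type conn struct {
    remoteAddr string
    rwc        net.Conn
}

// Hypothetical pool of conn objects, used instead of new(conn).
var connPool = sync.Pool{
    New: func() interface{} { return new(conn) },
}

// newConn-style constructor that reuses a pooled conn.
func newConnFromPool(rwc net.Conn) *conn {
    c := connPool.Get().(*conn)
    *c = conn{} // clear any state left over from a previous use
    c.remoteAddr = rwc.RemoteAddr().String()
    c.rwc = rwc
    return c
}

// Would have to be called once the connection is completely finished with.
func putConn(c *conn) {
    connPool.Put(c)
}

func main() {
    srvSide, cliSide := net.Pipe()
    defer cliSide.Close()

    c := newConnFromPool(srvSide)
    fmt.Println("remote address:", c.remoteAddr)

    c.rwc.Close()
    putConn(c)
}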
Does this question stem from measuring the performance of an application, or simply, as the title implies, from reading the source code? The basic rule of performance optimisation is "measure, don't guess" (see, for example, this article).
Apart from performance, there may be a good reason for not using sync.Pool here: you would need to know at what point the connection is no longer used and can safely be put back in the pool.
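To see why that matters, here is a deliberately simplified sketch (a toy pooledConn type of my own, not net/http code): if anything still holds a reference to an object after it has gone back into the pool, a later Get may hand the same memory to a different user.

package main

import (
    "fmt"
    "sync"
)

// Toy type standing in for a pooled connection object.
type pooledConn struct{ id int }

var pool = sync.Pool{
    New: func() interface{} { return new(pooledConn) },
}

func main() {
    c1 := pool.Get().(*pooledConn)
    c1.id = 1
    pool.Put(c1) // returned too early: c1 is still referenced below

    c2 := pool.Get().(*pooledConn) // may hand back the very same object
    c2.id = 2

    // If the pool reused the object, c1 now silently sees c2's data.
    fmt.Println(c1.id, c2.id)
}

In the real server the natural return point would be the end of the connection's serve loop, but features such as hijacked connections (net/http's Hijacker interface) make it harder to say when the conn object is truly finished with.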
That said, your suggestion may have some merit if it is obvious at what point(s) in the code the connection needs to be returned to the pool. Why not measure the performance improvement for a (realistic if small) application that could benefit? If there's a really significant benefit, it might be worth proposing to the Go community. However, the main benefit of sync.Pool is to reduce GC overhead, so the application would have to create many connections for there to be a noticeable benefit.
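As a starting point for such a measurement, a micro-benchmark along these lines (fakeConn is just a stand-in struct of my own, not net/http code) compares plain allocation with pooling; running go test -bench . -benchmem reports allocations per operation for each approach.

package pool_test

import (
    "sync"
    "testing"
)

// Stand-in roughly the size of a server connection object.
type fakeConn struct {
    buf [4096]byte
    n   int
}

// Package-level sink so the compiler cannot optimise the allocation away.
var sink *fakeConn

var connPool = sync.Pool{
    New: func() interface{} { return new(fakeConn) },
}

func BenchmarkNewConn(b *testing.B) {
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        c := new(fakeConn)
        c.n = i
        sink = c
    }
}

func BenchmarkPooledConn(b *testing.B) {
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        c := connPool.Get().(*fakeConn)
        c.n = i
        connPool.Put(c)
    }
}

Whether any difference shows up in a real server depends on how many connections per second it accepts and how much other garbage each request creates.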