Goroutine leak in aws-sdk-go?

I have the following snippet of code using the current aws-sdk-go release, v1.7.9.

sess, _ := session.NewSession()
s3client := s3.New(sess)
location, err := s3client.GetBucketLocation(&s3.GetBucketLocationInput{Bucket: &bucket})
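
For reference, the before/after goroutine dumps are produced with something along these lines (a minimal sketch; dumpGoroutines is my own helper, not part of the SDK):

import (
    "fmt"
    "os"
    "runtime"
    "runtime/pprof"
)

// dumpGoroutines prints the current goroutine count followed by a full
// stack dump of every goroutine, so runs before and after the call can be compared.
func dumpGoroutines(label string) {
    fmt.Printf("%s: %d goroutines\n", label, runtime.NumGoroutine())
    pprof.Lookup("goroutine").WriteTo(os.Stdout, 2)
}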

I log the goroutine stacks before and after the call to GetBucketLocation(). The total number of goroutines increases by two, with these two extra goroutines running afterwards:

goroutine 45 [IO wait]:
net.runtime_pollWait(0x2029008, 0x72, 0x8)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/runtime/netpoll.go:160 +0x59
net.(*pollDesc).wait(0xc420262610, 0x72, 0xc42003e6f0, 0xc4200121b0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/fd_poll_runtime.go:73 +0x38
net.(*pollDesc).waitRead(0xc420262610, 0xbcb200, 0xc4200121b0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/fd_poll_runtime.go:78 +0x34
net.(*netFD).Read(0xc4202625b0, 0xc42022fc00, 0x400, 0x400, 0x0, 0xbcb200, 0xc4200121b0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/fd_unix.go:243 +0x1a1
net.(*conn).Read(0xc42023c068, 0xc42022fc00, 0x400, 0x400, 0x0, 0x0, 0x0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/net.go:173 +0x70
crypto/tls.(*block).readFromUntil(0xc42017c060, 0x2029248, 0xc42023c068, 0x5, 0xc42023c068, 0xc400000000)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/crypto/tls/conn.go:476 +0x91
crypto/tls.(*Conn).readRecord(0xc42029a000, 0x840917, 0xc42029a108, 0xc420116ea0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/crypto/tls/conn.go:578 +0xc4
crypto/tls.(*Conn).Read(0xc42029a000, 0xc420196000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/crypto/tls/conn.go:1113 +0x116
net/http.(*persistConn).Read(0xc42000ba00, 0xc420196000, 0x1000, 0x1000, 0x23d3b0, 0xc42003eb58, 0x7a8d)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/http/transport.go:1261 +0x154
bufio.(*Reader).fill(0xc42000cba0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/bufio/bufio.go:97 +0x10c
bufio.(*Reader).Peek(0xc42000cba0, 0x1, 0xc42003ebbd, 0x1, 0x0, 0xc42000cc00, 0x0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/bufio/bufio.go:129 +0x62
net/http.(*persistConn).readLoop(0xc42000ba00)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/http/transport.go:1418 +0x1a1
created by net/http.(*Transport).dialConn
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/http/transport.go:1062 +0x4e9

goroutine 46 [select]:
net/http.(*persistConn).writeLoop(0xc42000ba00)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/http/transport.go:1646 +0x3bd
created by net/http.(*Transport).dialConn
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/http/transport.go:1063 +0x50e

These goroutines do not disappear over time, and they continue to accumulate as more calls are made to GetBucketLocation().

Am I doing something wrong (neglecting to close some resource), or is there a goroutine leak in aws-sdk-go?

Note that the same behavior is observed with s3manager.Downloader's Download() method.
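
For what it's worth, the Download() call that shows the same pair of extra goroutines looks roughly like this (a sketch reusing sess and bucket from above; key is a placeholder, and the aws, s3, and s3manager packages come from github.com/aws/aws-sdk-go):

downloader := s3manager.NewDownloader(sess)

// Download the object into an in-memory buffer; the same readLoop/writeLoop
// goroutines appear after this call as well.
buf := aws.NewWriteAtBuffer([]byte{})
_, err := downloader.Download(buf, &s3.GetObjectInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(key),
})
if err != nil {
    // handle error
}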

It turns out I was incorrect in stating that the goroutines do not disappear over time. If I add a 10-second sleep after the call to GetBucketLocation(), before printing the goroutine stacks, the extra goroutines do indeed disappear.

I believe the reason is that Go's net/http package keeps idle connections in a pool so they can be reused; the readLoop and writeLoop goroutines in the traces above belong to those pooled connections. See the following discussion: https://groups.google.com/forum/#!topic/golang-nuts/QckzdZmzlk0

Waiting long enough for the idle connections to time out eventually closes them and stops the goroutines.
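
If you would rather not wait for the idle timeout, one option is to give the session an http.Client whose transport you control, and close the idle connections yourself. A minimal sketch (assumes Go 1.7+ for IdleConnTimeout; HTTPClient is a standard field of aws.Config):

// Dedicated transport so idle connections can be bounded and closed explicitly.
tr := &http.Transport{
    IdleConnTimeout: 30 * time.Second, // close kept-alive connections after 30s idle
}
sess, err := session.NewSession(&aws.Config{
    HTTPClient: &http.Client{Transport: tr},
})
if err != nil {
    // handle error
}
s3client := s3.New(sess)

// ... GetBucketLocation, Download, etc. ...

// Or drop the pooled connections (and their readLoop/writeLoop goroutines) immediately:
tr.CloseIdleConnections()

Setting DisableKeepAlives on the transport would also prevent the goroutines from lingering, at the cost of a new connection and TLS handshake for every request.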