I'm trying to upload a file without loading it into memory, as shown below. Services like S3 need a Content-Length set in such cases. Is there a Go built-in to do that, or do I have to compute it myself?
package main

import (
	"io"
	"mime/multipart"
	"net/http"
	"os"
	"path/filepath"
)

func newFileUploadRequest(uri string, params map[string]string, paramName, path string) (*http.Request, chan error, error) {
	file, err := os.Open(path)
	if err != nil {
		return nil, nil, err
	}
	bodyReader, bodyWriter := io.Pipe()
	multiWriter := multipart.NewWriter(bodyWriter)
	errChan := make(chan error, 1)
	go func() {
		defer file.Close()
		part, err := multiWriter.CreateFormFile(paramName, filepath.Base(path))
		if err != nil {
			bodyWriter.CloseWithError(err) // propagate the failure to the reading side
			errChan <- err
			return
		}
		if _, err := io.Copy(part, file); err != nil {
			bodyWriter.CloseWithError(err)
			errChan <- err
			return
		}
		for k, v := range params {
			if err := multiWriter.WriteField(k, v); err != nil {
				bodyWriter.CloseWithError(err)
				errChan <- err
				return
			}
		}
		err = multiWriter.Close() // writes the closing boundary
		bodyWriter.Close()
		errChan <- err
	}()
	req, err := http.NewRequest("POST", uri, bodyReader)
	if err != nil {
		return nil, nil, err
	}
	req.Header.Set("Content-Type", multiWriter.FormDataContentType())
	return req, errChan, nil
}
Any help would be much appreciated.
In the docs for http.Request.Write it states:

If Body is present, Content-Length is <= 0 and TransferEncoding hasn't been set to "identity", Write adds "Transfer-Encoding: chunked" to the header

This means that if you don't set a Content-Length, http.Request.Write will use chunked transfer encoding. Chunked encoding was added in HTTP/1.1 precisely to remove the need to calculate a Content-Length for streaming transfers like this one.
So normally Go programs use chunked encoding and there is no need to set a Content-Length; any modern HTTP stack supports chunked transfer encoding.
However, S3 does not support chunked transfer encoding, so I think you'll have to calculate the Content-Length yourself.