I am implementing a video streaming server in Go. I am currently using the standard file server, but I am unsure whether it is efficient with large files (4 GB+).
Is there a way to efficiently serve large files in Go?
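For context, this is roughly the setup I have at the moment (the `./videos` directory and the `/videos/` route are just illustrative):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve files (including large video files) straight from disk.
	fs := http.FileServer(http.Dir("./videos"))
	http.Handle("/videos/", http.StripPrefix("/videos/", fs))

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```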
I'm not sure what you mean by "efficient", so I will assume you mean that large files are streamed rather than buffered in memory.

The standard http.FileServer eventually writes data using the serveContent function, which requires an io.ReadSeeker as the content (fortunately, files are exactly that). The content is then copied using io.Copy, which in the general case (though probably not your common case, see below) means copyBuffer, which uses a 32 KB buffer. Therefore, assuming the implementation of http.ResponseWriter does not buffer its input (it doesn't; see also the chunked writer), memory utilization should be constant.
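To make that concrete, here is a minimal sketch of serving a single file through http.ServeContent directly; an *os.File satisfies the io.ReadSeeker requirement, so the file is streamed rather than loaded whole (the path and route are just examples):

```go
package main

import (
	"log"
	"net/http"
	"os"
)

func serveVideo(w http.ResponseWriter, r *http.Request) {
	// Example path only; adjust to your layout.
	f, err := os.Open("videos/big.mp4")
	if err != nil {
		http.Error(w, "not found", http.StatusNotFound)
		return
	}
	defer f.Close()

	fi, err := f.Stat()
	if err != nil {
		http.Error(w, "stat failed", http.StatusInternalServerError)
		return
	}

	// *os.File implements io.ReadSeeker, so ServeContent can stream it
	// without reading the whole file into memory.
	http.ServeContent(w, r, fi.Name(), fi.ModTime(), f)
}

func main() {
	http.HandleFunc("/video", serveVideo)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```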
However, when the writer supports a ReadFrom method, io.Copy will use that instead. Because http.response (the standard implementation of the ResponseWriter interface) implements ReadFrom, it will be used instead of copyBuffer. That implementation in turn will try to use the sendfile system call when possible (as is the case for os.File), which is an even stronger meaning of "efficient": the data doesn't have to pass through the process's memory space at all, which is about as good as it gets.
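As a rough illustration of that dispatch (not the actual net/http source), the check io.Copy performs looks conceptually like this; you can reproduce it in a handler with a type assertion. The route and path are assumptions for the example:

```go
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

// copyToResponse mimics the dispatch io.Copy performs internally:
// if the destination implements io.ReaderFrom, that path is taken and
// the generic 32 KB copy buffer is skipped.
func copyToResponse(w http.ResponseWriter, f *os.File) (int64, error) {
	if rf, ok := w.(io.ReaderFrom); ok {
		// The standard ResponseWriter takes this branch; on supported
		// platforms it can end up in sendfile for an *os.File source.
		return rf.ReadFrom(f)
	}
	// Fallback: generic buffered copy.
	return io.Copy(w, f)
}

func handler(w http.ResponseWriter, r *http.Request) {
	f, err := os.Open("videos/big.mp4") // example path
	if err != nil {
		http.Error(w, "not found", http.StatusNotFound)
		return
	}
	defer f.Close()
	if _, err := copyToResponse(w, f); err != nil {
		log.Println("copy:", err)
	}
}

func main() {
	http.HandleFunc("/raw", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```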
In other words, I think it's reasonable to say that the built-in net/http package already supports efficient streaming of large files.