When writing more than 4096 bytes to Cmd.StdinPipe in Go, my program hangs on Windows. This does not happen when the same code runs on Linux, or when the write is performed in a goroutine.
In the code below, execution never gets past _, err = in.Write([]byte{'0'}) (the 4,097th byte). Why is this?
Why does it not happen with a goroutine, or on Linux?
*** The Go reference documents Cmd.StdinPipe with a goroutine in its example, and following that has already solved my problem. This question comes purely from curiosity about Go.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("more")
	pype, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	bytes4k := generateBytes(1024 * 4) // Works on Linux, but not on Windows.
	// bytes4k := generateBytes(1024 * 64) // Works on neither Linux nor Windows.
	fmt.Println("bytes generated.")
	// go writeBytes(pype, bytes4k) // Works fine!
	writeBytes(pype, bytes4k) // Blocks here; the write never completes.
	err = cmd.Run()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("finished.")
}

func generateBytes(num int) []byte {
	buf := bytes.NewBuffer(make([]byte, 0, num))
	for i := 0; i < num; i++ {
		buf.WriteByte('0')
	}
	return buf.Bytes()
}

func writeBytes(in io.WriteCloser, data []byte) {
	defer in.Close()
	_, err := in.Write(data)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("written bytes to pipe.")
	_, err = in.Write([]byte{'0'}) // Why does this stop at the 4,097th byte?
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("written 1 byte to pipe.")
}
Writing only blocks if there is no more space left in the pipe. While the pipe buffer on Windows may be 4 KiB, it is much larger on Linux. From pipe(7):
... Since Linux 2.6.11, the pipe capacity is 16 pages (i.e., 65,536 bytes in a system with a page size of 4096 bytes)...
Thus, you will get the same result on Linux as on Windows when writing into a pipe that nobody reads from; you just need to write considerably more data before you hit this situation.
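
In your program nothing ever reads from the pipe, because cmd.Run() is only reached after the write returns, and the write cannot return once the OS pipe buffer is full. Moving the write into a goroutine, as the os/exec documentation does, lets Run() start the child so it can drain the pipe while the goroutine keeps writing. A minimal sketch of that pattern (the 64 KiB size and reuse of the more command are just carried over from your example, not anything special):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("more")
	pipe, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}

	// 64 KiB: more than either OS pipe buffer can hold.
	data := bytes.Repeat([]byte{'0'}, 64*1024)

	// The goroutine may block once the pipe buffer fills up, but main()
	// is free to reach cmd.Run(), which starts the child; the child then
	// reads from the pipe and the blocked write can make progress.
	go func() {
		defer pipe.Close()
		if _, err := pipe.Write(data); err != nil {
			log.Println(err)
		}
	}()

	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("finished.")
}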
This has already been answered; here is a workaround you may want to use if you run into this issue: use a buffered writer from the bufio package. It lets you wrap the pipe in a larger, in-memory buffer whose size you fully control.
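
A rough sketch of what that could look like, with the caveat that the buffer must be sized to hold everything written before the child is started, and that the flush has to happen after cmd.Start() so the child can drain the pipe (the size and the more command are assumptions carried over from the question):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("more")
	pipe, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}

	data := bytes.Repeat([]byte{'0'}, 64*1024)

	// A buffer big enough for all pending data: Write goes to memory,
	// not to the (small) OS pipe buffer, so it cannot block here.
	w := bufio.NewWriterSize(pipe, len(data)+1)
	if _, err := w.Write(data); err != nil {
		log.Fatal(err)
	}

	// Start the child first so it reads from the pipe, then flush.
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	if err := w.Flush(); err != nil {
		log.Fatal(err)
	}
	if err := pipe.Close(); err != nil {
		log.Fatal(err)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("finished.")
}

Note that this keeps all pending data in your own process's memory until the flush, whereas the goroutine approach streams it into the pipe as the child consumes it.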