O_NONBLOCK SOCK_STREAM write limited to 8192 bytes

I'm writing from a C program into a SOCK_STREAM Unix domain socket that a Go program is listening on via net.Listen("unix", sockname). When I set the socket to O_NONBLOCK using fcntl(), the C program writes only 8192 bytes in the first write. After the short write I poll and write the remaining data, but the data read by my server is not valid in this case.

When I do not use O_NONBLOCK, the whole 8762 bytes are written in a single write and everything works as expected.

C Client Socket Connection

    if ( (fd = socket(AF_UNIX, SOCK_STREAM, 0)) == -1) {
        return;
    }
    int flags = fcntl(fd, F_GETFL, 0);
    flags = flags | O_NONBLOCK;
    fcntl(fd, F_SETFL, flags);
   ...
    if (connect(fd, (struct sockaddr*)&addr, sizeof(addr)) == -1) {
        return;
    }

C Client Writing

        while (written < to_write) {
            int result;
            if ((result = write(fd, &buffer[written], to_write - written)) < 0) {
                if (errno == EINTR) {
                    continue;
                }
                if (errno == EAGAIN) {
                    struct pollfd pfd = { .fd = fd, .events = POLLOUT };
                    poll_count++;
                    if (poll_count > 3) {
                        goto end;
                    }
                    if ((poll(&pfd, 1, -1) <= 0) && (errno != EAGAIN)) {
                        goto end;
                    }
                    continue;
                }
    end:
                return written ? written : result;
            }
            written += result;
            buffer += result;
        }

Go Server Reading

    buf := make([]byte, 0, count)
    var tmpsize int32
    for {
        if count <= 0 {
            break
        }

        if count > 100 {
            tmpsize = 100
        } else {
            tmpsize = count
        }

        tmp := make([]byte, tmpsize)
        nr, err = conn.Read(tmp)
        if err != nil {
            return
        }

        buf = append(buf, tmp[:nr]...)
        count = count - int32(nr)
    }

What am I missing here? I'm running it on OSX. I also tried setting SO_SNDBUF in the Go server to 10000, but it does not help:

    err = syscall.SetsockoptInt(int(fd.Fd()), syscall.SOL_SOCKET, syscall.SO_SNDBUF, 10000)
    if err != nil {
        return
    }

What I would do is read the data straight into a bytes.Buffer similar to the answer listed here:

https://stackoverflow.com/a/24343240/8092543

https://golang.org/pkg/io/#Copy

The beauty of io.Copy is that it takes an io.Writer and an io.Reader, which are satisfied nicely by a *bytes.Buffer (io.Writer) and your conn (io.Reader). Replace your entire read loop with something like:

    var buf bytes.Buffer
    // io.Copy reads from conn until EOF (the peer closing its end of the
    // connection) and appends everything it reads to buf.
    if _, err := io.Copy(&buf, conn); err != nil {
        return nil, fmt.Errorf("error during conn read: %v", err)
    }

    return buf.Bytes(), nil

This is normal behaviour. A non-blocking write can only transfer as much data as currently fits into the socket send buffer. If you get a short count, you need to loop, or poll/select for writability, and then retry with the remaining data. A blocking write, on the other hand, transfers the entire buffer, blocking until it has all been accepted.
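For reference, here is a minimal sketch of such a retry loop in C; the function name and parameters are illustrative, not taken from your code. The key points are to advance the offset only by what write() actually accepted and to poll for POLLOUT before retrying:

    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>

    /* Sketch: write all of buf on a non-blocking socket, waiting for
     * writability after every short or failed write. Returns 0 on
     * success, -1 on error. Assumes fd is connected and O_NONBLOCK is set. */
    static int write_all_nonblocking(int fd, const char *buf, size_t len)
    {
        size_t written = 0;

        while (written < len) {
            ssize_t n = write(fd, buf + written, len - written);
            if (n > 0) {
                written += (size_t)n;      /* advance only by what was accepted */
                continue;
            }
            if (n < 0 && errno == EINTR)
                continue;                  /* interrupted, just retry */
            if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                /* send buffer is full: wait until the socket is writable */
                struct pollfd pfd = { .fd = fd, .events = POLLOUT };
                if (poll(&pfd, 1, -1) < 0 && errno != EINTR)
                    return -1;
                continue;
            }
            return -1;                     /* real error */
        }
        return 0;
    }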

Setting the send buffer size may not do anything on Unix domain sockets.
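If in doubt, you can query the value the kernel actually applied after setting it; it may clamp or ignore the request, particularly for AF_UNIX sockets. A small C sketch, checking the client's socket (function name and fd are illustrative):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Sketch: print the send-buffer size the kernel actually applied. */
    static void print_sndbuf(int fd)
    {
        int size = 0;
        socklen_t len = sizeof(size);

        if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, &len) == 0)
            printf("effective SO_SNDBUF: %d bytes\n", size);
        else
            perror("getsockopt(SO_SNDBUF)");
    }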