How can I send a separate packet for each write to a connection in Go?

Problem

I want to run a load test with a high number of requests per second. I have written a socket sender and a receiver in Go. The sender sends a lot of packets to port 7357, each one containing the current time in nanoseconds. The receiver listens on port 7357 and parses each message, computing the latency.

The problem is that on the receiving side I get multiple messages in a single conn.Read(). I understand this means that I am in fact sending multiple messages per packet: each conn.Write() does not immediately send a packet, but waits for some time and gets coalesced with the next message (or the next few) before being sent.

Question

How can I make sure that each conn.Write() is sent individually through the socket as a separate packet? Note: I don't want to reinvent TCP, I just want to simulate the load from a number of external entities that send a message each.

Steps Taken

I have searched the documentation but there seems to be no conn.Flush() or similar. I have tried using a buffered writer:

writer := bufio.NewWriter(conn)
...
bytes, err := writer.Write(message)
err = writer.Flush()

No errors, but I still get coalesced messages at the receiving end. I have also tried doing a fake conn.Read() of 0 bytes after every conn.Write(), but it didn't work either. Sending a message terminator (such as a newline) does not seem to make any difference. Finally, Nagle's algorithm is disabled by default in Go, but I have called tcp.SetNoDelay(true) for good measure.
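For reference, SetNoDelay is not part of the net.Conn interface; it is a method on *net.TCPConn, so the call above would typically look something like this (a minimal sketch, assuming conn is the connection returned by net.Dial):

// SetNoDelay lives on *net.TCPConn, not on the net.Conn interface,
// so the connection from net.Dial has to be type-asserted first.
// Go already disables Nagle by default, so this is effectively a no-op.
if tcp, ok := conn.(*net.TCPConn); ok {
  tcp.SetNoDelay(true)
}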

In Node.js I managed to do the trick with a setImmediate() after each socket.write(): setImmediate() waits for all I/O to finish before continuing. How can I do the same in Go so I get separate packets?

Code Snippets

Send:

import (
  "net"
  "strconv"
  "time"
)

func main() {
  conn, _ := net.Dial("tcp", ":7357")
  defer conn.Close()
  buff := make([]byte, 1024)
  for {
    timestamp := strconv.FormatInt(time.Now().UnixNano(), 10)
    conn.Write([]byte(timestamp))
    // Fake read after each write, attempted as a flush workaround.
    conn.Read(buff)
  }
}

Receive:

import (
  "log"
  "net"
  "strconv"
  "time"
)

func main() {
  listen, _ := net.Listen("tcp4", ":7357")
  defer listen.Close()
  for {
    conn, _ := listen.Accept()
    go handler(conn)
  }
}

func handler(conn net.Conn) {
  defer conn.Close()
  buf := make([]byte, 1024)
  for {
    // Read whatever bytes have arrived; n may cover several coalesced messages.
    n, _ := conn.Read(buf)
    data := string(buf[:n])
    timestamp, _ := strconv.ParseInt(data, 10, 64)
    elapsed := time.Now().UnixNano() - timestamp
    log.Printf("Elapsed %v", elapsed)
  }
}

Error handling has been removed for legibility, but it is thoroughly checked in the actual code. The receiver crashes the first time it runs strconv.ParseInt(), with a "value out of range" error, because it receives many coalesced timestamps in a single read.

You can read a predefined number of bytes from the socket on each iteration, which might help, but you really need to define your own protocol to be handled by your application. Without a protocol it is impossible to guarantee that everything will work reliably, because the receiver has no way to tell where a message begins and where it ends.
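As an illustration only (the function names and the newline delimiter below are my own choices, not part of the question's code), a minimal newline-framed version could look like this: the sender terminates every timestamp with '\n', and the receiver uses bufio.Scanner, whose default split function is ScanLines, so each Scan() yields exactly one message no matter how the bytes were grouped into TCP segments.

import (
  "bufio"
  "fmt"
  "log"
  "net"
  "strconv"
  "time"
)

// sendTimestamps terminates every timestamp with '\n' so the receiver can re-frame the stream.
func sendTimestamps(conn net.Conn) {
  for {
    fmt.Fprintf(conn, "%d\n", time.Now().UnixNano())
  }
}

// handleFramed splits the byte stream on '\n'; each Scan() yields exactly one message,
// however the bytes were grouped into segments on the wire.
func handleFramed(conn net.Conn) {
  defer conn.Close()
  scanner := bufio.NewScanner(conn)
  for scanner.Scan() {
    timestamp, err := strconv.ParseInt(scanner.Text(), 10, 64)
    if err != nil {
      log.Printf("bad message: %v", err)
      continue
    }
    log.Printf("Elapsed %d ns", time.Now().UnixNano()-timestamp)
  }
}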

There used to be a rule that before anyone was permitted to write any code that uses TCP, they were required to repeat the following sentence from memory and explain what it means: "TCP is not a message protocol, it is a reliable byte-stream protocol that does not preserve application message boundaries."

Aside from the fact that your suggested solution is simply not possible reliably with TCP, it is not the solution to reducing latency. If the network is overwhelmed, using more packets to send the same data will just make the latency worse.

TCP is a byte stream protocol. The service it provides is a stream of bytes. Period.

It seems that you want a low-latency message protocol that works over TCP. Great. Design one and implement it.

The main trick to getting low latency is to use application-level acknowledgements. The TCP ACK flag will piggy-back onto the acknowledgements, providing low latency.

Do not disable Nagling. That's a hack that's only needed when you can't design a proper protocol that's intended to work with TCP in the first place. It will make latency worse under non-ideal conditions, for the same reason that the solution you suggested, even if it were possible, would be a poor idea.

But you MUST design and implement a message protocol or use an existing one. Your code is expecting TCP, which is not a message protocol, to somehow deliver messages to it. That is just not going to happen, period.

How can I make sure that each conn.Write() is sent individually through the socket as a separate packet? Note: I don't want to reinvent TCP, I just want to simulate the load from a number of external entities that send a message each.

Even if you could, that wouldn't do what you want anyway. Even if they were sent in separate packets, that would not guarantee that read on the other side wouldn't coalesce. If you want to send and receive messages, you need a message protocol which TCP is not.

In Node.js I managed to do the trick with a setImmediate() after each socket.write(): setImmediate() waits for all I/O to finish before continuing. How can I do the same in Go so I get separate packets?

You may have changed it from "happens not to work" to "happened to work when I tried it". But for the reasons I've explained, you can never make this work reliably and you are on a fool's errand.

If you want to send and receive messages, you need to precisely define what a "message" is and write code to send and receive them. There are no reliable shortcuts. TCP is a byte-stream protocol, period.
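For example, a common way to define a message precisely is a length prefix. The sketch below is my own illustration (the 4-byte big-endian prefix is an arbitrary choice): the writer sends the payload length before the payload, and the reader uses io.ReadFull to consume exactly one message at a time.

import (
  "encoding/binary"
  "io"
  "net"
)

// writeMessage sends a 4-byte big-endian length prefix followed by the payload.
func writeMessage(conn net.Conn, payload []byte) error {
  var prefix [4]byte
  binary.BigEndian.PutUint32(prefix[:], uint32(len(payload)))
  if _, err := conn.Write(prefix[:]); err != nil {
    return err
  }
  _, err := conn.Write(payload)
  return err
}

// readMessage reads the length prefix, then exactly that many payload bytes,
// so message boundaries survive any coalescing or splitting done by TCP.
func readMessage(conn net.Conn) ([]byte, error) {
  var prefix [4]byte
  if _, err := io.ReadFull(conn, prefix[:]); err != nil {
    return nil, err
  }
  payload := make([]byte, binary.BigEndian.Uint32(prefix[:]))
  if _, err := io.ReadFull(conn, payload); err != nil {
    return nil, err
  }
  return payload, nil
}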

If you care about latency and throughput, design an optimized message protocol to layer over TCP that optimizes these. Do not disable Nagle as Nagle is required to prevent pathological behavior. It should only be disabled when you cannot change the protocol and are stuck with a protocol that was not designed to layer on top of TCP. Disabling Nagle ties one hand behind your back and causes dramatically worse latency and throughput under poor network conditions by increasing the number of packets required to send the data even when that doesn't make any sense.

You probably want/need application-level acknowledgements. This works nicely with TCP because TCP ACKs will piggyback on the application-level acknowledgements.
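A minimal sketch of that idea (the "ack\n" reply, the newline framing, and the function names are illustrative assumptions, not part of the original code): the receiver answers each message at the application level, and the sender waits for that reply before sending the next message, so the TCP ACK for the data rides along with the application acknowledgement.

import (
  "bufio"
  "fmt"
  "net"
  "time"
)

// handleWithAck acknowledges every newline-framed message at the application level;
// the TCP ACK for the incoming data piggybacks on this reply segment.
func handleWithAck(conn net.Conn) {
  defer conn.Close()
  scanner := bufio.NewScanner(conn)
  for scanner.Scan() {
    // ... process scanner.Text() here ...
    if _, err := conn.Write([]byte("ack\n")); err != nil {
      return
    }
  }
}

// sendWithAck waits for the acknowledgement before sending the next message.
func sendWithAck(conn net.Conn) error {
  reply := bufio.NewReader(conn)
  for {
    if _, err := fmt.Fprintf(conn, "%d\n", time.Now().UnixNano()); err != nil {
      return err
    }
    if _, err := reply.ReadString('\n'); err != nil {
      return err
    }
  }
}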