I am trying something similar to the below pattern:
package main

import "fmt"

func sendFunc(n int, c chan int) {
    for i := 0; i < n; i++ {
        c <- i
        fmt.Println("Pushed")
    }
    close(c)
}

func main() {
    c := make(chan int, 10)
    go sendFunc(10, c)

    // Receive from the channel.
    for i := range c {
        fmt.Println(i)
    }
}
The output appears to be synchronous, like this:
Pushed
Pushed
Pushed
Pushed
Pushed
Pushed
Pushed
Pushed
Pushed
Pushed
0
1
2
3
4
5
6
7
8
9
Does that mean ranging over the channel actually receives the data synchronously?
But if I change the buffered channel to an unbuffered one:
c := make(chan int)
the result seems to be asynchronous (interleaved):
Pushed
0
1
Pushed
Pushed
2
3
Pushed
Pushed
4
5
Pushed
Pushed
6
7
Pushed
Pushed
8
9
Pushed
Which one is better? (And what is the performance difference?)
Updated
So my scenario is: in the receiving loop, a request is made every time a new value arrives from sendFunc(). The output shows that the scheduler does not start the receiver until all the data has been sent to the channel (given a buffered channel with enough buffer space). I therefore ended up using an unbuffered channel, so that the time spent waiting for the response and the time spent producing data in sendFunc() overlap, which gives better performance.
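A minimal sketch of that overlap, where the 10 ms sleeps are stand-ins for the real per-item production cost and request latency (produce and the timings are illustrative, not the actual code):

```go
package main

import (
	"fmt"
	"time"
)

// produce plays the role of sendFunc: creating each value takes some time.
func produce(n int, c chan<- int) {
	for i := 0; i < n; i++ {
		time.Sleep(10 * time.Millisecond) // simulated production cost per item
		c <- i
	}
	close(c)
}

func main() {
	start := time.Now()

	c := make(chan int) // unbuffered: the sender blocks until the receiver is ready
	go produce(5, c)

	for i := range c {
		time.Sleep(10 * time.Millisecond) // simulated request latency per item
		fmt.Println("handled", i)
	}

	// Because item i+1 is produced while item i is being handled,
	// the total is roughly n+1 steps instead of the 2n steps a
	// strictly sequential version would take.
	fmt.Println("elapsed:", time.Since(start))
}
```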
As Cerise Limón already stated, this is an effect of how the runtime schedules goroutines. Basically, a goroutine keeps running as long as it doesn't block or return. So the call go sendFunc(10, c) will execute until it blocks or returns. If you put a <-time.After(1) in sendFunc, the function suddenly blocks and the scheduler gets a chance to run another goroutine.
here is a little example on the playground: https://play.golang.org/p/99vJniOf3_
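Along the same lines, here is a small variation of the original program where only the <-time.After line is added; it forces the sender to block on each iteration, so even the buffered version's output interleaves:

```go
package main

import (
	"fmt"
	"time"
)

func sendFunc(n int, c chan int) {
	for i := 0; i < n; i++ {
		<-time.After(1) // blocks for 1ns: enough for the scheduler to run the receiver
		c <- i
		fmt.Println("Pushed")
	}
	close(c)
}

func main() {
	c := make(chan int, 10) // buffered, yet the output now interleaves
	go sendFunc(10, c)
	for i := range c {
		fmt.Println(i)
	}
}
```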
The question which one is better is hard to answer. Disclaimer: I am by far not an expert on this, but I guess it is a tradeoff. While a smaller buffer reduces the time a single message spends in the buffer, it triggers more re-scheduling of goroutines, which generally costs some time.
A larger buffer, on the other hand, can increase a message's latency through the buffer but improve throughput. It also lets you pre-produce a lot of messages, which can be useful if you have some fixed overhead that is the same for one or many messages (e.g. requesting a single line of input vs. requesting multiple lines of input).
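As an illustration of exploiting that fixed overhead, here is a hypothetical drainBatch helper (not from the original question) that receives one value and then greedily takes whatever else is already buffered, so a per-batch cost is paid once per batch instead of once per message:

```go
package main

import "fmt"

// drainBatch blocks for one value, then collects everything already
// sitting in the channel's buffer without blocking again. The second
// return value is false once the channel is closed and drained.
func drainBatch(c <-chan int) ([]int, bool) {
	v, ok := <-c
	if !ok {
		return nil, false
	}
	batch := []int{v}
	for {
		select {
		case v, ok := <-c:
			if !ok {
				return batch, true
			}
			batch = append(batch, v)
		default: // buffer empty right now: stop collecting
			return batch, true
		}
	}
}

func main() {
	c := make(chan int, 8)
	for i := 0; i < 5; i++ {
		c <- i
	}
	close(c)

	for {
		batch, ok := drainBatch(c)
		if !ok {
			break
		}
		fmt.Println("processing batch:", batch) // prints: processing batch: [0 1 2 3 4]
	}
}
```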
Here is an explanation of the scheduler: https://rakyll.org/scheduler/.