I was curious whether anyone has analyzed the performance difference between these two paradigms.
Have a listener goroutine (maybe a few) that listens on a socket and spawns a new goroutine to process each message and send it along to wherever it has to go. After the send, the goroutine finishes and is destroyed. Every request creates a goroutine, and that goroutine is destroyed when the request is finished.
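In code, I picture the first approach roughly like this (the port, the newline-delimited framing, and the process function are placeholders, not an actual implementation):

```go
package main

import (
	"bufio"
	"net"
)

// process is a stand-in for whatever work each message needs before it is
// forwarded along.
func process(msg []byte) {
	_ = msg // ... send the data along to wherever it has to go ...
}

func main() {
	ln, err := net.Listen("tcp", ":9000")
	if err != nil {
		panic(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		go func(c net.Conn) {
			defer c.Close()
			scanner := bufio.NewScanner(c)
			for scanner.Scan() {
				msg := append([]byte(nil), scanner.Bytes()...) // copy: the scanner reuses its buffer
				go process(msg)                                // one goroutine per message; it exits when process returns
			}
		}(conn)
	}
}
```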
Have a listener goroutine (maybe a few) that listens on a socket and passes the data to a channel. Many goroutines block on a channel receive and take turns pulling items off the channel and processing them. When done, each goroutine waits on the channel for more work. In this paradigm goroutines are never destroyed: a couple of master goroutines receive socket data and feed the channels, and the other goroutines wait on those channels to process the information.
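And the second approach roughly like this (the pool size, buffer size, and framing are again just placeholders):

```go
package main

import (
	"bufio"
	"net"
)

// worker is a long-lived goroutine: it blocks on the shared channel and is
// never destroyed.
func worker(jobs <-chan []byte) {
	for msg := range jobs {
		_ = msg // ... process msg and send it along ...
	}
}

func main() {
	jobs := make(chan []byte, 1024) // buffered so the listener rarely blocks

	for i := 0; i < 64; i++ { // fixed pool of workers for the life of the program
		go worker(jobs)
	}

	ln, err := net.Listen("tcp", ":9000")
	if err != nil {
		panic(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		go func(c net.Conn) {
			defer c.Close()
			scanner := bufio.NewScanner(c)
			for scanner.Scan() {
				jobs <- append([]byte(nil), scanner.Bytes()...) // hand the message to the pool
			}
		}(conn)
	}
}
```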
The question I have: for a system that receives lots of small messages (0.5-1.5 kB each) at a high rate (high volume, low size), which paradigm is better for speed and throughput? Having a bunch of long-lived goroutines and using channels to spread the work across them? Or creating a goroutine for each request and having that goroutine exit when the request is done?
Even rudimentary theory and conjecture are cool.
Thanks.
Generally speaking, I find that spawning countless goroutines without reusing them is wasteful: even if goroutines are cheap, they aren't free, and they imply a scheduling cost.
Now, both methods have drawbacks under high load: spawning goroutines costs you memory and scheduler time and may grind your program to a halt, while with channels your requests will queue up until the earlier ones have been processed.
My usual approach is a batch-based pipeline (inspired by the excellent Go Concurrency Patterns: Pipelines and cancellation blog post):
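Something along these lines; this is only a sketch, and the batch size, worker count, and helper names are illustrative rather than taken from a real system:

```go
package main

import "sync"

// batch groups incoming messages into fixed-size slices so the workers
// amortize their per-wakeup cost over several small messages.
func batch(done <-chan struct{}, in <-chan []byte, size int) <-chan [][]byte {
	out := make(chan [][]byte)
	go func() {
		defer close(out)
		buf := make([][]byte, 0, size)
		for msg := range in {
			buf = append(buf, msg)
			if len(buf) < size {
				continue
			}
			select {
			case out <- buf:
				buf = make([][]byte, 0, size)
			case <-done:
				return
			}
		}
		if len(buf) > 0 { // flush the partial batch when the input closes
			select {
			case out <- buf:
			case <-done:
			}
		}
	}()
	return out
}

// startWorkers launches a fixed pool that drains batches until the input
// is closed or the pipeline is cancelled via done.
func startWorkers(done <-chan struct{}, batches <-chan [][]byte, n int) *sync.WaitGroup {
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			for b := range batches {
				for _, msg := range b {
					_ = msg // ... process and forward each message ...
				}
				select {
				case <-done: // stop early if the pipeline is cancelled
					return
				default:
				}
			}
		}()
	}
	return &wg
}

func main() {
	done := make(chan struct{})
	defer close(done)

	in := make(chan []byte, 1024)         // fed by the listener goroutine(s)
	batches := batch(done, in, 32)        // group small messages into batches of 32
	wg := startWorkers(done, batches, 16) // 16 long-lived workers

	// ... listener: read the socket, do `in <- msg`, then close(in) on shutdown ...
	close(in)
	wg.Wait()
}
```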
That way you can control the flow and behavior of your pipeline precisely, while keeping the advantages of multiple workers. You can easily implement an overflow mechanism by dropping incoming requests once the buffer grows past a size limit, spawn or kill workers to accommodate the load, and so on.
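For instance, the "drop on overflow" part can be a non-blocking send into the buffered channel (a sketch; the function and channel names are made up):

```go
// enqueue tries a non-blocking send: if the buffered jobs channel is already
// full, the message is dropped instead of blocking the listener. The caller
// can count or log the drops.
func enqueue(jobs chan<- []byte, msg []byte) bool {
	select {
	case jobs <- msg: // room in the buffer: a worker will pick it up
		return true
	default: // buffer full: shed load
		return false
	}
}
```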