I'm trying to understand whether a client (in a client-server architecture) making a 'blocking' call can completely lose the other side of the connection without being given any sign of the loss. I think this can happen in practice, since most networks have occasional issues. The thing is, I want to reproduce the scenario: the client connects in blocking mode, the server accepts the connection and then disappears (and possibly later re-appears), but not in a way where the server closes the connection, sends a 'NACK', or anything of the sort.
Is there a way to induce this behavior in a local network?
As it turns out, this particular app is written in Go, but I don't know how much that matters.
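For reference, here is a minimal sketch of the kind of client I mean (the address 127.0.0.1:8080 is just a placeholder for my setup). If the server's packets simply stop arriving, I expect the blocking Read to hang forever, since no FIN or RST ever reaches the client:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Connect to the server; the address is a placeholder.
	conn, err := net.Dial("tcp", "127.0.0.1:8080")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	buf := make([]byte, 4096)
	for {
		// Read blocks until data arrives, the peer closes cleanly,
		// or the OS reports an error. If the server silently vanishes
		// (its packets are dropped somewhere), this call can block
		// indefinitely: no close and no error ever show up.
		n, err := conn.Read(buf)
		if err != nil {
			fmt.Println("read error:", err)
			return
		}
		fmt.Printf("got %d bytes\n", n)
	}
}
```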
At the risk of being silly... have you considered removing the cable or shutting down the network interface manually? If this is just for testing what happens when you lose connectivity, that is an option.
Another option is to use your operating system's firewall to drop the specific traffic, for example by adding a rule with iptables on a Linux-based OS. Note that a DROP target discards packets silently, with no RST or ICMP error sent back, which matches your requirement that the server vanish without closing the connection or 'NACK'-ing (REJECT would do the opposite).
# Block incoming traffic on TCP port 80 (web)
$ sudo iptables -I INPUT -p tcp -m tcp --dport 80 -j DROP
# Block outgoing traffic on TCP port 80 (web)
$ sudo iptables -I OUTPUT -p tcp -m tcp --dport 80 -j DROP
# Remove the incoming block (delete the DROP rule)
$ sudo iptables -D INPUT -p tcp -m tcp --dport 80 -j DROP
# Remove the outgoing block (delete the DROP rule)
$ sudo iptables -D OUTPUT -p tcp -m tcp --dport 80 -j DROP
Of course, you can execute these commands from within your program if you need to.
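Since your app is in Go, here is a rough sketch of doing that with os/exec. It assumes the process has the privileges needed to modify the firewall (e.g. runs as root), and it reuses port 80 from the example above; the runIptables helper is just something I made up for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runIptables applies ("-I") or removes ("-D") the DROP rules from
// the example above on both the INPUT and OUTPUT chains. It assumes
// the process is privileged enough to modify the firewall.
func runIptables(action string) error {
	for _, chain := range []string{"INPUT", "OUTPUT"} {
		cmd := exec.Command("iptables", action, chain,
			"-p", "tcp", "-m", "tcp", "--dport", "80", "-j", "DROP")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("iptables %s %s: %v: %s", action, chain, err, out)
		}
	}
	return nil
}

func main() {
	// Simulate the server silently disappearing.
	if err := runIptables("-I"); err != nil {
		panic(err)
	}
	fmt.Println("traffic on port 80 is now silently dropped")

	// ... run whatever test you want against the hung client ...

	// Restore connectivity by deleting the rules again.
	if err := runIptables("-D"); err != nil {
		panic(err)
	}
}
```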