> Eventually, the server will block on its send operations
Frankly, the above is the real bug here. There's no reason a server should block just because some client isn't receiving data fast enough. The server should be using non-blocking, asynchronous I/O and should otherwise continue to work normally, even if a client isn't reading fast enough.
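To make that concrete, here is a minimal sketch of the non-blocking approach using Python's `selectors` module (the `MAX_BACKLOG` limit and the drop-the-slow-client policy are assumptions for illustration, not the only reasonable choices): the server never blocks in `send()`; whatever the kernel won't accept is queued per client, and a client whose queue grows past a limit is disconnected instead of stalling everyone else.

```python
# Sketch: non-blocking sends with a per-client backlog.
import selectors
import socket

MAX_BACKLOG = 1 << 20  # hypothetical per-client limit: 1 MiB of unsent data

sel = selectors.DefaultSelector()
backlogs = {}  # socket -> bytearray of data the kernel hasn't accepted yet

def send_to_client(conn, data):
    """Queue data and ask to be told when the socket is writable."""
    backlogs[conn] += data
    sel.modify(conn, selectors.EVENT_READ | selectors.EVENT_WRITE)

def on_writable(conn):
    """Flush as much of the backlog as the kernel will take, without blocking."""
    buf = backlogs[conn]
    try:
        sent = conn.send(buf)  # non-blocking: sends only what fits
    except BlockingIOError:
        return
    del buf[:sent]
    if len(buf) > MAX_BACKLOG:
        # Client is too slow; drop it rather than let it stall the server.
        sel.unregister(conn)
        conn.close()
        del backlogs[conn]
    elif not buf:
        sel.modify(conn, selectors.EVENT_READ)  # nothing left to flush
```

The socket is assumed to have been put in non-blocking mode (`conn.setblocking(False)`) when accepted; the event loop calling `on_writable` is omitted.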
Now, even if you address the blocking issue, you may still have the problem of the client not receiving data quickly enough. That is, as you mentioned, you want the client not to receive data it can't process. You have at least a few choices here:
- Require the client to actively acknowledge each processed message before the server sends another one.
Pro: this is an immediate solution, in the sense that the server will never exceed the transmission rate that the client can handle.
Con: this would add bandwidth overhead and latency to your network protocol. A client with a high ping time will suffer, even if it's otherwise able to receive data quickly.
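A minimal stop-and-wait sketch of this acknowledgement scheme (the framing, the helper names, and the one-byte `b"A"` ack are assumptions for illustration, not any standard protocol):

```python
# Sketch: server sends one message, then waits for the client's ack.
import socket

ACK = b"A"

def send_with_ack(conn, messages):
    """Send each message only after the previous one was acknowledged."""
    for msg in messages:
        conn.sendall(len(msg).to_bytes(4, "big") + msg)
        if conn.recv(1) != ACK:  # blocks until the client confirms
            raise ConnectionError("client failed to acknowledge")

def receive_and_ack(conn, count):
    """Read framed messages, acknowledging each as it is processed."""
    out = []
    for _ in range(count):
        length = int.from_bytes(conn.recv(4), "big")
        out.append(conn.recv(length))
        conn.sendall(ACK)  # tell the server we're ready for more
    return out
```

The round trip before every message is exactly the latency cost described above: a client on a 200 ms link tops out at five messages per second regardless of its bandwidth.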
- Keep track of the data rate the client appears to be able to handle, and pace the transmission of messages so that the server does not exceed this rate.
Pro: this solution keeps the transmission rate as high as the client appears capable of handling, at all times.
Con: at least initially, and perhaps intermittently as the client's own capacity varies, the server may temporarily exceed the client's ability to keep up.
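A minimal sketch of this pacing idea (the initial rate and the blended update rule are assumptions; any estimator of the client's sustainable rate would do):

```python
# Sketch: pace sends to an estimated client data rate.
import time

class PacedSender:
    def __init__(self, initial_rate=1_000_000):  # bytes/sec, assumed start
        self.rate = initial_rate
        self.next_send = time.monotonic()

    def delay_for(self, nbytes):
        """Seconds to wait before sending nbytes at the current rate."""
        now = time.monotonic()
        wait = max(0.0, self.next_send - now)
        self.next_send = max(now, self.next_send) + nbytes / self.rate
        return wait

    def observe(self, client_rate):
        """Blend in new evidence of what the client can actually handle."""
        self.rate = 0.8 * self.rate + 0.2 * client_rate
```

The server would sleep for `delay_for(len(msg))` before each send and call `observe()` whenever the client reports (or acks reveal) its actual throughput.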
- Use UDP, which will allow the network transport layer to discard datagrams that the client isn't processing quickly enough.
Pro: this solution delegates the whole problem to the network transport layer, leaving you to worry about the real details of your server and client.
Con: UDP is inherently unreliable. In addition to dealing with dropped datagrams (which in your case is a benefit), you also must be prepared to handle datagrams received out of order, and individual datagrams received more than once.
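A minimal sketch of the receiver-side bookkeeping UDP forces on you (the 4-byte sequence header and the drop-anything-older policy are assumptions; that policy suits state updates where only the latest value matters, not streams where every message counts):

```python
# Sketch: sequence numbers let a UDP receiver discard duplicates
# and out-of-date datagrams.
import struct

HEADER = struct.Struct("!I")  # 4-byte big-endian sequence number

def make_datagram(seq, payload):
    return HEADER.pack(seq) + payload

class UdpReceiver:
    def __init__(self):
        self.highest_seen = -1

    def accept(self, datagram):
        """Return the payload, or None for a duplicate/stale datagram."""
        (seq,) = HEADER.unpack_from(datagram)
        if seq <= self.highest_seen:
            return None  # already have something at least this new
        self.highest_seen = seq
        return datagram[HEADER.size:]
```

Note what this deliberately does not do: retransmit. If every message must arrive, you end up rebuilding acknowledgements and retries on top of UDP, at which point TCP plus one of the options above is usually simpler.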
That's about as specific an answer as is possible, given the broadly stated question.