Here's the scenario: I've got a real-time(ish) GUI display that shows the current state of some sensors on a server across the network, and I need the display to be as low-latency as possible, so my server is sending high-frequency updates to the client computer via UDP packets.
Occasionally (because reasons) the incoming-UDP-packet buffer of the client's UDP socket (sized via SO_RCVBUF) might fill up, leaving no room for the next incoming packet. In that case, the OS simply drops the incoming packet and life goes on.
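(For reference, here is a minimal sketch, using plain POSIX sockets with error handling omitted, of the buffer in question: it is sized via the SO_RCVBUF option, and once it is full the kernel drops further incoming datagrams.)

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int make_udp_socket(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    /* Ask the kernel for a larger receive buffer; it may clamp the value. */
    int requested = 1 << 20;   /* 1 MiB, arbitrary example value */
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));

    /* Read back the size actually granted. */
    int granted = 0;
    socklen_t len = sizeof(granted);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len);
    printf("receive buffer: %d bytes\n", granted);

    return fd;
}
```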
... however, that behavior isn't ideal for a system trying to minimize latency, because it discards the more valuable "fresh" packet (the one holding the most up-to-date sensor values) in order to preserve the now-redundant (and therefore useless) "old" data that was already sitting in the socket's receive buffer.
So the question is: is there any way to tell a UDP socket that, when it must drop a packet because its receive buffer is full, it should drop the oldest packet(s) in the buffer as necessary to make room for the newly received packet, rather than dropping the new packet and keeping the older ones? (Note that since UDP is allowed to drop any packet at any time for any reason, there's no reason it couldn't do that.)
(I realize I can work around the issue by e.g. dedicating a high-priority thread to reading from the UDP socket, thereby making sure that the buffer never fills up in the first place, but that seems like a rather elaborate and racy work-around for a problem that could be avoided with a simple tweak to the socket's built-in buffering logic)
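(For what it's worth, a sketch of that work-around might look like the following; the names latest_packet, latest_lock, and PACKET_MAX are hypothetical. A dedicated reader thread drains every queued datagram and keeps only the newest one, so the consumer always sees the freshest sensor values even if it falls behind.)

```c
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>

#define PACKET_MAX 1500          /* assumed maximum datagram size */

static char latest_packet[PACKET_MAX];   /* hypothetical shared "latest value" slot */
static ssize_t latest_len = 0;
static pthread_mutex_t latest_lock = PTHREAD_MUTEX_INITIALIZER;

void *reader_thread(void *arg)
{
    int fd = *(int *)arg;
    char buf[PACKET_MAX];

    for (;;) {
        /* Block until at least one datagram is available... */
        ssize_t n = recv(fd, buf, sizeof(buf), 0);

        /* ...then drain anything else that queued up meanwhile, so only the
         * newest datagram survives. */
        for (;;) {
            ssize_t more = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
            if (more < 0)
                break;            /* EWOULDBLOCK: queue is empty */
            n = more;
        }

        if (n > 0) {
            pthread_mutex_lock(&latest_lock);
            memcpy(latest_packet, buf, (size_t)n);
            latest_len = n;
            pthread_mutex_unlock(&latest_lock);
        }
    }
    return NULL;
}
```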
question from:
https://stackoverflow.com/questions/65835139/how-to-get-a-udp-socket-to-drop-the-oldest-queued-packet-rather-than-the-new-inc