I’m still working on Vrui’s second-generation collaboration / tele-presence infrastructure (which is coming along nicely, thankyouverymuch). I also recently started working with another group of researchers who are trying to achieve similar goals, but are having issues with their home-grown network system, which is based on Open Sound Control (OSC). I did some background research on OSC this morning, and ran into several instances of an old pet peeve of mine: the relative performance of UDP vs. TCP. I was actually trying to find out whether OSC communicates over UDP or TCP, and whether there is a way to choose between the two at run-time, but most sources that turned up were about performance (it turns out OSC simply doesn’t do TCP).
Here are some quotes from one article I found: “I was initially hoping to use UDP because latency is important…” “I haven’t been able to fully test using TCP yet, but I’m hopeful that the trade-off in latency won’t be too bad.”
Here are quotes from another article: “UDP has it’s [sic] uses. It’s relatively fast (compared with TCP/IP).” “TCP/IP would be a poor substitute [for UDP], with it’s [sic] latency and error-checking and resend-on-fail…” “[UDP] can be broadcast across an entire network easily.” “Repeat that for multiple players sharing a game, and you’ve got a pretty slow, unresponsive game. Compared to TCP/IP then UDP is fast.” “For UDP’s strengths as a high-volume, high-speed transport layer…” “Sending data via TCP/IP has an ‘overhead’ but at least you know your data has reached its destination.” “… if the response time [over TCP] was as much as a few hundred milliseconds, the end result would be no different!”
First things first: Yes, UDP can send broadcast or multicast IP packets. But that’s not relevant for 99.9% of applications: IP broadcast only works on a single local network segment, and IP multicast does not work on the public Internet. There is currently no mechanism to assign multicast addresses dynamically, and therefore multicast packets that do not use well-known reserved addresses are ignored by Internet routers. So no points there.
In summary, according to these articles (which reflect common wisdom; I do not intend to pick on these specific authors or articles), TCP is slow. Specifically — allegedly — it has high latency (a few hundred milliseconds over UDP, according to the second article), and low bandwidth compared to UDP.
Now let’s put that common wisdom to the test. Fortunately, my collaboration framework has some functionality built in that allows a direct comparison. For example, the base protocol can send echo requests (akin to ICMP ping) at regular intervals, to keep a running estimate of transmission delay between the server and all clients, and to synchronize the server’s and client’s real-time clocks. These ping packets are typically sent over UDP, but since not all clients can always use UDP, the protocol can fall back to using TCP. The echo protocol is simple: the client sends an echo request over TCP or UDP, the server receives the request, and immediately sends an echo reply to the client, over the same channel on which it received the request. This allows us to compare the latency of sending data over TCP vs. UDP.
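The echo exchange is easy to sketch. The following is a minimal stand-in using plain Python sockets over the loopback interface; the port number and payload are made up, and this is not Vrui’s actual protocol implementation:

```python
import socket
import threading
import time

def echo_server(sock):
    """Receive one echo request and immediately reply to the sender,
    over the same channel the request came in on."""
    data, addr = sock.recvfrom(1500)
    sock.sendto(data, addr)

def measure_rtt(port=14321):
    """Send one echo request over UDP and time the round trip."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", port))
    threading.Thread(target=echo_server, args=(server,), daemon=True).start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    t0 = time.monotonic()
    client.sendto(b"echo-request", ("127.0.0.1", port))
    reply, _ = client.recvfrom(1500)
    rtt_ms = (time.monotonic() - t0) * 1000.0
    client.close()
    server.close()
    return reply, rtt_ms

reply, rtt_ms = measure_rtt()
print(f"round trip: {rtt_ms:.3f} ms")
```

The TCP variant is identical except for `SOCK_STREAM` sockets and an established connection; the point of the experiment is that the exchange itself is the same either way.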
I ran the first experiment between my home PC and my server at UC Davis. Here are the results from 200 echo packet round-trips: (I also list timings using ICMP, i.e., the “real” ping protocol, as a baseline):
|                           | TCP | UDP | ICMP |
| Mean round-trip time [ms] |  …  |  …  |  …   |
| Std. deviation [ms]       |  …  |  …  |  …   |
Oookay, that’s not exactly what common wisdom would predict. TCP and UDP have the same latency (the minor numerical difference is safely within the margin of error), and are less than 2% slower than bare-metal ICMP. Let’s try that again, but between a client and server running on the same computer:
|                           | TCP | UDP | ICMP |
| Mean round-trip time [ms] |  …  |  …  |  …   |
| Std. deviation [ms]       |  …  |  …  |  …   |
In this test, UDP is indeed faster than TCP, by a whopping 0.02 ms (again, within the margin of error). Notably, ICMP is now faster than TCP and UDP by a factor of more than four, which is explained by ICMP running entirely in kernel space, while my collaboration infrastructure sends and receives packets from user space.
So what gives? Why does everybody know that TCP sucks for low latency? The issue is failure recovery. TCP is — clearly — just as “fast” (in terms of latency) as UDP as long as no IP packets get lost. So what happens if IP packets do get lost? In UDP’s case, nothing happens. The receiver doesn’t receive the packet, and the loss does not affect latency. In TCP’s case, the error recovery algorithm will notice that a packet was lost, duplicated, or sent out-of-order, and the receiver will request re-transmission of the bad packet (actually, TCP uses positive acknowledgment, but whatever). And because re-sending a bad packet takes at least a full round-trip between receiver and sender, it does indeed add to worst-case latency.
So that’s bad. Under failure conditions, TCP can have higher latency. But what’s the alternative? The sender (usually) does not send packets for funsies, but to communicate. And if packets get dropped or otherwise mangled, communication does not happen. Meaning, if some UDP sender needs to make sure that some bit of data actually arrives at the receiver, it has to implement some mechanism to deal with packet loss, and that will increase worst-case latency, just as it does for TCP. So the bottom line is: UDP has lower worst-case latency than TCP if and only if any individual piece of sent data does not matter. In other words: UDP has lower worst-case latency than TCP only when sending idempotent data, meaning data where it doesn’t matter if not everything arrives, or some data arrives multiple times, or packets arrive out-of-order, as long as a certain fraction of data arrives. Typical examples of this type of data are simple state updates in online games (an example discussed in the second article I linked), or audio packets for real-time voice chat. In most other cases using UDP does not actually help, and may even hurt (the main point of the second article I linked). Even in online games, it is generally only okay if one of a player’s many mouse movement packets is lost, because the next one will update to the correct state (idempotent!), but if a button click packet gets lost and the player’s gun doesn’t shoot, there’ll be hell to pay.
So the common wisdom should actually be: If you want to send a stream of packets at low latency, and each subsequent packet will contain the full end state of your system, i.e., updates are idempotent as opposed to incremental, then use UDP. In all other cases, use TCP. And, generally speaking, don’t attempt a custom implementation of TCP’s failure correction in your UDP code, because it’s highly likely that the TCP developers did it better (and TCP runs in kernel space, to boot).
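To make “idempotent, full-state updates” concrete: the receiver keeps whichever update carries the newest sequence number and simply discards everything else. A hypothetical sketch (the state layout and sequence numbers are made up, not from any actual game protocol):

```python
def apply_update(current, update):
    """Keep whichever full-state update has the newer sequence number.
    Because each packet carries the complete state, lost, duplicated,
    or reordered packets delay convergence but never corrupt it."""
    return update if update[0] > current[0] else current

state = (0, {"x": 0.0})
# Updates arrive out of order, one duplicated, and #2 lost entirely:
for packet in [(1, {"x": 1.0}), (4, {"x": 4.0}), (4, {"x": 4.0}), (3, {"x": 3.0})]:
    state = apply_update(state, packet)
print(state)  # ends at the newest state: (4, {'x': 4.0})
```

Note what this would do to an *incremental* update (“move x by +1.0”): a lost or duplicated packet would leave the receiver permanently wrong, which is exactly why incremental updates belong on TCP.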
That’s for latency. What about sending high-volume data, in other words, what about bandwidth? Time for another experiment, again between my home PC and my UC Davis server. First, I sent a medium amount of data (100MB) over TCP, using a simple sender and receiver. This took 158 seconds, for an average bandwidth of 0.634MB/s. (Yes, I know. I am appropriately embarrassed by my Internet speed.) Next, I sent the same data over UDP, simply blasting a sequence of 74,899 datagrams of 1400 data bytes each over my home PC’s outgoing network interface. That took about 2.3s, for an average bandwidth of 43.28MB/s. Success! But oh wait. How many of those datagrams actually arrived at my server? It turns out that 95.88% of the datagrams I sent were lost en route. Oh well, I guess those data weren’t important anyway. 🙁
Seriously, though, the problem is that UDP, by design, does not do any congestion control. If the rate of sent datagrams at any time overwhelms any of the network links between sender and receiver, datagrams will be discarded silently. So we need to implement some form of traffic shaping ourselves. That’s not easy (there’s a reason TCP is the complex protocol that it is), and as a first approach, I simply calculated the average number of packets that were sent by my simple TCP sender per second, and set up a timer on the sender side to spread datagrams out to the same average rate. This ended up taking 160s (duh!), for a bandwidth of 0.623MB/s. At this rate, only 0.015% of datagrams were lost en route. Clearly not better than TCP, but then that’s expected if set up this way.
Next, I tried pushing the effective bandwidth up, by sending datagrams at increasingly higher rates. At 0.812MB/s on the sender side, 20.93% of datagrams were lost, for an effective bandwidth on the receiver side of 0.642MB/s, or 1.3% more than TCP’s. In a real bulk data protocol, the sender would have had to re-send those missing packets, so this is an upper limit on the bandwidth that could have been achieved. Trying even faster, with a sender-side bandwidth of 1.181MB/s, 44.29% of datagrams were lost, for a receiver-side bandwidth of 0.658MB/s, or 3.8% above TCP. And again, this is a loose upper limit. Any mechanism to re-send those dropped packets would have lowered effective end-to-end bandwidth.
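The receiver-side numbers follow directly from the sender rate and the loss fraction:

```python
def effective_bandwidth(send_rate_mb_s, loss_percent):
    """Receiver-side bandwidth left over after en-route datagram loss."""
    return send_rate_mb_s * (1.0 - loss_percent / 100.0)

print(round(effective_bandwidth(0.812, 20.93), 3))  # 0.642 MB/s
print(round(effective_bandwidth(1.181, 44.29), 3))  # 0.658 MB/s
```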
From these numbers we can extrapolate that in the best case, where no datagrams are lost whatsoever, UDP can maybe transmit bulk data a few percent faster than TCP. (I said maybe, because in a situation with no packet loss, TCP wouldn’t have to spend time re-transmitting data, either). In any real situation, where IP packets are invariably lost, the necessary re-transmission overhead would have brought UDP back to about the same level as TCP. The price we pay for this tiny potential improvement (if it’s even there) is that we have to implement TCP’s failure correction and traffic shaping algorithms ourselves, in user space. Again, that’s generally not a good idea. The bottom line is the same as for latency: if you want to send data that doesn’t all have to arrive at the receiver, like real-time audio chat data, UDP is a good choice. In all other cases, use TCP.
Finally, let’s compare TCP and UDP bandwidth in the local case, where sender and receiver are on the same computer. Here we have a somewhat counter-intuitive result: UDP transmitted 100MB at a bandwidth of 450MB/s, with 0% packet loss as expected, while TCP transmitted at 890MB/s, almost twice as fast. Huh? The answer here is that TCP over the loopback interface never loses or reorders packets, so its failure recovery and traffic shaping machinery has essentially nothing to do, and that my test program was able to pass data to TCP in larger chunks, because it didn’t have to send individual datagrams (concretely, I sent 4096 bytes per system call for TCP, vs. 1400 bytes per system call for UDP). Fewer system calls, less time, higher bandwidth.
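The system-call arithmetic is easy to check, and the datagram count even matches the 74,899 from the bulk-transfer experiment above:

```python
total_bytes = 100 * 1024 * 1024  # the 100MB test payload

# Ceiling division: how many system calls each chunk size needs.
tcp_calls = -(-total_bytes // 4096)  # 4096 bytes per send() over TCP
udp_calls = -(-total_bytes // 1400)  # 1400 bytes per sendto() over UDP

print(tcp_calls, udp_calls)  # 25600 vs. 74899: almost three times the calls
```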
In summary: Is TCP really that slow? Answer: no, not at all. Under very specific circumstances, data transmitted over UDP can have lower worst-case latency, or potentially higher bandwidth, than the same data transmitted over TCP. If your data falls within those circumstances, i.e., if re-sending lost, mangled, or mis-ordered packets would not be helpful, like in idempotent state updates or in data streams that have built-in forward error correction or loss masking like real-time audio chat data, use UDP. In the general case, or if you don’t know for sure, use TCP.