I wonder if OOB data is ever going to be fixed to allow more than one byte.
I can't quite comprehend how the implementation got so screwed up in the first place. A two-byte field in every header, supposed to be a pointer designating a range of data, and OBVIOUSLY when it's set it should transmit exactly one byte of information.
I thought that SSH used it to pass ^C up faster than the rest of the stream so that you can quickly kill programs that flood the terminal. I've never verified this though.
But the ^C should be encrypted. If SSH set the URG flag on a packet containing an encrypted ^C, it would be leaking plaintext - even if just one byte :)
I don't know how it actually works, but I can't see how this would be needed in that situation. The ^C signal goes from client to server, while the program is flooding in the opposite direction. So the flood of data shouldn't delay the ^C transmission.
The real problem is not programs flooding the terminal, but rather that if the foreground program isn't accepting input (perfectly normal for many programs), then by design the socket buffer fills up, TCP flow control kicks in, and the client machine stops sending bytes to the server at all. For a ctrl+C to get around the backed-up buffer, it has to bypass the regular queue.
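For a sense of what that bypass looks like at the socket level, here's a minimal C sketch (the function name is mine and error handling is omitted; it just shows the call an application would make):

    #include <sys/socket.h>

    /* Minimal sketch (assumed names, no error handling): pushing an
       interrupt byte past a backed-up stream with TCP urgent data.
       `sock` is a connected TCP socket. */
    void send_interrupt(int sock) {
        char intr = 0x03;               /* ^C */
        /* MSG_OOB sets the URG flag; despite the "pointer to a range"
           the header field suggests, in practice exactly one urgent
           byte is deliverable, and the peer reads it with
           recv(..., MSG_OOB) without draining the backlog first. */
        send(sock, &intr, 1, MSG_OOB);
    }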
While I agree it is likely to be deprecated (or already is), I think being able to signal that side data transmitted later should be processed earlier has value in situations where processing delay matters more than transmission delay. You could do this within a single stream, but the receiver would have to read ahead and check for flags. Had early implementations made this consistently usable, it might have seen more use.
It's necessary to share file handles between processes using TCP over UNIX Domain Sockets. I'm sure some other mechanism could be devised for that though.
Minor nit: While Unix domain sockets (AF_UNIX) can send open file descriptors between processes, using the msg_control field of sendmsg()'s struct msghdr ("ancillary data"), that's not the same as TCP's OOB data; in fact TCP is not involved at all in Unix sockets, even when used as SOCK_STREAM. sendmsg() does have a MSG_OOB flag that triggers TCP's OOB mechanism, but (as far as I can tell) that doesn't use the msg_control field at all, since the same flag is available for plain send() and sendto(); the OOB byte is sent through the normal msg_iov field.
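For the curious, here's a minimal sketch of the fd-passing side (illustrative names, no error handling) showing how the descriptor rides in msg_control rather than in the normal data path:

    #include <string.h>
    #include <sys/socket.h>

    /* Sketch: passing an open file descriptor over an AF_UNIX socket
       via sendmsg()'s msg_control ("ancillary data"). */
    int send_fd(int unix_sock, int fd_to_pass) {
        char dummy = '*';                       /* must carry >= 1 byte of real data */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        union {                                 /* ensures cmsg alignment */
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;

        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;          /* "pass open file descriptors" */
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

        return sendmsg(unix_sock, &msg, 0);     /* -1 on error */
    }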
Some places where msg_control is used with TCP/IP sockets (at least on Linux) include kernel timestamping of received messages. That is "out of band" data too, but it is out of band relative to the kernel, not to the network peer.
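A minimal sketch of that mechanism (Linux-specific; illustrative names, no error handling, and shown with a datagram socket, where SO_TIMESTAMP is most commonly used):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    /* Sketch: asking the kernel to attach a receive timestamp to each
       message as ancillary data, then reading it back via recvmsg(). */
    void recv_with_timestamp(int sock) {
        int on = 1;
        setsockopt(sock, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));

        char buf[1024];
        struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
        union {                                 /* ensures cmsg alignment */
            char cbuf[CMSG_SPACE(sizeof(struct timeval))];
            struct cmsghdr align;
        } u;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.cbuf, .msg_controllen = sizeof(u.cbuf),
        };

        recvmsg(sock, &msg, 0);

        /* The timestamp arrives out of band to the payload, in msg_control. */
        for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMP) {
                struct timeval tv;
                memcpy(&tv, CMSG_DATA(c), sizeof(tv));
                printf("received at %ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
            }
        }
    }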
Weird. I guess I built a mental model that felt like it made sense. Turns out computers are crazier than common sense. Thanks for the detailed reply :)
Well, there's no reason to fix it unless someone has a compelling use case. And there's never going to be a compelling use case built around it if it's not fixed.
And regarding the fix, which do you propose: change the spec, or change the majority of implementations?
It may offend the sensibilities but we're stuck with the current state of affairs for the life of TCP I think.