Last week I learnt something new, which is always good. Unfortunately it was that for over 15 years I’d been working under a misconception about how an API worked.
When calling WSARecv() or WSASend() from multiple threads on a single socket, you must ensure that only one thread calls into the API at a given time. Failure to do so can cause buffer corruption.
It all began with this question on StackOverflow. I dived in and gave my usual response. Unfortunately, my usual response was wrong, and after spending some time talking to the person who had asked the question and running their test code I realised that I'd been mistaken about this for a long time.
My view has always been that if you issue multiple WSARecv() calls on a single socket then you need to be aware that the completions may be handled out of sequence by the threads that you have servicing your I/O completion port. This is purely due to thread scheduling, and the actual calls to WSARecv() are thread safe in themselves. I wrote about this here back in 2002. My belief was that you could safely call WSARecv() from multiple threads on the same socket at the same time and the only problem would be resequencing the reads once you'd processed them. Unfortunately this is incorrect, as the example code attached to the question shows.
The example code is somewhat contrived for a TCP socket in that it doesn't care about the sequencing of the read completions and it doesn't care about processing the TCP stream out of sequence. It issues multiple WSARecv() calls from multiple threads, and the data being sent is simply a series of bytes where the next byte is the value of the current byte plus one, wrapping back to zero after a 'max value' is reached.
Such a stream with a max value of 7 would look like this: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x00, 0x01, 0x02, 0x03…
Validating such a TCP stream is as simple as taking any read that completes, in any order, and ensuring that the bytes contained in the returned buffer follow the expected pattern. Starting from the first byte, whatever value it is, subsequent bytes must each be one greater until the 'max value' is reached, at which point they wrap to zero and continue to increment. Had my long-held beliefs been true, it wouldn't have mattered how many threads were issuing WSARecv() calls on the socket at the same time; the resulting slices of the TCP stream should all have been valid.
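The pattern and the validation step can be sketched in a few lines of portable C++. This is a hypothetical reconstruction, not the code attached to the StackOverflow question; the function names FillTestPattern() and IsValidSlice() are my own invention, but the logic follows the scheme described above.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Fill a buffer with the test pattern: 0, 1, ..., maxValue, 0, 1, ...
void FillTestPattern(std::uint8_t *pData, std::size_t length, std::uint8_t maxValue)
{
   std::uint8_t value = 0;

   for (std::size_t i = 0; i < length; ++i)
   {
      pData[i] = value;

      value = (value == maxValue) ? 0 : static_cast<std::uint8_t>(value + 1);
   }
}

// Validate a slice of the stream: whatever the first byte is, each
// subsequent byte must be one greater than the previous, wrapping to
// zero after maxValue. This works on any slice, taken in any order.
bool IsValidSlice(const std::uint8_t *pData, std::size_t length, std::uint8_t maxValue)
{
   for (std::size_t i = 1; i < length; ++i)
   {
      const std::uint8_t expected =
         (pData[i - 1] == maxValue) ? 0 : static_cast<std::uint8_t>(pData[i - 1] + 1);

      if (pData[i] != expected)
      {
         return false;
      }
   }

   return true;
}
```

Note that because validation only looks at the relationship between adjacent bytes, a slice is valid no matter where in the stream it starts; that's what makes it usable for reads that complete out of order.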
Unfortunately the test program fails to prove this and instead proves that, without synchronisation around the WSARecv() call, the returned stream slices can be corrupt. That is, the data in the buffers returned from the read completion can include bytes that do not follow the expected pattern.
Of course, the way that the test program uses the API is of limited use, as I can't think of a TCP stream where it would be useful to process the stream in randomly sized chunks with no need to have processed the data that comes before the current data. One of the reasons I believed I understood the requirements of the API was that I never used it this way. In systems where multiple reads on TCP streams were allowed, I would always increment a sequence number, put the sequence number in the read buffer's metadata and then issue the WSARecv(), all as an atomic step with locking around it. This made it possible to correctly sequence the read completions and also had the side effect of preventing multiple threads from calling WSARecv() on a single socket at the same time. However, with UDP sockets the broken usage pattern is much more likely, and I think that I may have seen the corruption on a client's system in the past - we're testing with fixed code now.
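The sequence-number-under-lock pattern described above might look something like the following sketch. The class and member names are hypothetical, and the actual WSARecv() call is replaced with a comment so the resequencing logic can be shown portably; the point is that allocating the sequence number and issuing the read happen as one atomic step, and completions are buffered until they can be processed in order.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <mutex>
#include <vector>

// A read buffer tagged with the sequence number it was issued under.
struct Buffer
{
   std::uint64_t sequence = 0;
   std::vector<std::uint8_t> data;
};

class ReadSequencer
{
public:
   // Allocate the next sequence number and issue the read as a single
   // atomic step. In real code WSARecv() would be called here, before
   // the lock is released - which also serialises the API calls.
   Buffer IssueRead()
   {
      std::lock_guard<std::mutex> guard(m_lock);

      Buffer buffer;
      buffer.sequence = m_nextToIssue++;

      // ... WSARecv(socket, ...) would be issued here ...

      return buffer;
   }

   // Called from an I/O thread when a read completes, in whatever order
   // the threads are scheduled; returns any buffers that are now
   // processable in stream order.
   std::vector<Buffer> OnReadCompleted(Buffer buffer)
   {
      std::lock_guard<std::mutex> guard(m_lock);

      const std::uint64_t sequence = buffer.sequence;

      m_pending[sequence] = std::move(buffer);

      std::vector<Buffer> ordered;

      while (!m_pending.empty() && m_pending.begin()->first == m_nextToProcess)
      {
         ordered.push_back(std::move(m_pending.begin()->second));
         m_pending.erase(m_pending.begin());
         ++m_nextToProcess;
      }

      return ordered;
   }

private:
   std::mutex m_lock;
   std::uint64_t m_nextToIssue = 0;
   std::uint64_t m_nextToProcess = 0;
   std::map<std::uint64_t, Buffer> m_pending;
};
```

If read 1 completes before read 0, OnReadCompleted() simply holds it back; when read 0 arrives, both are returned together in stream order.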
I've yet to fully investigate the WSASend() side of the problem, but I'm assuming that the issue is the same. The person who asked the question on StackOverflow has seen data corruption when the receive side is protected by synchronisation and the send side isn't, and I've no reason to doubt his analysis. I would like to think that calling WSASend() on one thread and WSARecv() on another, on the same socket at the same time, is OK, but for now I'm assuming it's not and simply using a single lock per socket around both calls.
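The conservative fix, a single lock per socket around both calls, is shaped like this. Again a hypothetical sketch: the class name is mine and the Winsock calls are stubbed out with counters, since the only point being illustrated is that one mutex guards both directions.

```cpp
#include <cassert>
#include <mutex>

// One lock per socket, taken around both WSASend() and WSARecv(), so
// that only one thread is ever inside either API for this socket.
class LockedSocket
{
public:
   void Send(/* const void *pData, std::size_t length, ... */)
   {
      std::lock_guard<std::mutex> guard(m_lock);

      // ... WSASend(m_socket, ...) would be called here ...

      ++m_sendsIssued;
   }

   void Read(/* buffer details, ... */)
   {
      std::lock_guard<std::mutex> guard(m_lock);

      // ... WSARecv(m_socket, ...) would be called here ...

      ++m_readsIssued;
   }

   int m_sendsIssued = 0;
   int m_readsIssued = 0;

private:
   std::mutex m_lock;   // the single lock guarding both directions
};
```

If it turns out that concurrent WSASend() and WSARecv() on the same socket are in fact safe with respect to each other, this could be relaxed to one lock per direction; until then the single lock is the safe choice.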
The current documentation for WSARecv() has this to say about thread safety: "If you are using I/O completion ports, be aware that the order of calls made to WSARecv is also the order in which the buffers are populated. WSARecv should not be called on the same socket simultaneously from different threads, because it can result in an unpredictable buffer order." Which, IMHO, doesn't actually address this issue. It mentions the unpredictable buffer order, which is expected, but not the buffer corruption. The documentation for WSASend() has an identical note with WSARecv replaced by WSASend, which, when you think about it, doesn't actually make sense at all. I don't remember seeing these notes when I was first looking at the documentation, but who knows (yes, I'd like a fully preserved set of all revisions to the MSDN docs so that I can diff back to what the docs said when I wrote the code ;) ). Network Programming for Microsoft Windows, Second Edition, doesn't mention any thread safety issues at all in its coverage of WSARecv(), as far as I can see from a brief scan.
Apart from the one known case of UDP datagram corruption that may have been caused by my misunderstanding, I think most of our systems are pretty safe just by their design. However, there will be a fix included in the 6.6.3 release, and 7.0 won't be susceptible to this problem at all due to its Activatable Object usage at the socket level.
It’s always good to learn something new, more so when it challenges long held beliefs and especially when it proves those beliefs to have been incorrect.