Some thoughts on that two thread pool server design

I’m currently re-reading “High Performance Server Architecture” by Jeff Darcy and he has a lot of sensible stuff to say about avoiding context switches and how my multiple thread pool design, whilst conceptually good, is practically not so good. In general I agree with him, but the design often provides good enough performance and it’s easy to compose from the various classes in The Server Framework.

Explicitly managing which threads can run, using a semaphore that allows no more threads than you have cores to do work at once, is a nice idea, but one that adds complexity to the workflow, as you need to explicitly acquire and release the semaphore around your blocking operations. This approach, coupled with a single thread pool containing more threads than you have processors, would likely result in fewer context switches and higher performance.
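
A minimal sketch of what such a “running threads limiter” might look like, in standard C++ rather than anything from The Server Framework; the semaphore ceiling, the `SomeBlockingOperation()` stand-in and the thread counts are all invented for illustration. The pool has more threads than cores, but the semaphore keeps the number actually doing work at or below the core count, and each thread gives its slot back before it blocks:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <semaphore>
#include <thread>
#include <vector>

// std::counting_semaphore needs a compile-time ceiling; 256 is arbitrary here.
// The initial count is the number of threads allowed to be "running" at once.
static std::counting_semaphore<256> runningSlots(
   std::max(1u, std::thread::hardware_concurrency()));

// Hypothetical stand-in for a blocking operation (database call, file I/O, ...).
void SomeBlockingOperation()
{
   std::this_thread::sleep_for(std::chrono::milliseconds(10));
}

void WorkerThread(int id)
{
   for (int i = 0; i != 3; ++i)
   {
      runningSlots.acquire();          // take a "running" slot for the CPU work

      std::printf("thread %d doing work\n", id);

      runningSlots.release();          // give the slot up before blocking
      SomeBlockingOperation();         // other threads can run while we wait
   }
}

int main()
{
   // A single pool with more threads than cores; the semaphore limits how
   // many of them can be runnable at any one time.
   std::vector<std::thread> pool;

   for (int i = 0; i != 8; ++i)
   {
      pool.emplace_back(WorkerThread, i);
   }

   for (auto &t : pool)
   {
      t.join();
   }
}
```

The complexity that Darcy warns about is visible even in this toy version: every blocking call has to be bracketed by a release and a re-acquire, and forgetting one silently eats a slot.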

I’m currently accumulating ideas for the performance work that I have scheduled for the 6.4 release; I expect a single pool design with a running threads limiter will feature…