General

TCP flow control and asynchronous writes

Overview: To enable network applications to send and receive data via a TCP connection reliably and efficiently, the TCP protocol includes flow control, which allows the TCP stack on one side of the connection to tell the TCP stack on the other side of the connection to slow down its data transmission, or to stop sending data entirely. This is required because the TCP stack in each peer contains buffers for data transmission and data reception, and flow control prevents a sender from sending when the receiver's buffer is full.
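
How that back-pressure reaches your code depends on how you write. As a rough, hedged illustration (the SendWithFlowControl helper below is hypothetical, not part of The Server Framework), the sketch uses a non-blocking Winsock socket: once the receiver's window and the local send buffer are full, send() fails with WSAEWOULDBLOCK and the sender has to wait until the socket becomes writable again. With asynchronous, overlapped writes nothing blocks at all; the pressure shows up as completions arriving more slowly, so you need to limit the amount of data you have in flight yourself.

    // Minimal sketch: send a buffer on a non-blocking socket, waiting whenever
    // TCP flow control stops us from queuing more data. Assumes WSAStartup()
    // has been called and 's' has been put into non-blocking mode (FIONBIO).
    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    bool SendWithFlowControl(SOCKET s, const char *data, int length)
    {
       int sent = 0;

       while (sent < length)
       {
          const int result = send(s, data + sent, length - sent, 0);

          if (result > 0)
          {
             sent += result;       // some, possibly not all, of the data was queued
          }
          else if (result == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK)
          {
             // The send buffer is full because the receiver isn't draining data
             // fast enough; wait until the socket becomes writable again.
             fd_set writable;
             FD_ZERO(&writable);
             FD_SET(s, &writable);

             if (select(0, 0, &writable, 0, 0) == SOCKET_ERROR)
             {
                return false;
             }
          }
          else
          {
             return false;         // connection reset or other hard error
          }
       }

       return true;
    }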

New client profile: inConcert

We have a new client profile available here for a client that began by using The Free Framework and then switched to using The Server Framework to take advantage of the advanced features that it offers.

New client profile: Cadcorp

We have a new client profile available here for a client that began using The Server Framework in its GeognoSIS web mapping product in September 2010.

TIME_WAIT and its design implications for protocols and scalable client server systems

Overview: When building TCP client server systems it's easy to make simple mistakes which can severely limit scalability. One of these mistakes is failing to take into account the TIME_WAIT state. In this blog post I'll explain why TIME_WAIT exists, the problems that it can cause, how you can work around it, and when you shouldn't. TIME_WAIT is an often misunderstood state in the TCP state transition diagram. It's a state that some sockets can enter and remain in for a relatively long period of time. If you have enough sockets in TIME_WAIT then your ability to create new socket connections may be affected, and this can affect the scalability of your client server system.
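
One place TIME_WAIT commonly shows up is when a server is restarted and bind() fails with WSAEADDRINUSE because connections from the previous run are still in TIME_WAIT. The fragment below is a hedged sketch of that particular workaround only (the CreateListeningSocket helper is hypothetical, not from the post); note that SO_REUSEADDR is rather permissive on Windows, and that rebinding the listening port does nothing about the resources the TIME_WAIT sockets themselves consume, which is why arranging the protocol so that the client performs the active close is often the better answer.

    // Hedged sketch: allow a restarted server to bind its listening port even
    // though sockets from the previous run are still sitting in TIME_WAIT
    // (2 * MSL, traditionally 4 minutes on Windows).
    // Assumes WSAStartup() has already been called.
    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    SOCKET CreateListeningSocket(unsigned short port)
    {
       SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

       if (s == INVALID_SOCKET)
       {
          return INVALID_SOCKET;
       }

       // Without this, bind() can fail with WSAEADDRINUSE while old connections
       // linger in TIME_WAIT. Be aware that on Windows SO_REUSEADDR also allows
       // binding over a port that another socket is actively using.
       const BOOL reuse = TRUE;
       setsockopt(s, SOL_SOCKET, SO_REUSEADDR, reinterpret_cast<const char *>(&reuse), sizeof(reuse));

       sockaddr_in address = {};
       address.sin_family = AF_INET;
       address.sin_addr.s_addr = INADDR_ANY;
       address.sin_port = htons(port);

       if (bind(s, reinterpret_cast<const sockaddr *>(&address), sizeof(address)) == SOCKET_ERROR ||
           listen(s, SOMAXCONN) == SOCKET_ERROR)
       {
          closesocket(s);
          return INVALID_SOCKET;
       }

       return s;
    }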

New client profile: Desktop Sharing Company

We have a new client profile available here for a client that we’ve had since 2006 in the desktop sharing market. Their system, built on The Server Framework, runs on more than 120 servers worldwide and handles more than 200,000 desktop sharing sessions each day!

Useful link to TCP connection knowledge base articles

I found this article recently whilst discussing a question about socket reuse using DisconnectEx() over on StackOverflow. It’s a useful collection of the various configuration settings that can affect the number of concurrent TCP connections that a server can support, complete with links to the KB articles that discuss the settings in more detail. It’s a bit out of date, but it’s probably a good starting point if you want to understand the limits involved.

How to support 10,000 or more concurrent TCP connections

Using a modern Windows operating system, it's pretty easy to build a server system that can support many thousands of connections if you design the system to use the correct Windows APIs. The key to server scalability is to always keep in mind the Four Horsemen of Poor Performance as described by Jeff Darcy in his document on High Performance Server Architecture. These are: data copies, context switches, memory allocation and lock contention. I'll look at context switches first, as IMHO this is where outdated designs often rear their heads first.
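
On Windows the standard way to avoid paying a thread, and therefore a context switch, per connection is an I/O completion port serviced by a small, fixed pool of worker threads. The sketch below is only a hedged illustration of that shape, not code from The Server Framework: one completion port, one worker per CPU blocked in GetQueuedCompletionStatus(), and completions for however many thousands of sockets are associated with the port dispatched to whichever worker the scheduler runs next.

    // Hedged sketch of an I/O completion port worker pool (illustration only).
    // A real server would associate each accepted socket with the port via
    // CreateIoCompletionPort() and issue overlapped WSARecv()/WSASend() calls.
    #include <winsock2.h>
    #include <windows.h>
    #include <vector>
    #pragma comment(lib, "ws2_32.lib")

    static DWORD WINAPI WorkerThread(void *param)
    {
       HANDLE iocp = static_cast<HANDLE>(param);

       for (;;)
       {
          DWORD bytesTransferred = 0;
          ULONG_PTR completionKey = 0;    // typically a pointer to per-connection state
          OVERLAPPED *pOverlapped = 0;    // typically embedded in a per-operation structure

          GetQueuedCompletionStatus(iocp, &bytesTransferred, &completionKey, &pOverlapped, INFINITE);

          if (!pOverlapped)
          {
             break;                       // null completion posted at shutdown, or the port failed
          }

          // Handle the completed read or write for the connection identified by
          // completionKey; the worker never blocks on per-connection I/O.
       }

       return 0;
    }

    int main()
    {
       WSADATA wsaData;
       WSAStartup(MAKEWORD(2, 2), &wsaData);

       // One completion port shared by every connection. The final argument of 0
       // lets as many threads run concurrently as there are CPUs.
       HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, 0, 0, 0);

       SYSTEM_INFO systemInfo;
       GetSystemInfo(&systemInfo);

       std::vector<HANDLE> workers;

       for (DWORD i = 0; i < systemInfo.dwNumberOfProcessors; ++i)
       {
          workers.push_back(CreateThread(0, 0, WorkerThread, iocp, 0, 0));
       }

       // ... create the listening socket, associate accepted sockets with 'iocp'
       // and issue overlapped reads; omitted here ...

       // Shut down: wake each worker with a null completion.
       for (size_t i = 0; i < workers.size(); ++i)
       {
          PostQueuedCompletionStatus(iocp, 0, 0, 0);
       }

       WaitForMultipleObjects(static_cast<DWORD>(workers.size()), &workers[0], TRUE, INFINITE);

       for (size_t i = 0; i < workers.size(); ++i)
       {
          CloseHandle(workers[i]);
       }

       CloseHandle(iocp);
       WSACleanup();

       return 0;
    }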

Welcome to ServerFramework.com

ServerFramework.com is a new website that we’ve put together to make it easier for users and potential users of the licensed version of our high performance, I/O completion port based client/server socket framework to find all of the information that they need. As many of you know, I’ve been working on the code that forms The Server Framework since 2001 and it’s been used by lots of our clients to produce highly scalable, high performance, reliable servers that often run continuously 24/7, all year round.