I/O completion ports made easy
I described the basics of I/O completion ports in my last post, but the question remains: what is the easiest way to use them? Here I’ll show a callback-based application design that I’ve found makes a fully asynchronous program remarkably simple to write.
I touched briefly on attaching our own context data to the OVERLAPPED structure we pass along with I/O operations. It’s this same idea that I’ll expand on here. This time we define a generic structure to use with all our operations, along with the code our threads use to handle them as they dequeue completion packets:
```c
struct io_context {
    OVERLAPPED ovl;
    void (*on_completion)(DWORD error, DWORD transferred,
                          struct io_context *ctx);
};

OVERLAPPED *ovl;
ULONG_PTR completionkey;
DWORD transferred;

BOOL ret = GetQueuedCompletionStatus(iocp, &transferred,
                                     &completionkey, &ovl, INFINITE);

if(ret) {
    struct io_context *ctx = (struct io_context*)ovl;
    ctx->on_completion(ERROR_SUCCESS, transferred, ctx);
} else if(ovl) {
    DWORD err = GetLastError();

    struct io_context *ctx = (struct io_context*)ovl;
    ctx->on_completion(err, transferred, ctx);
} else {
    // error out
}
```
With this, all our I/O operations will have a callback associated with them. When a completion packet is dequeued it gets the error information, if any, and runs the callback. Having every I/O operation use a single callback mechanism greatly simplifies the design of the entire program.
Let’s say our app is reading a file and sending out its contents. We also want it to prefetch the next buffer so we can start sending again right away. Here’s our connection context:
```c
struct connection_context {
    HANDLE file;
    SOCKET sock;

    WSABUF readbuf;
    WSABUF sendbuf;

    struct io_context readctx;
    struct io_context sendctx;
};
```
A structure like this is nice because initiating an I/O operation requires no allocations. Note that we need two io_context members because the read and the send can be in flight concurrently.
Now the code to use it:
```c
#define BUFFER_SIZE 4096

void begin_read(struct connection_context *ctx)
{
    if(ctx->readbuf.buf) {
        // only begin a read if one isn't already running.
        return;
    }

    ctx->readbuf.buf = malloc(BUFFER_SIZE);
    ctx->readbuf.len = 0;

    // zero out io_context structure.
    memset(&ctx->readctx, 0, sizeof(ctx->readctx));

    // set completion callback.
    ctx->readctx.on_completion = read_finished;

    ReadFile(ctx->file, ctx->readbuf.buf, BUFFER_SIZE, NULL,
             &ctx->readctx.ovl);
}

void read_finished(DWORD error, DWORD transferred, struct io_context *ioctx)
{
    // get our connection context.
    struct connection_context *ctx =
        (struct connection_context*)((char*)ioctx -
            offsetof(struct connection_context, readctx));

    if(error != ERROR_SUCCESS) {
        // handle error.
        return;
    }

    if(!transferred) {
        // reached end of file, close out connection.
        free(ctx->readbuf.buf);
        ctx->readbuf.buf = 0;
        return;
    }

    // send out however much we read from the file.
    ctx->readbuf.len = transferred;
    begin_send(ctx);
}
```
This gives us a very obvious chain of events: read_finished is called when a read completes. Since we only get an io_context structure in our callback, we need to adjust the pointer to get our full connection_context.
Sending is easy too:
```c
void begin_send(struct connection_context *ctx)
{
    if(ctx->sendbuf.buf) {
        // only begin a send if one isn't already running.
        return;
    }

    if(!ctx->readbuf.len) {
        // only begin a send if the read buffer has something.
        return;
    }

    // switch buffers.
    ctx->sendbuf = ctx->readbuf;

    // clear read buffer.
    ctx->readbuf.buf = NULL;
    ctx->readbuf.len = 0;

    // zero out io_context structure.
    memset(&ctx->sendctx, 0, sizeof(ctx->sendctx));

    // set completion callback.
    ctx->sendctx.on_completion = send_finished;

    WSASend(ctx->sock, &ctx->sendbuf, 1, NULL, 0, &ctx->sendctx.ovl, NULL);

    // start reading next buffer.
    begin_read(ctx);
}

void send_finished(DWORD error, DWORD transferred, struct io_context *ioctx)
{
    // get our connection context.
    struct connection_context *ctx =
        (struct connection_context*)((char*)ioctx -
            offsetof(struct connection_context, sendctx));

    if(error != ERROR_SUCCESS) {
        // handle error.
        return;
    }

    // success, clear send buffer and start next send.
    free(ctx->sendbuf.buf);
    ctx->sendbuf.buf = NULL;

    begin_send(ctx);
}
```
Pretty much more of the same. Again, for brevity I’m leaving out some error-checking code and assuming the buffer gets sent out in full. I’m also assuming a single-threaded design: the socket and file functions themselves are thread-safe and have nothing to worry about, but the buffer-management code here would need extra locking if it could run concurrently. The idea should be clear, though.
Update: this subject continued in Tips for efficient I/O.