April 2008 Archives

Optimizing exceptions

You often hear that exceptions are slow. For this reason they are usually shunned in the embedded space, and sometimes even in regular desktop/server programming. What makes them slow? When one is thrown, the runtime has to unwind the call stack, searching each frame for a matching exception handler.

I guess I don’t understand this line of thinking. For one, exceptions are meant for exceptional situations: things you don’t expect to happen under normal operation. Code that uses exceptions runs just as fast as (or maybe even faster than) code without them, right up until you actually throw one. Those exceptional situations are truly rare, so I usually don’t care if they happen to run slower.

A compiler can actually use exceptions to optimize your code. Consider this inefficient (but typical) pseudo-C:

int dosomething(void) {
   /* do something A */
   if(err) return -1;

   /* do something B */
   if(err) {
      /* cleanup previous work A */
      return -1;
   }

   /* do something C */
   if(err) {
      /* cleanup previous work B */
      /* cleanup previous work A */
      return -1;
   }

   return 0;
}

Or even this more efficient (yes, boys and girls, goto actually has a good use case in C; get over it) pseudo-C:

int dosomething(void) {
   /* do something A */
   if(err) return -1;

   /* do something B */
   if(err) goto err1;

   /* do something C */
   if(err) goto err2;

   return 0;

   err2:
   /* cleanup previous work B */

   err1:
   /* cleanup previous work A */

   return -1;
}

Why are these bad? Cache locality. In the first example, the error handling code sits inline with your regular code. The second is slightly better, with the cleanup pushed off to the end of the function. Ideally the code you actually run should be packed into as few cache lines as possible, and handling errors this way wastes space on cleanup code that, in the vast majority of cases, will never run.

But with exceptions, the compiler is free to take all the cleanup code in your entire app and shove it into a single, separate area of code. All the normal code you expect to run can then be packed closer together. Of course, this makes throwing an exception slower. If your code throws constantly (which would probably be an abuse of exceptions), that will cause a significant overall slowdown. But if they are used correctly, for exceptional situations, the common case gets improved cache usage and therefore faster code.
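
Here is a rough sketch of what that can look like, in C++ rather than the pseudo-C above (the do_a/do_b/do_c helpers and the undo_* guards are invented for illustration). The function body is nothing but the happy path; the cleanup lives in destructors that only matter if the stack unwinds:

#include <stdexcept>

/* Hypothetical helpers: each does its work and throws
   (e.g. std::runtime_error) on failure. Stubbed out here. */
void do_a(void) { /* do something A */ }
void do_b(void) { /* do something B */ }
void do_c(void) { /* do something C */ }

/* RAII guards: run their cleanup only if still armed when destroyed. */
struct undo_a {
   bool armed;
   undo_a() : armed(true) {}
   ~undo_a() { if(armed) { /* cleanup previous work A */ } }
};

struct undo_b {
   bool armed;
   undo_b() : armed(true) {}
   ~undo_b() { if(armed) { /* cleanup previous work B */ } }
};

void dosomething(void) {
   do_a();            /* may throw */
   undo_a a;          /* if anything below throws, A gets cleaned up */

   do_b();            /* may throw */
   undo_b b;          /* if anything below throws, B gets cleaned up */

   do_c();            /* may throw */

   /* success: disarm the guards so nothing is rolled back */
   a.armed = false;
   b.armed = false;
}

In a typical table-based ("zero-cost") implementation, the success path pays for little more than those two flag writes; the unwinder locates the cleanup code through side tables only when something actually throws, so it can live far away from the hot code.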

WCF is pretty neat

I haven’t worked with .NET extensively since a little bit after 2.0 was released, so when I took on a new job developing with it, I had some catching up to do. WCF was the easy part. In fact, I’m really enjoying using it. I can tell they put a lot of thought into making it scalable.

For those who don’t know, WCF is Microsoft’s new web services framework, meant to replace the old .NET Remoting stuff from 2.0. It lets you worry about writing code (classes, methods, and so on) and handles transforming it all into SOAP and WSDL behind the scenes.

The coolest thing about WCF is its support for a completely async design. You start a database query, park the method call in the background, and resume it when the query completes. This lets the server handle thousands of clients on only a couple of threads, greatly improving cache and memory usage.
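
WCF exposes this through its own async contract pattern, but to make the idea concrete here is a toy sketch in C++ (nothing WCF-specific; the async_query helper and everything in it is made up): the thread that starts the query hands off a completion callback and immediately goes back to serving other clients instead of blocking on the database.

#include <chrono>
#include <functional>
#include <iostream>
#include <string>
#include <thread>

/* Hypothetical helper: start the "database query" on a worker thread,
   return immediately, and invoke done once the result is ready. */
void async_query(const std::string& sql, std::function<void(const std::string&)> done) {
   std::thread worker([sql, done]() {
      std::this_thread::sleep_for(std::chrono::milliseconds(50)); /* pretend I/O wait */
      done("results for: " + sql);
   });
   worker.detach();
}

int main() {
   async_query("SELECT name FROM clients", [](const std::string& result) {
      /* the "resume" step: runs only when the query has finished */
      std::cout << result << std::endl;
   });

   /* meanwhile, this thread is free to handle other clients */
   std::cout << "request thread not blocked" << std::endl;

   std::this_thread::sleep_for(std::chrono::seconds(1)); /* keep the demo alive */
   return 0;
}

Scale that callback idea up across every slow operation and a handful of threads can keep thousands of connections in flight, which is exactly the property the async support is after.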

One funny thing I learned from this is that ASP.NET has full async support too; it just doesn’t get much advertising for some reason. The one thing that annoys me about all modern web development frameworks is the lack of async support, which makes you pay for 20 servers when you should only need one, and here it was under my nose the whole time. Imagine that!

Visual C++ 2008 Feature Pack is now available

The Visual C++ 2008 Feature Pack I talked about before is finished and ready for download. It includes the bulk of TR1 (sadly, still no cstdint) and some major MFC updates.