We recently released DotImage 10 and, if you’ve been following us for a while, you know that we are committed to building the best .NET imaging components. Since I started at Atalasoft, I have been looking at the problem of implementing as much of our internals as possible in entirely managed code. From the very beginning, it was a daunting task: I estimated that the entire project would take around three person-years of engineering time. This raises the question, “is it really worth it?”
On the down side:
- Managed code runs, on average, about 1.5x slower than unmanaged code. In image processing, this cost stacks up quickly: operations are routinely repeated billions of times, so we really have to keep an eye on costs.
- Translating to managed code appears to add little value on its own, since the ported code does the same thing – no new features
On the plus side:
- DotImage will run on the client in Silverlight applications, as well as in hosted .NET environments where unmanaged code isn’t allowed
- Managed code is far more stable – array bounds checking alone is a big win
- Managed code is simpler to author and simpler to deploy
- Managed code is future proof on different processors/OSes
- Managed code is easier to scale across multiple cores/CPUs
How did we do this? I’ve been playing a chess game with our API over the past 5 years.
This laid the groundwork. The next step was to apply a set of porting strategies. My goal was to reuse as much code as possible in this port. This meant:
- using our regular C# code (a slam dunk)
- removing any unsafe code
- porting C/C++ (and in some cases choosing to use the new code in both the managed and unmanaged builds)
- adapting APIs when necessary (Silverlight doesn’t have System.Drawing, which means no Rectangle, Point, Size or Color objects)
- writing unit tests that ensure that the output matches
- running benchmarks to find and eliminate bottlenecks, and so on
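To give a flavor of the API-adaptation problem, here is a minimal sketch – these types are hypothetical stand-ins, not Atalasoft’s actual code – of how one might supply lightweight equivalents of the System.Drawing value types that Silverlight lacks:

```fsharp
// Hypothetical replacements for System.Drawing types missing in Silverlight.
// Record types keep the shape familiar to callers of the original API.

type Point = { X: int; Y: int }

type Size = { Width: int; Height: int }

type Rectangle =
    { X: int; Y: int; Width: int; Height: int }
    /// Same semantics as System.Drawing.Rectangle.Contains:
    /// left/top edges inclusive, right/bottom edges exclusive.
    member r.Contains(p: Point) =
        p.X >= r.X && p.X < r.X + r.Width &&
        p.Y >= r.Y && p.Y < r.Y + r.Height
```

Keeping the member names and semantics aligned with System.Drawing is what lets the rest of the ported code compile against either set of types with minimal changes.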
Finally, there is a new secret weapon in our arsenal. A fair amount of DotImage 10 is written in F#.
Yes, you read that right. …
…Since each of these is marked inline, the F# optimizer can actually do something useful with the code. In my experience so far, the C# optimizer doesn’t do much, if anything. So why do we care about this? It’s that lurking 1.5x managed-code cost. In my measurements, C#->IL->target CPU does about 1.5x the work of C++->target CPU. Quite honestly, for a virtual language targeting a virtual machine, this is a very low cost. By using F#, we were able to attack this cost with inlining, code profiling, scanline caching, memoization and other techniques. In many cases we ended up with code that ran in time equivalent to the C++ code, and in some cases faster.
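As a rough illustration of two of the techniques named above – the function names here are illustrative, not DotImage’s actual internals – F#’s `inline` keyword lets the compiler expand small per-pixel helpers at each call site, and a generic memoization wrapper can cache expensive per-parameter work such as lookup tables:

```fsharp
// A minimal sketch, assuming nothing about DotImage's real API.

// 'inline' asks the F# compiler to expand this at the call site,
// so the clamp adds no call overhead in loops that run billions of times.
let inline clampByte (v: int) =
    if v < 0 then 0uy
    elif v > 255 then 255uy
    else byte v

// Wrap any function with a dictionary-backed cache – useful when the
// same parameters (e.g. a gamma value) recur across scanlines.
let memoize (f: 'a -> 'b) =
    let cache = System.Collections.Generic.Dictionary<'a, 'b>()
    fun x ->
        match cache.TryGetValue x with
        | true, v -> v
        | false, _ ->
            let v = f x
            cache.[x] <- v
            v

// Example: build (and cache) a 256-entry gamma lookup table per gamma value,
// so the per-pixel work reduces to a single array index.
let gammaTable =
    memoize (fun (gamma: float) ->
        Array.init 256 (fun i ->
            clampByte (int (255.0 * ((float i / 255.0) ** (1.0 / gamma)) + 0.5))))
```

The point of the table-plus-memoization shape is that the expensive `**` call happens 256 times per distinct gamma, not once per pixel.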
This is not to say that we didn’t have issues with F#. I found several compiler bugs, for which we got quick workarounds from Don Syme and his team. I also hit some interesting .NET interoperability challenges, but in the end I was able to meet one of my prime rules for choosing F#: any object written in F# should be method-signature identical to its C# equivalent, so that our customers needn’t know or care about the .NET language choice under the hood. The code should work, it should work well, and with no surprises.
This was a great post and I applaud Steve for taking the time to write and Atalasoft for supporting him in allowing it. It’s not every company that will let their people blog about things that are this close to the core.
Also it’s great seeing a real world success story for F#…