Concurrent Execution gets a boost with the Disruptor pattern
In our new cloudy multi-core environments, multi-threaded execution is the new craze in all its aspects: concurrency management, asynchronous execution, “agents” programming, and the general producer-consumer pattern. In the .NET world, Microsoft has worked hard on this subject in Framework 4.0/4.5, with TPL Dataflow and the async/await keywords (C# 5.0) as major examples, even relegating direct use of the Thread class to a low-level programming task 🙂
One common point in all concurrency algorithms is locks: ensuring that those precious shared resources do not get messed up by competing simultaneous accesses. Locks are widely used, often with a wide scope, yet their performance impact is frequently neglected.
That’s why the announcement by LMAX (a trading platform provider) that they are open-sourcing a new pattern allowing drastic reductions in the time spent in locks is so interesting. This algorithm, named the Disruptor, handles the producer-consumer pattern using a specific data model (a ring buffer of pre-instantiated objects) that minimizes concurrent access to shared resources. They actually started from the low-level CPU cache access strategy to derive fundamental rules for lock performance: for example, keeping distinct resources on distinct cache lines. This greatly improves two key performance indicators: throughput and latency.
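To make the idea concrete, here is a minimal single-producer/single-consumer sketch in Java of the ring-buffer approach. The names here (RingBufferSketch, publish, consume) are mine, not the real LMAX API: the point is that slots are allocated once up front and reused, and the two threads coordinate through monotonically increasing sequence counters rather than a lock around the whole queue.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical minimal sketch of the Disruptor's core idea (not the LMAX API):
// a power-of-two ring buffer whose slots are allocated once up front, with a
// single producer and single consumer coordinated by sequence counters.
public class RingBufferSketch {
    static final int SIZE = 8;                  // must be a power of two
    static final long[] slots = new long[SIZE]; // pre-allocated entries, reused forever

    // In the real Disruptor these counters are padded onto separate cache
    // lines to avoid false sharing; AtomicLong stands in for the idea here.
    static final AtomicLong published = new AtomicLong(-1); // last slot written
    static final AtomicLong consumed  = new AtomicLong(-1); // last slot read

    static void publish(long value) {
        long next = published.get() + 1;
        // Spin until the consumer has freed a slot (ring is full otherwise).
        while (next - consumed.get() >= SIZE) Thread.onSpinWait();
        slots[(int) (next & (SIZE - 1))] = value; // overwrite in place, no allocation
        published.set(next);                      // volatile write: makes the slot visible
    }

    static long consume() {
        long next = consumed.get() + 1;
        // Spin until the producer has published the next entry.
        while (next > published.get()) Thread.onSpinWait();
        long value = slots[(int) (next & (SIZE - 1))];
        consumed.set(next);                       // frees the slot for the producer
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            for (long i = 0; i < 100; i++) publish(i);
        });
        producer.start();
        long sum = 0;
        for (long i = 0; i < 100; i++) sum += consume();
        producer.join();
        System.out.println("sum = " + sum); // 0 + 1 + ... + 99 = 4950
    }
}
```

Note there is no lock anywhere: each counter has a single writer, and the happens-before edges of the volatile counter updates are enough to publish the slot contents safely between the two threads.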
And it’s quite fast, as this comparison table against the Java ArrayBlockingQueue class, which is already well optimized, shows (click on the picture for a larger size).
What lies ahead? Thanks to its open-sourcing, the Disruptor is now being dissected across the globe, and ports to .NET and C++ are underway. Hopefully I will find a good reason to use it 🙂