Wednesday, November 30, 2011

Read/Write Lockless Exclusion

With the recent release of iOS 5, GCD now has a simple and elegant solution to the readers-writers problem that uses lockless exclusion. In short, we use dispatch_async (or dispatch_sync) to enqueue "reader" blocks onto a user concurrent queue. These blocks may read some shared data but must not write to it. If we need write access, we use the new dispatch_barrier_async (or dispatch_barrier_sync) function, which puts a special barrier block on the queue. Blocks are still dequeued in FIFO order; however, a writer block is not dequeued while any reader blocks are running, and while a writer block is running, no other blocks are dequeued, including other writer blocks. This guarantees that writer blocks have exclusive access while reader blocks may share access.
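
For reference, here is a minimal sketch of that raw pattern on its own, before wrapping it in an accessor (the queue label and the dictionary are just illustrative placeholders):

// A concurrent queue created by the application.
dispatch_queue_t queue = dispatch_queue_create("com.example.rwqueue",
                                               DISPATCH_QUEUE_CONCURRENT);
NSMutableDictionary *cache = [NSMutableDictionary dictionary];

// Readers are dispatched normally, so any number of them may run together.
dispatch_async(queue, ^{
    NSLog(@"value: %@", [cache objectForKey:@"key"]);
});

// The writer is dispatched as a barrier: it waits for earlier blocks to
// finish, runs by itself, and only then lets later blocks run.
dispatch_barrier_async(queue, ^{
    [cache setObject:@"new value" forKey:@"key"];
});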

The new barrier blocks and user concurrent queues are immensely valuable. Lately, I've been considering how they relate to my recently developed Lockless Exclusion Accessor (LEA) design pattern. If you are not already familiar with my LEA pattern, you should first read my LEA article. The intent of a LEA is to make mutual exclusion easy and efficient. Since efficiency is the goal, let's push it further by allowing shared read access. Here is some generic code that does just that:

/// Block type to use when accessing the shared memory.
/// The type is 'id' just to be generic. With a real
/// LEA, you should use the exact type.
typedef void(^LEABlock)(id sharedMemory);

/// Class extension for holding the private, shared memory
/// and its associated queue.
@interface MyObject ()

@property (nonatomic, strong) id sharedMemory;
@property (nonatomic, assign) dispatch_queue_t leaQueue;

@end

@implementation MyObject

@synthesize sharedMemory = _sharedMemory;
@synthesize leaQueue = _leaQueue;

- (id)init {
    if ((self = [super init])) {
        // The queue MUST be concurrent to gain shared readership.
        dispatch_queue_t queue = NULL;
        queue = dispatch_queue_create(
          "com.RobBrown.LEAQueue",
          DISPATCH_QUEUE_CONCURRENT);
        [self setLeaQueue:queue];

        // Initialize shared memory here.
    }
    return self;
}

- (void)accessSharedMemoryReadWriteAsync:(LEABlock)block {

    // Block must not be nil.
    NSParameterAssert(block);

    // Executes the block asynchronously with exclusive access.
    dispatch_barrier_async([self leaQueue], ^{
        block([self sharedMemory]);
    });
}

- (void)accessSharedMemoryReadOnlyAsync:(LEABlock)block {

    // Block must not be nil.
    NSParameterAssert(block);

    // Executes the block asynchronously with shared access.
    dispatch_async([self leaQueue], ^{
        block([self sharedMemory]);
    });
}

- (void)dealloc {
    // Dispatch objects are not managed by ARC on iOS 5,
    // so the queue must be released manually.
    dispatch_release(_leaQueue);
}

@end
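
To illustrate the call sites, here is a hypothetical usage, assuming the shared memory was initialized to an NSMutableArray in init:

MyObject *object = [[MyObject alloc] init];

// Shared read access: any number of these may run concurrently.
[object accessSharedMemoryReadOnlyAsync:^(id sharedMemory) {
    NSLog(@"count: %lu", (unsigned long)[sharedMemory count]);
}];

// Exclusive write access: the barrier keeps all other blocks out.
[object accessSharedMemoryReadWriteAsync:^(id sharedMemory) {
    [sharedMemory addObject:@"new item"];
}];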

The first thing to note is that the dispatch queue is now concurrent instead of serial. If you use a serial queue, barrier blocks give you no added benefit. The next thing to note is that every previous LEA in your object splits in two: a readonly version and a read-write version. It's only four extra lines for a potentially great increase in efficiency.

As always, you should favor dispatch_async over dispatch_sync. If you must have synchronous access, then you can add synchronous variants too. You must, however, never use my dispatch_sync_checked function. It is not and never will be compatible with barrier blocks.
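
If you do add them, a sketch might look like this (the method names are placeholders, and note that calling the barrier variant from a block already running on leaQueue will deadlock):

- (void)accessSharedMemoryReadWriteSync:(LEABlock)block {

    // Block must not be nil.
    NSParameterAssert(block);

    // Waits for exclusive access, runs the block, then returns.
    dispatch_barrier_sync([self leaQueue], ^{
        block([self sharedMemory]);
    });
}

- (void)accessSharedMemoryReadOnlySync:(LEABlock)block {

    // Block must not be nil.
    NSParameterAssert(block);

    // Waits for shared access, runs the block, then returns.
    dispatch_sync([self leaQueue], ^{
        block([self sharedMemory]);
    });
}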

The readonly LEA does, however, open up a potential point of failure. What if we write some code that uses a readonly LEA, then later change that code to modify the shared memory without switching to a read-write LEA? That would introduce a race condition. If needed, you can add a sanity check to your readonly LEA. Here are a few options:
  1. If the object you are passing back has mutable and immutable variants, create two block types. The read-write LEA uses the mutable variant and the readonly LEA uses the immutable variant. This is the best solution because misuse is flagged at compile time.
  2. If you don't have mutable and immutable variants, then you can simply pass a copy of the shared memory to the readonly block. This can mask errors, though, since you won't know if you accidentally used a readonly LEA when you meant to use a read-write LEA.
  3. Before calling the block, copy the shared memory. After calling the block, check that the shared memory was not changed by comparing it to the earlier copy. If it was changed, throw an exception. Although Objective-C code rarely throws exceptions, a write through a readonly LEA is a programmer error, so this is an appropriate place to do so. This solution can be tedious, and you should use a pre-processor macro to remove the check from release builds for efficiency (a sketch is shown after this list).
  4. You can create a subclass of NSProxy. Instead of passing back the shared memory itself, you pass back the proxy, and the proxy throws an exception if any setter is called. This too can be tedious, and you may want to use a pre-processor macro here as well to remove the proxy in release builds.
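
As a sketch of option 3, the readonly accessor might perform its check like this, assuming the shared memory conforms to NSCopying and implements isEqual: meaningfully (DEBUG is the usual Xcode debug-configuration macro, and NSAssert raises the exception):

- (void)accessSharedMemoryReadOnlyAsync:(LEABlock)block {

    // Block must not be nil.
    NSParameterAssert(block);

    dispatch_async([self leaQueue], ^{
#ifdef DEBUG
        // Snapshot the shared memory before running the reader block.
        id snapshot = [[self sharedMemory] copy];
        block([self sharedMemory]);
        // A readonly block must never mutate the shared memory.
        NSAssert([snapshot isEqual:[self sharedMemory]],
                 @"Readonly LEA block modified the shared memory.");
#else
        block([self sharedMemory]);
#endif
    });
}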

Don't forget that user concurrent queues and barrier blocks are iOS 5 (and Mac OS X Lion) only. If you are using iOS 5 and you need mutual exclusion, I highly recommend using Read/Write LEAs (or at least regular LEAs). They make mutual exclusion fast and easy.