Locking

Like AppFabric, ScaleOut StateServer supports both pessimistic and optimistic locking strategies. Key differences between the two products' APIs are discussed in this topic.

Pessimistic Locking

AppFabric DataCache method   ScaleOut NamedCache method
--------------------------   ------------------------------------------------
GetAndLock                   Retrieve (set acquireLock parameter to true)
PutAndUnlock                 Update (set unlockAfterUpdate parameter to true)
Unlock                       ReleaseLock

Lock Handles

WSAF Caching returns a DataCacheLockHandle instance from a locking GetAndLock() call. This handle instance must then be supplied to one of the DataCache’s unlock calls (PutAndUnlock()/Unlock()) to release the lock on the object.

In contrast, ScaleOut StateServer’s NamedCache API automatically manages lock handles on your behalf. Handles to locks that you acquire are stored behind the scenes in thread-local storage, so your application does not need to store and manage any lock handles.

Handling Lock Collisions

In WSAF Caching, if a caller attempts to lock an object that has already been locked by another client/thread then a DataCacheException will be thrown. AppFabric client applications must implement their own retry logic.

ScaleOut StateServer’s NamedCache API will block and automatically perform retries if you attempt to lock an object that has already been locked by another caller. Your locking call will return once your thread successfully acquires the lock.

The NamedCache’s MaxLockAttempts and LockRetryInterval properties control the number and frequency of lock retry attempts. By default, the NamedCache makes up to 20,000 attempts at 5-millisecond intervals. If all retries are exhausted, an ObjectLockedException is thrown.

[Tip] Tip

Setting the MaxLockAttempts count to 1 will make the NamedCache behave like the AppFabric DataCache: if there is a lock collision, the locking call returns immediately and throws an ObjectLockedException instead of retrying.
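As a minimal sketch, the fail-fast configuration described above might look like this (the properties and exception type are taken from this topic; the exact property types should be confirmed against the NamedCache reference documentation):

NamedCache cache = CacheFactory.GetCache("Locking sample cache");

// Fail fast on lock collisions, like AppFabric's GetAndLock:
cache.MaxLockAttempts = 1;

try
{
  // Read and lock; throws ObjectLockedException if another caller holds the lock:
  object value = cache.Retrieve("key1", true);

  // ... work with value ...

  cache.ReleaseLock("key1");
}
catch (ObjectLockedException)
{
  // Another client/thread holds the lock; apply your own retry/back-off logic here.
}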

Lock Timeouts

In WSAF Caching, the DataCache.GetAndLock method allows you to specify a lock timeout argument, which ensures that a lock is eventually released if a programming error or application crash leaves an object locked in the cache.

ScaleOut StateServer offers a configurable lock timeout that is applied to the entire data grid. The default timeout is 90 seconds (this default was chosen to match ASP.NET’s default session lock timeout). The value can be adjusted by modifying the lock_timeout configuration setting in the service’s soss_params.txt file.
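As an illustrative sketch, raising the grid-wide timeout to three minutes might look like this in soss_params.txt (the key/value syntax shown here is an assumption; consult the ScaleOut configuration documentation for the exact format, and note that the value is in seconds):

lock_timeout=180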

If locking is being used, you are encouraged to call NamedCache.ReleaseLock in a finally block to reduce the likelihood of leaving an object locked in the ScaleOut data grid. The ReleaseLock method has minimal overhead and will only make a round-trip to a ScaleOut host if it knows that the current thread is still holding a lock on the object. For example:

NamedCache cache = CacheFactory.GetCache("Locking sample cache");

try
{
  // Read and lock:
  cache.Retrieve("key1", true);

  // Exception-prone code:
  int zero = 0;
  int mistake = 42 / zero;

  // Update and unlock:
  cache.Update("key1", mistake, true);
}
catch (DivideByZeroException)
{
  // handle error...
}
finally
{
  // Ensure object is always unlocked.
  if (cache != null) cache.ReleaseLock("key1");
}
Synchronizing Object Creation

"Forcing" Locks on Non-existent Objects

Several of the AppFabric GetAndLock() overloads provide a forceLock boolean parameter. Contrary to the behavior suggested by this parameter name, it does not allow a caller to force an override of a lock that is held by another client/thread. Instead, this parameter allows you to lock an object even if it does not yet exist in the AppFabric cache.

The intended usage patterns for AppFabric’s forceLock parameter are not fully explained in the MSDN documentation, but a common use case for such a feature is to provide a way to synchronize creation of an object in the cache. That is, if multiple AppFabric clients attempt to access an object and encounter a cache miss, the forceLock parameter can be used to prevent all of the clients from simultaneously attempting to retrieve the missing object from a system of record (a potentially expensive operation) and then overwriting each other’s objects in the cache.

AppFabric clients must use locking calls to perform this synchronization, and clients must be written to perform retries or otherwise gracefully handle exceptions that are thrown by lock collisions.

ScaleOut’s CreateHandler Callback

ScaleOut’s NamedCache API does not require the use of error-prone locking to synchronize and coordinate the creation of objects in the distributed cache. When calling the NamedCache.Retrieve() method, a caller can simply provide a delegate to a callback method (called a "CreateHandler") that will be invoked if a cache miss occurs. This user-supplied callback is responsible for providing an object instance that will be automatically inserted into the cache.

If multiple clients/threads simultaneously attempt to read the missing object, only one thread in one client is permitted to execute the CreateHandler callback, preventing multiple clients/threads from creating the object at the same time. This behavior is valuable when creating the cached object involves expensive calls to a database, or when it is otherwise undesirable for an object to be repeatedly created in the cache. While the callback is executing, other threads that try to retrieve the object are blocked, even if they are running on other client machines. Once the object has been added to the data grid, those threads are unblocked and the newly stored object is returned to all of them.

The synchronization performed by ScaleOut’s APIs is transparent to the caller—a client only needs to provide a CreateHandler callback to a NamedCache Retrieve() call.
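A minimal sketch of this pattern is shown below. The RetrieveOptions usage, the callback's delegate signature, and the LoadFromSystemOfRecord helper are illustrative assumptions; see the RetrieveOptions.CreateHandler reference documentation for the exact types:

NamedCache cache = CacheFactory.GetCache("Catalog cache");

RetrieveOptions options = new RetrieveOptions();

// Invoked only on a cache miss, by exactly one client/thread; other readers
// block until the newly created object is stored in the data grid.
// (Delegate signature shown here is illustrative.)
options.CreateHandler = key => LoadFromSystemOfRecord(key); // hypothetical helper

object item = cache.Retrieve("product42", options);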

[Tip] Tip

A full example of how to efficiently synchronize the creation of expensive objects is available in the RetrieveOptions.CreateHandler reference documentation.

Optimistic Locking

A number of WSAF Caching methods return a DataCacheItemVersion instance that can be used to implement optimistic concurrency in your application. If there is a version mismatch when performing a Put call, the Put method returns null and your application may choose to re-retrieve the object and retry the update.

ScaleOut’s approach to optimistic concurrency is similar to AppFabric’s, but the object’s version is not returned separately from the object. Instead, users implement the IOptimisticConcurrencyVersionHolder interface on objects that need to implement optimistic concurrency control. Version information is essentially stored in the object, and, if a mismatch is detected when updating an object in the cache, an OptimisticLockException is thrown.

An optimistic update can be performed by using the Update overload that takes an UpdateOptions struct as a parameter. The struct’s LockingMode property accepts an UpdateLockingMode enum value that controls whether the update operation performs optimistic locking, pessimistic locking, or no locking at all.

If an OptimisticLockException is thrown from an update operation, your application may choose to either discard the update or else refresh its local copy of the object and retry the update. Because this retry logic can be expensive (depending on the size and complexity of your object and the changes you are making to it), optimistic locking is best used in situations where updates to the object are infrequent relative to reads.
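A sketch of an optimistic update with a simple retry loop follows. The UpdateLockingMode member name and the Account type (a hypothetical class implementing IOptimisticConcurrencyVersionHolder) are illustrative assumptions:

NamedCache cache = CacheFactory.GetCache("Accounts");

UpdateOptions options = new UpdateOptions();
options.LockingMode = UpdateLockingMode.UseOptimisticLocking; // member name assumed

bool updated = false;
while (!updated)
{
  try
  {
    // Account is a hypothetical type that stores its own version information:
    Account acct = (Account)cache.Retrieve("acct1001", false);
    acct.Balance += 100m;
    cache.Update("acct1001", acct, options);
    updated = true;
  }
  catch (OptimisticLockException)
  {
    // Version mismatch: another caller updated the object first.
    // Loop to re-read the latest version and retry (or give up after N tries).
  }
}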

[Tip] Tip

Resolving collisions can be the trickiest part of implementing an optimistic concurrency strategy. ScaleOut’s .NET API Reference has a helpful topic on Optimistic Concurrency that has sample code illustrating best practices for performing updates and handling collisions.