The NamedCache API


This topic and its subtopics provide a conceptual overview of the Soss.Client.NamedCache class.


The Soss.Client.NamedCache class is the primary way to interact with your objects in a ScaleOut distributed object store. This class represents a named cache of objects in the distributed data store, and it allows you to work with your objects using familiar .NET collection syntax.

NamedCache cache = CacheFactory.GetCache("MyCache");
cache["Welcome"] = "Hello World!";

A number of advanced caching features are also available to your application, both at the cache level using the NamedCache class and at the individual object level using the CreatePolicy class. The following sections provide starting points for learning more about the various features available through ScaleOut StateServer's .NET APIs.

Timeouts and Expirations

Objects stored in ScaleOut StateServer can have timeouts assigned to them when they're inserted into the cache using the CreatePolicy.Timeout property. The CreatePolicy.IsAbsoluteTimeout property can be used to designate whether the object's timeout should be sliding or absolute.
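
A minimal sketch of assigning a 20-minute sliding timeout at insertion time (the Insert overload shown here is an assumption; check the NamedCache class reference for the exact signatures):

```csharp
using System;
using Soss.Client;

class TimeoutExample
{
    static void Main()
    {
        NamedCache cache = CacheFactory.GetCache("MyCache");

        // Expire the object 20 minutes after its last access.
        CreatePolicy policy = new CreatePolicy();
        policy.Timeout = TimeSpan.FromMinutes(20);
        policy.IsAbsoluteTimeout = false; // true would make the timeout absolute

        // Hypothetical Insert overload taking a CreatePolicy:
        cache.Insert("Welcome", "Hello World!", policy, true, false);
    }
}
```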

When an object expires from the distributed cache, the NamedCache.ObjectExpired event can be used to execute custom expiration logic just before the object is removed from the cache. Arguments provided to this event can be used to extend the expiring object's lifetime, if desired.
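
A sketch of subscribing to the expiration event; the event-args members are omitted here because their names are not shown in this topic, so see the NamedCache.ObjectExpired reference for the real delegate and argument types:

```csharp
using System;
using Soss.Client;

class ExpirationHandler
{
    static void Main()
    {
        NamedCache cache = CacheFactory.GetCache("MyCache");

        // Run custom cleanup logic just before an object is removed.
        cache.ObjectExpired += (sender, args) =>
        {
            // The real event args expose the expiring object's identity and
            // a way to extend its lifetime; both are omitted in this sketch.
            Console.WriteLine("An object is expiring from MyCache.");
        };
    }
}
```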


Locking

An object can be exclusively locked across the entire distributed cache in order to ensure that another client or thread will not modify the object. Locks can be controlled using the NamedCache.AcquireLock and NamedCache.ReleaseLock methods.

Atomic read-and-lock operations can be performed on an object using the NamedCache.Retrieve method. A corresponding update-and-release atomic operation can be performed using the NamedCache.Update method.

If locking needs to be performed implicitly by all NamedCache access methods (such as the indexers or other overloads that don't provide lock-control parameters), the NamedCache.UseLocking property can be used to turn on locking for all read and update operations performed by the NamedCache instance.
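
The atomic read-and-lock and update-and-release forms described above might be combined as follows; the Retrieve and Update overload shapes are assumptions, but the general pattern is read-and-lock, modify, then update-and-release:

```csharp
using System;
using Soss.Client;

[Serializable]
class Account { public decimal Balance; }

class LockingExample
{
    static void Deposit(NamedCache cache, string key, decimal amount)
    {
        // Atomically read the object and acquire its distributed lock
        // (hypothetical Retrieve overload with a lock flag):
        Account account = (Account)cache.Retrieve(key, true);

        account.Balance += amount;

        // Atomically write the new value back and release the lock
        // (hypothetical Update overload with a release flag):
        cache.Update(key, account, true);
    }
}
```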

Optimistic locking, introduced in version 5.0, offers an alternative approach to concurrency control. Instead of pessimistically preventing other clients from accessing the object, the optimistic strategy raises an exception only if a collision is detected when an update is performed, allowing the client application to resolve the collision (typically by re-reading the object and then retrying the update operation). Applications need to implement the IOptimisticConcurrencyVersionHolder interface on their cached objects if they want to support optimistic operations. See the topic on Optimistic Concurrency for details.
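
A sketch of a cached type prepared for optimistic operations; the member names shown on IOptimisticConcurrencyVersionHolder are assumptions, so consult the interface reference for the actual contract:

```csharp
using System;
using Soss.Client;

// Hypothetical member names; conceptually, the interface just gives
// SOSS somewhere to keep the version stamp it uses to detect collisions.
[Serializable]
class Portfolio : IOptimisticConcurrencyVersionHolder
{
    private int _version;

    public int GetOptimisticConcurrencyVersion() { return _version; }
    public void SetOptimisticConcurrencyVersion(int version) { _version = version; }

    public decimal TotalValue;
}
```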


Object Dependencies

To help manage the lifetimes of object groups, objects stored in the ScaleOut StateServer service can have dependency relationships with other objects in the distributed store. If an object is dependent on a parent, it will automatically be removed from the store when its parent is updated or removed from the store. Dependency relationships are established when a child object is inserted into the store using the CreatePolicy.Dependencies property.
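
A sketch of establishing a parent/child relationship at insertion time (the Insert overload and the exact collection type behind CreatePolicy.Dependencies are assumptions):

```csharp
using Soss.Client;

class DependencyExample
{
    static void Main()
    {
        NamedCache cache = CacheFactory.GetCache("MyCache");
        cache["order-1001"] = "parent order";

        // Remove the invoice automatically whenever its parent order
        // is updated or removed from the store.
        CreatePolicy policy = new CreatePolicy();
        policy.Dependencies = new string[] { "order-1001" }; // assumed collection type

        cache.Insert("invoice-1001", "child invoice", policy, true, false);
    }
}
```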

When an object is being removed due to a dependency relationship, the NamedCache.ObjectExpired event is triggered to execute custom expiration logic just before the child object is removed from the store. Arguments provided to this event can be used to extend the child object's lifetime, if desired.


Querying

Support for querying the distributed store using LINQ syntax was introduced in version 5.0. See the topics on indexing the properties on your objects and then querying for them using LINQ.
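
A sketch of a LINQ query over indexed properties; the QueryObjects<T>() entry point shown here is an assumption, and the Price property is assumed to have been indexed as described in the indexing topic:

```csharp
using System;
using System.Linq;
using Soss.Client;

[Serializable]
class Stock
{
    // Assumed to be indexed so it can appear in a where clause.
    public decimal Price { get; set; }
}

class QueryExample
{
    static void Run(NamedCache cache)
    {
        // Hypothetical queryable entry point on the named cache:
        var cheapStocks = from s in cache.QueryObjects<Stock>()
                          where s.Price < 25.00m
                          select s;

        foreach (Stock s in cheapStocks)
            Console.WriteLine(s.Price);
    }
}
```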

If advanced filtering must be performed against complex types (or the supported LINQ operators are insufficient), the InvokeFilter LINQ operator can be used to deploy and run custom filter logic in parallel across your farm. This operator uses ScaleOut's Invocation Grid feature to automatically deploy and execute your custom .NET filter logic on all of your ScaleOut hosts. More information and samples are available in the Advanced Filtering with InvokeFilter topic.

Prior to the introduction of LINQ support, queries could be performed against the cache using the NamedCache.Query method, and this approach to querying continues to be supported in the APIs. Using the NamedCache.Query method requires objects to have metadata explicitly assigned to them in the store using NamedCache.SetMetadata, and the Query method will search for exact matches on this object metadata.

Parallel Method Invocation

The ScaleOut StateServer service hosts a powerful parallel execution engine that allows you to efficiently run "map-reduce" style operations across a selected set of objects stored in the cache. The NamedCache.Invoke method is used to execute Parallel Method Invocation (PMI) operations: users supply this method with a filter (to select which objects to process), a custom evaluation method (the work to be done against each object), and a custom merge method (a way for the invocation engine to coalesce the evaluation results back into a single return value).
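
The three pieces supplied to the Invoke method can be sketched as follows; the generic Invoke overload shown is an assumption, so see the class reference for the real signatures:

```csharp
using System;
using Soss.Client;

[Serializable]
class Portfolio { public int PendingTrades; }

class PmiExample
{
    // Eval: runs in parallel against each selected object in the store.
    static int CountTrades(Portfolio p, DateTime asOf)
    {
        return p.PendingTrades;
    }

    // Merge: coalesces two eval results into a single value.
    static int Sum(int a, int b) { return a + b; }

    static void Run(NamedCache cache)
    {
        // Hypothetical Invoke overload: filter (null selects all Portfolio
        // objects), eval method, eval parameter, merge method, and timeout.
        int total = cache.Invoke<Portfolio, DateTime, int>(
            null, CountTrades, DateTime.UtcNow, Sum, TimeSpan.FromMinutes(5));

        Console.WriteLine("Pending trades across the farm: " + total);
    }
}
```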

The Samples directory under the ScaleOut StateServer installation directory contains a "ParallelMethodInvocation" sample project that illustrates how to use the feature. The example shows how trade decisions can be efficiently made across a large set of financial portfolios.

In version 5.0 and later, Parallel Method Invocation operations can also be performed using LINQ syntax. See the topic on Invoke and Merge for more details.

Users must obtain a license for ScaleOut ComputeServer to perform Parallel Method Invocation operations.

Invocation Grid

To simplify and automate the deployment of application code to grid servers for parallel method invocations, ScaleOut ComputeServer enables applications to define an invocation grid prior to running parallel method invocations on a collection of objects in a named cache. The invocation grid allows you to specify which assemblies are needed to perform an invocation. When an invocation grid is loaded, ScaleOut ComputeServer creates a set of .NET worker processes, one per grid server, and automatically deploys and loads your assemblies into these worker processes in preparation for running parallel method invocations. Once a named cache is associated with an invocation grid, all parallel method invocations on this named cache are transparently sent to the invocation grid's worker processes for execution.

Use the InvocationGridBuilder class to launch a grid of worker processes across your farm. The InvocationGridBuilder.AddDependency method is used to specify which of your custom assemblies should be deployed and loaded by the worker instances; be sure to add the assembly that contains the Eval and Merge methods that will be involved in your parallel method invocation calls. The InvocationGridBuilder.Load method starts the workers across the farm. Once the grid is loaded and running, associate the new grid with a named cache using the NamedCache.InvocationGrid property; this assignment will cause all NamedCache.Invoke operations to run in the deployed grid workers.
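
Putting those steps together (the constructor argument and overloads are illustrative):

```csharp
using System.Reflection;
using Soss.Client;

class GridSetup
{
    static void Main()
    {
        // Hypothetical constructor taking a grid name:
        InvocationGridBuilder builder = new InvocationGridBuilder("TradeGrid");

        // Deploy the assembly containing the Eval and Merge methods
        // to every worker process.
        builder.AddDependency(Assembly.GetExecutingAssembly());

        // Start the worker processes across the farm.
        InvocationGrid grid = builder.Load();

        // Route this cache's Invoke operations to the grid's workers.
        NamedCache cache = CacheFactory.GetCache("MyCache");
        cache.InvocationGrid = grid;
    }
}
```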


Serialization

ScaleOut StateServer's .NET APIs use the .NET Framework's BinaryFormatter as the default serialization mechanism for your objects. When using the default, all objects stored in the cache must be marked with the [Serializable] attribute or implement ISerializable.
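
For example, a type stored with the default serializer only needs the standard .NET attribute:

```csharp
using System;

[Serializable]
public class ShoppingCart
{
    public string CustomerId;
    public decimal Total;

    // Fields that should not travel to the store can be excluded:
    [NonSerialized]
    private object _transientView;
}
```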

Custom serialization mechanisms can be used by calling the NamedCache.SetCustomSerialization method.

Backing Store Integration

ScaleOut StateServer can be configured to automatically populate the store with objects from a database (or any other type of backing store, such as a web service) and automatically update a database with changes made to objects in the store.

Four types of backing store operations are available:


Read-Through

When user code tries to retrieve an object that is not yet in the StateServer store, the missing object is automatically loaded into the cache from the backing store and is then returned to the user's calling code. This is a synchronous operation that is transparent to the user's calling code.


Write-Through

When a user inserts or updates an object in the cache, the change is synchronously and transparently written to the backing store.


Refresh-Ahead

After an object is inserted into the StateServer store, the service will periodically and asynchronously refresh the cached version of the object with the value from the backing store.


Write-Behind

After an object is inserted into the StateServer store, the service will periodically and asynchronously update the backing store with the latest value from the cache. It will also signal the backing store when the object has been removed.

Users must provide the logic that the ScaleOut service will use to interact with the backing store. This is accomplished by implementing the IBackingStore interface with an implementation that knows how to retrieve and update objects from the backing store. Once implemented, the NamedCache.SetBackingStoreAdapter method must be used to associate the IBackingStore adapter with a named cache, along with a provided BackingStorePolicy object that designates which of the four backing store operations should be performed.
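
A sketch of wiring up an adapter; the IBackingStore member names and the BackingStorePolicy flag shown here are assumptions, so check the interface and class references for the real contract:

```csharp
using Soss.Client;

// Hypothetical member names; conceptually, the adapter just knows how
// to load, store, and erase objects in the underlying database.
class OrderBackingStore : IBackingStore
{
    public object Load(string id)            { /* read from the database */ return null; }
    public void Store(string id, object obj) { /* write to the database */ }
    public void Erase(string id)             { /* delete from the database */ }
}

class BackingStoreSetup
{
    static void Main()
    {
        NamedCache cache = CacheFactory.GetCache("Orders");

        // Enable read-through only (the flag name is an assumption):
        BackingStorePolicy policy = new BackingStorePolicy();
        policy.EnableReadThrough = true;

        cache.SetBackingStoreAdapter(new OrderBackingStore(), policy);
    }
}
```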

When performing asynchronous backing store operations (write-behind or refresh-ahead), a cached object's CreatePolicy determines at insertion time which type of backing store operation to perform and how frequently it should occur. See the CreatePolicy.BackingStoreMode and CreatePolicy.BackingStoreInterval properties for details.

GeoServer Replication

ScaleOut StateServer's GeoServer option allows for different stores in different geographic locations to replicate and share objects over low-bandwidth WAN connections. There are a number of replication options and API usage strategies that are available to users who want to access objects in multiple datacenters. Please see the GeoServer Replication topic for details.

Users must obtain a license for ScaleOut StateServer's GeoServer Option to perform GeoServer replication.

Lightweight Authorization

With version 5.0, ScaleOut StateServer introduces a lightweight mechanism for authorizing access to named caches within the distributed store. SOSS provides two authorization mechanisms: a default login mechanism and an extensible, user-defined authorization policy.

Security Note

In the current release, this mechanism is intended for use within a secure datacenter by a single organization and does not secure the distributed store from attack.

The default login mechanism checks the application's current login name against a list of authorized login names that have been associated with the named cache using the soss.exe management tool. This tool can also be used to authorize either read/write access or read-only access. Prior to accessing a named cache, the application creates an instance of the LoginManager class and performs a Login call against the named cache. If access is denied, an exception is thrown.

NamedCache cache = CacheFactory.GetCache("MyCache");

The user can implement an extensible authorization policy by implementing the ILoginModule interface and registering the implementation with the login manager:

LoginManager.getInstance().Login("MyCache", new LoginModule());

When the application logs in to the named cache, the login module supplies SOSS with encoded credentials in the form of a byte array. The SOSS client passes these encoded credentials to the SOSS service, which supplies them to the user's authorization provider. This authorization provider must be associated with SOSS using the soss.exe management tool. The provider validates the credentials using a user-defined mechanism and then returns an authorization ticket to SOSS along with read/write or read-only authorization.

An application can explicitly log out from its access to a named cache. The scope of the logout can be specified to include only the local client or all clients accessing the SOSS store.

Local Client-Side Caching

To maximize access performance, the ScaleOut .NET APIs maintain a fully-coherent internal near cache that contains deserialized versions of objects in the store. When reading objects, this client-side cache reduces access response time by eliminating data motion and deserialization overhead for objects that have not been updated in the authoritative SOSS distributed store.

The contents of the deserialized cache will automatically be invalidated if the object is changed or removed by any of the servers in the farm, so it will never return a stale version of a cached object. The client libraries always check with the SOSS service prior to returning the object in order to make sure that the deserialized version is up-to-date. While this check requires a round-trip to the service, it is much faster than retrieving and deserializing the entire object.

The client cache is enabled by default in your client application; it can be disabled on a per-cache basis using the NamedCache.AllowClientCaching property. Please review the "Guidelines for Using the Deserialized Cache" documentation for this property for details about when it is appropriate to enable and disable the client cache.

Managing Low-Memory Scenarios

In order to provide fast access, the ScaleOut StateServer service stores all objects in the physical memory of the host systems that comprise the distributed store. The number of objects that can be stored is constrained by the available memory across these host systems. Although these systems should be provisioned with sufficient memory to accommodate an application's peak load, exceptional situations may arise in which your object load exceeds the available memory. ScaleOut StateServer provides a mechanism for evicting the least recently accessed objects when a high memory threshold is reached.

ScaleOut StateServer's soss_params.txt file contains a setting called lru_threshold that controls how the service behaves in low-memory scenarios. Two options are available:

  1. Disallow creation of more objects in the store: This is the default behavior of ScaleOut StateServer. If no free memory is available, client applications will receive a StateServerException when attempting to insert a new object. The lru_threshold setting should be set to 100 for this behavior.

  2. Remove the least-recently-used objects: If the lru_threshold setting is below 100, the ScaleOut service will evict the least-recently-used objects to make room in memory for new ones. A side effect of this behavior is that some objects may be removed from the store prior to their expected timeouts, and the NamedCache.ObjectExpired event will be fired for these objects just prior to their removal. Individual objects can be made ineligible for memory reclamation using the CreatePolicy.PreemptionPriority property.
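
For example, to let the service begin LRU eviction when memory use crosses 95 percent, the soss_params.txt entry would look something like this (the value is illustrative, and the exact key/value syntax should match the other entries in your file):

```
lru_threshold 95
```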

Please note that an application can generally attempt to create objects faster than the eviction mechanism can remove them. Hence, an unusually high creation rate could overwhelm the LRU reclamation mechanism and lead to an exception when attempting to create new objects. Adequate memory should be provided, and the LRU threshold should be set low enough to properly handle a burst of creation requests. This behavior will vary on an application-by-application basis.