Programming Azure TableStorage: Repository Pattern - Part 1


Repository Pattern: "Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects." - by Edward Hieatt and Rob Mee

The Repository pattern, to me, is simply an in-memory database that abstracts away the actual operations one needs to perform against the physical database (the CRUD operations). One could implement one's own caching mechanisms in the repository, as well as perform transactional operations.

You can learn more about the Repository Pattern here and here.

In my previous blog I mentioned the class TableEntity and how one should create their own BaseTableEntity. In this post I will show how to use the Repository pattern to perform Create, Update and Delete operations; I will keep reading and the Query Object pattern for the next one.

ITableStore<TTable>

One of the most important and efficient features that .NET provides is generics; here we can use a single generic TableStore class to cater to any entity the client requires. First we declare a contract for the table store.

public interface ITableStore<TTable> : ITableReader<TTable>
    where TTable : BaseTableEntity, new()
{
    void Add(TTable item);
    void Upsert(TTable item);
    void Delete(TTable item);
    void SubmitChanges();
}

The generic constraint here restricts the store to entities that derive from BaseTableEntity and have a default constructor, which is used to deserialize the entity when it is read back from storage.
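
For illustration, a concrete entity that satisfies this constraint could look like the following; CustomerEntity is a hypothetical example, and BaseTableEntity is the base class from the previous post.

//Hypothetical sample entity; BaseTableEntity (from the previous post) ultimately derives from TableEntity
public class CustomerEntity : BaseTableEntity
{
    //Default constructor required by the new() constraint, used when deserializing from storage
    public CustomerEntity() { }

    public CustomerEntity(string partitionKey, string rowKey)
    {
        this.PartitionKey = partitionKey;
        this.RowKey = rowKey;
    }

    public string Name { get; set; }
    public string Email { get; set; }
}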

Implementation: TableStore<TTable>

The contract above declares the methods meant for Create/Update/Delete of the entities, while SubmitChanges is the one that does the heavy lifting of actually storing the data in the cloud.

The advantage of this pattern (also called Unit of Work) is that we can maintain transactions and also do bulk operations (very helpful when using cloud storage).

TableStorage for Windows Azure has a batch operation class called TableBatchOperation, which holds the operations (Add, Delete and Update) as a list; the whole batch can then be sent for execution in one go using CloudTable.ExecuteBatch for synchronous execution or CloudTable.BeginExecuteBatch for asynchronous execution. In my implementation of ITableStore I use asynchronous execution.
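
To make the batch mechanics concrete, here is a minimal sketch of the raw API used outside the repository; the CloudTable reference (table) and the CustomerEntity type are assumptions carried over from the example above.

//Minimal sketch of the raw batch API (table is an existing CloudTable reference)
var batch = new TableBatchOperation();
batch.InsertOrReplace(new CustomerEntity("customers", "1") { Name = "Jane" });
batch.InsertOrReplace(new CustomerEntity("customers", "2") { Name = "John" });

//Synchronous execution; all operations in a batch must target the same partition key
//and a single batch is limited to 100 operations
var results = table.ExecuteBatch(batch);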

Below is the implementation of TableStore:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;
using Microsoft.WindowsAzure.Storage.Table.DataServices;

public class TableStore<TTable> : ITableStore<TTable>
    where TTable : BaseTableEntity, new()
{
    private readonly string tableName;
    private TableServiceContext _dataServiceContext;
    private int objectsAdded = 0;
    private TableBatchOperation tableBatchOperation;

    internal CloudTable CloudTable
    {
        get { return this._dataServiceContext.ServiceClient.GetTableReference(tableName); }
    }

    public TableStore(ICloudTableContext dataContext)
    {
        this.tableName = RepositoryHelper.GetTableNameFromType<TTable>();
        this._dataServiceContext = new TableServiceContext(dataContext.TableClient);
        this._dataServiceContext.IgnoreResourceNotFoundException = true;
        //Create table if not exists
        this.TryCreateTable();
        this.tableBatchOperation = new TableBatchOperation();
    }

    private void TryCreateTable()
    {
        this.CloudTable.CreateIfNotExists();
    }

    #region ITableStore<TTable> Members

    public void Add(TTable item)
    {
        //Insert fails if an entity with the same keys already exists; use Upsert for insert-or-replace semantics
        this.tableBatchOperation.Insert(item);
        objectsAdded++;
        //WEAK CODE
        if (objectsAdded == 10)
        {
            //submit the changes implicitly
            this.SubmitChanges();
        }
    }

    public void Upsert(TTable item)
    {
        this.tableBatchOperation.InsertOrReplace(item);
        objectsAdded++;
        //WEAK CODE 
        if (objectsAdded == 10)
        {
            //submit the changes implicitly 
            this.SubmitChanges();
        }
    }

    public void Delete(TTable item)
    {
        this.tableBatchOperation.Delete(item);
    }

    public void SubmitChanges()
    {
        try
        {
            if (this.tableBatchOperation.Count > 0)
            {
                //Copy the pending operations so the instance-level batch can be reset
                //immediately while the copy is executed asynchronously
                TableBatchOperation operationAsBatch = new TableBatchOperation();
                foreach (var item in this.tableBatchOperation)
                {
                    operationAsBatch.Add(item);
                }
                this.SubmitChanges(operationAsBatch);
            }
        }
        finally
        {
            objectsAdded = 0;
            this.tableBatchOperation = new TableBatchOperation();
        }
    }

    private void SubmitChanges(TableBatchOperation batchOperation)
    {
        if (batchOperation.Count > 0)
        {
            //Fire the batch asynchronously; SubmitChangesCompleted handles completion
            this.CloudTable.BeginExecuteBatch(batchOperation,
                this.SubmitChangesCompleted, batchOperation);
        }
    }

    private void SubmitChangesCompleted(IAsyncResult result)
    {
        try
        {
            this.CloudTable.EndExecuteBatch(result);
        }
        catch (StorageException ex)
        {
            //The index of the failed operation has to be parsed out of the
            //exception message (hence the Split on ':')
            int index = -1;
            if (int.TryParse(ex.Message.Split(':')[1], out index))
            {
                //Drop the offending entity and retry the rest of the batch
                var tableBatch = result.AsyncState as TableBatchOperation;
                tableBatch.RemoveAt(index);
                this.SubmitChanges(tableBatch);
            }
        }
    }

    #endregion
}

Note: The exception handling in the submit-completed method is crude, but Microsoft hasn't given us a better way yet; I hope they do.
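
Putting it together, a client might use the store like this. This is only a sketch: CloudTableContext and its connection-string constructor stand in for whatever ICloudTableContext implementation you have, and CustomerEntity is the hypothetical entity shown earlier.

//Hypothetical ICloudTableContext implementation and connection string
var context = new CloudTableContext("UseDevelopmentStorage=true");
ITableStore<CustomerEntity> store = new TableStore<CustomerEntity>(context);

store.Add(new CustomerEntity("customers", "1") { Name = "Jane", Email = "jane@example.com" });
store.Upsert(new CustomerEntity("customers", "2") { Name = "John", Email = "john@example.com" });

//Flush the pending batch to TableStorage (also happens implicitly every 10 adds)
store.SubmitChanges();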

In the next blog, we will go through reading from and querying the TableStorage.
