August 2010

Volume 25 Number 08

Smart Client - Building Distributed Apps with NHibernate and Rhino Service Bus, Part 2

By Oren Eini | August 2010

In the July 2010 issue of MSDN Magazine, I started walking through the process of building a smart client application for a lending library. I called the project Alexandria, and decided to use NHibernate for data access and Rhino Service Bus for reliable communication with the server.

NHibernate (nhibernate.info) is an object-relational mapping (O/RM) framework, and Rhino Service Bus (github.com/rhino-esb/rhino-esb) is an open source service bus implementation built on the Microsoft .NET Framework. I happen to be deeply involved in developing both of these frameworks, so it seemed like an opportunity to implement a project with technologies I know intimately, while at the same time providing a working example for developers who want to learn about NHibernate and Rhino Service Bus.

In the previous article, I covered the basic building blocks of the smart client application. I designed the back end, along with the communication mode between the smart client application and the back end. I also touched on batching and caching, how to manage transactions and the NHibernate session, how to consume and reply to messages from the client, and how everything comes together in the bootstrapper.

In this installment, I will cover best practices for sending data between the back end and the smart client application, as well as patterns for distributed change management. Along the way, I’ll cover the remaining implementation details, and will present a completed client for the Alexandria application.

You can download the sample solution from github.com/ayende/alexandria. The solution comprises three parts: Alexandria.Backend hosts the back-end code; Alexandria.Client contains the front-end code; Alexandria.Messages contains the message definitions shared between them.

No One Model Rules

One of the most common questions people ask when writing distributed applications is: How can I send my entities to the client application and then apply the change set on the server side?

If that’s your question, you’re probably thinking in a mode where the server side is mostly a data repository. If you build such applications, there are technology choices you can make that simplify this task (for example, employing WCF RIA Services and WCF Data Services). Using the type of architecture I’ve outlined so far, however, it doesn’t really make sense to talk about sending entities on the wire. Indeed, the Alexandria application uses three distinct models for the same data, each model best suited for different parts of the application.

The domain model on the back end, which is used for querying and transactional processing, is suitable for use with NHibernate (a further refinement would be to split the querying and transactional processing responsibilities). The message model represents messages on the wire, including some concepts that map closely to domain entities (BookDTO in the sample project is a data clone of Book). In the client application, the View Model (like the BookModel class) is optimized to be bound to the XAML and to handle user interactions.

While at first glance you can see many commonalities among the three models (Book, BookDTO, BookModel), the fact that they have different responsibilities means that trying to cram all of them into a single model would create a cumbersome, heavyweight, one-size-doesn’t-fit-anyone model. By splitting the model along the lines of responsibilities, I made the work much easier because I can refine each model independently to fit its own purposes.
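To make the distinction concrete, here is a minimal sketch of what the three models might look like. The class names come from the sample, but the identifier type and the exact members shown are illustrative assumptions rather than the actual sample code:

using System.ComponentModel;

// Domain entity (back end): mapped by NHibernate, so members are virtual
// to allow lazy-loading proxies; behavior lives here alongside the data.
public class Book {
  public virtual int Id { get; set; }
  public virtual string Name { get; set; }
}

// Message model (Alexandria.Messages): a pure data carrier for the wire.
public class BookDTO {
  public int Id { get; set; }
  public string Name { get; set; }
}

// View model (front end): shaped for XAML binding, so it raises change
// notifications when the data it exposes changes.
public class BookModel : INotifyPropertyChanged {
  private string name;
  public int Id { get; set; }
  public string Name {
    get { return name; }
    set {
      name = value;
      if (PropertyChanged != null)
        PropertyChanged(this, new PropertyChangedEventArgs("Name"));
    }
  }
  public event PropertyChangedEventHandler PropertyChanged;
}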

From a conceptual point of view, there are other reasons to want to create a separate model for each usage. An object is a combination of data and behavior, but when you try to send an object over the wire, the only thing you can send is the data. That leads to some interesting questions. Where do you place business logic that should run on the back-end server? If you put it in the entities, what happens if you execute this logic on the client?

The end result of this sort of architecture is that you aren’t using real objects. Instead, you’re using data objects—objects that are simply holding the data—and the business logic resides elsewhere, as procedures that run over the object data. This is frowned upon, because it leads to scattering of logic and code that’s harder to maintain over time. No matter how you look at it, unless the back-end system is a simple data repository, you want to have different models in different parts of the application. That, of course, leads to a very interesting question: how are you going to handle changes?

Commands over Change Sets

Among the operations I allow users in the Alexandria application are adding books to their queue, reordering books in the queue, and removing them entirely from the queue, as shown in Figure 1. Those operations need to be reflected in both the front end and the back end.

Figure 1 Possible Operations on the User’s Books Queue

I could try to implement this by serializing the entities over the wire and sending the modified entity back to the server for persistence. Indeed, NHibernate contains explicit support for just such scenarios, using the session.Merge method.
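To see what that change-set style would look like, here is a minimal sketch of a consumer that persists whatever the client modified; the UpdateBook message is hypothetical, but ISession.Merge is the real NHibernate API for reattaching a detached instance:

// Hypothetical change-set style message: the client sends back the whole
// modified entity.
public class UpdateBook {
  public Book Book { get; set; }
}

public class UpdateBookConsumer : ConsumerOf<UpdateBook> {
  private readonly ISession session;
  public UpdateBookConsumer(ISession session) {
    this.session = session;
  }
  public void Consume(UpdateBook message) {
    // Merge copies the detached instance's state onto the persistent one,
    // loading it from the database if necessary.
    session.Merge(message.Book);
    // All the server sees here is what data changed, not why it changed.
  }
}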

However, let’s assume the following business rule: When a user adds a book to her queue from the recommendations list, that book is removed from the recommendations and another recommendation is added.

Imagine trying to detect that a book was moved from the recommendations list to the queue using just the previous and current state (the change set between the two states). While it can be done, to say that it would be awkward to handle is an understatement.

I call such architectures Trigger-Oriented Programming. Like triggers in a database, what you have in a system based on change sets is code that deals mostly with data. To provide some meaningful business semantics, you have to extract the meaning of the changes from the change set by brute force and luck.

There’s a reason that triggers containing logic are considered an anti-pattern. Though appropriate for some things (such as replication or pure data operations), trying to implement business logic using triggers is a painful process that leads to a system that’s hard to maintain.

Most systems that expose a CRUD interface and allow you to write business logic in methods such as UpdateCustomer are giving you Trigger-Oriented Programming as the default (and usually the only choice). When there isn’t significant business logic involved—when the system as a whole is mostly about CRUD—this type of architecture makes sense, but in most applications, it’s not appropriate and not recommended.

Instead, an explicit interface (RemoveBookFromQueue and AddBookToQueue, for instance) results in a system that’s much easier to understand and reason about. The ability to exchange information at this high level allows a great degree of freedom and easy modification down the road. After all, you don’t have to figure out where a piece of functionality lives based on what data it manipulates; the architecture spells out exactly where it happens.

The implementation in Alexandria follows the explicit interface principle; the code for invoking those operations resides in the application model and is shown in Figure 2. I’m doing several interesting things here, so let’s handle each of them in order.

Figure 2 Adding a Book to the User’s Queue on the Front End

public void AddToQueue(BookModel book) {
  Recommendations.Remove(book);
  if (Queue.Any(x => x.Id == book.Id) == false) 
    Queue.Add(book);
  bus.Send(
    new AddBookToQueue {
      UserId = userId, BookId = book.Id
    },
    new MyQueueQuery {
      UserId = userId
    },
    new MyRecommendationsQuery {
      UserId = userId
    });
}

First, I modify the application model directly to immediately reflect the user’s desires. I can do this because adding a book to the user’s queue is an operation that is guaranteed never to fail. I also remove it from the recommendations list, because it doesn’t make sense to have an item on the user’s queue also appear on the recommendations list.

Next, I send a message batch to the back-end server, telling it to add the book to the user’s queue, and also to let me know what the user’s queue and recommendations are after this change. This is an important concept to understand.

The ability to compose commands and queries in this manner means that you don’t take special steps in commands like AddBookToQueue to get the changed data to the user. Instead, the front end can ask for it as part of the same message batch and you can use existing functionality to get this data.
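The messages themselves are deliberately dumb data carriers. Based on the members used in Figure 2, the definitions in Alexandria.Messages presumably look something like this (the identifier type is an assumption, and the actual classes may carry additional members):

// Command: tell the back end to perform an operation.
public class AddBookToQueue {
  public int UserId { get; set; }
  public int BookId { get; set; }
}

// Queries: ask the back end for the current state after the change.
public class MyQueueQuery {
  public int UserId { get; set; }
}

public class MyRecommendationsQuery {
  public int UserId { get; set; }
}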

There are two reasons I request the data from the back-end server even though I make the modifications in memory. First, the back-end server may execute additional logic (such as finding new recommendations for this user) that will result in modifications you don’t know about on the front-end side. Second, the reply from the back-end server will update the cache with the current status.

Disconnected Local State Management

You might have noticed a problem in Figure 2 with regard to disconnected work. I make the modification in memory, but until I get a reply back from the server, the cached data isn’t going to reflect those changes. If I restart the application while still disconnected, the app will display stale information. Once communication with the back-end server resumes, the messages will flow to the back end and the final state will resolve to what the user is expecting. But until then, the application displays information that the user has already changed locally.

For applications that expect extended periods of disconnection, don’t rely only on the message cache; instead implement a model that’s persisted after each user operation.

For the Alexandria application, I extended the caching conventions to immediately expire any information that’s part of a command-and-queries message batch such as the one in Figure 2. That way, I won’t have the up-to-date information, but I also won’t show erroneous information if the application is restarted before I get a reply from the back-end server. For the purposes of the Alexandria application, that’s enough.
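The convention itself is small. Here is a minimal sketch of the idea, assuming hypothetical ICommand and IQuery marker interfaces and an IMessageCache abstraction; this is not the actual Rhino Service Bus API, just an illustration of how such a convention can work:

public interface ICommand { }  // marker for commands (assumption)
public interface IQuery { }    // marker for queries (assumption)

public interface IMessageCache {
  void Remove(object queryMessage);  // evict the cached reply for a query
}

// Before a batch goes on the wire, if it mixes commands with queries,
// expire the cached replies for those queries so a restart won't show
// stale data while the command is still in flight.
public class ExpireCachedQueriesInCommandBatches {
  private readonly IMessageCache cache;
  public ExpireCachedQueriesInCommandBatches(IMessageCache cache) {
    this.cache = cache;
  }
  public void BeforeSend(object[] batch) {
    var containsCommand = false;
    foreach (var msg in batch) {
      if (msg is ICommand)
        containsCommand = true;
    }
    if (containsCommand == false)
      return;
    foreach (var msg in batch) {
      if (msg is IQuery)
        cache.Remove(msg);
    }
  }
}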

Back-End Processing

Now that you understand how the process works on the front-end side of things, let’s look at the code from the back-end server point of view. You’re already familiar with the handling of queries, which I showed in the previous article. Figure 3 shows the code for handling a command.

Figure 3 Adding a Book to the User’s Queue

public class AddBookToQueueConsumer : 
  ConsumerOf<AddBookToQueue> {
  private readonly ISession session;
  public AddBookToQueueConsumer(ISession session) {
    this.session = session;
  }
  public void Consume(AddBookToQueue message) {
    var user = session.Get<User>(message.UserId);
    var book = session.Get<Book>(message.BookId);
    Console.WriteLine("Adding {0} to {1}'s queue",
      book.Name, user.Name);
    user.AddToQueue(book);
  }
}

The actual code is pretty boring. I load the relevant entities and then call a method on the entity to perform the actual task. However, this is more important than you might think. An architect’s job, I’d argue, is to make sure that the developers in the project are as bored as possible. Most business problems are boring, and by removing technological complexities from the system, you get a much higher percentage of developer time spent working on boring business problems instead of interesting technological problems.

What does that mean in the context of Alexandria? Instead of spreading business logic in all the message consumers, I have centralized as much of the business logic as possible in the entities. Ideally, consuming a message follows this pattern:

  • Load any data required to process the message
  • Call a single method on a domain entity to perform the actual operation

This process ensures that the domain logic is going to remain in the domain. As for what that logic is—well, that’s up to the scenarios you need to handle. This should give you an idea about how I handle the domain logic in the case of User.AddToQueue(book):

public virtual void AddToQueue(Book book) {
  if (Queue.Contains(book) == false)
    Queue.Add(book);
  Recommendations.Remove(book);
  // Any other business logic related to 
  // adding a book to the queue
}

You’ve seen a case where the front-end logic and the back-end logic match exactly. Now let’s look at a case where they don’t. Removing a book from the queue is straightforward on the front end (see Figure 4): you remove the book from the queue locally (which removes it from the UI), then send a message batch to the back end, asking it to remove the book from the queue and to update the queue and the recommendations.

Figure 4 Removing a Book from the Queue

public void RemoveFromQueue(BookModel book) {
  Queue.Remove(book);
  bus.Send(
    new RemoveBookFromQueue {
      UserId = userId,
      BookId = book.Id
    },
    new MyQueueQuery {
      UserId = userId
    },
    new MyRecommendationsQuery {
      UserId = userId
    });
}

On the back end, consuming the RemoveBookFromQueue message follows the pattern shown in Figure 3, loading the entities and calling the user.RemoveFromQueue(book) method:

public virtual void RemoveFromQueue(Book book) {
  Queue.Remove(book);
  // If it was on the queue, it probably means that the user
  // might want to read it again, so let us recommend it
  Recommendations.Add(book);
  // Business logic related to removing book from queue
}
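The consumer itself isn’t shown in a figure because it follows the Figure 3 pattern almost verbatim; here is a sketch of what it might look like (the actual class in the sample may differ slightly):

public class RemoveBookFromQueueConsumer :
  ConsumerOf<RemoveBookFromQueue> {
  private readonly ISession session;
  public RemoveBookFromQueueConsumer(ISession session) {
    this.session = session;
  }
  public void Consume(RemoveBookFromQueue message) {
    // Load the entities, then delegate to the domain method above.
    var user = session.Get<User>(message.UserId);
    var book = session.Get<Book>(message.BookId);
    user.RemoveFromQueue(book);
  }
}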

The behavior is different between the front end and the back end. On the back end, I add the removed book to the recommendations, which I don’t do on the front end. What would be the result of the disparity?

Well, the immediate response would be to remove the book from the queue, but as soon as the replies from the back-end server reach the front end, you’ll see the book added to the recommendations list. In practice, you’d probably notice the difference only if the back-end server was down when you removed a book from the queue.

Which is all very nice, but what about when you actually need confirmation from the back-end server to complete an operation?

Complex Operations

When the user wants to add, remove or reorder items in her queue, it’s pretty obvious that the operation can never fail, so you can allow the application to immediately accept the operation. But for operations such as editing addresses or changing the credit card, you can’t just accept the operation until you have a confirmation of success from the back end.

In Alexandria, this is implemented as a four-stage process. It sounds scary, but it’s really quite simple. Figure 5 shows the possible stages.

Figure 5 Four Possible Stages for a Command Requiring Confirmation

The top-left screenshot shows the normal view of the subscription details. This is how Alexandria shows confirmed changes. The bottom-left screenshot shows the edit screen for the same data. Clicking the save button on this screen results in the screenshot shown on the top-right; this is how Alexandria shows unconfirmed changes.

In other words, I accept the change (provisionally) until I get a reply back from the server indicating that the change was accepted (which moves us back to the top-left screen) or rejected, which moves the process to the bottom-right screenshot. That screenshot shows an error from the server and allows the user to fix the erroneous detail.
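The four stages map directly onto a ViewMode enumeration. Judging by the values used in Figures 7 and 8 and in the Save method shown later, it presumably looks like this:

// The four UI states from Figure 5, as used by the SubscriptionDetails
// view model.
public enum ViewMode {
  Confirmed,       // normal, confirmed view (top left)
  Editing,         // edit screen (bottom left)
  ChangesPending,  // saved locally, awaiting back-end confirmation (top right)
  Error            // the back end rejected the change (bottom right)
}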

The implementation isn’t complex, despite what you may think. I’ll start in the back end and move outward. Figure 6 shows the back-end code required to handle this and it isn’t anything new. I’ve been doing much the same thing throughout this article. Most of the conditional command functionality (and complexity) lives in the front end.

Figure 6 Back-End Handling of Changing a User’s Address

public void Consume(UpdateAddress message) {
  int result;
  // pretend we call some address validation service
  if (int.TryParse(message.Details.HouseNumber, out result) == false ||
      result % 2 == 0) {
    bus.Reply(new UpdateAddressResult {
      Success = false,
      ErrorMessage = "House number must be odd number",
      UserId = message.UserId
    });
  }
  else {
    var user = session.Get<User>(message.UserId);
    user.ChangeAddress(
      message.Details.Street,
      message.Details.HouseNumber,
      message.Details.City, 
      message.Details.Country, 
      message.Details.ZipCode);
    bus.Reply(new UpdateAddressResult {
      Success = true,
      UserId = message.UserId
    });
  }
}

One thing that’s different from what you’ve seen before is that here I have an explicit success/fail result for the operation, while before I simply requested a data refresh in a separate query. The operation can fail, and I want to know not just whether it succeeded, but why it failed.
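That success/failure information travels in the reply message. Based on the members used in Figure 6 and Figure 8, its definition is presumably along these lines (the identifier type is an assumption):

// Reply sent by the back end for the UpdateAddress command.
public class UpdateAddressResult {
  public bool Success { get; set; }
  public string ErrorMessage { get; set; }
  public int UserId { get; set; }
}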

Alexandria makes use of the Caliburn framework to handle much of the drudgery of managing the UI. Caliburn (caliburn.codeplex.com) is a WPF/Silverlight framework that relies heavily on conventions to make it easy to build much of the application functionality in the application model rather than writing code in the XAML code-behind.

As you’ll see from looking at the sample code, just about everything in the Alexandria UI is wired up via the XAML using conventions, giving you both clear, easy-to-understand XAML and an application model that directly reflects the UI without having a direct dependency on it. This results in significantly simpler code.

Figure 7 should give you an idea of how this is implemented in the SubscriptionDetails view model. In essence, SubscriptionDetails contains two copies of the data: one is held in the Editable property, which is what all the views relating to editing or displaying unconfirmed changes show; the second is held in the Details property, which holds the confirmed changes. Each mode has a different view, and each mode’s view selects which property to display the data from.

Figure 7 Moving Between View Modes in Response to User Input

public void BeginEdit() {
  ViewMode = ViewMode.Editing;
  Editable.Name = Details.Name;
  Editable.Street = Details.Street;
  Editable.HouseNumber = Details.HouseNumber;
  Editable.City = Details.City;
  Editable.ZipCode = Details.ZipCode;
  Editable.Country = Details.Country;
  // This field is explicitly omitted
  // Editable.CreditCard = Details.CreditCard;
  ErrorMessage = null;
}
public void CancelEdit() {
  ViewMode = ViewMode.Confirmed;
  Editable = new ContactInfo();
  ErrorMessage = null;
}

In the XAML, I wired the ViewMode binding to select the appropriate view to show for every mode. In other words, switching the mode to Editing will result in the Views.SubscriptionDetails.Editing.xaml view being selected to show the edit screen for the object.

It’s the save and confirmation processes that you’ll be most interested in, however. Here’s how I handle saving:

public void Save() {
  ViewMode = ViewMode.ChangesPending;
  // Add logic to handle credit card changes
  bus.Send(new UpdateAddress {
    UserId = userId,
    Details = new AddressDTO {
      Street = Editable.Street,
      HouseNumber = Editable.HouseNumber,
      City = Editable.City,
      ZipCode = Editable.ZipCode,
      Country = Editable.Country,
    }
  });
}

The only thing I’m actually doing here is sending a message and switching the view to a non-editable one with a marker saying that the changes haven’t yet been accepted. Figure 8 shows the code for confirmation or rejection. All in all, it’s a minuscule amount of code to implement such a feature, and it lays the foundation for implementing similar features in the future.

Figure 8 Consuming the Reply and Handling the Result

public class UpdateAddressResultConsumer : 
  ConsumerOf<UpdateAddressResult> {
  private readonly ApplicationModel applicationModel;
  public UpdateAddressResultConsumer(
    ApplicationModel applicationModel) {
    this.applicationModel = applicationModel;
  }
  public void Consume(UpdateAddressResult message) {
    if(message.Success) {
      applicationModel.SubscriptionDetails.CompleteEdit();
    }
    else {
      applicationModel.SubscriptionDetails.ErrorEdit(
        message.ErrorMessage);
    }
  }
}
//from SubscriptionDetails
public void CompleteEdit() {
  Details = Editable;
  Editable = new ContactInfo();
  ErrorMessage = null;
  ViewMode = ViewMode.Confirmed;
}
public void ErrorEdit(string theErrorMessage) {
  ViewMode = ViewMode.Error;
  ErrorMessage = theErrorMessage;
}

You also need to consider classic request/response calls, such as searching the catalog. Because communication in such calls is accomplished via one-way messages, you need to change the UI to indicate background processing until the response from the back-end server arrives. I won’t go over that process in detail, but the code for doing it exists in the sample application.
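The pattern is the same one used throughout: set a flag the UI can bind to a progress indicator, send the query as a one-way message, and clear the flag when the reply arrives. Here is a rough sketch, where all of the names (SearchCatalogQuery, SearchCatalogResponse, IsSearching, SearchResults) are illustrative assumptions rather than the sample’s actual code:

// In the application model: flip the busy flag and fire the query.
public void Search(string searchTerm) {
  IsSearching = true;
  bus.Send(new SearchCatalogQuery {
    UserId = userId,
    SearchTerm = searchTerm
  });
}

// Front-end consumer: update the model and turn the indicator off.
public class SearchCatalogResponseConsumer :
  ConsumerOf<SearchCatalogResponse> {
  private readonly ApplicationModel applicationModel;
  public SearchCatalogResponseConsumer(ApplicationModel applicationModel) {
    this.applicationModel = applicationModel;
  }
  public void Consume(SearchCatalogResponse message) {
    applicationModel.SearchResults = message.Books;
    applicationModel.IsSearching = false;
  }
}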

Checking Out

At the beginning of this project, I started by stating the goals and challenges I anticipated facing in building such an application. The major challenges I intended to address were data synchronization, the fallacies of distributed computing, and handling an occasionally connected client. Looking back, I think Alexandria does a good job of meeting my goals and overcoming the challenges.

The front-end application is based on WPF and makes heavy use of the Caliburn conventions to reduce the amount of code in the application model. The model is bound to the XAML views, and a small set of front-end message consumers make calls into the application model.

I covered handling one-way messaging, caching messages at the infrastructure layer and allowing for disconnected work even for operations that require back-end approval before they can really be considered complete.

On the back end, I built a message-based application based on Rhino Service Bus and NHibernate. I discussed managing the session and transaction lifetimes, and how you can take advantage of the NHibernate first-level cache using message batches. The message consumers on the back end serve either as simple query handlers or as delegators to the appropriate method on a domain object, where most of the business logic actually resides.

Forcing the use of explicit commands rather than a simple CRUD interface results in clearer code. It also allows you to change the code easily, because the entire architecture is focused on clearly defining the role of each piece of the application and how it should be built. The end result is a well-structured product with clear lines of responsibility.

It’s hard to try to squeeze guidance for a full-blown distributed application architecture into a few short articles, especially while trying to introduce several new concepts at the same time. Still, I think you’ll find that applying the practices outlined here will result in applications that are actually easier to work with than the more traditional RPC- or CRUD-based architectures.


Oren Eini (who works under the pseudonym Ayende Rahien) is an active member of several open source projects (NHibernate and Castle among them) and is the founder of many others (Rhino Mocks, NHibernate Query Analyzer and Rhino Commons among them). Eini is also responsible for the NHibernate Profiler (nhprof.com), a visual debugger for NHibernate. You can follow his work at ayende.com/Blog.