Wednesday, December 21, 2011

Using WS-Discovery in WCF 4.0

Runtime endpoint discovery is one of the most challenging capabilities to implement in service-oriented systems. Dynamically resolving service endpoints based on predefined criteria is essential for interacting with services whose endpoint addresses change frequently. WS-Discovery is an OASIS standard that defines a lightweight mechanism for discovering services based on multicast messages. Essentially, WS-Discovery enables a service to send a Hello announcement message when it is initialized and a Bye message when it is removed from the network. Clients can discover services by multicasting a Probe message, to which a service can reply with a ProbeMatch message containing the information necessary to contact it. Additionally, clients can find services whose endpoints have changed by issuing a Resolve message, to which the service responds with a ResolveMatch message.




Contrary to other WS-* protocols, WS-Discovery has found great adoption among network device builders because it streamlines the interactions between these types of devices. For instance, a printer can use WS-Discovery to announce its presence on a network so that it can be discovered by the different applications that need to print documents. Windows Vista's contact location system is another example of a technology based on WS-Discovery.




The 4.0 release of Windows Communication Foundation includes an implementation of WS-Discovery that makes service endpoints discoverable at runtime. WCF enables the WS-Discovery capabilities in two fundamental modes: Managed and Ad-Hoc. The managed mode assumes a centralized component called a discovery proxy that serves as a persistent repository for all the services in a network. When a service is initialized it publishes its details to the discovery proxy so that it becomes accessible to the different clients in the network.



Contrary to the managed mode, the Ad-Hoc mechanism does not rely on a centralized discovery proxy. In this mode, services publish their presence on the network by multicasting announcement messages that can be processed by interested consumers. Additionally, clients can multicast discovery messages through the network in order to find a service that matches predefined criteria.



WCF's WS-Discovery managed mode will be the subject of a future post. Today we would like to illustrate the details of enabling dynamic discovery using the WS-Discovery Ad-Hoc mode in WCF 4.0. This mode is traditionally simpler to implement than the managed mode, although it can introduce some challenges from a management standpoint.



WCF 4.0 abstracts the WS-Discovery Ad-Hoc mode using the ServiceDiscoveryBehavior, which indicates that a service is discoverable, and the UdpDiscoveryEndpoint, which instantiates a service endpoint that listens for discovery requests. The remainder of this post provides a practical example of using the WS-Discovery Ad-Hoc mode in WCF 4.0.



Let’s start with the following WCF service.



public class SampleService : ISampleService
{
    public string Echo(string msg)
    {
        return msg;
    }
}

[ServiceContract]
public interface ISampleService
{
    [OperationContract]
    string Echo(string msg);
}
Figure: Sample WCF Service

In order to make the service discoverable we first need to add the ServiceDiscoveryBehavior to the service's behaviors collection. As explained previously, this behavior indicates to the WCF runtime that the service supports the WS-Discovery protocol.


// baseAddress holds the base URI of the service (elided in the original)
using (ServiceHost host = new ServiceHost(typeof(SampleService), new Uri(baseAddress)))
{
    ...
    host.AddServiceEndpoint(typeof(ISampleService), new BasicHttpBinding(), String.Empty);
    ServiceDiscoveryBehavior discoveryBehavior = new ServiceDiscoveryBehavior();
    host.Description.Behaviors.Add(discoveryBehavior);
    ...
}

Figure: Adding the service discovery behavior
The next step is to add the UdpDiscoveryEndpoint to the list of service endpoints so that our service can start listening for WS-Discovery messages.


host.AddServiceEndpoint(new UdpDiscoveryEndpoint());

Figure: Adding a UDP discovery endpoint

At this point our service is ready to receive and interpret WS-Discovery messages from the different clients on the network. However, those clients are not yet aware of the existence of the service because it hasn't published the Hello announcement message. We can accomplish this by simply adding a new UdpAnnouncementEndpoint to the discovery behavior's announcement endpoint collection.

discoveryBehavior.AnnouncementEndpoints.Add(new UdpAnnouncementEndpoint());

Figure: Adding a UDP announcement endpoint

In order to dynamically discover services using the Ad-Hoc mode, a WCF client instantiates a DiscoveryClient that uses a discovery endpoint specifying where to send Probe or Resolve messages. The client then calls Find, passing the search criteria in a FindCriteria instance. If matching services are found, Find returns a collection of EndpointDiscoveryMetadata. The following code illustrates that concept.



DiscoveryClient discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());
FindResponse discoveryResponse = discoveryClient.Find(new FindCriteria(typeof(ISampleService)));
EndpointAddress address = discoveryResponse.Endpoints[0].Address;

SampleServiceClient service = new SampleServiceClient(new BasicHttpBinding(), address);
service.Echo("WS-Discovery test");


Figure: WCF WS-Discovery client

The WCF implementation of the WS-Discovery Ad-Hoc mode presents various aspects that I think are worth highlighting. First, WCF uses specialized discovery and announcement endpoints to process WS-Discovery messages, isolating them from the service's messages. Additionally, the use of service behaviors allows developers to incorporate WS-Discovery capabilities as they are required without interfering with normal service operation. Finally, WCF clients can simply use the discovery client to dynamically resolve the service endpoint without having to make major modifications to their business logic.



Monday, November 21, 2011

Introduction to jQuery for ASP.NET

If you are keeping yourself updated with the latest in the .NET sphere, you are probably aware that Microsoft has provided built-in support for jQuery in Visual Studio 2010. Though it was possible to use jQuery with ASP.NET even before VS 2010, formally including jQuery as part of a website created using VS2010 means that more and more developers are going to learn and use it. If you haven't tried jQuery yet, this article series will teach you everything needed to master jQuery and use it in ASP.NET applications.

What is jQuery?

The official website for jQuery defines jQuery as follows: "jQuery is a fast and concise JavaScript Library that simplifies HTML document traversing, event handling, animating, and Ajax interactions for rapid web development. jQuery is designed to change the way that you write JavaScript." Let's try to understand this description of jQuery in a bit more detail.

jQuery is a JavaScript library

As an ASP.NET developer you must have used JavaScript in one way or the other. Standard JavaScript no doubt helps you code rich and responsive web pages, but the amount of code that you need to write is often too much. For example, writing a fancy popup menu from the ground up using JavaScript can be a time-consuming task. To simplify such development and make you more productive, several JavaScript libraries are available, and jQuery is one of them. There are others such as MooTools, Prototype and Dojo. The fact that Microsoft is supporting jQuery in its products and investing resources into it clearly indicates its popularity. As you would expect, jQuery is cross-browser and supports all leading browsers including IE 6+, FF 2+, Chrome, Safari 3+ and Opera 9+.

jQuery is fast and concise

jQuery is a highly optimized library. Moreover, it is compact. The production version of jQuery 1.4.3 is just 26 KB and the development version 179 KB. This compactness means less data to download on the client side without compromising stunning UI effects.

Scope of jQuery

jQuery simplifies HTML DOM navigation considerably. For example, document.getElementById("Text1") becomes just $("#Text1"). Simple, isn't it? Most JavaScript functionality goes in client-side event handlers, and jQuery is handy when it comes to event handling. If you are thinking about your AJAX functionality, don't worry: jQuery allows you to make AJAX calls to ASP.NET Web Services, WCF services or even page methods.

If needed, jQuery can be used along with ASP.NET AJAX.

jQuery is designed to change the way that you write JavaScript

That's true! jQuery dramatically changes the way you write JavaScript code. Initially you may find the syntax of jQuery a bit odd, but once you get the hang of it you will probably never look at any other library (or at least at the traditional way of writing JavaScript).

For example, a typical JavaScript file contains many functions, small to huge, and you keep calling them individually whenever required. With jQuery, the "chain" of operations makes your code compact and handy. Ok, enough of the introduction. Now, let's complete the "hello world" ritual :-) In the next section you will build a simple ASP.NET web form with some server controls on it that perform the trivial job of displaying "Hello world".

Download jQuery

Before you start any development with jQuery you need to download its latest version. You can do so by visiting http://jquery.com and downloading the "Development" version.

If you are using VS2010 then you need not download anything, because when you create a new website the jQuery library is already added for you in the Scripts folder: jquery-1.4.1.js is the development version, jquery-1.4.1.min.js is the minified production version and jquery-1.4.1-vsdoc.js is the VS2010 IntelliSense file that enables IntelliSense for jQuery. In the remainder of this article I will be using VS2008. If you are using VS2010 then just clean up the default website by removing the master page and other unwanted stuff.

Design a simple web form

Create a new website in VS2008 and create a new folder named Scripts. Copy the downloaded jQuery file into the Scripts folder. Though you can place it anywhere inside your website, it is recommended to keep all JavaScript files under one folder, typically named Scripts. The default name for the jQuery file is jquery-1.4.3.js but you can change it to something else if you so wish. Now open the default web form and add a <script> reference to the jQuery file. Next, place a TextBox and a Button web control on the web form. Switch back to HTML source view and add another <script> block containing the page script (a reconstruction of it appears after the next paragraph). The first line of that script is where the jQuery magic starts. The $ sign is a shortcut to the base jQuery object, so $(...) is actually the same as jQuery(...).

If you have ever coded in ASP.NET AJAX this concept should be familiar to you. ready() is an event that fires when the web page under consideration is fully loaded and its various elements are ready to be accessed. The event handler for the ready event is supplied in the parentheses as OnPageReady. OnPageReady() is a normal JavaScript function that wires up an event handler for the client-side click event of the button. It does so again by using the $ shortcut; this time, however, the ID of the button control is specified, prefixed with #. click is an event and you specify its handler as OnButtonClick. The event handler receives an event object giving more information about the event. OnButtonClick() is another function that simply displays "Hello World from jQuery!" using a JavaScript alert. The OnButtonClick function also calls the event.preventDefault() method so as to prevent the web form postback that would normally happen due to the Button web server control.
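The original markup for this script block was lost in formatting; based on the description above, it likely looked something like the following sketch (the script src path and the control ID Button1 are assumptions, chosen to match the file and control names used elsewhere in the article):

<script src="Scripts/jquery-1.4.3.js" type="text/javascript"></script>
<script type="text/javascript">
    // Run OnPageReady once the page DOM is fully loaded
    $(document).ready(OnPageReady);

    function OnPageReady() {
        // Wire up a client-side click handler for the button
        $("#Button1").click(OnButtonClick);
    }

    function OnButtonClick(event) {
        alert("Hello World from jQuery!");
        // Prevent the postback that the Button server control would otherwise trigger
        event.preventDefault();
    }
</script>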
If you run the web form and click the button, you should see the "Hello World" alert. Easy, isn't it? Now let's modify the above code as shown below:

$(document).ready(function () {
    $("#Button1").click(function (event) {
        alert("Hello world from jQuery!");
        event.preventDefault();
    });
});

This is a compact version of the code that achieves the same functionality. Here, instead of defining separate functions, you have written all the code inline. You may compare this approach with anonymous methods in C#.

Adding something more...

Now that the "Hello world" ritual is over, let's add some extra features to our code. Begin by defining two CSS classes named Focus and NoFocus in your web form (any pair of simple rules, such as different background colors, will do). The CSS class NoFocus will be applied to the textbox when it doesn't have focus, whereas the CSS class Focus will be applied when the textbox receives focus. To accomplish this, change the preceding code (compact version) as shown below:

$(document).ready(function () {
    $("#Button1").click(function (event) {
        alert($("#TextBox1").val());
        return false;
    });

    $("#TextBox1").addClass("NoFocus");

    $("#TextBox1").focus(function (event) {
        $("#TextBox1").removeClass("NoFocus");
        $("#TextBox1").addClass("Focus");
    });

    $("#TextBox1").blur(function (event) {
        $("#TextBox1").removeClass("Focus");
        $("#TextBox1").addClass("NoFocus");
    });
});


Notice the new lines compared with the earlier version. The click event handler of the button now displays whatever has been entered in the textbox; to retrieve the textbox value you use the val() method. Initially the textbox won't have focus, so its CSS class should be NoFocus. This is done using the addClass() method. The focus() and blur() event handlers simply add and remove the appropriate classes using the addClass() and removeClass() methods respectively.

Wednesday, November 16, 2011

Entity FrameWork in DAL

The data access layer provides a centralized location for all calls into the database, and thus makes it easier to port the application to other database systems. There are many different options for getting the data access layer built quickly. To build the DAL I could have chosen from many model providers, for example NHibernate or LINQ to SQL, or any other provider such as SubSonic, LLBLGen Pro, LightSpeed, or who knows what else. This means that we will be using an ORM (Object Relational Mapping) tool to generate our data access layer. As you can see, there are many tools, each with its own advantages and disadvantages. In order to select the right tool, you need to consider your project requirements, the skills of the development team, whether your organization has standardized on a specific tool, and so on.

However, I am not going to use the tools listed above; instead I decided to use Microsoft Entity Framework (EF), which Microsoft introduced relatively recently. The ADO.NET Entity Framework (EF) is an object-relational mapping (ORM) framework for the .NET Framework. It abstracts the relational (logical) schema of the data stored in a database and presents its conceptual schema to the application. However, the conceptual model that EF created failed to meet my requirements, so I had to find a workaround to create the right conceptual model for this design. You will see a few adjustments that I have made to achieve my requirements later in this article.

The selection of EF (Entity Framework) for my ASP.NET MVC front end is not a blindfolded decision. I have a few affirmations: the ASP.NET MVC Framework is Microsoft's, and so is the ADO.NET Entity Framework, so they tend to work better together than with any other tool. The latest version of EF has many promising features and has addressed many issues found in its initial version. In general, Microsoft seems serious about EF, which makes me think that EF will have a significant share in the future of ORM tools.

The community has raised concerns over what has been promised and what is being delivered with EF. You can read the ADO.NET Entity Framework Vote of No Confidence for more details. Microsoft and the Entity Framework team have received a tremendous amount of feedback from experts in entity-based applications and software architectures on the .NET platform. While Microsoft's announcement of its intention to provide framework support for entity architectures was received with enthusiasm, the Entity Framework itself has consistently proved to be cause for significant concern. However, the second version of Entity Framework (known, somewhat confusingly, as 'Entity Framework v4' because it forms part of .NET 4.0) is available in beta form as part of Visual Studio 2010, and has addressed many of the criticisms made of version 1.

The entity data model (also called the conceptual model of the database), which the ADO.NET Entity Framework auto-created for our DMS, is shown below. It has a 1:1 (one-to-one) mapping between the database tables and the so-called conceptual models. These models, or objects, can be used to communicate with the respective physical data source or the tables of the database. I hope you can understand my conceptual model quite easily.
The Data Access Layer project, shown below, has three folders named Models, Infrastructure and Interfaces (you will later notice that this project template is reused in other projects of this system too). Outside of those folders, you find a few classes: one base class and two concrete classes. These are the main operational classes of the data access layer (DAL) project. This project template makes it possible to directly access all commonly used operational classes without worrying about navigating into directories. "The inherent complexity of a software system is related to the problem it is trying to solve. The actual complexity is related to the size and structure of the software system as actually built. The difference is a measure of the inability to match the solution to the problem."
In the Data Access Layer (DAL) design, I decided to use a variant of the Repository pattern. The Repository pattern is commonly used by many enterprise designers. It is a straightforward design that gives you testable and reusable code modules, and it also provides flexibility and separation of concerns between the conceptual model and the business logic. In my design there are two repositories, named 'DocumentRepository' and 'FileRepository', which mediate between the domain (also called the Business Logic Layer) and the data mapping layer (also called the Data Access Layer) using domain objects (also called business objects or models). In general it is recommended to have one repository class for every business object of the system.

Among the few folders in that project, the folder named 'Interfaces' is important, so let's look inside it and see what interfaces we have:

IRepository - The generic interface that abstractly defines the behavior of all the repositories of our system. This is the super repository; it is extended to create the abstract repository named 'RepositoryBase'.
IDocumentRepository - A specialized repository, which defines the behavior specific to 'Document'.
IFileRepository - Just as with 'IDocumentRepository', this defines the behavior of the 'File' repository.
IRepositoryContext - This defines the behavior of the context of our repositories.
IPagination - This defines pagination-related operations.

'IUnitOfWork' and 'IUnitOfWorkFactory' were there but were removed from the design later. Initially, I was planning to write some code for dependency injection (DI) too, but later I found that the Microsoft Enterprise Library has everything done for us. It has an application block called the Unity Application Block, which does what I had intended with those two interfaces. Therefore I removed the Unity and inversion-of-control related implementations from the project, so the source code you download will not have those implementations.

I have used interfaces in the DAL design, and that might make you ask questions like 'why do we need these interfaces?' and 'why not just have the concrete implementation alone?'. There are many reasons for having interfaces in a design. They allow implementations to be interchangeable (one common scenario is unit testing, where you can replace the actual repository implementation with a fake one). Additionally, when carefully used, they help to better organize your design. The interface is the bare minimum needed to explain what a function or class will do; it's the least amount of code you can write for a full declaration of functionality and usability. For this reason, it's clearer for a user of your function or class to understand how the object works; the user shouldn't have to look through all of your implementations to understand what the object does. So, again, defining interfaces is the more organized approach. In some instances, I have seen designers add interfaces for every class, which I think is too extreme a design practice. In my design I have used interfaces not only to organize the design but also to interchange implementations. Later in the article you will see how these interfaces are used to assist the Unity Application Block in injecting dependencies too. I also have a set of interfaces defined for pagination.
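The article doesn't list the exact members of these interfaces, but a minimal sketch of what the generic repository and one specialized repository might look like (the member names below are my assumptions, not the original code) is:

using System.Collections.Generic;

// Hypothetical sketch of the generic repository contract described above.
public interface IRepository<T> where T : class
{
    void Add(T entity);
    void Remove(T entity);
    T FindById(object id);
    IEnumerable<T> FindAll();
}

// Specialized repository that layers Document-specific behavior on top of IRepository<T>.
public interface IDocumentRepository : IRepository<Document>
{
    IEnumerable<Document> FindByTitle(string title); // purely illustrative member
}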
The pagination (of business objects/entities) is something that developers code in many different layers. Some code it in the UI (user interface) layer, while others do it in the BLL (Business Logic Layer) or the DAL (Data Access Layer). I decided to implement it inside the DAL to avoid the unwanted network round trips that would otherwise occur in a distributed multi-tier deployment. Additionally, by keeping it inside the DAL, I have the option of using some of the built-in EF (Entity Framework) functions for pagination. The figure below summarizes the overall DAL design with its main components and their relations. I think it is clear how the three interfaces and their concrete implementations are defined.
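Returning to the pagination point above: the article doesn't show that code, but a minimal sketch of a DAL-level paged query built on Entity Framework's Skip/Take query operators (the helper class, method and parameter names are assumptions for illustration) could look like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Hypothetical sketch of DAL-level pagination using EF's Skip/Take operators.
public static class PaginationHelper
{
    public static IList<T> GetPage<T, TKey>(IQueryable<T> source,
        Expression<Func<T, TKey>> orderBy, int pageNumber, int pageSize)
    {
        return source
            .OrderBy(orderBy)                   // a stable ordering is required before Skip/Take
            .Skip((pageNumber - 1) * pageSize)  // skip the rows belonging to earlier pages
            .Take(pageSize)                     // return only the requested page
            .ToList();
    }
}

A repository method could then return, say, the second page of twenty documents with PaginationHelper.GetPage(context.Documents, d => d.Id, 2, 20); the context and property names are again illustrative.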
I think it is important for you to know how this design evolved, so let me talk about that here. Initially, I had the 'IRepository' interface. That was used to implement the two specialized 'Document' and 'File' repositories. At that point I noticed that the two had many operations in common, so I thought of defining them abstractly in an abstract class. Thus, I decided to add a new generic abstract class called 'RepositoryBase'. Once all of this was done, the system had the generic 'IRepository' interface right at the top, then the abstract 'RepositoryBase' class, and then the two concrete implementations of the 'File' and 'Document' repositories. I thought it was done, and had a final look before closing up the design, but that made me realize that the design still had issues. So let me talk about that in the next paragraph.

The design I had made it impossible to interchange the implementation of either of the specialized repositories. As an example, if I had a test/fake version of the 'DocumentRepository' implemented with the name 'TestDocumentRepository' and wanted to interchange it with the 'DocumentRepository', that was not possible with my design. So I decided to make a few more adjustments by introducing two specialized interfaces called 'IDocumentRepository' and 'IFileRepository'. This made it possible to fully hide the specialized implementations and thus gain the required interchangeability. That final change concludes my DAL design; a sketch of the resulting hierarchy follows.
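A minimal sketch of the resulting class hierarchy, reusing the assumed member names from the interface sketch above (none of these signatures come from the original source):

using System.Collections.Generic;

// Hypothetical sketch of the repository hierarchy described above.
public abstract class RepositoryBase<T> : IRepository<T> where T : class
{
    // Operations common to all repositories would live here.
    public abstract void Add(T entity);
    public abstract void Remove(T entity);
    public abstract T FindById(object id);
    public abstract IEnumerable<T> FindAll();
}

// Concrete repository, hidden behind IDocumentRepository so it can be
// swapped for a TestDocumentRepository (or any other fake) when needed.
public class DocumentRepository : RepositoryBase<Document>, IDocumentRepository
{
    public override void Add(Document entity) { /* EF-specific persistence code */ }
    public override void Remove(Document entity) { /* ... */ }
    public override Document FindById(object id) { /* ... */ return null; }
    public override IEnumerable<Document> FindAll() { /* ... */ return new List<Document>(); }
    public IEnumerable<Document> FindByTitle(string title) { /* ... */ return new List<Document>(); }
}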

Monday, October 31, 2011

Embedding RavenDB into an ASP.NET MVC 3 Application

RavenDB Embedded and MVC
RavenDB can be run in three different modes:

1. As a Windows service
2. As an IIS application
3. Embedded in a .NET application
The first two have a fairly simple setup process, but come with some implementation strategy overhead. The third option, embedded, is extremely easy to get up and running. In fact, there’s a NuGet package available for it. A call to the following command in the Package Manager Console in Visual Studio 2010 (or a search for the term “ravendb” in the Manage NuGet Packages dialog) will deliver all of the references needed to start working with the embedded version of RavenDB:

Install-Package RavenDB-Embedded

Details of the package can be found on the NuGet gallery site at bit.ly/ns64W1.

Adding the embedded version of RavenDB to an ASP.NET MVC 3 application is as simple as adding the package via NuGet and giving the data store files a directory location. Because ASP.NET applications have a known data directory in the framework named App_Data, and most hosting companies provide read/write access to that directory with little or no configuration required, it’s a good place to store the data files. When RavenDB creates its file storage, it builds a handful of directories and files in the directory path provided to it. It won’t create a top-level directory to store everything. Knowing that, it’s worthwhile to add the ASP.NET folder named App_Data via the Project context menu in Visual Studio 2010 and then create a subdirectory in the App_Data directory for the RavenDB data (see Figure 1).


Figure 1 App_Data Directory Structure

A document data store is schema-less by nature, hence there’s no need to create an instance of a database or set up any tables. Once the first call to initialize the data store is made in code, the files required to maintain the data state will be created.

Working with the RavenDB Client API to interface with the data store requires an instance of an object that implements the Raven.Client.IDocumentStore interface to be created and initialized. The API has two classes, DocumentStore and EmbeddableDocumentStore, that implement the interface and can be used depending on the mode in which RavenDB is running. There should only be one instance per data store during the lifecycle of an application. I can create a class to manage a single connection to my document store that will let me access the instance of the IDocumentStore object via a static property and have a static method to initialize the instance (see Figure 2).

Figure 2 Class for DocumentStore

public class DataDocumentStore
{
private static IDocumentStore instance;

public static IDocumentStore Instance
{
get
{
if(instance == null)
throw new InvalidOperationException(
"IDocumentStore has not been initialized.");
return instance;
}
}

public static IDocumentStore Initialize()
{
instance = new EmbeddableDocumentStore { ConnectionStringName = "RavenDB" };
instance.Conventions.IdentityPartsSeparator = "-";
instance.Initialize();
return instance;
}
}

The static property getter checks a private static backing field for a null object and, if null, it throws an InvalidOperationException. I throw an exception here, rather than calling the Initialize method, to keep the code thread-safe. If the Instance property were allowed to make that call and the application relied upon referencing the property to do the initialization, then there would be a chance that more than one user could hit the application at the same time, resulting in simultaneous calls to the Initialize method. Within the Initialize method logic, I create a new instance of the Raven.Client.Embedded.EmbeddableDocumentStore and set the ConnectionStringName property to the name of a connection string that was added to the web.config file by the install of the RavenDB NuGet package. In the web.config, I set the value of the connection string to a syntax that RavenDB understands in order to configure it to use the embedded local version of the data store. I also map the file directory to the Database directory I created in the App_Data directory of the MVC project:
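The connection string element itself was lost in formatting; it most likely looked roughly like the following web.config fragment (the DataDir value is my assumption, based on the App_Data\Database directory described above):

<connectionStrings>
  <!-- Embedded RavenDB: keep the data files under App_Data\Database -->
  <add name="RavenDB" connectionString="DataDir = ~\App_Data\Database" />
</connectionStrings>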



The IDocumentStore interface contains all of the methods for working with the data store. I return and store the EmbeddableDocumentStore object as an instance of the interface type IDocumentStore so I have the flexibility of changing the instantiation of the EmbeddableDocumentStore object to the server version (DocumentStore) if I want to move away from the embedded version. This way, all of my logic code that will handle my document object management will be decoupled from the knowledge of the mode in which RavenDB is running.

RavenDB will create document ID keys in a REST-like format by default. An “Item” object would get a key in the format “items/104.” The object model name is converted to lowercase and is pluralized, and a unique tracking identity number is appended after a forward slash with each new document creation. This can be problematic in an MVC application, as the forward slash will cause a new route parameter to be parsed. The RavenDB Client API provides a way to change the forward slash by setting the IdentityPartsSeparator value. In my DataDocumentStore.Initialize method, I’m setting the IdentityPartsSeparator value to a dash before I call the Initialize method on the EmbeddableDocumentStore object, to avoid the routing issue.

Adding a call to the DataDocumentStore.Initialize static method from the Application_Start method in the Global.asax.cs file of my MVC application will establish the IDocumentStore instance at the first run of the application, which looks like this:

protected void Application_Start()
{
AreaRegistration.RegisterAllAreas();
RegisterGlobalFilters(GlobalFilters.Filters);
RegisterRoutes(RouteTable.Routes);

DataDocumentStore.Initialize();
}

From here I can make use of the IDocumentStore object with a static call to the DataDocumentStore.Instance property to work on document objects from my embedded data store within my MVC application.

RavenDB Objects
To get a better understanding of RavenDB in action, I’ll create a prototype application to store and manage bookmarks. RavenDB is designed to work with Plain Old CLR Objects (POCOs), so there’s no need to add property attributes to guide serialization. Creating a class to represent a bookmark is pretty straightforward. Figure 3 shows the Bookmark class.

Figure 3 Bookmark Class

public class Bookmark
{
public string Id { get; set; }
public string Title { get; set; }
public string Url { get; set; }
public string Description { get; set; }
public List<string> Tags { get; set; }
public DateTime DateCreated { get; set; }

public Bookmark()
{
this.Tags = new List<string>();
}
}

RavenDB will serialize the object data into a JSON structure when it goes to store the document. The well-known “Id” named property will be used to handle the document ID key. RavenDB will create that value—provided the Id property is empty or null when making the call to create the new document—and will store it in a @metadata element for the document (which is used to handle the document key at the data-store level). When requesting a document, the RavenDB Client API code will set the document ID key to the Id property when it loads the document object.

The JSON serialization of a sample Bookmark document is represented in the following structure:

{
"Title": "The RavenDB site",
"Url": "http://www.ravendb.net",
"Description": "A test bookmark",
"Tags": ["mvc","ravendb"],
"DateCreated": "2011-08-04T00:50:40.3207693Z"
}

The Bookmark class is primed to work well with the document store, but the Tags property is going to pose a challenge in the UI layer. I’d like to let the user enter a list of tags separated by commas in a single text box input field and have the MVC model binder map all of the data fields without any logic code seeping into my views or controller actions. I can tackle this by using a custom model binder for mapping a form field named “TagsAsString” to the Bookmark.Tags field. First, I create the custom model binder class (see Figure 4).

Figure 4 BookmarkModelBinder.cs

public class BookmarkModelBinder : DefaultModelBinder
{
protected override void OnModelUpdated(ControllerContext controllerContext,
ModelBindingContext bindingContext)
{
var form = controllerContext.HttpContext.Request.Form;
var tagsAsString = form["TagsAsString"];
var bookmark = bindingContext.Model as Bookmark;
bookmark.Tags = string.IsNullOrEmpty(tagsAsString)
? new List<string>()
: tagsAsString.Split(',').Select(i => i.Trim()).ToList();
}
}

Then I update the Global.asax.cs file to add the BookmarkModelBinder to the model binders at application startup:

protected void Application_Start()
{
AreaRegistration.RegisterAllAreas();
RegisterGlobalFilters(GlobalFilters.Filters);
RegisterRoutes(RouteTable.Routes);

ModelBinders.Binders.Add(typeof(Bookmark), new BookmarkModelBinder());
DataDocumentStore.Initialize();
}

To handle populating an HTML text box with the current tags in the model, I’ll add an extension method to convert a List<string> object to a comma-separated string:

public static string ToCommaSeparatedString(this List<string> list)
{
return list == null ? string.Empty : string.Join(", ", list);
}

Unit of Work
The RavenDB Client API is based on the Unit of Work pattern. To work on documents from the document store, a new session needs to be opened; work needs to be done and saved; and the session needs to close. The session handles change tracking and operates in a manner that’s similar to a data context in the EF. Here’s an example of creating a new document:

using (var session = documentStore.OpenSession())
{
session.Store(bookmark);
session.SaveChanges();
}

It’s optimal to have the session live throughout the HTTP request so it can track changes, use the first-level cache and so on. I’ll create a base controller that will use the DataDocumentStore.Instance to open a new session on action executing, and on action executed will save changes and then dispose of the session object (see Figure 5). This allows me to do all of the work desired during the execution of my action code with a single open session instance.

Figure 5 BaseDocumentStoreController

public class BaseDocumentStoreController : Controller
{
public IDocumentSession DocumentSession { get; set; }

protected override void OnActionExecuting(ActionExecutingContext filterContext)
{
if (filterContext.IsChildAction)
return;
this.DocumentSession = DataDocumentStore.Instance.OpenSession();
base.OnActionExecuting(filterContext);
}

protected override void OnActionExecuted(ActionExecutedContext filterContext)
{
if (filterContext.IsChildAction)
return;
if (this.DocumentSession != null && filterContext.Exception == null)
this.DocumentSession.SaveChanges();
this.DocumentSession.Dispose();
base.OnActionExecuted(filterContext);
}
}

MVC Controller and View Implementation
The BookmarksController actions will work directly with the IDocumentSession object from the base class and manage all of the Create, Read, Update and Delete (CRUD) operations for the documents. Figure 6 shows the code for the bookmarks controller.

Figure 6 BookmarksController Class

public class BookmarksController : BaseDocumentStoreController
{
public ViewResult Index()
{
var model = this.DocumentSession.Query<Bookmark>()
.OrderByDescending(i => i.DateCreated)
.ToList();
return View(model);
}

public ViewResult Details(string id)
{
var model = this.DocumentSession.Load<Bookmark>(id);
return View(model);
}

public ActionResult Create()
{
var model = new Bookmark();
return View(model);
}

[HttpPost]
public ActionResult Create(Bookmark bookmark)
{
bookmark.DateCreated = DateTime.UtcNow;
this.DocumentSession.Store(bookmark);
return RedirectToAction("Index");
}

public ActionResult Edit(string id)
{
var model = this.DocumentSession.Load<Bookmark>(id);
return View(model);
}

[HttpPost]
public ActionResult Edit(Bookmark bookmark)
{
this.DocumentSession.Store(bookmark);
return RedirectToAction("Index");
}

public ActionResult Delete(string id)
{
var model = this.DocumentSession.Load<Bookmark>(id);
return View(model);
}

[HttpPost, ActionName("Delete")]
public ActionResult DeleteConfirmed(string id)
{
this.DocumentSession.Advanced.DatabaseCommands.Delete(id, null);
return RedirectToAction("Index");
}
}

The IDocumentSession.Query method in the Index action returns a result object that implements the IEnumerable interface, so I can use the OrderByDescending LINQ expression to sort the items and call the ToList method to capture the data to my return object. The IDocumentSession.Load method in the Details action takes in a document ID key value and de-serializes the matching document to an object of type Bookmark.

The Create method with the HttpPost verb attribute sets the DateCreated property on the bookmark item and calls the IDocumentSession.Store method off of the session object to add a new document record to the document store. The Edit method with the HttpPost verb can call the IDocumentSession.Store method as well, because the Bookmark object will have the Id value already set. RavenDB will recognize that Id and update the existing document with the matching key instead of creating a new one. The DeleteConfirmed action calls a Delete method off of the IDocumentSession.Advanced.DatabaseCommands object, which provides a way to delete a document by key without having to load the object first. I don’t need to call the IDocumentSession.SaveChanges method from within any of these actions, because I have the base controller making that call on action executed.

All of the views are pretty straightforward. They can be strongly typed to the Bookmark class in the Create, Edit and Delete markups, and to a list of bookmarks in the Index markup. Each view can directly reference the model properties for display and input fields. The one place where I’ll need to vary on object property reference is with the input field for the tags. I’ll use the ToCommaSeparatedString extension method in the Create and Edit views with the following code:

@Html.TextBox("TagsAsString", Model.Tags.ToCommaSeparatedString())This will allow the user to input and edit the tags associated with the bookmark in a comma-delimited format within a single text box.

Searching Objects
With all of my CRUD operations in place, I can turn my attention to adding one last bit of functionality: the ability to filter the bookmark list by tags. In addition to implementing the IEnumerable interface, the return object from the IDocumentSession.Query method also implements the IOrderedQueryable and IQueryable interfaces from the .NET Framework. This allows me to use LINQ to filter and sort my queries. For example, here’s a query of the bookmarks created in the past five days:

var bookmarks = session.Query<Bookmark>()
.Where( i=> i.DateCreated >= DateTime.UtcNow.AddDays(-5))
.OrderByDescending(i => i.DateCreated)
.ToList();

Here’s one to page through the full list of bookmarks:

var bookmarks = session.Query<Bookmark>()
.OrderByDescending(i => i.DateCreated)
.Skip(pageCount * (pageNumber - 1))
.Take(pageCount)
.ToList();

RavenDB will build dynamic indexes based on the execution of these queries that will persist for “some amount of time” before being disposed of. When a similar query is rerun with the same parameter structure, the temporary dynamic index will be used. If the index is used enough within a given period, the index will be made permanent. These will persist beyond the application lifecycle.

I can add the following action method to my BookmarksController class to handle getting bookmarks by tag:

public ViewResult Tag(string tag)
{
var model = new BookmarksByTagViewModel { Tag = tag };
model.Bookmarks = this.DocumentSession.Query<Bookmark>()
.Where(i => i.Tags.Any(t => t == tag))
.OrderByDescending(i => i.DateCreated)
.ToList();
return View(model);
}

I expect this action to be hit on a regular basis by users of my application. If that’s indeed the case, this dynamic query will get turned into a permanent index by RavenDB with no additional work needed on my part.