Wednesday, December 17, 2008

The MRU List

One of the truisms of search is that if you search for something once, you’re likely to search for it again. In addition, some things are searched for much more often than others. It makes sense, then, to cache the results for the most common searches so that you don’t have to query the database or other backing store in order to return results. That way you can satisfy the most common requests very quickly, reducing the load on your back-end database.

The problem, of course, is that you can’t cache all search results. If you could, you wouldn’t need a database to hold the information. The question becomes one of how to determine which results you should store in memory. One method that works very well is the Most Recently Used (MRU) list.

The idea is to keep some small number of the most recently requested results in memory. When somebody performs a search, the program checks the MRU list first. If the results are found in the MRU list, then those results are moved to the front of the list. If the requested results are not in the list, then the program queries the database to obtain the results, places those new results at the front of the list, and removes the results from the back of the list--the least recently used results.

Because items are moved back to the front of the list each time they’re requested, the most frequently requested results will remain in the list, and things that are requested infrequently will fall off the back of the list.

This technique works well when searches follow the common pattern: a few terms are requested much more frequently than others. If searches are randomly distributed, the MRU list is not at all effective, because it’s likely that the requested results won’t be in the cache.

The .NET LinkedList<T> class is an excellent choice for implementing an MRU list because inserting or removing a node is a constant-time operation. That is, it takes the same amount of time to insert or remove an item in a list of 10,000 items as it does in a list of 10 items. (Finding an item by key is still a linear scan, but for a small cache that cost is negligible.)

The MRU list interface requires a way to determine if an item is in the list, and a way to add something to the list. Also, the constructor should let you say how many items the list should hold. That’s really all there is to it, but the implementation is just a little bit tricky.

We can build the MRU list as a generic class with two type parameters: the type of key used to search for items, and the type of object to be stored in the linked list. The key is important because the LinkedList<T>.Find method searches the list for a particular item value rather than a key value, so we’ll have to implement our own key search.

The constructor requires two parameters: an integer that specifies the maximum number of items that the MRU list can hold, and a comparison function that can compare a key value against a list object to determine if the object matches the key. The final interface looks like this:


using System;
using System.Collections.Generic;
using System.Text;

namespace MostRecentlyUsedList
{
public delegate bool MruEqualityComparer<K, T>(K key, T item);

public class MruList<K, T>
{
private LinkedList<T> items;
private int maxCapacity;
private MruEqualityComparer<K, T> compareFunc;

public MruList(int maxCap, MruEqualityComparer<K, T> compFunc)
{
maxCapacity = maxCap;
compareFunc = compFunc;
items = new LinkedList<T>();
}

public T Find(K key)
{
LinkedListNode<T> node = FindNode(key);
if (node != null)
{
items.Remove(node);
items.AddFirst(node);
return node.Value;
}
return default(T);
}

private LinkedListNode<T> FindNode(K key)
{
LinkedListNode<T> node = items.First;
while (node != null)
{
if (compareFunc(key, node.Value))
return node;
node = node.Next;
}
return null;
}

public void Add(T item)
{
// remove items until the list is no longer full.
while (items.Count >= maxCapacity)
{
items.RemoveLast();
}
items.AddFirst(item);
}


}
}


// Program.cs (a separate file in the same project)
using System;
using MostRecentlyUsedList;

static class Program
{
static private MruList<string, SearchResult> mru;

/// <summary>
/// The main entry point for the application.
/// </summary>

[STAThread]
static void Main()
{

mru = new MruList<string, SearchResult>(1000, KeyCompare);

for (int count = 0; count < 1000; count++)
{
mru.Add(new SearchResult("Jim" + count, count));
}
FindItem("Jim999");

}

static private bool KeyCompare(string key, SearchResult item)
{
return key.Equals(item.Name);
}

static private void FindItem(string key)
{
SearchResult rslt = mru.Find(key);
if (rslt == null)
{
Console.WriteLine("'{0}' not found", key);
}
else
{
Console.WriteLine("'{0}' is {1}", rslt.Name, rslt.Age);
}

}


}


class SearchResult
{
private string _name;
private int _age;

public SearchResult(string n, int a)
{
_name = n;
_age = a;
}

public string Name
{
get { return _name; }
}

public int Age
{
get { return _age; }
}
}
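
To connect this back to the search scenario at the top of the post, a lookup helper would check the MRU list first and fall back to the database on a miss. Here is a minimal sketch of that flow; LoadFromDatabase is a hypothetical stand-in for whatever query your application actually runs.

// Hypothetical cache-with-fallback helper (would live alongside Program above).
static SearchResult GetResult(string key)
{
    // Check the MRU list first; a hit also moves the item to the front.
    SearchResult rslt = mru.Find(key);
    if (rslt == null)
    {
        // Cache miss: query the backing store, then cache the fresh result.
        rslt = LoadFromDatabase(key);
        if (rslt != null)
        {
            mru.Add(rslt);
        }
    }
    return rslt;
}

static SearchResult LoadFromDatabase(string key)
{
    // Placeholder: a real application would query the database here.
    return new SearchResult(key, 0);
}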



Tuesday, December 16, 2008

Factory Method

Definition
Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.

using System;
using System.Collections;

namespace GangOfFour.Factory
{
/// <summary>
/// MainApp startup class for Real-World
/// Factory Method Design Pattern.
/// </summary>

class MainApp
{
/// <summary>
/// Entry point into console application.
/// </summary>

static void Main()
{
// Note: constructors call Factory Method
Document[] documents = new Document[2];
documents[0] = new Resume();
documents[1] = new Report();

// Display document pages
foreach (Document document in documents)
{
Console.WriteLine("\n" + document.GetType().Name+ "--");
foreach (Page page in document.Pages)
{
Console.WriteLine(" " + page.GetType().Name);
}
}

// Wait for user
Console.Read();
}
}

// "Product"

abstract class Page
{
}

// "ConcreteProduct"

class SkillsPage : Page
{
}

// "ConcreteProduct"

class EducationPage : Page
{
}

// "ConcreteProduct"

class ExperiencePage : Page
{
}

// "ConcreteProduct"

class IntroductionPage : Page
{
}

// "ConcreteProduct"

class ResultsPage : Page
{
}

// "ConcreteProduct"

class ConclusionPage : Page
{
}

// "ConcreteProduct"

class SummaryPage : Page
{
}

// "ConcreteProduct"

class BibliographyPage : Page
{
}

// "Creator"

abstract class Document
{
private ArrayList pages = new ArrayList();

// Constructor calls abstract Factory method
public Document()
{
this.CreatePages();
}

public ArrayList Pages
{
get{ return pages; }
}

// Factory Method
public abstract void CreatePages();
}

// "ConcreteCreator"

class Resume : Document
{
// Factory Method implementation
public override void CreatePages()
{
Pages.Add(new SkillsPage());
Pages.Add(new EducationPage());
Pages.Add(new ExperiencePage());
}
}

// "ConcreteCreator"

class Report : Document
{
// Factory Method implementation
public override void CreatePages()
{
Pages.Add(new IntroductionPage());
Pages.Add(new ResultsPage());
Pages.Add(new ConclusionPage());
Pages.Add(new SummaryPage());
Pages.Add(new BibliographyPage());
}
}
}

Factory Method: when and where to use it
Class constructors exist so that clients can create an instance of a class. There are situations, however, where the client does not, or should not, know which of several possible classes to instantiate. The Factory Method allows the client to use an interface for creating an object while still retaining control over which class to instantiate.
The key objective of the Factory Method is extensibility. Factory Methods are frequently used in applications that manage, maintain, or manipulate collections of objects that are different but at the same time have many characteristics in common. A document management system, for example, is more extensible if you reference your documents as a collection of IDocuments. These documents may be text files, Word documents, Visio diagrams, or legal papers. They all have an author, a title, a type, a size, a location, a page count, etc. If a new type of document is introduced, it simply has to implement the IDocument interface and it will fit in with the rest of the documents. To support this new document type, the Factory Method code may or may not have to be adjusted (depending on how it was implemented, with or without parameters). A minimal sketch of this idea follows.
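
The sketch below assumes a hypothetical IDocument interface and uses the file extension to decide which class to instantiate; none of the names come from a real library.

using System;

// Illustrative only: a parameterized Factory Method that hides which
// IDocument implementation gets instantiated.
public interface IDocument
{
    string Title { get; }
}

class TextFile : IDocument
{
    private string title;
    public TextFile(string title) { this.title = title; }
    public string Title { get { return title; } }
}

class WordDocument : IDocument
{
    private string title;
    public WordDocument(string title) { this.title = title; }
    public string Title { get { return title; } }
}

static class DocumentFactory
{
    // The client passes a file name; the factory decides which class to build.
    public static IDocument Create(string fileName)
    {
        if (fileName.EndsWith(".txt"))
            return new TextFile(fileName);
        if (fileName.EndsWith(".doc"))
            return new WordDocument(fileName);

        throw new ArgumentException("Unsupported document type", "fileName");
    }
}

A caller simply writes IDocument doc = DocumentFactory.Create("resume.doc"); and never needs to know which concrete class it received.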

Factory Method in the .NET Framework
The Factory Method is commonly used in .NET. An example is the System.Convert class, which exposes many static methods that, given an instance of one type, return a new instance of another type. For example, Convert.ToBoolean accepts a string and returns a Boolean whose value is true or false depending on the string value (“true” or “false”).
In .NET the Factory Method is typically implemented as a static method which creates an instance of a particular type determined at compile time. In other words, these methods don’t return base classes or interface types of which the true type is only known at runtime. This is exactly where Abstract Factory and Factory Method differ; Abstract Factory methods are virtual or abstract and return abstract classes or interfaces, while Factory Methods are static and return concrete class types.
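
To make the distinction concrete, here is a small sketch of the static Factory Method style described above. The Convert calls are real framework methods; the Order class is a hypothetical example of the same shape.

using System;

class FactoryMethodDemo
{
    static void Main()
    {
        // System.Convert exposes static methods that manufacture a value of
        // another type from the argument you pass in.
        bool flag = Convert.ToBoolean("true");   // true
        int number = Convert.ToInt32("42");      // 42
        Console.WriteLine("{0} {1}", flag, number);

        Order rush = Order.CreateRushOrder();
        Console.WriteLine(rush.Priority);
    }
}

// A hypothetical static Factory Method: the return type is a concrete class
// known at compile time, and the constructor is hidden from clients.
class Order
{
    private int priority;

    private Order(int priority)
    {
        this.priority = priority;
    }

    public int Priority
    {
        get { return priority; }
    }

    public static Order CreateRushOrder()
    {
        return new Order(1);
    }
}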

Friday, December 12, 2008

Abstract Factory

Definition
Provide an interface for creating families of related or dependent objects without specifying their concrete classes.
Frequency of use: high

using System;

namespace DoFactory.GangOfFour.Adapter
{

class MainApp
{
/// <summary>
/// Entry point into console application.
/// </summary>

static void Main()
{
// Non-adapted chemical compound
Compound unknown = new Compound(Chemical.Unknown);
unknown.Display();

// Adapted chemical compounds
Compound water = new RichCompound(Chemical.Water);
water.Display();

Compound benzene = new RichCompound(Chemical.Benzene);
benzene.Display();

Compound alcohol = new RichCompound(Chemical.Alcohol);
alcohol.Display();

// Wait for user
Console.Read();
}
}

// "Target"

class Compound
{
private Chemical name;
private float boilingPoint;
private float meltingPoint;
private double molecularWeight;
private string molecularFormula;

// Constructor
public Compound(Chemical name)
{
this.name = name;
}

public virtual void Display()
{
Console.WriteLine("\nCompound: {0} -- ", Name);
}

// Properties
public Chemical Name
{
get{ return name; }
}

public float BoilingPoint
{
get{ return boilingPoint; }
set{ boilingPoint = value; }
}

public float MeltingPoint
{
get{ return meltingPoint; }
set{ meltingPoint = value; }
}

public double MolecularWeight
{
get{ return molecularWeight; }
set{ molecularWeight = value; }
}

public string MolecularFormula
{
get{ return molecularFormula; }
set{ molecularFormula = value; }
}
}

// "Adapter"

class RichCompound : Compound
{
private ChemicalDatabank bank;

// Constructor
public RichCompound(Chemical name) : base(name)
{
}

public override void Display()
{
// Adaptee
bank = new ChemicalDatabank();

// Adaptee request methods
BoilingPoint = bank.GetCriticalPoint(Name, State.Boiling);
MeltingPoint = bank.GetCriticalPoint(Name, State.Melting);
MolecularWeight = bank.GetMolecularWeight(Name);
MolecularFormula = bank.GetMolecularStructure(Name);

base.Display();
Console.WriteLine(" Formula: {0}", MolecularFormula);
Console.WriteLine(" Weight : {0}", MolecularWeight);
Console.WriteLine(" Melting Pt: {0}", MeltingPoint);
Console.WriteLine(" Boiling Pt: {0}", BoilingPoint);
}
}

// "Adaptee"

class ChemicalDatabank
{
// The Databank 'legacy API'
public float GetCriticalPoint(Chemical compound, State point)
{
float temperature = 0.0F;

// Melting Point
if (point == State.Melting)
{
switch (compound)
{
case Chemical.Water : temperature = 0.0F; break;
case Chemical.Benzene : temperature = 5.5F; break;
case Chemical.Alcohol : temperature = -114.1F; break;
}
}
// Boiling Point
else if (point == State.Boiling)
{
switch (compound)
{
case Chemical.Water : temperature = 100.0F; break;
case Chemical.Benzene : temperature = 80.1F; break;
case Chemical.Alcohol : temperature = 78.3F; break;
}
}
return temperature;
}

public string GetMolecularStructure(Chemical compound)
{
string structure = "";

switch (compound)
{
case Chemical.Water : structure = "H2O"; break;
case Chemical.Benzene : structure = "C6H6"; break;
case Chemical.Alcohol : structure = "C2H5OH"; break;
}
return structure;
}

public double GetMolecularWeight(Chemical compound)
{
double weight = 0.0;
switch (compound)
{
case Chemical.Water : weight = 18.015; break;
case Chemical.Benzene : weight = 78.1134; break;
case Chemical.Alcohol : weight = 46.0688; break;
}
return weight;
}
}

// Enumerations

public enum Chemical
{
Unknown,
Water,
Benzene,
Alcohol
}

public enum State
{
Boiling,
Melting
}
}


.NET optimized sample code

The .NET optimized code demonstrates the same pattern but uses more modern, built-in .NET features. In this example, abstract classes have been replaced by interfaces because the abstract classes do not contain implementation code. Continents are represented as enumerations. The AnimalWorld constructor dynamically creates the desired abstract factory using the Continent enumerated values.
Code in project: DoFactory.GangOfFour.Abstract.NetOptimized
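
The optimized project itself is not reproduced in this post, but a minimal sketch of the arrangement described above might look like the following. The type names (Continent, IContinentFactory, AnimalWorld, and the animal classes) are assumptions based on the description, not copied from the project.

using System;

public enum Continent { Africa, America }

// Abstract factory and abstract products expressed as interfaces
public interface IContinentFactory
{
    IHerbivore CreateHerbivore();
    ICarnivore CreateCarnivore();
}

public interface IHerbivore { string Name { get; } }
public interface ICarnivore { string Name { get; } }

// Concrete products
class Wildebeest : IHerbivore { public string Name { get { return "Wildebeest"; } } }
class Lion : ICarnivore { public string Name { get { return "Lion"; } } }
class Bison : IHerbivore { public string Name { get { return "Bison"; } } }
class Wolf : ICarnivore { public string Name { get { return "Wolf"; } } }

// One concrete factory per continent
class AfricaFactory : IContinentFactory
{
    public IHerbivore CreateHerbivore() { return new Wildebeest(); }
    public ICarnivore CreateCarnivore() { return new Lion(); }
}

class AmericaFactory : IContinentFactory
{
    public IHerbivore CreateHerbivore() { return new Bison(); }
    public ICarnivore CreateCarnivore() { return new Wolf(); }
}

// The client: the constructor picks the concrete factory from the enum value
class AnimalWorld
{
    private IHerbivore herbivore;
    private ICarnivore carnivore;

    public AnimalWorld(Continent continent)
    {
        IContinentFactory factory;
        if (continent == Continent.Africa)
            factory = new AfricaFactory();
        else
            factory = new AmericaFactory();

        herbivore = factory.CreateHerbivore();
        carnivore = factory.CreateCarnivore();
    }

    public void RunFoodChain()
    {
        Console.WriteLine("{0} eats {1}", carnivore.Name, herbivore.Name);
    }
}

Constructing new AnimalWorld(Continent.Africa) and calling RunFoodChain() would then print "Lion eats Wildebeest", without the client ever naming a concrete factory or product class.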
Abstract Factory: when and where to use it
The Abstract Factory pattern provides a client with a class that creates objects that are related by a common theme. The classic example is that of a GUI component factory which creates UI controls for different windowing systems, such as Windows, Motif, or MacOS. If you’re familiar with Java Swing you’ll recognize it as a good example of the use of the Abstract Factory pattern to build UI interfaces that are independent of their hosting platform. From a design pattern perspective, Java Swing succeeded, but applications built on this platform perform poorly and are not very interactive or responsive compared to native Windows or native Motif applications.
Over time the meaning of the Abstract Factory pattern has changed somewhat compared to the original GoF definition. Today, when developers talk about the Abstract Factory pattern, they mean not only the creation of a ‘family of related or dependent’ objects but also the creation of individual object instances.
Next are some reasons and benefits for creating objects using an Abstract Factory rather than calling constructors directly:
Constructors are limited in their control over the overall creation process. If your application needs more control, consider using a Factory. Scenarios that call for this include object caching, sharing or re-use of objects, and applications that maintain object and type counts.

There are times when the client does not know exactly what type to construct. It is easier to code against a base type or interface, and a factory can take parameters or other context-based information to make this decision for the client. An example of this is the provider-specific ADO.NET objects (DbConnection, DbCommand, DbDataAdapter, etc.).
Constructors don’t communicate their intention very well because they must be named after their class (or Sub New in VB.NET). Having numerous overloaded constructors may make it hard for the client developer to decide which constructor to use. Replacing constructors with intention-revealing creation methods is sometimes preferred. An example follows:
Several overloaded constructors. Which one should you use?

// C#
public Vehicle (int passengers)
public Vehicle (int passengers, int horsePower)
public Vehicle (int wheels, bool trailer)
public Vehicle (string type)
' VB.NET
public Sub New (Byval passengers As Integer)
public Sub New (Byval passengers As Integer, _
Byval horsePower As Integer)
public Sub New (Byval wheels As Integer, _
Byval trailer As Boolean)
public Sub New (Byval type As String)
The Factory pattern makes code more expressive and developers more productive:
// C#
public Vehicle CreateCar (int passengers)
public Vehicle CreateSuv (int passengers, int horsePower)
public Vehicle CreateTruck (int wheels, bool trailer)
public Vehicle CreateBoat ()
public Vehicle CreateBike ()
' VB.NET
public Function CreateCar (Byval passengers As Integer) As Vehicle
public Function CreateSuv (Byval passengers As Integer, _
Byval horsePower As Integer) As Vehicle
public Function CreateTruck (Byval wheels As Integer, _
Byval trailer As Boolean) As Vehicle
public Function CreateBoat () As Vehicle
public Function CreateBike () As Vehicle

Abstract Factory in the .NET Framework
ADO.NET 2.0 includes two new Abstract Factory classes that offer provider-independent data access techniques. They are: DbProviderFactory and DbProviderFactories. The DbProviderFactory class creates the ‘true’ (i.e. the database-specific) classes you need, such as SqlConnection, SqlCommand, and SqlDataAdapter. Each managed provider (such as SqlClient, OleDb, ODBC, and Oracle) has its own DbProviderFactory class. DbProviderFactory objects are created by the DbProviderFactories class, which itself is a factory class. In fact, it is a factory of factories -- it manufactures different factories, one for each provider.
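A short sketch of that provider-independent style (the provider invariant name is a real one shipped with .NET 2.0; the connection string and table name are placeholders):

using System.Data.Common;

class ProviderIndependentExample
{
    static void Main()
    {
        // DbProviderFactories is the factory of factories: it hands back the
        // provider-specific DbProviderFactory by its invariant name.
        DbProviderFactory factory =
            DbProviderFactories.GetFactory("System.Data.SqlClient");

        // The factory manufactures the 'true' provider-specific objects, but the
        // client code only ever sees the abstract Db* base classes.
        using (DbConnection connection = factory.CreateConnection())
        {
            connection.ConnectionString = "<connection string>";  // placeholder
            connection.Open();

            DbCommand command = factory.CreateCommand();
            command.Connection = connection;
            command.CommandText = "SELECT COUNT(*) FROM SomeTable";  // hypothetical table
            object rowCount = command.ExecuteScalar();
        }
    }
}

Swapping the provider name for, say, "System.Data.OleDb" hands back a different factory without changing any of the client code.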
When Microsoft talks about Abstract Factories they mean types that expose factory methods as virtual or abstract instance functions and return an abstract class or interface. Below is an example from .NET:
// C#
public abstract class StreamFactory
{
public abstract Stream CreateStream();
}
' VB.NET
Public MustInherit Class StreamFactory
Public MustOverride Function CreateStream() As Stream
End Class
In this scenario your factory type inherits from StreamFactory and is used to dynamically select the actual Stream type being created:
// C#
public class MemoryStreamFactory : StreamFactory
{
...
}
' VB.NET
Public Class MemoryStreamFactory
Inherits StreamFactory
...
End Class

The naming convention in .NET is to append the word ‘Factory’ to the name of the type that is being created. For example, a class that manufactures widget objects would be named WidgetFactory. A search through the libraries for the word ‘Factory’ reveals numerous classes that are implementations of the Factory design pattern.

Wednesday, June 18, 2008

Using SQL Management Objects to create and restore databases

SQL Management Objects (SMO) is a set of programming objects that expose the management functionality of SQL Server. In simpler terms, with the help of SMO objects and their associated methods, attributes, and functionality we can automate SQL Server tasks, which range from:

Administrative tasks - retrieval of database server settings, creating new databases, etc.
Application-oriented tasks - applying T-SQL scripts.
Scheduling tasks - creation of new jobs, running and maintenance of existing jobs via SQL Server Agent.
SMO is also the successor to SQL-DMO (Distributed Management Objects), which shipped with earlier versions of SQL Server. Even SQL Server Management Studio utilizes SMO, so in theory one should be able to replicate any function that can be performed through SQL Server Management Studio.

Scope of the Article
The scope of this article is to explain the use of the SMO assemblies for the creation and restoration of databases. This involves creating copies of databases from an existing database template backup (.bak).

Possible Applications
Such an approach to database creation can be put to effective use in scenarios such as:

Automated Database creation and Restoration:
If a need arises to create ‘N’ number of databases and restore them from a common database backup (.bak).
Creation of a new database.
Overwriting an existing database.

I will elaborate on the first application: creating copies of databases from a common database backup. The implementation is detailed below.

Reference SMO
To make use of the built-in functionality that SMO objects offer, we have to reference the SMO assemblies from a .NET application. This can be done in Visual Studio 2005 by browsing to Project -> Add Reference and adding the following:

- Microsoft.SqlServer.ConnectionInfo
- Microsoft.SqlServer.Smo
- Microsoft.SqlServer.SmoEnum
- Microsoft.SqlServer.SqlEnum

Also include the following namespace directives in the code editor of your Visual Studio 2005 project:

using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Common;
The Configurable Values which are required are:
- Server Name - Name of the server which you want to connect to and restore the databases on.
- User Name and Password, in case of SQL Authentication.
- Name of the Database.
- Data File Path of the Database - By default it will be stored under the SQL Server directory.
- Log File Path of the Database - By default it will be stored under the SQL Server directory.

Step 1: Connecting to the Server
This can be done either by using SQL authentication or by using the built-in Windows (NT) authentication, depending upon your usage and application.

a) SQL Authentication

ServerConnection conn = new ServerConnection();

// Windows Authentication is false
conn.LoginSecure = false;

// Specify the user name
conn.Login = "<user name>";

// Specify the password
conn.Password = "<password>";

// Specify the server name to be connected to
conn.ServerInstance = "<server name>";

Server svr = new Server(conn);

// Connect to the server using the configured connection
svr.ConnectionContext.Connect();

--------------------------------------------------------------------------------

b) Windows Authentication:

ServerConnection conn = new ServerConnection();

// Windows Authentication is true
conn.LoginSecure = true;

// Specify the server name to be connected to
conn.ServerInstance = "<server name>";

Server svr = new Server(conn);

// Connect to the server using the configured connection
svr.ConnectionContext.Connect();

--------------------------------------------------------------------------------

Step 2: Obtaining Information of the New Database to be Restored

a) Use the Restore object to facilitate the restoration of the new database. Specify the name of the new database to be restored.

Restore res = new Restore();

// Specify the name of the new database
res.Database = "<new database name>";

--------------------------------------------------------------------------------

b) Specify the location of the Backup file from which the new database is to be restored.

// Restore from an already existing database backup
res.Action = RestoreActionType.Database;
res.Devices.AddDevice("<path to the template .bak file>", DeviceType.File);
res.ReplaceDatabase = true;

// DataTable to obtain the values of the data files of the database
DataTable dt;
dt = res.ReadFileList(svr);

// Obtain the existing logical data name and log name of the database backup
string TemplateDBName = dt.Rows[0]["LogicalName"].ToString();
string TemplateLogName = dt.Rows[1]["LogicalName"].ToString();

--------------------------------------------------------------------------------


c) Specify the new location of the database files with the RelocateFiles property. (This is similar to restoring a database from a SQL Server script with the help of the WITH MOVE option.)

// Now relocate the data and log files
RelocateFile reloData = new RelocateFile(TemplateDBName, @"D:\" + "<new database name>" + "_data.mdf");
RelocateFile reloLog = new RelocateFile(TemplateLogName, @"D:\" + "<new database name>" + "_log.ldf");
res.RelocateFiles.Add(reloData);
res.RelocateFiles.Add(reloLog);

--------------------------------------------------------------------------------

d) Restore the Database. This will restore an exact replica of the database backup file which was specified.

// Restore the new database
res.SqlRestore(svr);

// Close the connection
svr.ConnectionContext.Disconnect();

--------------------------------------------------------------------------------

Conclusion
Such an implementation of SMO for automated database restore can be put to use by a Database administrator for ease of use and customization.

Another possible use, which I had implemented, was to improve the efficiency of a data-intensive operation. This was done by automating the restoration of multiple copies of the database. A real-world scenario where such an approach can be used is processing bulk data with the help of staging databases: a particular data-intensive operation was split across similar databases to process individual units of data and spread the load.


Find out more information on http://www.sqlservercentral.com/

Wednesday, February 27, 2008

Smart Client Architecture

From thick clients to thin clients to Smart Clients

We are all accustomed to desktop applications of the past that were designed and developed for Windows environments. These applications had a rich user interface but had their limitations too. This section drills down into what these limitations were and how and why Smart Clients came into being.

For years there was strong demand for rich client applications that ran in Windows environments and could be developed with ease using powerful tools like Microsoft’s VB, VC++, etc. Using these developer tools, one could build executables that would run in any Windows environment, provided the necessary runtime libraries or DLLs were available, and could design applications with rich UIs. These applications, however, suffered from a major drawback in their deployment (commonly known as the DLL Hell problem): they were quite difficult to deploy and maintain.

With these hindrances in mind, browser-based (thin client) applications, which could be deployed and updated from a central location, emerged. This reduced the cost involved in deploying and maintaining these applications. These applications had their drawbacks too: they were devoid of the rich UI that rich client applications provided, and they had to be connected at all times to operate. However, they were a good choice, especially for their ease of deployment and management, whereas the rich client applications carried their deployment and management drawbacks. Hence, thin client applications continued to dominate the software development community for years.

With the increasing demand from businesses worldwide for fast, flexible, efficient, and responsive applications, and the increasing demand for applications that support mobility, Smart Client applications emerged. These applications provide a rich user experience, ease of deployment, and the ability to be updated from a centralized location. They can work in both online and offline modes and provide a fast and responsive UI.


Smart Clients -- combining the best of both worlds

Smart Clients provide a rich user interface, much like a traditional two-tier application, along with the seamless offline operation that was missing from those traditional applications. Smart Client applications combine the best of both worlds -- the best of both fat and thin client applications. They use local resources and local processing, use Web services for communication, and are flexible, since they can be deployed and updated from a centralized server seamlessly. Typically we could have a Web service hold the shared data and functionality, with the Smart Clients consuming this service. The Patterns and Practices group at Microsoft has published the Smart Client Architecture and Design Guide, which “gives you prescriptive guidance on how to overcome architectural challenges and design issues when building smart client solutions. It also provides guidance on how to combine the benefits of traditional rich client applications with the manageability of thin client applications.” You can take a look at this guide here.

The following are the basic characteristics of Smart Clients:

Use of local resources and processing
Support for both online and offline operations
Ease of deployment and configuration
Use of Web services for communication purposes
Support for hot updates

Microsoft .NET provides ample support for designing and developing Smart Client applications in more ways than one. It solves the version conflicts (using assembly metadata, etc.) involved in storing multiple assemblies side by side. This allows an application to execute with the version of an assembly against which it was built and tested. It also provides No-Touch Deployment, hot-update features, and great flexibility in securing assemblies (using cryptography, etc.) and granting specific permissions to an assembly, hence facilitating application stability. Further, you can leverage the rich UI of .NET Windows applications to provide a powerful user experience and use SOAP-based Web services. Finally, you have a Service-Oriented Architecture, and with the introduction of .NET 2.0, you have more power, flexibility, management, and deployment features than ever.