Friday, August 17, 2007

Query Performance Tuning (SQL Server Compact Edition)

You can improve your SQL Server 2005 Compact Edition (SQL Server Compact Edition) application performance by optimizing the queries you use. The following sections outline techniques you can use to optimize query performance.

Improve Indexes

Creating useful indexes is one of the most important ways to achieve better query performance. Useful indexes help you find data with fewer disk I/O operations and less system resource usage.

To create useful indexes, you must understand how the data is used, the types of queries you run and how frequently they are run, and how the query processor can use indexes to find your data quickly.

When you choose which indexes to create, examine your critical queries, whose performance most affects the user experience. Create indexes to specifically aid these queries. After adding an index, rerun the query to see if performance is improved. If it is not, remove the index.
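As a minimal sketch of that workflow (an added example, not from the original article), the following C# fragment assumes a Northwind-style Orders table in a local database file; the connection string, the index name Idx_Orders_Customer, and the sample query are hypothetical:

using System;
using System.Data.SqlServerCe;
using System.Diagnostics;

class IndexExperiment
{
    static void Main()
    {
        using (SqlCeConnection conn = new SqlCeConnection("Data Source=Northwind.sdf"))
        {
            conn.Open();

            // Create a candidate index aimed at a critical query.
            SqlCeCommand create = conn.CreateCommand();
            create.CommandText =
                "CREATE INDEX Idx_Orders_Customer ON Orders (\"Customer ID\")";
            create.ExecuteNonQuery();

            // Rerun the critical query and time it.
            SqlCeCommand query = conn.CreateCommand();
            query.CommandText =
                "SELECT * FROM Orders WHERE \"Customer ID\" = 'ANTON'";
            Stopwatch watch = Stopwatch.StartNew();
            using (SqlCeDataReader reader = query.ExecuteReader())
            {
                while (reader.Read()) { /* consume the rows */ }
            }
            watch.Stop();
            Console.WriteLine("Elapsed: {0} ms", watch.ElapsedMilliseconds);

            // If performance did not improve, remove the index:
            // DROP INDEX Orders.Idx_Orders_Customer
        }
    }
}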

As with most performance optimization techniques, there are tradeoffs. For example, with more indexes, SELECT queries will potentially run faster. However, DML (INSERT, UPDATE, and DELETE) operations will slow down significantly because more indexes must be maintained with each operation. Therefore, if your queries are mostly SELECT statements, more indexes can be helpful. If your application performs many DML operations, you should be conservative with the number of indexes you create.

SQL Server Compact Edition includes support for showplans, which help assess and optimize queries. SQL Server Compact Edition uses the same showplan schema as SQL Server 2005, except that it uses a subset of the operators. For more information, see the Microsoft Showplan Schema at http://schemas.microsoft.com/sqlserver/2004/07/showplan/.

The next few sections provide additional information about creating useful indexes.

Create Highly-Selective Indexes
Indexing on columns used in the WHERE clause of your critical queries frequently improves performance. However, this depends on how selective the index is. Selectivity is the ratio of qualifying rows to total rows. If the ratio is low, the index is highly selective: it can eliminate most of the rows and greatly reduce the size of the result set, which makes it a useful index to create. By contrast, an index that is not selective is not as useful.

A unique index has the greatest selectivity. Only one row can match, which makes it most helpful for queries that intend to return exactly one row. For example, an index on a unique ID column will help you find a particular row quickly.

You can evaluate the selectivity of an index by running the sp_show_statistics stored procedures on SQL Server Compact Edition tables. For example, if you are evaluating the selectivity of two columns, "Customer ID" and "Ship Via", you can run the following stored procedures:

sp_show_statistics_steps 'orders', 'customer id';

RANGE_HI_KEY RANGE_ROWS EQ_ROWS DISTINCT_RANGE_ROWS
------------------------------------------------------------
ALFKI        0          7       0
ANATR        0          4       0
ANTON        0          13      0
AROUT        0          14      0
BERGS        0          23      0
BLAUS        0          8       0
BLONP        0          14      0
BOLID        0          7       0
BONAP        0          19      0
BOTTM        0          20      0
BSBEV        0          12      0
CACTU        0          6       0
CENTC        0          3       0
CHOPS        0          12      0
COMMI        0          5       0
CONSH        0          4       0
DRACD        0          9       0
DUMON        0          8       0
EASTC        0          13      0
ERNSH        0          33      0

(90 rows affected)

And:

sp_show_statistics_steps 'orders', 'ship via';

RANGE_HI_KEY RANGE_ROWS EQ_ROWS DISTINCT_RANGE_ROWS
------------------------------------------------------------
1            0          320     0
2            0          425     0
3            0          333     0

(3 rows affected)


The results show that the "Customer ID" column has a much lower degree of duplication. This means an index on it will be more selective than an index on the "Ship Via" column.

For more information about using these stored procedures, see sp_show_statistics (SQL Server Compact Edition), sp_show_statistics_steps (SQL Server Compact Edition), and sp_show_statistics_columns (SQL Server Compact Edition).

Create Multiple-Column Indexes
Multiple-column indexes are natural extensions of single-column indexes. Multiple-column indexes are useful for evaluating filter expressions that match a prefix set of key columns. For example, the composite index CREATE INDEX Idx_Emp_Name ON Employees ("Last Name" ASC, "First Name" ASC) helps evaluate the following queries:

... WHERE "Last Name" = 'Doe'

... WHERE "Last Name" = 'Doe' AND "First Name" = 'John'

... WHERE "First Name" = 'John' AND "Last Name" = 'Doe'

However, it is not useful for this query:

... WHERE "First Name" = 'John'

When you create a multiple-column index, you should put the most selective columns leftmost in the key. This makes the index more selective when matching several expressions.

Avoid Indexing Small Tables
A small table is one whose contents fit in one or just a few data pages. Avoid indexing very small tables because it is typically more efficient to do a table scan instead. This saves the cost of loading and processing index pages. By not creating an index on very small tables, you prevent the optimizer from selecting one.

SQL Server Compact Edition stores data in 4 KB pages. The page count can be approximated by using the following formula, although the actual count might be slightly larger because of storage engine overhead.

<# of pages> = (<row size in bytes> * <# of rows>) / 4096

For example, suppose a table has the following schema:

Column Name   Type (size)
Order ID      INTEGER (4 bytes)
Product ID    INTEGER (4 bytes)
Unit Price    MONEY (8 bytes)
Quantity      SMALLINT (2 bytes)
Discount      REAL (4 bytes)
The table has 2820 rows. According to the formula, it takes about 16 pages to store its data:

<# of pages> = ((4 + 4 + 8 + 2 + 4) * 2820) / 4096 = 15.15 pages
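If you want to script this estimate, a throwaway C# snippet using the example schema above evaluates the same formula:

using System;

class PageEstimate
{
    static void Main()
    {
        // Row size from the example schema: two INTEGER columns, one MONEY,
        // one SMALLINT, and one REAL (4 + 4 + 8 + 2 + 4 = 22 bytes).
        int rowSize = 4 + 4 + 8 + 2 + 4;
        int rowCount = 2820;
        double pages = (rowSize * (double)rowCount) / 4096;
        Console.WriteLine("~{0:F2} pages (about {1})", pages, Math.Ceiling(pages));
        // Prints: ~15.15 pages (about 16)
    }
}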

Choose What to Index

We recommend that you always create indexes on primary keys. It is frequently useful to also create indexes on foreign keys, because primary keys and foreign keys are frequently used to join tables. Indexes on these keys let the optimizer consider more efficient index join algorithms. If your query joins tables by using other columns, it is frequently helpful to create indexes on those columns for the same reason.

When primary key and foreign key constraints are created, SQL Server Compact Edition automatically creates indexes for them and takes advantage of them when optimizing queries. Remember to keep primary keys and foreign keys small. Joins run faster this way.
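As a hedged sketch (the constraint names and the Northwind-style Orders and Customers tables are assumptions), these are the kinds of constraints for which SQL Server Compact Edition creates indexes automatically:

using System.Data.SqlServerCe;

class ConstraintIndexes
{
    static void Main()
    {
        using (SqlCeConnection conn = new SqlCeConnection("Data Source=Northwind.sdf"))
        {
            conn.Open();
            SqlCeCommand cmd = conn.CreateCommand();

            // A small primary key; the supporting index is created automatically.
            cmd.CommandText = "ALTER TABLE Orders ADD CONSTRAINT PK_Orders " +
                              "PRIMARY KEY (\"Order ID\")";
            cmd.ExecuteNonQuery();

            // A foreign key frequently used in joins; it also gets an index.
            cmd.CommandText = "ALTER TABLE Orders ADD CONSTRAINT FK_Orders_Customers " +
                              "FOREIGN KEY (\"Customer ID\") " +
                              "REFERENCES Customers (\"Customer ID\")";
            cmd.ExecuteNonQuery();
        }
    }
}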

Use Indexes with Filter Clauses
Indexes can be used to speed up the evaluation of certain types of filter clauses. Although all filter clauses reduce the final result set of a query, some can also help reduce the amount of data that must be scanned.

A search argument (SARG) limits a search because it specifies an exact match, a range of values, or a conjunction of two or more items joined by AND. It has one of the following forms:

Column operator <constant or variable>

<constant or variable> operator Column

SARG operators include =, >, <, >=, <=, IN, BETWEEN, and sometimes LIKE (in cases of prefix matching, such as LIKE 'John%'). A SARG can include multiple conditions joined with an AND. SARGs can be queries that match a specific value, such as:

"Customer ID" = 'ANTON'

'Doe' = "Last Name"

SARGs can also be queries that match a range of values, such as:

"Order Date" > '1/1/2002'

"Customer ID" > 'ABCDE' AND "Customer ID" < 'EDCBA'

"Customer ID" IN ('ANTON', 'AROUT')

An expression that does not use SARG operators does not improve performance, because the SQL Server Compact Edition query processor has to evaluate every row to determine whether it meets the filter clause. Therefore, an index is not useful on expressions that do not use SARG operators. Non-SARG operators include NOT, <>, NOT EXISTS, NOT IN, NOT LIKE, and intrinsic functions.
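For illustration (an added example, not from the original text), compare a predicate that hides the column inside an intrinsic function with an equivalent prefix match that qualifies as a SARG; the Customers table and the literals are hypothetical:

class SargExamples
{
    // Non-SARG: the intrinsic function must be applied to every row, so an
    // index on "Customer ID" cannot limit the search.
    const string NonSargable =
        "SELECT * FROM Customers WHERE SUBSTRING(\"Customer ID\", 1, 1) = 'A'";

    // SARG: prefix matching with LIKE establishes a range that an index on
    // "Customer ID" can seek directly.
    const string Sargable =
        "SELECT * FROM Customers WHERE \"Customer ID\" LIKE 'A%'";
}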

Use the Query Optimizer

When determining the access methods for base tables, the SQL Server Compact Edition optimizer determines whether an index exists for a SARG clause. If an index exists, the optimizer evaluates it by calculating how many rows are returned and estimating the cost of finding the qualifying rows by using the index. It chooses indexed access if that has a lower cost than a table scan. An index is potentially useful if its first column or prefix set of columns is used in the SARG, and the SARG establishes a lower bound, upper bound, or both, to limit the search.

Understand Response Time vs. Total Time

Response time is the time it takes for a query to return the first record. Total time is the time it takes for the query to return all records. For an interactive application, response time is important because it is the perceived time for the user to receive visual affirmation that a query is being processed. For a batch application, total time reflects the overall throughput. You have to determine what the performance criteria are for your application and queries, and then design accordingly.

For example, suppose the query returns 100 records and is used to populate a list with the first five records. In this case, you are not concerned with how long it takes to return all 100 records. Instead, you want the query to return the first few records quickly, so that you can populate the list.

Many query operations can be performed without having to store intermediate results. These operations are said to be pipelined. Examples of pipelined operations are projections, selections, and joins. Queries implemented with these operations can return results immediately. Other operations, such as SORT and GROUP-BY, require using all their input before returning results to their parent operations. These operations are said to require materialization. Queries implemented with these operations typically have an initial delay because of materialization. After this initial delay, they typically return records very quickly.

Queries with response time requirements should avoid materialization. For example, using an index to implement ORDER-BY yields better response time than does using sorting. The following section describes this in more detail.

Index the ORDER-BY / GROUP-BY / DISTINCT Columns for Better Response Time
The ORDER-BY, GROUP-BY, and DISTINCT operations are all types of sorting. The SQL Server Compact Edition query processor implements sorting in two ways. If records are already sorted by an index, the processor needs to use only the index. Otherwise, the processor has to use a temporary work table to sort the records first. Such preliminary sorting can cause significant initial delays on devices with lower power CPUs and limited memory, and should be avoided if response time is important.

In the context of multiple-column indexes, for ORDER-BY or GROUP-BY to consider a particular index, the ORDER-BY or GROUP-BY columns must match the prefix set of index columns with the exact order. For example, the index CREATE INDEX Emp_Name ON Employees ("Last Name" ASC, "First Name" ASC) can help optimize the following queries:

... ORDER BY / GROUP BY "Last Name" ...

... ORDER BY / GROUP BY "Last Name", "First Name" ...

It will not help optimize:

... ORDER BY / GROUP BY "First Name" ...

... ORDER BY / GROUP BY "First Name", "Last Name" ...

For a DISTINCT operation to consider a multiple-column index, the projection list must match all index columns, although they do not have to be in the exact order. The previous index can help optimize the following queries:

... DISTINCT "Last Name", "First Name" ...

... DISTINCT "First Name", "Last Name" ...

It will not help optimize:

... DISTINCT "First Name" ...

... DISTINCT "Last Name" ...

Note:
If your query always returns unique rows on its own, avoid specifying the DISTINCT keyword, because it only adds overhead.




Rewrite Subqueries to Use JOIN

Sometimes you can rewrite a subquery to use JOIN and achieve better performance. The advantage of creating a JOIN is that you can evaluate tables in a different order from that defined by the query. The advantage of using a subquery is that it is frequently not necessary to scan all rows from the subquery to evaluate the subquery expression. For example, an EXISTS subquery can return TRUE upon seeing the first qualifying row.

Note:
The SQL Server Compact Edition query processor always rewrites the IN subquery to use JOIN. You do not have to try this approach with queries that contain the IN subquery clause.



For example, to determine all the orders that have at least one item with a 25 percent discount or more, you can use the following EXISTS subquery:

SELECT "Order ID" FROM Orders O

WHERE EXISTS (SELECT "Order ID"

FROM "Order Details" OD

WHERE O."Order ID" = OD."Order ID"

AND Discount >= 0.25)

You can also rewrite this by using JOIN:

SELECT DISTINCT O."Order ID" FROM Orders O INNER JOIN "Order Details"

OD ON O."Order ID" = OD."Order ID" WHERE Discount >= 0.25

Limit Using Outer JOINs
OUTER JOINs are treated differently from INNER JOINs in that the optimizer does not try to rearrange the join order of OUTER JOIN tables as it does to INNER JOIN tables. The outer table (the left table in LEFT OUTER JOIN and the right table in RIGHT OUTER JOIN) is accessed first, followed by the inner table. This fixed join order could lead to execution plans that are less than optimal.

For more information about a query that contains INNER JOIN, see Microsoft Knowledge Base.

Use Parameterized Queries

If your application runs a series of queries that are only different in some constants, you can improve performance by using a parameterized query. For example, to return orders by different customers, you can run the following query:

SELECT "Customer ID" FROM Orders WHERE "Order ID" = ?

Parameterized queries yield better performance by compiling the query only once and executing the compiled plan multiple times. Programmatically, you must hold on to the command object that contains the cached query plan. Destroying the previous command object and creating a new one destroys the cached plan, which requires the query to be recompiled. If you must run several parameterized queries in an interleaved manner, you can create several command objects, each caching the execution plan for one parameterized query. This way, you effectively avoid recompilations for all of them.
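A minimal sketch of this pattern with the System.Data.SqlServerCe provider follows; the connection string, the sample order IDs, and the use of a named @id parameter are assumptions. The command object is created once, prepared, and reused for every execution:

using System;
using System.Data;
using System.Data.SqlServerCe;

class ParameterizedLookup
{
    static void Main()
    {
        int[] orderIds = { 10248, 10249, 10250 };   // hypothetical IDs

        using (SqlCeConnection conn = new SqlCeConnection("Data Source=Northwind.sdf"))
        {
            conn.Open();

            // Compile the query once; keep the command object alive so the
            // cached plan is reused on every execution.
            SqlCeCommand cmd = conn.CreateCommand();
            cmd.CommandText =
                "SELECT \"Customer ID\" FROM Orders WHERE \"Order ID\" = @id";
            cmd.Parameters.Add("@id", SqlDbType.Int);
            cmd.Prepare();

            foreach (int id in orderIds)
            {
                cmd.Parameters["@id"].Value = id;
                object customer = cmd.ExecuteScalar();
                Console.WriteLine("{0} -> {1}", id, customer);
            }
        }
    }
}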

Query Only When You Must

The SQL Server Compact Edition query processor is a powerful tool for querying data stored in your relational database. However, there is an intrinsic cost associated with any query processor: it must compile, optimize, and generate an execution plan before it starts the real work of executing that plan. This overhead is proportionally largest for simple queries that otherwise finish quickly. Therefore, implementing the operation yourself can sometimes provide a vast performance improvement. If every millisecond counts in your critical component, we recommend that you consider the alternative of implementing simple queries yourself. For large and complex queries, the job is still best left to the query processor.

For example, suppose you want to look up the customer ID for a series of orders arranged by their order IDs. There are two ways to accomplish this. First, you could follow these steps for each lookup:

Open the Orders base table

Find the row, using the specific "Order ID"

Retrieve the "Customer ID"

Or you could issue the following query for each lookup:

SELECT "Customer ID" FROM Orders WHERE "Order ID" =


The query-based solution is simpler but slower than the manual solution, because the SQL Server Compact Edition query processor translates the declarative SQL statement into the same three operations that you could implement manually. Those three steps are then performed in sequence. Your choice of which method to use will depend on whether simplicity or performance is more important in your application.
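The manual alternative corresponds to the provider's TableDirect mode. The following is a rough sketch, assuming a local Northwind-style database, an index named PK_Orders on "Order ID", and a hypothetical order ID:

using System;
using System.Data;
using System.Data.SqlServerCe;

class ManualLookup
{
    static void Main()
    {
        using (SqlCeConnection conn = new SqlCeConnection("Data Source=Northwind.sdf"))
        {
            conn.Open();

            // Step 1: open the Orders base table directly.
            SqlCeCommand cmd = conn.CreateCommand();
            cmd.CommandType = CommandType.TableDirect;
            cmd.CommandText = "Orders";
            cmd.IndexName = "PK_Orders";   // assumed index on "Order ID"

            using (SqlCeDataReader reader = cmd.ExecuteReader())
            {
                // Step 2: find the row with the specific "Order ID".
                reader.Seek(DbSeekOptions.FirstEqual, 10248);
                if (reader.Read())
                {
                    // Step 3: retrieve the "Customer ID".
                    Console.WriteLine(reader["Customer ID"]);
                }
            }
        }
    }
}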

Monday, August 06, 2007

Participating in Transactions in XML Web Services Created Using ASP.NET

The transaction support for XML Web services leverages the support found in the common language runtime, which is based on the same distributed transaction model found in Microsoft Transaction Server (MTS) and COM+ Services. The model is based on declaratively deciding whether an object participates in a transaction, rather than writing specific code to handle committing and rolling back a transaction. For an XML Web service created using ASP.NET, you can declare an XML Web service's transactional behavior by setting the TransactionOption property of the WebMethod attribute applied to an XML Web service method. If an exception is thrown while the XML Web service method is executing, the transaction is automatically aborted; conversely, if no exception occurs, the transaction is automatically committed.

The TransactionOption property of the WebMethod attribute specifies how an XML Web service method participates in a transaction. Although this declarative level represents the logic of a transaction, it is one step removed from the physical transaction. A physical transaction occurs when a transactional object accesses a data resource, such as a database or message queue. The transaction associated with the object automatically flows to the appropriate resource manager. A .NET Framework data provider, such as the .NET Framework Data Provider for SQL Server or the .NET Framework Data Provider for OLE DB, looks up the transaction in the object's context and enlists in the transaction through the Distributed Transaction Coordinator (DTC). The entire transaction occurs automatically.

XML Web service methods can only participate in a transaction as the root of a new transaction. As the root of a new transaction, all interactions with resource managers, such as servers running Microsoft SQL Server, Microsoft Message Queuing (also known as MSMQ), and Microsoft Host Integration Server maintain the ACID properties required to run robust distributed applications. XML Web service methods that call other XML Web service methods participate in different transactions, as transactions do not flow across XML Web service methods.

The following code example shows an XML Web service that exposes a single XML Web service method, called DeleteAuthor. This XML Web service method performs a database operation that is scoped within a transaction. If the database operation throws an exception, the transaction is automatically aborted; otherwise, the transaction is automatically committed.

<%@ WebService Language="C#" Class="Orders" %>
<%@ Assembly name="System.EnterpriseServices,Version=1.0.3300.0,Culture=neutral,PublicKeyToken=b03f5f7f11d50a3a" %>

using System;
using System.Data;
using System.Data.SqlClient;
using System.Web.Services;
using System.EnterpriseServices;

public class Orders : WebService
{
[ WebMethod(TransactionOption=TransactionOption.RequiresNew)]
public int DeleteAuthor(string lastName)
{
String deleteCmd = "DELETE FROM authors WHERE au_lname='" +
lastName + "'" ;
String exceptionCausingCmdSQL = "DELETE FROM NonExistingTable WHERE
au_lname='" + lastName + "'" ;

SqlConnection sqlConn = new SqlConnection(
"Persist Security Info=False;Integrated Security=SSPI;database=pubs;server=myserver");

SqlCommand deleteCmd = new SqlCommand(deleteCmdSQL,sqlConn);
SqlCommand exceptionCausingCmd = new
SqlCommand(exceptionCausingCmdSQL,sqlConn);

// This command should execute properly.
deleteCmd.Connection.Open();
deleteCmd.ExecuteNonQuery();

// This command results in an exception, so the first command is
// automatically rolled back. Since the XML Web service method is
// participating in a transaction, and an exception occurs, ASP.NET
// automatically aborts the transaction. The deleteCmd that
// executed properly is rolled back.

int cmdResult = exceptionCausingCmd.ExecuteNonQuery();

sqlConn.Close();

return cmdResult;
}
}

Managing State in XML Web Services Created Using ASP.NET

XML Web services have access to the same state management options as other ASP.NET applications when the class implementing the XML Web service derives from the WebService class. The WebService class contains many of the common ASP.NET objects, including the Session and Application objects.

The Application object provides a mechanism for storing data that is accessible to all code running within the Web application, whereas the Session object allows data to be stored on a per-client session basis. If the client supports cookies, a cookie can identify the client session. Data stored in the Session object is available only when the EnableSession property of the WebMethod attribute is set to true for a class deriving from WebService. A class deriving from WebService automatically has access to the Application object.


To access and store state specific to a particular client session

Declare an XML Web service method, setting the EnableSession property of the WebMethod attribute to true.

[ WebMethod(EnableSession=true) ]
public int PerSessionServiceUsage()

The following code example is an XML Web service with two XML Web service methods: ServiceUsage and PerSessionServiceUsage. ServiceUsage is a hit counter that increments every time the method is accessed, regardless of which client communicates with it. For instance, if three clients call the ServiceUsage XML Web service method consecutively, the last one receives a return value of 3. PerSessionServiceUsage, however, is a hit counter for a particular client session. If three clients access PerSessionServiceUsage consecutively, each will receive the same result of 1 on the first call.


<%@ WebService Language="C#" Class="ServerUsage" %>
using System.Web.Services;

public class ServerUsage : WebService {
[ WebMethod(Description="Number of times this service has been accessed.") ]
public int ServiceUsage() {
// If the XML Web service method hasn't been accessed,
// initialize it to 1.
if (Application["appMyServiceUsage"] == null)
{
Application["appMyServiceUsage"] = 1;
}
else
{
// Increment the usage count.
Application["appMyServiceUsage"] = ((int) Application["appMyServiceUsage"]) + 1;
}
return (int) Application["appMyServiceUsage"];
}

[ WebMethod(Description="Number of times a particualr client session has accessed this XML Web service method.",EnableSession=true) ]
public int PerSessionServiceUsage() {
// If the XML Web service method hasn't been accessed, initialize
// it to 1.
if (Session["MyServiceUsage"] == null)
{
Session["MyServiceUsage"] = 1;
}
else
{
// Increment the usage count.
Session["MyServiceUsage"] = ((int) Session["MyServiceUsage"]) + 1;
}
return (int) Session["MyServiceUsage"];
}
}

Wednesday, August 01, 2007

Post data to other Web pages with ASP.NET 2.0

ASP.NET 2.0's PostBackUrl attribute allows you to designate where a Web form and its data is sent when submitted.

Standard HTML forms allow you to post or send data to another page or application via the method attribute of the form element. In ASP.NET 1.x, Web pages utilise the postback mechanism where the page's data is submitted back to the page itself. ASP.NET 2.0 provides additional functionality by allowing a Web page to be submitted to another page. This week, I examine this new feature.


The old approach

I want to take a minute to cover the older HTML approach to gain perspective. The HTML form element contains the action attribute to designate which server-side resource will handle the submitted data. The following code provides an example.

<html>
<head>
<title>Sample HTML form</title></head>
<body>
<form name="frmSample" method="post" action="target_url">
<input type="text" name="fullname" id="fullname" />
<input type="submit" name="Submit" value="submit" />
</form>
</body></html>

The value entered in the text field (fullname) is submitted to the page or program specified in the form element's action attribute. For ASP.NET developers, standard HTML forms are seldom, if ever, used.

ASP.NET developers have plenty of options when faced with the task of passing values from page to page. This includes session variables, cookies, querystring variables, caching, and even Server.Transfer, but ASP.NET 2.0 adds another option.


ASP.NET 2.0 alternative

When designing ASP.NET 2.0, Microsoft recognised the need to easily cross post data between Web forms. With that in mind, the PostBackUrl attribute was added to the ASP.NET button control. It allows you to designate where the form and its data are sent when submitted (via the URL assigned to the PostBackUrl attribute). Basically, cross posting is a client-side transfer that uses JavaScript behind the scenes.

The ASP.NET Web form in Listing A has two text fields (name and e-mail address) and a button to submit the data. The PostBackUrl attribute of the submit button is assigned to another Web form, so the data is sent to that page when the form is submitted. Note: The form is set up to post the data when it is submitted via the method attribute of the form element, but this is unnecessary since all cross postbacks utilise post by design.

Listing A

<%@ Page language="vb" %>

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >

<html><head>

<title>Cross Postback Example</title>

</head><body>

<form id="frmCrossPostback1" method="post" runat="server">

<asp:Label ID="lblName" runat="server" Text="Name:"></asp:Label>

<asp:TextBox ID="txtName" runat="server"></asp:TextBox><br />

<asp:Label ID="lblE-mailAddress" runat="server" Text="E-mail:"></asp:Label>

<asp:TextBox ID="txtE-mailAddress" runat="server"></asp:TextBox><br />

<asp:Button ID="btnSubmit" runat="server" Text="Submit" PostBackUrl="CrossPostback2.aspx" />

</form></body></html>
Working with previous pages

The IsPostBack property of an ASP.NET Page object returns false when the page is loaded via a cross postback call. However, a new property called PreviousPage allows you to access and work with pages utilising cross postbacks.

When a cross page request occurs, the PreviousPage property of the current Page class holds a reference to the page that caused the postback. If the page is not the target of a cross-page posting, or if the pages are in different applications, the PreviousPage property is not initialised.

You can determine whether a page is being loaded as a result of a cross postback by checking the PreviousPage object. A null value indicates a regular load, while a non-null value signals a cross postback. Also, the Page class contains a new property called IsCrossPagePostBack to specifically determine whether the page is loading as the result of a cross postback.

Once you determine that a cross postback has occurred, you can access controls on the calling page via the PreviousPage object's FindControl method. The code in Listing B is the second page in our example; it is called by the page in the previous listing.

Listing B


<%@ Page language="vb" %>

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >

<html><head>

<title>Cross Postback Example 2</title>

</head><body>

<script language="vb" runat="server">

Sub Page_Load()

If Not (Page.PreviousPage Is Nothing) Then

If Not (Page.IsCrossPagePostBack) Then
Response.Write("Name:" + CType(PreviousPage.FindControl("txtName"), TextBox).Text + "<BR>")
Response.Write("E-mail:" + CType(PreviousPage.FindControl("txtE-mailAddress"), TextBox).Text + "<BR>")

End If

End If

End Sub

</script></body></html>
The page determines whether it is being called via a cross postback. If so, it accesses the values from the calling page by using the FindControl method, converting the controls that method returns to TextBox controls, and displaying their Text properties.


You can convert the entire PreviousPage object to the type of the page initiating the cross postback. This approach allows you to access the public properties and methods of a page. Before I offer an example of this technique, I must rewrite the first example to include public properties. Listing C is the first listing with two properties added to give access to field values.

Listing C


<%@ Page language="vb" %>

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >

<html><head>

<title>Cross Postback Example</title>

<script language="vb" runat="server">

Public ReadOnly Property Name

Get

Return Me.txtName.Text

End Get

End Property

Public ReadOnly Property E-mailAddress

Get

Return Me.txtE-mailAddress.Text

End Get

End Property

</script></head><body>

<form id="frmCrossPostback1" method="post" runat="server">

<asp:Label ID="lblName" runat="server" Text="Name:"></asp:Label>

<asp:TextBox ID="txtName" runat="server"></asp:TextBox><br />

<asp:Label ID="lblE-mailAddress" runat="server" Text="E-mail:"></asp:Label>

<asp:TextBox ID="txtE-mailAddress" runat="server"></asp:TextBox><br />

<asp:Button ID="btnSubmit" runat="server" Text="Submit" PostBackUrl="CrossPostback2.aspx" />

</form></body></html>
Now that the properties have been established, you can easily access them. One caveat is that the Page class's PreviousPage object must be converted to the appropriate page type before you can work with the properties.

Listing D illustrates this point by referencing the calling page at the top of the page so that it can be used in the code. Once it is referenced, the actual VB.NET code converts the PreviousPage object to the appropriate type using the CType function. At this point, the properties may be used as the code demonstrates.

Listing D


<%@ Page language="vb"
%>

<%@ Reference Page="~/CrossPostback1.aspx" %>

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >

<html><head>

<title>Cross Postback Example 3</title>

</head><body>

<script language="vb" runat="server">

Sub Page_Load()

Dim cppPage As CrossPostback1_aspx

If Not (Page.PreviousPage Is Nothing) Then

If Not (Page.IsCrossPagePostBack) Then

If (Page.PreviousPage.IsValid) Then
cppPage = CType(PreviousPage, CrossPostBack1_aspx)
Response.Write("Name:" + cppPage.Name + "<br>")
Response.Write("E-mail:" + cppPage.E-mailAddress)

End If

End If

End If

End Sub

</script></body></html>

A note about the use of the IsValid property of the PreviousPage object in the previous listing: it allows you to ensure that a page passed all validation tests before you work with it.

Summary

Passing data values between Web pages has many applications, including maintaining personal user information. Legacy Web solutions, like using the querystring and cookies, allow you to pass and maintain values, and you can easily direct one page to another when submitted. ASP.NET 1.1 supported these solutions as well as additional ones, but ASP.NET 2.0 addresses the issue head on by supporting cross page postbacks. This makes it easy for one Web page to process data from another. Take advantage of this new feature when you're working on your next ASP.NET 2.0 application.