
Friday, 19 September 2008

Nant Task for Sandcastle

I have been thinking of writing a NAnt task for Sandcastle for quite some time, so I decided to knock something up. I have been fairly busy at work and haven't been able to spend much time on this, but having found some time on the train, I have just finished this piece of code.

I am quite a fan of NAnt, and the NAnt Contrib project has a task for NDoc, which is what inspired me to write one for Sandcastle. Sandcastle supports the newer .NET frameworks quite well, and most people who were using NDoc for build purposes will have migrated to Sandcastle because of those capabilities.

This task is intended for users who do not want to invoke the Sandcastle command-line builder as an external process from their NAnt scripts. The logging of this task, including the Sandcastle output, is streamed into the NAnt log.

I have based the task schema on the bare minimum that requires direct configuration in NAnt; everything else can be configured in the Sandcastle project itself. What is shown below reflects the features I need as a developer, so I am hoping everyone is going to be happy with it :).

The following readme is placed in a text file alongside the installer zip file.

The Sandcastle task is built using the following components:
Nant 0.86
Sandcastle Help File Builder
Sandcastle UI Builder

The core components required by the task are installed by the installer.

The installer lets you install the task into a folder of your choice, but it does not check whether NAnt is installed in that directory (generally C:\Program Files\Nant\bin).
Installing the task in the same folder as NAnt is the only scenario that has been tested.

The path of NAnt should be added to the PATH environment variable, as instructed in the NAnt installation instructions.

In addition to installing the task, the following configuration needs to be added to the Nant.exe.config file.

Under the elements:

<configuration>
  <nant>
    <frameworks>
      <platform>
        <task-assemblies>
          <!-- NAnt Sandcastle task -->
          <include name="NAnt.Contrib.Tasks.Sandcastle.dll"/>
If you wish to install the Sandcastle task into another folder, the NAnt probing paths need to be set to look at that folder; however, this is not recommended.

Source code is also provided, so feel free to customise or edit it. The source code can be found at Install folder\sandcastle task\Src.

I have tested this on my system by installing Sandcastle, NAnt and this task, and it works fine. Obviously that amounts to saying "works on my machine", so if you see any problems using it please let me know.

The Sandcastle task schema for the NAnt script is shown below; project and output are the only two required attributes.

<sandcastle 
    project="${Sandcastle project file path}" 
    output="Output location for Sandcastle files">

    <showMissing 
        remarks="false" 
        params="false" 
        returns="false" 
        values="false" 
        namespaces="false" 
        summaries="false">
    </showMissing>
    
    <document 
        internals="false" 
        privates="false" 
        protected"="false" 
        attributes="false"
        copyrightText="" 
        feedbackemail="" 
        footer="">
    </document>
</sandcastle>

e.g. NAnt build file

<?xml version="1.0"?>
<project name="Hello World" default="build">
    <property name="projfile" value="C:\Documentation.shfb"/>
    <target name="build">
        <sandcastle project="${projfile}" output="C:\Documentation\Help">
            <document copyrightText="Copyright@ TSQLDOTNET Limited" feedbackemail="srinivas.s@tsqldotnet.com"/>
        </sandcastle>
    </target>
</project>

Link: Nant Sandcastle Installer

The download includes the prerequisites, hence it is bulky at 40 MB. The prerequisites include the Sandcastle installer, Windows Installer 3.1 and the .NET Framework redistributable (DotNetFx).

Tuesday, 2 September 2008

Google Chrome

Now that it is all over the news, I went about downloading the browser to get a feel for it. The download of the ClickOnce installer was about 474 KB, which is a really neat start. The download of the full installer and the install took about a minute (on a 4 Mb line), and in another minute it had imported all my settings from IE. That was really impressive: two and a half minutes and it was all set up and running.

Features that clearly stand out are the search history, dynamic tabs and the simplicity of bookmarking pages. There is more, but these make me happy already :).

A useful thing is the "Search your history" box; I have wished for something like this for ages and it is nice to see the feature. I hope Google does this for favourites as well. I have been accumulating favourites for the last seven years, and I sometimes wish I could search through them; that would surely be useful to me :)

The UI is definitely better than the heavy IE7. Tabs are nothing new in a browser these days, but check out the dynamic tabs feature in Google Chrome; it is definitely cool. Drag a tab out and see how it works.

Bookmarking is easier and the download feature is different to other browsers.

It seems like Google has just started. I am already thinking of using this browser, but I am not sure about any bottlenecks yet, so I will have to wait and see.

I am quite happy with what the beta offers. If Google manages to hold off IE, I bet we will see this being used more widely.

If you want to download it, go to http://tools.google.com/chrome/?hl=en-GB

Wednesday, 13 August 2008

Determine a Leap Year?

Armando Prato has written a SQL tip on how to determine a leap year, and the trick used is pretty neat. His example is in T-SQL, as shown below, but I guess we could use the idea in any language.

create function dbo.fn_IsLeapYear (@year int)
returns bit
as
begin
    return
    (
        select case datepart(mm, dateadd(dd, 1, cast((cast(@year as varchar(4)) + '0228') as datetime)))
                    when 2 then 1
                    else 0
               end
    )
end
go

I like the idea of appending 0228 to the year to find out whether it is a leap year. The function takes in the year, appends '0228' to it (for February 28th) and adds a day. If the month of the next day is 2, then we are still in February, so it must be a leap year; if not, it is not a leap year.

In C# this could be something like the following; I just thought it was worth mentioning on the blog, for my own record at the least.

bool isLeapYear = new DateTime(year, 2, 28).AddDays(1).Month == 2; // year is the int year value to test
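For what it is worth, the framework already has a built-in check; the snippet below, which assumes a sample year variable, just shows that the trick and DateTime.IsLeapYear agree.

using System;

class LeapYearCheck
{
    static void Main()
    {
        int year = 2008; // sample year for illustration
        bool byTrick = new DateTime(year, 2, 28).AddDays(1).Month == 2;
        bool byFramework = DateTime.IsLeapYear(year);
        Console.WriteLine("{0}: trick says {1}, DateTime.IsLeapYear says {2}", year, byTrick, byFramework);
    }
}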

Friday, 8 August 2008

Tracking v/s Capturing changes

Change Data Capture has by far been my favourite feature every time I think about SQL Server 2008. The really neat bit is something I missed during my learning process: there are two flavours to capturing changes, the change itself and the data that has changed, and this is what distinguishes Change Tracking from Change Data Capture in SQL Server 2008.

Change data capture provides historical change information for a user table by capturing both the fact that DML changes were made and the actual data that was changed. Changes are captured by an asynchronous process that reads the transaction log and has a low impact on the system. When you want to stage data in logical blocks, such as in a website publishing engine or a clearing system, this feature can prove very useful, mainly because of the granularity of the changes that are captured and the fact that they are stored with no coupling to the object whose changes are captured.

Change tracking, on the other hand, captures the fact that rows in a table changed, but does not capture the data itself. This allows applications to determine which rows have changed, with the latest row data being available in the user tables. Therefore, change tracking is more limited in the historical questions it can answer compared to change data capture. However, for applications that do not require historical information, there is far less storage overhead because the changed data is not captured; it is the captured data which causes the database to grow. A synchronous tracking mechanism is used to track the changes and has been designed to have minimal overhead on DML operations.

Either of these features can be used to synchronise applications or their database engines. Synchronisation can be implemented in applications in two directions: one-way and two-way.

One-way synchronisation applications, such as a client or mid-tier caching application, can be built using change tracking. For example, a caching application requires data to be stored in the database and to be cached in other data stores. In this scenario the application must be able to keep the cache up to date with any changes that have been made to the database; there are no changes to pass back to the Database Engine.
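As a rough sketch of the one-way caching case, the C# below polls SQL Server 2008 change tracking for rows that changed since the last synchronisation. The Products table, its ProductID key and the connection string are made-up placeholders, and change tracking is assumed to be enabled on both the database and the table.

using System;
using System.Data;
using System.Data.SqlClient;

class CacheRefresher
{
    // Returns the version the next refresh should start from.
    static long RefreshCache(string connectionString, long lastSyncVersion)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Capture the current version first so nothing is missed next time round.
            long currentVersion;
            using (var versionCommand = new SqlCommand("SELECT CHANGE_TRACKING_CURRENT_VERSION()", connection))
            {
                currentVersion = (long)versionCommand.ExecuteScalar();
            }

            // CHANGETABLE returns the keys and the type of change, not the old data;
            // the latest values are joined in from the user table itself.
            const string sql =
                @"SELECT ct.ProductID, ct.SYS_CHANGE_OPERATION, p.Name, p.ListPrice
                  FROM CHANGETABLE(CHANGES dbo.Products, @lastSyncVersion) AS ct
                  LEFT JOIN dbo.Products AS p ON p.ProductID = ct.ProductID";

            using (var changesCommand = new SqlCommand(sql, connection))
            {
                changesCommand.Parameters.Add("@lastSyncVersion", SqlDbType.BigInt).Value = lastSyncVersion;
                using (SqlDataReader reader = changesCommand.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Apply the insert, update or delete to the local cache here.
                    }
                }
            }

            return currentVersion;
        }
    }
}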

In two-way synchronisation, the data in the Database Engine is synchronised with one or more data stores. The data in those stores can be updated, and the changes must be synchronised back to the database. A good example of two-way synchronisation is an occasionally connected application, such as a mobile application. In this type of application, a client application queries and updates a local store. When a connection is available between client and server, the application synchronises with the server, and changed data flows in both directions. In two-way synchronisation, applications must be able to detect conflicts. A conflict occurs if the same data was changed in both data stores in the time between synchronisations. With the ability to detect conflicts, an application can make sure that changes are not lost.

So my interpretation that change capture and change tracking meant the same thing proved to be wrong. This really useful feature can be put to use effectively and in a scalable manner by choosing the right flavour of change capture based on the needs and nature of your application. There is no denying that applications on SQL Server versions prior to 2005 will need a major overhaul if there is an existing change-capture mechanism in place; that said, it is best not to underestimate the effort of implementing Change Data Capture for an existing application. New applications, however, could base their designs around this feature and benefit rapidly.

On this note, a quick word for people who use log shipping: that feature is useful when batch processing of transactions needs to happen at a regular frequency, but it is still limited in that it cannot identify each transaction individually. There is no denying, however, that it is the best choice for disaster recovery.

Monday, 28 July 2008

Concurrency Control in .Net Data Tiers

Most of the data in our applications is stored in relational databases; although there are other ways of storing data, I would guess about 75% of applications use an RDBMS. While using these databases we write .NET code which functions correctly but may incorrectly write, read or handle data at run time. A common case I have found is that developers tend not to use some of the features built into the RDBMS under the covers, and concentrate so much on getting the application tier right that they fall into pitfalls arising out of data handling. One such issue is concurrency of data; it is one of those things developers do not look at when they hit a data handling issue, and a consequence is a lack of transaction capabilities. Sometimes they do not even analyse that the root cause could be data handling. Agree or not, I at least cannot think of a business application without transactions; more often than not, business processes in an application translate into a transaction block or unit.

Concurrency is one such issue which can be controlled, using either an optimistic or a pessimistic mechanism. Whichever mode is exercised, it is important to understand the concept and how it can be applied.

Optimistic concurrency lets the last update succeed and can leave the data viewed by the end user stale if an update happened after the data was displayed. It is the application's responsibility to detect stale data and then decide whether to perform an update. Optimistic concurrency is used extensively in .NET to address the needs of mobile and disconnected applications, where locking data rows for prolonged periods of time would be infeasible. Also, maintaining record locks requires a persistent connection to the database server, which is not possible in disconnected applications.
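A minimal sketch of the optimistic approach in ADO.NET is shown below; it assumes an Orders table carrying a rowversion column, and the table, column and connection string names are illustrative only.

using System.Data;
using System.Data.SqlClient;

class OrderRepository
{
    // Returns false when someone else updated the row first.
    static bool UpdateOrderStatus(string connectionString, int orderId, string newStatus, byte[] originalRowVersion)
    {
        const string sql =
            @"UPDATE dbo.Orders
              SET Status = @Status
              WHERE OrderId = @OrderId
                AND RowVersion = @OriginalRowVersion";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.Add("@Status", SqlDbType.NVarChar, 50).Value = newStatus;
            command.Parameters.Add("@OrderId", SqlDbType.Int).Value = orderId;
            command.Parameters.Add("@OriginalRowVersion", SqlDbType.Timestamp).Value = originalRowVersion;

            connection.Open();
            // Zero rows affected means the row changed since it was read; the
            // application decides whether to refresh, merge or abandon the edit.
            return command.ExecuteNonQuery() == 1;
        }
    }
}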

In the pessimistic scenario, read locks are obtained by the consumer of the data and any updates to the same data are prevented. Pessimistic concurrency requires a persistent connection to the database and is not a scalable option when users are interacting with data, because records might be locked for relatively large periods of time.

These are the mechanisms you can adopt. To implement them in the data tier of a .NET application, developers use transactions.

Most of today’s applications need to support transactions for maintaining the integrity of a system’s data. There are several approaches to transaction management; however, each approach fits into one of two basic programming models:

Manual transactions. You write code that uses the transaction support features of either ADO.NET or Transact-SQL directly in your component code or stored procedures, respectively.
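A bare-bones sketch of the manual model using ADO.NET is shown below; the connection string and the two UPDATE statements are placeholders.

using System.Data.SqlClient;

class StockTransfer
{
    static void Transfer(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlTransaction transaction = connection.BeginTransaction())
            {
                try
                {
                    using (var debit = new SqlCommand("UPDATE dbo.Stock SET Quantity = Quantity - 1 WHERE ProductId = 1", connection, transaction))
                    {
                        debit.ExecuteNonQuery();
                    }

                    using (var credit = new SqlCommand("UPDATE dbo.Stock SET Quantity = Quantity + 1 WHERE ProductId = 2", connection, transaction))
                    {
                        credit.ExecuteNonQuery();
                    }

                    transaction.Commit(); // both statements succeed or neither does
                }
                catch
                {
                    transaction.Rollback();
                    throw;
                }
            }
        }
    }
}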

Automatic transactions. Using Enterprise Services (COM+), you add declarative  attributes to your .NET classes to specify the transactional requirements of your objects at run time. You can use this model to easily configure multiple components to perform work within the same transaction.

To decide whether you want transactions in SQL code, ADO.NET or automatic transactions, I found a simple decision tree which helps you choose which method to use; of course, you first make the decision on whether you need transactions in .NET at all.

[Image: decision tree for choosing a transaction model]

Friday, 25 July 2008

Astoria Data Services – IIS + SQL Server ?

A few months ago I had the opportunity to preview Astoria Data Services at a NextGen User Group meeting, and I kept thinking I had seen this before but failed to recollect where at the time. Read the following and you will see why I am pointing this out. At the time of the preview MS promised to add security to Astoria; basically all I could see was SQL queries on the browser/request URL.

When you install IIS on a machine, the Windows management console gives you an option to configure SQL XML support for IIS. Do this, then configure a virtual directory in IIS, which offers quite a decent list of security settings. Configure these, browse to the URL with a SQL query in it, and see what happens; there isn't much difference, is there?

I am not being cynical, but I want to point out that this is not new. We will, however, have to wait and see what additional features are offered by Astoria before we can comment.

Wednesday, 23 July 2008

Oracle 9i and 11g Data Access Components for .Net

I am currently working with both SQL Server and Oracle servers, and our products communicate with both databases. In my previous job, where we used only SQL Server, everything was easy because it was native to Microsoft's products; the world of .NET with Oracle, however, has its pain points. A particular scenario I came across today was having a web server with both the Oracle 9i Data Access Components and the 11g Data Access Components installed, and getting them to work for two different ASP.NET sites. I initially installed the ODAC components without the client, thinking that was enough for the ASP.NET site to work; as it turns out, there is more to it than I thought.

When I started, the Oracle 9i client components were already installed on the server and the products using the 9i components were working fine. However, the moment I installed the 11g Data Access Components, the products using 9i stopped working and threw exceptions, and the products using the 11g components were not working either. So I had to uninstall the 11g Data Access Components to leave Product A working. To sort out this problem I googled a lot and found information scattered all over the place, so in my need to have everything in one place I thought I would jot down a few notes. To fix the issue I had to do the following:

Install an Oracle 11g client home.

Install the Oracle 11g Data Access Components into the home created in the previous step.

Then we have to make sure the odp.net folder, which resides under the home directory, grants the ASP.NET and Network Service accounts read and execute permissions on all sub-folders and files.

It doesn't end there: if your connection strings use TNS names, configure tnsnames.ora in the network\admin folder of the 11g client home. In our case the 11g components were used to access 9i databases, so I just copied the 9i tnsnames.ora file into the 11g client.

Once all this is done you should be good to go with your applications. Most provider-related exceptions should be resolved when you follow this procedure.
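For reference, a tiny smoke test against the 11g provider might look like the sketch below; the TNS alias and credentials are made up, and the alias must exist in the tnsnames.ora copied into the 11g client home.

using System;
using Oracle.DataAccess.Client; // ODP.NET provider installed with the 11g ODAC

class OracleSmokeTest
{
    static void Main()
    {
        const string connectionString = "Data Source=MYDB9I;User Id=scott;Password=tiger;";
        using (var connection = new OracleConnection(connectionString))
        using (var command = new OracleCommand("SELECT SYSDATE FROM DUAL", connection))
        {
            connection.Open();
            Console.WriteLine("Connected, server time is {0}", command.ExecuteScalar());
        }
    }
}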

Through with the 2nd year, 1 more to go

La la lalalalalala!!! I just checked my 2nd year MBA results; quite a relief to have passed with some good scores. I was a bit worried I wouldn't get through, considering the power naps between assignment work...

Tuesday, 22 July 2008

Processes, Engineering and Organisations

I reflect on my minuscule career of eight years and some management studies, and I find myself thinking about how I would want to do business or run an organisation. Being brutally factual about it, I understand profit is the basis for any business, but my point is that as profit increases and businesses grow, the rate of return decreases due to scalability and efficiency issues. As I start my rant I have to mention there is a clear difference between engineering processes and quality processes: processes which are used to engineer a product are different from processes used to run the business. Trying to smudge one into the other in the name of organisational vision is in itself a game most service providers play, and the IT industry is full of such innuendos. Does any organisation care to mention what engineering processes are going to be used in their project tenders? I guess not. Somehow the most important things for engineering, which is the task in hand, fail to make the list of priorities; we are too busy to pay attention to detail and want only the bare bones in place.

I have been working on improving engineering processes for some time now and have been at both ends, creating and consuming processes. Some organisations use the fixed-cost project model and would probably say "I will pass", as the stakeholders are only interested in their projects and not in the engineering process itself; what they do like, however, is an accreditation such as ISO. Even though the PMO exists as a functional body, it does not really work as a unified body beyond trying to sort out dependencies and integration issues, and then they wonder why most projects are over budget and running longer than they should. If these organisations join the bandwagon of adopting a process like Scrum, PRINCE or CMM, they are not going to find much mileage unless they have their fundamentals sorted out. Simple things like infrastructure, configuration management and engineering processes somehow make it onto the list of project requisites only at the end of a disaster, or, where development has been outsourced, in a Root Cause Analysis. The funny thing about the RCA is that, in my experience, no one bothers to look at it and there are no lessons learnt either; funnier still, the person who has to produce it also has to mask the most obvious factors which increase cost.

To run an organisation with innovative engineering and quality processes takes more than training a bunch of individuals on some process and asking them to get on with it. Organisational leaders such as CTOs and CEOs need to understand that unless it is pipelined from the top in the form of actions, they are not going to maximise return on investment. Until they do so, PMO bodies will not action anything, because at the moment every organisation is so concerned about short-term goals and profits that it has laid the rules of sustainable business to rest. It is not that organisations don't do enough; it is just that they try to do something only when the eggs are rotten.

Oh, I work in IT, so for the references to CMM, Scrum or PRINCE you can substitute anything: Six Sigma, ISO, TQM. I am on a train at 10 at night and this is so not what I should be thinking about.

Wednesday, 9 July 2008

Data Modelling Jazz

When we think of data modelling it is often a pretty picture created in Visio; for someone more serious about data modelling, it is representing entities, attributes and relationships in a meaningful manner. I didn't realise that modelling languages are distinct things and that a tool such as Visio supports, or works on, such languages. For example, IDEF1X (Integration Definition for Information Modelling) is a modelling language. Woah, that definition really woke me up, and as usual this one was developed by the US Air Force, in 1985.

The primary tool of a database designer is the data model. It's such a great tool because it can show the details not only of single tables but also of the relationships between several entities at a time. Of course, it is not the only way to document a database:

• Often a product that features a database as the central focus will include a document that lists all tables, data types, and relationships (developers think: can't be bothered).
• Every good DBA has a script of the database saved somewhere for re-creating the database (developers think: I'm still not bothered).
• SQL Server's metadata includes ways to add properties to the database to describe the objects (developers by now would think: oh, get a life, will you).

Some common terms you will come across are entities, which are synonymous with tables in a database; attributes, which are synonymous with column definitions in a table; and relationships, which represent how two entities relate to each other. We represent these pictorially or grammatically in written text. Anyway, my idea in blogging about data modelling was to drop a few notes on some practices we could adopt while modelling data.

  • Entity names: There are two ways you can go about these: plural or singular. Some argue table names should be singular, but many feel that the table name refers to the set of rows and should be plural. Whichever convention you choose, be consistent with it; mixing and matching could end up confusing the person reading the data model.
  • Attribute names: It’s generally not necessary to repeat the entity name in the attribute name, except for the primary key. The entity name is implied by the attribute’s inclusion in the entity. The chosen attribute name should reflect precisely what is contained in the attribute and how it relates to the entity.
  • Relationships: Name relationships with verb phrases, which make the relationship between a parent and child entity a readable sentence. The sentence expresses the relationship using the entity names and the relationship cardinality. The relationship sentence is a very powerful tool for communicating the purpose of the relationships to non-technical members of the project team (e.g., customer representatives).
  • Domains: Define domains for your attributes, implementing type inheritance wherever possible to take advantage of domains that are similar. Using domains gives you a set of standard templates to use when building databases that ensures consistency across your database.
  • Objects: Define every object so it is clear what you had in mind when you created a given object. This is a tremendously valuable practice to get into, as it will pay off later when questions are asked about the objects, and it will serve as documentation to provide to other programmers and/or users.

Wednesday, 25 June 2008

Iteration Length?

Our organisation has always used two-week iterations, and sometimes we wonder whether two weeks is optimal. The first thought was that for enhancement projects two weeks fits the bill; these could be projects where the solution to implement the features is already known to the team. But for a brand new project, which includes a technical learning curve and innovation, three weeks might be apt; this is subject to discussion with the teams. Although Scrum recommends a 30-day iteration, two-week iterations yield results and three-week iterations can yield more innovative results. At the end of the day both iteration lengths are within the realm of Scrum, and the length is a variable available to the teams and the product owner; how it is varied or used is not important as long as the goals of the release are met.

Based on previous experience with two-week iterations, I have made some observations which remind us of the strengths of the two-week iteration:

  • Two weeks is just about enough time to get some amount of meaningful development done.
  • Two-week iterations indicate and provide more opportunities to succeed or fail. For example, within a 90-day release plan, five 2-week iterations of development and one stabilisation iteration at the end make it possible to have checkpoints on the way to the release.
  • The 2-week rhythm is a natural calendar cycle that is easy for all participants to remember and lines up well with typical 1 or 2 week vacation plans, simplifying capacity estimates for the team.
  • Velocity can be measured and scope can be adjusted more quickly.
  • The overhead in planning and closing an iteration is proportionate to the amount of work that can be accomplished in a two-week sprint.
  • A two week iteration allows the team to break down work into small chunks where the define/build/test cycle is concurrent. With longer iterations, there is a tendency for teams to build a more waterfall-like process.
  • The margin of error between capacity planned and capacity available is smaller in two-week iterations.

The above is based on what I have observed and may be different in your organisation.

Tuesday, 24 June 2008

Five Levels of Planning in Scrum

In agile methods, a team gets work through iteration planning. Due to the shortness of the iteration, planning gains more importance than the actual plan. The disadvantage of iteration planning, when applied to projects that run for more than a few iterations or with multiple teams, is that the view of the long-term implications of iteration activities can be lost. In other words, the view of "the project as a whole" is lost.

Planning activities for large-scale development efforts should rely on five levels:

• Product Vision
• Product Roadmap
• Release Plan
• Sprint Plan
• Daily Commitment

Five Levels of Planning

Each of the five levels of planning helps us address the fundamental planning principles of priorities, estimates and commitments.

Friday, 20 June 2008

Principles of SOA

Although there is no official standard for SOA, the community seems to agree that there are four guiding principles to achieve SOA.

  • Service boundaries are explicit.

Services expose business functionality by using well-defined contracts, and these contracts fully describe a set of concrete operations and messages. The implementation details of the service are unknown, and the consumer of the service is agnostic to them, so the technology platform of the service is irrelevant to the client. What is relevant, though, is where the service resides, so that the client can reach it to consume it.

Remoting, WCF and Web Services all support this principle, but what distinguishes WCF is its ability to address some of the limitations of the other technologies. For example, for Remoting to be used, the underlying CLR types have to be shared between the client and the service; not so in WCF. Web Services with WSE also allow a service to address this principle.

  • Services are autonomous.

As previously mentioned, services expose business functionality, and to achieve this they encapsulate it; what this means is that the service should encapsulate all tiers of the functionality, from database access to the business tier and the contract itself. At the end of the day the service should be replaceable and movable without affecting the consumer. As a rule of thumb, to achieve this, any external dependencies should be eliminated during the design process. Any component change in the service should change the version of the service as a logical unit; this is otherwise called atomicity.

  • Clients and Services share contracts, not code.

Given that we said service boundaries are explicit, the contract enforces the boundary between the client and service, and this leads us to conclude that a service, once published, cannot be changed and all future versions should be backward compatible. There is an argument over whether contracts may be tied to a particular technology or platform. I am unable to comment on this, but what seems only fair is that as long as the service can be consumed by a client and the client can remain agnostic to the technology used to implement the service, this principle is achieved.

  • Compatibility is based on policy.

Contracts define the functionality offered by a service, while a policy defines the constraints imposed by the service on the consumer for attributes like security, communication protocol and reliability requirements. Prior to WCF, Web Services with WSE 3.0 helped developers achieve this; I have had the opportunity to do this on .NET 2.0 using WSE, and I can say that the policy is pretty crucial to how your service behaves with the consumer :). At the end of the day the policy is also exposed to the client in the WSDL, so that the client knows the constraints imposed by the service.

As architects and developers we may try to meet the principles listed above in our applications, but due to the influence of technology on implementation and deployment we may not be able to abide by them strictly, and that is probably alright. For example, if we had three services encapsulating functionality for order processing, accounts and sales, we can't really have three databases so that each service works individually; we may have a reporting service which needs access to all three databases, in which case we may consolidate everything into a single database, use schemas in the database to isolate functionality at the database level, and then build a database access layer which serves the functionality required by each service. Relational data seems to be quite limiting in this respect, but would the Entity Framework allow us to address this? Maybe, maybe not; it is yet to be seen.

Thursday, 19 June 2008

WCF Fundamentals

I have been reading up on some WCF concepts and thought it would be useful to jot down some of the concepts outlined by Michele Leroux Bustamante in the book "Learning WCF" for reference purposes. Although it is pure theory, it is useful for getting an understanding of WCF. If you have used WSE before, these may seem familiar.

Messaging, Serialization and RPC

Enterprise applications communicate with each other using remote calls across process and machine boundaries. The format of the message is what determines an application's ability to communicate with other applications; RPC and XML are two common ways of achieving this. RPC calls are used to communicate with objects or components across process and machine boundaries. An RPC call is marshalled by converting it into a message and sending it over a transport protocol such as TCP; at the destination, the message is unmarshalled into a stack frame that invokes the object. Generally a proxy is used at the client and a stub at the destination to marshal and unmarshal calls at either endpoint; this process is serialization and deserialization, and the proxy and the stub must agree on the format used. Remoting in .NET works on this principle of messaging. RPC comes in many flavours, but each flavour is not compatible with the others, which means it is not interoperable; to apply this in an interoperable way we use Web Services or WCF. WCF, however, wins over both RPC and Web Services because of its ability to operate in both RPC-style messaging and Web Service messaging. In both cases the service type is used, and the lifetime of the service type can be controlled using configuration settings defined for the service model.

Services

In WCF all services use CLR types to encapsulate business functionality, and for a CLR type to qualify as a service it must implement a service contract. Marking a class or interface with the ServiceContractAttribute makes the class a service type, or makes the class implementing the interface a service type. To mark a method as a service operation (in a contract definition), we use the OperationContractAttribute.
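For example, a minimal contract and service type might look like the following; the names are invented for illustration.

using System.ServiceModel;

[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

// The class implementing the contract interface becomes the service type.
public class GreetingService : IGreetingService
{
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}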

Hosting

WCF services can be self-hosted or hosted in IIS like ASP.NET applications. The host is important for the service, and a ServiceHost instance is therefore associated with a service type. We construct a ServiceHost and provide it with a service type so that it can activate incoming messages. The ServiceHost manages the lifetime of the communication channels for the service.
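A self-hosting sketch for the hypothetical GreetingService above could look like this; the base address, port and binding are arbitrary choices for the example.

using System;
using System.ServiceModel;

class HostProgram
{
    static void Main()
    {
        var baseAddress = new Uri("net.tcp://localhost:9000/GreetingService");
        using (var host = new ServiceHost(typeof(GreetingService), baseAddress))
        {
            // An endpoint is the address/binding/contract triplet described later on.
            host.AddServiceEndpoint(typeof(IGreetingService), new NetTcpBinding(), string.Empty);
            host.Open(); // opens the communication channels and starts listening
            Console.WriteLine("Service is running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }
}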

Metadata

For a client to understand how it should communicate with a WCF service, it needs data about the address, binding and contract; this is part of the metadata of the service. Clients rely on this metadata to invoke the service. It can be exposed by the service in two ways: the service host can expose a metadata exchange endpoint to access metadata at runtime, or it can provide a WSDL document. In either case clients use tools to generate proxies.

Proxies

Clients use proxies to communicate with the WCF service. A proxy is a type which represents the service contract and hides serialization details from the client. As with Web Services, in WCF if you have access to the service contract you can create a proxy using a tool. The proxy only gives information about the service and its operations; the client still needs metadata to generate the endpoint configuration, and there are tools available to do this.
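When the contract assembly is shared between client and service, a ChannelFactory can stand in for a generated proxy, as in the sketch below; the address and binding must match whatever the host exposes.

using System;
using System.ServiceModel;

class ClientProgram
{
    static void Main()
    {
        var factory = new ChannelFactory<IGreetingService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:9000/GreetingService"));

        IGreetingService proxy = factory.CreateChannel(); // looks just like the contract
        Console.WriteLine(proxy.Greet("WCF"));
        factory.Close();
    }
}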

Endpoints

When a service host opens a communication channel for a service, it must expose one or more endpoints for the client to invoke. An endpoint is made of three parts: address (a URI), binding (the protocols supported) and contract (the operations). A service host is generally provided with one or more endpoints before it opens a communication channel.

Addresses

Each endpoint for the service is represented by an Address. The Address is in the format scheme://domain:port/path.

  • Scheme represents the transport protocol: HTTP, TCP/IP, named pipes and MSMQ are some of them. The scheme for MSMQ is net.msmq, for TCP it is net.tcp and for named pipes it is net.pipe.
  • Port is the communication port to be used for the scheme when it differs from the default; for default ports it does not need to be specified.
  • Domain represents the machine name or the web domain.
  • Path is usually provided as part of the address to disambiguate service endpoints, e.g. net.tcp://localhost:9000 or net.msmq://localhost/QueuedServices/ServiceA.

Bindings

A binding describes the protocols supported by a particular endpoint, specifically

  • Transport protocol: TCP, MSMQ, HTTP or named pipes.
  • Message encoding format: XML or binary.
  • Other protocols for messaging, security and reliability, plus anything else that is custom. Several predefined bindings are available as standard bindings in WCF; however, these only cover some common communication scenarios.

Channels

Channels facilitate communication between the client and the WCF service. The ServiceHost creates a channel listener for each endpoint, which generates a communication channel. The proxy on the client creates a channel factory, which generates a communication channel for the client. Both channels must be compatible for communication between client and service to succeed.

Behaviors

Behaviors are local to the service or the client and are used to control features such as exposing metadata, authorisation, authentication and transactions.

Wednesday, 18 June 2008

SQL Server 2008 RC0 is out

For some reason the release of SQL Server 2008 RC0 seems to have been done quietly; I didn't realise it was out. It is available to download at SQL Server 2008 RC.

PS: SQL Server 2008 RC0 will automatically expire after 180 days

Sunday, 15 June 2008

Enterprise Application Integration

I have recently been thinking about integration scenarios for different applications and how to decide on the way two applications should be integrated. In fact, my question is: what are the factors I would consider to make a decision on how two applications will integrate? On the same note, as much as there is a benefit that comes out of integrating two applications, more often than not there is also a resulting consequence based on how the applications were integrated.

Should the two applications be loosely coupled? This is the first factor that comes to my mind. On most occasions the answer is yes: the more loosely coupled the two applications are, the more opportunity they have to extend their functionality without affecting each other or the integration itself. If they are tightly coupled, it is obvious that the integration breaks when the applications change.

Simplicity: if we as developers minimise the code involved in the integration of the applications, it becomes easily maintainable and provides better integration.

Integration technology: different integration techniques require varying amounts of specialised software and hardware. These special tools can be expensive, can lead to vendor lock-in, and increase the burden on developers to understand how to use the tools to integrate applications.

Data format: the format in which data is exchanged is important, and it should be borne in mind that the format should be consumable by an independent translator, not just the integrating applications themselves, so that at some point these applications are also able to talk to other applications should the need arise. A related issue is data format evolution and extensibility: how the format can change over time and how that will affect the applications.

Data timeliness: we need to minimise the length of time between one application deciding to share some data and other applications having that data. Data should be exchanged in small chunks rather than large sets. Latency in data sharing has to be factored into the integration design: the longer the latency, the more opportunity there is for shared data to become stale, and the more complex the integration becomes.

Data or functionality: integrated applications may also wish to share functionality, such that each application can invoke the functionality in the others. This may have significant consequences for how well the integration works.

Asynchronicity: this is an aspect developers often start appreciating only after implementation, when performance tests fail; by default we think and code synchronously. It is especially relevant to integrated applications, where the remote application may not be running or the network may be unavailable; in such cases the source application may wish simply to make shared data available or log a request.

Friday, 13 June 2008

ASP.Net and COM Interop

At some point we have all had to use a COM component in .NET applications as part of migrating legacy code to new technologies, but what puts me off is doing it without knowing why we are doing it, or at least how the legacy component works. Visual Studio makes it very easy to use COM components by doing all the work for you under the covers. In this article I would like to look at how COM is used and get into some of the aspects of dealing with COM in the .NET world. A COM component can be consumed in .NET by either early binding or late binding.

Early binding

This is where type information about the COM component is available to the consumer at design time. The most common thing we do is reference the component in Visual Studio, and the IDE runs tlbimp.exe to generate an interop assembly for consumption by .NET code; this is because .NET needs metadata about the assembly beforehand. Another reason this is the most preferred way of consuming COM objects is that early binding is much faster than late binding. In addition, developers are able to use the COM object as if it were another .NET object, creating an instance with the new keyword.

Late Binding

Information about the COM component is not known until the code executes, i.e. at runtime. A classic example of this is HttpServerUtility.CreateObject, used in ASP pages; you need to pass the COM component's ProgID to this method. Now it is important to consider how COM components are instantiated. In Windows XP and 2000, how the COM component is instantiated depends on how the threading model of the COM component is marked in the registry: Free, Apartment, Neutral or Both.
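To contrast the two styles in code, the sketch below uses a made-up ProgID; the early-bound version would use whatever interop types tlbimp.exe generated for the component.

using System;
using System.Reflection;

class ComInteropExamples
{
    static void LateBound()
    {
        // Late binding: nothing about the component is known until runtime.
        Type comType = Type.GetTypeFromProgID("MyLegacy.Component"); // hypothetical ProgID
        object instance = Activator.CreateInstance(comType);
        comType.InvokeMember("DoWork", BindingFlags.InvokeMethod, null, instance, new object[0]);
    }

    // Early binding: the interop assembly gives compile-time types, so the COM
    // object is created and used like any other .NET object, e.g.
    //   var component = new MyLegacyLib.ComponentClass();
    //   component.DoWork();
}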

Components marked Free

When we call a COM component marked Free from ASP.NET, the instance runs on the same thread-pool thread that the ASP.NET page started on. The ASP.NET thread pool is initialised as a multi-threaded apartment (MTA), and since the COM component is marked Free, no thread switch is necessary and the performance penalty is minimal.

Components marked Apartment

Traditionally, business COM components that are called from ASP have been marked Apartment. The single-threaded apartment (STA) threading model is not compatible with the default threading model of the ASP.NET thread pool, which is MTA. As a result, calling a native COM component marked Apartment from an ASP.NET page results in a thread switch and COM cross-apartment marshalling.

Under stress, this presents a severe bottleneck. To work around this issue, a new directive called ASPCompat was introduced to the System.Web.UI.Page object.

How ASPCompat Works

The ASPCompat attribute minimizes thread switching due to incompatible COM threading models. More specifically, if a COM component is marked Apartment, the ASPCompat = "true" directive on an ASP.NET page runs the component marked Apartment on one of the COM+ STA worker threads. Assume that you are requesting a page called UnexpectedCompat.aspx that contains the directive ASPCompat ="true". When the page is compiled, the page compiler checks to see if the page requires ASPCompat mode. Because this value is present, the page compiler modifies the generated page class to implement the IHttpAsyncHandler interface, adds methods to implement this interface, and modifies the page class constructor to reflect that ASPCompatMode will be used.

The two methods that are required to implement the IHttpAsyncHandler interface are BeginProcessRequest and EndProcessRequest. The implementation of these methods contains calls to this.ASPCompatBeginProcessRequest and this.ASPCompatEndProcessRequest, respectively.

You can view the code that the page compiler creates by setting Debug="true" in the <compilation> section of the web.config or machine.config files.

The Page.ASPCompatBeginProcessRequest() method determines if the page is already running on a COM+ STA worker thread. If it is, the call can continue to execute synchronously. A more common scenario is a page running on a .NET MTA thread-pool thread. ASPCompatBeginProcessRequest() makes an asynchronous call to the native function ASPCompatProcessRequest() within Aspnet_isapi.dll.

The following describes what happens when invoking COM+ in the latter scenario:
1. The native ASPCompatProcessRequest() function constructs an ASPCompatAsyncCall class that contains the callback to the ASP.NET page and a context object created by ASP.NET. The native ASPCompatProcessRequest() function then calls a method that creates a COM+ activity and posts the activity to COM+.

2. COM+ receives the request and binds the activity to an STA worker thread.

3. Once the thread is bound to an activity, it calls the ASPCompatAsyncCall::OnCall() method, which initializes the intrinsics so they can be called from the ASP.NET page (this is similar to classic ASP code). This function calls back into managed code so that the ProcessRequest() method of the page can continue executing.

4. The page callback is invoked and the ProcessRequest() function on that page continues to run on that STA thread. This reduces the number of thread switches required to run native COM components marked Apartment.

5. Once the ASP.NET page finishes executing, it calls Page.ASPCompatEndProcessRequest() to complete the request.

Thursday, 22 May 2008

RESEED

Just learnt a new T-SQL term; although I knew this functionality was achievable in other ways, I hadn't come across this one before. RESEED resets the seed value of the IDENTITY column. However, SQL Server 2000 handles RESEED differently on virgin tables compared with 2005/2008.

To reset the seed value of the identity column of table_name to new_value, DBCC CHECKIDENT(table_name, RESEED, new_value) does the trick.

In SQL Server 2000, RESEED always increments the seed value, but in 2005/2008 it does not increment; it starts from the new_value.

Tuesday, 20 May 2008

Cruise Control .Net farms

In my present organisation we work as four teams continuously developing code and checking it into TFS, and as a consequence we have adopted continuous integration using CruiseControl.NET. Using CruiseControl.NET alongside NAnt for automated build processes is nothing new, but what we have achieved, and are planning to achieve, is hopefully a fairly unique implementation of an automated build platform. We have about nine different projects, each with its own automated build, and since NAnt is not able to scale across multiple CPUs, the resources on the server were underutilised and wait times had increased considerably. As a solution we have now installed two instances of Cruise Control on the main build server: one called the current build, which developers use to build binaries and test, while the nightly build runs on a schedule generating the end-product installers. Sometimes people still need a setup immediately, and the nightly build instance allows builds to be forced manually using the CCTray application.

To achieve this there were a few bottlenecks we had to work around:

  • Cruise Control does not fully install a second instance; for starters, it does not install the Windows service for itself when you install a second instance, so you have to install the service manually using installutil.exe from the .NET Framework.
  • Since you will have two instances building simultaneously you will need to isolate the identities under which these build run and the physical locations of these builds.
  • The cruise control server web dashboard also has to be configured manually as separate virtual directories.
  • The Cruise Control manager for each instance has to be configured to use a different port; the default install of CC.Net uses 21234, so we could use something like 21244 for the second instance. If you have a firewall, make sure your network admin allows requests to this port (CCTray uses these ports to communicate with the server).

Now, that said, we have a single server which hosts two instances of Cruise Control, and the build outputs are copied onto a network share. We have another build server where we are going to replicate the setup of the first server and then split the CC projects across the two physical servers to share the build load. It all looks simple, but thanks to the person who authored a templated build, I have been able to enhance it to achieve the following: we have been constantly updating the build for the last year and we are close to achieving end-to-end automation for the product, so when we check code into TFS we are able to dish out a CD ISO image of the products to the network share in the nightly build. We also have switches, as properties in the build scripts, which allow the current builds to create setups in case the nightly build is overloaded. Due to constant development work and support issues we have to squeeze work like this in as part of our non-functional sprint work. But then Scrum treats engineering work as non-functional and is rightly justified in doing so.

Storing Hierarchical Data - HierarchyID

In SQL Server 2000 we were limited by the 32-level recursion limit in T-SQL, and storing and querying hierarchical data in the form of trees was really difficult and inefficient. We used cursors or temporary tables to write these queries, and simplicity, maintainability or performance was sacrificed. Of course we could bundle a bit of code in the data layer of the application to share the load, but this didn't solve the problem in reporting scenarios where the data processing had to be done on the database server.

Things improved when we moved to SQL Server 2005 because of the introduction of CTEs. CTEs were beautiful solutions for querying hierarchical data; I use the word beautiful because they looked nicer at the outset, when you used them without knowing their limitations, and they worked well in development environments. For example, using the AdventureWorks database we could use a CTE to query employee-manager data as shown below.

WITH UpperHierarchy(EmployeeId, LastName, Manager, HierarchyOrder)
AS
(
    SELECT emp.EmployeeId, emp.LoginId, emp.LoginId, 1 AS HierarchyOrder
    FROM HumanResources.Employee AS emp
    WHERE emp.ManagerId IS NULL
    UNION ALL
    SELECT emp.EmployeeId, emp.LoginId, Parent.LastName, HierarchyOrder + 1
    FROM HumanResources.Employee AS emp
         INNER JOIN UpperHierarchy AS Parent
               ON emp.ManagerId = Parent.EmployeeId
)
SELECT *
FROM UpperHierarchy

Although this decreased the complexity of writing queries, the performance of these queries was still challenged on large databases. The optimisation of CTE execution plans did improve things, but as in any database situation the optimiser is handicapped without indexing capabilities; indexes reduce the load of the query, increasing performance and scalability. In addition, in SQL 2005 the underlying storage structure was still something users had to design to suit their requirements. This just got better with the introduction of a new managed SQL CLR data type called hierarchyid in SQL Server 2008; it is available ready to use in the database server now. If you look back and remember the introduction of the CLR into SQL Server in Yukon, you will appreciate this feature even more and how it has panned out.

This data type does not store the identifier of the parent element but a set of information used to locate the element in the hierarchy; the type represents a node in the tree structure. If you look at the values contained in a column of hierarchyid type, you realise that they are binary values. It is extremely compact and supports arbitrary inserts and deletions. According to Microsoft, a node in an organisational hierarchy of 100,000 people with an average fanout of 6 levels takes about 38 bits; this is rounded up to 40 bits, or 5 bytes, for storage. Because it stores the element's position in the hierarchy in its entirety, it is also indexable.

As always there are a few limitations

  • It can hold up to 892 bytes, but to be honest that should allow you to span out into a really big tree structure under a single node.
  • A query with the FOR XML clause will fail on a table with a hierarchyid column unless the column is first converted to a character data type. Use the ToString() method to convert the hierarchyid value to its logical representation as an nvarchar(4000) data type.

We can represent the hierarchyid type in a string format, which clearly shows the information carried by this type. The string representation is formatted as /<index level 1>/<index level 2>/…/<index level N>, and it corresponds to a tree structure. Note that the first child of a node does not always have a value of 1 but can have, say, the value /1.2/. To play around a bit, we first need a table with a column of hierarchyid type and an index on that column.

CREATE TABLE Organization
(
    EmployeeID hierarchyid NOT NULL,
    EmployeeName nvarchar(50) NOT NULL
)

ALTER TABLE dbo.Organization
ADD HierarchyLevel As EmployeeID.GetLevel()

CREATE INDEX IX_Employee
ON Organization(HierarchyLevel,EmployeeID);

To populate the table we use the CTE mentioned earlier, modifying the SELECT statement to something like the following:

Insert Into dbo.Organization(EmployeeId, EmployeeName)
Select Node, LastName 
 From UpperHierarchy

Hierarchical data can now be queried using the functions of the hierarchyid type: GetAncestor, GetDescendant, GetLevel, GetRoot, ToString, IsDescendant, Parse, Read, Reparent and Write. For details on these functions please refer to the CTP documentation for Katmai, but most of them are self-explanatory.

For example, to see how we use these functions, let us insert a node as the last child of an existing node. To do this we first retrieve the sibling node.

--finding sibling node
SELECT @sibling = Max(EmployeeID)
FROM dbo.Organization
WHERE EmployeeId.GetAncestor(1)= @Parent;
--inserting node
INSERT dbo.Organization(EmployeeId, EmployeeName)
VALUES(@Parent.GetDescendant(@sibling,NULL), @Name)

We do not always want to (or cannot) retrieve the sibling node to perform an insertion; there may be an implied policy to determine node position. For example, let's say we have an [Order] column which positions nodes among their siblings. We can compute the node path as a string: in this example, since the node @Parent is the root, that will give /<order>/. Thanks to the Parse() function, we can use this value to create the new node.

DECLARE @Parent AS hierarchyid = hierarchyid::GetRoot()
DECLARE @NewPath AS varchar(10) = @Parent.ToString() + CAST([Order] AS varchar(3)) + '/'
INSERT dbo.Organization(EmployeeId, EmployeeName) VALUES(hierarchyid::Parse(@NewPath), 'aChild')

You will have noted the new SQL Server 2008 syntax for declaring and assigning variables in a single line; :: denotes a static method on a SQL CLR type in T-SQL. So what am I getting at? Finally, the CTE is not so much of a beauty any more; just run this query to see what it returns:

Select *
From dbo.Organization
Where @BossNode.IsDescendant(EmployeeId)

If you run this query alongside the CTE query and compare the execution plans, you will see why this new feature is being talked about :)

Monday, 19 May 2008

SQL Profiler for 2005 Express Edition

I was trying to see how I could profile SQL Server 2005 Express Edition, as its Management Studio does not come with a profiler. I found one at the following location; it is very useful and quite a lightweight tool: http://code.google.com/p/sqlexpressprofiler/downloads/list

Friday, 16 May 2008

NEW INSERT STATEMENT

In SQL Server 2008 some new row value constructors have been added. We are familiar with the INSERT statement, which has been around for ages in its ANSI form:

INSERT INTO Table1 (column1 ,column2 ,... columnN) VALUES (value1,value2,....valueN)

Another way of inserting a single row of data is as follows

INSERT INTO Table1 SELECT value1,value2,....valueN

Similarly for multiple rows of data

INSERT INTO Table1 
SELECT value1,value2,....valueN
UNION SELECT value1,value2,....valueN
UNION SELECT value1,value2,....valueN

Now the new ROW VALUE CONSTRUCTOR allows the following syntax to add multiple rows of data:

INSERT INTO Table1(column1 ,column2 ,... columnN) VALUES
(value1 , value2 , ... valueN),
(value1 , value2 , ... valueN),
(value1 , value2 , ... valueN),
(value1 , value2 , ... valueN),
(value1 , value2 , ... valueN)

We normally use INSERT statements in stored procedures, with parameters passed to the stored procedure, so the above row value constructor is not very useful in that case; using a table-valued parameter to insert multiple rows of data seems to be the better option in SQL Server 2008.
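A sketch of the table-valued parameter route from ADO.NET is shown below; the table type, stored procedure and connection string are assumptions made for the example (something like CREATE TYPE dbo.Table1Type AS TABLE (...) and a procedure taking it READONLY would need to exist on the server).

using System.Data;
using System.Data.SqlClient;

class BulkInsertExample
{
    // The DataTable's columns must match the user-defined table type on the server.
    static void InsertRows(string connectionString, DataTable rows)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.InsertTable1", connection))
        {
            command.CommandType = CommandType.StoredProcedure;

            SqlParameter parameter = command.Parameters.AddWithValue("@rows", rows);
            parameter.SqlDbType = SqlDbType.Structured; // marks it as a table-valued parameter
            parameter.TypeName = "dbo.Table1Type";      // the table type defined on the server

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}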

Databases and Software Development

Database development is a form of software development, and yet all too often the database is treated as a secondary entity when development teams discuss architecture and test plans; many developers do not seem to believe, understand, or even feel the need to understand, that standard software development best practices apply to database development. Virtually every application imaginable requires some form of data store, and many in the development community go beyond simply persisting data, creating applications that are data driven. Given this dependency upon data and databases, data is the central factor that dictates the value any application can bring to its users; without the data, there is no need for the application.

More often than not, the reason we use the word legacy for an application as young as three years old is the database. As applications grow in size and gain new features, the thought that developers put into refactoring front-end code or developing new code is not matched in database development. What essentially happens is that the front end ends up two years ahead of the database, and as time progresses the application's capabilities and features are pulled back by the limitations of database quality and standards. We can all deny this, but in reality it results either in a limitation or in the extra cost of creating workarounds.

The central argument on many a database forum is what to do with that ever-present required logic. Sadly, try as we might, developers have still not figured out how to develop an application without the need to implement business requirements, and so the debate rages on. Does "business logic" belong in the database? In the application tier? What about the user interface? And what impact do newer application architectures have on this age-old question?

In recent times I have had a look at technologies like Astoria, LINQ, the Entity Framework and Katmai. I was amazed at how little database code needs to be written by a developer working on the UI or business layer of an application. At the same time, being a SQL Server fan myself, I was worried that my database skills would slowly vaporise into thin air. That is not as bad as it sounds. These technologies are abstractions over the database: they let you develop a logical data layer that maps to a physical database, so developers who work primarily at the UI or business-layer level will gradually stop writing SQL altogether and instead churn out the code that interests them against the logical data layer. What offsets this is the need to learn a new syntax in the form of LINQ.

On the other hand, database development will shift towards being a specialist job. The design, development and management activities of the database developer and administrator will emerge as specialist skills, as opposed to generalist skills, in the near future. The future of the database specialist seems bright, but what remains to be seen is how organisations respond to this shift in software development ideology. To be fair, this model of specialists and generalists is not new.

Thursday, 15 May 2008

How to consume a .Net ArrayList in classic ASP

I first followed the procedure to register a .Net assembly using regasm on the web server as below (http://weblogs.asp.net/dneimke/archive/2004/01/31/65330.aspx)

  1. Make sure your assembly has a strong name to install it in the GAC
  2. Run regasm /tlb “<path of your assembly>”
  3. Add the assembly to the GAC

This makes your assembly accessible from your ASP code so that you can create server-side objects from it.

In my example I have a .Net assembly which has a class as below:

using System; 
using System.Collections; 
namespace API 
{ 
      /// <summary> 
      /// Summary description for Versions. 
      /// </summary> 
      public class Versions 
      { 
            ArrayList versions = new ArrayList();
            public Versions() 
            { 
                  versions.Add("1");
                  versions.Add("2");
                  versions.Add("3");
            } 

            public ArrayList List
            { 
                  get 
                  { 
                        return versions; 
                  } 
            } 
      } 
} 

I initially tried to have a method in my .Net class return an ArrayList, and then to create an object of type "System.Collections.ArrayList" directly in ASP and iterate over it, but the ArrayList type definition was not supported when the ASP code executed. So I created the class above, which exposes a public property of type System.Collections.ArrayList.

So now in ASP I did the following, and it printed out the array list on my ASP page:

<%@ codePage="65001" %>
<html>
      <head>
            <title>Test Page</title>
            <%
                  set versions = Server.CreateObject("Versions")
                  For Each v in versions.List
                        Response.Write(v & "<br/>")
                  Next
            %>
      </head>
      <body></body>
</html>

This can be further enhanced in your ASP code to achieve whatever functionality you need.

TSQL - GROUP BY and ALL


We have all used GROUP BY in our SQL queries to group data and get result sets with aggregates from SQL Server. Before I explain, let's create a temporary table of order details using the Northwind database in SQL Server 2000.

I will just join the Orders and [Order Details] tables in Northwind to get the data I need into a temporary table, as shown below:

SELECT 
  O.OrderID, 
  ProductID, 
  UnitPrice, 
  Quantity, 
  (UnitPrice*Quantity) AS Amount,
  CustomerID
INTO 
  #tempOrders
FROM 
  Orders O 
INNER JOIN
  [order details] D 
ON
  O.[orderid] = D.[orderid]
ORDER BY
  ProductID

So now I have a table called #tempOrders with the order details I need.

Now suppose I'd like to see the customers that were sold Product #1, along with the total amount they spent. I will use a query with a GROUP BY clause and a WHERE condition to filter the records:

SELECT 
   CustomerID,
   SUM(Amount) AS TotalAmount
 FROM
   #tempOrders
 WHERE 
   ProductID = 1
 GROUP BY 
   CustomerID


Now, let's say I'd like to see all customers that have been sold any product, but still only see the "TotalAmount" for Product #1. For customers that have never ordered Product #1, it should output a "TotalAmount" of 0. One way to do this is with a CASE expression, as shown below:

SELECT 
  CustomerID,
  SUM(CASE WHEN ProductID = 1 THEN Amount ELSE 0 END) AS TotalAmount
FROM
  #tempOrders
GROUP BY 
  CustomerID

This returns customers who haven't purchased Product #1 with a total of 0. In situations like this, the SUM(CASE...) expression can be replaced with GROUP BY ALL.

SELECT 
    CustomerID,
    ISNULL(SUM(Amount), 0) AS TotalAmount
  FROM
    #tempOrders
  WHERE 
    ProductID = 1
  GROUP BY ALL CustomerID

Rows excluded from the aggregation by the WHERE clause return NULL for the aggregate, so the ISNULL function makes sure customers who haven't ordered Product #1 get a total of 0 instead of NULL. The ALL option basically says "ignore the WHERE clause when doing the grouping, but still apply it to any aggregate functions". So, in this case, the WHERE clause is not considered when generating the population of CustomerID values, but it is applied when calculating the SUM. This is very much like our earlier solution, where we removed the WHERE clause completely and used a SUM(CASE...) expression to conditionally calculate the aggregate.

GROUP BY ALL is kind of obscure and neat to know, but not really useful in most situations, since there are usually easier or better ways to get this result. It won't work if we want all customers to be displayed, since a customer must have at least one order to show up in the result. Another limitation is that we cannot use GROUP BY ALL if we want to return a grand total for all orders.
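If the requirement really is to show every customer, including those with no orders at all, the more usual approach is a left join from the Customers table rather than GROUP BY ALL; here is a quick sketch against the Northwind schema and the #tempOrders table above:

-- Include every customer, even those with no rows in #tempOrders
SELECT
    C.CustomerID,
    ISNULL(SUM(CASE WHEN T.ProductID = 1 THEN T.Amount ELSE 0 END), 0) AS TotalAmount
FROM
    Customers C
LEFT JOIN
    #tempOrders T
ON
    C.CustomerID = T.CustomerID
GROUP BY
    C.CustomerID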
