# Wednesday, December 12, 2007

Ian pointed out that there was a race condition in the UniversalAsyncResult that I posted a few days ago. I have amended the original post rather than reposting the code.

The changes are to make the complete flag and the waithandle field volatile, and to check the complete flag after allocating the waithandle in case Complete was called while the waithandle was being allocated.

Isn't multithreaded programming fun :-)

Wednesday, December 12, 2007 9:29:13 AM (GMT Standard Time, UTC+00:00)
# Tuesday, December 11, 2007

On 5th December, Microsoft Live Labs announced Volta. The idea of Volta is that you write your application in the .NET language of your choice and the Volta infrastructure takes care of targeting the right platform (JavaScript or Silverlight, IE or Firefox). This side of things is pretty neat. The other thing it allows you to do is to write your application monolithically and then decide on distribution later on, based on profiling, etc. This also sounds quite neat, but I have some reservations.

I went through the DCOM wave - or "COM with a longer wire" as it was labelled at the time - and learned some hard lessons. I then went through the .NET Remoting experience and learned ... well ... the same hard lessons. The reality is that unless you design something to be remote, then when you remote it it's going to suck.

Let's look at a Volta example. First some basics:

  1. Download and install the CTP
  2. Create a Volta application project

Now let's build our Volta application. You create the HTML page for the UI (here's the important snippet):

<p>Press the button to do chatty stuff</p>
<p><button id="chat">Chat!</button></p>
<div id="output" />

Then you create the Volta code to wire your code up to the HTML:

public partial class VoltaPage1 : Page
{
    Button chat;
    Div output;

    public VoltaPage1()
    {
        InitializeComponent();
    }

    partial void InitializeComponent()
    {
        chat = Document.GetById<Button>("chat");
        output = Document.GetById<Div>("output");
    }
}

Now create a class that has a chatty interface:

class HoldStatement
{
    public string The { get { return "The "; } }
    public string Quick { get { return "Quick "; } }
    public string Brown { get { return "Brown "; } }
    public string Fox { get { return "Fox "; } }
    public string Jumped { get { return "Jumped "; } }
    public string Over { get { return "Over "; } }
    public string Lazy { get { return "Lazy "; } }
    public string Dog { get { return "Dog!"; } }
}

OK, now let's wire up the event handler on the button to output a statement in the output div:

partial void InitializeComponent()
{
    chat = Document.GetById<Button>("chat");
    output = Document.GetById<Div>("output");

    HoldStatement hs = new HoldStatement();
    chat.Click += delegate
    {
        output.InnerText = hs.The + hs.Quick + hs.Brown + hs.Fox + hs.Jumped +
                           hs.Over + hs.The + hs.Lazy + hs.Dog;
    };
}

If we compile this in Release mode and press F5 we get a browser and a server appears in the system tray.

If you double click on this server icon you get a monitor window that allows you to trace interaction with the server. Bring that up and check the Log Requests checkbox.

Now click on the Chat! button in the browser. Notice you get a bunch of assembly loads in the log window. Clear the log and press the Chat! button again. Now you see there are no requests going across the wire as the entire application is running in the browser.

Now go to the project properties and select the Volta tab. Enable the <queue dramatic music> Tiersplitter. Go to the HoldStatement class, right click on it and select Refactor -> Tier Split to Run at Origin. Notice we now get a [RunAtOrigin] attribute on the class.

Rebuild and press F5 again. Bring up the log window again and select to log messages. Now every time you press the Chat! button it makes requests to the server. We have distributed our application with just one attribute - how cool is that?! But hold on ... how many requests are going to the server? Well, one for each property access - nine in this example.

Latency is the death knell of many distributed applications. You have to design remote services to be chunky (lots of data in one round trip) otherwise the network round-tripping will kill your application's performance. So saying you can defer architecting your solution in terms of distribution and then simply annotate classes with an attribute is a dangerous route to go down. We have already learned the hard way that taking a component and just putting it on the other end of a wire does not make for a good distributed application - Volta appears to be pushing us in that direction again.
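To make the point concrete, here is a minimal sketch (hypothetical code, not part of the Volta sample) of what a chunky counterpart to HoldStatement looks like: the server assembles the whole statement, so after tier splitting one request crosses the wire instead of one per word.

```csharp
using System;

// Chunky version of the chatty HoldStatement: one property access
// means one potential round trip, not nine.
class ChunkyStatement
{
    public string WholeStatement
    {
        get { return "The Quick Brown Fox Jumped Over The Lazy Dog!"; }
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(new ChunkyStatement().WholeStatement);
    }
}
```

The interface does less for the caller in terms of granularity, but the data travels in a single message - which is the whole game in a distributed design.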

.NET | Volta
Tuesday, December 11, 2007 4:01:02 PM (GMT Standard Time, UTC+00:00)

Yesterday I announced the release of the MapperActivity. I said I'd clean up the source code and post it.

Two things are worth noting:

  1. The dependency on the CodeProject Type Browser has gone
  2. There is a known issue where you can map more than one source element to the same destination - in this case the last mapping you create will win. We will fix this in a subsequent release.

Here's the latest version with the code:

MapperActivityLibrary.zip (933.62 KB)

.NET | BizTalk | WCF | WF
Tuesday, December 11, 2007 8:31:41 AM (GMT Standard Time, UTC+00:00)
# Monday, December 10, 2007

One of the neat things you can do in BizTalk is to create a map that takes one kind of XML message and transforms it into another. Maps are basically a wrapper over XSLT and its extension infrastructure. The tool you use to create maps is the BizTalk Mapper. In this tool you load a source and a destination schema - which get displayed in treeviews - and then you drag elements from the source to the destination to describe how to build the destination message.

Now, in WF - especially when using Silver (the WF/WCF integration layer introduced in .NET 3.5) - you tend to spend a lot of time taking data from one object and putting it into another, as different objects are used to serialize and deserialize messages received and sent via WCF. This sounds a lot like what the BizTalk Mapper does, but with objects rather than XML messages.

I was recently teaching Guerrilla Connected Systems (which has now been renamed Guerrilla Enterprise.NET) and Christian Weyer was doing a guest talk on REST and BizTalk Services. Afterwards, Christian and I were bemoaning the amount of mundane code you get forced to write in Silver and how sad it was that WF doesn't have a mapper. So we decided to build one. Unfortunately neither Christian nor I are particularly talented when it comes to UI - but we knew someone who is: Jörg Neumann. So with Jörg doing the UI, myself writing the engine and Christian acting as our user/tester we set to work.

The result is the MapperActivity. The MapperActivity requires you to specify the SourceType and DestinationType - we used the Type Browser from CodeProject for this. Then you can edit the Mappings, which brings up the mapping tool.

You drag members (properties or fields) from the left hand side and drop them on type compatible members (properties or fields) on the right hand side.
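For flavour, here is a tiny sketch of the kind of reflection-based copying an engine like this has to do at runtime. This is hypothetical code, not the MapperActivity source: the real engine maps arbitrary drag-and-drop member pairs, while this toy only copies same-named, type-compatible properties.

```csharp
using System;
using System.Reflection;

static class TinyMapper
{
    // Copy values between same-named, type-compatible public properties.
    public static void Map(object source, object destination)
    {
        foreach (PropertyInfo sp in source.GetType().GetProperties())
        {
            PropertyInfo dp = destination.GetType().GetProperty(sp.Name);
            if (dp != null && dp.CanWrite &&
                dp.PropertyType.IsAssignableFrom(sp.PropertyType))
            {
                dp.SetValue(destination, sp.GetValue(source, null), null);
            }
        }
    }
}

// Two shapes of the "same" data - e.g. a wire format and a domain object
class WireOrder { public string Id { get; set; } public int Quantity { get; set; } }
class DomainOrder { public string Id { get; set; } public int Quantity { get; set; } }

class Program
{
    static void Main()
    {
        WireOrder w = new WireOrder { Id = "42", Quantity = 3 };
        DomainOrder d = new DomainOrder();
        TinyMapper.Map(w, d);
        Console.WriteLine(d.Id + ":" + d.Quantity);
    }
}
```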

Finally you need to databind the SourceObject and DestinationObject to tell the mapper where the data comes from and where the data is going. You can optionally get the MapperActivity to create the destination object tree if it doesn't exist using the CanCreateDestination property. Here's the full property grid:

And thanks to Jon Flanders for letting me crib his custom PropertyDescriptor implementation (why on earth don't we have a public one in the framework?). You need one of these when creating extender properties (the SourceObject and DestinationObject properties change type based on the SourceType and DestinationType respectively).
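For anyone who hasn't met them: writing a custom PropertyDescriptor is mostly mechanical overriding. Here is a minimal sketch (hypothetical code, not Jon's implementation) of a descriptor whose type is chosen at runtime - the trick that extender properties like SourceObject and DestinationObject need.

```csharp
using System;
using System.ComponentModel;

// A descriptor whose PropertyType is decided at construction time.
// For brevity the value lives in a single field rather than per-component.
class RuntimeTypedPropertyDescriptor : PropertyDescriptor
{
    readonly Type propertyType;
    object value;

    public RuntimeTypedPropertyDescriptor(string name, Type propertyType)
        : base(name, null)
    {
        this.propertyType = propertyType;
    }

    public override Type PropertyType { get { return propertyType; } }
    public override Type ComponentType { get { return typeof(object); } }
    public override bool IsReadOnly { get { return false; } }

    public override object GetValue(object component) { return value; }
    public override void SetValue(object component, object v) { value = v; }
    public override bool CanResetValue(object component) { return false; }
    public override void ResetValue(object component) { }
    public override bool ShouldSerializeValue(object component) { return false; }
}

class Program
{
    static void Main()
    {
        PropertyDescriptor pd =
            new RuntimeTypedPropertyDescriptor("SourceObject", typeof(string));
        pd.SetValue(null, "hello");
        Console.WriteLine(pd.PropertyType.Name + ":" + pd.GetValue(null));
    }
}
```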

Here is the activity - I'll clean up the code before posting that.

MapperActivity.zip (37.78 KB)

Feedback always appreciated

.NET | BizTalk | WCF | WF
Monday, December 10, 2007 9:56:07 PM (GMT Standard Time, UTC+00:00)

In my last post I talked about the WCF async programming model. On the service side you have to create a custom implementation of IAsyncResult. Fortunately this turns out to be pretty much boilerplate code, so here is a general-purpose one:

using System;
using System.Threading;

class UniversalAsyncResult : IAsyncResult, IDisposable
{
  // To store the callback and state passed
  AsyncCallback callback;
  object state;

  // For the implementation of IAsyncResult.AsyncWaitHandle
  volatile ManualResetEvent waithandle;

  // Guard object for thread sync
  object guard = new object();

  // complete flag that is set to true on completion (supports IAsyncResult.IsCompleted)
  volatile bool complete;

  // Store the callback and state passed
  public UniversalAsyncResult(AsyncCallback callback, object state)
  {
    this.callback = callback;
    this.state = state;
  }

  // Called by consumer to set the call object to a completed state
  public void Complete()
  {
    // flag as complete
    complete = true;

    // signal the event object
    if (waithandle != null)
      waithandle.Set();

    // Fire callback to signal the call is complete
    if (callback != null)
      callback(this);
  }

  #region IAsyncResult Members
  public object AsyncState
  {
    get { return state; }
  }

  // Use double check locking for efficient lazy allocation of the event object
  public WaitHandle AsyncWaitHandle
  {
    get
    {
      if (waithandle == null)
      {
        lock (guard)
        {
          if (waithandle == null)
          {
            waithandle = new ManualResetEvent(false);
            // guard against the race condition that the waithandle is
            // allocated during or after Complete
            if (complete)
              waithandle.Set();
          }
        }
      }
      return waithandle;
    }
  }

  public bool CompletedSynchronously
  {
    get { return false; }
  }

  public bool IsCompleted
  {
    get { return complete; }
  }
  #endregion

  #region IDisposable Members
  // Clean up the event if it got allocated
  public void Dispose()
  {
    if (waithandle != null)
      waithandle.Close();
  }
  #endregion
}
You create an instance of UniversalAsyncResult, passing in an AsyncCallback and state object if available. When you have finished your async processing, just call the Complete method on the UniversalAsyncResult object.

Comments welcome

Monday, December 10, 2007 12:07:29 PM (GMT Standard Time, UTC+00:00)

WCF supports asynchronous programming. On the client side you can generate an async API for invoking the service using svcutil:

svcutil /async <rest of command line here> 

In VS 2005 you had to use the command line. In VS 2008 the Add Service Reference dialog has an Advanced button which allows you to generate the async operations.

However, the fact that you can also do asynchronous programming on the service side is not so obvious. You need to create the service contract according to the standard .NET async pattern of Beginxxx/Endxxx. You also need to annotate the Beginxxx method with an OperationContract that says it is part of an async pair. Note that you do not annotate the Endxxx method with [OperationContract]:

[ServiceContract]
interface IGetData
{
    // This pair of methods is bound together as WCF is aware of the async pattern
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetData(AsyncCallback cb, object state);
    string[] EndGetData(IAsyncResult iar);
}

Now, it's fairly obvious that when the request message arrives the Beginxxx method is called by WCF. The question is: how is the Endxxx method called? WCF passes the Beginxxx method an AsyncCallback that must somehow be cached and invoked when the operation is complete. This callback triggers the call to Endxxx. Another question is "what am I meant to return from the Beginxxx method?" - where does this IAsyncResult come from? That, unfortunately, is up to you - you have to supply one, and it is a good place to cache the AsyncCallback that WCF passed you.
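A sketch of the shape this takes (hypothetical code: SimpleAsyncResult is a cut-down stand-in for a full IAsyncResult implementation, and the WCF contract attributes are omitted so it runs standalone):

```csharp
using System;
using System.Threading;

// Minimal IAsyncResult that caches the callback; Complete signals the
// waithandle and fires the callback - which is what prompts WCF to
// call the Endxxx method.
class SimpleAsyncResult : IAsyncResult
{
    readonly AsyncCallback callback;
    readonly object state;
    readonly ManualResetEvent done = new ManualResetEvent(false);
    internal string[] Data;

    public SimpleAsyncResult(AsyncCallback callback, object state)
    {
        this.callback = callback;
        this.state = state;
    }

    internal void Complete(string[] data)
    {
        Data = data;
        done.Set();
        if (callback != null)
            callback(this);
    }

    public object AsyncState { get { return state; } }
    public WaitHandle AsyncWaitHandle { get { return done; } }
    public bool CompletedSynchronously { get { return false; } }
    public bool IsCompleted { get { return done.WaitOne(0, false); } }
}

class GetDataService
{
    public IAsyncResult BeginGetData(AsyncCallback cb, object state)
    {
        SimpleAsyncResult ar = new SimpleAsyncResult(cb, state);
        // Do the real work off the calling thread
        ThreadPool.QueueUserWorkItem(delegate
        {
            ar.Complete(new string[] { "some", "data" });
        });
        return ar;
    }

    public string[] EndGetData(IAsyncResult iar)
    {
        SimpleAsyncResult ar = (SimpleAsyncResult)iar;
        ar.AsyncWaitHandle.WaitOne();
        return ar.Data;
    }
}

class Program
{
    static void Main()
    {
        GetDataService svc = new GetDataService();
        IAsyncResult iar = svc.BeginGetData(null, null);
        string[] result = svc.EndGetData(iar);
        Console.WriteLine(string.Join(",", result));
    }
}
```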

Here is a simple sample that shows how the service side works:

Async.zip (219.99 KB)

Comments, as always, welcome

Monday, December 10, 2007 11:52:30 AM (GMT Standard Time, UTC+00:00)
# Wednesday, December 05, 2007

In my last post I talked about the base class I created for writing custom composites where you want control over the execution logic. Firstly, Josh pointed out that I ought to state its limitations: it only executes (and therefore cancels) one child activity at a time, so it would not be useful for building, for example, the ParallelActivity.

With that limitation in mind, though, there are a number of useful applications. I had a think about a different execution model and came up with prioritized execution. So I built this on top of my base class, which was an interesting exercise as I could use an attached property for the Priority, which meant I didn't limit the children to a special child activity type. I also had a think about the design time and realised that the sequential and parallel models for composite design time weren't appropriate, so I stole^H^H^H^H^H leveraged the designer that the Conditional Activity Group uses, which seemed to work well.

I packaged up the code and told Josh about it

"Oh yeah, thats an interesting one - Dharma Shukla and Bob Schmidt used that in their book"


It turns out that pretty much half the world has decided to build a prioritised execution composite.

Here's the code anyway:

PrioritizedActivitySample.zip (76.36 KB)

But then I got to thinking - actually there is a problem with this approach, elegant as it may be. If you cannot change the priorities at runtime then all you are building is an elaborate sequence. And it turns out that attached properties cannot be data bound in WF - you have to use a full DependencyProperty created with DependencyProperty.Register rather than DependencyProperty.RegisterAttached. So I reworked the example, creating a special child activity, DynamicPrioritizedSequenceActivity, that has a priority that is a bindable dependency property. I also had to create a validator to make sure only this type of child could be added to the composite.
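The registration itself is only a few lines. Here is a sketch of the shape, assuming the WF 3.x System.Workflow.ComponentModel API - the class name mirrors the sample but the body is illustrative rather than lifted from it:

```csharp
using System.Workflow.ComponentModel;
using System.Workflow.Activities;

public class DynamicPrioritizedSequenceActivity : SequenceActivity
{
    // A full (non-attached) dependency property registered with Register,
    // not RegisterAttached, so the designer will let you data bind it
    public static readonly DependencyProperty PriorityProperty =
        DependencyProperty.Register("Priority",
                                    typeof(int),
                                    typeof(DynamicPrioritizedSequenceActivity));

    public int Priority
    {
        get { return (int)GetValue(PriorityProperty); }
        set { SetValue(PriorityProperty, value); }
    }
}
```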

The neat thing now is you have a prioritized execution composite whose children can dynamically change their priority using data binding. The code for this is here:

DynamicPrioritySample.zip (78.83 KB)

So between these two samples we have prioritized execution, designer support, attached property support and validator support.

As usual all comments welcome

Wednesday, December 05, 2007 3:47:10 PM (GMT Standard Time, UTC+00:00)
# Tuesday, December 04, 2007

There are two types of custom composites you may end up writing:

  1. a form of UserControl with a set of activities whose composite functionality you want to reuse. This is what you get when you add a new activity to a workflow project using the menu item. You end up with a class that derives from SequenceActivity, and this is the one occasion with a custom activity where you don't really have to override Execute
  2. a parent that wants to take control over the execution model of its children (think of the execution model of the IfElseActivity, WhileActivity and the ParallelActivity)

To me the second of these is more interesting (although no doubt less common) and involves writing a bit of plumbing. In general, though, there is a common pattern to the code and so I have created a base class that wraps this plumbing.

CustomCompositeBase.zip (55.6 KB)

There are two fundamental overridable methods:

LastActivityReached: returns whether there are more child activities to execute
GetNextActivity: returns the next activity to be scheduled

Inside the zip there is also a test project that implements a random execution composite (highly useful I know).

Feedback, as always, very welcome.

Tuesday, December 04, 2007 9:19:18 AM (GMT Standard Time, UTC+00:00)
# Sunday, December 02, 2007

Yesterday I posted a base class for async activities that buried all the plumbing. However, looking at the zip, I had only included the binary for the base class and the test harness. Here is the zipped code:

AsyncActivityBaseClass12.zip (69.71 KB)

In addition Rod gave me some feedback (for example making the base class abstract messed up the design time view for the derived activity). In the light of that and other ideas I have made some changes:

  • The base class is now concrete and so designer view now works for derived classes
  • If you don't supply a queue name then the queue name defaults to the QualifiedName of the activity
  • Cancellation support is now embedded in the base class.
  • There is a new override called DoAsyncTeardown to allow you to deallocate any resources you allocated in DoAsyncSetup

I hope someone finds this useful

Sunday, December 02, 2007 12:19:15 PM (GMT Standard Time, UTC+00:00)
# Saturday, December 01, 2007

A while back Joe Duffy announced that they were changing the default maximum number of worker threads in the CLR threadpool in CLR 2.0 SP1. The change increases the limit 10-fold, from 25 threads per processor per process to 250 per processor per process.

The basis of the change is that it is possible to deadlock the threadpool if you spawn and then wait for threadpool work from a threadpool thread (e.g. in processing an ASP.NET request). Increasing the size makes this occurrence less likely (although it doesn't remove the risk).

Although it's not obvious, CLR 2.0 SP1 ships with .NET 3.5. I was teaching Guerrilla .NET a couple of months ago and thought I'd demo the change by running a project built under VS2005 and then rebuilding it under VS2008. What I hadn't expected (although on reflection it's not that surprising) is that when I ran the 2005 build on a machine with 2008 *installed*, it also had the new behavior. In other words, the service pack patches the 2.0 CLR rather than being something that only affects newly built applications.

Why should we care? Well, there are two main reasons we use threadpools: firstly, to remove the repeated cost of creating and destroying threads by reusing a set of already created threads; secondly, to make sure that we don't drown the machine in spawned threads when the application comes under load. Extra threads consume resources: the scheduler has to give them time and they default to 1Mb of stack space each.

And here is where I think the problem is. If you create a server-side application on 32-bit Windows, deploy it on an 8-way machine and put it under load, before this change you would cap out at 200 threads by default, which is a reasonable (maybe a little high) number for IO-bound processing. After the change this caps out by default at 2000 threads. 2000 threads with 1Mb stack space each means 2Gb of memory consumed by the threads - and on 32-bit Windows that is the entire user-mode address space available to the process by default ... ouch! Say hello to Mr OutOfMemoryException.

So for existing applications this is a potentially breaking change. Your applications will have to take control of the maximum number of worker threads to ensure correct processing. ASP.NET already does this, and you can adjust it via the <processModel> configuration element. In a Windows service based application you have to call ThreadPool.SetMaxThreads to control the threadpool.
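For a non-ASP.NET host the call is straightforward. A sketch (the cap of 25 per processor here just restores something like the old default; the right number is application-specific):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        int workers, io;

        // Read the current caps first - SetMaxThreads wants both values
        ThreadPool.GetMaxThreads(out workers, out io);
        Console.WriteLine("Default worker cap: " + workers);

        // Restore something like the old 25-per-processor worker cap,
        // leaving the IO completion thread cap alone
        ThreadPool.SetMaxThreads(25 * Environment.ProcessorCount, io);

        ThreadPool.GetMaxThreads(out workers, out io);
        Console.WriteLine("New worker cap: " + workers);
    }
}
```

Note that SetMaxThreads returns false (and changes nothing) if you try to set the cap below the current minimums, so it is worth checking the return value in production code.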

To his credit, Joe doesn't pretend this change was the ideal resolution to a real-world problem, but it is a change that will affect existing applications as it is a patch to CLR 2.0 rather than a feature of a new version of the CLR.

Saturday, December 01, 2007 9:45:17 AM (GMT Standard Time, UTC+00:00)