# Wednesday, July 25, 2012

Microsoft recently announced the beta of Service Bus 1.0 for Windows Server. This is the on-premises version of the Azure Service Bus that so many have been asking for. There is a good walkthrough of the new beta, including how to install it, in the MSDN documentation here.

So why this blog post? Well, if you install the beta on a Windows Server 2008 R2 box and you actually develop on another machine, there are a couple of things not mentioned in the documentation that you need to do (having previously spent long hours glaring at certificate errors allowed me to resolve things pretty quickly).

First, a little background into why there is an issue. Certificate validation has a number of steps, all of which must pass for a certificate to be considered valid:

  1. The issuer of the cert must be trusted. In fact the issuer of the issuer must also be trusted. In fact the issuer of the issuer of the issuer must be trusted. In fact … well, you get the picture. In other words, when a certificate is validated, all certificates in the chain of issuers must be trusted back to a trusted root CA (normally bodies like VeriSign or your corporate certificate server). You will find these in your certificate store in the “Trusted Root Certificate Authorities” section. This mechanism is known as chain trust.
  2. The certificate must be in date (between the valid-from and valid-to dates).
  3. The usage of the certificate must be correct for the supported purposes of the cert (server authentication, client authentication, code signing, etc.).
  4. For a server authentication cert (often known as an SSL cert), the subject of the cert must be the same as the server’s DNS name.
  5. The certificate must not be revoked according to the issuer’s revocation list (if a cert is compromised, the issuer can revoke it and publish this in a revocation list).

For 1. there is an alternative called peer trust, where the presented certificate must be in the client’s Trusted People section of their certificate store.

There may be other forms of validation but these are the ones that typically fail.

So the issue is that the Service Bus (SB) installer, if you just run through the default install, generates a Certificate Authority cert and puts it into the target machine’s Trusted Root CA store – it obviously doesn’t put it in the client’s store because it knows nothing about that machine. It also generates a server authentication cert with the name of the machine you are installing on. This causes two issues: no cert issued by the CA will be trusted, so an SSL failure will happen the first time you try to talk to SB; and the generated SSL cert will have problems validating generally due to, for example, having no revocation list.

The first of these issues can be fixed by exporting the generated CA cert and importing it into the Trusted Root CA store on the dev machine.


The second can be fixed by exporting the SSL cert and importing it into the Trusted People store on the dev machine.
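If you prefer to script these two imports rather than click through the certificates MMC snap-in, a sketch like the following works (the .cer file paths are assumptions – use wherever you exported the certs to, and run elevated since it writes to the LocalMachine stores):

```csharp
using System.Security.Cryptography.X509Certificates;

class ImportServiceBusCerts
{
    static void Main()
    {
        // CA cert into the dev machine's Trusted Root Certificate Authorities store
        X509Store root = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
        root.Open(OpenFlags.ReadWrite);
        root.Add(new X509Certificate2(@"C:\certs\SBRootCA.cer"));   // hypothetical export path
        root.Close();

        // SSL cert into the dev machine's Trusted People store
        X509Store people = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);
        people.Open(OpenFlags.ReadWrite);
        people.Add(new X509Certificate2(@"C:\certs\SBHost.cer"));   // hypothetical export path
        people.Close();
    }
}
```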


With these in place, the Getting Started / Brokered Messaging / QueuesOnPrem sample should work fine.

.NET | Azure | ServiceBus | WCF
Wednesday, July 25, 2012 5:04:54 PM (GMT Daylight Time, UTC+01:00)
# Monday, July 02, 2012

The demos from DEV 326, my talk on WCF 4.5 at TechEd EMEA, are available here. The session was also recorded and the video is now on Channel9 here.

Monday, July 02, 2012 2:13:14 PM (GMT Daylight Time, UTC+01:00)
# Wednesday, September 28, 2011

Thanks to everyone for attending my sessions on the Reactive Framework and Garbage Collection at Basta 2011, you can download the demos here:

Reactive Framework

Garbage Collection

Wednesday, September 28, 2011 11:46:31 PM (GMT Daylight Time, UTC+01:00)
# Wednesday, June 15, 2011

Just a heads-up about self-hosting the new WCF Web API: by default, if you try to add the Web API references via NuGet you will get a failure (E_FAIL returned from a COM component).

This is due to the likely project types (Console, Windows Service, WPF) defaulting to the client profile rather than the full framework. If you change the project to the full framework, the NuGet packages install correctly.
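For reference, the client profile setting lives in the .csproj file; this is roughly what to look for (changing the target framework in Project Properties > Application does the same thing):

```xml
<PropertyGroup>
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  <!-- Remove this element (or change the target framework in project
       properties) to target the full framework -->
  <TargetFrameworkProfile>Client</TargetFrameworkProfile>
</PropertyGroup>
```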

Yet again bitten by the Client Profile.

Wednesday, June 15, 2011 10:21:33 AM (GMT Daylight Time, UTC+01:00)
# Friday, February 04, 2011

I have just found myself answering essentially the same question 4 times on the MSDN WCF Forum about how instances, threading and throttling interact in WCF. So to save myself some typing I will walk through the relationships here and then I can reference this post in questions.

WCF has 3 built in instancing models: PerCall, PerSession and Single. They are set on the InstanceContextMode on the ServiceBehavior attribute on the service implementation. They relate to how many instances of the service implementation class get used when requests come in, and work as follows:

  • Single – one instance of the implementation class is used for all requests
  • PerCall – every request gets its own instance of the implementation class
  • PerSession – this is the slightly odd one as it means every session gets its own instance. In practical terms it means that for session-supporting bindings (NetTcpBinding, WSHttpBinding, NetNamedPipeBinding, etc.) every proxy gets an instance of the service. For bindings that do not support sessions (BasicHttpBinding, WebHttpBinding) we get the same effect as PerCall. To add to the confusion, this setting is the default.

By default WCF assumes you do not understand multithreading. Therefore, it only allows one thread at a time into an instance of the service implementation class unless you tell it otherwise. You can control this behavior using the ConcurrencyMode on the ServiceBehavior; it has 3 values:

  • Single – this is the default and only one thread can get into an instance at a time
  • Multiple – any thread can enter an instance at any time
  • Reentrant – only makes a difference in duplex services, but means that an inbound request can be received from the component you are currently making a request to (this sounds a bit vague, but in duplex the roles of service and client are somewhat arbitrary)

Now these two concepts are different but have some level of interaction.

If you set InstanceContextMode to Single and ConcurrencyMode to Single then your service will process exactly one request at a time. If you set InstanceContextMode to Single and ConcurrencyMode to Multiple then your service processes many requests but you are responsible for ensuring your code is threadsafe.
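As a sketch (the service contract and counter are purely illustrative, not from any real project), a singleton that allows concurrent requests and protects its own state looks like this:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ICounter
{
    [OperationContract]
    int Increment();
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class CounterService : ICounter
{
    private int count;
    private readonly object gate = new object();

    public int Increment()
    {
        // ConcurrencyMode.Multiple lets many threads in at once,
        // so thread safety is our responsibility
        lock (gate)
        {
            return ++count;
        }
    }
}
```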

If you set InstanceContextMode to PerCall then ConcurrencyMode Single and Multiple behave the same, as each request gets its own instance.

For PerSession, ConcurrencyMode Multiple is only required if you want to support a client sending multiple requests through the same proxy from multiple threads concurrently.

Unless you turn on ASP.NET Compatibility, WCF calls are processed on IO threads in the system threadpool. There is no thread affinity so any of these threads could process a request. The number of threads being used will grow until the throughput of the service matches the number of concurrent requests (assuming the server machine has the resources to match the number of concurrent requests). Although the number of IO threads is capped at 1000 by default, if you hit this many then unless you are running on some big iron hardware you probably have problems in your architecture.

Throttling is there to ensure your service is not swamped in terms of resources. There are three throttles in place:

  • MaxConcurrentCalls – the number of concurrent calls that can be made – under .NET 4 this defaults to 16 x the number of cores
  • MaxConcurrentSessions – the number of concurrent sessions that can be in flight – under .NET 4 this defaults to 100 x the number of cores
  • MaxConcurrentInstances – the number of service implementation objects that are in use – defaults to MaxConcurrentCalls + MaxConcurrentSessions

In reality, depending on whether you are using sessions or not, the session or call throttle will affect your service the most. The instance throttle will only affect your service if you set it lower than the others, or if you do something unusual and handle the mapping of requests to objects yourself using a custom IInstanceContextProvider.

You can control the throttle values using the serviceThrottling service behavior, which you set in the config file or in code.
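For example, in config (the numbers here are purely illustrative, not recommendations):

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <serviceThrottling maxConcurrentCalls="64"
                           maxConcurrentSessions="400"
                           maxConcurrentInstances="464" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```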

Friday, February 04, 2011 9:50:40 PM (GMT Standard Time, UTC+00:00)
# Tuesday, June 08, 2010


I’m speaking at Software Architect 2010 in October. I’m going to be delivering two sessions on Windows Workflow Foundation 4.0: the first explains the basic architecture and looks at using workflow as a Visual Scripting environment to empower business users. The second looks at building big systems with workflow concentrating on the WCF integration features.

In addition to that I’ll be delivering two all-day workshops with Andy Clymer: Building Applications the .NET 4.0 Way and Moving to the Parallel Mindset with .NET 4.0. The first of these will take a number of new features of .NET 4.0 and show how they can be combined to create compelling applications. The second will look at the Parallel Framework Extensions (PFx) introduced in .NET 4.0, examining the rich functionality of the library and how it can best be leveraged while avoiding common parallel pitfalls, and finally looking at patterns that aid parallelisation of your code.

.NET | PFx | RSK | WCF | WF | WF4
Tuesday, June 08, 2010 10:52:40 AM (GMT Daylight Time, UTC+01:00)
# Monday, June 07, 2010

I’ve been working with the Unity IoC container from Microsoft Patterns and Practices recently. It’s mostly straightforward as IoC containers go, but one thing had me puzzled for a while as it’s not really documented or blogged about as far as I can see; so I decided to blog it in the hope that others looking will stumble across this article.

Let’s start off with a simple example: I have two interfaces, IService and IRepository, that live in the Interfaces class library.

```csharp
public interface IService
{
    void DoWork();
}
```

```csharp
public interface IRepository
{
    string GetStuff();
}
```
I also have two implementations in the Services class library: MyRepository 
```csharp
public class MyRepository : IRepository
{
    public string GetStuff()
    {
        return "TADA!!";
    }
}
```

and MyService

```csharp
public class MyService : IService
{
    private IRepository repository;

    public MyService(IRepository repository)
    {
        this.repository = repository;
    }

    public void DoWork()
    {
        Console.WriteLine(repository.GetStuff());
    }
}
```

Notice that MyService needs an IRepository to do its work. Now the idea here is I’m going to wire this together via dependency injection and the Unity IoC container. So I have my application

```csharp
class Program
{
    static void Main(string[] args)
    {
        UnityContainer container = new UnityContainer();
        UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
        section.Configure(container);

        IService svc = container.Resolve<IService>();

        svc.DoWork();
    }
}
```

Notice that, as we’re using IoC, there are no hard-coded dependencies – everything is wired up via the container. However, there must be some information about how the interfaces map to concrete types, and this is in the config file.

```xml
<configuration>
  <configSections>
    <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection,Microsoft.Practices.Unity.Configuration"/>
  </configSections>

  <unity>
    <typeAliases>
      <!-- Lifetime Managers -->
      <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager"/>

      <!-- Interfaces -->
      <typeAlias alias="IService" type="Interfaces.IService,Interfaces"/>
      <typeAlias alias="IRepository" type="Interfaces.IRepository,Interfaces"/>

      <!-- Implementations -->
      <typeAlias alias="service" type="Services.MyService, Services"/>
      <typeAlias alias="repository" type="Services.MyRepository, Services"/>
    </typeAliases>
    <containers>
      <container>
        <types>
          <type type="IService" mapTo="service">
            <lifetime type="singleton"/>
          </type>
          <type type="IRepository" mapTo="repository">
            <lifetime type="singleton"/>
          </type>
        </types>
      </container>
    </containers>
  </unity>
</configuration>
```

Now all of this works fine and is simple Unity stuff. We use constructor injection to get the repository implementation into the service constructor. However, I’ve decided the service needs a timeout that I will generally configure in the config file; to make unit testing simple, I’ll add another constructor to MyService so I can pass a specific timeout.

```csharp
public class MyService : IService
{
    private IRepository repository;
    TimeSpan timeout;

    public MyService(IRepository repository, TimeSpan timeout)
    {
        this.repository = repository;
        this.timeout = timeout;
    }

    public MyService(IRepository repository)
    {
        this.repository = repository;
        timeout = GetTimeoutFromConfig();
    }

    private TimeSpan GetTimeoutFromConfig()
    {
        return default(TimeSpan);
    }

    public void DoWork()
    {
        Console.WriteLine(repository.GetStuff());
    }
}
```

Now I try to run the application and I get a pretty ugly error

```
Unhandled Exception: Microsoft.Practices.Unity.ResolutionFailedException: Resolution of the dependency failed,
type = "Interfaces.IService", name = "(none)".
Exception occurred while: while resolving.
Exception is: InvalidOperationException - The type Int32 cannot be constructed.
You must configure the container to supply this value.
```

Now that’s weird – I have no types that take an Int32! This is caused by Unity’s default behavior: it tries to resolve using the constructor with the most parameters (on the basis that this one will have the most dependencies that can be injected). So it tries to resolve the TimeSpan parameter, looks at TimeSpan’s constructors and tries to resolve one of those, which takes an Int32. I actually want to tell it to use a different constructor, and I can do this in two ways. The first is to annotate the constructor I want to use with the [InjectionConstructor] attribute:

```csharp
[InjectionConstructor]
public MyService(IRepository repository)
{
    this.repository = repository;
    timeout = GetTimeoutFromConfig();
}
```

But personally I don’t like this: it forces the services assembly to take a dependency on Unity, and the service has knowledge about how it’s being constructed. What I really want to do is specify this in config. This isn’t very well documented from what I can see, but what you do is specify the constructor, and how to resolve its parameters, against the type mapping in the config – i.e.

```xml
<type type="IService" mapTo="service">
  <lifetime type="singleton"/>
  <constructor>
    <param name="repository">
      <dependency/>
    </param>
  </constructor>
</type>
```

As well as specifying dependencies, you can also give explicit values by using <value/> instead of <dependency/>. I think this model is a lot cleaner than the attribute approach.
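For completeness, here’s a sketch of what supplying the timeout explicitly to the two-parameter constructor might look like – note the TimeSpan string relies on its standard TypeConverter, and the exact <value> syntax varies between Unity versions, so treat this as an assumption:

```xml
<type type="IService" mapTo="service">
  <lifetime type="singleton"/>
  <constructor>
    <param name="repository">
      <dependency/>
    </param>
    <param name="timeout">
      <!-- converted via the TimeSpan TypeConverter -->
      <value value="00:00:30"/>
    </param>
  </constructor>
</type>
```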
Monday, June 07, 2010 11:14:26 PM (GMT Daylight Time, UTC+01:00)
.NET | Dublin | WCF | WF
Monday, June 07, 2010 10:05:40 PM (GMT Daylight Time, UTC+01:00)

The Visual Studio 2010 Power Tools have just been released. There’s all sorts of goodness in here: new flexible tab handling in the VS shell (vertical tab groups, tabs grouped by project, dropping of rarely used tabs, etc.); a new searchable Add Reference dialog; new editor enhancements to make navigation easier; and much more.

I’ve been playing with it for the last couple of hours and it’s very neat. Of course, I don’t use ReSharper or CodeRush (I use too many machines I don’t control to become dependent on them) so some of these features may already be available in those tools. But for me the power tools are a welcome addition to the IDE.

Monday, June 07, 2010 8:26:40 PM (GMT Daylight Time, UTC+01:00)
# Tuesday, March 30, 2010

Rock Solid Knowledge on iTunes is live! We’ve taken the feed to our free screencasts and they are now available through iTunes via the following link


Now you can watch them on the move

.NET | EF4 | PFx | RSK | SilverLight | WCF | WF | WF4
Tuesday, March 30, 2010 9:13:22 PM (GMT Daylight Time, UTC+01:00)
# Wednesday, March 17, 2010

Thanks to everyone who attended my sessions at DevWeek 2010. I’ve now uploaded the demos which you can find at the following locations

A Day of .NET 4.0 Demos

Windows Workflow Foundation 4.0 Demos

Creating WCF Services using WF4 Demos

I’ll be around for the rest of the conference so drop by for a chat at our developer clinic in the exhibition area

.NET | EF4 | RSK | WCF | WF | WF4
Wednesday, March 17, 2010 8:13:59 AM (GMT Standard Time, UTC+00:00)
# Wednesday, March 10, 2010

In my last post I pointed to a screencast I had recorded that showed how to create a custom message filter to plug your own logic into the WCF 4.0 Routing Service. However, the simple custom filter is only the start – you can actually take control of part of the routing table, which allows you to make global decisions about which filters match a particular request. On that basis I have created another screencast that shows how to build one of these more complex custom filters. In this case I use the example of a round-robin load balancer, where a file on the file system indicates whether a specific endpoint should be considered part of the load-balancing algorithm.

You can find this last screencast in the series on the routing service, along with all the others, here on the Rock Solid Knowledge site.


Wednesday, March 10, 2010 1:05:40 PM (GMT Standard Time, UTC+00:00)
# Monday, March 08, 2010

I’ve just uploaded a new screencast to the Rock Solid Knowledge site. This one shows you how to plug your own routing logic into the new Routing Service that is part of WCF 4.0. It uses a custom message filter that can supplement the existing set of filters, such as matching on XPath and Action.

You can find the screencast (along with the previous ones in the series) here


Monday, March 08, 2010 12:37:22 PM (GMT Standard Time, UTC+00:00)
# Tuesday, March 02, 2010

Continuing my screencast series on the Routing Service in WCF 4.0, I have just uploaded one on Data Dependent Routing

You can get to the latest screencast (and all of the others) here http://rocksolidknowledge.com/ScreenCasts.mvc

Tuesday, March 02, 2010 10:50:18 AM (GMT Standard Time, UTC+00:00)
# Monday, March 01, 2010

I have just published my second screencast on one of the new features of WCF 4.0 – the Routing Service. This screencast focuses on enabling multicast and failover with the Routing Service

You can get to the screencast from here http://rocksolidknowledge.com/ScreenCasts.mvc

Monday, March 01, 2010 1:16:51 PM (GMT Standard Time, UTC+00:00)
# Friday, February 26, 2010

Thanks to everyone who attended my two sessions at BASTA! – another thoroughly enjoyable conference. I’ve uploaded the demos

What’s new in Workflow 4.0 – this includes the application with the rehosted designer

Building WCF services with Workflow 4.0

.NET | WCF | WF | WF4
Friday, February 26, 2010 10:02:42 AM (GMT Standard Time, UTC+00:00)
# Sunday, February 14, 2010

In my last post I showed that creating a custom composite activity (one that can have one or more children) requires deriving from NativeActivity. The Retry activity that I showed was fairly simple and in particular didn’t try to share data with its child. There appears to be a catch-22 in this situation when it comes to overriding CacheMetadata: if I add a Variable to the metadata (AddVariable) then it can be used exclusively by the children – i.e. the activity itself can’t manipulate the state; if I add a variable as implementation data (AddImplementationVariable) then the children can’t see it, as it’s considered purely part of this activity’s implementation. How then do we create data that can be both manipulated by the activity and accessed by the children?

The secret to achieving this is a feature called ActivityAction – Matt Winkler talks about it here. The idea is that I can bind an activity to one or more parent-defined pieces of data and then schedule that activity where it will have access to the data. It’s probably best to show this with an example, so I have written a ForEachFile activity: you give it a directory and it passes its child the file name of each file in the directory in turn. I’ll show the code in its entirety and then walk through each piece in turn.

```csharp
[ContentProperty("Body")]
[Designer(typeof(ForEachFileDesigner))]
public class ForEachFile : NativeActivity, IActivityTemplateFactory
{
    public InArgument<string> Directory { get; set; }

    [Browsable(false)]
    public ActivityAction<string> Body { get; set; }

    private Variable<IEnumerator<FileInfo>> files = new Variable<IEnumerator<FileInfo>>("files");

    protected override void CacheMetadata(NativeActivityMetadata metadata)
    {
        metadata.AddDelegate(Body);

        RuntimeArgument arg = new RuntimeArgument("Directory", typeof(string), ArgumentDirection.In);
        metadata.AddArgument(arg);

        metadata.AddImplementationVariable(files);
    }

    protected override void Execute(NativeActivityContext context)
    {
        DirectoryInfo dir = new DirectoryInfo(Directory.Get(context));

        IEnumerable<FileInfo> fileEnum = dir.GetFiles();
        IEnumerator<FileInfo> fileList = fileEnum.GetEnumerator();
        files.Set(context, fileList);

        bool more = fileList.MoveNext();

        if (more)
        {
            context.ScheduleAction<string>(Body, fileList.Current.FullName, OnBodyComplete);
        }
    }

    private void OnBodyComplete(NativeActivityContext context, ActivityInstance completedInstance)
    {
        IEnumerator<FileInfo> fileList = files.Get(context);
        bool more = fileList.MoveNext();

        if (more)
        {
            context.ScheduleAction<string>(Body, fileList.Current.FullName, OnBodyComplete);
        }
    }

    #region IActivityTemplateFactory Members
    public Activity Create(DependencyObject target)
    {
        var fef = new ForEachFile();
        var aa = new ActivityAction<string>();
        var da = new DelegateInArgument<string>();
        da.Name = "item";

        fef.Body = aa;
        aa.Argument = da;

        return fef;
    }
    #endregion
}
```

OK, let’s start with the core functionality and then we’ll look at each of the pieces that help make this fully usable. As you can see, the class derives from NativeActivity and overrides CacheMetadata and Execute – we’ll look at their implementations in a minute. There are three members in the class: the InArgument<string> for the directory; an implementation variable to hold the iterator as we move through the files in the directory; and the all-important ActivityAction<string>, which we will use to pass the current file name to the child activity.

Let’s look at CacheMetadata more closely. Because we want to do interesting things with some of the state, the default implementation won’t work, so we override it. As we don’t call the base class version, we need to specify how we use all of the state: we add the ActivityAction as a delegate, bind the Directory argument to a RuntimeArgument, and register the iterator as an implementation variable – we don’t want the child activity to have access to that directly.

Next let’s look at Execute. The first couple of lines are nothing unusual – we get the directory and get hold of the list of files. We then store the retrieved iterator in the implementation variable, files. We move to the first file in the iteration (if MoveNext returns false the list was empty), and as long as it returned true we schedule the child activity, passing the current file. To do this we use the ScheduleAction<T> member on the context. However, because we’re going to process each file sequentially, we also need to know when the child is finished, so we pass a completion callback, OnBodyComplete.

Now in OnBodyComplete we simply get the iterator, move to the next item in the iteration and, as long as we’re not finished iterating, re-schedule the child, passing the new item.

That really is the “functional” part of the activity – everything else is there to support the designer. So why does the designer need help? Well, let’s look at what we would have to do to build this activity correctly if we were going to execute it from Main.

```csharp
Activity workflow = new ForEachFile
{
    Body = new ActivityAction<string>
    {
        Argument = new DelegateInArgument<string>
        {
            Name = "item",
        },
        Handler = new WriteLine
        {
            Text = new VisualBasicValue<string>("item")
        }
    },
    Directory = @"c:\windows"
};
```

As you can see, it’s not a matter of simply creating the ForEachFile; we have to build the internal structure too – something needs to create that structure for the designer, and this is the point of IActivityTemplateFactory. When you drag an activity onto the design surface, it normally just creates the XAML for that activity. However, before it does that it does a speculative cast for IActivityTemplateFactory, and if the activity supports that it calls the interface’s Create method instead and serializes the resulting activity to XAML.

So, going back to the ForEachFile activity, you can see it implements IActivityTemplateFactory and therefore, as far as the designer is concerned, this is the interesting functionality – let’s take a look at the Create method. We create the ForEachFile and wire an ActivityAction<string> to its Body property. ActivityAction<string> needs a slot to store the data to be presented to the child activity. This is modelled by DelegateInArgument<string>, and this gets wired to the Argument member of the ActivityAction. We also name the argument, as we want a default name for the state so the child activity can use it. Notice, however, that we don’t specify the child activity itself (it would be a pretty useless composite if we hard-coded this). The child will be placed on the Handler property of the ActivityAction, but that will be done in the ActivityDesigner using data binding.

Before we look at the designer, let’s highlight a couple of “polishing” features of the code: the class declares a ContentProperty via an attribute – that just makes the XAML parsing cleaner, as the child doesn’t need to be specified using property element syntax; and the Body is marked non-browsable – we don’t want the activity’s user to try to set this value in the property grid.

OK, on to the designer. If you read my previous article there are a couple of new things here. Let’s look at the markup – again there is no special code in the code-behind file; everything is achieved using data binding.

   1: <sap:ActivityDesigner x:Class="ActivityLib.ForEachFileDesigner"
   2:     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
   3:     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
   4:     xmlns:s="clr-namespace:System;assembly=mscorlib"
   5:     xmlns:conv="clr-namespace:System.Activities.Presentation.Converters;assembly=System.Activities.Presentation"
   6:     xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation"
   7:     xmlns:sapv="clr-namespace:System.Activities.Presentation.View;assembly=System.Activities.Presentation"
   8:     xmlns:me="clr-namespace:ActivityLib">
   9:     <sap:ActivityDesigner.Resources>
  10:         <conv:ArgumentToExpressionConverter x:Key="expressionConverter"/>
  11:     </sap:ActivityDesigner.Resources>
  13:     <Grid>
  14:         <Grid.RowDefinitions>
  15:             <RowDefinition Height="Auto"/>
  16:             <RowDefinition Height="*"/>
  17:         </Grid.RowDefinitions>
  19:         <StackPanel Orientation="Horizontal" Margin="2">
  20:             <TextBlock Text="For each file "/>
  21:             <TextBox Text="{Binding ModelItem.Body.Argument.Name}" MinWidth="50"/>
  22:             <TextBlock Text=" in "/>
  23:             <sapv:ExpressionTextBox Expression="{Binding Path=ModelItem.Directory, Converter={StaticResource expressionConverter}}"
  24:                                     ExpressionType="s:String"
  25:                                     OwnerActivity="{Binding ModelItem}"/>
  26:         </StackPanel>
  27:         <sap:WorkflowItemPresenter Item="{Binding ModelItem.Body.Handler}"
  28:                                    HintText="Drop activity"
  29:                                    Grid.Row="1" Margin="6"/>
  31:     </Grid>
  33: </sap:ActivityDesigner>

Here’s what this looks like in the designer


So the TextBox at line 21 displays the argument name that our IActivityTemplateFactory implementation set up. The ExpressionTextBox is bound to the directory name but allows VB.NET expressions to be used. The WorkflowItemPresenter is bound to the Handler property of the Body ActivityAction. This designer is then associated with the activity using the [Designer] attribute.

So, as you can see, ActivityAction and IActivityTemplateFactory work together to allow us to build rich composite activities with design-time support.

.NET | WF | WF4
Sunday, February 14, 2010 1:54:30 PM (GMT Standard Time, UTC+00:00)
# Tuesday, February 09, 2010

I’m writing Essential Windows Workflow Foundation 4.0 with Maurice for DevelopMentor. One of the things that I think is less than obvious is the behavior of NativeActivity.

What is NativeActivity, I hear you ask? Well, there are a number of models for building custom activities in WF4. Most “business” type custom activities will be built using a declarative model, in XAML, by assembling building blocks graphically. However, what if you are missing a building block? At this point you have to fall back to writing code, and there are three options for your base class when writing an activity in code: CodeActivity, AsyncCodeActivity and NativeActivity.

You use CodeActivity when you have a simple synchronous activity: all work happens in Execute and it has no child activities.
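For example, a minimal CodeActivity might look like this (a hypothetical activity, just to show the shape):

```csharp
using System;
using System.Activities;

public class SayHello : CodeActivity
{
    public InArgument<string> Name { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        // Arguments are always read via the context
        Console.WriteLine("Hello " + Name.Get(context));
    }
}
```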

AsyncCodeActivity is new to WF4. Here you have the ability to implement the async pattern (BeginExecute / EndExecute) to perform short-lived async operations where you do not want the workflow persisted (e.g. an async WebRequest)
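Taking the async WebRequest example, an AsyncCodeActivity sketch might look like this (hypothetical, with error handling omitted):

```csharp
using System;
using System.Activities;
using System.IO;
using System.Net;

public class DownloadPage : AsyncCodeActivity<string>
{
    public InArgument<string> Url { get; set; }

    protected override IAsyncResult BeginExecute(AsyncCodeActivityContext context,
                                                 AsyncCallback callback, object state)
    {
        WebRequest request = WebRequest.Create(Url.Get(context));
        context.UserState = request;   // stash so EndExecute can retrieve it
        return request.BeginGetResponse(callback, state);
    }

    protected override string EndExecute(AsyncCodeActivityContext context,
                                         IAsyncResult result)
    {
        WebRequest request = (WebRequest)context.UserState;
        using (WebResponse response = request.EndGetResponse(result))
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}
```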

NativeActivity gives you full access to the power of the workflow execution engine. However, in the words of Spiderman’s uncle, “with great power comes great responsibility”. NativeActivity can be a bit tricky, so that is what this article is about.

I’m going to walk through the code for a Retry activity – one where a child activity can be rerun a number of times upon failure. The activity has two InArguments: MaxRetries and RetryDelay. MaxRetries says how many times we will retry the child before giving up. RetryDelay says how long to wait between retries. We could use a Thread.Sleep to do the delay, but this would not be good for the workflow engine: we would block a thread it could use, and there would be no way for the engine to persist the workflow – what if we wanted to retry in 2 days? So instead, as part of our implementation, we’ll use a Delay activity.

Now to explain the code we have to take a slight diversion and talk about the relationship between Activity, ActivityInstance and ExecutionContext. I talked about this a while back here when the PDC CTP first came out (that’s what the reference to a base class called WorkflowElement is about), but to expand a little: the Activity is really just a template containing the code to execute for the activity. The ActivityInstance is the actual thing that is executing; it holds the state for this instance of the activity template. Now we need a way to bind the template code to the currently running instance of the activity, and this is the role of the ExecutionContext. If you are using the CodeActivity base class then most of this is hidden from you, except that you have to access arguments by passing in the ExecutionContext. However, with NativeActivity you have to get more directly involved with this model.

Now how does the workflow engine know what data you need to store in the ActivityInstance? Well, it turns out you need to tell it. NativeActivity has a virtual method called CacheMetadata (this post talks about it to some degree). The point is that the activity has to register all of the “stuff” it wants to use during its execution. The base class implementation will do some fairly reasonable default actions, but it cannot know, for example, that part of your functionality is there purely as an implementation detail and should not be public. Therefore, you will often override this method when you create a NativeActivity.

So, without further ado, here’s the code for the Retry activity. I’ll then walk through it.

   1: [Designer(typeof(ForEachFileDesigner))]
   2: public class Retry : NativeActivity
   3: {
   4:   public InArgument<int> MaxRetries { get; set; }
   5:   public InArgument<TimeSpan> RetryDelay { get; set; }
   6:   private Delay Delay = new Delay();
   8:   public Activity Body { get; set; }
  10:   Variable<int> CurrentRetry = new Variable<int>("CurrentRetry");
  12:   protected override void CacheMetadata(NativeActivityMetadata metadata)
  13:   {
  14:     metadata.AddChild(Body);
  15:     metadata.AddImplementationVariable(CurrentRetry);
  17:     RuntimeArgument arg = new RuntimeArgument("MaxRetries", typeof(int), ArgumentDirection.In);
  18:     metadata.Bind(MaxRetries, arg);
  19:     metadata.AddArgument(arg);
  21:     Delay.Duration = RetryDelay;
  23:     metadata.AddImplementationChild(Delay);
  24:   }
  26:   protected override void Execute(NativeActivityContext context)
  27:   {
  28:     CurrentRetry.Set(context, 0);
  30:     context.ScheduleActivity(Body, OnFaulted);
  31:   }
  33:   private void OnFaulted(NativeActivityFaultContext faultContext, Exception propagatedException, ActivityInstance propagatedFrom)
  34:   {
  35:     int current = CurrentRetry.Get(faultContext);
  36:     int max = MaxRetries.Get(faultContext);
  38:     if (current < max)
  39:     {
  40:       faultContext.CancelChild(propagatedFrom);
  41:       faultContext.HandleFault();
  42:       faultContext.ScheduleActivity(Delay, OnDelayComplete);
  43:       CurrentRetry.Set(faultContext, current + 1);
  44:     }
  45:   }
  47:   private void OnDelayComplete(NativeActivityContext context, ActivityInstance completedInstance)
  48:   {
  49:     context.ScheduleActivity(Body, OnFaulted);
  50:   }
  51: }

So let’s look at the code: first, the class derives from NativeActivity (we’ll come to the designer later). This means I want to do fancy things like have child activities or perform long-running asynchronous work. Next we see the two InArguments that are passed to the Retry. The Delay is an implementation detail of how we will pause between retry attempts, and the Body property is where the activity we are going to retry lives. Finally we have a Variable, CurrentRetry, where we store how many retry attempts we have made. That is the data in the class, but remember we need to tell the workflow engine what we need stored and why – this is the point of CacheMetadata (I read this method name as “here is the metadata for the cache” rather than “I am going to cache some metadata”).

In CacheMetadata the first thing we do is specify that the Body is our child activity. Next we tell the engine that we want to be able to get hold of the CurrentRetry variable, but that it is only there for our implementation – we’re not expecting the child activity to try to make use of it. The next three lines (17-19) seem a little strange, but essentially we’re saying that the MaxRetries argument needs to be accessible over the whole lifetime of the ActivityInstance, so we need a slot for that. We next configure our Delay activity, passing the RetryDelay as its duration (this is how long we want to wait between retries). Finally we add the Delay, not as a normal child, but as an implementation detail.

OK, Execute is pretty simple – we initialize the CurrentRetry and then schedule the Body. But, because we want to retry on failure, we also pass in a fault handler (OnFaulted)

OnFaulted does the main work. It fires if the child fails. It checks to see if we have exceeded the retry count and, if not, retries the Body activity. However, it doesn’t do this directly: first it tells the context that it has handled the error – it also, strangely, has to cancel the current child (which is odd as it has already faulted) – Maurice talks about this oddity here. Next it schedules the Delay, as we have to wait for the retry delay, and wires up a completion handler (OnDelayComplete) so we know when the delay has finished. Finally it updates the CurrentRetry.

OnDelayComplete simply reschedules the Body activity, remembering to pass the fault handler again in case the activity fails again.

Oh one thing I said I’d come back to – the designer. I have written a designer to go with this (designers in WF4 are WPF based). The XAML for this is here:

<sap:ActivityDesigner x:Class="WordActivities.ForEachFileDesigner"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:s="clr-namespace:System;assembly=mscorlib"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation"
    xmlns:sapv="clr-namespace:System.Activities.Presentation.View;assembly=System.Activities.Presentation"
    xmlns:conv="clr-namespace:System.Activities.Presentation.Converters;assembly=System.Activities.Presentation"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">
    <sap:ActivityDesigner.Resources>
        <conv:ArgumentToExpressionConverter x:Key="expressionConverter"/>
    </sap:ActivityDesigner.Resources>
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="*"/>
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto"/>
            <ColumnDefinition Width="*"/>
        </Grid.ColumnDefinitions>

        <TextBlock Grid.Row="0" Grid.Column="0" Margin="2">Max. Retries:</TextBlock>
        <sapv:ExpressionTextBox Expression="{Binding Path=ModelItem.MaxRetries, Converter={StaticResource expressionConverter}}"
                                ExpressionType="s:Int32"
                                OwnerActivity="{Binding ModelItem}"
                                Grid.Row="0" Grid.Column="1" Margin="2"/>

        <TextBlock Grid.Row="1" Grid.Column="0" Margin="2">Retry Delay:</TextBlock>
        <sapv:ExpressionTextBox Expression="{Binding Path=ModelItem.RetryDelay, Converter={StaticResource expressionConverter}}"
                                ExpressionType="s:TimeSpan"
                                OwnerActivity="{Binding ModelItem}"
                                Grid.Row="1" Grid.Column="1" Margin="2"/>

        <sap:WorkflowItemPresenter Item="{Binding ModelItem.Body}"
                                   HintText="Drop Activity"
                                   Margin="6"
                                   Grid.Row="2"
                                   Grid.Column="0"
                                   Grid.ColumnSpan="2"/>
    </Grid>
</sap:ActivityDesigner>

There is no special code behind, everything is done via databinding.

So as you can see, there are a lot of pieces that need to be put into place for something that seems fairly simple. The critical issue is getting the implementation of CacheMetadata correct – once you have identified all of the pieces of data you need stored the rest falls out nicely.

.NET | WF | WF4
Tuesday, February 09, 2010 8:14:03 PM (GMT Standard Time, UTC+00:00)
# Friday, February 05, 2010

A while back I wrote a blog post about DataSets and why you shouldn’t use them on service boundaries. The fundamental issues are:

  1. No non-.NET client has any idea what the data you are sending them looks like
  2. Hidden, non-essential data is passed up and down the wire
  3. Your client gets coupled to the shape of your data (usually a product of the Data Access Layer)

So when I saw that Entity Framework 4.0 supports Self-Tracking Entities (STEs) I was interested to see how they would work – after all, automated change tracking is one of the reasons people wanted to use DataSets in service contracts. The idea is that as you manipulate the object it tracks changes to its properties and whether it has been newly created or marked for deletion. It does this by implementing an interface called IObjectWithChangeTracker, which is then used by the ObjectContext to work out what needs to be done in terms of persistence. There is an extension method on the ObjectContext called ApplyChanges which does the heavy lifting.

The Entity Framework team has released a T4 Template to generate these STEs from an EDMX file and the nice thing is that the generated entities themselves have no dependency on the Entity Framework. Only the generated context class has this dependency and, so the story goes, the client needs to know nothing about Entity Framework, only the service does. The client, and the entities, remain ignorant of the persistence model. For this reason STEs have been touted as a powerful tool in n-tier based architectures

All of this seems almost too good to be true … and unfortunately it is.

To understand what the issue is with STEs we have to remember what two of the main goals of Service based systems are:

  1. Support heterogeneous systems – the service ecosystem is not bound to one technology
  2. Decoupling – to ensure changes in a service do not cascade to consumers of the service for technical reasons (there may obviously be business reasons why we might want a change in a service to affect the consumers, such as changes in law or legislation)

So to understand the problem with STEs we need to look at the generated code. Here’s the model I’m using:


Now let’s look at the T4 Template generated code for, say, the OrderLine:

[DataContract(IsReference = true)]
public partial class OrderLine : IObjectWithChangeTracker, INotifyPropertyChanged
{
    #region Primitive Properties

    [DataMember]
    public int id
    {
        get { return _id; }
        set
        {
            if (_id != value)
            {
                ChangeTracker.RecordOriginalValue("id", _id);
                _id = value;
                OnPropertyChanged("id");
            }
        }
    }
    private int _id;

    #endregion

    // more details elided for clarity
}
A few things to notice: firstly, this is a DataContract and is therefore designed to be used in WCF contracts – that is its intent; secondly, a bit of work takes place inside the generated property setters. The setter checks whether the data has actually changed, records the old value, and then calls OnPropertyChanged to raise the PropertyChanged event (defined on INotifyPropertyChanged). Let’s have a look inside OnPropertyChanged:

protected virtual void OnPropertyChanged(String propertyName)
{
    if (ChangeTracker.State != ObjectState.Added && ChangeTracker.State != ObjectState.Deleted)
    {
        ChangeTracker.State = ObjectState.Modified;
    }
    if (_propertyChanged != null)
    {
        _propertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}

OK, so a bit more than just raising the event. It also marks the object as modified in the ChangeTracker. The ChangeTracker state is partly how the STE serializes its self-tracked changes; it is this data that is used by the ApplyChanges extension method to work out what has changed. So the thing to remember here is that the change tracking is performed by code generated into the property setters.

Well the T4 Template has done its work so we create our service contract using these conveniently generated types and the client uses metadata and Add Service Reference to build its proxy code. It gets the OrderLine from the service, updates the quantity and sends it back. The service calls ApplyChanges on the context and then saves the changes and … nothing changes in the Database. What on earth went wrong?

At this point we have to step back and think about what those types we use in service contracts are actually doing. Those types are nothing more than serialization helpers – they help us bridge the object world to the XML one. The metadata generation uses the type definition and the attribute annotations to generate a schema (XSD) definition of the data in the class. Notice we’re only talking about data – there is no concept of behavior. And this is the problem. When the Add Service Reference code generation takes place it is based on the schema in the service metadata – so the objects *look* right; it’s just that the all-important code in the property setters is missing. So you can change the state of entities in the client and the service will never be able to work out that the state has changed – so changes don’t get propagated to the database.

There is a workaround for this problem. You take the generated STEs and put them in a separate assembly which you give to the client. Now the client has all of the change tracking code available to it, changes get tracked and the service can work out what has changed and persist it.

But what have we just done? We have forced the client to be .NET. Not only that, we’ve probably compiled against .NET 4.0 and so we are requiring the client to be .NET 4.0 aware – we might as well have taken a dependency on Entity Framework 4.0 in the client at this point. In addition, changes I make to the EDMX file are going to be reflected in the T4-generated code – I have coupled my client to my data access layer. Look back at the problems with using DataSets on service boundaries again: we’re pretty much back where we started – although the data being transmitted is more controlled.

So STEs at first glance look very attractive, but in service terms they are in fact similar to DataSets in their effect on the service consumer. So what is the solution? Well, we’re back with our old friends, DTOs and AutoMapper. To produce real decoupling and allow heterogeneous environments we have to explicitly model the *data* being passed at service boundaries. How this is patched into our data access layer is up to the service. Entity Framework 4.0 certainly improves matters here over 1.0, as we can use POCOs, which aid the testability and flexibility of our service.
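To make the DTO approach concrete, here is a minimal sketch using AutoMapper’s static configuration API of the era; the OrderLineDto type, its properties and the service method name are invented for illustration:

```csharp
using System.Runtime.Serialization;
using AutoMapper;

// Hypothetical DTO: explicitly models only the data crossing the service boundary
[DataContract]
public class OrderLineDto
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public int Quantity { get; set; }
}

public static class MappingConfig
{
    // Called once at service start-up
    public static void Configure()
    {
        Mapper.CreateMap<OrderLine, OrderLineDto>()
              .ForMember(d => d.Id, opt => opt.MapFrom(s => s.id));
    }
}

// Inside a service operation the EF entity never leaves the service:
//   OrderLineDto dto = Mapper.Map<OrderLine, OrderLineDto>(entity);
```

The point of the extra ceremony is that the wire contract is now owned by the service, not by the EDMX – renaming a column changes one ForMember line rather than every consumer.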

.NET | EF4 | WCF
Friday, February 05, 2010 12:48:08 PM (GMT Standard Time, UTC+00:00)
# Thursday, February 04, 2010

I’ve just noticed that Andy has blogged about our drop in clinic we are running at DevWeek 2010. All of the Rock Solid Knowledge guys will be at the conference so come over and let us help solve your design and coding problems – whether it be WCF, Workflow, Multithreading, WPF, Silverlight, ASP.NET MVC,  Design Patterns or whatever you are currently wrestling with. We’re also doing a bunch of talks (detailed on our Conferences page)


.NET | RSK | WCF | WF | WF4
Thursday, February 04, 2010 1:19:44 PM (GMT Standard Time, UTC+00:00)
# Monday, January 18, 2010

I’ve just realised that DevWeek 2010 is on the horizon. DevWeek is one of my favourite conferences  - lots of varied talks, vendor independence and I get to hang out with lots of friends for the week.

This year I’m doing a preconference talk and two sessions:

A Day of .NET 4.0
Mon 15th March 2010
.NET 4.0 is a major release of .NET, including the first big changes to the .NET runtime since 2.0. In this series of talks we will look at the big changes in this release in C#, multithreading, data access and workflow.
The best-known change in C# is the introduction of dynamic typing. However, there have been other changes that may affect your lives as developers more, such as support for generic variance, named and optional parameters. There is a whole library for multithreading, PFx, that not only assists you in parallelizing algorithms but in fact replaces the existing libraries for multithreading with a unified, powerful single API.
Entity Framework has grown up – the initial release, although gaining some popularity, missed features that made it truly usable in large-scale systems. The new release, version 4.0, introduces a number of features, such as self-tracking objects, that assist using Entity Framework in n-tier applications.
Finally, workflow gets a total rewrite to allow the engine to be used far more widely, having overcome limitations in the WF 3.5 API.
This workshop will take you through all of these major changes, and we’ll also talk about some of the other smaller but important ones along the way.

An Introduction to Windows Workflow Foundation 4.0
Tues 16th March 2010
.NET 4.0 introduces a new version of Windows Workflow Foundation. This new version is a total rewrite of the version introduced with .NET 3.0. In this talk we look at what Microsoft are trying to achieve with WF 4.0, why they felt it necessary to rewrite rather than modify the codebase, and what new features are available in the new library. Along the way we will be looking at the new designer, declarative workflows, asynchronous processing, sequential and flowchart workflows and how workflow’s automated persistence works.

Creating Workflow-based WCF Services
Tues 16th March 2010
There are very good reasons for using a workflow to implement a WCF service: workflows can provide a clear platform for service composition (using a number of building block services to generate a functionally richer service); workflow can manage long running stateful services without having to write your own plumbing to achieve this. The latest version of Workflow, 4.0, introduces a declarative model for authoring workflows and new activities for message based interaction. In addition we have a framework for flexible message correlation that allows the contents of a message to determine which workflow the message is destined for. In this session we will look at how you can consume services from workflow and expose the resulting workflow itself as a service.

As well as these you may well see me popping up as a guest code-monkey in the other Rock Solid Knowledge sessions. Hope to see you there

Monday, January 18, 2010 2:52:19 PM (GMT Standard Time, UTC+00:00)
# Thursday, January 07, 2010

The Routing Service is a new feature of WCF 4.0 (shipping with Visual Studio 2010). It is an out-of-the-box SOAP Intermediary able to perform protocol bridging, multicast, failover and data dependent routing. I’ve just uploaded a new screencast walking through setting up the Routing Service and showing a simple example of protocol bridging (the client sends messages over HTTP and the service receives them over NetTcp). This screencast is one of a series I will be recording about the Routing Service. You can find the screencast here.

Thursday, January 07, 2010 1:24:26 PM (GMT Standard Time, UTC+00:00)
# Thursday, December 17, 2009

Seeing as this has changed completely from WF 3.5 I thought I’d post a quick blog entry to describe how to run a workflow declared in a XAML file.

You may have heard that the WF 4.0 default authoring model is now XAML. However, the Visual Studio 2010 workflow projects store the XAML as a resource in the binary rather than as a text file. So if you want to deploy your workflows as XAML text files how do you run them? In .NET 3.5 you could pass the workflow runtime an XmlReader pointing at the XAML file but in WF 4.0 there is no WorkflowRuntime class. It turns out you need to load the XAML slightly indirectly by creating a special activity called a DynamicActivity

DynamicActivity has an Implementation member that points to a Func<Activity> delegate – in other words you wire up a method that returns an activity (the workflow). Here’s an example:

static void Main(string[] args)
{
    DynamicActivity dyn = new DynamicActivity();
    // this line wires up the workflow creator method
    dyn.Implementation = CreateWorkflow;
    // then run the DynamicActivity like any other workflow
    WorkflowInvoker.Invoke(dyn);
}

static Activity CreateWorkflow()
{
    // we use the new XamlServices utility class to deserialize the XAML
    Activity act = (Activity)XamlServices.Load(@"..\..\workflow1.xaml");
    return act;
}
.NET | WF | WF4
Thursday, December 17, 2009 4:27:07 PM (GMT Standard Time, UTC+00:00)
# Monday, November 02, 2009

Thanks to everyone who attended Guerrilla.NET last week. As promised the demos are now online and you can find them here


Monday, November 02, 2009 9:24:49 AM (GMT Standard Time, UTC+00:00)
# Friday, October 02, 2009

I just got back from speaking at Software Architect 2009. I had a great time at the conference and thanks to everyone who attended my sessions. As promised the slides and demos are now available on the Rock Solid Knowledge website and you can get them from the Conferences page.

.NET | Azure | REST | RSK | WCF | WF
Friday, October 02, 2009 10:12:18 PM (GMT Daylight Time, UTC+01:00)

I had a lucky coincidence while speaking at Software Architect 2009. Four hours before I gave a talk on Contract First Design with WCF, the WSCF Blue guys released version 1.0 of their tool. Contract first design is the most robust way to build services that potentially have to exist in a heterogeneous environment – it essentially means you start from the WSDL, which means everyone can consume the service as no implementation details leak onto the wire. The major stumbling block in the WCF world has been that there was no tool that could take a WSDL document and generate the skeleton of a service that implements the defined contract. Finally we have a released tool that does this very thing, in the shape of WSCF Blue. In fact it can go further than that: because writing WSDL by hand is too much like heavy lifting for many people, WSCF Blue can also take one or more schemas and run a wizard to build a WSDL document that defines a contract based on the messages in the schema documents.

WSCF Blue solves a problem that has been there since WCF first shipped in .NET 3.0, and the great thing is that it’s free – being released on CodePlex.


Friday, October 02, 2009 10:06:53 PM (GMT Daylight Time, UTC+01:00)
# Thursday, October 01, 2009

I used to be a C# MVP a couple of years ago. When I took on the role of CTO of DevelopMentor I found I had a far smaller amount of time to devote to community based activity and so I lost my MVP status. I stepped down as DevelopMentor’s CTO back in January and as a result suddenly found I had time again and so can often be found hanging out at the MSDN WCF forum, speaking at conferences, etc. Consequently I have just heard that I’ve been awarded MVP status for Connected Systems :-) - nice to be back in the fold

.NET | Life
Thursday, October 01, 2009 4:22:59 PM (GMT Daylight Time, UTC+01:00)
# Saturday, September 26, 2009

Thanks to everyone who came to my sessions at BASTA! My first time at that conference and I had a great time. Here are the slides and demos

What’s New in Workflow 4.0

Contract First Development with WCF

Generics, Extension Methods and Lambdas – Beyond List<T>

Saturday, September 26, 2009 9:24:56 AM (GMT Daylight Time, UTC+01:00)
# Sunday, July 26, 2009

Wenlong Dong has just posted about changes to the WCF throttling defaults in WCF 4.0. The new throttle defaults are based on the number of processors/cores in the machine – the more powerful the machine, the higher the throttle – which is a good thing. The throttle that hit people first was either concurrent sessions, if the contract/binding settings gave them a session, or, with no session, concurrent calls. Now the new values (100 * processor count for sessions and 16 * processor count for calls) mean that either one could bite first.

As Wenlong says, the original values were always too low for enterprise applications. What they did do, however, is force people to address throttling if they were writing enterprise apps. And Wenlong is right that that sometimes meant people putting the throttles up to int.MaxValue which is really the same as no throttle at all. But although, in general, I think the change is the right move (it mostly works out of the box), it will lead to people facing performance problems with absolutely no idea why they might be taking place.

What I mean by this is that problems caused by the current throttles are really easy to spot: “as soon as I put more than 10 calls through it, it breaks”. You can google that and find the problem and the solution. The new values make it hugely non-obvious why an application suddenly has problems with more than 400 clients (the NetTcpBinding on a machine with 4 cores, for example). Good time to be a consultant in the UK and Europe I guess ;-)
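If you do want deterministic behaviour rather than hardware-dependent defaults, the throttles can still be set explicitly via the serviceThrottling behavior in config (the numbers below are purely illustrative – size them from load testing, not by reaching for int.MaxValue):

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="explicitThrottle">
        <!-- illustrative values: set these from measured capacity -->
        <serviceThrottling maxConcurrentCalls="64"
                           maxConcurrentSessions="400"
                           maxConcurrentInstances="464" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```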

Sunday, July 26, 2009 10:24:17 PM (GMT Daylight Time, UTC+01:00)
# Tuesday, July 14, 2009

It has been one of my bugbears, as I have given talks on the Azure platform, that I have not been able to answer any question involving commercial details with anything other than “we’ll have to see the prices when they announce them”. Finally, today, the team has announced the pricing model and availability.


.NET | Azure
Tuesday, July 14, 2009 3:46:36 PM (GMT Daylight Time, UTC+01:00)
# Saturday, July 04, 2009

One of the frustrations with the current WCF tooling is that, although I can generate the client proxy code from a WSDL document I cannot generate the server side skeleton that matches the WSDL. In situations where two parties agree a contract based on WSDL this makes implementing the service side error prone. Back in the ASMX days WSDL.EXE had a /s switch that would create the service side skeleton, but this feature was not carried through into SVCUTIL.EXE.

Fortunately the WSCF (Web Service Contract First) guys  have finally released a beta of a WCF compatible version of their tool that allows you to build up a model of the service and then generate the client and/or service

Here’s Santosh’s announcement

Saturday, July 04, 2009 10:03:55 AM (GMT Daylight Time, UTC+01:00)
# Thursday, July 02, 2009

In my last post I linked to the screencast I made on processing large messages in WCF using buffering. I also said that I would be putting up another one on streaming messages shortly. That second screencast has now gone live on the Rock Solid Knowledge website. In part 2 of the large message handling screencast I talk about enabling streaming, designing contracts for streaming and how this affects the way the receiver has to process the data. You can find the new screencast here


Thursday, July 02, 2009 1:02:39 PM (GMT Daylight Time, UTC+01:00)
# Thursday, June 18, 2009

I’ve just uploaded a new screencast on to Rock Solid Knowledge. This one walks through configuring WCF to be able to pass large messages between client and service. It’s the first of two parts; this one talks about the default mode WCF uses for transferring data – buffering. I’ll be doing another one soon that looks at WCF in streaming mode.

You can find the screencast here

Thursday, June 18, 2009 1:04:34 PM (GMT Daylight Time, UTC+01:00)
# Wednesday, June 10, 2009

I am one of the moderators of the MSDN WCF Forum. One of the main areas of questions on the forum is duplex messaging – particularly using the WSDualHttpBinding. So instead of typing long messages repeating the same thing in answer to these questions I’ve decided to write this blog post to give a bit of background about duplex messaging and then discuss the options for bindings and common problems people have.

What is Duplex Messaging?

There are many ways that messages can be exchanged between two parties in a service based system: the client can send messages to the server and never get any back; the client can send a message and wait for a response; the client and service can send each other messages without any pre-defined pattern; the client can send the service a message but not wait synchronously for a response and then the service can send a message back asynchronously; and there are many others. However, the first three of these are supported natively in WCF and are known as one-way, request/response and duplex.

So duplex messaging is where the client and service can send each other unsolicited messages. Most commonly this is characterized by the service sending the client “events” or notifications of progress or other “interesting things”.

Duplex Contracts in WCF

To send messages to each other, the client and service must have an idea of what operations are available and what messages are sent and received during the communication. In WCF this idea is modelled by the contract. Normally a contract just determines what functionality is available at the service. However, now the service is going to be sending messages that the client isn’t specifically waiting for, so we also need an idea of what messages the client can deal with. In other words, we need a contract that models both directions of the conversation.

A bi-directional contract is modelled using two interfaces bound together with a ServiceContract – like this:

[ServiceContract(CallbackContract = typeof(IPizzaProgress))]
interface IOrderPizza
{
    [OperationContract(IsOneWay = true)]
    void PlaceOrder(string PizzaType);
}

interface IPizzaProgress
{
    [OperationContract(IsOneWay = true)]
    void TimeRemaining(int minutes);

    [OperationContract(IsOneWay = true)]
    void PizzaReady();
}

The important bit here is the CallbackContract, which establishes the relationship between the service’s and the client’s contracts.

Writing the Service

The service is implemented normally apart from two issues: firstly, it needs to access the callback contract to be able to send messages back to the client; secondly, the communication infrastructure (modelled by the binding) needs to be able to cope with duplex messaging. First, let’s look at accessing the callback contract:

class PingService : IOrderPizza
{
    IPizzaProgress callback;

    public void PlaceOrder(string PizzaType)
    {
        callback = OperationContext.Current.GetCallbackChannel<IPizzaProgress>();
        Action preparePizza = PreparePizza;
        preparePizza.BeginInvoke(ar => preparePizza.EndInvoke(ar), null);
    }

    void PreparePizza()
    {
        for (int i = 10 - 1; i >= 0; i--)
        {
            // loop body elided in the original post: send progress then wait a second
            callback.TimeRemaining(i);
            Thread.Sleep(1000);
        }
        callback.PizzaReady();
    }
}

The critical line here is the call to GetCallbackChannel on the OperationContext. This gives the service access to a proxy with which to call back to the client.

Now the service also needs to use a binding that is compatible with duplex messaging. WSHttpBinding is the default in the built-in WCF projects but it does not support duplex messaging. People generally then move to the WSDualHttpBinding, which is similar to the WSHttpBinding but does support duplex. I will go into more depth about bindings for duplex shortly, but for now let’s stick with this – it will work in our test rig on a single machine without issue.

Writing the Client

If the client is going to receive these messages it needs to provide an implementation of the callback contract. It can get its definition from either a shared contract assembly or from metadata. If using metadata, the callback contract will be named the same as the service’s contract but with the word Callback appended. The client will also need to supply this implementation to the WCF infrastructure, and it does this by wrapping an instance in an InstanceContext object and passing it to the proxy constructor. So here is the client:

class Program
{
    static void Main(string[] args)
    {
        InstanceContext ctx = new InstanceContext(new Callback());
        OrderPizzaClient proxy = new OrderPizzaClient(ctx);
        proxy.PlaceOrder("Margherita");
        Console.WriteLine("press enter to exit");
        Console.ReadLine();
    }
}

class Callback : IOrderPizzaCallback
{
    public void TimeRemaining(int minutes)
    {
        Console.WriteLine("{0} seconds remaining", minutes);
    }

    public void PizzaReady()
    {
        Console.WriteLine("Pizza is ready");
    }
}

Running the service and the client will have this working quite happily – so it would seem that duplex messaging and WCF work very well … so why on earth do people keep asking questions about it on the WCF forums?

It Worked on My Machine but Broke when we Deployed It!

Ahh well, you probably did the thing that is obvious but almost always a bad idea: you went and chose WSDualHttpBinding as your duplex binding. To understand why this is a bad idea we need to dig a little deeper into how the WSDualHttpBinding works. HTTP is a unidirectional protocol: the client makes a request and the server sends a response. There is no way for a server to initiate an exchange with the client. So how is duplex messaging, which requires exactly this facility, going to work? Well, the “Dual” in the name is significant: the WSDualHttpBinding actually consists of two connections, one outbound from client to server and one inbound from server to client – this second connection may already be ringing alarm bells with you. There are two big problems with inbound connections to a client: firewalls very often block inbound connections to clients; and the client may not be reachable from the server – it may be behind a NAT router and so cannot be contacted without port forwarding being set up on the router. Both of these issues are showstoppers in real network topologies. You can take some small steps to help – you can specify what port the client should listen on, for example, by using the ClientBaseAddress property of the WSDualHttpBinding. This means the network admin will only have to punch one hole in their firewall (but let’s face it, network admins don’t allow any holes to be punched in the firewall).
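As a sketch, pinning the callback port looks something like this (the addresses and the use of the generated OrderPizzaClient from the earlier example are illustrative assumptions):

```csharp
// Sketch: fix the port the client listens on for callbacks so only one
// inbound firewall port is needed. Addresses here are invented.
WSDualHttpBinding binding = new WSDualHttpBinding();
binding.ClientBaseAddress = new Uri("http://clientmachine:8001/pizzaCallback");

InstanceContext ctx = new InstanceContext(new Callback());
OrderPizzaClient proxy = new OrderPizzaClient(
    ctx, binding, new EndpointAddress("http://server:8000/pizza"));
```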

So if you really shouldn’t use WSDualHttpBinding for duplex, what should you use instead? Well, NetTcpBinding supports duplex out of the box, and the nice thing is that the outbound connection it establishes can also be used for inbound traffic – suddenly we don’t have the inbound connection firewall/NAT issues. “But hold on, isn’t NetTcpBinding for intranets? I’ve read books that tell me that in their ‘which binding should I use?’ flowcharts!” Well, it turns out those flowcharts are talking rubbish – NetTcpBinding works very happily over the internet, it’s just not interoperable, by design. “Aha! But I need interop so WSDualHttpBinding is for me!” Unfortunately not: NetTcpBinding is non-interoperable by design; WSDualHttpBinding is non-interoperable despite its design. The name suggests interoperability, but Arun Gupta from Sun wrote this excellent post describing why it isn’t.
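Switching the earlier example to NetTcpBinding is essentially a one-line change on each side; a hosting sketch (the address is an assumption):

```csharp
// Sketch: NetTcpBinding does duplex over the single client-initiated
// connection, so no inbound firewall hole is needed on the client.
ServiceHost host = new ServiceHost(typeof(PingService));
host.AddServiceEndpoint(typeof(IOrderPizza),
                        new NetTcpBinding(),
                        "net.tcp://localhost:9000/pizza");
host.Open();
```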

So now, seeing that we really are not talking about interop anyway, NetTcpBinding is far more useful than WSDualHttpBinding. It’s not bulletproof: if the firewall only allows outbound port 80 but also allows inbound port 80, then WSDualHttpBinding would work where NetTcpBinding wouldn’t – but in that situation we’re really talking server to server, and so I’d argue it’s probably better to roll your own bidirectional communication with two standard HTTP-based connections.

The final option you have for duplex communication is to add a piece of infrastructure into the mix. The .NET Services Service Bus (part of the Azure platform) allows two parties to exchange messages with both making outbound connections – potentially even using HTTP port 80. The two outbound connections rendezvous in the Service Bus, which mediates their message exchanges. If the receiver has had to use outbound port 80 then it polls to receive messages bound for it.

It Worked for the First 10 Clients and then the Rest Timed Out!

Irrespective of which of the standard bindings you are using, duplex assumes a constant relationship between proxy and service. In WCF this idea is modelled by the concept of session, and all duplex bindings require session. A while back I wrote in some detail about sessions. You will have to either put up with increasing the session throttle (see the linked article for details) or roll your own custom binding that can do duplex without session – you can find an example of this here.

I Use the Callback While the Client is calling the Service and it Deadlocks!

This is because your client is probably a rich client GUI-based application (Windows Forms or WPF). To understand why this is a problem we need to step back briefly and look at UI clients, UI threading and WCF threading. UI applications have a rule: you must only update the UI from the thread that created those UI components. In general a GUI application has one UI thread, so anything that changes the UI needs to be done from that thread. .NET 2.0 introduced a new construct to simplify the process of a background thread updating the UI: SynchronizationContext. The idea is that a UI framework creates an implementation of a SynchronizationContext-derived class that handles the mechanics of marshalling a call on to the UI thread. An instance of this implementation is then made available on the UI thread and is accessible via SynchronizationContext.Current.
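The mechanics look something like this sketch (the worker method and the resultLabel control are hypothetical):

```csharp
// Sketch: capture the UI thread's SynchronizationContext, then use Post
// from a background thread to marshal the UI update back onto it.
// DoLongCalculation and resultLabel are invented for illustration.
SynchronizationContext uiContext = SynchronizationContext.Current;
ThreadPool.QueueUserWorkItem(_ =>
{
    string result = DoLongCalculation();   // runs on a threadpool thread
    uiContext.Post(state => resultLabel.Text = (string)state, result);
});
```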

WCF adds more complexity into the mix by enforcing a rule that says “unless you tell me otherwise I will only allow one thread at a time into an object that I control”. You see this with singleton services, which will only allow one call at a time by default. The same is also true of the callback implementation object – WCF will only allow one active thread in the client at a time. So while WCF is performing an outbound call it will not allow an inbound call into the object. This causes the deadlock: the service’s callback cannot be dispatched while the client’s outbound call is in progress. To solve this we use the “unless you tell me otherwise” part of the above rule. You do this by annotating the callback implementation class with a [CallbackBehavior] attribute like this:

[CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
class Callback : IOrderPizzaCallback
{
    public void TimeRemaining(int minutes)
    {
        Console.WriteLine("{0} seconds remaining", minutes);
    }

    public void PizzaReady()
    {
        Console.WriteLine("Pizza is ready");
    }
}

But now there is another problem: by default WCF will attempt to dispatch the callback using an available SynchronizationContext. The problem here is that the UI thread is already blocked in an outbound call. So for the call to dispatch we need to tell WCF not to use the SynchronizationContext – again using the CallbackBehavior attribute:

[CallbackBehavior(ConcurrencyMode=ConcurrencyMode.Reentrant, UseSynchronizationContext=false)]
class Callback : IOrderPizzaCallback

Now the issue, of course, is that the call is going to be processed on a non-UI thread, so you would have to manually marshal any UI interaction using the SynchronizationContext.Post method.

Duplex messaging can be a useful message exchange pattern but in WCF there can be some unexpected issues. Hopefully this blog post clarifies those issues and demonstrates workarounds for them.

.NET | Azure | WCF
Wednesday, June 10, 2009 7:54:13 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Monday, June 01, 2009

It seems that the ever-increasing immediacy of social networking and blogging has some downsides. Ted Neward has posted a eulogy for DevelopMentor (who I regularly teach for). Unfortunately Ted didn’t bother checking his facts. DevelopMentor are very much still in business and delivering courses (I’m teaching one the week after next and the week after that, in fact! Plus several in July and September!)

.NET | Life
Monday, June 01, 2009 7:29:26 PM (GMT Daylight Time, UTC+01:00)
# Thursday, May 21, 2009

We’ve just posted a new screencast at Rock Solid Knowledge. In this one I walk you through enabling WCF tracing to assist in diagnosing bugs in your services, using the example of a serialization failure. You can find the screencasts here


Thursday, May 21, 2009 12:22:42 PM (GMT Daylight Time, UTC+01:00)

Every year I speak at the excellent DevWeek conference in London. Eric Nelson, a Microsoft Developer Evangelist in the UK, is also a regular attendee. This year Eric, Mike and Mike were recording interviews with people at the conference. Eric interviewed me about Workflow 4.0 (I was delivering a post-conference day on WF 4.0, Dublin and Oslo). He’s just got round to publishing it online


.NET | Dublin | Oslo | WF
Thursday, May 21, 2009 6:49:02 AM (GMT Daylight Time, UTC+01:00)
# Monday, May 18, 2009

Soma has blogged that beta 1 has finally shipped. Lots of new stuff in here since PDC. His blog post is here


Monday, May 18, 2009 4:06:32 PM (GMT Daylight Time, UTC+01:00)

We’ve started a library of screencasts at Rock Solid Knowledge. I’ve created one on WCF serialization and one on using SSL with self-hosted WCF services. Andy Clymer has recorded one on Visual Studio tricks and tips and Dave Wheeler has one on character animation in Silverlight 2.0. You can find the screencasts at


We’ll be adding to the library as time goes on so subscribe to the library RSS feed

.NET | RSK | SilverLight | WCF
Monday, May 18, 2009 11:50:15 AM (GMT Daylight Time, UTC+01:00)
# Saturday, May 02, 2009

Cliff Simpkins has just posted a blog entry detailing some of the changes between the PDC preview of WF 4.0 and what is coming in beta 1, due shortly.

There is important information here not just about beta 1 but also about what you can expect in RTM – and what won’t make it.

Saturday, May 02, 2009 12:27:00 AM (GMT Daylight Time, UTC+01:00)
# Sunday, April 05, 2009

I’ve been doing some work with the WCF REST Starter Kit for our website http://rocksolidknowledge.com. Preview 2 of the starter kit has a bunch of client-side plumbing (the original release concentrated on the service side)

The client side code looks something like this:

HttpClient client = new HttpClient("http://twitter.com");

HttpResponseMessage response = client.Get("statuses/user_timeline/richardblewett.xml");


As compact as this is, I was a bit disappointed to see that I only had a few options for processing the content: ReadAsString, ReadAsStream and ReadAsByteArray. Seeing as they had a free hand to give you all sorts of processing options, I was surprised there weren’t more. However, one of the assemblies with the starter kit is called Microsoft.Http.Extensions. So I opened it up in Reflector and, lo and behold, there are a whole bunch of extension methods in there – so why wasn’t I seeing them?

Extension methods become available to your code when their namespace is in scope (e.g. when you have a using statement for the namespace in your code). It turns out that the team put the extension methods in the namespaces appropriate to the technology they were exposing. So for example the ReadAsXElement extension method is in the System.Xml.Linq namespace and the ReadAsXmlSerializable<T> method is in the System.Xml.Serialization namespace.
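The scoping rule is easy to see in a self-contained example (the namespaces and types here are invented for illustration):

```csharp
using System;

// The extension method lives in the library's own namespace...
namespace MyLibrary.Extensions
{
    public static class StringExtensions
    {
        public static string Shout(this string s)
        {
            return s.ToUpperInvariant() + "!";
        }
    }
}

namespace Demo
{
    using MyLibrary.Extensions;   // ...and only becomes visible once imported

    class Program
    {
        static void Main()
        {
            // Without the using directive above, this line would not compile
            Console.WriteLine("hello".Shout());   // prints HELLO!
        }
    }
}
```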

Although I really like the functionality of the WCF REST Starter Kit, this particular practice seems bizarre to me. Firstly, it makes the API counter-intuitive – you use the HttpClient class and there is no hint in the library that there are a bunch of hidden extensions. Secondly, injecting your extensions into someone else’s namespace increases the likelihood of extension method collision (where two libraries define the same extension method in the same namespace). The same named extension method in different namespaces can be disambiguated; the same named extension in the same namespace gives you no chance.

I think if you want to define extension methods then you should keep them in your own namespace – it makes everyone’s life simpler in the long run.

Sunday, April 05, 2009 4:58:48 PM (GMT Daylight Time, UTC+01:00)
# Thursday, March 26, 2009

Thanks to all who attended my Devweek session: A Beginners Guide to Windows Workflow. The slides and demos can be downloaded here

Thursday, March 26, 2009 6:51:37 AM (GMT Standard Time, UTC+00:00)
# Monday, March 23, 2009

If you look in the System.Threading namespace you will notice something that looks slightly odd: there are a pair of classes ReaderWriterLock and ReaderWriterLockSlim. What are these for and why are there two of them?

Whenever you have multiple threads there is a need to protect shared state from corruption. The simplest tool in the .NET Framework is the Monitor class (encapsulated by the lock statement in C#), which provides mutual exclusion. In other words, only one thread at a time can acquire it and anyone else trying to acquire it blocks until the first thread releases it. In situations with frequent updates this is a reasonable approach. However, if we had no writers we could allow everyone to access a resource without acquiring a lock. So if we had many readers and only an occasional writer, ideally we’d like all the readers to be able to access a resource at the same time and only be blocked if someone needed to write. Unfortunately a Monitor only allows one reader in at a time, so this is the role of a reader/writer lock – to allow many concurrent readers but only a single writer, which blocks all readers.
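A minimal sketch of the idea, using the newer ReaderWriterLockSlim discussed below:

```csharp
using System.Threading;

// Sketch: many threads may hold the read lock simultaneously, but the
// write lock is exclusive and blocks both readers and other writers.
public class SharedValue
{
    private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    private int value;

    public int Read()
    {
        rwLock.EnterReadLock();          // concurrent with other readers
        try { return value; }
        finally { rwLock.ExitReadLock(); }
    }

    public void Write(int newValue)
    {
        rwLock.EnterWriteLock();         // exclusive access
        try { value = newValue; }
        finally { rwLock.ExitWriteLock(); }
    }
}
```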

So we come to the next question: why are there two of them? ReaderWriterLock was introduced in version 2.0 of .NET and provided the above functionality. However, there was a problem. Imagine this scenario: we have 10 readers currently reading when a writer wants access. Obviously the writer has to let the readers finish and so waits patiently for the readers to release their read locks. However, as the readers drop to one, suddenly a brand new reader arrives – there is still a read lock being held, but that’s OK as the newcomer can also acquire a read lock in that situation. Now the writer is blocked on a new reader, and more readers can continue to arrive, so the writer never gets in – this is called writer starvation. ReaderWriterLock suffers from the possibility of writer starvation.

For version 3.5 SP1 the .NET framework team decided to address this. They introduced a new class called ReaderWriterLockSlim that provides the reader/writer lock functionality but does not suffer from writer starvation (new readers are blocked when there is a writer waiting). However, why didn’t the team just fix the original one? They say that some people depend on the feature that new readers can get a lock if a reader is already there. They didn’t want to break existing code.

Monday, March 23, 2009 2:13:25 PM (GMT Standard Time, UTC+00:00)
# Wednesday, March 18, 2009

The .NET async pattern is a very neat way of running functionality asynchronously. The pattern is that for a synchronous method DoWork we have a pair of methods, BeginDoWork and EndDoWork, that handle running the DoWork functionality on the threadpool. Now, it is well documented that if I call the Begin method I must also call the End method to allow the async infrastructure to clean up resources it may have acquired. However, where do I call the End version?

Consider async delegate invocation. Quite often I’d like to just fire off the async method and forget about it but I have to call EndInvoke. Fortunately lambdas make this really easy

Action a = DoWork;
a.BeginInvoke( ar => a.EndInvoke(ar) );

Now there is a nasty problem. What happens if DoWork throws an exception? The async delegate infrastructure will conveniently cache the exception and re-throw it when I call EndInvoke. However, the issue is that this is happening in an AsyncCallback delegate on a threadpool thread, so I cannot catch it easily in this construct. Why is that a problem? Well, since .NET 2.0 an unhandled exception on a background thread will terminate the process. This means that to reliably call EndInvoke in an AsyncCallback we must put it in a try … catch. This is annoying code to put in place and is easily forgotten. So I have written an extension method to wrap this functionality for you

public static class Extensions
{
    public static AsyncCallback Try(this AsyncCallback cb, Action<Exception> exceptionAction)
    {
        AsyncCallback wrapper = delegate(IAsyncResult iar)
        {
            try
            {
                cb(iar);
            }
            catch (Exception e)
            {
                exceptionAction(e);
            }
        };
        return wrapper;
    }
}

So this code extends AsyncCallback and takes a delegate to call if an exception takes place. It then wraps its own AsyncCallback around the one passed in, this time putting the call in a try … catch block. The usage looks like this:

Action a = DoStuff;
AsyncCallback cb = ia => a.EndInvoke(ia);
a.BeginInvoke(cb.Try(e => Console.WriteLine(e)), null);

The only awkward thing here is having to take the lambda out of the call to BeginInvoke, because the C# compiler won’t allow the dot operator on a lambda (without casting it to an AsyncCallback), but at least this wraps up some of the issues.

Wednesday, March 18, 2009 5:37:08 PM (GMT Standard Time, UTC+00:00)
# Tuesday, March 03, 2009

I talked about the new WF4 runtime model a while back. One of the things I discussed was the new data flow model of arguments and variables. However, if you are used to WF 3.5 something looks a bit odd here. Let’s look at a simple activity example:

public class WriteLineActivity : WorkflowElement
{
    public InArgument<string> Text { get; set; }

    protected override void Execute(ActivityExecutionContext context)
    {
        // the argument value is resolved via the context
        Console.WriteLine(Text.Get(context));
    }
}

Why do I need to pass the ActivityExecutionContext when I want to get the data from the argument? This highlights a subtlety of the change in the runtime model. The activity is really just a template. A class called ActivityInstance is the thing that is actually executed; it just has a reference to the actual activity to be able to hand off to the activity’s methods (e.g. Execute). The actual argument state is stored in a construct called the LocalEnvironment. The ActivityExecutionContext gives access to the current ActivityInstance and that in turn holds a reference to the current LocalEnvironment. Therefore, to get from an activity’s Execute method to its state we have to go via the ActivityExecutionContext.

Tuesday, March 03, 2009 9:41:42 AM (GMT Standard Time, UTC+00:00)
# Wednesday, February 25, 2009

I’ve finally got round to pushing the code for BlobExplorer to CodePlex. You can find the CodePlex site here. If you want to contribute to the project, let me know at richard at nospam dotnetconsult dot co dot uk and I’ll add you to the contributors list. The releases will continue being pushed to blob storage http://dotnetconsult.blob.core.windows.net/tools/BlobExplorer.zip

Wednesday, February 25, 2009 2:59:12 PM (GMT Standard Time, UTC+00:00)
# Friday, February 20, 2009

A while back I did this post talking about how WCF contract definitions should model the messages being passed and not use business objects. This inevitably means that you have to translate from the Data Transfer Object (DTO) in the contract to the business object and back again. This can feel like a lot of overhead but it really does protect you from a lot of heartache further down the line.

However, Dom just pointed out AutoMapper to me. It is in its early stages but looks like the kind of library that will really take away a lot of the grunt work associated with using DTOs – kudos!

Friday, February 20, 2009 6:55:55 PM (GMT Standard Time, UTC+00:00)
# Friday, February 06, 2009

I've just dropped a new release of Blob Explorer for managing Windows Azure Blob Storage

Two new features:

  • Blob metadata can now be added and viewed
  • There is a status bar to show that long running uploads and downloads are in progress

As always you can get it from blob storage here


Friday, February 06, 2009 10:39:17 PM (GMT Standard Time, UTC+00:00)
# Friday, January 23, 2009

Sometimes you have to wonder if this subject will ever go away …

A few weeks ago I posted on using OO constructs in your DataContracts. It’s one of those things that is understandable when .NET developers first start building message-based systems. Another issue that raises its head over and over again goes along the lines of “I’m trying to send a DataSet back to my client and it just isn’t working properly”. This reminds me of the old joke:

Patient: Doctor, doctor it hurts when I do this (patient raises his arm into a strange position over his head)
Doctor: Well don’t do it then

So what is the issue with using DataSets as parameters or return types in your service operations?

Let’s start off with interoperability – or the total lack thereof. If we are talking about interoperability we have to think about what goes into the WSDL for the DataSet – after all, it is a .NET type. In fact DataSets, by default, serialize as XML, so surely it must be OK! Here’s what a DataSet looks like in terms of XML Schema in the WSDL:

  <xs:sequence>
    <xs:element ref="xs:schema" /> 
    <xs:any />
  </xs:sequence>

In other words: I’m going to send you some … XML – you work out what to do with it. But hold on – if I use Add Service Reference, it *knows* it’s a DataSet, so maybe it is OK. Well, WCF cheats; just above that sequence is another piece of XML

       <ActualType Name="DataSet" Namespace="http://schemas.datacontract.org/2004/07/System.Data"
                   xmlns="http://schemas.microsoft.com/2003/10/Serialization/" /> 

So WCF cheats by putting an annotation only it understands into the schema so it knows to use a DataSet. If you really do want to pass back arbitrary XML as part of a message then use an XElement.
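If arbitrary XML really is the requirement, the contract can say so honestly (a sketch; the names are invented):

```csharp
using System.ServiceModel;
using System.Xml.Linq;

// Sketch: XElement makes "this operation passes raw XML" explicit in the
// contract, rather than smuggling a DataSet through an xs:any.
[ServiceContract]
public interface IRawXmlService
{
    [OperationContract]
    XElement GetReport();
}
```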

So how about if I have WCF on both ends of the wire? Well, then you’ve picked a really inefficient way to transfer the data around. You have to remember how highly functional a DataSet actually is. It’s not just the data in the tables that supports that functionality; there is also: change tracking data to keep track of what rows have been added, updated and removed since the DataSet was filled; relationship data between tables; and a schema describing itself. DataSets are there to support disconnected processing of tabular data, not as a general-purpose data transfer mechanism.

Then you may say, “hey, we’re running on an intranet – the extra data is unimportant”. So the final issue you get with a DataSet is tight coupling of the service and the consumer. Changes to the structure of the data on one side of the wire cascade to the other side, to someone who may not be expecting the changes. Admittedly not all changes will be breaking ones, but are you sure you know which ones will be and which ones won’t? As long as the data you actually want to pass isn’t changing, why are you inflicting this instability on the other party in the message exchange? The likelihood that you will have to make unnecessary changes to, say, the client when the service changes is increased with DataSets.

So what am I suggesting you do instead? Model the data that you do want to pass around using a DataContract (or XElement if you truly want to be able to pass untyped XML). Does this mean you have to translate the data from a DataSet to this DataContract when you want to send it? Yes it does, but that code can be isolated in a single place. When you receive the data as a DataContract and want to process it as a DataSet, does this mean you have to recreate a DataSet programmatically? Yes it does, but again you can isolate this code in a single place.
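A sketch of the shape of that translation code (the table and column names are invented):

```csharp
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Runtime.Serialization;

[DataContract]
public class PersonDto
{
    [DataMember] public string Name { get; set; }
    [DataMember] public int Age { get; set; }
}

public static class PersonTranslator
{
    // Isolate the DataSet -> DTO translation in one place; only the data
    // the contract promises crosses the wire.
    public static List<PersonDto> ToDtos(DataSet ds)
    {
        return ds.Tables["People"].Rows
                 .Cast<DataRow>()
                 .Select(r => new PersonDto
                 {
                     Name = (string)r["Name"],
                     Age = (int)r["Age"]
                 })
                 .ToList();
    }
}
```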

So what does that work actually buy you? You get something which is potentially interoperable, that only passes the required data across the wire, and that decouples the service and the consumer.

Friday, January 23, 2009 4:49:20 PM (GMT Standard Time, UTC+00:00)
# Thursday, January 22, 2009

I’ve been writing a lab on Workflow Services in 4.0 recently. Part of what I was showing was the new data orientated correlation (the same kind of mechanism that BizTalk uses for correlation). So I wanted to have two operations that were correlated to the same workflow instance based on data in the message (rather than a smuggled context id as 3.5 does it). As I was writing this lab I suddenly started getting an InvalidOperationException stating DispatchOperation requires Invoker every time I brought the .xamlx file up in a browser. It appeared that others had seen this as well but not really solved it. So I dug around looking at the XAML (workflow services can be written fully declaratively now) and the config file and could see no issues there. I asked around but no one I asked knew the reason.

So I created a simple default Declarative Workflow Service project and that worked OK. I compared my lab workflow and the default one and it suddenly dawned on me what was wrong. The default project has just one operation on the contract and has a ServiceOperationActivity to implement it. My contract had two operations but I had, so far, only bound one ServiceOperationActivity. In other words, I had not implemented the contract. This is obviously an issue and, looking back, I’m annoyed I didn’t see it sooner.

However, the problem is that this is a change in behavior between 3.5 and 4.0. In 3.5, if I didn’t bind a ReceiveActivity to every operation I got a validation warning but I could still retrieve metadata; in 4.0 you get a fatal error. It’s not hugely surprising that the behavior has changed – after all, the whole infrastructure has been rewritten.

On the whole it’s a good thing that implementation of the contract is enforced – although it would be nice if the validation infrastructure caught this at compile time rather than it being a runtime failure.

.NET | BizTalk | WCF | WF
Thursday, January 22, 2009 10:10:15 AM (GMT Standard Time, UTC+00:00)
# Wednesday, January 21, 2009

A forum question asked how to create a host that dynamically loads services deployed in DLLs. I told the poster to use reflection but he wasn't familiar with it, so I thought I would quickly knock up a sample.

The host looks in a folder under the directory in which it is executing. The service must be annotated with a [ServiceClass] attribute (defined in a library that accompanies the host). Then you set up the config file for the service ([ServiceBehavior(ConfigurationName="<some name>")] helps decouple the config from the actual implementing type nicely).

The sample is here

DynamicHost.zip (77.07 KB)

Wednesday, January 21, 2009 11:00:56 AM (GMT Standard Time, UTC+00:00)
# Tuesday, January 20, 2009

Christian found a bug in BlobExplorer when the blob name has pseudo directory structure in it (common prefixes)

Fixed version is available here

Tuesday, January 20, 2009 10:10:25 AM (GMT Standard Time, UTC+00:00)
# Friday, January 16, 2009

Hanging out on the WCF forums, it appears one in five answers I give seems to be to do with sessions in WCF. As a result, rather than write the same thing over and over again, I decided to distil my forum posts into a blog post that I can link to.

WCF has the concept of a session. A session is simply an identifier for the proxy making the call. Where does this ID come from? Well, it depends on the binding: with NetTcpBinding, the session is inherent in the transport – TCP is a connected, session-based transport; with WSHttpBinding, session is an artifice piggybacking on either WS-SecureConversation or WS-ReliableMessaging. So different bindings go via different mechanisms to achieve the concept of session. However, not all bindings support session – BasicHttpBinding, for example, has no mechanism for maintaining one. Be aware that every mechanism for maintaining a session in WCF has issues with load balancing: the relationship between proxy and service is a machine-bound one. If you want to use load balancing you will have to use sticky sessions.

So what does having a session do for me? Well, some parts of the infrastructure rely on sessions – callbacks, for example. WCF also supports the ability to associate a specific instance of the service implementation class with a session (called PerSession instancing). Having a specific instance for the session can seem very attractive in that it allows you to make multiple calls to a service from a specific proxy instance, and the service can maintain state in member variables in that service instance on behalf of the client. In fact, if the binding supports session, this is the default model for mapping service instances to requests. WCF also supports two other instancing models (controlled by the service behavior InstanceContextMode): PerCall – each request gets its own instance; Single – all calls use a single instance.

Session has an impact beyond the service – client proxy behavior is affected by the idea of session. Say the client crashes: what happens to the session? Again this depends on the binding. NetTcpBinding is connection-orientated, so the service side gets torn down if the client crashes. WSHttpBinding, on the other hand, rides on top of a connectionless protocol (HTTP), and so if the client crashes the service has no idea. In this situation the session will eventually time out depending on the configuration of WS-SecureConversation or WS-ReliableMessaging – the default is 10 minutes. To tear down the session the proxy has to call back to the service to say that it is done. This means that closing the proxy in the client is crucial to efficient session management, and that Close with WSHttpBinding will actually round-trip to the service. If the service has died, Close will throw an exception in this situation and you must call Abort to clean up the proxy correctly.
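The standard defensive cleanup pattern looks something like this (proxy and DoWork are placeholders for your generated proxy type and operation):

```csharp
// Sketch: Close round-trips to tear down the session; if the service is
// unreachable it throws, and Abort must be used to clean up locally.
try
{
    proxy.DoWork();
    proxy.Close();
}
catch (CommunicationException)
{
    proxy.Abort();
}
catch (TimeoutException)
{
    proxy.Abort();
}
```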

Session also has effects that are often hidden during development. WCF has an inbuilt throttle to control concurrent operations. There are three throttles: concurrent requests (self-explanatory); concurrent objects (remember that there are options as to how many service instances service requests – in reality this one is very unlikely to affect you); and, most importantly for this discussion, concurrent sessions – which defaults to 10. So if you do not clean up your proxy, by default your service will support up to 10 concurrent proxies, which is generally way too low for many systems.
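The session throttle can be raised via the ServiceThrottlingBehavior (the value here is illustrative; this can equally be done in config):

```csharp
using System.ServiceModel;
using System.ServiceModel.Description;

// Sketch: raising the concurrent session throttle above the default of 10.
ServiceHost host = new ServiceHost(typeof(PingService));
ServiceThrottlingBehavior throttle = new ServiceThrottlingBehavior
{
    MaxConcurrentSessions = 100
};
host.Description.Behaviors.Add(throttle);
host.Open();
```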

Sometimes your service will require the ability to maintain per-client state. However, due to the complexity of dealing with sessions it is often best to avoid them (unfortunately impossible with the NetTcpBinding) and use another, application defined, session identifier and put the session state in a database. However, for example with callbacks, sometimes you require session in your service. In this case you should annotate your contract with the modified ServiceContract
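The modified attribute referred to here is presumably the SessionMode property of ServiceContract; requiring a session looks like this (ICallbackService is a placeholder contract name):

```csharp
// Requiring a session on the contract.
[ServiceContract(SessionMode = SessionMode.Required)]
public interface ICallbackService
{
    [OperationContract]
    void DoWork();
}
```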


If you do not need session and you do not need to use a session orientated protocol like named pipes or tcp then I would turn off session explicitly by using the following on the contract
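Turning session off explicitly presumably means SessionMode.NotAllowed, which makes the contract reject session-capable bindings (IStateless is a placeholder name):

```csharp
// Explicitly rejecting sessions on the contract.
[ServiceContract(SessionMode = SessionMode.NotAllowed)]
public interface IStateless
{
    [OperationContract]
    void DoWork();
}
```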


In my experience it is rarely useful to use sessions and they introduce overhead, load balancing restrictions and complexity for the client.

Friday, January 16, 2009 11:18:54 PM (GMT Standard Time, UTC+00:00)
# Thursday, January 08, 2009

I’ve recently started hanging out on the MSDN WCF forum. It’s interesting seeing the questions that are coming up there. One fairly common one is along the lines of:

I have a service that I want to return a list of Person objects but it doesn’t seem to be working when I pass Employees and Managers in the list.

At some point someone will reply

You should annotate the contract with the ServiceKnownType attribute for each type that the Person could be.

Now in a strict sense this is true – however, I think there is a much deeper question that needs discussion.

To understand where the basic problem is we need to take a quick detour into the architecture of WCF. WCF has two layers: the channel layer and the service model layer.

The channel layer deals with Message objects, passing them through a composable stack of channels which add or process message headers, handle mapping the Message object to and from a stream of bytes and move those bytes from one place to another. Notice here we have no notion of ServiceContracts, etc. All messages are untyped in .NET terms and are expressed in terms of an XML Infoset.

The service model layer sits on top of the channel layer and deals with endpoints, behaviors, dispatchers and other bits of plumbing – but the phrase “sits on top of the channel layer” is hugely important. An endpoint is made up of three elements: an address (where), a binding (how) and a contract (what). It is the contract that often causes confusion. In WCF code it takes the form of an annotated interface (can be a class but an interface is more flexible)

[ServiceContract]
interface ICalc
{
    [OperationContract] int Add(int x, int y);
}

But what do the parameters and return types really mean? What you are in fact doing here is specifying the request and response message bodies. This information will be translated into XML by the time the Message hits the channel layer. Now here we have only used simple types. The data that we generally want to move around is more complex and so we need another way to describe it – for that we use a .NET type and a serializer that translates an object into XML and back again. There are two main serializers that can be used: the long standing XmlSerializer and the DataContractSerializer that was introduced with WCF; the DataContractSerializer is the default one. You normally take a .NET class and annotate it with attributes (although .NET 3.5 SP1 removed the need to do this it’s generally a good idea anyway as you retain control over XML namespaces, etc)

[DataContract]
class Person
{
    [DataMember] public string Name { get; set; }
    [DataMember] public int Age { get; set; }
}

[ServiceContract]
interface IMakeFriends
{
    [OperationContract] void AddFriend(Person p);
}

So far so good. So where is the problem? Well now we’re in .NET type system world so it starts to become tempting to pass types that derive from Person to the AddFriend method – after all the C# compiler doesn’t complain. And at this point when you call the AddFriend method it doesn’t work and so you post the above question to the WCF forum. Unfortunately the way that contracts are expressed in WCF makes it very easy to forget what their purpose is: to define the messages sent to the operation and sent back from the operation. In reality you have to think “how would I express this data in XML?”. XML doesn’t support inheritance so whatever you put in the contract is going to have to have some way of mapping to XML. The data contracts used to define the messages are simply a .NET typed convenience for generating the XML for the data you want to pass – if you view them any other way you are destined for a world of pain. So think about the data you want to pass, not how it may happen to be represented in your business layer, and design your DataContracts accordingly.

“But wait!” you may say “I may want to pass an Employee or a Manager here and I need the data to get to the receiver”. Again it comes back to “how would I express that in XML?” as that is what will be happening at the end of the day. If you have a discrete set of types that could be passed then you have something similar to an XML Schema Choice Group. In this case ServiceKnownType is a reasonable approach (although it doesn’t use a Choice Group under the covers). Remember you will need to change the contract if new types need to be passed – as with Choice Groups. However, if you require looser semantics where an extensible number of things may be passed without wanting to change the contract then you require a different way of expressing this in XML and therefore a different way of expressing this in .NET code. Something like:

[DataContract]
public class KeyValuePair
{
    [DataMember] public string Key { get; set; }
    [DataMember] public string Value { get; set; }
}

[DataContract]
public class FlexibleData
{
    [DataMember] public string Discriminator { get; set; }
    [DataMember] public List<KeyValuePair> Properties { get; set; }
}

In this case the receiver will be responsible for unpicking the Properties based on the Discriminator. The other thing you could do is to pass the data as an XElement and work explicitly at the XML level; with LINQ to XML this is not very onerous. Is this all really convenient to use in .NET? Well not as easy as inheritance, but then we’re actually talking about XML here not .NET. Again, the .NET typed contract is simply a convenience for creating the XML message.
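For the closed-set case discussed earlier, the ServiceKnownType annotation looks something like this – Employee and Manager are assumed to be the derived types in question:

```csharp
// Hedged sketch of the closed-set approach: every type that can appear
// in place of Person must be declared on the contract up front.
[ServiceContract]
[ServiceKnownType(typeof(Employee))]
[ServiceKnownType(typeof(Manager))]
public interface IMakeFriends
{
    [OperationContract]
    void AddFriend(Person p);
}
```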

Thursday, January 08, 2009 4:40:18 PM (GMT Standard Time, UTC+00:00)
# Tuesday, December 16, 2008

With any development environment I go through a series of stages:

  1. Love - this thing is awesome I never realised how great life would be with these new features
  2. Hate - Man why does it do these stupid things, it really detracts from the new cool features
  3. Acceptance - well it has foibles but generally I get my job done faster

I've been in 3 with VS 2008 for some time now but two things have always irked me:

When I type a method, red error squiggles appear due to syntax errors like this:

Well obviously I get an error - I haven't finished typing yet! I find these sorts of errors very distracting.

The other issue is with implementing interfaces - I love the feature where I can specify a class implements an interface and then press Ctrl+. [Enter] and it puts a skeleton implementation in for me. What I have always found annoying though is the #region it surrounds the implementation with:

So I was looking around the C# editor options today and found I could turn both of these off.

I think I may be heading back to phase 1 of my relationship with VS2008 now :-)

Tuesday, December 16, 2008 5:18:38 PM (GMT Standard Time, UTC+00:00)
# Monday, December 15, 2008

I spent a while working this one out and none of the Google hits showed me what I needed so I thought I'd blog it here. When you hook the Drop event how do you deal with multiple files being dropped? The GetData("FileNameW") member only returns a single file name in the string array even if multiple files are being dragged. GetData("FileName") returns a single file name in 8.3 compatible munged form so that doesn't help. I looked in the help for System.Windows.DataObject and it has a GetFileDropList member. However, it turns out that the DragEventArgs.Data is an IDataObject not a DataObject. So if you cast it to a DataObject you get access to the file list

if (e.Data.GetFormats().Contains("FileDrop"))
{
    DataObject dataObject = e.Data as DataObject;
    StringCollection files = null;
    if (dataObject != null)
    {
        files = dataObject.GetFileDropList();
    }
    else
    {
        // not sure if this is necessary but it's defensive coding
        string[] fileArray = (string[])e.Data.GetData("FileNameW");
        files = new StringCollection();
        files.AddRange(fileArray);
    }

    foreach (string file in files)
    {
        // process files
    }
}

Hopefully someone else trying to work out how to do this will stumble across this blog post and save some time.

Monday, December 15, 2008 9:01:53 PM (GMT Standard Time, UTC+00:00)

I've added drag and drop support to the BlobExplorer (announced here) so you can drag files from Windows Explorer into cloud blob storage.

Rather than maintain the application in multiple places all releases will now go to the cloud here


.NET | Azure | WPF | BlobExplorer
Monday, December 15, 2008 3:52:11 PM (GMT Standard Time, UTC+00:00)

I will finally submit to Dominick's harassment and stop running Visual Studio elevated. I started doing it because I couldn't be bothered to mess about with HttpCfg and netsh when writing self-hosted WCF apps exposed over HTTP. Registering part of the HTTP namespace with http.sys requires admin privilege. I could have sorted it out with a wildcard registration but never got round to it.

Since writing the BlobExplorer, Christian has been my major source of feature requests. His latest was he wanted drag and drop. Now I'm not an expert in WPF so I hit Google to find out how to do it. And it seemed really simple - just set AllowDrop=true on the UIElement in question and handle the Drop event. But I just couldn't get the app to accept drag events from Windows Explorer. After much frustration I finally realised that the problem had nothing to do with my code but was rather a security issue: the BlobExplorer was running elevated because I started it from Visual Studio which was also running elevated. Elevated applications will not accept drag events from non-elevated applications.

So I'm setting VS to run normally again and will do the proper thing with http.sys for my WCF applications.

.NET | Azure | WCF | WPF
Monday, December 15, 2008 12:49:41 PM (GMT Standard Time, UTC+00:00)
# Thursday, December 11, 2008

I think the best feature introduced in .NET 2.0 was anonymous methods in C#. How's that for taking a stance!

I've just been reading an article about anonymous methods coming to VB in .NET 4.0 and while I think this is great news for VB developers it's worth adding a note of caution due to one of the examples used. Generally anonymous methods and event handlers can cause unforeseen issues due to the way anonymous methods are implemented under the covers (I'm assuming the VB implementation is broadly the same as the C# one). If you look at the code below:

class Program
{
    static void Main(string[] args)
    {
        Foo f = new Foo();

        f.ItHappened += delegate
        {
            Console.WriteLine("Foo happens");
        };

        f.ItHappened -= delegate
        {
            Console.WriteLine("Foo happens");
        };
    }
}

class Foo
{
    public event Action ItHappened;

    public void Bar()
    {
        if (ItHappened != null)
        {
            ItHappened();
        }
    }
}
The problem is the compiler generates two different real methods for the two anonymous methods. Therefore the unwiring of the handler fails. It comes down to this: if you wire up an event handler with an anonymous method, unless you store the delegate in a variable, there is no way to remove that event handler afterwards. There is a well known issue in .NET in terms of memory leaks with event handlers keeping alive objects that otherwise have independent lifetimes. You fix this issue by remembering to remove the event handler when the object is finished with. However, using anonymous methods (or lambdas for that matter) prevents you from removing the event handler and so this issue becomes unresolvable.
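If you do want an anonymous method as a handler, storing the delegate in a variable keeps unwiring possible. A minimal self-contained sketch (the counter is just there to show the handler really is removed):

```csharp
using System;

class Foo
{
    public event Action ItHappened;
    public void Bar() { if (ItHappened != null) ItHappened(); }
}

class Program
{
    static int count = 0;

    static void Main()
    {
        Foo f = new Foo();

        // Store the delegate in a variable so the SAME instance
        // is used for both += and -=.
        Action handler = delegate { count++; };

        f.ItHappened += handler;
        f.Bar();                  // handler runs once

        f.ItHappened -= handler;  // succeeds: same delegate instance
        f.Bar();                  // handler no longer wired

        Console.WriteLine(count); // prints 1
    }
}
```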

So my advice: even though anonymous methods are very powerful, generally use real methods for event handlers unless the lifetime of the event owner and the subscriber are inherently tied together.

Thursday, December 11, 2008 8:21:46 PM (GMT Standard Time, UTC+00:00)
# Wednesday, December 10, 2008

As I said in my previous post, asynchrony and persistence enable arguably the most powerful feature of workflow, that of stateful, long running execution. However as I also said, the WF runtime has been rewritten for WF 4.0 - so what do asynchrony and persistence look like in the 4.0 release?

First let's look at what was problematic in WF 3.5 in this area:

  • Writing an asynchronous activity was hellishly complex as they get executed differently depending on their parent (State Machine, Listen Activity, standard activity)
  • There was no model for performing async work that didn't open up the possibility of being persisted during processing. This is fine if you are waiting for a nightly extract file to arrive on a share; it's a real problem if you are trying to invoke a service asynchronously as the workflow may have been unloaded by the time the HTTP response comes back

The WF team have tried to solve both of these problems in this release and in addition obviously have to align with the new runtime model I talked about in my last post. Let's talk about async first and then we'll look at the related but separate issue of persistence.

The standard asynchronous model has been streamlined so the communication mechanisms have been wrapped in the concept of a Bookmark. A bookmark represents a resumable point of execution. Let's look at the code for an async activity using bookmarks.

public class StandardAsyncActivity : WorkflowElement
{
    protected override void Execute(ActivityExecutionContext context)
    {
        BookmarkCallback cb = Callback;
        context.CreateNamedBookmark("rlBookMark", cb, BookmarkOptions.MultipleResume);
    }

    int callTimes = 0;
    private void Callback(ActivityExecutionContext ctx, Bookmark bm, object state)
    {
        callTimes++;
        if (callTimes == 2)
        {
            // after receiving two pieces of data close the bookmark; with no
            // outstanding bookmarks the runtime knows the activity is complete
        }
    }
}

This code demonstrates a number of features. In 3.5 an activity indicated it was async by returning ActivityExecutionStatus.Executing from the Execute method. However, here the workflow runtime knows that this activity has yet to complete as it has an outstanding bookmark. When the activity is done with bookmark processing (in this case it receives two pieces of data) it simply closes it and since the runtime now knows it has no outstanding bookmarks the activity must be complete. Also notice that no special interfaces require implementation, the activity simply associates a callback to be fired when execution resumes at this bookmark. So how does execution resume? Here's the code:

myInstance.ResumeBookmark("rlBookMark", "hello bookmark");

This code is in the host or an extension (new name for a workflow runtime service) associated with the activity and as you can see is very simple.

With this model, after Execute returns and after each bookmark callback that doesn't close the bookmark, the workflow will go idle (assuming no other activities are runnable) and could be unloaded and/or persisted. So this is ideal for situations where the next execution resumption will be after some time. However, if we're simply performing async IO then the next resumption may be subsecond.

This model will not work for such processing and so there is a lightweight version of async processing in 4.0. Let's look at an example:

public class LightweightAsyncActivity : WorkflowElement
{
    protected override void Execute(ActivityExecutionContext context)
    {
        AsyncOperationContext block = context.SetupAsyncOperationBlock();

        Action a = AsyncWork;

        AsyncCallback cb = delegate(IAsyncResult iar)
        {
            // called on non workflow thread
            a.EndInvoke(iar);
            // tell the workflow infrastructure the async work is done;
            // AsyncProcessingComplete then runs on a workflow thread
            block.BeginCompleteOperation(AsyncProcessingComplete, null,
                                         ar => block.EndCompleteOperation(ar),
                                         null);
        };

        a.BeginInvoke(cb, null);
    }

    private void AsyncProcessingComplete(ActivityExecutionContext ctx, Bookmark bm, object val)
    {
        // called on workflow thread
        Console.WriteLine("Bookmark callback");
    }

    private void AsyncWork()
    {
        // called on non workflow thread
        Thread.Sleep(1000);
        Console.WriteLine("Done Sleeping");
    }
}
This time, instead of explicitly creating a bookmark we create an AsyncOperationContext. This then allows us to run on a separate thread (BeginInvoke hands the AsyncWork method to the threadpool) and then when it completes tell the workflow infrastructure that our async work is done. It also allows us to provide another method that will execute on a workflow thread in case there are other workflow related tasks we need to perform (AsyncProcessingComplete in the above example). Although when Execute returns the workflow will go idle, this infrastructure puts in place a new construct called a No Persist Block that prevents persistence while the async operation is in progress.

So these are the two new async models, but how does the new persistence model interact? Well the above description mentions some interaction but let's look at 4.0 persistence in more detail and then draw the two parts of the post together.

The WorkflowPersistenceService has now been remodelled in terms of Persistence Providers. You get one out-of-the-box as before for SQL Server. There are two scripts to run against a database to put the new persistence constructs in place: SqlPersistenceProviderSchema40.sql and SqlPersistenceProviderLogic40.sql. To enable the sql persistence provider you create an instance of the provider for each workflow instance using a factory and add it as an extension to that workflow instance:

string conn = "server=.;database=WFPersist;integrated security=SSPI";
SqlPersistenceProviderFactory fact = new SqlPersistenceProviderFactory(conn, true);
fact.Open();

WorkflowInstance myInstance = WorkflowInstance.Create(new MyWorkflow());

PersistenceProvider pp = fact.CreateProvider(myInstance.Id);
pp.Open();
myInstance.Extensions.Add(pp);

The factory and the provider must have Open called on them as they both derive from the WCF CommunicationObject class (presumably due to the way this layers in for durable services and Dublin). Note that there is no flag for the provider equivalent to 3.5's PersistOnIdle. It is now up to the host to specifically call Persist or Unload on the WorkflowInstance if it wants it persisted or unloaded. So now in the Idle event you may put code like this:

myInstance.Idle += delegate
{
    // persist only if the instance is currently in a persistable state
    myInstance.TryPersist();
};

I'll come back to why use TryPersist rather than Persist in a moment. However, there is another job to be done. You need to explicitly tell the persistence provider that you are done with a workflow instance to ensure that it gets deleted from the persistence store when the workflow completes. And so you will typically call Unload in the Completed event:

myInstance.Completed += delegate(object sender, WorkflowCompletedEventArgs e)
{
    // remove the completed instance from the persistence store
    myInstance.TryUnload();
};

However, things are a little gnarly here. Remember above that both the Bookmark and AsyncOperationBlock models for async will trigger the Idle event but the AsyncOperationBlock prevents persistence. That's why we have to use TryPersist and TryUnload as Persist and Unload will throw exceptions if persistence is not possible. The team have said they are working on getting this revised for beta 1 and having seen a preview of the kind of things they are looking at it should be a lot neater then.

So that's the new async and persistence model for WF 4.0. The simplicity and extra functionality now in the async model will take away a good deal of the complexity in building custom activities that play well in the long running workflow infrastructure.



.NET | Dublin | WCF | WF
Wednesday, December 10, 2008 2:32:56 PM (GMT Standard Time, UTC+00:00)

I've always found the workflow execution model very compelling. It's a very different way of thinking about writing applications but it solves a set of difficult problems particularly in the area of asynchronous and long-running processing. So when I found out that .NET 4.0 introduces a new version of the workflow infrastructure I was intrigued as to what changes had been made.

The ideas behind workflow have not changed but the WF 4.0 implementation is very different from the 3.5 one, involving a very different API and a rewritten design-time experience. The decision to do a ground up rewrite was not taken lightly and was necessary to resolve issues that people were facing with 3.5 - particularly around serialization, performance and complexity. So you can say goodbye to WorkflowRuntime, WorkflowRuntimeService, WorkflowQueue, ActivityExecutionStatus, IEventActivity and a number of other familiar classes and interfaces; the new API looks quite different.

So let's look at the code to create and execute a workflow in WF 4.0:

WorkflowInstance myInstance = WorkflowInstance.Create(new MyWorkflow());
myInstance.Run();   // starts the workflow asynchronously

Console.WriteLine("Press enter to exit");
string data = Console.ReadLine();

So firstly two things are the same: you still use a WorkflowInstance object to address the workflow and starting the workflow is still an asynchronous operation. However, notice that we don't use WorkflowRuntime to create the workflow, everything is done against the WorkflowInstance itself.

There is also a more lightweight workflow execution model if you simply want to inline some workflow functionality

WorkflowInvoker inv = new WorkflowInvoker(new MyWorkflow());
inv.Invoke();   // executes the workflow inline

Workflows are now, by default, authored declaratively in XAML as are custom activities. In fact the activity model looks quite different. The base unit of execution in a workflow is a WorkflowElement. Activities are composite activities, normally declared in XAML, and these ultimately derive from WorkflowElement. However, neither of these constructs is actually executed; an ActivityInstance is generated from the activity definition and it is this that is executed. This is because arbitrary state can be attached to an executing activity in the form of variables. An activity can only bind its own state (modelled explicitly in the form of arguments) to variables declared on its parent or other ancestors. This is a very different model from 3.5 where an activity's state could be data bound to properties of any other activity in the entire workflow activity tree. This meant, in effect, that to truly capture the state of a workflow you had to serialize the entire activity tree, including those activities that had finished execution and those that had yet to start execution. We can see this here

The 4.0 model only requires the currently executing set of activities to be serialized as they cannot bind state outside of their parental relationships


This makes persistence much faster and more compact.

Creating custom activities is often a case of combining building blocks defined as other activities - these are generally modelled in XAML. However, what if you want to create a building block activity yourself? In this case you simply derive from WorkflowElement directly and override the Execute method. This sounds a lot like creating custom activities in 3.5, however there are important differences. Firstly, state in and out of an activity is explicitly modelled with arguments that have direction; secondly, you no longer have to tell the runtime when the activity is finished or not in Execute - the runtime knows based on what actions you perform.

public class WriteLineActivity : WorkflowElement
{
    public InArgument<string> Text { get; set; }

    protected override void Execute(ActivityExecutionContext context)
    {
        // write the value bound to the Text argument to the console
        Console.WriteLine(Text.Get(context));
    }
}

Notice here that this activity is assuming that it is running in a Console host. Generally we try to avoid tying an activity to a certain type of host and this was achieved in 3.5 by using pluggable Workflow Runtime Services. The same is true in 4.0 but these services have been renamed Extensions to avoid confusion with Workflow Services (where workflow is used to implement a WCF service). The WorkflowInstance class now has an Extensions collection and the ActivityExecutionContext has a GetExtension<T> method for the activity to retrieve an extension it is interested in.

Some of the real power of the workflow infrastructure only becomes apparent when you plug in a persistence service and your activities are asynchronous so the workflow can be taken out of memory. It is this combination that allows workflows to perform long running processing and this infrastructure has also changed a great deal. However I will walk through those changes in my next post.

Wednesday, December 10, 2008 10:44:27 AM (GMT Standard Time, UTC+00:00)
# Monday, December 08, 2008

I've uploaded an updated version of the BlobExplorer (first blogged about here). It now uses a decent algorithm for determining MIME type (thanks Christian) and the blob list now retains a single entry when you update an existing blob

Downloadable from my website here

BlobExplorer12.zip (55.28 KB)

downloadable from Azure Blob Storage here

.NET | Azure | BlobExplorer | REST | WPF
Monday, December 08, 2008 9:33:00 AM (GMT Standard Time, UTC+00:00)
# Sunday, December 07, 2008

The Windows Azure Storage infrastructure has three types of storage:

  • Blob - for storing arbitrary state as a blob (I recently blogged about writing an explorer for blob storage)
  • Table - for storing structured or semi-structured data with the ability to query it
  • Queue - primarily for communication between components in the cloud

This post is going to concentrate on queue storage as its usage model is a bit different from traditional queuing products such as MSMQ. The code samples in this post use the StorageClient library sample shipped in the Windows Azure SDK.

Let's start with the basics: you authenticate with Azure storage using your account name and key provided by the Azure portal when you create a storage project. If you're using the SDK's development storage the account name, key and uri are fixed. Here is the code to set up authentication details with development queue storage:

string accountName = "devstoreaccount1";
string accountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";
string address = "";   // the fixed development queue storage endpoint

StorageAccountInfo info = new StorageAccountInfo(new Uri(address), null, accountName, accountKey);

QueueStorage qs = QueueStorage.Create(info);

Queues are named within the storage so to create a queue (or address an existing queue) you use the following code:

MessageQueue mq = qs.GetQueue("fooq");
mq.CreateQueue();


The CreateQueue method will not recreate an already existing queue and has an overload to tell you that the queue already existed. You can then create message objects and push them into the queue as follows:

Message msg = new Message(DateTime.Now.ToString());
mq.PutMessage(msg);


Hopefully none of this so far is particularly shocking. Where things start to get interesting is on the receive side. We can simply receive a message so:

msg = mq.GetMessage(5);
if (msg != null)
{
    // process the message - note it is still on the queue at this point
}

So what is interesting about this? Firstly notice that the GetMessage takes a parameter - this is a timeout in seconds. Secondly, GetMessage doesn't block but will return null if there is no message to receive. Finally, although you can't see it from the above code, the message is still on the queue. To remove the message you need to delete it:

Message msg = mq.GetMessage(5);
if (msg != null)
{
    // process, then remove the message from the queue
    mq.DeleteMessage(msg);
}

This is where that timeout comes in: having received a message, that message is locked for the specified timeout. You must call DeleteMessage within that timeout otherwise the message is unlocked and can be picked up by another queue reader. This prevents a queue reader trying to process a message and then dying leaving the message locked. It, in effect, provides a loose form of transaction around the queue without having to formally support transactional semantics which tend not to scale at web levels.

However, currently we have to code some polling logic as the GetMessage call doesn't block. The MessageQueue class in StorageClient also provides the polling infrastructure under the covers and delivers the message via eventing:

mq.MessageReceived += new MessageReceivedEventHandler(mq_MessageReceived);
mq.PollInterval = 2000; // in milliseconds
mq.StartReceiving(); // start polling

The event handler will fire every time a message is received and when there are no more messages the client will start polling the queue according to the PollInterval.

Bear in mind that the same semantics apply to receiving messages. It is up to the receiver to delete the message within the timeout, otherwise the message will again become visible to other readers. Notice here we don't explicitly set a timeout and so a default is picked up from the Timeout property on the MessageQueue class (this defaults to 30 seconds).

Using the code above it is very easy to think that everything is running with a nicely tuned .NET API under the covers. Remember, however, that this infrastructure is actually exposed via internet standard protocols. All we have here in reality is a convenient wrapper over the storage REST API. And so bear in mind that the event model is just a convenience provided by the class library and not an inherent feature built into queue storage. Similarly the reason that GetMessage doesn't block is simply that it wraps an HTTP request that returns an empty queue message list if there are no messages on the queue. If you use the GetMessage or eventing API then the messages are retrieved one at a time. You can also use the GetMessages API which allows you to receive multiple messages at once.

.NET | Azure | REST
Sunday, December 07, 2008 11:04:49 AM (GMT Standard Time, UTC+00:00)
# Saturday, December 06, 2008

I've been digging around in Azure Storage recently and as a side project I decided to write an explorer for Blob Storage. My UI skills are not my strongest suit but I had fun dusting off my WPF knowledge (I have to thank Andy Clymer, Dave "no blog" Wheeler and Ian Griffiths for nursemaiding me through some issues)

Here is a screen shot of the explorer attached to development storage. You can also attach it to your hosted Azure storage in the cloud

In the spirit of cloud storage and dogfooding I used the Blob Explorer to upload itself into the cloud so you can find it here

However, as Azure is a CTP environment I thought I would also upload it here

BlobExplorer1.zip (55.08 KB)

as, on the web, URIs live forever and my storage account URIs may not after the CTP. I hope someone gets some mileage out of it - I certainly enjoyed writing it. Comments, bug reports and feature requests are appreciated

.NET | Azure | BlobExplorer | REST | WPF
Saturday, December 06, 2008 10:00:19 PM (GMT Standard Time, UTC+00:00)
# Monday, December 01, 2008

DevelopMentor UK ran a one day post-PDC event on Friday at the Microsoft offices in London. Dominick and I covered WF 4.0, Dublin, Geneva, Azure and Oslo.

You can find the materials and demos here

.NET | Azure | Oslo | WCF | WF | Dublin
Monday, December 01, 2008 11:30:39 AM (GMT Standard Time, UTC+00:00)
# Friday, November 21, 2008

Thanks to all who came to my REST talk at Oredev

The slides are here

REST.pdf (325.61 KB)

and the demos are here

REST.zip (30.32 KB)

Friday, November 21, 2008 3:28:54 PM (GMT Standard Time, UTC+00:00)

Thanks to everyone who came to my SOA with WF and WCF talk at Oredev

The slides are here

SOA-WCF-WF.pdf (278.79 KB)

and the demos are here

ServiceComposition.zip (107.08 KB)

Friday, November 21, 2008 7:17:50 AM (GMT Standard Time, UTC+00:00)
# Monday, November 03, 2008

I was in Redmond a few weeks ago looking at the new stuff that Microsoft's Connected Systems Division (CSD) were working on (WF 4.0, REST Toolkit, Oslo, Dublin). At the end of the week I did an interview with Ron Jacobs for Endpoint.tv on Channel 9. We discussed WF 4.0, Dublin, Oslo and M - as well as 150 person Guerrilla courses. You can watch it here

.NET | BizTalk | Oslo | REST | WCF | WF
Monday, November 03, 2008 11:12:08 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Sunday, November 02, 2008

.NET Services is a block of functionality layered on top of the Azure platform. This project has had a couple of names in the past – BizTalk Services when it was an incubation project and then Zurich as it was productionized.


.NET Services consists of three services:

1)      Identity

2)      The Service Bus

3)      Workflow


Let's talk about identity first. Officially named the .NET Access Control Service, this provides you with a Security Token Service (STS) that you can use to configure how different forms of authentication map into claims (it leverages the claims construct first surfaced in WCF and now wrapped up in Geneva). It supports user ids/passwords, Live ID, certs and full blown federation using WS-Trust. It also supports rule based authorization against the claim sets.


The Service Bus is a component that allows you to both listen for and send messages into the “service bus”. More concretely, it means that from inside my firewall I can listen on an internet based endpoint that others can send messages into. It supports both unicast and multicast. The plumbing is done by a bunch of new WCF bindings (the word Relay in the binding name indicates it is a Service Bus binding), although there is an HTTP based API too. How messages get sent to the internet is fairly obvious; it's the listening that is more interesting. The preferred way to connect is TCP (NetTcpRelayBinding). This parks a TCP session in the service bus, down which messages are then sent to you. HTTP is also supported, although the plumbing is a bit more complex: they construct a Message Buffer that messages are sent into, and the HTTP relay listener polls it for messages. The message buffer is not a full blown queue, although it does share some of the characteristics of a queue - it has a very limited size and a TTL for the messages in it. Over time they may turn it into a full blown queue, but for the current CTP it is not.
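To make the listening side concrete, here is a minimal sketch of hosting a service on a relay endpoint from behind the firewall. The contract, service class and address are invented for illustration, and I have omitted the credential configuration the CTP requires, so treat this as a shape rather than working code:

```csharp
// Hypothetical contract and service - only NetTcpRelayBinding comes from the post
ServiceHost host = new ServiceHost(typeof(EchoService));
host.AddServiceEndpoint(
    typeof(IEchoContract),             // hypothetical service contract
    new NetTcpRelayBinding(),          // "Relay" marks it as a Service Bus binding
    "sb://servicebus.windows.net/services/mysolution/echo");  // illustrative address
host.Open();  // parks a TCP session in the service bus; relayed messages flow back down it
```

The interesting design point is that the host dials out to the relay, so no inbound firewall holes are needed.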


So what’s the point of the Service Bus? It enables you to subscribe to internet based events (I find it hard to use the word “Cloud” ;-)), allowing loosely coupled systems over the web. It also allows the bridging of on-premises systems to web based ones through the firewall


Finally there is the .NET Workflow Service. This is WF in the Cloud (ok, I don't find it that hard). They provide a constrained set of activities (currently very constrained, although they are committed to providing a much richer set). They provide HTTP Receive and Send and Service Bus Send, plus some flow control activities and some XPath ones that allow content based routing. You deploy your workflow into their infrastructure and can create instances of it that wait for messages to arrive, route them, etc. With the toolset you basically get one-click deployment of XAML based workflows. They currently only support WF 3.5, although they will be rolling out WF 4.0 (which is hugely different from WF 3.5 - that's the subject of another post) in the near future.


So what does .NET Services give us? It provides a rich set of messaging infrastructure over and above that of Windows Azure Services

.NET | Azure | WCF | WF
Sunday, November 02, 2008 3:10:04 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Saturday, November 01, 2008

Having spent the last week at PDC - with very spotty Internet access - I thought I'd take some time to reflect on what I thought about the various announcements and technologies that I dug around in. This post is about the big announcement: Windows Azure


Azure (apparently pronounced to rhyme with badger) is an infrastructure designed to control applications installed in a huge datacenter. Microsoft refer to it as a “cloud based operating system” and talk about it being where you deploy your apps for Internet  scale.


So I guess we have to start with: what problem are Microsoft trying to solve? There are two answers here:

1)      Applications that need Internet scale are really hard to deploy due to the potentially huge hardware requirements and the cost of managing that infrastructure. Most organizations that go through rapid growth experience a lot of pain as they try to scale from the 10,000s of users to many millions (Pinku has some great slides on the pain MySpace went through and how they solved it). This normally requires rearchitecting the app to be more loosely coupled, etc.

2)      Microsoft have a serious threat from both Amazon and Google in the high scaling world as both of those companies already have “cloud solutions” in place. They had to do something to ensure they were not left behind.


So Microsoft started the project to create an infrastructure to allow customers to deploy applications into Microsoft’s datacenter that would seamlessly scale as their requirements grew – Azure is the result.


The idea then is that you write your software and tell Microsoft what facilities you require - this is in the form of a manifest config file: “I require 20 instances of the web front end and 5 instances of the data processing engine”. The software and config is then deployed into Microsoft’s infrastructure and a component called the Fabric Controller maps the software on to virtual machines under the control of a hypervisor. They also put a load balancer in front of the web front end. The web site runs as a “web role” that can accept requests from the Internet and the processing engine gets mapped to a “worker role” that cannot be accessed externally.
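The manifest is just declarative XML. As a sketch (the role names are from the example above, but the exact element names and file layout in the CTP may differ), the "20 web front ends, 5 processing engines" request looks something like:

```xml
<!-- Hypothetical sketch of the deployment configuration -->
<ServiceConfiguration serviceName="MyApp">
  <!-- web role: internet facing, sits behind the load balancer -->
  <Role name="WebFrontEnd">
    <Instances count="20" />
  </Role>
  <!-- worker role: no externally accessible endpoint -->
  <Role name="ProcessingEngine">
    <Instances count="5" />
  </Role>
</ServiceConfiguration>
```

The point of keeping this out of code is that the Fabric Controller can act on it - scaling out is a config change, not a redeploy.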


The Fabric Controller has responsibility for ensuring that there are always a number of running instances of your components, even in the face of hardware failures. It will also ensure that you can perform rolling upgrades without taking your app offline, if that is what you require.


The problem that apps then face is that even though they may run on a specific machine at one point, the next time they are initialized they may be on a completely separate machine, so storing anything locally is pointless except as a localized cache. That raises the question: where do I store my state? Enter the Azure Storage Service. This is a massively scalable, fault tolerant, highly available storage system. There are three types of storage available


Blob: Unstructured data with sizes up to 50Gb

Table: Structured tabular data with up to 252 user defined properties (think columns)

Queue: queue based data for storing messages to be passed from one component to another in the azure infrastructure


Hopefully Blob and Queue are fairly familiar constructs to most people. Table probably needs a little clarification. We are not talking about a relational database here. There is no schema for the table based data so, in fact, every row could be shaped completely differently (although this would be pretty ugly to try to read back out again). There are no relations and therefore no joins. There are also no server managed indexes - you define a pseudo index with the idea of a partition id. This id can be used to horizontally partition the data across multiple machine clusters, but the partition id is something you are responsible for managing. However, each row must be uniquely identifiable, so there is also a concept of a row id, and the partition id/row id combination makes up the primary key of the table. There is also a system maintained version number for concurrency control. This is where the strange number of 252 user defined properties comes from: 255 - 3 (partition id, row id, version)
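To picture what a table "row" looks like, here is a hypothetical entity shape. The class and property names are mine for illustration; only the partition id / row id / version triple is mandated by the storage model:

```csharp
// Hypothetical table storage entity: partition id + row id form the primary key,
// the version is system maintained, and everything else is one of the
// up-to-252 user defined properties
public class OrderEntity
{
    public string PartitionId { get; set; }  // you manage this; drives horizontal partitioning
    public string RowId { get; set; }        // must be unique within a partition
    public string Version { get; set; }      // system maintained, used for concurrency control
    public string Customer { get; set; }     // user defined property
    public double Total { get; set; }        // user defined property
}
```

Choosing the partition id well matters: entities sharing a partition id stay on the same cluster, so it is effectively your unit of locality and scale-out.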


So in the above example, the web front end passes data to the processing engine by enqueuing it into queue storage. The processing engine then stores the data further (say in a blob or table) or just processes it and pushes the results back on another queue. It can also send messages out to external endpoints.


All components run under partial trust (a variant similar to ASP.NET medium trust) so Azure developers will need some understanding of CAS.


The API for talking to Azure is REST based which can be wrapped by ADO.NET Data Services if appropriate (e.g. for table storage)


To get started you need an account provisioned (to get space reserved in the datacenter). You can do this via http://www.azure.com. There are other services built on top of Azure, which I will cover in subsequent posts, which get provisioned in the same place.


There is an SDK and VS2008 SP1 integration. This brings a development fabric to your machine that has the same services available as the cloud based one so you can test without deploying into the cloud. There are also VS project templates which in fact create multiple projects: the application one(s) and another to specify the deployment configuration.


So where does that leave Microsoft? They have created an offering that, in its totality (which is much more than I have talked about here), goes beyond what both Amazon and Google have created in terms of functionality. But they are left with two issues:

1)      Will companies trust Microsoft to run their business critical applications? Some definitely will, but others will reserve judgement for some time until they have seen in practice that this infrastructure is sound. Microsoft say the contract will also include an SLA with financial penalty clauses, as yet unspecified, should they fail to fulfil it

2)      Microsoft have not yet announced any pricing model. This leaves companies in a difficult position - do they throw resources at a project with a lot of potential and bear the risk that, when Microsoft unveil the pricing, their application is not economically viable? Or do they wait to start investing in this technology until Microsoft announce pricing? Unfortunately this is a chicken and egg situation - Microsoft cannot go live commercially until the infrastructure has been proven in practice by companies creating serious apps on it, and yet they do not want to announce pricing until they are ready for commercial release. Hopefully they will be prepared to discuss pricing, on a case by case basis, with any organization that is serious about building on the infrastructure before full commercial release.


Azure definitely has huge potential for those companies that need a very flexible approach to scale or who will require scale over time but that time cannot yet be determined. It also has some challenges for how you build applications - there are design constraints you have to cater for (failure is a fact of life and you have to code to accept that for example).


Definitely interesting times

.NET | Azure | REST
Saturday, November 01, 2008 2:39:23 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Monday, October 27, 2008
I'm sitting here in the PDC 08 Keynote. Ray Ozzie has just announced Windows Azure - a new Windows platform "in the cloud". In other words a set of services hosted in Microsoft's datacenters that you can deploy your apps into. As a platform it has a whole systems management side to it.

The service model uses services, endpoints - contracts ... seems familiar. You deploy the code and a model describing the app so the systems management can support your app.

Storage system is highly available with storage for blobs, tables and queues. Supports dynamic load balancing and caching

Azure development is done in Visual Studio and supports both managed and unmanaged code. New "cloud" project templates give you a pair of projects - one is a standard, familiar .NET project and the other is the configuration that describes the app.

The Azure portal lets you change the configuration dynamically to scale up as required. Currently you have to edit the XML but they will be providing a UI for the configuration.

This all looks pretty exciting - looking forward to getting hold of the bits tomorrow

.NET | Azure
Monday, October 27, 2008 4:24:25 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Tuesday, September 16, 2008

I'm going to be doing a couple of sessions at the Oredev conference in Sweden in November

Writing REST based Systems with .NET

For many, building large scale service based systems equates to using SOAP. There is, however, another way to architect service based systems by embracing the model the web uses - REpresentational State Transfer, or REST. .NET 3.5 introduced a way of building the service side with WCF - however, you can also use ASP.NET's infrastructure as well. In this session we talk about what REST is, two approaches to creating REST based services, and how you can consume these services very simply with LINQ to XML.

Writing Service Oriented Systems with WCF and Workflow

Since its launch WCF has been Microsoft's premier infrastructure for writing SOA based systems. However, one of the main benefits of Service Orientation is combining the functionality of services to create higher order functionality which is itself exposed as a service - namely service composition. Workflow is a very descriptive way of showing how services are combined, and in .NET 3.5 Microsoft introduced an integration layer between WCF and Workflow to simplify the job of service composition. In this session we examine this infrastructure and bring out both its strong and weak points, with an eye to what is coming down the line in Project Oslo - the next generation of Microsoft's SOA platform.

Hope to see you there

.NET | ASP.NET | LINQ | MVC | Oslo | REST | WCF | WF
Tuesday, September 16, 2008 1:45:26 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Wednesday, June 18, 2008

I've been looking at the routing infrastructure Microsoft are releasing in .NET 3.5 SP1. This is the infrastructure that allows me to bind an HTTP handler to a URI rather than simply using webforms. It is used in the ASP.NET MVC framework that is in development. The infrastructure is pretty clean in design. First you add a reference to System.Web.Routing. Then you simply create Route objects binding a URI pattern to an implementation of IRouteHandler. Finally you add them to the static RouteTable class. Global.asax is the ideal spot for this code.

void Application_Start(object sender, EventArgs e)
{
    Route r = new Route("books", new BooksRouteHandler());
    RouteTable.Routes.Add(r);
    r = new Route("books/isbn/{isbn}", new BooksRouteHandler());
    RouteTable.Routes.Add(r);
    r = new Route("search/price", new SearchRouteHandler());
    RouteTable.Routes.Add(r);
}

Here BooksRouteHandler and SearchRouteHandler implement IRouteHandler

public interface IRouteHandler
{
    IHttpHandler GetHttpHandler(RequestContext requestContext);
}

So for example the BooksRouteHandler looks like this

public class BooksRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        if (requestContext.RouteData.Values.ContainsKey("isbn"))
        {
            string isbn = (string)requestContext.RouteData.Values["isbn"];
            return new ISBNHandler(isbn);
        }
        return new BooksHandler();
    }
}

Where ISBNHandler and BooksHandler both implement IHttpHandler

This is all pretty straightforward. The one thing that had me puzzling for a while is who looks at the RouteTable. Reflector to the rescue! There is a module in the System.Web.Routing assembly called UrlRoutingModule. If you add this into your web.config the routing starts working. The config piece looks like this

      <add name="Routing" 
           type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>

I'm currently using it to build a REST based service for the second part of a two part article for the DevelopMentor DevelopMents newsletter so if you're not on the distribution list for that subscribe!
Wednesday, June 18, 2008 10:36:52 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Friday, June 06, 2008

As promised, here are the demos from the precon myself and Dave Wheeler (get a blog Dave) did at Software Architect 2008. It was a fun day talking about security, WCF, WF, Windows Forms, WPF, Silverlight, Ajax, ASP.NET MVC, LINQ and Oslo

DotNetForArchitects.zip (791.24 KB)

There is a text file in the demos directory in the zip that explains the role of each of the projects in the solution

Edit: Updated the download link so hopefully the problems people have been experiencing will be resolved

.NET | LINQ | Oslo | SilverLight | WCF | WF | WPF
Friday, June 06, 2008 10:02:07 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 

I've just got back from Software Architect 2008. It's a great conference to speak at and an interesting change from speaking at hard core developer conferences like DevWeek. Thanks to everyone who attended my sessions - the slides and demos are below

SOA with WCF and WF - SOA.zip (368.43 KB)

Volta - Volta.zip (506.21 KB)

The slides and demos from the pre conference workshop on .NET 3.5 for architects that Dave Wheeler and I presented will be posted early next week. We have realised we really need a guide to what all the projects are and how they relate - so we'll add this documentation and post them

.NET | Volta | WCF | WF
Friday, June 06, 2008 8:39:14 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Friday, May 09, 2008

The Workflow Mapper Activity for copying data from one object to another was an interesting project for myself, Jörg and Christian. However, none of us has the cycles to turn it into the hugely valuable activity I think it could be. Therefore, we have decided to publish the code on CodePlex so hopefully we can get community involvement to polish the functionality.

You can find the project at


Take a look and let us know if you want to get involved in the project

Friday, May 09, 2008 5:47:56 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Thursday, March 20, 2008

I use my machine for development and research as well as teaching. Originally I only installed VS2005 and SQL Express. I later installed VS 2008. Finally I installed BizTalk (which required a full-blown SQL Server install) and thought nothing more of it.

More recently I was checking into how LINQ to SQL was doing some things and realised I couldn't find SQL Server Profiler on my machine - in fact I couldn't find any of the SQL Server tools! I googled this a bit and found this was a well known issue with installing SQL Express before SQL Server. So I resigned myself to sorting this out on the next repave.

I was talking to Kev Jones about this issue at DevWeek last week and yesterday he pops up and tells me he's solved it. Kev had the cunning idea of actually reading the SQL Server installation warning messages - which apparently tell you exactly what to do.

You can read Kev's post about it here

Thursday, March 20, 2008 9:25:51 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Saturday, March 15, 2008

The demos from my DevWeek 2008 Postcon A Day of Connected Systems with VS 2008 are now here:

DayOfCS.zip (1.48 MB)

Thanks for attending the session

.NET | BizTalk | WCF | WF
Saturday, March 15, 2008 9:47:05 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Friday, March 14, 2008

The demos for my DevWeek 2008 talk on a Developers Guide to Workflow are now available here:

DevGuideToWF.zip (31.23 KB)

Thanks a lot for attending the session

Friday, March 14, 2008 7:46:46 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Thursday, March 13, 2008

The demos from my DevWeek 2008 Silver talk about integrating WCF and WF are available here:

Silver.zip (107.74 KB)

Thanks for coming to the talk

Thursday, March 13, 2008 11:51:43 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 

The demos from my DevWeek 2008 Robust Long Running Workflow talk are now available here:

RobustLongRunning.zip (28.84 KB)

Thanks for coming to the talk.

Thursday, March 13, 2008 11:48:33 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Tuesday, February 12, 2008

Recently I wrote an article on Volta for the DevelopMentor newsletter DevelopMents. I concentrated on the core of what Volta is doing, IL rewriting, rather than highlight the IL to JavaScript functionality that has caught most attention. You can read the article here.

.NET | Volta
Tuesday, February 12, 2008 9:09:49 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 

Recently I discussed creating robust multi-host workflow architectures with WF using the SqlWorkflowPersistenceService. I talked about an issue with the way workflow ownership is implemented and  showed some code that would fix the issue but that I also said was a hack. Ayende rightly pointed out that for high volume systems it would be a disaster. The point of that post was to highlight the problem.

Ideally the workflow team will fix the persistence service to recover from abandoned workflows more robustly - in the meantime the following stored procedure will unlock the abandoned workflows and make them runnable. The idea is to make this a scheduled job in the database running every few seconds - this is essentially what BizTalk does.

CREATE PROCEDURE dbo.ClearUnlockedWorkflows
AS
  UPDATE InstanceState
     SET ownerID = NULL,
         nextTimer = getdate()
   WHERE NOT ownerID IS NULL AND
         ownedUntil < getdate()


The only oddity in the code is the setting of the nextTimer to the current time. The issue is that straight persistence (as opposed to unloading on a delay) sets this value to 31st Dec 9999 which is obviously not going to be reached for some time. Unfortunately the workflow will only be scheduled due to an expired timer so I have to reset the timer such that it will expire immediately. I can't think of any issues this would cause but if there's a scenario I haven't thought of all comments are welcome.

.NET | BizTalk | WF
Tuesday, February 12, 2008 9:57:13 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Wednesday, February 06, 2008

For the third year running I'm going to be speaking at BearPark's DevWeek conference. This year I actually managed to submit some breakout session titles on time too.

Wednesday 12th March

11:30 A developer’s guide to Windows Workflow Foundation
There are many challenges to writing software. Not least of these are lack of transparency of code and creating software that can execute correctly in the face of process or machine restart. Windows Workflow Foundation (WF) introduces a new way of writing software that solves these problems and more. This session explains what WF brings to your applications and explains how it works. Along the way we will see the major features of WF that make it a very powerful tool in your toolkit, removing the need for you to write a lot of complex plumbing.

14:00 Creating robust, long-running Workflows
Long-running processes have unique requirements in that they need to maintain state over process restart; Windows Workflow Foundation (WF) enables this with its persistence infrastructure. However, there are issues around hosting and activity development that require attention for long running workflows to be robust. This session looks at the design of the workflow persistence service; issues around hosting and creating full featured asynchronous activities. This session assumes some familiarity with WF.

16:00 Cross my palm with Silver – creating workflow-based WCF services
There are very good reasons for using a workflow to implement a WCF service: workflows can provide a clear platform for service composition (using a number of building block services to generate functionally richer service); workflows can manage long running stateful services without having to write your own plumbing to achieve this. This session introduces the new Visual Studio 2008 Workflow Services. This technology, previously known as “Silver”, provides a relatively seamless integration between WF and WCF, enabling the service developer to concentrate on the application functionality rather than the plumbing. This session assumes some familiarity with WF and WCF.

Friday 14th March - Postcon

A day of connected systems with Visual Studio 2008
Most businesses find themselves building applications that use two or more machines working together to produce their functionality. One of the challenges in this world, apart from the actual business logic being implemented, is connecting the different parts of the application in a way that best fits the environment the machines are placed in – are there firewalls in place? Are some parts of the application written on different platforms such as Java? Do the different parts of the application have to maintain their state over machine restart? Late 2006 saw Microsoft release WCF and WF to tackle some of these challenges. However, parts of the story were left untold – especially the integration between the two.
Visual Studio 2008 introduces a number of new features for writing service based software. Its features build on the libraries released as part of .NET 3.0, providing an integration layer between the two. In this pre/post conference session we start at the basics of how WCF and WF work and then look at the various integration technologies introduced in Visual Studio 2008.

So if you're attending DevWeek I hope to see you there

Wednesday, February 06, 2008 3:16:10 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Monday, February 04, 2008

One of the most powerful features of the Windows Workflow Foundation is its ability to automate state management of long running processes using a persistence service. Microsoft supply a persistence service out of the box - the SqlWorkflowPersistenceService. Unsurprisingly this persists state in SQL Server (or SQL Express).

Once you have a persistence service more possibilities open up: different applications can progress the same workflow instance over time; multiple hosts can process workflows in a scale out model. This second feature needs a little investigation - there is a gotcha hiding there you need to be aware of.

The issue is how to stop more than one host picking up the same instance of a workflow and processing it at the same time - imagine the workflow transferred $10,000,000 from one account to another; you'd hardly want this happening twice. So if the possibility exists for multiple hosts to see the same persistence store, the persistence service must be able to ensure only one host is executing a workflow at any one time.

The SqlWorkflowPersistenceService handles multiple concurrent hosts by using the concept of workflow ownership - the guid of a host (created randomly by the persistence service constructor) is stamped against a workflow that it is actively executing (not in an idle or completed state). Now the question comes "what if the host dies while executing a workflow?". This is what the ownership timeout is for. You set the ownership timeout in the constructor of the SqlWorkflowPersistenceService.

SqlWorkflowPersistenceService sql =
    new SqlWorkflowPersistenceService(CONN,
                                      true,                      // unload workflows when idle
                                      TimeSpan.FromSeconds(20),  // ownership timeout (value illustrative)
                                      TimeSpan.FromSeconds(5));  // timer polling interval (value illustrative)

Here the third parameter specifies how long a host is allowed to run a workflow before persistence occurs. If the host takes longer than this then it will get an error when it attempts to persist. The fourth parameter is the polling interval for how often the persistence service will check for expired timers.

Now the idea is that if a host dies then its lock will time out and another host can pick up the work. There is a problem, however, in the implementation. The persistence service only looks for expired ownership locks when it first starts - not when it polls for expired timers. Therefore, a workflow instance whose host has died mid-processing will only recover if a new host instance starts after the timeout has occurred.

So how can you make this more robust? Well we need a way to explicitly load the workflows that have had their ownership expire - unfortunately there is no exposed method to do this on the SqlWorkflowPersistenceService. Instead we have to get all the workflows, catch the exception if we load a locked one and unload any that aren't ready to run. Here is an example:

TimerCallback cb = delegate
{
    // get all the persisted workflows
    foreach (var item in sql.GetAllWorkflows())
    {
        try
        {
            // load the workflow - this will throw a WorkflowOwnershipException if
            // the workflow is currently owned
            WorkflowInstance inst = workflowRuntime.GetWorkflow(item.WorkflowInstanceId);

            // Unload the workflow if it's still idle on a timer
            DateTime timerExpiry = (DateTime)item.NextTimerExpiration;
            if (timerExpiry > DateTime.Now)
            {
                inst.Unload();
            }
        }
        catch (WorkflowOwnershipException)
        {
            // Loaded a workflow locked by another instance - ignore it
        }
    }
};

Timer t = new Timer(cb, null, 0, 1000);

So this code will attempt to load workflow instances with expired locks every second. Is it a hack? Yes. But without one of two things in the SqlWorkflowPersistenceService it's the sort of code you have to write to pick up unlocked workflow instances robustly. The workflow team could:

  • Check for expired ownership locks in the stored procedure that checks for timer expiration
  • Provide a method on the persistence service that explicitly allows the loading of unlocked workflow instances

Maybe in the next version :-)

Monday, February 04, 2008 11:07:39 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Tuesday, January 29, 2008

We, at DevelopMentor, are very excited to be running our SilverLight 2.0 course, Essential SilverLight 2.0 in the UK on 17th March - all the new bits straight from Mix'08! Course author, Dave Wheeler, will be delivering the course at our London facility so if you are interested in the next generation of Rich Internet Applications register now.

Tuesday, January 29, 2008 11:02:35 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Friday, January 04, 2008

Christian noticed a bug in my Async Activity base class that I talked about here and here. The latest code is here

Friday, January 04, 2008 5:20:57 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Friday, December 14, 2007

When working with WF I always find it useful having a BizTalk background. Issues that are reasonably well known in BizTalk are not immediately apparent in Workflow unless you know they are part and parcel of that style of programming.

One issue in BizTalk is where you are waiting for a number of messages to start off some processing and you don't know in which order they are going to arrive. In this situation you use a Parallel Shape and put activating receives in the branches, initializing the same correlation set. The orchestration engine understands what you are trying to do and creates a convoy for the remaining messages when the first one arrives. This is known as a Concurrent Parallel Receive. You don't leave the parallel shape until all the messages have arrived, and the convoy ensures that the remaining messages are routed to the same orchestration instance.

There is, however, an inherent race condition in this architecture in that: if two messages arrive simultaneously, due to the asynchronous nature of BizTalk, both messages could be processed before the orchestration engine has a chance to set up the convoy infrastructure. We will end up with two instances of the orchestration both waiting for messages that will never arrive. All you can do is put timeouts in place to ensure your orchestrations can recover from that situation and flag the fact that the messages require resubmitting.

With Workflow Services we essentially have the same issue waiting for us. Let's set up the workflow ...

If we call this from a client as follows:

PingClient proxy = new PingClient();
// call both operations (the operation names here are illustrative)
proxy.Ping();
proxy.Pong();

then everything works ok - in fact it works irrespective of the order the operations are called in, that's the nature of this pattern. It works because by the time we make the second call, the first has completed and the context is now cached on the proxy.

But let's make the client a bit more complex:

static IDictionary<string, string> ctx = null;

static void Main(string[] args)
{
    PingClient proxy = new PingClient();
    IContextManager mgr = ((IChannel)proxy.InnerChannel).GetProperty<IContextManager>();

    Thread t = new Thread(DoIt);
    t.Start();

    if (ctx != null)
    {
        mgr.SetContext(ctx);
    }
    proxy.Ping();   // operation name is illustrative
    ctx = mgr.GetContext();

    Console.WriteLine("press enter to exit");
    Console.ReadLine();
}

static void DoIt()
{
    PingClient proxy = new PingClient();
    IContextManager mgr = ((IChannel)proxy.InnerChannel).GetProperty<IContextManager>();

    if (ctx != null)
    {
        mgr.SetContext(ctx);
    }
    proxy.Pong();   // operation name is illustrative
    ctx = mgr.GetContext();
}

Here we make the two calls on different proxies on different threads. A successful call stores the context in the static ctx field. A proxy will use the context if it has already been set; otherwise it assumes that it is the first call. So here the race condition has made its way all the way back to the client. There are things we could do about this in the client code (taking a lock out while we're making the call so we complete one and store the context before the other checks to see if the context is null), however, that really isn't the point. The messages may come from two separate client applications which both check a database for the context. Again we have an inherent race condition that we need to put architecture in place to detect and recover from.

It would be nice to put the ParallelActivity in a ListenActivity with a timeout DelayActivity. However, you can't do this because a ParallelActivity does not implement IEventActivity and so the ListenActivity validation will fail (the first activity in a branch must implement IEventActivity). We therefore have to put each receive in its own ListenActivity and time out the waits individually.

So is the situation pretty much the same for both WF and BizTalk? Well not really. The BizTalk window of failure is much smaller than the WF one as the race condition is isolated server side. Because WF ropes the client into message correlation the client has to receive the context and put it somewhere visible to other interested parties before the race condition is resolved.

Hopefully Oslo will bring data-oriented correlation to the WF world.

.NET | BizTalk | WCF | WF | Oslo
Friday, December 14, 2007 10:31:35 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Thursday, December 13, 2007

I've just been reading the latest team blog entry from the Volta team. This entry is meant to address the issue that I discussed here about the dangers of taking arbitrary objects and remoting them.

They say that they are abstracting away "unnecessary details", only leaving the necessary ones. Firstly, every abstraction leaks (just look at WCF for an example of that), so no matter how hard you try to abstract away the plumbing it will come through the abstraction in unexpected ways. Secondly, the remote boundary is not an unnecessary detail. It's a fundamental part of the design of an application.

Unless Volta has heuristics inside it to generate a remote facade during execution, the interface into a remote object is of huge importance to how an application will perform and scale. Volta should at least give you a warning if you apply the [RunAtOrigin] attribute to a class with properties on it.

Does this mean that the whole idea is broken? Not at all - it just means that applications have to be designed with Volta in mind. Decisions have to be made in the design about which layers *may* get remoted and the interface into that layer should be designed accordingly. Then the exact decision about which layers to *actually* remote can be deferred and tuned according to profiling the application.

.NET | Volta
Thursday, December 13, 2007 10:15:49 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Wednesday, December 12, 2007

Ian pointed out that there was a race condition in the UniversalAsyncResult that I posted a few days ago. I have amended the original post rather than repost the code.

The changes are to make the complete flag and the waithandle field volatile and to check the complete flag after allocating the waithandle in case Complete was called while the waithandle was being allocated.

Isn't multithreaded programming fun :-)

Wednesday, December 12, 2007 9:29:13 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Tuesday, December 11, 2007

On 5th December, Microsoft Live Labs announced Volta. The idea of Volta is to be able to write your application in a .NET language of your choice and then have the Volta infrastructure take care of targeting the right platform (JavaScript or Silverlight, IE or Firefox). This side of things is pretty neat. The other thing it allows you to do is to write your application monolithically and then decide on distribution later on based on profiling, etc. This sounds quite neat too, but I have some reservations.

I went through the DCOM wave or "COM with a longer wire" as it was labelled at the time and learned some hard lessons. I then went through the .NET Remoting experience and learned  ... well ... the same hard lessons. The reality is unless you design something to be remote then when you remote it, it's going to suck.

Let's look at a Volta example. First some basics:

  1. Download and install the CTP
  2. Create a Volta application project

Now let's build our Volta application. You create the HTML page for the UI (here's the important snippet)

<p>Press the button to do chatty stuff</p>
<p><button id="chat">Chat!</button></p>
<div id="output" />

You create the Volta code to wire up your code to the HTML

public partial class VoltaPage1 : Page
{
  Button chat;
  Div output;

  public VoltaPage1()
  {
    InitializeComponent();
  }

  partial void InitializeComponent()
  {
    chat = Document.GetById<Button>("chat");
    output = Document.GetById<Div>("output");
  }
}

Now create a class that has a chatty interface

class HoldStatement
{
  public string The { get { return "The "; } }
  public string Quick { get { return "Quick "; } }
  public string Brown { get { return "Brown "; } }
  public string Fox { get { return "Fox "; } }
  public string Jumped { get { return "Jumped "; } }
  public string Over { get { return "Over "; } }
  public string Lazy { get { return "Lazy "; } }
  public string Dog { get { return "Dog!"; } }
}

OK, now let's wire up the event handler on the button to output a statement in the output div

partial void InitializeComponent()
{
  chat = Document.GetById<Button>("chat");
  output = Document.GetById<Div>("output");

  HoldStatement hs = new HoldStatement();
  chat.Click += delegate
  {
    output.InnerText = hs.The + hs.Quick + hs.Brown + hs.Fox + hs.Jumped +
                       hs.Over + hs.The + hs.Lazy + hs.Dog;
  };
}

If we compile this in Release mode and press F5 we get a browser and a server appears in the system tray.

If you double click on this server icon you get a monitor window that allows you to trace interaction with the server. Bring that up and check the Log Requests checkbox.

Now click on the Chat! button in the browser. Notice you get a bunch of assembly loads in the log window. Clear the log and press the Chat! button again. Now you see there are no requests going across the wire as the entire application is running in the browser.

Now go to the project properties and select the Volta tab. Enable the <queue dramatic music> Tiersplitter. Go to the HoldStatement class, right click on it and select Refactor -> Tier Split to Run at Origin. Notice we now get a [RunAtOrigin] attribute on the class.

Rebuild and press F5 again. Bring up the Log window again and select to log messages. Now every time you press the Chat! button it makes a request to the server. We have distributed our application with just one attribute! - how cool is that?! But hold on ... how many requests are going to the server? Well one for each property access.

Latency is the death knell of many distributed applications. You have to design remote services to be chunky (lots of data in one round trip) otherwise the network roundtripping will kill your application performance. So saying you can defer architecting your solution in terms of distribution and then simply annotate classes with an attribute is a dangerous route to go down. We have already learned the hard way that taking a component and just putting it on the other end of a wire does not make for a good distributed application - Volta appears to be pushing us in that direction again.
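The fix is the same one DCOM and Remoting taught us: design a chunky facade so the wire is crossed once. For the HoldStatement example a sketch might look like this (StatementFacade is my own illustrative class, not part of the sample):

```csharp
// Chunky facade: one round trip instead of nine property accesses.
// Mark the facade, not HoldStatement, to run at the origin server.
[RunAtOrigin]
class StatementFacade
{
    public string GetStatement()
    {
        // All the chatty property access happens server side in a single call
        HoldStatement hs = new HoldStatement();
        return hs.The + hs.Quick + hs.Brown + hs.Fox + hs.Jumped +
               hs.Over + hs.The + hs.Lazy + hs.Dog;
    }
}
```

The client now pays the latency cost once per statement rather than once per word.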

.NET | Volta
Tuesday, December 11, 2007 4:01:02 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 

Yesterday I announced the release of the MapperActivity. I said I'd clean up the source code and post it.

Two things are worth noting:

  1. The dependency on the CodeProject Type Browser has gone
  2. There is a known issue where you can map more than one source element to the same destination - in this case the last mapping you create will win. We will fix this in a subsequent release.

Here's the latest version with the code

MapperActivityLibrary.zip (933.62 KB)

.NET | BizTalk | WCF | WF
Tuesday, December 11, 2007 8:31:41 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Monday, December 10, 2007

One of the neat things you can do in BizTalk is to create a map that takes one kind of XML message and transforms it into another. Maps are basically a wrapper over XSLT and its extension infrastructure. The tool you use to create maps is the BizTalk Mapper. In this tool you load in a source and a destination schema - which are displayed in treeviews - and then you drag elements from the source to the destination to describe how to build the destination message.

Now, in WF - especially when using Silver (the WF/WCF integration layer introduced in .NET 3.5) - you tend to spend a lot of time taking data from one object and putting it into another, as different objects are used to serialize and deserialize messages received and sent via WCF. This sounds a lot like what the BizTalk Mapper does, but with objects rather than XML messages.

I was recently teaching Guerrilla Connected Systems (which has now been renamed Guerrilla Enterprise.NET) and Christian Weyer was doing a guest talk on REST and BizTalk Services. Afterwards, Christian and I were bemoaning the amount of mundane code you get forced to write in Silver and how sad it was that WF doesn't have a mapper. So we decided to build one. Unfortunately neither Christian nor I are particularly talented when it comes to UI - but we knew someone who was: Jörg Neumann. So with Jörg doing the UI, myself writing the engine and Christian acting as our user/tester, we set to work.

The result is the MapperActivity. The MapperActivity requires you to specify the SourceType and DestinationType - we used the Type Browser from CodeProject for this. Then you can edit the Mappings which brings up the mapping tool

You drag members (properties or fields) from the left hand side and drop them on type compatible members (properties or fields) on the right hand side.

Finally you need to databind the SourceObject and DestinationObject to tell the mapper where the data comes from and where the data is going. You can optionally get the MapperActivity to create the destination object tree if it doesn't exist using the CanCreateDestination property. Here's the full property grid:

And thanks to Jon Flanders for letting me crib his custom PropertyDescriptor implementation (why on earth don't we have a public one in the framework?). You need one of these when creating extender properties (the SourceObject and DestinationObject properties change type based on the SourceType and DestinationType respectively).

Here is the activity - I'll clean up the code before posting that.

MapperActivity.zip (37.78 KB)

Feedback always appreciated

.NET | BizTalk | WCF | WF
Monday, December 10, 2007 9:56:07 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 

In my last post I talked about the WCF async programming model. On the service side you have to create a custom implementation of IAsyncResult. Fortunately this turns out to be pretty much boilerplate code, so here is a general purpose one:

class UniversalAsyncResult : IAsyncResult, IDisposable
{
  // To store the callback and state passed
  AsyncCallback callback;
  object state;

  // For the implementation of IAsyncResult.AsyncWaitHandle
  volatile ManualResetEvent waithandle;

  // Guard object for thread sync
  object guard = new object();

  // complete flag that is set to true on completion (supports IAsyncResult.IsCompleted)
  volatile bool complete;

  // Store the callback and state passed
  public UniversalAsyncResult(AsyncCallback callback, object state)
  {
    this.callback = callback;
    this.state = state;
  }

  // Called by consumer to set the call object to a completed state
  public void Complete()
  {
    // flag as complete
    complete = true;

    // signal the event object if it has been allocated
    if (waithandle != null)
      waithandle.Set();

    // Fire callback to signal the call is complete
    if (callback != null)
      callback(this);
  }

  #region IAsyncResult Members
  public object AsyncState
  {
    get { return state; }
  }

  // Use double check locking for efficient lazy allocation of the event object
  public WaitHandle AsyncWaitHandle
  {
    get
    {
      if (waithandle == null)
      {
        lock (guard)
        {
          if (waithandle == null)
          {
            waithandle = new ManualResetEvent(false);
            // guard against the race condition that the waithandle is
            // allocated during or after Complete
            if (complete)
              waithandle.Set();
          }
        }
      }
      return waithandle;
    }
  }

  public bool CompletedSynchronously
  {
    get { return false; }
  }

  public bool IsCompleted
  {
    get { return complete; }
  }
  #endregion

  #region IDisposable Members
  // Clean up the event if it got allocated
  public void Dispose()
  {
    if (waithandle != null)
      waithandle.Close();
  }
  #endregion
}
You create an instance of UniversalAsyncResult passing in an AsyncCallback and state object if available. When you have finished your async processing just call the Complete method on the UniversalAsyncResult object.

Comments welcome

Monday, December 10, 2007 12:07:29 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 

WCF supports asynchronous programming. On the client side you can generate an async API for invoking the service using svcutil

svcutil /async <rest of command line here> 

In VS 2005 you had to use the command line. In VS2008 the Add Service Reference dialog now has the Advanced button which allows you to generate the async operations

However, the fact that you can do asynchronous programming on the service side is not so obvious. You need to create the service contract according to the standard .NET async pattern of Beginxxx / Endxxx. You also need to annotate the Beginxxx method with an [OperationContract] attribute whose AsyncPattern property says this is one half of an async pair. Note you do not annotate the Endxxx method with [OperationContract]

[ServiceContract]
interface IGetData
{
  // This pair of methods is bound together as WCF is aware of the async pattern
  [OperationContract(AsyncPattern = true)]
  IAsyncResult BeginGetData(AsyncCallback cb, object state);
  string[] EndGetData(IAsyncResult iar);
}

Now, it's fairly obvious that when the request message arrives the Beginxxx method is called by WCF. The question is: how is the Endxxx method called? WCF passes the Beginxxx method an AsyncCallback that must somehow be cached and invoked when the operation is complete. This callback triggers the call to Endxxx. Another question is "what am I meant to return from the Beginxxx method?" - where does this IAsyncResult come from? And that, unfortunately, is up to you - you have to supply one, which is a good place to cache that AsyncCallback that WCF passed you.
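To make that shape concrete, here is a minimal sketch of the service side. GetDataAsyncResult is a hypothetical helper of my own; a production version would also implement AsyncWaitHandle properly, along the lines of a reusable async result class:

```csharp
// Minimal IAsyncResult that caches WCF's callback and carries the result
class GetDataAsyncResult : IAsyncResult
{
    AsyncCallback callback;
    object state;
    public string[] Data;   // the result handed back in EndGetData

    public GetDataAsyncResult(AsyncCallback callback, object state)
    {
        this.callback = callback;
        this.state = state;
    }

    public void Complete()
    {
        IsCompleted = true;
        if (callback != null)
            callback(this);   // this is what triggers WCF to call EndGetData
    }

    public object AsyncState { get { return state; } }
    public WaitHandle AsyncWaitHandle { get { return null; } } // omitted for brevity
    public bool CompletedSynchronously { get { return false; } }
    public bool IsCompleted { get; private set; }
}

class GetDataService : IGetData
{
    public IAsyncResult BeginGetData(AsyncCallback cb, object state)
    {
        GetDataAsyncResult iar = new GetDataAsyncResult(cb, state);

        // kick off the long running work without blocking the WCF thread
        ThreadPool.QueueUserWorkItem(delegate
        {
            iar.Data = new string[] { "some", "data" };
            iar.Complete();
        });

        return iar;
    }

    public string[] EndGetData(IAsyncResult iar)
    {
        return ((GetDataAsyncResult)iar).Data;
    }
}
```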

Here is a simple sample that shows how the service side works

Async.zip (219.99 KB)

Comments, as always, welcome

Monday, December 10, 2007 11:52:30 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Wednesday, December 05, 2007

In my last post I talked about the base class I created for creating custom composites that you want control over the execution logic. Firstly Josh pointed out that I ought to state its limitations - it only executes (and therefore cancels) one child activity at a time so it would not be useful for building, for example, the ParallelActivity.

With that limitation in mind though, there are a number of useful applications. I had a think about a different execution model and came up with prioritized execution. So I built this on top of my base class, which was an interesting exercise as I could use an attached property for the Priority, which meant I didn't limit the children to a special child activity type. I also had a think about the design time and realised that the sequential and parallel models for composite design time weren't appropriate, so I stole^H^H^H^H^H leveraged the designer that the Conditional Activity Group uses, which seemed to work well.

I packaged up the code and told Josh about it

"Oh yeah, thats an interesting one - Dharma Shukla and Bob Schmidt used that in their book"


It turns out that pretty much half the world has decided to build a prioritised execution composite.

Here's the code anyway

PrioritizedActivitySample.zip (76.36 KB)

But then I got to thinking - actually there is a problem with this approach, elegant as it may be. If you cannot change the priorities at runtime then all you are building is an elaborate sequence. Unfortunately, it turns out that attached properties cannot be data bound in WF - you have to use a full DependencyProperty created with DependencyProperty.Register rather than DependencyProperty.RegisterAttached. So I reworked the example, creating a special child activity, DynamicPrioritizedSequenceActivity, that has a priority which is a bindable dependency property. I also had to create a validator to make sure only this type of child could be added to the composite.

The neat thing now is you have a prioritized execution composite whose children can dynamically change their priority using data binding. The code for this is here

DynamicPrioritySample.zip (78.83 KB)

So between these two samples we have prioritized execution, designer support, attached property support and validator support

As usual all comments welcome

Wednesday, December 05, 2007 3:47:10 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Tuesday, December 04, 2007

There are two types of custom composites you may end up writing:

  1. a form of UserControl with a set of activities whose composite functionality you want to reuse. This is what you get when you add a new activity to a workflow project using the menu item. You end up with a class that derives from SequenceActivity, and this is the one occasion with a custom activity where you don't really have to override Execute
  2. a parent that wants to take control over the execution model of its children (think of the execution model of the IfElseActivity, WhileActivity and the ParallelActivity)

To me the second of these is more interesting (although no doubt less common) and involves writing a bit of plumbing. In general, though, there is a common pattern to the code and so I have created a base class that wraps this plumbing.

CustomCompositeBase.zip (55.6 KB)

There are two fundamental overridable methods:

LastActivityReached: returns whether there are more child activities to execute
GetNextActivity: returns the next activity to be scheduled

Inside the zip there is also a test project that implements a random execution composite (highly useful I know).

Feedback, as always, very welcome.

Tuesday, December 04, 2007 9:19:18 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Sunday, December 02, 2007

Yesterday I posted a base class for async activities that buried all the plumbing. However, looking at the zip I realised it only contained the binary for the base class and the test harness. Here is the zipped code.

AsyncActivityBaseClass12.zip (69.71 KB)

In addition Rod gave me some feedback (for example making the base class abstract messed up the design time view for the derived activity). In the light of that and other ideas I have made some changes:

  • The base class is now concrete and so designer view now works for derived classes
  • If you don't supply a queue name then the queue name defaults to the QualifiedName of the activity
  • Cancellation support is now embedded in the base class.
  • There is a new override called DoAsyncTeardown to allow you to deallocate any resources you allocated in DoAsyncSetup

I hope someone finds this useful

Sunday, December 02, 2007 12:19:15 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Saturday, December 01, 2007

A while back Joe Duffy announced that they were changing the default max number of worker threads in the CLR threadpool in CLR 2.0 SP1. The change was to increase the limit 10-fold, from 25 threads per processor per process to 250.

The basis of the change is that it is possible to deadlock the threadpool if you spawn and then wait for threadpool work from a threadpool thread (e.g. when processing an ASP.NET request). Increasing the size makes this occurrence less likely (although it doesn't remove the risk).

Although it's not obvious, CLR 2.0 SP1 ships with .NET 3.5. So I was teaching Guerrilla .NET a couple of months ago and thought I'd demo the change by running a project built under VS2005, then rebuilding it under VS2008. What I hadn't expected (although on reflection it's not that surprising) is that when I ran under VS2005 on a machine with VS2008 *installed*, VS2005 also had the new behavior. In other words the SP patches the 2.0 CLR rather than being something that only affects new applications.

Why should we care? Well there are two main reasons we use threadpools. Firstly, to remove the repeated cost of creating and destroying threads by reusing a set of already created threads; secondly, to make sure that we don't drown the machine in spawned threads when the application comes under load. Extra threads consume resources: the scheduler has to give them time and each defaults to 1Mb of stack space.

And here is where I think the problem is. If you create a server side application on 32 bit Windows, deploy it on an 8-way machine and put it under load, before this change you would cap out at 200 threads by default, which is a reasonable (maybe a little high) number for IO bound processing. However, after the change this will cap out by default at 2000 threads. 2000 threads with 1Mb stack space each means 2Gb of memory consumed by the threads. On 32 bit Windows this is the entire user-mode address space of the process ... ouch! Say hello to Mr OutOfMemoryException.

So for existing applications this is a potentially breaking change. Your applications will have to take control of the max number of worker threads to ensure correct processing. ASP.NET already does this and you can adjust it via the <processModel> configuration element. In a Windows Service based application you have to call ThreadPool.SetMaxThreads yourself to control the threadpool.
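For example, a Windows Service host might re-impose a cap at startup (the numbers are illustrative; choose limits by profiling your own workload):

```csharp
int wt, io;
ThreadPool.GetMaxThreads(out wt, out io);

// Restore a 25-threads-per-processor cap for worker threads,
// leaving the I/O completion thread limit alone
ThreadPool.SetMaxThreads(25 * Environment.ProcessorCount, io);
```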

To his credit Joe doesn't pretend this change was the ideal resolution to a real-world problem, but it is a change that will affect existing applications as it is a patch to CLR 2.0 rather than a feature of a new version of the CLR.

Saturday, December 01, 2007 9:45:17 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 

One of the most powerful aspects of Windows Workflow is that it can manage a workflow instance's state when it has no work to do. Obviously to do this it needs to know that your activity is waiting for something to happen. A number of activities in the Standard Activity Library run asynchronously like this - e.g. the DelayActivity and the new ReceiveActivity. However, most real WF projects require you to write your own custom activities. To perform long running operations or wait for external events to occur, these too should be able to run asynchronously.

To be usable in all situations, an async activity needs to do more than just derive from Activity and return ActivityExecutionStatus.Executing from the Execute override. It also needs to implement IActivityEventListener<QueueEventArgs> and IEventActivity (the latter allows it to be recognised as an event driven activity). We end up with 3 interesting methods: Execute, Subscribe and Unsubscribe.
Depending where your activity is used these get called in a different order

Sequential Workflow: Execute
State Machine: Subscribe, Execute, Unsubscribe
ListenActivity: Subscribe, Unsubscribe, Execute

So getting the behavior correct around where you create the workflow queue you are going to listen on, set up the event handler on the queue, process the results from the queue and close the activity can be a little tricky. So I have created a base class for async activities that buries the complexity of the plumbing and leaves you with three responsibilities:

  1. Tell the base class the name of the queue you wish to wait on
  2. Optionally override a method to set up any infrastructure you need before you start waiting on the queue
  3. Override a method to process the data that was passed on to the queue and decide whether you are still waiting for more data or you are finished
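For a sense of what the base class buries, the raw WF 3.x queue plumbing looks roughly like this (a sketch of my own: the activity and queue names are illustrative, error handling is omitted, and it skips the IEventActivity implementation needed for state machines and the ListenActivity):

```csharp
// Illustrative async activity using the raw WF 3.x queuing APIs
public class WaitForDataActivity : Activity, IActivityEventListener<QueueEventArgs>
{
    const string QueueName = "MyQueue";   // name is illustrative

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext context)
    {
        // Create the queue we are going to wait on and subscribe for data
        WorkflowQueuingService qs = context.GetService<WorkflowQueuingService>();
        if (!qs.Exists(QueueName))
            qs.CreateWorkflowQueue(QueueName, true);
        qs.GetWorkflowQueue(QueueName).RegisterForQueueItemAvailable(this);

        // Tell the runtime we are waiting - the instance can now go idle
        return ActivityExecutionStatus.Executing;
    }

    // Called by the runtime when data is delivered to the queue
    public void OnEvent(object sender, QueueEventArgs e)
    {
        ActivityExecutionContext context = (ActivityExecutionContext)sender;
        WorkflowQueuingService qs = context.GetService<WorkflowQueuingService>();
        object data = qs.GetWorkflowQueue(QueueName).Dequeue();

        // ... process data, then tidy up and close
        qs.DeleteWorkflowQueue(QueueName);
        context.CloseActivity();
    }
}
```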

You can download the code here AsyncActivityBaseClass.zip (38.36 KB). Any feedback gratefully received

Finally, thanks to Josh and Christian for sanity checking the code

Saturday, December 01, 2007 8:17:49 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Thursday, November 29, 2007

Really this is just a note to myself, as I always forget how to do this. To turn off JIT debugging (which takes an age trying to attach the debugger under the covers) you set this value in the registry


At least I now know where to go to find that information :-)

Thursday, November 29, 2007 12:12:28 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Thursday, June 07, 2007

Something seems broken in the CLR ThreadPool. I was teaching Effective .NET recently and I noticed very strange behavior. It was the first time I had taught the course off my laptop which is a Core 2 Duo. Normally I teach the course in our training centre and the instructor machine is hyper-threaded.

I ran the following code:

static void Main(string[] args)
{
  Console.ReadLine();

  ThreadPool.SetMinThreads(50, 1000);
  ThreadPool.SetMaxThreads(5, 1000);

  int wtcount;
  int iocount;

  ThreadPool.GetMaxThreads(out wtcount, out iocount);

  Console.WriteLine("{0} : {1}", wtcount, iocount);
}

Here I am calling SetMinThreads and SetMaxThreads. SetMinThreads "warms up" the thread pool so the default 0.5 second delay in spawning a new thread into the thread pool is avoided. SetMaxThreads changes the cap on the maximum number of threads available in the thread pool.

I had run this code several times before when talking about controlling the ThreadPool and expected the following output

5 : 1000

In other words the capping of the thread pool takes precedence over warming it up - which I would have thought is the correct behavior. However, when I ran it this time I saw

50 : 1000

Which basically meant that SetMinThreads was overriding SetMaxThreads. But I had seen the expected behavior before on a hyper-threaded machine. So I did one more test (which is why the Console.ReadLine is at the top of the program). I started the program and then before pressing Enter went to task manager and set the process affinity to only execute on one processor. Lo and Behold I get the output

5 : 1000

So the behavior with respect to SetMinThreads and SetMaxThreads is different if the process is running with two (or more?) processors or not (note there is no difference between standard single processor and a hyper-threaded processor). This to me is totally busted. I know what I think the behavior should be but irrespective of that - it should definitely be consistent across the number of processors.

Edit: Dan points out to me that SetMinThreads(50, 1000) is obviously going to fail on a single proc machine as there are only 25 threads in the thread pool - Doh! So what was happening was on a single proc test the call to SetMinThreads failed and so the call to SetMaxThreads succeeded. As Dan pointed out - if you use 25 instead of 50 for the number of threads in SetMinThreads then the behavior is consistent in that the two APIs both fail if they try to do inconsistent things. Thanks Dan.

However, I do find it odd that I have asked the CLR to do something for me and it has failed - I would kind of expect an exception. Now I know I didn't check the return value so it is partially my own fault, but surely return values should only be used for expected behavior.

Thursday, June 07, 2007 10:45:22 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 

In the previous article I talked about hosting options for WCF. Here I'll discuss hosting for WF. However, to understand the options fully we need to digress briefly and talk about some of the features of WF.

Asynchronous Execution

The execution model of WF is inherently asynchronous. Activities are queued up for execution and the runtime asks the scheduler service to make the calls to Activity.Execute on to threads. The DefaultWorkflowSchedulerService (which you get if you take no special actions) maps Execute calls on to thread pool worker threads. The ManualWorkflowSchedulerService (which also comes out of the box but which you have to specifically install using WorkflowRuntime.AddService) schedules Execute on the thread that calls ManualWorkflowSchedulerService.RunWorkflow.
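As a sketch, wiring up the manual scheduler and pumping a workflow on the calling thread looks like this (MyWorkflow is a placeholder workflow type):

```csharp
WorkflowRuntime runtime = new WorkflowRuntime();

// Replace the default thread pool based scheduler
ManualWorkflowSchedulerService scheduler = new ManualWorkflowSchedulerService();
runtime.AddService(scheduler);
runtime.StartRuntime();

WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow));
instance.Start();

// Nothing has actually executed yet - this call runs the workflow
// synchronously on the current thread until it completes or goes idle
scheduler.RunWorkflow(instance.InstanceId);
```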

Persistence
By default the workflow runtime does not persist workflows. Only when an object of a type that derives from WorkflowPersistenceService is added as a runtime service does the workflow runtime start persisting workflows both in the form of checkpoints when significant things have happened in the workflow's execution and to unload a workflow when it goes idle. When something happens that should be directed to an unloaded workflow the host (or a service) calls WorkflowRuntime.GetWorkflow(instanceId) and the runtime can reload the workflow from persistent storage.

One issue here is that of timers. If a workflow has gone idle waiting for a timer to expire (say via the DelayActivity) then no external event is going to occur to trigger a call to WorkflowRuntime.GetWorkflow(instanceId) to allow the workflow to continue execution. In this case the persistence service must have functionality to discover when a timer has expired in a persisted workflow. The out of the box implementation, the SqlWorkflowPersistenceService, polls the database periodically (how often is controlled by a parameter passed to its constructor). Obviously for this to occur the workflow runtime has to be loaded and running (via WorkflowRuntime.StartRuntime).
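Hooking up the out-of-the-box persistence service might look like this (a sketch; the connection string is a placeholder and the timespans are illustrative):

```csharp
// unloadOnIdle = true so idle workflows leave memory;
// the final parameter controls how often to poll for expired timers
SqlWorkflowPersistenceService persistence = new SqlWorkflowPersistenceService(
    "connection string here",
    true,                        // unload workflows when they go idle
    TimeSpan.FromMinutes(1),     // instance ownership duration
    TimeSpan.FromSeconds(20));   // timer polling interval
runtime.AddService(persistence);
```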

Hosting Options

The above two issues affect your WF hosting decisions and implementation. As with WCF, you can host in any .NET process, but in many ways the responsibilities of a WF host are more far reaching than for a WCF host. Essentially a WCF host has to start listening on the endpoints for the services it is hosting. A WF host not only has to configure and start the workflow runtime but also decide when new workflow instances should be created and broker communication from the outside world into the workflow. So let's look at the various options.

WF Console Hosting

When you install the Visual Studio 2005 Extensions for Windows Workflow Foundation package you get new project types which include console projects for both sequential and state machine workflows. These project types are useful for doing research and demos, but that's about it.

WF Smart Client Hosting

There are a number of WF features that make it desirable to host WF in a smart client. Among these are the ability to control complex page navigation and the WF rules engine for evaluating complex cross-field validation. The Acropolis team is planning to provide WF based navigation in their framework.

WF ASP.NET Hosting

One of the common use cases of WF hosting is controlling page flow in ASP.NET applications. This isn't by any means the sum total of the applications of WF in ASP.NET; for example, the website may be the gateway into starting and monitoring long running business processes. There are two main issues with ASP.NET hosting:

  1. ASP.NET executes on thread pool worker threads, as does the DefaultWorkflowSchedulerService. ASP.NET also stores the HttpContext in thread local storage. So if the workflow invokes functionality that requires access to HttpContext, or the ASP.NET request needs to wait for the workflow, then you should use the ManualWorkflowSchedulerService as the scheduler. This also means that the workflow runtime and ASP.NET will not be competing for threads.
  2. For long-running workflows the workflow will probably be persisted and so the persistence service needs to recognise when timeouts have occurred. However, to do this the workflow runtime needs to be executing. So why is this an issue? Well, in ASP.NET hosting the lifetime of the workflow runtime is generally tied to that of the Application object and therefore to the lifetime of the hosting AppDomain. But ASP.NET performs AppDomain and worker process recycling, and the AppDomain is not recreated until the next request. Therefore, if no requests come in (maybe the site has bursts of activity then periods of inactivity) the AppDomain may be recycled and the workflow runtime will stop executing. At this point the persistence service can no longer recognise when timers have expired.
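Point 1 can be sketched as follows; the Global type and its static members are illustrative names, not a prescribed pattern.

```csharp
// Global.asax.cs (sketch; type and member names are illustrative)
using System;
using System.Web;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

public class Global : HttpApplication
{
    public static WorkflowRuntime Runtime;
    public static ManualWorkflowSchedulerService Scheduler;

    protected void Application_Start(object sender, EventArgs e)
    {
        Runtime = new WorkflowRuntime();

        // Passing true (useActiveTimers) means expired timers still fire
        // even though this scheduler only runs workflows on demand.
        Scheduler = new ManualWorkflowSchedulerService(true);
        Runtime.AddService(Scheduler);
        Runtime.StartRuntime();
    }
}

// In a page, the workflow then runs synchronously on the request thread:
//   WorkflowInstance instance = Global.Runtime.CreateWorkflow(typeof(MyWorkflow));
//   instance.Start();
//   Global.Scheduler.RunWorkflow(instance.InstanceId);
```

Because RunWorkflow executes the workflow on the calling thread, HttpContext is available to the workflow and no thread pool threads are stolen from ASP.NET.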

As long as your workflows do not require persistence, or you have (or can create via a sentinel process) a regular stream of requests, then ASP.NET is an excellent workflow host - especially if IIS is being used to host WCF endpoints that hand off to workflows. Just remember it is not suitable for all applications.

WF Windows Service Hosting

A Windows Service is the most flexible host for WF as the workflow runtime lifetime is generally tied to the lifetime of the process. Obviously, you will probably have to write a communication layer to allow the workflows to be contacted by the outside world. In places where ASP.NET isn't a suitable host it can delegate to a Windows Service based host. Just remember there is an overhead in handing off to the Windows Service from another process (say via WCF using the netNamedPipeBinding) so make sure ASP.NET really isn't suitable before you go down this path.
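A minimal sketch of such a host looks like this; the service name is illustrative and the communication layer (e.g. WCF endpoints) is omitted.

```csharp
using System;
using System.ServiceProcess;
using System.Workflow.Runtime;

// Windows Service whose lifetime controls the workflow runtime: the
// runtime starts when the service starts and stops when it stops, so
// persisted timers are detected for as long as the service runs.
public class WorkflowHostService : ServiceBase
{
    private WorkflowRuntime runtime;

    protected override void OnStart(string[] args)
    {
        runtime = new WorkflowRuntime();
        // Add persistence, scheduler and communication services here
        // as required before starting the runtime.
        runtime.StartRuntime();
    }

    protected override void OnStop()
    {
        runtime.StopRuntime();
        runtime.Dispose();
    }

    public static void Main()
    {
        ServiceBase.Run(new WorkflowHostService());
    }
}
```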

WF is a powerful tool for many applications and the host plays a crucial role in providing the right environment for WF execution. Hopefully this article has given some insight into use cases and issues with the various hosting options.


Thursday, June 07, 2007 1:39:04 PM (GMT Daylight Time, UTC+01:00)

I have just uploaded the code from the precon Jon and I gave on WCF and WF. You can download it here.

Thursday, June 07, 2007 4:26:21 AM (GMT Daylight Time, UTC+01:00)
# Wednesday, June 06, 2007

WCF and WF are both frameworks. Specifically this means they are not full-blown products but rather blocks of utility code with which to build products. For both of these frameworks the hosting process is undefined and it is up to you to provide this. As they are .NET-based frameworks the host can be any .NET process, so what are the advantages and disadvantages of each? In this, the first of two articles, I'll look at options and issues hosting WCF, then in the next article I'll do the same for WF.

WCF Console Hosting

Console apps are useful for testing things out and can be useful for mocking up a service if you are testing a client. However, they are not suitable for production systems as they have no resilience associated with them.

WCF Smart Client Hosting

Useful for peer-to-peer applications.

WCF Windows Service Hosting

This is a viable host for production services. Whether it is the best host depends on the answers to a number of questions:

  • What operating system am I running on?
  • What network protocols do I need to support?
  • Do I need to execute any code before the first request comes in?

Assuming you want to run this on a server OS, and one that is currently released, you will have to use Windows Server 2003 (.NET 3.0 isn't supported on Windows 2000 or earlier). If you want to use any protocol other than HTTP, or you need to execute code before the first request comes in, you must use a Windows Service to host your WCF service.

If you are happy to run on Vista or Windows Server 2008 Beta (formerly known as Longhorn Server) then you should only use a Windows Service if you need to execute code before the first request arrives.
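The shape of such a host is straightforward; the contract, implementation and net.tcp address below are illustrative names, not a prescribed layout.

```csharp
using System;
using System.ServiceModel;
using System.ServiceProcess;

// Illustrative contract and implementation.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Ping();
}

public class MyService : IMyService
{
    public string Ping() { return "pong"; }
}

// Windows Service hosting the WCF service: startup code runs in OnStart
// before any request arrives, and non-HTTP bindings (here net.tcp) work
// without needing IIS7/WAS.
public class WcfHostService : ServiceBase
{
    private ServiceHost host;

    protected override void OnStart(string[] args)
    {
        // Any code that must execute before the first request goes here.
        host = new ServiceHost(typeof(MyService),
            new Uri("net.tcp://localhost:9000/MyService"));
        host.Open();
    }

    protected override void OnStop()
    {
        host.Close();
    }

    public static void Main()
    {
        ServiceBase.Run(new WcfHostService());
    }
}
```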

WCF IIS Hosting

IIS should be your host of choice for WCF services. However, there are two issues to be aware of:

  1. IIS6 and earlier can only host services that use HTTP as a transport. IIS7 introduces the Windows Process Activation Service that allows arbitrary protocols (such as TCP and MSMQ) to have the same activation facilities as HTTP.
  2. IIS provides activation on the first request. If you need to execute code before this (for example if you have a service that records trend data over time for some external data source and your service queries this trend data) then you cannot use IIS as a host without adding out of band infrastructure to bootstrap the application by performing a request.
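For HTTP activation in IIS6 nothing more than a .svc file and the usual web.config entries is needed; the names below are illustrative.

```xml
<!-- MyService.svc (names illustrative) -->
<%@ ServiceHost Service="MyNamespace.MyService" %>

<!-- web.config fragment -->
<system.serviceModel>
  <services>
    <service name="MyNamespace.MyService">
      <!-- IIS supplies the HTTP base address, so only a relative
           endpoint address is required -->
      <endpoint address=""
                binding="basicHttpBinding"
                contract="MyNamespace.IMyService" />
    </service>
  </services>
</system.serviceModel>
```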

So the options for WCF hosting are pretty straightforward. In the next article we'll see that the hosting options for WF have some issues that may not at first be obvious.

Wednesday, June 06, 2007 10:36:24 PM (GMT Daylight Time, UTC+01:00)

Just sitting in on a session on extending the WF rules infrastructure given by Moustafa Ahmed. Some interesting ideas. I especially liked the infrastructure for decoupling the rules from the workflow assembly using a repository, a custom workflow service and a custom policy activity. He also showed a business-user-friendly custom rules editor (which the out-of-the-box one isn't) called InRule, which looked pretty good.

Wednesday, June 06, 2007 8:05:31 PM (GMT Daylight Time, UTC+01:00)

So I'm here at TechEd 2007 enjoying the humidity of Orlando in June. I delivered a precon on WCF and WF on Sunday with Jon Flanders which was a blast. Mark Smith, who writes our WPF course, asked me to check out what I could find out about Acropolis - a new framework for creating WPF applications. The Acropolis website went live on Monday and so I wandered over to the booth to see what I could find out. It came as quite a surprise that Dave Hill, who I used to work with in the UK about 8 years ago, was on the stall. Dave is the architect on Acropolis - it's a small world. I had lunch with Dave and it looks like they have some interesting ideas for helping to enforce separation between the presentation layer and business functionality in an application, and it is all expressed declaratively in XAML. If you're building smart clients in WPF it's definitely worth keeping an eye on.

Wednesday, June 06, 2007 4:15:23 PM (GMT Daylight Time, UTC+01:00)