# Tuesday, 08 June 2010


I’m speaking at Software Architect 2010 in October. I’m going to be delivering two sessions on Windows Workflow Foundation 4.0: the first explains the basic architecture and looks at using workflow as a Visual Scripting environment to empower business users. The second looks at building big systems with workflow concentrating on the WCF integration features.

In addition to that I’ll be delivering two all-day workshops with Andy Clymer: Building Applications the .NET 4.0 Way and Moving to the Parallel Mindset with .NET 4.0. The first of these will take a number of new features of .NET 4.0 and show how they can be combined to create compelling applications. The second will look at the Parallel Framework Extensions (PFx) introduced in .NET 4.0, examining the rich functionality of the library, how it can best be leveraged while avoiding common parallel pitfalls, and finally the patterns that aid parallelisation of your code.

.NET | PFx | RSK | WCF | WF | WF4
Tuesday, 08 June 2010 10:52:40 (GMT Daylight Time, UTC+01:00)
# Monday, 07 June 2010
.NET | Dublin | WCF | WF
Monday, 07 June 2010 22:05:40 (GMT Daylight Time, UTC+01:00)
# Tuesday, 30 March 2010

Rock Solid Knowledge on iTunes is live! We’ve taken the feed to our free screencasts and they are now available through iTunes via the following link


Now you can watch them on the move

.NET | EF4 | PFx | RSK | SilverLight | WCF | WF | WF4
Tuesday, 30 March 2010 21:13:22 (GMT Daylight Time, UTC+01:00)
# Wednesday, 17 March 2010

Thanks to everyone who attended my sessions at DevWeek 2010. I’ve now uploaded the demos which you can find at the following locations

A Day of .NET 4.0 Demos

Windows Workflow Foundation 4.0 Demos

Creating WCF Services using WF4 Demos

I’ll be around for the rest of the conference so drop by for a chat at our developer clinic in the exhibition area

.NET | EF4 | RSK | WCF | WF | WF4
Wednesday, 17 March 2010 08:13:59 (GMT Standard Time, UTC+00:00)
# Friday, 26 February 2010

Thanks to everyone who attended my two sessions at BASTA! – another thoroughly enjoyable conference. I’ve uploaded the demos

What’s new in Workflow 4.0 – this includes the application with the rehosted designer

Building WCF services with Workflow 4.0

.NET | WCF | WF | WF4
Friday, 26 February 2010 10:02:42 (GMT Standard Time, UTC+00:00)
# Sunday, 14 February 2010

In my last post I showed that creating a custom composite activity (one that can have one or more children) requires deriving from NativeActivity. The Retry activity that I showed was fairly simple and in particular didn’t try to share data with its child. There appears to be a catch-22 in this situation when it comes to overriding CacheMetadata: if I add a Variable to the metadata (AddVariable) then it can be used exclusively by the children – i.e. the activity itself can’t manipulate the state; if I add a variable as implementation data (AddImplementationVariable) then the children can’t see it, as it’s treated as purely part of this activity’s implementation. How then do we create data that can be both manipulated by the activity and accessed by the children?

The secret to achieving this is a feature called ActivityAction – Matt Winkler talks about it here. The idea is that I can bind an activity to one or more parent-defined pieces of data and then schedule that activity where it will have access to the data. It’s probably best to show this with an example, so I have written a ForEachFile activity: you give it a directory and it passes its child the file name of each file in that directory in turn. I’ll show the code in its entirety and then walk through each piece in turn.

    [ContentProperty("Body")]
    [Designer(typeof(ForEachFileDesigner))]
    public class ForEachFile : NativeActivity, IActivityTemplateFactory
    {
        public InArgument<string> Directory { get; set; }

        [Browsable(false)]
        public ActivityAction<string> Body { get; set; }

        private Variable<IEnumerator<FileInfo>> files = new Variable<IEnumerator<FileInfo>>("files");

        protected override void CacheMetadata(NativeActivityMetadata metadata)
        {
            metadata.AddDelegate(Body);

            RuntimeArgument arg = new RuntimeArgument("Directory", typeof(string), ArgumentDirection.In);
            // bind the public Directory argument to its runtime slot
            metadata.Bind(Directory, arg);
            metadata.AddArgument(arg);

            metadata.AddImplementationVariable(files);
        }

        protected override void Execute(NativeActivityContext context)
        {
            DirectoryInfo dir = new DirectoryInfo(Directory.Get(context));

            IEnumerable<FileInfo> fileEnum = dir.GetFiles();
            IEnumerator<FileInfo> fileList = fileEnum.GetEnumerator();
            files.Set(context, fileList);

            bool more = fileList.MoveNext();

            if (more)
            {
                context.ScheduleAction<string>(Body, fileList.Current.FullName, OnBodyComplete);
            }
        }

        private void OnBodyComplete(NativeActivityContext context, ActivityInstance completedInstance)
        {
            IEnumerator<FileInfo> fileList = files.Get(context);
            bool more = fileList.MoveNext();

            if (more)
            {
                context.ScheduleAction<string>(Body, fileList.Current.FullName, OnBodyComplete);
            }
        }

        #region IActivityTemplateFactory Members
        public Activity Create(DependencyObject target)
        {
            var fef = new ForEachFile();
            var aa = new ActivityAction<string>();
            var da = new DelegateInArgument<string>();
            da.Name = "item";

            fef.Body = aa;
            aa.Argument = da;

            return fef;
        }
        #endregion
    }

OK, let’s start with the core functionality and then look at each of the pieces that help make this fully usable. As you can see, the class derives from NativeActivity and overrides CacheMetadata and Execute – we’ll look at their implementations in a minute. There are three pieces of member state in the class: the InArgument&lt;string&gt; for the directory; an implementation variable to hold the iterator as we move through the files in the directory; and the all-important ActivityAction&lt;string&gt;, which we will use to pass the current file name to the child activity.

Let’s look at CacheMetadata more closely. Because we want to do interesting things with some of the state, the default implementation won’t work, so we override it. As we don’t call the base class version, we need to specify how we use all of the state: we add the ActivityAction as a delegate, we bind the Directory argument to a RuntimeArgument, and we register the iterator as an implementation variable – we don’t want the child activity to have access to that directly.

Next, let’s look at Execute. The first couple of lines are nothing unusual – we get the directory and get hold of the list of files. We then store the retrieved iterator in the implementation variable, files. Now we move to the first file in the iteration: if MoveNext returns false the list was empty, so as long as it returned true we schedule the child activity, passing the current file. To do this we use the ScheduleAction&lt;T&gt; member on the context. However, because we’re going to process each file sequentially, we also need to know when the child has finished, so we pass a completion callback, OnBodyComplete.

Now in OnBodyComplete we simply get the iterator, move to the next item in the iteration and as long as we’re not finished iterating re-schedule the child again passing the new item in the iteration.

That really is the “functional” part of the activity – everything else is there to support the designer. So why does the designer need help? Well, let’s look at what we would have to do to build this activity correctly if we were going to execute it from Main:

    Activity workflow = new ForEachFile
    {
        Body = new ActivityAction<string>
        {
            Argument = new DelegateInArgument<string>
            {
                Name = "item",
            },
            Handler = new WriteLine
            {
                Text = new VisualBasicValue<string>("item")
            }
        },
        Directory = @"c:\windows"
    };

As you can see, it’s not simply a matter of creating the ForEachFile – we have to build the internal structure too, and something needs to create that structure for the designer. This is the point of IActivityTemplateFactory. When you drag an activity onto the design surface it normally just creates the XAML for that activity. However, before it does that it performs a speculative cast to IActivityTemplateFactory and, if the activity supports that interface, it calls the interface’s Create method instead and serializes the resulting activity to XAML.
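To make that mechanism concrete, here’s a rough sketch of what the design surface effectively does on a drop. This is illustrative only – it is not the actual designer source, and the method and class names are invented for the example:

```csharp
using System;
using System.Activities;
using System.Activities.Presentation;
using System.Activities.Statements;
using System.Windows;

// Hypothetical sketch of the designer's speculative cast - not real designer code.
public static class DesignerSketch
{
    public static Activity ResolveToolboxDrop(Activity dropped, DependencyObject target)
    {
        // speculative cast: does the activity want to supply a template?
        var factory = dropped as IActivityTemplateFactory;
        if (factory != null)
        {
            // yes - let the factory build the pre-wired activity tree instead
            return factory.Create(target);
        }

        // no - the bare activity gets serialized to XAML as-is
        return dropped;
    }
}
```

An activity that doesn’t implement the interface (a plain WriteLine, say) passes through unchanged; one that does (like ForEachFile above) is replaced by whatever its Create method builds.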

So going back to the ForEachFile activity, you can see it implements IActivityTemplateFactory and therefore, as far as the designer is concerned, this is the interesting functionality – let’s take a look at the Create method. We create the ForEachFile and wire an ActivityAction&lt;string&gt; to its Body property. ActivityAction&lt;string&gt; needs a slot to store the data to be presented to the child activity. This is modelled by DelegateInArgument&lt;string&gt;, which gets wired to the Argument member of the ActivityAction. We also name the argument, as we want a default name for the state so the child activity can use it. Notice, however, that we don’t specify the child activity itself (it would be a pretty useless composite if we hard-coded this). The child will be placed on the Handler property of the ActivityAction, but that will be done in the ActivityDesigner using data binding.

Before we look at the designer, let’s highlight a couple of “polishing” features of the code: the class declares a ContentProperty via an attribute – that just makes the XAML parsing cleaner, as the child doesn’t need to be specified using property element syntax; and the Body is marked non-browsable – we don’t want the activity’s user trying to set this value in the property grid.

OK, on to the designer. If you read my previous article there are a couple of new things here. Let’s look at the markup – again there is no special code in the code-behind file; everything is achieved using data binding:

   1: <sap:ActivityDesigner x:Class="ActivityLib.ForEachFileDesigner"
   2:     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
   3:     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
   4:     xmlns:s="clr-namespace:System;assembly=mscorlib"
   5:     xmlns:conv="clr-namespace:System.Activities.Presentation.Converters;assembly=System.Activities.Presentation"
   6:     xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation"
   7:     xmlns:sapv="clr-namespace:System.Activities.Presentation.View;assembly=System.Activities.Presentation"
   8:     xmlns:me="clr-namespace:ActivityLib">
   9:     <sap:ActivityDesigner.Resources>
  10:         <conv:ArgumentToExpressionConverter x:Key="expressionConverter"/>
  11:     </sap:ActivityDesigner.Resources>
  13:     <Grid>
  14:         <Grid.RowDefinitions>
  15:             <RowDefinition Height="Auto"/>
  16:             <RowDefinition Height="*"/>
  17:         </Grid.RowDefinitions>
  19:         <StackPanel Orientation="Horizontal" Margin="2">
  20:             <TextBlock Text="For each file "/>
  21:             <TextBox Text="{Binding ModelItem.Body.Argument.Name}" MinWidth="50"/>
  22:             <TextBlock Text=" in "/>
  23:             <sapv:ExpressionTextBox Expression="{Binding Path=ModelItem.Directory, Converter={StaticResource expressionConverter}}"
  24:                                     ExpressionType="s:String"
  25:                                     OwnerActivity="{Binding ModelItem}"/>
  26:         </StackPanel>
  27:         <sap:WorkflowItemPresenter Item="{Binding ModelItem.Body.Handler}"
  28:                                    HintText="Drop activity"
  29:                                    Grid.Row="1" Margin="6"/>
  31:     </Grid>
  33: </sap:ActivityDesigner>

Here’s what this looks like in the designer


So the TextBox at line 21 displays the argument name that our IActivityTemplateFactory implementation set up. The ExpressionTextBox is bound to the directory name but allows VB.NET expressions to be used. The WorkflowItemPresenter is bound to the Handler property of the Body ActivityAction. This designer is then associated with the activity using the [Designer] attribute.

So as you can see, ActivityAction and IActivityTemplateFactory work together to allow us to build rich composite activities with design-time support.

.NET | WF | WF4
Sunday, 14 February 2010 13:54:30 (GMT Standard Time, UTC+00:00)
# Tuesday, 09 February 2010

I’m writing Essential Windows Workflow Foundation 4.0 with Maurice for DevelopMentor. One of the things that I think is less than obvious is the behavior of NativeActivity.

What is NativeActivity I hear you ask? Well there are a number of models for building custom activities in WF4. Most “business” type custom activities will be built using a declarative model in XAML by assembling building blocks graphically. However, what if you are missing a building block? At this point you have to fall back to writing code and there are three options for your base class when writing an activity in code:

You use CodeActivity when you have a simple synchronous activity. All work happens in Execute and it has no child activities.

You use AsyncCodeActivity – new to WF4 – when you want to implement the async pattern (BeginExecute / EndExecute) to perform short-lived async operations during which you do not want the workflow persisted (e.g. an async WebRequest).

You use NativeActivity when you need full access to the power of the workflow execution engine. However, in the words of Spiderman’s uncle, “with great power comes great responsibility”. NativeActivity can be a bit tricky, and that is what this article is about.
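For contrast with the NativeActivity code that follows, here is what the simplest of the three options looks like – a minimal CodeActivity sketch. The activity and argument names here are invented for illustration; they are not from the article:

```csharp
using System;
using System.Activities;

// A minimal synchronous activity: no children, all work happens in Execute.
// Names (Add, X, Y) are illustrative only.
public class Add : CodeActivity<int>
{
    public InArgument<int> X { get; set; }
    public InArgument<int> Y { get; set; }

    // CodeActivity hides most of the instance/context plumbing:
    // arguments are still read via the context, but no CacheMetadata
    // override is needed for a simple case like this.
    protected override int Execute(CodeActivityContext context)
    {
        return X.Get(context) + Y.Get(context);
    }
}
```

Running it is a one-liner: `int sum = WorkflowInvoker.Invoke(new Add { X = 2, Y = 3 }); // 5`. Everything beyond this – children, variables, scheduling – is where NativeActivity comes in.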

I’m going to walk through the code for a Retry activity – one where a child activity can be rerun a number of times upon failure. The activity has two InArguments: MaxRetries and RetryDelay. MaxRetries says how many times you will retry the child before giving up; RetryDelay says how long to wait between retries. We could use Thread.Sleep to implement the delay, but this would not be good for the workflow engine: we would block a thread the engine could otherwise use, and there would be no way for the engine to persist the workflow – what if we wanted to retry in 2 days? So instead, as part of our implementation, we’ll use a Delay activity.

Now to explain the code we have to take a slight diversion and talk about the relationship between Activity, ActivityInstance and ExecutionContext. I talked about this a while back here when the PDC CTP first came out (that’s what the reference to a base class called WorkflowElement is about), but to expand a little: the Activity is really just a template containing the code to execute for the activity. The ActivityInstance is the thing that actually executes – it holds the state for this instance of the activity template. We need a way to bind the template code to the currently running instance of the activity, and this is the role of the ExecutionContext. If you are using the CodeActivity base class most of this is hidden from you, except that you have to access arguments by passing in the ExecutionContext. With NativeActivity, however, you have to get more directly involved with this model.

Now how does the workflow engine know what data you need stored in the ActivityInstance? It turns out you need to tell it. NativeActivity has a virtual method called CacheMetadata (this post talks about it to some degree). The point is that the activity has to register all of the “stuff” it wants to use during its execution. The base class implementation performs some fairly reasonable default actions, but it cannot know, for example, that part of your state is there purely as an implementation detail and should not be public. Therefore, you will often override this method when you create a NativeActivity.

So without further ado – here’s the code for the Retry activity. I’ll then walk through it.

   1: [Designer(typeof(ForEachFileDesigner))]
   2: public class Retry : NativeActivity
   3: {
   4:   public InArgument<int> MaxRetries { get; set; }
   5:   public InArgument<TimeSpan> RetryDelay { get; set; }
   6:   private Delay Delay = new Delay();
   8:   public Activity Body { get; set; }
  10:   Variable<int> CurrentRetry = new Variable<int>("CurrentRetry");
  12:   protected override void CacheMetadata(NativeActivityMetadata metadata)
  13:   {
  14:     metadata.AddChild(Body);
  15:     metadata.AddImplementationVariable(CurrentRetry);
  17:     RuntimeArgument arg = new RuntimeArgument("MaxRetries", typeof(int), ArgumentDirection.In);
  18:     metadata.Bind(MaxRetries, arg);
  19:     metadata.AddArgument(arg);
  21:     Delay.Duration = RetryDelay;
  23:     metadata.AddImplementationChild(Delay);
  24:   }
  26:   protected override void Execute(NativeActivityContext context)
  27:   {
  28:     CurrentRetry.Set(context, 0);
  30:     context.ScheduleActivity(Body, OnFaulted);
  31:   }
  33:   private void OnFaulted(NativeActivityFaultContext faultContext, Exception propagatedException, ActivityInstance propagatedFrom)
  34:   {
  35:     int current = CurrentRetry.Get(faultContext);
  36:     int max = MaxRetries.Get(faultContext);
  38:     if (current < max)
  39:     {
  40:       faultContext.CancelChild(propagatedFrom);
  41:       faultContext.HandleFault();
  42:       faultContext.ScheduleActivity(Delay, OnDelayComplete);
  43:       CurrentRetry.Set(faultContext, current + 1);
  44:     }
  45:   }
  47:   private void OnDelayComplete(NativeActivityContext context, ActivityInstance completedInstance)
  48:   {
  49:     context.ScheduleActivity(Body, OnFaulted);
  50:   }
  51: }

So let’s look at the code. First, the class derives from NativeActivity (we’ll come to the designer later). This means I want to do fancy things like have child activities or perform long-running asynchronous work. Next we see the two InArguments that are passed to the Retry. The Delay is an implementation detail of how we will pause between retry attempts, and the Body property is where the activity we are going to retry lives. Finally we have a Variable, CurrentRetry, where we store how many retry attempts we have made. That is the data in the class – but remember, we need to tell the workflow engine what we need stored and why. This is the point of CacheMetadata (I read the method name as “here is the metadata for the cache” rather than “I am going to cache some metadata”).

In CacheMetadata the first thing we do is specify that the Body is our child activity. Next we tell the engine that we want to be able to get hold of the CurrentRetry, but that it’s only there for our implementation – we’re not expecting the child activity to make use of it. The next three lines (17–19) seem a little strange, but essentially we’re saying that the MaxRetries argument needs to be accessible over the whole lifetime of the ActivityInstance, so we need a slot for that. We next configure our Delay activity, passing the RetryDelay as its Duration (this is how long we want to wait between retries). Finally we add the Delay, not as a normal child, but as an implementation detail.

OK, Execute is pretty simple – we initialize the CurrentRetry and then schedule the Body. But, because we want to retry on failure, we also pass in a fault handler (OnFaulted).

OnFaulted does the main work. It fires if the child fails, so it checks whether we have exceeded the retry count and, if not, retries the Body activity. However, it doesn’t do this directly: first it tells the context that it has handled the error – it also, strangely, has to cancel the current child (which is odd, as it has already faulted) – Maurice talks about this oddity here. Next it schedules the Delay, as we have to wait for the retry delay, and wires up a completion handler (OnDelayComplete) so we know when the delay has finished. Finally it updates the CurrentRetry.

OnDelayComplete simply reschedules the Body activity, remembering to pass the fault handler again in case the activity fails again.

Oh, one thing I said I’d come back to – the designer. I have written a designer to go with this (designers in WF4 are WPF-based). The XAML for it is here:

    <sap:ActivityDesigner x:Class="WordActivities.ForEachFileDesigner"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:s="clr-namespace:System;assembly=mscorlib"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation"
        xmlns:sapv="clr-namespace:System.Activities.Presentation.View;assembly=System.Activities.Presentation"
        xmlns:conv="clr-namespace:System.Activities.Presentation.Converters;assembly=System.Activities.Presentation"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d">
        <sap:ActivityDesigner.Resources>
            <conv:ArgumentToExpressionConverter x:Key="expressionConverter"/>
        </sap:ActivityDesigner.Resources>
        <Grid>
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto"/>
                <RowDefinition Height="Auto"/>
                <RowDefinition Height="*"/>
            </Grid.RowDefinitions>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="Auto"/>
                <ColumnDefinition Width="*"/>
            </Grid.ColumnDefinitions>
            <TextBlock Grid.Row="0" Grid.Column="0" Margin="2">Max. Retries:</TextBlock>
            <sapv:ExpressionTextBox Expression="{Binding Path=ModelItem.MaxRetries, Converter={StaticResource expressionConverter}}"
                                    ExpressionType="s:Int32"
                                    OwnerActivity="{Binding ModelItem}"
                                    Grid.Row="0" Grid.Column="1" Margin="2"/>
            <TextBlock Grid.Row="1" Grid.Column="0" Margin="2">Retry Delay:</TextBlock>
            <sapv:ExpressionTextBox Expression="{Binding Path=ModelItem.RetryDelay, Converter={StaticResource expressionConverter}}"
                                    ExpressionType="s:TimeSpan"
                                    OwnerActivity="{Binding ModelItem}"
                                    Grid.Row="1" Grid.Column="1" Margin="2"/>
            <sap:WorkflowItemPresenter Item="{Binding ModelItem.Body}"
                                       HintText="Drop Activity"
                                       Margin="6"
                                       Grid.Row="2"
                                       Grid.Column="0"
                                       Grid.ColumnSpan="2"/>
        </Grid>
    </sap:ActivityDesigner>

There is no special code-behind; everything is done via data binding.

So as you can see, there are a lot of pieces that need to be put in place for something that seems fairly simple. The critical issue is getting the implementation of CacheMetadata correct – once you have identified all of the pieces of data you need stored, the rest falls out nicely.

.NET | WF | WF4
Tuesday, 09 February 2010 20:14:03 (GMT Standard Time, UTC+00:00)
# Thursday, 04 February 2010

I’ve just noticed that Andy has blogged about the drop-in clinic we are running at DevWeek 2010. All of the Rock Solid Knowledge guys will be at the conference, so come over and let us help solve your design and coding problems – whether it be WCF, Workflow, Multithreading, WPF, Silverlight, ASP.NET MVC, Design Patterns or whatever you are currently wrestling with. We’re also doing a bunch of talks (detailed on our Conferences page).


.NET | RSK | WCF | WF | WF4
Thursday, 04 February 2010 13:19:44 (GMT Standard Time, UTC+00:00)
# Monday, 18 January 2010

I’ve just realised that DevWeek 2010 is on the horizon. DevWeek is one of my favourite conferences – lots of varied talks, vendor independence, and I get to hang out with lots of friends for the week.

This year I’m doing a preconference talk and two sessions:

A Day of .NET 4.0
Mon 15th March 2010
.NET 4.0 is a major release of .NET, including the first big changes to the .NET runtime since 2.0. In this series of talks we will look at the big changes in this release in C#, multithreading, data access and workflow.
The best-known change in C# is the introduction of dynamic typing. However, there have been other changes that may affect your lives as developers more, such as support for generic variance, named and optional parameters. There is a whole library for multithreading, PFx, that not only assists you in parallelizing algorithms but in fact replaces the existing libraries for multithreading with a unified, powerful single API.
Entity Framework has grown up – the initial release, although gaining some popularity, missed features that made it truly usable in large-scale systems. The new release, version 4.0, introduces a number of features, such as self-tracking objects, that assist using Entity Framework in n-tier applications.
Finally, workflow gets a total rewrite to allow the engine to be used far more widely, having overcome limitations in the WF 3.5 API.
This workshop will take you through all of these major changes, and we’ll also talk about some of the other smaller but important ones along the way.

An Introduction to Windows Workflow Foundation 4.0
Tues 16th March 2010
.NET 4.0 introduces a new version of Windows Workflow Foundation. This new version is a total rewrite of the version introduced with .NET 3.0. In this talk we look at what Microsoft are trying to achieve with WF 4.0, why they felt it necessary to rewrite rather than modify the codebase, and what new features are available in the new library. Along the way we will be looking at the new designer, declarative workflows, asynchronous processing, sequential and flowchart workflows and how workflow’s automated persistence works.

Creating Workflow-based WCF Services
Tues 16th March 2010
There are very good reasons for using a workflow to implement a WCF service: workflows can provide a clear platform for service composition (using a number of building block services to generate a functionally richer service); workflow can manage long running stateful services without having to write your own plumbing to achieve this. The latest version of Workflow, 4.0, introduces a declarative model for authoring workflows and new activities for message based interaction. In addition we have a framework for flexible message correlation that allows the contents of a message to determine which workflow the message is destined for. In this session we will look at how you can consume services from workflow and expose the resulting workflow itself as a service.

As well as these you may well see me popping up as a guest code-monkey in the other Rock Solid Knowledge sessions. Hope to see you there

Monday, 18 January 2010 14:52:19 (GMT Standard Time, UTC+00:00)
# Thursday, 17 December 2009

Seeing as this has changed completely from WF 3.5, I thought I’d post a quick blog entry describing how to run a workflow declared in a XAML file.

You may have heard that the WF 4.0 default authoring model is now XAML. However, the Visual Studio 2010 workflow projects store the XAML as a resource in the binary rather than as a text file. So if you want to deploy your workflows as XAML text files, how do you run them? In .NET 3.5 you could pass the workflow runtime an XmlReader pointing at the XAML file, but in WF 4.0 there is no WorkflowRuntime class. It turns out you need to load the XAML slightly indirectly, by creating a special activity called a DynamicActivity.

DynamicActivity has an Implementation member of type Func&lt;Activity&gt; – in other words, you wire up a method that returns an activity (the workflow). Here’s an example:

    static void Main(string[] args)
    {
        DynamicActivity dyn = new DynamicActivity();

        // this line wires up the workflow creator method
        dyn.Implementation = CreateWorkflow;

        // run the dynamically loaded workflow
        WorkflowInvoker.Invoke(dyn);
    }

    static Activity CreateWorkflow()
    {
        // we use the new XamlServices utility class to deserialize the XAML
        Activity act = (Activity)XamlServices.Load(@"..\..\workflow1.xaml");
        return act;
    }

.NET | WF | WF4
Thursday, 17 December 2009 16:27:07 (GMT Standard Time, UTC+00:00)
# Friday, 02 October 2009

I just got back from speaking at Software Architect 2009. I had a great time at the conference and thanks to everyone who attended my sessions. As promised the slides and demos are now available on the Rock Solid Knowledge website and you can get them from the Conferences page.

.NET | Azure | REST | RSK | WCF | WF
Friday, 02 October 2009 22:12:18 (GMT Daylight Time, UTC+01:00)
# Saturday, 26 September 2009

Thanks to everyone who came to my sessions at BASTA! My first time at that conference and I had a great time. Here are the slides and demos

What’s New in Workflow 4.0

Contract First Development with WCF

Generics, Extension Methods and Lambdas – Beyond List<T>

Saturday, 26 September 2009 09:24:56 (GMT Daylight Time, UTC+01:00)
# Thursday, 21 May 2009

Every year I speak at the excellent DevWeek conference in London. Eric Nelson, a Microsoft Developer Evangelist in the UK, is also a regular attendee. This year Eric, Mike and Mike were recording interviews with people at the conference. Eric interviewed me about Workflow 4.0 (I was delivering a post-conference day on WF 4.0, Dublin and Oslo). He’s just got round to publishing it online.


.NET | Dublin | Oslo | WF
Thursday, 21 May 2009 06:49:02 (GMT Daylight Time, UTC+01:00)
# Monday, 18 May 2009

Soma has blogged that Beta 1 has finally shipped. Lots of new stuff in here since the PDC. His blog post is here.


Monday, 18 May 2009 16:06:32 (GMT Daylight Time, UTC+01:00)
# Saturday, 02 May 2009

Cliff Simpkins has just posted a blog entry detailing some of the changes between the PDC preview of WF 4.0 and what is coming in Beta 1, due shortly.

There is important information here, not just about Beta 1 but also about what you can expect later – and what won’t make it into RTM.

Saturday, 02 May 2009 00:27:00 (GMT Daylight Time, UTC+01:00)
# Saturday, 28 March 2009

Thanks to all who attended my post-conference day on connected systems with .NET 4.0, Dublin and Oslo. You can get the demos here.

Dublin | Oslo | WCF | WF
Saturday, 28 March 2009 08:39:03 (GMT Standard Time, UTC+00:00)
# Thursday, 26 March 2009

Thanks to all who attended my DevWeek session, A Beginner’s Guide to Windows Workflow. The slides and demos can be downloaded here.

Thursday, 26 March 2009 06:51:37 (GMT Standard Time, UTC+00:00)
# Tuesday, 03 March 2009

I talked about the new WF4 runtime model a while back. One of the things I discussed was the new data-flow model of arguments and variables. However, if you are used to WF 3.5, something looks a bit odd here. Let’s look at a simple activity example:

    public class WriteLineActivity : WorkflowElement
    {
        public InArgument<string> Text { get; set; }

        protected override void Execute(ActivityExecutionContext context)
        {
            // the argument is read via the context - see below for why
            Console.WriteLine(Text.Get(context));
        }
    }

Why do I need to pass the ActivityExecutionContext when I want to get the data from the argument? This highlights a subtlety of the change in the runtime model. The activity is really just a template. A class called ActivityInstance is the thing that actually executes; it holds a reference to the activity so it can hand off to the activity’s methods (e.g. Execute). The actual argument state is stored in a construct called the LocalEnvironment. The ActivityExecutionContext gives access to the current ActivityInstance, and that in turn holds a reference to the current LocalEnvironment. Therefore, to get from an activity’s Execute method to its state, we have to go via the ActivityExecutionContext.

Tuesday, 03 March 2009 09:41:42 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Thursday, 22 January 2009

I’ve been writing a lab on Workflow Services in 4.0 recently. Part of what I was showing was the new data-oriented correlation (the same kind of mechanism that BizTalk uses for correlation). So I wanted to have two operations that were correlated to the same workflow instance based on data in the message (rather than a smuggled context id, as 3.5 does it). As I was writing this lab I suddenly started getting an InvalidOperationException stating "DispatchOperation requires Invoker" every time I brought the .xamlx file up in a browser. It appeared that others had seen this as well but not really solved it. So I dug around looking at the XAML (workflow services can now be written fully declaratively) and the config file and could see no issues there. I asked around but no one I asked knew the reason.

So I created a simple default Declarative Workflow Service project and that worked OK. I compared my lab workflow with the default one and it suddenly dawned on me what was wrong. The default project has just one operation on the contract and has a ServiceOperationActivity to implement it. My contract had two operations but I had, so far, only bound one ServiceOperationActivity. In other words, I had not implemented the contract. This is obviously an issue and, looking back, I'm annoyed I didn't see it sooner.

However, the problem is that this is a change in behavior between 3.5 and 4.0. In 3.5, if I didn't bind a ReceiveActivity to every operation I got a validation warning but I could still retrieve metadata; in 4.0 you get a fatal error. It's not hugely surprising that the behavior has changed – after all, the whole infrastructure has been rewritten.

On the whole it's a good thing that implementation of the contract is enforced – although it would be nice if the validation infrastructure caught this at compile time rather than it being a runtime failure.
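For illustration, here is a hypothetical two-operation contract of the kind that tripped me up (the contract and operation names are invented); in 4.0, every operation must have a ServiceOperationActivity bound to it before the service will even serve metadata:

```csharp
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void SubmitOrder(string orderId);   // bound to a ServiceOperationActivity - fine

    [OperationContract]
    void CancelOrder(string orderId);   // left unbound - in 4.0 this produced the
                                        // "DispatchOperation requires Invoker" error
}
```

In 3.5 the equivalent omission only produced a validation warning, which is what made the new fatal error so surprising.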

.NET | BizTalk | WCF | WF
Thursday, 22 January 2009 10:10:15 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Saturday, 10 January 2009

I'm really excited to announce that I've joined thinktecture as a consultant. I've known Ingo, Christian and Dominick for some time now and it's great to take that relationship on to a new level. I have huge respect for the abilities of the thinktecture team and so it was fantastic when they asked me if I was interested in working with them.

I'll be focusing on all things distributed at thinktecture - so that includes WCF, Workflow, BizTalk, Dublin and Azure. And I guess I'll have to learn to speak German now ...

Azure | BizTalk | Dublin | Life | WCF | WF
Saturday, 10 January 2009 18:27:22 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Wednesday, 10 December 2008

As I said in my previous post, asynchrony and persistence enable arguably the most powerful feature of workflow: stateful, long-running execution. However, as I also said, the WF runtime has been rewritten for WF 4.0 – so what do asynchrony and persistence look like in the 4.0 release?

First, let's look at what was problematic in WF 3.5 in this area:

  • Writing an asynchronous activity was hellishly complex, as activities get executed differently depending on their parent (State Machine, Listen Activity, standard activity)
  • There was no model for performing async work that didn't open up the possibility of being persisted during processing. This is fine if you are waiting for a nightly extract file to arrive on a share; it's a real problem if you are trying to invoke a service asynchronously, as the workflow may have been unloaded by the time the HTTP response comes back

The WF team have tried to solve both of these problems in this release and, in addition, obviously have to align with the new runtime model I talked about in my last post. Let's talk about async first and then we'll look at the related but separate issue of persistence.

The standard asynchronous model has been streamlined: the communication mechanisms have been wrapped up in the concept of a Bookmark. A bookmark represents a resumable point of execution. Let's look at the code for an async activity using bookmarks:

public class StandardAsyncActivity : WorkflowElement
{
    int callTimes = 0;

    protected override void Execute(ActivityExecutionContext context)
    {
        BookmarkCallback cb = Callback;
        // the outstanding bookmark tells the runtime this activity has not yet completed
        context.CreateNamedBookmark("rlBookMark", cb, BookmarkOptions.MultipleResume);
    }

    private void Callback(ActivityExecutionContext ctx, Bookmark bm, object state)
    {
        Console.WriteLine(state);
        if (++callTimes == 2)
        {
            // closing the bookmark tells the runtime the activity is complete
            ctx.CloseBookmark(bm);
        }
    }
}


This code demonstrates a number of features. In 3.5 an activity indicated it was async by returning ActivityExecutionStatus.Executing from the Execute method. Here, however, the workflow runtime knows that this activity has yet to complete because it has an outstanding bookmark. When the activity is done with bookmark processing (in this case it receives two pieces of data) it simply closes the bookmark, and since the runtime now knows it has no outstanding bookmarks the activity must be complete. Also notice that no special interfaces need to be implemented; the activity simply associates a callback to be fired when execution resumes at this bookmark. So how does execution resume? Here's the code:

myInstance.ResumeBookmark("rlBookMark", "hello bookmark");

This code is in the host or an extension (the new name for a workflow runtime service) associated with the activity and, as you can see, is very simple.

With this model, after Execute returns and after each bookmark callback that doesn't close the bookmark, the workflow will go idle (assuming no other activities are runnable) and could be unloaded and/or persisted. So this is ideal for situations where the next execution resumption will be after some time. However, if we're simply performing async IO then the next resumption may be sub-second.

This model will not work for such processing, so there is a lightweight version of async processing in 4.0. Let's look at an example:

public class LightweightAsyncActivity : WorkflowElement
{
    protected override void Execute(ActivityExecutionContext context)
    {
        // enter a no-persist block for the duration of the async work
        AsyncOperationContext block = context.SetupAsyncOperationBlock();

        Action a = AsyncWork;

        AsyncCallback cb = delegate(IAsyncResult iar)
        {
            // called on non workflow thread
            a.EndInvoke(iar);
            // signal completion and ask for AsyncProcessingComplete to run on a workflow thread
            block.BeginCompleteOperation(AsyncProcessingComplete, null,
                                         ar => block.EndCompleteOperation(ar),
                                         null);
        };

        a.BeginInvoke(cb, null);
    }

    private void AsyncProcessingComplete(ActivityExecutionContext ctx, Bookmark bm, object val)
    {
        // called on workflow thread
        Console.WriteLine("Bookmark callback");
    }

    private void AsyncWork()
    {
        // called on non workflow thread
        Thread.Sleep(2000);
        Console.WriteLine("Done Sleeping");
    }
}

This time, instead of explicitly creating a bookmark, we create an AsyncOperationContext. This then allows us to run on a separate thread (BeginInvoke hands the AsyncWork method to the threadpool) and then, when it completes, tell the workflow infrastructure that our async work is done. It also allows us to provide another method that will execute on a workflow thread in case there are other workflow-related tasks we need to perform (AsyncProcessingComplete in the above example). Although when Execute returns the workflow will go idle, this infrastructure puts in place a new construct called a No-Persist Block that prevents persistence while the async operation is in progress.

So these are the two new async models – but how does the new persistence model interact with them? The above description mentions some interaction, but let's look at 4.0 persistence in more detail and then draw the two parts of the post together.

The WorkflowPersistenceService has now been remodelled in terms of Persistence Providers. You get one out of the box, as before, for SQL Server. There are two scripts to run against a database to put the new persistence constructs in place: SqlPersistenceProviderSchema40.sql and SqlPersistenceProviderLogic40.sql. To enable the SQL persistence provider you create an instance of the provider for each workflow instance using a factory and add it as an extension to that workflow instance:

string conn = "server=.;database=WFPersist;integrated security=SSPI";
SqlPersistenceProviderFactory fact = new SqlPersistenceProviderFactory(conn, true);
fact.Open();

WorkflowInstance myInstance = WorkflowInstance.Create(new MyWorkflow());

PersistenceProvider pp = fact.CreateProvider(myInstance.Id);
pp.Open();

// add the provider as an extension to this workflow instance
myInstance.Extensions.Add(pp);


The factory and the provider must have Open called on them, as they both derive from the WCF CommunicationObject class (presumably due to the way this layers in for durable services and Dublin). Note that there is no flag on the provider equivalent to 3.5's PersistOnIdle. It is now up to the host to explicitly call Persist or Unload on the WorkflowInstance if it wants it persisted or unloaded. So now, in the Idle event, you may put code like this:

myInstance.Idle += delegate
{
    myInstance.TryPersist();
};

I'll come back to why use TryPersist rather than Persist in a moment. However, there is another job to be done. You need to explicitly tell the persistence provider that you are done with a workflow instance to ensure that it gets deleted from the persistence store when the workflow completes. And so you will typically call Unload in the Completed event:

myInstance.Completed += delegate(object sender, WorkflowCompletedEventArgs e)
{
    myInstance.TryUnload();
};

However, things are a little gnarly here. Remember from above that both the Bookmark and AsyncOperationBlock models for async will trigger the Idle event, but the AsyncOperationBlock prevents persistence. That's why we have to use TryPersist and TryUnload: Persist and Unload will throw exceptions if persistence is not possible. The team have said they are working on getting this revised for beta 1 and, having seen a preview of the kind of things they are looking at, it should be a lot neater then.

So that's the new async and persistence model for WF 4.0. The simplicity and extra functionality now in the async model will take away a good deal of the complexity of building custom activities that play well in the long-running workflow infrastructure.



.NET | Dublin | WCF | WF
Wednesday, 10 December 2008 14:32:56 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 

I've always found the workflow execution model very compelling. It's a very different way of thinking about writing applications, but it solves a set of difficult problems, particularly in the area of asynchronous and long-running processing. So when I found out that .NET 4.0 introduces a new version of the workflow infrastructure I was intrigued as to what changes had been made.

The ideas behind workflow have not changed but the WF 4.0 implementation is very different from the 3.5 one, involving a very different API and a rewritten design-time experience. The decision to do a ground-up rewrite was not taken lightly and was necessary to resolve issues that people were facing with 3.5 – particularly around serialization, performance and complexity. So you can say goodbye to WorkflowRuntime, WorkflowRuntimeService, WorkflowQueue, ActivityExecutionStatus, IEventActivity and a number of other familiar classes and interfaces; the new API looks quite different.

So let's look at the code to create and execute a workflow in WF 4.0:

WorkflowInstance myInstance = WorkflowInstance.Create(new MyWorkflow());
myInstance.Start();   // starting is asynchronous; control returns immediately

Console.WriteLine("Press enter to exit");
string data = Console.ReadLine();

So firstly, two things are the same: you still use a WorkflowInstance object to address the workflow, and starting the workflow is still an asynchronous operation. However, notice that we don't use a WorkflowRuntime to create the workflow; everything is done against the WorkflowInstance itself.

There is also a more lightweight workflow execution model if you simply want to inline some workflow functionality:

WorkflowInvoker inv = new WorkflowInvoker(new MyWorkflow());
inv.Invoke();   // executes the workflow inline on the calling thread

Workflows are now, by default, authored declaratively in XAML, as are custom activities. In fact the activity model looks quite different. The base unit of execution in a workflow is a WorkflowElement. Activities are composite activities, normally declared in XAML, and these ultimately derive from WorkflowElement. However, neither of these constructs is actually executed; an ActivityInstance is generated from the activity definition and it is this that is executed. This is because arbitrary state can be attached to an executing activity in the form of variables. An activity can only bind its own state (modelled explicitly in the form of arguments) to variables declared on its parent or other ancestors. This is a very different model from 3.5, where an activity's state could be data bound to properties of any other activity in the entire workflow activity tree. That meant, in effect, that to truly capture the state of a workflow you had to serialize the entire activity tree, including those activities that had finished execution and those that had yet to start.

The 4.0 model only requires the currently executing set of activities to be serialized, as they cannot bind state outside of their parental relationships. This makes persistence much faster and more compact.
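As a sketch of the scoping rule (using the CTP object model described in this post, so not runnable against shipped bits; the implicit Variable-to-argument binding is an assumption on my part):

```csharp
// illustrative only - a child activity binds its arguments to ancestor state
Variable<string> message = new Variable<string>();

Sequence body = new Sequence
{
    Variables = { message },   // the state lives on the parent composite
    Activities =
    {
        // the child's InArgument can only bind to variables on its ancestors,
        // never to the properties of an arbitrary sibling as in 3.5
        new WriteLineActivity { Text = message }
    }
};
```

Because the only reachable state is on the ancestor chain, persisting the workflow means serializing just the executing branch rather than the whole activity tree.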

Creating custom activities is often a case of combining building blocks defined as other activities – these are generally modelled in XAML. However, what if you want to create a building block activity yourself? In this case you simply derive from WorkflowElement directly and override the Execute method. This sounds a lot like creating custom activities in 3.5; however, there are important differences. Firstly, state in and out of an activity is explicitly modelled with arguments that have direction; secondly, you no longer have to tell the runtime in Execute whether the activity is finished – the runtime knows based on what actions you perform.

public class WriteLineActivity : WorkflowElement
{
    public InArgument<string> Text { get; set; }

    protected override void Execute(ActivityExecutionContext context)
    {
        Console.WriteLine(Text.Get(context));
    }
}

Notice here that this activity is assuming that it is running in a Console host. Generally we try to avoid tying an activity to a certain type of host and this was achieved in 3.5 by using pluggable Workflow Runtime Services. The same is true in 4.0 but these services have been renamed Extensions to avoid confusion with Workflow Services (where workflow is used to implement a WCF service). The WorkflowInstance class now has an Extensions collection and the ActivityExecutionContext has a GetExtension<T> method for the activity to retrieve an extension it is interested in.
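A sketch of how that decoupling might look – the extension type here is invented for illustration, but the Extensions collection and GetExtension<T> are as described above:

```csharp
// Hypothetical extension - the host decides where lines actually go
public class LineWriterExtension
{
    public void WriteLine(string text)
    {
        Console.WriteLine(text);   // a GUI host could write somewhere else entirely
    }
}

// Host side:
//   myInstance.Extensions.Add(new LineWriterExtension());

// Activity side: ask the context for the extension instead of assuming a Console host
public class WriteLineActivity : WorkflowElement
{
    public InArgument<string> Text { get; set; }

    protected override void Execute(ActivityExecutionContext context)
    {
        LineWriterExtension writer = context.GetExtension<LineWriterExtension>();
        writer.WriteLine(Text.Get(context));
    }
}
```

The activity now states a dependency rather than hard-coding the host environment, which is the same role Workflow Runtime Services played in 3.5.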

Some of the real power of the workflow infrastructure only becomes apparent when you plug in a persistence service and your activities are asynchronous, so the workflow can be taken out of memory. It is this combination that allows workflows to perform long-running processing, and this infrastructure has also changed a great deal. However, I will walk through those changes in my next post.

Wednesday, 10 December 2008 10:44:27 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Monday, 01 December 2008

DevelopMentor UK ran a one-day post-PDC event on Friday at the Microsoft offices in London. Dominick and I covered WF 4.0, Dublin, Geneva, Azure and Oslo.

You can find the materials and demos here

.NET | Azure | Oslo | WCF | WF | Dublin
Monday, 01 December 2008 11:30:39 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Friday, 21 November 2008

Thanks to everyone who came to my SOA with WF and WCF talk at Oredev

The slides are here

SOA-WCF-WF.pdf (278.79 KB)

and the demos are here

ServiceComposition.zip (107.08 KB)

Friday, 21 November 2008 07:17:50 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Monday, 03 November 2008

I was in Redmond a few weeks ago looking at the new stuff that Microsoft's Connected Systems Division (CSD) were working on (WF 4.0, REST Toolkit, Oslo, Dublin). At the end of the week I did an interview with Ron Jacobs for Endpoint.tv on Channel 9. We discussed WF 4.0, Dublin, Oslo and M – as well as 150-person Guerrilla courses. You can watch it here

.NET | BizTalk | Oslo | REST | WCF | WF
Monday, 03 November 2008 23:12:08 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Sunday, 02 November 2008

.NET Services is a block of functionality layered on top of the Azure platform. This project has had a couple of names in the past – BizTalk Services when it was an incubation project and then Zurich as it was productionized.


.NET Services consists of three services:

1. Identity
2. The Service Bus
3. Workflow


Let's talk about identity first. Officially named the .NET Access Control Service, this provides you with a Security Token Service (STS) that you can use to configure how different forms of authentication map into claims (it leverages the claims construct first surfaced in WCF and now wrapped up in Geneva). It supports user ids/passwords, Live ID, certificates and full-blown federation using WS-Trust. It also supports rule-based authorization against the claim sets.


The Service Bus is a component that allows you both to listen for and to send messages into the “service bus”. More concretely, it means that from inside my firewall I can listen on an internet-based endpoint that others can send messages into. It supports both unicast and multicast. The plumbing is done by a bunch of new WCF bindings (the word Relay in the binding name indicates it is a Service Bus binding), although there is an HTTP-based API too. How messages get sent to the internet is fairly obvious; it is the listening that is more interesting. The preferred way is to use TCP (NetTcpRelayBinding) to connect. This parks a TCP session in the service bus, down which messages are then sent to you. They also support HTTP, although the plumbing is a bit more complex: there they construct a Message Buffer that they send messages into, and the HTTP relay listener polls it for messages. The message buffer is not a full-blown queue, although it does have some of the same characteristics of one – there is a very limited size and TTL for the messages in it. Over time they may turn it into a full-blown queue, but for the current CTP it is not.
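As a sketch of what listening looks like (the service type, contract and endpoint address are hypothetical; NetTcpRelayBinding is the binding named above, and details may differ in the CTP):

```csharp
// Park a listener in the Service Bus from behind the firewall (sketch)
ServiceHost host = new ServiceHost(typeof(EchoService));

host.AddServiceEndpoint(typeof(IEchoContract),
                        new NetTcpRelayBinding(),
                        "sb://mysolution.servicebus.windows.net/echo");  // hypothetical address

host.Open();   // maintains the outbound TCP session that the relay sends messages down
```

The key point is that the connection is established outbound from inside the firewall, so no inbound ports need opening.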


So what’s the point of the Service Bus? It enables you to subscribe to internet-based events (I find it hard to use the word “Cloud” ;-)) to allow loosely coupled systems over the web. It also allows the bridging of on-premises systems to web-based ones through the firewall.


Finally there is the .NET Workflow Service. This is WF in the cloud (OK, I don't find it that hard). They provide a constrained set of activities (currently very constrained, although they are committed to providing a much richer set): HTTP Receive and Send and Service Bus Send, plus some flow control and some XPath activities that allow content-based routing. You deploy your workflow into their infrastructure and can create instances of it waiting for messages to arrive and to route them, etc. With the toolset you basically get one-click deployment of XAML-based workflows. They currently only support WF 3.5, although they will be rolling out WF 4.0 (which is hugely different from WF 3.5 – that's the subject of another post) in the near future.


So what does .NET Services give us? It provides a rich set of messaging infrastructure over and above that of Windows Azure Services.

.NET | Azure | WCF | WF
Sunday, 02 November 2008 15:10:04 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Tuesday, 16 September 2008

I'm going to be doing a couple of sessions at the Oredev conference in Sweden in November

Writing REST based Systems with .NET

For many, building large-scale service-based systems equates to using SOAP. There is, however, another way to architect service-based systems: by embracing the model the web uses – REpresentational State Transfer, or REST. .NET 3.5 introduced a way of building the service side with WCF – however, you can also use ASP.NET's infrastructure. In this session we talk about what REST is, two approaches to creating REST-based services and how you can consume these services very simply with LINQ to XML.

Writing Service Oriented Systems with WCF and Workflow

Since its launch, WCF has been Microsoft's premier infrastructure for writing SOA-based systems. However, one of the main benefits of Service Orientation is combining the functionality of services to create higher-order functionality which is itself exposed as a service – namely service composition. Workflow is a very descriptive way of showing how services are combined, and in .NET 3.5 Microsoft introduced an integration layer between WCF and Workflow to simplify the job of service composition. In this session we examine this infrastructure and bring out both its strong and weak points, with an eye to what is coming down the line in Project Oslo – Microsoft's next generation of its SOA platform.

Hope to see you there

.NET | ASP.NET | LINQ | MVC | Oslo | REST | WCF | WF
Tuesday, 16 September 2008 13:45:26 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Friday, 06 June 2008

As promised, here are the demos from the precon that Dave Wheeler (get a blog, Dave) and I did at Software Architect 2008. It was a fun day talking about security, WCF, WF, Windows Forms, WPF, Silverlight, Ajax, ASP.NET MVC, LINQ and Oslo.

DotNetForArchitects.zip (791.24 KB)

There is a text file in the demos directory in the zip that explains the role of each of the projects in the solution

Edit: Updated the download link so hopefully the problems people have been experiencing will be resolved

.NET | LINQ | Oslo | SilverLight | WCF | WF | WPF
Friday, 06 June 2008 22:02:07 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 

I've just got back from Software Architect 2008. It's a great conference to speak at and an interesting change from speaking at hardcore developer conferences like DevWeek. Thanks to everyone who attended my sessions – the slides and demos are below.

SOA with WCF and WF - SOA.zip (368.43 KB)

Volta - Volta.zip (506.21 KB)

The slides and demos from the pre-conference workshop on .NET 3.5 for architects that Dave Wheeler and I presented will be posted early next week. We have realised we really need a guide to what all the projects are and how they relate – so we'll add this documentation and post them.

.NET | Volta | WCF | WF
Friday, 06 June 2008 08:39:14 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Friday, 09 May 2008

The Workflow Mapper Activity for copying data from one object to another was an interesting project for myself, Jörg and Christian. However, none of us has the cycles to turn it into the hugely valuable activity I think it could be. Therefore, we have decided to publish the code on CodePlex so hopefully we can get community involvement to polish the functionality.

You can find the project at


Take a look and let us know if you want to get involved in the project

Friday, 09 May 2008 05:47:56 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Saturday, 15 March 2008

The demos from my DevWeek 2008 Postcon A Day of Connected Systems with VS 2008 are now here:

DayOfCS.zip (1.48 MB)

Thanks for attending the session

.NET | BizTalk | WCF | WF
Saturday, 15 March 2008 09:47:05 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Friday, 14 March 2008

The demos for my DevWeek 2008 talk, A Developer's Guide to Workflow, are now available here:

DevGuideToWF.zip (31.23 KB)

Thanks a lot for attending the session

Friday, 14 March 2008 07:46:46 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Thursday, 13 March 2008

The demos from my DevWeek 2008 Silver talk about integrating WCF and WF are available here:

Silver.zip (107.74 KB)

Thanks for coming to the talk

Thursday, 13 March 2008 11:51:43 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 

The demos from my DevWeek 2008 Robust Long Running Workflow talk are now available here:

RobustLongRunning.zip (28.84 KB)

Thanks for coming to the talk.

Thursday, 13 March 2008 11:48:33 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Tuesday, 12 February 2008

Recently I discussed creating robust multi-host workflow architectures with WF using the SqlWorkflowPersistenceService. I talked about an issue with the way workflow ownership is implemented and showed some code that would fix the issue, but that I also said was a hack. Ayende rightly pointed out that for high-volume systems it would be a disaster. The point of that post was to highlight the problem.

Ideally the workflow team will fix the persistence service to recover from abandoned workflows more robustly – in the meantime, the following stored procedure will unlock the abandoned workflows and make them runnable. The idea is to make this a scheduled job in the database running every few seconds – this is essentially what BizTalk does.

CREATE PROCEDURE dbo.ClearUnlockedWorkflows
AS
UPDATE InstanceState
   SET ownerID = NULL,
       nextTimer = GETDATE()  -- reset the timer so the instance is scheduled immediately
 WHERE ownerID IS NOT NULL
   AND ownedUntil < GETDATE()


The only oddity in the code is the setting of nextTimer to the current time. The issue is that straight persistence (as opposed to unloading on a delay) sets this value to 31st Dec 9999, which is obviously not going to be reached for some time. Unfortunately the workflow will only be scheduled due to an expired timer, so I have to reset the timer such that it expires immediately. I can't think of any issues this would cause, but if there's a scenario I haven't thought of, all comments are welcome.

.NET | BizTalk | WF
Tuesday, 12 February 2008 09:57:13 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Wednesday, 06 February 2008

For the third year running I'm going to be speaking at BearPark's DevWeek conference. This year I actually managed to submit some breakout session titles on time too.

Wednesday 12th March

11:30 A developer’s guide to Windows Workflow Foundation
There are many challenges to writing software. Not least of these are lack of transparency of code and creating software that can execute correctly in the face of process or machine restart. Windows Workflow Foundation (WF) introduces a new way of writing software that solves these problems and more. This session explains what WF brings to your applications and explains how it works. Along the way we will see the major features of WF that make it a very powerful tool in your toolkit, removing the need for you to write a lot of complex plumbing.

14:00 Creating robust, long-running Workflows
Long-running processes have unique requirements in that they need to maintain state over process restart; Windows Workflow Foundation (WF) enables this with its persistence infrastructure. However, there are issues around hosting and activity development that require attention for long running workflows to be robust. This session looks at the design of the workflow persistence service; issues around hosting and creating full featured asynchronous activities. This session assumes some familiarity with WF.

16:00 Cross my palm with Silver – creating workflow-based WCF services
There are very good reasons for using a workflow to implement a WCF service: workflows can provide a clear platform for service composition (using a number of building block services to generate functionally richer service); workflows can manage long running stateful services without having to write your own plumbing to achieve this. This session introduces the new Visual Studio 2008 Workflow Services. This technology, previously known as “Silver”, provides a relatively seamless integration between WF and WCF, enabling the service developer to concentrate on the application functionality rather than the plumbing. This session assumes some familiarity with WF and WCF.

Friday 14th March - Postcon

A day of connected systems with Visual Studio 2008
Most businesses find themselves building applications that use two or more machines working together to produce their functionality. One of the challenges in this world, apart from the actual business logic being implemented, is connecting the different parts of the application in a way that best fits the environment the machines are placed in – are there firewalls in place? Are some parts of the application written on different platforms such as Java? Do the different parts of the application have to maintain their state over machine restart? Late 2006 saw Microsoft release WCF and WF to tackle some of these challenges. However, parts of the story were left untold – especially the integration between the two.
Visual Studio 2008 introduces a number of new features for writing service based software. Its features build on the libraries released as part of .NET 3.0, providing an integration layer between the two. In this pre/post conference session we start at the basics of how WCF and WF work and then look at the various integration technologies introduced in Visual Studio 2008.

So if you're attending DevWeek I hope to see you there

Wednesday, 06 February 2008 15:16:10 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Monday, 04 February 2008

One of the most powerful features of the Windows Workflow Foundation is its ability to automate state management of long running processes using a persistence service. Microsoft supply a persistence service out of the box - the SqlWorkflowPersistenceService. Unsurprisingly this persists state in SQL Server (or SQL Express).

Once you have a persistence service more possibilities open up: different applications can progress the same workflow instance over time; multiple hosts can process workflows in a scale out model. This second feature needs a little investigation - there is a gotcha hiding there you need to be aware of.

The issue is how to stop more than one host picking up the same instance of a workflow and processing it at the same time – imagine the workflow transferred $10,000,000 from one account to another; you'd hardly want this happening twice. So if the possibility exists for multiple hosts to see the same persistence store, the persistence service must be able to ensure only one host is executing a workflow at any one time.

The SqlWorkflowPersistenceService handles multiple concurrent hosts by using the concept of workflow ownership – the GUID of a host (created randomly by the persistence service constructor) is stamped against a workflow that it is actively executing (not in an idle or completed state). Now the question comes: what if the host dies while executing a workflow? This is what the ownership timeout is for. You set the ownership timeout in the constructor of the SqlWorkflowPersistenceService.

SqlWorkflowPersistenceService sql =
    new SqlWorkflowPersistenceService(CONN,
                                      true,                       // unload on idle
                                      TimeSpan.FromSeconds(20),   // ownership timeout (illustrative value)
                                      TimeSpan.FromSeconds(2));   // timer polling interval (illustrative value)


Here the third parameter specifies how long a host is allowed to run a workflow before persistence occurs. If the host takes longer than this then it will get an error when it attempts to persist. The fourth parameter is the polling interval for how often the persistence service will check for expired timers.

Now the idea is that if a host dies then its lock will time out and another host can pick up the work. There is a problem, however, in the implementation. The persistence service only looks for expired ownership locks when it first starts – not when it polls for expired timers. Therefore, a workflow instance whose host has died mid-processing will only recover if a new host instance starts after the timeout has occurred.

So how can you make this more robust? Well, we need a way to explicitly load the workflows that have had their ownership expire – unfortunately there is no exposed method to do this on the SqlWorkflowPersistenceService. Instead we have to get all the workflows, catch the exception if we load a locked one, and unload any that aren't ready to run. Here is an example:

TimerCallback cb = delegate
{
    // get all the persisted workflows
    foreach (var item in sql.GetAllWorkflows())
    {
        try
        {
            // load the workflow - this will throw a WorkflowOwnershipException if
            // the workflow is currently owned
            WorkflowInstance inst = workflowRuntime.GetWorkflow(item.WorkflowInstanceId);

            // Unload the workflow if it's still idle on a timer
            DateTime timerExpiry = (DateTime)item.NextTimerExpiration;
            if (timerExpiry > DateTime.Now)
            {
                inst.Unload();
            }
        }
        catch (WorkflowOwnershipException)
        {
            // Loaded a workflow locked by another instance - leave it alone
        }
    }
};

Timer t = new Timer(cb, null, 0, 1000);

So this code will attempt to load workflow instances with expired locks every second. Is it a hack? Yes. But without one of two changes to the SqlWorkflowPersistenceService, it's the sort of code you have to write to pick up unlocked workflow instances robustly. The workflow team could:

  • Check for expired ownership locks in the stored procedure that checks for timer expiration
  • Provide a method on the persistence service that explicitly allows the loading of unlocked workflow instances

Maybe in the next version :-)

Monday, 04 February 2008 11:07:39 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Friday, 04 January 2008

Christian noticed a bug in my Async Activity base class that I talked about here and here. The latest code is here

Friday, 04 January 2008 17:20:57 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Friday, 14 December 2007

When working with WF I always find it useful to have a BizTalk background. Issues that are reasonably well known in BizTalk are not immediately apparent in Workflow unless you know they are part and parcel of that style of programming.

One issue in BizTalk arises where you are waiting for a number of messages to start off some processing and you don't know in which order they are going to arrive. In this situation you use a Parallel Shape and put activating receives in the branches, initializing the same correlation set. The orchestration engine understands what you are trying to do and creates a convoy for the remaining messages when the first one arrives. This is known as a Concurrent Parallel Receive. You don't leave the parallel shape until all the messages have arrived, and the convoy ensures that the remaining messages are routed to the same orchestration instance.

There is, however, an inherent race condition in this architecture: if two messages arrive simultaneously then, due to the asynchronous nature of BizTalk, both messages could be processed before the orchestration engine has a chance to set up the convoy infrastructure. We end up with two instances of the orchestration, both waiting for messages that will never arrive. All you can do is put timeouts in place to ensure your orchestrations can recover from that situation and flag the fact that the messages require resubmitting.

With Workflow Services we essentially have the same issue waiting for us. Let's set up the workflow ...

If we call this from a client as follows:

PingClient proxy = new PingClient();
// call the two operations in sequence (operation names assumed)
proxy.Ping1();
proxy.Ping2();

then everything works ok - in fact it works irrespective of the order the operations are called in, that's the nature of this pattern. It works because by the time we make the second call, the first has completed and the context is now cached on the proxy.

But let's make the client a bit more complex:

static IDictionary<string, string> ctx = null;

static void Main(string[] args)
{
    PingClient proxy = new PingClient();
    IContextManager mgr = ((IChannel)proxy.InnerChannel).GetProperty<IContextManager>();

    Thread t = new Thread(DoIt);
    t.Start();

    // use the context if another call has already stored it
    if (ctx != null)
        mgr.SetContext(ctx);

    proxy.Ping1();           // operation name assumed
    ctx = mgr.GetContext();

    Console.WriteLine("press enter to exit");
    Console.ReadLine();
}

static void DoIt()
{
    PingClient proxy = new PingClient();
    IContextManager mgr = ((IChannel)proxy.InnerChannel).GetProperty<IContextManager>();

    if (ctx != null)
        mgr.SetContext(ctx);

    proxy.Ping2();           // operation name assumed
    ctx = mgr.GetContext();
}

Here we make the two calls on different proxies on different threads. A successful call stores the context in the static ctx field, and a proxy uses that context if it has already been set; otherwise it assumes it is making the first call. So here the race condition has made its way all the way back to the client. There are things we could do about this in the client code (for example, holding a lock while making the call so that one call completes and stores the context before the other checks whether the context is null), but that really isn't the point: the messages may come from two separate client applications that both check a database for the context. Again we have an inherent race condition that we need to put architecture in place to detect and recover from.

It would be nice to put the ParallelActivity in a ListenActivity with a timeout DelayActivity. However, you can't do this because a ParallelActivity does not implement IEventActivity, so the ListenActivity's validation will fail (the first activity in a branch must implement IEventActivity). We therefore have to put each receive in its own ListenActivity and time out the waits individually.
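To see why holding a lock across the call closes the client-side window, here is a minimal sketch that simulates the two racing callers with plain threads. No WCF is involved: the Call method and context dictionary are stand-ins I've invented for the real proxy and IContextManager, and a call made with a null context stands in for the service activating a new workflow instance.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public static class ContextRaceDemo
{
    static IDictionary<string, string> ctx = null;
    static readonly object gate = new object();
    static int instancesCreated = 0;

    // Stand-in for an operation call on a proxy: a call made without a
    // context causes the "service" to start a new workflow instance.
    static IDictionary<string, string> Call(IDictionary<string, string> context)
    {
        if (context == null)
        {
            Interlocked.Increment(ref instancesCreated);
            return new Dictionary<string, string>
                { { "instanceId", Guid.NewGuid().ToString() } };
        }
        return context; // routed to the existing instance
    }

    static void MakeCall()
    {
        // Holding the lock across the whole call ensures one caller completes
        // and publishes the context before the other checks whether it is null.
        lock (gate)
        {
            ctx = Call(ctx);
        }
    }

    public static int Run()
    {
        ctx = null;
        instancesCreated = 0;
        Thread t = new Thread(MakeCall);
        t.Start();
        MakeCall();
        t.Join();
        return instancesCreated; // always 1 with the lock in place
    }

    public static void Main()
    {
        Console.WriteLine(ContextRaceDemo.Run()); // 1
    }
}
```

Remove the lock statement and both threads can observe ctx as null before either stores a context, giving two instances - which is exactly the race described above, just moved in-process.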

So is the situation pretty much the same for both WF and BizTalk? Well not really. The BizTalk window of failure is much smaller than the WF one as the race condition is isolated server side. Because WF ropes the client into message correlation the client has to receive the context and put it somewhere visible to other interested parties before the race condition is resolved.

Hopefully Oslo will bring data-orientated correlation to the WF world.

.NET | BizTalk | WCF | WF | Oslo
Friday, 14 December 2007 10:31:35 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Tuesday, 11 December 2007

Yesterday I announced the release of the MapperActivity. I said I'd clean up the source code and post it.

Two things are worth noting:

  1. The dependency on the CodeProject Type Browser has gone
  2. There is a known issue where you can map more than one source element to the same destination - in this case the last mapping you create will win. We will fix this in a subsequent release.

Here's the latest version with the code

MapperActivityLibrary.zip (933.62 KB)

.NET | BizTalk | WCF | WF
Tuesday, 11 December 2007 08:31:41 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Monday, 10 December 2007

One of the neat things you can do in BizTalk is create a map that takes one kind of XML message and transforms it into another. Maps are basically a wrapper over XSLT and its extension infrastructure. The tool you use to create maps is the BizTalk Mapper. In this tool you load in a source and a destination schema - which get displayed in treeviews - and then you drag elements from the source to the destination to describe how to build the destination message.

Now, in WF - especially when using Silver (the WF/WCF integration layer introduced in .NET 3.5) - you tend to spend a lot of time taking data from one object and putting it into another, as different objects are used to serialize and deserialize messages received and sent via WCF. This sounds a lot like what the BizTalk Mapper does, but with objects rather than XML messages.

I was recently teaching Guerrilla Connected Systems (which has now been renamed Guerrilla Enterprise.NET) and Christian Weyer was doing a guest talk on REST and BizTalk Services. Afterwards, Christian and I were bemoaning the amount of mundane code you get forced to write in Silver and how sad it was that WF doesn't have a mapper. So we decided to build one. Unfortunately neither of us is particularly talented when it comes to UI - but we knew someone who was: Jörg Neumann. So with Jörg doing the UI, myself writing the engine and Christian acting as our user/tester, we set to work.

The result is the MapperActivity. The MapperActivity requires you to specify the SourceType and DestinationType - we used the Type Browser from CodeProject for this. Then you can edit the Mappings which brings up the mapping tool

You drag members (properties or fields) from the left hand side and drop them on type compatible members (properties or fields) on the right hand side.

Finally you need to databind the SourceObject and DestinationObject to tell the mapper where the data comes from and where the data is going. You can optionally get the MapperActivity to create the destination object tree if it doesn't exist using the CanCreateDestination property. Here's the full property grid:

And thanks to Jon Flanders for letting me crib his custom PropertyDescriptor implementation (why on earth don't we have a public one in the framework?). You need one of these when creating extender properties (the SourceObject and DestinationObject properties change type based on the SourceType and DestinationType respectively).

Here is the activity - I'll clean up the code before posting that.

MapperActivity.zip (37.78 KB)

Feedback always appreciated

.NET | BizTalk | WCF | WF
Monday, 10 December 2007 21:56:07 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Wednesday, 05 December 2007

In my last post I talked about the base class I created for building custom composites where you want control over the execution logic. Firstly, Josh pointed out that I ought to state its limitations - it only executes (and therefore cancels) one child activity at a time, so it would not be useful for building, for example, the ParallelActivity.

With that limitation in mind, though, there are a number of useful applications. I had a think about a different execution model and came up with prioritized execution. Building this on top of my base class was an interesting exercise as I could use an attached property for the Priority, which meant I didn't limit the children to a special child activity type. I also had a think about the design time and realised that the sequential and parallel models for composite design time weren't appropriate, so I stole^H^H^H^H^H leveraged the designer that the Conditional Activity Group uses, which seemed to work well.

I packaged up the code and told Josh about it

"Oh yeah, thats an interesting one - Dharma Shukla and Bob Schmidt used that in their book"


It turns out that pretty much half the world has decided to build a prioritised execution composite.

Heres the code anyway

PrioritizedActivitySample.zip (76.36 KB)

But then I got to thinking - actually there is a problem with this approach, elegant as it may be. If you cannot change the priorities at runtime then all you are building is an elaborate sequence. It turns out, however, that attached properties cannot be data bound in WF - you have to use a full DependencyProperty created with DependencyProperty.Register rather than DependencyProperty.RegisterAttached. So I reworked the example, creating a special child activity, DynamicPrioritizedSequenceActivity, that has a priority which is a bindable dependency property. I also had to create a validator to make sure only this type of child could be added to the composite.

This neat thing now is you have a prioritized execution composite whose children can dynamically change their priority using data binding. The code for this is here

DynamicPrioritySample.zip (78.83 KB)
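The execution model is easier to see stripped of the WF plumbing. The sketch below is my own illustration (not code from the sample zip): it schedules children one at a time, always picking the highest-priority child that hasn't yet run, and re-reads the priorities on every scheduling decision - so a child that changes another child's priority affects the remaining order, which is exactly what makes this more than an elaborate sequence.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class PrioritySchedulerSketch
{
    // A child "activity": a name, a mutable priority and the work it does
    // (which may change the priorities of the other children).
    public class Child
    {
        public string Name;
        public int Priority;
        public Action<List<Child>> Execute = cs => { };
        public bool Done;
    }

    public static List<string> Run(List<Child> children)
    {
        var order = new List<string>();
        while (children.Any(c => !c.Done))              // more children to execute?
        {
            var next = children.Where(c => !c.Done)     // pick the next child:
                .OrderByDescending(c => c.Priority)     // priorities are re-read on
                .First();                               // every scheduling decision
            next.Execute(children);
            next.Done = true;
            order.Add(next.Name);
        }
        return order;
    }

    public static void Main()
    {
        var children = new List<Child>
        {
            // A runs first and promotes C above B, so the order is A, C, B -
            // a static priority list would have given A, B, C
            new Child { Name = "A", Priority = 3,
                        Execute = cs => cs.First(c => c.Name == "C").Priority = 10 },
            new Child { Name = "B", Priority = 2 },
            new Child { Name = "C", Priority = 1 },
        };
        Console.WriteLine(string.Join(",", Run(children))); // A,C,B
    }
}
```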

So between these two samples we have prioritized execution, designer support, attached property support and validator support.

As usual all comments welcome

Wednesday, 05 December 2007 15:47:10 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Tuesday, 04 December 2007

There are two types of custom composites you may end up writing:

  1. a form of UserControl with a set of activities whose composite functionality you want to reuse. This is what you get when you add a new activity to a workflow project using the menu item. You end up with a class that derives from SequenceActivity, and this is the one occasion with a custom activity where you don't really have to override Execute
  2. a parent that wants to take control over the execution model of its children (think of the execution model of the IfElseActivity, WhileActivity and the ParallelActivity)

To me the second of these is more interesting (although no doubt less common) and involves writing a bit of plumbing. In general, though, there is a common pattern to the code and so I have created a base class that wraps this plumbing.

CustomCompositeBase.zip (55.6 KB)

There are two fundamental overridable methods:

LastActivityReached: returns whether there are more child activities to execute
GetNextActivity: returns the next activity to be scheduled

Inside the zip there is also a test project that implements a random execution composite (highly useful I know).
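To make the contract concrete, here is a sketch in plain C# - the WF scheduling plumbing is elided and only the two overridables named above are mirrored - showing how a random-execution composite needs nothing more than those two methods. This is my illustration of the pattern, not the code from the zip.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the base class contract: run a loop that asks GetNextActivity
// for the next child until LastActivityReached returns true.
public abstract class CompositeBaseSketch
{
    protected List<string> Children = new List<string>();
    protected List<string> Executed = new List<string>();

    protected abstract bool LastActivityReached();
    protected abstract string GetNextActivity();

    public List<string> Run()
    {
        // the real base class hands each child to the workflow runtime to
        // schedule; here we just "execute" them synchronously
        while (!LastActivityReached())
            Executed.Add(GetNextActivity());
        return Executed;
    }
}

// The random-execution composite only has to supply the two overrides.
public class RandomCompositeSketch : CompositeBaseSketch
{
    readonly Random rand = new Random();

    public RandomCompositeSketch(IEnumerable<string> children)
    {
        Children.AddRange(children);
    }

    protected override bool LastActivityReached()
    {
        return Executed.Count == Children.Count;
    }

    protected override string GetNextActivity()
    {
        // pick any child that has not yet executed
        var remaining = Children.Except(Executed).ToList();
        return remaining[rand.Next(remaining.Count)];
    }

    public static void Main()
    {
        var result = new RandomCompositeSketch(new[] { "A", "B", "C", "D" }).Run();
        // every child executes exactly once, in a random order
        Console.WriteLine(result.Count); // 4
    }
}
```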

Feedback, as always, very welcome.

Tuesday, 04 December 2007 09:19:18 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Sunday, 02 December 2007

Yesterday I posted a base class for async activities that buries all the plumbing. However, looking at the zip, I had only included the binary for the base class and the test harness. Here is the zipped code.

AsyncActivityBaseClass12.zip (69.71 KB)

In addition Rod gave me some feedback (for example making the base class abstract messed up the design time view for the derived activity). In the light of that and other ideas I have made some changes:

  • The base class is now concrete and so designer view now works for derived classes
  • If you don't supply a queue name then the queue name defaults to the QualifiedName of the activity
  • Cancellation support is now embedded in the base class.
  • There is a new override called DoAsyncTeardown to allow you to deallocate any resources you allocated in DoAsyncSetup

I hope someone finds this useful

Sunday, 02 December 2007 12:19:15 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Saturday, 01 December 2007

One of the most powerful aspects of Windows Workflow is that it can manage a workflow instance's state when it has no work to do. Obviously, to do this it needs to know that your activity is waiting for something to happen. A number of activities in the Standard Activity Library run asynchronously like this - e.g. the DelayActivity and the new ReceiveActivity. However, most real WF projects require you to write your own custom activities. To perform long running operations or wait for external events to occur, these too should be able to run asynchronously.

To be usable in all situations, an async activity needs to do more than just derive from Activity and return ActivityExecutionStatus.Executing from the Execute override. It also needs to implement IActivityEventListener<QueueEventArgs> and IEventActivity (the latter allows it to be recognised as an event driven activity). We end up with three interesting methods: Execute, IEventActivity.Subscribe and IEventActivity.Unsubscribe.

Depending on where your activity is used, these get called in a different order:

Sequential Workflow: Execute
State Machine: Subscribe, Execute, Unsubscribe
ListenActivity: Subscribe, Unsubscribe, Execute

So getting the behaviour correct - where you create the workflow queue you are going to listen on, set up the event handler on the queue, process the results from the queue and close the activity - can be a little tricky. So I have created a base class for async activities that buries the complexity of the plumbing and leaves you with three responsibilities:

  1. Tell the base class the name of the queue you wish to wait on
  2. Optionally override a method to set up any infrastructure you need before you start waiting on the queue
  3. Override a method to process the data that was passed on to the queue and decide whether you are still waiting for more data or you are finished

You can download the code here AsyncActivityBaseClass.zip (38.36 KB). Any feedback gratefully received

Finally, thanks to Josh and Christian for sanity checking the code

Saturday, 01 December 2007 08:17:49 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Thursday, 07 June 2007

In the previous article I talked about hosting options for WCF. Here I'll discuss hosting for WF. However, to understand the options fully we need to digress briefly and talk about some of the features of WF.

Asynchronous Execution

The execution model of WF is inherently asynchronous. Activities are queued up for execution and the runtime asks the scheduler service to make the calls to Activity.Execute on threads. The DefaultWorkflowSchedulerService (which you get if you take no special action) maps Execute calls on to thread pool worker threads. The ManualWorkflowSchedulerService (which also comes out of the box but which you have to specifically install using WorkflowRuntime.AddService) schedules Execute on the thread that calls ManualWorkflowSchedulerService.RunWorkflow.


Persistence

By default the workflow runtime does not persist workflows. Only when an object of a type that derives from WorkflowPersistenceService is added as a runtime service does the workflow runtime start persisting workflows - both as checkpoints when significant things have happened in the workflow's execution and to unload a workflow when it goes idle. When something happens that should be directed to an unloaded workflow, the host (or a service) calls WorkflowRuntime.GetWorkflow(instanceId) and the runtime can reload the workflow from persistent storage.

One issue here is that of timers. If a workflow has gone idle waiting for a timer to expire (say via the DelayActivity) then no external event is going to occur to trigger a call to WorkflowRuntime.GetWorkflow(instanceId) to allow the workflow to continue execution. In this case the persistence service must have functionality to discover when a timer has expired in a persisted workflow. The out of the box implementation, the SqlWorkflowPersistenceService, polls the database periodically (how often is controlled by a parameter passed to its constructor). Obviously, for this to occur the workflow runtime has to be loaded and running (via WorkflowRuntime.StartRuntime).

Hosting Options

The above two issues affect your WF hosting decisions and implementation. As with WCF you can host in any .NET process, but in many ways the responsibilities of a WF host are more far reaching than those of a WCF host. Essentially a WCF host has to start listening on the endpoints for the services it is hosting. A WF host not only has to configure and start the workflow runtime but also decide when new workflow instances should be created and broker communication from the outside world into the workflow. So let's look at the various options.

WF Console Hosting

When you install the Visual Studio 2005 Extensions for Windows Workflow Foundation package you get new project types, including console projects for both sequential and state machine workflows. These project types are useful for doing research and demos but that's about it.

WF Smart Client Hosting

There are a number of WF features that make it desirable to host WF in a smart client. Among these are the ability to control complex page navigation and the WF rules engine for controlling complex cross-field validation. The Acropolis team are planning to provide WF based navigation in their framework.

WF ASP.NET Hosting

One of the common use cases of WF hosting is controlling page flow in ASP.NET applications. This isn't by any means the sum total of WF's applications in ASP.NET; for example, the website may be the gateway into starting and monitoring long running business processes. There are two main issues with ASP.NET hosting:

  1. ASP.NET executes on thread pool worker threads, as does the DefaultWorkflowSchedulerService. ASP.NET also stores the HttpContext in thread local storage. So if the workflow invokes functionality that requires access to HttpContext, or the ASP.NET request needs to wait for the workflow, then you should use the ManualWorkflowSchedulerService as the scheduler. This also means that the workflow runtime and ASP.NET will not be competing for threads.
  2. For long running workflows the workflow will probably be persisted, and so the persistence service needs to recognise when timeouts have occurred. However, to do this the workflow runtime needs to be executing. So why is this an issue? Well, in ASP.NET hosting the lifetime of the workflow runtime is generally tied to that of the Application object and therefore to the lifetime of the hosting AppDomain. But ASP.NET recycles AppDomains and worker processes - the AppDomain is not recreated until the next request. Therefore, if no requests come in (maybe the site has bursts of activity followed by periods of inactivity) the AppDomain may be recycled and the workflow runtime will stop executing. At this point the persistence service can no longer recognise when timers have expired.

As long as your workflows do not require persistence, or you have (or can create via a sentinel process) a regular stream of requests, ASP.NET is an excellent workflow host - especially if IIS is being used to host WCF endpoints that hand off to workflows. Just remember it is not suitable for all applications.

WF Windows Service Hosting

A Windows Service is the most flexible host for WF as the workflow runtime's lifetime is generally tied to the lifetime of the process. Obviously, you will probably have to write a communication layer to allow the workflows to be contacted by the outside world. In places where ASP.NET isn't a suitable host it can delegate to a Windows Service based host. Just remember there is an overhead to handing off to the Windows Service from another process (say via WCF using the netNamedPipeBinding), so make sure ASP.NET really isn't suitable before you go down this path.

WF is a powerful tool for many applications and the host plays a crucial role in providing the right environment for WF execution. Hopefully this article has given some insight into use cases and issues with the various hosting options.


Thursday, 07 June 2007 13:39:04 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 

I have just uploaded the code from mine and Jon's precon on WCF and WF. You can download it here.

Thursday, 07 June 2007 04:26:21 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Wednesday, 06 June 2007

WCF and WF are both frameworks. Specifically, this means they are not full blown products but rather blocks of utility code with which to build products. For both of these frameworks the hosting process is undefined and it is up to you to provide it. As they are .NET based frameworks, the host can be any .NET process - so what are the advantages and disadvantages of each option? In this, the first of two articles, I'll look at the options and issues in hosting WCF; in the next article I'll do the same for WF.

WCF Console Hosting

Console apps are useful for testing things out and can be useful for mocking up a service if you are testing a client. However, they are not suitable for production systems as they have no resilience associated with them.

WCF Smart Client Hosting

Useful for peer to peer applications.

WCF Windows Service Hosting

This is a viable host for production services. Whether it is the best host depends on a number of questions:

  • What operating system am I running on?
  • What network protocols do I need to support?
  • Do I need to execute any code before the first request comes in?

Assuming you want to run on a server OS, and one that is currently released, you will have to use Windows Server 2003 (.NET 3.0 isn't supported on Windows 2000 or earlier). If you want to use any protocol other than HTTP, or you need to execute code before the first request comes in, you must use a Windows Service to host your WCF service.

If you are happy to run on Vista or Windows Server 2008 Beta (formerly known as Longhorn Server) then you should only use a Windows Service if you need to execute code before the first request arrives.

WCF IIS Hosting

IIS should be your host of choice for WCF services. However, there are two issues to be aware of:

  1. IIS6 and earlier can only host services that use HTTP as a transport. IIS7 introduces the Windows Process Activation Service that allows arbitrary protocols (such as TCP and MSMQ) to have the same activation facilities as HTTP.
  2. IIS provides activation on the first request. If you need to execute code before this (for example, if you have a service that records trend data over time from some external data source and your service queries this trend data) then you cannot use IIS as a host without adding out of band infrastructure to bootstrap the application by performing a request.

So the options for WCF hosting are pretty straightforward. In the next article we'll see that the hosting options for WF have some issues that may not at first be obvious.

Wednesday, 06 June 2007 22:36:24 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 

Just sitting in on a session on extending the WF rules infrastructure given by Moustafa Ahmed. Some interesting ideas. I especially liked the infrastructure for decoupling the rules from the workflow assembly using a repository, a custom workflow service and a custom policy activity. He also showed a custom rules editor that was business user friendly (which the out-of-the-box one isn't) called InRule, which looked pretty good.

Wednesday, 06 June 2007 20:05:31 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   |