# Tuesday, 16 December 2008

With any development environment I go through a series of stages:

  1. Love - this thing is awesome; I never realised how great life would be with these new features
  2. Hate - man, why does it do these stupid things? It really detracts from the cool new features
  3. Acceptance - well, it has foibles, but generally I get my job done faster

I've been in stage 3 with VS 2008 for some time now but two things have always irked me:

When I type a method, red error lines appear due to syntax errors, like this:

Well obviously I get an error - I haven't finished typing yet! I find these sorts of errors very distracting.

The other issue is with implementing interfaces - I love the feature where I can specify a class implements an interface and then press Ctrl+. [Enter] and it puts a skeleton implementation in for me. What I have always found annoying though is the #region it surrounds the implementation with:

So I was looking around the C# editor options today and found I could turn both of these off.

I think I may be heading back to phase 1 of my relationship with VS2008 now :-)

Tuesday, 16 December 2008 17:18:38 (GMT Standard Time, UTC+00:00)
# Monday, 15 December 2008

I spent a while working this one out and none of the google hits showed me what I needed so I thought I'd blog it here. When you hook the Drop event, how do you deal with multiple files being dropped? The GetData("FileNameW") member only returns a single file name in the string array even if multiple files are being dragged. GetData("FileName") returns a single file name in 8.3 compatible munged form, so that doesn't help. I looked in the help for System.Windows.DataObject and it has a GetFileDropList member. However, it turns out that the DragEventArgs.Data property is an IDataObject, not a DataObject. So if you cast it to a DataObject you get access to the file list:

if (e.Data.GetFormats().Contains("FileDrop"))
{
    StringCollection files = null;

    DataObject dataObject = e.Data as DataObject;
    if (dataObject != null)
    {
        files = dataObject.GetFileDropList();
    }
    else
    {
        // not sure if this fallback is necessary but it's defensive coding
        string[] fileArray = (string[])e.Data.GetData("FileNameW");
        files = new StringCollection();
        files.AddRange(fileArray);
    }

    foreach (string file in files)
    {
        // process each dropped file
    }
}

Hopefully someone else looking at how to do this will stumble across this blog post and save some time

Monday, 15 December 2008 21:01:53 (GMT Standard Time, UTC+00:00)

I've added drag and drop support to the BlobExplorer (announced here) so you can drag files from Windows Explorer into cloud blob storage

Rather than maintain the application in multiple places all releases will now go to the cloud here


.NET | Azure | WPF | BlobExplorer
Monday, 15 December 2008 15:52:11 (GMT Standard Time, UTC+00:00)

I will finally submit to Dominick's harassment and stop running Visual Studio elevated. I started doing it because I couldn't be bothered to mess about with HttpCfg and netsh when writing self-hosted WCF apps exposed over HTTP. Registering part of the HTTP namespace with http.sys requires admin privilege. I could have sorted it out with a wildcard registration but never got round to it.

Since writing the BlobExplorer, Christian has been my major source of feature requests. His latest was that he wanted drag and drop. Now I'm not an expert in WPF so I hit google to find out how to do it. And it seemed really simple - just set AllowDrop=true on the UIElement in question and handle the Drop event. But I just couldn't get the app to accept drag events from Windows Explorer. After much frustration I finally realised that the problem had nothing to do with my code but was rather a security issue. The BlobExplorer was running elevated because I started it from Visual Studio, which was also running elevated. Elevated applications will not accept drag events from non-elevated applications.

So I'm setting VS to run normally again and will do the proper thing with http.sys for my WCF applications.

.NET | Azure | WCF | WPF
Monday, 15 December 2008 12:49:41 (GMT Standard Time, UTC+00:00)
# Thursday, 11 December 2008

I think the best feature introduced in .NET 2.0 was anonymous methods in C#. How's that for taking a stance!

I've just been reading an article about anonymous methods coming to VB in .NET 4.0 and while I think this is great news for VB developers, it's worth adding a note of caution due to one of the examples used. Generally, anonymous methods and event handlers can cause unforeseen issues due to the way anonymous methods are implemented under the covers (I'm assuming the VB implementation is broadly the same as the C# one). If you look at the code below:

class Program
{
    static void Main(string[] args)
    {
        Foo f = new Foo();

        f.ItHappened += delegate
        {
            Console.WriteLine("Foo happens");
        };

        f.ItHappened -= delegate
        {
            Console.WriteLine("Foo happens");
        };

        f.Bar();
    }
}

class Foo
{
    public event Action ItHappened;

    public void Bar()
    {
        if (ItHappened != null)
        {
            ItHappened();
        }
    }
}

The problem is the compiler generates two different real methods for the two anonymous methods. Therefore the unwiring of the handler fails. It comes down to this: if you wire up an event handler with an anonymous method, unless you store the delegate in a variable, there is no way to remove that event handler afterwards. There is a well known issue in .NET in terms of memory leaks with event handlers keeping alive objects that otherwise have independent lifetimes. You fix this issue by remembering to remove the event handler when the object is finished with. However, using anonymous methods (or lambdas for that matter) prevents you from removing the event handler and so this issue becomes unresolvable.

So my advice: even though anonymous methods are very powerful, generally use real methods for event handlers unless the lifetime of the event owner and the subscriber are inherently tied together.
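For completeness, when you do want an anonymous method as a handler *and* the ability to unwire it, the trick mentioned above is to store the delegate in a variable so the exact same instance is used for both += and -=. A minimal sketch:

using System;

class Foo
{
    public event Action ItHappened;

    public void Bar()
    {
        if (ItHappened != null)
            ItHappened();
    }
}

class Program
{
    static void Main()
    {
        Foo f = new Foo();

        // keep hold of the delegate...
        Action handler = delegate { Console.WriteLine("Foo happens"); };

        f.ItHappened += handler;
        f.Bar();                  // prints "Foo happens"

        // ...so the *same* delegate instance can be removed later
        f.ItHappened -= handler;
        f.Bar();                  // prints nothing - the handler really was removed
    }
}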

Thursday, 11 December 2008 20:21:46 (GMT Standard Time, UTC+00:00)
# Wednesday, 10 December 2008

As I said in my previous post, asynchrony and persistence enable arguably the most powerful feature of workflow: stateful, long running execution. However, as I also said, the WF runtime has been rewritten for WF 4.0 - so what do asynchrony and persistence look like in the 4.0 release?

First let's look at what was problematic in WF 3.5 in this area:

  • Writing an asynchronous activity was hellishly complex as they got executed differently depending on their parent (State Machine, Listen Activity, standard activity)
  • There was no model for performing async work that didn't open up the possibility of being persisted during processing. This is fine if you are waiting for a nightly extract file to arrive on a share; it's a real problem if you are trying to invoke a service asynchronously, as the workflow may have been unloaded by the time the HTTP response comes back

The WF team have tried to solve both of these problems in this release and, in addition, obviously have to align with the new runtime model I talked about in my last post. Let's talk about async first and then we'll look at the related but separate issue of persistence.

The standard asynchronous model has been streamlined: the communication mechanisms have been wrapped in the concept of a Bookmark. A bookmark represents a resumable point of execution. Let's look at the code for an async activity using bookmarks:

public class StandardAsyncActivity : WorkflowElement
{
    protected override void Execute(ActivityExecutionContext context)
    {
        BookmarkCallback cb = Callback;
        context.CreateNamedBookmark("rlBookMark", cb, BookmarkOptions.MultipleResume);
    }

    int callTimes = 0;

    private void Callback(ActivityExecutionContext ctx, Bookmark bm, object state)
    {
        Console.WriteLine(state);

        callTimes++;
        if (callTimes == 2)
        {
            // close the bookmark; with no outstanding bookmarks the
            // runtime knows the activity is complete
            ctx.RemoveBookmark(bm.Name);
        }
    }
}

This code demonstrates a number of features. In 3.5 an activity indicated it was async by returning ActivityExecutionStatus.Executing from the Execute method. Here, however, the workflow runtime knows that this activity has yet to complete as it has an outstanding bookmark. When the activity is done with bookmark processing (in this case it receives two pieces of data) it simply closes the bookmark, and since the runtime then knows it has no outstanding bookmarks, the activity must be complete. Also notice that no special interfaces require implementation; the activity simply associates a callback to be fired when execution resumes at this bookmark. So how does execution resume? Here's the code:

myInstance.ResumeBookmark("rlBookMark", "hello bookmark");

This code is in the host or an extension (new name for a workflow runtime service) associated with the activity and as you can see is very simple.

With this model, after Execute returns and after each bookmark callback that doesn't close the bookmark, the workflow will go idle (assuming no other activities are runnable) and could be unloaded and/or persisted. So this is ideal for situations where the next execution resumption will be after some time. However, if we're simply performing async IO then the next resumption may be subsecond.

This model will not work for such processing, so there is a lightweight version of async processing in 4.0. Let's look at an example:

public class LightweightAsyncActivity : WorkflowElement
{
    protected override void Execute(ActivityExecutionContext context)
    {
        AsyncOperationContext block = context.SetupAsyncOperationBlock();

        Action a = AsyncWork;

        AsyncCallback cb = delegate(IAsyncResult iar)
        {
            // called on non workflow thread
            a.EndInvoke(iar);

            // tell the workflow infrastructure the async work is done and
            // ask for AsyncProcessingComplete to run on a workflow thread
            block.BeginCompleteOperation(AsyncProcessingComplete, null,
                                         ar => block.EndCompleteOperation(ar),
                                         null);
        };

        a.BeginInvoke(cb, null);
    }

    private void AsyncProcessingComplete(ActivityExecutionContext ctx, Bookmark bm, object val)
    {
        // called on workflow thread
        Console.WriteLine("Bookmark callback");
    }

    private void AsyncWork()
    {
        // called on non workflow thread
        Thread.Sleep(2000);
        Console.WriteLine("Done Sleeping");
    }
}

This time, instead of explicitly creating a bookmark we create an AsyncOperationContext. This then allows us to run on a separate thread (BeginInvoke hands the AsyncWork method to the threadpool) and then when it completes tell the workflow infrastructure that our async work is done. It also allows us to provide another method that will execute on a workflow thread in case there are other workflow related tasks we need to perform (AsyncProcessingComplete in the above example). Although when Execute returns the workflow will go idle, this infrastructure puts in place a new construct called a No Persist Block that prevents persistence while the async operation is in progress.

So these are the two new async models, but how does the new persistence model interact with them? The above description mentions some interaction, but let's look at 4.0 persistence in more detail and then draw the two parts of the post together.

The WorkflowPersistenceService has now been remodelled in terms of Persistence Providers. You get one out-of-the-box, as before, for SQL Server. There are two scripts to run against a database to put the new persistence constructs in place: SqlPersistenceProviderSchema40.sql and SqlPersistenceProviderLogic40.sql. To enable the SQL persistence provider you create an instance of the provider for each workflow instance using a factory and add it as an extension to that workflow instance:

string conn = "server=.;database=WFPersist;integrated security=SSPI";
SqlPersistenceProviderFactory fact = new SqlPersistenceProviderFactory(conn, true);
fact.Open();

WorkflowInstance myInstance = WorkflowInstance.Create(new MyWorkflow());

PersistenceProvider pp = fact.CreateProvider(myInstance.Id);
pp.Open();
myInstance.Extensions.Add(pp);

The factory and the provider must have Open called on them as they both derive from the WCF CommunicationObject class (presumably due to the way this layers in for durable services and Dublin). Note that there is no flag for the provider equivalent to 3.5's PersistOnIdle. It is now up to the host to specifically call Persist or Unload on the WorkflowInstance if it wants the instance persisted or unloaded. So now in the Idle event you may put code like this:

myInstance.Idle += delegate
{
    myInstance.TryPersist();
};

I'll come back to why use TryPersist rather than Persist in a moment. However, there is another job to be done. You need to explicitly tell the persistence provider that you are done with a workflow instance to ensure that it gets deleted from the persistence store when the workflow completes. And so you will typically call Unload in the Completed event:

myInstance.Completed += delegate(object sender, WorkflowCompletedEventArgs e)
{
    myInstance.TryUnload();
};

However, things are a little gnarly here. Remember above that both the Bookmark and AsyncOperationBlock models for async will trigger the Idle event, but the AsyncOperationBlock prevents persistence. That's why we have to use TryPersist and TryUnload, as Persist and Unload will throw exceptions if persistence is not possible. The team have said they are working on getting this revised for beta 1 and, having seen a preview of the kind of things they are looking at, it should be a lot neater then.

So that's the new async and persistence model for WF 4.0. The simplicity and extra functionality now in the async model will take away a good deal of the complexity in building custom activities that play well in the long running workflow infrastructure.



.NET | Dublin | WCF | WF
Wednesday, 10 December 2008 14:32:56 (GMT Standard Time, UTC+00:00)

I've always found the workflow execution model very compelling. It's a very different way of thinking about writing applications but it solves a set of difficult problems, particularly in the area of asynchronous and long-running processing. So when I found out that .NET 4.0 introduces a new version of the workflow infrastructure I was intrigued as to what changes had been made.

The ideas behind workflow have not changed but the WF 4.0 implementation is very different from the 3.5 one, involving a very different API and a rewritten design-time experience. The decision to do a ground-up rewrite was not taken lightly and was necessary to resolve issues that people were facing with 3.5 - particularly around serialization, performance and complexity. So you can say goodbye to WorkflowRuntime, WorkflowRuntimeService, WorkflowQueue, ActivityExecutionStatus, IEventActivity and a number of other familiar classes and interfaces; the new API looks quite different.

So let's look at the code to create and execute a workflow in WF 4.0:

WorkflowInstance myInstance = WorkflowInstance.Create(new MyWorkflow());

// starting the workflow is asynchronous - control returns to the host immediately
myInstance.Run();

Console.WriteLine("Press enter to exit");
Console.ReadLine();

So firstly, two things are the same: you still use a WorkflowInstance object to address the workflow, and starting the workflow is still an asynchronous operation. However, notice that we don't use WorkflowRuntime to create the workflow; everything is done against the WorkflowInstance itself.

There is also a more lightweight workflow execution model if you simply want to inline some workflow functionality:

WorkflowInvoker inv = new WorkflowInvoker(new MyWorkflow());
inv.Invoke();  // executes the workflow synchronously on the calling thread

Workflows are now, by default, authored declaratively in XAML, as are custom activities. In fact the activity model looks quite different. The base unit of execution in a workflow is a WorkflowElement. Activities are composites, normally declared in XAML, and these ultimately derive from WorkflowElement. However, neither of these constructs is actually executed; an ActivityInstance is generated from the activity definition and it is this that is executed. This is because arbitrary state can be attached to an executing activity in the form of variables. An activity can only bind its own state (modelled explicitly in the form of arguments) to variables declared on its parent or other ancestors. This is a very different model from 3.5, where an activity's state could be data bound to properties of any other activity in the entire workflow activity tree. This meant, in effect, that to truly capture the state of a workflow you had to serialize the entire activity tree, including those activities that had finished execution and those that had yet to start. We can see this here

The 4.0 model only requires the currently executing set of activities to be serialized as they cannot bind state outside of their parental relationships


This makes persistence much faster and more compact.

Creating custom activities is often a case of combining building blocks defined as other activities - these are generally modelled in XAML. However, what if you want to create a building block activity yourself? In this case you simply derive from WorkflowElement directly and override the Execute method. This sounds a lot like creating custom activities in 3.5; however, there are important differences. Firstly, state in and out of an activity is explicitly modelled with arguments that have direction; secondly, you no longer have to tell the runtime in Execute whether the activity is finished - the runtime knows based on what actions you perform.

public class WriteLineActivity : WorkflowElement
{
    public InArgument<string> Text { get; set; }

    protected override void Execute(ActivityExecutionContext context)
    {
        // the argument value is resolved against the execution context
        Console.WriteLine(Text.Get(context));
    }
}

Notice here that this activity is assuming that it is running in a Console host. Generally we try to avoid tying an activity to a certain type of host and this was achieved in 3.5 by using pluggable Workflow Runtime Services. The same is true in 4.0 but these services have been renamed Extensions to avoid confusion with Workflow Services (where workflow is used to implement a WCF service). The WorkflowInstance class now has an Extensions collection and the ActivityExecutionContext has a GetExtension<T> method for the activity to retrieve an extension it is interested in.
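As a sketch of what that decoupling might look like - LineWriterExtension is a made-up extension type here, and I'm using the same argument accessor style as the example above:

// a hypothetical extension that decouples the activity from the console
public class LineWriterExtension
{
    private readonly TextWriter writer;
    public LineWriterExtension(TextWriter writer) { this.writer = writer; }
    public void WriteLine(string text) { writer.WriteLine(text); }
}

public class LineWriterActivity : WorkflowElement
{
    public InArgument<string> Text { get; set; }

    protected override void Execute(ActivityExecutionContext context)
    {
        // ask the execution context for the extension rather than
        // assuming a console host
        LineWriterExtension w = context.GetExtension<LineWriterExtension>();
        w.WriteLine(Text.Get(context));
    }
}

The host then decides what the activity actually writes to, e.g. myInstance.Extensions.Add(new LineWriterExtension(Console.Out));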

Some of the real power of the workflow infrastructure only becomes apparent when you plug in a persistence service and your activities are asynchronous so the workflow can be taken out of memory. It is this combination that allows workflows to perform long running processing, and this infrastructure has also changed a great deal. However, I will walk through those changes in my next post.

Wednesday, 10 December 2008 10:44:27 (GMT Standard Time, UTC+00:00)
# Monday, 08 December 2008

For the last two years I have been DevelopMentor's CTO. This has meant determining the technical direction of the company, deciding what courses should be developed, being the technical voice on the management team and much more. However, I have missed cutting production code and so I have decided to step down as CTO and return to consulting and teaching on a freelance basis.

I'm looking forward to getting back to the coalface with WCF, Workflow, BizTalk, Azure, Dublin and all the new goodness coming out of Redmond

Monday, 08 December 2008 15:57:50 (GMT Standard Time, UTC+00:00)

I've uploaded an updated version of the BlobExplorer (first blogged about here). It now uses a decent algorithm for determining MIME type (thanks Christian) and the blob list now retains a single entry when you update an existing blob

Downloadable from my website here

BlobExplorer12.zip (55.28 KB)

downloadable from Azure Blob Storage here

.NET | Azure | BlobExplorer | REST | WPF
Monday, 08 December 2008 09:33:00 (GMT Standard Time, UTC+00:00)
# Sunday, 07 December 2008

The Windows Azure Storage infrastructure has three types of storage:

  • Blob - for storing arbitrary state as a blob (I recently blogged about writing an explorer for blob storage)
  • Table - for storing structured or semi-structured data with the ability to query it
  • Queue - primarily for communication between components in the cloud

This post is going to concentrate on queue storage as its usage model is a bit different from traditional queuing products such as MSMQ. The code samples in this post use the StorageClient library sample shipped in the Windows Azure SDK.

Let's start with the basics: you authenticate with Azure storage using the account name and key provided by the Azure portal when you create a storage project. If you're using the SDK's development storage the account name, key and URI are fixed. Here is the code to set up authentication details with development queue storage:

string accountName = "devstoreaccount1";
string accountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";
string address = "";

StorageAccountInfo info = new StorageAccountInfo(new Uri(address), null, accountName, accountKey);

QueueStorage qs = QueueStorage.Create(info);

Queues are named within the storage, so to create a queue (or address an existing queue) you use the following code:

MessageQueue mq = qs.GetQueue("fooq");
mq.CreateQueue();


The CreateQueue method will not recreate an already existing queue and has an overload to tell you that the queue already existed. You can then create message objects and push them into the queue as follows:

Message msg = new Message(DateTime.Now.ToString());
mq.PutMessage(msg);


Hopefully none of this so far is particularly shocking. Where things start to get interesting is on the receive side. We can simply receive a message so:

msg = mq.GetMessage(5);
if (msg != null)
{
    Console.WriteLine(msg.ContentAsString());
}

So what is interesting about this? Firstly, notice that GetMessage takes a parameter - this is a timeout in seconds. Secondly, GetMessage doesn't block but will return null if there is no message to receive. Finally, although you can't see it from the above code, the message is still on the queue. To remove the message you need to delete it:

Message msg = mq.GetMessage(5);
if (msg != null)
{
    Console.WriteLine(msg.ContentAsString());
    mq.DeleteMessage(msg);
}

This is where that timeout comes in: having received a message, that message is locked for the specified timeout. You must call DeleteMessage within that timeout, otherwise the message is unlocked and can be picked up by another queue reader. This prevents a queue reader trying to process a message and then dying, leaving the message locked. It, in effect, provides a loose form of transaction around the queue without having to formally support transactional semantics, which tend not to scale at web levels.
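Putting the lock/delete semantics together, a hand-rolled receive loop might look something like this (Process stands in for your own handling code, and the two second sleep is an arbitrary back-off):

// drains the queue forever; mq is the MessageQueue from the earlier snippets
static void ReceiveLoop(MessageQueue mq)
{
    while (true)
    {
        Message msg = mq.GetMessage(30);   // locked for 30 seconds once received
        if (msg == null)
        {
            Thread.Sleep(2000);            // queue empty - back off before polling again
            continue;
        }

        Process(msg);                      // must complete within the 30 second window

        // only delete once processing has succeeded; if we had died in
        // Process the lock would lapse and another reader could retry it
        mq.DeleteMessage(msg);
    }
}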

However, currently we have to code some polling logic as the GetMessage call doesn't block. The MessageQueue class in StorageClient also provides the polling infrastructure under the covers and delivers the message via eventing:

mq.MessageReceived += new MessageReceivedEventHandler(mq_MessageReceived);
mq.PollInterval = 2000; // in milliseconds
mq.StartReceiving(); // start polling

The event handler will fire every time a message is received, and when there are no more messages the client will poll the queue according to the PollInterval.

Bear in mind that the same semantics apply to receiving messages this way. It is incumbent on the receiver to delete the message within the timeout, otherwise the message will again become visible to other readers. Notice here we don't explicitly set a timeout, so a default is picked up from the Timeout property on the MessageQueue class (this defaults to 30 seconds).

Using the code above it is very easy to think that everything is running with a nicely tuned .NET API under the covers. Remember, however, that this infrastructure is actually exposed via internet standard protocols. All we have here in reality is a convenient wrapper over the storage REST API. So bear in mind that the event model is just a convenience provided by the class library and not an inherent feature built into queue storage. Similarly, the reason that GetMessage doesn't block is simply that it wraps an HTTP request that returns an empty queue message list if there are no messages on the queue. If you use the GetMessage or eventing API then messages are retrieved one at a time. You can also use the GetMessages API, which allows you to receive multiple messages at once.
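A batched receive might then look something like this (I'm guessing at the exact overload here - check the StorageClient source for the real signature):

// retrieve up to 10 messages in a single round trip to the storage service
foreach (Message msg in mq.GetMessages(10))
{
    Console.WriteLine(msg.ContentAsString());
    mq.DeleteMessage(msg);   // each message must still be deleted individually
}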

.NET | Azure | REST
Sunday, 07 December 2008 11:04:49 (GMT Standard Time, UTC+00:00)
# Saturday, 06 December 2008

I've been digging around in Azure Storage recently and as a side project I decided to write an explorer for Blob Storage. My UI skills are not my strongest suit but I had fun dusting off my WPF knowledge (I have to thank Andy Clymer, Dave "no blog" Wheeler and Ian Griffiths for nursemaiding me through some issues)

Here is a screen shot of the explorer attached to development storage. You can also attach it to your hosted Azure storage in the cloud

In the spirit of cloud storage and dogfooding I used the Blob Explorer to upload itself into the cloud so you can find it here

However, as Azure is a CTP environment I thought I would also upload it here

BlobExplorer1.zip (55.08 KB)

as, on the web, URIs live forever and my storage account URIs may not after the CTP. I hope someone gets some mileage out of it - I certainly enjoyed writing it. Comments, bug reports and feature requests are appreciated

.NET | Azure | BlobExplorer | REST | WPF
Saturday, 06 December 2008 22:00:19 (GMT Standard Time, UTC+00:00)
# Monday, 01 December 2008

DevelopMentor UK ran a one-day post-PDC event on Friday at the Microsoft offices in London. Dominick and I covered WF 4.0, Dublin, Geneva, Azure and Oslo

You can find the materials and demos here

.NET | Azure | Oslo | WCF | WF | Dublin
Monday, 01 December 2008 11:30:39 (GMT Standard Time, UTC+00:00)