# Monday, April 08, 2013

If you have been working with WCF for a while you may have noticed that, by default, messages over a certain size fail to get processed. The reason for this is that WCF tries to protect the message receiver from being swamped with messages that would consume huge amounts of memory to process. The setting that controls this is maxReceivedMessageSize on the binding, which defaults to 64KB.

Notice that maxReceivedMessageSize applies only to the receiver of the message; it doesn’t stop you sending a very large message. The idea is that the sender is in control of what it sends, whereas the receiver has no way to control the size of the message it is being asked to deal with. There is no built-in way to limit the size of a message on the sender side, although you could use a message inspector for this purpose as long as the message isn’t being streamed.
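As a sketch of that idea, a client-side message inspector can measure a buffered copy of the outgoing message and reject it if it is too big. Everything here is illustrative (the class name, the size check); the inspector would still need wiring into the client via an endpoint behavior, which is not shown:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Hypothetical inspector: rejects outgoing messages over a size limit.
// Only works for buffered messages - copying would consume a streamed body.
class MaxSendSizeInspector : IClientMessageInspector
{
    readonly int maxBytes;
    public MaxSendSizeInspector(int maxBytes) { this.maxBytes = maxBytes; }

    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Buffer the message so we can both measure it and still send it on
        MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
        request = buffer.CreateMessage();

        // Rough size estimate based on the message's XML text
        int size = buffer.CreateMessage().ToString().Length;
        if (size > maxBytes)
            throw new InvalidOperationException(
                "Outgoing message exceeds " + maxBytes + " bytes");
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}
```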

However, there are two ways to send messages: buffering and streaming. Buffered messages get fully buffered in the channel layer before being handed to the service model layer whereas streamed messages only buffer the SOAP headers and then pass the body as a stream to the service model layer (this requires a service contract designed for streaming).

Depending on the binding you may be able to change the maximum size of message that can be dealt with purely by changing maxReceivedMessageSize however, for bindings that support streaming (e.g. BasicHttpBinding) there is a second binding parameter that may have to change - the maxBufferSize. If you are buffering messages then the maxReceivedMessageSize and maxBufferSize must be the same. When streaming the maxBufferSize must be large enough to buffer the headers.
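For example, raising both values on a BasicHttpBinding in config might look like this sketch (the binding name and the 1MB figure are purely illustrative):

```xml
<bindings>
  <basicHttpBinding>
    <!-- hypothetical binding configuration allowing 1MB buffered messages -->
    <binding name="largeMessages"
             maxReceivedMessageSize="1048576"
             maxBufferSize="1048576" />
  </basicHttpBinding>
</bindings>
```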

Once the binding allows messages larger than 64KB that may not be the whole story, as there are other default limits in WCF: quotas and serializer limits. The quotas control the maximum size of things like arrays and strings and are controlled by the <readerQuotas> element under the binding. The serializer limit for the DataContractSerializer limits the maximum number of objects in the object graph and is controlled by the DataContractSerializer service behavior. Prior to WCF 4.5 these values often needed adjusting for large messages. However, in WCF 4.5 these values were all defaulted to int.MaxValue as the maxReceivedMessageSize already provides default protection.
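Pre-4.5, a config sketch raising the quotas and the serializer limit alongside the message size might look like this (names and values are illustrative only):

```xml
<bindings>
  <basicHttpBinding>
    <binding name="largeMessages"
             maxReceivedMessageSize="1048576" maxBufferSize="1048576">
      <!-- the reader quotas that most commonly bite with large payloads -->
      <readerQuotas maxArrayLength="1048576"
                    maxStringContentLength="1048576" />
    </binding>
  </basicHttpBinding>
</bindings>
<behaviors>
  <serviceBehaviors>
    <behavior>
      <!-- raise the DataContractSerializer object graph limit -->
      <dataContractSerializer maxItemsInObjectGraph="100000" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```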

If you suspect you are hitting one of these limits, the quickest way to diagnose it is normally to turn on tracing (I recorded a screencast here showing how); the trace will show which limit has been breached.

One final word on default endpoints, introduced in WCF 4.0. If you have changed all the relevant values and you are still seeing maxReceivedMessageSize stopping your messages, make sure that your service config has the right name for the service. If not, and you have a base address, the default endpoint functionality will use a set of defaults anyway, ignoring your carefully crafted values.

Monday, April 08, 2013 12:08:10 PM (GMT Daylight Time, UTC+01:00)
# Sunday, April 07, 2013

Before WCF it was, of course, possible for software running on different machines to communicate. Over the lifetime of Windows there have been many technologies to achieve this: sockets, DCOM, MSMQ, .NET Remoting, ASMX Web Services and more. The problem is all of these have different APIs and different levels of capability. The goal of WCF was to provide a unified API for communication and to be able to provide a common level of service irrespective of the underlying transport.

To understand the structure of WCF and why it looks the way it does, a useful starting point is the four tenets of service orientation. This is not because they are the one true definition of service orientation but because they came from the team that wrote WCF and, therefore, give an insight into the thinking behind its architecture. Fundamentally, the internals of a service should not be visible to software consuming that service, and interaction between systems is based on passing platform-independent messages.

There are three core concepts at play whenever you use WCF: messages, channels and encoders.

Messages are a first class construct in WCF and are modeled on SOAP messages. In other words, they have a payload known as the body and an extensible collection of contextual headers independent of the transport. They also support a specific header called the action header, which is defined in the WS-Addressing specification; the action identifies the intent (purpose) of the message. Messages are represented by the System.ServiceModel.Channels.Message class. How messages are passed between two parties depends on the Message Exchange Pattern. One-way messaging means messages are passed in one direction only. Request/response means that there is a matching response message for each request message. Duplex means that messages are passed freely in both directions without any required causality between them. WCF supports all three of these patterns out of the box.
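As a small illustration, a Message can be created directly with its action header supplied up front (the action URI and body here are made up):

```csharp
using System.ServiceModel.Channels;

class MessageDemo
{
    static Message Create()
    {
        // A SOAP 1.2 / WS-Addressing 1.0 message with a body and an action header
        Message msg = Message.CreateMessage(
            MessageVersion.Soap12WSAddressing10,
            "urn:example:SubmitOrder",    // hypothetical action URI
            "the body payload");          // serialized into the message body
        return msg;
    }
}
```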

There are two types of channels in WCF: transport and protocol. Transport channels are probably the most obvious: they move bytes from one place to another using some transport protocol. WCF, as of 4.5, ships with the following transports: TCP, HTTP, MSMQ, Named Pipes, UDP and WebSockets. It is, however, an extensible model and, for example, there is a RabbitMQ AMQP channel. Protocol channels are designed to layer in functionality not supported by the transport – for example, transaction flow over HTTP. The three core protocol channels that come with WCF are security, transaction flow and reliable messaging. They are implementations of Web Service specifications such as WS-Security, WS-AtomicTransaction and WS-ReliableMessaging, and pass any required information to and fro using the message’s Headers collection.

Messages are objects and transport channels move bytes. Something has to translate between those two worlds, and that something is called an encoder. The encoder controls the format of a message on the wire. WCF comes with four encoders: Text, Binary, MTOM and POX. The Text encoder puts textual XML SOAP messages on the wire. The Binary encoder puts a binary form of the SOAP message on the wire – this binary format is WCF-specific and is not designed to be interoperable with other platforms. The MTOM encoder implements the MTOM specification and is a way to transport binary payloads over SOAP without base64 encoding them. The POX (Plain Old XML) encoder strips off the SOAP pieces and just places the body of the message on the wire and, as such, the payload isn’t limited to XML.

Protocol channels, encoders and transport channels are composed in a channel stack controlling how messages are passed between a sender and receiver.
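That composition is most visible when you build a custom binding in config: the elements are listed in stack order, protocol entries first, then the encoder, then the transport. A sketch (the binding name and the particular stack chosen are illustrative):

```xml
<bindings>
  <customBinding>
    <!-- a hypothetical stack: reliable messaging over binary encoding over HTTP -->
    <binding name="binaryReliableHttp">
      <reliableSession />
      <binaryMessageEncoding />
      <httpTransport />
    </binding>
  </customBinding>
</bindings>
```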

As you can probably tell, the channel stacks on both sides of the communication need to be compatible. Messages, Channels and Encoders make up the WCF Channel Layer.

Programming at the channel layer, however, is relatively heavy lifting and so, generally, we work at a higher level of abstraction that takes care of much of the inherent complexity. This abstraction sits on top of the channel layer and is called the Service Model Layer. With the service model layer there are only two things we have to build (we can build more if we choose to), the code that consumes the service functionality and the implementation of that functionality. All of the plumbing can be automated.


The service model layer has two concepts: endpoints and behaviors. Endpoints are those things that are agreed between the consumer and the service. Behaviors are local functionality that is not visible to the other side of the communication (e.g. how requests are mapped to threads).

You will often hear people talk about the ABCs of WCF. This is because the three component parts of an endpoint are Address, Binding and Contract.

The contract defines what functionality an endpoint supports: the operations and the messages they expect and may return. In WCF, contracts are defined using the [ServiceContract] and [OperationContract] attributes on interfaces or classes. Interfaces are commonly used, as this allows a single service class to implement many contracts.
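A minimal contract sketch (the interface and operation names are made up):

```csharp
using System.ServiceModel;

// Illustrative contract: one operation taking and returning a string
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string SubmitOrder(string orderXml);
}
```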

Bindings specify the “how” of the communication: what transport protocol is used, how security is configured, what the messages look like on the wire, and so on. In other words, a binding defines what the channel layer looks like. There are a number of standard bindings modeling common scenarios and you can also define custom ones if you have other requirements. You can find a list of the standard bindings in WCF 4.5 here.

The address of the endpoint is where the service is listening and where the consumer will send messages. Address structure is determined by the transport protocol being used; for example, HTTP-based addresses start with http:// and contain a machine name and potentially a port number, whereas MSMQ addresses start with net.msmq:// and contain the machine and queue name.

In the service model layer, a service is simply a class that implements one or more contracts. However, something must actively start listening for incoming messages. The ServiceHost class is responsible for instantiating the set of objects that listen at the address and dispatch the messages as method calls to the service implementation. You can either create the ServiceHost instance yourself and call Open on it (known as self hosting) or you can get the Windows Process Activation Service to do this (formally known as WAS hosting). WAS hosting is also commonly referred to as IIS hosting as the service is configured via IIS manager and the ASP.NET infrastructure.
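A minimal self-hosting sketch (the contract, implementation and address are all made up for illustration):

```csharp
using System;
using System.ServiceModel;

// Illustrative contract and implementation
[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

class Program
{
    static void Main()
    {
        // Hypothetical base address; the process must be allowed to listen on it
        var host = new ServiceHost(typeof(Greeter),
            new Uri("http://localhost:8080/greeter"));
        host.AddServiceEndpoint(typeof(IGreeter), new BasicHttpBinding(), "");

        host.Open();   // start listening and dispatching
        Console.WriteLine("Listening - press Enter to stop");
        Console.ReadLine();
        host.Close();
    }
}
```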

Clients use proxy objects to make calls to services. These can be obtained in one of two ways: shared assembly and metadata.

Shared Assembly
Remember that the client must have the same endpoint definition as the service. So where does the client’s view of the contract come from? Sharing an assembly between client and service gives both the definition of the contract. Then, using the same binding and address, the client creates an instance of the ChannelFactory<T> class (where T is the contract) and calls CreateChannel to create the proxy.
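A sketch of the shared-assembly route (the contract would normally live in the shared assembly; the names and address are made up):

```csharp
using System;
using System.ServiceModel;

// In practice this contract comes from the assembly shared with the service
[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

class Client
{
    static void Main()
    {
        // Same binding and address as the service's endpoint
        var factory = new ChannelFactory<IGreeter>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8080/greeter"));

        IGreeter proxy = factory.CreateChannel();
        Console.WriteLine(proxy.Greet("world"));

        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}
```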

Using a shared assembly works fine when it is reasonable for the service and client to share code, but it cannot be used if the service and client are different technologies, and may be best avoided if the client and service are maintained by unrelated teams. In these situations we can use metadata, in the form of WSDL or WS-MetadataExchange, to provide the client with a description of the endpoint which tools (Add Service Reference in Visual Studio and svcutil.exe from the command line) can consume to build the necessary code and configuration to invoke the service. Metadata is not turned on by default and must be enabled using the ServiceMetadataBehavior behavior in the service.
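One common way to enable it in config is a sketch like this, exposing WSDL via an HTTP GET on the service’s base address:

```xml
<behaviors>
  <serviceBehaviors>
    <behavior>
      <!-- allow clients to retrieve metadata with a browser or svcutil.exe -->
      <serviceMetadata httpGetEnabled="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```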

That then is a broad brush for how WCF hangs together. There are many other factors that need to be considered when building services but the raw plumbing of network communication isn’t something with which service developers need to get involved. Of course, there is nothing to stop developers getting involved at many of the extensibility points of WCF but fortunately that is not a requirement to get clients and services talking to each other.

Sunday, April 07, 2013 8:28:47 PM (GMT Daylight Time, UTC+01:00)
# Wednesday, July 25, 2012

Microsoft recently announced the beta of Service Bus 1.0 for Windows Server. This is the on-premise version of the Azure Service Bus that so many have been asking for. There is a good walkthrough of the new beta in the MSDN documentation here, including how to install it.

So why this blog post? Well, if you install the beta on a Windows Server 2008 R2 box and you actually develop on another machine, there are a couple of things not mentioned in the documentation that you need to do (having previously spent long hours glaring at certificate errors allowed me to resolve things pretty quickly).

First a little background into why there is an issue. Certificate validation has a number of steps to it for a certificate to be considered valid:

  1. The issuer of the cert must be trusted. In fact the issuer of the issuer must also be trusted. In fact the issuer of the issuer of the issuer must be trusted. In fact … well you get the picture. In other words, when a certificate is validated all certificates in the chain of issuers must be trusted back to a trusted root CA (normally bodies like Verisign or your corporate certificate server). You will find these in your certificate store in the “Trusted Root Certificate Authorities” section. This mechanism is known as chain trust.
  2. The certificate must be in date (between the valid from and valid to dates)
  3. The usage of the certificate must be correct for the supported purposes of the cert (server authentication, client authentication, code signing, etc)
  4. For a server authentication cert (often known as an SSL cert) the subject of the cert must be the same as the server DNS name
  5. The certificate must not be revoked according to the issuer’s revocation list (if a cert is compromised the issuer can revoke it and publish this in a revocation list)

For 1. there is an alternative called Peer Trust, where the presented certificate must be in the client’s Trusted People section of their certificate store.

There may be other forms of validation, but these are the ones that typically fail.

So the issue is that the installer of Service Bus (SB), if you just run through the default install, generates a Certificate Authority cert and puts it into the target machine’s Trusted Root CA store – it obviously doesn’t put it in the client’s store because it knows nothing about that machine. It also generates a server authentication cert bearing the name of the machine you are installing on. This causes two issues: no cert issued by the CA will be trusted, so an SSL failure happens the first time you try to talk to SB; and the generated SSL cert will generally have problems validating due to, for example, the lack of a revocation list.

The first of these issues can be fixed by exporting the generated CA cert and importing it into the Trusted Root CA store on the dev machine.


The second can be fixed by exporting the SSL cert and importing it into the Trusted People store on the dev machine.
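If you prefer to script the two imports rather than click through the certificates MMC snap-in, the .NET X509Store API can do it. A sketch (the .cer file names are made up, and writing to LocalMachine stores requires elevation):

```csharp
using System.Security.Cryptography.X509Certificates;

class ImportCerts
{
    static void Main()
    {
        // Import the exported CA cert into Trusted Root Certification Authorities
        var root = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
        root.Open(OpenFlags.ReadWrite);
        root.Add(new X509Certificate2("ServiceBusCA.cer"));   // hypothetical file
        root.Close();

        // Import the exported SSL cert into Trusted People
        var people = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);
        people.Open(OpenFlags.ReadWrite);
        people.Add(new X509Certificate2("ServiceBusSsl.cer")); // hypothetical file
        people.Close();
    }
}
```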


With these in place, the Getting Started / Brokered Messaging / QueuesOnPrem sample should work fine.

.NET | Azure | ServiceBus | WCF
Wednesday, July 25, 2012 5:04:54 PM (GMT Daylight Time, UTC+01:00)
# Monday, July 02, 2012

The demos from DEV 326 – my talk on WCF 4.5 at TechEd EMEA are available here. The session was also recorded and the video is now on Channel9 here

Monday, July 02, 2012 2:13:14 PM (GMT Daylight Time, UTC+01:00)
# Wednesday, June 15, 2011

Just a heads up about self hosting the new WCF Web API: by default, if you try to add the Web API references via NuGet you will get a failure (E_FAIL returned from a COM component).

This is due to the likely project types (Console, Windows Service, WPF) defaulting to the Client Profile rather than the full framework. If you change the project to target the full framework, the NuGet packages install correctly.

Yet again bitten by the Client Profile

Wednesday, June 15, 2011 10:21:33 AM (GMT Daylight Time, UTC+01:00)
# Friday, February 04, 2011

I have just found myself answering essentially the same question 4 times on the MSDN WCF Forum about how instances, threading and throttling interact in WCF. So to save myself some typing I will walk through the relationships here and then I can reference this post in questions.

WCF has three built-in instancing models: PerCall, PerSession and Single. They are set via the InstanceContextMode property on the ServiceBehavior attribute on the service implementation. They control how many instances of the service implementation class get used when requests come in, and work as follows:

  • Single – one instance of the implementation class is used for all requests
  • PerCall – every request gets its own instance of the implementation class
  • PerSession – this is the slightly odd one as it means every session gets its own instance. In practical terms it means that for session-supporting bindings (NetTcpBinding, WSHttpBinding, NetNamedPipeBinding, etc.) every proxy gets its own instance of the service. For bindings that do not support sessions (BasicHttpBinding, WebHttpBinding) we get the same effect as PerCall. To add to the confusion, this setting is the default

By default WCF assumes you do not understand multithreading. Therefore, it only allows one thread at a time into an instance of the service implementation class unless you tell it otherwise. You can control this behavior using the ConcurrencyMode on the ServiceBehavior; it has 3 values:

  • Single – this is the default and only one thread can get into an instance at a time
  • Multiple – any thread can enter an instance at any time
  • Reentrant – only makes a difference in duplex services, but means that an inbound request can be received from the component to which you are currently making a request (this sounds a bit vague, but in duplex the roles of service and client are somewhat arbitrary)

Now these two concepts are different but have some level of interaction.

If you set InstanceContextMode to Single and ConcurrencyMode to Single then your service will process exactly one request at a time. If you set InstanceContextMode to Single and ConcurrencyMode to Multiple then your service processes many requests but you are responsible for ensuring your code is threadsafe.
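These combinations are set declaratively on the implementation class. For example, a singleton that allows concurrent calls and must therefore protect its own state (the contract and the counter are made up for illustration):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ICounter
{
    [OperationContract]
    int Increment();
}

// One instance shared by all requests, with many threads allowed in at once -
// so the shared state must be explicitly synchronized
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class Counter : ICounter
{
    readonly object gate = new object();
    int count;

    public int Increment()
    {
        lock (gate) { return ++count; }
    }
}
```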

If you set InstanceContextMode to PerCall then ConcurrencyMode Single and Multiple behave the same, as each request gets its own instance.

For PerSession, ConcurrencyMode Multiple is only required if you want to support a client sending multiple requests through the same proxy from multiple threads concurrently.

Unless you turn on ASP.NET Compatibility, WCF calls are processed on IO threads in the system threadpool. There is no thread affinity so any of these threads could process a request. The number of threads being used will grow until the throughput of the service matches the number of concurrent requests (assuming the server machine has the resources to match the number of concurrent requests). Although the number of IO threads is capped at 1000 by default, if you hit this many then unless you are running on some big iron hardware you probably have problems in your architecture.

Throttling is there to ensure your service is not swamped in terms of resources. There are three throttles in place:

  • MaxConcurrentCalls – the number of concurrent calls that can be made – under .NET 4 defaults to 16 x number of cores
  • MaxConcurrentSessions – the number of concurrent sessions that can be in flight – under .NET 4 defaults to 100 x number of cores
  • MaxConcurrentInstances – the number of service implementation objects that are in use – defaults to the sum of MaxConcurrentCalls and MaxConcurrentSessions

In reality, depending on whether you are using sessions or not, the session or call throttle will affect your service the most. The instance throttle will only affect your service if you set it lower than the others or you do something unusual and handle the mapping of requests to objects yourself using a custom IInstanceContextProvider.

You can control the throttle values using the serviceThrottling service behavior, which you set in the config file or in code.
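In config, the serviceThrottling behavior looks like this sketch (the values are illustrative only – tune them to your workload):

```xml
<behaviors>
  <serviceBehaviors>
    <behavior>
      <!-- illustrative values; the instance value is calls + sessions here -->
      <serviceThrottling maxConcurrentCalls="64"
                         maxConcurrentSessions="400"
                         maxConcurrentInstances="464" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```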

Friday, February 04, 2011 9:50:40 PM (GMT Standard Time, UTC+00:00)
# Tuesday, June 08, 2010


I’m speaking at Software Architect 2010 in October. I’m going to be delivering two sessions on Windows Workflow Foundation 4.0: the first explains the basic architecture and looks at using workflow as a Visual Scripting environment to empower business users. The second looks at building big systems with workflow concentrating on the WCF integration features.

In addition to that I’ll be delivering two all-day workshops with Andy Clymer: Building Applications the .NET 4.0 Way and Moving to the Parallel Mindset with .NET 4.0. The first of these will take a number of new features of .NET 4.0 and show how they can be combined to create compelling applications. The second will look at the Parallel Framework Extensions (PFx) introduced in .NET 4.0 examining both the rich functionality of the library, how it can be best leveraged avoiding common parallel pitfalls and finally looking at patterns that aid parallelisation of your code

.NET | PFx | RSK | WCF | WF | WF4
Tuesday, June 08, 2010 10:52:40 AM (GMT Daylight Time, UTC+01:00)
# Monday, June 07, 2010
.NET | Dublin | WCF | WF
Monday, June 07, 2010 10:05:40 PM (GMT Daylight Time, UTC+01:00)
# Tuesday, March 30, 2010

Rock Solid Knowledge on iTunes is live! We’ve taken the feed to our free screencasts and they are now available through iTunes via the following link


Now you can watch them on the move

.NET | EF4 | PFx | RSK | SilverLight | WCF | WF | WF4
Tuesday, March 30, 2010 9:13:22 PM (GMT Daylight Time, UTC+01:00)
# Wednesday, March 17, 2010

Thanks to everyone who attended my sessions at DevWeek 2010. I’ve now uploaded the demos which you can find at the following locations

A Day of .NET 4.0 Demos

Windows Workflow Foundation 4.0 Demos

Creating WCF Services using WF4 Demos

I’ll be around for the rest of the conference so drop by for a chat at our developer clinic in the exhibition area

.NET | EF4 | RSK | WCF | WF | WF4
Wednesday, March 17, 2010 8:13:59 AM (GMT Standard Time, UTC+00:00)
# Wednesday, March 10, 2010

In my last post I pointed to a screencast I had recorded that showed how to create a custom message filter to plug your own logic into the WCF 4.0 Routing Service. However, the simple custom filter is only the start – you can actually take control of part of the routing table, which allows you to make global decisions about which filters match a particular request. On that basis I have created another screencast that shows how to build one of these more complex custom filters. In this case I use the example of a round-robin load balancer, where a file on the file system indicates whether a specific endpoint should be considered part of the load balancing algorithm.

You can find this last in the series of screencasts on the routing service, along with all the others here, on the Rock Solid Knowledge site


Wednesday, March 10, 2010 1:05:40 PM (GMT Standard Time, UTC+00:00)
# Monday, March 08, 2010

I’ve just uploaded a new screencast to the Rock Solid Knowledge site. This one shows you how to plug your own routing logic into the new Routing Service that is part of WCF 4.0. It uses a custom message filter that can be used to supplement the existing set of filters, such as matching on XPath and Action.

You can find the screencast (along with the previous ones in the series) here


Monday, March 08, 2010 12:37:22 PM (GMT Standard Time, UTC+00:00)
# Tuesday, March 02, 2010

Continuing my screencast series on the Routing Service in WCF 4.0, I have just uploaded one on Data Dependent Routing

You can get to the latest screencast (and all of the others) here http://rocksolidknowledge.com/ScreenCasts.mvc

Tuesday, March 02, 2010 10:50:18 AM (GMT Standard Time, UTC+00:00)
# Monday, March 01, 2010

I have just published my second screencast on one of the new features of WCF 4.0 – the Routing Service. This screencast focuses on enabling multicast and failover with the Routing Service

You can get to the screencast from here http://rocksolidknowledge.com/ScreenCasts.mvc

Monday, March 01, 2010 1:16:51 PM (GMT Standard Time, UTC+00:00)
# Friday, February 26, 2010

Thanks to everyone who attended my two sessions at BASTA! – another thoroughly enjoyable conference. I’ve uploaded the demos

What’s new in Workflow 4.0 – this includes the application with the rehosted designer

Building WCF services with Workflow 4.0

.NET | WCF | WF | WF4
Friday, February 26, 2010 10:02:42 AM (GMT Standard Time, UTC+00:00)
# Friday, February 05, 2010

A while back I wrote a blog post about DataSets and why you shouldn’t use them on service boundaries. The fundamental issues are:

  1. No non-.NET client has any idea what the data looks like you are sending them
  2. Hidden, non-essential data is passed up and down the wire
  3. Your client gets coupled to the shape of your data (usually a product of the Data Access Layer)

So when I saw that Entity Framework 4.0 supports Self Tracking Entities (STEs) I was interested to see how they would work – after all, automated change tracking is one of the reasons people wanted to use DataSets in service contracts. The idea is that as you manipulate the state, the object itself tracks changes to its properties and whether it has been created new or marked for deletion. It does this by implementing an interface called IObjectWithChangeTracker, which is then used by the ObjectContext to work out what needs to be done in terms of persistence. There is an extension method on the ObjectContext called ApplyChanges which does the heavy lifting.

The Entity Framework team has released a T4 template to generate these STEs from an EDMX file, and the nice thing is that the generated entities themselves have no dependency on the Entity Framework. Only the generated context class has this dependency and, so the story goes, the client needs to know nothing about Entity Framework – only the service does. The client, and the entities, remain ignorant of the persistence model. For this reason STEs have been touted as a powerful tool in n-tier architectures.

All of this seems almost too good to be true … and unfortunately it is.

To understand what the issue is with STEs we have to remember what two of the main goals of Service based systems are:

  1. Support heterogeneous systems – the service ecosystem is not bound to one technology
  2. Decoupling – to ensure changes in a service do not cascade to consumers of the service for technical reasons (there may obviously be business reasons why we might want a change in a service to effect the consumers such as changes in law or legislation)

So to understand the problem with STEs we need to look at the generated code. Here’s the model I’m using:


Now let’s look at the T4 template-generated code for, say, the OrderLine:

[DataContract(IsReference = true)]
public partial class OrderLine : IObjectWithChangeTracker, INotifyPropertyChanged
{
    #region Primitive Properties

    [DataMember]
    public int id
    {
        get { return _id; }
        set
        {
            if (_id != value)
            {
                ChangeTracker.RecordOriginalValue("id", _id);
                _id = value;
                OnPropertyChanged("id");
            }
        }
    }
    private int _id;

    // more details elided for clarity
}
A few things to notice: firstly, this is a DataContract and is therefore designed to be used on WCF contracts – that is its intent; secondly, a bit of work takes place inside the generated property setters. The setter checks whether the data has actually changed, records the old value, and then calls OnPropertyChanged to raise the PropertyChanged event (defined on INotifyPropertyChanged). Let’s have a look inside OnPropertyChanged:

protected virtual void OnPropertyChanged(String propertyName)
{
    if (ChangeTracker.State != ObjectState.Added && ChangeTracker.State != ObjectState.Deleted)
    {
        ChangeTracker.State = ObjectState.Modified;
    }
    if (_propertyChanged != null)
    {
        _propertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}

OK, so maybe a bit more than raising the event: it also marks the object as modified in the ChangeTracker. The ChangeTracker state is partly how the STE serializes its self-tracked changes; it is this data that is used by the ApplyChanges extension method to work out what has changed. The thing to remember here is that the change tracking is performed by code generated into the property setters.

Well, the T4 template has done its work, so we create our service contract using these conveniently generated types, and the client uses metadata and Add Service Reference to build its proxy code. It gets an OrderLine from the service, updates the quantity and sends it back. The service calls ApplyChanges on the context, saves the changes and … nothing changes in the database. What on earth went wrong?

At this point we have to step back and think about what the types we use in service contracts are actually doing. They are nothing more than serialization helpers – they bridge the object world to the XML one. Metadata generation uses the type definition and the attribute annotations to generate a schema (XSD) definition of the data in the class. Notice we’re only talking about data – there is no concept of behavior. And this is the problem. When the Add Service Reference code generation takes place it is based on the schema in the service metadata – so the generated objects *look* right, but the all-important code in the property setters is missing. You can change the state of entities in the client, but the service will never be able to work out that the state has changed – so changes don’t get propagated to the database.

There is a workaround for this problem. You take the generated STEs and put them in a separate assembly which you give to the client. Now the client has all of the change tracking code available to it, changes get tracked and the service can work out what has changed and persist it.

But what have we just done? We have forced the client to be .NET. Not only that, we’ve probably compiled against .NET 4.0, so we are requiring the client to be .NET 4.0 aware – we might as well have taken a dependency on Entity Framework 4.0 in the client at this point. In addition, changes I make to the EDMX file are going to be reflected in the T4-generated code – I have coupled my client to my data access layer. Let’s go back to the problems with using DataSets on service boundaries again: we’re pretty much back where we started, although the data being transmitted is more controlled.

So STEs at first glance look very attractive, but in service terms they have a similar effect on the service consumer to using DataSets. So what is the solution? Well, we’re back with our old friends DTOs and AutoMapper. To produce real decoupling and allow heterogeneous environments we have to explicitly model the *data* being passed at service boundaries. How this is patched into our data access layer is up to the service. Entity Framework 4.0 certainly improves matters here over 1.0, as we can use POCOs, which aid the testability and flexibility of our service.
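To make the contrast concrete, here is a sketch of the DTO approach (the type names are made up, and the hand-rolled mapping is exactly the sort of thing AutoMapper can automate):

```csharp
using System.Runtime.Serialization;

// Explicit wire contract - models only the data crossing the boundary
[DataContract]
public class OrderLineDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public int Quantity { get; set; }
}

// Hypothetical persistence entity (e.g. an EF 4.0 POCO)
public class OrderLine
{
    public int Id { get; set; }
    public int Quantity { get; set; }
    public decimal CostPrice { get; set; } // internal data, never serialized
}

static class OrderLineMapper
{
    // Hand-rolled mapping kept inside the service
    public static OrderLineDto ToDto(OrderLine entity)
    {
        return new OrderLineDto { Id = entity.Id, Quantity = entity.Quantity };
    }
}
```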

.NET | EF4 | WCF
Friday, February 05, 2010 12:48:08 PM (GMT Standard Time, UTC+00:00)
# Thursday, February 04, 2010

I’ve just noticed that Andy has blogged about our drop in clinic we are running at DevWeek 2010. All of the Rock Solid Knowledge guys will be at the conference so come over and let us help solve your design and coding problems – whether it be WCF, Workflow, Multithreading, WPF, Silverlight, ASP.NET MVC,  Design Patterns or whatever you are currently wrestling with. We’re also doing a bunch of talks (detailed on our Conferences page)


.NET | RSK | WCF | WF | WF4
Thursday, February 04, 2010 1:19:44 PM (GMT Standard Time, UTC+00:00)
# Monday, January 18, 2010

I’ve just realised that DevWeek 2010 is on the horizon. DevWeek is one of my favourite conferences: lots of varied talks, vendor independence, and I get to hang out with lots of friends for the week.

This year I’m doing a preconference talk and two sessions:

A Day of .NET 4.0
Mon 15th March 2010
.NET 4.0 is a major release of .NET, including the first big changes to the .NET runtime since 2.0. In this series of talks we will look at the big changes in this release in C#, multithreading, data access and workflow.
The best-known change in C# is the introduction of dynamic typing. However, there have been other changes that may affect your lives as developers more, such as support for generic variance, named and optional parameters. There is a whole library for multithreading, PFx, that not only assists you in parallelizing algorithms but in fact replaces the existing libraries for multithreading with a unified, powerful single API.
Entity Framework has grown up – the initial release, although gaining some popularity, missed features that made it truly usable in large-scale systems. The new release, version 4.0, introduces a number of features, such as self-tracking objects, that assist using Entity Framework in n-tier applications.
Finally, workflow gets a total rewrite to allow the engine to be used far more widely, having overcome limitations in the WF 3.5 API.
This workshop will take you through all of these major changes, and we’ll also talk about some of the other smaller but important ones along the way.

An Introduction to Windows Workflow Foundation 4.0
Tues 16th March 2010
.NET 4.0 introduces a new version of Windows Workflow Foundation. This new version is a total rewrite of the version introduced with .NET 3.0. In this talk we look at what Microsoft are trying to achieve with WF 4.0, why they felt it necessary to rewrite rather than modify the codebase, and what new features are available in the new library. Along the way we will be looking at the new designer, declarative workflows, asynchronous processing, sequential and flowchart workflows and how workflow’s automated persistence works.

Creating Workflow-based WCF Services
Tues 16th March 2010
There are very good reasons for using a workflow to implement a WCF service: workflows can provide a clear platform for service composition (using a number of building block services to generate a functionally richer service); workflow can manage long running stateful services without having to write your own plumbing to achieve this. The latest version of Workflow, 4.0, introduces a declarative model for authoring workflows and new activities for message based interaction. In addition we have a framework for flexible message correlation that allows the contents of a message to determine which workflow the message is destined for. In this session we will look at how you can consume services from workflow and expose the resulting workflow itself as a service.

As well as these you may well see me popping up as a guest code-monkey in the other Rock Solid Knowledge sessions. Hope to see you there.

Monday, January 18, 2010 2:52:19 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Thursday, January 07, 2010

The Routing Service is a new feature of WCF 4.0 (shipping with Visual Studio 2010). It is an out-of-the-box SOAP Intermediary able to perform protocol bridging, multicast, failover and data dependent routing. I’ve just uploaded a new screencast walking through setting up the Routing Service and showing a simple example of protocol bridging (the client sends messages over HTTP and the service receives them over NetTcp). This screencast is one of a series I will be recording about the Routing Service. You can find the screencast here.
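As a sketch of the sort of configuration involved (the addresses, filter and table names here are made up for illustration, not taken from the screencast), the Routing Service is hosted like any WCF service and forwards messages via a filter table:

```xml
<system.serviceModel>
  <services>
    <service name="System.ServiceModel.Routing.RoutingService"
             behaviorConfiguration="routing">
      <!-- client-facing endpoint: HTTP in -->
      <endpoint address="http://localhost:8000/router"
                binding="basicHttpBinding"
                contract="System.ServiceModel.Routing.IRequestReplyRouter" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="routing">
        <routing filterTableName="bridgeTable" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <routing>
    <filters>
      <filter name="matchAll" filterType="MatchAll" />
    </filters>
    <filterTables>
      <filterTable name="bridgeTable">
        <add filterName="matchAll" endpointName="realService" />
      </filterTable>
    </filterTables>
  </routing>
  <client>
    <!-- service-facing endpoint: NetTcp out -->
    <endpoint name="realService"
              address="net.tcp://localhost:9000/service"
              binding="netTcpBinding" contract="*" />
  </client>
</system.serviceModel>
```

The protocol bridging falls out of the fact that the inbound endpoint and the outbound client endpoint can use different bindings.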

Thursday, January 07, 2010 1:24:26 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Friday, October 02, 2009

I just got back from speaking at Software Architect 2009. I had a great time at the conference and thanks to everyone who attended my sessions. As promised the slides and demos are now available on the Rock Solid Knowledge website and you can get them from the Conferences page.

.NET | Azure | REST | RSK | WCF | WF
Friday, October 02, 2009 10:12:18 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 

I had a lucky coincidence while speaking at Software Architect 2009. Four hours before I gave a talk on Contract First Design with WCF, the WSCF Blue guys released version 1.0 of their tool. Contract first design is the most robust way to build services that potentially have to exist in a heterogeneous environment – you start from the WSDL, so everyone can consume the service as no implementation details leak on to the wire. The major stumbling block in the WCF world has been that there was no tool that could take a WSDL document and generate the skeleton of a service that implements the defined contract. Finally we have a released tool that does this very thing in the shape of WSCF Blue. In fact it can go further than that: because writing WSDL by hand is too much like heavy lifting for many people, WSCF Blue can also take one or more schemas and run a wizard to build a WSDL document that defines a contract based on the messages in those schema documents.

WSCF Blue solves a problem that has been there since WCF 3.0, and the great thing is that it’s free – being released on CodePlex.


Friday, October 02, 2009 10:06:53 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Saturday, September 26, 2009

Thanks to everyone who came to my sessions at BASTA! My first time at that conference and I had a great time. Here are the slides and demos

What’s New in Workflow 4.0

Contract First Development with WCF

Generics, Extension Methods and Lambdas – Beyond List<T>

Saturday, September 26, 2009 9:24:56 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Sunday, July 26, 2009

Wenlong Dong has just posted about changes to the WCF throttling defaults in WCF 4.0. The new throttle defaults are going to be based on the number of processors/cores in the machine – the more powerful the machine, the higher the throttle will be, which is a good thing. The throttle that hit people first was either concurrent sessions, if the contract/binding settings resulted in a session, or concurrent calls if not. Now the changes (100 * processor count for sessions and 16 * processor count for calls) mean that either one could bite first.
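For reference, the throttles live in the serviceThrottling service behavior, so on earlier versions they could be raised with config along these lines (the numbers here are purely illustrative):

```xml
<behaviors>
  <serviceBehaviors>
    <behavior>
      <!-- pre-4.0 defaults were 16 calls / 10 sessions / 26 instances -->
      <serviceThrottling maxConcurrentCalls="64"
                         maxConcurrentSessions="400"
                         maxConcurrentInstances="464" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```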

As Wenlong says, the original values were always too low for enterprise applications. What they did do, however, is force people to address throttling if they were writing enterprise apps. And Wenlong is right that that sometimes meant people putting the throttles up to int.MaxValue which is really the same as no throttle at all. But although, in general, I think the change is the right move (it mostly works out of the box), it will lead to people facing performance problems with absolutely no idea why they might be taking place.

What I mean by this is that problems caused by the current throttles are really easy to spot: “as soon as I put more than 10 calls through it, it breaks”. You can google that and find the problem and the solution. The new values make it hugely non-obvious why an application suddenly has problems with more than 400 clients (NetTcpBinding on a machine with 4 cores, for example). Good time to be a consultant in the UK and Europe I guess ;-)

Sunday, July 26, 2009 10:24:17 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Saturday, July 04, 2009

One of the frustrations with the current WCF tooling is that, although I can generate the client proxy code from a WSDL document I cannot generate the server side skeleton that matches the WSDL. In situations where two parties agree a contract based on WSDL this makes implementing the service side error prone. Back in the ASMX days WSDL.EXE had a /s switch that would create the service side skeleton, but this feature was not carried through into SVCUTIL.EXE.

Fortunately the WSCF (Web Service Contract First) guys have finally released a beta of a WCF compatible version of their tool that allows you to build up a model of the service and then generate the client and/or service.

Here’s Santosh’s announcement

Saturday, July 04, 2009 10:03:55 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Thursday, July 02, 2009

In my last post I linked to the screencast I made on processing large messages in WCF using buffering. I also said that I would be putting up another one on streaming messages shortly. That second screencast has now gone live on the Rock Solid Knowledge website. In part 2 of the large message handling screencast I talk about enabling streaming, designing contracts for streaming and how this affects the way the receiver has to process the data. You can find the new screencast here.


Thursday, July 02, 2009 1:02:39 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Thursday, June 18, 2009

I’ve just uploaded a new screencast on to Rock Solid Knowledge. This one walks through configuring WCF to be able to pass large messages between client and service. It’s the first of two parts: this one talks about the default mode WCF uses for transferring data – buffering. I’ll be doing another one soon that looks at WCF in streaming mode.

You can find the screencast here

Thursday, June 18, 2009 1:04:34 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Wednesday, June 10, 2009

I am one of the moderators of the MSDN WCF Forum. One of the main areas of questions on the forum is duplex messaging – particularly using the WSDualHttpBinding. So instead of typing long messages repeating the same thing in answer to these questions I’ve decided to write this blog post to give a bit of background about duplex messaging and then discuss the options for bindings and common problems people have.

What is Duplex Messaging?

There are many ways that messages can be exchanged between two parties in a service based system: the client can send messages to the server and never get any back; the client can send a message and wait for a response; the client and service can send each other messages without any pre-defined pattern; the client can send the service a message but not wait synchronously for a response and then the service can send a message back asynchronously; and there are many others. However, the first three of these are supported natively in WCF and are known as one-way, request/response and duplex.

So duplex messaging is where the client and service can send each other unsolicited messages. Most commonly this is characterized by the service sending the client “events” or notifications of the progress of “interesting things”.

Duplex Contracts in WCF

To send messages to each other, the client and service must have an idea of what operations are available and what messages are sent and received during the communication. In WCF this idea is modelled by the contract. Normally a contract just determines what functionality is available at the service. However, now the service is going to be sending messages to the client that the client isn’t specifically waiting for, so it needs an idea of what messages the client can deal with. So we need a contract that models both directions of the conversation.

A bi-directional contract is modelled using two interfaces bound together with a ServiceContract – like this:

[ServiceContract(CallbackContract = typeof(IPizzaProgress))]
interface IOrderPizza
{
    [OperationContract(IsOneWay = true)]
    void PlaceOrder(string PizzaType);
}
interface IPizzaProgress
{
    [OperationContract(IsOneWay = true)]
    void TimeRemaining(int minutes);
    [OperationContract(IsOneWay = true)]
    void PizzaReady();
}
The important bit here is the CallbackContract, which establishes the relationship between the service’s and client’s contracts.

Writing the Service

The service is implemented normally apart from two issues: firstly, it needs to access the callback contract to be able to send messages back to the client; secondly, the communication infrastructure (modelled by the binding) needs to be able to cope with duplex messaging. Let’s look first at accessing the callback contract:

class PingService : IOrderPizza
{
    IPizzaProgress callback;
    public void PlaceOrder(string PizzaType)
    {
        callback = OperationContext.Current.GetCallbackChannel<IPizzaProgress>();
        Action preparePizza = PreparePizza;
        preparePizza.BeginInvoke(ar => preparePizza.EndInvoke(ar), null);
    }
    void PreparePizza()
    {
        for (int i = 9; i >= 0; i--) { callback.TimeRemaining(i); Thread.Sleep(1000); }
        callback.PizzaReady();
    }
}

The critical line here is the call to GetCallbackChannel on the OperationContext. This gives the service access to a proxy it can use to call back to the client.

Now the service also needs to use a binding that is compatible with duplex messaging. WSHttpBinding is the default in the built-in WCF project templates but it does not support duplex messaging. People generally then move to the WSDualHttpBinding, which is similar to the WSHttpBinding but does support duplex. I will go into more depth about bindings for duplex shortly, but let’s stick with this for now – it will work in our test rig on a single machine without issue.
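As a sketch, the service’s config would expose an endpoint over the duplex-capable binding (the namespace and address here are assumptions made for the example):

```xml
<services>
  <service name="PizzaServer.PingService">
    <!-- wsDualHttpBinding supports duplex; wsHttpBinding does not -->
    <endpoint address="http://localhost:8080/pizza"
              binding="wsDualHttpBinding"
              contract="PizzaServer.IOrderPizza" />
  </service>
</services>
```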

Writing the Client

If the client is going to receive these messages it needs to provide an implementation of the callback contract. It can get its definition from either a shared contract assembly or from metadata. If using metadata, the callback contract will be named the same as the service’s contract but with the word Callback appended. The client will also need to supply this implementation to the WCF infrastructure, and it does this by wrapping an instance in an InstanceContext object and passing it to the proxy constructor. So here is the client:

class Program
{
    static void Main(string[] args)
    {
        InstanceContext ctx = new InstanceContext(new Callback());
        OrderPizzaClient proxy = new OrderPizzaClient(ctx);
        proxy.PlaceOrder("Margherita");
        Console.WriteLine("press enter to exit");
        Console.ReadLine();
    }
}
class Callback : IOrderPizzaCallback
{
    public void TimeRemaining(int minutes) { Console.WriteLine("{0} minutes remaining", minutes); }
    public void PizzaReady() { Console.WriteLine("Pizza is ready"); }
}

Running the service and the client will have this working quite happily – so it would seem that duplex messaging and WCF work very well … so why on earth do people keep asking questions about it on the WCF forums?

It Worked on My Machine but Broke when we Deployed It!

Ah well, you probably did the thing that is obvious but almost always a bad idea: you chose WSDualHttpBinding as your duplex binding. To understand why this is a bad idea we need to dig a little deeper into how the WSDualHttpBinding works. HTTP is a unidirectional protocol: the client makes a request and the server sends a response. There is no way for a server to initiate an exchange with the client. So how on earth is duplex messaging going to work, when it requires exactly this facility? Well, the “Dual” in the name is significant: the WSDualHttpBinding actually consists of two connections, one outbound from client to server and one inbound from server to client – this second connection may already be ringing alarm bells with you. There are two big problems with inbound connections to a client: firewalls very often block inbound connections to clients; and the client may not be reachable from the server – it may be using NAT translation behind a router and so cannot be contacted without port forwarding being set up on the router. Both of these issues are showstoppers in real network topologies. You can take some small steps to help – you can specify what port the client should listen on, for example, by using the clientBaseAddress property of the WSDualHttpBinding. This means the network admin will only have to punch one hole in their firewall (but let’s face it, network admins don’t allow any holes to be punched in the firewall).
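For illustration, clientBaseAddress is set on the binding in the client’s config, pinning down where the callback endpoint listens (the machine name and port here are invented):

```xml
<bindings>
  <wsDualHttpBinding>
    <!-- the client listens here for the service's inbound connection -->
    <binding name="duplex"
             clientBaseAddress="http://client-machine:8001/callback" />
  </wsDualHttpBinding>
</bindings>
```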

So if you really shouldn’t use WSDualHttpBinding for duplex, what should you use instead? Well, NetTcpBinding supports duplex out of the box, and the nice thing is that the outbound connection it establishes can also be used for inbound traffic – suddenly we don’t have the inbound connection firewall/NAT issues. “But hold on, isn’t NetTcpBinding for the intranet? I’ve read books that tell me that in their ‘which binding should I use?’ flowcharts!” Well, it turns out those flowcharts are talking rubbish – NetTcpBinding works very happily over the internet, it’s just not interoperable by design. “Aha! But I need interop so WSDualHttpBinding is for me!” Well, unfortunately not: NetTcpBinding is non-interoperable by design, WSDualHttpBinding is non-interoperable despite its design. The name suggests interoperability, but Arun Gupta from Sun wrote this excellent post describing why it isn’t.

So, seeing that we are really not talking about interop anyway, NetTcpBinding is far more useful than WSDualHttpBinding. It’s not bulletproof: if the firewall only allows port 80 outbound but also allows port 80 inbound, then WSDualHttpBinding would work where NetTcpBinding wouldn’t – but in this situation we’re really talking server to server, and so I’d argue it’s probably better to roll your own bidirectional communication with two standard HTTP based connections.

The final option you have for duplex communication is to add a piece of infrastructure into the mix. The .NET Services Service Bus (part of the Azure platform) allows two parties to exchange messages with both making outbound connections – potentially even using HTTP port 80. The two outbound connections rendezvous in the Service Bus, which mediates their message exchanges. If the receiver has had to use outbound port 80 then it polls to receive messages bound for it.

It Worked for the First 10 Clients and then the Rest Timed Out!

Irrespective of which of the standard bindings you are using, duplex assumes a constant relationship between proxy and service. In WCF this is modelled by the concept of session. All duplex bindings require session. A while back I wrote in some detail about sessions. You will have to either put up with increasing the session throttle (see the linked article for details) or roll your own custom binding that can do duplex without session – you can find an example of this here.

I Use the Callback While the Client is calling the Service and it Deadlocks!

This is because your client is probably a rich client GUI application (Windows Forms or WPF). To understand why this is a problem we need to step back briefly and look at UI threading and WCF threading. UI applications have a rule: you must only update the UI from the thread that created the UI components. In general a GUI application has one UI thread, so anything that changes the UI needs to be done from that thread. .NET 2.0 introduced a construct to simplify the process of a background thread updating the UI: SynchronizationContext. The idea is that a UI framework creates an implementation of a SynchronizationContext-derived class that handles the mechanics of marshalling a call on to the UI thread. An instance of this implementation is then made available on the UI thread and is accessible via SynchronizationContext.Current.

WCF adds more complexity into the mix by enforcing a rule that says “unless you tell me otherwise I will only allow one thread at a time into an object that I control”. You see this with singleton services, which will only allow one call at a time by default. The same is also true of the callback implementation object – WCF will only allow one active thread in the client at a time. So while WCF is performing an outbound call it will not allow an inbound call into the object. This causes the deadlock: the service’s callback cannot be dispatched while the client’s outbound call is in progress. To solve this we use the “unless you tell me otherwise” part of the above rule, by annotating the callback implementation class with a [CallbackBehavior] attribute like this:

[CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
class Callback : IOrderPizzaCallback
{
    public void TimeRemaining(int minutes) { Console.WriteLine("{0} minutes remaining", minutes); }
    public void PizzaReady() { Console.WriteLine("Pizza is ready"); }
}

But now there is another problem: by default WCF will attempt to dispatch the callback using an available SynchronizationContext. The problem here is that the UI thread is already blocked in an outbound call. So for the call to dispatch we need to tell WCF not to use the SynchronizationContext – again using the CallbackBehavior attribute:

[CallbackBehavior(ConcurrencyMode=ConcurrencyMode.Reentrant, UseSynchronizationContext=false)]
class Callback : IOrderPizzaCallback

Now the issue, of course, is that the call is going to be processed on a non-UI thread, so you will have to manually marshal any UI interaction using the SynchronizationContext.Post method.
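A minimal sketch of what that manual marshalling might look like (progressLabel and the capture point are assumptions for the example, not part of the original sample):

```csharp
[CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant, UseSynchronizationContext = false)]
class Callback : IOrderPizzaCallback
{
    // captured while the callback object is constructed on the UI thread
    SynchronizationContext uiContext = SynchronizationContext.Current;

    // progressLabel: assumed to be some UI control the form has handed to this class
    public void TimeRemaining(int minutes)
    {
        // WCF dispatches this on a worker thread; Post hops back to the UI thread
        uiContext.Post(_ => progressLabel.Text = minutes + " minutes remaining", null);
    }

    public void PizzaReady()
    {
        uiContext.Post(_ => progressLabel.Text = "Pizza is ready", null);
    }
}
```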

Duplex messaging can be a useful message exchange pattern but in WCF there can be some unexpected issues. Hopefully this blog post clarifies those issues and demonstrates workarounds for them.

.NET | Azure | WCF
Wednesday, June 10, 2009 7:54:13 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Thursday, May 21, 2009

We’ve just posted a new screencast at Rock Solid Knowledge. In this one I walk you through enabling WCF tracing to assist in diagnosing bugs in your services, using the example of a serialization failure. You can find the screencast here.


Thursday, May 21, 2009 12:22:42 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Monday, May 18, 2009

Soma has blogged that beta 1 has finally shipped. Lots of new stuff in here since PDC. His blog post is here


Monday, May 18, 2009 4:06:32 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 

We’ve started a library of screencasts at Rock Solid Knowledge. I’ve created one on WCF serialization and one on using SSL with self-hosted WCF services. Andy Clymer has recorded one on Visual Studio tricks and tips and Dave Wheeler has one on character animation in Silverlight 2.0. You can find the screencasts at


We’ll be adding to the library as time goes on so subscribe to the library RSS feed

.NET | RSK | SilverLight | WCF
Monday, May 18, 2009 11:50:15 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Saturday, May 02, 2009

Cliff Simpkins has just posted a blog entry detailing some of the changes between the PDC preview of WF 4.0 and what is coming in beta 1, due shortly.

Important information here, not just about beta 1 but also about things you can expect and things that won’t make it into RTM.

Saturday, May 02, 2009 12:27:00 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Sunday, April 05, 2009

I’ve been doing some work with the WCF REST Starter Kit for our website http://rocksolidknowledge.com. Preview 2 of the starter kit has a bunch of client side plumbing (the original release concentrated on the service side).

The client side code looks something like this:

HttpClient client = new HttpClient("http://twitter.com");
HttpResponseMessage response = client.Get("statuses/user_timeline/richardblewett.xml");
string content = response.Content.ReadAsString();


As compact as this is, I was a bit disappointed to see that I only had a few options for processing the content: ReadAsString, ReadAsStream and ReadAsByteArray. Seeing as they had a free hand to give you all sorts of processing options, I was surprised there weren’t more. However, one of the assemblies in the starter kit is called Microsoft.Http.Extensions. So I opened it up in Reflector and, lo and behold, there are a whole bunch of extension methods in there – so why wasn’t I seeing them?

Extension methods become available to your code when their namespace is in scope (e.g. when you have a using statement for the namespace in your code). It turns out that the team put the extension methods in the namespaces appropriate to the technology they were exposing. So for example the ReadAsXElement extension method is in the System.Xml.Linq namespace and the ReadAsXmlSerializable<T> method is in the System.Xml.Serialization namespace.

Although I really like the functionality of the WCF REST Starter Kit, this particular practice seems bizarre to me. Firstly, it makes the API counter-intuitive – you use the HttpClient class and there is no hint in the library that there are a bunch of hidden extensions. Secondly, injecting your extensions into someone else’s namespace increases the likelihood of extension method collisions (where two libraries define the same extension method in the same namespace). The same named extension method in different namespaces can be disambiguated; the same named extension in the same namespace gives you no chance.

I think if you want to define extension methods then you should keep them in your own namespace – it makes everyone’s life simpler in the long run.
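To make the point concrete, here is a sketch of an extension method kept in its own namespace (the namespace and the helper method are hypothetical; HttpContent is the starter kit’s content type):

```csharp
namespace RockSolidKnowledge.HttpExtensions // our namespace, not System.Xml.Linq
{
    public static class HttpContentExtensions
    {
        // hypothetical helper: only visible when the caller writes
        // "using RockSolidKnowledge.HttpExtensions;" - discoverable, no collisions
        public static int ContentLength(this Microsoft.Http.HttpContent content)
        {
            return content.ReadAsByteArray().Length;
        }
    }
}
```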

Sunday, April 05, 2009 4:58:48 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Saturday, March 28, 2009

Thanks all who attended my post conference day on a day of connected systems with .NET 4.0, Dublin and Oslo. You can get the demos here.

Dublin | Oslo | WCF | WF
Saturday, March 28, 2009 8:39:03 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Friday, February 20, 2009

A while back I did this post talking about how WCF contract definitions should model the messages being passed and not use business objects. This inevitably means that you have to translate from the Data Transfer Object (DTO) in the contract to the business object and back again. This can feel like a lot of overhead but it really does protect you from a lot of heartache further down the line.

However, Dom just pointed out AutoMapper to me. This is in its early stages but looks like the kind of library that will really take away a lot of the grunt work associated with using DTOs – kudos!
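As a flavour of what this buys you (Order and OrderDto are made-up types, and this uses AutoMapper’s early static API):

```csharp
// one-off configuration: build a map from business object to DTO by convention
Mapper.CreateMap<Order, OrderDto>();

// at the service boundary: translate without writing property-by-property code
OrderDto dto = Mapper.Map<Order, OrderDto>(order);
```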

Friday, February 20, 2009 6:55:55 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Friday, January 23, 2009

Sometimes you have to wonder if this subject will ever go away …

A few weeks ago I posted on using OO constructs in your DataContracts. It’s one of those things that is understandable when .NET developers first start building message based systems. Another issue that raises its head over and over again goes along the lines of “I’m trying to send a DataSet back to my client and it just isn’t working properly”. This reminds me of the old joke:

Patient: Doctor, doctor it hurts when I do this (patient raises his arm into a strange position over his head)
Doctor: Well don’t do it then

So what is the issue with using DataSets as parameters or return types in your service operations?

Let’s start off with interoperability – or the total lack thereof. If we are talking about interoperability we have to think about what goes into the WSDL for the DataSet – after all, it is a .NET type. In fact DataSets, by default, serialize as XML – so surely it must be OK! Here’s what a DataSet looks like in terms of XML Schema in the WSDL:

<xs:sequence>
  <xs:element ref="xs:schema" /> 
  <xs:any />
</xs:sequence>

In other words: I’m going to send you some … XML – you work out what to do with it. But hold on – if I use Add Service Reference, it *knows* it’s a DataSet, so maybe it is OK? Well, WCF cheats; just above that sequence is another piece of XML:

<ActualType Name="DataSet" Namespace="http://schemas.datacontract.org/2004/07/System.Data"
            xmlns="http://schemas.microsoft.com/2003/10/Serialization/" /> 

So WCF cheats by putting an annotation that only it understands into the schema, so it knows to use a DataSet. If you really do want to pass back arbitrary XML as part of a message then use an XElement.

So how about if I have WCF on both ends of the wire? Well, then you’ve picked a really inefficient way to transfer the data around. You have to remember how highly functional a DataSet actually is. It’s not just the data in the tables that supports that functionality; there is also: change tracking data to keep track of what rows have been added, updated and removed since the DataSet was filled; relationship data between tables; and a schema describing itself. DataSets are there to support disconnected processing of tabular data, not to act as a general purpose data transfer mechanism.

Then you may say: “hey, we’re running on an intranet – the extra data is unimportant”. So the final issue you get with a DataSet is tight coupling of the service and the consumer. Changes to the structure of the data on one side of the wire cascade to the other side, to someone who may not be expecting the changes. Admittedly not all changes will be breaking ones, but are you sure you know which ones will be and which ones won’t? As long as the data you actually want to pass isn’t changing, why are you inflicting this instability on the other party in the message exchange? The likelihood that you will have to make unnecessary changes to, say, the client when the service changes is increased with DataSets.

So what am I suggesting you do instead? Model the data that you do want to pass around using a DataContract (or XElement if you truly want to be able to pass untyped XML). Does this mean you have to translate the data from a DataSet to this DataContract when you want to send it? Yes it does, but that code can be isolated in a single place. When you receive the data as a DataContract and want to process it as a DataSet, does this mean you have to recreate a DataSet programmatically? Yes it does, but again you can isolate this code in a single place.
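A sketch of what that isolated translation code might look like (the table and column names are invented for the example):

```csharp
[DataContract]
class OrderData
{
    [DataMember] public int Id;
    [DataMember] public string Product;
}

// the one place that knows how to turn the DataSet into the wire format
static List<OrderData> ToOrderData(DataSet ds)
{
    List<OrderData> orders = new List<OrderData>();
    foreach (DataRow row in ds.Tables["Orders"].Rows)
    {
        orders.Add(new OrderData { Id = (int)row["Id"], Product = (string)row["Product"] });
    }
    return orders;
}
```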

So what does doing this work actually buy you? You get something which is potentially interoperable, that only passes the required data across the wire and that decouples the service and the consumer.

Friday, January 23, 2009 4:49:20 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Thursday, January 22, 2009

I’ve been writing a lab on Workflow Services in 4.0 recently. Part of what I was showing was the new data orientated correlation (the same kind of mechanism that BizTalk uses for correlation). So I wanted to have two operations that were correlated to the same workflow instance based on data in the message (rather than a smuggled context id as 3.5 does it). As I was writing this lab I suddenly started getting an InvalidOperationException stating DispatchOperation requires Invoker every time I brought the .xamlx file up in a browser. It appeared that others had seen this as well but not really solved it. So I dug around looking at the XAML (workflow services can be written fully declaratively now) and the config file and could see no issues there. I asked around but no one I asked knew the reason.

So I created a simple default Declarative Workflow Service project and that worked OK. I compared my lab workflow with the default one and it suddenly dawned on me what was wrong. The default project has just one operation on the contract and has a ServiceOperationActivity to implement it. My contract had two operations but I had, so far, only bound one ServiceOperationActivity. In other words, I had not implemented the contract. This is obviously an issue and, looking back, I’m annoyed I didn’t see it sooner.

However, the problem is that this is a change in behavior between 3.5 and 4.0. In 3.5, if I didn’t bind a ReceiveActivity to every operation I got a validation warning but I could still retrieve metadata; in 4.0 you get a fatal error. It’s not hugely surprising that the behavior has changed – after all, the whole infrastructure has been rewritten.

On the whole it’s a good thing that implementation of the contract is enforced – although it would be nice if the validation infrastructure caught this at compile time rather than it being a runtime failure.

.NET | BizTalk | WCF | WF
Thursday, January 22, 2009 10:10:15 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Wednesday, January 21, 2009

A forum question was asking how to create a host that dynamically loaded up services deployed in DLLs. I told the poster to use reflection but he wasn't familiar with it, so I thought I would quickly knock up a sample.

The host looks in a folder under the directory in which it is executing. The service must be annotated with a [ServiceClass] attribute (defined in a library that accompanies the host). Then you set up the config file for the service ([ServiceBehavior(ConfigurationName="<some name>")] helps decouple the config from the actual implementing type nicely).
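The core of the host is a reflection loop along these lines (ServiceClassAttribute comes from the accompanying library mentioned above; the folder name and variable names are assumptions, so treat this as a sketch rather than the sample itself):

```csharp
// scan the services folder for DLLs and host every type marked [ServiceClass]
foreach (string file in Directory.GetFiles(Path.Combine(baseDir, "Services"), "*.dll"))
{
    Assembly asm = Assembly.LoadFrom(file);
    foreach (Type type in asm.GetTypes())
    {
        if (type.GetCustomAttributes(typeof(ServiceClassAttribute), false).Length > 0)
        {
            // endpoints and behaviors come from config, keyed by ConfigurationName
            ServiceHost host = new ServiceHost(type);
            host.Open();
        }
    }
}
```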

The sample is here

DynamicHost.zip (77.07 KB)

Wednesday, January 21, 2009 11:00:56 AM (GMT Standard Time, UTC+00:00)
# Friday, January 16, 2009

Hanging out on the WCF Forums, it appears that one in five answers I give is to do with sessions in WCF. As a result, rather than write the same thing over and over again, I decided to distil my forum posts into a blog post that I can link to.

WCF has the concept of a session. A session is simply an identifier for the proxy making the call. Where does this ID come from? Well, it depends on the binding: with NetTcpBinding the session is inherent in the transport – TCP is a connected, session based transport; with WSHttpBinding the session is an artifice, piggybacking on either WS-SecureConversation or WS-ReliableMessaging. So different bindings go via different mechanisms to achieve the concept of a session. However, not all bindings support sessions – BasicHttpBinding, for example, has no mechanism for maintaining one. Be aware that every mechanism for maintaining a session in WCF has issues with load balancing: the relationship between proxy and service is a machine-bound one, so if you want to use load balancing you will have to use sticky sessions.

So what does having a session do for me? Well, some parts of the infrastructure rely on sessions – callbacks, for example. WCF also supports the ability to associate a specific instance of the service implementation class with a session (called PerSession instancing). Having a specific instance for the session can seem very attractive in that it allows you to make multiple calls to a service from a specific proxy instance, and the service can maintain state in member variables of that service instance on behalf of the client. In fact, if the binding supports sessions, this is the default model for mapping service instances to requests. WCF also supports two other instancing models (controlled by the service behavior InstanceContextMode): PerCall – each request gets its own instance; Single – all calls use a single instance.
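The instancing model is selected with the ServiceBehavior attribute; for example (CalculatorService and ICalculator are just placeholder names):

```csharp
// opt out of the default PerSession model: a new service object per call
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class CalculatorService : ICalculator
{
    public int Add(int x, int y) { return x + y; }
}
```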

Session has an impact beyond the service – client proxy behavior is affected by the idea of session too. Say the client crashes: what happens to the session? Again this depends on the binding. NetTcpBinding is connection orientated, so the service side gets torn down if the client crashes. WSHttpBinding on the other hand rides on top of a connectionless protocol (HTTP) and so if the client crashes the service has no idea. In this situation the session will eventually time out depending on the configuration of WS-SecureConversation or WS-ReliableMessaging – the default is 10 minutes. To tear down the session the proxy has to call back to the service to say that it is done. This means that closing the proxy in the client is crucial to efficient session management and that Close with WSHttpBinding will actually round trip to the service. If the service has died, Close will throw an exception in this situation and you must call Abort to clean up the proxy correctly.
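The client-side consequence is the familiar Close/Abort pattern; a sketch (the proxy type and operation names are placeholders):

```csharp
OrderClient proxy = new OrderClient();
try
{
    proxy.SubmitOrder();
    // Close round trips to the service on session-based bindings
    proxy.Close();
}
catch (CommunicationException)
{
    // service gone or channel faulted - Abort cleans up locally
    proxy.Abort();
}
catch (TimeoutException)
{
    proxy.Abort();
}
```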

Session also has effects that are often hidden during development. WCF has an inbuilt throttle to control concurrent operations. There are three throttles: concurrent requests (self explanatory); concurrent objects (remember that there are options as to how many service instances service requests – in reality this one is very unlikely to affect you); and, most importantly for the discussion here, concurrent sessions – this defaults to 10. So if you do not clean up your proxies then, by default, your service will support only 10 concurrent ones, which is generally way too low for many systems.
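The throttle values can be raised in config via the serviceThrottling behavior; the numbers below are purely illustrative:

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="Throttled">
      <serviceThrottling maxConcurrentCalls="64"
                         maxConcurrentInstances="100"
                         maxConcurrentSessions="100" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```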

Sometimes your service will require the ability to maintain per-client state. However, due to the complexity of dealing with sessions, it is often best to avoid them (unfortunately impossible with the NetTcpBinding) and use another, application defined, session identifier and put the session state in a database. However, sometimes – with callbacks, for example – you require a session in your service. In this case you should annotate your contract with the modified ServiceContract:
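For example (the contract itself is purely illustrative):

```csharp
// require a session-capable binding (e.g. NetTcpBinding, or WSHttpBinding
// with secure conversation or reliable messaging) for this contract
[ServiceContract(SessionMode = SessionMode.Required)]
interface IOrderSession
{
    [OperationContract]
    void AddItem(string item);
}
```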


If you do not need a session and you do not need to use a session orientated protocol like named pipes or TCP, then I would turn off sessions explicitly by using the following on the contract:
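Again, the contract name here is illustrative:

```csharp
// refuse session-capable channels: forces a sessionless interaction
[ServiceContract(SessionMode = SessionMode.NotAllowed)]
interface ILookup
{
    [OperationContract]
    string Find(string key);
}
```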


In my experience it is rarely useful to use sessions and they introduce overhead, load balancing restrictions and complexity for the client.

Friday, January 16, 2009 11:18:54 PM (GMT Standard Time, UTC+00:00)
# Saturday, January 10, 2009

I'm really excited to announce that I've joined thinktecture as a consultant. I've known Ingo, Christian and Dominick for some time now and it's great to take that relationship on to a new level. I have huge respect for the abilities of the thinktecture team and so it was fantastic when they asked me if I was interested in working with them.

I'll be focusing on all things distributed at thinktecture - so that includes WCF, Workflow, BizTalk, Dublin and Azure. And I guess I'll have to learn to speak German now ...

Azure | BizTalk | Dublin | Life | WCF | WF
Saturday, January 10, 2009 6:27:22 PM (GMT Standard Time, UTC+00:00)
# Thursday, January 08, 2009

I’ve recently started hanging out on the MSDN WCF forum. It’s interesting seeing the questions that are coming up there. One fairly common one is along the lines of:

I have a service that I want to return a list of Person objects but it doesn’t seem to be working when I pass Employees and Managers in the list.

At some point someone will reply

You should annotate the contract with the ServiceKnownType attribute for each type that the Person could be.

Now in a strict sense this is true – however, I think there is a much deeper question that needs discussion.

To understand where the basic problem is we need to take a quick detour into the architecture of WCF. WCF has two layers: the channel layer and the service model layer.

The channel layer deals with Message objects, passing them through a composable stack of channels which add or process message headers, handle mapping the Message object to and from a stream of bytes, and move those bytes from one place to another. Notice here we have no notion of ServiceContracts, etc. All messages are untyped in .NET terms and are expressed in terms of an XML Infoset.

The service model layer sits on top of the channel layer and deals with endpoints, behaviors, dispatchers and other bits of plumbing – but the phrase “sits on top of the channel layer” is hugely important. An endpoint is made up of three elements: an address (where), a binding (how) and a contract (what). It is the contract that often causes confusion. In WCF code it takes the form of an annotated interface (it can be a class but an interface is more flexible):

[ServiceContract]
interface ICalc
{
    [OperationContract]
    int Add(int x, int y);
}

But what do the parameters and return types really mean? What you are in fact doing here is specifying the request and response message bodies. This information will be translated into XML by the time the Message hits the channel layer. Now, here we have only used simple types. The data that we generally want to move around is more complex and so we need another way to describe it – for that we use a .NET type and a serializer that translates an object into XML and back again. There are two main serializers that can be used: the long standing XmlSerializer and the DataContractSerializer that was introduced with WCF; the DataContractSerializer is the default one. You normally take a .NET class and annotate it with attributes (although .NET 3.5 SP1 removed the need to do this, it’s generally a good idea anyway as you retain control over XML namespaces, etc):

[DataContract]
class Person
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public int Age { get; set; }
}

[ServiceContract]
interface IMakeFriends
{
    [OperationContract]
    void AddFriend(Person p);
}

So far so good. So where is the problem? Well, now we’re in .NET type system world so it starts to become tempting to pass types that derive from Person to the AddFriend method – after all, the C# compiler doesn’t complain. And at this point when you call the AddFriend method it doesn’t work, and so you post the above question to the WCF forum. Unfortunately the way that contracts are expressed in WCF makes it very easy to forget what their purpose is: to define the messages sent to the operation and sent back from the operation. In reality you have to think “how would I express this data in XML?”. XML doesn’t support inheritance, so whatever you put in the contract is going to have to have some way of mapping to XML. The data contracts used to define the messages are simply a .NET typed convenience for generating the XML for the data you want to pass – if you view them any other way you are destined for a world of pain. So think about the data you want to pass, not how it may happen to be represented in your business layer, and design your DataContracts accordingly.
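For reference, the ServiceKnownType fix the forum answer refers to looks like this (Employee and Manager standing in for whatever derived types are involved):

```csharp
[DataContract]
public class Employee : Person { /* ... */ }

[DataContract]
public class Manager : Employee { /* ... */ }

// every derived type that may actually flow must be declared up front
[ServiceContract]
[ServiceKnownType(typeof(Employee))]
[ServiceKnownType(typeof(Manager))]
interface IMakeFriends
{
    [OperationContract]
    void AddFriend(Person p);
}
```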

“But wait!” you may say, “I may want to pass an Employee or a Manager here and I need the data to get to the receiver.” Again it comes back to “how would I express that in XML?” as that is what will be happening at the end of the day. If you have a discrete set of types that could be passed then you have something similar to an XML Schema Choice Group. In this case ServiceKnownType is a reasonable approach (although it doesn’t use a Choice Group under the covers). Remember you will need to change the contract if new types need to be passed – as with Choice Groups. However, if you require looser semantics where an extensible number of things may be passed without wanting to change the contract, then you require a different way of expressing this in XML and therefore a different way of expressing this in .NET code. Something like:

[DataContract]
public class KeyValuePair
{
    [DataMember]
    public string Key { get; set; }

    [DataMember]
    public string Value { get; set; }
}

[DataContract]
public class FlexibleData
{
    [DataMember]
    public string Discriminator { get; set; }

    [DataMember]
    public List<KeyValuePair> Properties { get; set; }
}

In this case the receiver will be responsible for unpicking the Properties based on the Discriminator. The other thing you could do is to pass the data as an XElement and work explicitly at the XML level; with LINQ to XML this is not very onerous. Is this all really convenient to use in .NET? Well, not as easy as inheritance, but then we’re actually talking about XML here, not .NET. Again, the .NET typed contract is simply a convenience for creating the XML message.
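A sketch of that XElement-based approach (the contract and element names are illustrative):

```csharp
using System.ServiceModel;
using System.Xml.Linq;

[ServiceContract]
interface IMakeFriendsXml
{
    // the body is just XML - no .NET type mapping at all
    [OperationContract]
    void AddFriend(XElement person);
}

// sender: build whatever shape is needed with LINQ to XML
XElement person = new XElement("Person",
    new XElement("Name", "Rich"),
    new XElement("Age", 21),
    new XElement("Grade", 3)); // an Employee-specific extra

// receiver: unpick it again
string name = (string)person.Element("Name");
```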

Thursday, January 08, 2009 4:40:18 PM (GMT Standard Time, UTC+00:00)
# Monday, December 15, 2008

I will finally submit to Dominick's harassment and stop running Visual Studio elevated. I started doing it because I couldn't be bothered to mess about with HttpCfg and netsh when writing self-hosted WCF apps exposed over HTTP. Registering part of the HTTP namespace with http.sys requires admin privilege. I could have sorted it out with a wildcard registration but never got round to it.

Since writing the BlobExplorer, Christian has been my major source of feature requests. His latest was that he wanted drag and drop. Now I'm not an expert in WPF so I hit Google to find out how to do it. And it seemed really simple – just set AllowDrop=true on the UIElement in question and handle the Drop event. But I just couldn't get the app to accept drag events from Windows Explorer. After much frustration I finally realised that the problem had nothing to do with my code but was rather a security issue: the BlobExplorer was running elevated because I started it from Visual Studio, which was also running elevated. Elevated applications will not accept drag events from non-elevated applications.

So I'm setting VS to run normally again and will do the proper thing with http.sys for my WCF applications.

.NET | Azure | WCF | WPF
Monday, December 15, 2008 12:49:41 PM (GMT Standard Time, UTC+00:00)
# Wednesday, December 10, 2008

As I said in my previous post, asynchrony and persistence enable arguably the most powerful feature of workflow: stateful, long running execution. However, as I also said, the WF runtime has been rewritten for WF 4.0 – so what do asynchrony and persistence look like in the 4.0 release?

First let's look at what was problematic in WF 3.5 in this area:

  • Writing an asynchronous activity was hellishly complex as they get executed differently dependent on their parent (State Machine, Listen Activity, standard activity)
  • There was no model for performing async work that didn't open up the possibility of being persisted during processing. This is fine if you are waiting for a nightly extract file to arrive on a share; it's a real problem if you are trying to invoke a service asynchronously, as the workflow may have been unloaded by the time the HTTP response comes back

The WF team have tried to solve both of these problems in this release and, in addition, obviously have to align with the new runtime model I talked about in my last post. Let's talk about async first and then we'll look at the related but separate issue of persistence.

The standard asynchronous model has been streamlined so that the communication mechanisms have been wrapped in the concept of a Bookmark. A bookmark represents a resumable point of execution. Let's look at the code for an async activity using bookmarks:

public class StandardAsyncActivity : WorkflowElement
{
    int callTimes = 0;

    protected override void Execute(ActivityExecutionContext context)
    {
        BookmarkCallback cb = Callback;
        context.CreateNamedBookmark("rlBookMark", cb, BookmarkOptions.MultipleResume);
    }

    private void Callback(ActivityExecutionContext ctx, Bookmark bm, object state)
    {
        Console.WriteLine(state);
        // after the second resumption, close the bookmark: the runtime
        // then knows the activity is complete
        if (++callTimes == 2)
            ctx.CloseBookmark(bm);
    }
}


This code demonstrates a number of features. In 3.5 an activity indicated it was async by returning ActivityExecutionStatus.Executing from the Execute method. However, here the workflow runtime knows that this activity has yet to complete as it has an outstanding bookmark. When the activity is done with bookmark processing (in this case it receives two pieces of data) it simply closes the bookmark, and since the runtime now knows it has no outstanding bookmarks the activity must be complete. Also notice that no special interfaces require implementation; the activity simply associates a callback to be fired when execution resumes at this bookmark. So how does execution resume? Here's the code:

myInstance.ResumeBookmark("rlBookMark", "hello bookmark");

This code is in the host or an extension (new name for a workflow runtime service) associated with the activity and as you can see is very simple.

With this model, after Execute returns and after each bookmark callback that doesn't close the bookmark, the workflow will go idle (assuming no other activities are runnable) and could be unloaded and/or persisted. So this is ideal for situations where the next execution resumption will be after some time. However, if we're simply performing async IO then the next resumption may be subsecond.

This model will not work for such processing and so there is a lightweight version of async processing in 4.0. Let's look at an example:

public class LightweightAsyncActivity : WorkflowElement
{
    protected override void Execute(ActivityExecutionContext context)
    {
        AsyncOperationContext block = context.SetupAsyncOperationBlock();

        Action a = AsyncWork;

        AsyncCallback cb = delegate(IAsyncResult iar)
        {
            // called on non workflow thread
            a.EndInvoke(iar);
            // tell the infrastructure the async work is done and ask for
            // AsyncProcessingComplete to run on a workflow thread
            block.BeginCompleteOperation(AsyncProcessingComplete, null,
                                         ar => block.EndCompleteOperation(ar),
                                         null);
        };

        a.BeginInvoke(cb, null);
    }

    private void AsyncProcessingComplete(ActivityExecutionContext ctx, Bookmark bm, object val)
    {
        // called on workflow thread
        Console.WriteLine("Bookmark callback");
    }

    private void AsyncWork()
    {
        // called on non workflow thread
        Thread.Sleep(2000);
        Console.WriteLine("Done Sleeping");
    }
}

This time, instead of explicitly creating a bookmark we create an AsyncOperationContext. This then allows us to run on a separate thread (BeginInvoke hands the AsyncWork method to the threadpool) and then when it completes tell the workflow infrastructure that our async work is done. It also allows us to provide another method that will execute on a workflow thread in case there are other workflow related tasks we need to perform (AsyncProcessingComplete in the above example). Although when Execute returns the workflow will go idle, this infrastructure puts in place a new construct called a No Persist Block that prevents persistence while the async operation is in progress.

So these are the two new async models, but how does the new persistence model interact with them? Well, the above description mentions some interaction, but let's look at 4.0 persistence in more detail and then draw the two parts of the post together.

The WorkflowPersistenceService has now been remodelled in terms of Persistence Providers. You get one out-of-the-box as before for SQL Server. There are two scripts to run against a database to put the new persistence constructs in place: SqlPersistenceProviderSchema40.sql and SqlPersistenceProviderLogic40.sql. To enable the SQL persistence provider you create an instance of the provider for each workflow instance using a factory and add it as an extension to that workflow instance:

string conn = "server=.;database=WFPersist;integrated security=SSPI";
SqlPersistenceProviderFactory fact = new SqlPersistenceProviderFactory(conn, true);
fact.Open();

WorkflowInstance myInstance = WorkflowInstance.Create(new MyWorkflow());

PersistenceProvider pp = fact.CreateProvider(myInstance.Id);
pp.Open();
myInstance.Extensions.Add(pp);

The factory and the provider must have Open called on them as they both derive from the WCF CommunicationObject class (presumably due to the way this layers in for durable services and Dublin). Note that there is no flag for the provider equivalent to 3.5's PersistOnIdle. It is now up to the host to specifically call Persist or Unload on the WorkflowInstance if it wants it persisted or unloaded. So now in the Idle event you may put code like this:

myInstance.Idle += delegate
{
    myInstance.TryPersist();
};

I'll come back to why use TryPersist rather than Persist in a moment. However, there is another job to be done. You need to explicitly tell the persistence provider that you are done with a workflow instance to ensure that it gets deleted from the persistence store when the workflow completes. And so you will typically call Unload in the Completed event:

myInstance.Completed += delegate(object sender, WorkflowCompletedEventArgs e)
{
    myInstance.TryUnload();
};

However, things are a little gnarly here. Remember from above that both the Bookmark and AsyncOperationBlock models for async will trigger the Idle event, but the AsyncOperationBlock prevents persistence. That's why we have to use TryPersist and TryUnload, as Persist and Unload will throw exceptions if persistence is not possible. The team have said they are working on getting this revised for beta 1 and, having seen a preview of the kind of things they are looking at, it should be a lot neater by then.

So that's the new async and persistence model for WF 4.0. The simplicity and extra functionality now in the async model will take away a good deal of the complexity of building custom activities that play well in the long running workflow infrastructure.



.NET | Dublin | WCF | WF
Wednesday, December 10, 2008 2:32:56 PM (GMT Standard Time, UTC+00:00)

I've always found the workflow execution model very compelling. It's a very different way of thinking about writing applications but it solves a set of difficult problems, particularly in the area of asynchronous and long-running processing. So when I found out that .NET 4.0 introduces a new version of the workflow infrastructure I was intrigued as to what changes had been made.

The ideas behind workflow have not changed but the WF 4.0 implementation is very different from the 3.5 one, involving a very different API and a rewritten design-time experience. The decision to do a ground up rewrite was not taken lightly and was necessary to resolve issues that people were facing with 3.5 – particularly around serialization, performance and complexity. So you can say goodbye to WorkflowRuntime, WorkflowRuntimeService, WorkflowQueue, ActivityExecutionStatus, IEventActivity and a number of other familiar classes and interfaces; the new API looks quite different.

So let's look at the code to create and execute a workflow in WF 4.0:

WorkflowInstance myInstance = WorkflowInstance.Create(new MyWorkflow());
myInstance.Start();

Console.WriteLine("Press enter to exit");
string data = Console.ReadLine();

So firstly, two things are the same: you still use a WorkflowInstance object to address the workflow, and starting the workflow is still an asynchronous operation. However, notice that we don't use WorkflowRuntime to create the workflow; everything is done against the WorkflowInstance itself.

There is also a more lightweight workflow execution model if you simply want to inline some workflow functionality

WorkflowInvoker inv = new WorkflowInvoker(new MyWorkflow());

Workflows are now, by default, authored declaratively in XAML, as are custom activities. In fact, the activity model looks quite different. The base unit of execution in a workflow is a WorkflowElement. Activities are composite constructs, normally declared in XAML, and these ultimately derive from WorkflowElement. However, neither of these constructs is actually executed; an ActivityInstance is generated from the activity definition and it is this that is executed. This is because arbitrary state can be attached to an executing activity in the form of variables. An activity can only bind its own state (modelled explicitly in the form of arguments) to variables declared on its parent or other ancestors. This is a very different model from 3.5, where an activity's state could be data bound to properties of any other activity in the entire workflow activity tree. This meant, in effect, that to truly capture the state of a workflow you had to serialize the entire activity tree, including those activities that had finished execution and those that had yet to start execution.

The 4.0 model only requires the currently executing set of activities to be serialized, as they cannot bind state outside of their parental relationships. This makes persistence much faster and more compact.

Creating custom activities is often a case of combining building blocks defined as other activities – these are generally modelled in XAML. However, what if you want to create a building block activity yourself? In this case you simply derive from WorkflowElement directly and override the Execute method. This sounds a lot like creating custom activities in 3.5; however, there are important differences. Firstly, state in and out of an activity is explicitly modelled with arguments that have direction; secondly, you no longer have to tell the runtime in Execute whether the activity is finished or not – the runtime knows based on what actions you perform.

public class WriteLineActivity : WorkflowElement
{
    public InArgument<string> Text { get; set; }

    protected override void Execute(ActivityExecutionContext context)
    {
        // read the bound argument value for this activity instance
        Console.WriteLine(Text.Get(context));
    }
}

Notice here that this activity is assuming that it is running in a Console host. Generally we try to avoid tying an activity to a certain type of host and this was achieved in 3.5 by using pluggable Workflow Runtime Services. The same is true in 4.0 but these services have been renamed Extensions to avoid confusion with Workflow Services (where workflow is used to implement a WCF service). The WorkflowInstance class now has an Extensions collection and the ActivityExecutionContext has a GetExtension<T> method for the activity to retrieve an extension it is interested in.
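So, for example, the WriteLineActivity above could be decoupled from the console by fetching a TextWriter extension (a sketch against the API as described here; the console fallback is my own choice):

```csharp
public class WriteLineActivity : WorkflowElement
{
    public InArgument<string> Text { get; set; }

    protected override void Execute(ActivityExecutionContext context)
    {
        // use a host-supplied TextWriter extension if there is one,
        // otherwise fall back to the console
        TextWriter writer = context.GetExtension<TextWriter>() ?? Console.Out;
        writer.WriteLine(Text.Get(context));
    }
}

// host side:
// myInstance.Extensions.Add(Console.Out);
```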

Some of the real power of the workflow infrastructure only becomes apparent when you plug in a persistence service and your activities are asynchronous so the workflow can be taken out of memory. It is this combination that allows workflows to perform long running processing, and this infrastructure has also changed a great deal. However, I will walk through those changes in my next post.

Wednesday, December 10, 2008 10:44:27 AM (GMT Standard Time, UTC+00:00)
# Monday, December 01, 2008

DevelopMentor UK ran a one day post-PDC event on Friday at the Microsoft offices in London. Dominick and I covered WF 4.0, Dublin, Geneva, Azure and Oslo.

You can find the materials and demos here

.NET | Azure | Oslo | WCF | WF | Dublin
Monday, December 01, 2008 11:30:39 AM (GMT Standard Time, UTC+00:00)
# Friday, November 21, 2008

Thanks to all who came to my REST talk at Oredev

The slides are here

REST.pdf (325.61 KB)

and the demos are here

REST.zip (30.32 KB)

Friday, November 21, 2008 3:28:54 PM (GMT Standard Time, UTC+00:00)

Thanks to everyone who came to my SOA with WF and WCF talk at Oredev

The slides are here

SOA-WCF-WF.pdf (278.79 KB)

and the demos are here

ServiceComposition.zip (107.08 KB)

Friday, November 21, 2008 7:17:50 AM (GMT Standard Time, UTC+00:00)
# Monday, November 03, 2008

I was in Redmond a few weeks ago looking at the new stuff that Microsoft's Connected Systems Division (CSD) were working on (WF 4.0, REST Toolkit, Oslo, Dublin). At the end of the week I did an interview with Ron Jacobs for Endpoint.tv on Channel 9. We discussed WF 4.0, Dublin, Oslo and M – as well as 150 person Guerrilla courses. You can watch it here

.NET | BizTalk | Oslo | REST | WCF | WF
Monday, November 03, 2008 11:12:08 PM (GMT Standard Time, UTC+00:00)
# Sunday, November 02, 2008

.NET Services is a block of functionality layered on top of the Azure platform. This project has had a couple of names in the past – BizTalk Services when it was an incubation project and then Zurich as it was productionized.


.NET Services consists of three services:

1) Identity

2) The Service Bus

3) Workflow


Let's talk about identity first. Officially named the .NET Access Control Service, this provides you with a Security Token Service (STS) that you can use to configure how different forms of authentication map into claims (it leverages the claims construct first surfaced in WCF and now wrapped up in Geneva). It supports user ids/passwords, Live ID, certs and full blown federation using WS-Trust. It also supports rule based authorization against the claim sets.


The Service Bus is a component that allows you both to listen for and to send messages into the “service bus”. More concretely, it means that from inside my firewall I can listen on an internet based endpoint that others can send messages into. It supports both unicast and multicast. The plumbing is done by a bunch of new WCF bindings (the word Relay in the binding name indicates it is a Service Bus binding), although there is an HTTP based API too. How sending messages to the internet happens is fairly obvious; it's the listening that is more interesting. The preferred way is to use TCP (NetTcpRelayBinding) to connect. This parks a TCP session in the service bus, down which messages are then sent to you. However, they also support HTTP, although the plumbing is a bit more complex: they construct a Message Buffer that they send messages into, and the HTTP relay listener then polls it for messages. The message buffer is not a full blown queue, although it does have some of the characteristics of one – there is a very limited size and TTL for the messages in it. Over time they may turn it into a full blown queue but for the current CTP it is not.


So what’s the point of the Service Bus? It enables you to subscribe to internet based events (I find it hard to use the word “Cloud” ;-)), allowing loosely coupled systems over the web. It also allows the bridging of on-premises systems to web based ones through the firewall.


Finally there is the .NET Workflow Service. This is WF in the Cloud (ok, I don’t find it that hard). They provide a constrained set of activities (currently very constrained, although they are committed to providing a much richer set). They provide HTTP Receive and Send and Service Bus Send, plus some flow control and some XPath ones that allow content based routing. You deploy your workflow into their infrastructure and can create instances of it waiting for messages to arrive, to route them, etc. With the toolset you basically get one-click deployment of XAML based workflows. They currently only support WF 3.5, although they will be rolling out WF 4.0 (which is hugely different from WF 3.5 – that’s the subject of another post) in the near future.


So what does .NET Services give us? It provides a rich set of messaging infrastructure over and above that of Windows Azure Services.

.NET | Azure | WCF | WF
Sunday, November 02, 2008 3:10:04 PM (GMT Standard Time, UTC+00:00)
# Tuesday, September 16, 2008

I'm going to be doing a couple of sessions at the Oredev conference in Sweden in November

Writing REST based Systems with .NET

For many, building large scale service based systems equates to using SOAP. There is, however, another way to architect service based systems by embracing the model the web uses – REpresentational State Transfer, or REST. .NET 3.5 introduced a way of building the service side with WCF – however, you can also use ASP.NET's infrastructure as well. In this session we talk about what REST is, two approaches to creating REST based services and how you can consume these services very simply with LINQ to XML.

Writing Service Oriented Systems with WCF and Workflow

Since its launch, WCF has been Microsoft's premier infrastructure for writing SOA based systems. However, one of the main benefits of Service Orientation is combining the functionality of services to create higher order functionality which is itself exposed as a service – namely service composition. Workflow is a very descriptive way of showing how services are combined, and in .NET 3.5 Microsoft introduced an integration layer between WCF and Workflow to simplify the job of service composition. In this session we examine this infrastructure and bring out both its strong and weak points, with an eye to what is coming down the line in Project Oslo – Microsoft's next generation SOA platform.

Hope to see you there

.NET | ASP.NET | LINQ | MVC | Oslo | REST | WCF | WF
Tuesday, September 16, 2008 1:45:26 PM (GMT Daylight Time, UTC+01:00)
# Friday, June 06, 2008

As promised, here are the demos from the precon that Dave Wheeler (get a blog, Dave) and I did at Software Architect 2008. It was a fun day talking about security, WCF, WF, Windows Forms, WPF, Silverlight, Ajax, ASP.NET MVC, LINQ and Oslo.

DotNetForArchitects.zip (791.24 KB)

There is a text file in the demos directory in the zip that explains the role of each of the projects in the solution

Edit: Updated the download link so hopefully the problems people have been experiencing will be resolved

.NET | LINQ | Oslo | SilverLight | WCF | WF | WPF
Friday, June 06, 2008 10:02:07 PM (GMT Daylight Time, UTC+01:00)

I've just got back from Software Architect 2008. It's a great conference to speak at and an interesting change from speaking at hard core developer conferences like DevWeek. Thanks to everyone who attended my sessions – the slides and demos are below.

SOA with WCF and WF - SOA.zip (368.43 KB)

Volta - Volta.zip (506.21 KB)

The slides and demos from the pre-conference workshop on .NET 3.5 for architects that Dave Wheeler and I presented will be posted early next week. We have realised we really need a guide to what all the projects are and how they relate – so we'll add this documentation and post them.

.NET | Volta | WCF | WF
Friday, June 06, 2008 8:39:14 AM (GMT Daylight Time, UTC+01:00)
# Friday, May 09, 2008

The Workflow Mapper Activity for copying data from one object to another was an interesting project for myself, Jörg and Christian. However, none of us has the cycles to turn it into the hugely valuable activity I think it could be. Therefore, we have decided to publish the code on CodePlex so hopefully we can get community involvement to polish the functionality.

You can find the project at


Take a look and let us know if you want to get involved in the project

Friday, May 09, 2008 5:47:56 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Saturday, March 15, 2008

The demos from my DevWeek 2008 Postcon A Day of Connected Systems with VS 2008 are now here:

DayOfCS.zip (1.48 MB)

Thanks for attending the session

.NET | BizTalk | WCF | WF
Saturday, March 15, 2008 9:47:05 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Thursday, March 13, 2008

The demos from my DevWeek 2008 Silver talk about integrating WCF and WF are available here:

Silver.zip (107.74 KB)

Thanks for coming to the talk

Thursday, March 13, 2008 11:51:43 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Wednesday, February 06, 2008

For the third year running I'm going to be speaking at BearPark's DevWeek conference. This year I actually managed to submit some breakout session titles on time too.

Wednesday 12th March

11:30 A developer’s guide to Windows Workflow Foundation
There are many challenges to writing software. Not least of these are lack of transparency of code and creating software that can execute correctly in the face of process or machine restart. Windows Workflow Foundation (WF) introduces a new way of writing software that solves these problems and more. This session explains what WF brings to your applications and explains how it works. Along the way we will see the major features of WF that make it a very powerful tool in your toolkit, removing the need for you to write a lot of complex plumbing.

14:00 Creating robust, long-running Workflows
Long-running processes have unique requirements in that they need to maintain state over process restart; Windows Workflow Foundation (WF) enables this with its persistence infrastructure. However, there are issues around hosting and activity development that require attention for long running workflows to be robust. This session looks at the design of the workflow persistence service; issues around hosting and creating full featured asynchronous activities. This session assumes some familiarity with WF.

16:00 Cross my palm with Silver – creating workflow-based WCF services
There are very good reasons for using a workflow to implement a WCF service: workflows can provide a clear platform for service composition (using a number of building block services to generate a functionally richer service); workflows can manage long-running stateful services without you having to write your own plumbing to achieve this. This session introduces the new Visual Studio 2008 Workflow Services. This technology, previously known as “Silver”, provides a relatively seamless integration between WF and WCF, enabling the service developer to concentrate on the application functionality rather than the plumbing. This session assumes some familiarity with WF and WCF.

Friday 14th March - Postcon

A day of connected systems with Visual Studio 2008
Most businesses find themselves building applications that use two or more machines working together to produce their functionality. One of the challenges in this world, apart from the actual business logic being implemented, is connecting the different parts of the application in a way that best fits the environment the machines are placed in - are there firewalls in place? Are some parts of the application written on different platforms such as Java? Do the different parts of the application have to maintain their state over machine restart? Late 2006 saw Microsoft release WCF and WF to tackle some of these challenges. However, parts of the story were left untold - especially the integration between the two.
Visual Studio 2008 introduces a number of new features for writing service based software. Its features build on the libraries released as part of .NET 3.0, providing an integration layer between the two. In this pre/post conference session we start at the basics of how WCF and WF work and then look at the various integration technologies introduced in Visual Studio 2008.

So if you're attending DevWeek I hope to see you there

Wednesday, February 06, 2008 3:16:10 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Friday, December 14, 2007

When working with WF I always find it useful to have a BizTalk background. Issues that are reasonably well known in BizTalk are not immediately apparent in Workflow unless you know they are part and parcel of that style of programming.

One issue in BizTalk is where you are waiting for a number of messages to start off some processing and you don't know in which order they are going to arrive. In this situation you use a Parallel Shape and put activating receives in the branches, initializing the same correlation set. The orchestration engine understands what you are trying to do and creates a convoy for the remaining messages when the first one arrives. This is known as a Concurrent Parallel Receive. You don't leave the parallel shape until all the messages have arrived, and the convoy ensures that the remaining messages are routed to the same orchestration instance.

There is, however, an inherent race condition in this architecture: if two messages arrive simultaneously then, due to the asynchronous nature of BizTalk, both messages could be processed before the orchestration engine has a chance to set up the convoy infrastructure. We end up with two instances of the orchestration, both waiting for messages that will never arrive. All you can do is put timeouts in place to ensure your orchestrations can recover from that situation and flag the fact that the messages require resubmitting.

With Workflow Services we essentially have the same issue waiting for us. Let's set up the workflow ...

If we call this from a client as follows:

PingClient proxy = new PingClient();

then everything works OK - in fact it works irrespective of the order the operations are called in; that's the nature of this pattern. It works because, by the time we make the second call, the first has completed and the context is now cached on the proxy.

But let's make the client a bit more complex:

static IDictionary<string, string> ctx = null;

static void Main(string[] args)
{
  PingClient proxy = new PingClient();
  IContextManager mgr = ((IChannel)proxy.InnerChannel).GetProperty<IContextManager>();

  Thread t = new Thread(DoIt);
  t.Start();

  // use the context if another call has already established it
  if (ctx != null)
  {
    mgr.SetContext(ctx);
  }

  // ... invoke the first service operation here ...

  ctx = mgr.GetContext();

  Console.WriteLine("press enter to exit");
  Console.ReadLine();
}

static void DoIt()
{
  PingClient proxy = new PingClient();
  IContextManager mgr = ((IChannel)proxy.InnerChannel).GetProperty<IContextManager>();

  // use the context if another call has already established it
  if (ctx != null)
  {
    mgr.SetContext(ctx);
  }

  // ... invoke the second service operation here ...

  ctx = mgr.GetContext();
}

Here we make the two calls on different proxies on different threads. A successful call stores the context in the static ctx field, and a proxy will use the context if it has already been set; otherwise it assumes that it is making the first call. So here the race condition has made its way all the way back to the client. There are things we could do about this in the client code (taking a lock out while we're making the call, so we complete one call and store the context before the other checks whether the context is null), but that really isn't the point. The messages may come from two separate client applications which both check a database for the context. Again we have an inherent race condition that we need to put architecture in place to detect and recover from. It would be nice to put the ParallelActivity in a ListenActivity with a timeout DelayActivity; however, you can't do this because a ParallelActivity does not implement IEventActivity and so the ListenActivity validation will fail (the first activity in a branch must implement IEventActivity). We therefore have to put each receive in its own ListenActivity and time out the waits individually.
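As a rough sketch of that in-process mitigation (the Ping operation name is hypothetical; PingClient is the generated proxy), the lock would look something like this:

```csharp
static readonly object ctxGuard = new object();
static IDictionary<string, string> ctx = null;

static void CallService()
{
    PingClient proxy = new PingClient();
    IContextManager mgr =
        ((IChannel)proxy.InnerChannel).GetProperty<IContextManager>();

    // Serialise the "am I first?" check, the call and the context store so
    // that the second caller always sees the context the first one received
    lock (ctxGuard)
    {
        if (ctx != null)
        {
            mgr.SetContext(ctx);
        }

        proxy.Ping();            // hypothetical service operation

        ctx = mgr.GetContext();
    }
}
```

Of course this only closes the window within a single client process; when the two messages come from separate applications, the race simply moves to whatever shared store holds the context.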

So is the situation pretty much the same for both WF and BizTalk? Well not really. The BizTalk window of failure is much smaller than the WF one as the race condition is isolated server side. Because WF ropes the client into message correlation the client has to receive the context and put it somewhere visible to other interested parties before the race condition is resolved.

Hopefully Oslo will bring data-oriented correlation to the WF world

.NET | BizTalk | WCF | WF | Oslo
Friday, December 14, 2007 10:31:35 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Wednesday, December 12, 2007

Ian pointed out that there was a race condition in the UniversalAsyncResult that I posted a few days ago. I have amended the original post rather than reposting the code.

The changes are to make the complete flag and the waithandle field volatile, and to check the complete flag after allocating the waithandle in case Complete was called while the waithandle was being allocated.

Isn't multithreaded programming fun :-)

Wednesday, December 12, 2007 9:29:13 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Tuesday, December 11, 2007

Yesterday I announced the release of the MapperActivity. I said I'd clean up the source code and post it.

Two things are worth noting:

  1. The dependency on the CodeProject Type Browser has gone
  2. There is a known issue where you can map more than one source element to the same destination - in this case the last mapping you create will win. We will fix this in a subsequent release.

Here's the latest version with the code

MapperActivityLibrary.zip (933.62 KB)

.NET | BizTalk | WCF | WF
Tuesday, December 11, 2007 8:31:41 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Monday, December 10, 2007

One of the neat things you can do in BizTalk is create a map that takes one kind of XML message and transforms it into another. Maps are basically a wrapper over XSLT and its extension infrastructure. The tool you use to create maps is the BizTalk Mapper. In this tool you load in a source and a destination schema - which get displayed in treeviews - and then you drag elements from the source to the destination to describe how to build the destination message.

Now, in WF - especially when using Silver (the WF/WCF integration layer introduced in .NET 3.5) - you tend to spend a lot of time taking data from one object and putting it into another, as different objects are used to serialize and deserialize messages received and sent via WCF. This sounds a lot like what the BizTalk Mapper does, but with objects rather than XML messages.

I was recently teaching Guerrilla Connected Systems (which has now been renamed Guerrilla Enterprise.NET) and Christian Weyer was doing a guest talk on REST and BizTalk Services. Afterwards, Christian and I were bemoaning the amount of mundane code you get forced to write in Silver and how sad it was that WF doesn't have a mapper. So we decided to build one. Unfortunately neither Christian nor I are particularly talented when it comes to UI - but we knew someone who was: Jörg Neumann. So with Jörg doing the UI, myself writing the engine and Christian acting as our user/tester, we set to work.

The result is the MapperActivity. The MapperActivity requires you to specify the SourceType and DestinationType - we used the Type Browser from CodeProject for this. Then you can edit the Mappings which brings up the mapping tool

You drag members (properties or fields) from the left hand side and drop them on type compatible members (properties or fields) on the right hand side.

Finally you need to databind the SourceObject and DestinationObject to tell the mapper where the data comes from and where the data is going. You can optionally get the MapperActivity to create the destination object tree if it doesn't exist using the CanCreateDestination property. Here's the full property grid:

And thanks to Jon Flanders for letting me crib his custom PropertyDescriptor implementation (why on earth don't we have a public one in the framework?). You need one of these when creating extender properties (the SourceObject and DestinationObject properties change type based on the SourceType and DestinationType respectively).

Here is the activity - I'll clean up the code before posting that.

MapperActivity.zip (37.78 KB)

Feedback always appreciated

.NET | BizTalk | WCF | WF
Monday, December 10, 2007 9:56:07 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 

In my last post I talked about the WCF async programming model. On the service side you have to create a custom implementation of IAsyncResult. Fortunately this turns out to be pretty much boilerplate code, so here is a general-purpose one

class UniversalAsyncResult : IAsyncResult, IDisposable
{
  // To store the callback and state passed
  AsyncCallback callback;
  object state;

  // For the implementation of IAsyncResult.AsyncWaitHandle
  volatile ManualResetEvent waithandle;

  // Guard object for thread sync
  object guard = new object();

  // complete flag that is set to true on completion (supports IAsyncResult.IsCompleted)
  volatile bool complete;

  // Store the callback and state passed
  public UniversalAsyncResult(AsyncCallback callback, object state)
  {
    this.callback = callback;
    this.state = state;
  }

  // Called by consumer to set the call object to a completed state
  public void Complete()
  {
    // flag as complete
    complete = true;

    // signal the event object if it has been allocated
    if (waithandle != null)
    {
      waithandle.Set();
    }

    // Fire callback to signal the call is complete
    if (callback != null)
    {
      callback(this);
    }
  }

  #region IAsyncResult Members
  public object AsyncState
  {
    get { return state; }
  }

  // Use double check locking for efficient lazy allocation of the event object
  public WaitHandle AsyncWaitHandle
  {
    get
    {
      if (waithandle == null)
      {
        lock (guard)
        {
          if (waithandle == null)
          {
            waithandle = new ManualResetEvent(false);
            // guard against the race condition that the waithandle is
            // allocated during or after Complete
            if (complete)
            {
              waithandle.Set();
            }
          }
        }
      }
      return waithandle;
    }
  }

  public bool CompletedSynchronously
  {
    get { return false; }
  }

  public bool IsCompleted
  {
    get { return complete; }
  }
  #endregion

  #region IDisposable Members
  // Clean up the event if it got allocated
  public void Dispose()
  {
    if (waithandle != null)
    {
      waithandle.Close();
    }
  }
  #endregion
}
You create an instance of UniversalAsyncResult, passing in an AsyncCallback and state object if available. When you have finished your async processing, just call the Complete method on the UniversalAsyncResult object.
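As a sketch of how it might be consumed in a WCF async service (the IGetData contract is from the next post down; the derived result type and the fake work are illustrative, not from the original):

```csharp
// Hypothetical derived result that carries the operation's output
class GetDataAsyncResult : UniversalAsyncResult
{
    public string[] Results;
    public GetDataAsyncResult(AsyncCallback cb, object state) : base(cb, state) { }
}

public class DataService : IGetData
{
    public IAsyncResult BeginGetData(AsyncCallback cb, object state)
    {
        GetDataAsyncResult iar = new GetDataAsyncResult(cb, state);

        // Do the slow work off the WCF thread; Complete() fires the callback
        // WCF passed in, which in turn triggers the call to EndGetData
        ThreadPool.QueueUserWorkItem(delegate
        {
            iar.Results = new string[] { "some", "data" };   // stand-in for real work
            iar.Complete();
        });

        return iar;
    }

    public string[] EndGetData(IAsyncResult iar)
    {
        GetDataAsyncResult result = (GetDataAsyncResult)iar;
        result.Dispose();
        return result.Results;
    }
}
```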

Comments welcome

Monday, December 10, 2007 12:07:29 PM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 

WCF supports asynchronous programming. On the client side you can generate an async API for invoking the service using svcutil

svcutil /async <rest of command line here> 

In VS 2005 you had to use the command line. In VS2008 the Add Service Reference dialog now has the Advanced button which allows you to generate the async operations

However, the fact that you can do asynchronous programming on the service side is not so obvious. You need to create the service contract according to the standard .NET async pattern of Beginxxx/Endxxx. You also need to annotate the Beginxxx method with an OperationContract that says it is part of an async pair (AsyncPattern = true). Note that you do not annotate the Endxxx method with [OperationContract]

[ServiceContract]
interface IGetData
{
  // This pair of methods is bound together as WCF is aware of the async pattern
  [OperationContract(AsyncPattern = true)]
  IAsyncResult BeginGetData(AsyncCallback cb, object state);
  string[] EndGetData(IAsyncResult iar);
}

Now, it's fairly obvious that when the request message arrives the Beginxxx method is called by WCF. The question is: how is the Endxxx method called? WCF passes the Beginxxx method an AsyncCallback that must somehow be cached and invoked when the operation is complete. This callback triggers the call to Endxxx. Another question is "what am I meant to return from the Beginxxx method?" - where does this IAsyncResult come from? And that, unfortunately, is up to you - you have to supply one, which is a good place to cache that AsyncCallback that WCF passed you.

Here is a simple sample that shows how the service side works

Async.zip (219.99 KB)

Comments, as always, welcome

Monday, December 10, 2007 11:52:30 AM (GMT Standard Time, UTC+00:00)  #    Disclaimer  |   | 
# Thursday, June 07, 2007

In the previous article I talked about hosting options for WCF. Here I'll discuss hosting for WF. However, to understand the options fully we need to digress briefly and talk about some of the features of WF.

Asynchronous Execution

The execution model of WF is inherently asynchronous. Activities are queued up for execution and the runtime asks the scheduler service to map the calls to Activity.Execute on to threads. The DefaultWorkflowSchedulerService (which you get if you take no special action) maps Execute calls on to thread pool worker threads. The ManualWorkflowSchedulerService (which also comes out of the box, but which you have to install specifically using WorkflowRuntime.AddService) schedules Execute on the thread that calls ManualWorkflowSchedulerService.RunWorkflow.
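As a minimal sketch of the manual model (the workflow type here is hypothetical), the host installs the scheduler and then donates its own thread:

```csharp
using System;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

class Host
{
    static void Main()
    {
        WorkflowRuntime runtime = new WorkflowRuntime();

        // Replace the default thread-pool scheduler with the manual one
        ManualWorkflowSchedulerService scheduler = new ManualWorkflowSchedulerService();
        runtime.AddService(scheduler);
        runtime.StartRuntime();

        WorkflowInstance instance =
            runtime.CreateWorkflow(typeof(MyWorkflow));   // hypothetical workflow type
        instance.Start();

        // Nothing actually executes until the host donates a thread:
        scheduler.RunWorkflow(instance.InstanceId);
    }
}
```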


Persistence

By default the workflow runtime does not persist workflows. Only when an object of a type that derives from WorkflowPersistenceService is added as a runtime service does the workflow runtime start persisting workflows - both as checkpoints when significant things have happened in the workflow's execution, and to unload a workflow when it goes idle. When something happens that should be directed to an unloaded workflow, the host (or a service) calls WorkflowRuntime.GetWorkflow(instanceId) and the runtime can reload the workflow from persistent storage.

One issue here is that of timers. If a workflow has gone idle waiting for a timer to expire (say via the DelayActivity) then no external event is going to occur to trigger a call to WorkflowRuntime.GetWorkflow(instanceId) to allow the workflow to continue execution. In this case the persistence service must have functionality to discover when a timer has expired in a persisted workflow. The out-of-the-box implementation, the SqlWorkflowPersistenceService, polls the database periodically (how often is controlled by a parameter passed to its constructor). Obviously, for this to occur the workflow runtime has to be loaded and running (via WorkflowRuntime.StartRuntime).
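A sketch of installing the out-of-the-box persistence service might look like this (the connection string is a placeholder; the constructor parameters are the connection string, unload-on-idle flag, instance ownership duration and the polling interval for expired timers):

```csharp
using System;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

class Host
{
    static void Main()
    {
        WorkflowRuntime runtime = new WorkflowRuntime();

        SqlWorkflowPersistenceService persistence = new SqlWorkflowPersistenceService(
            "placeholder connection string",   // points at the persistence database
            true,                              // unload workflows when they go idle
            TimeSpan.FromMinutes(5),           // instance ownership duration
            TimeSpan.FromSeconds(10));         // how often to poll for expired timers

        runtime.AddService(persistence);

        // The runtime must be started (and stay loaded) for timer polling to work
        runtime.StartRuntime();
    }
}
```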

Hosting Options

The above two issues affect your WF hosting decisions and implementation. As with WCF you can host in any .NET process, but in many ways the responsibilities of a WF host are more far-reaching than those of a WCF host. Essentially a WCF host has to start listening on the endpoints for the services it is hosting. A WF host not only has to configure and start the workflow runtime but also decide when new workflow instances should be created, and broker communication from the outside world into the workflow. So let's look at the various options.

WF Console Hosting

When you install the Visual Studio 2005 Extensions for Windows Workflow Foundation package you get new project types, which include console projects for both sequential and state machine workflows. These project types are useful for doing research and demos but that's about it.

WF Smart Client Hosting

There are a number of WF features that make it desirable to host WF in a smart client. Among these are the ability to control complex page navigation and the WF rules engine for performing complex cross-field validation. The Acropolis team are planning to provide WF-based navigation in their framework.

WF ASP.NET Hosting

One of the common use cases of WF hosting is controlling page flow in ASP.NET applications. This isn't by any means the sum total of WF's applications in ASP.NET; for example, the website may be the gateway into starting and monitoring long-running business processes. There are two main issues with ASP.NET hosting:

  1. ASP.NET executes on thread pool worker threads, as does the DefaultWorkflowSchedulerService. ASP.NET also stores the HttpContext in thread local storage. So if the workflow invokes functionality that requires access to HttpContext, or the ASP.NET request needs to wait for the workflow, then you should use the ManualWorkflowSchedulerService as the scheduler. This also means that the workflow runtime and ASP.NET will not be competing for threads.
  2. For long-running workflows the workflow will probably be persisted, and so the persistence service needs to recognise when timeouts have occurred. However, to do this the workflow runtime needs to be executing. So why is this an issue? Well, in ASP.NET hosting the lifetime of the workflow runtime is generally tied to that of the Application object and therefore to the lifetime of the hosting AppDomain. But ASP.NET does AppDomain and worker process recycling - the AppDomain not being recreated until the next request. Therefore, if no requests come in (maybe the site has bursts of activity then periods of inactivity) the AppDomain may be recycled and the workflow runtime will stop executing. At this point the persistence service can no longer recognise when timers have expired.

As long as your workflows do not require persistence, or you have (or can create via a sentinel process) a regular stream of requests, then ASP.NET is an excellent workflow host - especially if IIS is being used to host WCF endpoints that hand off to workflows. Just remember it is not suitable for all applications.

WF Windows Service Hosting

A Windows Service is the most flexible host for WF as the workflow runtime lifetime is generally tied to the lifetime of the process. Obviously, you will probably have to write a communication layer to allow the workflows to be contacted by the outside world. In places where ASP.NET isn't a suitable host it can delegate to a Windows Service based host. Just remember there is an overhead in handing off to the Windows Service from another process (say via WCF using the netNamedPipeBinding), so make sure ASP.NET really isn't suitable before you go down this path.

WF is a powerful tool for many applications and the host plays a crucial role in providing the right environment for WF execution. Hopefully this article has given some insight into use cases and issues with the various hosting options.


Thursday, June 07, 2007 1:39:04 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 

I have just uploaded the code from the precon Jon and I gave on WCF and WF. You can download it here.

Thursday, June 07, 2007 4:26:21 AM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   | 
# Wednesday, June 06, 2007

WCF and WF are both frameworks. Specifically, this means they are not full-blown products but rather blocks of utility code with which to build products. For both of these frameworks the hosting process is undefined and it is up to you to provide it. As .NET-based frameworks the host can be any .NET process, so what are the advantages and disadvantages of each option? In this, the first of two articles, I'll look at options and issues in hosting WCF; in the next article I'll do the same for WF.

WCF Console Hosting

Console apps are useful for testing things out and can be useful for mocking up a service if you are testing a client. However, they are not suitable for production systems as they have no resilience associated with them.

WCF Smart Client Hosting

Useful for peer to peer applications.

WCF Windows Service Hosting

This is a viable host for production services. Whether it is the best host depends on a number of questions:

  • What operation system am I running on?
  • What network protocols do I need to support?
  • Do I need to execute any code before the first request comes in?

Assuming you want to run this on a server OS, and one that is currently released, you will have to use Windows Server 2003 (.NET 3.0 isn't supported on Windows 2000 or earlier). If you want to use any protocol other than HTTP, or you need to execute code before the first request comes in, you must use a Windows Service to host your WCF service.

If you are happy to run on Vista or Windows Server 2008 Beta (formerly known as Longhorn Server) then you should only use a Windows Service if you need to execute code before the first request arrives.
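As a sketch, a Windows Service host for WCF typically opens the ServiceHost in OnStart (CalculatorService is a hypothetical service type; endpoints are assumed to come from app.config):

```csharp
using System.ServiceModel;
using System.ServiceProcess;

public class WcfHost : ServiceBase
{
    ServiceHost host;

    protected override void OnStart(string[] args)
    {
        // Any code that must run before the first request arrives goes here
        host = new ServiceHost(typeof(CalculatorService));   // hypothetical service type
        host.Open();
    }

    protected override void OnStop()
    {
        if (host != null)
        {
            host.Close();
        }
    }
}
```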

WCF IIS Hosting

IIS should be your host of choice for WCF services. However, there are two issues to be aware of:

  1. IIS6 and earlier can only host services that use HTTP as a transport. IIS7 introduces the Windows Process Activation Service that allows arbitrary protocols (such as TCP and MSMQ) to have the same activation facilities as HTTP.
  2. IIS provides activation on the first request. If you need to execute code before this (for example if you have a service that records trend data over time for some external data source and your service queries this trend data) then you cannot use IIS as a host without adding out of band infrastructure to bootstrap the application by performing a request.
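For reference, IIS activation hangs off a .svc file in the application's virtual directory; a minimal sketch (the type name is illustrative, with endpoints and bindings configured in web.config) is just:

```
<%@ ServiceHost Language="C#" Service="MyNamespace.MyService" %>
```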

So the options for WCF hosting are pretty straightforward. In the next article we'll see that the hosting options for WF have some issues that may not at first be obvious.

Wednesday, June 06, 2007 10:36:24 PM (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |   |