# Tuesday, December 04, 2007

There are two types of custom composites you may end up writing:

  1. a form of UserControl: a set of activities whose composite functionality you want to reuse. This is what you get when you add a new activity to a workflow project using the menu item. You end up with a class that derives from SequenceActivity, and this is the one occasion with a custom activity where you don't really have to override Execute
  2. a parent that wants to take control over the execution model of its children (think of the execution model of the IfElseActivity, WhileActivity and the ParallelActivity)

To me the second of these is more interesting (although no doubt less common) and involves writing a bit of plumbing. In general, though, there is a common pattern to the code and so I have created a base class that wraps this plumbing.

CustomCompositeBase.zip (55.6 KB)

There are two fundamental overridable methods:

LastActivityReached: returns whether all child activities have now been executed
GetNextActivity: returns the next activity to be scheduled

Inside the zip there is also a test project that implements a random execution composite (highly useful, I know).
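
To give a feel for the model, here is a minimal sketch of a random execution composite built on the base class; the override signatures are my shorthand rather than the exact ones in the zip:

using System;
using System.Collections.Generic;
using System.Workflow.ComponentModel;

public class RandomSequenceActivity : CustomCompositeBase
{
    private Random rand = new Random();
    private List<Activity> remaining;

    // True once every enabled child has been scheduled
    protected override bool LastActivityReached()
    {
        if (remaining == null)
            remaining = new List<Activity>(EnabledActivities);
        return remaining.Count == 0;
    }

    // Hand the runtime the next child to schedule - picked at random here
    protected override Activity GetNextActivity()
    {
        int index = rand.Next(remaining.Count);
        Activity next = remaining[index];
        remaining.RemoveAt(index);
        return next;
    }
}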

Feedback, as always, very welcome.

.NET | WF
Tuesday, December 04, 2007 9:19:18 AM (GMT Standard Time, UTC+00:00)
# Sunday, December 02, 2007

Yesterday I posted a base class for async activities that buries all the plumbing. However, looking at the zip, I had only included the binary for the base class and the test harness. Here is the zipped code.

AsyncActivityBaseClass12.zip (69.71 KB)

In addition Rod gave me some feedback (for example making the base class abstract messed up the design time view for the derived activity). In the light of that and other ideas I have made some changes:

  • The base class is now concrete, so the designer view works for derived classes
  • If you don't supply a queue name then the queue name defaults to the QualifiedName of the activity
  • Cancellation support is now embedded in the base class
  • There is a new override called DoAsyncTeardown to allow you to deallocate any resources you allocated in DoAsyncSetup
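
As a rough sketch of the updated shape (apart from the DoAsyncSetup and DoAsyncTeardown names, the type and signatures here are my guesses - see the zip for the real API):

using System;
using System.IO;
using System.Workflow.ComponentModel;

public class WaitForFileActivity : AsyncActivityBase
{
    // No queue name supplied, so the base class defaults it to
    // this activity's QualifiedName

    private FileSystemWatcher watcher;

    protected override void DoAsyncSetup(ActivityExecutionContext context)
    {
        // Allocate whatever you need before waiting on the queue
        watcher = new FileSystemWatcher(@"c:\drop");
    }

    protected override void DoAsyncTeardown(ActivityExecutionContext context)
    {
        // Runs on completion or cancellation - undo DoAsyncSetup
        watcher.Dispose();
    }
}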

I hope someone finds this useful.

.NET | WF
Sunday, December 02, 2007 12:19:15 PM (GMT Standard Time, UTC+00:00)
# Saturday, December 01, 2007

A while back Joe Duffy announced that they were changing the default maximum number of worker threads in the CLR threadpool in CLR 2.0 SP1. The change increases the limit tenfold, from 25 threads per processor per process to 250 per processor per process.

The basis of the change is that it is possible to deadlock the threadpool if you spawn and then wait for threadpool work from a threadpool thread (e.g. while processing an ASP.NET request). Increasing the size makes this occurrence less likely (although it doesn't remove the risk).

Although it's not obvious, CLR 2.0 SP1 ships with .NET 3.5. So I was teaching Guerrilla .NET a couple of months ago and thought I'd demo the change by running a project built under VS2005 and then rebuilding it under VS2008. What I hadn't expected (although on reflection it's not that surprising) is that when I ran the VS2005 build on a machine with 2008 *installed*, it also had the new behavior. In other words, SP1 patches the 2.0 CLR rather than being something that only affects new applications.
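
You can check which behavior a given machine has with a couple of lines:

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        int workers, io;
        ThreadPool.GetMaxThreads(out workers, out io);

        // 25 per processor pre-SP1; 250 per processor once SP1 is installed
        Console.WriteLine("Max worker threads: {0}, max IO threads: {1}",
                          workers, io);
    }
}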

Why should we care? Well, there are two main reasons we use threadpools: firstly, to remove the repeated cost of creating and destroying threads by reusing a set of already created threads; secondly, to make sure we don't drown the machine in spawned threads when the application comes under load. Extra threads consume resources: the scheduler has to give them time and they each default to 1MB of stack space.

And here is where I think the problem is. Take a server side application on 32-bit Windows, deploy it on an 8-way machine and put it under load: before this change you would cap out at 200 threads by default, which is a reasonable (maybe a little high) number for IO-bound processing. After the change this caps out by default at 2000 threads. 2000 threads with 1MB of stack space each means 2GB of memory consumed by thread stacks alone, and on 32-bit Windows that is the entire default user-mode address space of a process ... ouch! Say hello to Mr OutOfMemoryException.

So for existing applications this is a potentially breaking change: your applications will have to take control of the maximum number of worker threads to ensure correct processing. ASP.NET already does this and you can adjust it via the <processModel> configuration element. In a Windows Service based application you have to make a call to ThreadPool.SetMaxThreads to control the threadpool, as sketched below.
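
As a sketch, a service might cap the pool back to something like the old default in OnStart (the 25-per-processor figure simply mirrors the old default - pick whatever suits your workload):

using System;
using System.ServiceProcess;
using System.Threading;

public class MyService : ServiceBase
{
    protected override void OnStart(string[] args)
    {
        int workers, io;
        ThreadPool.GetMaxThreads(out workers, out io);

        // Restore roughly the old cap: 25 worker threads per processor.
        // Note SetMaxThreads reports failure via its return value
        if (!ThreadPool.SetMaxThreads(25 * Environment.ProcessorCount, io))
        {
            EventLog.WriteEntry("Failed to cap the thread pool");
        }
    }
}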

To his credit Joe doesn't pretend this change was the ideal resolution to a real-world problem, but it is a change that will affect existing applications as it is a patch to CLR 2.0 rather than a feature of a new version of the CLR.

Saturday, December 01, 2007 9:45:17 AM (GMT Standard Time, UTC+00:00)

One of the most powerful aspects of Windows Workflow is that it can manage a workflow instance's state when it has no work to do. Obviously, to do this it needs to know that your activity is waiting for something to happen. A number of activities in the Standard Activity Library run asynchronously like this - e.g. the DelayActivity and the new ReceiveActivity. However, most real WF projects require you to write your own custom activities, and to perform long running operations or wait for external events to occur, these too should be able to run asynchronously.

To be usable in all situations, an async activity needs to do more than just derive from Activity and return ActivityExecutionStatus.Executing from the Execute override. It also needs to implement IActivityEventListener<QueueEventArgs> and IEventActivity (the latter allows it to be recognised as an event driven activity). We end up with three interesting methods:

Execute
IEventActivity.Subscribe
IEventActivity.Unsubscribe

Depending on where your activity is used, these get called in a different order:

Sequential Workflow: Execute
State Machine: Subscribe, Execute, Unsubscribe
ListenActivity: Subscribe, Unsubscribe, Execute

So getting the behavior correct - where to create the workflow queue you are going to listen on, set up the event handler on the queue, process the results from the queue and close the activity - can be a little tricky. I have therefore created a base class for async activities that buries the complexity of the plumbing and leaves you with three responsibilities (sketched below):

  1. Tell the base class the name of the queue you wish to wait on
  2. Optionally override a method to set up any infrastructure you need before you start waiting on the queue
  3. Override a method to process the data that was passed on to the queue and decide whether you are still waiting for more data or you are finished
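
To make those three concrete, here is a sketch of the shape of a derived activity; the member names (AsyncActivityBase, QueueName, DoAsyncSetup, ProcessQueueItem) are illustrative stand-ins for the real API in the download:

using System;
using System.Workflow.ComponentModel;

public class WaitForOrderActivity : AsyncActivityBase
{
    // 1. tell the base class which queue to wait on
    protected override string QueueName
    {
        get { return "OrderQueue"; }
    }

    // 2. optionally set up any infrastructure before waiting
    protected override void DoAsyncSetup(ActivityExecutionContext context)
    {
    }

    // 3. process the dequeued data; return true to close the activity,
    //    false to keep waiting for more data
    protected override bool ProcessQueueItem(object item)
    {
        Console.WriteLine("Received: {0}", item);
        return true;
    }
}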

You can download the code here: AsyncActivityBaseClass.zip (38.36 KB). Any feedback gratefully received.

Finally, thanks to Josh and Christian for sanity checking the code

.NET | WF
Saturday, December 01, 2007 8:17:49 AM (GMT Standard Time, UTC+00:00)
# Thursday, November 29, 2007

Really just a note to myself, as I always forget how to do this. To turn off JIT debugging (which takes an age trying to attach the debugger under the covers), you set this value in the registry:

HKLM\SOFTWARE\Microsoft\.NETFramework\DbgJITDebugLaunchSetting=1
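
If you'd rather script it than edit the registry by hand, a couple of lines of C# do the same thing (needs admin rights, and it is machine-wide):

using Microsoft.Win32;

class DisableJitDebugging
{
    static void Main()
    {
        // 1 = JIT debugging off; 0 = prompt the user; 2 = auto-attach the debugger
        Registry.SetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework",
                          "DbgJITDebugLaunchSetting", 1, RegistryValueKind.DWord);
    }
}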

At least I now know where to go to find that information :-)

Thursday, November 29, 2007 12:12:28 PM (GMT Standard Time, UTC+00:00)
# Monday, June 18, 2007

The release of BizTalk 2006 has brought a lot of love to the messaging engine in terms of handling failures. Probably the biggest change was that messaging errors like “no subscription” are now resumable. However, there are other changes, such as being able to subscribe to failed messages using the ErrorReport context properties and enabling routing for failed messages on a receive or send port.

Another change was added in the Xml and Flat File disassemblers - that of recoverable interchange processing. When a disassembler is breaking apart a composite message, either in the form of a flat file or an envelope schema, what happens if something is wrong with one of the child messages? In BizTalk 2004 the entire batch would fail. In BizTalk 2006 this is still the default behaviour; however, you can now set RecoverableInterchange to true on the disassembler, which causes all successful messages to be processed and only the incorrect ones to fail.

This seems like a great idea, but in this article I want to look at some of the subtleties in the XmlDisassembler that make this technology less useful than it may first appear.

First, here is the setup. I have two schemas: an envelope that is simply a container for repeated records, and a person schema with child elements firstName and lastName as part of a sequence group.

This is the person schema

<xs:schema xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
           xmlns="http://dotnetconslt.co.uk/schema/person"
           targetNamespace="http://dotnetconslt.co.uk/schema/person"
           xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="person">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="firstName" type="xs:string" />
        <xs:element name="lastName" type="xs:string" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

And here is the envelope

<xs:schema xmlns:ns0="http://dotnetconslt.co.uk/schema/person"
           xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
           xmlns="http://dotnetconsult.co.uk/schema/envelope"
           targetNamespace="http://dotnetconsult.co.uk/schema/envelope"
           xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:import schemaLocation=".\person.xsd"
             namespace="http://dotnetconslt.co.uk/schema/person" />
  <xs:annotation>
    <xs:appinfo>
      <b:schemaInfo is_envelope="yes"
                    xmlns:b="http://schemas.microsoft.com/BizTalk/2003" />
      <b:references>
        <b:reference targetNamespace="http://dotnetconslt.co.uk/schema/person" />
      </b:references>
    </xs:appinfo>
  </xs:annotation>
  <xs:element name="envelope">
    <xs:annotation>
      <xs:appinfo>
        <b:recordInfo body_xpath="/*[local-name()='envelope' and namespace-uri()='http://dotnetconsult.co.uk/schema/envelope']" />
      </xs:appinfo>
    </xs:annotation>
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="1"
                    maxOccurs="unbounded"
                    ref="ns0:person" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

Notice the is_envelope and body_xpath annotations, which the XmlDisassembler uses to break apart the envelope.

Now, I deploy these two schemas into BizTalk, and then create a Receive Port with a Receive Location using the XmlReceive pipeline. I make sure that error reporting is turned on.

Next I create two send ports: one with a filter set to the BTS.ReceivePortName of the receive port, and one using a filter set for error reporting like this:
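
The error send port's filter is a subscription on the promoted ErrorReport properties, along the lines of

ErrorReport.ErrorType == FailedMessage

(this property is only promoted on to the failed-message copy when routing for failed messages is enabled).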

So with the infrastructure in place we can try submitting a message. Let’s start with one that works just to ensure that the envelope is working correctly. Here is the good message:

<ns0:envelope xmlns:ns0="http://dotnetconsult.co.uk/schema/envelope">
  <ns1:person xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <firstName>Rich</firstName>
    <lastName>Blewett</lastName>
  </ns1:person>
  <ns1:person xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <firstName>Niels</firstName>
    <lastName>Bergland</lastName>
  </ns1:person>
  <ns1:person xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <firstName>Jason</firstName>
    <lastName>Whittington</lastName>
  </ns1:person>
</ns0:envelope>

And as expected, dropping this into the receive folder results in three messages in the non-error send folder.

Now let’s use this message

<ns0:envelope xmlns:ns0="http://dotnetconsult.co.uk/schema/envelope">
  <ns1:person xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <firstName>Rich</firstName>
    <lastName>Blewett</lastName>
  </ns1:person>
  <ns1:personbad xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <firstName>Niels</firstName>
    <lastName>Bergland</lastName>
  </ns1:personbad>
  <ns1:person xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <firstName>Jason</firstName>
    <lastName>Whittington</lastName>
  </ns1:person>
</ns0:envelope>

The problem here is that we don't have a schema deployed describing the personbad element. We drop this into the receive folder and this time we get the default behaviour: the whole batch is abandoned and the single envelope message ends up in the error send folder.

OK, let's turn on recoverable interchange now. We could deploy a new pipeline with the XmlDisassembler configured differently. However, in BizTalk 2006 there is a UI for per-instance pipeline data, so we can actually reconfigure this receive location's XmlReceive pipeline. The button to get to it is shown here.

Now we can change the Recoverable Interchange setting to true

Now when we drop the last message into the receive folder we get two child messages in the non-error send folder and one in the error send folder - the one that could not be processed.

Well, that seems to work pretty well for a well-formed but incorrect message - let's try a non-well-formed message:

<ns0:envelope xmlns:ns0="http://dotnetconsult.co.uk/schema/envelope">
  <ns1:person xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <firstName>Rich</firstName>
    <lastName>Blewett</lastName>
  </ns1:person>
  <ns1:personbad xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <firstName>Niels</firstName>
    <lastName>Bergland</lastName>
  </ns1:person>
  <ns1:person xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <firstName>Jason</firstName>
    <lastName>Whittington</lastName>
  </ns1:person>
</ns0:envelope>

Notice that in the second person record the start and end element names do not match. If we drop this into the receive folder we end up with the single envelope in the error send folder, because the XmlDisassembler requires well-formed XML to work. OK, so that's not going to work - what we need is another type of error to test. Let's make one of the child person records schema-invalid.

<ns0:envelope xmlns:ns0="http://dotnetconsult.co.uk/schema/envelope">
  <ns1:person xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <firstName>Rich</firstName>
    <lastName>Blewett</lastName>
  </ns1:person>
  <ns1:person xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <lastName>Bergland</lastName>
    <firstName>Niels</firstName>
  </ns1:person>
  <ns1:person xmlns:ns1="http://dotnetconslt.co.uk/schema/person">
    <firstName>Jason</firstName>
    <lastName>Whittington</lastName>
  </ns1:person>
</ns0:envelope>

This time the second person record has the firstName and lastName elements transposed. As these are part of a sequence group, this is not valid according to the schema.

We drop this message into the receive folder and we get three messages in the non-error send folder ... what?! It turns out that by default BizTalk with the XmlReceive pipeline does not perform validation.

However, we can force the XmlDisassembler to validate messages - once again let's use the per-instance pipeline data to achieve this.

Now we drop the last message into the receive folder again and we get the single envelope message in the error send folder. Hmmm, so apparently the XmlDisassembler validates the entire envelope, which is not what we want at all - the envelope validation is failing before we ever take the envelope apart.

All is not lost. Let's validate the messages after disassembly. To do this we can use the XmlValidator component. But this component is not in the XmlReceive pipeline, so we have to deploy a new pipeline with the XmlDisassembler in the disassemble stage and the XmlValidator in the validate stage. We'll configure the XmlValidator to use the person schema to validate the message, as this will be the message type after the disassembler has run. Remember the XmlDisassembler will need to use recoverable interchange but not perform validation. So its pipeline data will look something like this:

So now we drop our envelope, with the schema-invalid child, into the receive folder and out the other end we get ... the envelope message in the error send folder. The problem here is that recoverable interchange only applies during the disassemble stage, so again the validation error causes the whole batch to fail and we are back to square one.

OK, so validation will have to take place in the disassemble stage. Unfortunately, even if we make the envelope "dumb", with the child elements weakly typed as xs:any, the actual message validation uses the deployed schemas and happens before disassembly takes place - so the whole envelope is validated, not the messages resulting from disassembly.

So it would appear that Recoverable Interchange only adds value as long as any invalid child records in the envelope are still well-formed (their start and end element names form a matching pair). The likelihood that this is the only sort of invalid message you will ever get is, to me, pretty small, so on that basis I'd say that Recoverable Interchange is of limited value with the XmlDisassembler. The frustrating thing is that all it would take to make it a very powerful tool is to allow validation to run only on the disassembled messages and not on the envelope as a whole. In that case the failures would be within the scope of recoverable interchange and, therefore, the good messages would get through and the bad ones be caught.

Monday, June 18, 2007 4:21:50 PM (GMT Daylight Time, UTC+01:00)
# Sunday, June 17, 2007

So I was teaching a cut-down version of our Essential BizTalk course last week and one thing I was talking about struck me as making a good blog post (to follow shortly). So I cranked up my BizTalk virtual machine and realised I hadn't run it on this computer before, as it discovered lots of new hardware.

I noticed that I also hadn't yet installed the VMWare Tools, so I installed them and rebooted. At that point I remembered why I hadn't installed the VMWare Tools: this was a VPC image that I had converted, which had the VPC Additions installed, and the two don't play nice together.

I found myself at the CTRL-ALT-DEL screen with the image refusing to recognize any input such as CTRL-ALT-INS or using the menu to send the CTRL-ALT-DEL to the virtual machine.

I decided to reboot into safe mode and uninstall the VMWare Tools - no dice - still no input accepted. At this point I was faced with having to rebuild my image from scratch (stupidly I'd never taken a snapshot so I had nothing to recover to).

Then I remembered that the VMWare conversion was non-destructive, so I deleted all the VMWare bits of the image and left myself with the .vmc and .vhd files. This time when I re-added the image it re-converted and I got back to my starting point ... I immediately took a snapshot this time.

I keep on being reminded why I like VMWare so much.

Sunday, June 17, 2007 4:08:48 PM (GMT Daylight Time, UTC+01:00)
# Saturday, June 09, 2007

Sometimes you have to wonder at the stupidity of organizations.

There were thunderstorms over Miami affecting my flight home to the UK from TechEd, so amid chaotic scenes at gate 15 the flight got canceled. Unfortunately, when they made the announcement I was on the phone so didn't initially realize, and by the time I worked out what was going on the gate was clearing. I was told to go and reclaim my bags in the baggage hall, then reschedule my flight at ticketing. And here I have to state my amazement at how massively inefficient the baggage handling appears to be at Orlando - I waited an hour and a quarter on my way in, and it took about an hour for my bag to come out after they canceled the flight even though they hadn't actually taken it onto a plane - most bizarre.

So I eventually get my bag and head up to ticketing, only to find the other 200 people from my flight in front of me. Now the problem was that the line was so long it went outside the terminal - ahh, Orlando in the summer and the joy of non-A/C environments. The queue was moving very slowly, but it turned out that people had been given a 1-800 number to call to rebook their flights. So I got that number off someone and got my flight rebooked for tomorrow. However, the airline were putting people up at a hotel and giving out vouchers, which you had to stay in the queue to receive.

So here is what I think is stupid: a whole bunch of people in that queue had already got different flights and were simply waiting to get their hotel voucher - why on earth didn't they have someone walking down the queue giving out the vouchers to people who had already resolved their flight?

In the end I got sick of waiting and went and booked my own hotel over the Internet ... hold on, maybe I just answered my own question.

Saturday, June 09, 2007 4:09:13 AM (GMT Daylight Time, UTC+01:00)
# Thursday, June 07, 2007

Something seems broken in the CLR ThreadPool. I was teaching Effective .NET recently and I noticed very strange behavior. It was the first time I had taught the course off my laptop which is a Core 2 Duo. Normally I teach the course in our training centre and the instructor machine is hyper-threaded.

I ran the following code:

static void Main(string[] args)
{
    Console.ReadLine();

    ThreadPool.SetMinThreads(50, 1000);
    ThreadPool.SetMaxThreads(5, 1000);

    int wtcount;
    int iocount;
    ThreadPool.GetMaxThreads(out wtcount, out iocount);

    Console.WriteLine("{0} : {1}", wtcount, iocount);
}

Here I am calling SetMinThreads and SetMaxThreads. SetMinThreads "warms up" the thread pool so the default 0.5 second delay in spawning a new thread into the thread pool is avoided. SetMaxThreads changes the cap on the maximum number of threads available in the thread pool.

I had run this code several times before when talking about controlling the ThreadPool and expected the following output

5 : 1000

In other words the capping of the thread pool takes precedence over warming it up - which I would have thought is the correct behavior. However, when I ran it this time I saw

50 : 1000

Which basically meant that SetMinThreads was overriding SetMaxThreads. But I had seen the expected behavior before on a hyper-threaded machine. So I did one more test (which is why the Console.ReadLine is at the top of the program): I started the program and then, before pressing Enter, went to Task Manager and set the process affinity to only execute on one processor. Lo and behold, I got the output

5 : 1000

So the behavior with respect to SetMinThreads and SetMaxThreads differs depending on whether the process is running with two (or more?) processors or not (note there is no difference between a standard single processor and a hyper-threaded processor). This to me is totally busted. I know what I think the behavior should be, but irrespective of that it should definitely be consistent across the number of processors.

Edit: Dan points out to me that SetMinThreads(50, 1000) is obviously going to fail on a single-proc machine as there are only 25 threads in the thread pool - Doh! So what was happening was that on the single-proc test the call to SetMinThreads failed and so the call to SetMaxThreads succeeded. As Dan pointed out, if you use 25 instead of 50 for the number of threads in SetMinThreads then the behavior is consistent, in that the two APIs both fail if they try to do inconsistent things. Thanks Dan.

However, I do find it odd that I have asked the CLR to do something for me and it has failed silently - I would kind of expect an exception. Now I know I didn't check the return values, so it is partially my own fault, but surely return values should only be used for expected outcomes.
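
For the record, the guarded version of the calls looks like this (on a single-proc machine the SetMinThreads(50, ...) call fails; change it to 25 and it succeeds, at which point it is SetMaxThreads(5, ...) that fails):

if (!ThreadPool.SetMinThreads(50, 1000))
    Console.WriteLine("SetMinThreads failed");

if (!ThreadPool.SetMaxThreads(5, 1000))
    Console.WriteLine("SetMaxThreads failed");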

Thursday, June 07, 2007 10:45:22 PM (GMT Daylight Time, UTC+01:00)

In the previous article I talked about hosting options for WCF. Here I'll discuss hosting for WF. However, to understand the options fully we need to digress briefly and talk about some of the features of WF.

Asynchronous Execution

The execution model of WF is inherently asynchronous. Activities are queued up for execution and the runtime asks the scheduler service to make the calls to Activity.Execute on threads. The DefaultWorkflowSchedulerService (which you get if you take no special action) maps Execute calls on to thread pool worker threads. The ManualWorkflowSchedulerService (which also comes out of the box, but which you have to install explicitly using WorkflowRuntime.AddService) schedules Execute on the thread that calls ManualWorkflowSchedulerService.RunWorkflow.
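
As a minimal sketch of wiring up the manual scheduler (MyWorkflow stands in for your own workflow type):

using System;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

class Host
{
    static void Main()
    {
        WorkflowRuntime runtime = new WorkflowRuntime();

        // Replace the default scheduler with the manual one
        ManualWorkflowSchedulerService scheduler =
            new ManualWorkflowSchedulerService();
        runtime.AddService(scheduler);
        runtime.StartRuntime();

        WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow));
        instance.Start();

        // Nothing executes until the host donates a thread
        scheduler.RunWorkflow(instance.InstanceId);
    }
}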

Persistence

By default the workflow runtime does not persist workflows. Only when an object of a type that derives from WorkflowPersistenceService is added as a runtime service does the workflow runtime start persisting workflows - both checkpointing when significant things have happened in the workflow's execution and unloading a workflow when it goes idle. When something happens that should be directed to an unloaded workflow, the host (or a service) calls WorkflowRuntime.GetWorkflow(instanceId) and the runtime can reload the workflow from persistent storage.

One issue here is that of timers. If a workflow has gone idle waiting for a timer to expire (say via the DelayActivity) then no external event is going to occur to trigger a call to WorkflowRuntime.GetWorkflow(instanceId) to allow the workflow to continue execution. In this case the persistence service must have functionality to discover when a timer has expired in a persisted workflow. The out of the box implementation, the SqlWorkflowPersistenceService, polls the database periodically (how often is controlled by a parameter passed to its constructor). Obviously, for this to occur the workflow runtime has to be loaded and running (via WorkflowRuntime.StartRuntime).
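
Wiring up the out of the box persistence service looks something like this (the connection string is illustrative; the final constructor parameter is the polling interval for expired timers):

using System;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

class PersistentHost
{
    static void Main()
    {
        WorkflowRuntime runtime = new WorkflowRuntime();

        // unloadOnIdle = true; poll for expired timers every 20 seconds
        runtime.AddService(new SqlWorkflowPersistenceService(
            "Initial Catalog=WFPersist;Data Source=.;Integrated Security=SSPI",
            true,
            TimeSpan.MaxValue,
            TimeSpan.FromSeconds(20)));

        runtime.StartRuntime();
    }
}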

Hosting Options

The above two issues affect your WF hosting decisions and implementation. As with WCF you can host in any .NET process, but in many ways the responsibilities of a WF host are more far-reaching than those of a WCF host. Essentially a WCF host just has to start listening on the endpoints for the services it is hosting; a WF host not only has to configure and start the workflow runtime but also decide when new workflow instances should be created and broker communication from the outside world into the workflow. So let's look at the various options.

WF Console Hosting

When you install the Visual Studio 2005 Extensions for Windows Workflow Foundation package you get new project types, which include console projects for both sequential and state machine workflows. These project types are useful for doing research and demos but that's about it.

WF Smart Client Hosting

There are a number of WF features that make it desirable to host WF in a smart client. Among these are the ability to control complex page navigation and the WF rules engine for evaluating complex cross-field rules. The Acropolis team are planning to provide WF based navigation in their framework.

WF ASP.NET Hosting

One of the common use cases of WF hosting is controlling page flow in ASP.NET applications. This isn't by any means the sum total of WF's application in ASP.NET; for example, the website may be the gateway into starting and monitoring long running business processes. There are two main issues with ASP.NET hosting:

  1. ASP.NET executes on thread pool worker threads, as does the DefaultWorkflowSchedulerService, and ASP.NET also stores the HttpContext in thread local storage. So if the workflow invokes functionality that requires access to HttpContext, or the ASP.NET request needs to wait for the workflow, then you should use the ManualWorkflowSchedulerService as the scheduler. This also means the workflow runtime and ASP.NET will not be competing for threads.
  2. For long running workflows the workflow will probably be persisted, and so the persistence service needs to recognise when timeouts have occurred. However, to do this the workflow runtime needs to be executing. So why is this an issue? Well, in ASP.NET hosting the lifetime of the workflow runtime is generally tied to that of the Application object, and therefore to the lifetime of the hosting AppDomain. But ASP.NET does AppDomain and worker process recycling, and the AppDomain is not recreated until the next request. Therefore, if no requests come in (maybe the site has bursts of activity then periods of inactivity) the AppDomain may be recycled and the workflow runtime stop executing. At this point the persistence service can no longer recognise when timers have expired.

As long as your workflows do not require persistence, or you have (or can create via a sentinel process) a regular stream of requests, then ASP.NET is an excellent workflow host - especially if IIS is being used to host WCF endpoints that hand off to workflows. Just remember it is not suitable for all applications.

WF Windows Service Hosting

A Windows Service is the most flexible host for WF as the workflow runtime's lifetime is generally tied to the lifetime of the process. Obviously, you will probably have to write a communication layer to allow the workflows to be contacted by the outside world. In places where ASP.NET isn't a suitable host it can delegate to a Windows Service based host. Just remember there is an overhead in handing off to the Windows Service from another process (say via WCF using the netNamedPipeBinding), so make sure ASP.NET really isn't suitable before you go down this path. A skeleton host is sketched below.
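
A minimal skeleton of such a host (the class name is mine; add persistence, scheduling and communication services where indicated):

using System.ServiceProcess;
using System.Workflow.Runtime;

public class WorkflowHostService : ServiceBase
{
    private WorkflowRuntime runtime;

    protected override void OnStart(string[] args)
    {
        runtime = new WorkflowRuntime();
        // add persistence, scheduler and communication services here, then:
        runtime.StartRuntime();
    }

    protected override void OnStop()
    {
        // Shut the runtime down cleanly when the service stops
        runtime.StopRuntime();
        runtime.Dispose();
    }
}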

WF is a powerful tool for many applications and the host plays a crucial role in providing the right environment for WF execution. Hopefully this article has given some insight into use cases and issues with the various hosting options.

 

.NET | WCF | WF
Thursday, June 07, 2007 1:39:04 PM (GMT Daylight Time, UTC+01:00)