# Wednesday, 15 June 2011

Just a heads up when self-hosting the new WCF Web API: by default, if you try to add the Web API references via NuGet you will get a failure (E_FAIL returned from a COM component).

This is because the likely project types (Console, Windows Service, WPF) default to the Client Profile rather than the full framework. If you change the project to target the full framework, the NuGet packages install correctly.
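If you prefer to fix this in the project file directly, the profile is controlled by a single element in the csproj; a minimal sketch of the relevant fragment (your file will contain more than this):

<PropertyGroup>
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  <!-- delete the element below (or change Target framework in project properties)
       so the project targets the full framework rather than the Client Profile -->
  <TargetFrameworkProfile>Client</TargetFrameworkProfile>
</PropertyGroup>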

Yet again bitten by the Client Profile

.NET | REST | WCF
Wednesday, 15 June 2011 10:21:33 (GMT Daylight Time, UTC+01:00)
# Friday, 02 October 2009

I just got back from speaking at Software Architect 2009. I had a great time at the conference and thanks to everyone who attended my sessions. As promised, the slides and demos are now available on the Rock Solid Knowledge website and you can get them from the Conferences page.

.NET | Azure | REST | RSK | WCF | WF
Friday, 02 October 2009 22:12:18 (GMT Daylight Time, UTC+01:00)
# Sunday, 05 April 2009

I've been doing some work with the WCF REST Starter Kit for our website http://rocksolidknowledge.com. Preview 2 of the starter kit has a bunch of client side plumbing (the original release concentrated on the service side).

The client side code looks something like this:

HttpClient client = new HttpClient("http://twitter.com");
HttpResponseMessage response = client.Get("statuses/user_timeline/richardblewett.xml");
Console.WriteLine(response.Content.ReadAsString());

As compact as this is, I was a bit disappointed to see that I only had a few options for processing the content: ReadAsString, ReadAsStream and ReadAsByteArray. Seeing as they had a free hand to give you all sorts of processing options, I was surprised there weren't more. However, one of the assemblies in the starter kit is called Microsoft.Http.Extensions. So I opened it up in Reflector and lo and behold there are a whole bunch of extension methods in there - so why wasn't I seeing them?

Extension methods become available to your code when their namespace is in scope (e.g. when you have a using statement for the namespace in your code). It turns out that the team put the extension methods in the namespaces appropriate to the technology they were exposing. So for example the ReadAsXElement extension method is in the System.Xml.Linq namespace and the ReadAsXmlSerializable<T> method is in the System.Xml.Serialization namespace.
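So, for example, once the right namespace is in scope the extra read methods light up; a minimal sketch (the client types live in the starter kit's Microsoft.Http namespace):

using System;
using System.Xml.Linq;   // without this using, ReadAsXElement never appears
using Microsoft.Http;

class Program
{
    static void Main()
    {
        HttpClient client = new HttpClient("http://twitter.com");
        HttpResponseMessage response = client.Get("statuses/user_timeline/richardblewett.xml");
        XElement timeline = response.Content.ReadAsXElement();
        Console.WriteLine(timeline);
    }
}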

Although I really like the functionality of the WCF REST Starter Kit, this particular practice seems bizarre to me. Firstly, it makes the API counter-intuitive - you use the HttpClient class and there is no hint in the library that there are a bunch of hidden extensions. Secondly, injecting your extensions into someone else's namespace increases the likelihood of extension method collision (where two libraries define the same extension method in the same namespace). The same named extension method in different namespaces can be disambiguated; the same named extension in the same namespace gives you no chance.
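Disambiguating, by the way, just means falling back on the fact that extension methods are static methods under the covers; a sketch (the static class name here is an assumption for illustration):

// instead of response.Content.ReadAsXElement(), which is ambiguous when two
// namespaces in scope both define ReadAsXElement, call the static method directly
XElement x = HttpContentExtensions.ReadAsXElement(response.Content);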

I think if you want to define extension methods then you should keep them in your own namespace – it makes everyone’s life simpler in the long run.

.NET | REST | WCF
Sunday, 05 April 2009 16:58:48 (GMT Daylight Time, UTC+01:00)
# Monday, 08 December 2008

I've uploaded an updated version of the BlobExplorer (first blogged about here). It now uses a decent algorithm for determining MIME type (thanks Christian) and the blob list now retains a single entry when you update an existing blob

Downloadable from my website here

BlobExplorer12.zip (55.28 KB)

and downloadable from Azure Blob Storage here

.NET | Azure | BlobExplorer | REST | WPF
Monday, 08 December 2008 09:33:00 (GMT Standard Time, UTC+00:00)
# Sunday, 07 December 2008

The Windows Azure Storage infrastructure has three types of storage:

  • Blob - for storing arbitrary state as a blob (I recently blogged about writing an explorer for blob storage)
  • Table - for storing structured or semi-structured data with the ability to query it
  • Queue - primarily for communication between components in the cloud

This post is going to concentrate on queue storage as its usage model is a bit different from traditional queuing products such as MSMQ. The code samples in this post use the StorageClient library sample shipped in the Windows Azure SDK.

Let's start with the basics: you authenticate with Azure storage using the account name and key provided by the Azure portal when you create a storage project. If you're using the SDK's development storage, the account name, key and URI are fixed. Here is the code to set up authentication details with development queue storage:

string accountName = "devstoreaccount1";
string accountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";
string address = "http://127.0.0.1:10001";

StorageAccountInfo info = new StorageAccountInfo(new Uri(address), null, accountName, accountKey);
QueueStorage qs = QueueStorage.Create(info);

Queues are named within the storage, so to create a queue (or address an existing queue) you use the following code:

MessageQueue mq = qs.GetQueue("fooq");
mq.CreateQueue();

The CreateQueue method will not recreate an already existing queue, and has an overload to tell you that the queue already existed - something like this (the shape of the overload is from memory, so treat the out parameter as an assumption):
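bool queueAlreadyExists;                 // parameter assumed for illustration
mq.CreateQueue(out queueAlreadyExists);

You can then create message objects and push them into the queue as follows: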

Message msg = new Message(DateTime.Now.ToString());
mq.PutMessage(msg);

Hopefully none of this so far is particularly shocking. Where things start to get interesting is on the receive side. We can simply receive a message like so:

msg = mq.GetMessage(5);
if (msg != null)
{
    Console.WriteLine(msg.ContentAsString());
}

So what is interesting about this? Firstly, notice that GetMessage takes a parameter - this is a timeout in seconds. Secondly, GetMessage doesn't block but returns null if there is no message to receive. Finally, although you can't see it from the above code, the message is still on the queue. To remove the message you need to delete it:

Message msg = mq.GetMessage(5);
if (msg != null)
{
    Console.WriteLine(msg.ContentAsString());
    mq.DeleteMessage(msg);
}

This is where that timeout comes in: having received a message, that message is locked for the specified timeout. You must call DeleteMessage within that timeout, otherwise the message is unlocked and can be picked up by another queue reader. This prevents a queue reader picking up a message and then dying, leaving the message locked forever. It, in effect, provides a loose form of transaction around the queue without having to formally support transactional semantics, which tend not to scale at web levels.

However, as it stands we have to write the polling logic ourselves because GetMessage doesn't block. The MessageQueue class in StorageClient also provides polling infrastructure under the covers and delivers messages via an event:

mq.MessageReceived += new MessageReceivedEventHandler(mq_MessageReceived);
mq.PollInterval = 2000; // in milliseconds
mq.StartReceiving(); // start polling

The event handler will fire every time a message is received; when there are no more messages the client will poll the queue according to the PollInterval.

Bear in mind that the same semantics apply to receiving messages this way: it is incumbent on the receiver to delete the message within the timeout, otherwise the message will again become visible to other readers. Notice that here we don't explicitly set a timeout, so a default is picked up from the Timeout property on the MessageQueue class (this defaults to 30 seconds).
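The wired-up handler might look something like this - a sketch, assuming the event args expose the received Message (the exact event args type in the StorageClient sample is from memory):

void mq_MessageReceived(object sender, MessageReceivedEventArgs e)
{
    Console.WriteLine(e.Message.ContentAsString());
    mq.DeleteMessage(e.Message);   // delete within the timeout or the message reappears
}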

Using the code above it is very easy to think that everything is running on a nicely tuned .NET API under the covers. Remember, however, that this infrastructure is actually exposed via Internet standard protocols. All we have here in reality is a convenient wrapper over the storage REST API. So bear in mind that the event model is just a convenience provided by the class library and not an inherent feature built into queue storage. Similarly, the reason GetMessage doesn't block is simply that it wraps an HTTP request that returns an empty message list if there are no messages on the queue. If you use GetMessage or the eventing API then messages are retrieved one at a time; you can also use the GetMessages API, which allows you to receive multiple messages at once.
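A sketch of batched retrieval - assuming GetMessages takes a maximum message count, which is how I remember the sample API:

foreach (Message m in mq.GetMessages(10))   // up to 10 messages in one round trip
{
    Console.WriteLine(m.ContentAsString());
    mq.DeleteMessage(m);                    // the same visibility rules apply per message
}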

.NET | Azure | REST
Sunday, 07 December 2008 11:04:49 (GMT Standard Time, UTC+00:00)
# Saturday, 06 December 2008

I've been digging around in Azure Storage recently and as a side project I decided to write an explorer for Blob Storage. My UI skills are not my strongest suit but I had fun dusting off my WPF knowledge (I have to thank Andy Clymer, Dave "no blog" Wheeler and Ian Griffiths for nursemaiding me through some issues)

Here is a screen shot of the explorer attached to development storage. You can also attach it to your hosted Azure storage in the cloud

In the spirit of cloud storage and dogfooding I used the Blob Explorer to upload itself into the cloud so you can find it here

However, as Azure is a CTP environment I thought I would also upload it here

BlobExplorer1.zip (55.08 KB)

as, on the web, URIs live forever and my storage account URIs may not after the CTP. I hope someone gets some mileage out of it - I certainly enjoyed writing it. Comments, bug reports and feature requests are appreciated

.NET | Azure | BlobExplorer | REST | WPF
Saturday, 06 December 2008 22:00:19 (GMT Standard Time, UTC+00:00)
# Friday, 21 November 2008

Thanks to all who came to my REST talk at Oredev

The slides are here

REST.pdf (325.61 KB)

and the demos are here

REST.zip (30.32 KB)

.NET | ASP.NET | MVC | REST | WCF
Friday, 21 November 2008 15:28:54 (GMT Standard Time, UTC+00:00)
# Monday, 03 November 2008

I was in Redmond a few weeks ago looking at the new stuff that Microsoft's Connected Systems Division (CSD) were working on (WF 4.0, REST Toolkit, Oslo, Dublin). At the end of the week I did an interview with Ron Jacobs for Endpoint.tv on Channel 9. We discussed WF 4.0, Dublin, Oslo and M - as well as 150-person Guerrilla courses. You can watch it here

.NET | BizTalk | Oslo | REST | WCF | WF
Monday, 03 November 2008 23:12:08 (GMT Standard Time, UTC+00:00)
# Saturday, 01 November 2008

Having spent the last week at PDC with very spotty Internet access, I thought I'd take some time to reflect on the various announcements and technologies that I dug around in. This post is about the big announcement: Windows Azure.


Azure (apparently pronounced to rhyme with badger) is an infrastructure designed to control applications installed in a huge datacenter. Microsoft refer to it as a “cloud based operating system” and talk about it being where you deploy your apps for Internet  scale.


So I guess we have to start with: what problem are Microsoft trying to solve? There are two answers here:

1) Applications that need Internet scale are really hard to deploy due to the potentially huge hardware requirements and the cost of managing that infrastructure. Most organizations that go through rapid growth experience a lot of pain as they try to scale from tens of thousands of users to many millions (Pinku has some great slides on the pain MySpace went through and how they solved it). This normally requires re-architecting the app to be more loosely coupled, etc.

2) Microsoft face a serious threat from both Amazon and Google in the high-scale world as both of those companies already have “cloud solutions” in place. They had to do something to ensure they were not left behind.


So Microsoft started the project to create an infrastructure to allow customers to deploy applications into Microsoft’s datacenter that would seamlessly scale as their requirements grew – Azure is the result.


The idea then is that you write your software and tell Microsoft what facilities you require - this is in the form of a manifest config file: “I require 20 instances of the web front end and 5 instances of the data processing engine”. The software and config are then deployed into Microsoft’s infrastructure and a component called the Fabric Controller maps the software onto virtual machines under the control of a hypervisor. They also put a load balancer in front of the web front end. The web site runs as a “web role” that can accept requests from the Internet and the processing engine gets mapped to a “worker role” that cannot be accessed externally.
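As a sketch of that manifest idea, the instance counts live in the service configuration file; the role names here are illustrative and the format is from the CTP as I remember it:

<ServiceConfiguration serviceName="MyCloudApp">
  <Role name="WebFrontEnd">          <!-- web role: Internet facing, behind the load balancer -->
    <Instances count="20" />
  </Role>
  <Role name="ProcessingEngine">     <!-- worker role: not externally accessible -->
    <Instances count="5" />
  </Role>
</ServiceConfiguration>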


The Fabric Controller has responsibility for ensuring that there are always a number of running instances of your components, even in the face of hardware failures. It will also ensure that you can perform rolling upgrades without taking your app offline, if that is what you require.


The problem that apps then face is that even though they may run on a specific machine at one point, the next time they are initialized they may be on a completely separate machine, so storing anything locally is pointless except as a localized cache. That begs the question: where do I store my state? Enter the Azure Storage Service. This is a massively scalable, fault tolerant, highly available storage system. There are three types of storage available:


  • Blob: unstructured data with sizes up to 50GB
  • Table: structured tabular data with up to 252 user-defined properties (think columns)
  • Queue: queue-based data for storing messages to be passed from one component to another in the Azure infrastructure


Hopefully Blob and Queue are fairly familiar constructs to most people. Table probably needs a little clarification. We are not talking about a relational database here. There is no schema for the table-based data so, in fact, every row could be shaped completely differently (although this would be pretty ugly to try to read back out again). There are no relations and therefore no joins. There are also no server-managed indexes - you define a pseudo-index with the idea of a partition ID. This ID can be used to horizontally partition the data across multiple machine clusters, but the partition ID is something you are responsible for managing. However, each row must be uniquely identifiable, so there is also a concept of a row ID, and the partition ID/row ID combination makes up the primary key of the table. There is also a system-maintained version number for concurrency control. This is where the strange number of 252 user-defined properties comes from: 255 - 3 (partition ID, row ID, version).
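To make that concrete, a hypothetical entity sketch (the class and property names are mine, purely for illustration):

// partition ID and row ID together form the primary key; everything else is
// one of the (up to 252) user-defined properties
public class OrderRow
{
    public string PartitionKey { get; set; }   // e.g. customer ID - you choose how to partition
    public string RowKey { get; set; }         // must be unique within the partition
    public string Product { get; set; }
    public double Price { get; set; }
}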


So in the above example, the web front end passes the data processing engine its data by enqueuing it into queue storage. The processing engine then stores the data further (say in a blob or table) or just processes it and pushes the results back on another queue. It can also send messages out to external endpoints.


All components run under partial trust (a variant similar to ASP.NET medium trust) so Azure developers will need some understanding of CAS.


The API for talking to Azure is REST based, and can be wrapped by ADO.NET Data Services where appropriate (e.g. for table storage).


To get started you need an account provisioned (to get space reserved in the datacenter). You can do this via http://www.azure.com. There are other services built on top of Azure, which I will cover in subsequent posts, which get provisioned in the same place.


There is an SDK and VS2008 SP1 integration. This brings a development fabric to your machine that has the same services available as the cloud based one so you can test without deploying into the cloud. There are also VS project templates which in fact create multiple projects: the application one(s) and another to specify the deployment configuration.


So where does that leave Microsoft? They have created an offering that in its totality (which is much more than I have talked about here) goes beyond what both Amazon and Google have created in terms of functionality. But they are left with two issues:

1) Will companies trust Microsoft to run their business critical applications? Some definitely will, but others will reserve judgement for some time, until they have seen in practice that this infrastructure is sound. Microsoft say they will also have an SLA in the contract with financial penalty clauses if they fail to fulfil it, in some currently unspecified way.

2) Microsoft have not yet announced a pricing model. This leaves companies in a difficult position - do they throw resources at a project with a lot of potential and bear the risk that when Microsoft unveil the pricing their application is not economically viable? Or do they wait to start investing in this technology until Microsoft announce pricing? Unfortunately this is a chicken and egg situation - Microsoft cannot go live commercially until the infrastructure has been proven in practice by companies creating serious apps on it, and yet they do not want to announce pricing until they are ready for commercial release. Hopefully they will be prepared to discuss pricing with any organization that is serious about building on the infrastructure, on a case by case basis, before full commercial release.


Azure definitely has huge potential for those companies that need a very flexible approach to scale, or who will require scale over time but where that time cannot yet be determined. It also has some challenges for how you build applications - there are design constraints you have to cater for (failure is a fact of life and you have to code to accept that, for example).


Definitely interesting times

.NET | Azure | REST
Saturday, 01 November 2008 02:39:23 (GMT Standard Time, UTC+00:00)
# Tuesday, 16 September 2008

I'm going to be doing a couple of sessions at the Oredev conference in Sweden in November

Writing REST based Systems with .NET

For many, building large scale service based systems equates to using SOAP. There is, however, another way to architect service based systems: by embracing the model the web uses - REpresentational State Transfer, or REST. .NET 3.5 introduced a way of building the service side with WCF - however, you can also use ASP.NET's infrastructure. In this session we talk about what REST is, two approaches to creating REST based services and how you can consume these services very simply with LINQ to XML.

Writing Service Oriented Systems with WCF and Workflow

Since its launch, WCF has been Microsoft's premier infrastructure for writing SOA based systems. However, one of the main benefits of Service Orientation is combining the functionality of services to create higher order functionality which is itself exposed as a service - namely service composition. Workflow is a very descriptive way of showing how services are combined, and in .NET 3.5 Microsoft introduced an integration layer between WCF and Workflow to simplify the job of service composition. In this session we examine this infrastructure and bring out both its strong and weak points, with an eye to what is coming down the line in Project Oslo - Microsoft's next generation SOA platform.

Hope to see you there

.NET | ASP.NET | LINQ | MVC | Oslo | REST | WCF | WF
Tuesday, 16 September 2008 13:45:26 (GMT Daylight Time, UTC+01:00)
# Wednesday, 18 June 2008

I've been looking at the routing infrastructure Microsoft are releasing in .NET 3.5 SP1. This is the infrastructure that allows me to bind an HTTP handler to a URI rather than simply using WebForms; it is used in the ASP.NET MVC framework that is in development. The infrastructure is pretty clean in design. First, you add a reference to System.Web.Routing. Then you simply create Route objects binding a URI to an implementation of IRouteHandler. Finally, you add them to the RouteTable static class. Global.asax is the ideal spot for this code.

void Application_Start(object sender, EventArgs e) 
{
        Route r = new Route("books", new BooksRouteHandler());
        RouteTable.Routes.Add(r);
        r = new Route("books/isbn/{isbn}", new BooksRouteHandler());
        RouteTable.Routes.Add(r);
        r = new Route("search/price", new SearchRouteHandler());
        RouteTable.Routes.Add(r);
}

Here BooksRouteHandler and SearchRouteHandler implement IRouteHandler

public interface IRouteHandler
{
    IHttpHandler GetHttpHandler(RequestContext requestContext);
}

So, for example, the BooksRouteHandler looks like this:

public class BooksRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        if (requestContext.RouteData.Values.ContainsKey("isbn"))
        {
            string isbn = (string)requestContext.RouteData.Values["isbn"];
            return new ISBNHandler(isbn);
        }
        else
        {
            return new BooksHandler();
        }
    }
}

Where ISBNHandler and BooksHandler both implement IHttpHandler
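For completeness, a minimal sketch of what the ISBNHandler might look like (the response body is illustrative - a real implementation would look the book up and render a proper representation):

public class ISBNHandler : IHttpHandler
{
    private readonly string isbn;

    public ISBNHandler(string isbn)
    {
        this.isbn = isbn;
    }

    // a new instance is created per request so it need not be reusable
    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Book with ISBN: " + isbn);
    }
}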

This is all pretty straightforward. The one thing that had me puzzled for a while was: who looks at the RouteTable? Reflector to the rescue! There is a module in the System.Web.Routing assembly called UrlRoutingModule. If you add this into your web.config the routing starts working. The config piece looks like this:

<httpModules>
      <add name="Routing" 
           type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
</httpModules>


I'm currently using it to build a REST based service for the second part of a two part article for the DevelopMentor DevelopMents newsletter, so if you're not on the distribution list for that, subscribe!
.NET | ASP.NET | REST | MVC
Wednesday, 18 June 2008 10:36:52 (GMT Daylight Time, UTC+01:00)