# Saturday, December 06, 2008

I've been digging around in Azure Storage recently and as a side project I decided to write an explorer for Blob Storage. My UI skills are not my strongest suit but I had fun dusting off my WPF knowledge (I have to thank Andy Clymer, Dave "no blog" Wheeler and Ian Griffiths for nursemaiding me through some issues)

Here is a screen shot of the explorer attached to development storage. You can also attach it to your hosted Azure storage in the cloud

In the spirit of cloud storage and dogfooding I used the Blob Explorer to upload itself into the cloud so you can find it here

However, as Azure is a CTP environment I thought I would also upload it here

BlobExplorer1.zip (55.08 KB)

as, on the web, URIs live forever and my storage account URIs may not after the CTP. I hope someone gets some mileage out of it - I certainly enjoyed writing it. Comments, bug reports and feature requests are appreciated

.NET | Azure | BlobExplorer | REST | WPF
Saturday, December 06, 2008 10:00:19 PM (GMT Standard Time, UTC+00:00)
# Monday, December 01, 2008

DevelopMentor UK ran a one-day post-PDC event on Friday at the Microsoft offices in London. Dominick and I covered WF 4.0, Dublin, Geneva, Azure and Oslo

You can find the materials and demos here

.NET | Azure | Oslo | WCF | WF | Dublin
Monday, December 01, 2008 11:30:39 AM (GMT Standard Time, UTC+00:00)
# Friday, November 21, 2008

Thanks to all who came to my REST talk at Oredev

The slides are here

REST.pdf (325.61 KB)

and the demos are here

REST.zip (30.32 KB)

.NET | ASP.NET | MVC | REST | WCF
Friday, November 21, 2008 3:28:54 PM (GMT Standard Time, UTC+00:00)

Thanks to everyone who came to my SOA with WF and WCF talk at Oredev

The slides are here

SOA-WCF-WF.pdf (278.79 KB)

and the demos are here

ServiceComposition.zip (107.08 KB)

.NET | WCF | WF
Friday, November 21, 2008 7:17:50 AM (GMT Standard Time, UTC+00:00)
# Monday, November 03, 2008

I was in Redmond a few weeks ago looking at the new stuff that Microsoft's Connected Systems Division (CSD) were working on (WF 4.0, REST Toolkit, Oslo, Dublin). At the end of the week I did an interview with Ron Jacobs for Endpoint.tv on Channel 9. We discussed WF 4.0, Dublin, Oslo and M - as well as 150-person Guerrilla courses. You can watch it here

.NET | BizTalk | Oslo | REST | WCF | WF
Monday, November 03, 2008 11:12:08 PM (GMT Standard Time, UTC+00:00)
# Sunday, November 02, 2008

.NET Services is a block of functionality layered on top of the Azure platform. This project has had a couple of names in the past – BizTalk Services when it was an incubation project and then Zurich as it was productionized.

 

.NET Services consists of three services:

1) Identity

2) The Service Bus

3) Workflow

 

Let’s talk about identity first. Officially named the .NET Access Control Service, this provides you with a Security Token Service (STS) that you can use to configure how different forms of authentication map into claims (it leverages the claims construct first surfaced in WCF and now wrapped up in Geneva). It supports user IDs/passwords, Live ID, certificates and full-blown federation using WS-Trust. It also supports rule-based authorization against the claim sets.
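
To make the claims construct concrete, here is a minimal sketch (not the ACS API itself) of a WCF service checking the claim sets issued for a caller; the claim type URI and the "Send" value are invented for illustration, and the rule-based mapping that produces such claims is configured in the Access Control Service rather than in code.

```csharp
using System;
using System.IdentityModel.Claims;
using System.IdentityModel.Policy;
using System.ServiceModel;

public static class ClaimsCheck
{
    // Walks the claim sets attached to the current WCF security context and
    // looks for an action claim granting "Send". The claim type URI and the
    // "Send" value are purely illustrative.
    public static bool CallerCanSend()
    {
        AuthorizationContext authContext =
            ServiceSecurityContext.Current.AuthorizationContext;

        foreach (ClaimSet claimSet in authContext.ClaimSets)
        {
            foreach (Claim claim in claimSet.FindClaims(
                         "http://example.org/claims/action", Rights.PossessProperty))
            {
                if ((claim.Resource as string) == "Send")
                    return true;
            }
        }
        return false;
    }
}
```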

 

The Service Bus is a component that allows you to both listen for and send messages into the “service bus”. More concretely, it means that from inside my firewall I can listen on an internet-based endpoint that others can send messages into. It supports both unicast and multicast. The plumbing is done by a bunch of new WCF bindings (the word Relay in the binding name indicates it is a Service Bus binding), although there is an HTTP-based API too. How messages get sent to the internet is fairly obvious; it is the listening that is more interesting. The preferred way to connect is TCP (NetTcpRelayBinding). This parks a TCP session in the Service Bus, down which it then sends messages to you. HTTP is also supported, although the plumbing is a bit more complex: a Message Buffer is created in the Service Bus that messages are sent into, and the HTTP relay listener polls it for messages. The message buffer is not a full-blown queue, although it has some of the characteristics of one: it has a very limited size and a TTL on the messages in it. Over time they may turn it into a full-blown queue, but for the current CTP it is not.
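
As a rough sketch of the listening side, assuming the Microsoft.ServiceBus assembly from the SDK, this is roughly what hosting a service on a relay endpoint looks like; the solution name and path are made up, and the credential configuration the relay requires is left out.

```csharp
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;   // ships with the .NET Services SDK

[ServiceContract]
public interface IEchoContract
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoContract
{
    public string Echo(string text) { return text; }
}

class RelayListenerSketch
{
    static void Main()
    {
        // Made-up solution name and path. The credentials the relay requires
        // are also omitted here for brevity.
        Uri address = new Uri("sb://mysolution.servicebus.windows.net/echo");

        ServiceHost host = new ServiceHost(typeof(EchoService));
        host.AddServiceEndpoint(typeof(IEchoContract),
                                new NetTcpRelayBinding(),  // a Relay binding, so it talks to the Service Bus
                                address);

        host.Open();   // opens an outbound TCP connection to the relay and listens through it

        Console.WriteLine("Listening on the Service Bus - press Enter to exit");
        Console.ReadLine();
        host.Close();
    }
}
```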

 

So what’s the point of the Service Bus? It enables you to subscribe to internet-based events (I find it hard to use the word “Cloud” ;-)) and so build loosely coupled systems over the web. It also allows the bridging of on-premises systems to web-based ones through the firewall

 

Finally there is the .NET Workflow Service. This is WF in the Cloud (ok, I don’t find it that hard). They provide a constrained set of activities (currently very constrained, although they are committed to providing a much richer set): HTTP Receive and Send, Service Bus Send, plus some flow control activities and some XPath ones that allow content-based routing. You deploy your workflow into their infrastructure and can create instances of it that wait for messages to arrive, route them, and so on. With the toolset you basically get one-click deployment of XAML-based workflows. They currently only support WF 3.5, although they will be rolling out WF 4.0 (which is hugely different from WF 3.5 – that’s the subject of another post) in the near future.

 

So what does .NET Services give us? It provides a rich set of messaging infrastructure over and above that of Windows Azure Services

.NET | Azure | WCF | WF
Sunday, November 02, 2008 3:10:04 PM (GMT Standard Time, UTC+00:00)
# Saturday, November 01, 2008

Having spent the last week at PDC - with very spotty Internet access - I thought I'd take some time to reflect on the various announcements and technologies that I dug around in. This post is about the big announcement: Windows Azure

 

Azure (apparently pronounced to rhyme with badger) is an infrastructure designed to control applications installed in a huge datacenter. Microsoft refer to it as a “cloud based operating system” and talk about it being where you deploy your apps for Internet  scale.

 

So I guess we have to start with: what problem are Microsoft trying to solve? There are two answers here:

1) Applications that need Internet scale are really hard to deploy due to the potentially huge hardware requirements and the cost of managing that infrastructure. Most organizations that go through rapid growth experience a lot of pain as they try to scale from tens of thousands of users to many millions (Pinku has some great slides on the pain MySpace went through and how they solved it). This normally requires rearchitecting the app to be more loosely coupled, etc.

2) Microsoft face a serious threat from both Amazon and Google in the high-scale world, as both of those companies already have “cloud solutions” in place. They had to do something to ensure they were not left behind.

 

So Microsoft started the project to create an infrastructure to allow customers to deploy applications into Microsoft’s datacenter that would seamlessly scale as their requirements grew – Azure is the result.

 

The idea then is that you write your software and tell Microsoft what facilities you require – this takes the form of a manifest config file: “I require 20 instances of the web front end and 5 instances of the data processing engine”. The software and config are then deployed into Microsoft’s infrastructure and a component called the Fabric Controller maps the software onto virtual machines under the control of a hypervisor. They also put a load balancer in front of the web front end. The web site runs as a “web role” that can accept requests from the Internet, and the processing engine gets mapped to a “worker role” that cannot be accessed externally.
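
For illustration, the kind of deployment configuration being described looks something like the sketch below; the role and service names are invented and the element names only approximate the CTP SDK's ServiceConfiguration file.

```xml
<!-- Role and service names are invented; element names approximate the CTP SDK's
     ServiceConfiguration (.cscfg) file, namespace declaration elided. -->
<ServiceConfiguration serviceName="OrderProcessing">
  <Role name="WebFrontEnd">
    <Instances count="20" />  <!-- "I require 20 instances of the web front end" -->
  </Role>
  <Role name="ProcessingEngine">
    <Instances count="5" />   <!-- "... and 5 instances of the data processing engine" -->
  </Role>
</ServiceConfiguration>
```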

 

The Fabric Controller is responsible for ensuring that there is always the requested number of running instances of your components, even in the face of hardware failures. It will also ensure that you can perform rolling upgrades without taking your app offline, if that is what you require.

 

The problem apps then face is that, even though they may run on a specific machine at one point, the next time they are initialized they may be on a completely separate machine, so storing anything locally is pointless except as a local cache. That raises the question: where do I store my state? Enter the Azure Storage Service. This is a massively scalable, fault-tolerant, highly available storage system. There are three types of storage available:

 

Blob: unstructured data with sizes up to 50GB

Table: structured tabular data with up to 252 user-defined properties (think columns)

Queue: queue-based storage for messages to be passed from one component to another in the Azure infrastructure

 

Hopefully Blob and Queue are fairly familiar constructs to most people. Table probably needs a little clarification. We are not talking about a relational database here. There is no schema for the table-based data, so in fact every row could be shaped completely differently (although this would be pretty ugly to try to read back out again). There are no relations and therefore no joins. There are also no server-managed indexes – you define a pseudo-index via a partition ID. This ID can be used to horizontally partition the data across multiple machine clusters, but the partition ID is something you are responsible for managing. However, each row must be uniquely identifiable, so there is also a concept of a row ID, and the partition ID/row ID combination makes up the primary key of the table. There is also a system-maintained version number for concurrency control. This is where the strange number of 252 user-defined properties comes from: 255 minus the 3 system properties (partition ID, row ID, version).
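
As a sketch of how that surfaces to code (the table service expects the key properties to be named PartitionKey, RowKey and Timestamp), here is a hypothetical entity and an ADO.NET Data Services insert against it; the account URI and table name are made up and the shared-key signing of each request is omitted.

```csharp
using System;
using System.Data.Services.Client;      // ADO.NET Data Services client
using System.Data.Services.Common;      // DataServiceKey attribute

// Hypothetical table entity: PartitionKey/RowKey form the primary key and
// Timestamp is the system-maintained version property; everything else is
// one of the up-to-252 user-defined properties.
[DataServiceKey("PartitionKey", "RowKey")]
public class OrderEntity
{
    public string PartitionKey { get; set; }  // you choose this to spread rows across partitions
    public string RowKey { get; set; }        // must be unique within a partition
    public DateTime Timestamp { get; set; }   // maintained by the storage service

    public string Customer { get; set; }
    public double Total { get; set; }
}

class TableSketch
{
    static void Main()
    {
        // Made-up account URI. In reality every request must also be signed
        // with the account's shared key, which is omitted here.
        var context = new DataServiceContext(
            new Uri("http://myaccount.table.core.windows.net"));

        var order = new OrderEntity
        {
            PartitionKey = "2008-11",                 // e.g. partition by month
            RowKey = Guid.NewGuid().ToString(),
            Customer = "Contoso",
            Total = 42.0
        };

        context.AddObject("Orders", order);  // "Orders" is the (hypothetical) table name
        context.SaveChanges();               // issues the REST call against the table service
    }
}
```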

 

So in the above example, the web front end passes data to the processing engine by enqueuing it into queue storage. The processing engine then stores the data (say in a blob or table) or simply processes it and pushes the results back on another queue. It can also send messages out to external endpoints.

 

All components run under partial trust (a variant similar to ASP.NET medium trust) so Azure developers will need some understanding of CAS.

 

The API for talking to Azure is REST-based and can be wrapped by ADO.NET Data Services where appropriate (e.g. for table storage)

 

To get started you need an account provisioned (to get space reserved in the datacenter). You can do this via http://www.azure.com. There are other services built on top of Azure (which I will cover in subsequent posts) that get provisioned in the same place.

 

There is an SDK and VS2008 SP1 integration. This brings a development fabric to your machine that has the same services available as the cloud-based one, so you can test without deploying into the cloud. There are also VS project templates which in fact create multiple projects: the application one(s) and another to specify the deployment configuration.

 

So where does that leave Microsoft? They have created an offering that in its totality (which is much more than I have talked about here) goes beyond what both Amazon and Google have created in terms of functionality. But they are left with two issues:

1) Will companies trust Microsoft to run their business-critical applications? Some definitely will, but others will reserve judgement until they have seen in practice that this infrastructure is sound. Microsoft say the contract will also include an SLA with financial penalty clauses, in some currently unspecified form, if they fail to fulfil it.

2) Microsoft have not yet announced any pricing model. This leaves companies in a difficult position – do they throw resources at a project with a lot of potential and bear the risk that when Microsoft unveil the pricing their application is not economically viable? Or do they wait to start investing in this technology until Microsoft announce pricing? Unfortunately this is a chicken-and-egg situation – Microsoft cannot go live commercially until the infrastructure has been proven in practice by companies creating serious apps on it, and yet they do not want to announce pricing until they are ready for commercial release. Hopefully they will be prepared to discuss pricing, on a case-by-case basis before full commercial release, with any organization that is serious about building on the infrastructure.

 

Azure definitely has huge potential for companies that need a very flexible approach to scale, or that will require scale at some point that cannot yet be determined. It also brings some challenges for how you build applications - there are design constraints you have to cater for (for example, failure is a fact of life and you have to code to accept that).

 

Definitely interesting times

.NET | Azure | REST
Saturday, November 01, 2008 2:39:23 AM (GMT Standard Time, UTC+00:00)
# Monday, October 27, 2008
I'm sitting here in the PDC 08 Keynote. Ray Ozzie has just announced Windows Azure - a new Windows platform "in the cloud". In other words a set of services hosted in Microsoft's datacenters that you can deploy your apps into. As a platform it has a whole systems management side to it.

The service model uses services, endpoints - contracts ... seems familiar. You deploy the code and a model describing the app so the systems management can support your app.

Storage system is highly available with storage for blobs, tables and queues. Supports dynamic load balancing and caching

Azure development is done in Visual Studio and supports both managed and unmanaged code. New "cloud" project templates give you a pair of projects - one is a standard, familiar .NET project and the other is the configuration that describes the app.

The Azure portal lets you change the configuration dynamically to scale up as required. Currently you have to edit the XML but they will be providing a UI for the configuration.

This all looks pretty exciting - looking forward to getting hold of the bits tomorrow

.NET | Azure
Monday, October 27, 2008 4:24:25 PM (GMT Standard Time, UTC+00:00)
# Tuesday, September 16, 2008

I'm going to be doing a couple of sessions at the Oredev conference in Sweden in November

Writing REST based Systems with .NET

For many, building large-scale service-based systems equates to using SOAP. There is, however, another way to architect service-based systems: embracing the model the web uses - REpresentational State Transfer, or REST. .NET 3.5 introduced a way of building the service side with WCF - however, you can also use ASP.NET's infrastructure. In this session we talk about what REST is, two approaches to creating REST-based services and how you can consume these services very simply with LINQ to XML.
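
As a taste of the LINQ to XML part, here is a minimal sketch of consuming a REST endpoint; the URL and element names are invented.

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class RestClientSketch
{
    static void Main()
    {
        // Made-up endpoint returning a simple XML representation.
        // XDocument.Load issues the GET and parses the response in one call.
        XDocument doc = XDocument.Load("http://example.org/conferences/oredev/sessions");

        var titles = from session in doc.Descendants("session")
                     where (string)session.Attribute("track") == "REST"
                     select (string)session.Element("title");

        foreach (string title in titles)
        {
            Console.WriteLine(title);
        }
    }
}
```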

Writing Service Oriented Systems with WCF and Workflow

Since its launch, WCF has been Microsoft's premier infrastructure for writing SOA-based systems. However, one of the main benefits of service orientation is combining the functionality of services to create higher-order functionality which is itself exposed as a service - namely service composition. Workflow is a very descriptive way of showing how services are combined, and in .NET 3.5 Microsoft introduced an integration layer between WCF and Workflow to simplify the job of service composition. In this session we examine this infrastructure and bring out both its strong and weak points, with an eye to what is coming down the line in Project Oslo - Microsoft's next-generation SOA platform.

Hope to see you there

.NET | ASP.NET | LINQ | MVC | Oslo | REST | WCF | WF
Tuesday, September 16, 2008 1:45:26 PM (GMT Daylight Time, UTC+01:00)
# Monday, July 07, 2008

Shawn Wildermuth called me out to put my two-penneth together in this ongoing Meme. So ...

How old were you when you first started programming?

16. At school I’d just finished the compulsory part of my education and was moving on to do A levels. Me and a couple of friends found out there was this machine: a Research Machines 380Z in the building - so intrigued, we went to find it. Eventually we found the “computer room” housing the legendary RM 380Z and this other thing that looked like a lathe and apparently was used to consume punched cards. But the punch card lathe was not remotely interesting to us – the 380Z had a screen and keyboard.

How did you get started in programming?

There were other people in the room so we took it in turns to use this strange environment on there called BASIC:

10 PRINT "Richard"

20 GOTO 10

...

WOW!

That was so awesomely cool I started learning about all these other things – apparently GOTO wasn’t the only way I could make the program “move around”: GOSUB worked too – and I could get it to come back to where I’d called it from when it was finished too! At one point one of the guys (yes, we were all male) started talking about this thing called arrays, which apparently you could use to hold data, but that was just some weird voodoo magic in my opinion.

So I managed to persuade my parents to push the boat out and buy me my very own computer – a Sinclair ZX81. At Christmas I eagerly unwrapped it and plugged it into the TV. I wrote my first program on it:

10 PRINT "Richard"

20 GOTO 10

Look everyone – how cool is that! No one in the family apart from me seemed to think this was very interesting. Unfortunately I soon realized that 1KB was not a lot of room to write anything very interesting and we couldn’t afford the 16KB RAM pack – or the duct tape to stop it falling off the back of the machine. So I returned to my first love – the 380Z. By this time I’d managed to work out that BASIC was just something you loaded on like my programs and, more importantly, you could alter it. Ahh the fun I had swapping the RUN and NEW commands. Unfortunately my teacher failed to see the funny side when he spent an entire day typing in an economics simulation from a listing in a magazine and then tried to run it.

What was your first language?

Well as you can see BASIC was my first language. I tried to learn C using a Lattice C compiler on my Atari ST but I found Kernighan and Ritchie apparently less accessible than all my cool computer friends. Eventually at college I learned Fortran and then Pascal. And that was my programming life until I started my first real job in 1989.

What was the first real program you wrote?

I started working for a bank in Sheffield. I sat there with a senior programmer (wow what a job that was to aspire to) while he showed me the line number and wrote down the code that I had to enter in an Algol (yes Algol) program. I had to change a limit from £4000 to £5000. I nervously started up a text editor and made the change. Before I committed the change the senior programmer looked over at the code and gave it his approval. I worked on the interactive branch office system as a programmer and systems tester, learning COBOL along the way, for about a year. I finally realised that with many years ahead of me in this industry I should probably try to get myself into the bleeding edge and so found myself programming C on OS/2 (yes OS/2).

What languages have you used since you started programming?

So I started with BASIC, then Fortran and Pascal. From there I learned 3 flavours of Algol and COBOL along with some Paradox along the way. I then started using C on OS/2 and then switched to C++ on Windows. I spent a long time in C++ and Windows – also using VB3, 4, 5 and 6. I learned TSQL so I guess that counts too. At one point I learned a strange little language called JADE and also VBScript and Javascript. I dabbled a little with Java but finally found my spiritual home with C#. I can write VB.NET if I really have to and have played with Ruby (emphasis on played) – oh and I mustn’t forget LOLCode.NET.

What was your first professional programming gig?

I think I answered this one above

If you knew then what you know now, would you have started programming?

Absolutely – but I’d have skipped a couple of the jobs along the way

If there is one thing you learned along the way that you would tell new developers, what would it be?

When you tell a project manager how long something is going to take they really do not believe the figures you give them. Their job is not to plan the actual project, it’s to plan the project they think their manager will approve.

Oh actually here’s a second one: UML is a tool not a way of life

Oh and a 3rd: a team of 8 very good programmers will outperform any team of 20 programmers no matter how good some of them are.

What’s the most fun you’ve ever had … programming?

Ahh this one is really difficult. I worked with some great people on the National Police systems buried in hardcore ATL. Moving from a project of 100 people to a team of 3 for my next contract was mindblowing in terms of how simple life could be if you wanted to get something done. But the most fun has been hacking together demos in the middle of a Guerrilla.NET course with the other instructors to show some stuff we’d just discovered or decided would be compelling. Last week it was building a Silverlight app that consumed a WCF REST-based service to reproduce the “type the alphabet” game that seems to be going around, with a high-score table that all the students could play.

So who's next?

I nominate:

· Dominick Baier

· Andy Clymer

· Christian Weyer

· Marvin Smit

· Mark Fussell

Monday, July 07, 2008 11:34:06 AM (GMT Daylight Time, UTC+01:00)