Unity container comparison

If you recall, many moons ago I posted a series of articles on the Castle Project's IoC container "Windsor", teaching the fundamentals of IoC with a practical bent - lots of people liked them, and I still get feedback every now and then from people starting to use Windsor and finding them useful.

At any rate, Michael McGuire was one such person who read those tutorials a year or so ago, and he has now started a series of his own - mirroring my Castle container tutorials, but with the P&P Unity container instead - you can find it here.

As someone who has given Unity not much more than a brief skim, it's a nice way to quickly get up to speed on some of the key differences.

So far, after reading a couple of articles, I've learnt:

  • You need to implement your own type converters for things like arrays or dictionaries in configuration.
  • Configuration syntax is not particularly human-friendly, and is obviously designed for management via a tool - it requires entering full type names all over the place, like "Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration" - just to register a component!
  • Default lifestyle is transient... hmmm... personally I think singleton is more often the norm for me when writing applications, but it really depends on how the container is being used/abused I guess (see the sketch after this list).
  • Support for multiple configurations looks a little more baked in - but this is trivial stuff to implement in most containers.
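
For a feel of what that lifestyle difference looks like in code, here's a minimal sketch against the Unity API (IFoo/Foo and IBar/Bar are placeholder types of my own):

using Microsoft.Practices.Unity;

public interface IFoo { }
public class Foo : IFoo { }
public interface IBar { }
public class Bar : IBar { }

public class LifestyleDemo
{
    public static void Main()
    {
        IUnityContainer container = new UnityContainer();

        // default lifetime is transient - every Resolve<IFoo>() builds a new instance
        container.RegisterType<IFoo, Foo>();

        // singleton behaviour is opt-in, via an explicit lifetime manager
        container.RegisterType<IBar, Bar>(new ContainerControlledLifetimeManager());

        bool different = !ReferenceEquals(container.Resolve<IFoo>(), container.Resolve<IFoo>()); // true
        bool same = ReferenceEquals(container.Resolve<IBar>(), container.Resolve<IBar>());       // true
    }
}

(In Windsor, by contrast, leaving the lifestyle unspecified gets you a singleton.)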


I'll be interested to see how decorator chains etc. are implemented in Unity.

Good work Michael.


Architecture Chat #28

Five people turned up this week.

Peter kicked off the discussion with a review of the Agricultural Field days and the disheartening lack of anything IT-related there; this sparked an interesting discussion around what's holding back the adoption of technologies such as RFID for animal identification, and some possible inhibiting factors, like the cost to early adopters, education, etc.

After this we returned to more mundane things... first off we discussed Velocity a bit, comparing it to memcached, and covered some of its interesting features, like tagging, as well as the current lack of push functionality in the CTP.


Silverlight 2 beta 2 was next... we talked about the new Visual State Manager and designer integration with Expression Blend. I noticed after the chat that Ivan has posted an interesting discussion around why he believes the Visual State Manager isn't a great idea - during the chat we did puzzle a little over why Silverlight is diverging from WPF, and just how cross-pollination between WPF and Silverlight will occur.

Other things that interested us about the Silverlight 2 beta 2 release were inking & stylus support (and incidentally, second-hand tablet PCs are becoming dirt cheap, so there's no excuse not to have one lying on your desk!).

There's also multi-tile source support, which could prove interesting for providing information generated on the fly, or integrated with existing GIS sources etc.

Cross-domain support, background thread support for networking, and duplex WCF communications came up too - I could see these providing interesting possibilities, e.g. a Silverlight control that makes the web client a temporary member of a grid network, perhaps distributed virally as a Facebook app. Not to mention the more mundane business applications.

After talking Silverlight for a while, Jamie mentioned the OAuth library I'd written - so I went through what OAuth is/does vs. OpenID (there seems to be a bit of confusion in some people's minds about what each of these projects aims to achieve), then what's been implemented, and what is yet to come - for more info on the OAuth library check out this wiki page.

A rambling discussion was sparked off by Peter mentioning IBM having broken the "petaflop barrier", and the gradual approach towards a platform for an accurate simulation of the human brain. I made some references to "I Am a Strange Loop", and everyone talked about the general difficulties with artificial intelligence and the current predictions regarding when computers will have enough horsepower to emulate brain function.

Thanks all for coming - see you all in a couple of weeks (Thursday 26th June).


Splicer 1.0 released.


Version 1.0.0.0 of Splicer (the little video/audio composition library that leverages DirectShow, which I started a few years ago) is now available on CodePlex here. This marks a milestone in stability, and probably the main "feature" of this release is 64-bit support - something that's been bugging me for ages, as I could only work on the project in a VM!

A quick list of changes since the last release:

  • Now uses DirectShow.Net 2.0 (thanks to felix, a fellow NZ'r).
  • RenderProgress event.
  • Renderers are disposable.
  • Support for 64bit operating systems.
  • Vista fixes/support.
  • Additional samples (i.e. SampleTimeWatermarkParticipant, and a few others).
  • Tests updated for NUnit 2.4.7.
  • Solution upgraded to VS2008.

What's Splicer?


With this library and a little imagination you can:
  • Encode video or audio suitable for use on a website.
  • Create slide shows from images, videos and audio.
  • Apply effects and transitions to audio and video.
  • Grab System.Drawing.Image clips from a video at certain times.
  • Modify individual video frames during encoding via standard C# System.Drawing code.
  • Add new soundtracks to existing video clips.
  • Watermark videos.
  • Build a video editing suite, if you were so inclined.
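
To give a taste of the API, here's a minimal sketch that takes the first couple of seconds of an existing clip and renders it out to WMV (written from memory, so treat the exact overloads and profile names as approximate):

using Splicer.Renderer;
using Splicer.Timeline;

public class TrimExample
{
    public static void Main()
    {
        using (ITimeline timeline = new DefaultTimeline())
        {
            // a 32bpp, 320x240 video group, with a single track
            IGroup group = timeline.AddVideoGroup(32, 320, 240);
            ITrack track = group.AddTrack();

            // take the first 2 seconds of the source clip
            track.AddClip("source.avi", GroupMediaType.Video, InsertPosition.Relative, 0, 0, 2);

            // render the timeline out to a WMV file
            using (IRenderer renderer = new WindowsMediaRenderer(timeline, "output.wmv",
                WindowsMediaProfiles.HighQualityVideo))
            {
                renderer.Render();
            }
        }
    }
}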


OAuth for Beginners


For those unfamiliar with OAuth, here's a very short run-down... I'm skipping over some of the details, but I think this should give you a taste of what it's all about - for a more well-rounded introduction, check out this article on the OAuth.net website.

The participants


Consumer - "weitu.googlepages.com" - the application that wants to see protected information the provider holds for a user.
Provider - "google.com" - the keeper of a user's protected information.
User - a user who stores protected information with the provider (say, contacts in Gmail).

The goal


To allow the user to give a consumer access to their data on the provider, without the user having to disclose their credentials (username & password), and to allow fine-grained control over the access granted to an individual consumer - i.e. putting power in the hands of the user to revoke access when they want to, with it only affecting that one consumer.

A consumer needs to be known to a provider before they can request a token.

How it works


(For this example we'll use google; for more info on the google implementation, see this thread.)

The provider publishes 3 Urls for their service and documents them on their site somewhere:

  • the Request Token Url
  • the User Authorize Url
  • the Access Token Url

The consumer is known to google by its consumer key (which in the case of a google API is normally a host address, like www.test.com), and this relationship is established in a proprietary manner (i.e. it's not covered by the OAuth spec).

Getting a Request Token


To start the ball rolling, the consumer makes a request to the Request Token Url, and gets back some form-encoded parameters in the body of the response, containing the token information.

As an example, here's an http request to get a new request token:

GET /accounts/OAuthGetRequestToken?
scope=http%3A%2F%2Fwww.google.com%2Fm8%2Ffeeds
&oauth_nonce=759437c3-3edf-4098-ac14-58d4f162b0e6
&oauth_consumer_key=weitu.googlepages.com
&oauth_signature_method=RSA-SHA1
&oauth_timestamp=1213129078
&oauth_version=1.0
&oauth_token=
&oauth_signature=peUZigwq1BLs%2Bb721vcct2vzA3Odk1j... HTTP/1.1
Host: www.google.com
Connection: Keep-Alive


And here's the response:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Date: Tue, 10 Jun 2008 20:18:01 GMT
Expires: Tue, 10 Jun 2008 20:18:01 GMT
Cache-Control: private, max-age=0
Content-Length: 51
Server: GFE/1.3

oauth_token=CMiJx-LdFxD56bOXAQ&oauth_token_secret=
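
The body is just form-encoded name/value pairs, so on the consumer side extracting the token is trivial - a quick sketch, leaning on System.Web's query-string parser for brevity:

using System.Collections.Specialized;
using System.Web;

public class ParseExample
{
    public static void Main()
    {
        // the body of the response above
        string body = "oauth_token=CMiJx-LdFxD56bOXAQ&oauth_token_secret=";

        NameValueCollection values = HttpUtility.ParseQueryString(body);
        string requestToken = values["oauth_token"];       // CMiJx-LdFxD56bOXAQ
        string tokenSecret = values["oauth_token_secret"]; // empty in this example
    }
}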

Notice the oauth_signature and the other oauth_ parameters - the OAuth core specification requires that requests be "signed", so that a provider can ensure they haven't been tampered with - this is one of the aspects my library will take care of for you (signing and verifying requests).
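
To give a feel for what signing involves: the request is reduced to a "signature base string" (HTTP method, URL and sorted parameters, each percent-encoded per the spec), which is then signed. The google example above uses RSA-SHA1, but here's a simplified sketch of the HMAC-SHA1 case, glossing over the percent-encoding details a real implementation must get right:

using System;
using System.Security.Cryptography;
using System.Text;

public static class OAuthSigning
{
    // baseString is "METHOD&encoded-url&encoded-sorted-parameters", per the OAuth Core spec
    public static string SignHmacSha1(string baseString, string consumerSecret, string tokenSecret)
    {
        // the HMAC key is "consumerSecret&tokenSecret" (both percent-encoded in a full implementation)
        byte[] key = Encoding.ASCII.GetBytes(consumerSecret + "&" + tokenSecret);
        using (HMACSHA1 hmac = new HMACSHA1(key))
        {
            byte[] hash = hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString));
            return Convert.ToBase64String(hash); // this value becomes the oauth_signature parameter
        }
    }
}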

User Authorization


At this point the consumer needs to send the user off to the provider's site - this involves the second of the 3 Urls, the User Authorize Url... we just append the scope (required by google, it identifies the service you wish to access - not part of the OAuth spec itself) and the request token (CMiJx-LdFxD56bOXAQ).

Note that the User Authorize Url request isn't signed like the other requests... this is because this step may be manual, i.e. a user typing or copying a link into their browser or some handheld device.

GET /accounts/OAuthAuthorizeToken?
scope=http://www.google.com/m8/feeds
&oauth_token=CMiJx-LdFxD56bOXAQ HTTP/1.1

In this case, google takes us to a universal login page; once authenticated, it then takes us to a page where we can authorize the consumer to have access.

Once access is granted, the consumer can then use the last of the 3 Urls, the Access Token Url, to exchange their request token for an access token. Upon granting access a few things should happen:

  • An access token should be created.
  • The access token should be related to the request token.
  • The currently logged in user should be associated with the access token.


The last point is important - because you're passing tokens around, rather than account names, you need the provider implementation to record the association between the access token and the user granting access - and it should be easy for your API implementation to fetch the associated user when a protected resource is accessed.
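
A provider could capture that association with something as simple as the following - a hypothetical in-memory sketch (a real provider would obviously want persistence, expiry and thread-safety):

using System;
using System.Collections.Generic;

// hypothetical provider-side bookkeeping for issued tokens
public class TokenStore
{
    private readonly Dictionary<string, string> accessTokenToUser = new Dictionary<string, string>();

    // called when the logged-in user grants access for a request token
    // (relating the access token back to the request token is elided here)
    public string GrantAccess(string requestToken, string userId)
    {
        string accessToken = Guid.NewGuid().ToString("N"); // new access token for this grant
        accessTokenToUser[accessToken] = userId;
        return accessToken;
    }

    // called by the API implementation when a protected resource is requested
    public string GetUserForToken(string accessToken)
    {
        string userId;
        return accessTokenToUser.TryGetValue(accessToken, out userId) ? userId : null;
    }
}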

Exchanging Tokens


Once the user has authorized the consumer's access request, the consumer can then exchange their request token for an access token - generally a request token can only be used once, so if the request failed for some reason they would need to start the authorization process again from scratch.

Here's the HTTP request for exchanging tokens:

GET /accounts/OAuthGetAccessToken?
scope=http%3A%2F%2Fwww.google.com%2Fm8%2Ffeeds
&oauth_token=CMiJx-LdFxD56bOXAQ
&oauth_nonce=19fe6f62-8b2c-4a40-b055-210d279ba770
&oauth_consumer_key=weitu.googlepages.com
&oauth_signature_method=RSA-SHA1
&oauth_timestamp=1213129477
&oauth_version=1.0
&oauth_signature=hagokrS1W%2BcBXdRwTIlOd84PSO56OT... HTTP/1.1
Host: www.google.com


And the corresponding response from the google server:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Date: Tue, 10 Jun 2008 20:24:39 GMT
Expires: Tue, 10 Jun 2008 20:24:39 GMT
Cache-Control: private, max-age=0
Content-Length: 57
Server: GFE/1.3

oauth_token=CNO384n8BRD6pZTT_P____8B&oauth_token_secret=

Accessing a Protected Resource


Now that the consumer has an access token, they can make requests for protected resources - they just need to use the access token. Here's an example of doing just that:
GET /m8/feeds/contacts/default/base?
scope=http%3A%2F%2Fwww.google.com%2Fm8%2Ffeeds
&oauth_token=CNO384n8BRD6pZTT_P____8B
&oauth_nonce=3ae44855-9d27-4b80-8b4f-2f68d1531657
&oauth_consumer_key=weitu.googlepages.com
&oauth_signature_method=RSA-SHA1
&oauth_timestamp=1213129479
&oauth_version=1.0
&oauth_signature=kTFRbcD1IKzjPADfgF%2B3... HTTP/1.1
Host: www.google.com


Obviously, once the request has been validated (i.e. valid signature, valid token, valid timestamp range, nonce is unique, etc.) the provider implementation needs to fetch the user associated with the access token, so it can return the correct data to the consumer - normally you would want to automatically associate the token's user with the current request / controller / channel, so that OAuth is basically transparent (i.e. it's just like getting a request from a user who has authenticated normally).
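
Sketching those provider-side checks in C# (ValidateSignature and FetchContactsFor are stand-ins for real implementations, and TokenStore is the hypothetical store from the earlier sketch):

using System;
using System.Collections.Generic;

// hypothetical shape of an incoming, already-parsed OAuth request
public class OAuthRequest
{
    public string Signature;
    public string Nonce;
    public string Token;
    public long Timestamp;
}

public class ProtectedResourceHandler
{
    private readonly HashSet<string> seenNonces = new HashSet<string>();
    private readonly TokenStore tokenStore = new TokenStore();

    public string Handle(OAuthRequest request)
    {
        if (!ValidateSignature(request))
            throw new UnauthorizedAccessException("bad signature");
        if (Math.Abs(UnixNow() - request.Timestamp) > 300) // e.g. a 5-minute window
            throw new UnauthorizedAccessException("stale timestamp");
        if (!seenNonces.Add(request.Nonce)) // each nonce may only be seen once
            throw new UnauthorizedAccessException("nonce replayed");

        string userId = tokenStore.GetUserForToken(request.Token);
        if (userId == null)
            throw new UnauthorizedAccessException("unknown token");

        // from here on it's just like a normally-authenticated request for userId
        return FetchContactsFor(userId);
    }

    private bool ValidateSignature(OAuthRequest request) { /* verify per the signature method */ return true; }
    private string FetchContactsFor(string userId) { /* application-specific */ return ""; }

    private static long UnixNow()
    {
        return (long)(DateTime.UtcNow - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds;
    }
}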

Risks & Issues


One obvious risk is that of phishing... if the consumer sends you to a site that looks like google's authentication page, but isn't google, then you're in trouble. Of course this kind of phishing is more a general problem than something isolated to OAuth.

Another potential risk is that some signature methods are risky/flawed for the consumer depending on the implementation, i.e. if you have a flickr uploader winforms application and you use RSA-SHA1, the uploader will need to ship with the x509 certificate (including the private key) in the application... this basically invalidates the strength of that certificate, because anyone could extract and use the private key themselves (so it's as bad as a plain-text signature) - on the flip side, for a website RSA-SHA1 is very strong, because the private key is kept private.
