CORS and WebAPI

Introduction

CORS (Cross-Origin Resource Sharing) is a mechanism that allows a browser to make a request to a web server other than the one that served up the original resource.

As per the Mozilla docs:

The CORS specification mandates that requests that use methods other than POST or GET, or that use custom headers, or request bodies other than text/plain, are preflighted. A preflighted request first sends the OPTIONS header to the resource on the other domain, to check and see if the actual request is safe to send. This capability is currently not supported by IE8's XDomainRequest object, but is supported by Firefox 3.5 and Safari 4 with XMLHttpRequest. The web developer does not need to worry about the mechanics of preflighting, since the implementation handles that.

You can achieve this using a DelegatingHandler in ASP.Net Web API - the handler works as follows:

  • Identify requests containing the "Origin" header
  • If the request is NOT of HTTP method "OPTIONS" (i.e. no preflight), then we add the following headers to the response:
    • Access-Control-Allow-Origin - this will have the same value as the Origin value passed in the request.
    • Access-Control-Allow-Credentials - this will have the value true, allowing requests to contain basic-auth credentials


  • If the request is of HTTP method "OPTIONS", then we treat this as a "pre-flighting" request, responding with a 200-OK response, and returning the following headers in the response:
    • Access-Control-Allow-Origin - this will have the same value as the Origin value passed in the request header.
    • Access-Control-Allow-Credentials - this will have the value true, allowing requests to contain basic-auth credentials
    • Access-Control-Allow-Methods - this will have the same value as the Access-Control-Request-Method value passed in the request header.
    • Access-Control-Allow-Headers - this will have the same value as the Access-Control-Request-Headers value passed in the request header.



And here's the code to achieve that:

public class CORSHandler : DelegatingHandler
{
    const string Origin = "Origin";
    const string AccessControlRequestMethod = "Access-Control-Request-Method";
    const string AccessControlRequestHeaders = "Access-Control-Request-Headers";
    const string AccessControlAllowOrigin = "Access-Control-Allow-Origin";
    const string AccessControlAllowMethods = "Access-Control-Allow-Methods";
    const string AccessControlAllowHeaders = "Access-Control-Allow-Headers";
    const string AccessControlAllowCredentials = "Access-Control-Allow-Credentials";

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        bool isCorsRequest = request.Headers.Contains(Origin);
        bool isPreflightRequest = request.Method == HttpMethod.Options;

        if (isCorsRequest)
        {
            if (isPreflightRequest)
            {
                var response = new HttpResponseMessage(HttpStatusCode.OK);
                response.Headers.Add(AccessControlAllowOrigin,
                    request.Headers.GetValues(Origin).First());

                string accessControlRequestMethod =
                    request.Headers.GetValues(AccessControlRequestMethod).FirstOrDefault();

                if (accessControlRequestMethod != null)
                {
                    response.Headers.Add(AccessControlAllowMethods, accessControlRequestMethod);
                }

                string requestedHeaders = string.Join(", ",
                    request.Headers.GetValues(AccessControlRequestHeaders));

                if (!string.IsNullOrEmpty(requestedHeaders))
                {
                    response.Headers.Add(AccessControlAllowHeaders, requestedHeaders);
                }

                response.Headers.Add(AccessControlAllowCredentials, "true");

                var tcs = new TaskCompletionSource<HttpResponseMessage>();
                tcs.SetResult(response);
                return tcs.Task;
            }

            return base.SendAsync(request, cancellationToken).ContinueWith(t =>
            {
                HttpResponseMessage resp = t.Result;
                resp.Headers.Add(AccessControlAllowOrigin, request.Headers.GetValues(Origin).First());
                resp.Headers.Add(AccessControlAllowCredentials, "true");
                return resp;
            });
        }

        return base.SendAsync(request, cancellationToken);
    }
}

If using basic auth, it's worth noting that the preflight request is unauthenticated - so add the CORSHandler to your configuration's set of MessageHandlers before any authentication handlers, so that OPTIONS requests can be handled correctly.
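Handler ordering is easiest to see in the configuration itself. A minimal sketch - the BasicAuthMessageHandler name here is a placeholder for whatever authentication handler your application actually uses:

```csharp
// Sketch only - BasicAuthMessageHandler is hypothetical.
// Message handlers run in registration order on the way in, so the
// CORSHandler sees (and can answer) the unauthenticated OPTIONS preflight
// before authentication runs.
GlobalConfiguration.Configuration.MessageHandlers.Add(new CORSHandler());
GlobalConfiguration.Configuration.MessageHandlers.Add(new BasicAuthMessageHandler());
```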

Access-Control-Allow-Credentials

In the handler above we enable Access-Control-Allow-Credentials - meaning that if the Web API supports basic auth, the browser can authenticate with the API using basic auth cross-origin. In a jQuery ajax request that would look like this:

$.ajax({
    url: "http://localhost/myapp/api/collectionres",
    type: "GET",
    username: "mylogin",
    password: "mypassword",
    data: "$top=10",
    xhrFields: {
        withCredentials: true
    },
    crossDomain: true,
    success: success
});

I definitely don't recommend this approach for production code - but it can make for a great way for people to play with an API programmatically in JavaScript, without having to resort to using Node (and using a tool like Dropbox allows them to privately "host" their HTML/JavaScript as well, making it possible to share and collaborate on simple mashups).


Web API Implementation - Final Thoughts

Ease of development

Moving to ASP.Net Web API has overall been a very pleasant experience - working in the framework has been pretty painless for us.

That said, moving from release to release has been quite a chore:

  • The move from WCF to ASP.Net MVC was quite jarring.
  • The move through the various beta builds was also painful; the removal of OData caused us quite a lot of rework (but ultimately for the best, I think).
  • The move to RTM, though, was almost completely without issue - even though the list of changes was quite long, they were largely additive and didn't really break anything we already had in place.

Ease of integration

We found it pretty easy to integrate the Web API parts into our plugin infrastructure. One area we did struggle with a little was getting the Monorail routing engine to dispatch the API requests correctly - I didn't cover that in this series as it's a fairly niche issue.

Also keep in mind that as I write this, the RTM of ASP.Net Web API has been available for less than a week - so we were certainly early adopters. Now that the API is stable for the RTM, and as progress continues on out-of-band support for OData, I think people starting new API projects will have a much easier time of it than some of the people who adopted this technology earlier in the release cycle.

Total development effort for the API has been in the order of 2 developer-months, which resulted in 92 controllers (at last count), automatically generated documentation, good test coverage and some simple extension points for 3rd party developers to use when extending the API to include features for their own plugins - I'm pretty pleased with that overall!

What's Next

Nothing is ever done, and we have plenty of ideas for features we want to add to the API in the future:

PATCH support

PATCH support - I have been looking with interest at the approach the latest OData Web API preview takes to PATCH, via a lightweight dynamic object representing the delta. We are also very interested in Matt Warren's Eval-based patching feature in RavenDB, which, combined with our TQL and OData based filtering, could provide a very powerful (though certainly not particularly RESTful) approach to bulk updates in Enterprise Tester.

Activity Stream API

Activity Streams - we have an activity stream implementation which supports plugins posting events to the activity streams of a user or set of users within the application - the next step would be to expose this via an API (likely following the http://activitystrea.ms specification).

Webhooks

Webhooks would support a number of useful scenarios for people integrating with the API - and we are also thinking about ways we could make it possible for people to compose multiple applications together via webhooks (there is a brief discussion on this topic here, with some good comments).

Custom Media Types

Embracing custom media types - so far we have been using application/json, which in effect enacts the anti-pattern of tunneling the real media type, relying on consumers of our APIs to refer to human-readable documentation (albeit nice human-readable documentation!) to determine the schema for those representations.

Shifting to custom media types would be a good move in this case - Programmable Web had a good post on this a while back - but it's likely not something we would do until we feel the API is stable.

Granular OAuth security

Currently the OAuth implementation is at a user level - but why give a consumer of an API access to features/data they don't need?

Security is currently context-specific, so this will likely take the form of an API consumer requesting a set of required permissions, along with an associated scope (project-specific or application-wide), as part of fetching the request token - with these scope restrictions then being applied to any security checks.

Implementing this delegation is actually likely to be less challenging than refactoring our permission structure to be granular enough to suit the requirements of an API, as opposed to the existing UI use cases.

Usage Metrics/Tracking

Anonymous usage metrics for the API would be very valuable when it comes to making decisions about changing aspects of the API.

Additionally, tracking usage (including the type of usage, IP address etc.) would be valuable, especially for instances of Enterprise Tester accessible externally on the internet (either hosted or self-hosting customers).

FYI - StrathWeb has a post on implementing this via two DelegatingHandlers, which might be a starting point for anyone looking to implement this themselves.

Wrapping Up

So this is where I end my series on the API development for Enterprise Tester - hopefully it has been interesting to at least a few of you embarking on implementing your own APIs for existing applications.

Describing the Experience

I think if I had to describe API development for a large existing application in one word, it would be "cathartic". It's an opportunity to revisit old code, review decisions you made in the past, and re-imagine what interacting with your application can be like. It's quite cleansing/refreshing, as it's not often you implement a feature that touches so many parts of an application at one time that isn't some death march of restructuring and refactoring to please the gods of static typing.

Resources

I found these blogs/websites particularly useful while working on the API:


Anatomy of an API plugin

The Enterprise Tester application is extensible - so not only the core plugin, but other plugins, need to contribute to the overall API exposed.

This wasn't particularly complex to implement - but we did have a few issues we had to circumvent.

Controller Registration

We took a pretty restrictive approach to registration of API controllers - plugin developers are able to register the controller along with a name and a route (optionally you could also specify defaults and constraints, but this generally wasn't necessary) - but we don't allow one route to service multiple controllers.

public class CoreRestResourcesPluginInstaller : AbstractPluginInstaller
{
    const string ApiPath = "api/";

    public override void Install(InstallationHelper helper)
    {
        helper.RegisterRestService("project", ApiPath + "project/{id}");
        helper.RegisterRestService("projects", ApiPath + "projects");
        ...
    }
}

This approach makes the process of writing the API for your plugin a lot easier in many ways, as you can be generally assured your plugin's routes won't clash with other plugins (and where a clash exists we can throw up a meaningful error/warning, describing where the conflict exists) - but it won't win any fans with the convention-over-configuration purist crew.

Notice also that registering a REST service does not restrict the route to starting with /api - we did originally enforce that convention, but then relaxed it because not all Web API controllers being registered would necessarily be part of the API for a plugin.

As a result of these decisions, you often end up with two controllers - one for the resource representing an individual entity, and another for the collection resource. There was an interesting discussion about this in February of this year between Rob Conery, some of the more zealous members of the REST community and eventually Glenn Block - which is well worth a read.

In a future post I'm going to cover how I demo REST APIs to customers (and strongly encourage feedback if anybody knows better ways to do this - short of writing sample clients for your API) - as this can be another one of those exercises where going through the process of demonstrating the API interactively (even with non-developers) highlights issues with your API design you won't necessarily discover through testing.

Delayed Registration

One of the other challenges we faced is that in our implementation we expose the methods for registering REST services as part of our plugin framework (so it's part of the core) - but our REST framework (which lives in its own plugin) takes dependencies on all sorts of things, such as our OAuth, Search and Custom Field plugins - and some of those plugins may even want to register their own API controllers (causing circular reference issues).

To get around this, the RegisterRestService extension method adds the registration to a static "ServiceManager" instance, which then either registers the route immediately if the REST infrastructure is in place, or collects the registration information until the REST infrastructure becomes ready as part of the application start-up process.

public static class RestServiceRegistrationExtensions
{
    public static InstallationHelper RegisterRestService<T>(this InstallationHelper helper, string name, string routeTemplate, object defaults = null, object constraints = null)
        where T : class
    {
        helper.Register(Component.For<T>().LifeStyle.Transient);

        ServiceManager.Instance.RegisterService<T>(name, routeTemplate, defaults, constraints);

        return helper;
    }
}

Here's the ServiceManager:

public class ServiceManager : IServiceManager
{
    static IServiceManager _serviceManager = new ServiceManager();
    readonly IList<ServiceMetadata> _services = new List<ServiceMetadata>();
    Action<ServiceMetadata> _callback;

    public static IServiceManager Instance
    {
        get { return _serviceManager; }
    }

    public void RegisterService<T>(string name, string routeTemplate, object defaults = null, object constraints = null)
    {
        var metadata = new ServiceMetadata { RouteTemplate = routeTemplate, ControllerType = typeof(T), Defaults = defaults, Constraints = constraints, Name = name };

        _services.Add(metadata);

        if (_callback != null)
        {
            _callback(metadata);
        }
    }

    public IList<ServiceMetadata> GetAllServices()
    {
        return _services;
    }

    public void SetServiceRegisteredCallback(Action<ServiceMetadata> callback)
    {
        if (callback == null) throw new ArgumentNullException("callback");

        if (_callback == null)
        {
            foreach (ServiceMetadata service in _services) callback(service);
        }

        _callback = callback;
    }

    public static void Reset()
    {
        _serviceManager = new ServiceManager();
    }
}

Nothing particularly exciting here, but it's something to consider if you're trying to support Web API in a pluggable application. Also notice the Reset() method, which is necessary to support the end-to-end tests.

Mapping Installers

As well as controllers, we have classes which register any necessary functionality against the ViewModelMapper - we normally have one of these per resource type.

Instances of the mapping installer are automatically registered into the IoC container, and implement an IStartable lifestyle in Windsor (so will be immediately created once all the dependencies are satisfied).

public class SomeMappingInstaller : AbstractViewModelMappingInstaller
{
    public override void Install(IViewModelMapper mapper)
    {
        ...
    }
}

This is necessary (rather than registering the mappings directly in the plugin installation method) because often the mapping logic will need access to other services to be able to complete an expansion from one entity type to another (in which case you would add a constructor where the necessary services could be injected upon creation of the mapping installer).
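To illustrate, here's a sketch of a mapping installer that takes a dependency via its constructor - the IUserRetriever service here is hypothetical, standing in for whatever service the expansion actually needs:

```csharp
// Sketch only - IUserRetriever is a hypothetical injected service.
public class ScriptAssignmentMappingInstaller : AbstractViewModelMappingInstaller
{
    readonly IUserRetriever _userRetriever;

    // Windsor satisfies this dependency before the IStartable instance is created
    public ScriptAssignmentMappingInstaller(IUserRetriever userRetriever)
    {
        _userRetriever = userRetriever;
    }

    public override void Install(IViewModelMapper mapper)
    {
        // register mappings here, using _userRetriever inside expansions
        ...
    }
}
```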

And that's it...

As you can see we have tried to keep the number of moving parts to a minimum when adding an API controller to a plugin for Enterprise Tester as a 3rd party developer.

The other desirable trait that comes from this minimalism is that 3rd party developers can't introduce side-effects into other API controllers accidentally (i.e. by adding a new DelegatingHandler, or removing an existing one - such as Authentication).

Next

Next in part 9 of this series (the final part) I give my thoughts on how developing the API using ASP.Net Web API went, including the experience of transitioning from preview/RC/beta bits and the recent upgrade to RTM, plus what we didn't get to implement / will be implementing in the future for the API in Enterprise Tester.


WebAPI Testing

In the past...

Before Web API we were implementing APIs using the Monorail MVC framework (it was adopted in the application long before ASP.Net MVC had established a comparable set of features).

Monorail is very test friendly, but generally speaking our test approach was one of:

  • Constructing a controller by hand, or via some auto-registering IoC container
  • Stubbing/mocking out the necessary mechanics of Monorail
  • Invoking the action methods directly, then checking the returned values and the state of the controller

This lets you focus on testing the controller in isolation, but ignores all the mechanics such as routing, filters etc.

End to end testing

When moving to Web API for the API implementation, we found it was in fact much easier to set up the entire pipeline (including all the DelegatingHandlers, routes etc.), execute a request and get a response. Here's the constructor for our base class for API tests:

Reset();

InitializeEnvironmentForPluginInstallation();

new RestAPIPluginInstaller().Install(helper);

new CoreRestResourcesPluginInstaller().Install(helper);

var host = IoC.Resolve();

jsonNetFormatter = host.JsonNetFormatter;

server = new HttpServer(host.Configuration);

client = new HttpClient(server);

And here's how a simple test looks:

[Fact]
public void Post_for_existing_package_throws_forbidden()
{
    var model = new CreateOrUpdateScriptPackageModel
    {
        Id = new Guid("FBA8F2E7-43E8-417E-AF4E-ADA7A4CF7A9E"),
        Name = "My Package"
    };

    HttpRequestMessage request = CreateRequest("api/scriptpackages", "application/json", HttpMethod.Post, model);

    using (HttpResponseMessage response = client.SendAsync(request).Result)
    {
        Assert.Equal(HttpStatusCode.Forbidden, response.StatusCode);
        Assert.Equal("POST can not be used for updates.", GetJsonMessage(response));
    }
}

The GetJsonMessage method in this case just extracts the error information from the JSON response.
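Its implementation isn't shown here, but as a rough sketch using Json.NET - assuming the error body carries a top-level "Message" property, which is an assumption of this example:

```csharp
// Sketch only - assumes the error body looks like { "Message": "..." }.
protected string GetJsonMessage(HttpResponseMessage response)
{
    string body = response.Content.ReadAsStringAsync().Result;
    JObject parsed = JObject.Parse(body); // Newtonsoft.Json.Linq
    return (string) parsed["Message"];
}
```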

For tests returning full responses we used ApprovalTests with the DiffReporter - this proved incredibly productive.

[Fact]
[UseReporter(typeof (DiffReporter))]
public void Post_script_package_for_project()
{
    var project = new Project {Id = new Guid("59C1A577-2248-4F73-B55E-A778251E702B")};

    UnitOfWork.CurrentSession.Stub(stub => stub.Get(project.Id)).Return(project);

    authorizationService.Stub(stub => stub.HasOperations((TestScriptPackage) null, CoreOperations.Instance.TestManagement.ManageScripts)).IgnoreArguments().Return(true);

    var model = new CreateOrUpdateScriptPackageModel
    {
        Name = "My Package",
        ProjectId = project.Id
    };

    HttpRequestMessage request = CreateRequest("api/scriptpackages", "application/json", HttpMethod.Post, model);

    using (HttpResponseMessage response = client.SendAsync(request).Result)
    {
        Assert.Equal(HttpStatusCode.Created, response.StatusCode);
        Assert.Equal("application/json", response.Content.Headers.ContentType.MediaType);
        Approvals.Verify(response.Content.ReadAsStringAsync().Result);
    }
}

If you have not used ApprovalTests before, the magic occurs here:

Approvals.Verify(response.Content.ReadAsStringAsync().Result);

This gets the content of the response (JSON) as a string and then checks to see if it matches our "golden master" - if it does not, you are shown a merge UI comparing the results of the current test to the golden master.

At this point you can:

  • Accept all the changes.
  • Fix what's broken and run the test again.

For this to work well you need to render your JSON with indentation enabled - and you need to ensure that however your serialization works, the order of the properties in the output is repeatable.

The JsonMediaTypeFormatter that ships with WebAPI has an Indent property you can force to true for your testing in this case (we also have it wired up for debug builds).
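Wiring that up is a one-liner against the configuration - a sketch:

```csharp
// Enable indented JSON output on the default formatter so approval-test
// diffs are readable and line-based.
JsonMediaTypeFormatter formatter = GlobalConfiguration.Configuration.Formatters.JsonFormatter;
formatter.Indent = true;
```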

I think what's great about this approach is:

  • It's really easy
  • You catch errors you might miss if just checking parts of your response for consistency
  • You are reading your JSON output constantly from your application - I find this process extremely helpful - a lot of issues that were not picked up during the initial implementation/design were uncovered just by reviewing how we presented our resources in JSON
  • Did I mention it's really easy?!

Authentication

The creation of a test request was handled by a few helper methods on the API tests base class.

protected HttpRequestMessage CreateRequest(string url, string mthv, HttpMethod method, User user = null)
{
    var request = new HttpRequestMessage();
    request.RequestUri = new Uri(_url + url);
    request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue(mthv));
    request.Method = method;
    request.Properties["user"] = (user ?? currentUser);

    return request;
}

protected HttpRequestMessage CreateRequest<T>(string url, string mthv, HttpMethod method, T content, MediaTypeFormatter formatter = null, User user = null) where T : class
{
    var request = new HttpRequestMessage();
    request.RequestUri = new Uri(_url + url);
    request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue(mthv));
    request.Method = method;
    request.Content = new ObjectContent<T>(content, (formatter ?? jsonNetFormatter));
    request.Properties["user"] = (user ?? currentUser);
    return request;
}

Notice we inject the user into the request's Properties collection; this allowed us to bypass the need to set up basic auth headers and avoid the additional overhead of mocking out the authentication of a user.

Is that it?

Pretty much - we certainly had more traditional unit tests for supporting parts of the API, such as the help generation, view model mapping and filters/delegating handlers - but they were very standard; the actual API testing was all done through the methods described above...

And I think that's great news! I've worked with other technologies in the past where you could dedicate a whole series of posts to mocking out different aspects of the underlying framework mechanics - but in the case of WebAPI there was no need, because it can be easily self-hosted without a whole lot of bother.

Next

Next in part 8 we take a look at the "anatomy of a plugin" - investigating how we implemented support for 3rd party developers to develop API controllers as part of a plugin for Enterprise Tester.


Long running tasks

Eventually most applications develop some mechanism for launching and tracking the progress of a task running asynchronously.

In Enterprise Tester this is normally seen as a progress dialog.

In the application this was handled in more than one way by different parts of the application, but through the API we saw an opportunity to unify these different methods.

API as plaster (spackle for Americans)

Applications grow and morph over time in often unforeseen ways, heading in directions you never originally imagined (and incidentally, this is part of the reason why our job as developers is so much fun).

The result of this is that you can often end up with multiple features over time, that at first seem very different, but at some point a perception-shift occurs and you realize in fact they are variations on the same feature.

At this point there's a strong desire to try to rectify the issue - but you're faced with some problems:

  • It's going to involve lots of work to align everything together.
  • Unless you plan to build further on this feature, it's difficult to justify any increase in value to the business.
  • If you are somewhat pragmatic, you may struggle to justify it internally as well.

But as an alternative to addressing the problem from the bottom up, when adding an API to your product you also have the option of addressing it at the API level - having the API take care of delegating to the appropriate implementation.

This is where the API then behaves as plaster, smoothing over the cracks and small imperfections in your implementation as it is exposed to the world of potential 3rd party developers.

But enough of the hypothetical - let's take a look at what we did for background tasks.

First we introduced a new layer of abstraction:

public interface IJobHandler : IModule
{
    string Key { get; }
    string Description { get; }
    string CreateJob(IDictionary parameters);
    ProgressReportDTO GetProgressReport(string jobId);
    bool CanHandle(string jobId);
}

This allowed a thin adapter to be created over the top of each background task implementation.

Next, in each implementation of this interface we created a composite key (under the hood most of the task implementations used a GUID identifier for tracking the progress of the job) which could be used to differentiate the IDs of jobs belonging to one handler from those of other handlers:

public bool CanHandle(string jobId)
{
    if (!jobId.StartsWith(_keyPrefix))
    {
        return false;
    }

    if (ExtractId(jobId) == null)
    {
        return false;
    }

    return true;
}

The key prefix also has the bonus of allowing our background tasks to be identified by something a little more meaningful than a GUID, i.e. "reindex_task_B55C4A97-9731-4907-AF8F-13BB10A01C3A" - a small change, but a pleasant one.
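The ExtractId method referenced above isn't shown; a plausible sketch, assuming (as the examples suggest) the composite key is simply the prefix followed by a GUID:

```csharp
// Sketch only - assumes jobId is _keyPrefix immediately followed by a GUID,
// e.g. "reindex_task_B55C4A97-9731-4907-AF8F-13BB10A01C3A".
Guid? ExtractId(string jobId)
{
    string candidate = jobId.Substring(_keyPrefix.Length);
    Guid parsed;
    return Guid.TryParse(candidate, out parsed) ? (Guid?) parsed : null;
}
```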

Last of all, each job implementation wraps the progress results from the underlying implementation, mapping them into a common DTO.

IDictionary AdditionalProperties { get; }
IList Links { get; }

Like we do with other models returned from the API, we leverage dictionaries and JSON rewriting to handle adding additional information to the progress results.

Controller

Just for the sake of completeness, here is the controller action we now use for creating a task:

public HttpResponseMessage Post(CreateBackgroundTask dto)
{
    IJobHandler handler = _registry.GetByKey(dto.Type);

    string jobId = handler.CreateJob(dto.Parameters);

    ProgressReportDTO reportDto = handler.GetProgressReport(jobId);

    ViewBackgroundTaskModel wrapped = _viewModelMapper.Map(reportDto, Expands);

    HttpResponseMessage response = Request.CreateResponse(HttpStatusCode.Created, wrapped);

    response.Headers.Location = new Uri(_urlTransformer.ToAbsolute(string.Format("~/api/backgroundtask/{0}", jobId)));

    return response;
}

When creating a new background task, we might get a result like this for example:

{
    "Complete": false,
    "TotalElements": 0,
    "ProcessedElements": 0,
    "StartedAt": "2012-08-06T11:28:45Z",
    "ProgressInPercent": 0.0,
    "Id": "ticketlinking_cba3035a-bf63-4006-89b1-b291aaac0460",
    "Message": null,
    "Self": "http://localhost/api/backgroundtask/ticketlinking_cba3035a-bf63-4006-89b1-b291aaac0460"
}

We can make additional GET requests to the Self URI to get progress updates; upon completion the response contains additional information (including, in this case, a link to a new resource that was created as part of the execution of this background task).

{
    "Complete": true,
    "StartedAt": "2012-08-06T11:39:45Z",
    "FinishedAt": "2012-08-06T11:39:53Z",
    "ProgressInPercent": 1.0,
    "Id": "ticketlinking_9b01796c-a9ae-40cb-a6ad-a802346c0c33",
    "Message": "Completed",
    "IncidentId": "029b2c43-38be-4c94-b547-a0a50185fb9e",
    "Self": "http://localhost/api/backgroundtask/ticketlinking_9b01796c-a9ae-40cb-a6ad-a802346c0c33",
    "Links": [
        {
            "Href": "http://localhost/api/incident/029b2c43-38be-4c94-b547-a0a50185fb9e",
            "Rel": "Incident"
        }
    ]
}
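A client consuming this can simply poll the Self URI until Complete is true - a hedged sketch (the one-second interval and Json.NET parsing are choices of this example, not part of the API):

```csharp
// Sketch only - polls the background task resource until it reports completion.
var client = new HttpClient();
string self = "http://localhost/api/backgroundtask/ticketlinking_9b01796c-a9ae-40cb-a6ad-a802346c0c33";

JObject progress;
do
{
    Thread.Sleep(1000); // wait between polls
    progress = JObject.Parse(client.GetStringAsync(self).Result); // Newtonsoft.Json.Linq
} while (!(bool) progress["Complete"]);
```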

What about SignalR?

Currently getting progress for a background task is done by polling the resource URL - we did investigate leveraging SignalR to make this work in a more real-time fashion, but struck a few issues:

  • Internally, the underlying sources of the progress information didn't support progress-change events - so we would still have had to poll internally.
  • Many of our clients would still end up polling, because it's simpler to implement.
  • The SignalR + WebAPI story wasn't very well developed - we did review the SignalR.AspNetWebApi project on GitHub, but it wasn't being updated at the same pace as the ASP.Net Web API preview releases were hitting GitHub.

We also investigated some other ideas - including PushStreamContent (which is now really easy to use in the RTM build of WebAPI) and trying to leverage WebBackgrounder (but that didn't really fit our needs).

Next

Next in part 7 we are going to take a look at the approach we took to testing our API (including end-to-end testing and Approval Tests).
