First Auckland Code Retreat




The first code retreat in NZ (AFAIK) was organised by Ian (@kiwipom) on Saturday October 27th, and facilitated by the one and only Corey Haines.

It was a great event - and I offer a big thanks to Ian, Corey and the sponsors, as well as the gracious hosts at the Bizdojo makerspace who diligently made coffee and kept us well fed.

The whole day was great, and a comment from fellow attendee Josh Robb in the closing circle rang very true with me: I came away pleasantly surprised at how much relevance the sessions had to my day-to-day work. Every time we commenced a new session with some different constraints, I found myself thinking of code I've written in the past where the same challenges were faced, or where problems such as primitive obsession perhaps led me away from discovering a far more suitable abstraction or approach to solving a problem.

Through the day I got to pair with people mostly in .Net (C#) - as well as with a Scala developer (though we paired using Java) - and a good mix of developers at different experience levels (at one point I even found myself giving a very brief crash course in lambdas to my pair, who I guess was probably stuck working with older versions of Visual Studio/legacy code bases).

Comfort

I think one thing that surprised me (but also, not really...) was how 'comfortable' I've become in the last 7 years, since I started working from home.

I'm not talking comfortable as in lazy / set in my ways - the last 7 years I've spent working from various home offices have probably been some of the busiest times of my life, with lots of challenging tasks. And I certainly keep myself busy talking to other developers, sharing experiences and trying to learn from others.

But what I mean is comfortable in my working environment...

Over the years I have slowly reduced my tech life down to:

  • A fast Desktop PC.
  • Lots of screen space (3 x 30" monitors + sometimes some additional smaller screens)
  • A good full-sized keyboard (currently using a Das Keyboard, though the previous Logitech G15 I had was also good for other reasons).
  • Full size mouse (or a mouse at all).
  • Windows.
  • Resharper
  • Coding largely in isolation.
  • Using a laptop probably once every couple of months.
  • Largely working on brownfield projects.

So when it came to pairing on laptops throughout the day (and deleting my code every 45 minutes) I found:

  • My muscle memory for different laptop keyboard layouts was non-existent.
  • I really struggled with laptop keyboard cramping/typing accuracy
  • My Mac keyboard knowledge is very poor - and the same goes for my knowledge of Eclipse shortcut keys.
  • I struggle to work around Visual Studio without ReSharper, and my ReSharper keyboard short cuts are somewhat non-standard.
  • I struggle with a single screen (I generally work with test + code side-by-side on one large screen, and the test runner on another).
  • I wasted time setting up test frameworks etc. unnecessarily (a force of habit coming from my brownfield background) - when I could just as easily have written a single-line Assert method that throws an exception and been done with it - and had a few extra minutes to focus on the actual problem.
  • I wasn't always great at articulating my ideas to other developers.
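That throwaway Assert really can be a one-liner. Here's a rough sketch (in Python for brevity - on the day this would have been C# - and the cell-liveness rule is just a hypothetical kata example, not code from the retreat):

```python
# A throwaway stand-in for a test framework during a short kata session:
# one helper that raises on failure is all you need.
def check(condition, message="check failed"):
    if not condition:
        raise AssertionError(message)

# Example: asserting against a hypothetical cell-liveness rule from a kata.
def alive_next(alive, neighbours):
    return neighbours == 3 or (alive and neighbours == 2)

check(alive_next(True, 2))
check(alive_next(False, 3))
check(not alive_next(False, 2))
```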

This all lent itself towards a feeling of general uneasiness. But the good, motivating kind, if you catch my drift. The up-side of the code-retreat session is that deleting your code after 45 minutes makes it easier to forget the uneasiness and clear your mind ready for the next session.

So What?

Well I think I came away overall with a stronger desire to start pairing with people more often. The 45 minute session format really surprised me - it may not be long enough to necessarily solve the problem, but it's a great length of time to learn something / hone your craft around a particular constraint.

And it's certainly pushed me to try and see if I can fit at least one session of pair programming into my weekly routine if at all possible.

One idea I have is perhaps pairing with somebody either immediately before or after the Architecture Chat that's run every second Thursday... so if anyone is in town and keen for a quick hour of pair programming, let me know and perhaps we can catch up :)


Demonstrating a REST API

Demonstrate all the things!


As you might have noticed, I have been doing a bit of work on APIs in the past couple of months - and one of the things that has evolved out of this work is my approach to demonstrating an API to an audience of developers and non-developers alike, for example during meetings to discuss progress etc.

I have tried a few things in the past, including cURL, Fiddler, JavaScript (with CORS) etc. - but have settled on the combination of Google Chrome + a Chrome browser add-in called "Advanced REST client", available from the Chrome Web Store for free:

Quick Tour

It's a pretty basic tool (also why I like it..)

The main interface is geared towards sending requests:

And here we can see the list of previously saved requests (this will include the raw request body, headers etc.)

And here an example of a POST request, with a JSON body:

So to recap, advantages this tool has over some of the alternatives I have tried:

  • Simple
  • Does not require admin access, or a specific Operating System (like, say, Fiddler) - though still no support for the Chrome Web Store on iPad unfortunately.
  • Pretty-printing of JSON results (but still being able to access the raw response)
  • History of requests (persisted between sessions transparently)
  • "linkifying" hyperlinks in responses
  • Ability to save requests (and keyboard shortcuts for features like Save)
  • Handles form encoding and mixed/multipart file uploads

And of course, like any tool, it's not perfect - in particular:

  • Sometimes automatic hyperlinking in responses does not work
  • It would be nice if you could click on a link in a response, and have it update the URL text field
  • Basic auth doesn't work using username:password in the URL most of the time; you need to set a header instead (which requires manually base64 encoding the username:password pair).
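If you do need to build that header by hand, the encoding step is just base64 over "username:password". A quick sketch (shown in Python; any language with a base64 library works, and the credentials here are obviously placeholders):

```python
import base64

def basic_auth_header(username, password):
    # The header value is "Basic " + base64("username:password").
    creds = base64.b64encode(("%s:%s" % (username, password)).encode("utf-8")).decode("ascii")
    return "Basic " + creds

# Paste the result into the Authorization header field of the REST client:
header = basic_auth_header("mylogin", "mypassword")
```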

But overall I'm pretty happy with it.

So, are demos important?


Yes!

Obviously a demo is great to get buy-in and upskill your whole team internally, or for getting customers up to speed on what's possible with your particular API. But even more so because an API is a feature that provides very little value unless people use it - even less than the 80% of the features in your app that are seldom used - as the cost of maintaining an API is high (once incorporated into your product, it becomes part of the work you do for every release).

Beyond the obvious though, the thing I have found most interesting is how demonstrating an API really focuses your attention on what's missing in your API from a usability perspective (especially with consideration to hypermedia) - I would rate this activity almost as high as attempting to develop a client for your API (another great way of finding missing features).

Give it a try, I think you will find it a really effective way to shake out bugs, oversights and general "cruftyness" issues in your API design.

Tell a Story


In demonstrating an API, the approach I use is to first whip up a quick narrative or story for the parts of the API I want to demonstrate (thanks sublime text distraction-free mode!)...

One approach to use when trying to pique the interest of a mixed audience is choosing a story which demonstrates things you can do via the API which are not possible or easy to achieve via the user interface - focus on points of difference, as opposed to what's similar, or appeal to past pain that an API might be able to resolve for them.

If you want to know after your demo whether you got the narrative right, just observe your audience... far-away looks in people's eyes and a lack of questions == a story they don't care about.

When doing this I try to avoid:

  • "Given When Then" - Given you want to create a widget, when the widget does not exist, then you POST to the widgets collection resource... it's not engaging or fun for people to listen to.
  • Skipping the steps to finding the resource you are going to work on - for example, if I'm going to update the groups a user belongs to, performing a GET request directly on a /user/{id}/groups collection resource is guaranteed to lose parts of your audience. Performing a search on /users (by name), then retrieving the individual user, and then traversing the rel=groups link from the user to fetch the groups, will be more familiar.
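That traversal is easy to show mechanically too. Assuming responses carry a links collection (the property names here are hypothetical, not from any specific API), following a rel is just a lookup - a sketch in Python:

```python
# Hypothetical representation shape: resources carry a "links" array of
# {"rel": ..., "href": ...} objects.
def follow(representation, rel):
    """Return the href of the first link with the given rel."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    raise KeyError("no link with rel=%r" % rel)

user = {
    "id": 42,
    "name": "jane",
    "links": [
        {"rel": "self", "href": "/users/42"},
        {"rel": "groups", "href": "/users/42/groups"},
    ],
}

follow(user, "groups")  # "/users/42/groups"
```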


And I also try to incorporate:
  • Switching back to the User Interface regularly (if applicable) - as you fetch resources, relating them back to the equivalent UI (if one exists) can keep people on track; it also leads to useful observations, e.g. "I can see this field in the UI, but I can't see it in the representation returned."
  • Selecting parts of a response (using your mouse to highlight), for example what might have changed as the result of a PUT request - especially for people who don't speak "JSON".
  • JSON lessons - people are familiar with XML, but don't assume the same of JSON. Take some time to explain how a value relates to a key, or what an array is. The pretty printing of JSON in the REST client helps here as well, because you can easily collapse parts of the JSON response, showing that all the items belong to an array etc.
  • HTTP Method refreshers - Keep explaining what PUT, POST and if you support it, PATCH do... it's not enough to just explain this once.
  • Relate what you are doing back to any API documentation you have (so for each request you end up flipping to the API help, then issue the request, review response, flip back to API help, look at response example, and then back into the UI).
  • Failed requests are important too, i.e. accessing a resource with missing/invalid query parameters, or missing parts in the request JSON - make these part of the story, and relate them back to your API documentation as well. And take a critical eye to anywhere you turn up a general 500 error - is it truly exceptional, or a common failure case that deserves a response guiding API users towards the pit of success? If so, fix it, and also demonstrate it so API users are encouraged to explore.

Dry-run


Once I have my narrative sorted out (and have included some cues in the story to remind me about things like switching to the UI etc.) then I'm ready for a dry run.

At this point I work through each step of the narrative - this is normally where you become immediately aware of usability issues in your API:

Inability to navigate:


These normally turn up as "smells" as opposed to outright bugs:
  • Having to copy an ID from the current request to then construct your next request URI.
    • Action: Add a link to your response, or consider an Expand if applicable (favour links over expands though I think).


  • Moving to next page of a page collection resource requires you to change the URI.
  • Navigating to a link, and getting an exception message back due to a missing parameter.
    • Action: return a human friendly message explaining how the resource needs to be used.


  • 404 not found.
    • Action: Fix it, probably an issue with routing, or a resource you have forgotten to implement, doh!
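One possible shape for that "human friendly message" - sketched here in Python, with all field names purely illustrative rather than from any particular framework:

```python
import json

# Sketch of a 400 response body that guides the caller toward correct usage,
# rather than leaking a stack trace. All field names here are illustrative.
def missing_parameter_error(parameter, example_uri, help_uri):
    return {
        "status": 400,
        "message": "Missing required query parameter '%s'" % parameter,
        "example": example_uri,
        "documentation": help_uri,
    }

body = missing_parameter_error("projectId", "/api/executions?projectId=PROJ-1", "/api/help/executions")
print(json.dumps(body, indent=2))
```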



Errors when reusing responses as the body of requests:


This is a death move when demonstrating APIs, but often not an issue when writing a client. If you make a GET request, you should be able to use the response in a PUT request without changing anything, even if some of the content is ignored.

Failures here indicate problems with adhering to the Robustness Principle ("Be conservative in what you send, liberal in what you accept"). Common causes for this failure are things like expanded properties and links, which adorn but don't necessarily make up part of the DTO underlying the API implementation, and which might be rejected if you have an "error on missing member" policy in your JSON deserializer (normally only an issue for an API written in a statically typed language).
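One way to stay liberal on input is simply to ignore unknown members when binding the request body to your DTO. A sketch of the idea (in Python, with invented field names; in a statically typed stack this corresponds to relaxing the deserializer's missing-member policy rather than writing a filter by hand):

```python
# Known DTO fields for a hypothetical "widget" resource.
KNOWN_FIELDS = {"id", "name", "description"}

def to_dto(payload):
    # Liberal in what we accept: silently drop adornments like links and
    # expansions rather than rejecting the whole request.
    return {k: v for k, v in payload.items() if k in KNOWN_FIELDS}

# A GET response adorned with hypermedia round-trips cleanly as a PUT body:
get_response = {
    "id": 1,
    "name": "Widget",
    "links": [{"rel": "self", "href": "/widgets/1"}],
}
to_dto(get_response)  # {"id": 1, "name": "Widget"}
```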

Unable to locate/display the results in the UI


This is another common issue - normally because you have too much data in your sample set, or don't have features in your UI to navigate directly to an item by ID or Name.

There are a couple of ways to make this easier - either make it really easy to locate an entity by some kind of unique ID also present in your resource representations (in a thick-client app I normally like to do this via a bespoke "go to" dialog, activated via CTRL+G).

Or (my preferred approach for a web app) include an "Edit" absolute URI as a link associated with your API resource, which can be used to navigate to the equivalent UI screen in your application - as an added bonus, the Advanced REST client will "linkify" these hyperlinks in your response - so you can just right-click, open in new window, and be taken directly to the equivalent item in the UI, making your demo very seamless.

General Tips


Demonstrating from a data set that you can revert back to is also important, especially if you want to create a screen cast, where multiple takes may be required.  Any demo you do should be easily repeatable.

Also, for each request where you are making a request with a body or complex URI (i.e. maybe containing a search query), use CTRL+S to save your request as well; this can be helpful in cases where you are having trouble during your demo and need to get a known good response (requests are also saved in history, but this is limited to the last 60 or so requests by default).

Also, remember to always use the Raw input / Raw output tabs in the REST client when copying a response for use in a new request - and explain this when demonstrating the API to people who might want to go and try your demo for themselves, as copy/pasting from the pretty-printed response tab, when not understanding JSON structure, can lead to people getting frustrated quickly.

Live demo


At this point your live demo should go smoothly, but there are certainly some things to check off your list before starting:
  • Reverting to your known good data set, if possible
  • Having a couple of displays attached, so you can have your narrative + support data on one screen, and the demo on another.
  • Pre-load pages for the help content you will need (if your API documentation is too large to quickly navigate).
  • Check all your automated integration tests still pass before demoing off a W.I.P codebase... nothing worse than having to fix bugs/routing issues mid-demo.

Side note - Authentication


Though not strictly necessary - I think supporting session-based authentication for web application APIs really helps with the demonstration process. Security is important, and session auth is not a common use case for programmatic access to the API, but as a learning aid it's very useful.

This advice only applies to APIs in the small (what I have been working on mostly, i.e. an API exposed as part of an on-premise/hosted application). For large public web properties and multi-tenanted applications this is probably not a wise decision, depending on how the API is deployed - there I would revert to using Basic Auth for demonstration purposes, and probably use a different tool to the Advanced REST client plugin as well (say, Fiddler).


CORS and WebAPI

Introduction

CORS (Cross-Origin Resource Sharing) is a way in which a browser can make a request to a web server other than the one which served up the original resource.

As per the Mozilla docs:

The CORS specification mandates that requests that use methods other than POST or GET, or that use custom headers, or request bodies other than text/plain, are preflighted. A preflighted request first sends the OPTIONS header to the resource on the other domain, to check and see if the actual request is safe to send. This capability is currently not supported by IE8's XDomainRequest object, but is supported by Firefox 3.5 and Safari 4 with XMLHttpRequest. The web developer does not need to worry about the mechanics of preflighting, since the implementation handles that.

You can achieve this using a DelegatingHandler in ASP.Net Web API - the way it works is to:

  • Identify requests containing the "Origin" header.
  • If the request is NOT of HTTP method "OPTIONS" (no preflighting) then we add the following headers to the response:
    • Access-Control-Allow-Origin - this will have the same value as the Origin value passed in the request.
    • Access-Control-Allow-Credentials - this will have the value true, allowing requests to contain basic-auth credentials


  • If the request is of HTTP method "OPTIONS", then we treat this as a "pre-flighting" request, responding with a 200-OK response, and returning the following headers in the response:
    • Access-Control-Allow-Origin - this will have the same value as the Origin value passed in the request header.
    • Access-Control-Allow-Credentials - this will have the value true, allowing requests to contain basic-auth credentials
    • Access-Control-Allow-Methods - this will have the same value as the Access-Control-Request-Method value passed in the request header.
    • Access-Control-Allow-Headers - this will have the same value as the Access-Control-Request-Headers value passed in the request header.



And here's the code to achieve that:

public class CORSHandler : DelegatingHandler
{
    const string Origin = "Origin";
    const string AccessControlRequestMethod = "Access-Control-Request-Method";
    const string AccessControlRequestHeaders = "Access-Control-Request-Headers";
    const string AccessControlAllowOrigin = "Access-Control-Allow-Origin";
    const string AccessControlAllowMethods = "Access-Control-Allow-Methods";
    const string AccessControlAllowHeaders = "Access-Control-Allow-Headers";
    const string AccessControlAllowCredentials = "Access-Control-Allow-Credentials";

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        bool isCorsRequest = request.Headers.Contains(Origin);
        bool isPreflightRequest = request.Method == HttpMethod.Options;

        if (isCorsRequest)
        {
            if (isPreflightRequest)
            {
                HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK);
                response.Headers.Add(AccessControlAllowOrigin,
                    request.Headers.GetValues(Origin).First());

                string accessControlRequestMethod =
                    request.Headers.GetValues(AccessControlRequestMethod).FirstOrDefault();

                if (accessControlRequestMethod != null)
                {
                    response.Headers.Add(AccessControlAllowMethods, accessControlRequestMethod);
                }

                string requestedHeaders = string.Join(", ",
                    request.Headers.GetValues(AccessControlRequestHeaders));

                if (!string.IsNullOrEmpty(requestedHeaders))
                {
                    response.Headers.Add(AccessControlAllowHeaders, requestedHeaders);
                }

                response.Headers.Add(AccessControlAllowCredentials, "true");

                TaskCompletionSource<HttpResponseMessage> tcs = new TaskCompletionSource<HttpResponseMessage>();
                tcs.SetResult(response);
                return tcs.Task;
            }
            else
            {
                return base.SendAsync(request, cancellationToken).ContinueWith(t =>
                {
                    HttpResponseMessage resp = t.Result;
                    resp.Headers.Add(AccessControlAllowOrigin, request.Headers.GetValues(Origin).First());
                    resp.Headers.Add(AccessControlAllowCredentials, "true");
                    return resp;
                });
            }
        }
        else
        {
            return base.SendAsync(request, cancellationToken);
        }
    }
}

If using basic-auth, it's worth noting that the pre-flighted request is unauthenticated - so add the CORSHandler to your configuration's set of MessageHandlers prior to any authentication handlers, so the OPTIONS requests can be handled correctly.
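The ordering matters because each handler wraps the next one in the pipeline. A language-agnostic sketch of the idea (Python here, with all handler and field names invented) showing why the CORS handler must sit outside authentication:

```python
# Each "delegating handler" wraps the next one in the pipeline.
def cors_handler(inner):
    def handle(request):
        # Preflight requests are answered here and never reach authentication.
        if "Origin" in request and request["method"] == "OPTIONS":
            return {"status": 200}
        return inner(request)
    return handle

def auth_handler(inner):
    def handle(request):
        if "Authorization" not in request:
            return {"status": 401}
        return inner(request)
    return handle

def endpoint(request):
    return {"status": 200, "body": "hello"}

# CORS registered before (i.e. outside) authentication:
pipeline = cors_handler(auth_handler(endpoint))

pipeline({"method": "OPTIONS", "Origin": "http://example.com"})  # {"status": 200}
pipeline({"method": "GET", "Origin": "http://example.com"})      # {"status": 401}
```

Register the handlers in the opposite order and the unauthenticated OPTIONS preflight hits the auth handler first and gets a 401 back, breaking every cross-origin request.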

Access-Control-Allow-Credentials

In the handler above we enable Access-Control-Allow-Credentials - meaning if the Web API supports basic auth, then the browser is able to authenticate with the API using basic-auth cross-origin, in a jQuery ajax request that would look like this:

$.ajax({
    url: "http://localhost/myapp/api/collectionres",
    type: "GET",
    username: "mylogin",
    password: "mypassword",
    data: "$top=10",
    xhrFields: {
        withCredentials: true
    },
    crossDomain: true,
    success: success
});

I definitely don't recommend this approach for production code - but it can make for a great way for people to play with an API programmatically in JavaScript, without having to resort to using Node (and using a tool like Dropbox allows them to privately "host" their HTML/JavaScript as well, making it possible to share and collaborate on simple mashups).


Web API Implementation - Final Thoughts

Ease of development

Moving to ASP.Net Web API overall has been a very pleasant experience; working in the framework has been pretty painless for us.

That said, moving from release to release has been quite a chore:

  • The move from WCF to ASP.Net MVC was quite jarring.
  • The move through various beta builds was also painful; the removal of OData caused us quite a lot of rework (but ultimately for the best I think).
  • The move to RTM, though, was almost completely without issue (even though the list of changes was quite long, they were largely additive, not really breaking anything we already had in place).

Ease of integration

We found it pretty easy to integrate the Web API parts into our plugin infrastructure - one area we did struggle with a little was getting the Monorail routing engine to dispatch the API requests correctly - I didn't bother covering that in this series as it's a fairly niche issue.

Also keep in mind that as I write this, the RTM of ASP.Net Web API has been available for less than a week - so we were certainly early adopters. I think now that the API is stable for the RTM, and as progress continues on out-of-band support for OData, people starting new API projects will have a much easier time of it than some of the people adopting this technology earlier in the release cycle.

Total development effort for the API has been in the order of 2 developer-months, which resulted in 92 controllers (at last count), automatically generated documentation, good test coverage and some simple extension points for 3rd party developers to use when extending the API to include features for their own plugins - I'm pretty pleased with that overall!

What's Next

Nothing is ever done, and we have plenty of ideas for features we want to add to the API in the future:

PATCH support

PATCH support - I have been looking with interest at the approach to PATCH in the latest OData Web API previews, via a lightweight dynamic object representing the delta; we are also very interested in Matt Warren's Eval-based patching feature in RavenDB, which combined with our TQL and OData based filtering could provide a very powerful (though certainly not particularly RESTful) approach to updating data in Enterprise Tester in bulk.

Activity Stream API

Activity Streams - we have an activity stream implementation, which supports plugins posting events to a user's (or set of users') activity streams within the application - next steps would be to expose this via an API (likely via the http://activitystrea.ms specification).

Webhooks

Webhooks - webhooks would support a number of useful scenarios for people integrating with the API - and we are also thinking about ways we could make it possible for people to compose multiple applications together via webhooks (there is a brief discussion on this topic here, with some good comments).

Custom Media Types

Embracing custom media types - so far we have been using application/json, which in effect is enacting the anti-pattern of tunneling the real media type, relying on consumers of our APIs to refer to human-readable documentation (albeit nice human-readable documentation!) to determine the schema for those representations.

Shifting to custom media types would be a good move in this case - Programmable Web had a good post on this a while back - but it's likely not something we would do until we feel the API is stable.

Granular OAuth security

Currently the OAuth implementation is at a user level - but why give a consumer of an API access to features/data they don't need?

Security is currently context-specific, so this will likely take the form of an API consumer requesting a set of required permissions, along with an associated scope (project-specific or application-wide), as part of fetching the request token - with these scope restrictions then being applied to any security checks.

Implementing this delegation is actually likely to be less challenging than the refactoring of our permission structure to be more granular to suit the requirements of an API, as opposed to the existing UI use cases.

Usage Metrics/Tracking

Anonymous usage metrics for the API would be very valuable when it comes to making decisions about changing aspects of the API.

Additionally, tracking usage (including the type of usage, IP address etc.) would be valuable, especially for instances of Enterprise Tester accessible externally on the internet (either hosted or self-hosting customers).

FYI - StrathWeb has a post on implementing this via two DelegatingHandlers, which might be a starting point for anyone looking to implement this themselves.

Wrapping Up

So this is where I end my series on the API development for Enterprise Tester - hopefully this series has been interesting to at least a few of you embarking on implementing your own APIs for existing applications.

Describing the Experience

I think if I had to describe API development for a large existing application, it would be cathartic - it's an opportunity to revisit old code, review decisions you made in the past, and re-imagine what interacting with your application can be like. It's quite cleansing/refreshing, as it's not often you implement a feature that touches on so many parts of an application at one time that isn't some death march of restructuring and refactoring to please the gods of static typing.



Anatomy of an API plugin

The Enterprise Tester application is extensible - so not only the core plugin, but other plugins, need to contribute to the overall API exposed.

This wasn't particularly complex to implement - but we did have a few issues we had to circumvent.

Controller Registration

We took a pretty restrictive approach to registration of API controllers - plugin developers are able to register a controller along with a name and a route (optionally you can also specify defaults and constraints, but this generally isn't necessary) - but we don't allow one route to service multiple controllers.

public class CoreRestResourcesPluginInstaller : AbstractPluginInstaller
{
    const string ApiPath = "api/";

    public override void Install(InstallationHelper helper)
    {
        helper.RegisterRestService("project", ApiPath + "project/{id}");
        helper.RegisterRestService("projects", ApiPath + "projects");
        ...
    }
}

This approach makes the process of writing the API for your plugin a lot easier in many ways, as you can be generally assured your plugin's routes won't clash with other plugins' (and where a clash exists we can throw up a meaningful error/warning, describing where the conflict exists) - but it won't win any fans with the convention-over-configuration purist crew.

Notice also that registering a REST service does not restrict the route to start with /api - we did originally enforce that convention, but then relaxed it because not all WCF WebAPI controllers being registered would necessarily be part of the API for a plugin.

As a result of these decisions you often end up with 2 controllers - one for the resource representing an individual entity, and another for the collection resource - there was an interesting discussion about this in February of this year between Rob Conery, some of the more zealous members of the REST community, and eventually Glenn Block - which is well worth a read.

In a future post I'm going to cover how I demo REST APIs to customers (and I strongly encourage feedback if anybody knows better ways to do this - short of writing sample clients for your API) - as this can be another one of those exercises where going through the process of demonstrating the API interactively (even with non-developers) highlights issues with your API design you won't necessarily discover through testing.

Delayed Registration

One of the other challenges we faced is that in our implementation we expose the methods for registering REST services as part of our plugin framework (so it's part of the core) - but our REST framework (which exists in its own plugin) takes dependencies on all sorts of things such as our OAuth, Search, Custom Field etc. plugins - and some of those plugins may even want to register their own API controllers (causing circular reference issues).

To get around this, the RegisterRestService extension method adds the registration to a static "ServiceManager" instance, which then takes care of either registering the route etc. immediately if the REST infrastructure is in place, or collecting the registration information until the REST infrastructure becomes ready as part of the application start-up process.

public static class RestServiceRegistrationExtensions
{
    public static InstallationHelper RegisterRestService<T>(this InstallationHelper helper, string name, string routeTemplate, object defaults = null, object constraints = null)
        where T : class
    {
        helper.Register(Component.For<T>().LifeStyle.Transient);

        ServiceManager.Instance.RegisterService<T>(name, routeTemplate, defaults, constraints);

        return helper;
    }
}

Here's the ServiceManager:

public class ServiceManager : IServiceManager
{
    static IServiceManager _serviceManager = new ServiceManager();
    readonly IList<ServiceMetadata> _services = new List<ServiceMetadata>();
    Action<ServiceMetadata> _callback;

    public static IServiceManager Instance
    {
        get { return _serviceManager; }
    }

    public void RegisterService<T>(string name, string routeTemplate, object defaults = null, object constraints = null)
    {
        var metadata = new ServiceMetadata { RouteTemplate = routeTemplate, ControllerType = typeof(T), Defaults = defaults, Constraints = constraints, Name = name };

        _services.Add(metadata);

        if (_callback != null)
        {
            _callback(metadata);
        }
    }

    public IList<ServiceMetadata> GetAllServices()
    {
        return _services;
    }

    public void SetServiceRegisteredCallback(Action<ServiceMetadata> callback)
    {
        if (callback == null) throw new ArgumentNullException("callback");

        if (_callback == null)
        {
            foreach (ServiceMetadata service in _services) callback(service);
        }

        _callback = callback;
    }

    public static void Reset()
    {
        _serviceManager = new ServiceManager();
    }
}

Nothing particularly exciting here, but it's something to consider if trying to support Web API in a pluggable application. Also notice the Reset() method, which is necessary to support the end-to-end tests.

Mapping Installers

As well as controllers, we have classes which register any necessary functionality against the ViewModelMapper - we normally have one of these per resource type.

Instances of the mapping installer are automatically registered into the IoC container, and implement an IStartable lifestyle in Windsor (so will be immediately created once all the dependencies are satisfied).

public class SomeMappingInstaller : AbstractViewModelMappingInstaller
{
    public override void Install(IViewModelMapper mapper)
    {
        ...
    }
}

This is necessary (rather than registering the mappings directly in the plugin installation method) because often the mapping logic will need access to other services to be able to complete an expansion from one entity type to another (in which case you would add a constructor where the necessary services could be injected upon creation of the mapping installer).

And that's it...

As you can see we have tried to keep the number of moving parts to a minimum when adding an API controller to a plugin for Enterprise Tester as a 3rd party developer.

The other desirable trait that comes from this minimalism is that 3rd party developers can't introduce side-effects into other API controllers accidentally (i.e. by adding a new DelegatingHandler, or removing an existing one - such as Authentication).

Next

Next, in part 9 of this series (the final part), I give my thoughts on how developing the API using ASP.Net Web API went, including the experience of transitioning from preview/RC/beta bits and the recent upgrade to RTM, plus what we didn't get to implement / will be implementing in the future for the API in Enterprise Tester.
