Expand implementation and view model mapping

What is Expand

Round-trips are the death of performance in many cases, and APIs are no different.

The web does scale out well, so there is certainly the option of making many simultaneous requests, but that does not solve the problem of addressing related resources - if you need to fetch a resource's representation before you can construct the additional requests for its related resources, you are still faced with latency issues.

OData provides a mechanism for this via the $expand query parameter in the URL. APIs for products such as Atlassian's Jira (a popular defect tracker) include an "expand" parameter which achieves the same thing, but takes a slightly different approach.

The API presented here is not an OData-compliant service, but the Expand concept was certainly a useful one we wanted to adopt.

Deep expansion

In addition to single-level expansion:

GET /api/scriptpackage/{id}?$expand=Children

Which might return:

{
    "Id": "1911ea14-3ede-46ca-bb1b-a0a80019f6cf",
    "ProjectId": "53cc97cd-7514-4465-b352-a0a80019f180",
    "Name": "Script Library",
    "OrderNumber": 2,
    "Expands": [
        "Children",
        "Parent",
        "Project",
        "Scripts"
    ],
    "Self": "https://localhost/api/scriptpackage/1911ea14-3ede-46ca-bb1b-a0a80019f6cf"
}

We wanted to support deeper expansion - so that something like:

GET /api/project/{id}/scriptpackages?$expand=Children.Children,Children.Scripts

Would return a script package (folder) with all its child packages, those child packages' children, and those child packages' scripts (test cases).

{
    "Id": "1911ea14-3ede-46ca-bb1b-a0a80019f6cf",
    "ProjectId": "53cc97cd-7514-4465-b352-a0a80019f180",
    "Name": "Script Library",
    "OrderNumber": 2,
    "Expands": [
        "Parent",
        "Project",
        "Scripts"
    ],
    "Children": [
        {
            "Id": "1fe2686a-eb17-485a-96b4-a0a80019f6cf",
            "ParentId": "1911ea14-3ede-46ca-bb1b-a0a80019f6cf",
            "Name": "Sprint 1",
            "OrderNumber": 0,
            "Expands": [
                "Parent",
                "Project"
            ],
            "Scripts": [...],
            "Children": [...],
            ...
        },
        ...
    ]
}

Notice that we advertise the available expansions as a property of the resource - this is a feature of the Atlassian Jira API we adopted (and this list changes based on what expansions have already been applied).
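A sketch of how that advertised list can be derived - the remaining expansions are simply the registered ones for the mapping, minus those already applied (names here are illustrative, not the product's actual code):

// Hypothetical sketch: advertise whichever registered expansions
// have not already been applied to this representation.
static string[] RemainingExpands(IEnumerable<string> registered, string[] applied)
{
    return registered
        .Except(applied ?? new string[0], StringComparer.OrdinalIgnoreCase)
        .OrderBy(name => name)
        .ToArray();
}

So after Children has been expanded, "Children" drops out of the advertised list - exactly as in the example above.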

Building a mapper

To allow expansion to be done correctly, and at any depth, we needed to hand construction of our view models over to a dedicated service - thus enters the view model mapper:

public interface IViewModelMapper
{
    void RegisterSearchResultConstructor<TFrom, TIntermediate, TTo>(
        Func<TFrom, TIntermediate> intermediateConstructor)
        where TFrom : class
        where TTo : AbstractModel
        where TIntermediate : class;
    void RegisterDefaultConstructors(Type[] from, Type to);
    void RegisterDefaultConstructor<TFrom, TTo>();
    void RegisterConstructor<TFrom, TTo>(Func<TFrom, TTo> constructor);
    void RemoveConstructor<TFrom, TTo>();
    void RegisterExpander<TFrom, TTo>(string expansionName, string resourceName,
        Func<TFrom, string[], object> expansion);
    void RegisterExpander(Type fromType, Type toType, string expansionName,
        string resourceName, Func<object, string[], object> expansion);
    void RegisterEntityTypeAndIdToResourceResolver(Func<Guid, string> urlFunc,
        params string[] entityTypes);
    void RemoveExpand(string expansionName);
    TTo Map<TFrom, TTo>(TFrom from, params string[] expansions)
        where TTo : AbstractModel;
    object MapSearchResult(object from, string[] expansions);
    object Map(object instance, Type targetType, params string[] expansions);
    IEnumerable<string> GetExpandersFor(Type fromType, Type toType);
    string ResolveUrlForEntityTypeResource(string entityType, Guid id);
}

The implementation of this interface is a service providing:

  • Constructors - which can be registered to take a DTO/domain class/Tuple/whatever and construct a view model from it.
  • Expanders - named expansions attached to a constructor.
  • Map methods for mapping an instance to a view model, with a set of expansions to apply.
  • Handling of special cases, such as translating the results of a search into a form suitable for mapping to a view model.

Given the plugin architecture used within the application, this gives plugins the ability to add new Expand options to existing resources - so for example, if a customer has the automated testing plugin enabled, then script packages (folders) will also support an expansion for the "AutomatedTests" collection of automated tests within that package.
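Such a plugin registration might look something like this (the DTO/model type names and the RenderAutomatedTests helper are assumptions, mirroring the Steps example below):

// Hypothetical sketch: an automated testing plugin adding an
// "AutomatedTests" expansion to the existing script package resource.
mapper.RegisterExpander<ScriptPackageDto, ScriptPackageModel>(
    "AutomatedTests", null,
    (package, expands) => RenderAutomatedTests(package, expands));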

As an example of how we register an expander, here is the code registering the expansion for the collection of steps associated with a script:

...

    mapper.RegisterExpander<EditScriptDto, ScriptModel>(
        "Steps", null, (input, expands) => RenderSteps(expands, input));
}

IList<StepModel> RenderSteps(string[] expands, EditScriptDto input)
{
    ExpandsUtility.AssertEmpty("Steps", expands);

    return (input.Steps ?? Enumerable.Empty<StepDto>()) // element type name assumed
        .Select(StepModel.CreateFrom)
        .OrderBy(model => model.OrderNumber)
        .ToList();
}
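The ExpandsUtility.AssertEmpty call guards against a caller requesting nested expansions (e.g. "Steps.Something") beneath an expansion that does not support them. A minimal sketch of such a helper, assuming that contract (the real implementation isn't shown here):

public static class ExpandsUtility
{
    // Assumed behaviour: fail fast if nested expansions were requested
    // below an expansion that does not support going any deeper.
    public static void AssertEmpty(string expansionName, string[] expands)
    {
        if (expands == null || expands.Length == 0) return;

        throw new InvalidOperationException(string.Format(
            "The '{0}' expansion does not support nested expansions: {1}",
            expansionName, string.Join(", ", expands)));
    }
}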

In this case we are using Expand to avoid the cost of expanding a large collection (the steps for a test script/test case) which is part of the Script aggregate (believe it or not, there are testers out there writing test scripts with 300+ steps...).

Controllers

Within our API controllers we just call the mapping method and pass in the expands parameter:

var script = _scriptReportingService.GetScript(id);
var wrapped = _viewModelMapper.Map<EditScriptDto, ScriptModel>(script, Expands); // type arguments assumed
return Request.CreateResponse(HttpStatusCode.OK, wrapped);

The Expands property in this case just exposes a value associated with the current request:

protected virtual string[] Expands
{
    get
    {
        // TryGetValue rather than the indexer, which throws for a missing key
        object expands;
        Request.Properties.TryGetValue("expand", out expands);
        return (string[]) (expands ?? new string[] {});
    }
}

This request property is captured by a simple DelegatingHandler that parses the query string for the various OData parameters - this approach makes it easier for other delegating handlers to access this information before the controller's methods are invoked.
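A minimal sketch of such a handler, handling only $expand (the real handler parsed several OData parameters, and the class name here is an assumption):

public class ODataParametersHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Capture the $expand query parameter into Request.Properties
        // before any controller (or later handler) executes.
        var expand = request.GetQueryNameValuePairs()
            .Where(pair => string.Equals(pair.Key, "$expand", StringComparison.OrdinalIgnoreCase))
            .Select(pair => pair.Value)
            .FirstOrDefault() ?? string.Empty;

        request.Properties["expand"] = expand
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(value => value.Trim())
            .ToArray();

        return base.SendAsync(request, cancellationToken);
    }
}

Registered via config.MessageHandlers.Add(...), this runs early in the pipeline, so anything downstream sees the parsed values.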

OData support in ASP.Net Web API

For those who have been working with the various releases of ASP.Net Web API since it originally targeted WCF, there have been quite a few breaking changes along the way - including OData support, which was initially introduced as a basic [Queryable] attribute that could be added to controller methods, and later removed entirely pending a new OData re-implementation.

Recently the Web API team announced greatly improved support for OData - allowing the construction of fully OData-compliant services - as a preview release on NuGet; the [Queryable] attribute is also back.

I believe this now includes support for $expand, which was previously missing, though I haven't yet had a chance to play with the latest release to confirm this - and I'm not sure it would have worked for our approach at any rate.

Next

Next, in part 3 of this series we take a look at how we generated API documentation.

Links, absolute URIs and JSON rewriting

Serialization


As part of the design for the resources returned from our API we wanted to ensure they included some useful information: a link to the resource itself (Self), the set of available expansions (Expands), and a collection of related links (Links).

So we are looking to return entities that look like this:
{
    "Id": "5b2b0ad0-5371-4abf-a661-9f410088925f",
    "UserName": "joeb",
    "Email": "joe.bloggs@test.com",
    "FirstName": "Joe",
    "LastName": "Bloggs",
    "Expands": [
        "Groups"
    ],
    "Self": "http://localhost:29840/api/user/5b2b0ad0-5371-4abf-a661-9f410088925f",
    "Links": [
        {
            "Title": "Group Memberships",
            "Href": "http://localhost:29840/api/user/5b2b0ad0-5371-4abf-a661-9f410088925f/groups",
            "Rel": "Groups"
        }
    ]
}

This inevitably means creating some kind of view model that you return from your API as the representation of the underlying resource (entity, aggregate etc.).

After some experimentation we landed on the idea of leveraging the capabilities of JSON.Net to perform on-the-fly JSON rewriting of our serialized entities.

This meant deriving our view models from this base class, and implementing the abstract "Self" property (to return a link to the entity itself) as well as supporting the links and expansions.

public abstract class AbstractModel
{
    public const string ExpansionsProperty = "__expansions__";
    public const string SelfProperty = "__self__";
    public const string LinksProperty = "__links__";

    protected AbstractModel()
    {
        Expansions = new Dictionary<string, object>();
        Links = new List<LinkModel>();
    }

    [JsonProperty(SelfProperty, NullValueHandling = NullValueHandling.Ignore)]
    public abstract string Self { get; }

    [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
    public string[] Expands { get; set; }

    [JsonProperty(ExpansionsProperty, NullValueHandling = NullValueHandling.Ignore)]
    public IDictionary<string, object> Expansions { get; set; }

    [JsonProperty(LinksProperty, NullValueHandling = NullValueHandling.Ignore)]
    public IList<LinkModel> Links { get; set; }
}


Each link was then represented by a LinkModel, which is a very simple class:
public class LinkModel
{
    [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
    public string Title { get; set; }
    public bool Inline { get; set; }
    public string Href { get; set; }
    public string Rel { get; set; }
}

If you look closely at the abstract model class above you will see the properties use the names __expansions__, __self__ and __links__ - so initially, when JSON.Net serializes the entity, those are the names we get.

We then extend the existing JsonMediaTypeFormatter to serialize to a JToken first, and then rewrite the token. This is an abstract class:

public abstract class AbstractRewritingJsonMediaTypeFormatter : JsonMediaTypeFormatter
{
    protected abstract JToken Rewrite(JToken content);

    public override Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext)
    {
        if (type == null) throw new ArgumentNullException("type");
        if (writeStream == null) throw new ArgumentNullException("writeStream");

        if (UseDataContractJsonSerializer)
        {
            return base.WriteToStreamAsync(type, value, writeStream, content, transportContext);
        }

        return TaskHelpers.RunSynchronously(() =>
        {
            Encoding effectiveEncoding = SelectCharacterEncoding(content == null ? null : content.Headers);

            JsonSerializer jsonSerializer = JsonSerializer.Create(SerializerSettings);

            using (var tokenWriter = new JTokenWriter())
            {
                jsonSerializer.Serialize(tokenWriter, value);

                JToken token = tokenWriter.Token;

                JToken rewrittenToken = Rewrite(token);

                using (var jsonTextWriter = new JsonTextWriter(new StreamWriter(writeStream, effectiveEncoding)) { CloseOutput = false })
                {
                    if (Indent)
                    {
                        jsonTextWriter.Formatting = Formatting.Indented;
                    }

                    rewrittenToken.WriteTo(jsonTextWriter);

                    jsonTextWriter.Flush();
                }
            }
        });
    }
}


Which we then have a concrete implementation of:
public class JsonNetFormatter : AbstractRewritingJsonMediaTypeFormatter
{
    readonly IUrlTransformer _urlTransformer;
    readonly ExpandsRewriter _expandsRewriter;
    readonly SelfRewriter _selfRewriter;
    readonly LinksRewriter _linksRewriter;

    public JsonNetFormatter(IUrlTransformer urlTransformer)
    {
        if (urlTransformer == null) throw new ArgumentNullException("urlTransformer");
        _urlTransformer = urlTransformer;
        _expandsRewriter = new ExpandsRewriter();
        _selfRewriter = new SelfRewriter(_urlTransformer);
        _linksRewriter = new LinksRewriter(_urlTransformer);
    }

    protected override JToken Rewrite(JToken token)
    {
        _expandsRewriter.Rewrite(token);

        _selfRewriter.Rewrite(token);

        _linksRewriter.Rewrite(token);

        return token;
    }
}
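Wiring the formatter in is then just a matter of replacing the default JSON formatter at startup - something along these lines (DefaultUrlTransformer is an assumed IUrlTransformer implementation):

// Sketch: swap the stock JSON formatter for the rewriting one.
var config = GlobalConfiguration.Configuration;
config.Formatters.Remove(config.Formatters.JsonFormatter);
config.Formatters.Insert(0, new JsonNetFormatter(new DefaultUrlTransformer()));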


By doing this we can then implement simple visitors which can rewrite the JSON on the fly looking for those special token names - so for example, in our links above we have a property

public bool Inline { get; set; }

If Inline is true, we actually "in-line" the link into the body of the representation (using Rel as the name of the property); if the link is not inline, we include it in the Links collection.

This rewriting process also takes care of rewriting relative API URLs to be absolute, so controllers largely don't need to care about resolving absolute URLs within representations themselves.
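To give a feel for what a rewriter does, here is a condensed sketch of the Self rewriting step - the product's rewriters differ in detail, and the URL transformation is abstracted to a simple delegate here:

// Condensed sketch: walk the token tree; wherever a "__self__" property
// appears, make its URL absolute and rename the property to "Self".
public static class SelfRewritingSketch
{
    public static void Rewrite(JToken token, Func<string, string> toAbsolute)
    {
        var obj = token as JObject;
        if (obj != null)
        {
            var self = obj.Property(AbstractModel.SelfProperty);
            if (self != null)
            {
                self.Replace(new JProperty("Self", toAbsolute((string) self.Value)));
            }
        }

        foreach (var child in token.Children().ToList())
        {
            Rewrite(child, toAbsolute);
        }
    }
}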

JSON rewriting does bring a cost with it, but so far we have found that cost to be very low - we are not transforming JSON strings, but serializing directly to tokens first and then writing the tokens out as a string, which avoids the need to delve into reflection to achieve the same results.

Links


Our linking implementation is largely bespoke, but mirrors that of a hyperlink within a web page. Initially we just had Rel and Href properties, but after a while adopted a Title as well (similar to the Netflix links representation in XML).

Though REST as a term is used to describe the "type" of API being implemented, the API (like most) in fact falls into the "REST-ish" camp as opposed to RESTful - though I'm personally a fan of HATEOAS, in this case it's a trait we would like to move closer towards, not a constraint the API must fulfill before we make it available for consumption.

There are some standards/proposals out there for links within JSON responses, but adopting them would have had a mostly negative impact on consumption of the API - making the representations more internally inconsistent in naming style etc. - for little gain, as the proposed standards don't make implementing a client any simpler at this stage.

The Links collection can be populated by external code, but largely we have the model itself populate the set of available links upon construction.
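As an illustration, a model might register its links on construction like this (the model shape is assumed, matching the user representation shown earlier):

// Hypothetical sketch: a model registering its own links on construction.
public class UserModel : AbstractModel
{
    public Guid Id { get; set; }

    public UserModel(Guid id)
    {
        Id = id;
        Links.Add(new LinkModel
        {
            Title = "Group Memberships",
            Rel = "Groups",
            Href = "~/api/user/" + id + "/groups", // relative; rewritten to absolute on output
            Inline = false
        });
    }

    public override string Self
    {
        get { return "~/api/user/" + Id; }
    }
}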

Collection results

When returning collection results, if a collection was paged (more on paging in a future post where we look at querying/OData) we also aimed to return the following links as part of the response:

* First page
* Last page
* Next page
* Previous page

The IANA provides a list of well-known link relationships including "first", "next", "last" and "prev" - so we adopted those values for the links in the API.
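The relation names live in a small constants class used when generating the links - something like this sketch (the actual class isn't shown):

public static class StandardRelations
{
    public const string First = "first";
    public const string Previous = "prev";
    public const string Next = "next";
    public const string Last = "last";
}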

public class QueryResults<T> : AbstractModel
{
    string _self; // set via SetSelfAndGenerateLinks

    public override string Self
    {
        get { return _self; }
    }

    public int? Skip { get; set; }

    public int? Top { get; set; }

    public int? Total { get; set; }

    public IList<T> Items { get; set; }

    public QueryResults<T> SetSelfAndGenerateLinks(string self)
    {
        ...
    }

    public QueryResults<T> SetSelfAndGenerateLinks(Uri uri)
    {
        ...
    }

    protected void AddLink(string rel, Uri uri, int start)
    {
        ...
    }
}

Notice we have a method for setting the Self URL in this case - the code returning the set of query results may be decoupled from the controller, where knowledge of the current request URI exists.

Within the call to SetSelfAndGenerateLinks we have this code for adding the links, based on where we are in the set of results:

if (inMiddle || atStart)
{
    AddLink(StandardRelations.Next, uri, Math.Max(0, Skip.Value + Top.Value));
    AddLink(StandardRelations.Last, uri, Math.Max(0, Total.Value - Top.Value));
}

if (inMiddle || atEnd)
{
    AddLink(StandardRelations.Previous, uri, Math.Max(0, Skip.Value - Top.Value));
    AddLink(StandardRelations.First, uri, 0);
}
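The flags themselves can be derived from Skip, Top and Total - a sketch of the computation (the original isn't shown):

// Sketch: where are we in the result set?
bool atStart = Skip.Value == 0;
bool atEnd = Skip.Value + Top.Value >= Total.Value;
bool inMiddle = !atStart && !atEnd;

With Skip = 20, Top = 25 and Total = 45 (as in the response below), atEnd is true, so only the "prev" and "first" links are generated.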

And then when the request is rendered we might end up with a response that looks like this:

{
    "Skip": 20,
    "Top": 25,
    "Total": 45,
    "Items": [
        ...
    ],
    "Self": "http://localhost:29840/api/search?tql=Name+~+test&$skip=20&$top=25",
    "Links": [
        {
            "Href": "http://localhost:29840/api/search?tql=Name+~+test&$skip=0&$top=25",
            "Rel": "prev"
        },
        {
            "Href": "http://localhost:29840/api/search?tql=Name+~+test&$skip=0&$top=25",
            "Rel": "first"
        }
    ]
}

Providing links like this can really simplify the implementation of clients and allows us to potentially change URI structure without causing as many headaches for API consumers.

Next

Next, in part 2 of this series we take a look at the implementation of Expand in the API, and how we mapped our entities to View models for the API.

Enterprise Tester API Series

Introduction


As I mentioned in my last post, part of the development for the latest version of Enterprise Tester has included a greatly expanded REST API.

This REST API is implemented using ASP.Net WebAPI.

The technology behind the API is not necessarily "exciting" (compared to what the API enables for customers and 3rd-party developers) - but it demonstrates taking a large and complex application and exposing much of its functionality through an API.

There are plenty of examples of WebAPI usage out there on blogs, but not much information from people implementing larger APIs end-to-end, or retro-fitting an API onto an existing (brown-field) application.

So I thought this was a good opportunity to dive a little deeper into the implementation of the API.

I'm going to break this up into a few posts covering the various aspects of the API implementation:

  1. Links, absolute URIs and JSON rewriting.
  2. "Expand" implementation and view model mapping.
  3. Generating API documentation.
  4. OData, TQL and Filtering.
  5. Authentication - Session, Basic and OAuth.
  6. Long running tasks.
  7. WebAPI Testing.
  8. Anatomy of an API plugin.
  9. Final Thoughts.

Click here to take a look at part 1 - Links, absolute URIs and JSON rewriting - where I cover the process we went through to generate our JSON responses, including links, and transforming relative URLs to absolute URIs as part of the request.

Disclaimer

Though I work for Catch Software, I'm publishing this series on my personal blog - so any opinions expressed here are strictly my own and not those of Catch Limited New Zealand (and are certainly likely to fall out of date after this series is published) - for the official word on Enterprise Tester, please instead check out the company blogs - http://blogs.catchsoftware.com - or the website http://www.enterprisetester.com.

Catch Software is hiring

Catch Software needs to expand the development team and is looking for a passionate senior developer, or a highly motivated intermediate developer looking to move into a senior position.

You would be working on the Enterprise Tester application (as well as other existing and new products Catch Software is developing) alongside myself and the rest of the great Catch Software team - you can find out more about the job offer here.

Taking the lid off a large application


For the past 2+ years, a great deal of my energy has been spent on an application called Enterprise Tester.

Enterprise Tester is a Quality Management tool developed by Catch Software here in Auckland, New Zealand - but used around the world from small to very large QA teams.

Codebase

The application is developed in .Net, currently consists of 4,865 source files and 491,809 lines of code (including tests, comments and blank lines), and targets the .Net Framework 4.0.

The client-side code is mostly JavaScript, and weighs in at 257 files and 174,785 lines of application-specific code (excluding 3rd-party libraries such as jQuery).

What is it?

Quality Management tools (more traditionally known as Test Management tools) are applications or suites of applications used by QA teams to ensure a level of quality in a product (often software). They combine manual and automated testing (to determine and maintain the level of quality), and relate those tests back to features (requirements/user stories/epics/use cases) to establish both coverage and the impact/risk associated with a change.

Probably the most familiar tool in this category, especially if you have been around for a while, is HP Quality Center (originally Mercury Quality Center).

What's next

Over the next few weeks I'm going to look at some of the more interesting features of Enterprise Tester (from a technical perspective) that I have been involved in implementing, taking a bit of a deep dive into their implementation "under the hood".

The first thing I plan to review is the recent introduction of ASP.Net WebAPI to implement our REST API, which should hopefully be interesting to anyone working with long-lived codebases / brown-field applications who is looking to retro-fit an API.

But before diving into implementation, I thought it might be worth providing some context around the product's overall structure.

Structure

Currently the implementation consists of:

  • A core web project.
  • 2 core assemblies (EnterpriseTester.Common and EnterpriseTester.Core).
  • A set of plugins, which have inter-dependencies between each other (generally each plugin is a single assembly).
  • Everything wired together by an IoC container (Castle Windsor) + MEF (for discovery and loading of the "plugin installers" that exist in each plugin assembly).

We make heavy use of a concept called modules, which predates MEF (did I mention brown fields?) - a module is a component implementing an interface which inherits from this interface:

public interface IModule
{
    bool IsEnabled { get; }
    void SetEnabled(bool isEnabled);
}

There is then a matching interface that components can implement which allows them to be "aware" of modules being registered or unregistered.

public interface IModuleAware<TService>
{
    void ModuleRegistered(TService instance);
    void ModuleUnregistered(TService instance);
}

This basic mechanism is then used to compose the features of the application - so for example we have an interface that toolbar items implement in the application:

public interface IToolbarItemDefinition : IModule
{
    int Order { get; }
    bool Supports(IQueryContext context);
    IToolbarItemPresenter GetPresenter();
}

We then have a class implementing a registry for these toolbar "modules" - it collects them, and provides a way to get back the toolbar items relevant to the kind of query being executed (these are toolbar items that belong to a grid of search results).

public class ToolbarItemDefinitionRegistry : IModuleAware<IToolbarItemDefinition>, IToolbarItemDefinitionRegistry
{
    readonly IList<IToolbarItemDefinition> _definitions = new List<IToolbarItemDefinition>();

    public void ModuleRegistered(IToolbarItemDefinition instance)
    {
        _definitions.Add(instance);
    }

    public void ModuleUnregistered(IToolbarItemDefinition instance)
    {
        _definitions.Remove(instance);
    }

    public IList<IToolbarItemDefinition> GetAllDefinitions()
    {
        return _definitions.ToList();
    }

    public IList<IToolbarItemDefinition> GetDefinitionsFor(IQueryContext context)
    {
        return _definitions.Where(def => def.Supports(context)).OrderBy(def => def.Order).ToList();
    }
}

This approach to modules is generally how individual bits of functionality have been kept insulated, and it allows for feature switching at run time as well. A facility within our IoC container takes care of detecting when a module has all its dependencies satisfied and can be created and added to the module-aware components.

This should be familiar to anyone working with MEF today, where you can use ImportMany to achieve something similar - the only difference is that our implementation allows control over whether a module can be loaded (so we can persist enabled/disabled state for each module across application restarts, or prevent a module from loading if some requirement is not met, e.g. licensing).

Here we can see a list of all the plugins currently loaded - at the plugin level we manage both dependencies between plugins (for example, we have an automated testing plugin which provides the framework for automated test tool adapter plugins to load, such as Selenium, xUnit etc.) and licensing concerns (not loading a plugin if a valid license for it does not exist or has expired).



You can also manage which individual modules are enabled/disabled - useful if you want to soft-launch new features in the product, or provide a mechanism for administrators to remove functionality from within the application.



Storage

The application is fairly traditional in its data storage, targeting SQL Server, MySQL, PostgreSQL or Oracle - it uses an ORM, Castle ActiveRecord (on top of NHibernate), to handle persistence of data to the database.

Searching of data within the application is handled via an application-specific search and query implementation built on top of Lucene.

I'll hopefully also cover some of the implementation specifics there in a future post.

Front-end

The front-end of the application is built using Sencha's ExtJS and is for the most part implemented as a single-page application - there is a plugin implementation for the front-end UI as well, and a simple pub/sub implementation allows different parts of the application to respond to various events.

Communication between the client side and the server is almost exclusively via JSON.
