Sunday, August 26, 2012

ServiceStack: Reusing DTOs

This post is no longer relevant, because these ideas have been incorporated into the heart of the ServiceStack New API. It is just a pleasure to use such a great framework!

I've recently worked on an HTTP API for one of our products. As we were under a bit of time pressure, we decided to use ServiceStack - an open source web services framework. If you haven't heard of it, go ahead and read about it. It is a simple yet very powerful, well-written framework which just works and gets things done without friction. And it has existed for years already (in case you would ask why we were not using Web API - it had not been released yet at the time).

I'd like to share some experience gained during my work. So here is the context. The first consumer of the API will be another of our projects, whose back-end is written in C# as well. ServiceStack allows you to provide clients with a strongly-typed assembly reusing the same request/response DTOs used to build the service (note that no code generation is needed). In fact, this is the recommended approach for C# clients.

The preferred way to call a web service is to utilize REST endpoints, which can be configured using a RestService attribute per request DTO. The example below (here is a gist) shows how we can call services via REST endpoints, and how DTOs are reused.

/// DTOs

[RestService("/orders/{id}", "GET")]
[RestService("/orders/by-code/{code}", "GET")]
[RestService("/orders/search", "GET")]
public class GetOrders {
    public int? Id { get; set; }
    public string Code { get; set; }
    public string Name { get; set; }
    public string Customer { get; set; }
}

public class GetOrdersResponse {
    public Order[] Orders { get; set; }
    public ResponseStatus ResponseStatus { get; set; }
}

public class Order {
    public int Id { get; set; }
    public string Code { get; set; }
    public string Name { get; set; }
    public string Customer { get; set; }
}

[RestService("/orders", "POST")] // to create a new order.
[RestService("/orders/{id}", "PUT")] // to create or update an existing order with the specified Id.
public class SaveOrder {
    public int? Id { get; set; }
    // Order details.
}

public class SaveOrderResponse {
    public int Id { get; set; }
    public ResponseStatus ResponseStatus { get; set; }
}

//// REST endpoints usage

IRestClient client = new JsonServiceClient();

var orderById = client.Get<GetOrdersResponse>("/orders/" + 5).Orders.Single();
var orderByCode = client.Get<GetOrdersResponse>("/orders/by-code/" + Uri.EscapeDataString(orderById.Code)).Orders.Single();

var searchUrl = "/orders/search?Name={0}&Customer={1}".Fmt(
    Uri.EscapeDataString(orderByCode.Name), Uri.EscapeDataString(orderByCode.Customer));

var foundOrders = client.Get<GetOrdersResponse>(searchUrl).Orders;

var createOrderResponse = client.Post<SaveOrderResponse>("/orders", new SaveOrder { /* Order details */ });
int orderId = createOrderResponse.Id;
var updateOrderResponse = client.Put<SaveOrderResponse>("/orders/" + orderId, new SaveOrder { /* Order details */ });

Notice that the DTOs follow a naming convention - response classes are named exactly like request classes, but with a Response suffix. This simplifies the client's life, as it becomes obvious what response to expect. It also allows service metadata to be generated automatically.

However, there are several things that bother me here. While I still want clients to call services via REST endpoints, I'd like to have a more generic and easier-to-follow API. Here are the things that come to mind:

  • Do we really need to specify the response type, when we already know what kind of request we send? Couldn't we determine it automatically from the request type?
  • We already know the URLs available for a given request type (from the RestService attributes). It would be nice if we could simplify the developer's life by picking and populating them automatically based on the request state.
  • We can go further and automatically determine the required HTTP method.
  • And last - for GET and DELETE requests we could send additional request properties (those not mapped in the URL template) as query string parameters.

As a side effect of achieving this, we'll get another important benefit - the ability to change URLs and even HTTP methods without breaking client code. And this is essential for me - I'm sure I will want to change some URLs while the API is used only by our other product (that is, before it goes public).
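To make the route-matching and URL-population idea concrete, here is a minimal, language-agnostic sketch (written in Python for brevity; all names are illustrative, this is not ServiceStack's actual implementation). Given the URL templates from the attributes and the request's property values, it picks the first template whose variables are all set, encodes them into the URL, and appends leftover non-null properties as query string parameters:

```python
import re
from urllib.parse import quote, urlencode

def build_url(template, props):
    """Populate a URL template such as '/orders/{Id}' from request properties.

    Returns None when some template variable has no value, so the caller can
    fall through to the next route. Remaining non-null properties are appended
    as query-string parameters (the GET/DELETE case described in the post).
    Variable names are matched case-insensitively.
    """
    lookup = {name.lower(): value for name, value in props.items()}
    variables = re.findall(r"\{(\w+)\}", template)
    if any(lookup.get(v.lower()) is None for v in variables):
        return None
    url = template
    for v in variables:
        url = url.replace("{" + v + "}", quote(str(lookup[v.lower()]), safe=""))
    used = {v.lower() for v in variables}
    query = {name: value for name, value in props.items()
             if name.lower() not in used and value is not None}
    return url + ("?" + urlencode(query) if query else "")

def pick_route(templates, props):
    """Pick the first route (listed most specific first) the request can satisfy."""
    for template in templates:
        url = build_url(template, props)
        if url is not None:
            return url
    raise ValueError("no matching route for request state")
```

With the GetOrders routes from the example, `pick_route(["/orders/{Id}", "/orders/by-code/{Code}", "/orders/search"], {"Id": 5})` yields `/orders/5`, while a request with only Name and Customer set falls through to `/orders/search?Name=...&Customer=...`.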

So this is the method I'd like to have:
IRestClient.Send<TResponse>(IRequest<TResponse> request)

where IRequest is just a marker interface applied to all request types, thus allowing the corresponding response type to be determined at compile time. And this is how our example will look (here is the gist):

// Response types omitted.
// Note that the request types are now marked with the IRequest<TResponse> interface.
// This allows the Send method to determine the response type at compile time.

[RestService("/orders/{id}", "GET")]
[RestService("/orders/by-code/{code}", "GET")]
[RestService("/orders/search", "GET")]
public class GetOrders : IRequest<GetOrdersResponse> {
    public int? Id { get; set; }
    public string Code { get; set; }
    public string Name { get; set; }
    public string Customer { get; set; }
}

[RestService("/orders", "POST")] // to create a new order.
[RestService("/orders/{id}", "PUT")] // to create or update an existing order with the specified Id.
public class SaveOrder : IRequest<SaveOrderResponse> {
    public int? Id { get; set; }
    // Order details.
}

//// Proposed interface

// Marker interface allowing the response type to be determined at compile time.
public interface IRequest<TResponse> { }

public static class RestServiceExtensions {
    public static TResponse Send<TResponse>(this IRestClient client, IRequest<TResponse> request) {
        // Determine the matching REST endpoint for the specified request (via RestService attributes).
        // Populate the URL with encoded variables and optional query parameters.
        // Invoke the corresponding REST method with the proper HTTP method
        // and return the strongly typed response.
    }
}

//// Service usage

IRestClient client = new JsonServiceClient();

// We don't specify the response type - it is determined automatically from the request type.
// We don't specify URLs or HTTP verbs - they are determined from the request state.
// We don't write boilerplate code to encode URL parts - it is done automatically.

// GET /orders/5
var orderById = client.Send(new GetOrders { Id = 5 }).Orders.Single(); 

// GET /orders/by-code/Code
var orderByCode = client.Send(new GetOrders { Code = orderById.Code }).Orders.Single();

// GET /orders/search?Name=Name&Customer=Customer
var getOrders = new GetOrders { Name = orderById.Name, Customer = orderByCode.Customer };
var foundOrders = client.Send(getOrders).Orders;

// POST /orders
var createOrderResponse = client.Send(new SaveOrder { /* Order details */});

// PUT /orders/{id}
int orderId = createOrderResponse.Id;
var updateOrderResponse = client.Send(new SaveOrder { Id = orderId, /* Order details */ });

That being said, I've implemented such an extension method for the IRestClient interface. You can find the source code with a test in a gist at GitHub. There are several things missing there, but they should not be hard to implement:

  • The same extension for the async service client.
  • Special formatting for DateTime and List variables in the URL.
  • Other opinionated decisions on how to choose a URL when several URLs match the specified request.

Hope you find this useful. Either way, please let me know - even if it feels like a completely wrong approach.

Friday, August 26, 2011

Reducing size of RTF file with image

We use the RTF format to generate documents in our products, as it is widely supported by software vendors. The bad thing about RTF files is the file size, which increases drastically when a document contains images (compared to the same document saved in the "doc" format). This becomes a big issue when you have hundreds of thousands of documents stored in databases and send them to other parties.

As it turned out, the file size issue is not caused by the RTF format itself, but by the way MS Word saves RTF files with images. When MS Word saves a document in RTF format, it saves two copies of each image - the original one and a copy converted to WMF format. Fortunately, there is a way to disable this strange behavior. You can read how in the Microsoft knowledge base article Document file size increases with EMF, PNG, GIF, or JPEG graphics in Word.

Wednesday, August 17, 2011

Using NuGet to download all packages for the solution

Here I'm talking about a technique which allows you to use NuGet without committing packages to source control (which is handy if you use Mercurial). I first heard about this approach from José F. Romaniello's post. In essence: each project within the solution has a pre-build step ensuring that all packages are downloaded via the NuGet.exe command line utility. While this approach works well for small projects, it has some drawbacks with large solutions:
  • It forces me to insert that build step into each project within the solution, which is a little bit annoying.
  • It increases solution build time. This becomes noticeable for a solution containing a lot of projects. I guess that was one of the main reasons why Simon Cropp wrote this post.
So, do I really need to check for NuGet packages on each build? I think that checking for dependencies on the CI server, and perhaps after pulling changes from source control, should be enough.

To achieve this I use a PowerShell script which looks for packages.config files within the solution and executes NuGet.exe for each of them.

This script uses the repositories.config file to locate packages.config files instead of searching for them in the file system. Obviously, to make it work, repositories.config should be placed in source control, and the path variables should be adjusted to the solution folder structure.

Hope you find this post helpful. At least I've learned some new things about PowerShell :). Please share your thoughts.

Sunday, March 6, 2011

Great Lent 2011.

Next week I'll have to live as a bachelor, and I have only 200 UAH left. Well then, Great Lent begins just in time this year.

If anyone else is planning to fast, here is a chart from the site which helps you figure out what you can eat and when.

Sunday, September 5, 2010

How to integrate MSpec into Hudson CI server

If you've found this post, you might already know what MSpec and the Hudson CI server are. If not, I'll tell you.

MSpec (short for Machine.Specifications) is an awesome BDD-style testing framework for .NET which allows you to write tests without language noise.

Hudson CI is an open source continuous integration server which is very easy to install and configure, with a user-friendly interface. It allows you to continuously build your projects and verify the quality of those builds. And it has a lot of plugins, which make developing with Hudson more interesting and fun.

So what does it mean to integrate them, and why should I do it?

MSpec has a command line runner, which can fail the build in case some tests are failing, and it can generate a nice HTML report which can be placed in the build artifacts. So what else do I need, and why should I bother with Hudson integration?

Because dealing with tests is one of Hudson's core features. It can show you information about the tests for a particular build, as well as test history: when they started breaking, how many new tests were added, how test duration changed from build to build, etc. Hudson can also show nice history trend charts.


The screenshot above shows the test results for a single build.



This screenshot (from another project) shows the test trend.

And the most fun feature (at least to me) is test tracking in the Hudson Continuous Integration game plugin, which gives users points for improving the builds. This is my favorite plugin (OK, maybe after the Chuck Norris plugin).


Wouldn't it be great if MSpec tests could participate in these activities?

How to achieve that?

So how does Hudson know about all the available testing tools for various programming languages? Of course, it doesn't know about all of them. Internally it works with JUnit XML reports, but there are many special plugins for the most popular testing tools which, in fact, transform the testing tool's output to JUnit format, thus allowing them to be used by Hudson. Unfortunately, there is no such plugin for MSpec currently. And I'm not familiar enough with Java to write one :)

The good news is that MSpec can generate an XML report (although it is not as informative as I would like it to be). And we can write an XSLT transformation to convert the MSpec XML output to JUnit format and provide the test results to Hudson directly as JUnit tests.
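As a rough illustration of what that transformation does (sketched here in Python rather than XSLT; the input element names below are hypothetical, not MSpec's actual report schema):

```python
import xml.etree.ElementTree as ET

def to_junit(report_xml):
    """Convert a test report to the JUnit XML shape that Hudson understands.

    The input shape assumed here (<context> elements containing
    <specification name="..." status="..."/> elements) is only an
    illustration of the idea - the real MSpec report schema differs,
    and the actual conversion in this post is done with XSLT.
    """
    source = ET.fromstring(report_xml)
    suite = ET.Element("testsuite")
    tests = failures = 0
    for context in source.iter("context"):
        for spec in context.iter("specification"):
            tests += 1
            case = ET.SubElement(suite, "testcase",
                                 classname=context.get("name"),
                                 name=spec.get("name"))
            if spec.get("status") != "passed":
                failures += 1
                ET.SubElement(case, "failure",
                              message=spec.get("status") or "failed")
    suite.set("tests", str(tests))
    suite.set("failures", str(failures))
    return ET.tostring(suite, encoding="unicode")
```

Whatever the tool, the target is the same: a `<testsuite>` with one `<testcase>` per specification, and a nested `<failure>` element for each failing one.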

So here is such XSLT:

So now you can transform the MSpec results to JUnit output in your build script and then use it in Hudson. If you are using MSBuild, you can use its XsltTransformation task to do the conversion:

And finally, you configure Hudson to use the test results by specifying the converted XML output.


Additionally, it is a good idea to publish the MSpec HTML report. You can do that with the help of the HTML Publisher Plugin:


That is all you need (except, of course, that you need to write a build script which runs your specifications and generates the XML and HTML reports). In the end you will see your specifications included as regular tests in Hudson. And as a bonus you get a nice HTML report:


The link at the top of the page points to the specifications HTML report:



That's not all I wanted to say, but I'm a bit tired already. So many words :) Hope this is useful for someone. And may your Mr. Hudson and Chuck Norris always be happy :).

Thursday, March 4, 2010

XML file in Visual Studio - Request for permission failed.

A note to self about an error Visual Studio showed me when I opened an XML file - because otherwise I won't know what to do with this error when it occurs next time.

So, here is the situation. I added an XML file downloaded from the internet to my project. I opened it in Visual Studio 2008 and saw this error:

The error message:
Request for the permission of type 'System.Security.Permissions.FileIOPermission, mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

If the DOCTYPE specifies a schema URL on the internet, the corresponding error is:
Request for the permission of type 'System.Net.WebPermission, System, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

Meanwhile, several XML files with the same DTD had already been included in the project.

Google didn't help me. Then I thought to look at the file's properties. The problem turned out to be simple - Windows had blocked the file because it was downloaded from the internet.

Thank you very much. I suppose I should have guessed that right away? I wasted a ton of time.

Monday, February 15, 2010

Great Lent 2010. What can you eat?

UPD: The updated fasting chart for 2011 can be found here.

Today, February 15, Great Lent began. For me it is an occasion to change myself for the better and to fight my bad habits. And although everyone says that fasting is not a diet, abstaining from food is still the main attribute of the fast.

What food may be eaten during the fast

There are different sets of rules, more and less strict. I follow the ones described on a page of the site, where I also found this handy chart:

So the main rules are:
  • You may not eat meat, fish, eggs, milk, or other products of animal origin.
  • On the first two days of the fast (February 15 and 16) and on the second-to-last day (April 2) the fast is strict: complete abstention from food is recommended, or only a small amount of Lenten food.
  • On Wednesdays and Fridays - "dry eating": neither boiled nor steamed food may be eaten, and no oil. Bread, and fresh, dried, and pickled vegetables and fruit are allowed.
  • On the other days vegetable oil is allowed.
  • On Saturdays and Sundays, as well as on the feasts of February 25, March 9, and March 22, seafood may be eaten.
  • On Palm Sunday (March 28) fish may be eaten. And on Lazarus Saturday (the eve of Palm Sunday) caviar may be eaten.
And finally, remember that the main thing you must not eat is your neighbors.