Wednesday, September 23, 2009

Logging in modern .NET applications

Background - A common evolution to logging

Recently we needed to implement a decent logging solution for a new application. The application (like so many) had evolved from a proof of concept (P.O.C.) code base. Five weeks after the cut-over from P.O.C. to code that was intended to ship, we had a plethora of Debug.WriteLine, Console.WriteLine and Trace.WriteLine calls sprinkled through the code base. Most of the *.WriteLine code was there to help debug
  1. code that integrated 3rd party code
  2. code that had no tests (booo!) and F5 testing seemed to be "good enough"
After a couple of preliminary builds were sent to the client for initial evaluation, several unhandled exceptions had crept through. To be fair to the team, most of these were related to some undocumented features of the 3rd party system, and a few were my fault where I was perhaps pushing WPF a bit too far for the client's machines. The result of these exceptions was a custom popup window in the WPF application, stating that a fatal exception had occurred and the application would shut down now. We would also email the exception as a quick way to notify the dev team of any errors.
At this point the team decided we had several real problems:
  1. We had various "debugging" systems (Console, debug and trace)
  2. We had hard coded our implementations of "logging". If logging a specific piece of functionality is a requirement we really should test that we are logging the correct thing!
  3. As the WPF guy I am interested in any WPF binding errors (in the VS output window), but the service/agent guys were flooding the console screen with their pub/sub debug messages. How could I filter out their messages, and how could they filter out mine?
  4. We were catching the unhandled exceptions at the dispatcher, not at the point they were thrown. This meant we had a huge stack trace (noise) pointing back to the problem, and we were also missing some useful data for debugging the exception, such as the inputs to the method that threw it.

Setting the requirements

From the problems we have above the team decided that our requirements for the logging system were:
  • Testable
  • Provide different levels(severity) of logging. Trace, Debug and Exception.
  • Filter on logging level
  • Log to various targets. Email, File and Console.
  • Provide different areas or categories to log as. e.g. Presentation, Agent, Services...
  • Filter on logging Category (so I don't see the agent debug messages and agent developers don't see my presentation guff)
The requirements for the implementation of the logging system are:
  • Log unhandled exceptions so we can analyse them when they occur in Live
  • Log the inputs to the method that threw the exception so we can identify if it is data related
  • Replace console/debug/trace calls with the new logger.

Journey to a solution

Logging System

The first thing I will tackle is testability. I will create my own interface called ILogger that will do my logging for me:
public interface ILogger
{
  void Write(string message);
}

Well that was easy! Next: provide different levels of logging. I am going to do that by replacing my Write method with level-specific methods:
public interface ILogger
{
    void Trace(string message);
    void Debug(string message);
    void Error(string message);
}

Now to add categories. Here I will add a category parameter to each of the methods. I would advise against using enums for your categories, especially if your logger becomes a shared API. Enums cannot be extended and really tie you down.

public interface ILogger
{
    void Trace(string category, string message);
    void Debug(string category, string message);
    void Error(string category, string message);
}

Hmmm. It now looks like we are making a mess for the poor developer who has to implement this interface. I think I may have been on the right track with the Write method in the first place. I am going to change that back to a Write method that takes 3 parameters. I will offer extension methods to give users the ability to make calls like the ones above (a little bit more work for me but less work for everyone else). See here for more info on extending interfaces with extension methods.

public interface ILogger
{
    void Write(LogLevel level, string category, string message);
}
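As a sketch of those extension methods (assuming a simple LogLevel enum with Trace, Debug and Error values), the level-specific calls could be layered back on like this:

```csharp
public static class LoggerExtensions
{
    // Each helper just forwards to the single Write method on ILogger.
    public static void Trace(this ILogger logger, string category, string message)
    {
        logger.Write(LogLevel.Trace, category, message);
    }

    public static void Debug(this ILogger logger, string category, string message)
    {
        logger.Write(LogLevel.Debug, category, message);
    }

    public static void Error(this ILogger logger, string category, string message)
    {
        logger.Write(LogLevel.Error, category, message);
    }
}
```

Implementers of ILogger only ever write one method; callers still get the friendly level-specific API.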

Some may be thinking, "Hang on. Strings are bad! Surely a category enum has to be better?" To answer this I would recommend using an internal class like this:

class Category
{
    public const string Agent = "Agent";
    public const string Presentation = "Presentation";
    public const string Service = "Service";
}

If you want to expose categories in your public API, create a public version of this class that consumers can use and even inherit from to add their own categories too. If you are thinking "shouldn't they be public static readonly?", I will cover that later.
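A sketch of what that might look like (the class and category names here are hypothetical):

```csharp
public class KnownCategories
{
    public const string Agent = "Agent";
    public const string Presentation = "Presentation";
}

// A consumer can inherit to add their own categories.
public class MyAppCategories : KnownCategories
{
    public const string Billing = "Billing";
}
```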
Checking off against our requirements list, we can cross some things off:

  • Testable (Interfaces make things testable)
  • Provide different levels(severity) of logging. Trace, Debug and Exception.
  • Filter on logging level
  • Log to various targets. Email, File and Console.
  • Provide different areas or categories to log as. eg Presentation, Agent, Services...
  • Filter on logging Category (so I don’t see the agent debug messages and agent developers don't see my presentation guff)
So that is a good start. Now we need to add filtering and logging to various targets. Luckily there are plenty of 3rd party logging tools out there that do all of this for us. As our project is using Enterprise Library already, we will just use its Logging Application Block. Look to the sample at the end of the post for a complete example.
Great! Thanks to the 3rd party logging system we have ticked off all of our system requirements, now for our implementation requirements.
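For illustration, a minimal ILogger implementation over the Logging Application Block might look like this sketch (the LogLevel-to-TraceEventType mapping here is my assumption; adjust to taste):

```csharp
using System.Diagnostics;
using Microsoft.Practices.EnterpriseLibrary.Logging;

// An ILogger backed by the Enterprise Library Logging Application Block.
// Filtering by level/category and routing to targets (email, file, console)
// all come from the Enterprise Library configuration, not from this class.
public class EnterpriseLibraryLogger : ILogger
{
    public void Write(LogLevel level, string category, string message)
    {
        var entry = new LogEntry { Message = message };
        entry.Categories.Add(category);
        entry.Severity = ToSeverity(level);
        Logger.Write(entry);
    }

    private static TraceEventType ToSeverity(LogLevel level)
    {
        switch (level)
        {
            case LogLevel.Error: return TraceEventType.Error;
            case LogLevel.Debug: return TraceEventType.Verbose;
            default: return TraceEventType.Information;
        }
    }
}
```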

Implementation of Logging

Now that we have an interface to code against, let us now look at how we would use it.
Ed- This is completely back to front. You should normally look at how you would use it first and then create the implementation. TDD is a great methodology for this. We are approaching it back-to-front in this post because I think it is easier to consume for the reader.
So the main pain point we have is logging exceptions that occur in the agents (the classes that get data from services and map them to client-side entities). This is due to poor 3rd party documentation, Live being slightly different to Dev and QA, and integration testing being harder to perform than unit testing.
Prior to our new Logging system some agent code might look like this:
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    Console.WriteLine("AmendPortfolio {0}", portfolio.Id);
    var data = portfolio.MapToServiceData();    //Mapper extension method.
    _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    Console.WriteLine("AmendPortfolio complete");
}

Now if we swapped out the console calls with our logger and then put in some exception logging, it may look like this:
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    _logger.Debug(Category.Agent, string.Format("AmendPortfolio {0}", portfolio.Id));
    try
    {
        var data = portfolio.MapToServiceData();    //Mapper extension method.
        _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    }
    catch(Exception ex)
    {
        _logger.Error(Category.Agent, ex.ToString());
        throw;
    }
    _logger.Debug(Category.Agent, "AmendPortfolio complete");
}

Oh dear. We now have more code that does logging than code that does real work. While we have satisfied our requirements, we have doubled our work load. Not good. Back to the drawing board.

AOP

Some will have heard of Aspect Oriented Programming (AOP). It seemed like 5 years ago it was going to change everything. Well, it mainly just changed logging. AOP is a style of programming that allows code to be injected at a given interception point. In our case the interception points would be the start of our agent method, the end of the agent method, and when the method throws an exception (which is really the same as the end of the method). As far as I know there are 2 popular ways to achieve this:
  1. at run time using a tool like Castle Windsor or Microsoft Unity
  2. at compile time using a tool like PostSharp
I have had some experience with PostSharp, as I used it prior to Unity having the ability to add interceptors, so for our solution we went with PostSharp. I believe switching between any of these options would not be a huge amount of work.
First, a quick introduction on how I have previously done AOP logging with PostSharp. I would create an attribute that I would apply to classes or methods that I wanted logged. The attribute would satisfy the requirements of PostSharp so that code would be injected at compile time to do my logging.
Code like this:
[Logged]
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    var data = portfolio.MapToServiceData();    //Mapper extension method.
    _service.AmendBasket(_session, data, true);    //Call to 3rd party system
}

which at compile time would alter the IL to represent something more like this:
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    Logger.Debug("AmendPortfolio({0})", portfolio.Id);
    try
    {
        var data = portfolio.MapToServiceData();    //Mapper extension method.
        _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    }
    catch(Exception ex)
    {
        Logger.Error(ex.ToString());
        throw;
    }
    Logger.Debug("AmendPortfolio complete");
}
Well that looks perfect, doesn't it? Not really. We don't have a category specified, and we have a hard-coded reference to the static Logger class from Enterprise Library Logging. It no longer points to our _logger member variable, which was of type ILogger. This makes our testing harder to do. If testing your logging is not really something you care about (which is fine), then this AOP solution might be for you. If you do want to be more specific about logging, then we need to find a way of getting hold of the instance of ILogger. As PostSharp is a compile-time AOP framework, it is a touch harder to integrate than if we used Unity or Windsor. The main problem is: how do we get a handle on the logger? A solution we came up with was to create an ILogged interface:
public interface ILogged
{
    ILogger Logger { get; }
}
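With the Logger exposed this way, the aspect itself might look something like the sketch below. This is written against the PostSharp (Laos-era) API as I remember it, so treat the member names as pseudocode rather than gospel:

```csharp
using System;
using PostSharp.Laos;

// Sketch of a logging aspect that pulls the ILogger off the target instance
// via the ILogged interface, rather than a hard-coded static Logger.
[Serializable]
public class LoggedAttribute : OnMethodBoundaryAspect
{
    public string Category { get; set; }

    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        var logged = eventArgs.Instance as ILogged;
        if (logged != null)
            logged.Logger.Write(LogLevel.Debug, Category,
                "Entering " + eventArgs.Method.Name);
    }

    public override void OnException(MethodExecutionEventArgs eventArgs)
    {
        var logged = eventArgs.Instance as ILogged;
        if (logged != null)
            logged.Logger.Write(LogLevel.Error, Category,
                eventArgs.Exception.ToString());
    }

    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
        var logged = eventArgs.Instance as ILogged;
        if (logged != null)
            logged.Logger.Write(LogLevel.Debug, Category,
                "Exiting " + eventArgs.Method.Name);
    }
}
```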
By doing this we expose the Logger so we can use it in the Aspect/Attribute. Now if we look at our method in the greater context of the class it resides in, we can see what the implementation may look like:
[Logged]
public class PortfolioAgent : IPortfolioAgent, ILogged
{
    private readonly ILogger _logger;
    private readonly ISomeService _service;
    public PortfolioAgent(ILogger logger, ISomeService service)
    {
        _logger = logger;
        _service = service;
    }
    public void AmendPortfolio(PortfolioAmmendment portfolio)
    {
        var data = portfolio.MapToServiceData();    //Mapper extension method.
        _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    }
}

That looks kinda cool to me. One thing my colleagues noted is that in this scenario they would prefer property injection for the logger. This is fine with me as long as it is easy to use and we can still test it; the ILogged interface does not preclude property injection, it is just not my preference. Another thing to note is the lack of a category. The easy fix there is to add a string Category property to our LoggedAttribute.
[Logged(Category=Category.Agent)]
public class PortfolioAgent : IPortfolioAgent, ILogged
{
...
}
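As an aside, if you do prefer property injection (as my colleagues did), a sketch using Unity's [Dependency] attribute might look like this. A get/set auto-property still satisfies the get-only ILogged contract:

```csharp
[Logged(Category = Category.Agent)]
public class PortfolioAgent : IPortfolioAgent, ILogged
{
    // Unity populates this after construction via property injection.
    [Dependency]
    public ILogger Logger { get; set; }
    ...
}
```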


If you remember, earlier I mentioned public const vs public static readonly. This is the reason why I chose consts: attribute arguments must be compile-time constants, so only consts can be used in the attribute.


I am pretty happy with this now. I think we tick off all of our requirements and have not added a lot of complexity to our code. The one last thing that bugs me is that I have to use the LoggedAttribute and the ILogged interface as a couple. If I use one without the other, I will either get no logging, a nasty runtime exception, or, if I code the aspect correctly, a compile-time error (the most desirable). At first I coded the attribute to do the latter (compile-time error), but then realised that all of my agents were in one project and all used the same Category. To make life a little bit easier I moved the attribute to the AssemblyInfo and had the aspect apply itself automatically to any class that implemented ILogged. This may be a step too far towards black magic for some teams, so do what fits best.
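A sketch of that assembly-level wiring (PostSharp multicasts attributes applied at the assembly level; the CompileTimeValidate override is from memory, so treat it as pseudocode):

```csharp
// In AssemblyInfo.cs: multicast the aspect across the whole assembly.
[assembly: Logged(Category = Category.Agent)]

// In LoggedAttribute: quietly skip any type that does not implement ILogged.
// Returning false from CompileTimeValidate stops the aspect being applied
// to that method without raising a build error.
public override bool CompileTimeValidate(MethodBase method)
{
    return typeof(ILogged).IsAssignableFrom(method.DeclaringType);
}
```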

Have a look at the sample code here. Both styles are shown in the example so take your pick of which suits you best.

Wednesday, September 16, 2009

Success, Motivation and Management

I just watched an interesting presentation from Dan Pink on The surprising science of motivation. Here he discusses the conventional ways to get productivity out of employees via carrot and stick mentality. Watch the video first (18min) so I don't take the wind from his sails.

What I found interesting, especially on reflection of my earlier post Projects – Measuring success and providing boundaries, was how he related the 20th century management style to prescriptive roles. Prescriptive roles are roles where you can give very clear guidance on how to perform a task and the boundaries of the role. Boundaries normally define measurable tasks with reward/punishment (carrot/stick) attached. These can be anything from simple things such as:

  • If you work late the company will pay for pizza delivery
  • if the project comes in on time you get a bonus
  • if you meet your KPIs you get all of your bonus. Pessimistically viewed as: miss any of your KPIs and we will dock your pay.

However the interesting thing about prescriptive roles is that in the 21st century the game has changed. Any task that can be completed without a level of creativity, abstract thought or reasoning generally can be done:

  • cheaper by outsourced labour
  • faster by automation (mechanical or computer)

This affects the software industry massively. Outsourcing burst onto the scene at the turn of the century and appeared to be the Holy Grail for accountants across western civilisation. This also scared the hell out of any "computer guy": how was he going to make the payments on his new sports car? Outsourcing was not all it was cracked up to be, with stories of low quality product and communication failures. Outsourcing seems to be making a small comeback, and I think we will see this see-saw rock a little more before outsourcing becomes part of our day-to-day life. See The 4-Hour Workweek for some great ideas on outsourcing your life.

Dan Pink discusses how the 20th century style of carrot/stick management worked well with prescriptive roles. But we are in the 21st century now, and I would like to think that anyone reading this is not performing a prescriptive role. I would even argue that our role is to eliminate or automate what we can, and normally the things that can be eliminated or automated are prescriptive processes. Roles that add real value to any company do require creativity, problem solving, communication skills, building relationships etc. These things cannot (yet) be automated.

So, moving from the production-line 20th century to the service-orientated 21st century, we are seeing a shift in the role of management from one based around compliance (carrot/stick) to self-managing, autonomous teams/employees. This is in line with what agile and lean concepts are trying to achieve. Creating a culture where

  1. autonomy
  2. mastery
  3. purpose

are values that are held dear creates an amazing shift in the positivity of the team. Instead of prescribing exactly what is to be done and creating a drone army (which could be replaced by outsourcing or automation), try setting clear expectations of the outcomes you want achieved and let the team go for it. This will give the team a sense of worth, as you are touching on Maslow's Hierarchy of Needs by giving them a channel for creativity and problem solving, but probably more importantly a sense of respect and belonging. Obviously it doesn't have to be a free-for-all, which would no doubt result in total failure, but simple things like burn-down charts and daily stand-ups can give 'management' visibility of progress.

So what can you do if you are not management?

I believe that people are attracted to like-minded people. If you value and exercise principles like mastery, purpose, independence and interdependence, then I think you will gravitate to companies where that is the culture. The easiest thing to do is to strive for mastery. In the information age it is simple to learn new skills. Simple is not the same as easy, however. Finding the information on a subject is not the same as mastering it, so you have to jump in feet first and make mistakes. Ideally this is not on mission-critical projects; I do a lot of mini projects at home to learn a new skill. Some companies like Google are now offering time at work to do this stuff. If your company does not offer this time, then you may want to consider your priorities and see if you can make time for self development. In 30 minutes on the way to work you can read 10 pages of a book; that should get you through around one book per month. If you drive/cycle to work, then maybe audio books or podcasts are for you. This is only one part of the path to mastery. You then need to distil the information you are collecting. At the end of a chapter, jot down some notes in your own words of what you just learnt. The act of distilling the information will help you understand it better. To further improve your understanding, actually do something that requires your new skill. Lastly, to really see if you understand the skill, try teaching it to someone else. This really shows you where you are still weak.

Next, start automating the prescriptive parts of your role. For C# guys this is mainly done for us with tools like automated build servers and ReSharper. If you have not worn out your Tab key, then as a C# coder you probably don't use snippets/live templates enough. If you don't have a little keyboard shortcut for getting latest, building, running tests and checking in, then automate it! If part of your role can't be automated, try outsourcing it. It is amazing what people will do if you ask them nicely – "Excuse me Tony, would you mind taking this mail to the mail room for me?"

Once you are on the track to mastery by creating some sort of self-development routine*, and have automated or outsourced the prescriptive parts of your role, you can concentrate on delivering value. This is where purpose comes in. As you develop more skills, reduce drone work and start delivering more value from your role, management may feel that you can have a level of autonomy. From their perspective, giving you autonomy now really just reduces their workload. If you constantly deliver value, management will naturally move away from compliance management towards engagement.

S.M.A.R.T. goals

Projects – Measuring success and providing boundaries

*I try to read a minimum of one book per month. I also try to keep up with my blog subscriptions but constantly have hundreds of unread posts. As a guide, apparently Bill Gates reads 3 books a weekend!

Friday, August 14, 2009

How not to write an Interface in C# 3.0

While working with the IUnityContainer interface from Microsoft's Patterns and Practices team, I decided it was well worth a post on how not to design interfaces. Recent discussions amongst Rhys and Colin (here as well) have been interesting, but I imagine most would agree that both arguments are really fighting over which polish to use on their code. If these are the biggest battles you have to face at work then I am jealous.

Introducing IUnityContainer ....
42 members are defined on this interface. Fail.
With much restraint I won't go on about it and flame the guys who wrote it. Most people know of my disdain for the P&P team at Microsoft. So what tips can I give to make this usable again? Let's break down the solution into a few tips:

  • Extension methods
  • Cohesion
  • Intention revealing interfaces

How does each of these help us? Let's look at some code I was writing pre-extension methods. It was a validation interface that had 2 methods, Validate and Demand:

public interface IValidator<T>
{
  IEnumerable<ValidationErrors> Validate(T item);

  ///<exception cref="ValidationException">Thrown when the item is not valid</exception>
  void Demand(T item);
}

The problem with this interface is that all implementations would/could implement Demand the same way: make a call to Validate(T item) and, if any ValidationErrors came back, throw an exception with the validation failures. Time ticks by and I realise that I now have extension methods in C# 3.0 (.NET 3.5). I only need the Validate method on my interface now, and can provide the Demand implementation as an extension method. The code now becomes:

public interface IValidation<T>
{
  /// <summary>
  /// Validates the specified item, returning a collection of validation errors.
  /// </summary>
  /// <param name="item">The item to validate.</param>
  /// <returns>Returns a <see cref="T:ArtemisWest.ValidationErrorCollection"/> that is empty for a valid item, 
  /// or contains the validation errors if the item is not in a valid state.</returns>
  /// <remarks>
  /// Implementations of <see cref="T:ArtemisWest.IValidation`1"/> should never return null from the validate method.
  /// If no validation errors occurred then return an empty collection.
  /// </remarks>
  IEnumerable<ValidationErrors> Validate(T item);
}
public static class ValidatorExtensions
{
  /// <summary>
  /// Raises a <see cref="T:ArtemisWest.ValidationException"/> if the <paramref name="item"/> is not valid.
  /// </summary>
  /// <typeparam name="T">The type that the instance of <see cref="T:IValidation`1"/> targets.</typeparam>
  /// <param name="validator">The validator.</param>
  /// <param name="item">The item to be validated.</param>
  /// <exception cref="T:ArtemisWest.ValidationException">
  /// Thrown when validating the <paramref name="item"/> returns any errors. Only the first 
  /// validation error is raised with the exception. Any validation errors that are marked as
  /// warnings are ignored.
  /// </exception>
  public static void Demand<T>(this IValidation<T> validator, T item)
  {
    foreach (var error in validator.Validate(item))
    {
      if (!error.IsWarning)
      {
        throw new ValidationException(error);
      }
    }
  }
}

The end result is that the API feels the same, as I have access to both methods, but the cost of implementing my interface is reduced to just its core concern.
So, extension methods are one trick we have in the bag. Next: cohesion.

The recent discussion between Rhys and Colin is "how many members belong on an interface?" I think both will agree the answer is not 42. Juval Lowy gave a great presentation at TechEd 2008 on interfaces, suggesting we should be aiming for 3-7 members per interface. This gives us a good level of cohesion and a low level of coupling. Let's look at some of the members on the IUnityContainer interface.

  • 8 overloads of RegisterInstance
  • 16 overloads of RegisterType
  • Add/Remove "Extensions" methods
  • 4 overloads of BuildUp
  • 2 overloads of ConfigureContainer
  • CreateChildContainer
  • 4 overloads of Resolve
  • 2 overloads of ResolveAll
  • A TearDown method
  • and a Parent property
Whew! Well how can we tame this beast? When I look at this interface I see certain groups that look related in usage. They would be:
  1. Register and Resolve functionality
  2. Add and Remove extensions functionality
  3. Build up and teardown functionality
  4. Container hierarchy functionality (Parent and CreateChildContainer)
  5. Container configuration


Interestingly on our current project we only use the Register/Resolve functionality.
These five groups each have some level of cohesion, which to me makes them candidates for their own interfaces. The big giveaway is that I use Unity quite successfully but never use 4/5 of the functionality defined on this interface. So our 2nd tip would be to split these groups of functionality out into their own interfaces.
But what do we name the new interfaces? This is our 3rd tip:

Intention revealing interfaces.

Looking at my list I would imagine some useful names could be:

  1. IContainer
  2. IExtensionContainer
  3. ILifecycleContainer
  4. INestedContainer
  5. IConfigurableContainer
To be honest, I have put little thought into these names. Normally I would put a LOT of effort into getting the naming right, but I don't work on the P&P team so these changes will never be made, so why waste my time? Edit: My laziness here really does take the wind out of the sails of this argument. Sorry.
Ok so how can we bring this all together? My proposal would be to have 6 interfaces
  1. IContainer
  2. IExtensionContainer
  3. ILifecycleContainer
  4. INestedContainer
  5. IConfigurableContainer
  6. IUnityContainer : IContainer, IExtensionContainer, ILifecycleContainer, INestedContainer, IConfigurableContainer


Next I would create some extension methods to deal with the silly amount of duplication the multiple overloads incur. Looking at the implementation in UnityContainerBase, I would think that all of these methods could be candidates for extension methods:

public abstract class UnityContainerBase : IUnityContainer, IDisposable
{
  //prior code removed for brevity
  public IUnityContainer RegisterInstance<TInterface>(TInterface instance)
  {
    return this.RegisterInstance(typeof(TInterface), null, instance, new ContainerControlledLifetimeManager());
  }
  public IUnityContainer RegisterInstance<TInterface>(string name, TInterface instance)
  {
    return this.RegisterInstance(typeof(TInterface), name, instance, new ContainerControlledLifetimeManager());
  }
  public IUnityContainer RegisterInstance<TInterface>(TInterface instance, LifetimeManager lifetimeManager)
  {
    return this.RegisterInstance(typeof(TInterface), null, instance, lifetimeManager);
  }
  public IUnityContainer RegisterInstance(Type t, object instance)
  {
    return this.RegisterInstance(t, null, instance, new ContainerControlledLifetimeManager());
  }
  public IUnityContainer RegisterInstance<TInterface>(string name, TInterface instance, LifetimeManager lifetimeManager)
  {
    return this.RegisterInstance(typeof(TInterface), name, instance, lifetimeManager);
  }
  public IUnityContainer RegisterInstance(Type t, object instance, LifetimeManager lifetimeManager)
  {
    return this.RegisterInstance(t, null, instance, lifetimeManager);
  }
  public IUnityContainer RegisterInstance(Type t, string name, object instance)
  {
    return this.RegisterInstance(t, name, instance, new ContainerControlledLifetimeManager());
  }
  //Remaining code removed for brevity
}

All of these methods just delegate to the one overload left abstract:

public abstract IUnityContainer RegisterInstance(
    Type t, 
    string name, 
    object instance, 
    LifetimeManager lifetime);

The obvious thing to do here is to make all of these extension methods in the same namespace as the interfaces:

public static class IContainerExtensions
{
  public static IContainer RegisterInstance<TInterface>(this IContainer container, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, new ContainerControlledLifetimeManager());
  }
  public static IContainer RegisterInstance<TInterface>(this IContainer container, string name, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, new ContainerControlledLifetimeManager());
  }
  public static IContainer RegisterInstance<TInterface>(this IContainer container, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, lifetimeManager);
  }
  public static IContainer RegisterInstance(this IContainer container, Type t, object instance)
  {
    return container.RegisterInstance(t, null, instance, new ContainerControlledLifetimeManager());
  }
  public static IContainer RegisterInstance<TInterface>(this IContainer container, string name, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, lifetimeManager);
  }
  public static IContainer RegisterInstance(this IContainer container, Type t, object instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(t, null, instance, lifetimeManager);
  }
  public static IContainer RegisterInstance(this IContainer container, Type t, string name, object instance)
  {
    return container.RegisterInstance(t, name, instance, new ContainerControlledLifetimeManager());
  }
}

This now reduces our IContainer Interface to just the one method

public interface IContainer
{
  IContainer RegisterInstance(Type t, string name, object instance, LifetimeManager lifetime);
}


One thing to note here is that we have broken the contract because we now return IContainer not IUnityContainer. We will come back to this problem later.
If we then follow suit on the other interfaces we have created, we will have 6 interfaces that look like this:

public interface IContainer
{
  IContainer RegisterType(Type from, Type to, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers);
  IContainer RegisterInstance(Type t, string name, object instance, LifetimeManager lifetime);
  object Resolve(Type t, string name);
  IEnumerable<object> ResolveAll(Type t);
}
public interface IExtensionContainer
{
  IExtensionContainer AddExtension(UnityContainerExtension extension);
  IExtensionContainer RemoveAllExtensions();
}
public interface ILifecycleContainer
{
  object BuildUp(Type t, object existing, string name);
  void Teardown(object o);
}
public interface INestedContainer
{
  INestedContainer CreateChildContainer();
  INestedContainer Parent { get; }
}
public interface IConfigurableContainer
{
  object Configure(Type configurationInterface);
}
public interface IUnityContainer : IDisposable, IContainer, IExtensionContainer, ILifecycleContainer, INestedContainer, IConfigurableContainer
{}

So now we have some much more manageable interfaces to work with. However, we have broken the feature it had before of returning IUnityContainer from each method. You may ask: why would you return the instance when clearly you already have the instance? By doing so you can create a fluent interface. See my other post on Fluent interfaces and DSLs for more information.
Now that we have removed all the noise from the interfaces, we actually have a reasonable number of members to work with. This makes me think that maybe we can refactor back to a single interface. Let's have a look:

public interface IUnityContainer : IDisposable
{
  IUnityContainer RegisterType(Type from, Type to, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers);
  IUnityContainer RegisterInstance(Type t, string name, object instance, LifetimeManager lifetime);
  object Resolve(Type t, string name);
  IEnumerable<object> ResolveAll(Type t);
  IUnityContainer AddExtension(UnityContainerExtension extension);
  IUnityContainer RemoveAllExtensions();
  object BuildUp(Type t, object existing, string name);
  void Teardown(object o);
  IUnityContainer Parent { get; }
  IUnityContainer CreateChildContainer();
  object Configure(Type configurationInterface);
}

Well that is 12 members (11 declared here plus Dispose from IDisposable), which is above my happy limit of 10 and nearly double my ideal limit of 7, however...I think this would be a vast improvement on the silly interface that currently exists with its 42 members.
Just for fun, here are the extension methods that would complete the interface and bring it back to feature parity.

public static class IUnityContainerExtensions
{
  public static IUnityContainer AddNewExtension<TExtension>(this IUnityContainer container) where TExtension : UnityContainerExtension, new()
  {
    return container.AddExtension(Activator.CreateInstance<TExtension>());
  }
  public static T BuildUp<T>(this IUnityContainer container, T existing)
  {
    return (T)container.BuildUp(typeof(T), existing, null);
  }
  public static object BuildUp(this IUnityContainer container, Type t, object existing)
  {
    return container.BuildUp(t, existing, null);
  }
  public static T BuildUp<T>(this IUnityContainer container, T existing, string name)
  {
    return (T)container.BuildUp(typeof(T), existing, name);
  }
  public static TConfigurator Configure<TConfigurator>(this IUnityContainer container) where TConfigurator : IUnityContainerExtensionConfigurator
  {
    return (TConfigurator)container.Configure(typeof(TConfigurator));
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, string name, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, lifetimeManager);
  }
  public static IUnityContainer RegisterInstance(this IUnityContainer container, Type t, object instance)
  {
    return container.RegisterInstance(t, null, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, string name, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, lifetimeManager);
  }
  public static IUnityContainer RegisterInstance(this IUnityContainer container, Type t, object instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(t, null, instance, lifetimeManager);
  }
  public static IUnityContainer RegisterInstance(this IUnityContainer container, Type t, string name, object instance)
  {
    return container.RegisterInstance(t, name, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, null, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), null, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, string name, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, name, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, string name, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), name, null, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, null, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, name, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), name, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, string name, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, name, null, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type from, Type to, params InjectionMember[] injectionMembers)
  {
    container.RegisterType(from, to, null, null, injectionMembers);
    return container;
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, name, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type from, Type to, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(from, to, null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type from, Type to, string name, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(from, to, name, null, injectionMembers);
  }
  public static T Resolve<T>(this IUnityContainer container)
  {
    return (T)container.Resolve(typeof(T));
  }
  public static T Resolve<T>(this IUnityContainer container, string name)
  {
    return (T)container.Resolve(typeof(T), name);
  }
  public static object Resolve(this IUnityContainer container, Type t)
  {
    return container.Resolve(t, null);
  }
  public static IEnumerable<T> ResolveAll<T>(this IUnityContainer container)
  {
    // The compiler-generated iterator Reflector shows was not worth reproducing
    // (6 hours till my holiday!); a simple cast does the job.
    // Requires a using System.Linq; directive.
    return container.ResolveAll(typeof(T)).Cast<T>();
  }
}

Bloody hell. Imagine trying to implement that on every class that implemented the interface!

While I know this entire post is academic, as we can't change Unity, I hope it sows some seeds for other developers so that when they design their next interface it won't have silly overloads that just make the consuming developer's job harder than it ought to be.

Sunday, July 26, 2009

Code inspections

Code Inspections (or Code Reviews or Peer Reviews) are a technique a team can adopt to improve the quality of the software they deliver. It is initially a bit of a cultural shift for many teams, but done well it can produce benefits not just in the software but in team morale too.

Goals

The goals for performing a Code Inspection are the following:

  1. To cross pollinate Domain knowledge
  2. To cross pollinate technical knowledge
  3. To maintain a consistent team process
  4. To identify any obvious errors

A code inspection should be an enjoyable experience where two developers are able to bond and cooperatively improve the quality of the software delivered. The exercise should not feel like a witch-hunt or a boasting exercise for one developer to flex their ability over another. Initially the exercise may feel somewhat uncomfortable for the team; this is usually due to the many different styles and tolerances that each member of the team brings to the collective skill set. In these scenarios it is better to agree amongst the team the level of detail that is in the remit of the inspections.

Introducing a team to Code Inspections

It is probably best if a team eases into code inspections: it is better to be doing some code inspections and forgiving style differences than to be too aggressive in the initial inspections and cause tension in the team that sees the exercise become avoided. While this may be difficult to accept initially, remember that increasing the quality of today's code is less important than spreading knowledge throughout the team and maintaining team spirit. To know what to target in initial inspections, the team should hold a brief meeting and pick a handful of areas that everyone agrees need work. This way we get the biggest gain without creating too much friction.

Tools

The "what" to look for is of less importance than the process itself, but the process can be aided quite nicely by automated tools. In the .NET community the tools I would recommend are FxCop (or Visual Studio Code Analysis), Simian (which identifies duplicate code) and NCover (for test coverage).
To ease into code inspections I think it is best to set the bar quite low. For FxCop, just enable the following rules:
  • Design
  • Naming
  • Performance
  • Security
  • Usage
While some teams may have their own style that might conflict with some of the styles in FxCop, ultimately this is a community standard now. As developers move between jobs it is great to eliminate this kind of induction friction when all .NET developers can be "singing from the same hymn sheet".
Simian lets you set a tolerance for duplication; setting it to 6 lines of duplicate code should be fine.
Assuming the team is doing TDD, or at least writing tests, the bar for test coverage could start at something like 80%. While I understand that chasing 100% test coverage can be frowned upon, it is nice to know that code was written for a reason and does what it was intended to do.

Setting expectations

Once the initial team meeting has been completed, a set of expectations should be drawn up. This will be the benchmark for code inspections. The benchmark should be set at a level where everyone in the team agrees they are happy for their code to be "rejected" if it does not meet the expectations. As you can imagine, this in itself will lower the bar quite considerably, as developers know there are circumstances beyond their control which may prohibit them from meeting the expectations they would like to meet. Project deadlines, new technologies and processes can all affect a developer's ability to meet these goals. Remember, the goal of code inspection is not perfect code; it is to improve the team's ability to create better code, and to continually get better at creating better code.
An example initial code inspection checklist for a C# 3.0 project might look like this:
  1. Exceptions that are explicitly thrown from a public or protected members must be documented with XML comments.
  2. FxCop Naming and Performance rules must be met.
  3. Classes should be testable.
While this is a very small list of things to consider, just setting these expectations, combined with the knowledge sharing the team would perform, would produce a dramatic improvement in any team not already doing code inspections.
Once the team is comfortable with the process and culture of code inspections, have another meeting to raise the bar a little. Again, refrain from being too eager: all team members should feel comfortable with the expectations being set.

Performing Code Inspections

Before we cover the actual actions of a code inspection, quickly remember the goals of a code inspection:
  1. To cross pollinate Domain knowledge
  2. To cross pollinate technical knowledge
  3. To maintain a consistent team process
  4. To identify any obvious errors

The inspection is not to create perfect code.
Steps to perform a code inspection (the first few steps are a courtesy to the reviewer):
  1. First perform a quick review yourself.
  2. Indicate to the person you would like to review your code that you will want some of their time in a few moments. It is important that this is not always the same few people; have a method where you cycle through the team (clockwise round the office, youngest to oldest, etc.). Generally you will want to do most of your reviews with your immediate team, but if you have a larger development team/department that can be involved in the inspection, be sure to include them too. *
  3. Perform any automated analysis (FxCop, Simian, Unit tests etc...)
  4. Open all the necessary documents ready to go, and consider the logical path to show the reviewer through your code.
  5. Some prefer to review code on paper. This is very good for removing the desire to solve any problems with the code; it also allows you to easily make notes against the code and possibly perform the review in a quiet place. However, done regularly it also creates a mass of wasted paper and ink.
  6. When both parties are ready, have the author describe the intent of the code to the reviewer. This should start with a 10 second overview of what the set of code does. 
  7. Show the reviewer the output of any automated analysis (FxCop, tests, coverage etc.). Obviously the results should meet the set expectations.
  8. If at the computer give the reviewer the mouse and keyboard. If using a print out have the print out in front of the reviewer.
  9. Describe what the code does, ideally from entry point to each of the leaves. Very good code will basically read like prose anyway so this would be very easy.
  10. The reviewer sets the pace of the review by scrolling or navigating to classes/methods as they understand the code.
  11. The reviewer prompts for "what-if" scenarios and bookmarks any concerns (Ctrl+k+k in Visual Studio). No attempt is made to "fix" any code by either party.
  12. As the reviewer bookmarks any code they should also take a note of why the bookmark was made (Unclear code, not meeting agreed expectations, possible error etc). This can be done on paper or if available a Code Inspection prompt sheet.
  13. The author generally will identify most of their mistakes as they describe the code to the reviewer.
  14. If the code has been flagged in any way that would fail to meet the set expectations, or would probably result in faulty software, the Author requests that the Reviewer come back to verify their changes and approve the code.**
  15. Once the review(s) have reached the point where no probable faults exist in the code and the team expectations are met, the code review is complete.


*In my experience my project teams have generally been 3-8 people. In these scenarios I would try to have each member of the team review my code over the course of a week. Once I had engaged each team member I would next engage someone from another team. This allows for knowledge transfer across teams too (like finding out each team is using different logging tools!).

**By having the Author suggest that they fix their own code, this creates far less stress for the Reviewer, who might otherwise feel as if they are judging the Author's code. Essentially the Reviewer is a sounding board and prompt for the Author.

Hints and Tips

  • Ensure that the team is clear that both the Reviewer and the Author are there to find any defects. It is a cooperative approach.
  • Focus on finding defects and failed expectations, not fixing them.
  • The biggest benefit for performing Code Inspections is that it creates a culture where the team all get better technically and also domain knowledge is not isolated to pockets of people.
  • Inspect all new and modified code
  • For best results review often and review early. Not many developers write so much code that 4 reviews in a day would be necessary, but not many developers would write so little code that only one or no code reviews would be done in a day. An example of a set of code to review could be when an MVP screen is complete: review the Model, View, Presenter and tests. I imagine few developers produce more than 4 tested screens per day.
  • As you get better at code reviews they should become less and less formal and quicker to execute. Extreme Programming takes peer review to the next level by having two developers work together constantly in a practice called Pair Programming.
  • Some teams keep track of their code reviews and can create statistics to analyse where current weaknesses are. If you see value in this then give it a go, but make sure it doesn't just create "busy work". I believe that if the inspections are done correctly the team will know when there is an area of need and should raise it appropriately.
  • Code reviews should be done at the time the code is completed. New code should not be started until the current code has been inspected and approved. Keeping things fresh in your head is key to a good review. It also means that "unfinished"/un-inspected code doesn't just slip out into production.
  • Some source/version control systems allow you to attribute check-ins. If this is the case then code that has passed review can have its check-in attributed, ideally with the reviewer's details. This then allows the team to find all code that has not been reviewed (i.e. the last check-in for a file was not attributed).

Thursday, July 9, 2009

.NET Parallel Framework - Videos

Way back in June 2008 Joe Albahari gave a presentation on the upcoming parallel programming features in .NET 4.0. I have finally published the recording of his session here. Just for reference the Parallel Framework encompasses PLINQ and is sometimes referred to as the TPL (Task Parallel Library) or System.Threading.dll

Sorry about the delay in the post.

Joe Albahari Parallel FX and PLINQ videos : 6 videos @ ~10min and 40-50MB each.

Saturday, June 6, 2009

Responsive WPF User Interfaces Part 7

Creating a responsive Image control

In part 6 we discovered problems even when we follow the guidelines from this series. Sometimes it can be the controls themselves that cause the lagging interface; in part 6 it was the Image control that was to blame. I make the assumption that it was trying to perform its bitmap decoding on my precious dispatcher thread. The Image control is not the only one that can get ugly: Charles Petzold shows some problems you may find when using chart controls (or any items control) in his Writing More Efficient Items Controls article.

The first thing I wanted to do was just subclass Image; however, I was stumped as to how I would then create my control template for it. I also only wanted to accept URIs as my source, so that I can do the decoding from file to an ImageSource myself explicitly, and not on the UI thread via a TypeConverter. So I want a subclass of Control with a dependency property UriSource and three read-only properties: ImageSource, IsLoading and HasLoadFailed. UriSource provides the hook to bind your file name to. ImageSource then provides the decoded ImageSource to be displayed. IsLoading and HasLoadFailed are there so that you can update the UI appropriately through the lifecycle of the image.

The style and control template I came up with for this design is this:

<Style TargetType="{x:Type controls:Thumbnail}">
  <Setter Property="Template">
    <Setter.Value>
      <ControlTemplate TargetType="{x:Type controls:Thumbnail}">
        <Image x:Name="ImageThumbnail"
               Source="{TemplateBinding ImageSource}"
               HorizontalAlignment="{TemplateBinding HorizontalAlignment}"
               VerticalAlignment="{TemplateBinding VerticalAlignment}"
               MaxHeight="{TemplateBinding MaxHeight}"
               MaxWidth="{TemplateBinding MaxWidth}"
               Stretch="Uniform"
               StretchDirection="DownOnly" />
      </ControlTemplate>
    </Setter.Value>
  </Setter>
</Style>

Where I want to use the Thumbnail control I can have some XAML that hides and shows a loading indicator. This example here just shows the text "Loading Image..." but would probably have a nice animation or an indeterminate progress indicator.

<Border CornerRadius="4" BorderBrush="Silver" BorderThickness="1"
        Width="150"
        Height="150"
        Margin="10">
  <Grid>
    <local:Thumbnail x:Name="Thumbnail"
                     MaxWidth="148"
                     MaxHeight="148"
                     UriSource="{Binding}" 
                     HorizontalAlignment="Center" VerticalAlignment="Center"
                     ToolTip="{Binding}">
      <local:Thumbnail.Style>
        <Style TargetType="{x:Type local:Thumbnail}">
          <Setter Property="Visibility" Value="Visible"/>
          <Style.Triggers>
            <DataTrigger Binding="{Binding RelativeSource={RelativeSource Self}, Path=IsLoading}" Value="True">
              <Setter Property="Visibility" Value="Collapsed"/>
            </DataTrigger>
          </Style.Triggers>
        </Style>
      </local:Thumbnail.Style>
    </local:Thumbnail>

    <TextBlock Text="Loading image..."
               Foreground="Gray"
               HorizontalAlignment="Center"
               VerticalAlignment="Center">
      <TextBlock.Style>
          <Style TargetType="TextBlock">
              <Setter Property="Visibility" Value="Visible"/>
              <Style.Triggers>
                  <DataTrigger Binding="{Binding ElementName=Thumbnail, Path=IsLoading}" Value="False">
                      <Setter Property="Visibility" Value="Collapsed"/>
                  </DataTrigger>
              </Style.Triggers>    
          </Style>
      </TextBlock.Style>
    </TextBlock>
  </Grid>
</Border>

Well that is the easy bit, the public API. Trust me it gets more interesting.

My gut feeling was basically to take the code from Reflector and use it to decode URIs to ImageSource objects, but perform the action on a background thread. There are two problems here:

  1. Most of the fun code that runs when the URI is parsed and then decoded is internal. :-(
  2. You can't pass most sub-classes of ImageSource across threads. If you create them on one thread they can't be accessed on another. This stops us blindly copying code from Reflector, since all of that code only ever runs on the dispatcher thread and so never faces this problem.

We do have something to work with however: WriteableBitmap.

In some vain attempt at brevity (he says in his 7th post in the series!), I will try to skip over all of the brick walls I faced and jump straight to the solution I came up with. This was a great learning experience for me, and in the spirit of a learning experience the code is fairly rough (with TODO comments still intact). I will try to build this into a stable control, but in its current state it is very much demo-ware.

The first thing I wanted to achieve was to have the decoding, and if possible the reading from disk, happen off the UI thread. Let's start with the easy bit: reading the file into memory. I'm going to keep that simple and just grab the file as a byte array like this:

byte[] buffer = File.ReadAllBytes(UriSource);

Next I want to decode the byte array into some form of ImageSource. To do this I have used a combination of BitmapDecoder and WriteableBitmap. First I take the byte array and load it into a MemoryStream, and then pass the stream to the BitmapDecoder factory method Create. This returns an instance of one of its implementations (BitmapDecoder is abstract/MustInherit). From here I make a bold assumption that we only care about the first "frame" of the image. I believe that most formats don't support multiple frames, but formats like GIF do, which allows them to have animation features. Anyway, in my demo-ware code I just take the first frame.

using (Stream mem = new MemoryStream(buffer))
{
  BitmapDecoder imgDecoder = BitmapDecoder.Create(mem, BitmapCreateOptions.None, BitmapCacheOption.None);

  BitmapFrame frame = imgDecoder.Frames[0];
  double scale = GetTransformScale(maxWidth, maxHeight, frame.PixelWidth, frame.PixelHeight);

  BitmapSource thumbnail = ScaleBitmap(frame, scale);

  // this will disconnect the stream from the image completely ...
  var writable = new WriteableBitmap(thumbnail);
  writable.Freeze();
  return writable;
}

From here I figure out the ratio I want to scale it to: there doesn't seem much point in returning an 8MB image if we only want to see it at 300x300, does there? Next we request the image as what is almost our final product: a helper method scales the frame and returns it as a BitmapSource. We can't assign this BitmapSource back to our ImageSource dependency property, as they don't play nice over thread boundaries. So the last thing we need to do is take the BitmapSource we just generated, create a WriteableBitmap from it and call its Freeze method. This puts the WriteableBitmap in an immutable state, which makes it thread safe. Whew!

Just for reference here is the GetTransformScale method

private static double GetTransformScale(double maxWidth, double maxHeight, double currentWidth, double currentHeight)
{
  double xRatio = maxWidth / currentWidth;
  double yRatio = maxHeight / currentHeight;
  double resizeRatio = (xRatio > yRatio) ? xRatio : yRatio;
  if (resizeRatio > 1)
    resizeRatio = 1;
  return resizeRatio;
}

and the ScaleBitmap method

private static BitmapSource ScaleBitmap(BitmapSource source, double scale)
{
  if (scale > 0.9999 && scale < 1.0001)
  {
    return source;
  }
  var thumbnail = new TransformedBitmap();
  thumbnail.BeginInit();
  thumbnail.Source = source;
  var transformGroup = new TransformGroup();
  transformGroup.Children.Add(new ScaleTransform(scale, scale));
  thumbnail.Transform = transformGroup;
  thumbnail.EndInit();
  return thumbnail;
}
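Because both helpers are pure arithmetic, they are easy to sanity check in isolation. The standalone sketch below copies the scale-ratio calculation from above, and also shows why ScaleBitmap compares against an epsilon band rather than scale == 1.0 (floating-point division can produce "almost 1" for same-sized inputs):

```csharp
using System;

class ScaleMathDemo
{
    // Body copied from the scale helper above: picks the larger of the two
    // ratios, capped at 1 (so images are only ever scaled down, never up).
    public static double GetTransformScale(double maxWidth, double maxHeight,
                                           double currentWidth, double currentHeight)
    {
        double xRatio = maxWidth / currentWidth;
        double yRatio = maxHeight / currentHeight;
        double resizeRatio = (xRatio > yRatio) ? xRatio : yRatio;
        if (resizeRatio > 1)
            resizeRatio = 1;
        return resizeRatio;
    }

    static void Main()
    {
        // A 1600x1200 image constrained to 300x300: xRatio = 0.1875,
        // yRatio = 0.25, and the larger ratio wins, so we decode at 400x300
        // and let the Image control's Uniform stretch do the final fit.
        Console.WriteLine(GetTransformScale(300, 300, 1600, 1200)); // 0.25

        // A small image is never scaled up (the ratio is capped at 1).
        Console.WriteLine(GetTransformScale(300, 300, 100, 100)); // 1

        // Why the epsilon band: division can give "almost 1" for equal sizes.
        double almostOne = (0.1 + 0.2) / 0.3;
        Console.WriteLine(almostOne == 1.0);                         // False
        Console.WriteLine(almostOne > 0.9999 && almostOne < 1.0001); // True
    }
}
```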

So all things considered, once you get over the decoding stuff and then wrestle with the various subclasses of ImageSource that play their part, it's not too bad. But like I said earlier, it gets more interesting. So far we have only looked at how to decode the image; we have yet to consider how to make the call to perform the decoding. My first instinct was to make the call on any change to the UriSource property, however I may not have the MaxHeight and MaxWidth information at that point in time. This caused me much stress over when I should call the decode functionality. If no value is ever going to be set for MaxHeight or MaxWidth then I should just process the image, but if first the UriSource is set, then the MaxHeight, then the MaxWidth, I would end up creating three asynchronous calls to render three different sized images. This would be a disaster, as it would surely end in race conditions and most likely the wrong sized image being displayed. The other obvious problem is that we would be performing three times the work. Hmm.

My solution (and mileage may vary, as I am green to concurrent programming models) was to create a stack of render requests for each instance of a Thumbnail. As a property was set, a request would be added to the stack and a request to start processing the stack would occur. Periodically while processing the image I would check whether the current work had been invalidated by any new requests on the stack; if the current request was invalid it would terminate its work. The request to start processing the stack would simply try to pop the last request from the stack and clear out all other requests (effectively ignoring stale requests). On returning from the render request it would loop back and try to pop anything new from the stack. This last part, while important, is very much implementation detail and could vary dramatically from anything you may implement. I have just read over the code myself and it could do with some work. It is amazing what just 3 months (and reading Joe Duffy's Concurrent Programming on Windows) does for your appreciation of your code.
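The request stack described above can be sketched roughly like this. The names and shape here are hypothetical (the real control also has to marshal results back to the dispatcher and abandon work mid-decode); the sketch just captures the "latest request wins, stale requests are discarded" behaviour:

```csharp
using System;
using System.Collections.Generic;

// A minimal sketch of a "latest request wins" stack. Hypothetical type;
// not the actual Thumbnail implementation from the demo code.
public class RenderRequestStack<TRequest>
{
    private readonly object gate = new object();
    private readonly Stack<TRequest> requests = new Stack<TRequest>();
    private int version; // bumped on every push; lets workers detect staleness

    // Called when UriSource/MaxWidth/MaxHeight changes.
    public int Push(TRequest request)
    {
        lock (gate)
        {
            requests.Push(request);
            return ++version;
        }
    }

    // Pops the most recent request and discards all stale ones.
    public bool TryTakeLatest(out TRequest request, out int token)
    {
        lock (gate)
        {
            if (requests.Count == 0)
            {
                request = default(TRequest);
                token = version;
                return false;
            }
            request = requests.Pop();
            requests.Clear(); // older requests are now stale
            token = version;
            return true;
        }
    }

    // Workers call this periodically; false means abandon the current render.
    public bool IsCurrent(int token)
    {
        lock (gate) { return token == version; }
    }
}
```

The worker loop then becomes: take the latest request, render it while periodically checking IsCurrent, and loop back for anything new; stale work is simply thrown away rather than ever reaching the screen.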

As always the code is available for you to have a play with. Open it up, pick it to pieces, use what you like. Obviously I take no responsibility for the code if you do choose to use it, as this is intended to be a learning exercise. Having said that, I do try to produce good code for my demos, so if you see something that is not good enough, let me know.

This has been a fun series and I hope you liked it and learnt as much as I did from it.

Back to series Table Of Contents

Working version of the code can be found here

Tuesday, May 26, 2009

Responsive WPF User Interfaces Part 6

Unresponsive Controls

Sorry about the unreasonable delay between this post and the last in this series. Interestingly, readers following the series have already jumped ahead and found that not all is fine even if you follow the guidance provided. 'Arne' comments on the last post that he is having problems even when he follows the guidance, especially when loading a lot of images. Some of the team I work with are currently working with graphing/charting products that are great with tens and hundreds of rows of data but create an unresponsive UI when pushed to thousands of rows (e.g. 5+ years of daily stock data).

So if we are following the rules, what can we do? Let us look at a concrete problem first, and for consistency let's stay with the PhotoAlbum project. For a quick refresher, we have a "ViewModel" with a property Images of type ObservableCollection<Uri>. The XAML snippet that displays the images looks like this:

<ListBox ItemsSource="{Binding Images}"
         ScrollViewer.HorizontalScrollBarVisibility="Disabled">
  <ListBox.ItemTemplate>
    <DataTemplate>
      <Border Background="AliceBlue"
              CornerRadius="2"
              Width="150"
              Height="150">
        <Image Source="{Binding}"
               MaxHeight="148"
               MaxWidth="148"
               Stretch="Uniform"
               StretchDirection="DownOnly"
               HorizontalAlignment="Center"
               VerticalAlignment="Center"
               ToolTip="{Binding}" />
      </Border>
    </DataTemplate>
  </ListBox.ItemTemplate>
  <ListBox.ItemsPanel>
    <ItemsPanelTemplate>
      <WrapPanel IsItemsHost="True" />
    </ItemsPanelTemplate>
  </ListBox.ItemsPanel>
</ListBox>

Now in theory there is nothing wrong with this XAML. From a declarative programming point of view, I think it is well written (Ed: Cause I wrote it!) as it describes what we want (items in the ListBox wrapped horizontally, displayed as images within a border).

The problem with this declarative code is what most people fear with declarative code: what is actually happening under the hood? Before we trek too far: if I run the code (see example 6_UnresponsiveImages in the code samples) I can select folders with a small number of small images just fine. The application runs ok. The current code becomes a problem when I point it at folders with large images. This is compounded when there are lots of those large images, or when I am running on a slower/older computer.

We will quickly go off track to consider the likelihood of the large-image, slow-computer problem. Digital cameras are getting cheaper and the quality is going up: mid-range cameras are 8-12 megapixels and can have in excess of 8GB of memory. Phones are now coming out with 5-8 megapixel cameras. If you come back and read this 12 months from now, I guess you will laugh at the small sizes I refer to. Next consider the average user's computer. Rightly or wrongly, the release of Vista saw very few new features that users felt they needed; many people still run Windows XP, and do so on old hardware. Pentium 4 chips are still pervasive in many homes and offices. Loading a gig of 10 megapixel images in our current app does prove to be a performance problem.

We actually have two problems here:

  1. loading the image at its full resolution
  2. loading the image on the UI thread
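
As an aside, problem 1 is usually mitigated at decode time rather than in the visual tree: you can ask the decoder for a thumbnail-sized bitmap up front. A minimal sketch (the file path is hypothetical; DecodePixelWidth and CacheOption are standard BitmapImage members):

```csharp
// Decode a scaled-down bitmap instead of the full-resolution image. Setting
// DecodePixelWidth before EndInit means the decoder never materialises the
// full 10-megapixel bitmap, saving both CPU time and memory.
BitmapImage thumbnail = new BitmapImage();
thumbnail.BeginInit();
thumbnail.UriSource = new Uri(@"C:\Photos\IMG_0001.jpg"); // hypothetical path
thumbnail.DecodePixelWidth = 148; // matches the MaxWidth used in the XAML above
thumbnail.CacheOption = BitmapCacheOption.OnLoad; // read the file eagerly, then release it
thumbnail.EndInit();
```

The XAML above does none of this; it hands the full-size image to the Stretch/MaxWidth logic to scale.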

We will only discuss issue 2 because technically this series is about responsive UIs, not performance (a similar but not identical problem). So, to get to the bottom of why our UI is laggy even though we have

  • loaded our Images on a different thread,
  • updated the UI using the dispatcher
  • and written nice XAML

we should understand a little bit about the Image control. The Image control has a Source property. Most often this property is given a string or Uri; however, some may have noticed that it is actually of type ImageSource. Through the magic of TypeConverters, our strings and URIs are converted to an appropriate subclass of ImageSource, most likely BitmapFrame. From a quick browse over the code around ImageSource, BitmapSource, BitmapFrame and BitmapDecoder, I can only deduce that the only work executed off the UI thread is the downloading of the image where the source is a URI with the http or https protocol. One may then argue, "Well, if it came from the file system, wouldn't that be so fast that it could be done on the UI thread?" One may argue that, but that is not the whole story. Once the file has been read from disk it still must be decoded and then resized. While this is generally fast and would normally take less than a second, remember the magic number I made up earlier in the series: 250ms-330ms (1/4 to 1/3 of a second) is about the longest the UI thread can hang before it is very clear to the user. Now if I load just 8 images that each take 330ms-500ms to load from disk, decode and resize, this creates an awful experience for the end user as the progress bar stutters, images pop in and the mouse is jerky.
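
To make that TypeConverter step concrete, here is roughly what happens when the XAML sets Source to a string (a sketch using ImageSourceConverter; the real pipeline has more cases):

```csharp
// What Source="photo.jpg" boils down to: the converter parses the string and
// hands back an ImageSource - for bitmap files, typically a BitmapFrame.
// Note that nothing here moves the read/decode work off the calling (UI) thread.
var converter = new ImageSourceConverter();
ImageSource source = (ImageSource)converter.ConvertFromString("photo.jpg");
myImage.Source = source; // 'myImage' stands in for the Image in the template
```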

To reproduce the problems discussed in this post, grab some hi-res images and load up the example code that can be found here. Try out the 6th example and point the source folder of the pop-up window to your folder of hi-res images.

Next post we will look at the issues we face in solving the problem of images that kill the responsiveness of our UI. We will discuss fun things like decoders, freezables and maybe the parallel extensions that will become part of .NET 4.0.

Previous - Responsive WPF User Interfaces Part 5 - Testing multi-threaded WPF code

Next - Responsive WPF User Interfaces Part 7 - Responsive Controls

Back to series Table Of Contents

Working version of the code can be found here

Monday, February 16, 2009

Responsive WPF User Interfaces Part 5

Testing multi-threaded WPF code

Stop press: Don't use this method. Read the article, but follow the comment at the bottom of the post (i.e. use Rx instead).

In the last post in this series we took a rather long way to get to the multi-threaded result we were looking for. This was because we decided to separate our code into cohesive parts that allow for more maintainable code. The popular route to maintainable code at the moment is unit testing; an even better approach is Test Driven Development (TDD), a style of design where tests outlining our goals are written first and then code is written to satisfy those tests. Having identified the technical challenges of keeping a WPF application responsive, namely the concepts related to the Dispatcher and parallel programming, we can now step back again to see the bigger picture, as we did in part 4, to move forward.

In this post we will drive out our presentation model through tests and discover the issues related to testing WPF code and in particular testing the multi-threaded code. Just to set expectations, there is an assumption here that the reader is familiar with WPF Data Templates and unit testing with a mocking framework.

From my experience with unit testing, it is always good to start off with a set of requirements that can be turned into tests. Continuing with our example of a thumbnail viewer, let's create a list of our requirements:

  • User can choose a folder
  • Can see what folder was selected
  • Will list all images in that directory and sub directories
  • Will indicate when it is processing
  • Cannot set folder while processing
  • The user interface will remain responsive during processing

For brevity, I will cheat as much as I can so that more time can be spent on the technical issues of testing multi-threaded WPF code and less time spouting TDD goodness. My first cheat is the assumption that, as this is WPF code, we will need to implement constructs such as ICommand and INotifyPropertyChanged.
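
The model code later in this post leans on a DelegateCommand helper with OnCanExecuteChanged, CanExecuteEventArgs and ExecutedEventArgs. Those types are not part of WPF itself; here is a minimal sketch of the shape assumed (the event-args types and handler signatures are my assumptions, matched to the handlers shown later):

```csharp
// A hand-rolled ICommand that forwards to delegates. The *EventArgs types
// below are assumptions, shaped to match the CanSetSourcePath and
// ExecuteSetSourcePath handler signatures used further down.
public class CanExecuteEventArgs : EventArgs
{
  public bool CanExecute { get; set; }
}

public class ExecutedEventArgs : EventArgs { }

public class DelegateCommand : ICommand
{
  private readonly EventHandler<ExecutedEventArgs> execute;
  private readonly EventHandler<CanExecuteEventArgs> canExecute;

  public DelegateCommand(EventHandler<ExecutedEventArgs> execute,
                         EventHandler<CanExecuteEventArgs> canExecute)
  {
    this.execute = execute;
    this.canExecute = canExecute;
  }

  public event EventHandler CanExecuteChanged;

  // Lets the owning model tell WPF to re-query CanExecute (e.g. when IsLoading flips).
  public void OnCanExecuteChanged()
  {
    EventHandler handler = CanExecuteChanged;
    if (handler != null) handler(this, EventArgs.Empty);
  }

  public bool CanExecute(object parameter)
  {
    if (canExecute == null) return true;
    var args = new CanExecuteEventArgs { CanExecute = true };
    canExecute(this, args);
    return args.CanExecute;
  }

  public void Execute(object parameter)
  {
    execute(this, new ExecutedEventArgs());
  }
}
```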

From these requirements I eventually came to the following test cases:

[TestClass]
public class PhotoAlbumModelFixture
{
    [TestMethod]
    public void SetSourceCommand_sets_SourcePath(){}
    
    [TestMethod]
    public void SetSourceCommand_calls_FileService_with_value_from_FolderSelector(){}

    [TestMethod] 
    public void SetSourceCommand_calls_FileService_with_an_ImageOnly_filter() {}

    [TestMethod]
    public void SetSourceCommand_populates_images_from_FileService(){}

    [TestMethod]
    public void SetSourceCommand_recursively_calls_FileService(){}

    [TestMethod]
    public void SetSourceCommand_does_not_call_FileService_with_empty_value(){}

    [TestMethod]
    public void SetSourceCommand_sets_IsLoading_to_True(){}

    [TestMethod]
    public void SetSourceCommand_is_disabled_when_IsLoading(){}

    [TestMethod]
    public void FileService_is_called_Async(){}
}

When writing some of the tests, I found that standard unit test code was fine. Once my implementation had to make multi-threaded calls, however, testing became problematic.

For example, my first attempt to write the "SetSourceCommand_calls_FileService_with_value_from_FolderSelector" test looked like this:

[TestMethod]
public void SetSourceCommand_calls_FileService_with_value_from_FolderSelector()
{
    Rhino.Mocks.MockRepository mockery = new Rhino.Mocks.MockRepository();
    ArtemisWest.Demo.ResponsiveUI._4_MultiThreaded.IFileService fileService = mockery.DynamicMock<ArtemisWest.Demo.ResponsiveUI._4_MultiThreaded.IFileService>();
    ArtemisWest.Demo.ResponsiveUI._5_MultiThreadedTest.IFolderSelector folderSelector = mockery.DynamicMock<ArtemisWest.Demo.ResponsiveUI._5_MultiThreadedTest.IFolderSelector>();

    PhotoAlbumModel model = new PhotoAlbumModel(Dispatcher.CurrentDispatcher, fileService, folderSelector);

    using (mockery.Record())
    {
        Expect.Call(folderSelector.SelectFolder()).Return(FolderPath);
        Expect.Call(fileService.ListFiles(FolderPath, null)).Return(EmptyList).IgnoreArguments().Constraints(
          Rhino.Mocks.Constraints.Is.Equal(FolderPath),
          Rhino.Mocks.Constraints.Is.Anything());
        Expect.Call(fileService.ListDirectories(FolderPath)).Return(EmptyList);
    }

    using (mockery.Playback())
    {
        model.SetSourcePathCommand.Execute(null);
    }
}

Following the TDD rule of writing just enough code to satisfy the test, my implementation of the Execute command handler looks like this:

void ExecuteSetSourcePath(object sender, ExecutedEventArgs e)
{
    SourcePath = this.folderSelector.SelectFolder();
    IsLoading = true;
    LoadPath(SourcePath);
    IsLoading = false;
}

void LoadPath(string path)
{
    IEnumerable<string> files = fileService.ListFiles(path, IsImage);
    foreach (string item in files)
    {
        Images.Add(item);
    }

    IEnumerable<string> directories = fileService.ListDirectories(path);
    foreach (string item in directories)
    {
        LoadPath(item);
    }
}

Astute readers will notice the complete lack of "responsive UI code". Well, I have yet to reach the test that specifies that the call to the FileService should be asynchronous. So after finishing all of my other tests and the code to support them, I write some test code to enforce the responsiveness requirement:

[TestMethod]
public void FileService_is_called_Async()
{
    SlowFileServiceFake fileService = new SlowFileServiceFake();
    Rhino.Mocks.MockRepository mockery = new Rhino.Mocks.MockRepository();
    ArtemisWest.Demo.ResponsiveUI._5_MultiThreadedTest.IFolderSelector folderSelector = mockery.DynamicMock<ArtemisWest.Demo.ResponsiveUI._5_MultiThreadedTest.IFolderSelector>();

    PhotoAlbumModel model = new PhotoAlbumModel(Dispatcher.CurrentDispatcher, fileService, folderSelector);

    using (mockery.Record())
    {
        Expect.Call(folderSelector.SelectFolder()).Return(FolderPath);
    }

    using (mockery.Playback())
    {
        model.SetSourcePathCommand.Execute(null);
        //Due to the sleeps in the stub this should only have 1 item at this stage.
        Assert.IsTrue(model.Images.Count < fileService.ExpectedFiles.Count);
    }
    Thread.Sleep(300);
    CollectionAssert.AreEquivalent(fileService.ExpectedFiles, model.Images);
}

private class SlowFileServiceFake : ArtemisWest.Demo.ResponsiveUI._4_MultiThreaded.IFileService
{
    public readonly List<string> ExpectedFiles = new List<string>() { "value0", "value1", "value2", "value3", "value4", "value5" };

    public IEnumerable<string> ListFiles(string path, Predicate<string> filter)
    {
        foreach (var item in ExpectedFiles)
        {
            yield return item;
            Thread.Sleep(30);
        }
    }

    public IEnumerable<string> ListDirectories(string path)
    {
        return EmptyList;
    }
}

Here I have used a fake to provide canned results with a hard-coded delay. In theory this should drip-feed values to the presentation model.

I now go back and update my model to get the tests to pass. Following my own advice, I change the code to look like the following:

void ExecuteSetSourcePath(object sender, ExecutedEventArgs e)
{
    string folder = this.folderSelector.SelectFolder();
    if (!string.IsNullOrEmpty(folder))
    {
        SourcePath = folder;
        StartLoadingFiles(SourcePath);
    }
}

void StartLoadingFiles(string path)
{
    BackgroundWorker fileWorker = new BackgroundWorker();
    fileWorker.DoWork += (sender, e) =>
    {
        LoadPath(path);
    };
    fileWorker.RunWorkerCompleted += (sender, e) =>
    {
        IsLoading = false;
    };
    fileWorker.RunWorkerAsync();
}

void LoadPath(string path)
{
    IEnumerable<string> files = fileService.ListFiles(path, IsImage);
    foreach (string item in files)
    {
        dispatcher.Invoke(new Action(() => { Images.Add(item); }));
    }

    IEnumerable<string> directories = fileService.ListDirectories(path);
    foreach (string item in directories)
    {
        LoadPath(item);
    }
}

To my surprise this code does not satisfy the test! The reason is that the Dispatcher is not running an execution loop. WPF would normally start one when the application starts, but here we are just in test code. The solution lies with the DispatcherFrame, which represents an execution loop in a Dispatcher. We can kick-start this loop by simply instantiating a new DispatcherFrame and passing it to Dispatcher.PushFrame. See Dan Crevier's blog for a really simple way to unit test in this fashion. If we implement Dan's method we end up with the following test code:

[TestMethod]
public void FileService_is_called_Async()
{
    SlowFileServiceFake fileService = new SlowFileServiceFake();
    Rhino.Mocks.MockRepository mockery = new Rhino.Mocks.MockRepository();
    ArtemisWest.Demo.ResponsiveUI._5_MultiThreadedTest.IFolderSelector folderSelector = mockery.DynamicMock<ArtemisWest.Demo.ResponsiveUI._5_MultiThreadedTest.IFolderSelector>();

    PhotoAlbumModel model = new PhotoAlbumModel(Dispatcher.CurrentDispatcher, fileService, folderSelector);

    using (mockery.Record())
    {
        Expect.Call(folderSelector.SelectFolder()).Return(FolderPath);
    }

    DispatcherFrame frame = new DispatcherFrame();
    model.PropertyChanged += (sender, e) => 
    {
        if (e.PropertyName == "IsLoading" && !model.IsLoading)
        {
            frame.Continue = false;
        }
    };

    using (mockery.Playback())
    {
        model.SetSourcePathCommand.Execute(null);
        //Due to the sleeps in the stub this should only have 1 item at this stage.
        Assert.IsTrue(model.Images.Count < fileService.ExpectedFiles.Count);
        Dispatcher.PushFrame(frame);
    }
    Thread.Sleep(300);
    CollectionAssert.AreEquivalent(fileService.ExpectedFiles, model.Images);
}
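
Stripped of the mocking and assertions, the DispatcherFrame pattern in the test above is just this (a minimal sketch):

```csharp
// PushFrame starts a nested message loop on the current thread and only
// returns once frame.Continue is false. Anything Invoked or BeginInvoked
// onto this dispatcher gets processed while the frame is being pumped.
DispatcherFrame frame = new DispatcherFrame();
Dispatcher.CurrentDispatcher.BeginInvoke(
    DispatcherPriority.Background,
    new Action(() => frame.Continue = false)); // something eventually ends the loop
Dispatcher.PushFrame(frame); // blocks here, pumping queued work, until Continue == false
```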

We could call it a day here; however, I am not happy with this style of coding. My first problem is the explicit sleep, which makes my test slow: the fastest this test will ever run is 300ms. I could replace it with some sort of polling loop, but then my test code looks less like test code and more like low-level code to dance around dispatchers. That raises my other issue: the clutter added by the DispatcherFrame plumbing.

My solution here was to create a construct that allowed me to code my intentions in a way that I thought to be clearer. I wanted to be able to

  • test code that would involve multi-threaded code and the dispatcher
  • specify what condition defined its completion
  • specify a timeout
  • treat the code block as a blocking call

The result I came up with is the following:

[TestMethod]
public void FileService_is_called_Async()
{
    Rhino.Mocks.MockRepository mockery = new Rhino.Mocks.MockRepository();
    SlowFileServiceFake fileService = new SlowFileServiceFake();
    ArtemisWest.Demo.ResponsiveUI._5_MultiThreadedTest.IFolderSelector folderSelector = mockery.DynamicMock<ArtemisWest.Demo.ResponsiveUI._5_MultiThreadedTest.IFolderSelector>();

    PhotoAlbumModel model = new PhotoAlbumModel(Dispatcher.CurrentDispatcher, fileService, folderSelector);
    using (mockery.Record())
    {
        Expect.Call(folderSelector.SelectFolder()).Return(FolderPath);
    }

    using (DispatcherTester dispatcherTester = new DispatcherTester(Dispatcher.CurrentDispatcher))
    {
        model.PropertyChanged += (sender, e) =>
        {
            if (e.PropertyName == "IsLoading" && !model.IsLoading)
                dispatcherTester.Complete();
        };
        dispatcherTester.Execute(() =>
          {
              model.SetSourcePathCommand.Execute(null);
              Assert.IsTrue(model.Images.Count < fileService.ExpectedFiles.Count);
          },
          new TimeSpan(0, 0, 2));
    }
    CollectionAssert.AreEquivalent(fileService.ExpectedFiles, model.Images);
}

This code describes a block of code that may require the use of the dispatcher, the condition under which the asynchronous code is considered complete, and a TimeSpan for a timeout. The test code is still not perfect, but I think it is a step in the right direction.

The code for the DispatcherTester looks something like this:

internal sealed class DispatcherTester : IDisposable
{
  private readonly Dispatcher dispatcher;
  private readonly DispatcherFrame dispatcherFrame = new DispatcherFrame();

  public DispatcherTester(Dispatcher dispatcher)
  {
    this.dispatcher = dispatcher;
  }

  public Dispatcher Dispatcher
  {
    get { return dispatcher; }
  }

  public void Execute(Action action, TimeSpan timeout)
  {
    bool timedOut = false;

    // Dispatcher.PushFrame blocks until dispatcherFrame.Continue is false, so
    // the timeout must be enforced by work queued onto the dispatcher itself -
    // a loop placed after PushFrame would never run while the frame is pumping.
    // If Complete() is never called, this timer ends the frame for us.
    DispatcherTimer timeoutTimer = new DispatcherTimer(
      timeout,
      DispatcherPriority.Send,
      (sender, e) =>
      {
        timedOut = true;
        dispatcherFrame.Continue = false;
      },
      dispatcher);
    timeoutTimer.Start();

    action.Invoke();
    Dispatcher.PushFrame(dispatcherFrame); // Blocks until Complete() or the timer fires.

    timeoutTimer.Stop();
    if (timedOut)
    {
      throw new TimeoutException();
    }
  }

  public void Complete()
  {
    dispatcherFrame.Continue = false;
  }

  #region IDisposable Members

  public void Dispose()
  {
    dispatcherFrame.Continue = false;
  }

  #endregion
}

If any other developers have hit problems trying to write unit tests for their WPF code, then I hope this snippet helps. For those who are interested, the end result for the model looks like this:

public class PhotoAlbumModel : INotifyPropertyChanged
{
  #region Fields
  private readonly Dispatcher dispatcher;
  private readonly _4_MultiThreaded.IFileService fileService;
  private readonly IFolderSelector folderSelector;
  private readonly DelegateCommand setSourcePathCommand;
  private readonly ObservableCollection<string> images = new ObservableCollection<string>();
  private string sourcePath;
  private bool isLoading = false;
  #endregion

  public PhotoAlbumModel(Dispatcher dispatcher, _4_MultiThreaded.IFileService fileService, IFolderSelector folderSelector)
  {
    this.dispatcher = dispatcher;
    this.fileService = fileService;
    this.folderSelector = folderSelector;
    setSourcePathCommand = new DelegateCommand(ExecuteSetSourcePath, CanSetSourcePath);
    this.PropertyChanged += (sender, e) => { if (e.PropertyName == "IsLoading") setSourcePathCommand.OnCanExecuteChanged(); };
  }

  public ObservableCollection<string> Images
  {
    get { return images; }
  }

  public bool IsLoading
  {
    get { return isLoading; }
    private set
    {
      isLoading = value;
      OnPropertyChanged("IsLoading");
    }
  }

  public string SourcePath
  {
    get { return sourcePath; }
    private set
    {
      sourcePath = value;
      OnPropertyChanged("SourcePath");
    }
  }

  public ICommand SetSourcePathCommand { get { return setSourcePathCommand; } }

  #region Command handlers
  void CanSetSourcePath(object sender, CanExecuteEventArgs e)
  {
    e.CanExecute = !IsLoading;
  }

  void ExecuteSetSourcePath(object sender, ExecutedEventArgs e)
  {
    string folder = this.folderSelector.SelectFolder();
    if (!string.IsNullOrEmpty(folder))
    {
      SourcePath = folder;
      StartLoadingFiles(SourcePath);
    }
  }
  #endregion

  #region Private method
  void StartLoadingFiles(string path)
  {
    IsLoading = true;
    BackgroundWorker fileWorker = new BackgroundWorker();
    fileWorker.DoWork += (sender, e) =>
    {
      LoadPath(path);
    };
    fileWorker.RunWorkerCompleted += (sender, e) =>
    {
      IsLoading = false;
    };
    fileWorker.RunWorkerAsync();
  }

  void LoadPath(string path)
  {
    IEnumerable<string> files = fileService.ListFiles(path, IsImage);
    foreach (string item in files)
    {
      dispatcher.Invoke(new Action(() => { Images.Add(item); }));
    }

    IEnumerable<string> directories = fileService.ListDirectories(path);
    foreach (string item in directories)
    {
      LoadPath(item);
    }
  }

  bool IsImage(string file)
  {
    string extension = System.IO.Path.GetExtension(file).ToLowerInvariant();
    switch (extension)
    {
      case ".bmp":
      case ".gif":
      case ".jpg":
      case ".png":
        return true;
      default:
        return false;
    }
  }
  #endregion

  #region INotifyPropertyChanged Members
  public event PropertyChangedEventHandler PropertyChanged;

  protected void OnPropertyChanged(string propertyName)
  {
    PropertyChangedEventHandler handler = PropertyChanged;
    if (handler != null)
    {
      handler(this, new PropertyChangedEventArgs(propertyName));
    }
  }
  #endregion
}

Notice that the folder dialog code from the previous post has now been pushed out behind an interface, which cleans up the code and also allows us to test this model.
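
For reference, the folder-selection abstraction is tiny (a sketch; the interface in the sample download may differ slightly):

```csharp
// Hiding the folder-browse dialog behind an interface keeps the model free of
// UI dependencies, so tests can substitute a mock that returns a canned path.
public interface IFolderSelector
{
  // Returns the selected folder path, or null/empty if the user cancels.
  string SelectFolder();
}
```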

The data template is almost identical to the previous post's window XAML:

<DataTemplate DataType="{x:Type local:PhotoAlbumModel}">
  <DataTemplate.Resources>
    <BooleanToVisibilityConverter x:Key="boolToVisConverter"/>
  </DataTemplate.Resources>
  <DockPanel>
    <DockPanel DockPanel.Dock="Top">
      <ProgressBar IsIndeterminate="True" Height="18" 
                   DockPanel.Dock="Top" 
                   Visibility="{Binding Path=IsLoading, 
                                Converter={StaticResource boolToVisConverter}}" />
      <Button x:Name="SetSourcePath" Command="{Binding Path=SetSourcePathCommand}" 
              DockPanel.Dock="Right">Set Source</Button>
      <Border BorderThickness="1" BorderBrush="LightBlue" Margin="3">
        <Grid>
          <TextBlock Text="{Binding SourcePath}">
            <TextBlock.Style>
              <Style TargetType="TextBlock">
                <Setter Property="Visibility" Value="Visible"/>
                <Style.Triggers>
                  <DataTrigger Binding="{Binding SourcePath}" Value="">
                    <Setter Property="Visibility" Value="Collapsed"/>
                    </DataTrigger>
                  <DataTrigger Binding="{Binding SourcePath}" Value="{x:Null}">
                    <Setter Property="Visibility" Value="Collapsed"/>
                  </DataTrigger>
                </Style.Triggers>
              </Style>
            </TextBlock.Style>
          </TextBlock>
          <TextBlock Text="Source not set" Foreground="LightGray" FontStyle="Italic">
            <TextBlock.Style>
              <Style TargetType="TextBlock">
                <Setter Property="Visibility" Value="Collapsed"/>
                <Style.Triggers>
                  <DataTrigger Binding="{Binding SourcePath}" Value="">
                    <Setter Property="Visibility" Value="Visible"/>
                    </DataTrigger>
                  <DataTrigger Binding="{Binding SourcePath}" Value="{x:Null}">
                    <Setter Property="Visibility" Value="Visible"/>
                  </DataTrigger>
                </Style.Triggers>
              </Style>
            </TextBlock.Style>
          </TextBlock>
        </Grid>
      </Border>
    </DockPanel>
    <ListView ItemsSource="{Binding Images}" 
              ScrollViewer.VerticalScrollBarVisibility="Visible" />
  </DockPanel>
</DataTemplate>

I hope that the examples here

  • give some insight into how you too can unit test your code,
  • show why a model is a better option for your WPF applications, as you can test it, and
  • give you the courage to write multi-threaded code in your WPF models in the knowledge that you can test it.

Next we discover that sometimes controls can cause an unresponsive UI in Part 6.

Previous - Responsive WPF User Interfaces Part 4 - Multi-threaded Programming in WPF.

Back to series Table Of Contents

Working version of the code can be found here