Thursday, October 24, 2013

DDD North 2013 - Testing ASP.NET MVC from the outside in

The final session was another testing related talk. It had plenty of interesting insights and talking points to take away and discuss further back in the office.

Testing crap in web applications like ASP.NET MVC
Rob Ashton. What a presenter! I attended his session on JavaScript last year and this session finished the day with a BOOM! 

His talk nicely followed on from Ian's earlier talk. He focused on testing an MVC application from the outside in. What does that mean? Simply put: UI test pretty much everything. Only unit test complex domain logic or validation rules; everything else should be tested as a feature through the UI. The UI? Won't that be slow and lack granularity? What about feedback when my tests fail so I know where they went wrong? These are all questions which Rob answered in a very convincing manner.

Slow - "If your application is so slow that it is not practical to test via the UI then there is a problem with your application. You have a defect that needs to be fixed right there - Make your application faster". Rob cut to the chase when answering this, but is this attainable in all scenarios? Yes, we should be writing web applications that are as fast as possible for a good user experience, but does that extend to being able to execute hundreds of UI tests against them in seconds..?

Persistence - He discussed using in-memory data stores or in-memory representations of relational databases (NHibernate and RavenDB support this; I am not sure about EF) when running tests locally and in CI. These implementations can be swapped out for a real relational database when moving to QA. He also advocated using persistence technologies like RavenDB and Redis (NoSQL) wherever possible, as these are themselves fast enough to allow UI testing with rapid data retrieval.
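
As a rough sketch of the first idea - an in-memory store for tests that can be swapped for the real thing in QA - here is what it looks like with RavenDB's embedded client (based on its 2.x API; treat the details as an assumption rather than Rob's actual code):

```csharp
using Raven.Client;
using Raven.Client.Embedded;

public static class TestDocumentStore
{
    // Test setup: an embedded, in-memory RavenDB store, torn down with the test.
    // Production code depends only on IDocumentStore, so it can be backed by a
    // real RavenDB server there without any other changes.
    public static IDocumentStore Create()
    {
        var store = new EmbeddableDocumentStore
        {
            RunInMemory = true // nothing touches disk, so setup/teardown stays fast
        };
        return store.Initialize();
    }
}
```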

Granularity - Log everything. When your tests fail, let your logs tell you what went wrong. He asked: what is the one thing you wish you had more of in a live system? Logging! So put lots of it in there. Yes, a single test touches lots of code so it is not very granular, but let your logging report on how it progressed (and failed). Do not rely on a large suite of very granular but highly coupled tests to provide you with detailed answers to where in your system you have a problem. Use logging. This makes a lot of sense: when you have a problem in the production environment, you will be looking at the logs, not the previous unit test runs...

He demonstrated this testing process in action - his UI tests were not remotely slow even though he was setting up and tearing down both the web server and the data store between each test!

A number of tools were introduced which I had not heard of before to help with the testing process:
PhantomJS - A headless WebKit browser in which to execute your tests.
Coypu - Helps with the browser automation process. Sounds very promising as it aims to solve many problems including timing issues.
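
To give a flavour of what a Coypu-driven test reads like, here is a sketch based on Coypu's fluent API (the page, field names and port are invented for illustration):

```csharp
using System;
using Coypu;

public class LoginFeatureTest
{
    public void User_sees_a_welcome_message_after_signing_in()
    {
        var config = new SessionConfiguration
        {
            AppHost = "localhost",
            Port = 8080,                      // wherever the test web server is listening
            Timeout = TimeSpan.FromSeconds(2) // Coypu retries until this elapses, easing timing issues
        };

        using (var browser = new BrowserSession(config))
        {
            browser.Visit("/login");
            browser.FillIn("Username").With("bob");
            browser.ClickButton("Sign in");

            if (!browser.HasContent("Welcome, bob"))
                throw new Exception("Expected a welcome message after signing in");
        }
    }
}
```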

Rob again focused on testing the output of a feature as the client would see it, as opposed to the granular unit testing approach we have become accustomed to - testing each class, public method, path of execution, etc. Let the features drive your tests.

Combined with Ian's earlier session, this has given me a lot to think about.

Monday, October 21, 2013

Creating an IoC Container - The Container

This is the first part of a series about creating your own IoC container. Let me start by saying that there's no good reason for you to ever have to do this (unless you're stuck on .NET 1.1 and don't have generics), because there are many established open source IoC containers out there that will cater to everything the average developer will need and more. This series is targeted at the inquisitive few who aren't satisfied with it just working, but need to know how.

I'm not going to go into what an IoC container is or what Dependency Injection is all about as there are many good resources that cover that already. So let's jump straight into creating the container.


There are two major functions that an IoC container carries out, and the first one is registration. This is where the container is populated with details about the types that can be requested from it.

public void Register<T, U>()
  where U : class, new()
{
  var abstractType = typeof(T);
  var concreteType = typeof(U);

  var registration = new Registration()
  {
    AbstractType = abstractType,
    ConcreteType = concreteType
  };

  _registrations.Add(registration);
}

This method takes two generic types which define the abstract-to-concrete mapping for a registration. Currently, the container is unable to create objects that have dependencies, so this is prevented by using the new() generic type constraint on the concrete type to ensure that it has a parameterless constructor. After creating the Registration, it is added to an internal collection.


The second function of an IoC container is to instantiate and serve types that are requested from it.

public T Resolve<T>()
{
  var requestedType = typeof(T);

  Registration registration = GetRegistration(requestedType);
  object instance = null;

  if (registration != null)
  {
    instance = Activator.CreateInstance(registration.ConcreteType);
  }

  return (T)instance;
}

private Registration GetRegistration(Type type)
{
  Registration registration = null;

  if (type.IsInterface)
  {
    registration = _registrations.Where(reg => reg.AbstractType == type).FirstOrDefault();
  }
  else
  {
    registration = _registrations.Where(reg => reg.ConcreteType == type).FirstOrDefault();
  }

  return registration;
}
If a registration for the requested type is found, an instance is created using its default constructor via the Activator (which allows you to create objects dynamically); if no registration is found in the collection of registered types, null is returned.
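
Putting the two halves together, usage looks something like this (ILogger and ConsoleLogger are invented example types, and I'm assuming the container class is simply called Container):

```csharp
using System;

public interface ILogger { void Log(string message); }

public class ConsoleLogger : ILogger
{
    public void Log(string message) { Console.WriteLine(message); }
}

public class Program
{
    public static void Main()
    {
        var container = new Container();

        // Map the abstraction to a concrete, parameterless-constructor type.
        container.Register<ILogger, ConsoleLogger>();

        // The container creates a new ConsoleLogger via the Activator.
        ILogger logger = container.Resolve<ILogger>();
        logger.Log("Resolved from the container");
    }
}
```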

So there you have it, that's a very simple IoC container. It's not very useful in its current incarnation, but in the next part of this series, I will extend it to create instances of objects that take dependencies via their constructor. 

A complete project containing all the source code for the container and a set of unit tests can be found here.

Saturday, October 19, 2013

DDD North 2013 - Beyond the Automated build + Rx

Continuing on from my previous posts regarding my time at DDD North 2013, the third session I attended was Beyond the Automated build. I think this was the only session I walked out of feeling I would have liked to see a little more. It introduced a lot of products, but we never got to see any of them in action to any degree.

The fourth session, on Reactive Extensions, was an enjoyable introduction to the technology (and marble diagrams!).

Automated build is not the end of the story
This session proved to be a demonstration of a number of software packages (Microsoft and non-Microsoft) which are available to help extend the build process beyond the automated build. The focus was mainly on provisioning VMs and auto deployments to these VMs.

In the Microsoft stack, the main products mentioned were:
System Center - VM Manager 2012
System Center - Operations Manager
Visual Studio and Team Foundation Server VM Factory

In the non Microsoft stack, the main products mentioned were:
DevOps Workbench Express Edition (ALM Rangers)
Build Master

Richard Fennell provided plenty of advice regarding what we could pack into a build process - ensure your automated build outputs a deployable package; build once, deploy to many environments; use config transforms to handle switching between environments. It was just a shame that we were introduced to numerous products without actually seeing any of them in use. It would have been nicer to be introduced to fewer products but see more detailed examples of how they could be utilized.

Tyrannosaurus Rx: slaying the event-driven sauropod with Reactive Extensions for .NET
This was an introductory session on Reactive Extensions. Exactly what I was after! John Stovin took us through a series of examples to introduce us to the benefits of using Rx. The biggest eye opener for me was that Rx is all about inverting IEnumerable. Instead of pulling data out, you have data pushed to you via IObservable, which comes with various options for threading, filtering and much more.

John's examples demonstrated using Rx to handle the standard .NET event pattern and showed, using very readable code (fluent syntax and method chaining), how you can filter events with ease using Rx.
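
To illustrate the pattern (a sketch rather than John's actual code - the sensor type is invented, and this assumes the Rx-Main NuGet package):

```csharp
using System;
using System.Reactive.Linq; // from the Rx-Main NuGet package

public class ReadingEventArgs : EventArgs
{
    public double Temperature { get; set; }
}

public class TemperatureSensor
{
    public event EventHandler<ReadingEventArgs> ReadingTaken;

    public void Publish(double temperature)
    {
        var handler = ReadingTaken;
        if (handler != null) handler(this, new ReadingEventArgs { Temperature = temperature });
    }
}

public class Program
{
    public static void Main()
    {
        var sensor = new TemperatureSensor();

        // Wrap the standard .NET event as an IObservable and filter it fluently.
        Observable.FromEventPattern<ReadingEventArgs>(
                h => sensor.ReadingTaken += h,
                h => sensor.ReadingTaken -= h)
            .Select(e => e.EventArgs.Temperature)
            .Where(temp => temp > 30.0)
            .Subscribe(temp => Console.WriteLine("Too hot: " + temp));

        sensor.Publish(25.0); // filtered out
        sensor.Publish(35.0); // pushed through to the subscriber
    }
}
```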

As primarily a web developer, I am not 100% sure where I can use Rx in a server-side scenario yet, but I am sure there are plenty of uses for it on the client side - especially as there is a JavaScript version of Rx available!

John's Rx code examples can be found here.

Tuesday, October 15, 2013

DDD North 2013 - Scaling systems - Architectures that grow

Following on from my previous post, the second session I attended was all about scaling.

Scaling systems - Architectures that grow
Kendall Miller presented a very good session with a clear message: "There are three things which will help you scale and one thing that will not - Just remember ACD/C".

Help you to scale:
A - Async
C - Caching
D - Distribution

Inhibits scaling:
C - Consistency

"Only do what you really need to do now and do everything else later - Determine the critical path for your application". The example Kendall used was Amazon. When you place an order, Amazon's critical path is to capture your order, place it in a queue for processing and send you a confirmation email. Other tasks such as checking stock levels or ensuring the order prints out in the warehouse are ignored at this time as they can all be done later. For the transaction of a user placing an order, only the minimum should be done and feedback given so they are happy that their order has been placed successfully.
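
In code, the critical path for that Amazon-style example might look something like this (every type here is invented to illustrate the shape of the idea, not anything Kendall showed):

```csharp
public class Order { public int Id; public string CustomerEmail; }
public class ProcessOrderMessage { public int OrderId; public ProcessOrderMessage(int id) { OrderId = id; } }
public class OrderConfirmation { public int OrderId; public OrderConfirmation(int id) { OrderId = id; } }

public interface IOrderStore { void Save(Order order); }
public interface IMessageQueue { void Enqueue(object message); }
public interface IEmailSender { void SendConfirmation(string address); }

public class OrderService
{
    private readonly IOrderStore _orders;
    private readonly IMessageQueue _queue;
    private readonly IEmailSender _email;

    public OrderService(IOrderStore orders, IMessageQueue queue, IEmailSender email)
    {
        _orders = orders;
        _queue = queue;
        _email = email;
    }

    // Only the critical path runs now; everything else is deferred to a queue.
    public OrderConfirmation PlaceOrder(Order order)
    {
        _orders.Save(order);                               // 1. capture the order
        _queue.Enqueue(new ProcessOrderMessage(order.Id)); // 2. stock checks, warehouse printing, etc. happen later
        _email.SendConfirmation(order.CustomerEmail);      // 3. tell the customer it worked

        return new OrderConfirmation(order.Id);            // fast feedback to the user
    }
}
```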

"Don't do anything you don't have to". Kendall used the example of displaying a customer's first name on an application's homepage when they log in. The user's name is not going to change very often (if ever), so why go to the database to retrieve it every time the homepage is loaded? Cache it! Even if the user did change their name and it took a short while to update, they probably would not mind or even notice. Caching should be your first option when attempting to improve performance.
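
A minimal sketch of that first-name example using the .NET framework's built-in MemoryCache (the provider type and the 30-minute expiry are my own assumptions):

```csharp
using System;
using System.Runtime.Caching;

public class CustomerNameProvider
{
    private readonly MemoryCache _cache = MemoryCache.Default;

    public string GetFirstName(int customerId)
    {
        string key = "customer:" + customerId + ":firstName";

        // Fast path: serve from the cache and skip the database entirely.
        var cached = _cache.Get(key) as string;
        if (cached != null)
            return cached;

        // Slow path: hit the database, then cache the result.
        // A renamed user sees stale data for at most 30 minutes - usually acceptable.
        string name = LoadFirstNameFromDatabase(customerId);
        _cache.Add(key, name, DateTimeOffset.Now.AddMinutes(30));
        return name;
    }

    private string LoadFirstNameFromDatabase(int customerId)
    {
        // Hypothetical data access - a real implementation would query the database here.
        return "Alice";
    }
}
```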

"Get as many people to do the work as possible". This is focused on adding additional resources to your application (multiple web servers, databases, etc.) so your application can scale out. He highlighted some of the common challenges with this and introduced the idea of partitioning - splitting data or a whole subsystem along a common factor. Using Amazon, Kendall illustrated a high level example of partitioning: users are partitioned based on their geographical location. This allows Amazon to be comprised of multiple sub-systems instead of one massive system.

"The degree to which all parties observe the same state of the system at the same time". Kendall made a valid point that we tend to put stricter consistency requirements on our systems than is required. The example used here was the order number. Say a system is being developed with distribution in mind to allow multiple orders to be processed simultaneously, but the accounts people have specified that order numbers must be sequential; we have a potential problem. Because the order numbers must be sequential, we can now only have one component generating order numbers, creating a bottleneck for the multiple order processors, which are left waiting for order numbers. The question to ask here is: "Do you really need sequential order numbers?"

I walked away from this session very happy as it had fulfilled my expectations of learning some very high level techniques and architectures for allowing systems to scale. 

You can find the slides to Kendall's session here.

Sunday, October 13, 2013

DDD North 2013 - The aftermath + TDD - Where did it all go wrong?

DDD North 2013 has come and gone, but what an event! It was a brilliantly arranged event in a great venue with plenty of very good sessions. This year it was based at Sunderland University, so a two-hour drive for me, but it was more than worth it.

It was a great learning experience and has given me plenty to think about which I will try and summarize in the next few posts.

TDD - Where did it all go wrong? 
What a first session! Ian Cooper did not let us down. His session was a (re-)eye-opener about how we have strayed from Kent Beck's original vision of TDD. Ian's argument was that current teachings of TDD have led us to believe we should be testing everything at a granular level - a class, each of the methods of a class, the unit under test isolated - none of which is 100% true to Kent's original intentions for TDD. Ian argues that a new feature should be the trigger for a new suite of unit tests, and those unit tests should only assert against the output of those features as it would be returned to the calling client.

Ian based this argument on many years of developing in a TDD environment and hitting the point, 3-4 years into a long-term project, where requirements have changed and the unit test code has a negative effect on refactoring the code base as required - "If I make this implementation change, 100 tests will now fail and I will need to fix those as well".

The basic points he was trying to get across were:
1) Writing unit tests should be triggered by a new feature, not by a new class or method.
2) Tests should not have ANY understanding of implementation - they should only be interested in inputs and expected outputs (bye bye to Verify.WasCalled / Verify.WasNotCalled checks on mocked dependencies). If the output matches the expected output based on the inputs, you should be able to determine from that that the correct methods were / were not called.
3) Delete tests which test implementation details once you have finished developing a feature. Delete tests?! I know, a shocking statement, but it makes a lot of sense. Unit tests which test implementation specifics are fine to help drive out your design, but once your design is finished and the feature is complete, that coupling to implementation becomes a handicap when attempting to refactor your implementation at a later date. A simple act of renaming a method, or altering its parameters or return value, could cause countless tests to suddenly fail. Delete these tests and rely on the tests which test the feature as a whole instead.
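
A sketch of what point 2 looks like in practice (NUnit syntax; OrderService, InMemoryOrderStore and the confirmation type are all invented for illustration):

```csharp
using NUnit.Framework;

[TestFixture]
public class PlaceOrderFeatureTests
{
    [Test]
    public void Placing_a_valid_order_returns_a_successful_confirmation()
    {
        // Arrange against the feature boundary, not against individual classes.
        var orders = new OrderService(new InMemoryOrderStore());

        // Act through the same entry point the calling client would use.
        var confirmation = orders.PlaceOrder(customerId: 42, productId: 7, quantity: 2);

        // Assert only on the output returned to the client -
        // no Verify.WasCalled checks on mocked internals.
        Assert.That(confirmation.Succeeded, Is.True);
        Assert.That(confirmation.OrderNumber, Is.Not.Null);
    }
}
```

Because nothing here knows how PlaceOrder is implemented, the internals can be refactored freely without this test failing.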

My summary above does not give Ian's session the justice it deserves. It was highly thought provoking and has made me question the way I think about TDD.

You can find Ian delivering his talk at NDC here.

Friday, October 11, 2013

DDD North 2013 - What sessions to go to?

DDD North 2013 is almost upon us! 

DDD North is a free .NET developer event put on in the north of England. You can find out all about it here

I have been debating this for weeks and, as it stands, my current choice of sessions to attend is:

TDD - Where did it all go wrong? 
This session is being delivered by Ian Cooper. I attended a session of his on Event Driven Service Orientated Architecture at last year's DDD North event and found him to be a very informative and engaging presenter. His session is focused on helping us re-discover Kent Beck's original proposition for unit testing and in the process write fewer but better tests - Surely this can only be a good thing?

Scaling systems - Architectures that grow
I like to try and attend an architecture related session whenever possible to help broaden my view of how you can design a system. I do not expect to come away an expert in the subject (there is only so much that can be delivered and absorbed in an hour...) but hope it will give me a high level idea of what you should be aspiring to.

Automated build is not the end of the story
I have been doing quite a bit of work regarding going beyond the automated build recently with some colleagues at work. There has been a focus on driving towards continuous delivery with automated deployments using Octopus and Selenium driven acceptance tests. We use TeamCity as our build server so I am very interested in seeing the tooling which is available to achieve continuous delivery when working with TFS.

Tyrannosaurus Rx: slaying the event-driven sauropod with Reactive Extensions for .NET
I have heard lots of great things about Reactive Extensions for .NET and really want to learn more about it - Enough said.

Testing crap in web applications like ASP.NET MVC
Yes, it's another testing-focused session, but it promises to go beyond the unit test, "taking a feature-by-feature approach to testing". For me, this was a no-brainer.

The whole agenda can be found here

I will report back on how I got on at the event and how the sessions turned out.