Tuesday 27 January 2015

Precise Assertions

It could be argued that one of the key roles of automated tests is to protect the existing level of code quality, preventing its decline by verifying that changes to the code comply with a set of known constraints. This protection really earns its money when working on a large and complicated codebase. Changes can be made, relatively safe in the knowledge that an unwitting mistake or omission will be flagged during development by a failing test. The reliability of this protection hinges on the quality of the tests and what they verify by assertion.

A problem I’ve recently uncovered is a test whose assertion is not precise enough to provide adequate protection. The example below is contrived, but it makes the point. If we take a presenter that adds items to a list on a view, we might see something like:
foreach (var item in items)
{
   View.AddListItem(item.Name);
}
This functionality was verified (using NSubstitute) by:
fakeView.ReceivedWithAnyArgs().AddListItem(null);
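In context, the whole test might have looked something like the following; the view interface, presenter and item types here are my assumptions, not the original code:
[Test]
public void Load_ItemsAvailable_ShouldAddItemsToList()
{
   // Arrange - assumed names: IItemListView, ItemListPresenter, Item
   var fakeView = Substitute.For<IItemListView>();
   var items = new[] { new Item { Id = 345, Name = "Item 1" } };
   var presenter = new ItemListPresenter(fakeView, items);

   // Act
   presenter.Load();

   // Assert - only checks that *something* was added to the list
   fakeView.ReceivedWithAnyArgs().AddListItem(null);
}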
This is a reasonable test. It verifies that items are added to the view using their name. However, if the use case contains acceptance criteria stating that the item must be added to the list with its name displayed, we’re unable to verify this explicitly. It could be argued that the test doesn’t need to be any more precise. The view logic, in this case, is so simple that a failure would be easily discovered during manual testing. I would argue that this confidence is misplaced…

Initially, our presenter just adds list items. Later, we add functionality to update existing list items. We add tests to cover the update functionality and copy the assertion regarding the addition of the list item:
[Test]
public void Update_ValidChanges_ShouldChangeListItem()
{
   // Arrange
   ...
   // Act
   ...
   // Assert
   fakeView.ReceivedWithAnyArgs().AddListItem(null);
}
There are now tests covering both the addition and the modification of items in the list, each verifying that items are added to the list.

A new requirement emerges: each list item should display the name followed by its id in brackets, e.g. Item 1 (345). The presenter's add code is changed to format the name accordingly. The test covering the addition doesn't change, as the perception is that the existing assertion is good enough. However, for whatever reason, the update code is missed. With the code changes completed, the unit tests are run and everything is green. Yet the update feature now has a defect: when an item is updated, the name shown in the list is wrong; it doesn't feature the id in brackets.
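For concreteness, under the item shape assumed earlier, the modified add code might now look like this, while the update path is mistakenly left passing the bare name:
foreach (var item in items)
{
   // New requirement: name followed by the id in brackets, e.g. "Item 1 (345)"
   View.AddListItem(string.Format("{0} ({1})", item.Name, item.Id));
}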

Ultimately, the defect is introduced because the developer did not fully understand or assess the impact of the change. However, more detailed test assertions could've done more to protect the quality of the code. While it's true that manual testing would easily find this defect, that process is not free. If the assertion regarding the addition of the list item had contained more detail about what was being added, the update tests would've failed, alerting the developer that they had unwittingly missed something:
fakeView.Received().AddListItem(Arg.Is<string>(n => n.Equals(expectedName)));
This can be taken a step further by centralising the assertion so that the attributes of a valid name are maintained in one place:​​
private bool ListItemNameIsCorrect(string name)
{
   ...
}
   ...
   Assert.That(ListItemNameIsCorrect(testName), Is.True);
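The same helper can back the NSubstitute matcher too, so the interaction-based tests and any state-based assertions share one definition of a valid name (a sketch; ListItemNameIsCorrect might, for example, check the "Name (Id)" format with a regular expression):
fakeView.Received().AddListItem(Arg.Is<string>(n => ListItemNameIsCorrect(n)));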
Now it's a lot easier to understand the scope of a change to this part of the code.

In conclusion, ensuring that test assertions have the right precision and, where appropriate, are aligned with acceptance criteria improves the quality of the tests, reinforcing their role as protectors of code quality and reducing the need to rely on manual testing.

Thursday 15 January 2015

Verifying interaction with 3rd-party frameworks via tests - NHibernate

Following on from my previous post regarding tests that verify how code interacts with Castle Windsor, this post looks at the same concept with NHibernate.

One of my classes works with NHibernate sessions, opening and disposing them. As with releasing components using Castle Windsor, ensuring that sessions are disposed is essential.
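In C#, the behaviour under test is roughly the following; the field and method names are my guesses at the shape of the code, not the actual implementation:
public void Execute<TCommand>() where TCommand : ICommand
{
   // No session was injected, so the class opens (and owns) its own session
   using (var session = _sessionFactory.OpenSession())
   {
      // ... resolve the command and execute it against the session ...
   } // Dispose() closes the session, which increments Statistics.SessionCloseCount
}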

The NHibernate SessionFactory exposes a Statistics property that provides access to many useful metrics. Adding <property name="generate_statistics">true</property> to the NHibernate configuration enables the collection of these statistics.
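If the configuration is built in code rather than XML, the same switch can be set programmatically; a C# sketch, assuming the Environment.GenerateStatistics constant (which maps to the same property name):
var cfg = new NHibernate.Cfg.Configuration();
cfg.Configure(); // reads hibernate.cfg.xml
cfg.SetProperty(NHibernate.Cfg.Environment.GenerateStatistics, "true");
var sessionFactory = cfg.BuildSessionFactory();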

Now, using the Statistics property, the test becomes quite simple:

<Test()>
<Category(TestCategory.Integration)>
Public Sub Execute_NoSessionInjected_ShouldCreateAndDisposeItsOwnSession()
 
   Dim expectedSessionCloseCount = SessionProvider.Factory.Statistics.SessionCloseCount + 1
 
   _controller.Execute(Of ICommand)()
 
   Assert.That(SessionProvider.Factory.Statistics.SessionCloseCount, [Is].EqualTo(expectedSessionCloseCount))
 
End Sub
The test takes the current count of closed sessions and adds one to it, since we expect exactly one more session to be closed. It then performs the test action and asserts, using the Statistics property, that NHibernate has closed one more session than it had before we acted.

Inspecting the SessionFactory statistics is also useful for testing second-level caching.
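For example, a caching test can take the same before-and-after approach using the second-level cache counters; a C# sketch, assuming the entity being loaded is already cached:
var hitsBefore = SessionProvider.Factory.Statistics.SecondLevelCacheHitCount;

// ... load an already-cached entity in a new session ...

Assert.That(SessionProvider.Factory.Statistics.SecondLevelCacheHitCount, Is.EqualTo(hitsBefore + 1));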

Wednesday 14 January 2015

Verifying interaction with 3rd-party frameworks via tests - Castle Windsor

I recently developed a simple class to process Commands. The CommandProcessor class takes advantage of Castle Windsor's excellent Typed Factory Facility. In essence, the CommandProcessor depends on a CommandFactory to resolve and release instances of Command objects that it executes.
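For context, here is a rough C# sketch of how such a processor and factory pair might look when wired up with the Typed Factory Facility; the interface shapes and member names below are my assumptions rather than the actual code:
public interface ICommand
{
    void Execute(); // assumed member
}

public interface ICommandProcessor
{
    void Execute<TCommand>() where TCommand : ICommand;
}

// Windsor's Typed Factory Facility supplies the implementation of this interface:
// a method that returns a component resolves it; a void method that accepts one releases it.
public interface ICommandFactory
{
    TCommand Create<TCommand>() where TCommand : ICommand;
    void Release(ICommand command);
}

public class CommandProcessor : ICommandProcessor
{
    private readonly ICommandFactory _factory;

    public CommandProcessor(ICommandFactory factory)
    {
        _factory = factory;
    }

    public void Execute<TCommand>() where TCommand : ICommand
    {
        var command = _factory.Create<TCommand>();
        try
        {
            command.Execute();
        }
        finally
        {
            // Hands the instance back to Windsor, which releases it (firing OnDestroy for transients)
            _factory.Release(command);
        }
    }
}

// Registration would be along the lines of:
//   container.AddFacility<TypedFactoryFacility>();
//   container.Register(Component.For<ICommandFactory>().AsFactory());
//   container.Register(Component.For<ICommandProcessor>().ImplementedBy<CommandProcessor>());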

As part of the testing of this class, I wanted to ensure that I could verify that the Command objects were being resolved and released by Castle Windsor as I expected. It's not that I don't trust how Castle Windsor behaves, but I like my tests to serve as explicit statements of how I have designed my code to work. Too many times I've looked at code, my own included, and questioned the true intent. Furthermore, I like my tests to prevent unwitting changes that will negatively affect the code's behaviour. With operations like resolving and releasing object instances via IoC, these negative effects might prove very subtle and difficult to debug.

My test takes advantage of Castle Windsor's support for events:
container.Register(
    Component.
        For(Of ICommand)().
        UsingFactoryMethod(Function(k) Substitute.For(Of ICommand)()).
        LifestyleTransient().
        OnCreate(Sub(c) componentWasCreated = True).
        OnDestroy(Sub(c) componentWasReleased = True).
        IsDefault())

I register a fake command with the container and tell it that when it creates and destroys this command, it should set local variables so that I can assert against them.

Then I go on to resolve the processor and execute the command:
Dim processor = container.Resolve(Of ICommandProcessor)()
 
processor.Execute(Of ICommand)()

Finally, I assert against the variables that should've been set when Castle Windsor raised the appropriate events:
Assert.That(componentWasCreated, [Is].True)
Assert.That(componentWasReleased, [Is].True)

The test in full:
<Test()>
Public Sub Execute_IocContainerConfiguredByApplication_ShouldResolveAndReleaseCommand()
 
    Dim container = Program.CreateIoCContainer()
 
    Dim componentWasCreated As Boolean
    Dim componentWasReleased As Boolean
 
    container.Register(
        Component.
            For(Of ICommand)().
            UsingFactoryMethod(Function(k) Substitute.For(Of ICommand)()).
            LifestyleTransient().
            OnCreate(Sub(c) componentWasCreated = True).
            OnDestroy(Sub(c) componentWasReleased = True).
            IsDefault())
 
    Dim processor = container.Resolve(Of ICommandProcessor)()
 
    processor.Execute(Of ICommand)()
 
    Assert.That(componentWasCreated, [Is].True)
    Assert.That(componentWasReleased, [Is].True)
 
End Sub

Previously, I might not have attempted to write this kind of test, assuming that the code was simple enough not to warrant verifying its behaviour. However, when frameworks offer the appropriate interaction points to allow test verification, and when we start to look at tests as more than just things that verify behaviour, it becomes clear that tests like these offer significant value.