Recently I’ve been looking at how a unit of work should be tested and asking myself the question “should every class have its own unit test?”

A test per class?

Historically, a lot of my unit testing has been about making sure every class is individually tested. Every dependency in the class would be mocked so I could focus the test on the class itself.

However, I’ve been experimenting with testing a full unit of work rather than just each class itself. In some cases this might be a single class but only if the class covers a unit of work.

The boundaries

For my dependencies I now create a new instance of each, so in effect I am still testing each class, just not individually.

These aren’t integration tests though; the boundaries will still be mocked out. The boundaries may be external, such as a database or an API, or they may still be internal, such as the layers of the application.

Refactoring

By focusing the tests on what the code is trying to achieve and not the implementation, I have found it much easier to refactor.

If a class or method is becoming too bulky and needs to be abstracted into a new class, for example, the test will still cover this new class while ensuring the desired outcome still happens.

By not having to keep in my head how all these little classes fit together to perform the piece of work, my confidence in refactoring has grown. When each little class is only tested individually, with no tests to ensure they fit together, I find I can be reluctant to start changing the design of the system.

Reveal intent

Increasing the scope of my unit tests from class to unit of work has helped not only my code, but also helped my tests follow one of Kent Beck’s 4 simple design rules;

Reveals intention

I’ve always taken care when naming tests, but now my names tend to be more useful. They have moved from names such as;

[Test]
public void GivenValueIsX_DependencyBIsCalled()
{
  //testing the individual class goes here
}

To something more like;

[Test]
public void GivenAFormPost_TheCorrectDataIsCalculatedThenStored()
{
  //here a call will be made to the controller
  //new up the required services rather than mock them
  //only the external call to the database would be mocked
}

Again, I’m finding this means I have to keep less of the system’s reasoning in my head. I don’t have to remember why it’s important that a certain service is called. The test will tell me why something is happening and test the result; it doesn’t care how I get there.
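To make that a bit more concrete, here is a rough sketch of what such a test could look like. The controller, service, calculator and repository are all hypothetical names, and Moq stands in as the mock for the database boundary;

[Test]
public void GivenAFormPost_TheCorrectDataIsCalculatedThenStored()
{
    //only the external boundary (the database) is mocked
    var repository = new Mock<IOrderRepository>();

    //the internal dependencies are real instances, so the whole unit of work is exercised
    var service = new OrderService(new PriceCalculator(), repository.Object);
    var controller = new OrderController(service);

    controller.Index(new OrderFormPost { Quantity = 2, UnitPrice = 10m });

    //assert on the outcome at the boundary, not on how we got there
    repository.Verify(r => r.Save(It.Is<Order>(o => o.Total == 20m)), Times.Once);
}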

So should every class have its own unit test?

No.

Unless there is a good reason, such as external dependencies, each class should be covered by tests, but not necessarily by its own.

Here is another Linux command that I want to remember; the cat command.

My Linux command posts

These posts are NOT supposed to be exhaustive documentation for the commands they cover, but are here as a reminder for myself as I’ve found them useful and my memory is terrible.

The cat command

The cat (concatenate) command is pretty useful. It can be used to make files, look at the contents of a file and combine files to make a new one.

Why I needed it

The task it came in useful for was combining several days of logs from several load-balanced servers.

How I used it

cat file1.txt file2.txt > myNewFile.txt

Much easier than going into each file, copying the contents, pasting into a new file and remembering which files I had already done.
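In my case there were a lot of files across several directories, so a shell glob saved listing them all out. The paths here are made up, but it was something along the lines of;

cat /var/log/myapp/server-*/2016-11-*.log > combined.log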

Here is the link to my other Linux command posts.

After setting up my first ASP.NET Core MVC site, I’ve looked into expanding it with a test project and taking advantage of more of the new features.

Test project

One of the first things I wanted to learn how to do in ASP.NET Core was how to get some unit testing going.

Firstly, I set up the solution structure by creating a source directory and a separate test directory. At the root of the solution I added a global.json file.

{
    "projects": [
        "src",
        "test"
    ]
}

To create the test project in the test directory, there is a very helpful dotnet command.

dotnet new -t xunittest

This is a very helpful option for the new command. In the project.json file it sets the test runner to xunit and adds the needed dependencies.

I’ve not used xunit before, but apart from a few syntax differences to nunit, it seems pretty straightforward.
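For example, nunit’s [Test] attribute and Assert.AreEqual become [Fact] and Assert.Equal in xunit;

[Fact]
public void TwoPlusTwo_IsFour()
{
    //the equivalent of nunit's Assert.AreEqual(expected, actual)
    Assert.Equal(4, 2 + 2);
}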

For a mocking framework I went with Moq. It’s not my first choice, but I read it worked with ASP.NET Core so I went for it. Below is the dependency to add to the project.json;

"Moq": "4.6.38-alpha",

Once all the tests are in place, you just need to run the following command in the test project;

dotnet test

The docs for testing in ASP.NET Core can be found here.

It was a big relief to find that setting up unit testing was straightforward as I’m loving learning ASP.NET Core and I didn’t want it to be a blocker.

Tag helpers

While you can still use Razor as your view engine in ASP.NET Core, there are some updates you can take advantage of. On my little test site I was trying to set up a form by using the HTML helper BeginForm;

@using (Html.BeginForm())
{
    //form goodness goes here
}

However, after reading through the ASP.NET Core docs I found the new tag helpers. I was sceptical at first, as I’ve always found HTML helpers useful and didn’t particularly think they needed replacing.

But now I’m a convert; tag helpers are pretty cool.

So my form became;

<form asp-controller="Home" asp-action="Index" method="post" id="usrform">
    <!-- Input and Submit elements -->
    <button type="submit">Get name</button>
</form>

No longer do you need to fill in the HTML helper’s signature; you just write the markup you need with the required attributes. For example, to set the controller;

asp-controller="Home"

This makes the markup much nicer to read for everyone and removes the dependence on knowing how to use each helper. For example, I would much rather read;

<label class="myClass" asp-for="MyProperty"></label>

Compared to;

@Html.Label("MyProperty", "My Property:", new {@class="myClass"})

I’m so glad I don’t have to remember that awful syntax for adding classes to HTML helpers.

_ViewImports.cshtml

In order to use tag helpers you need to make them available in a _ViewImports.cshtml file in the Views folder.

@addTagHelper "*, Microsoft.AspNetCore.Mvc.TagHelpers"

The above will add the tag helpers in the Microsoft.AspNetCore.Mvc.TagHelpers namespace. By following the same convention you can make custom tag helpers available as well.
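For instance, a bare-bones custom tag helper is just a class deriving from TagHelper. The class and namespace names here are invented for the example;

using Microsoft.AspNetCore.Razor.TagHelpers;

namespace MySite.TagHelpers
{
    //turns an <email></email> element in the markup into a mailto link
    public class EmailTagHelper : TagHelper
    {
        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagName = "a";
            output.Attributes.SetAttribute("href", "mailto:hello@example.com");
            output.Content.SetContent("Email me");
        }
    }
}

Registering it in _ViewImports.cshtml with @addTagHelper "*, MySite" would then make it available to the views.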

Debugging

While unit testing massively reduces the need to use the debugger, I’m only human.

For a free IDE, the debugging capabilities of VSCode are fantastic.

Rather than using the VSCode launch.json and the built-in debug launcher (since it’s not a console application), attaching the debugger to the running process worked well.

By running “dotnet run” from the source directory of the application, the MVC site will be up and running. Then, from the debug panel, choose the attach to process option.
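For reference, the attach option is backed by an attach configuration in launch.json, which (from memory, so treat this as a sketch) looks something like;

{
    "name": ".NET Core Attach",
    "type": "coreclr",
    "request": "attach",
    "processId": "${command:pickProcess}"
}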

Debugging an ASP.NET Core MVC app in VSCode

As you can see in the screenshot, the debugger is pretty comparable to the debugger in the full version of Visual Studio. You can set breakpoints, watch variables and step through.

Static files

The typical location for static assets in an ASP.NET Core MVC app is in a folder called wwwroot at the root of the project.

Then in the WebHostBuilder set the content root to be the current directory.

var host = new WebHostBuilder()
    .UseKestrel()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseStartup<Startup>()
    .Build();

host.Run();

Static assets will then be available by using a relative path to the wwwroot folder. For example;

<link rel="stylesheet" href="~/css/bootstrap.css" >
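It’s worth noting that serving anything from wwwroot relies on the static files middleware being registered in Startup. A minimal Configure method (a sketch, not my exact one) would be along the lines of;

public void Configure(IApplicationBuilder app)
{
    //serve the contents of wwwroot for matching requests
    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}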

Example site

My example site so far is on GitHub here.

As Linux continues to blow my mind, I thought I’d better start getting down some of the commands that I’ve been finding useful before I forget them.

My Linux command posts

These posts are NOT supposed to be exhaustive documentation for the commands they cover, but are here as a reminder for myself as I’ve found them useful and my memory is terrible. If they happen to help someone else, then awesome, and if anyone wants to let me know of any more, even better.

The grep command

The grep command is used for searching text for matches to a regular expression and returning the lines that match. The result can then be output to a new file.

Why I needed it

Recently I was asked to get certain details out of a very large log file. My first thought was to open the log in Excel and start filtering away. But then a Linux loving colleague showed me the grep command.

How I used it

grep "the text I was searching for" mylogfile.log > newfile.txt

I couldn’t believe how easy and fast this made a boring task. Definitely a command I need to remember.

Both a great introduction to agile development and a reminder of how to focus on what value is and how to get it.

Who the book is for

I really enjoyed the style of this book. There aren’t a lot of words on a page and each page has pictures. That might not make me sound like the sharpest tool in the box, and maybe I’m not, but for someone who reads for a couple of minutes here and there on a commute it helps a lot.

I’ve been working in agile environments for a few years now and I still found this book useful. Its simple layout and “keep it simple” message are a really nice reminder that if things are complicated, you are probably doing them wrong.

In the intro, Uncle Bob describes this book as a must for CTOs, directors of software and team leaders. While I think that is true, it would be better still for a wider audience to read this book. It would be amazing if this book were read by the recipients of software. Agile development has a lot of benefits, but those benefits are hard to achieve if all parties aren’t bought into the process.

What is a feature?

A lot of the book is centered around the importance of features.

We should plan by features, we should build by features, we should even grow our teams around features. Working towards features does have a lot of advantages: planning is easier with smaller chunks, it is easier to pivot the project, and, while I think estimates are rarely helpful, it is easier to estimate smaller pieces of work.

However, I wonder if “feature” is the best term for what we should be breaking the work down into. Would something more in keeping with the agile ideas of always being in a releasable state and getting value from releases, like “deliverable”, be more appropriate? Sometimes I find that the term “feature” is too easily interchangeable with “project”.

Once the term “feature” gets swapped with “project”, it makes realising agile benefits, such as being able to work on the most valuable thing at the time, difficult, as people often like the feeling of completeness that comes with finishing a project.

Bug free

In order to keep delivering value and enable flexibility the code has to be kept bug free.

One way Jeffries says to achieve this is to continually design the system as it grows. While I have always refactored code as I develop, as part of the TDD cycle, this made me think that sometimes I need to make the brave choice to refactor the design of the system. From a business’s perspective this may not seem like the popular choice, as speed is often seen as crucial, but keeping the system clean and flexible is important in the long term.

Another example of going slow to go fast is ensuring the system has good test coverage. Unit tests are an essential part of development, but acceptance tests can be massively helpful in enabling rapid releases of small features. Knowing that new code hasn’t broken existing features helps to reduce manual testing and increases confidence in the system.

Up front planning

I find planning a very interesting topic in agile development.

Large, detailed plans seem to make people feel comfortable and think that they are reducing risk. However, this rarely seems to be the case. Goals and measures of value are important, and so is knowing the current direction towards those goals. But a detailed long-term plan is rarely going to be helpful, unless you can see the future, and it can even be damaging to the success of a project: it may stop you feeling like you have the flexibility to change direction.

Whip the ponies harder

There is an interesting chapter in the book dealing with the dangers of just putting more pressure on the team to get more out of them. This is extremely unlikely to work very well. Due to the extra pressure, mistakes will be made, best practices will slip and defects will be introduced into the system. As previously mentioned, overall this will make the project take longer.

Jeffries goes on to say that analysing and removing any sources of delay for the team is likely to have a much larger impact. The same goes for helping the team to improve. This could be through training or helping them analyse how they are working, for example.

I would have to say I wholeheartedly agree with this point. Maybe I’ve just been very lucky in my career to have only met hard-working developers, but I have never worked with someone who made me think “if they worked harder this would go a lot faster”. Training and working to help resolve the team’s issues have always been a much better way to increase the flow of work.

A very helpful book

I really enjoyed this book. It’s reminded me to take a step back every now and again, slow down and think “am I doing this the best way?” when it comes to agile development.

It’s so easy to say “yes I know how you should work in an agile manner”, but it’s also very easy to get bogged down in processes and “agile” methods.

The Nature of Software Development has made me think more about why we work in an agile way in the first place, and that must be a good thing.