The Nature of Software Development book

Both a great introduction to agile development and a reminder of how to focus on what value is and how to get it.

Who the book is for

I really enjoyed the style of this book. There aren’t a lot of words on a page and each page has pictures. That might not make me sound like the sharpest tool in the box, and maybe I’m not, but for someone who reads a couple of minutes here and there on a commute it helps a lot.

I’ve been working in agile environments for a few years now and I still found this book useful. Its simple layout and its message of “keep it simple” are a really nice reminder that if things are complicated, you are probably doing them wrong.

In the intro, Uncle Bob describes this book as a must for CTOs, directors of software and team leaders. While I think that is true, I think it deserves a wider audience. It would be amazing if this book was read by the recipients of software. Agile development has a lot of benefits, but those benefits are hard to achieve if all parties aren’t bought into the process.

What is a feature?

A lot of the book is centered around the importance of features.

We should plan by features, we should build by features, we should even grow our teams around features. Working towards features does have a lot of advantages: planning is easier with smaller chunks, it is easier to pivot the project, and, while I think estimates are rarely helpful, smaller pieces of work are easier to estimate.

However, I wonder if “feature” is the best term for what we should be breaking the work down into. Would something like “deliverable”, more in keeping with the agile ideas of always being in a releasable state and getting value from every release, be more appropriate? Sometimes I find that the term “feature” is too easily interchanged with “project”.

Once the term “feature” gets swapped with “project”, agile benefits such as being able to work on the most valuable thing at any given time become difficult to realise, because people often like the feeling of completeness that comes with finishing a project.

Bug free

In order to keep delivering value and enable flexibility, the code has to be kept bug free.

One way Jeffries says to achieve this is to continually design the system as it grows. While I have always refactored code as I develop as part of the TDD cycle, this made me think that sometimes I need to make the brave choice to refactor the design of the system. From a business’s perspective this may not seem like the popular choice, as speed is often seen as crucial, but keeping the system clean and flexible is important in the long term.

Another example of going slow to go fast is ensuring the system has good test coverage. Unit tests are an essential part of development, but acceptance tests can be massively helpful in enabling rapid releases of small features. Knowing that new code hasn’t broken existing features reduces manual testing and increases confidence in the system.

Up front planning

I find planning a very interesting topic in agile development.

Large, detailed plans seem to make people feel comfortable and think that they are reducing risk, but this rarely seems to be the case. Goals and measures of value are important, and so is knowing the current direction towards those goals. A detailed long-term plan, however, is rarely going to be helpful unless you can see the future, and it can even damage the success of a project by making you feel that you don’t have the flexibility to change direction.

Whip the ponies harder

There is an interesting chapter in the book dealing with the dangers of just putting more pressure on the team to get more out of them. This is extremely unlikely to work very well. Due to the extra pressure, mistakes will be made, best practices will slip and defects will be introduced into the system. As previously mentioned, overall this will make the project take longer.

Jeffries goes on to say that analysing and removing any sources of delay for the team is likely to have a much larger impact. The same goes for helping the team to improve, whether through training or by helping them analyse how they are working.

I wholeheartedly agree with this point. Maybe I’ve just been very lucky in my career to have only met hard-working developers, but I have never worked with someone and thought “if they worked harder this would go a lot faster”. Training and helping to resolve the team’s issues have always been much better ways to increase the flow of work.

A very helpful book

I really enjoyed this book. It’s reminded me to take a step back every now and again, slow down and think “am I doing this the best way?” when it comes to agile development.

It’s so easy to say “yes I know how you should work in an agile manner”, but it’s also very easy to get bogged down in processes and “agile” methods.

The Nature of Software Development has made me think more about why we work in an agile way in the first place, and that must be a good thing.

ASP.NET Core

I’ve heard a lot of good things about ASP.NET Core so I thought I’d check it out. It feels very different to the Microsoft development I’m used to, but after a bit of a learning curve I’m very impressed.

Why ASP.NET Core?

Rather than just bringing out a new version of ASP.NET with version 5, Microsoft decided to give us ASP.NET Core 1.0.

So far it seems pretty cool and addresses some of the issues with good old Microsoft development.

Microsoft can describe it better than I can, so here it is:

ASP.NET Core is a new open-source and cross-platform framework for building modern cloud based internet connected applications, such as web apps, IoT apps and mobile backends.

ASP.NET Core has a number of architectural changes that result in a much leaner and modular framework. ASP.NET Core is no longer based on System.Web.dll. It is based on a set of granular and well factored NuGet packages. This allows you to optimize your app to include just the NuGet packages you need. The benefits of a smaller app surface area include tighter security, reduced servicing, improved performance, and decreased costs in a pay-for-what-you-use model. According to Microsoft, there are a lot of benefits to moving to the new framework.

ASP.NET Core Introduction

Some of the main benefits that made me want to try it out include;

  • Lightweight, modular system
  • Cross platform (yes, that’s right, it runs on Linux and Mac!)
  • Use a text editor rather than Visual Studio
  • Open source

Installing

As I’ve mentioned, one of the most exciting things about ASP.NET Core is that it is cross platform.

It can run on Windows, Mac, Docker and various Linux platforms. My mind was blown when I ran my first C# application on Linux!

Download and install instructions for your chosen platform are available here.

I purposefully avoided the Windows installation with Visual Studio as I was keen to try it with just a text editor. Visual Studio is great and I love it, but I do find it can feel slow and clunky, especially on older laptops.

Basic set up

After installation you can have a .NET application running in just 5 lines.

mkdir hwapp
cd hwapp
dotnet new
dotnet restore
dotnet run    

MVC

With ASP.NET Core being pretty different from what I’m used to I wanted to hold on to something I knew, so I set out to build an MVC site with it.

Overall, things went pretty smoothly, but there were definitely some head-banging moments, mainly caused by how bad I found the documentation. Maybe I just need to get my head around Core more first, but I didn’t find the docs very useful at all. They spend a lot of time explaining the concepts and patterns behind MVC itself rather than how to set up a site in Core.

The tutorials weren’t a great deal of help either, as they wanted you to use the Visual Studio templates, something I was avoiding because I wanted to get things working with the minimum amount of boilerplate and templating I could.

Controller

After running the initializer, a controller was my first step.

So I added my folder, added my controller class, used the new Microsoft.AspNetCore.Mvc reference and, unsurprisingly, it didn’t work. Rather than having a nice tutorial to follow, I had to bounce about the docs for a while.
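
For reference, the controller itself is tiny. Here’s a minimal sketch of what mine ended up looking like (the namespace is just my choice, matching the hwapp project from earlier);

using Microsoft.AspNetCore.Mvc;

namespace hwapp.Controllers
{
    public class HomeController : Controller
    {
        // By convention this action renders Views/Home/Index.cshtml
        public IActionResult Index()
        {
            return View();
        }
    }
}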

A couple of things are needed to get the controller to work.

Startup.cs

In an ASP.NET Core program you can use an optional Startup class to configure your application.

Along with the Configure method required in a startup class, you can also have an optional ConfigureServices method. This takes an IServiceCollection, and it’s here that you need to enable MVC by calling AddMvc().
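
As a sketch, the whole method can be as small as this;

public void ConfigureServices(IServiceCollection services)
{
    // Registers the MVC services with the built-in dependency injection container
    services.AddMvc();
}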

It’s also in the startup class that you can define your routes for your MVC site. I decided to go for this option;

app.UseMvc(routes =>
{
    routes.MapRoute("default", "{controller=Home}/{action=Index}/{id?}");
});

but you also have the option of using attribute based routing;

[Route("Home/Index")]
public IActionResult Index()
{
   return View();
}

View

Views work in the same way as before, so things like view resolution are unchanged.

However, there are a couple of dependencies that need to be added first. In your project.json, add;

"Microsoft.AspNetCore.Mvc.Razor": "1.0.1",
"Microsoft.AspNetCore.StaticFiles": "1.0.0"

Then in your Program.cs, when you set up your host, you will need to tell your application to use the content root so that view resolution works properly.

var host = new WebHostBuilder()
   .UseKestrel()
   .UseContentRoot(Directory.GetCurrentDirectory())
   .UseStartup<Startup>()
   .Build();

Also, just as with enabling MVC, you need to enable static files in Startup.Configure;

app.UseStaticFiles();

There was one final head scratcher that I was struggling with before I could get Razor views to work. In your project.json, if you have run “dotnet new”, you should have a section called “buildOptions”. In this section you will need to add “preserveCompilationContext”, as below;

"buildOptions": {
    "preserveCompilationContext": true,
    "debugType": "portable",
    "emitEntryPoint": true
   },

Model

One of the nice features of ASP.NET Core is that it has been designed with dependency injection in mind. I’ve not really tried many of its features yet, but it was incredibly easy to set up.

We need another dependency in project.json;

"Microsoft.Extensions.DependencyInjection": "1.0.0",

Then in the ConfigureServices method in the Startup class you can register your dependencies;

services.AddTransient<IGetHomeModels, HomeModelService>();
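
The registered service can then be injected straight into a controller’s constructor. Here’s a rough sketch of how that looks (the Get() method on IGetHomeModels is a hypothetical name I’m using for illustration);

public class HomeController : Controller
{
    private readonly IGetHomeModels _homeModels;

    // IGetHomeModels is resolved from the container configured in ConfigureServices
    public HomeController(IGetHomeModels homeModels)
    {
        _homeModels = homeModels;
    }

    public IActionResult Index()
    {
        // Get() is a hypothetical method used here just to show the injected service in action
        var model = _homeModels.Get();
        return View(model);
    }
}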

Logging

I found the logging feature very useful; without it, I found debugging very difficult. As with the dependency injection, this was pretty easy to set up.

Another package reference;

"Microsoft.Extensions.Logging.Console": "1.0.0"

and enable logging in the Configure method;

loggerFactory.AddConsole(LogLevel.Error);
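
Pulling the pieces from this post together, my Startup.Configure method ended up looking something like this (a sketch, with the parameter list trimmed to what’s used);

public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
{
    // Console logging for errors and above
    loggerFactory.AddConsole(LogLevel.Error);

    // Serve static files from wwwroot
    app.UseStaticFiles();

    // MVC with the conventional default route
    app.UseMvc(routes =>
    {
        routes.MapRoute("default", "{controller=Home}/{action=Index}/{id?}");
    });
}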

Visual Studio Code IDE

I used Visual Studio Code to set up my MVC site and it really surprised me with how well it works as a C# IDE. I was worried that without ReSharper it was going to feel like one-finger typing.

However, even from this tiny project I was able to use several shortcuts within VSCode.

VSCode was able to “implement interface” on a class, just like in full Visual Studio.

Code snippets are also available, such as, “ctor” to create a constructor within the current class.

Another feature I was surprised to see was the ability to rename easily. Just hit F2 and the rest is taken care of.

With me still getting used to the package system in ASP.NET Core, another useful feature is “remove unused usings”.

With all these IDE features, and I’m sure there are loads more I just haven’t seen yet, Visual Studio Code is shaping up to be a pretty competent C# IDE.

Next steps

The next couple of steps for me in ASP.NET Core are adding a test project and getting it deployed somewhere.

If you want to fork my base MVC site, here it is.

curl referer

I had to use the referer option on a curl command recently in order to spoof the referer header. After initially thinking my code wouldn’t be testable until going live, I was pretty pleased to find this option was available.

My curl posts

The more I use curl the more useful I think it is. Each time I find a useful command or option I’m going to make a post about it so I can use it as a reference. If anyone else finds it useful as well, that’s awesome.

What is curl

According to the curl website, curl is “used in command lines or scripts to transfer data”. Day to day, I mostly use it to test websites I’m working on.

It’s incredibly useful for testing things like redirect status codes as you don’t get all the overly aggressive caching that you often get with browsers.

Referer option

The referer header

This post is all about the referer option on the curl command. By using the referer option, you can set the referer header.

You can use -e or --referer.

curl -e https://www.google.co.uk/ http://mywebsite.com

For the full list of curl options, here is the documentation.

Using the Referer option to debug a .NET solution

Recently I had to implement a feature on a site where the logic depended on what the referring site was.

Initially, I thought this work was going to be untestable until it went live and I could get the real site in the referer header.

However, by using the -e option on the curl command, I was able to set the referring site and still hit breakpoints in my Visual Studio solution.
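
As an example, a command along these lines lets you exercise the referer-dependent logic against a locally running site (the referring site and the localhost port are just placeholders for your own setup);

curl -e https://www.referring-site.com/ http://localhost:5000/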

My DIY Hologram

Here’s my DIY hologram from Microsoft, since I can’t afford a HoloLens.

It’s a hologram coming out of your phone!

I felt like I was in Star Trek.

My DIY hologram hotdog

OK, it’s a bit of an exaggeration, but I still thought it was pretty cool.

The instructions from Microsoft can be found here. I think this is a great example of how, with a little bit of creativity, you can turn a digital experience into a physical one.

Time to get the scissors out

The instructions include a template for the prism to print and cut out of clear plastic. I was able to find a laminator pocket, which I put through the machine while empty. It’s not the clearest plastic in the world, but it seemed to work OK for this.

From my experience, I would make the template a little bigger than the instructions state; I found this made the prism a little easier to stand up. Making the tab that sticks the prism together bigger than the template suggests helps as well.

And a hologram deer

How it works

Once you have the prism, you place it on your phone as the gif plays. The gif gets reflected into the prism and you have your floating hologram!

If you are a better designer than I am, you can make your own gif to use; otherwise, you can always use the example gifs that are supplied with the instructions.

I was amazed by my hologram, but it did leave me wanting a HoloLens. Time to get saving, I guess.

Razor View Interfaces

By using interfaces rather than classes for Razor models you can decouple them and make them more flexible.

Atomic Design

After reading Brad Frost’s awesome blog post on atomic design, I got to thinking about how a similar approach could be taken with Razor views.

Here is a very brief introduction to some concepts from Atomic Design.

Atomic Design is based around creating design systems from reusable and composable parts. These are split into five sections.

Atoms

Atoms are the smallest parts of the system. They can include the styles of a button or heading.

Molecules

When you start to gather several atoms together you end up with a molecule. An example would be a site search form, consisting of an input atom, a label atom and a button atom. This molecule can then be used in different places on the site.

Organisms

Organisms are made by bringing together molecules to form parts of a UI. For example a navigation molecule and a site search molecule might make up a site’s header.

Templates

A template is the framework for a page and how it will look when the organisms are arranged.

Pages

Once the template is populated with final content the page is then complete.

I found this concept of breaking down designs brilliant and it really reminded me of the Don’t Repeat Yourself rule in development.

Razor views

I started thinking how the principles of Atomic Design could be applied to Razor views. After some experimenting I realised that you could use an interface as the model for a strongly typed view. This sounds like a small change but it was news to me and opened up several new possibilities.

Reusable Partials

By setting the model for a partial view to be an interface rather than a class, the partial becomes more reusable.

Partial view;

@model ViewInterfaces.Interfaces.ICanRenderTitleSection

<div>
    <h1>@Model.Title</h1>
    <h3>@Model.SubTitle</h3>
</div>

This partial can then be used with any model that implements that interface.

Calling view;

@model ViewInterfaces.Models.HomeViewModel

@{
    ViewBag.Title = "Home Page";
}

@Html.Partial("/Views/Partials/_TitleSection.cshtml", Model)

Since the main view model implements all of the small interfaces required by the small partials, the partials are easily composable. As with Atomic Design, the partials can reflect the organisms, molecules and atoms.

Interface;

namespace ViewInterfaces.Interfaces
{
    public interface ICanRenderTitleSection
    {
        string Title { get; set; }
        string SubTitle { get; set; }
    }
}

Small Interfaces

It is better to have your main view model implement many small interfaces rather than one large one; this keeps your model flexible.

By following the Interface Segregation Principle from the SOLID principles, you are giving clear instructions about which partials the model can be used with and showing clear intent.

I’ve also found that it can help when it comes to using the model in the partial. Any properties on the view model that aren’t needed by that partial won’t be accessible, because the partial only sees the interface.
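
To sketch out what that composition might look like, here is the HomeViewModel from the calling view implementing the title section interface alongside a second, hypothetical ICanRenderFooter interface that I’ve made up for illustration;

namespace ViewInterfaces.Interfaces
{
    // A hypothetical second small interface, just to show composition
    public interface ICanRenderFooter
    {
        string FooterText { get; set; }
    }
}

namespace ViewInterfaces.Models
{
    using ViewInterfaces.Interfaces;

    public class HomeViewModel : ICanRenderTitleSection, ICanRenderFooter
    {
        // ICanRenderTitleSection
        public string Title { get; set; }
        public string SubTitle { get; set; }

        // ICanRenderFooter
        public string FooterText { get; set; }
    }
}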

Example

You can check out my MVC example on GitHub.