Softwire Blog


Lightning Talks – A Recipe for Disaster


2 March 2016

In October 2015 we ran another Lightning Talks competition where eight employees each had five minutes to tell us something they find interesting – inside or outside the software development world. We had talks this year on topics from SignalR to RSI!

Once again we voted on our favourite talks and the top two won Amazon vouchers.  This is Ben Below’s winning talk about the interesting and humorous world of test data!

 

Your Build Smells: Nervous Pushes


6 January 2016

This is the second post in a series looking at Build Smells – signs of major issues in team build processes – and the things we can do to try and solve them. You might also be interested in the first post on painful merges, and do watch this space or follow us on Twitter to see more of these as we publish them.

Nervous Pushes

Actually pushing code frequently is important for lots of reasons. As we covered in our previous post, sharing code earlier with your team can significantly reduce the time you spend fighting your version control system, but even beyond that, pushing code brings you closer to actually releasing useful code, and to getting things done in general.

Anything that stops you pushing code is something that’s slowing and hurting your team, but unfortunately it’s easy to get into a situation where your build does exactly that. If pushing code normally means something breaks then not only do you lose the time required to investigate and fix the issue (if there even is one), but you also lose the focus and concentration you had up until that point, as you have to context switch out of whatever new task you’d picked up in the meantime back to the fallout from your last one.

All this time and effort is something we humans instinctively avoid, and very quickly you’ll find if pushing code creates hassle, then people will push code less often. At this point, your build is making your team worse at development. This is a build smell.

Confidence

All of this fundamentally undermines your confidence in your build. If you have confidence in your build then you can write code, quickly find out whether it works (and quickly deal with it if it doesn’t), and iterate at high speed to code that actually solves your problem. Confidence lets you go fast, focusing on each task in hand sequentially and quickly moving toward success there, rather than having to keep one distracted eye nervously watching everything else to see what’ll break next.

This is one of the underlying components of flow, the much-discussed feeling of complete focus and being ‘in the zone’ that lets you zoom through development tasks with speed and control. This only works if you can be quickly confident in whether your code works as you write it; you need to be able to keep moving forwards without distractions and uncertainty, have your mind full with only what you’re building, and concentrate on your goals. Without confidence in your build, that’s not possible.

When confidence really falls apart, you end up assuming the build is wrong more often than it’s right, and everything gets much worse: you lose more than just the time required to fix issues, because you normalize build failures, start to ignore them, and lose the ability to deal effectively with actual failures when they come up. Once you start automatically rerunning the build a couple of times when it fails, fixing the build takes even longer still, the problem deepens, and the whole situation quickly spirals.

If we can set up our builds to give us confidence instead, then we can build better software, faster (and enjoy ourselves more). If your build doesn’t give you this, then you have a problem to fix.

How can we fix this?

This all revolves around the core quality-checking steps of your build, by which I mean anything that could legitimately fail because of an incorrect code change: automated tests, linting and style checks, and even compilation phases.

To get confidence in your build, these steps need to give you accuracy, thoroughness, speed and simplicity.

Accuracy

If the code actually works, the build should pass.

The easiest way to totally undermine confidence is to have a build that fails half the time. As soon as this becomes normal, it becomes very difficult to quickly or easily get confidence in the code you write, and your development speed (and quality) drops significantly. There are a few things that we’ve found particularly effective for helping here:

  • Reduce test dependencies – The most common cause of inaccuracy is intermittent issues with something outside your codebase, be that external systems, time and timezones, or the current operating system. Dependencies on any of these will bite you eventually, and working to remove or limit them really helps. Set up and tear down dependencies within your tests, and be careful about the assumptions you make about your environment (see the sketch after this list).
  • Keep environments identical – Builds that fail only on the build server are hard to trust, frustrating, and difficult to fix. Try to keep environments consistent to avoid that; at the very least, make it possible to create an environment locally that’s identical to CI, for debugging. There are lots of great tools nowadays to support this; Docker and Vagrant are both good options, for example.
  • Have zero tolerance for inaccuracy – Lack of accuracy will snowball; once you’ve got one test that randomly fails, more will join it without you noticing, and the frustration this causes makes it difficult for people to invest in test quality generally, making things worse. You have to fight disproportionately hard to beat these issues when they happen, or they’ll get far worse.
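
To make that first point concrete, here’s a minimal self-contained sketch of removing a time dependency using java.time.Clock. TimestampedLogger is a hypothetical class under test, but the same pattern applies to any code that currently calls Instant.now() or new Date() directly:

import static org.junit.Assert.assertEquals;

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import org.junit.Test;

// Hypothetical class under test: it takes a Clock instead of calling Instant.now() itself.
class TimestampedLogger {
  private final Clock clock;

  TimestampedLogger(Clock clock) {
    this.clock = clock;
  }

  String stamp(String message) {
    return Instant.now(clock) + " " + message;
  }
}

public class TimestampedLoggerTest {
  @Test
  public void stampsMessagesWithTheCurrentTime() {
    // A fixed clock makes the test deterministic, whatever machine or timezone runs it.
    Clock fixedClock = Clock.fixed(Instant.parse("2016-01-06T10:15:30Z"), ZoneOffset.UTC);
    TimestampedLogger logger = new TimestampedLogger(fixedClock);

    assertEquals("2016-01-06T10:15:30Z hello", logger.stamp("hello"));
  }
}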

Thoroughness

If the build passes, the code should actually work.

Passing tests need to give you good confidence in the quality of the codebase, so that once you’ve got passing tests you can move forwards and focus on the next problem, without nagging doubts that you’ve broken something and not noticed.

  • Monitor Coverage (a bit) – Code coverage is an oft-rejected metric, and strict adherence to arbitrary goals can be harmful, but overall coverage trends and patterns reliably contain useful data. It’s very useful to know that coverage has been going down recently, or that one area is much better covered than another. Measure (and examine) these patterns to get the real value, and keep your team focused on thoroughness.
  • Ensure every potential risk is covered – The key thing here is not to skip testing things because they’re hard. Sometimes you can’t usefully unit test something, but that doesn’t mean you should skip testing it. Add thorough integration tests instead, and ensure every potential risk gets caught somewhere.
  • Test what might fail – Overtesting can bite you too, increasing maintenance cost, and sapping morale and team investment in good testing. If there’s no reason you can think of why a test could fail, it’s not useful (even Kent Beck agrees), and you should potentially move the test up to a higher level. Instead of writing tests for every simple getter, test them as part of the code using them, as in the sketch below.
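
As a sketch of that last point (Order is a hypothetical domain class, defined inline to keep the example self-contained):

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

// Hypothetical domain class: getTotal() is the simple accessor we want covered.
class Order {
  private final List<Integer> pricesInPence = new ArrayList<>();

  void addItem(String name, int priceInPence) {
    pricesInPence.add(priceInPence);
  }

  int getTotal() {
    int total = 0;
    for (int price : pricesInPence) {
      total += price;
    }
    return total;
  }
}

public class OrderTest {
  @Test
  public void totalReflectsAddedItems() {
    Order order = new Order();
    order.addItem("widget", 250);
    order.addItem("gadget", 150);

    // getTotal() is exercised here as part of real behaviour; there's no separate
    // "getter returns what the setter set" test to maintain.
    assertEquals(400, order.getTotal());
  }
}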

Speed

Builds should run fast enough to let you quickly know their result.

Speed powers confidence, allowing you to quickly check how you’re doing and iterate. Time spent waiting for builds is typically unproductive time, but also creates painful context switching: If you get pulled back to your previous change three hours later when the build fails then you’re no longer in the best place to fix it, and you’ve lost any focus on what you moved onto next too.

  • Quick feedback – Effective team flow and the ability to move quickly depend on a quick feedback cycle. Waiting 4 hours to know whether your change worked makes it very hard to quickly iterate on changes. Anything you can do to reduce the time to feedback makes this much easier, and helps you and your team develop faster and better. Tools like Wallaby (for JS) and NCrunch (for .NET) are fantastic for this.
  • Watchable unit tests – Unit tests that you are prepared to watch while they run (so 10 seconds max, unless they’re somehow fantastic to watch) are tests that you’ll be much happier to run all the time. Any longer and your mind will wander – taking your focus with it – and you’ll end up running the tests far less. Fight to keep unit tests running fast.
  • Smoke test suites – You won’t manage to get integration or system test suites down to this time. Once you’ve got a 4 hour system test suite though, you can be sure that nobody on the team is running it before every commit. You can make that easier by picking a very small representative set of these to use as smoke tests, which are quick enough that people can run them as they work. You can also set these up as an early build step in CI, to ensure major problems are caught quickly (and to avoid waiting 4 hours to discover that 100% of your tests failed).
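
One way to wire up a smoke suite like this in JUnit 4 is with its Categories runner. Here’s a minimal sketch, assuming a SmokeTest marker interface of our own and a hypothetical LoginJourneyTest system test (each class would normally live in its own file):

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Marker interface used to tag the handful of tests worth running on every push.
public interface SmokeTest {}

// A hypothetical system test class; only one of its tests is tagged as a smoke test.
public class LoginJourneyTest {
  @Category(SmokeTest.class)
  @Test
  public void userCanLogIn() { /* ... */ }

  @Test
  public void passwordResetEmailIsSent() { /* ... */ }
}

// Runs only the @Category(SmokeTest.class) tests from the listed classes,
// so this suite stays fast enough to run before every push.
@RunWith(Categories.class)
@Categories.IncludeCategory(SmokeTest.class)
@SuiteClasses({LoginJourneyTest.class})
public class SmokeSuite {}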

Simplicity

Builds should be easy enough to check locally and fix that people actually do.

People are lazy, and that includes you, me, and everybody on your team. If running your build takes work, they’ll avoid it. Keeping the build simple and easy to use encourages people to actually go and use it, rather than avoiding it as an obstacle on their road to getting things done.

  • One click – If your build takes a series of actions, people will forget one, do them wrong, or skip to ‘only the important ones’. It’s typically easier than you’d expect to get it down to a single click (or simple command line step), and providing that kind of simple interface makes it drastically more likely that people will use it.
  • Zero setup – Manual setup wastes time, and you’ll do it wrong anyway. You’ll also have big problems when the setup changes: you have to migrate everybody from setup A to setup B, and you start finding tests that only pass in one setup. Again, this is often easier than you’d expect (especially with tools like Docker/Vagrant and Puppet/Ansible/Chef), and has big value.
  • Easy debugging – Eventually builds will fail, and making it easy to work out why will make people fix them more quickly and more effectively. Have Selenium take screenshots at the end of failing tests, make sure your build and CI setup provides detailed info on what exactly failed, have any logs from the system under test easily accessible, and ensure it’s easy to run individual tests standalone locally.
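
To make the Selenium suggestion concrete, here’s a minimal sketch of a JUnit 4 rule that saves a screenshot whenever a test fails; CheckoutJourneyTest and its driver setup are hypothetical stand-ins for your own tests:

import java.nio.file.Files;
import java.nio.file.Paths;
import org.junit.Rule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CheckoutJourneyTest {
  private final WebDriver driver = new FirefoxDriver();

  // Capture a screenshot on failure, then shut the browser down either way.
  @Rule
  public final TestWatcher screenshotOnFailure = new TestWatcher() {
    @Override
    protected void failed(Throwable e, Description description) {
      try {
        byte[] screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
        Files.write(Paths.get(description.getMethodName() + ".png"), screenshot);
      } catch (Exception ignored) {
        // Never let screenshot capture mask the original test failure.
      }
    }

    @Override
    protected void finished(Description description) {
      driver.quit();
    }
  };

  // ... tests using driver ...
}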

Put all this together, and you can put your project in a state where your tests are easy and quick to run, and reliably give you a good clear idea of whether your system works. All of this powers confidence, and makes it much easier to quickly and successfully build high-quality software every day.

Hopefully that’s an interesting insight into the kind of issues we see in projects, and some of our approaches to solving them!

We do encourage all our teams to find and play with their own solutions for problems they face, and we studiously avoid dogmatically sticking to any one approach, so I should note that these are just an example of a range of practices we’ve found effective in different cases. Don’t blindly follow advice from the internet just because it sounds convincing! Take a look at the concrete problems your team is facing, and consider whether some of the approaches and practices here might help you to effectively solve your real issues.

Let us know your thoughts on all this below, or on Twitter! In the next post, coming soon, we’ll take a look at the next smell in the series: Rare Releases.

Find out about the Java testing libraries that will make your life easier


22 August 2014

Unit testing is generally considered a good thing, but a worker is only as good as their tools. The tools that immediately spring to mind when writing Java unit tests are JUnit and Hamcrest. But are these the best tools for the job? This post explores a couple of alternatives. Depending on the task you are trying to achieve, these may be better or worse alternatives, but it’s always good to know your options.

JarSpec

This library is a neat way of writing more descriptive unit tests. It was created by Softwire’s Harry Cummings as a solution which enables Java unit tests to be coded in a manner similar to RSpec:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.runner.RunWith;
// ...plus the JarSpec imports for JarSpecJUnitRunner, Specification and SpecificationNode

@RunWith(JarSpecJUnitRunner.class)
public class MinimalSpec implements Specification {
  @Override
  public SpecificationNode root() {
    return describe("addition", () ->
      describe("of 1+1", () -> {
        int result = 1 + 1;
        // by() combines multiple expectations over the same setup
        return by(
          it("should equal 2", () -> assertEquals(2, result)),
          it("should be greater than 1", () -> assertTrue(result > 1))
        );
      })
    );
  }
}

The code to JarSpec is on GitHub.

AssertJ

AssertJ is a fluent assertion library that is designed to integrate well with IDEs.
Rather than typing:

assertContains(list, value)

which could match several different types other than List, you type:

assertThat(list).contains(value)

Not only is this nicer to read; after typing the period, your IDE can immediately suggest all of the assertions that can be performed on the list. When using code completion in the first example, you have to pick your method from all of the assertions that exist.
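
For example, here’s a small (hypothetical) test using AssertJ’s list assertions; every method chained after assertThat shows up in your IDE’s completion list:

import static org.assertj.core.api.Assertions.assertThat;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class AssertJExampleTest {
  @Test
  public void listAssertionsReadFluently() {
    List<String> list = Arrays.asList("alpha", "beta", "gamma");

    // Each assertion chains off the last, and failures report the list's actual contents.
    assertThat(list)
        .hasSize(3)
        .contains("beta")
        .doesNotContain("delta");
  }
}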

The code to AssertJ is on GitHub.

The Journey Pattern


25 March 2014

The need for DRY code is a well-established idea, which is well explained by Uncle Bob. The Page Object Pattern is the basic application of this principle to web tests. However, it is an idea that does not solve all our problems.
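
As a quick refresher, a minimal page object might look something like the sketch below, with a hypothetical login page driven by Selenium WebDriver; the selectors and page mechanics live in one place, and tests express user intent instead:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical page object: all knowledge of the login page's DOM lives here,
// so tests can say "log in as X" instead of hunting for elements themselves.
public class LoginPage {
  private final WebDriver driver;

  public LoginPage(WebDriver driver) {
    this.driver = driver;
  }

  public HomePage logInAs(String username, String password) {
    driver.findElement(By.id("username")).sendKeys(username);
    driver.findElement(By.id("password")).sendKeys(password);
    driver.findElement(By.id("login-button")).click();
    return new HomePage(driver); // navigating returns the next page object
  }
}

// Hypothetical page object for the page you land on after logging in.
class HomePage {
  HomePage(WebDriver driver) { /* ... */ }
}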

Although it goes some of the way to improving the readability and extensibility of web tests, it still has its issues. For example, people often find that when they start modifying their code, large numbers of their web tests break, because they are brittle. Locating the source of the problem is also tricky, despite the implementation of the Page Object Pattern.

So are there any better ways to structure web tests that get around this problem?
(more…)

Getting FAT – Becoming Fanatical About Testing at Softwire


26 November 2013

What are Guilds?

At Softwire, we’re always keen to learn from others, but it can be difficult to apply the lessons from elsewhere in the software industry that we come across at conferences, in literature and online, because much of the wisdom on best practices in our field seems to be aimed at product companies or in-house enterprise development teams. As a services company, we have a very different structure and different commercial pressures.

However, sometimes lessons from other companies really resonate with us. One example of this was the article on Scaling Agile at Spotify, which Camille posted to our internal blog. Much of it seemed relevant to us, especially as we’d grown in the last few years from a small business to a medium-sized one, while trying to retain for ourselves the benefits of working in a small company environment.

There are a lot of great ideas in Spotify’s structure, but the concept of guilds stood out particularly. Guilds are described in the above article as “… a more organic and wide-reaching ‘community of interest’, a group of people that want to share knowledge, tools, code, and practices”. Something about the idea of a guild as a group of particularly enthusiastic people also reminded us of a phrase in Kevlin Henney’s keynote at DevWeek earlier this year: “we are fanatics for testing” – this struck a chord with us at Softwire!
(more…)

Mean Bean


4 November 2013

Recently, on one of Softwire’s projects, we were tasked with retrofitting a test suite to a codebase. The use of Hibernate on the project had resulted in a large number of getters and setters, and we wanted an easy way to check for obvious bugs in this code. MeanBean fits the bill. It:

  • Tests that the getter and setter method pairs of a JavaBean/POJO function correctly.
  • Verifies that the equals and hashCode methods of a class comply with the Equals Contract and HashCode Contract respectively.
  • Verifies property significance in object equality.
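
For instance, here’s a sketch along the lines of MeanBean’s documented usage, assuming a hypothetical Employee Hibernate entity with a default constructor:

import org.junit.Test;
import org.meanbean.test.BeanTester;
import org.meanbean.test.EqualsMethodTester;
import org.meanbean.test.HashCodeMethodTester;

public class EmployeeBeanTest { // Employee is a hypothetical entity
  @Test
  public void gettersAndSettersWorkCorrectly() {
    // Sets each property to a generated value and checks the getter returns it.
    new BeanTester().testBean(Employee.class);
  }

  @Test
  public void equalsAndHashCodeHonourTheirContracts() {
    new EqualsMethodTester().testEqualsMethod(Employee.class);
    new HashCodeMethodTester().testHashCodeMethod(Employee.class);
  }
}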

(more…)

Lightning Talks: Page Objects: Put Some OOP in Your Web Testing Soup!


21 May 2013

In April 2013 Rupert Wood organised our second Lightning Talks competition where eight employees each had five minutes to tell us something interesting about software development. This time our theme was “A Call To Arms”. Once again we voted on our favourite talks and the top three won Amazon vouchers.

This is Rowan’s third-place talk: “Page Objects: Put some OOP in your web testing soup!”, a pattern for organising Selenium web-testing code.

Lightning Talks: Tim Perry’s Lightning Talk: Performance Testing Better


10 May 2013

In April 2013 Rupert Wood organised our second Lightning Talks competition where eight employees each had five minutes to tell us something interesting about software development. This time our theme was “A Call To Arms”. Once again we voted on our favourite talks and the top three won Amazon vouchers.

This is Tim’s second-place talk: “Performance Testing Better”. He explains why it is important to build performance testing into your project from an early stage and practical steps for setting this up and monitoring the results.

Testing and Debugging Webpages for BlackBerry Mobiles


22 January 2013

If you are testing a webpage you have developed for BlackBerry mobiles, then the BlackBerry website is a useful place to visit, as it has simulators of all their devices available to download. Once installed, the simulators running OS6 or higher can be used to view your webpage in their browser straight away. To get the browser working on OS5 simulators or older, the BlackBerry Mobile Data Service (MDS) simulator needs to be installed and run; instructions for doing this on Windows 7 x64 can be found in this helpful blog post here (this is a link to the Google cache version, as the site is down at the time of writing).

The most useful thing I have found is that, since BlackBerry OS7, there is now a way to debug webpages. This is very useful, as the BlackBerry browser has a lot of quirks, despite being WebKit-based since OS6. The following can be done in either the simulator or on an actual device:

First of all you need to go to the Browser Options and ensure ‘Enable Developer Tools’ is selected. In the browser you then need to bring up the menu by pressing the BlackBerry button and select Developer Tools > Enable Web Inspector. A popup will then display an IP address and port number.

You can then connect to that IP address using a WebKit-based browser on your PC (or on a PC on the same network as the device, if you aren’t using the simulator) and select the site that is being displayed by the BlackBerry.

Once you have selected the webpage, you have access to the full debug tools available in WebKit browsers, which are very useful for figuring out why that <select> is appearing twice as large as it does in every other browser.

Clicking to download hundreds of files


30 November 2012

As part of testing some changes I made recently, I needed to check that a page of report spreadsheet downloads still worked, which meant clicking and downloading all of the 300+ reports in the system.

Of course I considered automating this process, but in the end I decided it would be simplest and quickest to just click down the list of links by hand (you can see my reasoning at the end of this post). Still, the manual route was not exactly straightforward – and I thought I’d share with you the best approach I found in case you’re unlucky enough to find yourself in the same situation!

(more…)