8 February 2017, by Harry Cummings
This is part of a series of blog posts on code reviews, based on two sessions of an internal discussion forum at Softwire. See the first post in this series for more information. In this post, we’ll discuss improving the interaction between the reviewer and the recipient of the review (i.e. the developer).
How do we mediate reviews?
There are all sorts of ways to mediate a review:
- Specialist review tools like Crucible and UpSource
- Repository systems with review features, like GitHub, GitLab, and Gerrit
- Sending diffs/patches over email
- Talking through the code face-to-face, e.g.
- Having the reviewer sit with the developer at their computer
- Reviewing code as a team around a big screen
- The old-fashioned way: printing out the code, sticking it to the wall, and scribbling over it together
- Pair-programming is arguably a form of code review, taking place as the code is written
Broadly speaking, all of the above approaches fall into two categories: asynchronous online reviews, or in-person reviews. Which kind of approach people preferred was probably the most contentious part of our discussions at Softwire. There was a range of opinions here, highlighting benefits and drawbacks to each approach.
Asynchronous online reviews vs. in-person reviews
Several people made the point that in-person reviews can be more useful for training, and for knowledge sharing in both directions. Face-to-face discussions make it easier to provide context around the code changes. They also give the developer a chance to talk the reviewer through the changes in a sensible order, or perhaps commit-by-commit. This might be better than the arbitrary order in which changes are presented by a diff/patch or an online review tool.
In-person reviews may also provide opportunities to pick up other context that might not be directly relevant to the code quality but is useful for the reviewer to know. For example, any frustrating obstacles the developer encountered while working on the task, which the team might need to address. Reviewing in person can also save developers from context-switching. If you have enough reviewers on a team, developers can get a review as soon as they finish their code rather than starting on another task and subsequently switching back to deal with review feedback. This obviously comes at the cost of the reviewers having to make themselves highly interruptible, though.
A lot of the literature on code reviews also favours some kind of in-person reviews. Here’s one particularly strongly stated example:
“Effective code reviews are not done via a tool or remotely—they are done when you’re sitting side-by-side with the person or pair who just wrote the code. This personal way allows you to share and teach much more information than you can pass in a text-based tool. Don’t skimp on this! If you’re going to do code reviews because your code sucks, do them right.” – Roy Osherove in Notes to a Software Team Leader
On the other hand, some of our reviewers felt that asynchronous online reviews were better partly because they don’t provide the reviewer with any additional context. Online reviews arguably make for a more authentic review of maintainability (future developers on the project probably won’t have the luxury of talking through the code with the original developer). Also, coming at the review from their own angle might allow the reviewer to spot issues that the developer has missed.
One major advantage of online tools is that they leave a permanent record of review comments. Some tooling combinations make it particularly easy to go back through old reviews (for example, Crucible with JIRA integration). Several people had worked on projects where they had benefited from the ability to do this.
Several people found it useful to mix online and in-person approaches, perhaps depending on the nature of the change. For example:
- Performing a high-level review in isolation first, then talking it through with the developer for context, before finally performing a line-by-line review online
- Saving face-to-face reviews for bigger or more complex changes
- Carrying out most of the review discussion in person, but using an online tool to track it: initiating the review process and documenting any important outcomes of the discussion
Reviewers: Making code reviews better for the developer
Quite a few people found phrasing review comments to be a challenge, especially when using online review tools. Some of our reviewers were concerned whether we did enough to make new-starters comfortable with the process, and to make it clear that they can and should challenge their reviewers. After all, the developer is always closest to the code and knows it best. It can be worth a reviewer (particularly one in a more senior position) reminding the developer of this explicitly.
Ways to make reviews more positive included:
- Phrasing review comments as questions or suggestions rather than statements
- Talking through major issues in person rather than writing lengthy review comments
- Talking through and fixing batches of very minor issues in person, rather than writing lots of tiny review comments
- Remembering to always make some positive comments (especially in reviews with some criticisms elsewhere)
This might be more or less important depending on the developer. Generally, reviewers should be conscious of the recipient of the code review and tailor things to them. Extra tact may be required when reviewing code from external third parties, which can be politically awkward.
Developers: Making code reviews better for the reviewer
This wasn’t a question we had set out to answer with our discussion. However, people naturally mentioned how they approached submitting their own work for review, and several common points arose:
- Performing their own review of the changes first (ideally in the same review tool the reviewer will be using)
- Linking to any relevant context (e.g. the relevant ticket in the project’s issue tracker)
- Keeping auto-generated files out of the review (if appropriate and the review tool allows)
- Splitting into sensible commits
- Especially keeping big renames, file moves, or other refactorings separate
- Also splitting code changes across multiple commits where appropriate
- On one project, we experimented with committing and reviewing one test at a time, resulting in many small reviews. On a small team, this turned out to be a very effective workflow.
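The checklist above maps quite directly onto everyday git commands. Here’s a minimal sketch in a throwaway repository; the file names are made up for illustration, and the `linguist-generated` attribute is a GitHub-specific convention for collapsing generated files in diffs:

```shell
# Minimal sketch of the pre-review checklist, in a throwaway repo.
set -eu
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial"
base=$(git symbolic-ref --short HEAD)   # main or master, depending on config
git checkout -qb feature

# Keep auto-generated files out of the review: GitHub collapses paths
# marked linguist-generated in .gitattributes
printf 'dist/** linguist-generated=true\n' > .gitattributes
git add .gitattributes
git commit -qm "Mark dist/ output as generated"

# Keep refactorings (renames, file moves) in their own commit,
# separate from behaviour changes
printf 'moved content\n' > moved.txt
git add moved.txt
git commit -qm "Refactor: move helper into its own file"

# Self-review the whole branch diff before asking anyone else to
git diff --stat "$base"...HEAD
```

The reviewer then sees a handful of focused commits rather than one mixed diff, and any generated-file noise stays out of the way.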
As we saw in the previous post, there are many different valid approaches to code reviews. Organisations should give teams the flexibility to choose a code review process that meets their needs. The first post in this series covered the wide and varied benefits of code reviews. As a team, you should reflect on your review process, considering what value it provides and what further value it could provide; this will allow you to evolve it to be more effective. I hope this series gives you some ideas that you find useful. Please feel free to share your own ideas on code reviews in the comments below.
1 February 2017, by Karl Graham
We have met with business leaders from across the globe in diverse industries. They see the impact technology is having on their business and say, ‘We need that!’ They recognise the need to exploit digital transformation opportunities to discover new markets, find ways to do business more effectively and respond to the challenges from new entrants and movers in their sector. They realise that embracing technology will enable them to be more responsive to potential and existing customer demands. They recognise that being technically complacent will mean lost opportunities, lost market share, lost customers. But they don’t know how to go about getting the benefits that come from using technology in new and disruptive ways.
In the main, the business leaders we speak to are asking:
- How do I create a company culture that encourages and enables exploration and experimentation whilst acknowledging and managing risk?
- Where am I going to get the skills I need to make this happen? I have great people in my business but this is new to us.
- How do I make sure I’m getting value for money and not just kicking off projects that will wither on the vine?
- What benefits and ROI should I expect from digital transformation initiatives?
Below are some of our responses to these questions.
Think BIG. Plan BIG. Start small.
Organisationally you need to know where you want to get to, you need to have clarity on the vision you are trying to achieve, whilst being flexible about how you get there.
Core questions include:
- What do you want to achieve?
- What will be different as a consequence?
- How will you know it’s been successful?
- How long can you take to get started?
- When do you need to start seeing results?
Answering these questions and others will give you a strong footing for making key decisions and a reference point when it starts to get hard and you come up against challenges or resistance.
The key point: Clarity. For you, for your leadership team and for your employees. Everybody needs to know what the plan is, how it’s going to happen and how excited leadership are about the journey. And when you tell your people about it – always err on the side of over-communication. Tell and retell the story about how great things are going to be. Celebrate the successes. Publish them. Share them. Make a lot of positive noise.
Once you have clarity, you need to make someone responsible for action. We recommend a key member of the executive team. The most commonly selected role type is the CTO/CDO. They can then get on with selecting their team, making their lower level plans and executing them. We recommend a cross-functional core team. The make-up of the core team will depend on your objectives but role types include accountable leader, line of business owner who is seeking change, technical architecture specialists, business process specialists and programme management. Depending on the size of the organisation some of these roles may be covered by a single person. The key at this stage is to get a plan in place and start getting stuff done.
We recommend starting small. Based on the strategic vision and objectives, this team should select some key hypotheses to validate and then use experimentation techniques to understand whether the hypotheses will return the expected results. To facilitate experimentation whilst managing risk, businesses should adopt rapid innovation tools such as Lean and Agile. They should also consider coupling these to a change approach such as Kaizen.
Lean and Agile approaches allow businesses to quickly validate or discard hypotheses, whilst minimising investment. Coupling them to Kaizen as an evolutionary, incremental change method allows management of significant change from existing operating models without alienating staff along the way. Used correctly this approach can create a cultural paradigm shift. Whilst some businesses will have experience of these tools and methods, where you do not, we recommend investing in training and finding a partner who can work with you to embed this capability in the organisation.
The key here is to have tools and processes for getting stuff done and Getting On With It.
I.T. Skills Shortage
Look inside and outside the organisation for talent that is both complementary and challenging
There is a current and growing IT skills shortage. At some point this is likely to have a direct impact on your ability to achieve your objectives. It’s an elephant in the room. Adopting a cross-functional and shared-services approach can address some IT shortages, but it will not supply skills that are missing from the business altogether. Therefore, you need to be open to and actively seek ways to create a highly collaborative culture. To facilitate external collaboration, leaders need to seek collaboration opportunities with partners, such as Softwire. These partners should have, or be able to develop, a deep understanding of your business and help design and implement digital technology solutions.
Controlling costs and generating ROI:
Leverage legacy systems to free up investment capital. Allow for reasonable failure. Learn from it fast.
Leaders need to be clear on how they are going to invest and how they are going to measure both tangible and intangible ROI across the organisation. Digital transformation is technology-driven. However, it is not solely driven by the I.T. department. It crosses lines of business. It impacts ways of doing business – people, process and policy. It can succeed or fail based on the buy-in and attention given by people who are not directly I.T. staff.
Organisational silos can be a significant impediment to digital business transformation. When people are protective of their ‘turf’ or budgets this gets in the way of disruptive innovation. As already mentioned, creating cross-functional teams can reduce the negative impact of silos and this protectionism. Getting the right people hooked into the process and empowering them with clarity of purpose and confidence enables each team member to give their expertise and insight.
Adopting Lean and Agile methods means you can commit to small, incremental investments based on validating specific hypotheses – whether the outcome is learning quickly to kill an idea, or pressing the button to scale a proof of concept into a fully-fledged customer offering. The key is to keep investment small, work quickly to learn all you can and make active decisions based on evidence.
Where investment grows without checks and balances on the value, where decisions get bogged down in unnecessary bureaucracy or committees, you will eventually find a disgruntled finance executive demanding that this ‘waste of money’ be canned. So, when you have success, celebrate it. Make sure it’s shared widely and repeatedly.
We all know budgets are always tight. In most organisations the I.T. department is seen as a cost centre, especially since 80% of an I.T. budget is generally spent on maintenance and support of legacy systems. As a consequence, we recommend leveraging existing legacy technologies and processes rather than creating new systems. That said, one of the major challenges with legacy systems is the inertia from decades of systems and processes. It’s true that the business needs to invest in and maintain the systems it relies on to operate. However, this is an area where budget can be freed up to aid experimentation with new technologies. In addition, years of organic growth in legacy systems across multiple lines of business can lead to a complex matrix of technologies and processes. We suggest significant benefits are achievable from harmonising processes, in particular where customers have to engage with these systems.
‘There’s a battle outside ragin’, It’ll soon shake your windows, And rattle your walls, For the times they are a-changin’[i].
We are in a period of significant upheaval across the business landscape. Macro and local economic impacts are meteoric. Technology disruption and innovation impacts are seismic. Whole sectors have been decimated. Some are under attack right now. Others are seeing the early waves breaking against their shores.
We have seen traditional responses to these attacks fail.
In addition, customers are much more savvy. They realise how powerful they are. They demand to engage with the business on their terms. The quality of experience and service they receive is ever more in direct proportion to the level of loyalty they are willing to give. Customer tolerance for a subpar experience is at an all-time low. We see this demonstrated in the way they move on to a new supplier the moment something does not suit them.
This behaviour alone is driving digital transformation, and shaking up businesses. With customers expecting an experience that is fast, efficient and simple, we have to find ways of meeting their needs, or be left behind. It’s little wonder business leaders are looking at leading technology companies and saying, ‘We need that!’
[i] The Times They Are A-Changin’, Bob Dylan, 1964.
1 February 2017, by Harry Cummings
This is part of a series of blog posts on code reviews, based on two sessions of an internal discussion forum at Softwire. See the first post in this series for more information. In this post, we’ll cover some of our current approaches to code reviews.
We tend to perform code reviews in multiple passes, at least implicitly. These break down into three stages:
- “Outside-in” preliminary review
- Reading through the original user story or defect
- Reviewing the design
- Checking out the dev branch and doing some cursory testing (this can be useful for reviewing UI issues or things that are hard to spot from the code or by automated tests)
- Reviewing the tests at a high level (do they function as good developer documentation for the code?)
- Review of the code itself and the tests in detail
- Review of any activities surrounding the code change, e.g.:
- Manual testing
- External documentation
- Risk/impact assessment
Note that not every project needs all of these passes. The point is that “code review” is a broad term covering a range of activities. Which activities you carry out, and when, may vary by project, although within each project there’s a lot of value in being consistent. Consistency helps developers become comfortable with the review process, and makes code reviews a much more reliable tool for quality assurance.
When do we review?
As noted above, different code review activities may be carried out at different times. There was a general consensus in our discussions that reviewing earlier is preferable. Most projects insisted on at least some form of review before commit, although a few relaxed this in special cases (depending on the type of project) to avoid becoming a bottleneck.
About half our teams are actively performing up-front High-Level Design reviews. These can be useful for everyone but especially for less experienced developers (which might just mean less experience with the particular project). They encourage working through design issues up front, avoiding wasted time at the implementation stage. It also means the code reviews can then focus on just the code. The only problem mentioned with HLD reviews was that it can be a bit unclear what we mean by an HLD, and sometimes people go too low-level. For projects broken up into well-sized tasks, an HLD could just be a couple of sentences and a few bullet points.
An alternative to up-front HLD reviews is reviewing roughly implemented code, essentially a spike or proof-of-concept. This can be particularly useful on tricky legacy codebases, where it might be hard to see how to go about introducing new functionality.
Who carries out reviews?
Most of the people in our discussions were their team’s technical lead. Unsurprisingly, tech leads were doing reviews themselves, but there was a lot of support for reviews being done not only by the tech lead. Getting more people involved in the review process is a good way to build people’s confidence, share knowledge within the team, and help people become more comfortable with the review process. One person doing all the reviews can also become a bottleneck and slow the team down. Perhaps more importantly, giving developers the autonomy to carry out genuine peer reviews is a show of faith in the team’s ability, and makes it easier for reviews to act as a positive motivator.
One problem with having multiple people involved in reviewing is that it can become confusing for developers. It’s not always clear how to pick an initial reviewer, or when a review would be considered “done”. It’s important for each team to agree on a consistent approach, although approaches can of course vary between teams. Most of our teams use one of the following approaches:
- Let the developer choose the initial reviewer, and allow the developer or the reviewer to escalate to the tech lead if needed
- Have a high-level second line review as part of the standard review process
- Include the tech lead on every review, but allow developers to merge their changes as soon as at least one person has reviewed them. This prevents the tech lead becoming a bottleneck but still gives them a chance to go into detail on any red flags.
All the above approaches include the possibility of the tech lead acting as a second-line reviewer. Our tech leads would go into more or less detail in their review based on the nature of the change and the experience of the other people involved (i.e. the original developer and reviewer). In some cases this might mean just reviewing the comments from the first-line reviewer and/or looking out for changes within common problem areas of the codebase.
How much detail to go into in a second line review is a matter of judgement and may not be obvious. It can help to think of the goal of reviewing as gaining trust that the code is up to standard, and getting involved enough to meet this goal. Of course, it’s still worth bearing in mind the importance of code reviews for training and mentoring. A second-line reviewer may be looking out for learning opportunities for both the developer and the initial reviewer. They’re also in a position to assess the quality of the interaction between these two roles. This will be the subject of the next post.