Softwire Blog


Advanced Clean Code


15 March 2017

This post is a quick overview of patterns I have picked up over the years that I aim to use when coding and usually end up pointing out in code reviews.

These aren’t hard and fast rules, but to me, this post forms an extension to Uncle Bob’s Clean Code (which you should read right now if you haven’t already!).


The many benefits of code reviews, and how to achieve them – 3. Improving the Process


8 February 2017

This is part of a series of blog posts on code reviews, based on two sessions of an internal discussion forum at Softwire. See the first post in this series for more information. In this post, we’ll discuss improving the interaction between the reviewer and the recipient of the review (i.e. the developer).

How do we mediate reviews?

There are all sorts of ways to mediate a review:

  • Specialist review tools like Crucible and UpSource
  • Repository systems with review features, like GitHub, GitLab, and Gerrit
  • Sending diffs/patches over email
  • Talking through the code face-to-face, e.g.
    • Have the reviewer sit with the developer at their computer
    • Reviewing code as a team around a big screen
    • The old-fashioned way: Print out the code, stick it to the wall, and scribble over it together
  • Pair-programming is arguably a form of code review, taking place as the code is written

Broadly speaking, all of the above approaches fall into two categories: asynchronous online reviews, or in-person reviews. Which kind of approach people preferred was probably the most contentious part of our discussions at Softwire. There was a range of opinions here, highlighting benefits and drawbacks to each approach.

Asynchronous online reviews vs. in-person reviews

Several people made the point that in-person reviews can be more useful for training, and for knowledge sharing in both directions. Face-to-face discussions make it easier to provide context around the code changes. They also give the developer a chance to talk the reviewer through the changes in a sensible order, or perhaps commit-by-commit. This might be better than the arbitrary order in which changes are presented by a diff/patch or an online review tool.

In-person reviews may also provide opportunities to pick up other context that might not be directly relevant to the code quality but is useful for the reviewer to know. For example, any frustrating obstacles the developer encountered while working on the task, which the team might need to address. Reviewing in-person can also save developers from context-switching. If you have enough reviewers on a team, developers can get a review as soon as they finish their code rather than starting on another task and subsequently switching back to deal with review feedback. This obviously comes at the cost of the reviewers having to make themselves highly interruptible though.

A lot of the literature on code reviews also favours some kind of in-person reviews. Here’s one particularly strongly stated example:

“Effective code reviews are not done via a tool or remotely—they are done when you’re sitting side-by-side with the person or pair who just wrote the code. This personal way allows you to share and teach much more information than you can pass in a text-based tool. Don’t skimp on this! If you’re going to do code reviews because your code sucks, do them right.” – Roy Osherove in Notes to a Software Team Leader

On the other hand, some of our reviewers felt that asynchronous online reviews were better partly because they don’t provide the reviewer with any additional context. Online reviews arguably make for a more authentic review of maintainability (future developers on the project probably won’t have the luxury of talking through the code with the original developer). Also, coming at the review from their own angle might allow the reviewer to spot issues that the developer has missed.

One major advantage of online tools is that they leave a permanent record of review comments. Some tooling combinations make it particularly easy to go back through old reviews (for example, Crucible with JIRA integration). Several people had worked on projects where they had benefited from the ability to do this.

Several people found it useful to mix online and in-person approaches, perhaps depending on the nature of the change. For example:

  • Performing a high-level review in isolation first, then talking through with the developer for context, before finally performing a line-by-line review online.
  • Saving face-to-face reviews for bigger or more complex changes
  • Carrying out most of the review discussion in person, but using an online tool to track this. That is, initiating the review process and documenting any important outcomes of the discussion.

Reviewers: Making code reviews better for the developer

Quite a few people found phrasing review comments to be a challenge, especially when using online review tools. Some of our reviewers were concerned about whether we did enough to make new starters comfortable with the process, and to make it clear that they can and should challenge their reviewers. After all, the developer is always closest to the code and knows it best. It can be worth a reviewer (particularly one in a more senior position) reminding the developer of this explicitly.

Ways to make reviews more positive included:

  • Phrasing review comments as questions or suggestions rather than statements
  • Talking through major issues in person rather than writing lengthy review comments
  • Talking through and fixing batches of very minor issues in person, rather than writing lots of tiny review comments
  • Remembering to always make some positive comments (especially in reviews with some criticisms elsewhere)

This might be more or less important depending on the developer. Generally, reviewers should be conscious of the recipient of the code review and tailor things to them. Extra tact may be required when reviewing code from external third parties, which can be politically awkward.

Developers: Making code reviews better for the reviewer

This wasn’t a question we had set out to answer with our discussion. However, people naturally mentioned how they approached submitting their own work for review, and several common points arose:

  • Performing their own review of the changes first (ideally in the same review tool the reviewer will be using)
  • Linking to any relevant context (e.g. the relevant ticket in the project’s issue tracker)
  • Keeping auto-generated files out of the review (if appropriate and the review tool allows)
  • Splitting into sensible commits
    • Especially keeping big renames, file moves, or other refactorings separate
    • Also splitting code changes across multiple commits where appropriate
    • On one project, we experimented with committing and reviewing one test at a time, resulting in many small reviews. On a small team, this turned out to be a very effective workflow.

Summary

As we saw in the previous post, there are many different valid approaches to code reviews. Organisations should give teams the flexibility to choose a code review process that meets their needs. The first post in this series covered the wide and varied benefits of code reviews. As a team, you should reflect on your code review process, considering what value it provides and what further value it could provide. This will allow you to evolve your code review process to be more effective. I hope this series gives you some ideas that you find useful. Please feel free to share your own ideas on code reviews in the comments below.

Digital Transformation Essentials


1 February 2017


Digital Transformation


We have met with business leaders from across the globe in diverse industries. They see the impact technology is having on their business and say, ‘We need that!’ They recognise the need to exploit digital transformation opportunities to discover new markets, find ways to do business more effectively and respond to the challenges from new entrants and movers in their sector. They realise that embracing technology will enable them to be more responsive to potential and existing customer demands. They recognise that being technically complacent will mean lost opportunities, lost market share, lost customers. But they don’t know how to go about getting the benefits that come from using technology in new and disruptive ways.

In the main, the business leaders we speak to are asking:

  • How do I create a company culture that encourages and enables exploration and experimentation whilst acknowledging and managing risk?
  • Where am I going to get the skills I need to make this happen? I have great people in my business but this is new to us.
  • How do I make sure I’m getting value for money and not just kicking off projects that will wither on the vine?
  • What benefits and ROI should I expect from digital transformation initiatives?

Below are some of our responses to these questions.

Company Culture

Think BIG. Plan BIG. Start small.

Organisationally, you need to know where you want to get to and have clarity on the vision you are trying to achieve, whilst staying flexible about how you get there.

Core questions include:

  • What do you want to achieve?
  • What will be different as a consequence?
  • How will you know it’s been successful?
  • How long can you take to get started?
  • When do you need to start seeing results?

Answering these questions and others will give you a strong footing for making key decisions and a reference point when it starts to get hard and you come up against challenges or resistance.

The key point: Clarity. For you, for your leadership team and for your employees. Everybody needs to know what the plan is, how it’s going to happen and how excited leadership are about the journey. And when you tell your people about it – always err on the side of over-communication. Tell and retell the story about how great things are going to be. Celebrate the successes. Publish them. Share them. Make a lot of positive noise.

Once you have clarity, you need to make someone responsible for action. We recommend a key member of the executive team. The most commonly selected role type is the CTO/CDO. They can then get on with selecting their team, making their lower level plans and executing them. We recommend a cross-functional core team. The make-up of the core team will depend on your objectives but role types include accountable leader, line of business owner who is seeking change, technical architecture specialists, business process specialists and programme management. Depending on the size of the organisation some of these roles may be covered by a single person. The key at this stage is to get a plan in place and start getting stuff done.

We recommend starting small. Based on the strategic vision and objectives, this team should select some key hypotheses to validate, then use experimentation techniques to understand whether the hypotheses will return the expected results. To facilitate experimentation whilst managing risk, businesses should adopt rapid innovation tools such as Lean and Agile. They should also consider coupling these to a change approach such as Kaizen.

Lean and Agile approaches allow businesses to quickly validate or discard hypotheses, whilst minimising investment. Coupling them to Kaizen as an evolutionary, incremental change method allows management of significant change from existing operating models without alienating staff along the way. Used correctly, this approach can create a cultural paradigm shift. Whilst some businesses will have experience of these tools and methods, where you do not, we recommend investing in training and finding a partner who can work with you to embed this capability in the organisation.

The key here is to have tools and processes for getting stuff done and Getting On With It.

I.T. Skills Shortage

Look inside and outside the organisation for talent that is both complementary and challenging

There is a current and growing IT skills shortage. At some point this is likely to have a direct impact on your ability to achieve your objectives. It’s an elephant in the room. Adopting a cross-functional and shared-services approach can address some IT shortages, but it will not supply the skills that are lacking within a business. Therefore, you need to be open to, and actively seek, ways to create a highly collaborative culture. Leaders also need to seek external collaboration opportunities with partners, such as Softwire. These partners should have, or be able to develop, a deep understanding of your business and help design and implement digital technology solutions.

Controlling costs and generating ROI

Leverage legacy systems to free up investment capital. Allow for reasonable failure. Learn from it fast.

Leaders need to be clear on how they are going to invest and how they are going to measure both tangible and intangible ROI across the organisation. Digital transformation is technology driven. However, it is not solely driven by the I.T. department. It crosses lines of business. It impacts ways of doing business – people, process and policy. It can succeed or fail based on the buy-in and attention given by people who are not directly I.T. staff.

Organisational silos can be a significant impediment to digital business transformation. When people are protective of their ‘turf’ or budgets this gets in the way of disruptive innovation. As already mentioned, creating cross-functional teams can reduce the negative impact of silos and this protectionism. Getting the right people hooked into the process and empowering them with clarity of purpose and confidence enables each team member to give their expertise and insight.

Adopting Lean and Agile methods means you can commit to small, incremental investments based on validating specific hypotheses – whether the outcome is learning quickly to kill an idea, or pressing the button to scale a proof of concept into a fully-fledged customer offering. The key is to keep investment small, work quickly to learn all you can and make active decisions based on evidence.

Where investment grows without checks and balances on the value, where decisions get bogged down in unnecessary bureaucracy or committees, you will eventually find a disgruntled finance executive demanding that this ‘waste of money’ be canned. So, when you have success, celebrate it. Make sure it’s shared widely and repeatedly.

We all know budgets are always tight. In most organisations the I.T. department is seen as a cost centre, especially since 80% of an I.T. budget is generally spent on maintenance and support of legacy systems. As a consequence, we recommend leveraging existing legacy technologies and processes rather than creating new systems. That said, one of the major challenges with legacy systems is the inertia from decades of systems and processes. It’s true that the business needs to invest in and maintain the systems it relies on to operate. However, this is an area where budget can be freed up to aid experimentation with new technologies. In addition, years of organic growth in legacy systems across multiple lines of business can lead to a complex matrix of technologies and processes. We suggest significant benefits are achievable from harmonising processes, in particular where customers have to engage with these systems.

Conclusion

There’s a battle outside ragin’, It’ll soon shake your windows, And rattle your walls, For the times they are a-changin’[i].

We are in a period of significant upheaval across the business landscape. Macro and local economic impacts are meteoric. Technology disruption and innovation impacts are seismic. Whole sectors have been decimated. Some are under attack right now. Others are seeing the early waves breaking against their shores.

We have seen traditional responses to these attacks fail.

In addition, customers are much more savvy. They realise how powerful they are. They demand to engage with the business on their terms. The quality of experience and service they receive is ever more in direct proportion to the level of loyalty they are willing to give. Customer tolerance for a subpar experience is at an all-time low. We see this demonstrated in the way they move on to a new supplier almost as soon as something does not suit them.

This behaviour alone is driving digital transformation, and shaking up businesses. With customers expecting an experience that is fast, efficient and simple, we have to find ways of meeting their needs, or be left behind. It’s little wonder business leaders are looking at leading technology companies and saying, ‘We need that!’

[i] The Times They Are A-Changin’, Bob Dylan, 1964.


The many benefits of code reviews, and how to achieve them – 2. How do we go about code reviews?


1 February 2017

This is part of a series of blog posts on code reviews, based on two sessions of an internal discussion forum at Softwire. See the first post in this series for more information. In this post, we’ll cover some of our current approaches to code reviews.

We tend to at least implicitly perform code reviews in multiple passes. These break down into three stages:

  • “Outside-in” preliminary review
    • Reading through the original user story or defect
    • Reviewing the design
    • Checking out the dev branch and doing some cursory testing (this can be useful for reviewing UI issues or things that are hard to spot from the code or by automated tests)
    • Reviewing the tests at a high level (do they function as good developer documentation for the code?)
  • Review of the code itself and the tests in detail
  • Review of any activities surrounding the code change, e.g.:
    • Manual testing
    • External documentation
    • Risk/impact assessment

Note that not every project needs all of these passes. The point is that “code review” is a broad term covering a range of activities. Which activities you carry out, and when, may vary by project, although within each project there’s a lot of value in being consistent. Consistency helps developers become comfortable with the review process, and makes code reviews a much more reliable tool for quality assurance.

When do we review?

As noted above, different code review activities may be carried out at different times. There was a general consensus in our discussions that reviewing earlier is preferable. Most projects insisted on at least some form of review before commit, although a few relaxed this in special cases (depending on the type of project) to avoid becoming a bottleneck.

About half our teams are actively performing up-front High-Level Design reviews. These can be useful for everyone but especially for less experienced developers (which might just mean less experience with the particular project). They encourage working through design issues up front, avoiding wasted time at the implementation stage. It also means the code reviews can then focus on just the code. The only problem mentioned with HLD reviews was that it can be a bit unclear what we mean by an HLD, and sometimes people go too low-level. For projects broken up into well-sized tasks, an HLD could just be a couple of sentences and a few bullet points.

An alternative to up-front HLD reviews is reviewing roughly implemented code, essentially a spike or proof-of-concept. This can be particularly useful on tricky legacy codebases, where it might be hard to see how to go about introducing new functionality.

Who carries out reviews?

Most of the people in our discussions were their team’s technical lead. Unsurprisingly, tech leads were doing reviews themselves, but there was a lot of support for reviews not being done solely by the tech lead. Getting more people involved in the review process is a good way to build people’s confidence, share knowledge within the team, and help people become more comfortable with the review process. One person doing all the reviews can also become a bottleneck and slow the team down. Perhaps more importantly, giving developers the autonomy to carry out genuine peer reviews is a show of faith in the team’s ability, and makes it easier for reviews to act as a positive motivator.

One problem with having multiple people involved in reviewing is that it can become confusing for developers. It’s not always clear how to pick an initial reviewer, or when a review would be considered “done”. It’s important for each team to agree on a consistent approach, although approaches can of course vary between teams. Most of our teams use one of the following approaches:

  • Let the developer choose the initial reviewer, and allow the developer or the reviewer to escalate to the tech lead if needed
  • Have a high-level second line review as part of the standard review process
  • Include the tech lead on every review, but allow developers to merge their changes as soon as at least one person has reviewed them. This prevents the tech lead becoming a bottleneck but still gives them a chance to go into detail on any red flags.

All the above approaches include the possibility of the tech lead acting as a second line reviewer. Our tech leads would go into more or less detail in their review based on the nature of the change and the experience of the other people involved (i.e. the original developer and reviewer). In some cases this might mean just reviewing the comments from the first-line reviewer and/or looking out for changes within common problem areas of the codebase.

How much detail to go into in a second line review is a matter of judgement and may not be obvious. It can help to think of the goal of reviewing as gaining trust that the code is up to standard, and getting involved enough to meet this goal. Of course, it’s still worth bearing in mind the importance of code reviews for training and mentoring. A second-line reviewer may be looking out for learning opportunities for both the developer and the initial reviewer. They’re also in a position to assess the quality of the interaction between these two roles. This will be the subject of the next post.

The many benefits of code reviews, and how to achieve them – 1. Introduction


25 January 2017

There is a lot of writing on the importance of code reviews and how to go about them at a low level (i.e. what to look for in the code). This series of posts takes a higher-level look at how to approach code reviews and how to maximise their benefits. We’ll also refer throughout to other useful writing on code reviewing and how to make the most of it.

At Softwire, we have always carried out code reviews in one form or another. But our methods and tooling have evolved over time, and often varied between projects. This allows each project team to find a way of working that best suits them.

We recently ran a couple of internal lunchtime discussion forums to talk about code reviewing and pool our collective experience. The goal here wasn’t to try and agree on “one true code review methodology”, but just to share ideas between teams. We discussed the perceived benefits and overall aims of code review, and how we go about achieving these.

It turned out that the approaches to code reviews within the company are even more varied than I’d thought. But there was some broad consensus on the benefits and aims of code reviews. However, the benefits that we valued most may surprise you.

What value do we get from code reviews?

Both of our discussions came up with a similar consensus on the range of benefits provided by code reviews. In rough order of their prominence in the discussions, these were:

  • Training and mentoring
    • In fact, several people felt that the healthiest approach to code reviewing was to treat it as a training opportunity first
  • Knowledge sharing
    • In both directions (i.e. both the developer and the reviewer had opportunities to learn from one another)
    • This includes sharing domain knowledge, general technical knowledge, and knowledge of the specific codebase
  • Encouraging people to produce better work (knowing that it will be scrutinised by your peers)
  • Keeping the codebase consistent, in terms of style and structure
  • More generally, keeping the codebase maintainable
  • Catching defects, or at least catching them earlier (and so making them cheaper to fix)

One point I found interesting here was that “catching defects” was one of the last points to come up, in both discussions. There was a lot more emphasis on the holistic benefits of code review: training, knowledge sharing, and improving the overall codebase.

Quality assurance is the main focus of some widely-referenced sources on code reviewing. For example, Jeff Atwood’s blog post Code Reviews: Just Do It, and the two books that it references (Peer Reviews in Software and Code Complete). These mainly focus on reducing errors/defects/bugs, and only briefly mention other benefits. Of course, improving defect detection is not a bad argument for doing code reviews. Perhaps it’s also the most persuasive one in organisations that don’t have a history of doing code reviews and are reluctant to let people start spending time on it.

Defect detection is a valuable and somewhat measurable benefit, which may make it a good angle to sell the idea of doing code reviews at all. However, for the developers and tech leads who actually carry out code review, it doesn’t need to be the only aim or even the primary aim of the exercise. So how do we go about code reviews at Softwire? We’ll cover this in the next post.

A quick guide to JavaScript streams


16 January 2017

In this short article, I’ll lead you through a crash course on what a stream is, and how you can harness them to make your code more understandable.

If you want to try out any of the examples here, they will work with the excellent library bacon.js.

So what is a stream?

A stream is a sequence of objects, but unlike other sequences, it’s handled lazily (so the items aren’t retrieved until they’re required). This means that unlike a normal List, it doesn’t have to be finitely long.

You can put anything in your stream: characters, numbers, or even user events (like keystrokes or mouse presses). Once you’ve got your sequence, you can manipulate it in lots of different ways.

Streams can also completely remove the requirement to think about time, meaning fewer race conditions and therefore more declarative code. This removal of the need to represent time in code also results in architecture diagrams which are easy to draw and reason about.

Manipulating your stream

Now that we know the purpose of a stream, how do we manipulate it? What follows is a selection of functions that can be called on streams along with visual representations of the effect of these functions when called. It should provide everything needed to set up and use some programs which utilise streams.

map(f)

Map takes a function f. Then, for every item that appears in the stream, it applies the function f to it and produces a new stream with the new values.
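
A minimal sketch with bacon.js (the input values and doubling function here are invented for illustration):

var Bacon = require("baconjs");

// Double every value that appears in the input stream
Bacon.fromArray([1, 2, 3])
  .map(function (x) { return x * 2; })
  .log(); // prints 2, 4, 6 (then <end>)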

scan(f)

Scan takes a function f. Then, for each value in the stream, it applies the function to the previous result along with the current value. This is similar to fold, but the output is a stream of all the intermediate values rather than just the final value, which fold would provide.
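
In bacon.js specifically, scan also takes a seed value to use as the initial result. A sketch, again with invented values:

var Bacon = require("baconjs");

// Running total: every intermediate sum is emitted, starting from the seed
Bacon.fromArray([1, 2, 3, 4])
  .scan(0, function (acc, x) { return acc + x; })
  .log(); // prints 0, 1, 3, 6, 10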

filter(f)

Filter takes a function f, then computes the function for each value; if it evaluates to true, the value is put into the output stream, otherwise it is discarded.
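
For example:

var Bacon = require("baconjs");

// Keep only the even numbers
Bacon.fromArray([1, 2, 3, 4, 5, 6])
  .filter(function (x) { return x % 2 === 0; })
  .log(); // prints 2, 4, 6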

merge

There are several ways to combine two streams. Using merge will take values from both streams and output values (in the order they are received) into the output stream.

This differs from concat, which takes two streams and returns all of the first stream before returning all of the second stream.
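
A sketch of both (concat’s ordering is deterministic; merge’s depends on when each source actually emits, so the timer streams here are invented to show interleaving):

var Bacon = require("baconjs");

// concat: everything from the first stream, then everything from the second
Bacon.fromArray([1, 2, 3])
  .concat(Bacon.fromArray([4, 5, 6]))
  .log(); // prints 1, 2, 3, 4, 5, 6

// merge: values from both streams, in whatever order they arrive
Bacon.interval(500, "a")
  .merge(Bacon.interval(800, "b"))
  .take(5)
  .log(); // e.g. a, b, a, a, b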

sampledBy(f)

SampledBy takes a function f and two streams (A and B). The function f is then called whenever a value is provided by stream A, passing in the last value from each stream. In the example below, a value is created in the output stream whenever a value occurs in the top stream. The output is then created from the value in the bottom stream.
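
As a sketch (the button and input elements here are hypothetical), emitting the latest text value each time the button is clicked:

var Bacon = require("baconjs");

var clicks = Bacon.fromEvent(document.querySelector("#btn"), "click");
var text = Bacon.fromEvent(document.querySelector("#input"), "input")
  .map(function (e) { return e.target.value; })
  .toProperty("");

// f receives the latest value from each stream whenever a click occurs
text.sampledBy(clicks, function (value, click) { return value; })
  .log();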

slidingWindow(n)

SlidingWindow takes an integer n and returns the last n values from the stream as an array each time a new value occurs in the input stream.
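
A sketch (in bacon.js, slidingWindow returns a Property that starts from the empty array):

var Bacon = require("baconjs");

// Arrays of up to the last 3 values
Bacon.fromArray([1, 2, 3, 4, 5])
  .slidingWindow(3)
  .log(); // prints [], [1], [1,2], [1,2,3], [2,3,4], [3,4,5]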

Stream Summary

So in summary, you can use stream functions to manipulate streams in a multitude of different ways. You should use streams to remove the time component from code, allowing for code that is simpler to understand and debug.

This post first appeared on Chris Arnott’s blog.

Using sqlite for dev nodeJS environments


29 November 2016

This post will explain how to get one-click DBs working in NodeJS.

Problems with your zero to hero

I was recently working on a NodeJS project, and it was set up to use a MySQL database. This was fine for production, but meant that people joining the project had several manual steps to get the project working.

  • Install SQL server
  • Set up a SQL database
  • Update the NodeJS config to point to the database.

This isn’t too difficult to do. But I wanted to reduce the ramp-up to getting the project up and running on a dev machine.

This meant moving to a database that could configure itself from whatever was checked into the codebase.

One option would have been to translate the steps above into code so that running a command would create a SQL database that was configured to be used by the code.

This would have been overkill, as there are much better options.

I chose to instead configure dev and test environments to use a local sqlite database.

Local sqlite database

A sqlite database can be added as an npm dependency, so that it is installed as part of dependency management when a new developer checks out the code.

"dependencies": {
  "sqlite3": "3.1.4"
}

Now that sqlite has been added as a dependency, it can be called from the ORM that we’re using. In this example, I’ve used Sequelize as the ORM, as once set up, it allows easy mapping of JavaScript objects to database objects, as well as having built-in support for database migrations.
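
As a rough sketch of what that mapping looks like (this User model and its fields are invented for illustration, not part of the project in question):

var Sequelize = require('sequelize');

// Hypothetical model definition: maps User objects to a "Users" table
module.exports = function (sequelize) {
  return sequelize.define('User', {
    name: { type: Sequelize.STRING, allowNull: false },
    email: { type: Sequelize.STRING, unique: true }
  });
};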

The next thing to do is to have a configuration file which details how to connect to the database in different environments:

module.exports = {
  development: {
    dialect: 'sqlite',
    storage: 'data/dev-db.sqlite3'
  },
  test: {
    dialect: 'sqlite',
    storage: 'data/test-db.sqlite3'
  },
  production: {
    username: process.env.RDS_USERNAME,
    password: process.env.RDS_PASSWORD,
    database: 'ebdb',
    port: process.env.RDS_PORT,
    host: process.env.RDS_HOSTNAME,
    dialect: 'mysql'
  }
};

This config.js file shows that in dev and test mode, we’ll connect to the sqlite database which is stored in the file “data/<env>-db.sqlite3”.

In production, it instead connects to an RDS instance (the production machine in this project was running on AWS), with the connection details stored in the environment on the cloud machine (rather than being checked into the code).

Now we need to set up Sequelize to use the correct database when it is initialised. This is done in the index.js file under the models folder:

var path = require('path');
var Sequelize = require('sequelize');
const basename = path.basename(module.filename);
const env = process.env.NODE_ENV || 'development';
const config = require(path.join(__dirname, '..', 'config', 'config.js'))[env];
const db = {};
var sequelize = new Sequelize(config.database, config.username, config.password, config);

This code fetches the relevant section of the config.js file, and uses it to initialise the ORM. What’s not shown above is the code that loads all the models into Sequelize. We’ll leave that as an exercise for the reader.
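
If you’d like a hint for that exercise, here is one possible sketch, continuing the index.js above and assuming each model file exports a function like the hypothetical User model earlier:

var fs = require('fs');

// Require every other .js file in the models folder and register its model
fs.readdirSync(__dirname)
  .filter(function (file) {
    return file.indexOf('.') !== 0 && file !== basename && file.slice(-3) === '.js';
  })
  .forEach(function (file) {
    var model = require(path.join(__dirname, file))(sequelize);
    db[model.name] = model;
  });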

As you can see, this file uses an environment variable to determine which mode we are running the code in. All that’s required to run in production is to set NODE_ENV in the environment variables to be equal to “production”.

The app can now be started using “npm start”, which will start the app using the dev database.

Migrations

Just a quick note here on using Sequelize for database migrations. You can hook your migrations into the “npm start” command above by adding the following to your package.json file:

"scripts": {
    "prestart": "sequelize db:migrate -c config/config.js"
}
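
For reference, a minimal migration file might look something like this (the table and columns here are hypothetical):

module.exports = {
  up: function (queryInterface, Sequelize) {
    return queryInterface.createTable('Users', {
      id: { type: Sequelize.INTEGER, primaryKey: true, autoIncrement: true },
      name: { type: Sequelize.STRING }
    });
  },
  down: function (queryInterface) {
    return queryInterface.dropTable('Users');
  }
};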

Summary

So what have we learnt?

  • You should try and ensure setting up a project is as easy as possible for new developers
  • You can use environment variables to separate out dev/test/production environments (defaulting to development).
  • This method can work, even if you are deploying to the cloud.
  • You can easily add database migrations into the start command of your application.

This post originally appeared on Chris Arnott’s personal blog.

Better CSS-only tabs, with :target


2 November 2016

CSS-only tabs are a fun topic, and :target is a delightfully elegant declarative approach, except for the bit where it doesn’t work. The #hash links involved jump you around the page by default, and disabling that jumping breaks :target in your CSS, due to poorly defined behaviour in the spec, and corresponding bugs in every browser.

Or so we thought. Turns out there is a way to make this work almost perfectly, despite these bugs, and get perfect CSS-only accessible linkable history-tracking tabs for free.

What? Let’s back up.

What’s :target?

:target is a CSS pseudo-class that matches an element if its id is the hash part of the URL. If you’re on http://example.com#my-element for example, div:target would match <div id="my-element"> and not <div id="abc">.

This ties perfectly into <a href="#my-element">. All of a sudden you can add links that apply a style to one part of the page on demand, with just simple CSS.

There’s lots of fun uses for this, but an obvious one is tabs. Click a link, the page updates, and that part of the page is shown. Click another link, that part of the page disappears, and a different part is shown. All with working history and linkability, because the URL is just updating like normal.

Implementation is super easy, and works in 98% of browsers, right back to IE9. It looks like this:

<!DOCTYPE html>
<html>
<head>
  <style>
    div:not(:target) {
      display: none;
    }
    
    div:target {
      display: block;
    }
  </style>
</head>
<body>  
  <a href="#tab-1">Tab One</a>
  <a href="#tab-2">Tab Two</a>

  <div id="tab-1">
    Tab one contents
  </div>
  
  <div id="tab-2">
    Tab two contents
  </div>  


</body>
</html>

Hop, Skip, and Jump

If you try to do this in any page longer than the height of your browser though, you’ll quickly discover that this can jump around, which can be annoying. Try http://output.jsbin.com/melama — you’ll have to scroll back up after hitting each tab. It’s functionally fine, and in some cases this behaviour might be ok too, but for many it isn’t exactly what you want.

This is because #hash links not only update the URL, but also scroll down to the element with the corresponding id on the page. We can fight that behaviour though, with a little extra JavaScript.

JavaScript, in our beautiful CSS-only UI? Sadly yes, but it’s important to understand why this is still valuable. With a tiny bit of JS on top we still retain the benefits that a CSS-based approach gives us (page appearance independent of any logic, defined declaratively, completely accessible, and with minimal page weight) and then progressively enhance it to improve the UI polish only when we can.

In environments where JavaScript isn’t available the tabs still work fine; they swap, and the browser jumps down to the tab you’ve opened. When you do have JavaScript, we can enhance them further, to tune the exact behaviour and polish that as we like.

Our JavaScript enhancement looks something like this:


var hashLinks = document.querySelectorAll("a[href^='#']");
[].forEach.call(hashLinks, function (link) {
  link.addEventListener("click", function (event) {
    // Disable jumping around and URL updates
    event.preventDefault();
    // Update the URL ourselves instead
    history.pushState({}, "", link.href);
  });
});

This disables the normal behaviour on every #hash link, and instead manually updates the hash and the browser history. Unfortunately though it doesn’t work. The URL updates, but the :target selector matching doesn’t!

You end up with #tab-1 in the URL, but <div id="tab-1"> doesn’t match :target. The spec doesn’t actually cover this case and every single browser currently has this crazy behaviour (although both the spec authors and browser vendors are looking at fixing this). Game over.

Two steps forward, one step back

We can beat this. Right now we can disable jumping around, but it breaks :target. We need a way to disable jumping around, but still update the hash in a way that :target will listen to.

There are a few ways we might be able to work around this. Ian Hansson has a clever trick where you position: fixed and hide the targeted element, to control the scroll, but depend on its :target status with sibling selectors. Meanwhile Chris Coyier has suggested capturing the scroll position and resetting it, and I think there might be a route through if you change the element’s id to something else and back at just the right time too. These are all very hacky though; it’d be nice to come up with a way of just fixing the JS we want to use above, so it actually works properly.

Fortunately a helpful comment from Han mentions some interesting current browser behaviour that might help us out, while examining compatibility issues with fixing this for real:

I’ve found a good reason to believe that virtually no webpages would be broken by the “breaking” change of updating :target selectors upon pushState: though they are currently not updated, if you however hit Back and then Fwd again, then :target rules do get applied

Moving in the browser history gives us the :target behaviour we (and all sane people) are expecting. If we can find a way to transparently go back and forward without breaking the user’s experience, we can fix :target.

From there it’s easy. We can build a simple workaround in JavaScript to do exactly this, and save the day:

var hashLinks = document.querySelectorAll("a[href^='#']");
[].forEach.call(hashLinks, function (link) {
  link.addEventListener("click", function (event) {
    event.preventDefault();
    history.pushState({}, "", link.href);
    
    // Update the URL again with the same hash, then go back
    history.pushState({}, "", link.href);
    history.back();
  });
});

Here we add the hash to the history twice, and immediately remove it once.

This isn’t perfect, and it would be nice if :target worked properly without these workarounds. As-is though, this gives perfect behaviour, with the only downside being that the Forward button in the user’s browser isn’t displayed as disabled, as they might expect. Actually going forward won’t do anything though (it’s the same URL they’re already on), and your users are not typically going to notice this.

This will keep working even if/when :target is fixed, and you’ll then be able to remove the extra workaround here, to lose that slightly messy extra behaviour. If this does break, or any users don’t have JavaScript, they’ll get working tabs with jumping-to-tab behaviour. Slightly annoying, but perfectly usable.

So what?

This lets you build amazingly simple & effective CSS-only tabs.

Minimal clean markup & CSS, perfect accessibility, working in IE9, with shareable tab URLs and a fully working history for free. Enjoy!

Full code:

<!DOCTYPE html>
<html>
<head>
  <style>
    div:not(:target) {
      display: none;
    }
    
    div:target {
      display: block;
    }
    
    /* Make the div big, so we would jump, if the JS was still broken */
    div {
      height: 100vh;
    }
  </style>
</head>
<body>  
  <a href="#tab-1">Tab One</a>
  <a href="#tab-2">Tab Two</a>

  <div id="tab-1">
    Tab one contents
  </div>
  
  <div id="tab-2">
    Tab two contents
  </div>
  
  <script>
  // Stop href="#hashtarget" links jumping around the page
  var hashLinks = document.querySelectorAll("a[href^='#']");
  [].forEach.call(hashLinks, function (link) {
    link.addEventListener("click", function (event) {
      event.preventDefault();
      history.pushState({}, "", link.href);
      history.pushState({}, "", link.href);
      history.back();
    });
  });
  </script>
  </body>
</html>

Cross platform phone apps


28 October 2016

If you want to write a phone app that runs on multiple platforms, but don’t want to spend large amounts of time maintaining two code bases, there are several solutions that allow you to write one app and deploy it to several platforms.

These multi-platform apps work by running a mini website on a phone, accessed via a web view, which is how the app appears native.

In this post, we’ll discuss several different approaches to writing a multi-platform app, and look at the situations in which you should choose each option.


Tips for managing technical people – Blog Post wrap up


14 October 2016

Over the past few months, we’ve been posting excerpts from my new book, “Galvanizing the Geeks – Tips for Managing Technical People”. You can buy the full book on my website, here.

This post serves as a reference to all the snippets that are freely available here.
