Softwire Blog


Launching the Minimum Viable Diversity Pledge


3 November 2016, by

Diversity in tech is a major problem, and tech isn’t alone in this. One of the places where this is most visible is in speaker lineups and panels at events. Across industry after industry, speakers at events are extremely rarely from marginalized groups, including women, people of colour, people with disabilities and members of the LGBT community. Not only are they underrepresented, in many cases events don’t feature any diversity at all.

This is bad. Encouraging diversity not only makes the world a happier, fairer place, it also has concrete benefits for productivity and problem solving. When you never see a lineup of role models who are anything like you, you don’t feel included, it’s hard to be inspired, and you end up being slowly pushed out of the industry.

Diversity matters at Softwire, and we want to do everything we can to improve this. Today with the help of the Women in Engineering Society and the 30% Club we’re launching a new initiative to take a concrete step forwards on event diversity: the Minimum Viable Diversity Pledge.

Minimum Viable Diversity Pledge logo

The goal of the Minimum Viable Diversity Pledge is to totally stop the worst offenders for speaker diversity. By pledging, you’re committing to never actively supporting a paid event or panel that includes zero diversity whatsoever. This is a minimum bar, and we’d encourage people to go further, but the low bar is key.

We can all agree that such events have clear problems, and that finding at least one underrepresented speaker for any given topic is achievable. A low bar focused on this makes it easy for as many speakers, attendees, events and companies as possible to sign up. Once that reaches a critical mass, running a paid event and totally ignoring diversity becomes impossible.

The world we're aiming for here is one where every event organiser has at least 2 or 3 speakers accept their invite on the condition that there's at least some diversity in the lineup, along with attendees checking there'll be at least some diversity included before they buy tickets. Once that happens, you can't run an event without thinking about diversity, and you can't host a lineup of identical voices without a few of them publicly dropping out. This won't solve diversity overnight, but it does make life far more difficult for those who totally ignore it, and provides steady pressure on every event to actively put in at least a little effort towards this issue.

There are four pledges, for speakers, attendees, events themselves, and companies, so everybody can get involved:

  • Speaker: I will never speak at any paid conferences or panels as part of a homogeneous group of speakers.
  • Attendee: I will never attend any paid conferences or panels with a homogeneous group of speakers.
  • Event: We will never organise an event lineup or panel with a homogeneous group of speakers.
  • Company: We will never sponsor or organise paid conferences or panels with a homogeneous group of speakers, we will strongly encourage our employees not to attend or speak at such events, and we’ll support them in raising diversity concerns with events directly.

This is where we need your help. This only works if enough people sign up and get involved. Take a concrete step on diversity, help shut down the worst offenders, and make it impossible to run a paid event that ignores its responsibilities. Sign the pledge.

Better CSS-only tabs, with :target


2 November 2016, by

CSS-only tabs are a fun topic, and :target is a delightfully elegant declarative approach, except for the bit where it doesn’t work. The #hash links involved jump you around the page by default, and disabling that jumping breaks :target in your CSS, due to poorly defined behaviour in the spec, and corresponding bugs in every browser.

Or so we thought. Turns out there is a way to make this work almost perfectly, despite these bugs, and get perfect CSS-only accessible linkable history-tracking tabs for free.

What? Let’s back up.

What’s :target?

:target is a CSS pseudo-class that matches an element if its id is the hash part of the URL. If you're on http://example.com#my-element, for example, div:target would match <div id="my-element"> and not <div id="abc">.

This ties perfectly into <a href="#my-element">. All of a sudden you can add links that apply a style to one part of the page on demand, with just simple CSS.

There’s lots of fun uses for this, but an obvious one is tabs. Click a link, the page updates, and that part of the page is shown. Click another link, that part of the page disappears, and a different part is shown. All with working history and linkability, because the URL is just updating like normal.

Implementation is super easy, and works in 98% of browsers, right back to IE9. It looks like this:

<!DOCTYPE html>
<html>
<head>
  <style>
    div:not(:target) {
      display: none;
    }
    
    div:target {
      display: block;
    }
  </style>
</head>
<body>  
  <a href="#tab-1">Tab One</a>
  <a href="#tab-2">Tab Two</a>

  <div id="tab-1">
    Tab one contents
  </div>
  
  <div id="tab-2">
    Tab two contents
  </div>  


</body>
</html>

Hop, Skip, and Jump

If you try to do this in any page longer than the height of your browser though, you’ll quickly discover that this can jump around, which can be annoying. Try http://output.jsbin.com/melama — you’ll have to scroll back up after hitting each tab. It’s functionally fine, and in some cases this behaviour might be ok too, but for many it isn’t exactly what you want.

This is because clicking a #hash link not only updates the URL, but also scrolls down to the element with the corresponding id on the page. We can fight that behaviour though, with a little extra JavaScript.

JavaScript, in our beautiful CSS-only UI? Sadly yes, but it’s important to understand why this is still valuable. With a tiny bit of JS on top we still retain the benefits that a CSS-based approach gives us (page appearance independent of any logic, defined declaratively, completely accessible, and with minimal page weight) and then progressively enhance it to improve the UI polish only when we can.

In environments where JavaScript isn’t available the tabs still work fine; they swap, and the browser jumps down to the tab you’ve opened. When you do have JavaScript, we can enhance them further, to tune the exact behaviour and polish that as we like.

Our JavaScript enhancement looks something like this:


var hashLinks = document.querySelectorAll("a[href^='#']");
[].forEach.call(hashLinks, function (link) {
  link.addEventListener("click", function (event) {
    // Disable jumping around and URL updates
    event.preventDefault();
    // Update the URL only ourselves
    history.pushState({}, "", link.href);
  });
});

This disables the normal behaviour on every #hash link, and instead manually updates the hash and the browser history. Unfortunately though it doesn’t work. The URL updates, but the :target selector matching doesn’t!

You end up with #tab-1 in the URL, but <div id="tab-1"> doesn't match :target. The spec doesn't actually cover this case, and every single browser currently has this crazy behaviour (although both the spec authors and browser vendors are looking at fixing this). Game over.

Two steps forward, one step back

We can beat this. Right now we can disable jumping around, but it breaks :target. We need a way to disable jumping around, but still update the hash in a way that :target will listen to.

There’s a few ways we might be able to work around this. Ian Hansson has a clever trick where you position: fixed and hide the targeted element, to control the scroll, but depend on its :target status with sibling selectors. Meanwhile Chris Coyier has suggested capturing the scroll position and resetting it, and I think there might be a route through if you change the element’s id to something else and back at just the right time too. These are all very hacky though; it’d be nice to come up with a way of just fixing the JS we want to use above, so it actually works properly.

Fortunately a helpful comment from Han mentions some interesting current browser behaviour that might help us out, while examining compatibility issues with fixing this for real:

I’ve found a good reason to believe that virtually no webpages would be broken by the “breaking” change of updating :target selectors upon pushState: though they are currently not updated, if you however hit Back and then Fwd again, then :target rules do get applied

Moving in the browser history gives us the :target behaviour we (and all sane people) are expecting. If we can find a way to transparently go back and forward without breaking the user's experience, we can fix :target.

From there it's easy. We can build a simple workaround in JavaScript to do exactly this, and save the day:

var hashLinks = document.querySelectorAll("a[href^='#']");
[].forEach.call(hashLinks, function (link) {
  link.addEventListener("click", function (event) {
    event.preventDefault();
    history.pushState({}, "", link.href);
    
    // Update the URL again with the same hash, then go back
    history.pushState({}, "", link.href);
    history.back();
  });
});

Here we add the hash to the history twice, and immediately remove it once.

This isn’t perfect, and it would be nice if :target worked properly without these workarounds. As-is though, this gives perfect behaviour, with the only downside being that the Forward button in the user’s browser isn’t displayed as disabled, as they might expect. Actually going forward won’t do anything though (it’s the same URL they’re already on), and your users are not typically going to notice this.

This will keep working even if/when :target is fixed, and you’ll then be able to remove the extra workaround here, to lose that slightly messy extra behaviour. If this does break, or any users don’t have JavaScript, they’ll get working tabs with jumping-to-tab behaviour. Slightly annoying, but perfectly usable.

So what?

This lets you build amazingly simple & effective CSS-only tabs.

Minimal clean markup & CSS, perfect accessibility, working in IE9, with shareable tab URLs and a fully working history for free. Enjoy!

Full code:

<!DOCTYPE html>
<html>
<head>
  <style>
    div:not(:target) {
      display: none;
    }
    
    div:target {
      display: block;
    }
    
    /* Make the div big, so we would jump, if the JS was still broken */
    div {
      height: 100vh;
    }
  </style>
</head>
<body>  
  <a href="#tab-1">Tab One</a>
  <a href="#tab-2">Tab Two</a>

  <div id="tab-1">
    Tab one contents
  </div>
  
  <div id="tab-2">
    Tab two contents
  </div>
  
  <script>
  // Stop href="#hashtarget" links jumping around the page
  var hashLinks = document.querySelectorAll("a[href^='#']");
  [].forEach.call(hashLinks, function (link) {
    link.addEventListener("click", function (event) {
      event.preventDefault();
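      // Push the new hash twice, then go back once: the URL ends up right, and
      // the history move makes :target update to match (see the explanation above)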
      history.pushState({}, "", link.href);
      history.pushState({}, "", link.href);
      history.back();
    });
  });
  </script>
  </body>
</html>

Effortlessly add HTTPS to Dokku, with Let’s Encrypt


26 September 2016, by

You've written an application, deployed it using Dokku, and got it all up and running. You've heard a lot about why HTTPS is important nowadays, especially the SEO and performance benefits, and you'd like to add all that, with minimal cost and hassle. Let's get right on that.

Let’s Encrypt is a new certificate authority (an organisation that issues the certificates you need to host an HTTPS site), which provides certificates to sites entirely for free, by totally automating the system, to help get 100% of the web onto HTTPS. Sounds great, right?

To set this up, you're going to need to prove you own the domain you're asking for a certificate for, you need to get the certificates and start using them, and you're going to want to have some system in place to automatically renew your certificates, so you never have to think about this again (Let's Encrypt certificates expire every 90 days, to encourage automation). That means we've got a few key steps:

  • Generate a key-pair to represent ourselves (you can think of this as our login details for Let’s Encrypt).
  • Complete Let's Encrypt's Simple HTTP challenge, by signing and hosting a given JSON token at /.well-known/acme-challenge/<token> on port 80 for the given domain name, with our public key. This validates the key pair used to sign this token as authorized to issue certificates for this domain.
  • Request a certificate for this domain, signing the request with our now-authorized key.
  • Set up our server to use this key.
  • Set up an automated job to re-request certificates and update the certificate we’re using at regular intervals.

(Interested in the full details of how the Let’s Encrypt process works? Check out their detailed intro: https://letsencrypt.org/how-it-works/)

Fortunately, you barely have to do any of this! In reality, with Dokku, this is pretty much a case of turning a plugin on, and walking away.

How do you actually do this with Dokku?

First up, we need to install dokku-letsencrypt, which will do most of the setup magically for us (see full instructions here). On your server, run the command below for your dokku version (run ‘dokku version’ to check):

# dokku 0.5+
dokku plugin:update letsencrypt

# dokku 0.4
dokku plugin:update letsencrypt dokku-0.4

To configure it, set an email for your app:

dokku config:set --no-restart myapp DOKKU_LETSENCRYPT_EMAIL=your@email.com

(This email will receive the renewal warnings for your certificate)

Turn it on:

dokku letsencrypt myapp

For Dokku 0.5+, set up auto-renewal:

dokku letsencrypt:cron-job --add

For Dokku 0.4, you’ll have to manually add a cronjob scheduled every 60 days to kick off the auto-renewal process:

dokku letsencrypt:auto-renew

That’s it!

What just happened?

The magic is in the ‘dokku letsencrypt myapp’ command. When you run that, Dokku-LetsEncrypt:

  • Starts a new service, in a Docker container, and temporarily delegates the top-level .well-known path on port 80 to it in Nginx (the server which sits in front of all your services in Dokku and routes your requests).
  • Generates a key pair, and authorizes it for each of the domains you have configured for your app (see ‘dokku domains myapp’).
  • Uses that key pair to get certificates for each of those domains.
  • Configures Nginx to enable HTTPS with these certificates for each of those domains.
  • Configures Nginx to 301 redirect any vanilla HTTP requests on to HTTPS for each of those domains.

The auto-renewal steps above just repeat this work, to keep those certificates up to date later on.

Easy as pie. Don't forget to check it actually worked though! It's quite likely you'll find mixed content warnings when you first turn on HTTPS. Most major CDNs or other services you might be embedding will have HTTPS options available nowadays, so this should just be a matter of find/replacing http: to https: throughout your codebase. With that done though, that shiny green padlock should be all yours.

This post originally appeared on Tim Perry’s personal blog.

Building a Server-Rendered Map Component – Part 2


19 September 2016, by

Part 2: How to use client-side libraries like Leaflet, in Node.

As discussed in Part One: Why?, it’d be really useful to be able to take an interesting UI component like a map, and pre-render it on the server as a web component, using Server Components.

We don’t want to do the hard mapping ourselves though. Really, we’d like this to be just as easy as building a client-side UI component. We’d like to use a shiny mapping library, like Leaflet, to give us all the core functionality right out of the box. Unfortunately though, Leaflet doesn’t run server-side.

This article's going to focus on fixing that so you can use Leaflet with Server Components, but you'll hit the same problems (and need very similar fixes) if you're doing any other Node-based server rendering, including with React. The JS ecosystem right now is not good at isomorphism, but with a few small tweaks you can transform any library you like, to run anywhere.

Let's focus on Leaflet for now. It doesn't run server-side, because there aren't that many people seriously looking at rendering nice UIs outside a browser, so JS libraries are pretty trigger-happy about making big browser-based assumptions. Leaflet expects a few things that don't fit neatly outside a browser:

  • Global window, document and navigator objects.
  • A live element in an HTML DOM to be inserted into.
  • A Leaflet <script> tag on the page, so it can find its own URL, and from that autodetect the path to the Leaflet icons.
  • To export itself just by adding an ‘L’ property to the window object.

All of these things need tricky fixes. Just finding these issues is non-trivial: you need to try and use the library in Node, hit a bug, solve the bug, and repeat, until you get the output you're expecting.

Leaflet is a relatively hard case though. Most libraries aren’t quite so involved in complex DOM interactions, and just need the basic globals they expect injected into them.

So, how do we fix this?

Managing Browser Globals

If you npm install leaflet and then require("leaflet"), you'll immediately see our first issue:

> ReferenceError: window is not defined

Fix this one, and we’ll hit a few more at require() time, for document and navigator too. We need to run Leaflet with the context it’s expecting.

It would be nice to do that by having a DOM module somewhere that gives us a document and a window, and using those as our globals. Let’s assume we’ve got such a module for a moment. Given that, we could prefix the Leaflet module with something like:

var fakeDOM = require("my-fake-dom");
var window = fakeDOM.window;
var document = fakeDOM.document;
var navigator = window.navigator;
[...insert Leaflet code...]

(Instead we could just define browser globals as process-wide Node globals and leave the Leaflet source untouched, but this isn’t good behaviour, and it’ll come back to bite you very quickly if you’re not careful)

Doing something like this will get you much closer. With any reasonable DOM stub you should be able to get Leaflet successfully importing here. Unfortunately though, this fails because of a fundamental difference between browser and Node rendering. On the server, we have to support multiple DOM contexts in one process, so we need to be able to change the document and window.

We can still pull this off though, just taking this a step further with something like:

module.exports = function (window, document) {
  var navigator = window.navigator;
  [...insert Leaflet code...]
}

Now this is a Node module that exports not a single Leaflet, but a factory function to build Leaflet for a given window and document, provided by the code using the library. When called, though, this doesn't actually return anything, as you might reasonably expect; instead it creates window.L, as is common for browser JS libraries. In some cases that's probably ok, but in my case I'd rather leave the window alone, and grab the Leaflet instance directly, by adding the below to the end of the function, after the Leaflet code:

return window.L.noConflict();

This tells Leaflet to remove itself as a global, and just give you the library as a reference directly.

With this, require("leaflet") now returns a function, and passing that a window and document gives you a working ready-to-use Leaflet.

Emulating the expected DOM

We’re not done though. If you want to use this Leaflet, you might define a Server Component like:

var LeafletFactory = require("leaflet");
var components = require("server-components");
var MapElement = components.newElement();
MapElement.createdCallback = function (document) {
  var L = LeafletFactory(new components.dom.Window(), document);
  var map = L.map(this).setView([41.3851, 2.1734], 12);
  L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    maxZoom: 19,
  }).addTo(map);
};
components.registerElement("leaflet-map", {prototype: MapElement});

This should define a component that generates the HTML for a full working map when rendered. It doesn’t. The problem is that Leaflet here is given a DOM node to render into (‘this’, inside the component), and it tries to automatically render at the appropriate size. This isn’t a real browser though, we don’t have a screen size and we’re not doing layout (that’s why it’s cheap), and everything actually has zero height or width.

This isn’t as elegant a fix, but it’s an unavoidable one in any server-rendering approach I think: you need to pick a fixed size for your initial render, and nudge Leaflet to use that. Here that’s easy, you just make sure that before the map is created you add:

this.clientHeight = 500;
this.clientWidth = 500;

And with that, it works.

This fakes layout, as if the browser had decided that this was how big the element is. You can render like this at a fixed size for lots of applications, and potentially add client-side rendering on top to resize too if you want.
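For clarity, here's the whole component again with that fix in place. It's the same code as before, just with the fake layout set up before the map is created:

var LeafletFactory = require("leaflet");
var components = require("server-components");

var MapElement = components.newElement();
MapElement.createdCallback = function (document) {
  // Fake layout, so Leaflet has a size to render at
  this.clientHeight = 500;
  this.clientWidth = 500;

  var L = LeafletFactory(new components.dom.Window(), document);
  var map = L.map(this).setView([41.3851, 2.1734], 12);
  L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    maxZoom: 19,
  }).addTo(map);
};
components.registerElement("leaflet-map", {prototype: MapElement});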

With that added, you can take this component, render it with a cheeky components.renderFragment("<leaflet-map></leaflet-map>") and be given working HTML for a lovely static map you can send straight to your users. Delightful.

There is still one last step required if you want to take this further. Leaflet by default includes a set of icons, and uses the URL of its own script tag in the page to automatically work out the path to these icons. This is a bit fragile in quite a few ways, including in this environment, and if you extend this example to use any icons (e.g. adding markers), you'll find your icons don't load.

This step's very simple though: you just need to set L.Icon.Default.imagePath appropriately. If you want to do that in a nice portable Server Component, that means:

var componentsStatic = require("server-components-static");
var leafletContent = componentsStatic.forComponent("leaflet");
L.Icon.Default.imagePath = leafletContent.getUrl("images");

This calculates the client-facing URL you’ll need that maps to Leaflet’s images folder on disk (see Server-Components-Static for more details).
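Once that's set, markers added inside the component pick up their default icons correctly. For example (a minimal sketch, reusing the L and map variables from the component's createdCallback above):

// After configuring the icon path as above, the default marker icon resolves properly:
L.marker([41.3851, 2.1734]).addTo(map);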

Making this (more) maintainable

There’s one more step though. This is a bit messy in a few ways, but particularly in that we have to manually fork and change the code of Leaflet, and maintain that ourselves in future. It would be great to automate this instead, to dynamically wrap normal Leaflet code, without duplicating it. With Sandboxed-Module we can do exactly that.

Sandboxed-Module lets you dynamically hook into Node’s require process, to transform module code however you like. There’s lots of somewhat crazy applications of this (on-require compilation of non-JS languages, for example), but also some very practical ones, like our changes here.

There’s potentially a very small performance hit on startup from this for the transformation, but for the rest of runtime it shouldn’t make any difference; it hooks into the initial file read to change the result, and then from that point on it’s just another Node module.

So, what does this look like?

var SandboxedModule = require('sandboxed-module');
module.exports = SandboxedModule.require('leaflet', {
  sourceTransformers: {
      wrapToInjectGlobals: function (source) {
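        // Wrap the original Leaflet source in the factory function from earlier,
        // injecting window and document, and returning a clean Leaflet at the end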
        return `
        module.exports = function (window, document) {
          var navigator = window.navigator;
          ${source}
          return window.L.noConflict();
        }`;
      }
  }
});

That’s it! Your project can now depend on any version of Leaflet, and require this wrapped module to automatically get given a Node-compatible version, without having to maintain your own fork.
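As a quick smoke test, you can require the wrapper from plain Node and hand it any window and document pair. In the sketch below, jsdom is used purely for illustration (an assumption on my part, not something this setup requires), and './leaflet-node' is whatever filename you've given the wrapper above:

// Smoke-testing the wrapped module directly under Node.
// jsdom just provides a window/document pair here; Server Components' own DOM works too.
var JSDOM = require("jsdom").JSDOM;
var LeafletFactory = require("./leaflet-node"); // the sandboxed wrapper defined above

var dom = new JSDOM("<!DOCTYPE html><div id='map'></div>");
var L = LeafletFactory(dom.window, dom.window.document);

console.log(L.version); // a normal, ready-to-use Leaflet instance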

This same approach should work for almost any other library that you need to manage server side. It's not perfect — if Leaflet starts depending on other browser globals, things may break — but it should be much easier to manage and maintain than copying Leaflet's code into your project wholesale.

Hopefully in future more projects will improve their native support for running in other environments, and this will go away, but in the meantime there are some relatively simple changes you can make to add Node support to even relatively complex client-side libraries.


Let’s stop there for now. In the next post, we’ll take a proper look at a full working map component, complete with configurability, static content and marker support, and see what you can do to start putting this into action yourself. Can’t wait? Check out https://github.com/pimterry/leaflet-map-server-component for the map component codebase so far.

This post originally appeared on Tim Perry’s personal blog. 

Building a Server-Rendered Map Component – Part 1


12 September 2016, by

Part 1: Why do we need better maps?

Maps are a standard tool in our web UI toolkit; they’re an easy way to link our applications to the real world they (hopefully) involve. At the same time though, maps aren’t as easy to include in your web site as you might want, and certainly don’t feel like a native part of the web.

Critically, unlike many other UI components, you normally can't just serve up a map with plain HTML. You can easily serve up HTML with videos embedded, or complex forms, or intricate SVG images and animations, or MathML, but maps don't quite make the cut. (Google Maps does actually have an Embed API that's closer to this, but it's very limited in features, and it's really just bumping the problem over their fence).

If you want to throw a map on a page, you're going to need to pick out a library, write and serve up scripts to imperatively define and render your map, and bring in the extra complexity (and page weight and other limitations) of all the supporting code required to put that into action. That's not really that much work, and we've got used to making fixes like these everywhere. It's a far cry though from the declarative simplicity of:

<video src="cats.mp4"></video>

Maps are one small example of the way that the HTML elements we have don’t match what we’re using day to day, and the workarounds you have to use to get closer.

How can we show a map on a page with the same simplicity and power?

Web Components

What we really want is to add <map lat=1.2 long=3.4 zoom=5> to our pages, get a great looking map, and be done with it. That's the goal.

In principle, web components handle this nicely. You define a custom element for your map, give it its own rendering logic and internally isolated DOM, and then just start using it. That’s really great, it’s a fantastic model, and in principle it’s perfect. Clicking together isolated encapsulated UI elements like this is the key thing that every shiny client-side framework has in common, and getting that as a native feature of the web is incredible.

Support is terrible though. Right now, it’s new Chrome only, so you’re cutting out 50% of your users right off the bat, and it’s going to be a very long time until you can safely do this everywhere. You can get close with lots of polyfills, and that’s often a good choice, but there is extra weight and complexity there, and you really just want a damn <map> — not to spend hours researching the tradeoffs of Polymer or the best way to polyfill registerElement outside it.

Even once this works, you still have the extra front-end complexity that results, and the associated maintenance. You have to serve up your initial content, and then re-render in your JavaScript before some of it is visible. You quickly end up with content that is invisible to almost all automated scripts, scrapers, devices, and search spiders. Depending on JavaScript doesn’t just cut off the small percentage of users who turn it off.

Mobile devices make this even more important. With their slow connections and slow rendering, your users are going to have to wait longer to see your page when you render things client-side. JavaScript is not at all resilient to connection issues either — any device that drops a script required for basic rendering won't show any of that rendered content, while simple HTML pages continue with whatever's available. New Relic's mobile statistics show 0.9% of HTTP requests from mobile devices failing on the network. That failure rate builds up quickly when applied to every resource in your page! With just that 0.9% of failures and only 10 resources to load, 9% of your users aren't going to see your whole page.

Web components are a great step in the right direction, and are exactly the right shape of solution. Web components that progressively enhance already meaningful HTML content are even better, and can mitigate many of the issues above. In many cases like maps though, your component is rendering core page content. Web components do give you a superb way to do that client-side, but rendering your content client-side comes with heavy costs. Worse, it's frequently not a real necessity. A static map would give all the core functionality (and could then be enhanced with dynamic behaviour on top).

What if we could get these same benefits, without the costs?

Server Components

Server Components is an attempt to improve on our current approach to rendering the web, following the same approach and design as web components, but renderable on the server-side.

Without Server Components, if you wanted to quickly show a map of a location sprinkled with pins, or shaded areas, or whatever else, you couldn’t easily do so without serving up a mountain of JavaScript. Instead, I want to write <map lat=1.2 long=3.4 zoom=5> in my HTML and have my users instantly see a map, without extra client-side rendering or complexity or weight.

This sounds like a pipe dream, but with Server Components, we can do exactly that. We can define new HTML elements, write simple rendering code in normal vanilla JavaScript that builds and transforms the page DOM, and run all of that logic on the server to serve up the resulting ready-to-go HTML. All the client-side simplicity and power, without the costs.


Let’s stop there for now. In the next post, we’ll take a look at exactly how you’d implement this, and what you can do to start putting it into action yourself. Can’t wait? Check out https://github.com/pimterry/leaflet-map-server-component for the map component so far.

This post originally appeared on Tim Perry’s personal blog

Introducing Git Confirm


24 August 2016, by

Get easy confidence on exactly what you’re committing.

Git Confirm is a git hook, which asks you to confirm when you commit a change that includes additions from a (configurable) list of risky matches. Think ‘TODO’, ‘FIXME’, ‘@Ignore’, ‘describe.skip/it.skip’ and ‘describe.only/it.only’. You can drop Git Confirm in, and effortlessly stop yourself ever committing anything like this by accident.

TODO is the easiest example. It's really useful to sprinkle TODO comments in your code as you work, to mark things that will need fixing in future and to jot down notes along the way. It's really terrible to end up with a codebase riddled with them though. You need to actively deal with them in the short term, either fixing them immediately as you wrap up your larger piece of work, or moving them into a proper task tracking system somewhere.

Unfortunately, it’s easy to fail to do that; to add todos as you go, skim your code later and assume you’ve to-done them, and end up slowly rotting your codebase over time.

With Git Confirm, you put ‘TODO’ on the watchlist, and any time you commit code that adds TODO, you’ll see it (in context) so you can confirm you definitely want to commit it. Demo time!

This could be you right now.

Simple, effective and ready to go. There are other options for this, but personally I find they're either too simplistic (rejecting commits to any matching files outright) or more complex and overpowered than I want by default (automatically opening TODO issues on Github, or fully synchronized tracking of TODOs with external issue trackers). All of these have their place, but there's a big gap in the middle for a simple tool that stays out of your way and catches you if you slip up, without being too opinionated.

There's also a lot of edge cases that these often don't cover: changing other parts of files that already contain TODOs, removing lines with TODOs in, or including context with failures so you can find the issue. Deep interactions with Git like this create surprisingly fiddly edge cases, but Git Confirm has your back all the way.

In addition, as far as I can tell there’s no tools for the general case at all, where I want to match arbitrary patterns. I’d really like to never accidentally commit an ignored test, and never ever ever ever accidentally ignore all my tests but one. With Git-Confirm, that never happens.

Want to try it? In the root of your repo, run:

curl https://cdn.rawgit.com/pimterry/git-confirm/v0.2.1/hook.sh > .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit

Done! By default it’ll just match ‘TODO’, but you can override that and add any regular expressions you like with:

git config --add hooks.confirm.match "pattern-to-match"

Happy committing!


This post originally appeared on Tim Perry’s personal blog. Want to learn more? Check out Git Confirm on Github to see the full details, and reply below or tweet Tim at @pimterry with your questions.

Server Components: Web Components for Node.js


11 July 2016, by

Web components logo

We at Softwire are big on open source. Many of us are involved in a variety of open-source projects, and some of us are even running our own, from database setup tools to logging libraries to archiving tools to HTTP performance testing and Roslyn-based code generators.

In this post I want to take a look at a recent new addition to this list: Server Components, a small library I’ve been working on to let you render web components entirely on the server-side.

Web components, for those not familiar, are a series of new specs coming (slowly) to browsers everywhere. It’s actually a set of 4 new features (Custom Elements, HTML Templates, Shadow DOM and HTML Imports), which together provide a fantastic, fast, built-into-the-browser way of defining brand new elements. You define your elements with their own internal rendering logic, presentation and behaviour, and then compose them together to build your page. Building your web page becomes a process of defining or just downloading small composable blocks, and then declaratively clicking them together.

There’s more to this than I can cover in this post, but if you’re interested in an intro then Smashing Magazine’s guide is a pretty good place to start.

These are a fantastic set of technologies if you're looking for a nice way to build web applications on the client-side. They give you a tool to transform page build and JavaScript from being a fundamentally messy, unstructured process (one enormous DOM, with everything operating at the same level of abstraction), to a manageable modern process, letting you layer abstractions atop one another and build your applications from many small standalone parts.

With web components, you can build web pages from independent encapsulated chunks of behaviour and UI. Within these web components, you can then nest other components, or pass them other components as input. Building a UI becomes much easier when your toolbox looks like:

 

<loading-spinner></loading-spinner>

<login-form action="/login"></login-form>

<qr-code data="http://example.com"></qr-code>

<calendar-date-picker value="2016-01-14"></calendar-date-picker>

<google-map lat=1234 long=5678 zoom=12></google-map>

<confirmation-dialog>
  Are you sure you want to quit?
</confirmation-dialog>

<item-paginator>
  <an-item>Item one</an-item>
  <an-item>Item two</an-item>

  ...

  <an-item>Item one thousand</an-item>
</item-paginator>

 

It’s a powerful paradigm, and this is really a standalone native version of the core feature that front-end frameworks like Ember, Angular and React have given us in recent years: the ability to powerfully compose together web pages quickly and easily out of smaller parts, while providing enough structure and encapsulation to keep them maintainable.

At the moment though, it’s a paradigm largely confined to the client-side. Clicking together server-side user interfaces has never really become an effective or enjoyable reality. You’re left with larger MVC frameworks that dictate your entire application structure, or trivial templating libraries that rarely offer more than token encapsulation or reusability. If you want a great experience for web UI development, you have to move to the client side.

Server Components aims to solve this. It's taking the core of the client-side development experience, and moving it onto the server. Many web sites don't need client-side interactivity, but end up built as client-side applications (and paying the corresponding costs in complexity, SEO, performance and accessibility) to get a better developer experience. Server Components is trying to give you that same power and flexibility to quickly build user interfaces on the web, without the client-side pain.
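To make that concrete, defining and rendering a simple component looks something like the sketch below. It uses the same Server Components calls as the map component posts elsewhere on this blog (newElement, registerElement and renderFragment); treat the details as illustrative rather than as the definitive API:

var components = require("server-components");

// Define a new element, with its own server-side rendering logic
var GreetingElement = components.newElement();
GreetingElement.createdCallback = function () {
  this.innerHTML = "Hello from the server!";
};
components.registerElement("my-greeting", { prototype: GreetingElement });

// Rendering a fragment that uses the element then gives you back plain, finished HTML
components.renderFragment("<my-greeting></my-greeting>");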

Screenshot of Tim.FYI

This is still early days for server components; I’m aiming to expand them significantly further as feedback comes in from more real world usage, and they’re not far past being a proof of concept for now. As a public demo though, I’ve rebuilt my personal site (Tim.FYI) with them, building the UI with delightful components like:

 

  <social-media-icons
    twitter="pimterry"
    github="pimterry"
    linkedin="pimterry"
    medium="pimterry"
  ></social-media-icons>

and even more complex ones, with interactions and component nesting:

 

  <item-feed count=20>
    <twitter-source username="pimterry" />
    <github-source username="pimterry" type-filter="PullRequestEvent" />
    <rss-source icon="stack-overflow" url="http://stackoverflow.com/feeds/user/68051" />
  </item-feed>

 

All entirely server-rendered, declarative and readable, and with zero client-side penalties. JavaScript is added on top but purely for extra interactivity and polish, rather than any of the core application functionality. Take a look at the full HTML, tiny chunk server code and the component sources to get a feel for how this works under the hood.
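To give a feel for how small that chunk of server code can be, here's a sketch of serving a component-rendered response. Express is assumed here purely for illustration, and <my-greeting> is the element sketched earlier in this post:

var express = require("express");
var components = require("server-components");

var app = express();

app.get("/", function (req, res) {
  // Promise.resolve keeps this working whether renderFragment returns
  // the HTML directly or a promise of it
  Promise.resolve(components.renderFragment("<my-greeting></my-greeting>"))
    .then(function (html) { res.send(html); })
    .catch(function (err) { res.status(500).send(err.message); });
});

app.listen(3000);

The response is plain HTML; the components do all their work before the page ever leaves the server.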

Interested? Want to go deeper and try this out yourself? Take a look at Server Components on Github.

Alternatively if you’d like some more detailed explanation first, take a look at the full introduction on Medium, complete with worked examples. If you’d like to hear even more about this, I’ll be talking to the Web Platform Podcast in more detail on July 17th, and feel free to send questions in on Twitter or in the comments below.

Improving Open-Source Deployment with Docker


6 April 2016, by

Open-source software has a lot going for it, but ease of use is not typically at the top of the list. Fortunately, it's rarely a problem; as developers, much of the open-source code we use is in simple tools and libraries, and most of our core interactions with these are managed by package managers, which have focused on building a convenient, usable layer to manage this for any project.

That’s not the case for other domains though, especially non-trivial standalone applications. There are a lot of popular open-source tools that follow this model, and require you to install and run them in an environment providing all their core dependencies. WordPress (which runs this blog) is a good example, along with apps like Discourse (a forum we use for internal discussion). These provide great value and they’re great tools, but setup isn’t easy, often involves many manual steps following sometimes painful documentation, and typically then fails because of some inexplicable idiosyncrasy of the server you’re using.

Staytus is another good example of this. Staytus is an open-source web application that provides a status site for your product, aiming to be a beautiful, usable, easy to manage tool that companies can drop into place to give their customers information on how their system is doing. Take a look at their demo site to see it in action.

Staytus, like many other tools in this domain, isn't effortless to set up though. You have to install the right version of Ruby and all its Ruby dependencies (still an annoyingly fiddly process, especially on Windows, if you're not already using Ruby elsewhere), install Node, install, configure and prepare a MySQL server, configure Staytus to glue this all together, and then hook the Staytus startup commands into whatever service-running tool you want to use. None of this is cripplingly difficult, but it's all friction that gets in the way of putting Staytus into users' hands.

I found Staytus recently, while looking out for exciting new open-source projects, and decided this needed fixing. I wanted to try it out, but all the above was hassle. It would be better if you could run a single command, have this all done for you, and get a server up straight away. I’d also been hankering to have a closer look at Docker, which felt like a great fit for this, so I dived in.

The steps

So, we want to provide a single command you can run which downloads, installs, configures and starts a working Staytus server. To do that, we need a few things:

  • A working Ruby 2.1 environment
  • All the required Ruby dependencies
  • Node.js (for Rails’s JS asset uglification)
  • A configured MySQL server
  • Staytus configuration files, including the MySQL server details
  • A startup script that prepares the database, and starts the service

Automating this with Docker

Docker lets us define an immutable machine image, and provides extremely fast and convenient mechanisms to share, update, and use these images.

In practice you can treat this like an incredibly good virtual machine management system. In reality under the hood the details are quite different – it is providing isolated containers for systems, but through process isolation within a single operating system, rather than totally independent machines – but while those differences power the benefits, they don't really need to affect how you think about the basics of using Docker in practice.

I’m not going to go into the details of Docker in great depth here, I’m just going to look at an example real-world use, at a high-level. If you’re really interested, take a look at their introductory video, or read through their excellent documentation.

What we need to do is define a recipe for an image of a machine that is configured and ready to run Staytus, following the steps above. We can then build this into an actual runnable machine image, and hopefully then just start it to immediately have that machine ping into existence.

Dockerfile

To start with we need the recipe for such a machine. That recipe (the Dockerfile), and the startup script it needs (a simple bash script) are below.

It’s important to note that while this is a very effective & working approach, there are parts of this that are more practical than they are Docker Best Practice. We’ll talk about that later (see ‘Caveats’).

# Start from the standard pre-prepared Ruby image
FROM ruby
MAINTAINER Tim Perry <[email protected]>

USER root

# Run all these commands inside the image
RUN apt-get update && \
    export DEBIAN_FRONTEND=noninteractive && \
    # Set MySQL password to temp-pw - reset to random password later
    echo mysql-server mysql-server/root_password password temp-pw \
      | debconf-set-selections && \
    echo mysql-server mysql-server/root_password_again password temp-pw \
      | debconf-set-selections && \   
    # Install MySQL for data, node as the JS engine for uglifier
    apt-get install -y mysql-server nodejs
    
# Copy the current directory (the Staytus codebase) into the image
COPY . /opt/staytus

# Inside that directory in the image, install our dependencies
RUN cd /opt/staytus && \
    bundle install --deployment --without development:test

# When you run this image, run docker-start.sh
ENTRYPOINT /opt/staytus/docker-start.sh

# Persist the MySQL DB to an external volume
# This means it can be independent of the life of the container
VOLUME /var/lib/mysql

# Persist copies of other relevant files (config, custom themes).
# Contents of this are copied to the relevant places when the container starts
VOLUME /opt/staytus/persisted

EXPOSE 5000

With this saved as Dockerfile inside the root of the Staytus codebase, we can then run docker build . to build (or rebuild) an image following this locally.

An interesting consideration when writing these Dockerfiles is image invalidation. Docker builds intermediate images for each command here, and rebuilding an image only reruns the steps that have been invalidated, using as many from its cache as possible. That means that by writing the Dockerfile as above rebuilding a new image with changes to the Staytus codebase is very cheap; the Ruby, Node and MySQL installation and setup phases are all cached, and we just take that image, copy the new code in, and pull down the dependencies the current codebase specifies. We only rerun the parts from COPY . /opt/staytus down. Small tweaks like this make iterating on your Docker image much easier.

Take a look at this article about working with the Docker build cache if you’re interested in this (and don’t forget to look at Docker’s best practices guide generally)

docker-start.sh

That Dockerfile installs everything required, copies the codebase into the image, and tells Docker to run the ‘docker-start.sh’ script when the image is started as a container.

To actually use this, we need a docker-start.sh script to manage the service startup process. The full content of that is below.

Note that this script includes some further database setup that could have been done above, at image definition time. That’s done here instead, to ensure the DB password is randomized for each container not baked into the published image, so we don’t end up with Staytus images all over the internet running databases with identical default passwords. Docker doesn’t obviate the need for good security practices!

#!/bin/bash
/etc/init.d/mysql start # Start MySQL as a background service

cd /opt/staytus

# Configure DB with random password, if not already configured
if [ ! -f /opt/staytus/persisted/config/database.yml ]; then
  export RANDOM_PASSWORD=`openssl rand -base64 32`

  mysqladmin -u root -ptemp-pw password $RANDOM_PASSWORD
  echo "CREATE DATABASE staytus CHARSET utf8 COLLATE utf8_unicode_ci" | mysql -u root -p$RANDOM_PASSWORD

  cp config/database.example.yml config/database.yml
  sed -i "s/username:.*/username: root/" config/database.yml
  sed -i "s|password:.*|password: $RANDOM_PASSWORD|" config/database.yml

  # Copy the config to persist it, and later copy back on each start, to persist this config 
  # without persisting all of /config (which is mostly app code)
  mkdir /opt/staytus/persisted/config
  cp config/database.yml /opt/staytus/persisted/config/database.yml

  # On the first run only, run the staytus:install task to setup the DB schema.
  bundle exec rake staytus:build staytus:install
else
  # If it's not the first run:

  # Use the previously saved config from the persisted volume
  cp /opt/staytus/persisted/config/database.yml config/database.yml

  # The DB should already be configured. Check if there are any migrations to run though:
  bundle exec rake staytus:build staytus:upgrade
fi

# Start the Staytus service
bundle exec foreman start

Putting this to use

With this written, you can check out the Staytus codebase, run docker build . to build an image of Staytus, and run docker run -d -p 0.0.0.0:80:5000 [built-image-id] to instantly start a container with that image, listening locally on port 80.

For end users, that's a lot easier than all the previous setup we had! There's still a little more we can do though. Having done this, we can publish that image to Docker Hub, and users no longer need to check out the codebase at all.

The full setup now, from a blank slate, is:

  • Install Docker (a single standard platform-specific installer)
  • Run docker run -d -p 0.0.0.0:80:5000 --name=staytus adamcooke/staytus
  • Browse to http://localhost:80

(Note the ‘adamcooke/staytus‘ part; that’s the published image name)

This is drastically easier than following all the original steps by hand, and very hard to do wrong!

I wrote this all up and contributed it back to Staytus itself in July last year, Adam Cooke (the maintainer of Staytus) merged it in and published the resulting image, and Staytus is now ready and available for quick, easy use. Give it a go!

Caveats

Some of this is not exactly how things should be done in Docker land – there's more than a few concessions to short-term practicality – but this does work very nicely, and provides exactly the benefits we're looking for.

The key Docker rule that's not being followed here is that we've put two processes (Staytus and its MySQL server) into a single image, to run as a single container. Instead, we should run two containers (a Staytus container, and a totally standard MySQL container) and link them together, typically using Docker Compose. At the time though Docker Compose wasn't yet recommended for production use, and to this day moving to this model still makes it a little harder for users to get set up and running than it would be with the one image. There's ongoing work to finish that up now though, and Staytus is likely to evolve further in that direction soon.

Softwire Side Projects: Build Focus


16 March 2016, by

At Softwire we have lots of people with interesting side projects, for all sorts of reasons. Sometimes you want a playground to learn strange and wonderful new things, perhaps you’d like to try out some unusual tech we can’t easily use day to day, and often it just feels good to test yourself with fun new kinds of problems.

In this post, I want to look at a side project I've released recently: Build Focus, a productivity tool I've built to help you improve your focus and avoid all the distractions and demands for your attention that the internet creates.

The Build Focus website

The Internet is a Distraction

A lot of the internet is actively designed to steal your attention and time. Twitter, Facebook, most news sites, and every app you use are doing all they can to keep you engaged, so you habitually and instinctively spend your time and energy with them, instead of doing whatever you really want to be doing (like getting things done). There's a whole range of techniques behind this, particularly drawing from the tricks that make slot machines so addictive, to help make apps like Farmville as compelling as possible and to lead you towards habitually checking Facebook.

All of this is a bit concerning, and the effects and problems it creates have been debated and discussed all over the place. I’d like to find a solution to this, and I’m particularly interested in whether it’s possible to use the same techniques that these sites use to distract you, but flip them around, to reward you for concentrating rather than getting distracted, and thereby addict you to focusing and getting things done.

Gamification for Good

Enter Build Focus. Build Focus is a city simulator (because building city simulators is fun), wrapped around a pomodoro timer. If you focus for 25 minutes then your city will expand or upgrade, but if you get distracted during that time (by opening Facebook, or any other distracting URL you’ve added) a random building is destroyed. It’s essentially gamified concentration.

It’s also remarkably effective; I’ve been using this myself for months, and found it impressively good at molding my day-to-day habits, and I’ve also got a few hundred early users, of whom 20 or so use Build Focus almost every single day.

This is a free Chrome extension (a strange and wonderful environment I’d never normally work in), it’s written in TypeScript (as a chance to build a whole project with a language a little outside the norm), and opens a huge range of interesting problems I haven’t looked at before: from simulating realistic traffic, to doing my own product marketing. It’s neatly ticking off everything I look for in a side project, and I’d highly recommend finding similar projects and challenges yourself.

For now Build Focus is still in private alpha, so if you want to give it a go you'll need to sign up for early access at www.buildfocus.io. I'm iterating on user feedback to steadily improve the design though, and I'm aiming to have it publicly available to the whole world in the next few months. Watch this space!

Are you interested in playing with different solutions to problems like this too? Do you have your own side projects? Send us yours on Twitter, or leave a comment below.

24 Pull Requests 2015


20 January 2016, by

At Softwire we make an effort to be involved in the open-source community that powers so much of what we do. For the last three years, we’ve spent December getting involved in 24 Pull Requests – an initiative encouraging developers to contribute pull requests to projects they use and love through the run up to Christmas – and this year is no exception!

Even more than last year though, this year has been an enormous success. We’ve even managed to bring ourselves up into an amazing 4th place position overall, out of the 9,934 organisations taking part. (This may have been strongly encouraged by the suggestion of a team croquembouche appearing if we reached the top 5. We at Softwire are certainly not immune to the charms of an exciting cake!)

Particular commendation should go to the top contributors, each of whom put in:

  1. Hereward Mills – 25 pull requests
  2. Tim Perry – 24 pull requests
  3. David Simons – 17 pull requests
  4. Rob Pethick – 14 pull requests
  5. David Broder-Rodgers – 9 pull requests

Pull Requests

A selection of particularly interesting pull requests within the 161 pull requests we contributed in December:

You can check out the full list of everything we got up to at 24pullrequests.com/organisations/Softwire

Bonus Stats

24 days of pull requests
161 PRs from the Softwire team (so 7 per day, on average)
36 contributors contributing at least one PR (so 4 each, on average)
78 projects contributed to (including 17 PRs to DefinitelyTyped, 14 to Workflowy2Web and 8 to AssertJ & 24PullRequests)

Fantastic work all round!