Softwire Blog


Submitting your Cordova app to the Apple App Store


12 October 2016, by

These days, everyone has an app. If you are one of the people who decided to write a cross-platform app using Cordova and would now like to post it to the App Store, keep reading and I’ll explain how to go about doing that.

Just a word upfront: in order to build your app, and to post the built app to the App Store, you will need a Mac.

Building your app

There are three steps required to build a release version of your app: create a certificate, create an app identifier, and create a provisioning profile. Once you have completed these steps, you will be able to build a signed IPA of your app.

Create a certificate

First, you will need to log in to Apple’s developer website. If you do not have an Apple ID, you will need to create one, and you will also need to register as an Apple Developer, which will cost you $99.

Now that you are logged in, go to the distribution certificates page, and create a new certificate by clicking the plus button, marking your certificate as Production > “App Store and Ad Hoc”, and then following the instructions on the website, which involve creating a certificate signing request.

Create an app identifier

Again, the first step is to log in to Apple’s developer website. This time, go to the app identifier page. Fill in a short description, and then specify the app identifier for your app (this is stored in the config.xml of your Cordova project):

<widget id="com.softwire.exampleApp" ... >

At this stage, if you want to add any extra services to your app (e.g. push notifications) then you will need to enable them on the app identifier.

Create a provisioning profile

As with the other steps, you need to log in to Apple’s developer website. Head over to create a new provisioning profile. Select Distribution > “App Store”, then continue.

On the next page, you will need to select the App ID that you have just created and click continue.

On the final page, you will need to select the certificate that you have just created and click continue.

Finally, give the provisioning profile a name, and click continue again.

You do not need to download the provisioning profile; we will get Xcode to do that for us in the “Submitting your app” section.

Submitting your app

In order to submit your app to the App Store, you will need to build the signed IPA file, create an App Store listing, and then finally submit the app you have built.

Building a signed IPA

Now that you have created a provisioning profile (see above if not), you must get Xcode to download it. To do this, log in to your Apple account in Xcode (Preferences > Accounts): click the + button and follow the instructions to add your account.

Once your account has been added, select your account in the left panel, and click “View Details…” in the right panel. On the screen that opens, you should see the provisioning profiles attached to your account. One of these will be the provisioning profile you created above. Select it and click download. It will then be available to Xcode when building your project.

The next step is to tell Cordova which provisioning profile to use when building. This is done using a build.json file. You should create a build.json file, and include in it the ID of the provisioning profile that you downloaded.
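For example, a minimal build.json might look something like this (a sketch: the UUID here is a placeholder for your own provisioning profile’s ID, which you can find on the developer site or in Xcode):

{
  "ios": {
    "release": {
      "codeSignIdentity": "iPhone Distribution",
      "provisioningProfile": "00000000-0000-0000-0000-000000000000"
    }
  }
}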

Finally, you can now run a build on your mac using:

cordova build ios --device --release --buildConfig build.json

This will create a signed IPA of your app under

platforms/ios/build/device/*.ipa

Creating an app listing

Log in to iTunes Connect. From here, you can go to the apps page, and then create a new app. You will now need to fill in the details for your app (there are lots to fill in, but the help Apple provides is quite useful), and ensure that you have uploaded screenshots of the app running on both phone and tablet (if relevant).

Once you are happy with the details, save them.

Submitting the IPA to the app store

Now that an app listing has been created, the next step is to upload the build (created above) to Apple.

To perform this upload, you will need Application Loader 3.0. You can download this from the “Prepare for submission” page on the app listing.

Once you have Application Loader installed, log in and click “Choose”. Select the IPA that was built earlier, which will upload the app to Apple. It will automatically be linked to your app listing, as the app’s identifier will match the one specified on the listing.

After upload, the IPA will be processed by Apple; following that, it can be selected as the version that should be published to the store. If you’re happy with everything, then submit the app for review.

Apple will now review your app, and it will appear in the App Store when it is ready. The review process will take a couple of days, and you will receive an email when the review is complete.

Tips for managing technical people – Recommended reading


7 October 2016, by

The following is an excerpt from my new book, “Galvanizing the Geeks – Tips for Managing Technical People”. You can buy the full book on my website, here.

These are some of the books – not all of them aimed specifically at the technology sector – that I’ve found useful throughout my career.

(more…)

Effortlessly add HTTPS to Dokku, with Let’s Encrypt


26 September 2016, by

You’ve written an application deployed using Dokku, and you’ve got it all up and running, and it’s great. You’ve heard a lot about why HTTPS is important nowadays, especially the SEO and performance benefits, and you’d like to add all that, with minimal cost and hassle. Let’s get right on that.

Let’s Encrypt is a new certificate authority (an organisation that issues the certificates you need to host an HTTPS site), which provides certificates to sites entirely for free, by totally automating the system, to help get 100% of the web onto HTTPS. Sounds great, right?

To set this up, you’re going to need to prove you own the domain you’re asking for a certificate for, you need to get the certificates and start using them, and you’re going to want to have some system in place to automatically renew your certificates, so you never have to think about this again (Let’s Encrypt certificates expire every 90 days, to encourage automation). That means we’ve got a few key steps:

  • Generate a key-pair to represent ourselves (you can think of this as our login details for Let’s Encrypt).
  • Complete Let’s Encrypt’s Simple HTTP challenge, by signing and hosting a given JSON token at /.well-known/acme-challenge/<token> on port 80 for the given domain name, with our public key. This validates the key pair used to sign this token as authorized to issue certificates for this domain.
  • Request a certificate for this domain, signing the request with our now-authorized key.
  • Set up our server to use this key.
  • Set up an automated job to re-request certificates and update the certificate we’re using at regular intervals.

(Interested in the full details of how the Let’s Encrypt process works? Check out their detailed intro: https://letsencrypt.org/how-it-works/)

Fortunately, you barely have to do any of this! In reality, with Dokku, this is pretty much a case of turning a plugin on, and walking away.

How do you actually do this with Dokku?

First up, we need to install dokku-letsencrypt, which will do most of the setup magically for us (see full instructions here). On your server, run the command below for your dokku version (run ‘dokku version’ to check):

# dokku 0.5+
dokku plugin:update letsencrypt

# dokku 0.4
dokku plugin:update letsencrypt dokku-0.4

To configure it, set an email for your app:

dokku config:set --no-restart myapp DOKKU_LETSENCRYPT_EMAIL=you@example.com

(This email will receive the renewal warnings for your certificate)

Turn it on:

dokku letsencrypt myapp

For Dokku 0.5+, set up auto-renewal:

dokku letsencrypt:cron-job --add

For Dokku 0.4, you’ll have to manually add a cronjob scheduled every 60 days to kick off the auto-renewal process:

dokku letsencrypt:auto-renew
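
For example, a crontab entry along these lines would do it (a sketch: adjust the schedule and output handling to suit your setup):

# At midnight on the 1st of every other month (roughly every 60 days)
0 0 1 */2 * dokku letsencrypt:auto-renew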

That’s it!

What just happened?

The magic is in the ‘dokku letsencrypt myapp’ command. When you run that, Dokku-LetsEncrypt:

  • Starts a new service, in a Docker container, and temporarily delegates the top-level .well-known path on port 80 to it in Nginx (the server which sits in front of all your services in Dokku and routes your requests).
  • Generates a key pair, and authorizes it for each of the domains you have configured for your app (see ‘dokku domains myapp’).
  • Uses that key pair to get certificates for each of those domains.
  • Configures Nginx to enable HTTPS with these certificates for each of those domains.
  • Configures Nginx to 301 redirect any vanilla HTTP requests on to HTTPS for each of those domains.

The auto-renewal steps above just repeat this process to update these certificates later on.

Easy as pie. Don’t forget to check it actually worked though! It’s quite likely you’ll find mixed content warnings when you first turn on HTTPS. Most major CDNs and other services you might be embedding have HTTPS options available nowadays, so this should just be a matter of find/replacing http: with https: throughout your codebase. With that done though, that shiny green padlock should be all yours.
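
A quick way to sanity-check the redirect and certificate from the command line (myapp.example.com here is a placeholder for your own domain):

curl -sI http://myapp.example.com | head -n 1    # expect a 301 redirect
curl -sI https://myapp.example.com | head -n 1   # expect a 200, with no certificate errors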

This post originally appeared on Tim Perry’s personal blog.

Tips for managing technical people – Train technical leadership


21 September 2016, by

The following is an excerpt from my new book, “Galvanizing the Geeks – Tips for Managing Technical People”. You can buy the full book on my website, here.

You need to ensure that there’s a technical career path as well as a management one open to your managees. Many companies nowadays ensure that there’s a path by which their technical people can reach the very top of the company, whether this is via a Technical Advisory Board (favoured by companies such as Thoughtworks and UBS) or by other means.

Creating the technical path and putting people on it isn’t enough. You also need to train people to fill the relevant roles. A common misunderstanding of the technical lead role is to see it as a sort of glorified developer, who gets to tell the other developers what to do. While a progression to project manager is seen as a move to a different role, the role of the technical lead can be seen as ‘more of the same’.

(more…)

Building a Server-Rendered Map Component – Part 2


19 September 2016, by

Part 2: How to use client-side libraries like Leaflet, in Node.

As discussed in Part One: Why?, it’d be really useful to be able to take an interesting UI component like a map, and pre-render it on the server as a web component, using Server Components.

We don’t want to do the hard mapping ourselves though. Really, we’d like this to be just as easy as building a client-side UI component. We’d like to use a shiny mapping library, like Leaflet, to give us all the core functionality right out of the box. Unfortunately though, Leaflet doesn’t run server-side.

This article’s going to focus on fixing that so you can use Leaflet with Server Components, but you’ll hit the same problems (and need very similar fixes) if you’re doing any other Node-based server rendering, including with React. The JS ecosystem right now is not good at isomorphism, but with a few small tweaks you can transform any library you like to run anywhere.

Let’s focus on Leaflet for now. It doesn’t run server-side, because not that many people are seriously looking at rendering nice UIs outside a browser, so JS libraries are pretty trigger-happy about making big browser-based assumptions. Leaflet expects a few things that don’t fit neatly outside a browser:

  • Global window, document and navigator objects.
  • A live element in an HTML DOM to be inserted into.
  • A Leaflet <script> tag on the page, so it can find its URL, so it can autodetect the path to the Leaflet icons.
  • To export itself just by adding an ‘L’ property to the window object.

All of these things need tricky fixes. Just finding these issues is non-trivial: you need to try to use the library in Node, hit a bug, solve the bug, and repeat, until you get the output you’re expecting.

Leaflet is a relatively hard case though. Most libraries aren’t quite so involved in complex DOM interactions, and just need the basic globals they expect injected into them.

So, how do we fix this?

Managing Browser Globals

If you npm install leaflet and then require("leaflet"), you’ll immediately see our first issue:

> ReferenceError: window is not defined

Fix this one, and we’ll hit a few more at require() time, for document and navigator too. We need to run Leaflet with the context it’s expecting.

It would be nice to do that by having a DOM module somewhere that gives us a document and a window, and using those as our globals. Let’s assume we’ve got such a module for a moment. Given that, we could prefix the Leaflet module with something like:

var fakeDOM = require("my-fake-dom");
var window = fakeDOM.window;
var document = fakeDOM.document;
var navigator = window.navigator;
[...insert Leaflet code...]

(Instead we could just define browser globals as process-wide Node globals and leave the Leaflet source untouched, but this isn’t good behaviour, and it’ll come back to bite you very quickly if you’re not careful)

Doing something like this will get you much closer. With any reasonable DOM stub you should be able to get Leaflet successfully importing here. Unfortunately though, this fails because of a fundamental difference between browser and Node rendering. On the server, we have to support multiple DOM contexts in one process, so we need to be able to change the document and window.

We can still pull this off though, just taking this a step further with something like:

module.exports = function (window, document) {
  var navigator = window.navigator;
  [...insert Leaflet code...]
}

Now this is a Node module that exports not a single Leaflet, but a factory function to build Leaflet for a given window and document, provided by the code using the library. When called, though, this doesn’t actually return anything, as you might reasonably expect it to; instead it creates window.L, as is common for browser JS libraries. In some cases that’s probably ok, but in my case I’d rather leave the window alone, and grab the Leaflet instance directly, by adding the below to the end of the function, after the Leaflet code:

return window.L.noConflict();

This tells Leaflet to remove itself as a global, and just give you the library as a reference directly.

With this, require("leaflet") now returns a function, and passing that a window and document gives you a working, ready-to-use Leaflet.

Emulating the expected DOM

We’re not done though. If you want to use this Leaflet, you might define a Server Component like:

var LeafletFactory = require("leaflet");
var components = require("server-components");
var MapElement = components.newElement();
MapElement.createdCallback = function (document) {
  var L = LeafletFactory(new components.dom.Window(), document);
  var map = L.map(this).setView([41.3851, 2.1734], 12);
  L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    maxZoom: 19,
  }).addTo(map);
};
components.registerElement("leaflet-map", {prototype: MapElement});

This should define a component that generates the HTML for a full working map when rendered. It doesn’t. The problem is that Leaflet here is given a DOM node to render into (‘this’, inside the component), and it tries to automatically render at the appropriate size. This isn’t a real browser though: we don’t have a screen size, we’re not doing layout (that’s why it’s cheap), and everything actually has zero height and width.

This isn’t as elegant a fix, but it’s an unavoidable one in any server-rendering approach, I think: you need to pick a fixed size for your initial render, and nudge Leaflet to use that. Here that’s easy: just make sure that before the map is created, you add:

this.clientHeight = 500;
this.clientWidth = 500;

And with that, it works.

This fakes layout, as if the browser had decided that this was how big the element is. You can render like this at a fixed size for lots of applications, and potentially add client-side rendering on top to resize too if you want.

With that added, you can take this component, render it with a cheeky components.renderFragment("<leaflet-map></leaflet-map>") and be given working HTML for a lovely static map you can send straight to your users. Delightful.

There is still one last step required if you want to take this further. Leaflet by default includes a set of icons, and uses the ‘href’ of its script tag on the page to automatically work out the URL to these icons. This is a bit fragile in quite a few ways, including in this environment, and if you extend this example to use any icons (e.g. adding markers), you’ll find your icons don’t load.

This step’s very simple though: you just need to set L.Icon.Default.imagePath appropriately. If you want to do that in a nice portable Server Component, that means:

var componentsStatic = require("server-components-static");
var leafletContent = componentsStatic.forComponent("leaflet");
L.Icon.Default.imagePath = leafletContent.getUrl("images");

This calculates the client-facing URL you’ll need that maps to Leaflet’s images folder on disk (see Server-Components-Static for more details).

Making this (more) maintainable

There’s one more step though. This is a bit messy in a few ways, but particularly in that we have to manually fork and change the code of Leaflet, and maintain that ourselves in future. It would be great to automate this instead, to dynamically wrap normal Leaflet code, without duplicating it. With Sandboxed-Module we can do exactly that.

Sandboxed-Module lets you dynamically hook into Node’s require process, to transform module code however you like. There’s lots of somewhat crazy applications of this (on-require compilation of non-JS languages, for example), but also some very practical ones, like our changes here.

There’s potentially a very small performance hit on startup from this for the transformation, but for the rest of runtime it shouldn’t make any difference; it hooks into the initial file read to change the result, and then from that point on it’s just another Node module.

So, what does this look like?

var SandboxedModule = require('sandboxed-module');
module.exports = SandboxedModule.require('leaflet', {
  sourceTransformers: {
      wrapToInjectGlobals: function (source) {
        return `
        module.exports = function (window, document) {
          var navigator = window.navigator;
          ${source}
          return window.L.noConflict();
        }`;
      }
  }
});

That’s it! Your project can now depend on any version of Leaflet, and require this wrapped module to automatically get given a Node-compatible version, without having to maintain your own fork.
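
Using it then looks something like this (the wrapper filename is hypothetical, standing in for wherever you save the module above):

var LeafletFactory = require("./leaflet-wrapped");
var L = LeafletFactory(window, document); // any server-side window/document pair, as before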

This same approach should work for almost any other library that you need to manage server-side. It’s not perfect — if Leaflet starts depending on other browser globals, things may break — but it should be much easier to manage and maintain than copying Leaflet’s code into your project wholesale.

Hopefully in future more projects will improve their native support for running in other environments, and this will go away, but in the meantime there are some relatively simple changes you can make to add Node support to even relatively complex client-side libraries.


Let’s stop there for now. In the next post, we’ll take a proper look at a full working map component, complete with configurability, static content and marker support, and see what you can do to start putting this into action yourself. Can’t wait? Check out https://github.com/pimterry/leaflet-map-server-component for the map component codebase so far.

This post originally appeared on Tim Perry’s personal blog. 

Building a Server-Rendered Map Component – Part 1


12 September 2016, by

Part 1: Why do we need better maps?

Maps are a standard tool in our web UI toolkit; they’re an easy way to link our applications to the real world they (hopefully) involve. At the same time though, maps aren’t as easy to include in your web site as you might want, and certainly don’t feel like a native part of the web.

Critically, unlike many other UI components, you normally can’t just serve up a map with plain HTML. You can easily serve up HTML with videos embedded, or complex forms, or intricate SVG images and animations, or MathML, but maps don’t quite make the cut. (Google Maps does actually have an Embed API that’s closer to this, but it’s very limited in features, and it’s really just bumping the problem over their fence.)

If you want to throw a map on a page, you’re going to need to pick out a library, write and serve up scripts to imperatively define and render your map, and bring in the extra complexity (and page weight and other limitations) of all the supporting code required to put that into action. That’s not really that much work, and we’ve got used to making fixes like these everywhere. It’s a far cry though from the declarative simplicity of:

<video src="cats.mp4"></video>

Maps are one small example of the way that the HTML elements we have don’t match what we’re using day to day, and the workarounds you have to use to get closer.

How can we show a map on a page with the same simplicity and power?

Web Components

What we really want is to add <map lat=1.2 long=3.4 zoom=5> to our pages, get a great looking map, and be done with it. That’s the goal.

In principle, web components handle this nicely. You define a custom element for your map, give it its own rendering logic and internally isolated DOM, and then just start using it. That’s really great, it’s a fantastic model, and in principle it’s perfect. Clicking together isolated encapsulated UI elements like this is the key thing that every shiny client-side framework has in common, and getting that as a native feature of the web is incredible.

Support is terrible though. Right now, it’s new Chrome only, so you’re cutting out 50% of your users right off the bat, and it’s going to be a very long time until you can safely do this everywhere. You can get close with lots of polyfills, and that’s often a good choice, but there is extra weight and complexity there, and you really just want a damn <map> — not to spend hours researching the tradeoffs of Polymer or the best way to polyfill registerElement outside it.

Even once this works, you still have the extra front-end complexity that results, and the associated maintenance. You have to serve up your initial content, and then re-render in your JavaScript before some of it is visible. You quickly end up with content that is invisible to almost all automated scripts, scrapers, devices, and search spiders. Depending on JavaScript doesn’t just cut off the small percentage of users who turn it off.

Mobile devices make this even more important. With their slow connections and slow rendering, users are going to have to wait longer to see your page when you render things client-side. JavaScript is not at all resilient to connection issues either — any device that drops any of the scripts required for basic rendering won’t show any of that rendered content, while simple HTML pages continue with whatever’s available. New Relic’s mobile statistics show 0.9% of HTTP requests from mobile devices failing on the network. That failure rate builds up quickly when applied to every resource in your page! With just that 0.9% failure rate and only 10 resources to load, 9% of your users (1 − 0.991^10 ≈ 0.086) aren’t going to see your whole page.

Web components are a great step in the right direction, and are exactly the right shape of solution. Web components that progressively enhance already-meaningful HTML content are even better, and can mitigate many of the issues above. In many cases like maps though, your component is rendering core page content. Web components do give you a superb way to do that client-side, but rendering your content client-side comes with heavy costs. Worse, it’s frequently not a real necessity. A static map would give all the core functionality (and could then be enhanced with dynamic behaviour on top).

What if we could get these same benefits, without the costs?

Server Components

Server Components is an attempt to improve on how we currently render the web, with the same approach and design as web components, but renderable on the server side.

Without Server Components, if you want to quickly show a map of a location sprinkled with pins, or shaded areas, or whatever else, you can’t easily do so without serving up a mountain of JavaScript. Instead, I want to write <map lat=1.2 long=3.4 zoom=5> in my HTML and have my users instantly see a map, without extra client-side rendering or complexity or weight.

This sounds like a pipe dream, but with Server Components, we can do exactly that. We can define new HTML elements, write simple rendering code in normal vanilla JavaScript that builds and transforms the page DOM, and run all of that logic on the server to serve up the resulting ready-to-go HTML. All the client-side simplicity and power, without the costs.


Let’s stop there for now. In the next post, we’ll take a look at exactly how you’d implement this, and what you can do to start putting it into action yourself. Can’t wait? Check out https://github.com/pimterry/leaflet-map-server-component for the map component so far.

This post originally appeared on Tim Perry’s personal blog.

Using git-tfs in anger


31 August 2016, by

A while ago I wrote about how no-one should ever use TFS, ever. While I was unfortunately forced to use it, I experimented with git-tfs to see if it would ease my pain, and this is what I found.

Why git-tfs

It seems to be the main git-to-TFS bridge right now. There’s another option, git-tf, but that one hasn’t been updated since 2013. git-tfs appears to be an active project.

The Good

  • Local version control: local commits and local branches. Proper branches (not TFS folder-branches), which are lightning-fast to switch between, and which you can cherry-pick between.
  • Rebase temporary dev branches back onto master instead of clumsily re-applying old shelvesets.
  • If more than one person is using it, push and pull git branches between yourselves, and keep developing even when the central TFS server (inevitably) goes down.
  • Separate source control from Visual Studio: generally a good idea in my opinion, and particularly useful if VS is already running slowly.
  • Reliable version control: TFS get-latest is a bit buggy, but git-tfs seemed to work reliably. Perhaps it abstracts away for you the tedium of always telling TFS explicitly to get the latest version of all files, not just some.

The Bad

  • You can only check out one directory at a time, so you need separate checkouts for different root-level projects in the same repo. This is pretty minor though.
  • Visual Studio gets a bit confused and integration isn’t as good. Keep it happy by using msysgit (Git for Windows), not Cygwin git, if you can.
  • Changing between TFS and git-tfs, or just using Visual Studio with a git-tfs checkout, leads to corrupt TFS workspaces.
  • Doing things differently from the rest of the team means you lose out on their TFS knowledge and support, for what it’s worth.
  • You won’t be able to add projects to the solution properly.
  • Visual Studio will constantly try to remove the tfs element from the .sln file, which will break TFS integration for everyone else if you commit it.
  • If you need to keep local changes, e.g. for local environment config, then it’s a pain to stash the changes all the time (you need a clean checkout to rebase).
  • It’s just really slow – mainly because my team’s TFS server was slow, but git-tfs was even slower.

Verdict

For me, unfortunately, the advantages don’t outweigh the disadvantages, and I probably wouldn’t use it again. It’s a bit of a pain to set up, and to use different tools to the rest of the team. On my next TFS project (and I’ll do everything in my power to make sure there won’t be one) I’ll just embrace the shelveset workflow instead.

What I might try instead is to git init a local repo in my TFS checkout and get local branching that way, adding the .git folder etc. to the local ignore list, although it might mean some manual effort keeping branches in sync with the remote.
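
A rough sketch of that idea (untested, and the ignore entries are just examples):

$ cd /path/to/tfs-checkout
$ git init
$ echo '.git/' >> .tfignore            # hide the git metadata from TFS (local workspaces)
$ echo '*.suo' >> .git/info/exclude    # hide local-only VS/TFS files from git
$ git add -A
$ git commit -m "Baseline from TFS"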

Setting up

If you do decide to use it, this is what I did to set it up:

  1. Install git, and configure it:
$ git config --global core.autocrlf true # see https://help.github.com/articles/dealing-with-line-endings/
$ git config --global core.filemode false
  2. Install git-tfs: http://git-tfs.com/
  3. Clone the repo:
$ git tfs clone http://tfs:8080/tfs/DefaultCollection $/some_project -c XXX

Where XXX is a recent changeset – you’ll only have history available from this changeset, but the further back you go, the longer the clone will take.

  4. Open in Visual Studio!

Don’t bother to un-map your workspace: the Visual Studio TFS integration will go away when you open your new “git” project. And it will be easier to go back to pure TFS if you want to.

Everyday Use

Always rebase when pulling:
$ git tfs pull --rebase

To map local commits to checkins one to one, use rcheckin instead of checkin or checkintool.

$ git tfs rcheckin --no-merge

Be careful though: this makes a number of commits in a row, and because TFS is so sloooow they can be nearly a minute apart and trigger more than one consecutive CI build. If the build time is slow this might hold other people up from committing, so keep an eye on it: you can always cancel the first build(s) if this happens. You might just want to squash up the commits to save time committing!
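
For example, an interactive rebase onto the TFS remote branch lets you squash everything down first (git-tfs tracks the TFS head as tfs/default by default):

$ git rebase -i tfs/default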

If your project requires certain custom notes fields to be supplied, use the -n (--notes) argument to supply them from the command line:

$ git tfs rcheckin --no-merge -n 'JiraNumber' 'AA-123'

(more…)

Introducing Git Confirm


24 August 2016, by

Get easy confidence on exactly what you’re committing.

Git Confirm is a git hook, which asks you to confirm when you commit a change that includes additions from a (configurable) list of risky matches. Think ‘TODO’, ‘FIXME’, ‘@Ignore’, ‘describe.skip/it.skip’ and ‘describe.only/it.only’. You can drop Git Confirm in, and effortlessly stop yourself ever committing anything like this by accident.

TODO is the easiest example. It’s really useful to sprinkle TODO comments in your code as you work, to mark things that will need fixing in future and to jot down notes. It’s really terrible to end up with a codebase riddled with them, though. You need to actively deal with them in the short term, either fixing them immediately as you wrap up your larger piece of work, or moving them into a proper task-tracking system somewhere.

Unfortunately, it’s easy to fail to do that: to add TODOs as you go, skim your code later and assume you’ve to-done them, and end up slowly rotting your codebase over time.

With Git Confirm, you put ‘TODO’ on the watchlist, and any time you commit code that adds TODO, you’ll see it (in context) so you can confirm you definitely want to commit it. Demo time!

This could be you right now.

Simple, effective and ready to go. There are other options for this, but personally I find they’re either too simplistic (rejecting commits to any matching files outright) or more complex and overpowered than I want by default (automatically opening TODO issues on GitHub, or fully synchronized tracking of TODOs with external issue trackers). All of these have their place, but there’s a big gap in the middle for a simple tool that stays out of your way and catches you if you slip up, without being too opinionated.

There are also a lot of edge cases that these often don’t cover: changing other parts of files that already contain TODOs, removing lines with TODOs in, or including context with failures so you can find the issue. Deep interactions with Git like this create surprisingly fiddly edge cases, but Git Confirm has your back all the way.

In addition, as far as I can tell there are no tools at all for the general case, where I want to match arbitrary patterns. I’d really like to never accidentally commit an ignored test, and never ever ever ever accidentally ignore all my tests but one. With Git Confirm, that never happens.

Want to try it? In the root of your repo, run:

curl https://cdn.rawgit.com/pimterry/git-confirm/v0.2.1/hook.sh > .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit

Done! By default it’ll just match ‘TODO’, but you can override that and add any regular expressions you like with:

git config --add hooks.confirm.match "pattern-to-match"
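
For example, to also catch ignored or focused tests (the patterns here are illustrative; adjust them to your test frameworks):

git config --add hooks.confirm.match "@Ignore"
git config --add hooks.confirm.match "\.only\("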

Happy committing!


This post originally appeared on Tim Perry’s personal blog. Want to learn more? Check out Git Confirm on Github to see the full details, and reply below or tweet Tim at @pimterry with your questions.

Tips for managing technical people – Talk to the Duck


10 August 2016, by

The following is an excerpt from my new book, “Galvanizing the Geeks – Tips for Managing Technical People”. You can buy the full book on my website, here.

The ‘rubber duck’ technique is one of the innovations that I have seen introduced by a project manager who was willing to try something new. He didn’t invent the practice, but no-one works in a vacuum; the best project managers build up their skillsets through others’ experiences as well as their own.

(more…)

Speed Coding 2016 – Q4 Solution


3 August 2016, by

This post explains the solutions to Question 4 of our Speed Coding 2016 competition. If you haven’t yet read the question, head back and have a go now.

(more…)