WDCNZ Web Dev conference, Wellington: Part 3


16 October 2014

At last, the long-awaited final part of my WDCNZ experience, following part 1 and part 2.

Building Your Own Tools

The highlight of the day was James Halliday (@substack), creator of browserify and other things, giving us a whirlwind talk and demo live from his laptop. Unfortunately it went so blisteringly fast I didn’t take many notes, so I’m not really going to do it any sort of justice here at all.

The theme was writing tools which work together nicely, and there were two main arenas discussed: the command line, and node. We started with a general note: don't be afraid to write your own tools, but you should probably first check that there isn't already just such a tool out there.

  • First example: examining pull requests from GitHub
    • James gets a lot of pull requests
    • Tip: if you add .patch to the end of the URL of a GitHub pull request, you get a patch that works with git-am
    • git-am is a git tool which was apparently made by Linus Torvalds for his own personal use – so the tradition of building one’s own command line tools is an illustrious one
    • You can use the pipe to send the patch to git-am, either by fetching the patch with curl, or by copying it from a browser and using xclip, maybe something like
      • $ xclip -o | git am
    • So here we have useful command line tools that you can chain up using the pipe, and even swap tools in and out depending on whether the input comes from the web (curl) or the clipboard (xclip) – there's a fuller sketch after this list
  • James then showed us some command line tools of his own, along similar lines, which provided the various parts of a time-tracking mechanism. I can't remember much detail but I think it went something like this:
    • There was a tool that generated some JSON for one timesheet entry
    • There was a tool that accepted this JSON and did something else with it
    • There was a tool that took a bunch of these timesheet entries and saved a timesheet file, or something
    • Another tool took the same input but emailed a timesheet
    • One tool took lots of timesheet entries and combined them according to task code
    • … or something. Anyway, all these tools ate and returned JSON and so could all be piped together in different combinations depending on what you wanted to do – I've sketched the idea after this list
  • A pattern emerges: each tool is very small and does one thing well, and they share a common input/output
  • Finally we turned to some of James’ tools on NPM: browserify (bundles up node server modules into modules that will run in a browser), tape (a node test harness) and some pieces of accountdown (user management and authentication). I don’t remember enough detail to say much useful about this, but I thoroughly enjoyed it.
    • We had a demo where browserify and tape were used together to great effect on the command line (roughly reconstructed after this list)
      • One interesting takeaway is that tape reports its results in a format called TAP (the Test Anything Protocol), which makes it compatible with various testing tools (like testling) and swappable with other test harnesses (nodeunit has a TAP reporter plugin)
    • Similarly, the node demo, in which some of the accountdown functionality was built incrementally from scratch, went pretty fast, but the gist was to isolate little bits of functionality – e.g. creating a user, authenticating a user – into their own node modules (I've attempted a sketch after this list)
  • Some choice quotes:
    • If you’re going to build your own tools, please please test them. That way you can convince other people that they work.
    • I use my own test harness [tape]. Everyone should write their own test harness, but if you like you can use mine, it’s pretty good.
    • The people who know what they’re doing can make a little module that encapsulates that knowledge, then everyone can benefit.
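
Putting the pull request tip together, the pipeline looks roughly like this. The repository and PR number below are made up for illustration, and the exact curl flags are my guess rather than anything from the talk:

  # fetch a (hypothetical) pull request as a patch and apply it
  $ curl -sL https://github.com/some-user/some-repo/pull/123.patch | git am

  # or, if the patch text is on the clipboard instead
  $ xclip -o | git am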
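
And here's how I imagine the timesheet tools fitting together. Every tool name below is invented – I didn't catch the real ones – but the shape is the point: each tool reads and writes JSON, does exactly one job, and the tools at the end of the pipe are swappable:

  # hypothetical tool names throughout
  $ timesheet-entry 'fix login page' --task AB-1 >> entries.json
  $ timesheet-combine --by-task < entries.json | timesheet-save october.json
  $ timesheet-combine --by-task < entries.json | timesheet-email boss@example.com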
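
The browserify/tape demo went something like the session below. The test file is a minimal tape test of my own rather than the one from the demo; the final line is the browserify-into-testling pipe from the testling README, which runs the bundled tests in a browser and reports TAP back to the console:

  $ cat test.js
  var test = require('tape');

  test('addition', function (t) {
      t.plan(1);
      t.equal(1 + 2, 3);
  });
  $ node test.js
  TAP version 13
  # addition
  ok 1 should be equal

  1..1
  # tests 1
  # pass  1

  # ok
  $ browserify test.js | testling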
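
As for accountdown: this is from (possibly faulty) memory of its README rather than from my notes, so treat the details as an assumption, but the gist was a core module that delegates credential handling to tiny plugin modules like accountdown-basic:

  $ cat users.js
  var level = require('level');
  var accountdown = require('accountdown');

  // the username/password logic lives in its own little module
  var users = accountdown(level('./users.db'), {
      login: { basic: require('accountdown-basic') }
  });

  users.create('substack', {
      login: { basic: { username: 'substack', password: 'beep boop' } },
      value: { bio: 'i write modules' }
  }, function (err) {
      if (err) throw err;
      users.verify('basic', { username: 'substack', password: 'beep boop' },
          function (err, ok, id) {
              console.log(ok ? 'welcome, ' + id : 'no dice');
          });
  });
  $ node users.js
  welcome, substack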

Miscellaneous Tips

  • Check out LevelDB, a persistent key-value store (available in node via the level module) – it's easy to use and very fast (see the sketch after this list)
  • If you have a project with some usage examples, you can bootstrap tests for that project by copying all the examples into a new “tests” folder and converting them into tests.
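
A minimal sketch of the LevelDB tip, assuming the level module from npm (this example is mine, not one from the talk). The data ends up on disk in ./mydb, so it survives restarts:

  $ npm install level
  $ cat level-demo.js
  var level = require('level');
  var db = level('./mydb');   // key-value store, persisted to ./mydb

  db.put('beep', 'boop', function (err) {
      if (err) throw err;
      db.get('beep', function (err, value) {
          if (err) throw err;
          console.log(value);   // => boop
      });
  });
  $ node level-demo.js
  boop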

Writing Better CSS

Toni Barret (@toni_b) tells us that good CSS is:

  • Maintainable
  • Understandable
  • Well-planned

In particular, good CSS need not be highly optimised for performance. How to do it:

  • Plan
    • Decide which browsers you will support
      • And decide what “support” actually means. For example you may have different levels of support: “full”, “mainly works”, “functional but not pretty”, “only if time”
    • Decide on a set of standards
      • For example, “don’t nest selectors more than three levels deep” is a good rule to have: deeper nesting often leads to bad selectors…
      • …where a “bad selector” is one which relies on the HTML structure, is too specific, and/or is not reusable – something like .content div ul li a, which breaks as soon as the markup changes
    • Decide which tools you will use and set them up from the start
      • Concatenation, minification, pre-compilers and so on
      • You don’t want to retro-fit this stuff later on if you don’t have to
  • Use Comments
    • Don’t avoid commenting CSS: you comment your code when it needs it, and yet people rarely do the same for CSS
    • In particular, that big Table of Contents comment at the top (the kind that comes with Bootstrap and other libraries) is very useful: if you are using one, make sure you take the time to maintain it
  • Divide and group your CSS logically into files and sections
    • avoid the common everything-is-appended-to-the-bottom-of-main.css problem
    • and avoid the each-developer-has-their-own-css-section-with-different-styles problem
  • Use a “pattern library”
    • Exactly what this means went a bit over my head, but check out http://pea.rs/ for an example; I think it also provides a way to bootstrap your own pattern library
    • In a big team or on a big/long project, this will make development easier and lead to a more consistent style

That’s it!

WDCNZ Lanyard
