Softwire Blog


Better CSS-only tabs, with :target


2 November 2016

CSS-only tabs are a fun topic, and :target is a delightfully elegant declarative approach, except for the bit where it doesn’t work. The #hash links involved jump you around the page by default, and disabling that jumping breaks :target in your CSS, due to poorly defined behaviour in the spec, and corresponding bugs in every browser.

Or so we thought. Turns out there is a way to make this work almost perfectly, despite these bugs, and get perfect CSS-only accessible linkable history-tracking tabs for free.

What? Let’s back up.

What’s :target?

:target is a CSS pseudo-class that matches an element if its id is the hash part of the URL. If you're on http://example.com#my-element, for example, div:target would match <div id="my-element"> and not <div id="abc">.

This ties perfectly into <a href="#my-element">. All of a sudden you can add links that apply a style to one part of the page on demand, with just simple CSS.

There are lots of fun uses for this, but an obvious one is tabs. Click a link, the page updates, and that part of the page is shown. Click another link, that part of the page disappears, and a different part is shown. All with working history and linkability, because the URL is just updating like normal.

Implementation is super easy, and works in 98% of browsers, right back to IE9. It looks like this:

<!DOCTYPE html>
<html>
<head>
  <style>
    div:not(:target) {
      display: none;
    }
    
    div:target {
      display: block;
    }
  </style>
</head>
<body>  
  <a href="#tab-1">Tab One</a>
  <a href="#tab-2">Tab Two</a>

  <div id="tab-1">
    Tab one contents
  </div>
  
  <div id="tab-2">
    Tab two contents
  </div>  


</body>
</html>

Hop, Skip, and Jump

If you try to do this in any page longer than the height of your browser though, you’ll quickly discover that this can jump around, which can be annoying. Try http://output.jsbin.com/melama — you’ll have to scroll back up after hitting each tab. It’s functionally fine, and in some cases this behaviour might be ok too, but for many it isn’t exactly what you want.

This is because #hash links not only update the URL, but also scroll down to the element with the corresponding id on the page. We can fight that behaviour though, with a little extra JavaScript.

JavaScript, in our beautiful CSS-only UI? Sadly yes, but it’s important to understand why this is still valuable. With a tiny bit of JS on top we still retain the benefits that a CSS-based approach gives us (page appearance independent of any logic, defined declaratively, completely accessible, and with minimal page weight) and then progressively enhance it to improve the UI polish only when we can.

In environments where JavaScript isn’t available the tabs still work fine; they swap, and the browser jumps down to the tab you’ve opened. When you do have JavaScript, we can enhance them further, to tune the exact behaviour and polish that as we like.

Our JavaScript enhancement looks something like this:


var hashLinks = document.querySelectorAll("a[href^='#']");
[].forEach.call(hashLinks, function (link) {
  link.addEventListener("click", function (event) {
    // Disable jumping around and URL updates
    event.preventDefault();
    // Update the URL ourselves instead
    history.pushState({}, "", link.href);
  });
});

This disables the normal behaviour on every #hash link, and instead manually updates the hash and the browser history. Unfortunately though it doesn’t work. The URL updates, but the :target selector matching doesn’t!

You end up with #tab-1 in the URL, but <div id="tab-1"> doesn't match :target. The spec doesn't actually cover this case, and every single browser currently has this crazy behaviour (although both the spec authors and browser vendors are looking at fixing this). Game over.

Two steps forward, one step back

We can beat this. Right now we can disable jumping around, but it breaks :target. We need a way to disable jumping around, but still update the hash in a way that :target will listen to.

There are a few ways we might be able to work around this. Ian Hansson has a clever trick where you position: fixed and hide the targeted element, to control the scroll, but depend on its :target status with sibling selectors. Meanwhile Chris Coyier has suggested capturing the scroll position and resetting it, and I think there might be a route through if you change the element's id to something else and back at just the right time too. These are all very hacky though; it'd be nice to come up with a way of just fixing the JS we want to use above, so it actually works properly.

Fortunately a helpful comment from Han mentions some interesting current browser behaviour that might help us out, while examining compatibility issues with fixing this for real:

I’ve found a good reason to believe that virtually no webpages would be broken by the “breaking” change of updating :target selectors upon pushState: though they are currently not updated, if you however hit Back and then Fwd again, then :target rules do get applied

Moving in the browser history gives us the :target behaviour we (and all sane people) are expecting. If we can find a way to transparently go back and forward without breaking the user's experience, we can fix :target.

From there it's easy. We can build a simple workaround in JavaScript to do exactly this, and save the day:

var hashLinks = document.querySelectorAll("a[href^='#']");
[].forEach.call(hashLinks, function (link) {
  link.addEventListener("click", function (event) {
    // Disable the jump-to-element behaviour, and update the URL ourselves
    event.preventDefault();
    history.pushState({}, "", link.href);

    // Push the same hash again, then go back: moving through the history
    // makes the browser re-apply :target for the current hash
    history.pushState({}, "", link.href);
    history.back();
  });
});

Here we add the hash to the history twice, and immediately remove it once.

This isn’t perfect, and it would be nice if :target worked properly without these workarounds. As-is though, this gives perfect behaviour, with the only downside being that the Forward button in the user’s browser isn’t displayed as disabled, as they might expect. Actually going forward won’t do anything though (it’s the same URL they’re already on), and your users are not typically going to notice this.

This will keep working even if/when :target is fixed, and you’ll then be able to remove the extra workaround here, to lose that slightly messy extra behaviour. If this does break, or any users don’t have JavaScript, they’ll get working tabs with jumping-to-tab behaviour. Slightly annoying, but perfectly usable.

So what?

This lets you build amazingly simple & effective CSS-only tabs.

Minimal clean markup & CSS, perfect accessibility, working in IE9, with shareable tab URLs and a fully working history for free. Enjoy!

Full code:

<!DOCTYPE html>
<html>
<head>
  <style>
    div:not(:target) {
      display: none;
    }
    
    div:target {
      display: block;
    }
    
    /* Make the divs tall, so the page would jump if the JS fix wasn't working */
    div {
      height: 100vh;
    }
  </style>
</head>
<body>  
  <a href="#tab-1">Tab One</a>
  <a href="#tab-2">Tab Two</a>

  <div id="tab-1">
    Tab one contents
  </div>
  
  <div id="tab-2">
    Tab two contents
  </div>
  
  <script>
  // Stop href="#hashtarget" links jumping around the page
  var hashLinks = document.querySelectorAll("a[href^='#']");
  [].forEach.call(hashLinks, function (link) {
    link.addEventListener("click", function (event) {
      event.preventDefault();
      history.pushState({}, "", link.href);
      history.pushState({}, "", link.href);
      history.back();
    });
  });
  </script>
</body>
</html>

Building a Server-Rendered Map Component – Part 2


19 September 2016, by Tim Perry

Part 2: How to use client-side libraries like Leaflet, in Node.

As discussed in Part One: Why?, it’d be really useful to be able to take an interesting UI component like a map, and pre-render it on the server as a web component, using Server Components.

We don’t want to do the hard mapping ourselves though. Really, we’d like this to be just as easy as building a client-side UI component. We’d like to use a shiny mapping library, like Leaflet, to give us all the core functionality right out of the box. Unfortunately though, Leaflet doesn’t run server-side.

This article's going to focus on fixing that so you can use Leaflet with Server Components, but you'll hit the same problems (and need very similar fixes) if you're doing any other Node-based server rendering, including with React. The JS ecosystem right now is not good at isomorphism, but with a few small tweaks you can transform any library you like to run anywhere.

Let's focus on Leaflet for now. It doesn't run server-side, because there aren't that many people seriously looking at rendering nice UIs outside a browser, so JS libraries are pretty trigger-happy about making big browser-based assumptions. Leaflet expects a few things that don't fit neatly outside a browser:

  • Global window, document and navigator objects.
  • A live element in an HTML DOM for it to be inserted into.
  • A Leaflet <script> tag on the page, so it can find its own URL, and use that to autodetect the path to the Leaflet icons.
  • To export itself just by adding an 'L' property to the window object.

All of these things need tricky fixes. Just finding these issues is non-trivial: you need to try and use the library in Node, hit a bug, solve the bug, and repeat, until you get the output you're expecting.

Leaflet is a relatively hard case though. Most libraries aren’t quite so involved in complex DOM interactions, and just need the basic globals they expect injected into them.

So, how do we fix this?

Managing Browser Globals

If you npm install leaflet and then require("leaflet"), you'll immediately see our first issue:

> ReferenceError: window is not defined

Fix this one, and we’ll hit a few more at require() time, for document and navigator too. We need to run Leaflet with the context it’s expecting.

It would be nice to do that by having a DOM module somewhere that gives us a document and a window, and using those as our globals. Let’s assume we’ve got such a module for a moment. Given that, we could prefix the Leaflet module with something like:

var fakeDOM = require("my-fake-dom");
var window = fakeDOM.window;
var document = fakeDOM.document;
var navigator = window.navigator;
[...insert Leaflet code...]

(Instead we could just define browser globals as process-wide Node globals and leave the Leaflet source untouched, but this isn't good behaviour, and it'll come back to bite you very quickly if you're not careful.)

Doing something like this will get you much closer. With any reasonable DOM stub you should be able to get Leaflet successfully importing here. Unfortunately though, this static approach fails because of a fundamental difference between browser and Node rendering: on the server, we have to support multiple DOM contexts in one process, so we need to be able to change the document and window.

We can still pull this off though, just taking this a step further with something like:

module.exports = function (window, document) {
  var navigator = window.navigator;
  [...insert Leaflet code...]
}

Now this is a Node module that exports not Leaflet itself, but a factory function to build Leaflet for a given window and document, provided by the code using the library. When called, though, this doesn't actually return anything, as you might reasonably expect; instead it creates window.L, as is common for browser JS libraries. In some cases that's probably ok, but in my case I'd rather leave window alone, and grab the Leaflet instance directly, by adding the below to the end of the function, after the Leaflet code:

return window.L.noConflict();

This tells Leaflet to remove itself as a global, and just give you the library as a reference directly.

With this, require("leaflet") now returns a function, and passing that a window and document gives you a working, ready-to-use Leaflet.
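
Putting those pieces together, the whole wrapped module is just a thin shell around the untouched Leaflet source. As a sketch (the file name is hypothetical):

// leaflet-wrapped.js: the factory from above, with the
// noConflict return added after the Leaflet source
module.exports = function (window, document) {
  var navigator = window.navigator;
  [...insert Leaflet code...]
  return window.L.noConflict();
};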

Emulating the expected DOM

We’re not done though. If you want to use this Leaflet, you might define a Server Component like:

var LeafletFactory = require("leaflet");
var components = require("server-components");

var MapElement = components.newElement();
MapElement.createdCallback = function (document) {
  var L = LeafletFactory(new components.dom.Window(), document);
  var map = L.map(this).setView([41.3851, 2.1734], 12);
  L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    maxZoom: 19,
  }).addTo(map);
};
components.registerElement("leaflet-map", {prototype: MapElement});

This should define a component that generates the HTML for a full working map when rendered. It doesn't. The problem is that Leaflet here is given a DOM node to render into ('this', inside the component), and it tries to automatically render at the appropriate size. This isn't a real browser though: we don't have a screen size and we're not doing layout (that's why it's cheap), so everything actually has zero height and width.

This isn't as elegant a fix, but it's an unavoidable one in any server-rendering approach I think: you need to pick a fixed size for your initial render, and nudge Leaflet to use that. Here that's easy: you just make sure that before the map is created you add:

this.clientHeight = 500;
this.clientWidth = 500;

And with that, it works.

This fakes layout, as if the browser had decided that this was how big the element is. You can render like this at a fixed size for lots of applications, and potentially add client-side rendering on top to resize too if you want.
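
In context, that just means the component's createdCallback from the earlier example starts with two extra lines. A sketch (the 500x500 size is an arbitrary choice):

MapElement.createdCallback = function (document) {
  // Fake the layout step: pretend the browser decided this element is 500x500
  this.clientHeight = 500;
  this.clientWidth = 500;

  var L = LeafletFactory(new components.dom.Window(), document);
  var map = L.map(this).setView([41.3851, 2.1734], 12);
  L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    maxZoom: 19,
  }).addTo(map);
};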

With that added, you can take this component, render it with a cheeky components.renderFragment("<leaflet-map></leaflet-map>") and be given working HTML for a lovely static map you can send straight to your users. Delightful.

There is still one last step required if you want to take this further. Leaflet by default includes a set of icons, and uses the 'src' of its script tag in the page to automatically work out the URL to these icons. This is a bit fragile in quite a few ways, including in this environment, and if you extend this example to use any icons (e.g. adding markers), you'll find your icons don't load.

This step's very simple though: you just need to set L.Icon.Default.imagePath appropriately. If you want to do that in a nice portable Server Component, that means:

var componentsStatic = require("server-components-static");
var leafletContent = componentsStatic.forComponent("leaflet");
L.Icon.Default.imagePath = leafletContent.getUrl("images");

This calculates the client-facing URL you’ll need that maps to Leaflet’s images folder on disk (see Server-Components-Static for more details).

Making this (more) maintainable

There’s one more step though. This is a bit messy in a few ways, but particularly in that we have to manually fork and change the code of Leaflet, and maintain that ourselves in future. It would be great to automate this instead, to dynamically wrap normal Leaflet code, without duplicating it. With Sandboxed-Module we can do exactly that.

Sandboxed-Module lets you dynamically hook into Node’s require process, to transform module code however you like. There’s lots of somewhat crazy applications of this (on-require compilation of non-JS languages, for example), but also some very practical ones, like our changes here.

There’s potentially a very small performance hit on startup from this for the transformation, but for the rest of runtime it shouldn’t make any difference; it hooks into the initial file read to change the result, and then from that point on it’s just another Node module.

So, what does this look like?

var SandboxedModule = require('sandboxed-module');
module.exports = SandboxedModule.require('leaflet', {
  sourceTransformers: {
      wrapToInjectGlobals: function (source) {
        return `
        module.exports = function (window, document) {
          var navigator = window.navigator;
          ${source}
          return window.L.noConflict();
        }`;
      }
  }
});

That’s it! Your project can now depend on any version of Leaflet, and require this wrapped module to automatically get given a Node-compatible version, without having to maintain your own fork.

This same approach should work for almost any other library that you need to manage server-side. It's not perfect (if Leaflet starts depending on other browser globals, things may break), but it should be much easier to manage and maintain than copying Leaflet's code into your project wholesale.

Hopefully in future more projects will improve their native support for running in other environments, and this will go away, but in the meantime there are some relatively simple changes you can make to add Node support to even relatively complex client-side libraries.


Let’s stop there for now. In the next post, we’ll take a proper look at a full working map component, complete with configurability, static content and marker support, and see what you can do to start putting this into action yourself. Can’t wait? Check out https://github.com/pimterry/leaflet-map-server-component for the map component codebase so far.

This post originally appeared on Tim Perry’s personal blog. 

Building a Server-Rendered Map Component – Part 1


12 September 2016, by Tim Perry

Part 1: Why do we need better maps?

Maps are a standard tool in our web UI toolkit; they’re an easy way to link our applications to the real world they (hopefully) involve. At the same time though, maps aren’t as easy to include in your web site as you might want, and certainly don’t feel like a native part of the web.

Critically, unlike many other UI components, you normally can't just serve up a map with plain HTML. You can easily serve up HTML with videos embedded, or complex forms, or intricate SVG images and animations, or MathML, but maps don't quite make the cut. (Google Maps does actually have an Embed API that's closer to this, but it's very limited in features, and it's really just bumping the problem over their fence).

If you want to throw a map on a page, you're going to need to pick out a library, write and serve up scripts to imperatively define and render your map, and bring in the extra complexity (and page weight and other limitations) of all the supporting code required to put that into action. That's not really that much work, and we've got used to making fixes like these everywhere. It's a far cry though from the declarative simplicity of:

<video src="cats.mp4"></video>

Maps are one small example of the way that the HTML elements we have don’t match what we’re using day to day, and the workarounds you have to use to get closer.

How can we show a map on a page with the same simplicity and power?

Web Components

What we really want is to add <map lat=1.2 long=3.4 zoom=5> to our pages, get a great looking map, and be done with it. That's the goal.

In principle, web components handle this nicely. You define a custom element for your map, give it its own rendering logic and internally isolated DOM, and then just start using it. That’s really great, it’s a fantastic model, and in principle it’s perfect. Clicking together isolated encapsulated UI elements like this is the key thing that every shiny client-side framework has in common, and getting that as a native feature of the web is incredible.

Support is terrible though. Right now, it’s new Chrome only, so you’re cutting out 50% of your users right off the bat, and it’s going to be a very long time until you can safely do this everywhere. You can get close with lots of polyfills, and that’s often a good choice, but there is extra weight and complexity there, and you really just want a damn <map> — not to spend hours researching the tradeoffs of Polymer or the best way to polyfill registerElement outside it.

Even once this works, you still have the extra front-end complexity that results, and the associated maintenance. You have to serve up your initial content, and then re-render in your JavaScript before some of it is visible. You quickly end up with content that is invisible to almost all automated scripts, scrapers, devices, and search spiders. Depending on JavaScript doesn’t just cut off the small percentage of users who turn it off.

Mobile devices make this even more important. With their slow connections and slow rendering, mobile users are going to have to wait longer to see your page when you render things client-side. JavaScript is not at all resilient to connection issues either: any device that drops a script required for basic rendering won't show any of that rendered content, while simple HTML pages continue with whatever's available. New Relic's mobile statistics show 0.9% of HTTP requests from mobile devices failing on the network. That failure rate builds up quickly when applied to every resource in your page! With just that 0.9% failure rate and 10 resources to load, around 9% of your users (1 − 0.991^10 ≈ 8.6%) aren't going to see your whole page.

Web components are a great step in the right direction, and are exactly the right shape of solution. Web components that progressively enhance already meaningful HTML content are even better, and can mitigate many of the issues above. In many cases like maps though, your component is rendering core page content. Web components do give you a superb way to do that client-side, but rendering your content client-side comes with heavy costs. Worse, it's frequently not a real necessity. A static map would give all the core functionality (and could then be enhanced with dynamic behaviour on top).

What if we could get these same benefits, without the costs?

Server Components

Server Components is an attempt to improve on our current approach to rendering the web, with the same design as web components, but renderable on the server-side.

Without Server Components, if you wanted to quickly show a map of a location sprinkled with pins, or shaded areas, or whatever else, you couldn’t easily do so without serving up a mountain of JavaScript. Instead, I want to write <map lat=1.2 long=3.4 zoom=5> in my HTML and have my users instantly see a map, without extra client-side rendering or complexity or weight.

This sounds like a pipe dream, but with Server Components, we can do exactly that. We can define new HTML elements, write simple rendering code in normal vanilla JavaScript that builds and transforms the page DOM, and run all of that logic on the server to serve up the resulting ready-to-go HTML. All the client-side simplicity and power, without the costs.
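
As a taste of what that looks like, here's a minimal sketch using the server-components library (the <my-greeting> element is a made-up example, and I'm assuming renderFragment returns a promise of the rendered HTML):

var components = require("server-components");

// Define a new element, with rendering logic that runs on the server
var Greeting = components.newElement();
Greeting.createdCallback = function (document) {
  this.textContent = "Hello from the server!";
};
components.registerElement("my-greeting", {prototype: Greeting});

// Render a fragment of HTML, ready to send straight to users
components.renderFragment("<my-greeting></my-greeting>")
  .then(function (html) { console.log(html); });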


Let’s stop there for now. In the next post, we’ll take a look at exactly how you’d implement this, and what you can do to start putting it into action yourself. Can’t wait? Check out https://github.com/pimterry/leaflet-map-server-component for the map component so far.

This post originally appeared on Tim Perry's personal blog.

Using git-tfs in anger


31 August 2016

A while ago I wrote about how no-one should ever use TFS, ever. While I was unfortunately forced to use it, I experimented with git-tfs to see if it would ease my pain, and this is what I found.

Why git-tfs

It seems to be the main git-to-TFS bridge right now. There's another option, git-tf, but that one hasn't been updated since 2013. git-tfs appears to be an active project.

The Good

  • Local version control: local commits and local branches. Proper branches (not TFS folder-branches), which are lightning-fast to switch between and easy to cherry-pick between.
  • Rebase temporary dev branches back onto master instead of clumsily re-applying old shelvesets.
  • If more than one person is using it, push and pull git branches between yourselves, and keep developing even when the central TFS server (inevitably) goes down.
  • Separate source control from Visual Studio: generally a good idea in my opinion, and particularly useful if VS is already running slowly.
  • Reliable version control: TFS get-latest is a bit buggy, but git-tfs seemed to work reliably. Perhaps it abstracts away the tedium of always explicitly telling TFS to get the latest version of all files, not just some.

The Bad

  • You can only checkout one directory at a time so you need separate checkouts for different root-level projects in the same repo. This is pretty minor though.
  • Visual Studio gets a bit confused and integration isn't as good. Keep it happy by using msys git (Git for Windows), not cygwin git, if you can.
  • Changing between TFS and git-tfs, or just using visual studio with a git-tfs checkout, leads to corrupt TFS workspaces.
  • Doing things differently from the rest of the team means you lose out on their TFS knowledge and support, for what it’s worth.
  • You won’t be able to add projects to the solution properly.
  • Visual Studio will constantly try to remove the tfs element from the .sln file, which will break TFS integration for everyone else if you commit it.
  • If you need to keep local changes, e.g. for local environment config, then it’s a pain to stash the changes all the time (you need a clean checkout to rebase).
  • It's just really slow – mainly because my team's TFS server was slow, but git-tfs was even slower.

Verdict

For me, unfortunately the advantages don't outweigh the disadvantages, and I probably wouldn't use it again. It's a bit of a pain to set it up and to use different tools to the rest of the team. On my next TFS project (and I'll do everything in my power to make sure there won't be one) I'll just embrace the shelveset workflow instead.

What I might try is just to git init a local repo in my TFS checkout and get local branching that way, adding the .git folder etc to the local ignore list, although it might lead to some manual effort keeping branches in sync with the remote.

Setting up

If you do decide to use it, this is what I did to set it up:

  1. Install git, and configure it:
$ git config --global core.autocrlf true # see https://help.github.com/articles/dealing-with-line-endings/
$ git config --global core.filemode false
  2. Install git-tfs: http://git-tfs.com/
  3. Clone the repo:
$ git tfs clone http://tfs:8080/tfs/DefaultCollection $/some_project -c XXX

Where XXX is a recent changeset – you'll only have history available from this changeset, but the further back you go the longer the clone will take.

  4. Open in Visual Studio!

Don’t bother to un-map your workspace: the Visual Studio TFS integration will go away when you open your new “git” project. And it will be easier to go back to pure TFS if you want to.

Everyday Use

Always rebase when pulling:
$ git tfs pull --rebase

To map local commits to checkins one to one, use rcheckin instead of checkin or checkintool.

$ git tfs rcheckin --no-merge

Be careful though: this makes a number of commits in a row, and because TFS is so sloooow they can be nearly a minute apart and trigger more than one consecutive CI build. If the build time is slow this might hold other people up from committing, so keep an eye on it: you can always cancel the first build(s) if this happens. You might just want to squash up the commits to save time committing!

If your project requires certain custom notes fields to be supplied, use the -n (--notes) argument to supply it from the command line:

$ git tfs rcheckin --no-merge -n 'JiraNumber' 'AA-123'
