Softwire Blog


A quick guide to JavaScript streams


16 January 2017, by

In this short article, I’ll lead you through a crash course on what streams are, and how you can harness them to make your code more understandable.

If you want to try out any of the examples here, they will work with the excellent library bacon.js.

So what is a stream?

A stream is a sequence of objects, but unlike other sequences, it’s handled lazily (so the items aren’t retrieved until they’re required). This means that unlike a normal List, it doesn’t have to be finitely long.

You can put anything in your stream: characters, numbers, or even user events (like keystrokes or mouse presses). Once you’ve got your sequence, you can manipulate it in lots of different ways.

Streams can also completely remove the requirement to think about time, meaning fewer race conditions and therefore more declarative code. This removal of the need to represent time in code also results in architecture diagrams which are easy to draw and reason about.

Manipulating your stream

Now that we know the purpose of a stream, how do we manipulate it? What follows is a selection of functions that can be called on streams, along with short examples of their effects. It should provide everything you need to start using streams in your own programs.

map(f)

Map takes a function f. Then, for every item that appears in the stream, it applies the function f to it and produces a new stream with the new values.
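
For example, with bacon.js (a minimal sketch, assuming the baconjs npm package):

var Bacon = require("baconjs");

// Double every value: the output stream emits 2, 4, 6
Bacon.fromArray([1, 2, 3])
  .map(function (x) { return x * 2; })
  .log();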

scan(f)

Scan takes a function f, then for each value in the stream applies f to the previous result and the current value. This is similar to fold, but the output is a stream of all the intermediate values, rather than just the final value that fold would provide.
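
A sketch with bacon.js; note that bacon.js’s scan also takes a seed value to use as the first result:

var Bacon = require("baconjs");

// Running total: emits 0, 1, 3, 6
Bacon.fromArray([1, 2, 3])
  .scan(0, function (acc, x) { return acc + x; })
  .log();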

filter(f)

Filter takes a function f and evaluates it against each value in the stream: if it returns true, the value is passed into the output stream, otherwise the value is discarded.
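
For example:

var Bacon = require("baconjs");

// Keep only the even values: emits 2, 4
Bacon.fromArray([1, 2, 3, 4])
  .filter(function (x) { return x % 2 === 0; })
  .log();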

merge

There are several ways to combine two streams. Using merge will take values from both streams and output values (in the order they are received) into the output stream.

This differs from concat, which takes two streams and returns all of the first stream before returning all of the second stream.
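
A sketch of the difference with bacon.js, using timed streams so the ordering is visible (run one .log() at a time):

var Bacon = require("baconjs");

var a = Bacon.sequentially(100, [1, 3]); // 1 at 100ms, 3 at 200ms
var b = Bacon.sequentially(100, [2, 4]); // 2 at 100ms, 4 at 200ms

a.merge(b).log();  // values in arrival order, e.g. 1, 2, 3, 4
a.concat(b).log(); // all of a, then all of b: 1, 3, 2, 4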

sampledBy(f)

SampledBy takes a function f and two streams (A and B). The function f is then called whenever a value is provided by stream A, passing in the last value from each stream. In the example below, a value is created in the output stream whenever a value occurs in the top stream. The output is then created from the value in the bottom stream.
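
A sketch with bacon.js, where the timings are purely illustrative:

var Bacon = require("baconjs");

var values = Bacon.sequentially(100, [1, 2, 3]).toProperty(); // bottom stream (B)
var ticks = Bacon.sequentially(250, ["tick", "tick"]);        // top stream (A)

// f receives the latest value from each stream whenever A fires:
// emits roughly 2 (at 250ms), then 3 (at 500ms)
values.sampledBy(ticks, function (value, tick) { return value; }).log();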

slidingWindow(n)

SlidingWindow takes an integer n and returns the last n values from the stream as an array each time a new value occurs in the input stream.
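
For example:

var Bacon = require("baconjs");

// Windows of up to 2 values: emits [], [1], [1,2], [2,3], [3,4]
// (a minimum window size can be passed as a second argument)
Bacon.fromArray([1, 2, 3, 4])
  .slidingWindow(2)
  .log();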

Stream Summary

So in summary, you can use stream functions to manipulate streams in a multitude of different ways. You should use streams to remove the time component from code, giving you code that is simpler to understand and debug.

This post first appeared on Chris Arnott’s blog.

Using SQLite for dev Node.js environments


29 November 2016, by

This post will explain how to get one-click databases working in Node.js.

Problems with your zero to hero

I was recently working on a Node.js project that was set up to use a MySQL database. This was fine for production, but meant that people joining the project had several manual steps to get it working:

  • Install a MySQL server
  • Set up a MySQL database
  • Update the Node.js config to point at the database.

This isn’t too difficult to do, but I wanted to reduce the ramp-up time for getting the project up and running on a dev machine.

This meant moving to a database that could configure itself from whatever was checked into the codebase.

One option would have been to translate the steps above into code so that running a command would create a SQL database that was configured to be used by the code.

This would have been overkill, as there are much better options.

I chose instead to configure dev and test environments to use a local SQLite database.

Local SQLite database

SQLite support can be added as an npm dependency (the sqlite3 package), so that it is installed as part of dependency management when a new developer checks out the code.

"dependencies": {
  "sqlite3": "3.1.4"
}

Now that sqlite3 has been added as a dependency, it can be used by the ORM of our choice. In this example, I’ve used Sequelize as the ORM, as, once set up, it allows easy mapping of JavaScript objects to database objects, and has built-in support for database migrations.
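
For illustration, a model mapping can be as simple as this sketch (the model and fields here are invented, and the connection details are inlined rather than loaded from config as below):

var Sequelize = require('sequelize');
var sequelize = new Sequelize(null, null, null, {
  dialect: 'sqlite',
  storage: 'data/dev-db.sqlite3'
});

// Maps JavaScript User objects to a Users table
var User = sequelize.define('User', {
  name: Sequelize.STRING,
  email: Sequelize.STRING
});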

The next thing to do is to have a configuration file which details how to connect to the database in different environments:

module.exports = {
  development: {
    dialect: 'sqlite',
    storage: 'data/dev-db.sqlite3'
  },
  test: {
    dialect: 'sqlite',
    storage: 'data/test-db.sqlite3'
  },
  production: {
    username: process.env.RDS_USERNAME,
    password: process.env.RDS_PASSWORD,
    database: 'ebdb',
    port: process.env.RDS_PORT,
    host: process.env.RDS_HOSTNAME,
    dialect: 'mysql'
  }
};

This config.js file shows that in dev and test mode, we’ll connect to the SQLite database stored in the file “data/<env>-db.sqlite3”.

In production, it instead connects to an RDS instance (the production machine in this project was running on AWS), with the connection details stored in the environment on the cloud machine (rather than being checked into the code).

Now we need to set up Sequelize to use the correct database when it is initialised. This is done in the index.js file under the models folder:

var path = require('path');
var Sequelize = require('sequelize');
const basename = path.basename(module.filename);
const env = process.env.NODE_ENV || 'development';
const config = require(path.join(__dirname, '..', 'config', 'config.js'))[env];
const db = {};
var sequelize = new Sequelize(config.database, config.username, config.password, config);

This code fetches the relevant section of the config.js file, and uses it to initialise the ORM. What’s not shown above is the code that loads all the models into Sequelize. We’ll leave that as an exercise for the reader.

As you can see, this file uses an environment variable to determine which mode we are running the code in. All that’s required to run in production is to set NODE_ENV to “production” in the environment variables.
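
For example, assuming the standard npm start script, a production launch looks like:

NODE_ENV=production npm start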

The app can now be started using “npm start”, and will run against the dev database by default.

Migrations

Just a quick note here on using Sequelize for database migrations. You can hook your migrations into the “npm start” command above by adding the following to your package.json file:

"scripts": {
    "prestart": "sequelize db:migrate -c config/config.js"
}
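
Each migration is just a file in the migrations folder exporting up and down functions. A minimal sketch (the table and columns here are invented for illustration):

module.exports = {
  up: function (queryInterface, Sequelize) {
    // Apply the change: create a Users table
    return queryInterface.createTable('Users', {
      id: { type: Sequelize.INTEGER, primaryKey: true, autoIncrement: true },
      name: { type: Sequelize.STRING }
    });
  },
  down: function (queryInterface, Sequelize) {
    // Revert the change
    return queryInterface.dropTable('Users');
  }
};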

Summary

So what have we learnt?

  • You should try to ensure that setting up a project is as easy as possible for new developers
  • You can use environment variables to separate out dev/test/production environments (and default to development).
  • This method can work, even if you are deploying to the cloud.
  • You can easily add database migrations into the start command of your application.

This post originally appeared on Chris Arnott’s personal blog.

Better CSS-only tabs, with :target


2 November 2016, by

CSS-only tabs are a fun topic, and :target is a delightfully elegant declarative approach, except for the bit where it doesn’t work. The #hash links involved jump you around the page by default, and disabling that jumping breaks :target in your CSS, due to poorly defined behaviour in the spec, and corresponding bugs in every browser.

Or so we thought. Turns out there is a way to make this work almost perfectly, despite these bugs, and get perfect CSS-only accessible linkable history-tracking tabs for free.

What? Let’s back up.

What’s :target?

:target is a CSS pseudo-class that matches an element if its id is the hash part of the URL. If you’re on http://example.com#my-element, for example, div:target would match <div id="my-element"> and not <div id="abc">.

This ties perfectly into <a href="#my-element">. All of a sudden you can add links that apply a style to one part of the page on demand, with just simple CSS.

There’s lots of fun uses for this, but an obvious one is tabs. Click a link, the page updates, and that part of the page is shown. Click another link, that part of the page disappears, and a different part is shown. All with working history and linkability, because the URL is just updating like normal.

Implementation is super easy, and works in 98% of browsers, right back to IE9. It looks like this:

<!DOCTYPE html>
<html>
<head>
  <style>
    div:not(:target) {
      display: none;
    }
    
    div:target {
      display: block;
    }
  </style>
</head>
<body>  
  <a href="#tab-1">Tab One</a>
  <a href="#tab-2">Tab Two</a>

  <div id="tab-1">
    Tab one contents
  </div>
  
  <div id="tab-2">
    Tab two contents
  </div>  


</body>
</html>

Hop, Skip, and Jump

If you try to do this in any page longer than the height of your browser though, you’ll quickly discover that this can jump around, which can be annoying. Try http://output.jsbin.com/melama — you’ll have to scroll back up after hitting each tab. It’s functionally fine, and in some cases this behaviour might be ok too, but for many it isn’t exactly what you want.

This is because clicking a #hash link not only updates the URL, but also scrolls down to the element with the corresponding id on the page. We can fight that behaviour though, with a little extra JavaScript.

JavaScript, in our beautiful CSS-only UI? Sadly yes, but it’s important to understand why this is still valuable. With a tiny bit of JS on top we still retain the benefits that a CSS-based approach gives us (page appearance independent of any logic, defined declaratively, completely accessible, and with minimal page weight) and then progressively enhance it to improve the UI polish only when we can.

In environments where JavaScript isn’t available the tabs still work fine; they swap, and the browser jumps down to the tab you’ve opened. When you do have JavaScript, we can enhance them further, to tune the exact behaviour and polish that as we like.

Our JavaScript enhancement looks something like this:


var hashLinks = document.querySelectorAll("a[href^='#']");
[].forEach.call(hashLinks, function (link) {
  link.addEventListener("click", function (event) {
    // Disable jumping around and URL updates
    event.preventDefault();
    // Update the URL ourselves instead
    history.pushState({}, "", link.href);
  });
});

This disables the normal behaviour on every #hash link, and instead manually updates the hash and the browser history. Unfortunately though it doesn’t work. The URL updates, but the :target selector matching doesn’t!

You end up with #tab-1 in the URL, but <div id="tab-1"> doesn’t match :target. The spec doesn’t actually cover this case, and every single browser currently has this crazy behaviour (although both the spec authors and browser vendors are looking at fixing this). Game over.

Two steps forward, one step back

We can beat this. Right now we can disable jumping around, but it breaks :target. We need a way to disable jumping around, but still update the hash in a way that :target will listen to.

There are a few ways we might be able to work around this. Ian Hansson has a clever trick where you position: fixed and hide the targeted element, to control the scroll, but depend on its :target status with sibling selectors. Meanwhile Chris Coyier has suggested capturing the scroll position and resetting it, and I think there might be a route through if you change the element’s id to something else and back at just the right time too. These are all very hacky though; it’d be nice to come up with a way of just fixing the JS we want to use above, so it actually works properly.

Fortunately a helpful comment from Han mentions some interesting current browser behaviour that might help us out, while examining compatibility issues with fixing this for real:

I’ve found a good reason to believe that virtually no webpages would be broken by the “breaking” change of updating :target selectors upon pushState: though they are currently not updated, if you however hit Back and then Fwd again, then :target rules do get applied

Moving in the browser history gives us the :target behaviour we (and all sane people) are expecting. If we can find a way to transparently go back and forward without breaking the user’s experience, we can fix :target.

From there it’s easy. We can build a simple workaround in JavaScript to do exactly this, and save the day:

var hashLinks = document.querySelectorAll("a[href^='#']");
[].forEach.call(hashLinks, function (link) {
  link.addEventListener("click", function (event) {
    event.preventDefault();
    history.pushState({}, "", link.href);
    
    // Update the URL again with the same hash, then go back
    history.pushState({}, "", link.href);
    history.back();
  });
});

Here we add the hash to the history twice, and immediately remove it once.

This isn’t perfect, and it would be nice if :target worked properly without these workarounds. As-is though, this gives perfect behaviour, with the only downside being that the Forward button in the user’s browser isn’t displayed as disabled, as they might expect. Actually going forward won’t do anything though (it’s the same URL they’re already on), and your users are not typically going to notice this.

This will keep working even if/when :target is fixed, and you’ll then be able to remove the extra workaround here, to lose that slightly messy extra behaviour. If this does break, or any users don’t have JavaScript, they’ll get working tabs with jumping-to-tab behaviour. Slightly annoying, but perfectly usable.

So what?

This lets you build amazingly simple & effective CSS-only tabs.

Minimal clean markup & CSS, perfect accessibility, working in IE9, with shareable tab URLs and a fully working history for free. Enjoy!

Full code:

<!DOCTYPE html>
<html>
<head>
  <style>
    div:not(:target) {
      display: none;
    }
    
    div:target {
      display: block;
    }
    
    /* Make the div big, so we would jump, if the JS was still broken */
    div {
      height: 100vh;
    }
  </style>
</head>
<body>  
  <a href="#tab-1">Tab One</a>
  <a href="#tab-2">Tab Two</a>

  <div id="tab-1">
    Tab one contents
  </div>
  
  <div id="tab-2">
    Tab two contents
  </div>
  
  <script>
  // Stop href="#hashtarget" links jumping around the page
  var hashLinks = document.querySelectorAll("a[href^='#']");
  [].forEach.call(hashLinks, function (link) {
    link.addEventListener("click", function (event) {
      event.preventDefault();
      history.pushState({}, "", link.href);
      history.pushState({}, "", link.href);
      history.back();
    });
  });
  </script>
  </body>
</html>

Cross-platform phone apps


28 October 2016, by

If you want to write a phone app that runs on multiple platforms, but don’t want to spend large amounts of time maintaining two code bases, there are several solutions that let you write one app and deploy it to several platforms.

These multi-platform apps work by running a mini website on the phone, accessed via a web view; this is how the app appears native.

In this post, we’ll discuss several different approaches to writing a multi-platform app, and look at which situations suit each option.

(more…)

Tips for managing technical people – Blog Post wrap up


14 October 2016, by

Over the past few months, we’ve been posting excerpts from my new book, “Galvanizing the Geeks – Tips for Managing Technical People”. You can buy the full book on my website, here.

This post serves as a reference to all the snippets that are freely available here.

(more…)

Submitting your Cordova app to the Apple App Store


12 October 2016, by

These days, everyone has an app. If you are one of the people who decided to write a cross-platform app using Cordova and would now like to post it to the App Store, then keep reading and I’ll explain how to go about doing that.

Just a word up front: in order to build your app, and to post the built app to the App Store, you will need an Apple machine.

Building your app

There are three steps required to build a release version of your app: create a certificate, create an app identifier, and create a provisioning profile. Once you have completed these steps, you will be able to build a signed IPA of your app.

Create a certificate

First, you will need to log in to Apple’s developer website. If you do not have an Apple ID, you will need to create one, and you will also need to register as an Apple Developer, which will cost you $99.

Now that you are logged in, go to the distribution certificates page, and create a new certificate by clicking the plus button, marking your certificate as Production > “App Store and Ad Hoc”, and then following the instructions on the website, which involve creating a signing request.

Create an app identifier

Again, the first step is to log in to Apple’s developer website. This time, go to the app identifier page. Fill in a short description, and then specify the app identifier for your app (this is stored in the config.xml of your Cordova project):

<widget id="com.softwire.exampleApp" ... >

At this stage, if you want to add any extra services to your app (e.g. PushNotifications) then you will need to mark them on the app identifier.

Create a provisioning profile

As with the other steps, you need to log in to Apple’s developer website. Head over to create a new Provisioning Profile. Select Distribution > “App Store”, then continue.

On the next page, you will need to select the App ID that you have just created and click continue.

On the final page, you will need to select the certificate that you have just created and click continue.

Finally, give the provisioning profile a name, and click continue again.

You do not need to download the provisioning profile; we will get Xcode to do that for us in the “Submitting your app” section.

Submitting your app

In order to submit your app to the App Store, you will need to build the signed IPA file, create an App Store listing, and then finally submit the app you have built.

Building a signed IPA

Now that you have created a provisioning profile (see above if not), you must get Xcode to download it. To do this, log in to your Apple account in Xcode (Preferences > Accounts) by clicking the + button and following the instructions to add your account.

Once your account has been added, select it in the left panel, and click “View Details…” in the right panel. On the screen that opens, you should see the provisioning profiles attached to your account, one of which will be the provisioning profile you created above. Select it and click download. It will now be available to Xcode when building your project.

The next step is to inform Cordova of which provisioning profile to use when building. This is done using a build.json file, which should include the id of the provisioning profile that you downloaded.
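
A build.json along these lines should work (the UUID here is a placeholder; use the id of the profile you created above, and note that the exact options vary between cordova-ios versions):

{
  "ios": {
    "release": {
      "codeSignIdentity": "iPhone Distribution",
      "provisioningProfile": "12345678-ABCD-1234-ABCD-123456789012"
    }
  }
}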

Finally, you can now run a build on your Mac using:

cordova build ios --device --release --buildConfig build.json

This will create a signed IPA of your app under

platforms/ios/build/device/*.ipa

Creating an app listing

Log in to iTunes Connect. From here, you can go to the apps page, and then create a new app. You will now need to fill in details for your app (there are lots to fill in, but the help Apple provides is quite useful), and ensure that you have uploaded screenshots of the app running on both phone and tablet (if relevant).

Once you are happy with the details, save them.

Submitting the IPA to the app store

Now that an app listing has been created, the next step is to upload the build (created above) to Apple.

To perform this upload, you will need Application Loader 3.0. You can download this from the “Prepare for submission” page on the app listing.

Once you have Application Loader installed, log in and click “Choose”. Select the IPA that was built earlier, which will upload the app to Apple. It will automatically be linked to your app listing, as the app’s id will match the one specified on the listing.

After upload, the IPA will be processed by Apple; following that, it can be selected as the version that should be published to the store. If you’re happy with everything, then submit the app for review.

Apple will now review your app, and it will appear in the App Store when it is ready. The review process will take a couple of days, and you will receive an email when the review is complete.

Tips for managing technical people – Recommended reading


7 October 2016, by

The following is an excerpt from my new book, “Galvanizing the Geeks – Tips for Managing Technical People”. You can buy the full book on my website, here.

These are some of the books – not all of them aimed specifically at the technology sector – that I’ve found useful throughout my career.

(more…)

Effortlessly add HTTPS to Dokku, with Let’s Encrypt


26 September 2016, by

You’ve written an application deployed using Dokku, and you’ve got it all up and running. Great. You’ve heard a lot about why HTTPS is important nowadays, especially the SEO and performance benefits, and you’d like to add all that, with minimal cost and hassle. Let’s get right on that.

Let’s Encrypt is a new certificate authority (an organisation that issues the certificates you need to host an HTTPS site), which provides certificates to sites entirely for free, by totally automating the system, to help get 100% of the web onto HTTPS. Sounds great, right?

To set this up, you’re going to need to prove you own the domain you’re asking for a certificate for, you need to get the certificates and start using them, and you’re going to want a system in place to automatically renew your certificates, so you never have to think about this again (Let’s Encrypt certificates expire every 90 days, to encourage automation). That means we’ve got a few key steps:

  • Generate a key-pair to represent ourselves (you can think of this as our login details for Let’s Encrypt).
  • Complete Let’s Encrypt’s Simple HTTP challenge, by signing and hosting a given JSON token at /.well-known/acme-challenge/<token> on port 80 for the given domain name, with our public key. This validates the key pair used to sign this token as authorized to issue certificates for this domain.
  • Request a certificate for this domain, signing the request with our now-authorized key.
  • Set up our server to use this key.
  • Set up an automated job to re-request certificates and update the certificate we’re using at regular intervals.

(Interested in the full details of how the Let’s Encrypt process works? Check out their detailed intro: https://letsencrypt.org/how-it-works/)

Fortunately, you barely have to do any of this! In reality, with Dokku, this is pretty much a case of turning a plugin on, and walking away.

How do you actually do this with Dokku?

First up, we need to install dokku-letsencrypt, which will do most of the setup magically for us (see full instructions here). On your server, run the command below for your dokku version (run ‘dokku version’ to check):

# dokku 0.5+
dokku plugin:update letsencrypt

# dokku 0.4
dokku plugin:update letsencrypt dokku-0.4

To configure it, set an email for your app:

dokku config:set --no-restart myapp DOKKU_LETSENCRYPT_EMAIL=your@email.tld

(This email will receive the renewal warnings for your certificate)

Turn it on:

dokku letsencrypt myapp

For Dokku 0.5+, set up auto-renewal:

dokku letsencrypt:cron-job --add

For Dokku 0.4, you’ll have to manually add a cronjob scheduled every 60 days to kick off the auto-renewal process:

dokku letsencrypt:auto-renew
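
For example, a crontab entry along these lines (the exact schedule is an assumption; this runs at midnight on the 1st of every other month, roughly every 60 days):

0 0 1 */2 * dokku letsencrypt:auto-renew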

That’s it!

What just happened?

The magic is in the ‘dokku letsencrypt myapp’ command. When you run that, Dokku-LetsEncrypt:

  • Starts a new service, in a Docker container, and temporarily delegates the top-level .well-known path on port 80 to it in Nginx (the server which sits in front of all your services in Dokku and routes your requests).
  • Generates a key pair, and authorizes it for each of the domains you have configured for your app (see ‘dokku domains myapp’).
  • Uses that key pair to get certificates for each of those domains.
  • Configures Nginx to enable HTTPS with these certificates for each of those domains.
  • Configures Nginx to 301 redirect any vanilla HTTP requests on to HTTPS for each of those domains.

The auto-renewal steps just repeat this work to update these certificates later on.

Easy as pie. Don’t forget to check it actually worked though! It’s quite likely you’ll find mixed content warnings when you first turn on HTTPS. Most major CDNs or other services you might be embedding will have HTTPS options available nowadays, so this should just be a matter of find/replacing http: to https: throughout your codebase. With that done though, that shiny green padlock should be all yours.

This post originally appeared on Tim Perry’s personal blog.

Tips for managing technical people – Train technical leadership


21 September 2016, by

The following is an excerpt from my new book, “Galvanizing the Geeks – Tips for Managing Technical People”. You can buy the full book on my website, here.

You need to ensure that there’s a technical career path as well as a management one open to your managees. Many companies nowadays ensure that there’s a path by which their technical people can reach the very top of the company, whether this is via a Technical Advisory Board (favoured by companies such as Thoughtworks and UBS) or by other means.

Creating the technical path and putting people on it isn’t enough. You also need to train people to fill the relevant roles. A common misunderstanding of the technical lead role is to see it as a sort of glorified developer, who gets to tell the other developers what to do. While a progression to project manager is seen as a move to a different role, the role of the technical lead can be seen as ‘more of the same’.

(more…)

Building a Server-Rendered Map Component – Part 2


19 September 2016, by

Part 2: How to use client-side libraries like Leaflet, in Node.

As discussed in Part One: Why?, it’d be really useful to be able to take an interesting UI component like a map, and pre-render it on the server as a web component, using Server Components.

We don’t want to do the hard mapping ourselves though. Really, we’d like this to be just as easy as building a client-side UI component. We’d like to use a shiny mapping library, like Leaflet, to give us all the core functionality right out of the box. Unfortunately though, Leaflet doesn’t run server-side.

This article’s going to focus on fixing that so you can use Leaflet with Server Components, but you’ll hit the same problems (and need very similar fixes) if you’re doing any other Node-based server rendering, including with React. The JS ecosystem right now is not good at isomorphism, but with a few small tweaks you can transform any library you like to run anywhere.

Let’s focus on Leaflet for now. It doesn’t run server-side because not many people are seriously looking at rendering nice UIs outside a browser, so JS libraries are pretty trigger-happy about making big browser-based assumptions. Leaflet expects a few things that don’t fit neatly outside a browser:

  • Global window, document and navigator objects.
  • A live element in an HTML DOM to be inserted into.
  • A Leaflet <script> tag on the page, so it can find its own URL and autodetect the path to the Leaflet icons.
  • To export itself just by adding an ‘L’ property to the window object.

All of these things need tricky fixes. Just finding these issues is non-trivial: you need to try to use the library in Node, hit a bug, solve the bug, and repeat, until you get the output you’re expecting.

Leaflet is a relatively hard case though. Most libraries aren’t quite so involved in complex DOM interactions, and just need the basic globals they expect injected into them.

So, how do we fix this?

Managing Browser Globals

If you npm install leaflet and then require(“leaflet”), you’ll immediately see our first issue:

> ReferenceError: window is not defined

Fix this one, and we’ll hit a few more at require() time, for document and navigator too. We need to run Leaflet with the context it’s expecting.

It would be nice to do that by having a DOM module somewhere that gives us a document and a window, and using those as our globals. Let’s assume we’ve got such a module for a moment. Given that, we could prefix the Leaflet module with something like:

var fakeDOM = require("my-fake-dom");
var window = fakeDOM.window;
var document = fakeDOM.document;
var navigator = window.navigator;
[...insert Leaflet code...]

(Instead we could just define browser globals as process-wide Node globals and leave the Leaflet source untouched, but this isn’t good behaviour, and it’ll come back to bite you very quickly if you’re not careful)

Doing something like this will get you much closer. With any reasonable DOM stub you should be able to get Leaflet successfully importing here. Unfortunately though, this fails because of a fundamental difference between browser and Node rendering. On the server, we have to support multiple DOM contexts in one process, so we need to be able to change the document and window.

We can still pull this off though, just taking this a step further with something like:

module.exports = function (window, document) {
  var navigator = window.navigator;
  [...insert Leaflet code...]
}

Now this is a Node module that exports not a single Leaflet, but a factory function to build Leaflet for a given window and document, provided by the code using the library. This doesn’t actually return anything when called though, as you might reasonably expect; instead it creates window.L, as is common for browser JS libraries. In some cases that’s probably OK, but in my case I’d rather leave window alone and grab the Leaflet instance directly, by adding the below to the end of the function, after the Leaflet code:

return window.L.noConflict();

This tells Leaflet to remove itself as a global, and just give you the library as a reference directly.

With this, require(“leaflet”) now returns a function, and passing that a window and document gives you a working ready-to-use Leaflet.

Emulating the expected DOM

We’re not done though. If you want to use this Leaflet, you might define a Server Component like:

var LeafletFactory = require("leaflet");
var components = require("server-components");

var MapElement = components.newElement();
MapElement.createdCallback = function (document) {
  var L = LeafletFactory(new components.dom.Window(), document);
  var map = L.map(this).setView([41.3851, 2.1734], 12);
  L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    maxZoom: 19,
  }).addTo(map);
};
components.registerElement("leaflet-map", {prototype: MapElement});

This should define a component that generates the HTML for a full working map when rendered. It doesn’t. The problem is that Leaflet here is given a DOM node to render into (‘this’, inside the component), and it tries to automatically render at the appropriate size. This isn’t a real browser though, we don’t have a screen size and we’re not doing layout (that’s why it’s cheap), and everything actually has zero height or width.

This isn’t as elegant a fix, but it’s an unavoidable one in any server-rendering approach I think: you need to pick a fixed size for your initial render, and nudge Leaflet to use that. Here that’s easy, you just make sure that before the map is created you add:

this.clientHeight = 500;
this.clientWidth = 500;

And with that, it works.

This fakes layout, as if the browser had decided that this was how big the element is. You can render like this at a fixed size for lots of applications, and potentially add client-side rendering on top to resize too if you want.

With that added, you can take this component, render it with a cheeky components.renderFragment("<leaflet-map></leaflet-map>") and be given working HTML for a lovely static map you can send straight to your users. Delightful.

There is still one last step required if you want to take this further. Leaflet by default includes a set of icons, and uses the ‘href’ in its script tag in the page to automatically work out the URL to these icons. This is a bit fragile in quite a few ways, including this environment, and if you extend this example to use any icons (e.g. adding markers), you’ll find your icons don’t load.

This step’s very simple though: you just need to set L.Icon.Default.imagePath appropriately. If you want to do that in a nice portable Server Component, that means:

var componentsStatic = require("server-components-static");
var leafletContent = componentsStatic.forComponent("leaflet");
L.Icon.Default.imagePath = leafletContent.getUrl("images");

This calculates the client-facing URL you’ll need that maps to Leaflet’s images folder on disk (see Server-Components-Static for more details).

Making this (more) maintainable

There’s one more step though. This is a bit messy in a few ways, but particularly in that we have to manually fork and change the code of Leaflet, and maintain that ourselves in future. It would be great to automate this instead, to dynamically wrap normal Leaflet code, without duplicating it. With Sandboxed-Module we can do exactly that.

Sandboxed-Module lets you dynamically hook into Node’s require process, to transform module code however you like. There’s lots of somewhat crazy applications of this (on-require compilation of non-JS languages, for example), but also some very practical ones, like our changes here.

There’s potentially a very small performance hit on startup from this for the transformation, but for the rest of runtime it shouldn’t make any difference; it hooks into the initial file read to change the result, and then from that point on it’s just another Node module.

So, what does this look like?

var SandboxedModule = require('sandboxed-module');
module.exports = SandboxedModule.require('leaflet', {
  sourceTransformers: {
      wrapToInjectGlobals: function (source) {
        return `
        module.exports = function (window, document) {
          var navigator = window.navigator;
          ${source}
          return window.L.noConflict();
        }`;
      }
  }
});

That’s it! Your project can now depend on any version of Leaflet, and require this wrapped module to automatically get given a Node-compatible version, without having to maintain your own fork.

This same approach should work for almost any other library that you need to manage server side. It’s not perfect (if Leaflet starts depending on other browser globals, things may break), but it should be much easier to manage and maintain than copying Leaflet’s code into your project wholesale.

Hopefully in future more projects will improve their native support for running in other environments, and this will go away, but in the meantime there are some relatively simple changes you can make to add Node support to even relatively complex client-side libraries.


Let’s stop there for now. In the next post, we’ll take a proper look at a full working map component, complete with configurability, static content and marker support, and see what you can do to start putting this into action yourself. Can’t wait? Check out https://github.com/pimterry/leaflet-map-server-component for the map component codebase so far.

This post originally appeared on Tim Perry’s personal blog.