Softwire Blog


A quick guide to JavaScript streams


16 January 2017, by

In this short article, I’ll lead you through a crash course on what streams are, and how you can harness them to make your code more understandable.

If you want to try out any of the examples here, they will work with the excellent library bacon.js.

So what is a stream?

A stream is a sequence of objects but, unlike other sequences, it’s handled lazily (the items aren’t retrieved until they’re required). This means that, unlike a normal list, it doesn’t have to be finitely long.

You can put anything in your stream: characters, numbers, or even user events (like keystrokes or mouse presses). Once you’ve got your sequence, you can manipulate it in lots of different ways.

Streams can also completely remove the requirement to think about time, meaning fewer race conditions and therefore more declarative code. This removal of the need to represent time in code also results in architecture diagrams which are easy to draw and reason about.

Manipulating your stream

Now that we know the purpose of a stream, how do we manipulate it? What follows is a selection of functions that can be called on streams, along with visual representations of their effects. It should provide everything you need to set up and use programs that utilise streams.

map(f)

Map takes a function f. Then, for every item that appears in the stream, it applies the function f to it and produces a new stream with the new values.

scan(f)

Scan takes a function f. Then, for each value in the stream, it applies the function to the previous result along with the current value. This is similar to fold, but the output is a stream of all the intermediate values rather than just the final value, which is what fold would provide.

filter(f)

Filter takes a function f. Then, for each value, it computes f and, if it evaluates to true, puts the value into the output stream; otherwise it discards the value.
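For illustration, here’s a sketch of the semantics of these three operations using plain finite arrays (real streams are lazy and may be infinite; the scan helper here is hand-rolled, not the bacon.js API):

```javascript
// Illustrative sketch of map/scan/filter semantics on a finite sequence.
var input = [1, 2, 3, 4];

// map(f): apply f to every value in the stream
var mapped = input.map(function (x) { return x * 10; }); // [10, 20, 30, 40]

// scan(f): like fold, but emits every intermediate result
function scan(values, f, seed) {
  var out = [];
  var acc = seed;
  values.forEach(function (value) {
    acc = f(acc, value);
    out.push(acc);
  });
  return out;
}
var totals = scan(input, function (a, b) { return a + b; }, 0); // [1, 3, 6, 10]

// filter(f): keep only the values for which f returns true
var evens = input.filter(function (x) { return x % 2 === 0; }); // [2, 4]
```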

merge

There are several ways to combine two streams. Using merge will take values from both streams and output values (in the order they are received) into the output stream.

This differs from concat, which takes two streams and returns all of the first stream before returning all of the second stream.
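To sketch the difference (finite arrays tagged with hypothetical arrival times stand in for live streams here):

```javascript
// Illustrative sketch: values tagged with an arrival time stand in for
// live stream events. merge emits values in the order they arrive.
function merge(streamA, streamB) {
  return streamA.concat(streamB)
    .sort(function (x, y) { return x.time - y.time; })
    .map(function (x) { return x.value; });
}

var streamA = [{ time: 1, value: "a" }, { time: 3, value: "b" }];
var streamB = [{ time: 2, value: "x" }];

console.log(merge(streamA, streamB)); // ["a", "x", "b"]
// concat, by contrast, would produce ["a", "b", "x"]
```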

sampledBy(f)

SampledBy takes a function f and two streams (A and B). The function f is then called whenever a value is provided by stream A, passing in the last value from each stream. In the example below, a value is created in the output stream whenever a value occurs in the top stream. The output is then created from the value in the bottom stream.

slidingWindow(n)

SlidingWindow takes an integer n and returns the last n values from the stream as an array each time a new value occurs in the input stream.
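Sketched on a finite sequence (one plausible reading of the semantics; bacon.js’s exact behaviour for the first few values is configurable):

```javascript
// Illustrative sketch: each new value emits an array of the last n values.
function slidingWindow(values, n) {
  var windows = [];
  var current = [];
  values.forEach(function (value) {
    current = current.concat([value]).slice(-n);
    windows.push(current);
  });
  return windows;
}

console.log(slidingWindow([1, 2, 3, 4], 2));
// [[1], [1, 2], [2, 3], [3, 4]]
```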

Stream Summary

So, in summary, you can use stream functions to manipulate streams in a multitude of different ways. You should use streams to remove the time component from your code, making it simpler to understand and debug.

This post first appeared on Chris Arnott’s blog.

Building a Server-Rendered Map Component – Part 2


19 September 2016, by

Part 2: How to use client-side libraries like Leaflet, in Node.

As discussed in Part One: Why?, it’d be really useful to be able to take an interesting UI component like a map, and pre-render it on the server as a web component, using Server Components.

We don’t want to do the hard mapping ourselves though. Really, we’d like this to be just as easy as building a client-side UI component. We’d like to use a shiny mapping library, like Leaflet, to give us all the core functionality right out of the box. Unfortunately though, Leaflet doesn’t run server-side.

This article’s going to focus on fixing that so you can use Leaflet with Server Components, but you’ll hit the same problems (and need very similar fixes) if you’re doing any other Node-based server rendering, including with React. The JS ecosystem right now is not good at isomorphism, but with a few small tweaks you can transform any library you like to run anywhere.

Let’s focus on Leaflet for now. It doesn’t run server-side because not many people are seriously looking at rendering nice UIs outside a browser, so JS libraries are pretty trigger-happy about making big browser-based assumptions. Leaflet expects a few things that don’t fit neatly outside a browser:

  • Global window, document and navigator objects.
  • A live element in an HTML DOM to be inserted into.
  • A Leaflet <script> tag on the page, so it can find its URL, so it can autodetect the path to the Leaflet icons.
  • To export itself just by adding an ‘L’ property to the window object.

All of these things need tricky fixes. Just finding these issues is non-trivial: you need to try to use the library in Node, hit a bug, solve the bug, and repeat, until you get the output you’re expecting.

Leaflet is a relatively hard case though. Most libraries aren’t quite so involved in complex DOM interactions, and just need the basic globals they expect injected into them.

So, how do we fix this?

Managing Browser Globals

If you npm install leaflet and then require(“leaflet”), you’ll immediately see our first issue:

> ReferenceError: window is not defined

Fix this one, and we’ll hit a few more at require() time, for document and navigator too. We need to run Leaflet with the context it’s expecting.

It would be nice to do that by having a DOM module somewhere that gives us a document and a window, and using those as our globals. Let’s assume we’ve got such a module for a moment. Given that, we could prefix the Leaflet module with something like:

var fakeDOM = require("my-fake-dom");
var window = fakeDOM.window;
var document = fakeDOM.document;
var navigator = window.navigator;
[...insert Leaflet code...]

(Instead we could just define browser globals as process-wide Node globals and leave the Leaflet source untouched, but this isn’t good behaviour, and it’ll come back to bite you very quickly if you’re not careful.)

Doing something like this will get you much closer; with any reasonable DOM stub you should be able to get Leaflet successfully importing. Unfortunately it’s not enough, because of a fundamental difference between browser and Node rendering: on the server, we have to support multiple DOM contexts in one process, so we need to be able to change the document and window.

We can still pull this off though, just taking this a step further with something like:

module.exports = function (window, document) {
  var navigator = window.navigator;
  [...insert Leaflet code...]
}

Now this is a Node module that exports not a single Leaflet, but a factory function that builds Leaflet for a given window and document, provided by the code using the library. When called, though, this doesn’t actually return anything, as you might reasonably expect it to; instead it creates window.L, as is common for browser JS libraries. In some cases that’s probably ok, but in my case I’d rather leave window alone and grab the Leaflet instance directly, by adding the below to the end of the function, after the Leaflet code:

return window.L.noConflict();

This tells Leaflet to remove itself as a global, and just give you the library as a reference directly.

With this, require(“leaflet”) now returns a function, and passing that a window and document gives you a working ready-to-use Leaflet.

Emulating the expected DOM

We’re not done though. If you want to use this Leaflet, you might define a Server Component like:

var LeafletFactory = require("leaflet");
var components = require("server-components");

var MapElement = components.newElement();
MapElement.createdCallback = function (document) {
  var L = LeafletFactory(new components.dom.Window(), document);
  var map = L.map(this).setView([41.3851, 2.1734], 12);
  L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    maxZoom: 19
  }).addTo(map);
};
components.registerElement("leaflet-map", {prototype: MapElement});

This should define a component that generates the HTML for a full working map when rendered. It doesn’t. The problem is that Leaflet is given a DOM node to render into (‘this’, inside the component) and tries to automatically render at the appropriate size. This isn’t a real browser though: we don’t have a screen size and we’re not doing layout (that’s why it’s cheap), so everything actually has zero height and width.

This isn’t as elegant a fix, but I think it’s an unavoidable one in any server-rendering approach: you need to pick a fixed size for your initial render, and nudge Leaflet to use it. Here that’s easy; you just make sure that before the map is created you add:

this.clientHeight = 500;
this.clientWidth = 500;

And with that, it works.

This fakes layout, as if the browser had decided that this was how big the element is. You can render like this at a fixed size for lots of applications, and potentially add client-side rendering on top to resize too if you want.

With that added, you can take this component, render it with a cheeky components.renderFragment("<leaflet-map></leaflet-map>"), and be given working HTML for a lovely static map you can send straight to your users. Delightful.

There is still one last step required if you want to take this further. Leaflet by default includes a set of icons, and uses the ‘href’ in its script tag in the page to automatically work out the URL to these icons. This is a bit fragile in quite a few ways, including this environment, and if you extend this example to use any icons (e.g. adding markers), you’ll find your icons don’t load.

This step’s very simple though: you just need to set L.Icon.Default.imagePath appropriately. If you want to do that in a nice portable Server Component, that means:

var componentsStatic = require("server-components-static");
var leafletContent = componentsStatic.forComponent("leaflet");
L.Icon.Default.imagePath = leafletContent.getUrl("images");

This calculates the client-facing URL you’ll need that maps to Leaflet’s images folder on disk (see Server-Components-Static for more details).

Making this (more) maintainable

There’s one more step though. This is a bit messy in a few ways, but particularly in that we have to manually fork and change the code of Leaflet, and maintain that ourselves in future. It would be great to automate this instead, to dynamically wrap normal Leaflet code, without duplicating it. With Sandboxed-Module we can do exactly that.

Sandboxed-Module lets you dynamically hook into Node’s require process, to transform module code however you like. There’s lots of somewhat crazy applications of this (on-require compilation of non-JS languages, for example), but also some very practical ones, like our changes here.

There’s potentially a very small performance hit on startup from this for the transformation, but for the rest of runtime it shouldn’t make any difference; it hooks into the initial file read to change the result, and then from that point on it’s just another Node module.

So, what does this look like?

var SandboxedModule = require('sandboxed-module');
module.exports = SandboxedModule.require('leaflet', {
  sourceTransformers: {
      wrapToInjectGlobals: function (source) {
        return `
        module.exports = function (window, document) {
          var navigator = window.navigator;
          ${source}
          return window.L.noConflict();
        }`;
      }
  }
});

That’s it! Your project can now depend on any version of Leaflet, and require this wrapped module to automatically get given a Node-compatible version, without having to maintain your own fork.

This same approach should work for almost any other library that you need to manage server side. It’s not perfect — if Leaflet starts depending on other browser global things may break — but it should be much easier to manage and maintain than copying Leaflet’s code into your project wholesale.

Hopefully in future more projects will improve their native support for running in other environments, and this will go away, but in the meantime there are some relatively simple changes you can make to add Node support to even relatively complex client-side libraries.


Let’s stop there for now. In the next post, we’ll take a proper look at a full working map component, complete with configurability, static content and marker support, and see what you can do to start putting this into action yourself. Can’t wait? Check out https://github.com/pimterry/leaflet-map-server-component for the map component codebase so far.

This post originally appeared on Tim Perry’s personal blog. 

Building a Server-Rendered Map Component – Part 1


12 September 2016, by

Part 1: Why do we need better maps?

Maps are a standard tool in our web UI toolkit; they’re an easy way to link our applications to the real world they (hopefully) involve. At the same time though, maps aren’t as easy to include in your web site as you might want, and certainly don’t feel like a native part of the web.

Critically, unlike many other UI components, you normally can’t just serve up a map with plain HTML. You can easily serve up HTML with videos embedded, or complex forms, or intricate SVG images and animations, or MathML, but maps don’t quite make the cut. (Google Maps does actually have an Embed API that’s closer to this, but it’s very limited in features, and it’s really just bumping the problem over their fence.)

If you want to throw a map on a page, you’re going to need to pick out a library, write and serve up scripts to imperatively define and render your map, and bring in the extra complexity (and page weight and other limitations) of all the supporting code required to put that into action. That’s not really that much work, and we’ve got used to making fixes like these everywhere. It’s a far cry, though, from the declarative simplicity of:

<video src="cats.mp4"></video>

Maps are one small example of the way that the HTML elements we have don’t match what we’re using day to day, and the workarounds you have to use to get closer.

How can we show a map on a page with the same simplicity and power?

Web Components

What we really want is to add <map lat=1.2 long=3.4 zoom=5> to our pages, get a great looking map, and be done with it. That’s the goal.

In principle, web components handle this nicely. You define a custom element for your map, give it its own rendering logic and internally isolated DOM, and then just start using it. That’s really great, it’s a fantastic model, and in principle it’s perfect. Clicking together isolated encapsulated UI elements like this is the key thing that every shiny client-side framework has in common, and getting that as a native feature of the web is incredible.

Support is terrible though. Right now, it’s new Chrome only, so you’re cutting out 50% of your users right off the bat, and it’s going to be a very long time until you can safely do this everywhere. You can get close with lots of polyfills, and that’s often a good choice, but there is extra weight and complexity there, and you really just want a damn <map> — not to spend hours researching the tradeoffs of Polymer or the best way to polyfill registerElement outside it.

Even once this works, you still have the extra front-end complexity that results, and the associated maintenance. You have to serve up your initial content, and then re-render in your JavaScript before some of it is visible. You quickly end up with content that is invisible to almost all automated scripts, scrapers, devices, and search spiders. Depending on JavaScript doesn’t just cut off the small percentage of users who turn it off.

Mobile devices make this even more important. With their slow connections and slow rendering, users are going to have to wait longer to see your page when you render things client-side. JavaScript is not at all resilient to connection issues either: any device that drops a script required for basic rendering won’t show any of that rendered content, while simple HTML pages continue with whatever’s available. New Relic’s mobile statistics show 0.9% of HTTP requests from mobile devices failing on the network. That failure rate builds up quickly when applied to every resource in your page! With just that 0.9% of failures and only 10 resources to load, almost 9% of your users aren’t going to see your whole page.
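A back-of-the-envelope check of that figure, treating each of 10 resource loads as an independent 0.9% failure:

```javascript
// If each of 10 resources independently fails 0.9% of the time, the
// chance a user misses at least one of them:
var failureRate = 0.009;
var resources = 10;
var atLeastOneFailure = 1 - Math.pow(1 - failureRate, resources);

console.log((atLeastOneFailure * 100).toFixed(1) + "%"); // "8.6%"
```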

Web components are a great step in the right direction, and are exactly the right shape of solution. Web components that progressively enhance already meaningful HTML content are even better, and can mitigate many of the issues above. In many cases like maps though, your component is rendering core page content. Web components do give you a superb way to do that client-side, but rendering your content client-side comes with heavy costs. Worse, it’s frequently not a real necessity. A static map would give all the core functionality (and could then be enhanced with dynamic behaviour on top).

What if we could get these same benefits, without the costs?

Server Components

Server Components is an attempt to improve on our current approach to rendering the web, with the same approach and design as web components, but renderable on the server-side.

Without Server Components, if you wanted to quickly show a map of a location sprinkled with pins, or shaded areas, or whatever else, you couldn’t easily do so without serving up a mountain of JavaScript. Instead, I want to write <map lat=1.2 long=3.4 zoom=5> in my HTML and have my users instantly see a map, without extra client-side rendering or complexity or weight.

This sounds like a pipe dream, but with Server Components we can do exactly that. We can define new HTML elements, write simple rendering code in normal vanilla JavaScript that builds and transforms the page DOM, and run all of that logic on the server to serve up the resulting ready-to-go HTML. All the client-side simplicity and power, without the costs.


Let’s stop there for now. In the next post, we’ll take a look at exactly how you’d implement this, and what you can do to start putting it into action yourself. Can’t wait? Check out https://github.com/pimterry/leaflet-map-server-component for the map component so far.

This post originally appeared on Tim Perry’s personal blog

Server Components: Web Components for Node.js


11 July 2016, by

Web components logo

We at Softwire are big on open source. Many of us are involved in a variety of open-source projects, and some of us are even running our own, from database setup tools to logging libraries to archiving tools to HTTP performance testing and Roslyn-based code generators.

In this post I want to take a look at a recent new addition to this list: Server Components, a small library I’ve been working on to let you render web components entirely on the server-side.

Web components, for those not familiar, are a series of new specs coming (slowly) to browsers everywhere. It’s actually a set of 4 new features (Custom Elements, HTML Templates, Shadow DOM and HTML Imports), which together provide a fantastic, fast, built-into-the-browser way of defining brand new elements. You define your elements with their own internal rendering logic, presentation and behaviour, and then compose them together to build your page. Building your web page becomes a process of defining or just downloading small composable blocks, and then declaratively clicking them together.

There’s more to this than I can cover in this post, but if you’re interested in an intro then Smashing Magazine’s guide is a pretty good place to start.

These are a fantastic set of technologies if you’re looking for a nice way to build web applications on the client-side. They give you a tool to transform page build and JavaScript from being a fundamentally messy, unstructured process (one enormous DOM, with everything operating at the same level of abstraction) into a manageable modern process, letting you layer abstractions atop one another and build your applications from many small standalone parts.

With web components, you can build web pages from independent encapsulated chunks of behaviour and UI. Within these web components, you can then nest other components, or pass them other components as input. Building a UI becomes much easier when your toolbox looks like:

 

<loading-spinner></loading-spinner>

<login-form action="/login"></login-form>

<qr-code data="http://example.com"></qr-code>

<calendar-date-picker value="2016-01-14"></calendar-date-picker>

<google-map lat=1234 long=5678 zoom=12></google-map>

<confirmation-dialog>
  Are you sure you want to quit?
</confirmation-dialog>

<item-paginator>
  <an-item>Item one</an-item>
  <an-item>Item two</an-item>

  ...

  <an-item>Item one thousand</an-item>
</item-paginator>

 

It’s a powerful paradigm, and this is really a standalone native version of the core feature that front-end frameworks like Ember, Angular and React have given us in recent years: the ability to powerfully compose together web pages quickly and easily out of smaller parts, while providing enough structure and encapsulation to keep them maintainable.

At the moment though, it’s a paradigm largely confined to the client-side. Clicking together server-side user interfaces has never really become an effective or enjoyable reality. You’re left with larger MVC frameworks that dictate your entire application structure, or trivial templating libraries that rarely offer more than token encapsulation or reusability. If you want a great experience for web UI development, you have to move to the client side.

Server Components aims to solve this. It takes the core of the client-side development experience and moves it onto the server. Many web sites don’t need client-side interactivity, but end up being developed as client-side applications (and paying the corresponding costs in complexity, SEO, performance and accessibility) to get a better developer experience. Server Components is trying to give you that same power and flexibility to quickly build user interfaces on the web, without the client-side pain.

Screenshot of Tim.FYI

This is still early days for server components; I’m aiming to expand them significantly further as feedback comes in from more real world usage, and they’re not far past being a proof of concept for now. As a public demo though, I’ve rebuilt my personal site (Tim.FYI) with them, building the UI with delightful components like:

 

  <social-media-icons
    twitter="pimterry"
    github="pimterry"
    linkedin="pimterry"
    medium="pimterry"
  ></social-media-icons>

and even more complex ones, with interactions and component nesting:

 

  <item-feed count=20>
    <twitter-source username="pimterry" />
    <github-source username="pimterry" type-filter="PullRequestEvent" />
    <rss-source icon="stack-overflow" url="http://stackoverflow.com/feeds/user/68051" />
  </item-feed>

 

All entirely server-rendered, declarative and readable, and with zero client-side penalties. JavaScript is added on top, but purely for extra interactivity and polish rather than any of the core application functionality. Take a look at the full HTML, the tiny chunk of server code and the component sources to get a feel for how this works under the hood.

Interested? Want to go deeper and try this out yourself? Take a look at Server Components on Github.

Alternatively if you’d like some more detailed explanation first, take a look at the full introduction on Medium, complete with worked examples. If you’d like to hear even more about this, I’ll be talking to the Web Platform Podcast in more detail on July 17th, and feel free to send questions in on Twitter or in the comments below.

What does it mean for an app to feel like an app?


21 December 2015, by

Here at Softwire, we’ve been busy building a mobile app for a chain of health clubs, and this is the second post in a four-part series about our experience and what we’ve learned. Watch this space for the next instalment, or follow us on twitter for updates!

We’ve chosen to build the app using Apache Cordova, so our code can easily run on both iOS and Android platforms.

As a quick recap: Cordova is a cross-platform way of building mobile apps using web technologies. As a developer, you write a regular website using HTML, CSS and JavaScript, and Cordova takes this website and creates an app that is a wrapper around it.

For the iOS app to be available on the App Store it needs to be approved by Apple. One of Apple’s requirements that is particularly relevant for Cordova apps is that the app “feels like an app, not like a website”.

What does this mean for us? There are a lot of components to this, but in this post I want to look at how it’s affected our approach to client-server interaction.

No 404s

We’re lucky to have some really nice designs for the app, and we wanted the entire UI to live up to these.


To keep this user experience really smooth one thing we really have to ensure is that we don’t ever show the user an ugly 404 page.

On a regular website, a user moves from one page to another. Each page is loaded individually from the server and contains the data they have asked for, as well as the code needed to take them to other pages.

This wasn’t an option for us. Our app users might be on the move, underground, or have a slow data connection. If we built the application extremely minimally as a wrapper around a traditional server-hosted application, then we’d be making these page requests in an environment where they might fail. We can’t risk showing them a half-loaded page or, even worse, a 404.

Load data, not code

To solve this, we’ve built the application as a single-page website. All the code is bundled into the initial download – that way, all our pages load instantly, without having to wait for any code to be downloaded. This approach is an important part of good Cordova development, and helps solve the error handling issue above and reduce latency for users to keep the app feeling snappy.

The only communication we have with our servers is to send and receive data (rather than code). We have a RESTful API that manages all our operations, like loading class timetables, making bookings and getting club opening hours.

If any of these API requests fail we can show the user a pretty error message, and allow them to retry, without the app looking broken.
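As a sketch of that pattern (the function and field names here are hypothetical, not our real API):

```javascript
// Hypothetical sketch: turn an API result into a view state, so a failed
// request shows a retry prompt rather than a broken or half-loaded page.
function toViewState(result) {
  if (result.ok) {
    return { view: "timetable", classes: result.data };
  }
  // Request failed (slow connection, underground, etc.): keep the bundled
  // UI on screen and offer a retry, never a 404.
  return { view: "error", message: "Couldn't load the timetable. Tap to retry." };
}

console.log(toViewState({ ok: true, data: ["Yoga", "Spin"] }).view); // "timetable"
console.log(toViewState({ ok: false }).view);                        // "error"
```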

Smooth transitions

The app looks beautiful, so we want it all to perform well on the phone too. That includes loading pages, but we also want scrolling, swiping and transitions to all be smooth, and to avoid the screen juddering.

In the next post we’ll look at how we achieved this. Keep an eye on our blog for the next edition, or follow us on twitter for updates.

Write an app without writing an app


16 December 2015, by

Here at Softwire, we’ve been busy building a mobile app for a chain of health clubs, and this is the first post in a four-part series about our experience and what we’ve learned. Watch this space for the next instalment, or follow us on twitter for updates!

We were recently asked by a large chain of health clubs to build an app for both iOS and Android platforms. We chose to build the app using Apache Cordova, rather than writing separate native applications. Why did we choose Cordova, and how do you get started with it on a new project?

The problem with native apps

At Softwire, we’re used to building native mobile phone apps, but building native apps comes with a number of difficulties:

  • Each platform requires you to learn and use a specific programming language, development environment and set of libraries.
  • You have to maintain a separate codebase for each platform and keep the code in sync between platforms.
  • Each platform has its own quirks and bugs will typically be different in each platform.

All of this makes native development potentially more expensive and risky, and we’ve been interested in investigating other approaches for a while.

What’s the alternative?

(more…)

Typing Lodash in TypeScript, with Generic Union Types


5 November 2015, by

TypeScript is a powerful compile-to-JS language for the browser and node, designed to act as a superset of JavaScript, with optional static type annotations. We’ve written a detailed series of posts on it recently (start here), but in this post I want to talk about some specific open-source work we’ve done with it around Lodash, and some of the interesting details around how the types involved work.

TypeScript

For those of you who haven’t read the whole series: TypeScript gives you a language very similar to JavaScript, but including future features (due to the compile step) such as classes and arrow functions, support for more structure with its own module system, and optional type annotations. It allows you to annotate variables with these type annotations as you see fit, and then uses an extremely powerful type inference engine to infer types for much of the rest of your code from there, automatically catching whole classes of bugs for you immediately. This is totally optional though, and any variables without types are implicitly assigned the ‘any’ type, opting them out of type checks entirely.

This all works really well, and TypeScript has quite a few fans over here at Softwire. It gets more difficult when you’re using code written outside your project though, as most of the JavaScript ecosystem is written in plain JavaScript, without type annotations. This takes away some of your new exciting benefits; every library object is treated as having ‘any’ type, so all method calls return ‘any’ type, and passing data through other libraries quickly untypes it.

Fortunately the open-source community stepped up and built DefinitelyTyped, a compilation of external type annotations for other existing libraries. These ‘type definitions’ can be dropped into projects alongside the real library code to let you write completely type-safe code, using non-TypeScript libraries.

This is great! Sadly, it’s not that simple in practice. These type definitions need to be maintained, and can sometimes be inaccurate and out of date.

In this article I want to take a look at a particular example of that, around Lodash’s _.flatten() function, and use this to look at some of the more exciting newer features in TypeScript’s type system, and how that can give us types to effectively describe fairly complex APIs with ease.

What’s _.flatten?

Let’s step back. Lodash is a great library that provides utility functions for all sorts of things that are useful in JavaScript, notably including array manipulation.

Flatten is one of these methods. Flattening an array unwraps any arrays that appear nested within it, and includes the values within those nested arrays instead. Flatten also takes an optional second boolean parameter, defining whether this process should be recursive. An example:

 

_.flatten([1, 2, 3]);                     // returns [1, 2, 3] - does nothing

_.flatten([[1], [2, 3]]);                 // returns [1, 2, 3] - unwraps both inner arrays

_.flatten([[1], [2, 3], 4]);              // returns [1, 2, 3, 4] - unwraps both inner arrays,
                                          // and includes the existing non-list element

_.flatten([[1], [2, 3], [[4, 5]]]);       // returns [1, 2, 3, [4, 5]] - unwraps all arrays,
                                          // but only one level

_.flatten([[1], [2, 3], [[4, 5]]], true); // returns [1, 2, 3, 4, 5] - unwraps all arrays 
                                          // recursively

 

This is frequently very useful, especially in a collection pipeline, and is fairly easy to describe and understand. Sadly it’s not that easy to type, and the previous DefinitelyTyped type definitions didn’t provide static typing over these operations.

What’s wrong with the previous flatten type definitions?

Lots of things! The _.flatten definitions include method overloads that don’t exist, contain unnecessary duplication and incorrect documentation, and, most interestingly, have a return type that isn’t based on their input, which takes away your strict typing. Specifically, the method I’m concerned with has a type definition like the below:

 

interface LoDashStatic {
  flatten<T>(array: List<any>, isDeep?: boolean): List<T>;
}

 

This type says that the flatten method on the LoDashStatic interface (the interface that _ implements) takes a list of anything, and an optional boolean argument, and returns an array of T’s, where T is a generic parameter to flatten. Because T only appears in the output though, not the type of our ‘array’ parameter, this isn’t useful! We can pass a list of numbers, and tell TypeScript we’re expecting a list of strings back, and it won’t know any better.
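To make the problem concrete, here’s a minimal sketch (not Lodash’s real code: List is simplified to a plain array type, and unsafeFlatten is a hypothetical stand-in with the old signature) showing how a caller can lie about the result type:

```typescript
// Assumption: List<T> simplified to T[] for this sketch.
type List<T> = T[];

// The old DefinitelyTyped signature: T appears only in the return type.
function unsafeFlatten<T>(array: List<any>): List<T> {
  return ([] as any[]).concat(...array);
}

// TypeScript accepts this without complaint: we claim we'll get strings back...
const strings: List<string> = unsafeFlatten<string>([1, 2, 3]);

// ...but at runtime the elements are still numbers.
console.log(typeof strings[0]); // "number", despite the declared type
```

The compiler has no way to connect the input to the output here, so any mistake only surfaces at runtime.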

We can definitely do better than that. Intuitively, you can think of the type of this method as being (for any X, e.g. string, number, or HTMLElement):

 

_.flatten(list of X): returns a list of X
_.flatten(list of lists of X): returns a list of X
_.flatten(list of both X and lists of X): returns a list of X

_.flatten(list of lists of lists of X): returns a list of list of X (unwraps one level)
_.flatten(list of lists of lists of X, true): returns a list of X (unwraps all levels)

 

(Ignoring the case where you pass false as the 2nd argument, just for the moment)

Turning this into a TypeScript type definition is a little more involved, but this gives us a reasonable idea of what’s going on here that we can start with.

How do we describe these types in TypeScript?

Let’s start with our core feature: unwrapping a nested list with _.flatten(list of lists of X). The type of this looks like:

flatten<T>(array: List<List<T>>): List<T>;

Here, we say that when I pass flatten a list that only contains lists, which contain elements of some common type T, then I’ll get back a list containing only type T elements. Thus if I call _.flatten([[1], [2, 3]]), TypeScript knows that the only valid T for this is ‘number’, where the input is List<List<number>>, and the output will therefore definitely be a List<number>, and TypeScript can quickly find your mistake if you try to do stupid things with that.
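Here’s a quick sketch of that inference in action, again with List simplified to a plain array type and a toy one-level implementation (flattenOnce is a hypothetical name, standing in for Lodash):

```typescript
// Assumption: List<T> simplified to T[] for this sketch.
type List<T> = T[];

// The improved one-level signature: the input must be a list of lists of T.
function flattenOnce<T>(array: List<List<T>>): List<T> {
  return ([] as T[]).concat(...array);
}

// T is inferred as number, so the result is typed List<number>.
const nums = flattenOnce([[1], [2, 3]]); // [1, 2, 3]
console.log(nums);

// A mistake like this no longer compiles:
// const wrong: string[] = flattenOnce([[1], [2, 3]]); // type error
```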

That’s not sufficient though. This covers the [[1], [2, 3]] case, but not the ultra-simple case ([1, 2, 3]) or the mixed case ([[1], [2, 3], 4]). We need something more general that will let TypeScript automatically know that in all those cases the result will be a List<number>.

Fortunately, union types let us define general structures like that. Union types allow you to say a variable is of either type X or type Y, with syntax like: var myVariable: X|Y;. We can use this to handle the mixed value/lists of values case (and thereby both other single-level cases too), with a type definition like:

flatten<T>(array: List<T | List<T>>): List<T>;

I.e. if given a list of items that are either type T, or lists of type T, then you’ll get a list of T back. Neat! This works very nicely, and for clarity (and because we’re going to reuse it elsewhere) we can refactor it out with a type alias, giving a full implementation like:

 

interface MaybeNestedList<T> extends List<T | List<T>> { }

interface LoDashStatic {
  flatten<T>(array: MaybeNestedList<T>): List<T>;
}

 

Can we describe the recursive flatten type?

That fully solves the one-level case. Now, can we solve the recursive case as well, where we provide a 2nd boolean parameter to optionally apply this process recursively to the list structure?

No, sadly.

Unfortunately, in this case the return type depends not just on the types of the parameters provided, but the actual runtime values. _.flatten(xs, false) is the same as _.flatten(xs), so has the same type as before, but _.flatten(xs, true) has a different return type, and we can’t necessarily know which was called at compile time.

(As an aside: with constant values technically we could know this at compile-time, and TypeScript does actually have support for overloading on constants for strings. Not booleans yet though, although I’ve opened an issue to look into it)

We can get close though. To start with, let’s ignore the false argument case. Can we type a recursive flatten? Our previous MaybeNestedList type doesn’t work, as it only allows lists of X or lists of lists of X, and we want to allow ‘lists of (X or lists of)* X’ (i.e. any depth of list, with an eventually common contained type). We can do this by defining a type alias similar to MaybeNestedList, but making it recursive. With that, a basic type definition for this (again, ignoring the isDeep = false case) might look something like:

 

interface RecursiveList<T> extends List<T | RecursiveList<T>> { }

interface LoDashStatic {
  flatten<T>(array: RecursiveList<T>, isDeep: boolean): List<T>;
}

 

Neat, we can write optionally recursive type definitions! Even better, the TypeScript inference engine is capable of working out what this means, and inferring the types for this (well, sometimes. It may be ambiguous, in which case we’ll have to explicitly specify T, although that is then checked to guarantee it’s a valid candidate).
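As a sketch of that recursive type in action, here’s a toy deep-flatten (flattenDeep is a hypothetical stand-in, not Lodash’s implementation; List is again simplified to a plain array type, and we specify T explicitly since inference can be ambiguous here):

```typescript
// Assumption: List<T> simplified to T[] for this sketch.
type List<T> = T[];
interface RecursiveList<T> extends List<T | RecursiveList<T>> { }

// Toy recursive implementation, standing in for Lodash's behaviour.
function flattenDeep<T>(array: RecursiveList<T>): List<T> {
  const out: T[] = [];
  for (const item of array) {
    if (Array.isArray(item)) {
      // Recurse into nested lists (this assumes T itself isn't an array type).
      out.push(...flattenDeep(item as RecursiveList<T>));
    } else {
      out.push(item as T);
    }
  }
  return out;
}

// The uneven nesting depth is fine: everything bottoms out in numbers.
const deep = flattenDeep<number>([[1], [2, [3, [4]]]]);
console.log(deep); // [1, 2, 3, 4]
```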

Unfortunately when we pass isDeep = false, this isn’t correct: _.flatten([[[1]]], false) would be expected to potentially return a List<number>, but because it’s not recursive it’ll actually always return [[1]] (a List<List<number>>).

Union types save the day again though. Let’s make this a little more general (at the cost of being a little less specific):

flatten<T>(array: RecursiveList<T>, isDeep: boolean): List<T> | RecursiveList<T>;

We can make the return type more general, to include both potential cases explicitly. Either we’re returning a totally unwrapped list of T, or we’re returning a list that contains at least one more level of nesting (conveniently, this has the same definition as our recursive list input). This is actually a little duplicative, as List<T> is a RecursiveList<T>, but including both definitions is a bit clearer, to my eye. This isn’t quite as specific as we’d like, but it is now always correct, and still much closer than the original type (where we essentially had to blind-cast things, or accept any-types everywhere).

Putting all this together

These two types together allow us to replace the original definition. By removing the optional parameter from the original type definition and instead providing two separate definitions, we can be more specific about the case where the boolean parameter is omitted. Wrapping all that up, this takes us from our original definition:

 

interface LoDashStatic {
  flatten<T>(array: List<any>, isDeep?: boolean): List<T>;
}

 

to our new, improved, and more typesafe definition:

 

interface RecursiveList<T> extends List<T | RecursiveList<T>> { }
interface MaybeNestedList<T> extends List<T | List<T>> { }

interface LoDashStatic {
  flatten<T>(array: MaybeNestedList<T>): List<T>;
  flatten<T>(array: RecursiveList<T>, isDeep: boolean): List<T> | RecursiveList<T>;
}

 

You can play around with this for yourself, and examine the errors and the compiler output, using the TypeScript Playground.
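For reference, here’s a self-contained sketch of the final definitions in action, backed by a toy implementation standing in for Lodash’s real one (List is simplified to a plain array type, and we pass T explicitly in the recursive case since inference can be ambiguous):

```typescript
type List<T> = T[];
interface RecursiveList<T> extends List<T | RecursiveList<T>> { }
interface MaybeNestedList<T> extends List<T | List<T>> { }

// The two improved signatures, over a toy implementation.
function flatten<T>(array: MaybeNestedList<T>): List<T>;
function flatten<T>(array: RecursiveList<T>, isDeep: boolean): List<T> | RecursiveList<T>;
function flatten<T>(array: RecursiveList<T>, isDeep = false): List<T> | RecursiveList<T> {
  const out: Array<T | RecursiveList<T>> = [];
  for (const item of array) {
    if (Array.isArray(item) && isDeep) {
      out.push(...(flatten(item as RecursiveList<T>, true) as List<T>));
    } else if (Array.isArray(item)) {
      out.push(...(item as List<T>)); // unwrap exactly one level
    } else {
      out.push(item);
    }
  }
  return out as List<T>;
}

const shallow = flatten([[1], [2, 3], 4]);          // typed List<number>
const deep = flatten<number>([[1], [[2, 3]]], true); // typed List<number> | RecursiveList<number>
console.log(shallow); // [1, 2, 3, 4]
console.log(deep);    // [1, 2, 3]
```

Note that callers of the two-argument form get the union return type back, and must narrow (or cast) before treating the result as fully flat.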

We’ve submitted this back to the DefinitelyTyped project (with other related changes), in https://github.com/borisyankov/DefinitelyTyped/pull/4791, and this has now been merged, fixing this for Lodash lovers everywhere!

AngularJS vs KnockoutJS – Part 3


15 September 2015, by

In previous blog posts in this series, we examined the motivation behind some of the modern JavaScript frameworks that aim to make front-end code more maintainable. We also examined AngularJS and KnockoutJS as specific examples of how they individually integrate data models with HTML views.

This blog post aims to compare how well the two work. We don’t want to fall into the trap of suggesting that they are competing with each other, as they do different things. However, as two libraries that make front-end development easier through attributes, it’s rare you’ll use both, and so you’ll often reach a point on a project where you have to decide between the two.

Setting the Framework Up

Setting KnockoutJS up is as easy as you’d like it to be – you can pull the source file into your project by downloading it, linking to their CDN or using package management tools such as Bower. It works nicely with all major dependency tools such as RequireJS, or can be included globally in your project. All you need to do to get it working is call ko.applyBindings in one place. It works nicely with all of your code, and can be added at any stage in the project as it plays nicely with other libraries.

AngularJS is a slightly trickier beast, although building a project around it from the beginning is not much more effort than with KnockoutJS. There’s slightly more boilerplate at an early stage to deal with modularisation, but these costs are mostly compensated for by not requiring the set-up of, say, RequireJS.

Adding AngularJS to an existing project, however, is significantly harder. Because of the opinionated idioms that come from the use of AngularJS, it’s often hard to make it integrate seamlessly with other code you’ve already written. We’ve used AngularJS with other module loaders, for example, but the cost of maintaining that has been far higher than expected.

Conclusion: On greenfield projects, both libraries perform well. KnockoutJS is easier to add incrementally to an existing project.

Browser Support

A minor point, but if you have to support Internet Explorer, then you may have a harder time with AngularJS, which has a small number of outstanding issues on all IE builds and no support for IE versions 8 and under. KnockoutJS goes all the way back to IE6.

Conclusion: If support for IE8 or below is important, AngularJS cannot be used.

Community Support

One of the things that makes JavaScript fascinating (or, if you’re one of the front-end nay-sayers: tolerable) is how fast the community moves. There are now more npm packages than there are Maven Central libraries, showing just how quickly people rally behind a language to bolster its features.

We have to be careful about comparing raw statistics of two libraries of differing sizes. In terms of Google Search trends, AngularJS is over 12 times as popular, and rising rapidly. Angular is the second most installed Bower package, whilst KnockoutJS doesn’t make the top 100. The trends certainly suggest that AngularJS has more staying power.

AngularJS’s modular nature makes it a natural fit for community support, as people can add directives, modules and services that can easily be pulled in and integrated with your code. This means that if your requirements are common, you can choose from a large number of off-the-shelf libraries.

KnockoutJS does allow you to write bespoke additions by creating your own bindings, or extending the knockout object itself in an easy and well-documented manner. Although there are some KnockoutJS libraries to add features such as Pager.js for routing, there are definitely fewer. At least a few times I’ve thought “Surely someone’s done this already” when writing features with KnockoutJS.

Conclusion: AngularJS is much more popular, and has more community libraries.

Ease of Learning

The documentation and online tutorial for KnockoutJS are some of the best I’ve seen. Knockout is so simple and light-weight that new developers have always taken to it immediately, and are able to contribute to development teams very quickly.

AngularJS is a much bigger beast. Their documentation is, necessarily, much more heavyweight and bulky to reflect the larger amount of things that people have to learn. Almost everyone who has worked long enough with AngularJS is now happy with how it works and can contribute speedily, but it often takes longer to reach the same depth of understanding.

Conclusion: KnockoutJS has simplicity and brilliant documentation on its side; the larger AngularJS can’t compete.

Testability

AngularJS was made with testability at the very front of its mind, and there are lots of guides online about how to set up unit and integration testing on your AngularJS modules. We’ve had great experiences running our JavaScript unit tests on such projects with Karma and WallabyJS giving us instant feedback when things fail.

KnockoutJS is unopinionated about tests – it’s a much smaller library so we wouldn’t expect anything else. If KnockoutJS is used with some modularisation frameworks (which we always strongly recommend!) then good discipline, such as encapsulating all low-level DOM manipulation, makes unit testing no harder than with any other library.

Conclusion: Both libraries are testable, though AngularJS was written with this very much in mind, making set-up marginally easier and more documented.

So, which is better…?

At Softwire, we’ve found uses for both frameworks across our projects.

We typically favour KnockoutJS on projects that have less need for the larger, more opinionated AngularJS. This could be because there are fewer complex interactions between the user and the data, or because the project simply runs for a shorter time. On legacy projects, it allows us to add it minimally, only where we need it, avoiding a large initial investment of effort. A lot of developers here love KnockoutJS, and many use it on their own pet projects because of its light-weight approach.

We have, however, used AngularJS successfully on a number of projects. We’ve found that the ramp-up time is slightly higher, as it takes control of more aspects of the code base. However, if there is sufficient complexity on a project, or the code base is larger, the initial set-up and learning costs quickly repay themselves, and it becomes our framework of choice on large projects with potentially complex user interactions.

We are, as we are with all technologies that we use, constantly examining the ecosystem for new and maturing options outside of these! For those interested in seeing more frameworks in action, TodoMVC is a great website that implements the same application using many different libraries.

David Simons trains developers in High-Quality Front-end JavaScript. Sign up to his next course here.

AngularJS vs KnockoutJS – Part 2


8 September 2015, by

Previously, we looked at one of the most talked-about family of JavaScript libraries: those that follow the MV* pattern to decouple the visual concerns of a web application from the data-driven business logic. This blog post serves to give a brief introduction of two of the most commonly used such libraries – AngularJS and KnockoutJS.

(more…)

AngularJS vs KnockoutJS – Part 1


1 September 2015, by

In Softwire, we’ve used AngularJS and KnockoutJS on a variety of projects, and have found that this makes web development a lot easier and a lot more pleasant! With this series of blog posts, I’m hoping to share what we’ve found out about these, and other, data-binding libraries along the way by looking at:

  • What groups all these libraries together?
  • How do these libraries work?
  • Which library is the “best” one?

(more…)