
When open-source libraries close up: replacing core dependencies


When the open-source HTTP framework we relied on for one of our client’s Scala web servers suddenly announced it was switching to a closed-source, paid licence model, it came as quite a shock. The licence was too expensive to justify its value-add in our case, but how were we going to migrate away from such a core dependency? We had our work cut out for us. Such a migration would be risky and potentially expensive for our client, so we based our approach on several key principles:

Principle 1.
Choose a new framework that is well maintained and unlikely to present material future risk

Principle 2.
Allow for regression testing at each step

Principle 3.
Minimise the amount of risk in any single step of the migration

Principle 4.
Look for 1:1 replacements of functionality we already depend on

Principle 5.
Keep costs as low as possible

Our dependency goes commercial

The affected server was written in Scala and built on the Akka framework. Akka, maintained by Lightbend, is an HTTP framework that aims to be a bit of an all-in-one solution. Its functionality includes an HTTP server (receiving requests), an HTTP client (sending requests), a concurrency model (managing threads), and a streaming API (processing data), to name a few. We rely, to varying degrees, on many of these pieces of functionality within our server.
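To give a sense of how deep that dependency ran, here is a minimal, script-style sketch of an Akka HTTP server (the /health endpoint is hypothetical, not one of our real routes):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object Main extends App {
  // The ActorSystem underpins Akka's concurrency model as well as its HTTP layer
  implicit val system: ActorSystem = ActorSystem("server")

  // A hypothetical route, for illustration only
  val route = path("health") {
    get {
      complete("OK")
    }
  }

  // Akka also provides the server binding, and streams request/response
  // bodies with its own streaming API under the hood
  Http().newServerAt("localhost", 8080).bind(route)
}
```

Even this tiny example touches Akka’s actor system, routing DSL and HTTP server at once; our production code used its client and streaming APIs too.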

So when Lightbend announced in September 2022 that it was moving Akka from an open-source model to a closed-source, paid commercial licence a year later, we knew there was going to be a lot to do. Akka is neither the first nor the last project to make such a move to commercial licensing; other examples include Elasticsearch, Docker and, most recently, Redis. In this case, though, it didn’t take much number-crunching to figure out that paying the licence costs just didn’t make financial sense for our client, so we started thinking about more cost-effective solutions.

Of course, we weren’t the only team in the industry affected by this change, and there were many discussions online about the costs of the new licences and potential alternatives. We didn’t want to rush into anything, so we took our time over the next several months following the discourse and weighing up different alternatives we could recommend to our client.

An open-source fork: moving to Pekko

Quite soon after the announcement, the Apache Software Foundation announced it would be incubating Pekko, an open-source fork of Akka that aimed to keep releasing security patches. We decided to move to this fork first, as all it required was renaming some packages and updating our imports, and doing so would buy us more time to consider other alternatives.
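The mechanical nature of that first step shows up in the imports: Pekko preserves Akka’s API but moves it under the org.apache.pekko package prefix, with build artefacts renamed to match. A sketch:

```scala
// Before, with Akka:
//   import akka.actor.ActorSystem
//   import akka.http.scaladsl.server.Directives._

// After, with the Pekko fork: same API, new package prefix
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.http.scaladsl.server.Directives._

// Code written against the API compiles unchanged, e.g. this route definition:
object Routes {
  val ping = path("ping") { get { complete("pong") } }
}

// build.sbt changes in the same mechanical way, e.g.
//   "com.typesafe.akka" %% "akka-http" % ...  becomes
//   "org.apache.pekko"  %% "pekko-http" % ...
```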

We considered sticking with Pekko for good, but since it was a brand-new project we were worried about how well it would end up being maintained, and we didn’t want to leave our client exposed to vulnerabilities in the future (principle 1).

So having swapped Akka for Pekko, we started thinking about what to do from there. Our clearest priority was that whatever libraries we adopted needed to be mature and actively maintained, as we didn’t want to find ourselves in a similar situation again in the near future. We also wanted a migration which would minimise costs and risk for our client where possible.

Since our dependency on Akka (or now, Pekko) had been quite strong, the inherent risk of any migration was a big concern from the beginning. Every layer of our server was affected – from defining our endpoints and API schemas, to interacting with the database. Any migration plan needed to account for this risk, and hopefully allow us to split the risk into small independently verifiable steps, rather than a few big steps.

So with the usual priorities for this kind of project defined (low cost, low risk, well-maintained), we set out investigating different solutions and creating a couple of prototypes.

Meta-frameworks enabling a gradual migration: moving to Tapir

During our investigations we came across the Tapir project by SoftwareMill. Tapir is a sort of meta-HTTP framework: it provides a unified API for HTTP servers and clients, but uses other HTTP server and client libraries as the engine behind that API. Tapir seemed perfect for us; it supports Pekko as an engine, as well as several other options we could choose to adopt.

After a bit of prototyping translating our Pekko usages to Tapir, we decided Tapir was a good migration target. Migrating first to Tapir + Pekko would let us keep our other uses of the Pekko API in place while we slowly migrated our uses of the Pekko HTTP server and client APIs to their Tapir equivalents. Individual API endpoints could be migrated and tested one at a time (principle 2), without affecting anything else. This was crucial in allowing us to divide and conquer, splitting the risk into small, verifiable steps (principle 3).
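As a sketch of what a single migrated endpoint looked like (the health endpoint is hypothetical, and this assumes Tapir’s tapir-pekko-http-server module), the endpoint is described once in Tapir’s framework-agnostic API, then interpreted back into an ordinary Pekko route:

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

import sttp.tapir._
import sttp.tapir.server.pekkohttp.PekkoHttpServerInterpreter

// The endpoint description is now framework-agnostic...
val healthEndpoint: PublicEndpoint[Unit, Unit, String, Any] =
  endpoint.get.in("health").out(stringBody)

// ...but is interpreted into a plain Pekko route, so it can be composed
// with the Pekko-native routes we hadn't migrated yet
val healthRoute =
  PekkoHttpServerInterpreter().toRoute(
    healthEndpoint.serverLogicSuccess(_ => Future.successful("OK"))
  )
```

Because each endpoint’s description is decoupled from the server that runs it, swapping the engine later means swapping the interpreter, not rewriting the endpoints.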

Looking for replacements: moving to http4s, fs2 and cats-effect

We still had to decide what would power Tapir’s HTTP functionality once we moved away from Pekko. Having settled on Tapir, our options for the HTTP server and client were limited to those Tapir can integrate with, and of those, http4s seemed the best of the bunch for our HTTP server. For our HTTP client, Tapir’s maintainers provide their own solution, sttp, which integrates well with any of the server options.
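For a flavour of sttp’s client API (the URL is a placeholder, and this uses the synchronous backend for brevity; cats-effect and Pekko-based backends also exist, which is what let sttp sit on either side of our migration):

```scala
import sttp.client3._

// A simple synchronous backend, fine for a sketch
val backend = HttpClientSyncBackend()

// A hypothetical request, for illustration only
val response = basicRequest
  .get(uri"https://example.com/health")
  .send(backend)

println(response.code)
```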

http4s (our new HTTP server) is mature, well maintained, and integrates nicely with the cats-effect and fs2 libraries, which provide modern, well-maintained replacements for Pekko’s concurrency model and streaming API respectively. cats-effect and fs2 offered straightforward alternatives for many of our Pekko usages, making them good 1:1 replacements (principle 4). After building a prototype server on the Tapir + http4s + fs2 + cats-effect stack, we were happy enough with the results to pin this down as our final migration target.
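A minimal sketch of that final stack, assuming Tapir’s http4s interpreter and the Ember server backend (the endpoint is again hypothetical):

```scala
import cats.effect.{IO, IOApp}
import org.http4s.ember.server.EmberServerBuilder
import org.http4s.implicits._
import sttp.tapir._
import sttp.tapir.server.http4s.Http4sServerInterpreter

object Server extends IOApp.Simple {

  // The Tapir endpoint description is unchanged from the Pekko days; only
  // the interpreter and effect type differ (http4s and IO instead of
  // Pekko and Future)
  private val healthRoutes = Http4sServerInterpreter[IO]().toRoutes(
    endpoint.get.in("health").out(stringBody)
      .serverLogicSuccess(_ => IO.pure("OK"))
  )

  // cats-effect provides the runtime; fs2 powers http4s' streaming internally
  def run: IO[Unit] =
    EmberServerBuilder.default[IO]
      .withHttpApp(healthRoutes.orNotFound)
      .build
      .useForever
}
```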

Soberingly, we also knew that this final step of the migration (moving from Tapir + Pekko, to Tapir + http4s) was going to be the hardest. Pekko and http4s can’t be used simultaneously, so this couldn’t be a gradual migration: we were going to have to do it all at once.

Our plan was to replace as many uses of Pekko as possible with Tapir alternatives, testing each change atomically. Once we’d replaced all uses with direct Tapir replacements, we planned a big swap to http4s + fs2 + cats-effect, migrating away all our remaining uses of Pekko and thus fully removing the dependency.

This last step carried a lot of risk in a single code change. We relied heavily on our robust suite of tests plus a healthy amount of manual testing to ensure there were no regressions in performance or functionality (principle 2).

Releasing and a successful migration

In the end, we successfully completed our migration and released in February 2024. This was a great success – we released within budget, and only encountered a couple of small issues which were resolved quickly, so our client was delighted. This was only achievable through our careful planning, prototyping and attention to risk at every step of the migration process.

Alternate paths, and lessons learned

The direction our migration went was motivated by the priorities we laid out at the start. I’d like to talk about what some other migrations might have looked like had our priorities – or our server – been slightly different.

Just sticking with Pekko

Moving to Pekko near the start seemed like an obvious quick win. On the surface this felt like a complete solution: our problems all stemmed from Akka going closed-source, and Pekko is just Akka but open-source, right? It’s possible that this will end up being true; Apache being behind Pekko does add some weight to the project.

This would have been a risk, though: the project was unproven, and had we settled on it we would have been gambling that the next time a major vulnerability appeared, Pekko would resolve it quickly. If it didn’t, we would have been left scrambling to migrate away from Pekko under the added pressure of our web server being exposed to that vulnerability.

Not using Tapir

We really felt Tapir was perfect for what we were aiming to do. Allowing part of the migration away from Pekko to happen gradually was key to de-risking it. The only real negative was that it added another layer of dependency. If our server had been smaller and our dependency on Pekko weaker, it might have been easier to swap directly to another HTTP library; but for us that simply wasn’t the case.

Alternatives to http4s

This was the step of the migration where we had the most freedom of choice, and the options we didn’t pick were entirely valid too. Notable alternatives to http4s include Vert.x, Armeria, and Netty.

There were a couple of reasons we chose http4s over these. The first was that http4s is built on top of fs2, which provided a close-to-1:1 replacement for Akka Streams, which we used quite extensively in our server. The second was that http4s was the only Scala-native library among them. The others have good Scala support, but http4s is the only one built expressly for Scala, meaning it takes full advantage of Scala’s powerful type system and functional nature.
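To illustrate the close-to-1:1 claim with a toy pipeline (not one of our real streams), compare a Pekko Streams graph, which keeps Akka Streams’ API, with its fs2 counterpart:

```scala
// Pekko Streams (the Akka Streams API under its new package)
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{Sink, Source}

implicit val system: ActorSystem = ActorSystem("streams")
Source(1 to 5).map(_ * 2).runWith(Sink.foreach(println))

// fs2: the same shape of pipeline, with effects made explicit via IO
import cats.effect.IO
import cats.effect.unsafe.implicits.global
import fs2.Stream

Stream.range(1, 6).map(_ * 2)
  .evalMap(n => IO(println(n)))
  .compile.drain
  .unsafeRunSync()
```

Most operators map across one for one; the main change is threading IO through the pipeline explicitly, which is the sense in which fs2 felt close to 1:1 for us.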

In the end we were happy with http4s, but for a different project it’s likely another HTTP library could have been more suitable.

Using cats-effect

Only briefly mentioned above, cats-effect provides an alternative concurrency model to the one in the Scala standard library. Since moving to cats-effect, we have seen benefits in both performance and code maintainability.

In our case, the cost of migrating from the standard Scala concurrency model, based on Future, to cats-effect’s alternative model, based on IO, was quite high, and in retrospect it may not have been worth it. If we were doing this migration again, this is probably the main thing I would approach differently.
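The shape of that cost is easiest to see side by side. With a hypothetical lookup function, every definition and every call site in the chain changes from eager Future values to lazily evaluated IO values:

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import cats.effect.IO

// Future is eager: the work starts as soon as the value is constructed
def lookupFuture(id: Long): Future[String] = Future(s"user-$id")

// IO is lazy: nothing runs until the program is executed at the edge
def lookupIO(id: Long): IO[String] = IO(s"user-$id")

// The composition looks almost identical, but every signature along the
// call chain must change from Future to IO, which is where the cost lay
val pairFuture: Future[(String, String)] =
  for { a <- lookupFuture(1); b <- lookupFuture(2) } yield (a, b)

val pairIO: IO[(String, String)] =
  for { a <- lookupIO(1); b <- lookupIO(2) } yield (a, b)
```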


Conclusion

Migrating away from such a deep dependency is never something we want to do without good reason, but when the need arises such a migration can be delivered smoothly through careful planning.
