
The Generation Ship Problem 

[Image: Two astronauts in red suits stand on a rocky, desert-like landscape beside a retro rocket, under a hazy sky with a bright sun.]

In 1944, A.E. van Vogt published a short story called Far Centaurus. Spoiler alert… in the story, four astronauts climb into suspended animation pods, point their ship at Alpha Centauri, and spend 500 years asleep. When they finally wake, bleary-eyed and disoriented, they discover that human civilisation is already there. Because while they slept on their long voyage, the people they left behind kept working. Faster ships were designed, launched decades after their own departure, and arrived centuries ahead of the first astronauts. I’ve been thinking about this story recently as I advise clients on whether now is the right time to scale AI proofs of concept. There’s a similar “gotcha” at play. I’ll call it the “Generation Ship Problem”.

What is the Generation Ship Problem? 

Suppose you decide to scale your AI product today, based on today’s technology. You commit the budget, you hire. You build pipelines, integrations, prompt scaffolding, evals (it turns out AI doesn’t build much of this for you when scaling is the goal, by the way).  

You even restructure teams, rewrite policies, and do other work to prepare the organisation. The big day comes and you launch! All is well and it actually works! But six months later, a foundation model update arrives with new features that render most of what you did either obsolete or comically over-engineered. In other words, you have arrived to find that humans are already there. 

This is not just a scare story. OpenAI released an apparently small update that wiped out an entire category of data analysis wrapper products overnight. When large context windows arrived, carefully engineered retrieval-augmented generation pipelines, with their chunking strategies and semantic search layers, became elaborate solutions to a problem that no longer existed. When ChatGPT added native long-form writing, Jasper AI’s revenue reportedly fell by around $70m.  

OpenAI, by its own account, ships a new capability roughly every three days, an astonishing rate of change and a huge challenge to conventional product development and business investment strategies. 

Waiting is a failure mode 

It’s tempting not to launch until the technology stabilises. The problem is that it won’t. Andrew Kennedy, a physicist who formalised this exact dilemma for actual interstellar travel, called it the “incentive trap of progress.” By his careful calculations, there is an optimal departure window, but it’s not “wait indefinitely”.  

The sooner you launch, the sooner you learn 

Early deployment is not without value, however: first-mover advantage can be real. A customer acquired for a product that later becomes obsolete is still a customer relationship. Proprietary data gathered during an early deployment can be highly valuable for future use cases, even if the original application is superseded. There is a general principle here: launching exposes you to opportunities that waiting does not. The danger is to assume those opportunities are permanent, or that the product or service you build around them will survive unchanged. 

How to launch without becoming obsolete 

So how do you embrace a potentially perilous launch without becoming a historical footnote in five years’ time? 

I think there are two dimensions to get right, and that it’s a trap to conflate them: 

1. Product readiness: What value will you gain today even if the product is killed tomorrow? Customer insight, proprietary data, workflow learning, team capability? If the answer is “nothing”, the risk profile of the decision is vastly greater than one where it is “something”.  

2. Organisational readiness: The goal here is not quite resilience, which implies absorbing a shock and returning to a prior state (because the prior state may no longer be appropriate). Instead, the goal is to set up teams that thrive with change rather than merely endure it, treating each model release as an input to act on rather than a disruption to recover from. 

What this means in practice 

You’ll need: 

  • Teams small enough to maintain a genuine shared comprehension of what the model is doing in their system, not just confidence in its outputs.  
  • Evaluation infrastructure that exposes changes in the model quickly, before users discover them. 
  • Decision-making authority sitting close to the work, for quick pivots and freedom from ponderous governance. 
  • People whose expertise is in the general problem domain and the craft of building AI products, not in the specific behaviours of one model version. Going too specific, and not being sufficiently generalist in approach, means carrying switching costs in people as well as product. 
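To make the second point concrete, here is a minimal sketch of what “evaluation infrastructure that exposes changes in the model quickly” can mean in practice: a fixed suite of prompts with pass/fail checks, re-run whenever the underlying model changes. The `stub_model`, prompts, and checks below are illustrative placeholders, not a real model integration.

```python
def run_eval_suite(model_fn, cases):
    """Run each case through model_fn and report which ones failed.

    cases: list of (case_id, prompt, check) tuples, where check(output)
    returns True if the output is acceptable. Returning the failing
    case_ids lets a scheduled job alert on behaviour changes before
    users discover them.
    """
    failures = []
    for case_id, prompt, check in cases:
        output = model_fn(prompt)
        if not check(output):
            failures.append(case_id)
    return failures


# Stub standing in for a call to a hosted model; swap in your real client.
def stub_model(prompt):
    return "4" if "2+2" in prompt else "unknown"


cases = [
    ("arithmetic", "What is 2+2? Answer with a number only.",
     lambda out: out.strip() == "4"),
    ("honesty", "What is the capital of Atlantis?",
     lambda out: "unknown" in out.lower()),
]

failed = run_eval_suite(stub_model, cases)
print(failed)  # an empty list means no regressions against this suite
```

The point is not the code, which is trivial, but the discipline: the suite is versioned alongside the product, and a new model release is treated as an input to run through it, not a surprise to recover from.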

The underlying principle is that the primary output from any deployment should be learning, with the product itself a (probably temporary) vehicle for generating it.  

The sunk cost problem 

Organisations that have invested in a particular AI approach may find that their product has been “killed” by a new OpenAI or Anthropic feature. The existing solution works, the investment is already made, and a pivot now would be expensive. 

But yesterday’s investment is not a reason to continue with yesterday’s approach. What’s done is done. The only decision available is what to do next, based on current information. 

As ever, clarity around what you’re looking to achieve or learn can make or break a launch. And, critically, it’s important not to become attached to the software and business processes that you’ve created as part of the launch. 

As long as you are precise about your goals and have diligently done the software engineering needed to scale, code is easy and quick to generate. Code is no longer an asset in itself that requires continued investment; it is a means to an end. The true asset is in the learning, the data, the new business, the development of your people, and your organisation. 

The real asset is learning 

Van Vogt’s explorers were not wrong to go. Their mission was a great idea, given what they knew in 1944. What they could not account for was the pace of progress behind them, and their trust that the destination would still be worth reaching when they woke up was misplaced. There was little they could do about that, in cryosleep; but faced with the Generation Ship Problem, you have better options. 

The difference is that you already know what they didn’t: the journey (not the destination) is everything. The data, the learning, the customer relationships, the capability your organisation builds along the way – none of that waits at the destination. It accumulates in transit. And unlike van Vogt’s crew, you can plan for it that way from the start, and you’ll be ready whatever happens on your journey. 

Go, by all means. But go light, go ready to learn, go with your eyes open, and never mistake the act of departure for the guarantee of arrival.