
What happens when your 6-month POC is someone else’s Tuesday?

Image: four fighter planes banking sharply in the sky, captured mid-manoeuvre – speed, coordination, and rapid decision-making in flight.

Fighter pilots learned that disrupting an opponent’s OODA loop wins dogfights. AI-native startups are applying the same principle to the enterprise. Here’s how to measure and restructure your decision cycle before someone else does.

It’s the late 1990s. Tony Blair. Spice Girls.  

I’m in a stifling meeting room along with 19 other people. We’re discussing the implementation of a PIN that grants access to a tool within a public digital service (millions of users). Each of these people is here to represent their part of “The Business” – call centres, customer correspondence, finance, hardware support, and more – apart from me. I’m here representing the small team of developers that has built the PIN control. The meeting lasts for several hours and involves protracted MoSCoW negotiations and even some raised voices. But it’s ultimately inconclusive, and the only agreement is to reconvene in a week.

Frustrated by the pace of change, I left that corporation to found a startup, which went quite well, but that’s another story for another day. The point of this story is to explore that “pace of change”. 

What should have been a technical implementation decision had become a cross-functional negotiation because each stakeholder needed to understand how the change affected their workflows, their metrics, their budgets. The decision cycle for a minor feature took six weeks. Meanwhile, startups were shipping entire products in the same timeframe. 

That gap in decision speed is an “OODA loop” problem, and AI is about to make it catastrophic. 

What is an OODA loop? 

John Boyd, a fighter pilot and military strategist, observed that combat success depended on cycling through four steps faster than your opponent: Observe, Orient, Decide, Act (OODA). The pilot who could complete this loop more rapidly would get inside their opponent’s decision cycle, disrupting it, causing them to react to situations that no longer existed. Boyd’s insight extended beyond dogfighting: disrupt your competitor’s OODA loop and they become perpetually reactive, always responding to your previous move rather than anticipating your next one. The approach has since made its way into organisational thinking, for example via Simon Wardley’s “Wardley Map” framework.

Organisations – large enterprises and startups alike – have institutional OODA loops. 

These loops reflect decades of accumulated learning about governance, risk management, and compliance. A procurement process that takes six months works well for fleet vehicles or property leases because the cost of a bad decision is high and reversibility is low. The same process fails catastrophically for technology decisions where the technology itself changes quarterly or weekly. By the time you’ve decided to adopt a particular solution, that solution has evolved, competitors have emerged, or the problem you were solving has shifted. You’re making decisions about a reality that no longer exists. 

This kind of OODA loop disruption is therefore destructive to enterprise value, unless the enterprise can transform itself in advance to take advantage of whatever is causing the disruption.

Past disruptions and why AI is different

Back in the 1990s, the talk of the town was “bricks vs clicks”, i.e. traditional retailers being disrupted by ecommerce. Back in the 1890s, the disruption was the arrival of electricity in factories, which had previously been organised around a central power plant but could now be reorganised around the flow of work, with power delivered where it was required (and, much later, carried around in portable electric tools and electronic devices). Today, the disruptive force is AI.

Startups can move quickly because they are smaller teams; this much is obvious. There is an implication for their OODA loops, though. A smaller group can turn observations into interpretations, and interpretations into decisions and actions, much more quickly than a large group. As the group grows, each loop is completed more and more slowly. 

This may sound familiar, even if your typical work environment is a meeting room or Teams call rather than the cockpit of a fighter jet. The larger the organisation, the more time is spent reacting rather than anticipating, especially when there are disruptive forces such as competition from startups, unexpected new regulations or legislation, or “management by press release”. If that describes your working life, your organisation’s OODA loop has unfortunately been disrupted. One of the most OODA-disrupting forces is the arrival of new technology, and the arrival of AI is the most significant in decades, perhaps in a century.

We read a lot, currently, about AI projects not making it past the prototype stage, and about enterprises and governments struggling to unlock the value that one is led to believe must be inherent in this new technology. And yet we also read a lot about startups reaching multi-million-dollar ARRs within months of launching. While there is scepticism, there are also stories such as that of Sana Labs, with an average monthly ARR growth rate of 27% from launch in 2019 to a $1.1bn exit to Workday in 2025.

What are companies like Sana Labs doing so fundamentally differently?

I think it comes down to their OODA loop. Yes, they are small companies (by headcount if not valuation!), but that means they can adapt quickly, which in turn means they’ve been able to build an OODA loop around AI. Meanwhile, the enterprise cannot adapt quickly, and certainly cannot reorient around a technology that moves as fast as AI, purely because its institutional OODA loop runs too slowly.

As a simple example, consider the “proof of concept” model. It’s common to hear about an enterprise conducting a POC into some aspect of new technology (especially AI) and then to hear statistics (again, especially relating to AI) about how few POCs make it into production. On the other hand, startups don’t bother with POCs in anything like the same way. The POC in startup-land is simply Tuesday. 

A slow OODA loop has not historically been a bad thing for an enterprise; we have built large companies by optimising for decisions where the cost of error is high and reversibility is low. Institutional OODA loops have been perfected over decades and are embedded in governance structures, approval hierarchies, budget cycles, and compliance frameworks. These loops reflect accumulated institutional learning about what works.

So what happens when the environment changes faster than the loop can adapt? Disruption. Although this will be mitigated by the institution’s self-defence mechanisms, disruption will arrive, perhaps slow and creeping (the problem of “shadow AI”, for example), or perhaps suddenly (a startup appears from nowhere able to do what your firm does, only 100x more efficiently). 

How AI changes institutional OODA loops 

AI enables organisations to drive OODA loops in new ways: 

  • Observation happens continuously rather than periodically. Instead of quarterly business reviews or annual customer surveys, you can monitor product usage, customer behaviour, and market signals in real time. You observe what customers actually do, not what they said they wanted six months ago in a focus group. 
  • Orientation happens at machine speed. Pattern recognition across thousands of customer interactions, synthesis of usage data, identification of what’s working and what isn’t. The system learns and improves. This happens continuously and autonomously rather than in quarterly analysis meetings or consultant reports. 
  • Decisions happen algorithmically. For routine decisions within established parameters, AI evaluates options and acts without human intervention: pricing adjustments, resource allocation, workflow routing, exception handling. No approval escalations, no waiting for management availability. Thousands of decisions daily, with the system learning from outcomes and refining decision logic continuously. For complex decisions requiring human judgement, AI compresses the timeline by presenting analysed options with predicted outcomes, risk assessments, and relevant precedents. The human decision happens in minutes or hours rather than weeks because the analytical work is complete. 
  • Action happens autonomously and in parallel. AI agents execute decisions across multiple workflows simultaneously. No queues for approval, no meetings, no handoffs. Actions that were previously sequenced (because hundreds of humans need coordination) now happen concurrently as AI agents share context and operate within defined parameters. 

Results feed directly back into observation, completing the loop far more quickly than traditional cycles allow. 
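
To make that concrete, here is a minimal sketch of what such a loop might look like in code, using a hypothetical inventory-reordering decision. All of the function names, signals, and thresholds below are illustrative assumptions, not a description of any real system.

```python
import random
import time

def observe(state: dict) -> dict:
    """Continuous observation: pull the latest demand and stock signals (stubbed here)."""
    return {"demand": random.uniform(0.0, 1.0), "stock": state["stock"]}

def orient(signals: dict) -> dict:
    """Machine-speed orientation: turn raw signals into an interpretation."""
    return {"restock_needed": signals["demand"] > 0.6 and signals["stock"] < 100}

def decide(interpretation: dict) -> str:
    """Algorithmic decision within established parameters (no approval queue)."""
    return "reorder" if interpretation["restock_needed"] else "hold"

def act(decision: str, state: dict) -> dict:
    """Autonomous action; its outcome becomes the input to the next observation."""
    if decision == "reorder":
        state["stock"] += 50
    state["stock"] -= random.randint(0, 20)  # simulated consumption between cycles
    return state

# The loop runs continuously; each pass feeds its results straight back into observation.
state = {"stock": 80}
for _ in range(5):
    state = act(decide(orient(observe(state))), state)
    print(state)
    time.sleep(0.1)  # in reality this would be event-driven rather than a fixed poll
```

The point is not the toy logic but the shape: observation, orientation, decision, and action form one short cycle whose output immediately re-enters the cycle, with no meeting between any two stages.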

Automation without abandoning governance

You’ll notice I haven’t said the increasingly familiar phrase “with a human in the loop” anywhere in that description. This omission is deliberate. 

Keeping humans in every decision loop often preserves slow cycles rather than providing useful oversight. If a human must approve every decision an AI system proposes, you’ve built an expensive recommendation engine, not a faster OODA loop. 

Crucially, this does not mean eliminating governance! It means distinguishing intelligently between decision types: 

Decisions where speed and scale matter more than occasional errors: A/B test variants, content recommendations, routine customer service responses, inventory reordering within established parameters. These should run autonomously with monitoring and periodic review. An incorrect recommendation has low cost; the aggregate value of thousands of optimised decisions outweighs individual mistakes. 

Decisions where errors are costly or irreversible: Major capital expenditure, hiring and termination, changes to legal terms, strategic partnerships, anything with regulatory or compliance implications. These require human judgement, but AI can compress the decision timeline by completing analysis beforehand. 

Sound governance considers which decisions benefit from direct human oversight versus statistical monitoring of outcomes. Without making this distinction explicit, the default is to require approval (a “human in the loop”) for everything, preserving the status quo. 
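
As a sketch of what making that distinction explicit could look like, the routing below classifies decision types by error cost, reversibility, and regulatory exposure. The categories and thresholds are illustrative assumptions rather than a prescription.

```python
from enum import Enum

class Oversight(Enum):
    AUTONOMOUS = "run autonomously; monitor outcomes statistically and review periodically"
    HUMAN_JUDGEMENT = "human decides; AI prepares options, risks, and precedents in advance"

def route(error_cost: str, reversible: bool, regulated: bool) -> Oversight:
    """Route a decision type to an oversight model instead of defaulting
    every decision to 'a human in the loop'."""
    if regulated or error_cost == "high" or not reversible:
        return Oversight.HUMAN_JUDGEMENT
    return Oversight.AUTONOMOUS

# Examples drawn from the categories above.
print(route(error_cost="low", reversible=True, regulated=False).value)    # A/B test variant
print(route(error_cost="high", reversible=False, regulated=False).value)  # major capital expenditure
print(route(error_cost="low", reversible=True, regulated=True).value)     # compliance implications
```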

Measuring your institutional OODA loop 

If you’re responsible for technology or strategy in an established organisation, map your current decision cycle times. Pick three to five representative decision types and measure observe-to-act duration: 

  1. A routine operational decision (e.g. approving a standard customer service exception, reallocating resources between projects) 
  2. A tactical technology decision (e.g. adopting a new development tool, changing a feature priority) 
  3. A significant but reversible commitment (e.g. a marketing campaign, a hiring plan) 

For each, track: 

  • How long between identifying a need and gathering sufficient data to make a decision 
  • How long to interpret that data and formulate options 
  • How long to reach a decision (including escalations, approvals, meetings) 
  • How long to execute the decision 

Then ask: in how many cases could AI compress each stage? Where could observation be continuous? Where could orientation happen algorithmically? Which decisions could be made autonomously within defined parameters? 
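
As a starting point for that measurement, here is a minimal sketch of how the four stage durations could be computed from timestamps recorded at each stage boundary. The dates and field names are entirely hypothetical.

```python
from datetime import datetime

# Hypothetical timestamps for one tactical decision, captured at each stage boundary.
decision_log = {
    "need_identified":    datetime(2025, 1, 6),
    "data_gathered":      datetime(2025, 1, 20),  # observe
    "options_formulated": datetime(2025, 2, 3),   # orient
    "decision_made":      datetime(2025, 2, 24),  # decide (incl. escalations and meetings)
    "executed":           datetime(2025, 3, 10),  # act
}

STAGES = ["need_identified", "data_gathered", "options_formulated", "decision_made", "executed"]

def stage_durations(log: dict) -> dict:
    """Break the observe-to-act cycle down by stage, in days."""
    return {f"{a} -> {b}": (log[b] - log[a]).days for a, b in zip(STAGES, STAGES[1:])}

durations = stage_durations(decision_log)
for stage, days in durations.items():
    print(f"{stage}: {days} days")
print(f"total observe-to-act: {sum(durations.values())} days")
```

Once you have the per-stage numbers for a handful of decision types, the questions above become concrete: the stages with the longest durations are the first candidates for continuous observation or algorithmic orientation.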

Compare your cycle times to the rate of change in your market. If competitors ship features quarterly and your loop takes six months, you’re reactive by default. You’re responding to situations that no longer exist. 

It’s not only AI that affects institutional OODA loops – data and analytics are also major factors – but AI is, I feel, an exceptionally consequential one, and by far the newest.

What happens next

Startups building products in AI-native ways have organisational structures designed around fast OODA loops. Small teams can implement these structural changes quickly, enabling the explosive growth we’re seeing from AI-native companies. 

Large organisations optimised for stable environments face a choice: disrupt your own OODA loops deliberately, or let external forces do it for you. The first is uncomfortable but survivable. The second is survivable only if you’re fortunate.

Deliberate disruption means examining where observation happens, how long orientation takes, what drives decisions, and how quickly action results. It means accepting that the cost of slow cycles now exceeds the risk of moving faster. There are identifiable risks inherent in deliberately disrupting yourself, but the far greater risk (and the least manageable one) is being disrupted externally.

The enterprises that survive this transition won’t necessarily be those with the most sophisticated AI models. They’ll be those that went further, thought laterally, and restructured their institutional OODA loops to match the speed at which their environment changes. 

I’d be interested to hear how you’re approaching this. Let’s talk.