Emergent ecology: how to stay grounded when AI systems go wild 

AI gives us reason to rethink how we embrace uncertainty and turn it into an advantage. 

Many leaders I work with focus on reducing uncertainty before committing to a project. They want to know when features will be available, what the user experience will look like, whether all scenarios have been tested, and what the upfront cost will be. That approach has helped reduce risk and served us well. 

But AI changes the risk profile of software in ways those practices were not designed to handle. Uncertainty is now something we need to actively manage throughout a system’s life. This raises a practical leadership question: how do we keep moving quickly without increasing risk?

One answer lies in looking to other industries that already operate successfully in environments where outcomes cannot be fully predicted in advance. Traditional software development often felt like real-world “hard” engineering, building predictable systems much like a bridge or a road. Working with AI feels different. It is closer to gardening, where we cultivate, prune, and monitor living systems as they evolve. Success depends less on upfront design and more on ongoing observation, care, and adaptation. 

For managing complex, unpredictable systems at scale, medicine offers particularly useful lessons. 

The sources of uncertainty in AI-driven products 

AI-powered products introduce three types of uncertainty that we haven’t seen to the same extent in traditional software development: 

  • Wide input ranges. Countless variations in prompts and user context make it impossible to guarantee predictable customer experiences in all cases. 
  • Non-deterministic output. While your AI’s output may be acceptable 99.9% of the time, it might be highly inappropriate for the other 0.1%. Even if a customer support assistant provides correct guidance thousands of times over, a single confidently stated hallucination about refunds or legal terms could have a disproportionate impact. 
  • Volatile environments. Frequent evolution of models, frameworks and regulatory environments makes it difficult for leaders to forecast costs, predict ROI and commit to long-term roadmaps. 
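The scale effect behind non-deterministic output is worth making concrete. A minimal sketch, using the article’s hypothetical 0.1% figure as the per-call failure rate, shows how quickly a small chance of a bad output compounds across many calls:

```python
# Probability of at least one unacceptable output across n independent
# calls, given a per-call failure rate (0.1% is the article's example).
FAILURE_RATE = 0.001

def p_at_least_one_failure(n_calls: int, rate: float = FAILURE_RATE) -> float:
    """Chance that n independent calls include at least one bad output."""
    return 1 - (1 - rate) ** n_calls

for n in (100, 1_000, 10_000):
    print(f"{n:>6} calls -> {p_at_least_one_failure(n):.1%}")
# At 10,000 calls, at least one bad output is a near-certainty,
# which is why a "99.9% acceptable" system still needs guardrails.
```

The arithmetic is the point: an error rate that looks negligible per interaction becomes an operational certainty at product scale.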

How do we confidently lead the development of ground-breaking products when they have the potential to go wild? 

Three changes to the leadership mindset 

To remain competitive in the age of AI, I think we need to change our mindsets to provide the space for growth and opportunity. 

  1. Embrace uncertainty. We need to move from seeking upfront certainty about projects to encouraging continual experimentation that reduces uncertainty over time. With AI, we can no longer plan and fully control experiences before exposing them to users. We need to become comfortable treating “we don’t know yet” as the starting state, and to focus on enabling continual trial, learning and improvement. In high-risk environments, this means guardrails or ‘human-in-the-loop’ approaches until confidence levels are reached. Until we plant the seeds of our AI-generated garden, we will not know how each will grow. 
  2. Focus on outcomes, not internals. You don’t need to understand the biological processes inside a plant to know whether a garden is healthy. What matters is whether plants are thriving, whether weeds are spreading, and whether the ecosystem remains balanced. In the same way, AI leaders should focus on evaluating observable outcomes and constraints – continuously monitoring behaviour and intervening when growth becomes unhealthy or unsafe. 
  3. Act as long-term guardians. Finally, we need to change our mindsets from delivering projects to providing continual guardianship. AI systems cannot be considered ‘done’ in the same way that a traditional piece of software can be considered ‘built’. Like a garden, they require ongoing monitoring, watering and pruning. 
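The “guardrails until confidence levels are reached” idea can be sketched as a simple release gate. This is an illustrative sketch only: `Draft`, `release` and the confidence score are hypothetical names standing in for whatever evaluator and review workflow your system actually uses.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Draft:
    text: str
    confidence: float  # assumed score from an upstream evaluator, in [0, 1]

def release(draft: Draft,
            threshold: float = 0.9,
            escalate: Callable[[Draft], str] = lambda d: "escalated to human review") -> str:
    """Human-in-the-loop gate: auto-release only above the confidence
    threshold; everything else is routed to a human reviewer."""
    if draft.confidence >= threshold:
        return draft.text
    return escalate(draft)
```

As confidence in the system grows, the threshold can be lowered gradually, shrinking the human-review queue without ever removing the escape hatch.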

What medicine teaches us about managing uncertainty 

Other industries have already learned how to responsibly manage complex, unpredictable systems like AI-generated software. Medicine is a good example.  

We trust modern medicine with our most valuable possessions: our bodies. Like an LLM’s, a medicine’s behaviour is often too complex to explain in simple terms. And like AI systems, the effects of medicines can vary wildly depending on the ‘user context’. Side effects are subject to evolving and unpredictable cross-interactions with other treatments. 

Despite this uncertainty, medicine operates under some of the strictest safety and accountability expectations of any industry. 

Here’s how pharmaceutical companies leverage the principles we’ve discussed: 

  1. Embrace uncertainty. Pharmaceutical firms handle uncertainty consciously and deliberately, investing heavily in research across a wide portfolio of medicines, with an upfront understanding that not everything they try will succeed. Staged trials are always required to evaluate a candidate’s potential. 
  2. Focus on outcomes, not internals. In medicine, regulatory approval can be based on measured outcomes and acceptable risk, rather than on an understanding of how a medicine operates. Drugs like aspirin were used successfully for decades before their molecular mechanisms of action were understood. 
  3. Act as long-term guardians. The development of a medicine cannot be considered complete once it has shipped to pharmacies. Instead, companies maintain ongoing responsibility: medicines are continually re-evaluated based on their performance and the side effects reported. 

Doubling down on existing risk-reduction practices 

Although non-deterministic AI products are a relative novelty, software product teams have always had to deal with significant uncertainty – from users and the business itself. 

I believe the following recommended practices are now essential in the context of AI-driven software: 

  • Iterative development. Agile iteration allows teams to flex products as they build them, in response to what they learn along the way. Generative AI underlines the need for iterative development methods because we cannot know in advance what will be generated in production. 
  • Testing early with real users. Lean methodology advocates getting your software into users’ hands as early as possible. Early exposure is even more important in the context of AI products, because we need to test AI with real-world user inputs and context before we can understand what its output looks like. 
  • Quantitative feedback. Traditional software needs KPIs, success measures, analytics and dashboards to understand how users are engaging with it. With the advent of AI-enabled products, quantitative measures are more important than ever – because we need to continually evaluate risk levels. 
  • Experimentation. Structured experimentation allows businesses to validate the impact of product changes at scale. A/B testing has traditionally focused on learning about how a change impacts users. With AI, the remit of experimentation expands to cover testing how different models perform and our ability to influence AI agents, too. 

As software becomes cheaper to generate, the weight of effort is shifting from construction to cultivation – from building systems to observing, verifying, and guiding them as they grow. The most successful teams won’t be those that try to control every variable, but those that plant many seeds, tend their gardens carefully, and evolve the fastest. 

Applying these principles to AI-driven software 

At Softwire, we are doing for AI-driven software what pharmaceutical companies do for medicines: helping customers test hypotheses empirically, measure impact continuously, stage rollout based on risk levels, and continually evaluate in production. This opens the door to capitalising on the promise of AI while using measurement to keep ourselves grounded. 