What’s the best way to avoid work?

Experienced users choose augmentation over automation when putting AI to work; understanding why could save you from the backlash that’s already hitting early adopters.

Apparently, we’re quite good at avoiding work. 

Us humans, that is! Since forever, we’ve done whatever we can to make our lives easier (and our lives were very far from easy for much of our history). We’ve done this by finding ways to make our pitiful strength and frail bodies go further – whether that’s throwing sharp sticks, harnessing the might of electricity, or sending remote-controlled robots into toxic waste. In short, we like to “augment” ourselves. 

Another way we like to avoid work is to build machines to do tedious tasks for us (such as the computer!) or even change our environment to avoid work (digging irrigation channels, for example). These are examples of “automation”. 

What AI usage tells us about how people adopt tools

I was thinking about these two approaches to our fundamental laziness while reading Anthropic’s recent data analysis: an index built from two years of Claude usage data, which divides the tasks people bring to Claude into “augmentation” and “automation”. By coincidence, a couple of days later, I was handed research into ChatGPT usage, which divided prompts into “asking” (roughly equivalent to “augmentation”) and “doing” (i.e. automation). There’s a lot you can read into the statistics (which, famously, can be used to prove anything, so please bear with me…), but what surprised me most was that longer-term, more experienced users of AI tend to prefer “augmentation” solutions over “automation”, whereas new adopters reach for automation first. 

Suppose that’s true. Suppose we, as a species, confronted with this new technology, find that the most effective and acceptable way to apply it is to make ourselves better, rather than to replace ourselves. Remember, we’ve been confronted with plenty of game-changing technologies before, and invariably we’ve used them to make ourselves more effective at accomplishing things, rather than to replace ourselves entirely. 

I think I buy the idea. 

The cultural backlash against automated creative output

People seem to despise AI-generated content, whether that’s “AI slop” replete with em-dashes and odd negatory non-sequiturs (“not just changing shirts—this changes the planet”, and so on), or AI-generated music and images. With music and images, it’s now often quite hard to distinguish AI-generated work from human-created work, but when people find out, they seem to feel an instant revulsion and resent the apparent trickery. It’s telling that perhaps the most culturally acceptable form of AI-generated imagery is the lowest common denominator of memes that mock or poke fun (Donald Trump waltzing with King Charles gave me a recent chuckle).  

On a more serious level, we’re seeing explicit warnings from enterprises and governments that AI-generated bid responses “will be rejected”. (Ironic, as writers often suspect that those responses are read and evaluated, even if not written, by AI…) Experienced software engineers frequently sound the alarm over code generated purely by AI with no human oversight, and “vibe coding” in high-scale, let alone highly regulated, production environments gets a bad rap (despite some clearly impressive recent results elsewhere). 

Why augmentation is usually accepted

On the other hand, it’s commonplace and – I think – perfectly acceptable to produce media in which AI played an augmenting role: running a human-written essay through an LLM to check for clumsy prose, say, rather than getting it to write the essay from scratch. Personally, I like to rubber-duck my ideas with Claude before writing them up myself (this piece included – I hope you don’t mind). We see this with other kinds of AI, too. Consider self-driving cars (and the car itself is a great example of human augmentation!) – we find fully autonomous cars troubling from a moral and ethical standpoint (the trolley problem, for example), but driver aids are a safety boon that lowers our insurance premiums. 

It turns out, I think, that the systems and processes we’ve built in our imperfect world are sometimes, even often, pretty good. We revisit them constantly and we make them better. Augmentation elevates us in exactly this way – it keeps humans in a cycle of continuous improvement. Automation, by contrast, can lock in mediocrity (see robotic process automation, or RPA – described memorably by one of my colleagues as “pouring concrete over your systems”). We see call-centre or website customers become infuriated when faced with a chatbot and no way of escalating to a human: that’s automation bolted onto a broken process, rather than augmentation that hands over once the process is exhausted. Committing to an automation removes the opportunity to critically examine and improve the underlying process.  

For a straightforward example of what I mean, consider some work Softwire did for Tax Systems, where we used AI to categorise accounting line-items – a tedious, time-consuming task often given to junior accountants. The software we created didn’t automate the process; that is, Tax Systems didn’t just let the AI do the whole job, locking in any subtle, hidden flaws in the process. Instead, we augmented the work of the accountants: the AI categorised most line-items competently, but could also express doubt and flag ambiguous items to human accountants. 
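
To sketch that human-in-the-loop pattern in code (this is emphatically not Tax Systems’ actual implementation – the classifier, threshold, and data shapes below are all hypothetical), the key idea is a confidence score that routes uncertain cases to a person:

```python
from dataclasses import dataclass

# Hypothetical line-item record -- not Tax Systems' real schema.
@dataclass
class LineItem:
    description: str
    amount: float

def classify(item: LineItem) -> tuple[str, float]:
    """Stand-in for the real model: returns (category, confidence)."""
    # A production system would call an ML model or LLM here.
    if "rail" in item.description.lower():
        return "Travel", 0.96
    return "Uncategorised", 0.40

CONFIDENCE_THRESHOLD = 0.85  # Below this, a human accountant decides.

def triage(items: list[LineItem]) -> tuple[list, list]:
    """Split items into those the AI handles and those flagged for review."""
    auto, review = [], []
    for item in items:
        category, confidence = classify(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            auto.append((item, category))    # routine bulk: AI categorises
        else:
            review.append((item, category))  # ambiguous: flag to a human
    return auto, review
```

The threshold is the augmentation dial: the AI absorbs the routine bulk, while anything it’s unsure about still gets human judgement – and those human decisions can feed back to improve the model, keeping the process open to scrutiny rather than set in concrete.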

Augmenting human judgement

Spotting fraudulent transactions at scale is a similar story: AI can augment our ability to do it. With millions of transactions daily, we’ve had automated fraud detection for a while, but machine learning, used as augmentation, takes it a step further. Don’t just take my word for it – here’s Paul Larder, Head of Risk and Assurance at Softwire client LNER:

“Applying machine learning has been a real game changer for us. Previously, we’ve relied on the talent of our Revenue Protection Team to identify customers who deliberately purchase incorrect tickets for travelling on our services. By using AI, we can accurately analyse large amounts of information quickly and identify patterns that our skilled team can investigate further. In essence, it’s helped us make even more of our own luck!” 
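
To make that concrete, here’s a generic anomaly-detection sketch – not LNER’s actual system, and with features and numbers invented purely for illustration – showing the same handover: a model ranks the oddest transactions, and the skilled human team investigates them:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented example features: (fare paid, number of journey legs).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[30.0, 2.0], scale=[5.0, 0.5], size=(1000, 2))
odd = np.array([[2.5, 6.0], [1.0, 5.0]])  # suspiciously cheap multi-leg trips
transactions = np.vstack([normal, odd])

# Unsupervised model: learns what "normal" looks like, scores deviations.
model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
scores = model.decision_function(transactions)  # lower = more anomalous

# Surface only the most anomalous cases for humans to investigate.
for idx in np.argsort(scores)[:5]:
    print(f"transaction {idx}: fare={transactions[idx, 0]:.2f}, "
          f"legs={transactions[idx, 1]:.1f}, score={scores[idx]:.3f}")
```

The model doesn’t decide who is a fraudster; it simply narrows millions of transactions down to a shortlist worth a person’s time.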

Looking to the future, then: I think we’ll want to see more AI solutions – sophisticated solutions – that augment rather than automate. That makes for a useful rule of thumb when considering AI as a solution. Ask yourself: am I augmenting, or am I merely automating? How might we improve this situation, rather than replace one problem with another? 

Let’s get augmenting