
In this episode of The Digital Lighthouse, host Zoe Cunningham speaks with Macs Dickinson, Director of Engineering at LHV Bank. Macs leads teams responsible for building, operating, and supporting the bank’s software infrastructure, and has spent his career working across regulated industries including financial services, gambling, and ticketing.
Zoe and Macs explore a question many technology leaders are wrestling with right now. As generative AI makes it easier than ever to produce code, documentation, and solutions at speed, how do organisations avoid creating a dangerous form of delivery risk: knowledge debt?
Macs explains why AI can accelerate output while quietly reducing understanding, and why that matters even more in regulated environments where teams must be able to explain, support, and stand behind the systems they ship. He shares practical examples of how his teams are adopting machine learning and AI agents safely, why narrow internal use cases are the smartest starting point, and how governance, monitoring, and ownership help leaders stay in control of non-deterministic technology.
Discover
- Why AI can accelerate delivery while quietly creating knowledge debt
- Why regulated industries need a different approach to AI adoption
- How governance, monitoring, and human oversight make AI safer to use
- Why narrow internal use cases are often the smartest place to start
- How to build guardrails that keep teams accountable and in control
If you enjoyed this episode, please subscribe so you don’t miss future conversations. We’d also appreciate a quick rating and review on Apple Podcasts or Spotify. Your feedback helps us continue making the show.
Listen here
Watch here
Episode highlights:
- 06:30 – What makes an industry regulated and why the cost of failure is so high
- 14:00 – Why generative AI is fundamentally harder to govern than traditional software
- 15:11 – Building an internal AI code reviewer to understand how AI agents really work
- 25:20 – Why AI creates knowledge debt and why engineers must still own what they ship
About our guest
Macs Dickinson
Director of Engineering, LHV Bank
Macs Dickinson is Director of Engineering at LHV Bank, where he leads teams responsible for building, operating, and supporting the bank’s core software platforms. He has extensive experience working in highly regulated industries, including financial services, gambling, and ticketing, with previous leadership roles at organisations such as Flutter.
Macs puts a strong focus on building high-performing teams, operational resilience, and safe technology adoption. His work today sits at the intersection of engineering, regulation, and emerging technologies, including the practical and responsible use of AI in complex environments.
Transcript
[00:00:00] Zoe Cunningham: Hello and welcome to The Digital Lighthouse, where we get inspiration from tech leaders to help us navigate the exciting and ever evolving world of digital transformation. I’m Zoe Cunningham.
[00:00:11] Zoe Cunningham: We believe that meaningful conversations can illuminate the path forward, helping us harness the power of technology for innovation, scalability, and sustainability.
[00:00:21] Zoe Cunningham: In this episode, I’m delighted to introduce Macs Dickinson. Macs is the Director of Engineering at LHV Bank, where he leads a high-performing team of engineers who build, operate, and support the software infrastructure of the bank. Over his career, Macs has worked in other regulated industries, including ticketing and gambling, with a stint as senior engineering manager at Flutter.
[00:00:44] Zoe Cunningham: In this episode, we’ll be exploring how far it is possible to take advantage of new tech, especially tech based around, let’s say, less reliable systems like generative AI, when you are working within a regulated industry where security is paramount.
[00:01:03] Zoe Cunningham: So Macs, welcome to The Digital Lighthouse.
[00:01:05] Macs Dickinson: Hi Zoe. Thank you for having me.
[00:01:07] Zoe Cunningham: Can we start, maybe can you tell me a bit about your background and how you got to where you are today?
[00:01:13] Macs Dickinson: So I didn’t plan on going into technology or engineering at all. I’d really love to be able to sit here and go, oh yeah, I started fiddling with computers when I was five and I got into it, but it just didn’t work out like that.
[00:01:25] Macs Dickinson: As the flag behind me suggests, I grew up in rural Mid Wales, and computer science just wasn’t a thing that we learned there. We had a computer room at school, but it just wasn’t a lesson. So in school I was really good at maths. And that was the most computer sciencey thing that we had going.
[00:01:45] Macs Dickinson: But when I finished my A Levels, I didn’t really know what I wanted to do. And being a really smart teenager, I thought, I’m gonna be a rockstar. So I went to uni and did a music degree, and that was an absolute car crash. It just really didn’t suit me.
[00:02:05] Macs Dickinson: So after one year of pretending that I’m a music theorist… I did Music at A Level, and out of my three A Levels, Chemistry, Music, and Maths, it was my worst. I got a C. I did really well in the others. But then I thought, no, I’m gonna be a rock star. But that obviously didn’t happen. So I dropped outta uni when I was 19, after my first year, and didn’t really know what I was gonna do.
[00:02:31] Macs Dickinson: I was good with computers, I was good with talking to people. So I got a job doing customer support for O2 when the first iPhone came out. I thought that was really cool. Got to get a device before they came out to public launch. That was really fun. And then from that moved into second line broadband technical support, and realised these computer things, the internet, all that sort of stuff is quite interesting.
[00:02:54] Macs Dickinson: And from that got a good idea of how the internet worked. So this was all just sort of customer service roles, really. But from that I got a role as a localisation engineer, which basically means I edited files for a living at a translation company. So it was my job to take any sort of file format and make it bilingual.
[00:03:14] Macs Dickinson: And the old saying goes, if all you’ve got is a hammer, everything looks like a nail.
[00:03:19] Zoe Cunningham: Yeah.
[00:03:20] Macs Dickinson: For about a year, all I had was regular expressions. And wow, you can do a lot with regular expressions. But it got to a point where it was really quite painful stuff that we were doing.
[00:03:34] Macs Dickinson: And my boss at the time had been doing a bit of C# and I thought, oh, that’s interesting. That can help me do this job. So I started teaching myself a bit of C#. I made the situation work for myself. So at the time my girlfriend, now wife… I was up in Leeds, she was working in Sheffield. So she was dropping me off at work an hour early and picking me up an hour late.
[00:03:55] Macs Dickinson: And I used that time to teach myself C#. I literally read C# For Dummies ’cause it was perfect for me.
[00:04:02] Macs Dickinson: It worked! And this is the thing, it just clicked. And the whole sort of creative side… ’cause I think there’s a lot of parallels between music, especially music production, and computer science, in that you’re building things, you’re creating things, you’re putting patterns together. And that really clicked.
[00:04:16] Macs Dickinson: So I started from there, writing software, really quite basic and not very good software. But it was an entry point. And then the next, I don’t know, five, six, seven years from that just really progressed through.
[00:04:28] Macs Dickinson: I got a job as a software engineer, which was a huge step for me. And then progressed through the various different levels. Then about 10 years ago, when I moved to Ticket Arena, that was my first proper engineering leadership role. That was again another baptism of fire, where you come through as an engineer thinking, yeah, computers, they make sense.
[00:04:48] Macs Dickinson: And then you go into a leadership role and it’s completely different, it’s all about humans. And that was really difficult at first. That was a massive adjustment and one I think a lot of people have been through. But it taught me a huge amount, through making mistakes and through doing. It taught me a lot about how to run a team, and the importance of psychological safety in creating an environment for a team to flourish.
[00:05:08] Macs Dickinson: And then I moved through various… went through Flutter and then several different companies within that. And that really taught me how to make a team perform at the top level.
[00:05:19] Macs Dickinson: So we had so much data. It was a bit of a cheat code really being able to have such good observability and to be able to track things so well. That, that really, really taught me how to be a… how to build a strong team.
[00:05:31] Macs Dickinson: And obviously now at LHV Bank, I’m taking a lot of that learning over the last few years, especially from the regulated side of things as well, ’cause it does limit what you can and can’t do. I’m taking all of that and putting it together into how to build strong teams.
[00:05:45] Zoe Cunningham: It’s such a great introduction. So firstly, I love this idea that you learn how to code. You don’t have to be taught how to code. I think that’s really important and I think that flexibility within the industry is great. And seeing people switch later in their careers. I, think that’s all amazing.
[00:06:04] Zoe Cunningham: But there was something else you said right at the end I thought was fantastic because it hits on what we wanna talk about today that… we all go into programming thinking, oh, it’s about computers. And so many of us end up in leadership roles and we’re like, ah, no, it’s not, it’s all about people.
[00:06:19] Zoe Cunningham: And AI kind of sits in the middle of those two things. But let’s just set the scene. We’re talking about regulated industries. What do we mean by that?
[00:06:30] Macs Dickinson: I think if you are in a scrappy startup and you are just trying to get things moving as quickly as possible, the risk of doing things wrong is really quite low. Obviously it depends on what you’re building, but generally the risk is quite low.
[00:06:43] Macs Dickinson: The reason industries are regulated is the risk of doing stuff wrong is much, much higher. It is to an extent the same across gambling and finance in that if you do something wrong, well it’s not just about doing something wrong, if you don’t give something the proper care and attention, you can seriously negatively affect people’s lives.
[00:07:04] Macs Dickinson: So in gambling, you can… gambling addiction is a thing. You need to really protect against that and make sure that you’re not adding to that. In banking, it’s people’s livelihoods at stake. If your payment system goes down at a time when someone is, I dunno, stuck in a difficult life situation and they need to make a payment, then you can really cause them serious stress.
[00:07:24] Macs Dickinson: The regulation exists to protect the customers. I think that’s a really important first point. ‘Cause I think as a kind of fresh faced engineer going into a regulated environment, you’re like, oh, this, all this stuff’s annoying. It gets in the way, it slows us down. But actually you have to come at it from the customer’s perspective and think, actually what are we protecting?
[00:07:41] Macs Dickinson: And you look at things like atomic energy or healthcare or that sort of thing, and you just look at the cost of getting it wrong, basically. And that’s what the regulations are all about, protecting the consumer. And in the case of atomic energy, I think we all wanna make sure that that software’s working properly.
[00:08:01] Zoe Cunningham: Yeah. And having said that, you come in and from an engineering standpoint, sometimes it can feel frustrating and like you can’t take advantage of things. But have you found that regulation does actually slow you down in terms of adopt… or does it prevent you from adopting certain technologies?
[00:08:23] Macs Dickinson: So when you do it wrong, yes, I’d say. But it is all about the angle that you come at it from. So if you just see compliance and regulation as a gate that you have to get through, or there’s a checklist that we do right at the end, then of course it’s gonna be painful, because you’ve not thought about those things upfront.
[00:08:40] Macs Dickinson: But if you flip it on its head and see compliance as an enabler, and what I mean by that is understanding what the customers need and how to protect those customers, then that can be a really clear part of how you design your platform and how you design your product. And that’s absolutely enabling. You do that right at the beginning.
[00:08:56] Macs Dickinson: I think a good example of this would be operational resilience, which is a compliance requirement that has come in recently. It ensures that we have certain levels of observability, of incident management, of tracking between all of our different suppliers, so we know if one thing goes down, we understand the impact of that, and then we are able to have an appropriate response.
[00:09:16] Macs Dickinson: So this is all about protecting the financial system, which is built on so many different suppliers, and we’ve seen it very recently with the Cloudflare issues, and we’ve seen other issues recently as well. And understanding, okay, so this third party went down, what does that mean? And then what can we do for our customers? What sort of danger are our customers in, what can we do to protect that? Do we need to have failovers and resilience in place?
[00:09:40] Macs Dickinson: So the regulation’s driving that. But I haven’t seen anything… apart from the reporting aspect of it, I haven’t seen anything in there that we weren’t doing in Skybet in 2016. Because, yes, it was a regulated industry, but that’s not why we were doing it. We were doing it because if we went down, it cost us a ton of money, right? So we had a really big stability drive and we built all of these practices, and I’ve seen all this come in over the last year or so, thinking, okay, so we’re trying to do the thing that we did nine years ago.
[00:10:08] Macs Dickinson: But it’s just different drivers, isn’t it? And it’s a, it is a good thing because it’s making sure that… ’cause not everyone has that same focus on uptime. But then when you have a service that people rely on, you need to have that same focus on uptime.
[00:10:21] Macs Dickinson: Yeah, I don’t think at all it has… it doesn’t have to be a blocker. You just have to understand the world in which you’re working. It’s just about being realistic, isn’t it?
[00:10:30] Zoe Cunningham: Well, and accepting that with your services, there are consequences of what you do. The way you explain it, it sounds almost like just being a grown-up, right? And yeah….
[00:10:41] Macs Dickinson: Exactly that. And I think when you’re early on in your career, full of enthusiasm, you want to just, yeah, ship it, ship it, ship it. And absolutely, that’s a really good drive and mentality to have. And it’s the whole sort of Silicon Valley fail fast, fail quick sort of thing. Which, there is some merit in that in terms of the learning that you get from it.
[00:10:59] Macs Dickinson: But you have to understand at what cost. So when I talk about it internally, it’s: we are not allowed to fail fast. We can fail safely, as long as we’ve got processes in place, ’cause you need to fail sometimes to learn, but you need to fail in a way that it’s not negatively affecting the customers.
[00:11:16] Macs Dickinson: So being able to carefully fail is the way to look at it, anyway.
[00:11:23] Zoe Cunningham: Yeah. That’s really cool. So can you give me, and I really love how you’ve… you organized your thoughts around it. So it’s not just about rushing in and going, we need to do something with AI, we need to keep moving. It is actually, how are we operating and how do we incorporate this into our particular operating environment.
[00:11:43] Zoe Cunningham: So can you tell me like maybe an example or two of specific ways that you’ve been using new technology, like specifically AI. And I suppose what you’ve had to do to make sure that, when you fail, it’s safe rather than disastrous.
[00:12:01] Macs Dickinson: Yeah. So the first thing is not really looking at it too differently to just general software development. I think making sure things go through the relevant governance. So that we’re not putting out something that, first of all our customers don’t need, and secondly that’ll affect them badly.
[00:12:18] Macs Dickinson: So there’s a few things that we’ve been doing, and I think there’s probably two different ways to look at it. So conventional AI, or machine learning as I like to call it, that’s something that we’ve been using for a long time. It’s not new. None of this stuff is new. And there’s quite well-established patterns for training and monitoring the models.
[00:12:37] Macs Dickinson: But for us it was quite new. So we use a lot of third parties that use these sorts of things, but we’ve more recently started building our own machine learning models specifically around fraud prevention. It’s a really good use case for machine learning ’cause it’s just huge amounts of data.
[00:12:50] Macs Dickinson: You start to see signals and you can build things on top of that. And it was when we were building that model where I looked at our software development lifecycle and I was like, ah! It would be totally possible for us to put something out that’s okay, it’s completely following our process, but would be terrible.
[00:13:07] Macs Dickinson: ’Cause it could be bringing back the completely wrong responses. So we took that and went, okay, we need to revise this a little bit. And this was where actually spending time with my risk team, to understand the model governance that they have around financial models, et cetera, was able to inform us to say, ah, okay, so we need to think about how we monitor how effective this model is. Is it continuing to give back the results that we want? How do we monitor that on a regular basis?
[00:13:33] Macs Dickinson: So once a month, we pull down the model results, we pull down the actual results, we take a sample, and then we run that through the team of humans and make sure, is this still behaving in the right way?
[00:13:45] Macs Dickinson: Or maybe the situation’s changed. Maybe we need to course correct in the model training, et cetera. So building that observability in from the outset is, I think, the absolutely key thing. And that’s probably the main additional need on top of standard software development.
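As a rough illustration of the monthly check Macs describes, pulling down model results and actual outcomes, flagging drift, and sampling cases for a human team, here is a minimal sketch. The data shapes, sample size, and threshold are assumptions for illustration, not LHV’s actual process.

```python
import random

def monthly_model_review(predictions, actuals, sample_size=200, alert_threshold=0.9):
    """Compare model outputs with real outcomes and pull a sample for human review.

    predictions / actuals: dicts keyed by case id, e.g. {"case-123": "fraud"}.
    """
    shared_ids = sorted(set(predictions) & set(actuals))
    if not shared_ids:
        raise ValueError("no overlapping cases to review this month")

    # Agreement between what the model said and what actually happened.
    agreement = sum(predictions[i] == actuals[i] for i in shared_ids) / len(shared_ids)

    # A random sample the human team inspects by hand each month.
    sample_ids = random.sample(shared_ids, min(sample_size, len(shared_ids)))
    human_review_queue = [
        {"case": i, "predicted": predictions[i], "actual": actuals[i]} for i in sample_ids
    ]

    # If agreement drops below the threshold, it may be time to course correct
    # the model training rather than keep shipping on autopilot.
    return agreement, agreement < alert_threshold, human_review_queue
```

In practice the actual outcomes would come from confirmed fraud decisions, and the sampled queue would feed the human review Macs mentions.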
[00:14:00] Macs Dickinson: Then when we look at Gen AI, I think, again, like it’s just exploded, hasn’t it? And again, for the most part, a lot of this is just automation. It is just software engineering. But the big thing that’s come from Gen AI is that it’s non-deterministic. So the conventional approach of we’ll write a load of tests to make sure that it’s, it’s gonna work isn’t quite as foolproof.
[00:14:24] Macs Dickinson: So the thing that we have done here, and this is both in terms of the gen AI that we’re using within the tools that we use, ’cause every tool seems to be adding AI plugins, which is its own challenge, but then also the stuff that we’ve been building ourselves, it’s really starting small. It’s understanding how it works, understanding how to monitor it again. But also being able to quality control.
[00:14:46] Macs Dickinson: And I think that’s the really big thing here is I think probably anyone watching this will be able to think of an example of a chat bot that hasn’t worked well, because they’re all over the place, aren’t they?
[00:14:58] Macs Dickinson: And, yeah, the key thing is getting to a level of confidence in the quality of those responses internally to be able to then let it out to your customers whilst also having observability and checking and evaluations in place.
[00:15:11] Macs Dickinson: So what we’ve done initially is we’ve built a number of internal LLM powered bots, essentially. But we’re building them in things that we can quality control ourselves. The directive I gave to the team who’s looking at this was: build something not to release value, but build something to understand how an AI agent would work. But build something that you can quality control yourself.
[00:15:35] Macs Dickinson: So what they built was a code reviewer, something that engineers have a strong opinion on what’s good and what isn’t.
[00:15:41] Zoe Cunningham: Yeah. You know if it’s got it right or not.
[00:15:43] Macs Dickinson: Exactly. And you can tell straight away. And it was really interesting ’cause the first version that they did was… it was absolutely brutal actually. And it was calling out things that really didn’t need to be called out. And it got them into prompt engineering, to understand, okay, how does the prompt change how this works? They started off with a single agent doing everything and went, actually, no, that doesn’t work.
[00:16:03] Macs Dickinson: We need to break that down. We want a syntax agent, we want an acceptance criteria agent that goes away and looks at Jira and comes back. And they really broke it down. And then had a single agent at the top going, okay, I’m gonna send this out, get the feedback, but it’s my responsibility to know how to give that back to the developer in a useful way.
[00:16:19] Macs Dickinson: And so there’s some really interesting stuff there. So it starts off as quite a trivial thing, but actually as you get into it, it becomes relatively complicated in how it’s orchestrated.
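To make the shape of that orchestration concrete, here is a minimal sketch of a top-level reviewer fanning work out to narrow agents and owning how the feedback goes back to the developer. The agent names, prompts, and the stubbed `run_agent` call are hypothetical; a real version would call an LLM and, for acceptance criteria, look up the Jira ticket.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    comment: str

# Each narrow agent gets its own tightly scoped prompt (illustrative wording).
AGENT_PROMPTS = {
    "syntax": "Review only syntax, naming, and style in this diff. Ignore everything else.",
    "acceptance_criteria": "Check the change against the linked ticket's acceptance criteria.",
}

def run_agent(name: str, prompt: str, diff: str) -> list[Finding]:
    """Stub standing in for an LLM-backed specialist agent."""
    return [Finding(agent=name, comment=f"[{name}] nothing to flag")]

def review_pull_request(diff: str) -> str:
    """Top-level agent: fan out to specialists, then merge feedback into one review."""
    findings: list[Finding] = []
    for name, prompt in AGENT_PROMPTS.items():
        findings.extend(run_agent(name, prompt, diff))

    # The orchestrator decides how to present the feedback usefully to the developer.
    body = "\n".join(f"- {f.comment}" for f in findings)
    return "Automated review (advisory; a human still approves the PR):\n" + body

if __name__ == "__main__":
    print(review_pull_request("diff --git a/payments.py b/payments.py"))
```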
[00:16:28] Macs Dickinson: And that’s been a really interesting piece of work for them to do. We’ve now got that rolled out across pretty much all of our teams, and we’re getting quite a bit of value from it. But the main value, I’d say, so obviously it’s useful to have it within our pipeline, but the main value is that we now understand the makeup of an agent, and we’ve been able to look at using that in ways that other people within the company can validate.
[00:16:46] Macs Dickinson: So we’ve got things like regulatory bots, so a bot that understands all of the different compliance documentation, which is generally quite complicated and there’s a lot of information going on there. So building that up so that we can put that in front of our compliance team. We’ve actually got one in our parent company, we don’t have it yet in the UK company, where in Slack there’s a bot and you can ask questions about the regulation.
[00:17:08] Macs Dickinson: And actually quite often it comes back and says, ‘I don’t know’, which is fine because there’s humans there that come back and answer. And we have, we’ve intentionally made it come back and say, ‘I don’t know’, rather than trying to work things out, which again, the LLM loves to do, loves to make up an answer. It wants to please you.
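A minimal sketch of that “answer from the documents or say I don’t know” behaviour is below. The relevance threshold, prompt wording, and the stubbed `call_llm` function are assumptions for illustration, not the actual bot.

```python
SYSTEM_PROMPT = (
    "Answer using ONLY the regulation excerpts provided. "
    "If the excerpts do not contain the answer, reply exactly: I don't know."
)

def call_llm(system: str, user: str) -> str:
    """Stub standing in for whatever model the bot actually calls."""
    return "I don't know"

def answer_regulation_question(question: str, retrieved: list[tuple[float, str]]) -> str:
    """retrieved: (relevance score, excerpt) pairs from the document search step."""
    relevant = [text for score, text in retrieved if score >= 0.75]

    # If retrieval found nothing relevant, don't let the model improvise:
    # say "I don't know" and leave the question for the humans in the channel.
    if not relevant:
        return "I don't know"

    context = "\n\n".join(relevant)
    return call_llm(SYSTEM_PROMPT, f"Excerpts:\n{context}\n\nQuestion: {question}")
```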
[00:17:22] Macs Dickinson: So yeah, working around that. So really it’s building things internally that you can quality control, and then the next step is putting it out externally. So again, customer service is another really good example for this sort of thing. We’re building those models at the moment internally so that they help our customer service agents get to information more quickly. But absolutely we wanna put that out direct to customers at some point.
[00:17:45] Macs Dickinson: I think this is a really good way to look at it though. ’Cause I think a lot of people, non-technical people and technical people, will be looking at this rise of agents, rise of AI, and being worried about their jobs. And I think that’s only natural. Thinking, at what point is it gonna replace me?
[00:18:00] Macs Dickinson: But I think taking this approach where you go, look, it’s not gonna replace you, it can’t replace that human interaction. And actually that filter to be able to go, is this even a good question, right? To understand that is really important. But actually using it as a tool on your side so that rather than having to spend, I don’t know, 30, 40 minutes digging into case information, you can go, okay, AI, go and gather me all that information and tell me what it probably is.
[00:18:24] Macs Dickinson: You might still need to do that 40 minute dig for something that the AI can’t find, but you might only need to do 30 seconds if it can do it more quickly. And I think again, that’s another really good use case for Gen AI and it allows you to test it in a way that it’s not, just immediately impacting your customer, you can manage it quite well internally.
[00:18:42] Macs Dickinson: You do still need to have good monitoring in place to make sure it’s not making stuff up. I think that’s the absolutely crucial part.
[00:18:47] Zoe Cunningham: Yeah, I really like that idea of both using, because tech can sometimes end up feeling a bit like you’re on your own and someone says, build this with no understanding of what’s going on, and then you go away and, then it’s hard to communicate back again.
[00:19:02] Zoe Cunningham: But actually what you’re talking about there works in both directions. There are already risk models and risk assessments for financial modeling, which has been a big part of the financial sector for a long time, right? And actually saying, well, this is another whole bunch of data that we’re managing, how do we cross-supply those? And then also, have you got any kind of principles that you share back, in terms of people saying, I might wanna use an agent for this, or I might wanna think about doing that?
[00:19:32] Macs Dickinson: Yeah, absolutely. Usually the first thing is, okay, let’s limit the scope of this.
[00:19:38] Macs Dickinson: You can do so much, so easily with AI, can’t you? Especially ChatGPT and Claude and the likes. You can get off the ground so easily. So it is usually a case of saying, actually, if we want this to… if we wanna be able to scale this in any sort of way, and that might even just be from one person to a team of five, we need it to be quite reliable.
[00:19:58] Macs Dickinson: And being able to be reliable on a broad set of things is very difficult. Even being able to be reliable on a small set of things, you have to go really deep, and this is one of my biggest learnings from building agents: you can do so much, so easily, but it’s surface thin. As soon as you want to go deep, you have to think about… again, it’s no different to classic software. It’s all about the edge cases and the nitty gritty details. But as soon as you open up the inputs… because that’s what’s changed, is it’s gone from being a fixed API input to, it could be anything, right?
[00:20:31] Macs Dickinson: That’s why it suddenly becomes so complicated to actually build a service that can be reliable for that completely wide set of inputs. So the first thing I find is really narrow that use case. Okay, let’s just find what is the top thing. I was talking to someone in my HR team last week and I was like, okay, what are the top queries that you get that could have been answered by someone just reading the company handbook?
[00:20:54] Macs Dickinson: And they gave me a list of five questions, and I just threw that into a custom GPT, got the company handbook in there, and it was able to answer four out of five of those questions with very little fiddling.
[00:21:04] Macs Dickinson: And so straight away it’s, okay, we can make that super narrow and we can probably save them quite a bit of time. And you just start there and build up slowly. And I think it’s trying to take quite a… I suppose it is a cautious approach. For example, I haven’t given everyone in the organisation Claude Code, ’cause I think it’s a really quick way to generate a huge amount of tech debt if you don’t have really clear guidance on how to use it and when to use it. We will get there. We absolutely will get there, but we do it in a managed way.
[00:21:29] Macs Dickinson: So I think that’s the kind of key thing there. I think the other thing, talking about creating a space for agents to work. Again, we’re super, super early on this, but we’ve recently gone through a bit of a restructure. We were using the Team Topologies methodology to look at how do we create stream-aligned teams, so teams that have all of the skill sets and the autonomy to be able to deliver things for our value streams or for our segments end to end. And that’s really about how do you give a team agency.
[00:22:00] Macs Dickinson: And you think about like, how do you create the really clear guardrails for that team to work within? Giving them really clear understanding of what they need to do at different points so they’re not constantly having to have handoffs with other teams.
[00:22:10] Macs Dickinson: Now, as we were going through that process, it was the same time we were looking into building an AI agent framework, and I hashed one out on a… I had a week off, and I was sat in the Lakes with my laptop hashing out an agent framework, and I was just like, huh, this is the same thing.
[00:22:23] Macs Dickinson: This is exactly the… like building the guardrails for an agent to be able to run with autonomy is the same as building the guardrails for a human to be able to run with autonomy. It actually is the same thing. So having really strong security, least-privilege principles, so it can only do the things that it’s specifically allowed to and trained to do.
[00:22:42] Macs Dickinson: Having really strong platform as a service, and that doesn’t necessarily mean like AWS or anything like that, but like really strong guidelines to say, this is how you do this thing. Again, if you build that for your agents, then it’s gonna be really good for your humans as well.
[00:22:55] Macs Dickinson: If you want a customer service agent, or a service desk agent, and you’ve got really good documentation that says, this is what you do in this scenario, then it’s actually gonna be really easy to make an agent.
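A minimal sketch of the least-privilege idea Macs describes: each agent only gets the tools it has been explicitly granted, and anything else is refused. The agent and tool names are made up for illustration.

```python
class ToolNotPermitted(Exception):
    pass

# Explicit allow-list: an agent can only call tools it has been granted.
AGENT_PERMISSIONS = {
    "service_desk_agent": {"search_runbooks", "create_ticket"},
    "code_review_agent": {"read_diff", "read_jira_ticket"},
}

def invoke_tool(agent: str, tool: str, tools: dict, **kwargs):
    """Refuse any tool call that is not explicitly granted to this agent."""
    if tool not in AGENT_PERMISSIONS.get(agent, set()):
        raise ToolNotPermitted(f"{agent} is not permitted to call {tool}")
    return tools[tool](**kwargs)

if __name__ == "__main__":
    # The service desk agent may search runbooks; any ungranted tool raises.
    tools = {"search_runbooks": lambda query: f"results for {query!r}"}
    print(invoke_tool("service_desk_agent", "search_runbooks", tools, query="VPN reset"))
```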
[00:23:06] Macs Dickinson: And I think this is what we’re seeing a lot at the minute, and one of the reasons there’s such a wide variety, or polarised view, of how effective AI agents are being. It’s ’cause if you’ve got that stuff already, then you’ll benefit massively. But if you haven’t got that stuff already, then you’re gonna create a mess. And yeah, that’s where I think you get these really different views, some teams saying it’s slowing them down, some saying it’s speeding them up, and it ultimately comes down to how mature your organisation is, from a kind of platform, security, et cetera, point of view.
[00:23:33] Zoe Cunningham: All right. And just quickly to finish, would you say it is slowing you down or it is speeding you up? Like on average? At the moment, given that it’s early days.
[00:23:42] Macs Dickinson: Definitely speeding me up on, on quite a few things. And, the nice part is it’s the boring stuff for me. Like writing… writing papers and documentation and even like regulation documentation, that sort of stuff. I find it so much easier not to have to think about how am I gonna word this?
[00:24:00] Macs Dickinson: So I’ll be chatting: act as a whatever, I need to write this document, here’s everything that I’ve thought. I find it easier to brain dump and have it then structure my message, and then for me to go back and check, is everything here right?
[00:24:15] Macs Dickinson: I find that much more… it feels more productive. Whether it is or not, I think, is another thing, but for me, one of my challenges with writing a big document like that is I find it hard to start. The big thing that these AI tools have got for me is I find it much easier to start. It can be harder to finish at times, though, and that’s the thing to figure out.
[00:24:33] Macs Dickinson: So that’s worked really well for me. On an engineering side of things, again, I’d say it’s a bit 50 50, I think in some areas, definitely it’s speeding us up and this is where we have spent the time to understand, what are the proper use cases? So when should we use AI versus when we shouldn’t?
[00:24:50] Macs Dickinson: I think in other areas it’s maybe generating a little bit more noise, but again, I’m taking quite a cautious approach on this. Again, regulated, we need to have a four-eyes principle on every piece of code that goes in. So everything needs a pull request. So I can’t have Claude generating a load of code across the stack.
[00:25:04] Macs Dickinson: I put a really clear mandate to the team, which just said, look, use the tools, but you commit the code, so you own the code. You understand the code. And I think that’s a really important point that gets missed. There’s a time and a place to use vibe coding, right? When you’re experimenting, you are prototyping, et cetera.
[00:25:20] Macs Dickinson: But what you’re intentionally doing is you are, you are choosing speed over knowledge. And, this is something that we look at anyway, with software development, a lot of people think the main output of software development is the code, but that’s one of the outputs. The knowledge that you gain from building it is a massive part of it.
[00:25:37] Macs Dickinson: And I talk a lot with my teams about always weighing up, the short term, long term. Are you, prioritizing getting it out the door and taking on some knowledge debt? We talk a lot about tech debt. Knowledge debt is absolutely a thing. Most of the techniques that you take to prioritize speed are about siloing, right?
[00:25:53] Macs Dickinson: And so creating knowledge debt to get something out the door. Vibe coding is like that on steroids. It’s like really siloing, generating a load of stuff but not really understanding it. So if you’re actually building something that you’re gonna have to support, and that’s why it’s important, you’re gonna have to come back and support it, then you need to take the time to learn and understand it.
[00:26:09] Macs Dickinson: And the nice thing is AI can help you with that. It absolutely, like if you’ve got a large piece of code that you’ve adopted from someone else, AI is a really good way to understand it and that’s something that people should lean into.
[00:26:19] Macs Dickinson: So for me, it is speeding me up. It’s speeding my team up. But we’re still being really cautious on where we implement it.
[00:26:26] Zoe Cunningham: And you are not over-promising yet.
[00:26:30] Macs Dickinson: Yeah. Absolutely. And I think I’ve… I definitely have lent in myself on understanding and using a lot of these tools. I think as you can imagine, the promise of, oh, we can automate X, Y, Z and we can speed up delivery. Of course that gets seen and then I get those questions.
[00:26:51] Macs Dickinson: I think the thing I just keep coming back to is, okay, maybe we can see those gains in very specific use cases, and it’s knowing when to use it. Yeah.
[00:27:00] Zoe Cunningham: Yeah. And we will learn as we go and build on that and going forwards.
[00:27:05] Zoe Cunningham: Thank you so much, Macs. I really appreciate you coming on and sharing your insights with such candor. It’s thrilling for me to see other examples of people doing things out there. So thank you so much for joining me.
[00:27:22] Macs Dickinson: Thank you, Zoe. I really appreciate it.
[00:27:24] Zoe Cunningham: This Digital Lighthouse episode was edited by Steve Folland and produced by Patrick Anderson. The theme music was written and recorded by Ben Baylow. A huge thanks to our sponsor, Softwire, for their continuing support from the inception of the show in 2019 to the present day.
[00:27:38] Zoe Cunningham: If you love the podcast, please let us know with a rating and review on your platform of choice. We’re always looking for feedback to ensure that we are making the best show possible.
[00:27:48] Zoe Cunningham: And if you’d like to take part, please drop us a line at [email protected].
[00:27:54] Zoe Cunningham: You’ve been listening to The Digital Lighthouse with me Zoe Cunningham. Thank you for sharing your time with us and stay safe on this technological ride that we’re all on.