In our latest Digital Lighthouse episode, Zoe Cunningham is joined by Jason Souloglou, founder and CEO of SeeChange Technologies, a world leader in real-time, AI-powered recognition systems, with expertise in making complex, state-of-the-art AI technology simple to deploy and scale.
Jason speaks about how their AI technology excels in the retail sector, supporting loss prevention and health-and-safety compliance, where AI can go in the future, and how it can support positive social change.
This ties in with the organisation’s mission to help humanity thrive through the power of visual AI and create something of real value, guided by solid ethical principles and driven by the belief that technology serves people – not the other way around!
Digital Lighthouse is a mini-series of Techtalks that brings you industry insights, opinions, features and interviews impacting the tech industry. Follow us on SoundCloud now to never miss an episode: see all the Digital Lighthouse interviews online for free on SoundCloud.
Hello, and welcome to the Digital Lighthouse. I’m Zoe Cunningham.
On the Digital Lighthouse, we get inspiration from tech leaders, to help us to shine a light through turbulent times. We believe that if you have a lighthouse, you can harness the power of the storm.
Today, I’m super-excited to welcome Jason Souloglou, who is the CEO at SeeChange. Hello, Jason, and welcome to the Digital Lighthouse.
Hi, Zoe, I’m really pleased to be here with you today.
Fantastic. So I think you have a job that a lot of people would aspire to. How did you come to be a CEO in the AI industry?
Purely by chance and destiny. The journey starts back at university, where I ended up co-founding a startup with my professor, which at the time had nothing to do with AI. It was a different deep-tech thing. But the point is that it gave me the bug for entrepreneurial things and the startup lifestyle.
Can I just ask quickly, were you studying technology? Or were you studying something else, and this was just your route in?
No, I was studying computer science at Manchester University, and I didn’t feel ready to enter the real world. So I decided on the PhD, which was in something that was really interesting. It was very technical, and something that hadn’t been done before. It was to do with translating software from one computer architecture to another in real time. And in the end, it’s something that got taken up by Apple. And it was the thing that Apple ended up calling Rosetta. So Apple’s Rosetta was my PhD.
The thing that I found exciting about that was it had never been done before. And it had the potential of being turned into a startup. And when we did set up that startup, and it eventually got picked up by Apple, and then it got sold to IBM – that whole end-to-end journey of an entrepreneur – I was hooked, and I really loved that.
And what I learned about myself going through that process is that while I was an average engineer, I found that I was very good at being creative and having vision in a technical setting. And I found that my skills were more to do with building really good teams, and delivering new things that hadn’t been done before. And that’s really where I wanted to focus my career.
So, after we got acquired by IBM, I ended up experiencing the corporate side, which I personally found a lot less fun. It’s a very different lifestyle, but you do learn a lot of really good skills. You learn the skills to interact with executives, and people that are going to maybe fund you, for example, or people that are going to support you in ways that you need, in a more corporate way. So, for example, all startups usually have to interact with partners, and partners are usually part of big companies. So you kind of know how the politics of all that works, in order to succeed as a startup.
So having gone through a successful acquisition of the startup, working for IBM, setting up a development centre in Manchester for IBM, I was then headhunted by ARM to set up a development centre for them in Manchester. And it was really within ARM that the opportunity for SeeChange came out.
That’s really fascinating, because I think what you’re talking about is the experience of a lot of engineers, maybe, or people who start studying various types of engineering really early. You feel like the goal is you’ve got to be the best. And I think in a classroom setting in particular, it’s always that you’re competing with your peers, and that’s how you think about it.
But in an organisation, there are so many more roles, and so many different skills that are needed. Actually, in the workplace, it’s much more about finding your space and where you fit in. And whether it’s in a small startup organisation, or a large corporate, and whether it’s being an engineer, or leading the engineering team, or leading the product, or leading the company, right?
Yeah, it’s really interesting you say that. I was a typical engineer-mindset person, who had very strong ideas, believed I was always right, very opinionated, annoying – that mentality. And I say that with affection, because that’s fundamentally where I’ve come from. But it’s why I decided one day that just being on the engineering side wasn’t enough for me, because you don’t have enough control over the creativity and the development of a business.
Because I had ideas that I wanted to bring into the world. And just as an engineer, you’re not able to. You don’t have the right level of control and influence to put together a company and then bring something into the world. You have to start thinking differently. You have to start thinking more like a CEO, because you’ve got to put all this stuff together.
So for example, how did SeeChange come about? It was 2016, I was in ARM. And I had completed the mission that ARM had headhunted me for in the first place. I’d put the team together, Manchester was set up as an operation for ARM, and I was looking for something creative to do.
And ARM, at the time, was very much trying to lead the way with IoT, connected devices, streaming data from connected devices for insights to provide value to businesses. And at the same time, there was a lot of stuff going on to do with AI: face recognition was in the news a lot.
And I had this idea, I had this vision, of bringing these things together. Because think about it: there’s a billion cameras out there that see everything, but they understand nothing, right?
And so there was this idea about ‘what if there was a recognition platform?’ A general platform for recognition that could use these existing cameras and other sensors that are all connected together, to recognise scenarios, and provide value: either improve experiences for people, or increase efficiencies for businesses. There’s an infinite number of possibilities.
So having that idea and that vision, that was the easy bit, right? The really hard bit was then convincing ARM to fund this as an innovation project. So that took me two years. And that two years required all of my skills – not as an engineer, because those are the wrong skills, you just end up annoying people! It required different skills to be able to sell that idea, and effectively sell yourself as somebody that could deliver that. And it was to do with understanding the politics of a big company, and who to talk to, and how to bring it all together.
Having got funding for that, in 2018, you then have this situation, which is a blank sheet of paper. Yes, they’re giving you money, but you’ve got this blank sheet of paper and no team. And you basically have to figure out, from nothing, how am I going to put this together to try and take steps towards this ridiculously grand vision that I’ve got for a general-recognition platform that can recognise anything, and apply to any industry, anywhere?
And all of the skills that I learned from the moment I was at university, going through IBM… all of those were the skills I needed to start putting this thing together.
Okay, so fast-forward to now, can you tell me a bit about what SeeChange does? Then maybe we can talk a bit about how it works, as well.
So what SeeChange has focused on – because even if you have a really grand vision, you’ve got to boil it down to something very simple to start off with, and build from there – is taking mostly existing cameras, and using those cameras to recognise different scenarios, using AI, in real time.
So this is real-time AI, using existing cameras, to recognise scenarios. For example, we have a product that recognises when liquid is spilt on the floor in supermarkets, and then sends a notification so that the spill can be cleaned up before somebody has an accident. It’s a simple use case, but the technology is actually really challenging, because detecting liquid is hard: depending on the lighting conditions, if it’s water on a concrete floor, it’s really quite hard to see.
So SeeChange has a platform called SeeWare, which is designed to be generic, and designed to be able to recognise any type of scenario. But what we’re doing is we’re focusing on the retail-tech market, to look for scenarios that have a very obvious problem that needs solving. It’s things like recognising theft at checkouts, recognising health-and-safety issues to do with people slipping over, blocked fire escapes, things like that.
Brilliant. And I think what you’re talking about there is that there’s a difference between creating amazing technology that can do things that you look at and go, ‘that’s really neat,’ and building a business where you’re using that technology to solve a real-world problem that is useful to people.
So do you think the tech sector in general is good at making this leap, and taking what is quite cool new innovation and actually applying it?
I think that it can be good in certain areas. I think that it has been good for things like speech recognition, or chatbots, and things like that. I think it’s good in controlled conditions. Like if it’s analysing medical scans and looking for anomalies.
Where I don’t think it’s as good, and it’s got a longer way to go, is in real-time visual applications. And the reason that’s hard is because the real world is messy. It’s chaotic. There’s unpredictable things that happen all the time. People behave in unpredictable ways.
That’s not the only reason either. There are three things, I think, that need addressing if you’re going to apply AI tech to the real world. And look, in SeeChange, we have a real distinction between developing the core tech, and then applying that tech to the real world. And they are two very, very different things. Obviously, they overlap. But we have different teams that focus on each of those areas, because they are so distinct.
And I’m not exaggerating when I say that the application of the tech is probably 10 to 100 times harder than actually developing the tech in the first place, right? The prototype in the lab is one thing; actually dealing with the real world is another thing.
So the three things that I believe are needed are: one, a solution that is scalable. Years ago, AI solutions were developed that did just one single thing. They might do face recognition, and that was all they did. But when you enter more complex markets, like a supermarket, the supermarket doesn’t want to bring on multiple vendors to solve multiple problems. Whether they want to reduce theft at a checkout, detect spills on the floor, detect obstructions or manage their stock – there are 100 different use cases you could apply the AI to in a supermarket – they don’t want to have 100 different vendors, they want one. And so the idea of a single point solution doesn’t work anymore. It needs to be a platform, what we call a ‘recognition platform’, where you can very quickly and easily apply new use cases or new solutions with very minimal effort. So that scalability is really important.
The other aspect of scalability is you’ve got to use existing infrastructure and existing cameras, because you can’t go to a supermarket and say, ‘you need to replace your 100,000 cameras across your estate, that’s going to cost you millions.’ That’s just not going to work. So using existing stuff is really important.
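The ‘recognition platform’ idea described above – one shared camera pipeline that many use cases plug into, rather than one vendor per problem – can be sketched roughly like this. This is purely an illustrative sketch, not SeeWare’s actual API; the class, method, and detector names are all invented for the example:

```python
from typing import Callable, Dict, List

# Hypothetical frame type: in a real system this would be a decoded camera image.
Frame = bytes


class RecognitionPlatform:
    """Illustrative 'recognition platform': one pipeline, many pluggable use cases.

    New scenarios register a detector against the shared camera feed,
    instead of each shipping as a separate point solution.
    """

    def __init__(self) -> None:
        self._detectors: Dict[str, Callable[[Frame], bool]] = {}

    def register(self, name: str, detector: Callable[[Frame], bool]) -> None:
        """Add a new use case with minimal effort: just a name and a detector."""
        self._detectors[name] = detector

    def process(self, frame: Frame) -> List[str]:
        # Run every registered use case over the same frame once,
        # returning the names of the scenarios that fired.
        return [name for name, detect in self._detectors.items() if detect(frame)]


platform = RecognitionPlatform()
# Toy detectors standing in for real vision models:
platform.register("spill", lambda frame: b"spill" in frame)
platform.register("blocked_exit", lambda frame: b"blocked" in frame)

print(platform.process(b"spill near aisle 4"))  # ['spill']
```

The point of the sketch is the shape, not the detectors: adding a 101st use case is one `register` call against existing cameras, not a new vendor and a new camera estate.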
The second aspect is flexible deployments. Because there are some situations where you can deploy this in the cloud, that’s really the simplest one. But in reality, you’re often in a hybrid deployment situation, where customers have a need, potentially, for keeping all the data in-house, in the building. So you have to potentially deploy on servers within the building, and the data doesn’t leave. Or sometimes you have to deploy on the devices themselves. So, for example, we have a product that is embedded into a self-checkout, and that has to run on the self-checkout. So you need to be able to deploy flexibly, whether it’s in the cloud, data centre, on-prem, on-device, hybrid, or even in a 5G mesh, which is kind of the new deployment scenario coming through.
And then the third one – this is the one that sometimes gets hit the hardest – is that it needs to be unobtrusive. It needs to be invisible. Because if you’re having too many false positives, or too many false alerts, it actually becomes very annoying for the people using the system.
And not just annoying, people start to ignore it, don’t they? If you get 20 messages saying there’s a spill, and none of them are… the 21st one, you go, ‘well, what’s the point?’
And there’s other aspects to it being invisible and unobtrusive, right? It needs to be integrated with existing systems that the staff are already familiar with, so it doesn’t feel like they’ve got to learn a new thing.
It needs to be able to adapt to changing situations. One of the things that we have in SeeWare is it can self-learn, it can adapt. So there are feedback loops that allow it to learn from mistakes, or new environments or new situations. A really good example of where you can apply the philosophy of self-learning, and the philosophy of trying to reduce the cost of deployment as much as possible, is in the case of training a system to recognise all the individual products within a supermarket. Because if you think about it, when a product gets scanned at a self-checkout, the system is taking an image of that product. So it sees whatever it is – a bottle of Jack Daniel’s – when it’s actually scanned through, and the till data tells you that it’s a bottle of Jack Daniel’s. So you can basically use that training pair to learn what a bottle of Jack Daniel’s looks like. So over a relatively short space of time, if you have the system running in the background in a supermarket, you’re basically training your system to learn what every single product looks like in that supermarket.
And then with that training data that you’ve effectively got for free, you can then apply that to detecting when somebody is trying to steal from the checkout, detecting when there’s fresh produce being put on the scales… So you don’t have to go through the ‘look up item’ menus.
Wow, that would be great.
But you can then apply that learning to recognising products on shelves, for example, and then doing stock-management, and take it even into the back end, you know, in the warehouse, for example. That’s a really great example of those principles of learning in the field, self-learning, but also trying to reduce the deployment costs as much as possible.
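The self-learning loop described above – harvesting (image, label) training pairs from ordinary checkout scans, with the till’s barcode read acting as the ground-truth label – can be sketched like this. This is an illustrative sketch only; the type and field names are invented, and a real system would feed these pairs into a product-recognition model rather than just collecting them:

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple


@dataclass
class ScanEvent:
    """A single self-checkout scan: the camera frame plus the till's barcode read."""
    frame: bytes  # raw camera image captured at scan time (placeholder)
    sku: str      # product identity reported by the till


@dataclass
class TrainingSetBuilder:
    """Accumulates (image, label) pairs harvested from normal checkout activity."""
    pairs: List[Tuple[bytes, str]] = field(default_factory=list)

    def on_scan(self, event: ScanEvent) -> None:
        # The till data is the ground-truth label; the camera frame is the input.
        self.pairs.append((event.frame, event.sku))

    def labels_seen(self) -> Set[str]:
        return {sku for _, sku in self.pairs}


# Simulated checkout traffic: every scan yields a labelled example "for free".
builder = TrainingSetBuilder()
for sku in ["JD-70CL", "MILK-2L", "JD-70CL"]:
    builder.on_scan(ScanEvent(frame=b"<jpeg bytes>", sku=sku))

print(len(builder.pairs))              # 3
print(sorted(builder.labels_seen()))   # ['JD-70CL', 'MILK-2L']
```

Run in the background long enough, and the same mechanism covers every product in the store, with no separate labelling or deployment cost beyond the checkout the cameras already watch.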
I get so excited when we talk about improvement to supermarket self-checkouts, because I’m such a big believer in them and they just don’t work very well yet. So I am very excited about that particular aspect.
And obviously improving the delivery and making cost-savings within supermarkets is super-important as well, because that’s where the majority of people buy their food from. And the less money that supermarket chains are spending on warehousing and slips, and all of these kinds of things, the better for everyone. So it’s a great area to be working in.
I wanted to ask you a slightly different question. At Softwire, our main offices are in London and Manchester. So we love Manchester very much. And you’re also based in Manchester, and I wondered what it was that drew you to Manchester…
So I went to Manchester University, so I’d already experienced first-hand that there’s a huge amount of talent from multiple universities in Manchester. And that’s a big deal, right? When you’re building a business, one of the never-ending tasks, especially for a CEO, is building the best team possible. And having them on your doorstep is a big deal. And they literally are on the doorstep. I mean, there are some super-talented people in Manchester. So that’s the first thing, and the universities and institutions here in Manchester are top-notch.
But there’s also a really vibrant tech and startup scene, which by the way, wasn’t there back in 2001 when I set up Transitive. We were one of the only games in town in terms of startups. And now, I read that Manchester is the fastest-growing tech hub in Europe, apparently. And you can believe it, because there’s a lot going on, there’s a real buzz.
And the third thing, Zoe, is it’s just a really great place to live and work. There’s a reasonable cost of living, there’s a lot of fun and stimulation, you’re near the countryside, you’re connected to the rest of the country, there’s an international airport – it really is a great place to work.
And being right in the city centre as well; I would only ever set up a company in a city centre, because it’s such an exciting environment to be in, that buzz, it really helps in terms of the creativity of the things that we’re doing.
It’s so dynamic, isn’t it, and it’s just such a great experience to be part of something that’s growing and developing. And, it’s not just great now, but you can see the potential for the future as well.
Absolutely, and actually, when I look at the time I’ve been in Manchester, like from the early 90s to now, it’s developed and grown so much, it is quite unbelievable. It’s almost unrecognisable, and there’s still so much more investment going into it. But I have no doubt that it’s going to be unrecognisable again in another 10 or 20 years.
So let’s finish by having a chat about the other applications of AI and where AI can go in the future. Because there’s so much potential isn’t there?
There really is. One of the areas that particularly interests me is applying this kind of tech to hospitals. So we have a relationship with Great Ormond Street Hospital, and have had for quite a while, since the beginning of 2018. And pre-COVID, we were working on a really interesting project with them to deal with controlling infections in hospitals.
And what I learned, shockingly, was that one in eleven people who go into hospital catch an infection from the hospital itself, which is just horrible. And it turns out that the infection-control teams within hospitals are actually some of the best-funded and most important that exist, because of this issue. We were looking at projects with them, to see if there was anything we could do to help control that.
We were doing things like recognising when staff, visitors and patients were using the hand-gel dispensers on the walls. You go into a new ward, the first thing you’re supposed to do is use a hand-gel dispenser, to wash your hands, right? So we were looking to see whether there was a way of monitoring to see if people were doing that. And if they weren’t, and if an infection started spreading, could we track people to see where it originated from, and then try and get to the bottom of how the whole of infection-control can be improved?
And there were other adjacent ideas that we talked about, to do with operating theatres – there were a lot of use cases within operating theatres. One of them is making sure that only authorised people enter the operating theatre, because if people just wander in and out of an operating theatre, you’ve got increased chance of infection.
So there were all sorts of use cases. Another one they told us about that was really interesting, was you have a patient that has got a complicated situation. And there’s a whole bunch of experts from all different departments collaborating to figure out what to do. And most of the time, these people don’t even know each other, right? They’re from different departments, and they don’t necessarily know who each other is. And there’s a big conversation, and at the end of it, they make a decision. But no one can remember how they came to that decision. And so one of the things that they’d like to do, is to be able to have AI create a transcript of who said what, and summarise it in an automated way. So at the end of the conversation, it automatically messages them: ‘This is what you guys decided, and this is why you decided it.’
There were so many interesting use cases that we talked about with them. It’s a really, really interesting area and something I’d love to get into more.
Yeah, absolutely. And even more so than supermarkets, right? It’s so important, and so applicable: for everyone who is fortunate enough, I guess, to have surgery, you want the team to be making the best decision possible.
Well, thank you so much, Jason. Thank you for coming on to the Digital Lighthouse and helping us to shine a light for others.
Thanks, Zoe, I really appreciate being here. Really enjoyed it. Thank you.