Aug. 12, 2025

#504 How Lars Maaløe is Redefining Trust, Accuracy, and Speed in Healthcare AI

In this episode of The CTO Show with Mehmet, I sit down with Lars Maaløe, Co-Founder and CTO of Corti.ai, to explore how AI is transforming healthcare. From explainability and compliance to building scalable AI infrastructure, Lars shares his journey from researching generative models to delivering life-changing technology for clinicians and patients worldwide.


We unpack the challenges of deploying AI in one of the most highly regulated industries, the importance of trust in AI-assisted decision-making, and why the future belongs to specialized models that combine accuracy, speed, and transparency.


Key Takeaways

Explainability matters — why transparency in AI predictions is non-negotiable in healthcare.

From research to reality — how generative AI evolved into mission-critical medical tools.

Compliance as a competitive advantage — meeting and exceeding HIPAA, GDPR, and other regulatory standards.

Scaling responsibly — designing APIs and infrastructure that empower startups and incumbents alike.

Specialized vs. general models — why domain-specific AI will outperform general-purpose LLMs in clinical settings.



What You’ll Learn

• The mindset shift needed to move AI pilots into full-scale production in healthcare.

• How uncertainty quantification can make AI safer and more reliable.

• Trends in healthtech startups, from ambient scribes to autonomous patient intake agents.

• The role of sovereign cloud deployments in protecting patient data.

• Why the next AI breakthroughs will come from self-supervised learning.


About Lars Maaløe

Lars Maaløe is a machine learning researcher turned healthtech entrepreneur, with a PhD in generative models long before they became mainstream. As CTO of Corti.ai, he leads the development of AI infrastructure that powers real-time decision support for healthcare providers globally — from ambient clinical documentation to patient triage and diagnostic assistance.


https://www.corti.ai


https://www.linkedin.com/in/larsmaaloe/


Episode Highlights


00:00 — Introduction & guest background

02:15 — The early days of generative AI research

06:05 — Why explainability is mission-critical in healthcare

09:50 — Accuracy, performance, and real-world deployment challenges

14:00 — Navigating healthcare regulations globally

18:20 — Startup trends in healthtech and AI

23:00 — Mindset shifts for rapid AI adoption in healthcare

26:10 — Readiness challenges beyond the tech stack

30:00 — Building AI agents for clinical use cases

32:00 — Sovereign cloud vs. public cloud in healthcare AI

34:00 — AI breakthroughs and the path to trustworthy reasoning

41:00 — Why specialized models outperform general-purpose AI in medicine

43:00 — Final thoughts & how to connect with Lars

[00:00:00] 
 Mehmet: Hello, and welcome back to a new episode of The CTO Show with Mehmet. Today I'm very pleased to have joining me Lars Maaløe, co-founder and CTO of Corti.ai. Lars, thank you very much for being here with me today. The way I love to do it, and the audience [00:01:00] knows it by now, is I let my guests introduce themselves. Tell us more about you, your journey, and what you're currently up to, and then we'll start the conversation from there.
 As a teaser, and I think it's not hard to guess, we're going to talk about AI, but this time we'll focus more on healthcare and everything around it. So the floor is yours, Lars.
 Lars: Yeah, thank you so much. And just interrupt me if I'm speaking for too long here in my introduction. I'm from Scandinavia, so we're known to be quite humble by nature, so hopefully it won't be too long.
 My background is in machine learning. I've been researching quite a lot in machine learning methodology. My PhD thesis was about generative models, before they became so world-famous. Back then I sat around a lot of dinner tables explaining what generative AI was, and I saw a lot of rolling eyes; you could see people thinking, this [00:02:00] will never be a thing.
 So apparently my pitch was not good enough, because when ChatGPT broke out, those same family members came to me and asked: was that what you researched? And I could only say yes. I've been researching generative models for images, text, and so forth since the early 2010s, and I fell in love with deep learning from the get-go, because I think it's amazing what you can build with these deep generative models and how general they are across use cases.
 Then I got into starting a healthcare startup. My family are either engineers or doctors or nurses, so basically I had two paths. First and foremost: get an engineering degree, start researching within engineering, and then focus on healthcare, because I think it's the most meaningful field to focus on. And I found that there are so many pains within healthcare today. There is so much administrative burden, and there is [00:03:00] so much higher demand on healthcare. If I could be even a small part of building technology that helps within healthcare, I would be proud of myself.
 So I've been on that path since 2016. We have built machine learning models, deep generative models, and machine learning infrastructure for healthcare, and now we are at the point where we are in so many different healthcare encounters all across the world and are making a meaningful change in the everyday, which makes me super proud.
 Mehmet: Great. And again, thank you very much, Lars, for being here with me today. Just a question I honestly didn't prepare, because you mentioned telling family members about generative AI and being in this field for a long time. Was it something you expected to appear at that moment? [00:04:00] The transformer models behind GPT existed quite a while before ChatGPT launched in November 2022. Were you expecting that by that timeframe this would reach the masses, or was it surprising for you as well?
 Lars: It wasn't surprising, right? The "Attention Is All You Need" paper came out in 2017, and the real breakthrough with LLMs came much later. For those of us who have been in the field, it didn't come as a surprise; it has been a steady increase.
 I can tell you about the presentations from Yoshua Bengio in 2012, where he said: now the curve is broken in terms of speech recognition. And then again in 2016, when Deep Speech 2 came out, we all talked about how the curve had been broken in speech recognition. The same story goes for image recognition. We have seen these steady [00:05:00] improvements, and they have been quite amazing every time.
 I think the big change is more about familiarity with AI. More and more people are getting familiar with it because they're using it in their everyday. They're not just using it as a hidden part of Google Search or Google Translate; now they're actively talking with an AI. And that has resulted in an immense mindset shift. That shift we have really utilized, because it means all of our customers now know what to expect from AI. They have a better grasp of how to formulate what they would love to build on an infrastructure like ours. So it has made the sales dialogue much easier.
 Mehmet: Yeah, indeed. Absolutely. Now let's go to healthcare, where you specialize and where Corti is built. You often talk [00:06:00] about AI explainability and trust. We've discussed it many times: healthcare is sensitive because of the privacy factor. So how do you define AI explainability, and what does it look like in practice in a real medical setting?
 Lars: Yeah. First and foremost, on choosing an infrastructure like ours: you can build some quite fantastic demos on OpenAI and Anthropic and the like, but when you need to get things into production, you learn about all the quirks that happen when you just plug into the OpenAI API or similar.
 What we are seeing is that an infrastructure like ours needs to live up to a lot of standards within healthcare. First, you need full traceability across all of the models: when you call something, we need to be able to log all of your calls so that you know exactly what happened [00:07:00] inside the system, so that if something went wrong, you can trace why and optimize for later. Secondly, you need to be able to validate the accuracy of the models. For instance, if you log into OpenAI today, it's really difficult to find out how well GPT-4o, or whatever model you're calling, actually performs within the specific healthcare domain you care about. That is not compliant with the regulations. So you need an infrastructure that can actually prove accuracy against certain standards, and thereby you can also feel safe about bringing it to a patient or a clinician out there.
 When people talk about explainability, a lot of them are referring to the fact that when a model makes a prediction, you need to be able to go back from the prediction to the input and point at what in the input had the effect on the prediction. We [00:08:00] have a lot of capabilities in terms of explainability, and we're researching and publishing a lot on that topic. The methodology changes depending on which piece of the system you want explainability for. For instance, for the large language models that need to predict an output, we have alignment models you can use to find out what in the input had the biggest effect on that output. We also have explainability features for the models themselves, where you can find out what had the largest effects inside the model, and you can use that to try to explain the input-output correlation.
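The input-to-prediction attribution Lars describes can be illustrated with the simplest possible technique: occlusion, where each input token is masked in turn and the drop in the model's score is taken as that token's importance. This is a generic sketch, not Corti's method (their published alignment-model approach is more sophisticated), and the toy `predict` function and keyword weights below are invented purely for illustration.

```python
def occlusion_attribution(tokens, predict, mask="[MASK]"):
    """Score each token by how much hiding it changes the model's output.

    `predict` is any callable mapping a token list to a scalar score,
    e.g. the probability the model assigns to its prediction.
    """
    base = predict(tokens)
    scores = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + [mask] + tokens[i + 1:]
        scores.append(base - predict(occluded))  # score drop = importance
    return scores

# Toy "model": scores a transcript for how strongly it suggests chest pain.
KEYWORDS = {"chest": 0.5, "pain": 0.4, "radiating": 0.3}
predict = lambda toks: sum(KEYWORDS.get(t, 0.0) for t in toks)

tokens = "patient reports chest pain radiating to left arm".split()
for tok, score in zip(tokens, occlusion_attribution(tokens, predict)):
    print(f"{tok:10s} {score:+.2f}")
```

Running this highlights "chest", "pain", and "radiating" as the inputs driving the prediction, which is exactly the kind of input-output correlation a clinician would want surfaced.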
 Mehmet: That's good. So you were talking about the model and explaining the model. Now what about performance, and maybe interpretability as well? Is there something related to accuracy, [00:09:00] like choosing a model that is better for one use case than another? Can you give us some hints about that, Lars?
 Lars: Yeah. So our customers use our API for a lot of different things: from providing ambient documentation solutions, to information retrieval solutions for the medical domain, to building questionnaire agents that can have direct conversations with patients.
 We keep validating performance benchmarks across our solution, and we keep publishing around that. Take our speech recognition: if we don't know what's being said inside the dialogue, or cannot accurately understand it, everything downstream suffers. And that can be really complex within the medical domain, because there is so much vocabulary that is not used in everyday speech: so many Latin words, so many medication names, and so on. You quickly get up to a vocabulary [00:10:00] of a hundred thousand plus terms that the speech recognition model needs to handle.
 So we constantly validate our speech recognition accuracy across our languages. We use everything from word error rates to character error rates to Levenshtein distance, and we also use embedding models to make good assessments of how our internal models work.
 For text generation models we also do a lot of research, much of it published, which you can find online. One of the more interesting approaches is using a model as a judge: you use a third-party LLM, or a suite of third-party LLMs, to judge your text generation capabilities. That is very similar to OpenAI's medical benchmark for agents. And that's a good [00:11:00] segue to how we also validate our agents: we validate the agents' responses at scale in lab environments. Then, at the end of the day, we use the feedback from the clinicians out there. When they find occurrences of misrecognitions or misrepresentations, we get that direct feedback through our API, and then we can optimize and train. We constantly train new models; it's a bread-and-butter thing.
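The speech-recognition metrics Lars mentions (word error rate, character error rate, Levenshtein distance) are closely related: word error rate is the Levenshtein edit distance over word tokens, divided by the reference length. A generic sketch, not Corti's evaluation code:

```python
def levenshtein(ref, hyp):
    """Classic dynamic-programming edit distance over token sequences."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def word_error_rate(reference, hypothesis):
    ref_words = reference.split()
    return levenshtein(ref_words, hypothesis.split()) / len(ref_words)

# One medication name misheard out of six words: WER = 1/6.
print(word_error_rate("patient takes 500 mg metformin daily",
                      "patient takes 500 mg metoprolol daily"))
```

The example also shows why WER alone understates medical risk: a single substituted word is a low error rate, but "metformin" versus "metoprolol" is a clinically significant miss, which is one reason to complement WER with domain-aware checks.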
 Mehmet: Cool. You just said you take the feedback from the clinicians, and you explain to them how the model works. Do you think we also need to explain that to the patient, or does that not make sense?
 Lars: I think so. Back to what I started out saying: the education around using machine learning has gotten broader and broader. So what we see more and more are actually patients who expect that a machine learning model is helping and assisting the clinician in the [00:12:00] patient encounter.
 If you had asked four or five years ago, this would have been a real topic in terms of whether the patient was properly advised. All patients still need to be advised by the clinician, but the change in the game is that, in fact, the patient wants the clinician to use some AI tools. Similar to me when I'm at the doctor with my kids: I want to ensure that the clinician is actually using all of the tools at their disposal, well knowing that the clinician is still very much in charge of the decision making. Why not get some help from the tools at their disposal?
 Mehmet: Yeah, of course. Even I, sometimes, if I have something and I see the doctor for a normal visit, I would expect them to have double-checked, maybe using one of the LLMs. Now let's talk [00:13:00] about regulations a little bit more. In some parts of the world, regulators are racing to catch up with AI, and they want to make sure, which is of course understandable, that we are not misusing AI.
 How do you ensure that whatever you build meets or even exceeds the compliance standards in each market? In the US we have HIPAA, Europe has its own frameworks, and different parts of the world will have other things.
 And do you see that regulators have a tough job? Today we talk about one thing, tomorrow it's something else, and breakthroughs are coming at a very fast pace. What's your point of view on this?
 Lars: Yeah. Let me just go one step back and ask: what is [00:14:00] the reason people buy from us as an AI infrastructure for healthcare? First, they come to us for higher accuracy in machine learning models for healthcare. Secondly, they come to us because we make it much easier to build an application for healthcare. We have solved the documentation automation, we have solved the speech recognition, we have solved the behavior you would normally need within healthcare use cases. So you can basically get started on building an application in a matter of minutes, and get done with a first prototype in a matter of hours.
 Thirdly, towards your question on regulatory affairs: we have made our solution and our infrastructure compliant with regulations in Europe, the US, and the UK, and, soon to come, a lot of other countries that we're very excited to announce. In order to [00:15:00] live up to the regulatory standards, you need to build the infrastructure in a compliant way, so that you can deploy your services with segregated data and in respect of the different customers and their requirements. That is easier said than done, and it's something the big tech companies are not living up to, so of course that's a competitive advantage for us.
 As for whether the regulators have a hard time catching up with all of the development: I actually think a lot of the regulators have thought quite long and hard about how to make the AI Act in Europe, or the GDPR, or the MDR and MDD regulations that the UK standards are based on, future-proof, at least to a certain level. I can come up with examples where it's not completely future-proof, but in the [00:16:00] broad strokes, the regulators have done a good job of ensuring that an innovator like us can actually scale, as long as we build our infrastructure in a compliant way.
 And just to note: we talk about regulatory affairs so much, but we should also be aware that there is a lot of misuse of data out there, patient data, et cetera. From a patient perspective, it's really good that we have these regulations in place, and that there are people thinking long and hard about traceability of access to data and not misusing access to patient data. So I'm pro the current level of regulation, maybe also because we are actually compliant. But yes, it is a tough job to ensure compliance and keep moving with regulatory affairs, and we are doing our best.
 Mehmet: Absolutely. And this is good to know. Regulations are there for a reason, indeed. [00:17:00] Now, one thing: your main offering is the API for healthcare, and you deal with a lot of clients, I'm sure, but I'm more interested in startups in the healthcare domain.
 What are some of the trends you're seeing, maybe the overall directions? Because healthcare is highly regulated, entry is hard. I've talked to a lot of founders, and even to investors in this space, and there's an era before AI and an era after AI, but now there's a kind of divergence in what can be done for healthcare with AI.
 So from your point of view, what are some of the main things that [00:18:00] are promising and attracting your attention, as someone who comes from a technology background and actually has a PhD in machine learning?
 Lars: Yeah, I think it's a wonderful question. At the risk of forgetting some of our customers, let me define, in broad strokes, the different types of application builders we're seeing build on our infrastructure, or even technology builders: it doesn't have to be applications specifically, it can also be another infrastructure built on top of ours.
 We are seeing everything from applications that want to alleviate the administrative burden. That's, for instance, ambient scribes: when a patient and a doctor have a consultation, you have a microphone on, and the clinical documentation [00:19:00] is automatically written for the doctor, the clinician, or whomever. There are approximately a hundred-plus ambient scribe startups out there and counting, and you see more and more of them starting to specialize, because healthcare is a big market. Some specialize in nurse documentation, some in psychiatry documentation, others in bedside nursing. So there's no end to the specialization, and we're seeing it all on our platform. That's the administrative-burden part.
 Then, in the same avenue, you have all of the startups building into the revenue cycle management side of healthcare: basically making sure that healthcare providers get paid for the services they provide. That can be complex, because there could be a lot wrong with a patient, and a lot of the documentation is unstructured, or only structured [00:20:00] to a certain level. Then you need to assign clinical codes, mainly from a coding database of 140,000 different codes, and you can have different combinations. So creating an invoice in healthcare is complex, and not just in the US; it's the same around Europe and the rest of the world. You have all of those startups focusing on that.
 Then, since you asked about the startup ecosystem, you have all of these startups focusing on full automation: building agents. For instance, in patient engagement, telephone agents where you call your general practitioner and an agent asks you the top-priority questions about acuity level and so on, basically triaging you into the right queue for the clinician. Or hospital intake agents, or form-filling agents. Or pre-charting agents that look at all the EHR information and [00:21:00] compile it with the intake information to make a perfect pre-charting setup for the clinician.
 We see all of those companies, and there's a boom within healthcare, because there are so many enablers right now, with all of these startups building on an API infrastructure such as ours. We have had 43x growth in projects started on our API infrastructure since going live with it at the end of January. We are having immense success with the current API capabilities, and at the end of the year, without promising too much, we'll have a doubling of our capabilities coming out on our infrastructure; we're in private beta, so we're quite excited about that. So there's a lot of fuel for all of those applications out there.
 And then you have all of the incumbents as well: all of the software vendors that have been building for healthcare for a long period, the health systems, the [00:22:00] radiology information systems, and so on. All of those systems are building new AI capabilities: ambient AI, text generation, agents, and so on.
 Mehmet: Yeah, healthcare is a very vast market, and a lot of things still need to be fixed using AI, indeed. Now, one thing that happens, and I dealt with this as a technology consultant during part of my career: when it comes to talking to healthcare organizations, running a pilot is not easy, right? There are a lot of moving parts. But now we talk about adopting AI and moving fast. What do you think are some of the biggest mindset shifts that need to happen to be able to move quickly, to take something from a pilot and put it into production at a very [00:23:00] large scale?
 Lars: Yeah, I think it's a good question. I think there is sometimes a misconception that healthcare is by definition slow. If the need is big enough, you will see enough urgency to grow fast and implement something. If a solution becomes more and more mission-critical, then I can assure you that healthcare is one of the fastest-moving markets out there. And the current strain on healthcare is making some of these AI solutions extremely mission-critical.
 For instance, there is severe doctor burnout, and nurse burnout for that matter, at providers; maybe the nurse burnout is even bigger than the doctor burnout, so sorry for not mentioning that first. They have so much work on their hands, and it doesn't look like that's going to change in the foreseeable future, because we're lacking so many clinicians out there. And we have a growing elderly population with more needs; there are more complex things that [00:24:00] could be wrong with us, which is fortunate for all of us, because it means we're living longer and fuller lives, without getting into a Bryan Johnson moment. But we are definitely getting older.
 So there is a severe need to assist all of these clinicians. It's not about taking their jobs or anything like that; that may have been the AI conversation ten years back. Now it's about getting those additional hands to solve some of the administrative needs, so the clinicians can do what they do best. Or helping them by pointing out things worth noting about the patient in a pre-charting moment, before the clinician meets them, so they don't have to see the patient again the next day but can actually solve what is needed in that consultation where they have seven minutes to use, right?
 So there's a need, and that means there's a lot of pressure from the market to actually get these technologies [00:25:00] in. The big providers all have an AI strategy, and it's not just fluffy AI; it's a tangible value proposition that needs to get solved.
 Mehmet: Cool. What about the readiness? So there is the willingness to move fast, but what about readiness? In any domain I might say: yes, I want to adopt AI, I see the benefits. But I've heard from a few people, and I'm not sure about healthcare, that's why I'm asking you, Lars, that they go in and figure out there's still some work to be done before they can start implementing and building the AI infrastructure and the applications around it.
 And some of the things that need to happen are not related to technology: maybe some new roles, maybe some shift in the culture. I would love to hear your opinion and what you've seen working in [00:26:00] that space.
 Lars: Yeah, thank you. So readiness in terms of infrastructure is of course a topic, right? If your communications infrastructure is not ready to adopt new technologies, in terms of implementing live AI assistance and all of that, then true, there is a readiness problem. That's always the case with big industries; they can be a slight bit slower in moving towards the newest technologies.
 However, given the pressure right now, there is always a form factor for presenting and implementing these technologies in healthcare. To give you an example: say your current system makes your patient intake process difficult. Patient intake is quite an important process, right? It's where you find out how to admit the patient into your system, and if you do it wrong, on the worst of [00:27:00] days it can have a crucial outcome. And if you are not treating the patient at the right service level, while not doing something directly wrong, you end up costing the patient and the system a lot of suffering and a lot of money and time.
 Depending on where the intake system stands and what it's based on, the technology can have a slower adoption curve. However, what we see a lot of healthcare providers doing right now is thinking outside the box. They're asking: how can we make an online patient portal that replaces, or at least supplements, our phone intake infrastructure? Can we channel some of the volume to that online process while we also update our voice-enabled infrastructure for the new age, so we can introduce [00:28:00] AI-assisted voice agents for the ones who don't like a written or online interface?
 So there's a lot of willingness to move and act and try to stay flexible, because the need is so big.
Mehmet: Cool. Now, just out of curiosity: beyond providing the infrastructure and, of course, the APIs, and since we talked a little bit about agents and what they can do, any interest in touching that domain, Lars? Of course we cannot build something that works for everyone, but maybe some kind of templates of agents, so that I as a healthcare provider, or maybe a startup, can follow these templates to build something useful for me. Of course, I still need to give my requirements. Because agentic AI [00:29:00] is, again, something everyone talks about now, but what I discovered is that few people know how to implement it. So probably they need this push, right? Or they need that help. Any plans on that?
Lars: Yeah. Right now you can see our API as providing agents for, for instance, reducing documentation burden, or extracting clinical findings from a voice dialogue or just a piece of clinical text. What you will also see on our platform are real agents that can solve real use cases within healthcare: for instance pre-charting, or clinical information research based on a patient case, or taking actions based on, for instance, a patient dialogue.

We have an application right now that does the pre-charting: it takes the historical information about a patient, then it takes the live information about the patient, in order to generate a live recommendation engine for the [00:30:00] doctor sitting there in the heat of the moment, based on clinical guidelines, clinical research, and so on. It can say: based on the clinical research and the patient context, and based on what you have asked the patient, you might consider asking this or that question, because it relates to research X and Y.

Those kinds of recommendation agents you can basically build on the templates within our API infrastructure and get up and running really, really fast. It's also completely flexible. You can define your own system prompts around it, you can define your own tools that you want to use, and you can generate an MCP around it, so you can use other data sources for building your agents. So there is real flexibility in terms of building whatever crazy agent idea you have. And then tie that back to traceability and explainability: the need here is to make sure that you log everything the agent does, and that you can trace it.
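As a rough illustration of the template-plus-customization setup Lars describes, here is a minimal sketch in Python. The function, field names, template id, and tool entry are all hypothetical assumptions for illustration, not Corti's actual API; the point is only the general shape of starting from a template and layering on your own system prompt, tools, and tracing.

```python
# Hypothetical sketch: configuring a clinical agent from a template.
# Field names, the template id, and the tool entry are illustrative
# assumptions, not Corti's actual API.

def build_agent_config(template, system_prompt, tools):
    """Assemble an agent definition: a template starting point plus
    caller-defined behaviour, tools, and tracing for explainability."""
    return {
        "template": template,            # e.g. a pre-charting starting point
        "system_prompt": system_prompt,  # your own system prompt
        "tools": tools,                  # your own tool integrations (e.g. MCP)
        "tracing": {"enabled": True},    # log every agent step so it can be traced
    }

config = build_agent_config(
    template="pre-charting",
    system_prompt="Summarise the patient's history before the visit.",
    tools=[{"name": "guideline_search", "type": "mcp"}],
)
print(config["tracing"])  # {'enabled': True}
```

Note the always-on tracing entry: it mirrors the point that, for explainability, everything the agent does should be logged and traceable.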
Mehmet: Yeah, absolutely, that's good. And I asked on purpose, so you could explain it to the audience members who might be interested. Now, very quickly, and maybe I should have asked this before: you provide this so that any of your customers can have it in a sovereign cloud, right? So it's not like the public cloud. How much do you see this as important? Some customers may not have the ability to run the infrastructure themselves, I mean from a hardware perspective and so on, so they have no choice but to go to the public cloud, to one of the big hyperscalers. Or, when it comes to healthcare, is it always "no, it should be ours, the data stays with us"?
Lars: No, there's a lot of flexibility within the market in terms of deployment options, and we provide them all. We provide it on [00:32:00] the big tech incumbents' cloud services, and we provide it on your own infrastructure. We basically push all of our infrastructure into whatever setting you have, hence the sovereign cloud dialogue. We are also on smaller cloud providers' infrastructures. So there is a large push in the market to stay flexible on this. And it's not just the European providers, given European politics and all of that, who are interested in this; it's also the big healthcare providers in the US. They all have these requirements.

And the requirements are truthful, also in terms of cybersecurity risks and disaster recovery rules. They need to make sure they have the uptime needed for all of these infrastructures, so they cannot just rely on one service center in the western part of the US; they need to have redundancy. And a lot of them also have cost requirements towards cloud services, which, as everyone [00:33:00] listening to this show probably knows, are not super inexpensive. So we are all in the game of making sure that everyone gets the cheapest hardware price out there. And when you are a large consumer, because you have these large language models consuming GPU resources, you also want to make sure your infrastructure can be deployed in a cost-effective manner.
Mehmet: Right. Back to what's happening in general within tech. Of course, there are some breakthroughs; you talked about that. But currently, what are some of the exciting breakthroughs you're seeing? I'm hearing a lot about fully autonomous systems, about reinforcement learning and everything happening in that space, and about the shift from supervised to semi-supervised learning as well. Just for the folks: I'm not an expert, I took some courses back in the day, which is how I know the terms. But of course I would ask Lars: what are you really expecting to make as big a noise as we saw when the first LLMs started to appear? What's happening in that space?
Lars: Yeah, I think the ability to build proper reasoning is quite fantastic; that's one of the things. And the fact that you can optimize the reasoning paths with reinforcement learning is definitely also fantastic.

But just to go two steps back on the state of the art in machine learning right now: [00:35:00] as fantastic as the current model capabilities are, there is one big problem with these models, and that is that they will always give you an answer, even when the answer is wrong. We all have that uncle who will always give you an answer, and when you grew a little bit older you found out that he was full of beep, right?

These LLMs, especially the general-purpose LLMs, are optimized with a reinforcement reward function from a lot of non-specialists sitting there and saying, "I would rather have this output than that output." Because of that, you will get a response that is far from always right. You also see it in the publications out there: more and more hallucinations, given that you want the models to be able to do more and more. And it's completely fine to be wrong if you use an LLM for a cooking recipe, right? It's annoying, but it's not life or death. Or if you use it to automatically write your essay assignment for school on Monday because you forgot to do your homework, that is just annoying for your grade. But within healthcare, you need to make sure that the system is reliable.

So what I'm most excited about is uncertainty quantification for these models: basically making the models aware of their own deficiencies, and aware of when they should not predict, when they should report back to whoever is asking for an output and say, "I really don't know." That uncertainty quantification is important, and it's a field I think is underappreciated, because it doesn't create a big headline about an LLM [00:37:00] that is now able to finish exam X or Y better than other LLMs. But it's probably the most impactful thing in terms of actually providing and building AI solutions that are scalable and reliable for the masses. So uncertainty quantification is one thing; it's a highly researched topic, also from our perspective, where there is still a lot of innovation that needs to come.

And the second thing is the capability of training these models in an unsupervised, or rather self-supervised, manner. That gives a company like ours so much scale, because we don't have to have these very expensive, and also very error-prone, annotations for all of our data; we can actually get the models to learn their own representations. With the right infrastructure and the right minds employed in a company like ours, you can gain so much value out of self-supervised learning. [00:38:00] So that excites me a lot, because it means scale and it means covering more data.
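The uncertainty-quantification idea Lars describes, a model that can say "I really don't know", can be sketched in its simplest form as selective prediction: only return an answer when confidence clears a threshold, otherwise abstain. This is a generic textbook illustration, not Corti's method; the labels, probabilities, and threshold below are made up.

```python
def predict_or_abstain(probs, threshold=0.9):
    """Return the top label only when the model's confidence clears the
    threshold; otherwise abstain and defer to a human clinician."""
    best = max(probs, key=probs.get)  # label with the highest probability
    if probs[best] >= threshold:
        return best
    return None  # abstain: "I really don't know"

confident = {"pneumonia": 0.96, "bronchitis": 0.04}
uncertain = {"pneumonia": 0.55, "bronchitis": 0.45}
print(predict_or_abstain(confident))  # pneumonia
print(predict_or_abstain(uncertain))  # None (abstain)
```

Real systems use more sophisticated confidence estimates (calibration, ensembles, entropy), but the design choice is the same: in a safety-critical setting, a refusal to answer is a valid and often preferable output.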
Mehmet: Yeah, it's like when I go to a doctor and they tell me, "this is not my area of expertise, but I can refer you to someone else." Maybe at some stage one model would suggest other models that have better knowledge in that area, which makes sense. And on the life-or-death point: indeed, I tell people this, because I've seen people who maybe have a pain or whatever, and they go and ask ChatGPT or one of those tools. I say, guys, don't rely a hundred percent on the answer, because we never know.

And just out of curiosity, Lars, because you mentioned something I've noticed and tried to find answers for: over time, a model can start to behave [00:39:00] in a weird manner. And I don't mean within the same chat; we know that if you keep a chat going, it will start to hallucinate more. I mean over time, and this is why I had this theory about why companies like OpenAI are releasing a new model every three months. I used it very heavily, and I figured out that after a while it either becomes lazy, literally refusing to do the job, or it starts to give short answers. Is there really something like this, from a technology point of view?
Lars: Yeah, I don't know what their choices are. It might be that they have a cost-efficiency measure where they try not to invest too much compute on each individual query. [00:40:00] If I were them, I would probably also think about that, given the massive compute needs and all of that. So that could explain the shorter responses.

Another thing we see from these models is that even though the context window is advertised as being humongous, the effective context window, for lack of a better term, that we see from the general-purpose models is really not impressive. Especially when you're talking about a domain-specific topic like healthcare, these models have a really, really difficult time keeping all of that domain-specific context. For a machine learning person, it's not a big revelation that they have difficulty doing that, and I think we should all be aware of it. It means that when you chuck in a thousand-page stock trading report or something like that, you can definitely not expect the model to actually [00:41:00] keep that context, and it will give you an answer no matter what. So the analysis that you're then relying on, I would question that even more than your family member asking ChatGPT whether a cold should be a reason to go to the doctor. The larger the context and the more complex the matter, the worse these general-purpose large language models handle the needs that require that large context.
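One practical response to the limited effective context window Lars describes is to never dump an entire long document into a model at once, but to split it into budget-sized pieces and process them separately. The sketch below is a deliberately naive, character-based illustration of that idea; a real pipeline would count tokens and split on semantic boundaries such as sections or paragraphs.

```python
def chunk_text(text, max_chars=2000):
    """Naively split a long document into pieces that fit a fixed budget.
    This stands in for the principle that you should not assume the model
    can hold a thousand-page report in its effective context window."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

report = "x" * 5000  # stand-in for a very long report
chunks = chunk_text(report)
print(len(chunks))  # 3
```

Each chunk can then be summarized or queried on its own, with the partial results combined afterwards, rather than trusting a single answer produced from a context the model could not actually retain.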
Mehmet: Yeah, this is the disclaimer we give to people, and this is where what you do, Lars, comes in handy, because you are building specialized models for specific use cases, and they don't go beyond the knowledge they have, which is all medical knowledge. So it's good, and it's also a wake-up call for people, just so they understand. And of course I'm not saying nobody knows this; it's good to remind them. I have to remind myself that the general-purpose models have all the data, including the garbage data of the internet, so I might get something that is complete nonsense or wrong. So, a hundred percent, yeah.

Lars, as we almost come to the end: any final thoughts, anything you want to share, and where can people get in touch?
Lars: Basically just that there will be, and there are, many more companies like us to come in these specialized fields. There are so many specialized fields out there that need infrastructure like ours. What I'm most excited about is all the developers out there who are sitting with an idea and feel it is difficult to actually get started. We should hopefully make that easier: they can just get started from our website, sign up and get their API keys, and they're basically up and running. They can build whatever prototype application, on whatever vibe-coding tool they like, and test their idea out. And we'd love to hear from all of these developers about their crazy ideas, or better ideas, or whatever. We also throw a lot of [00:43:00] hackathons, and those are really getting more traction. It's so fun and so energizing to see all of the ideas for actually empowering a better healthcare system for the future. So reach out, directly to me on LinkedIn, or on my Corti email (you can share it afterwards as well), or directly through the website. We're responsive and we just love building.
Mehmet: Cool, great. I'll make sure all the links are available in the show notes. So if you're listening on your favorite podcasting app, you'll find all the links in the show notes, and if you're watching this on YouTube, you'll find them in the description. Lars, again, thank you very, very much for being here with me today. I know how busy things can be, so thank you for giving us the time.

And this is how I usually end my episodes; this is for the audience. If you just discovered our podcast by luck, thank you for passing by. I hope you enjoyed it, and if you did, do me a small favor: subscribe and share it with your friends and colleagues. As I said, we are available on all podcasting applications and platforms, and on [00:44:00] YouTube. And if you are one of the fans, the loyal audience, thank you very much for all your support, your feedback, questions, and suggestions. I hear them and read them all, so please keep them coming. And thank you for taking The CTO Show with Mehmet to a new level this year by reaching the top 200 charts in multiple countries at the same time. I really appreciate it; it can't happen without you, the audience, and of course my guests, including you, Lars. So thank you very much, and as I always say, stay tuned for a new episode very soon. Thank you. Bye-bye.