Oct. 25, 2025

#532 Rewriting the Rulebook: Carina Negreanu on How AI is Transforming Legal Tech


In this episode, Mehmet Gonullu sits down with Carina Negreanu, CTO of Robin AI, to explore how artificial intelligence is reshaping the legal industry. From her roots in physics and machine learning to leading one of the most innovative legal tech companies, Carina shares how Robin AI is using AI to simplify contracts, enhance trust, and improve lawyer productivity.


The conversation dives deep into the intersection of AI, law, and human judgment, discussing the challenges of accuracy, bias, verification, and the evolving role of co-pilots and agents in legal workflows.


About Carina Negreanu


Carina Negreanu is the CTO of Robin AI, a fast-growing legal tech company transforming how organizations manage contracts. Before joining Robin AI, she was Principal Research Manager at Microsoft Research, where she led groundbreaking work on formula generation and code generation for Excel. Carina holds a PhD in Physics, where she studied Einstein’s General Relativity systems — a foundation that shaped her rigorous, data-driven approach to AI.


Key Takeaways

AI as an Enabler, Not a Replacement – Tools should empower lawyers to verify results and enhance confidence, not automate judgment.

Verification First Mindset – Robin AI’s approach builds user trust through transparency and feedback loops.

Bridging Accuracy and Bias – Continuous improvement and human oversight are essential for ethical AI adoption.

The Build vs. Buy Debate in AI – Why enterprises should evaluate long-term maintenance, model evolution, and vendor credibility.

The Adoption Challenge – AI success depends as much on user education and expectation-setting as on the technology itself.

Legal Co-Pilots & AI Agents – The next frontier: workflow automation that simplifies contract negotiation and legal research.

Personalization is the Future – AI that understands user preferences and adapts over time will redefine the legal workspace.


What You’ll Learn

• How Robin AI automates and verifies complex contract review workflows

• The importance of human-in-the-loop AI for high-stakes decisions

• Why enterprise AI adoption struggles — and how to fix it

• How legal copilots and agents are redefining trust, transparency, and productivity

• The role of personalization and verification in next-gen AI systems


Episode Highlights (Timestamps)


00:02 — Carina’s journey from physics to AI leadership

00:05 — How Robin AI simplifies contract workflows

00:09 — Tackling accuracy and bias in legal tech

00:13 — Build vs. Buy: navigating AI decision-making

00:18 — Why enterprise AI adoption often fails

00:27 — The rise of legal copilots and workflow agents

00:34 — The future of interoperability and AI ethics

00:37 — Personalization as the next big leap in AI

00:39 — Carina’s advice for CTOs evaluating AI partners


Resources Mentioned

• Robin AI: https://robinai.com/ – Learn more about Carina’s company and its Legal Intelligence Platform

• Carina Negreanu on LinkedIn: https://www.linkedin.com/in/carina-suzana-negreanu/


[00:00:00] 

Mehmet: Hello and welcome back to a new episode of The CTO Show with Mehmet. Today I'm very pleased to have joining me from the UK the CTO of Robin AI, Carina Negreanu. Carina, thank you very, very much for being here with me today. I really appreciate it. I know it can be very busy for a CTO like yourself, [00:01:00] and thank you for making it, uh, the way I love to do it.

Carina, as I was explaining to you before we started recording, I leave it to my guests to introduce themselves. Tell us a little bit more about you, your background, your journey, what brought you here to Robin AI, and then of course we're gonna start the discussion from there. A teaser for the audience:

We're gonna talk a lot about legal tech, uh, but of course before that, let's know more about you. So the floor is yours.

Carina: Hi, thank you so much for having me on your podcast. Um, I'm Carina and, as Mehmet was saying, I'm the CTO at Robin AI. Um, I have a background in machine learning, and before taking the CTO role I was the VP of AI.

Um, before Robin I used to be a principal research manager at Microsoft Research, uh, where I did a lot of very cool stuff around, uh, formula generation and code generation, in particular for Excel. And before that I used to be a PhD student in physics, where I studied, uh, Einstein's general relativity systems.

Uh, so quite a journey for me to get into [00:02:00] legal tech, but uh, it's a really hot place to be right now, especially as the machine learning approaches that we have are really relevant today. 

Mehmet: Great. And thank you again, Carina, for being here with me today. Legal tech. You know, for some people it might look like something boring.

It might look, you know, like a rigid, uh, domain of expertise, I would say. What excited you, uh, and, you know, let you shift gears and come and join Robin AI?

Carina: Yeah, I think, uh, I actually had the same question when I started talking to Richard about joining his company.

One of the things that's really interesting for me is that with legal tech, the space in which you can make errors is really small. So the users have a high expectation that you're right a lot. And even today, with our best models, we do see quite frequent errors in their judgment, in their reasoning. Um, and [00:03:00] we basically have to build tools that support users to verify those kinds of flows, so they can have confidence in the results that we show them.

And that's the same kind of workspace and, uh, world perspective that I had when I was working on Excel. So the transition, even though the domain is quite different, is into the same kind of conceptual space. Um, which made it really exciting for me, because I think there is a lot of space to innovate in the legal sector.

Mehmet: Great. Um, Carina, um, one thing which I'm always curious to know about, you know, when I talk especially to CTOs, and this is The CTO Show, although, like I tell people, I don't only speak to CTOs, I talk to almost all of the C-suite of any company, and operators as well. But, um, every company has developed something to solve a specific, uh, problem, right? Um, which has a business impact when solved. [00:04:00] I would love to hear from you a little bit more about, like, um, how the tech is solving, you know, problems for customers of Robin AI. Like, if you can give me a little bit more background about the kind of problems that, you know, um, organizations would be facing, or people would be facing, and how you are solving this with tech.

I would really appreciate that.

Carina: Yeah, of course. Um, so at Robin AI, our core mission is to make contracts simpler. And this is because the main sector that we're targeting is, uh, enterprise lawyers. So in-house lawyers that have very high volumes of work they have to do, and some of it is quite repetitive.

Um, and what Robin tries to do is position itself as the market leader in terms of giving those folks the support they need to basically achieve more in their day to day. Um, examples of workflows that we are tackling are contract review. So for example, when a lawyer receives a [00:05:00] request to review a contract and update it based on a playbook, we automatically do that for them.

First pass, second pass, markups. Uh, these are all very time-consuming and laborious pieces of work. And Robin's goal is to create for them the resources they need to be able to not start from scratch, but rather verify the outputs of a model that produces that work for them. Um, that's one example of a flow.

There are other flows. For example, you have a large volume of contracts and you want to know, across them, what is the, um, expiration date or enforcement date for a certain clause, across all of them. Um, an example that would be very time-consuming, as you can imagine, searching through all of them. So we have built technology that figures out how to create tables that source this information for them, with citations so they can easily verify.

And it also tries to help them on the path to verification by producing [00:06:00] different levels of confidence, uh, suggesting places where they might want to look further. Um, so we are basically trying to not just produce resources for users that they then have to verify at a very large scale, but support them on their journey, uh, to do a better job in this space.

Um, another example. I think I can, I can talk forever, but... no, please. Um, another example would be, um, when people want to chat across, uh, verified resources. So for instance, you know, maybe one of your colleagues asks you, you know, relevant to my, I dunno, 50, 70, a hundred contracts, um, what does it mean if the law in Delaware changed now?

Hmm. That's quite a complex task usually, because folks have to, like, figure out what the relevant law in Delaware would be, and have to find out where, out of this, like, massive amount of contracts, it applies or doesn't. Um, and Robin has a multi-contract system that understands your data [00:07:00] across your whole repository.

Um, and to make that a lot easier to navigate and, um, understand the outputs, we have built something that we call the Legal Intelligence Platform, which we launched last July.

Mehmet: Now, one thing when it comes especially to legal, Carina, and this is, you know, because of your AI background also, so there is some overlap here.

What I hear sometimes is, you know, and you know better than me, you're the expert... so, we talk about accuracy, we talk about bias. Now, when we put this into a legal workflow, right, how can we make sure that the whole process, the processes actually that you just mentioned, is still, you know, in compliance with what, you know, legal in general would [00:08:00] accept? Because, you know, some of the models... we see people that say the model is biased, maybe because, again, as you train it, the way that, uh, you know, you have the data and you give the data. Mm-hmm. And some people still say, yeah, but as you're training on the data, who is controlling, you know, the data itself, that it's not biased?

So how do you solve this at Robin AI?

Carina: Yeah, it's a very good question, and it's a very hard topic, obviously. Um, so first off, I am of the mindset that it's all about making it easy for people to verify, and setting the right expectations. When you produce a tool, the tool is not meant to replace you.

The tool is meant to enhance your workflow. Um, so for example, with contract review, when you have a playbook that you want to apply to a contract, the model can do a good job up to a point, and, you know, that speeds you up a lot. That, you know, uh, gives you time to [00:09:00] focus on other things that are important. But it's really important that we provide you the tooling after that that can help you verify the answers and the changes that the model has produced.

Um, so it's about supporting the user on their journey, and thinking about verification as the first step in any kind of product that you build. Um, I think it's very unrealistic to believe that we're always going to get things right. So starting from the idea that you are not going to, and just making it easy for folks to verify, is really important.

Um, in terms of model training, at Robin we are in a very, um, fortunate position, in that we are kind of like a two-dimensional business, in the sense that we have our software sales and software component, but we also have a managed service that uses our software to basically do all of these markups and reviews.

Um, and all of the data and all of their insights into how they use the [00:10:00] software are fed back into our models. So we have a constant flow of feedback from them, and the data that they produce is really helpful for our testing and training. Um, and that is basically data that we have a lot more information about, and we can guarantee some level of quality and de-biasing, uh, because it's all something that we can control in house.

Mehmet: Right. Do you work, Carina, with maybe lawyers or legal experts also, to make sure, you know, uh, of the quality of the results, or maybe even to enhance, uh, your models?

Carina: Yeah, exactly. So the managed services piece is an example of that. We have lawyers in the company in kind of two different places.

One of them is the managed services side: we have, like, a significant team of lawyers that are basically doing these markups together with our AI. And then we have a legal engineering team [00:11:00] within my organization that creates data sets for training and evaluation, and is constantly monitoring what's happening under the hood.

So we spend a lot of time building the system such that we can collect the information we need to make a judgment if something goes wrong. So, for example, there are different parts of our system where we depend on models from different providers. Um. And then sometimes, you know, something happens, uh, maybe they have some downtime.

Maybe there's like an underlying change that we weren't expecting. Our job as folks who build companies like Robin is to make sure that we can transparently inform our customers quickly if there's any performance degradation, and for us to learn from it and guard against it. Um, so that's a lot of the work that my team has been doing in the last year.

Mehmet: Great. Um, another, maybe somehow parallel topic, uh, and, you know, something [00:12:00] important to discuss, I believe, is building versus buying, especially in enterprise, right? So I remember, you know, years and years ago, uh, I was one of the, you know, technology team. I'm not the decision maker, but, you know, I used to sit in the room, and this is even before AI or anything else.

So always, you know, we had this discussion: should we buy, should we build? Now, when it comes to AI, you know, uh, we talk about something major, something that affects the business, something that, you know, can make it or break it. Mm-hmm. Um, what's your point of view on this, and what's really the tradeoff you see when making that decision?

Carina: Hmm. Yeah, so I think it's quite a known problem. But, for instance, the way I think about it for my team at Robin itself: I am not going to build, um, my [00:13:00] own AI that's gonna support my team to code better, right? Like, that is a no-brainer. You know, there are some very good companies out there that do that, so I'm going to trust them.

Um, and it's the same way I think about others buying Robin. It's about figuring out a provider that they trust, a provider that they can work with, so, for instance, if there's a problem, they can escalate quickly. Some jobs are bigger than others. So, for example, you know, how many people did it take you to build something that's custom built for you?

And how are you going to support it long term? Um, so for example, especially in this era of AI, models change all the time. For your task, are you reaching the performance that you want today? If not, then you probably should go for a provider that can, you know, constantly update their models and always be on the hook to basically deliver you better performance.

Um, so it's interesting. I think with the new developments in AI, the balance between [00:14:00] buy versus build has changed quite a bit. On one hand, you can build more easily, uh, in many ways. On the other hand, maintaining becomes a lot more difficult, uh, because the solutions are a lot more dynamic. Um, so the conversation I usually have with potential customers and prospects is: what is their confidence that their task is basically staying quite constant or flat? Like, say they always have to review, uh, thousands of NDAs, and that's not changing. Um, how do current models, or out-of-the-box models, perform for them? Because, you know, we don't want them to be disappointed when they could already get, like, 99% by doing something simple themselves.

Um, and basically figuring out what it would take for them to build something versus, you know, purchasing the software that we have. Um, I think it's about having these honest conversations, because otherwise everyone's gonna get disappointed in the long run.

Mehmet: Uh, and I agree with you, because, you know, the [00:15:00] question is... working as a technology consultant as well, many times, on many occasions, in different verticals and different sizes of companies, what I have seen is that someone decided to build an in-house solution, regardless of what the solution is, only to figure out that, you know, the team that built it...

Of course they tried to do some documentation, but because no one was able to understand the core architecture, uh, there are, like, some areas and aspects which, uh, no one was able to understand after this team, you know, who built it, left. So they get stuck. And the problem is, you know, uh, shifting becomes also harder.

So it's like you are in a situation where you don't know what to do, and maybe, you know, the decision maker, whether it's a CTO, a CIO, whoever is responsible, has to take the hard call. So I can pretty much relate to what you mentioned. But now, [00:16:00] in this era of AI also, Carina, let's focus on AI systems in general.

So, you can imagine. And I have a lot of friends, I call them friends because, you know, I know them for a long time: decision makers. Their biggest complaint, I would say, now that it became a very noisy market, uh, everyone comes and says, hey, like, we have the ultimate whatever-you-want-to-call-it. And they say, okay, you know, we are having kind of, um, challenges, let's call them this way, in evaluating vendor credibility. From, again, your perspective, with whatever you're doing now, and of course your long, uh, you know, experience: how do you evaluate vendor credibility, especially in these times where everyone can say, hey, like, I've developed an [00:17:00] LLM, I did this, I did that?

Carina: I think it depends a bit on the vertical that you're targeting. Um, so, for instance, two kind of different perspectives. Um, for legal, for instance, I think what matters a lot is to make the, um, adoption of the software a success, which is not just about the software being a good product. Because, you know, maybe, uh, some people would argue that the difference between a lot of software solutions is not as big as you were saying, or that everyone says that they're doing perfectly.

Uh, but the user journey and the support, and having the people both in-house and within the vendor that you're trying to purchase from, is really important, because there has to be someone who's there with you on the journey to make sure that adoption is successful. Um, that kind of leads into the credibility part, but I think it's a part that a lot of people don't talk that much [00:18:00] about.

Um, and I think it's a shame, because the adoption rate of AI in companies is really low, if you actually read the recent studies. And my personal belief is that a lot of that is because the adoption part is also an afterthought. Um, so it has to be a vendor that can basically take you on the journey, and that should come as part of the credibility.

Now there are other cases. Say, uh, code generation, uh, which is one of my favorite pieces, uh, something I used to work on before, uh, where basically the type of users that, you know, buy a solution like Cursor, for instance, are very much aware, and they know how to self-adopt it. It is very much, uh, a self-serve space.

Space. So I think that in that case, um, basically having something like self-serve makes, makes things a lot easier. Um, it's very much market dependent as far as I'm concerned, but in general, if you are an enterprise and are looking for a solution where, you know, you [00:19:00] are the folks that are using, uh, the solution and you wanna make them power users, because otherwise, why would you?

Make the adoption, um, they need to be the ones that are on this journey, and you have to figure out what that means for them and ask those questions to all the vendors that you're considering. 

Mehmet: Great point, uh, Carina. And just a question that came to my mind now. Of course, like, we need to do our due diligence, let's say, on any vendor, regardless of the solution, in any vertical.

Because you mentioned something about adoption: do you think also the expectation that we, and I'm saying we here as the other side of the table, we the enterprise, we the decision makers, uh, you know, defining properly what we need to achieve or what we want to achieve, how much is this also important? Because, you know, I can, for example, and let's take legal tech here, like, maybe I'm a legal firm, right?

Or any other [00:20:00] organization that has a legal department, probably, and I'm relying on contracts, and I'm relying on a lot of things related to legal. So I can come to you today, you know, with something which is not even achievable, right, with the current technology, and come and tell you, hey, Carina, you know what, uh, I don't think your company is capable.

Or, like, on the other side, you know, I come to you with almost no requirements, and I keep telling you, hey, Carina, I don't think you guys are able to fulfill my requirements. But when you go and dig more, you find out that actually these guys don't have requirements, or maybe they have a misunderstanding of the technology.

So here is what I want to ask, Carina: also the importance of your role, from the vendor perspective, in helping customers to really set the right criteria for selection, regardless of what the technology is. How important do you [00:21:00] see this, like, to have someone on their side to set the right expectations, and not just start to evaluate, uh, different vendors and providers without, like, a solid, I would say, background?

So I'd like to hear your thoughts on this.

Carina: Yeah, I think it's a fundamental piece to support people in understanding what they actually need and what technology can do for them. Um, so, for example, uh, we are seeing a lot of companies that are doing POCs, and that's really good. Mm-hmm. And some of them are, like, very well thought out. But even in those processes, sometimes we find that asking the right question makes them think totally differently about, say, their usage or their needs.

So I think it's kind of like a shared responsibility between vendors and adopters, uh, to basically figure out what is the right solution given what the need is. Um, usually we have, um, very deep conversations at the beginning, the right [00:22:00] discovery calls, to try and support folks on their journey to understand what they need AI for.

Um, and they're usually really telling, uh, on things from, like, what integrations they actually need to even use cases. Um, so, for example, say they have a use case in mind that would unblock their business. They have, like, a very prescriptive way they think that use case can be solved, because that's the way it has been done.

And they think of AI as a replacement for a certain component in the flow. But actually, you can change your flow entirely, because X, Y, and Z is now possible with the new technology. So I think, uh, closing that understanding gap is the most important piece here, and that just requires a lot of time spent with customers and potential prospects.

Mehmet: Right. Now, another thing, Carina, which, uh, I'm sure, like, you're seeing a lot, and maybe, of course, you saw a lot before, uh, related to the adoption. Um, [00:23:00] do you think sometimes, as technology leaders who are sitting in the decision maker chair, they're talking to someone like you, I mean, or to OpenAI, or maybe, from another perspective, talking to another AI provider for another use case, and they are genuine. They understand what's being said.

They understand what that's saying. But do you think sometimes the failure of adoption comes from. Setting wrong expectation within the organization itself. Uh, do you think because it's like fear from people from ai, and I'm repeating this question recently with all my guests is because I started to spot something, which is little bit alarming.

Um, so people come and think one of two ways. They think AI is the solution for every single [00:24:00] problem in the organization, or they think AI is gonna bring all the problems to their organization, because they say, okay, it's gonna make my employees worried, they're gonna feel that they will lose their jobs.

Uh, it'll resonate, right? So what's missing here, from your perspective? Why is the slow adoption happening, uh, beside, of course, the few points that you mentioned a few minutes ago?

Carina: Yeah, so first, there's a knowledge gap between, you know, the people who build the technology and the people who use the technology. And I think that is something that we need to be a lot more mindful of going forward as technology builders, because I think that unless people see quick wins very early, it's very hard to convince them to change their flow.

Um, it's also, like, you know, say that they have, you know, ten use cases for your solution, and they run the first two and they're great. They run the third one; it's not [00:25:00] great. The fourth one is okay. But by this time they have actually invested a lot of time in basically building the knowledge to use your tool.

Uh, that kind of invested time has to basically be worth it. Um, so: things like making a good first impression in their user journey; making sure they have the support they need, and sometimes people just don't have that; making sure that incentives are aligned in corporations, where, you know, it's okay to maybe be a bit slower in the first week to learn how to use a tool, because we have confidence that then, you know, that time is gonna be, uh, picked up again.

Um, so I think it's all about, like, three main components here. Uh, but I also think that technology builders should work more closely with the business side of potential clients, to make sure that early on they have the right information to make a good decision quickly. The last thing you want to do is have, like, long [00:26:00] processes that keep dragging, or, like, POCs that take months and months and just, mm-hmm, drain your teams. Then, you know, no one wants to use it. You know, they have lost all that energy anyway.

Mehmet: Yeah. Uh, yeah, I'm happy you mentioned the POCs that keep dragging, and I think this was happening all the time, even before AI. But AI, like, overcomplicated, uh, things, in a sense that, yeah, because, uh, people push back a lot, to your point.

Uh, but I'm a big believer that, like, similar to any breakthrough that happened in tech, once the organizations, and the people of course within these organizations, who adopted, you know, the technology, started to see, you know, the results, this is where the mass will come. And I think we are still in the early days, I'm aware of this. Uh, and, you know, I don't have a timeframe, but I would say, like, very soon we are gonna start to see the organizations who adopted AI start to see, like, massive [00:27:00] results, whether it's, like, saving time, saving money, uh, enhancing customer experience. And then the rest will say, oh, okay, we don't need to miss the train now, let's jump on and try to go and, uh, do this.

So, uh, it resonates with what I'm seeing also in the market. Now let's talk a little bit about, um... and of course, like, you were at Microsoft before, and, you know, it's like their term, kind of, copilot, right? So, um, I'm hearing "copilot", and not only the Microsoft term, but, like, as a co-worker, like a colleague, you know, AI as a colleague. How do you see the future of AI copilots in the legal industry?

Um, and, you know, in the field of contracts and, uh, negotiation, and, you know, maybe even, uh, issuing purchase orders, or maybe, like, uh, signing memorandums of understanding? So how do you see AI evolving in that space, [00:28:00] in the copilot, you know, sense?

Carina: So just to clarify, by co-pilot you mean a chat assistant?

Mehmet: Like, okay, this is my issue now, because people get angry at me if I call it a chatbot: no, it's not a chatbot. So what do I mean by copilot? Yeah, like, the interface can be chat, can be voice, can be anything. But, you know, the kind of, you know, uh, if you want to, yeah, I know some people will be pissed off, the agentic one, you know, the one that can go and do things, you know, all by itself, if that makes sense.

Carina: Okay. Yeah, sure. Um, so I think there are several directions, uh, here that are probably gonna happen quite simultaneously, really, 'cause I think different companies have different perspectives on what to invest in first.

Right? And first off is, uh, what we're seeing at the moment. You know, there's a lot of effort, and Robin has put a lot of effort into this as well, into trying to create a conversational agent that basically can support you to answer [00:29:00] questions, uh, so that, you know, research is being sped up in the legal world, which used to be a very big bottleneck.

Um, there are, um, these agents that help you draft documents and that help you review documents, and they are already, like, you know, getting there in terms of capability. Until, like, I don't know, six, nine months ago, it used to be quite a difficult space, but now, with the evolution of models, it has improved a lot.

We are now, um, starting to see workflow agents. So for instance, mm-hmm, things that help you do a job end to end. And I think that's still very early, uh, but we see big announcements, you know, from loads of different companies now who are building all these workflows. I think that space is also translating into legal quite quickly.

Having basically connectivity across your applications, your, like, email accounts, your SharePoints, your CLMs, and so on, I think is really important. And we are starting to see that happening in the legal space a lot. Um, obviously it's [00:30:00] an uphill battle, because, you know, everyone has a different CLM, everyone has, you know, different providers, and connecting everything is quite time-consuming. But we are basically getting to a point where we can have information flow from multiple sources to support lawyers to have, like, a very complete view of their world.

Um, so I think that's one of the main things that's happening as an innovation right now in the field. Um, going forward, I think we're basically going to see a lot more assistance at the level of things like, you know, if you want to file something, if you want to fill in a form, if you want to do things that are very specific to the long-term legal process. I think we're gonna see more automations at that point.

Uh, but really in the legal flow, I think it's kind of, um... there are many different personas, but some of them are all about singular work, like, things where they [00:31:00] have to work on a complex contract in one go, and that's the thing that's taking a lot of time, versus scale, where, for instance, they have to create an NDA for, like, a thousand people. Uh, my expectation is that the copilots and agents are going to be specialized for these kinds of flows specifically. So this is gonna be a very big differentiator in the market: copilots that tackle one or the other, um, in the long run.

Mehmet: Yeah. And to your point, if you allow me, Carina, I can give an example of something I keep seeing on a daily basis. To your point about NDAs: so many times, two companies decide to start some kind of discussion.

It might be the start of a merger and acquisition, or maybe they just want to do a partnership, and the first thing is, okay, send me your NDA, and I've seen this NDA going back and forth for at least two weeks. Uh, [00:32:00] when it comes to master service agreements especially, when companies are providing a service, and I think similar to what Robin AI might also be offering, this is where the nightmare starts as well. It goes on for weeks and weeks, and the biggest one is what I call the master partnership agreement, where two companies are trying to agree that maybe you can resell my product or service in a certain region, or who takes the liability.

Sometimes I sit down and think, oh my God, if only there was a machine that understands this. I can understand that cross-country, cross-border law is hard. But I'm sure we can feed a machine all the laws of, let's say, the UK, where you are now, Carina, and all the laws of the UAE, the United Arab Emirates, where I am now, and then they can go and, at least without taking the final decision, remove the frictions that we [00:33:00] usually see, especially because it's asymmetric or asynchronous communication: you need someone to come later and sit down and do it.

I predicted something, not related to legal, back in 2017, still before the AI bubble. I mean, AI existed before, as you know. But I predicted that at some stage, especially in software sales, a machine would be able to talk to a machine to define success criteria for POCs and negotiate the pricing. And I was right. Of course we didn't reach that a hundred percent, but we've started to see machines talking to each other. But yeah, to your point, very valid and accurate, I would say.

Now, another point. I'm talking about different geographies sometimes, different companies, and especially, with agents, [00:34:00] having an AI assistant that might need to interact with another AI assistant. How important is it here to think ahead about interoperability? And this is whether I use the agent to interact with an external or an internal one: how important is it to keep things simple, transparent, and explainable, again from a legal client's perspective?

Carina: Yeah, I mean it's obviously really important. It is quite hard to do in many ways.

So for example, in your example, you have two agents that communicate with each other, right? And information is being passed. There are some interesting implications here, for example, making sure that the information the agents pass to one another is actually information that they're allowed to pass.

Not just at the level of some region having GDPR and some not, but also, especially in the legal space, at the level of having very [00:35:00] confidential information that you must preserve. So that is one of the main things we are working on: trying to guarantee to our customers that information doesn't get used in a way they are not aware of.

So it's about education: if we have two agents in our system that talk to each other, like we do in chat, those agents both have access to your contract information. Is that okay? It's about making sure that people are on a journey with you. At the moment, there's nothing that's crazy controversial, but I can foresee a world in which agents are gonna have significantly more autonomy. So for instance, are you happy if an agent files something on your behalf, uses your name and details, and passes it to the other agent that receives it? It's all about making sure that this kind of thing gets agreed upon beforehand, and that the people who build the technology can actually ensure, to some extent, that this [00:36:00] information doesn't leak.

And it is a very difficult problem.

Mehmet: Right. Now, as an expert in AI, Carina, and beside what you're doing with Robin today, what are some of the things that you're really excited about that are happening in the domain currently?

Carina: Yeah, I am a very big fan of personalization, um, and I think people hear me talk about this constantly, uh, at work as well.

I think it has a really important place in the legal AI space as well, and I think we haven't quite seen that emerging yet, but there are a lot of really cool initiatives right now around personalization, about making sure that the right memory is captured by the agents, such that if your preference changes, the agent is aware of that. That's a really powerful capability that is emerging.

So that's a very big area of interest for me. Other things are about [00:37:00] making sure that the information we process, say for instance from complex documents from around the world, or things like videos, is actually correctly represented, and how you can have a verification step that ensures the models actually see what you intend them to see.

It's a bit of an unloved area, I would say, because it's usually very engineering-heavy, and people are like, oh, we can just shove it into another LLM, but that's not quite true. We see in the legal context that contracts are usually so badly processed. It's kind of hitting a ceiling, and if your input data is bad, you have no chance of success really. So let's make sure that we solve the ugly, unloved problem first before we do anything else.

And then a lot of things that are very cool are, for instance, things that can be transferred from a lot of the gaming research that has been happening recently, where you [00:38:00] have these agents that co-play your game, and we're seeing quite a lot of success. It's the same kind of technology that's happening in self-driving cars to some extent, where there's this level of autonomy in terms of what the agent can do and how it can do it, based on a lot of visual data it can see. I think a lot of this is really interesting in other spaces as well, like legal, because it's a very high volume of data. Maybe it's data in a different form, but the way you can create agents that are autonomous in deciding what is a relevant task is a really interesting space, and I think it's still not well understood at the moment.

Mehmet: All exciting spaces, I would say. As we are almost at the end, Carina, maybe some final things you want to share with other leaders, your peers, CTOs, or whatever their title might be, who are evaluating AI partners. If you [00:39:00] want to give them one piece of advice, what would that be?

Carina: I would say to think about what you're looking for in the partnership, to make sure that adoption can be successful. Being mindful early on, before you engage in the wrong process, will save everyone a lot of time and resources.

Mehmet: Great. Finally, Carina, where can people find out more about Robin AI and, of course, get in touch?

Carina: On our website you can find a lot of information. We also have quite a lot of blog posts, and we are quite active on social media.

So yeah, please reach out. We are always keen to hear from you. 

Mehmet: And you, you are on LinkedIn, I believe, also, right? 

Carina: Yes. 

Mehmet: Okay, great, great. Well, Carina, it was a very engaging discussion. As I said, we haven't discussed legal tech much; I think I did one episode maybe a year and a half ago, if not more, so it was a good refresher for us on what's [00:40:00] new in this domain and what the use cases are. So thank you very much for your time today.

And this is how I end my episode; this is for the audience. If you just discovered us by luck, thank you for passing by. I hope you enjoyed the discussion today. If you did, please do me a favor: subscribe and share it with your friends and colleagues. We are available on all podcasting platforms and on YouTube.

And if you are one of the loyal people who keep sending me messages, thank you very much for the support. I really appreciate it. And as I always say, stay tuned for a new episode very soon. Thank you. Bye-bye.

Carina: Thank you.