Dec. 28, 2025

#559 AI Without the Black Box: Nat Natarajan on Building Trust at Global Scale

In this episode, Mehmet Gonullu sits down with Nat Natarajan, Chief Operating Officer and Chief Product Officer at Globalization Partners, to explore what it really takes to deploy AI in highly regulated environments.

 

From labor laws and compliance across dozens of countries to human-in-the-loop AI systems, Nat shares how Globalization Partners built explainable, trustworthy AI that enterprises can actually rely on. This is a grounded, operator-level conversation on moving beyond AI hype toward real productivity and trust.

 

 

👤 About the Guest

 

Nat Natarajan is the Chief Operating Officer and Chief Product Officer at Globalization Partners (G-P), a pioneer in global employment solutions. He previously held senior leadership roles at companies including Intuit (TurboTax), PayPal, RingCentral, Ancestry.com, and Travelocity. Nat brings decades of experience at the intersection of technology, regulation, and large-scale enterprise systems.

 

https://www.linkedin.com/in/natrajeshnatarajan/

 

 

🧠 Key Takeaways

• Why black-box AI fails in regulated industries

• How human-in-the-loop design builds trust and adoption

• The role of proprietary, vetted data in enterprise AI

• Where general-purpose LLMs fall short for compliance-heavy use cases

• Why AI should augment humans, not replace them

• How CHROs and boards are rethinking AI as a “digital workforce”

 

 

🎯 What You’ll Learn

• How to design AI systems that can explain their decisions

• When to keep humans in the loop and when automation works best

• How enterprises can deploy AI responsibly without slowing innovation

• What makes AI adoption succeed inside large, global organizations

• Why regulated complexity is an advantage, not a blocker, for AI

 

 

⏱️ Episode Highlights & Timestamps

 

00:00 – Introduction and Nat’s background

02:00 – Why regulated environments are ideal for AI, not hostile to it

05:00 – Lessons from TurboTax and encoding legal reasoning into systems

08:00 – Designing AI that avoids the black-box problem

12:00 – Human-in-the-loop systems and guardrails

16:00 – Why proprietary data beats generic models

19:00 – Enterprise vs startup AI adoption dynamics

23:00 – AI as a collaborator inside HR teams

27:00 – Explainability, trust, and employee-facing AI

32:00 – The CHRO’s role in an AI-powered workforce

36:00 – From hype to real productivity with agentic AI

40:00 – Final thoughts and advice for leaders adopting AI

 

 

📚 Resources Mentioned

• Globalization Partners (G-P): https://www.globalization-partners.com/

• GIA: http://www.g-p.com/gia

• Prediction Machines: The Simple Economics of Artificial Intelligence (Updated & Expanded Edition) – referenced by Mehmet

 

[00:00:00] 

Mehmet: Hello and welcome back to a new episode of The CTO Show with Mehmet. Today I'm very pleased to have joining me from the US Nat Natarajan. Nat, you know, the way I love to do it is I keep it to my guests to introduce themselves. So you are the [00:01:00] Chief Operating Officer at G-P, and G-P is Globalization Partners.

We're gonna talk a lot today about, you know, how you're leveraging AI, and we'll talk about the vertical you are in. But again, without further ado, I will pass it to you. Tell us more about you, a little bit about your experience and journey, and then we can dive directly into our discussion today.

Nat: Thank you, Mehmet. I'm honored to be here today with you and your audience. So, I am Nat. As Mehmet said, I'm the Chief Operating Officer and the Chief Product Officer at G-P. Globalization Partners is one of the pioneers in global employment. Our founder, Nicole Sahin, started this industry about 13 years back, and I joined about four years back with the mandate of taking our great product and services and making us more of a technology [00:02:00] company as well.

I'm really excited about what the team has done over the last four years, taking us into an AI-forward company with our new product called Gia. Before this, I've been at various companies: Ancestry.com, RingCentral, Intuit (TurboTax), PayPal, and Travelocity.

So I've had the privilege of being there and seeing the evolution of technology and global technology platforms, and how they've impacted businesses and consumers across the world. So it's a pleasure to be here with you and your audience today.

Mehmet: The pleasure is also mine. So, you know, Nat, the sector, or let's say the vertical, that Globalization Partners, G-P, is in.

You are in a highly, I would say, regulated domain, right? Yeah. 'Cause we talk about labor laws, hundreds of countries, and now [00:03:00] we talk about bringing AI into the picture. So what was the initial insight that made you believe an AI agent could be trusted to operate in such an environment?

Nat: So, Mehmet, part of it is my background. You know, I was the Chief Product Officer at TurboTax, and US tax laws are complex. We figured out a way, when I was there, to create knowledge graphs and take complex legal laws, black letter laws, and have computers automate them so that consumers can come,

using a mobile device or going online, and do complex taxes by themselves. So when I came here to G-P, I saw that we have a lot of this knowledge that we have accumulated over the last 13 years. Not just knowledge based on black letter law, right, but also practical knowledge [00:04:00] across the employee life cycle.

What happens when you onboard somebody, when you offboard somebody, when there's a certain situation in a certain country? Black letter law tells you what the guidance is, but not what exactly happens in practice. And I said, this is interesting: we could probably find a way to use technology. And at that point, you know, AI was just coming out.

So we thought we could use technology to automate some of this to make it easier, right? So the first thing we did was take that knowledge base and put it into a central location. And, you know, that was one of the insights: complexity is a friend of technology, and how do you leverage technology in this case,

and the data, which is proprietary to us, to make something easy for our businesses and customers going forward? So that was some of the insight, if you will.

Mehmet: You mentioned these complexities, Nat. So, you know, I can imagine some of them: there's [00:05:00] a variety of laws between countries, and, um, you mentioned taxes, you mentioned other things.

And usually, you know, even for a human it's hard sometimes to judge what would be the right approach, and this is where we might go to a lawyer or something like this. So, designing Gia to operate with this legal rigor, how does that come into practice from, you know,

I would say design point of view, right? From, not necessarily from technical perspective as architecture, but I mean the way you need to design it and the way you, you need also to think how the adoption would be like. Putting in mind, and you know this, that people sometimes they are skeptical about Yeah.

Technologies and you said like, they are Yeah. Complex things and technology, but always we have the people that will raise flags. Especially, [00:06:00] again, I'm repeating myself. We talk about like high, highly regulated, uh, domain here. 

Nat: No, look, it's a great question, and I think, you know, I started off my career at Travelocity.com, where I was one of the founding members.

And I discovered that human beings hate black box technology. If you are not able to unpack what is behind the black box, they don't trust it. So that was one of the tenets that we used when we were designing Gia. So look, jurisdictions and laws are very complex, like you said. So from a design perspective, we did two things.

One is: how do we take that knowledge graph into, think of it as, hierarchical decision making? You start with a jurisdiction, say the EU, right, or, I mean, Middle Eastern countries. And then you go to a country-specific law, mm-hmm, and then you go to a province-specific law. [00:07:00] So you take that nesting and you bake it into your algorithms, I mean, into your technology.

Second, to make it less black boxy, less "hey, this is the answer, and we know we are right," we early on started having humans in the loop, right? So the first iteration of Gia, we were using it within the company. So our salespeople were using it, our HR professionals in 50 countries were using it, and they were giving us feedback on what they liked, what they did not like.

As AI became more advanced, we started using reasoning models to tell you why did GIA get to this conclusion? How was Gia thinking about it and getting to that conclusion? So I think, you know, having that reasoning, especially with, uh, with a complex technology like, uh, like AI is very powerful because it tells you how Gia thought and it got to this answer so you can [00:08:00] interrogate it.

Right. Um, one additional point I'll make and I'll stop: so how did we get the reasoning, right? We are all engineers; we don't understand how a lawyer thinks. So we actually have a legal team on my staff, right? Mm-hmm. They're part of the Gia team. So they started working with the engineering team and saying, you know, this is how I thought about getting to the answer.

So the actual workflow that a legal person goes through to get to an answer, we try to encapsulate that thinking and that logic into the way Gia thinks, and we make that transparent so it doesn't feel like a black box. And you know, I've had hundreds of people come and challenge Gia, right?

Uh, okay, let me go and test it. And what does the other, uh, model say? What does Google say? What, what happens? And I think the fact that we are able to reason and unpack was a key differentiator for us. 
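
To make the nested, jurisdiction-first lookup Nat describes a bit more concrete, here is a minimal sketch in Python. It assumes a toy rule store keyed by jurisdiction, country, and province, with the most specific rule layered on top of the broader one; the class, field, and rule names are illustrative assumptions, not GIA's actual knowledge graph.

```python
from dataclasses import dataclass, field

@dataclass
class RuleStore:
    """Nested guidance: jurisdiction -> country -> province, most specific wins.
    Purely illustrative; a real employment knowledge graph is far richer."""
    rules: dict = field(default_factory=dict)  # keys like ("EU",) or ("EU", "DE")

    def add(self, guidance: str, jurisdiction: str, country: str | None = None,
            province: str | None = None) -> None:
        key = tuple(k for k in (jurisdiction, country, province) if k)
        self.rules[key] = guidance

    def resolve(self, jurisdiction: str, country: str | None = None,
                province: str | None = None) -> list[tuple[tuple, str]]:
        """Walk from broad to specific so the answer can show its nesting."""
        trail, key = [], ()
        for level in (jurisdiction, country, province):
            if level is None:
                break
            key = key + (level,)
            if key in self.rules:
                trail.append((key, self.rules[key]))
        return trail  # every level that contributed, broadest first

store = RuleStore()
store.add("Written contract required.", "EU")
store.add("Probation capped at 6 months.", "EU", "DE")
for key, guidance in store.resolve("EU", "DE"):
    print(" > ".join(key), "->", guidance)
```

Returning the whole trail, rather than only the final rule, is what supports the explainability Nat emphasizes: the answer can show which jurisdiction, country, and province level each piece of guidance came from.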

Mehmet: I'm glad you know, the discussion brought us to this point because you mentioned the [00:09:00] human in the loop.

Right? Um, so what, where do you think, you know, uh, during the testing, and maybe now after you enroll it, like what is, what is the limit? What is the border of, you know, giving the machine, the agent, whatever you want to call it, the autonomy to take decisions? And when really do you think like the human is in the loop?

And now again, back to the complexity thing, because we talk about massive differentiation, and even maybe people will think, yeah, it's a slight differentiation. And here we talk about labor laws, taxation, accounting, because all of this is areas that G-P covers, right? Because you are in the business of employment, of the employer of record, and, you know, you take care of payrolls, you take care of a lot of things, right?

Legal, finance, and everything. So what are some of the points, if you can share that, Nat? [00:10:00] And I think this is beneficial for the broader, I mean, executives who are thinking to deploy AI agents currently. So when do we really have to keep the AI, sorry, the humans in the loop, versus, yes, we can trust AI to now do its job?

Nat: So, I think, Mehmet, I don't believe the technology is at the AGI level where we can trust AI to do things and think for itself completely. We are still at the point where we have to provide guardrails. And with Gia and with our EOR product at G-P, we have made sure that we provide those guardrails.

So the answers that it gives, right there are certain boundaries. We would not go outside, right? We have models which test that boundary to make sure that, you know, GIA stays, the answers stay within, within the guardrails that, that we have set. And, uh, when we find that, uh, the [00:11:00] answer is a little different than, than our set of assumptions, our set of data, that's when we bring in, uh, so humans in the loop.

When we, uh, we also use, uh, our internal users, our customers sometimes tell us, Hey, I don't think this is right. And that's another point where it triggers a human in the loop process to make, uh, you know, GIA better, the guardrails better. So I think in, in, in a sense, um. We are very careful about setting those guardrails and not trusting AI completely just yet.

Right? We are not there as a, I mean, as a technology capability just yet. You have to be very prescriptive on what those guardrails are. So we use a lot of LLMs today, as an example, only for natural language processing. Everything else, the answers that Gia gives, we have gone and vetted, right, close to a hundred thousand articles which are based on black letter [00:12:00] law and also the practical knowledge that we have got over the last 13, 14 years.

Plus, you know, we go and scrape about 1,500 legal sources, but all of that is vetted into these articles that are reviewed by human beings. Our RAG model goes and makes sure that it only gives answers from there. So that's a guardrail that we set. And if you go and ask it a question outside of global employment, Gia will come and say,

I don't know that. 

Mehmet: Right. 

Nat: So that's an example of guardrail that we have set. 
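
A minimal sketch of the guardrail pattern described above, assuming a retrieval step restricted to a vetted corpus with a refusal fallback when nothing relevant is found. The corpus, keyword-overlap scoring, and threshold here are toy placeholders, not G-P's RAG pipeline, which would use embeddings and a real vector index.

```python
# The assistant may only answer from a vetted corpus and refuses anything
# it cannot ground there. Document IDs and texts below are illustrative.
VETTED_ARTICLES = {
    "sg-employment-contracts": "Singapore employment contracts must state salary, hours, and notice period.",
    "de-probation": "In Germany, probation periods are commonly capped at six months.",
}

def retrieve(question: str, min_overlap: int = 2) -> list[str]:
    """Toy retrieval: rank vetted articles by keyword overlap with the question."""
    q_terms = set(question.lower().split())
    hits = []
    for doc_id, text in VETTED_ARTICLES.items():
        overlap = len(q_terms & set(text.lower().split()))
        if overlap >= min_overlap:
            hits.append(doc_id)
    return hits

def answer(question: str) -> str:
    sources = retrieve(question)
    if not sources:
        # Out-of-scope or ungrounded: refuse rather than guess.
        return "I don't know; that is outside the vetted global-employment corpus."
    return f"Answer grounded in verified sources [{'; '.join(sources)}]."

print(answer("What must a Singapore employment contract state about salary?"))
print(answer("What is the best pizza topping?"))
```

The second call illustrates the "I don't know that" behavior Nat describes: with no grounding in the vetted corpus, the system declines instead of letting the underlying model improvise.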

Mehmet: Is this where, you know, what I would call the general models fall short in your world, Nat? Because again, people here talk about how now models are commoditized, kind of, and some people say the important part is the data.

So, so is this like really what I would call it, like the differentiation for [00:13:00] you, especially in, in your domain. 

Nat: It absolutely is. I think there are recent articles which have come out saying that for some of the models, two thirds of the data they get is from unreliable sources like Reddit and Wikipedia or YouTube, et cetera.

So when it comes to specific domains, you know, legal, compliance, or medicine, right, you wanna make sure that the data the LLMs or the models have is curated, is trusted, before you can go and trust the models. And I think that's our differentiation today, because we've had this data for the last 14 years and we've been able to curate it and make it usable by a technology like AI.

And the evolution that I see is, you know, recently there were some companies which have a lot of information, proprietary data in taxes and accounting, doing a deal with OpenAI, saying they will then share this [00:14:00] knowledge with them. So I think proprietary data is the differentiator for AI.

I think you cannot create a great product, with the LLMs or without the LLMs, if you don't have good data.

Mehmet: Now, you mentioned something also about, you know, your models knowing from your knowledge. But again, especially in customer-facing use cases, Nat, there is a behavior for the LLM, through either a chatbot, you know, the chat interface, or maybe you are automating it to send emails, or maybe to reply on support cases and so on.

So, where, like you mentioned, safeguarding and guardrailing as well, what are kind of the, I would say, best practices for making sure the model behaves the best way? Of course, you gave an example: [00:15:00] I ask something not related to your domain and it answers me, I don't know, this is not my area of expertise. But I mean regarding, you know,

giving the trust, like, I mean, giving the trust to whoever is interacting with this agent: what kind of behaviors do you train it to act on, if I might say it this way, so it can be trusted by the enterprise users that you deal with?

Nat: Look, it's a great question. I think this is how we have been careful about training it:

with lawyers and HR professionals who will look at the responses, look at the reasoning, look at the logic, and give our engineers guidance, saying, hey, this is how I think we need to adjust the models. Um, so one thing we did in terms of guidelines and thinking was really this: if you ask it a question,

it thinks like a lawyer, right? So how does a lawyer [00:16:00] think? That's what we have sort of trained it to do. Secondly, it thinks like an HR professional: how would you respond to a professional? And unpacking that and really showing them two things. One is we have this tag called G-P Verified, right?

Mm-hmm. So we know that the data the answers come from is verified by G-P. We have looked at it; human beings have looked at it. So that's one thing. Secondly, the step by step of how it got to the answer. So if you unpack the answer, how it got to the answer was a second area.

So I think if you combine these two, it gives a lot of confidence in the fact that we got to this answer. Third, I mean, there is no substitute for having actual users use it, right, and give you feedback. In some cases, you know, we had to adjust the level of specificity. So in some cases we found that users in large enterprises said, look, it's too generic.[00:17:00]

Can you be more prescriptive? Right? So then we said, okay, black letter law is not good enough; let us show you the black letter law, but also let us show you best practices in this case. So we went down one level of specificity. Then we got feedback from large enterprises saying, hey, Nat, that's good, but I want you to answer based on my HR policies in addition to your, I mean, compliance.

So then we increased that, so now we have some concept of Gia Enterprise, where you can have a local tenant where you upload your data, you upload your information, you upload your policies. We don't share it, we don't put it back into our models or outside our four walls. But then Gia takes that as a source and answers based on your models, or your data.

So we don't go outside the guardrails that you have put. So I think we are [00:18:00] very careful about making sure that Gia has a really strong corpus of knowledge, but it answers based on what your policies are, and it keeps you within those guardrails that are important to keep you compliant.
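
A minimal sketch of the tenant-scoped setup described here, assuming each customer's uploaded policies live in their own partition and are searched alongside a shared vetted corpus without ever leaving that tenant. The class and method names are illustrative assumptions, not G-P's actual design.

```python
# Shared, G-P-vetted articles visible to every tenant (illustrative content).
SHARED_CORPUS = {"global-labor-law-101": "Baseline compliance guidance."}

class TenantStore:
    """Per-customer partition: holds that customer's own HR policies."""

    def __init__(self, tenant_id: str):
        self.tenant_id = tenant_id
        self._policies: dict[str, str] = {}   # stays inside this tenant only

    def upload_policy(self, name: str, text: str) -> None:
        self._policies[name] = text

    def searchable_corpus(self) -> dict[str, str]:
        # Shared vetted articles plus this tenant's own policies; other tenants
        # are never visible, and nothing here is fed back into shared models.
        return {**SHARED_CORPUS, **self._policies}

acme = TenantStore("acme")
acme.upload_policy("acme-leave-policy", "Acme grants 25 days of paid leave.")
print(sorted(acme.searchable_corpus().keys()))
```

The design point this illustrates is the one Nat makes: customer policies extend the answer space for that customer only, while the shared compliance corpus remains the common floor.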

Mehmet: Right. And, Nat, are you seeing also a shift of acceptance, I would say, especially, you know, from at least your customers on the idea? I don't want to use, I'm hoping it's not a trademark or something, but Microsoft called it Copilot, right? So they name it as a copilot.

So people are calling it a coworker or, you know, an assistant, whatever you want to call it, a collaborator; some people are calling it the collaborator. So from a global HR platform perspective, how does this work? Like, if I am one of your customers today and I'm interacting with Gia, how [00:19:00] would that experience look, having kind of a coworker with me, let's call it this way?

Nat: Um, you know, let me read you a customer quote. How about that? This is a customer who has used Gia. So the quote is: knowing that Gia's guidance is based on expert-reviewed, legally vetted sources gives me peace of mind. It lets me spend less time on paperwork and more time on strategic initiatives.

This is a company called Herb Pharma, and we have a case study on our website. Mm-hmm. You can go and take a look at it. So, you know, everybody in the boardroom is asking about AI. So let's start with the top of the house: you have board members saying, what are we doing with AI?

In the last two, three years there's been so much investment, so much buzz in the industry; management teams across the board are focused on how do you make AI work, how do [00:20:00] you use AI. Um, and so far there's a lot of hype around it, and some companies, or some use cases, are breaking through.

And what we are seeing is HR is usually, you know, squeezed for resources; they're asked to support more and more, but they're squeezed for resources. So I've seen a real step forward, where CHROs, HR operations managers, HR professionals are saying, what tools can I use, what better tools can I use, which will help make me more productive?

So that's what I'm seeing, and I think, you know, this example is one of them. And Gia solves this use case where today, say you are a company based in the UAE, so you're in Dubai, and you wanna hire somebody in Singapore. You don't have an entity there, or you may have an entity there, but you may have two people [00:21:00] working in Singapore and you wanna hire a third person.

So to give this person a legally compliant employment contract so they can start working, you would have to contact an external company in Singapore to make sure that you can give them a legally compliant employment contract, laws change, you know, pension laws, labor laws, et cetera. Um, and so you send a request, somebody in the, in your, uh, legal department, your HR department will try to find a, a, an external company.

They will charge you a lot of money to give you that answer, and it may take you days, if not weeks, to get that back. Gia answers that in a matter of minutes. So that's the kind of productivity that you see: you upload your, you know, Dubai-based employment contract, and Gia will give you a compliant contract based on that in Singapore.

Wow. And you can take it to your lawyer at, I mean, at the end of it, but 90% of the work is done. 

Mehmet: [00:22:00] That's really impressive, I would say. Now, let me ask you this, Nat. Sometimes people challenge, and we know this in the HR space, so people might challenge, for example, a decision. I'm talking here about the human aspect of things. Like, let's say,

by the way, in this contract, for example, based on, let's say, the UAE rules, you cannot have this, you have that. But maybe they are wrong, and then you go and say, hey, no, by the way, there is an amendment to this law, and based on this. So how comfortable do you feel getting the AI to do this, what we call explainability, right?

Because probably you are taking decisions that sometimes might have legal implications or maybe compliance implications. Or maybe someone would say, you know, I would accept only, let's say, to be [00:23:00] paid in my home country's local currency, but actually, for compliance, I need to be paid in, I don't know, maybe US dollars.

So when Gia comes into the picture and tries to explain this, how do you think about this explainability from a legal perspective, a compliance perspective?

Nat: Um, so, uh, you know, GIA has a reasoning. So it, it can unpack decisions, it can unpack, it can give you, you know, references. So, so what we do with every answer, we give you a couple of things.

One is we give you all the verified sources that Gia went to, to get the answer from, so you can go and check. Secondly, it tells you if it's a G-P Verified article, and you can go and see why we got up to that point, right? So we can explain as much as we want. I think sometimes, Mehmet, what we found is you need humans when it comes to dealing with human beings.

There are cases where you need a person to talk to [00:24:00] them, to, uh, I mean, explain to them, to give them some level of trust. Uh, so I don't think our, our goal with GIA is to replace human beings. It is really to augment, to make it easier, right? So you as a, as an employee, you can talk to GIA about your benefits.

You can talk to Gia about when the next payroll cycle is or when the next, uh, um, you know, holiday is, uh, I mean is in your country. But when it comes to sensitive matters, we still believe you need, you need, you need to have a human being dealing with human beings. So this is really augmentation to help productivity, uh, not as a replacement.

Mehmet: Absolutely. Yeah. By the way, like every, every leader I, I, I interviewed and I talk outside of the podcast, you know, like this theme is coming again and again. That, and people, I think now more than any time before they, they understood that, you know, this. I would call it noise hype, whatever you want to call it about.

Oh, like AI is coming to take your [00:25:00] job and everything. It's not the case, at least for the time being, you know, in the foreseeable future. And I have a book which I didn't finish yet, but I can recommend people go and start reading it, because from an economic perspective it's good to understand.

It's called Prediction Machines, updated and expanded, like this is a new version: The Simple Economics of Artificial Intelligence. I highly recommend people go read this book. And the reason I found it good to mention it is because of what you said, Nat, about, you know, there's a reasoning and there's a human. So there's a prediction, which the machine does, and there's something the book talks about:

it's judgment, right? So you need the human judgment, which machines are not always able to provide. So this is where you probably come back to your human in the loop as well. It makes a lot of sense. Now, maybe it's kind of a cliche question, and I know this, but: was there anything hard about [00:26:00] keeping, you know, the agent ethically grounded and at the same time powerful?

Were there any challenges in making that happen, or, because it's a closed model, did that not apply?

Nat: Look, we have models which go and check if we have any biases, right, in our model, in our training, in our data. So we have models. I mean, we have this model called the judge, which goes and judges every answer, and then it goes through, you know, specific use cases that we have.

So we do worry about it, and we do take measures to reduce it as much as possible.
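
A minimal sketch of the "judge" pattern Nat mentions, assuming a second-pass check that reviews every draft answer against a small checklist before release and routes failures to human review. The checks, names, and stub logic here are illustrative assumptions, not the production judge model, which would itself be a model call rather than a few rules.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    approved: bool
    reasons: list[str]

# Toy stand-ins for judge criteria (bias, scope, grounding); each check
# inspects the draft answer and returns True when it passes.
CHECKS = {
    "stays_in_scope": lambda ans: "employment" in ans.lower(),
    "cites_sources": lambda ans: "[" in ans and "]" in ans,
    "no_absolute_legal_advice": lambda ans: "guaranteed" not in ans.lower(),
}

def judge(draft_answer: str) -> Verdict:
    failures = [name for name, check in CHECKS.items() if not check(draft_answer)]
    return Verdict(approved=not failures, reasons=failures)

def release(draft_answer: str) -> str:
    verdict = judge(draft_answer)
    if verdict.approved:
        return draft_answer
    # Anything the judge rejects goes to a human-in-the-loop queue instead.
    return f"Held for human review: failed {', '.join(verdict.reasons)}"

print(release("Employment guidance grounded in verified sources [sg-employment-contracts]."))
print(release("You are guaranteed to win any dispute."))
```

The point of the pattern is the routing: answers the judge approves go out, and anything it flags becomes a human-review task rather than a customer-facing response.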

Mehmet: Nat, also, I believe you mentioned years and years of knowledge and experience that was there, and I'm sure there was a huge effort from the team to build all of this. So what kind of, you know, [00:27:00] engineering talent did you need to build a platform like Gia? And from a best practices point of view,

because we talk about innovation and we talk about compliance at the same time, how do you balance, you know, flexibility slash freedom in innovation while at the same time making sure that this agent is compliant, with all the things we mentioned, and not biased and so on? So, can you let us know more, only high level, of course, about what kind of talent you needed to build the platform?

Nat: You know, I'd say, I mean, innovation is a mindset, and we've had that as a company. I think Nicole, our founder, founded this company when no one knew what an EOR was, and she stitched this together, and now it's a pretty big industry across the world, and a lot of competition has spawned out of that as a result.

[00:28:00] So that spirit of innovation has always been there; it's a spirit of really breaking boundaries and doing things which are not done today. The second part of our culture is really focused on compliance, right? She set up this company to make sure that our customers have peace of mind, because we have done the hard work of making sure that their employees, our employees, are compliant when it comes to global employment.

So those two things were sort of culturally baked into the company, right? Uh, so then you add on some, um, you know, young, motivated, uh, and experienced AI engineers, which we did. And, and we were fortunate to bring in a team who has worked together, uh, over two companies. So they came and joined us, and they started understanding what the ecosystem was and what the culture was and what the problems were.

So combining that engineering skillset and mindset with a culture of, you know, innovation and compliance really is the key, right? And look, in any new place or any established organization, doing something new is always hard. You gotta break some eggs, you gotta crack the eggs a little bit.

You gotta look at it differently, and then you gotta find a way to curate it before you take it out to customers. So I think that's the process of innovation that we went through. In fact, Gia as an idea came from a hackathon.

And so that hackathon then became an interesting idea. We put more money behind it, we started using it internally, and then, you know, within six months we were like, man, this is so great, we can take it outside our four walls. So that spirit of innovation, but also with the culture of compliance, with really great talent, I think that's the secret sauce.[00:30:00]

The one maybe lesson learned, and maybe two, for your listeners, that I found: if you have an engineer in their early career, they have less fear. They don't think about, you know, the legacy systems; they think about solving problems with this sort of technology. You marry them with someone who's had a little bit more experience, and they're unstoppable.

Mehmet: Yeah. So 

Nat: One of my early engineers, who's in their mid-twenties, he's my mentor. I learn from him, right, because I'm not schooled in this; when I started, we didn't have this. So I use him as my mentor to teach me what trends are happening in AI.

Mehmet: Wow. That's fascinating.

Now, you mentioned kind of the [00:31:00] profile of your stakeholders, Nat, and, um, you know, for large enterprises, I believe it's like CHROs, and maybe in startups it's still the founder, or founders, and their team. Correct me if I'm having the wrong impression. I think startups are more ready in general for what you offer, regardless of the AI part.

And then of course, because they are startups, they know about AI. I'm interested in the enterprise part. Like, how are you seeing, you know, the leadership mindset shifting towards adopting AI? Of course, you mentioned also, a couple of minutes ago, how the board is pushing everyone: adopt AI, adopt AI, adopt AI.

But you know, HR specifically is a different beast by itself. I spoke to a lot of people in the domain, purely HR people, not tech people, and [00:32:00] everyone is telling me, you know, the way we used to think before about traditional HR, it's not the same. So now, how is this changing with all the new

advancements around AI and agentic AI, and, you know, probably a lot of other innovations that will come? So, if you want to describe someone doing it the right way, I would call it this way, what kind of shift in leadership are you seeing happening now to adopt AI properly?

Nat: So you talked about, um,

you know, enterprises, CHROs of enterprises, so I'll take that question first and I'll get to the other one. Sure. Um, you know, it's interesting. I also thought initially that it would be a lot of, you know, startups who would use Gia. They have more propensity to use tools. They are, mm-hmm,

Mehmet: Moving 

Nat: constraint.

Right. And then [00:33:00] what I'm discovering is. Uh, larger enterprises have bigger problems, bigger productivity issues, uh, because they have now talent, uh, and access to talent across the globe. So their jobs are becoming harder, right? And they spend a lot more money and, and need a lot more time to manage this.

So we've signed up a couple of large enterprises, actually more than a handful, who operate in 20-plus countries. And for them, when they started using Gia, they're like, oh my god, this is such a time saver for us, such a productivity saver for us. You know, one of them bought a hundred licenses, which, I was like, I thought I'd sell two or three licenses to a large enterprise.

They're like, no, we want every HR professional to use it because it so much increases, uh, um, productivity. So I think the bigger you are, [00:34:00] uh, and the more global your operations are, GIA actually is a better fit for you. 

Mehmet: Mm-hmm. 

Nat: Right. Because it saves you a lot of time and money to run your operation.

Um, I, I think, um, because the CHROs sit in boardrooms, I'll go to your second question. They sit in boardrooms. Um, they have adopted some level of technology in their functions, uh, and now they're being asked to figure out how do I. Uh, change or transform organizations with, uh, agentic AI in mind. What does it mean to the workforce now all of a sudden, in addition to a human workforce, I have a digital workforce.

How do I manage that? Right? That's, that's the kind of thinking that I see CHROs, you know, going through. And then the natural question is, okay, if I have to help the corporation or the enterprise think about, uh, how do you transform using this technology, then what do I do with [00:35:00] my own function? So that's the kind of thinking that I'm seeing in, in the mindsets of, you know, HR leaders across the globe.

Mehmet: Nat, I think to back to the point you mentioned about, uh, you know, how the hr, because of the global workforce and you know, the, I, I think another thing, and you mentioned productivity, you mentioned saving cost. Um, I'm not sure if this is one factor, and this is just popped up in my head as a factor. So if you think about it, you know.

Like, the human resources department is the first and last department you deal with as an employee. Yeah. So you deal with them when you onboard, and you deal with them when you leave. So I think also this will affect the culture of the company, because, you know, when things get faster, mm-hmm, also me as an employee, you know, I would have a better impression about this company, or, wow,

Like see how smooth they do things. And this is, um. [00:36:00] Of course we know, like there are some platforms where people can write, you know, their reviews, their experiences, and I think this is, correct me if I'm wrong, also, it'll, it'll get them into a better, uh, reputation as, as employers also as well, right? So, because, you know, the, the, let's call it the employee experience or the onboarding experience, right?

Nat: Absolutely. I think, look, making that first interaction with your employees smooth and frictionless matters. You know, my sons are 26 and 22, and their view of how companies and corporations deal with them in a very digital-first way is very different than when I started my career 35 years back.

So I think having that, that that experience for your employees, which is, you know, a smooth, frictionless, on demand [00:37:00] personalized, um, is, is a no-brainer, right? And as, as you do that, it changes the culture and the engagement of the employees with you. 

Mehmet: So that turns into a positive outcome, I would say. You know, people sometimes underrate the culture part, but for me, I think this is a very, very important factor to put in.

Now, other than AI, and, like, you've got Gia now and people are using it, what other things can you share? Maybe I don't like to ask the question about future trends because, you know, it's very generic; no one knows the future anymore. Like, every day we see new models coming, new things. But if you want to give us maybe your own opinion on where we are heading with the advancement of these technologies when it comes to global employment:

where do you envision, you know, this domain to be, or this vertical to be, let's say, in the near future? I will not ask you about five, ten [00:38:00] years; of course no one knows. But I mean, in the foreseeable future, what major things can we also expect? And maybe something you are working on currently that you can share with us, that would be great.

Nat: Yeah. Look, you're right, I will not predict the future; that is always a dangerous thing to do. What I do see is, I think the complexity of global employment is going to continue to increase. You know, we have countries, jurisdictions in the world, that are trying to regulate more, trying to improve, and adding new sorts of constraints.

So that's not going to change. So that's a constant, which is gonna keep on evolving. Um, I think the, the future around global employment is really, we believe talent exists anywhere in the world and access to talent, uh, you know, companies which find ways to, to access that, that talent, leverage that, uh, that talent in a frictionless way.

[00:39:00] will have better outcomes for their stakeholders, for their customers, for their investors. So platforms like ours are really gung-ho on automating as much as possible, while making sure that the things that need to be handled by human beings are. So the combination of technology and humans, I think that's something that I see coming to fruition.

I mean, so far. So if I look at just the next 12 months, I think the first three years of AI has been a lot of hype, a lot of infrastructure building. Um, and, and now you see the first signs of real productivity coming in, in certain pockets. So the next 12 months, I think, really will be companies taking a use case, right?

Taking a business process and completely going agentic with it. And as a result, they'll start seeing real productivity. So I do believe automation with humans in the loop, [00:40:00] augmenting human beings, is something that companies are gonna benefit from, and employees are gonna benefit from, in the next 12 months.

Mehmet: Yeah, this is the hope. Fun fact about, you know, asking about trends, because you don't predict the future, and I don't either, Nat: when I was preparing to launch the podcast three years ago, ChatGPT had just come out, right? Yeah. And, you know,

I was trying to think what kind of questions I could ask, and of course, the last question: trends. And once ChatGPT came and I started to see, you know, I removed this question. I said, it doesn't make any sense to ask anyone in any domain, especially for a technology and business podcast, because, yeah, things are moving fast. But I would agree with you about, you know,

being optimistic about the future. And I liked when you said that it was kind of experimental these past few years, now three years, again, since we saw the first general use of generative AI, [00:41:00] and now we start to see agentic AI. So it's exciting times to be in, and it reminds me a lot of,

And then, you know, the rest is history. Now if you want to give us like kind of maybe final thoughts, closing remarks from Unad, um, maybe something I should have asked you briefly and I didn't. So feel free this is a space for you and. As usual, I keep kind of a call to action, like where people can get in touch and learn more.

Nat: Thank you, Mehmet. Look, it's an exciting time to be alive. It really is. I think this AI technology is in its infancy; it's gonna transform everything that we touch. And I'm sure it's a cliche, but I feel it's really [00:42:00] real. The trend, or the big shift, we saw in technology with the internet, the big shift we saw with cloud, the big shift we saw with mobile,

all of that is probably not as big as this shift to AI and, you know, having digital agents on your side, making you more productive. So it's a really, really fascinating time. So my call to action really is: take advantage of it. Don't be afraid of it. And to take advantage of it, you'll have to take some risks, and, you know, it can be careful, it can be measured.

Um. But in my opinion, if a company's not thinking about it today, they're not experimenting with it today, whatever the function is, whatever the function is, um, you'll get left behind. This is a wave that you cannot stop. Uh, you can be a part of it and you can lead it. So, uh, take [00:43:00] a little bit of risk here.

Believe in it, right? And, and, uh, you know, it, it is here to stay. This is not a fad anymore. 

Mehmet: Absolutely. And where people can, can get in touch, Nat. 

Nat: So, um, you know, we are on our website, g-p.com. You can email me, nat@g-p.com. And I would love to talk about our Gia product; I would love to talk about our EOR product, which is rated one of the best in the world.

So, but look, I'm, I'm, I'm so glad that you had me on your show. I'm super excited about the next generation of technology and, uh, what we are doing as a company to make it easy for all of you to, uh, uh, you know, employ people anywhere in the world. 

Mehmet: Absolutely. And it's, uh, an absolute pleasure to have you here.

Nat, today. For the audience, the links that Nat just mentioned, you will find them in the show notes if you are listening on your favorite [00:44:00] podcasting app; if you're watching this on YouTube, you'll find them in the description. I just want to highlight one more thing, Nat. You know, I'm always kind of biased toward any product or any service that solves complex problems and makes

people's lives easier. And I think this is one of those things, you know; the business you are in is exactly this. And kudos to you and the team for putting out this product for everyone who is in need, and there are a lot of companies who are in need. As I said, you can find all the links that Nat mentioned in the show notes, and this is usually how I end my episodes.

If you just discovered us by luck, thank you for passing by. I hope you enjoyed it; if you did, do me a favor: subscribe, share, and, you know, tell people about this podcast. And if you are one of the people who keeps coming again and again, thank you so much. This episode is probably airing in the last week of December or [00:45:00] maybe before.

So a big thank you to all the audience for 2025; you took the podcast to a complete new level. We were always in the top 200 of the Apple Podcasts charts in one country or another. So this cannot happen without real support, and, you know, people who come again and again to listen to all the insights of all my esteemed guests that I had this year and even in years before.

So thank you very much, and, as I always say, stay tuned; we'll be back with a new episode very soon. Thank you. Bye-bye.