AI Is Being Deployed Without Control. Security Is Playing Catch Up | Tim Freestone

In this episode of The CTO Show with Mehmet, Mehmet sits down with Tim Freestone, Chief Strategy Officer at Kiteworks. AI is already inside the enterprise, but control is not keeping pace.
The conversation reframes AI security as a data control problem rather than a tooling problem. Tim argues that agents are not just another interface. They act, call tools, move data, and introduce a new identity layer that most enterprise security architectures were not designed to govern.
If you are leading, securing, building, or investing in enterprise AI systems, this conversation clarifies where the real risk sits: data access, agent identity, sovereignty, and governance.
About the Guest
Tim Freestone is the Chief Strategy Officer at Kiteworks, a company focused on secure content communication and data protection. His background includes roles at Contrast, Fortinet, NetApp, and over 10 years running his own business supporting technology and cybersecurity companies.
Tim brings more than 22 years of experience across cybersecurity, strategy, go-to-market, and enterprise security. His perspective is grounded in how enterprises are actually deploying AI, where governance is lagging, and why data layer control is becoming central to AI security.
LinkedIn: https://www.linkedin.com/in/freestone/
Website: https://www.kiteworks.com
Key Takeaways
- AI adoption is no longer waiting for enterprise readiness or formal governance.
- Employees are already creating shadow AI risk through uncontrolled tool usage.
- AI agents introduce a new identity layer that security teams must govern.
- Data protection becomes harder when agents can access information at machine speed.
- Sovereignty is no longer just about where data is stored.
- Frontier AI models force enterprises to choose between control and capability.
- Security architectures built around infrastructure need stronger data layer controls.
- AI-powered vulnerability discovery changes the speed and scale of cyber risk.
What You Will Learn
- The difference between chatbots, copilots, and agents in enterprise environments.
- How uncontrolled AI usage creates hidden exposure inside organizations.
- Why agent identity needs to be governed like human identity.
- The reason data security becomes the starting point for AI governance.
- How sovereignty changes when enterprise data moves through external models.
- What CTOs and CISOs should prioritize when AI enters production.
- Why AI-specific security roles are becoming necessary inside enterprises.
Episode Highlights
00:00 - Why AI security starts with enterprise readiness
02:30 - AI is being deployed before governance catches up
04:30 - Agents act differently from chatbots and copilots
07:00 - Shadow AI creates a new enterprise exposure layer
10:30 - AI agents become new actors inside security architecture
13:30 - Data layer control becomes the security priority
15:00 - Sovereignty becomes harder when AI moves data
18:30 - On-prem interest returns as control concerns rise
24:00 - AI models change the vulnerability discovery equation
29:30 - Agent native security starts with controlled data
Listen Now
Available on all major podcast platforms and YouTube.
Connect with the Show
Follow The CTO Show with Mehmet for more conversations at the intersection of technology, startups, and venture capital.
Mehmet: [00:00:00] Hello, and welcome back to a new episode of The CTO Show with Mehmet. Today I am very pleased to have joining me from the US Tim Freestone. He is the Chief Strategy Officer at Kiteworks. Tim, thank you very, very much for joining me this early morning for you here on the show. Before we dive into anything, what I do with all my guests is keep that space for them to introduce themselves.
Tell us more about you, your background, your journey, and then we start the conversation from there. Just as a teaser to the audience: of course, nowadays we're discussing this topic a lot on the show, the AI intersection with how it's changing the way we need to protect our data, safeguarding, and all these important topics.
So without further ado, thank you again, Tim, for being here with me on the show. The floor is yours.
Tim: Yeah, thank you Mehmet. I appreciate it. My name is Tim Freestone, as you said. I'm the Chief Strategy Officer at Kiteworks. I've been with the company just under five years. [00:01:00] God, it moves fast when you get to this stage of your career, doesn't it?

Mehmet: Yeah.

Tim: Before that I had a short stint at another cybersecurity company called Contrast, which mainly focused on application security. I was with Fortinet for several years, which many people may know, then NetApp, and then about 10 years running my own business supporting tech and cyber companies around strategy, go-to-market, and things like that.
So I've been doing this now a little over 22 years, which is crazy, but it still seems to be fun. And with AI coming into the world, it got me reinvigorated, let's call it that.
Mehmet: Right. It's really crazy, because I think I've been around about the same amount of time as you, since my college years, when I first got exposed to the technology world. So yeah, it's crazy times we're living in currently as well. [00:02:00] I like to start with some market reality with you, Tim. Every technology executive I talk to, whether a CTO, a head of department, or even a cybersecurity leader, talks about deploying AI in their organization.
But from what you are seeing, is the enterprise actually ready for what it's building?
Tim: Yes and no. I mean, it depends on the organization, obviously the leadership, and when they started down the journey. One thing I will tell you is that whether they're ready or not, it's getting deployed. There are pockets in every enterprise, and some of those pockets are rather large, of people just using it in uncontrolled manners. It's just something you can't stop. There's too much [00:03:00] value in the productivity. Especially in the last three months in particular, there's been sort of a tipping point with capabilities, and along with those capabilities the rumor mill abounds on what you can do and how well these things work, and people just start using them. So whether enterprises are ready or not, they're being deployed. Now, in an orchestrated, organized manner? I don't see that as much. There are a lot of pilots. Everybody's hoping this year is the year from pilot to production: controlled architectures, dedicated teams focused on enablement, organization, management, and security. And everyone's getting there. It's just taking a lot longer than for the people who are just using it on the ground to do their jobs a little bit [00:04:00] better.
Mehmet: Right. And as you said, it's like the year where we are taking it from the lab to production. Right? And this year we are also seeing kind of a shift in the paradigm, from AI as just a chatbot or a copilot to the AI agent. We're talking a lot about AI agents. So what makes an AI agent, and I'm talking in the enterprise of course, fundamentally different from just deploying a chatbot or maybe a copilot?
Tim: Yeah, it's a good question, and it depends again who you ask and how deep into the engineering communities you go with the question. I think I heard Anthropic give a definition a couple weeks ago of what an agent is versus a chatbot, and I believe it went something like: an agent is AI that uses [00:05:00] tools over time. If you think about it that way, with chatbots, it's: I have a question, I ask you the question, you give me an answer. Hopefully it's the right one, and that's actually gotten a lot better as well. Whereas with an agent, it's not a question, it's an order: do this for me. And when you ask it to do that, it calls a number of tools. Tools are essentially applications or access points, basically API calls, to do some sort of action or retrieve some sort of information and produce some sort of outcome.
So that's the general difference. And those have gotten better in the last, again, three months. It really has been [00:06:00] this quarter that the functions of agents have improved; they used to just go off the rails. I mean, I've been using AI and agents since you basically could, and they were really unreliable. Now they've gotten a lot better. You can put guardrails around them, and people are seeing a high level of productivity in their daily life.
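The "agent is AI that uses tools over time" definition Tim relays can be made concrete with a short sketch. Everything here is hypothetical illustration: `model_reply` and `model_plan` are canned stand-ins for an LLM call, and a real agent framework would wire them to an actual model API.

```python
# Hypothetical sketch of the chatbot-vs-agent distinction described above.
# model_reply / model_plan are canned stand-ins for an LLM, not a real API.

def model_reply(question):
    return f"answer to: {question}"

def model_plan(history):
    # Canned planner: call the lookup tool once, then finish.
    if len(history) == 1:
        return {"type": "tool", "tool": "lookup", "args": "sales.csv"}
    return {"type": "finish", "answer": f"done after {len(history) - 1} tool call(s)"}

def chatbot(question):
    """One question in, one answer out: no tools, no state."""
    return model_reply(question)

def agent(order, tools):
    """An order in; the model chooses tool calls over time until done."""
    history = [order]
    for _ in range(5):                    # bounded loop as a crude guardrail
        action = model_plan(history)      # model decides the next step
        if action["type"] == "finish":
            return action["answer"]
        observation = tools[action["tool"]](action["args"])  # e.g. an API call
        history.append(observation)       # observation feeds the next step
    return "stopped: step budget exhausted"
```

The loop is the whole difference: the chatbot returns once, while the agent keeps acting on tools until it decides it is done or hits a step budget.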
Mehmet: Right. So before we go into the guardrails, one thing I want to ask you, Tim: with the use of AI, we sometimes see a push, where someone sitting in the office or working on a project sees all these announcements coming from all the major players in this domain. Oh, look, they have released this agent for that, this agent for this. So do you think there are some blind spots here, hidden exposures, kind of a shadow layer that's happening in the enterprise?
And how big is that, [00:07:00] in your opinion?
Tim: Yeah, for sure, and it gets back to the point I made earlier about general readiness and the organization around deployment. Most employees, and I'll just use Kiteworks as sort of a litmus test here, are to some degree using the tools. Now, we've done a good job at Kiteworks of deploying systems, using single sign-on, enabling everybody, et cetera. So I don't know if we're a little bit ahead of the curve on that, but at most companies, employees are using some form of AI, whether it's ChatGPT and just Q&A [00:08:00] all the way up to Claude Code actually doing work for them. Now, it gets pretty dangerous pretty quickly. In my daily use, I would probably say I'm one of the more advanced users just from a productivity standpoint. If you had asked me one year ago how to log into GitHub and what it does, I would have had no clue. Now I'm in it every day. I had no idea what a terminal was; now I'm in the terminal every day with Claude Code. The first couple of things I do every day are load the terminal, type the word "claude," and then, unfortunately, "dangerously skip permissions," because I don't want to be bothered while it's doing some sort of task for me. I tell that story because I have it fairly well guardrailed, but many other people may not, and with commands like "dangerously skip permissions," it dangerously skips permissions. So you combine this [00:09:00] shadow AI, people just using whatever they want, with the capabilities and the ability to drop guardrails through commands if you want, and you have a lot of rogue things in your network doing rogue things with your data.
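The guardrail Tim describes dropping can be pictured as a small permission gate in front of tool calls. This is a toy sketch, not any real harness's implementation: the tool names and policy set are invented, and "skip permissions" here simply removes the human-approval step, which is the risk being described.

```python
# Toy permission gate of the kind an agent harness might put in front of
# tool calls; all names and the RISKY_TOOLS policy are invented examples.

RISKY_TOOLS = {"delete_file", "send_email", "run_shell"}

def gated_call(tool_name, run_tool, approve, skip_permissions=False):
    """Run a tool, asking a human first when the tool is risky."""
    if tool_name in RISKY_TOOLS and not skip_permissions:
        if not approve(tool_name):   # human in the loop
            return "blocked"
    return run_tool()                # executes with no further checks
```

With `skip_permissions=True`, a risky call goes straight through even when the human would have said no, which is exactly the convenience-versus-safety trade-off in the anecdote.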
Mehmet: And this is the biggest challenge, I would say, because we don't know what people are actually doing. Maybe I'm repeating the same story again, but for about the past six months I've started to receive calls, because people know I'm following the trends, I have the podcast, I'm up to date with almost every single tool that's coming out. So people started to ask me: is there a tool that's kind of a data loss prevention, but for AI? And I'm asking, what do you mean? They said, yeah, how can I ban people from accessing AI in my organization? Of course, I know the reason: they don't want to share private data with these LLM models. I said, okay, but look, the reality is you can't stop people [00:10:00] from doing this, because you can simply take a photo with your phone, and AI is becoming so good at OCR that it can detect the text from the photo. So it's up to the people who are working there. Now, saying that, I know you've published something recently where you said AI is being deployed faster than it's being governed. If we think about this, where is the biggest gap right now? Is it the policy, the tooling, or the visibility?
Tim: Yeah, so look, the one thing to make clear with cyber: there's always a way. This idea that you're going to create a fully 100% secure environment for anything is inaccurate, and anybody who's been in cyber long enough knows that. As a matter of fact, you have to [00:11:00] operate under the assumption that the bad guys are already in; that's what's referred to as "assume breach." So I'll start with that, and then say that what you're trying to do is reduce risk as much as humanly possible. With AI, you're essentially looking at a new person that has access to data and can access data way faster than a person. A new actor, if you will. So a lot of the systems, or approaches to how you safeguard the network, the endpoint, the cloud, the user for the human-only world, which is an interesting statement I can't believe I'm saying, still apply conceptually in this new AI world. It's just that you have to do it differently. You need all of these safeguards. You need to control the identity of the agent the same way you control the identity of [00:12:00] the person. You need to control the environments they work in, the endpoints, the cloud, the network. All of these things need agent-native security layers in order to reduce risk as much as possible. Now, to get to your question, the answer is all of it. But at the end of the day, and this is kind of a narrative I've had for a while about all cybersecurity, with the exception of sabotage-based cybersecurity: if you think about it,
the whole point is to protect data. That's the point, right? And the cyber industry has spent 50 years protecting the network, protecting the endpoint, protecting the cloud, all this infrastructure security, in order to protect the data. The end game isn't to protect the network; you protect the network so you can get data protection, [00:13:00] and so on and so forth. Now, the reason that's been the strategy is that, historically, it's been easier to protect those things than every single individual piece of data. So AI has driven a lot of the challenges, but it's also given us a lot of the tools to bring cybersecurity down to the data layer. So while governing the infrastructure and governing the agents are important, and you should do it, the most important thing you can do is govern the data: control and protect every individual data point that's regulated or private with data-layer controls and data security. And you start to see companies doing that more now than they did 20 years ago. One of the big reasons, again, is that AI can enable it, but there's also this drive because [00:14:00] agents are hungry for data, and you really just need to get a handle on that. So it's a long way of answering your question, but if I were to prioritize or give guidance for priority, I would say start with the data.
Mehmet: Start with the data. Now, there's another angle to that, which we've started to see a lot. I cover some weekly signals as well, and for the past while, maybe since the beginning of 2026, there's been a common theme about sovereignty. Right? And I've seen you say that AI fundamentally changes the data sovereignty equation. How, in your opinion?
Tim: Yeah, I mean, look, data sovereignty, digital sovereignty, it's all about control: maintaining [00:15:00] control over your assets and ensuring that they are yours, and that access to and sharing of all of those assets are highly governed, oftentimes with a regional specificity. Right? So there's a lot coming out recently; in France, I saw this morning, there's a big push for digital sovereignty, and unfortunately for us here in the States there's a bit of a France-only sentiment, definitely not American, understandably. But that's a lot easier to realize in a non-agent, non-AI world, because of everything I just mentioned. Proliferating data can become uncontrolled. You may have a rule that this piece of data cannot leave France, cannot leave [00:16:00] Germany, no matter what, but AI can just take data and move it wherever it wants, send it wherever it wants. So now you really need those data-layer policies and controls in order to maintain digital and data sovereignty. And you're right, it's very big right now, obviously in every country in Europe and in the Middle East. It's been in the Middle East for quite a while, but it's moving a lot into Europe, and we even see our neighbors to the north in Canada focusing on it a lot more. But in the AI world, it's definitely more difficult to maintain.
Mehmet: Right. You know, it's not a surprise to me. If we go back maybe 10 or 15 years, when the wave of cloud computing started and everyone started to adopt cloud, whether infrastructure as a service or SaaS [00:17:00] solutions, this question came up. Funny enough, I used to work for one of the data protection companies back in the day, and when it came to protecting cloud workloads, to my surprise, a lot of people didn't know that protecting their own data was their responsibility.
Now, the reason I'm asking this follow-up question on ownership and control, Tim, is: with AI, who controls the data? Of course, we know it should be controlled by the enterprise, but even if you have a local data center, and all the hyperscalers have a footprint here in the Middle East, of course in the US, in Europe, everywhere, let's say the data enters one of the [00:18:00] frontier AI models, whether OpenAI, Anthropic, or Google with Gemini and so on. We are giving data to these models too. So how does this new paradigm shift affect the discussions around ownership and control of data?
Tim: Yeah, I see what you're getting at. Well, it's interesting, because it's sort of one of those what's-old-is-new-again scenarios. There's a big movement for companies to go on-prem again, for exactly these reasons: take the data out of the cloud, build out your infrastructure, get complete control over it, and manage everything locally. It's actually a pretty big movement right now. I read something recently on the stats, but of course, it being 8:30 in the [00:19:00] morning, I can't remember anything specific right now. I do know it's a trend; our customers are talking about it.
Now, to your question: there's some level of trust, and some of that trust is contractual. You can get all of your data into your data center, on-prem, in your country, for digital or data sovereignty use cases, but the models are not in your data center. You can't take Anthropic's Claude 4.6 and put it in your data center, right? It's not open weights, it's not open source. And so your data, inevitably, if you want to leverage the latest in technology, has to go somewhere else. There are contractual agreements you can get into where the foundational model providers don't look at your data and don't train on your [00:20:00] data. So again, we're back to: what risk threshold are you willing to accept in order to realize the competitive advantage these can bring to your company? But your data does move if you're going to leverage these. On the flip side, you can put a model locally; it's just not even in the same universe as Anthropic or ChatGPT, or now Meta. X will probably come out with a new version soon, too. So the companies that are serious about AI leverage are still using the foundational models to drive their business, and their data is moving. Now, as to whether you can contractually ensure that the model inference happens in a data center local to your country, or whatever digital sovereignty requirements you have, that I'm not a hundred percent sure on. I'm sure there are some [00:21:00] capabilities depending on the size of the contract. But you're right: you can't have complete sovereignty and also leverage the foundational AI models. The data does move.
Mehmet: So if you are sitting with a CTO who has invested, let's say, in an Nvidia stack, a hyperscaler, and some Microsoft security tools, are they covered? Should they take action? What would you advise them, Tim?
Tim: Are they covered in what sense?
Mehmet: In the sense of security and stack reality?
Tim: I don't know. They would just have to apply all of the measures I started to talk about in order to reduce risk. Again, no one's covered in terms of full-lockdown cyber; it's just not a thing. [00:22:00] Anybody leading an IT organization or a security organization operates with the knowledge that "I have to assume there are holes" and constantly looks toward reducing the risk of those holes. We haven't even started talking about the new mythos, however you want to pronounce it, the model that's coming out, and what impact that has on all this. But to answer "are they covered": it's a huge question, and I'd have to look at every single person who asked it and assess the risk.
Mehmet: What I find, I don't know if the word funny is the right one here, but it's mind-boggling to me to think how the internet pushed us to have this interconnected world, how it sparked the whole cloud computing idea and how enterprises could [00:23:00] interact with each other across data centers. And now with AI, it looks like, to your point, we're pushing outside of the cloud and going back to on-premise. So it's kind of contradictory to everything we were trying to do for the past 15, 20 years with cloud computing. And I think for the first time, of course we always knew the importance of data, but not like any time before, everyone is starting to understand that data is the most important asset: I can utilize it with a model, I can train on it, I can build agents on it, I can do a bunch of things, but it needs to be my data, and protected. Now, because you mentioned these new models, it looks like we're entering a phase where AI is becoming the biggest [00:24:00] attack surface in the enterprise. Would you agree with that, Tim?
Tim: Yeah, I would.

Mehmet: And what can you tell us about what you're seeing?

Tim: In terms of the new models?

Mehmet: The new models, and how AI is shaping this attack surface today.
Tim: Yeah, I mean, look, you sort of have to balance the marketing hype with the real functionality of the new model from Anthropic, and then Spud, I think they call it, from ChatGPT, that's going to come out. There's always a bit of hyperbole for the sake of shareholder value in these things. But suppose you treat the hyperbole as fact. It would be better, in [00:25:00] shareholder-value terms, to release models immediately to stay ahead of your competition; so for Anthropic to not release it to the public at all, and to keep it within these 40 or 50 companies, means there's something there. And you know, it's funny: when they released it, they noted that it had found something like a thousand zero-day vulnerabilities.
Mehmet: Mm-hmm.
Tim: Finding one zero-day vulnerability is quite a big deal, or used to be quite a big deal. And then you see the stuff people write about it: oh, it wasn't really a thousand, they were really old vulnerabilities, it didn't really matter, all these excuses. And when you look at it, really, there were only 138 zero-days. It's like, you could have said it found a hundred thirty-eight zero-day vulnerabilities and we'd still have our minds blown, right? So the [00:26:00] reality is, assuming this stuff isn't an absolute fabrication, the models are incredibly powerful and incredibly dangerous from a cybersecurity standpoint. If you can find a vulnerability that's 27 years old, where you had to chain together four different vulnerabilities to get to it, and it happened automatically with no humans in the loop, that changes the attack surface. It makes every single piece of software essentially vulnerable. And to take the path of "let's give these 40 or 50 cybersecurity companies the opportunity to understand it, patch at least what they have, and enable them to help other companies patch at [00:27:00] great scale," I think, is a smart move. But we'll see if that's enough, and we'll also see if they actually release it.
Now, where it gets really interesting is if you look at China's models. They're on average seven to 10 months behind the foundation models' capabilities, right? And they don't have quite the same guardrail element in what they're doing, at least historically with these models; they just release them. So given that, and given that sort of track record, it's highly likely that in seven to 10 months, Qwen, I think Alibaba's is the Qwen model, or Kimi, or DeepSeek, or some or all of them, will have some sort of comparable model in the wild. That scares me a lot more than the foundational models, because of that throw-it-out-there-and-see-[00:28:00]what-happens mentality.
Mehmet: Because it's open source, I think, also, right? Usually they release it as open source.

Tim: It's usually open weights, which is a little bit of a nuance. But the fact of the matter is the public can use it.
Mehmet: And honestly, we can debate this, of course. What I tell people, especially around the 2024-2025 discussion, when we saw some voices in the industry calling to slow down on AI: okay, let's try to get things regulated first, let's put some legislation in place, and so on. My point of view was that we were already too late, because once these open-source models were [00:29:00] in the wild and people started to see how good they were, I think we were too late for this. And don't forget, the GPTs, which come from the transformer model, have been around in different places for a while as well. So it's debatable, but yeah, that's really scary. And to your point, I covered this in one of my posts: I think one of those zero-days was there for 25 years or so.
Tim: Yeah.

Mehmet: Which is really scary. Really, really scary. Now, if I'm a CTO and I'm actually building something, or maybe I'm responsible for an enterprise and I want to stay ahead, in practice, how would I do this today?
Tim: Yeah. I mean, again, I think it just comes back to what I said earlier: you have to create a cybersecurity plan that's agent-native, and you have to start with control of your data. There are new industries that have popped up around data-layer control, [00:30:00] and there are companies that have been around for 20-plus years, Kiteworks being one of them, that provide data-layer controls and what we call a controlled data environment: you put your regulated data into it to ensure you have policies at the data layer for humans and agents, so that even if they go off the rails, they can't go off the rails, because we have a policy on every single data asset. You combine the controls on data access and data in use that we provide with something like a DSPM, a data security posture management tool, that will inventory your entire network, your cloud, and your endpoints for all of the data and identify what's private. You combine those two things and you get a really big jumpstart on [00:31:00] ensuring that you have policies and controls on every single individual data asset, so that if an agent tries to access it, the system understands what the agent identity is, what policy you've set for that agent identity or any agents in that chain, what pieces of data they can access, and what they can and can't do with it.
That's where I always recommend people start. Then you also need to layer on agent guardrails beyond the data: what tools can they access as an agent? It's essentially like CASB, the cloud access security broker from 10 years ago, in terms of what SaaS products a human being can access and at what levels; it's the same thing for agents. So you hit the agent layer, and then you ensure you have all of your appropriate controls at the network, the endpoint, and the cloud. [00:32:00] It's a layered approach, like it always has been. It's just focused on this new identity, basically.
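The CASB analogy for agents can be pictured as a small broker table: which tools may a given agent identity call, and with what action. This is an invented sketch under that analogy, not any vendor's implementation; every identifier is hypothetical.

```python
# Illustrative CASB-style broker for agents: explicit per-identity grants
# over tools and actions. All identifiers here are hypothetical examples.

GRANTS = {
    "agent:invoice-bot": {"crm": {"read"}, "email": {"send"}},
}

def broker_allows(agent_id, tool, action):
    """Allow a call only if the agent has an explicit grant for it."""
    return action in GRANTS.get(agent_id, {}).get(tool, set())
```

As with the human CASB case, anything not explicitly granted is denied, including calls from agent identities the broker has never seen.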
Mehmet: Great. As we come closer to the end, I know we discussed different geographies and different regulations, but are you now seeing a divergence in priorities across decision makers based on where they are located? I'm sure you talk to a lot of executives, not only in the US but in Europe, in Asia, and so on. Are you seeing different priorities for them? And if you were advising them to refocus on a certain priority, what would that be?
Tim: Yes, absolutely, though that gap is narrowing quickly; they're just being forced to. Geography [00:33:00] to geography, it changes. Certainly, our customers in the US seem to be more advanced in terms of AI implementation than our customers in Europe. And then when you get into specific industries, the financial services industry is way further ahead than, say, state and local government; a different universe of evolution there. So how far ahead companies are is both geographic and industry-specific. But the gap is closing very quickly; it's just being forced to close. And I'm sorry, what was the follow-up question? What can I recommend?
Mehmet: Yeah. What kind of priority recommendations are you usually giving the executives you're meeting today?
Tim: Yeah. Hire for roles specifically [00:34:00] dedicated to AI and agent rollout in your company, and to AI and agent security in your company. You can convert interested people currently in your organization, if there's an interest there and a capability set that matches. But you have to create these new positions, you have to hire for them, and they have to be focused; it can't be somebody's part-time job. And we're seeing a lot more of that. Job descriptions that didn't exist six months ago are everywhere now, which is great. But you've got to get people in who know what they're doing, basically.
Mehmet: Yeah, absolutely. Absolutely. Tim, the final, traditional question I ask every guest: where can people get in touch and learn more?
Tim: Sure. Kiteworks is an easy one, just Kiteworks.com to learn more about us, and we [00:35:00] post a number of blogs every day that follow the industry: what's happening in AI, what's happening in AI and cyber. We have a newsletter on LinkedIn, and we have a Substack; you just type Kiteworks. Both of those do a great job of keeping people up to date on everything I just talked about, on us, what we can provide, and our perspective on the industry. And beyond that, go to Spotify, type AI into the podcast search, and start listening.
Mehmet: Great, I'll make sure of that. I like to make people's lives easy, so I'll put all the links in the show notes; you don't need to type anything, just go to the links in the resources. If you're listening on your favorite podcasting app, or watching this on YouTube, you'll find them in the description. Tim, again, thank you very much for doing this recording in your early morning [00:36:00] and making the time for me and my audience. Thank you very much again. And this is how I end my episodes; this is for the audience. If you just discovered us, thank you for passing by. A small favor from you: subscribe and share it with as many people as you can. And if you are one of the people who keeps coming back again and again, thank you very much for tuning in.
Thank you for the support, and thank you for taking the podcast, again this year, 2026, across multiple countries into the Apple top 200 podcast charts. This cannot happen by itself; it's because you are tuning in and referring other people in different countries. I can't thank you enough for this. And as I always say, stay tuned for a new episode very soon.
Thank you. Bye-bye.
Tim: Thanks.