#586 The Battle for the Data Layer: AI, Quantum, and What Leaders Are Missing with Kathryn Wang

AI is moving from tool to autonomous actor, and most organizations are still treating it like software.
In this episode, Kathryn Wang, Principal, Public Sector at SandboxAQ, breaks down what actually changes when AI systems move into production, why security models are falling behind, and how the real battleground is shifting toward the data layer.
The conversation explores how agentic AI introduces entirely new threat vectors, why identity and authorization are becoming the primary attack surface, and how quantum computing will reshape encryption, national security, and enterprise risk.
For leaders, the takeaway is simple but uncomfortable: this is no longer about adopting AI faster. It’s about understanding what you’re exposing before it’s too late.
⸻
👤 About the Guest
Kathryn Wang is Principal, Public Sector at SandboxAQ, working at the intersection of AI, cybersecurity, and quantum technologies.
She previously spent nearly two decades at Google, where she worked across product, strategy, and innovation, including early-stage AI initiatives.
Her work today focuses on helping governments and enterprises navigate emerging risks in AI systems, data security, and post-quantum cryptography.
https://www.linkedin.com/in/kathryn-wang/
⸻
🔑 Key Takeaways
• AI is no longer just generating content, it is executing actions within systems
• Authorization is becoming the biggest security risk in the age of agentic AI
• Most organizations still treat AI as a tool, not as an autonomous actor
• Data is the ultimate target, whether customer data, IP, or AI training data
• Quantum computing will redefine encryption and expose weak cryptographic systems
• Sovereign AI is emerging, shaped by national values, policies, and data control
• Human oversight alone is no longer enough to manage AI-driven systems
• Security needs to shift from layered defense to protecting the data layer itself
⸻
🎯 What You’ll Learn
• What fundamentally changes when AI moves from research to production
• Why agentic AI creates new attack surfaces that traditional security cannot handle
• The biggest AI risks organizations are underestimating today
• How AI can be weaponized through authorized systems and workflows
• Why securing the data layer is more important than adding more security tools
• How quantum computing impacts cybersecurity, banking, and national security
• What sovereign AI means and how it will shape global technology competition
⸻
⏱️ Episode Highlights
00:00 Introduction and Kathryn’s journey from Google to SandboxAQ
03:00 What changes when AI moves into production environments
07:30 The most underestimated AI risks in organizations today
12:00 Agentic AI, authorization, and new threat models
16:00 Why the data layer is the real battleground
22:00 Is cybersecurity still reactive in the AI era
27:00 Sovereign AI and global competition dynamics
32:00 Governance, liability, and who is responsible for AI decisions
37:00 Quantum computing and the future of encryption
43:00 Why IP is data and must be secured at all costs
45:00 Final thoughts and practical ways to learn AI
⸻
📚 Resources Mentioned
• SandboxAQ: https://www.sandboxaq.com/
• LinkedIn for AI and cybersecurity learning
• NotebookLM for simplifying complex topics
Mehmet: [00:00:00] Hello and welcome back to a new episode of the CTO Show with Mehmet. Today I'm very pleased to have joining me from the US Kathryn Wang. She's the Principal, Public Sector at SandboxAQ. Kathryn, I'm very thrilled to have you with me here today. We are gonna discuss very, very important topics, and you work at that intersection of cybersecurity and AI and other things as well.
So I'm very happy you were able to join me here today. The way I like to do it is I keep it to my guests first to introduce themselves. So tell us a little more about your journey and what brought you later to SandboxAQ, and then we can take the conversation from there.
So the floor is yours.
Kathryn: Wonderful, Mehmet. Thank you so much. I am very honored to be here and to join in for this great conversation. So I started my career about 20, 25 years ago at Google. We only had one product at the time, in 2004, that made any money. So, [00:01:00] as you look throughout my career, my LinkedIn, you'll notice every four years or so I seem to get what Americans call senioritis, right?
So you go to university, and within those four years you go through this process where you grow progressively more anxious and ready to go on to the next thing. We call it senioritis when you're a senior at university. So every four years of my career, I would switch to something different.
The curve is essentially: I would start with a big problem, I would build, I would scale, and then once it was mature, I would move on. So after four different iterations, you know, the ads capability, then the enterprise product, a stint in product management, more on the strategy and operations side, and then moving into Area 120, which is the incubator, similar to a Y Combinator for big problems, the cutting edge of technology that Google was [00:02:00] funding. It was the kind of early instantiation of generative AI, conversational AI, et cetera, that inspired me to leave Google and really pursue more of a startup lifestyle, if you will, as a professional pursuit.
So I moved into cybersecurity pretty quickly, knowing that there was gonna be a huge threat landscape and a strong aperture for opportunity there. I started in application security and then quickly moved into SandboxAQ. So I've been there for almost two years now.
And here I am, the principal of our public sector division, within a company that focuses primarily on quantum technology with a very strong AI application, to ensure real, current global good.
Mehmet: Great. And thank you again, Kathryn, for being here with me today. You have a very rich background, I would say, and you've seen a [00:03:00] lot of changes in this sector.
So in your opinion, what fundamentally changes when AI systems move from research to production, especially in high-stakes environments, I would say like the enterprise, or maybe government, banking, maybe even defense?
Kathryn: Sure. So there are so many schools of thought on this AI piece, and I'm trying not to cover things that other people, who could probably say this much more eloquently, have already covered.
But the AI piece of it is just the same thing, different day. The way people sometimes describe it is that it is a tool, it is not a human. And the moment that we start to give it decision-making power, as if it truly possesses intelligence, is when I think we have gotten a little bit too far outside of the practical use cases for AI.
Now, I'll give you an example. If [00:04:00] you think about a typical progression of a human professional, you'll grow through experience, you'll learn, you'll adapt, and you'll hopefully keep trying new things, but you'll constantly have a mentor element to it, or guiding principles, that you will always be able to fall back on, right?
With AI, the governing principles, the data it is trained on, and sometimes I would say the fundamental spirit or the soul behind the AI, are very much dependent on the country where it is trained, right? So the values that you would expect to see coming from autocratic regimes versus a Western democracy are going to result in very different forms of AI. We need to be aware of that. That's very critical, because as we think about the different AI being developed across many different nations, they're going to have very different, quote unquote, personalities.
That's also the reason why [00:05:00] we have to inherently understand how AI is built, how it is trained, the data that underpins it, and then what function we're trying to have it do. Okay, so let's progress. We've kind of talked about the recipe of what makes the AI function the way it does.
Mm-hmm. As we look at AI utilization across other countries, not just the United States, a very interesting trend emerges. Some of the developing countries have taken AI and chosen to utilize it to activate the professional capabilities of every single working individual within their country. They have taken AI to weaponize what they can do with their economic growth potential.
What did primarily Western countries do? They took AI and viewed it as a layer to cut out entry-level or reactive jobs, right? Mm-hmm. Now, on one hand, the operator in me understands that [00:06:00] efficiency allows us to reduce the non-high-value work with the automation layer, which is great.
This is essentially what happened in the industrial revolution with machinery. However,
Mehmet: right.
Kathryn: What we should have seen is a corresponding increase, a reinvestment of that operational layer of our workforce into something that's higher functioning. That did not happen, right? The biggest technology companies in the world used AI to cut jobs,
not to upskill and create opportunities for that current layer to become even more effective in their roles for the company. So as you think about national security, as you think about economic growth, the two are highly synonymous, and I can explain that a little bit more. But as we think about AI utilization within enterprises or even within governments, I highly encourage [00:07:00] anybody who has decision-making power to stop thinking about AI as purely an operational,
automated form of replacement for the workforce, and to start thinking of it as an accelerator. Because if you are not careful, and I mean this in general, for every country, if you are not careful and not fully appreciating what AI can do to activate and force-multiply your entire workforce, other countries will leapfrog you.
Mehmet: That's an eye-opener, I would say, Kathryn, and this is really a good call-out for decision makers to take in that aspect. Now, because we're talking about AI, some people think it's the elephant in the room. Some people think, oh, it's a discussion for another day,
the AI risks, right? Yeah. And I'm talking here from a cybersecurity perspective and a data perspective. A lot of people I speak with tell me, yeah, we know [00:08:00] it, it's real, while some other people say it's a problem for another day, right? Let's see how this AI thing will evolve, and yeah, we know bad actors might be utilizing it,
weaponizing it, as you mentioned. What do you see as the things organizations are most underestimating today when it comes to AI risks?
Kathryn: I don't think they fully understand what we've unleashed here. We really don't. I don't fully understand it. I see things happening with AI being weaponized and carrying out massive cybercrime campaigns with a level of creativity that I cannot fathom sometimes.
I wish I didn't have a conscience, because if I were a cybercriminal, I'd have a lot more money, you know? But I will say this: in 2024 we asked if AI could do the job. [00:09:00] In 2026 we're asking if we can survive the AI doing the job, right?
Security in the age of autonomous agents is not just about stopping AI anymore, or limiting AI. It's about building a digital framework strong enough to contain its brilliance. So a great example of this: I just saw a headline literally yesterday about OpenClaw basically bypassing your EDR and all of these other
fundamental cybersecurity platforms that you have from a resilience standpoint. It bypassed them without triggering a single alert, right? Because when you give agentic AI the ability to execute, not just build and create, but also execute, they have your credentials. You've [00:10:00] authorized them to do these things. So, not them,
see, I have to be careful with my language, you've authorized it. It is not a human being. You've authorized it to do things on your behalf, and everything within that is perfectly acceptable. So if you have OpenClaw, or any LLM for that matter, Claude, Gemini, ChatGPT, Grok, anything that's authorized to scan your emails, anything that's authorized to look at your calendar,
all a cyber adversary would need to do is inject some malicious code into any one of those, hide it in the plain text so that you can't visually see it, and then trust that your AI is going to carry out automated tasks and become essentially weaponized to carry out whatever nefarious crime they have planned,
all within the [00:11:00] authorization structure that you've already set, right? So now we get into this world of, we talked about shadow IT, but now shadow AI, right? The example that you've used, like, oh, we'll get to it, we know, or that's a problem for another day, that literally sounds like every example that I've heard of somebody going into the doctor and being told,
yeah, we're gonna need you to eat better or else you're gonna die. I know it's a problem for another day, but we know what the health requirements are for us to maintain sustainable longevity in our physical bodies and our brain health. We know all these things, but yet people by nature choose not to follow them.
Cybersecurity, unfortunately, is kind of falling into that same category. People know their problems, but because they can't fathom all of the ways that AI could be weaponized, because we can't create enough guardrails to anticipate all the different ways it can be exploited, it's like decision paralysis, and people just don't try.
But I do sincerely [00:12:00] worry, with the forceful injection of AI into all of our platforms. Why does my refrigerator need to be connected to the internet? Why does my oven? Why do I wanna create an opportunity for some adversary to come in and turn my oven into an incendiary device? Right? Before we inject AI into what we do, we need to be asking ourselves: what exactly is it that we need it to do?
What is it operating? What is the end goal? And then how do you treat it with the same zero trust principles that we've had for humans this whole time, by not allowing agentic AI to have free rein over everything, but treating them as actual agents, where an individual agent has a beginning and an end to what it is authorized to do?
If we narrow the scope and have the agents working together with those guardrails, we lessen the damage, the ripple effect, the blast radius of any type of weaponization.
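To make the scoping idea above concrete, here is a minimal Python sketch of per-agent authorization in the spirit Kathryn describes: every tool call passes through a default-deny gate, so an instruction injected into an agent's context cannot reach tools or resources that were never granted. The class names and policy format are invented for illustration, not any specific vendor's API.

```python
# Hypothetical illustration of per-agent authorization scoping; names and
# policies are invented for the example, not a real product's API.
from dataclasses import dataclass, field


@dataclass
class AgentScope:
    """A narrow, explicit grant: which tools an agent may call, on which resources."""
    agent_id: str
    allowed_tools: set[str] = field(default_factory=set)
    allowed_resources: set[str] = field(default_factory=set)


class AuthorizationGate:
    """Every tool call passes through the gate; anything not granted is denied."""

    def __init__(self, scopes: dict[str, AgentScope]):
        self.scopes = scopes

    def authorize(self, agent_id: str, tool: str, resource: str) -> bool:
        scope = self.scopes.get(agent_id)
        if scope is None:
            return False  # unknown agents get nothing (default deny)
        return tool in scope.allowed_tools and resource in scope.allowed_resources


# Example: a calendar-summarizing agent may read the team calendar and nothing else.
gate = AuthorizationGate({
    "calendar-summarizer": AgentScope(
        agent_id="calendar-summarizer",
        allowed_tools={"calendar.read"},
        allowed_resources={"calendar://team"},
    )
})

# A prompt-injected instruction to exfiltrate mail fails, however convincing the
# injected text is, because the grant was never made.
print(gate.authorize("calendar-summarizer", "calendar.read", "calendar://team"))  # True
print(gate.authorize("calendar-summarizer", "email.send", "mailto://attacker"))   # False
```

The design choice that matters is default deny: an unknown agent, tool, or resource gets nothing, which is what keeps the blast radius of a manipulated agent small.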
Mehmet: You know, there's a lot to uncover here with you, Kathryn. Now, [00:13:00] before deep diving into that, I see there are multiple angles to these gaps, because you mentioned the agents, but we know that there are also
some other risk factors, let me call them this way, maybe there's a better wording for that, which is the LLMs themselves. We saw LLM poisoning to extract data from there. We saw MCP, which, for people who are not familiar, is like the equivalent of APIs in the world of AI.
So are we talking about, and of course there is the data risk, which is, let's say I work for a company and I go and I feed any LLM, without mentioning a specific one, I go and I feed it with private data, which I shouldn't. And I make this joke, I tell people, no one can stop me, actually, as a matter of fact, because I can take a picture.
Kathryn: Yes.
Mehmet: And then [00:14:00] the model is so good, it can do the transcription, and then I can do it. So we have multiple gaps. What do you think is the priority here? Is it securing the data itself? Is it fixing the LLM problem? Or is it fixing the integration better? Or is it all of the above?
Kathryn: So I'll tell you my personal experience, and I hope this helps anybody who is currently in cyber or even thinking about moving into cyber. When I started my career in big tech, I worked in platforms, right? I worked at the highest level of the enterprise. And when I realized that so much of our digital identity sits on that layer, and it's largely unprotected, not necessarily by lapses in secure-by-design models, but also by incessant, persistent threats,
I went into the application layer. I was like, okay, app security, let's protect that platform. [00:15:00] Then, when I realized that there were more threats that required us to defend successfully every single time, while cyber adversaries only needed to win once, I had this horrible sinking feeling, and I realized:
all of this cybersecurity layering, it's all important. You should have locks on your doors. You should have cameras in your house. You should have motion-sensor lights around the perimeter. Those are all great. However, that doesn't mean that you can leave your jewels or your valuables just sitting on the table for anybody, because there are so many ways that adversaries can get in now that ultimately you're gonna get got.
And if that happens, the only way that you lessen the impact is to make sure that they cannot decrypt your data. So a good example of this is, when you look at the developing technology [00:16:00] behind quantum computing, everybody's worried about Q-Day. When we are working at the national security level, when we're working with the major banks in the world, those are the two primary areas where we have to worry about quantum computing, because the first thing that adversaries, and if it's not us, actually, even if it is us,
by us I mean the United States, nations are going to go after is the economic security, right? That monetary layer, including all the store-now-decrypt-later assets that they've compiled over the last 70 years. And as a result, those entities understand how important it is to secure the data layer, that high-value asset.
However, what we are finding out is that you don't even need a quantum computer to exploit weak cryptography. So now you have this whole agentic AI layer coming in as another avenue for potential data exfiltration. You have this supply chain layer that could [00:17:00] have Trojan devices embedded directly within your military facilities or within your company offices. Most small to medium-sized businesses do not recover within six months after a breach.
It is an existential issue if data gets exfiltrated or encrypted, God forbid, if they go down the ransomware path. So if everything ultimately goes to the data layer, my philosophy is: secure the data layer, right? All these other things are important, but when all else fails, the cryptography cannot.
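As an aside for readers, the "when all else fails, the cryptography cannot" point can be sketched in a few lines. This is a minimal example using the open-source Python `cryptography` package, under the assumption that real deployments keep keys in a KMS or HSM; it shows the general pattern of encrypting records at the data layer so an exfiltrated copy stays unreadable, not any particular vendor's product.

```python
# Minimal sketch of application-layer encryption at rest with AES-GCM.
# Requires: pip install cryptography. Key handling is simplified for brevity;
# in practice the key lives in a KMS/HSM, never in code.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

record = b'{"customer_id": 42, "card": "4111-XXXX-XXXX-XXXX"}'
nonce = os.urandom(12)                                         # unique per encryption
ciphertext = aead.encrypt(nonce, record, b"customers-table")   # AAD binds the context

# An attacker who exfiltrates the table sees only nonce + ciphertext; without
# the key, the data layer holds even after every outer defense has failed.
restored = aead.decrypt(nonce, ciphertext, b"customers-table")
assert restored == record
```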
Now, this is where we get back into the type of data. You've got your customer data, right? Which could result in regulatory fines and other forms of substantial financial risk. That's one. You've got your economic risk, which is the exfiltration of your intellectual property.
Now, we've seen the People's Republic of China, we've seen other nation-state [00:18:00] entities, specifically go after the intellectual property wealth within the United States, right? What was it, three years ago? The director of CISA, the Cybersecurity and Infrastructure Security Agency, and the director of the FBI, Christopher Wray, both testified that
China has amassed more intellectual property wealth from the United States than every other nation combined. So why, if I am another developing country, would I invest to build my own tech when I can just go steal it from somebody else? So you've got the middle component, where the data that you're protecting is actually your intellectual property.
Now we have the final and newest form of data, which is training data.
Mehmet: Mm-hmm.
Kathryn: It takes 0.1%, just 0.1% of your total data pool, to poison an AI data model, right? So when we think about the [00:19:00] cyber threats from an AI perspective, I like to look at, I don't know if you've heard of the CIA triad, right? You've got your confidentiality, your integrity, and your availability.
So if you are using AI, or if you are building AI, how do you keep your trade secrets secret and prevent data leakage? That's the confidentiality piece of it. From the integrity standpoint, do you trust the AI that you're using, or can the AI that is your product be trusted against prompt injection, against data poisoning,
against data drift, right? There are instances where the data could drift in accuracy that have nothing to do with data poisoning. And then, how are you validating the supply chain? We've heard about SBOMs, we have heard about CBOMs, and now we have AI BOMs. How are you justifying where the data is coming from, so that you can reliably say this is [00:20:00] trustworthy AI?
And then of course, the availability. We have backups for backups and backups for data. We have backups for, you know, if AWS West or East goes down, where do we fail over to? We have backups for this. But if our entire operational layer is agentic AI built on top of one of the big AI platforms out there, and it goes down, or they get hit with an injunction saying that you are no longer allowed to use this because we have evidence suggesting that this AI model was trained off of unlicensed data,
think OpenAI being sued by Encyclopedia Britannica for $1.5 billion because they trained their models off of data that they did not license from Encyclopedia Britannica, right? In those kinds of cases, if you cannot function because the AI platform that you're using is not operational, or your AI gets impacted by that and you cannot ship, it's the equivalent of a manufacturing plant that is shut down due to a cyber attack and cannot [00:21:00] produce.
If it's not on the truck, it can't be sold, which means the money comes to a screeching halt. So these three different layers of data all come down to, whether it be high-value assets for data privacy, your intellectual property, or your AI functioning, your entire enterprise or national security layer.
All of that is where I think the protection needs to live.
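To illustrate the AI BOM and data-poisoning points, here is a hypothetical sketch of a provenance check over training data: each shard is pinned to a hash in a manifest, and any mismatch blocks the training run. The manifest format and file names are assumptions made up for this example, not an established standard's schema.

```python
# Hypothetical "AI BOM"-style provenance check: the manifest maps shard names
# to expected SHA-256 hashes; anything that doesn't match is rejected before
# training. Manifest format and paths are invented for illustration.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def verify_training_set(manifest_path: Path, data_dir: Path) -> list[str]:
    """Return the shards whose content no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())  # {"shard_001.jsonl": "<sha256>", ...}
    tampered = []
    for name, expected in manifest.items():
        shard = data_dir / name
        if not shard.exists() or sha256_of(shard) != expected:
            tampered.append(name)
    return tampered


# If a fraction of a percent of poisoned records can skew a model, even one
# unexplained mismatch is worth halting the training run over.
bad = verify_training_set(Path("ai_bom_manifest.json"), Path("training_data"))
if bad:
    raise SystemExit(f"Refusing to train: unverified shards {bad}")
```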
Mehmet: Right. Now I'm gonna ask you something, Kathryn, and this is based on personal experience, both as a practitioner, someone who in previous lives worked in IT departments and in cybersecurity, and who landed later with vendors in that domain.
And it's something that came up in my podcast with guests, something that maybe I didn't notice or didn't want to notice. Let's say, as we say, we've seen this movie before. It's that the [00:22:00] cybersecurity industry, some people say, is very reactive, right? Something happens, and then we come up with the concept of zero trust.
Then we started to talk about secure by design when we build any system, and we've seen layers built on top of each other. You just mentioned how things moved forward from just firewalls, then, oh, we need antivirus on the endpoint, and then, oh, let's put in EDRs, let's put this, let's put that. And it kept
stacking all over the place, and now we have AI. Do you think that secure by design in the AI world, in practice, is also kind of bolt-on security, or do we need to rethink the whole thing from scratch? And I'm sorry if I'm asking a bit of a loaded question, but really I think this is important to discuss, because [00:23:00]
you know, you gave valid examples of how threat actors might leverage the data, and these AI systems and agents. And we started to see agents doing things, as you mentioned, you gave the OpenClaw example. A couple of weeks ago I had a conversation with Betsy Atkins, and I didn't know about the paper, or maybe I saw it and I missed it, where they gave the agents some access and then it started to blackmail
the employee there. I think the study was from Anthropic. So now, when we need to talk about secure by design in the age of AI, what does that look like in practice?
Kathryn: Yeah. So, oh, we're gonna get a little bit philosophical here, when you add the concept of intelligence, like if we think about true intelligence here and how that actually doesn't really apply in the world of AI.
It's [00:24:00] just, in many cases, machine learning on steroids, with the ability to make sequential decisions based off of a large data set of that happening in previous examples.
How do you, when you are an employer, how do you know that the candidate you're hiring is the right person? How do you know that there's a culture fit? How do you know that they're capable of doing what they say they can do? How do you know that they're of like mind and judgment? How do you know that they can't be compromised? All of these questions are the same questions that we would be asking of AI.
So from a development standpoint, if AI is only as good as the inputs that you provide it, then responsible AI development actually becomes very important. It needs to be built on a set of foundations and [00:25:00] principles.
And this is why I tell you we have to fully understand where the AI that we are using is coming from: it will make different decisions and choices based off of how it's built. And it's very similar to how we raise our children. So,
Mehmet: yeah,
Kathryn: is there a framework for this? There really isn't. I think that's part of the reason why people struggle with it, and it's happening so fast that we have gotten to the whole, I hate it when people say, just put a human in the middle, just put a human in the loop, like the human's gonna be able to determine whether or not that content is right. Okay, but we have officially hit the point where humans can no longer cognitively manage the amount of output coming from the AI. We can't. There's so much of it. Like, have you ever had somebody come deliver you a final product and it's literally just copy-and-pasted AI slop?[00:26:00]
This is all from AI. I'm like, if you didn't take the time to read it or to review it, why do you think that I have the time or desire to read it either?
And I see it in LinkedIn posts constantly. People don't even take the time to try and hide that it came from an LLM. How can I trust that this is actually your opinion, as opposed to, you know, the other thing? And I'll give you the trick: anytime you see the words national security imperative, I not only know that it came from an LLM, I know which LLM it came from.
Right. So these models, they all have these different personalities. We just need to be very aware, because we don't have a choice. Like, if we're gonna hire these employees, there's not a giant 300-candidate pool to review. It's four big ones and then a bunch of other ones that are struggling to get notoriety.
Those are what we have [00:27:00] to work with. So the way that those big companies are developing their AI becomes increasingly important. Now, when we talk about sovereignty,
right, sovereignty, we're seeing a lot of great AI technologies and companies coming out of the Middle East, for example. There's a ton, there really is. I would highly recommend everybody watch the space on how AI development happens, not only in the Middle East but also in Asia Pacific, because those are three very distinct flavors of AI development that are going to determine how we most effectively utilize AI.
Mehmet: So now we can say we are moving toward a world where there's a sovereign AI stack. Now,
if we think about the internet, one of the things the internet allowed to happen is this, what some people call, and I'm like you, [00:28:00] Kathryn, sometimes I go philosophical, it allowed this kind of globalization. Companies from all over the world can trade together, and e-commerce is the best use case of it.
Now, when we talk about AI, because of national security, and you work closely with companies and government entities in that space, when we think about the sovereign AI stack, how can we think about collaboration at the same time? Because we need to build the sovereignty and the resiliency, and you gave a great example about someone coming and stealing your
IP and going and using it. So I want to collaborate, but at the same time, I don't want to give away my national secrets. So how would that look from an AI stack perspective?
Kathryn: Yeah. So I [00:29:00] think there are frameworks for this that exist already, especially in national security. There are great ways to utilize best-of-breed technologies to suit your purpose, even if that technology isn't
sovereignly developed, right? You can insist on sovereign data centers, or a cloud access plan where the provider has to instantiate a server within your country, and that way you know the data never leaves your country. That way you can also put guardrails on what it is or isn't allowed to access, right?
You can make great use of some of the largest American LLMs without giving them access to any of your IP wealth. You'd have to really make sure that your employees understand: do not put any proprietary code into these LLMs, because they are not authorized to have it.
And there are other security layers that can prevent you from doing that, or prevent mistakes from happening. So as long as you [00:30:00] know what you are comfortable with in terms of sharing your data, your IP, with technology that comes from other countries, there are ways that you can safeguard it. It's not perfect, there's always an inherent amount of risk, but one of the benefits of having something like a
data training bill of materials, right, is that as you are evaluating this, you know what it was trained on, or what code is being utilized as part of powering the AI piece of it, which helps to create that much more credibility, and you know what you're signing up for.
Mehmet: Right. You mentioned something interesting also in your previous answer about the human in the loop, and we can argue the whole day, and maybe for weeks and months, about whether we need to keep [00:31:00] someone in the loop or not. I agree with you, because I tell people, we humans, by nature,
and this came up in a discussion with another guest on a previous episode, we are lazy by design.
Kathryn: Yes.
Mehmet: So even if you put someone in the loop, they'll probably just approve, right? We've seen it all the time. I've been in IT departments, and we used to put these guardrails in, and still someone would click on that button.
You also mentioned the amount of information that we humans are seeing. I remind people of social engineering, and this is before AI, like how you can fool someone. When someone used to tell me, yeah, we have the best security systems in the world, we have this, I said, yeah, but you have humans. They said, what do you mean? I said, easily, they would wait for the weekend, right? It's Friday night, someone calls one of your employees and just tells them, oh, here's what's gonna happen, give me [00:32:00] this information or that information. Of course, things can go south, as we say, at any time, because we can make mistakes.
Now, from a governance perspective, Kathryn, and I know this is a question that people in enterprises and in government entities and in the defense industry would ask:
okay, we give the control to the AI system to make a critical decision. Who is responsible? Is it the model, the developer, or is it the organization?
Kathryn: Yeah, so we're seeing this right now. I actually was at a fascinating conference, and, well, first I would encourage you: fight the urge to be lazy. Utilize AI as much as you can to create value and time where you ordinarily would be doing operational tasks, but don't stop there. Just like I said earlier, reinvest in yourself and higher-level [00:33:00] thinking,
because higher-level thinking is where we come up with the big problems that we, in combination with AI, can then solve. So, a good example of how AI creates this really interesting vacuum for us is in the medical field. This one hits close to home because, to your point, when I was at that cybersecurity conference, I was looking at a panel of lawyers, all lawyers, and they said there are four main categories where we see a lot of cyber lawsuits requiring cyber attorney
services. One of them is HIPAA, which covers healthcare information privacy and protection. Another is [00:34:00] AI abuse, which is, the AI told me to go kill myself, right, or other forms of self-harm. And, to your point, who is liable? The developer, the AI, or is it us, assuming the risk in using it? There's going to continue to be a massive legal battle,
and the way that the courts rule is basically what will set the precedent for who owns the responsibility. I think, with the topical relevance of Anthropic, they are seeing the writing on the wall: that if their technology was ultimately being used and was the primary driver behind something
bad that happens to an individual as a result of usage, they would most likely be found liable. Now, even if that isn't true, just think about it. How much money do they have? The courts and the lawyers are absolutely gonna go after them. They've got a [00:35:00] target painted on their back just by the fact of being, you know, victims of their own success.
So while I believe there is strong moral intent in responsible AI development from some of these entities, they also are forced to do it knowing that they bear financial risk as a result. Now let's take it to the healthcare industry. When you have AI automatically scanning and looking at x-rays
or medical charts, and then providing the doctor with a very quick assessment of what to do,
Mehmet: Mm-hmm.
Kathryn: two things can happen. The doctor could take the AI's interpretation and follow it. In that case, who bears responsibility if something goes wrong? Interesting question now,
Mehmet: right?
Kathryn: The second component is the doctor can see the result from the AI and [00:36:00] choose not to follow it.
Who is liable in that case if something bad happens? What happens if something goes awry? It becomes fascinating, it melts my brain just thinking about this, because the doctors didn't have this before. So maybe AI will be able to look at the larger, broader spectrum of data, help the doctor make a better assessment, and save more lives, because the doctor's not gonna be able to retain all of that medical historical data, but they can easily reference it when it's all provided to them in an easy format.
But if a doctor is looking at all the results and doesn't feel like they can trust the AI's response, we don't have a comparison to know whether or not the doctor made the right call. The same thing could have potentially happened if he or she did follow the AI's advice, but we can't know. So in that [00:37:00] case, what happens there?
So the whole landscape is going to change, and the concept of fault or liability is really going to be pushed and determined by how the courts rule on these very delicate matters.
Mehmet: Right. Kathryn, you work at SandboxAQ, and there's an intersection between AI, cyber, and quantum computing.
We know that quantum is becoming, not to say mainstream, but there are a lot of advancements happening very recently. If I look at the cyber part of it, and all that we discussed today, do you think that will add another stack to AI security? Knowing that, of course,
the main use case people know about is what everyone discusses, post-quantum [00:38:00] cryptography and how we'll be able to handle it, but you just said that, with agents, actually, we can do that today. Still, maybe if we think about it from an infrastructure perspective, and again the same pillars we discussed today, would we need another stack just for quantum?
Because when I read about quantum and I try to understand what technical capability this technology would allow us when it's really fully utilized, it's kind of massive. But of course, like any other technology, and we keep repeating this on the show, good things can always be used by bad people.
So what are your general thoughts on this?
Kathryn: Well, and I do mean this: I didn't fall into post-quantum cryptographic resilience, I chose it after going through that thought process before, right? Enterprise layer, application security layer, data layer. Yes, it just made sense. [00:39:00] So I say this truly believing it: the post-quantum threat is something we really need to be prepared for.
Banks have already leaned in. National security entities have already leaned in. Policymakers within the governments, at least the ones that I've been speaking to, which include the Five Eyes, have already leaned in. Where I need to see more proactive initiative and funding is in healthcare and in critical infrastructure, right?
So if we're talking about superpowers that want to exercise their dominance with quantum capabilities, the best way that they're going to do that is to completely redefine the way that cyber warfare works right now. They're gonna go after our critical infrastructure, our sensitive [00:40:00] medical data, our
economic security, and our national security. That's where it's gonna happen. So I would love to see every nation hunker down, take that seriously, and make sure that they're securing the data layer. Now, the quantum technology piece of it is interesting because, when you have things like quantum simulation, which is something SandboxAQ focuses heavily on, we're working with several entities out in the UAE, and the Middle East in general, trying to support their research with energy development.
So, like, we talked about sovereign AI development and also economic security when it comes to intellectual property, but I wanna also acknowledge one very big difference. The United States has been driving a lot of innovation over the last 50 years. However, the other rising, developing countries that are building AI capabilities and tech [00:41:00] capabilities, especially in the Middle East and in other parts of APAC, and in France, et cetera, as well,
what they're doing is building on a foundation that has no technical debt. So if we think about how much energy it takes to power AI, it's stupidly high. You cannot build data centers fast enough, right? We do not have enough power. The United States power grid is going to buckle under the amount of demand that's coming from the AI piece.
Now, the reason for that is also that a substantial amount of that compute capability, or power, is going to support tech debt, right, 15 years of development that needs to be optimized, because those legacy systems and platforms and the code were never designed to be hooked up to AI. So they're highly inefficient. We're using more power than we need to. But some of these other countries that are utilizing it efficiently from the ground up, because they don't [00:42:00] have a lot of tech debt, don't have a lot of fat to trim, can potentially do more with less power. So as we think about quantum capabilities and technology,
finding ways to do more with less power, more efficient power generation, alternative forms of power storage, heat-safe forms of energy storage aside from lithium, right? All of these quantum simulation capabilities, or the ability to do drug simulation to cure diseases, right,
where there isn't enough data to produce results for clinical trials, those are all things that quantum technologies are very well poised to do from a massive computation standpoint. So, even if you think about all that [00:43:00] cool futuristic stuff that we can do, you're generating IP.
IP is data. Data is vulnerable. It needs to be secured. So that's why I keep coming back down to the data layer.
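For readers who want to see what post-quantum key establishment looks like in code, here is a minimal sketch assuming the open-source liboqs Python bindings (the `oqs` package). The exact algorithm identifier depends on the installed liboqs build, and this is a generic example of a quantum-resistant KEM handshake, not SandboxAQ's implementation.

```python
# Minimal post-quantum key encapsulation sketch, assuming liboqs-python is
# installed (pip install liboqs-python) and the build exposes "Kyber768";
# newer builds may name the same scheme "ML-KEM-768".
import oqs

ALG = "Kyber768"

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    public_key = receiver.generate_keypair()                 # receiver publishes a PQ public key
    ciphertext, secret_tx = sender.encap_secret(public_key)  # sender encapsulates a shared secret
    secret_rx = receiver.decap_secret(ciphertext)            # receiver recovers the same secret
    assert secret_tx == secret_rx
    # The shared secret would then key a symmetric cipher such as AES-GCM, so
    # traffic harvested today cannot be decrypted by a future quantum computer.
```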
Mehmet: A hundred percent agree with you, Kathryn. And to your point, I'm happy that at least I can talk about the UAE and the Middle East in general, because I'm based in Dubai.
There is this kind of forward thinking here. Recently they released their imperative, not initiative, imperative, for all government organizations to seek solutions for the post-quantum cryptography issue. There are similar national efforts when it comes to the data risks and how they can
tackle this risk, whether it's in the UAE, in Saudi Arabia, or in other parts [00:44:00] of what we call the developing economies, which is good. It's very striking to hear this from you, and I think what I take away is that it's a good
time to build fresh, because, as you said, you would have the legacies in some other places, so here you can build fresh. And I think, at least I can say, the UAE is doing this very well because it's still new, everything is fresh, so they are leveraging this. And the last thing which you mentioned, and I want to repeat it twice: IP is data, and data must be secured.
So IP is data. A lot of people forget about it. You need to protect this. These are your crown jewels, I would say. And data, you know, I worked in cybersecurity, part of the cyber resilience side, like backup, recovery, and all this, and we used to tell people: guys, your data is your crown jewels.
It's the last thing you want someone to go and steal, right? It's your everything. So I'm happy, Kathryn, you [00:45:00] highlighted this. As we come to the end of this fantastic episode today, the final thing I ask all my guests: any final thoughts, and where can people learn more?
Kathryn: Oh, okay.
So, I do think that doing professional certification courses is helpful. However, I'm also mindful that there's so much great content out there that's free. There's no need for you to go and pay if you can get a strong foundational understanding for free. So the two things I would say: first, utilize LinkedIn.
I continue to be impressed with the amount of content being shared around their LinkedIn Learning tracks, especially within AI, especially everything related to AI and cybersecurity, and then all of these other ancillary topics, cryptography, cryptocurrency, everything that funnels into what we consider [00:46:00] the digital ecosystem or digital economy.
They have great resources, check that out. There are also really great resources on YouTube, people who explain things with a high level of fidelity and production value, imagery, like drawing and lightboard capabilities, and they explain it in a very digestible way. There are also great tools out there.
I'm a fan of NotebookLM, being able to turn complex concepts into funny, engaging podcasts. So my trick, and anybody who has seen this movie will know, there's an American movie that's just lighthearted and very stupid, but I enjoy this level of humor sometimes because cyber is stressful, there's a movie with Will Ferrell called Step Brothers.
So sometimes I will ask it to create a podcast similar to that movie, with a lot of humor, [00:47:00] to help explain certain concepts to me, so I have a higher likelihood of retaining it, because it's funny and it's memorable. And then the last thing is a little bonus for everybody.
I try to ask everybody I talk to, as one of the things I work into the conversation: tell me how you're using AI. That's like a super AI hack. Tell me how you're using it, because then one person's learning becomes everybody else's. I started my conversation with a friend yesterday like that: how are you using AI?
She was telling me how she was using Claude Code to create a CRM for herself, for her business. I was talking to another friend, we were stuck in the DCA airport due to air traffic control for five hours, and we went down a rabbit hole on how he's using AI. And I have found a wonderful love, like, love-love relationship with Banana Pro AI.
It has completely changed the way that I [00:48:00] generate images for my presentations, my panels, my keynotes, everything. The fidelity and the ability to make minor modifications off of an existing image is next level. It makes ChatGPT look like a waste of time, to put it nicely. So I love bananaproai.com.
That's the one. There are a lot of Banana replicas out there that are not as good. Bananaproai.com is great. So learn from each other how you're using AI. Use it as an excuse to connect with people, right? Because that's something that you can do as a human being that AI cannot.
And then we just share those best practices and keep ourselves challenged. Don't fall prey to the laziness.
Mehmet: Yeah, is there any place where you share anything online, Kathryn, where people [00:49:00] maybe can go and follow you?
Kathryn: Maybe LinkedIn. I have a lot on there, and my network, I'm very blessed, is all over the place, everything from the medical industry to the energy industry, a lot of national security and cybersecurity technology.
I try to distill certain concepts into very easily digestible reading. I don't write paragraphs. I write maybe three sentences and then something funny at the end, and it's usually pretty spicy and memorable, and then I hyperlink it to the article it's about. So LinkedIn is a great place to find me.
Mehmet: Great. I'll make sure I put the link to the profile. And by the way, this is my favorite writing style, and I like to read things like this. When I see something with two pages to read, I don't read that, honestly. Exactly. So this is the right way to do it.
Well, Kathryn, I really enjoyed the conversation today. That was [00:50:00] really, really eye-opening, and I hope the audience will benefit from it. And again, I appreciate the time. I know you were between travels, so you took the time to speak with me, and it's early in the morning for you. And this is how I usually end my episodes.
This is for the audience. If you are new here, thank you for passing by, I really appreciate it. Do me a favor and subscribe, and share it with your friends and colleagues. If you are one of the people who keep coming back again and again, and I know I repeat myself at the end of every episode: since the beginning of 2025, last year, you did something fantastic.
I don't know how we managed to do it, but we keep seeing the podcast trending in the Apple top 200 podcast charts across multiple countries in the business, and sometimes entrepreneurship, category. Now, this doesn't happen by itself. This is because of you, because you listen, you come back, and the nice thing is we see the countries list
updated all the time, so I'm happy for this, and [00:51:00] at least we see four to five countries simultaneously. I can't thank everyone enough for recommending the show, and I see the messages coming to me from people who listened to the podcast because someone referred it, sometimes I don't know who. So thank you all for the support, and thank you for being part of this, because what we're trying to do is to spread knowledge and make sure that,
to Kathryn's point, we explain things in a way that people can digest. It's not just throwing technology around for the sake of technology. So thank you very much, and as I say always, stay tuned for a new episode very soon. Thank you. Bye-bye.