#575 AI Risk Is the New Cybersecurity Battleground With Walter Haydock

AI is moving faster than security, and the gap is widening.
In this episode, Mehmet sits down with Walter Haydock, Founder of StackAware, to explore how organizations can safely deploy AI while managing growing risks across cybersecurity, compliance, and governance.
As AI systems become embedded in products, operations, and decision-making, traditional security approaches are no longer enough. From data leakage to supply chain vulnerabilities, and from regulatory pressure to investor scrutiny, AI introduces a new layer of complexity that leaders can no longer ignore.
Walter breaks down the emerging AI risk landscape, the importance of standards like ISO 42001, and why governance is becoming a competitive advantage, not just a compliance exercise.
⸻
👤 About the Guest
Walter Haydock is the Founder of StackAware, a company helping organizations measure and manage cyber, privacy, and compliance risks in AI systems.
He previously served as a Marine Corps officer and worked on Capitol Hill advising members of the U.S. House of Representatives. His experience spans government, cybersecurity, and enterprise software, giving him a unique perspective on managing risk in fast-moving technology environments.
Walter focuses on helping companies accelerate AI adoption responsibly while maintaining trust, security, and regulatory alignment.
https://www.linkedin.com/in/walter-haydock/
⸻
🔑 Key Takeaways
• AI risk is becoming a core cybersecurity challenge, not a separate discipline
• ISO 42001 introduces a structured way to manage AI governance and risk
• Many companies still treat compliance as a checkbox instead of an operational system
• AI supply chain risks are one of the biggest emerging threats
• Training AI on customer data without transparency can lead to backlash and liability
• Open-source AI tools introduce new attack vectors through plugins and dependencies
• AI governance is quickly becoming part of investor due diligence
• Companies that manage AI risk well will gain a competitive advantage
• Speed of decision-making matters more than perfect information in AI adoption
• Every company is becoming an AI company, whether they realize it or not
⸻
🎯 What You’ll Learn
• What ISO 42001 is and why it matters for AI-driven companies
• How AI risk differs from traditional cybersecurity risk
• The biggest vulnerabilities in the AI supply chain
• How attackers are already using AI to accelerate cyber threats
• Why governance frameworks are essential for scaling AI safely
• How regulations in the US and EU are shaping AI adoption
• The role of AI governance in fundraising and M&A due diligence
• Practical first steps to assess and manage AI risk
• How to balance innovation speed with compliance requirements
• Why AI governance will become table stakes for every business
⸻
⚡ Episode Highlights (Chapters)
00:00 Introduction and guest background
02:30 What is ISO 42001 and why it exists
05:00 Why AI governance is becoming critical
07:00 Who needs AI compliance the most
10:00 Regulation across the US, EU, and globally
13:00 Innovation vs regulation: finding the balance
18:00 AI supply chain risks explained
21:00 Open source AI and new attack vectors
25:00 Why AI risk management will be mandatory
27:30 AI in due diligence and fundraising
30:00 Future threats and AI-driven attacks
32:00 First steps for managing AI risk
34:00 Leadership mindset and decision making
37:00 Who owns AI risk inside organizations
39:00 Closing thoughts
⸻
🔗 Resources Mentioned
• StackAware: https://stackaware.com/
• ISO 42001 (AI Management System Standard): https://www.iso.org/standard/42001
Mehmet: [00:00:00] Hello and welcome back to an episode of the CTO Show with Mehmet. Today I'm very pleased to have joining me from the US Walter Haydock. He's the founder of StackAware. Walter is an expert in what he does, so, you know, we're gonna talk a lot about cybersecurity, governance, risk, compliance, and this is a topic which, in my opinion, day after day is becoming more important.
And the reason for that is, as everyone is aware, AI is everywhere. And in recent days, we started to hear also about security flaws caused by AI, attacks caused by artificial intelligence. And this is what will be the main discussion for us today: how to protect and how to be compliant. Without further ado, Walter, thank you very much for joining me.
The way I love to do it with my guests is, you know, I ask them a traditional question, kind of an intro, like your background, journey, and what you're currently up to, and then we are gonna start our discussion [00:01:00] right away. So the floor is yours.
Walter: Great. Well thank you Mehmet. Thank you for having me on the show.
Really appreciate it. As far as background, I'm the founder of StackAware, and we help AI-powered companies measure and manage cyber, compliance, and privacy risk to accelerate responsible AI deployment. I spent the early part of my career in the US government. I was a Marine Corps officer. I served on Capitol Hill as an advisor to members of the House of Representatives.
And then when I transitioned into the private sector, I worked at a range of software companies and understood kind of how business was done there. And I thought that I understood chaos and uncertainty from my time in government, but I saw a whole new level of that when I was in the private sector. Organizations spend a lot of time, money, and effort on security and governance practices that don't necessarily tackle the problems they're trying to solve. [00:02:00] And I founded StackAware because I knew there was a better way to come up with a comprehensive approach to accelerating AI deployment while managing the risk.
Mehmet: Great, and thank you again, Walter, for being here with me today.
I'm gonna directly deep dive into the state of what's happening around us, Walter, and we are gonna talk a little bit about the new — I think it's a new ISO, correct me if I'm wrong — ISO 42001. I started recently to see it more; usually, when you tell people about ISOs in the cybersecurity or compliance space, they think of 27001.
So if we want to take the audience on kind of a journey: what pushed to have this standard now — and sorry if it's a little bit of a loaded question — [00:03:00] what is it for? Like, who should be worried about this new standard?
Walter: ISO 42001 is an international standard for building an AI management system.
And an AI management system is a set of policies, procedures, and practices that allow you to deploy artificial intelligence in a way that lets you manage the risks effectively, and it is also focused closely on the effectiveness of your AI deployment. So there are requirements to track certain metrics, and organizations often evaluate speed of deployment, the AI expertise in their organization, and their ability to replace manual tasks.
So it is an all-in-one system for managing artificial intelligence deployments.
Mehmet: What was the urge, Walter? Now [00:04:00] you started to see that companies need to implement it, or have it as a standard. Because, from my humble experience — I've been in the tech industry, I would say, long enough — I've seen people take these audits or these standards as just check marks, just, you know, to be in the books.
But in reality, why is there really an urge for people to take these audits, this governance, and these risk assessments seriously, especially with what's happening around us in the world of AI and AI deployment?
Walter: It is definitely possible to have a paperwork-only ISO certification, and that's not something that I view as very valuable.
It's definitely something that people do just to check the box. [00:05:00] But the key usefulness of ISO 42001 is operationalizing how you manage artificial intelligence and the risk associated with it. So you alluded to the rapid growth of new AI tools, whether they're commercial or open source. If these are being deployed at a rate that's so fast, a haphazard or piecemeal way of managing risk will not really serve you, and it'll open you up to a lot of risks.
So what ISO 42001 lets you do is develop a repeatable risk management process when you are considering or actually deploying these new systems, models, and tools, to ensure that any risk that you take on is by design and not by accident.
Mehmet: Let me ask a quick follow-up question, Walter. Is it now becoming a must-have kind of [00:06:00] certification for companies? Because, if I may — and I want you to correct me if I'm wrong — in the world of SaaS, for example, we know they have to be SOC 2 compliant and they need to have certain standards. Within this wave of new tools that you just mentioned — and we are recording at the beginning of Feb, probably this will be aired by the end of Feb, if not later — so OpenClaw, formerly known as Clawdbot, and similar other tools, which I'm mentioning not as an affiliation but because they became very popular among people. So, as a company, whether I offer AI services or maybe I'm a startup utilizing AI, how important is it, today more than any [00:07:00] time before, to have these certifications and to make sure that whatever I have deployed, whether it's for my own use or for my customers, is up to the standards?
Walter: There are three main types of companies that would benefit from ISO 42001 certification. The first one would be AI-powered startups, and oftentimes these organizations only have the bandwidth to pursue a single certification or attestation. If the core of your value proposition is artificial intelligence, using ISO 42001 as that standard makes sense, because it covers security and privacy, but it also covers a lot of other things, like AI effectiveness. It covers the impacts to society and to individuals. So I think that's one area where ISO 42001 would be appropriate. A second place where ISO 42001 is appropriate is for [00:08:00] companies that are training on customer data.
We've seen a series of controversies related to companies like Zoom, like Slack, like DocuSign, that disclosed in a non-transparent, somewhat muddled way that they were using customer data for training, and there was a backlash in each of those cases. Now, whether it's justified or not is a different story, and I don't wanna lump all those companies together.
But the key issue is that people did not fully understand how their data was being used, and they did not have effective assurance that it was being used in a responsible and privacy-preserving manner. So having an external audit regarding your AI practices can help build that confidence with customers.
The third type of company that would benefit from ISO 42001 certification is a heavily regulated one, specifically in healthcare or financial [00:09:00] services, at least in the United States. Interestingly enough, there's already a law on the books in Colorado that specifically calls out ISO 42001 compliance as giving you safe harbor under certain circumstances if there's some sort of regulatory action against your company.
So in healthcare, there are a lot of concerns about data privacy and how AI is being used, and it's the same thing with financial services, because people view financial information as being quite sensitive. So being able to demonstrate that you have a sound, grounded program for managing AI risk would be a big benefit for heavily regulated companies.
Mehmet: I'm happy you brought up these example industries — healthcare is one of them. When you talk to leaders, whether in healthcare or any highly regulated industry like finance as well, do you [00:10:00] spot, I would say, some challenges for them in understanding what governance is — in the AI space, of course — what it is really about? Do you think they really understand how critical this is, beyond just having an audit for the sake of doing the audit? I'm wondering what's top of mind for these executives, especially in these highly regulated industries — mentioning here, again, healthcare and finance, because we know how sensitive the data they deal with is, and the amount of data, private data, they need to protect. So what's keeping leaders up at night when it comes to AI today and its governance?
Walter: In heavily regulated spaces like healthcare and financial [00:11:00] services, we're already seeing a new wave of regulations that are specifically calling out those use cases for special scrutiny and for additional safeguards.
So in the United States, states like Texas and Utah have specifically regulated the use of artificial intelligence in healthcare. Colorado, as I mentioned, has its own AI Act, which covers healthcare; it also covers financial services, insurance, employment, things like that. In the European Union, the EU AI Act is very broad in its coverage of use cases.
It includes certain types of healthcare and financial services scenarios, and then all across the world we're seeing regulators come up with a slew of new laws to manage the rollout and the use of artificial intelligence in those highly sensitive cases. ISO 42001 gives you a system for integrating all of these different requirements across [00:12:00] a broad set of jurisdictions, customer bases, and cultural considerations, and it gives you a machine that lets you build an actionable compliance system to make sure that you're meeting all of your requirements. And that's what executives are really concerned about at the end of the day: avoiding regulatory action and avoiding customer backlash. Having a certifiable AI management system is a great way to do that.
Mehmet: Great. Now there's also something that I bring up a lot with my guests here, especially when we discuss regulations and being compliant. Some people think that a lot of regulation might cause kind of a slowdown when it comes to innovation, and they might say, yeah, we understand, but this might delay our plans, for example, to [00:13:00] release a new feature, or delay our plans to get this new model up and running in the hands of our customers or maybe our partners. So what's the balance that leaders can strike here, where they still do what they have to do when it comes to all these regulations — making sure that we are securing, protecting, and governing our AI implementations — without having this slowdown of innovation?
Walter: There's definitely a balance between effective regulation and innovation, and I'll use two examples to show where it's not being done well. One is in the United States. For those who aren't familiar, the United States is a little different than most [00:14:00] countries in that we have a federal system where states can have their own laws.
It's possible for the federal government to preempt the states in certain areas, specifically with respect to interstate commerce, which I think plays in heavily in the case of AI. But unfortunately, we don't have a federal artificial intelligence law that's nearly as comprehensive as what the individual states are passing, and the states are taking different approaches to AI.
Colorado's taking a very heavy-handed approach, and I have had some criticism of their artificial intelligence act. I think Texas has done a better job in having a clear set of things that are prohibited — things that basically anyone in the world would agree people shouldn't be doing with AI — but otherwise they're allowing innovators to innovate. What I'd like to see in the United States is a comprehensive law to cover [00:15:00] artificial intelligence that specifically calls out things that are not allowed, but anything else lets companies innovate responsibly. On the European Union side of things, they took a very heavy legislative approach and were quite insistent that they were not going to change course at all, until the end of last year, when it became pretty clear that it was going to be difficult or impossible for most companies to meet the high-risk use case restrictions that the EU AI Act put in place.
And as of right now, those restrictions are coming into force in August of 2026. So a big problem there is that the harmonized standard, which would give companies essentially a playbook for how to operate and how to meet the requirements of the law, has not even been formally approved yet. And the Joint Technical Committee commissioned by the EU rejected [00:16:00] ISO 42001 as an acceptable harmonized standard. They had some reasons — I certainly understand why — but instead of taking something that already existed, maybe modifying it slightly, adding in some mandatory controls, they decided to essentially start fresh and come up with something completely new, and they've been way behind schedule in the rollout of this harmonized standard. Now they're going to need to either roll back the law or just let it come into force and have a lot of ambiguity and a lot of chaos in companies. So I think the United States and Europe are both doing it wrong in their own ways, and I'd love to see a clear, lighter-touch set of regulations to allow innovation to flourish while at the same time managing the unacceptable risks that I think everyone agrees are there.
Mehmet: Yeah, I think the discussions will [00:17:00] keep continuing. And it's very interesting how different geographies will adapt to this standardization. I know how complex it is in the US, but even if we take the EU, at least they like to standardize at the level of the whole European Union, across member states. And I think, I don't know, maybe Japan will come up with something; here in the Middle East, probably countries will have something, because we've seen these, I would say, divergences in regulations.
So of course there will be a base — let me call it this way — that they would take as a reference and then try to customize based on it, especially in [00:18:00] areas where everyone is pushing very hard for innovation. The first thing that happens when we try to think about how we can regulate something is that people push back: oh, you're slowing us down on innovation. I'm talking here about this region, this part of the world. I'm not sure how this will affect other regions, but it's an eye-opening topic, I would say, especially to see how this will converge later to some standard, similar to what we have done with ISO 27001 and other standards that at least we were all agreeing on.
Now I want to look in practical terms at the threat models of AI with you, Walter. When I was doing my research, I saw you talk a lot about the supply chain risks within the AI itself. So where is that exactly? Is it in the data that is used to train the LLM models? Is it in the way of development? Let me ask you this way: where is the biggest risk within this AI [00:19:00] supply chain, if we want to dissect it from training data all the way to deployment?
Walter: The single biggest risk to companies with respect to AI in their supply chain would be suppliers that are training on their data in a way that the customers are not aware of or didn't approve, because this could allow for the reproduction of sensitive information to parties who don't necessarily have a need to know.
Now, that doesn't mean that all training on customer data is necessarily wrong — it's key to be transparent about what you're doing, and frankly, there's no artificial intelligence unless you have training data. So having a firm "no training on our data" policy, from a customer perspective, basically says that you're not going to use AI, or you're not going to contribute to improving the effectiveness of AI.
So what I would recommend is having a risk-based approach [00:20:00] to what data you allow suppliers to train on. In the case of physical machinery, the performance data of a piece of automotive equipment is not really that sensitive. So, you know, how many revolutions per minute a wheel is making isn't something that is really going to cause any damage if it's trained on, and it'll make the product more effective.
Now, when you get into healthcare data, there's still a risk. If you are training on sensitive information, it's possible that you could reproduce patient data to parties that don't necessarily have a need to know, but there are safeguards you can take. For example, you can pseudonymize the data that is being trained on.
That means, for example, instead of saying this is Walter Haydock's X-ray, you say this is person 123456, and it's that person's X-ray. Or you can fully anonymize it by [00:21:00] completely eliminating any link between the data subject and the result. I give the example of X-rays because that is relatively easy to do, but you still need to be careful, because maybe someone has a unique bone structure or a piece of metal in his or her body that allows you to identify that person uniquely. So you need to take that into consideration when you're talking about training on data and protecting the information from a confidentiality perspective.
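As a rough illustration of the pseudonymization step Walter describes — replacing a direct identifier with a stable token before the data is used for training — here is a minimal sketch. The record fields and salting scheme are invented for illustration, not anything StackAware prescribes:

```python
import hashlib

def pseudonymize(record: dict, secret_salt: str) -> dict:
    """Swap the direct identifier for a stable, non-reversible token.

    The same name always maps to the same token, so records stay
    linkable to one another, but the mapping can't be inverted
    without the salt (which is kept outside the training dataset).
    """
    token = hashlib.sha256(
        (secret_salt + record["patient_name"]).encode()
    ).hexdigest()[:12]
    out = {k: v for k, v in record.items() if k != "patient_name"}
    out["subject_id"] = token
    return out

# Hypothetical record; field names are illustrative.
xray = {"patient_name": "Walter Haydock", "scan": "chest x-ray"}
safe = pseudonymize(xray, secret_salt="store-me-outside-the-dataset")
print(safe)  # the name is gone; only subject_id remains
```

Note the caveat Walter raises still applies: pseudonymization alone is not anonymity, because quasi-identifiers left in the record (a unique implant, say) can still re-identify someone.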
Mehmet: What about what follows training on the data, Walter? I mean, APIs — and again, back to what I was discussing with you, these agents — we've seen disasters, you know. I'm not sure, maybe some of them are exaggerated, but the risks are there, right?
So what other attack vectors are you seeing related to AI, other than [00:22:00] the things you just mentioned related to the data itself?
Walter: On the open source side, we're seeing an explosion in tools and projects that are quite powerful from a productivity perspective. An example would be OpenClaw, formerly known as Moltbot and Clawdbot. That is a very popular open source project that is completely free. However, with that project, there are many risks that come through your supply chain. For example, OpenClaw uses things called skills, which are downloadable plugins, essentially, that expand its capabilities. And a security researcher named Jameson O'Reilly did an experiment to see if he could inflate the popularity of a skill and have people download it.
He didn't actually do anything to their computers, but he determined that he was able to cause many people to execute arbitrary code on their [00:23:00] machines while they thought they were using OpenClaw skills. And he published his findings and warned people about this risk. The problem is that he's not the only one doing these types of things, and he's probably one of the few people that's actually ethical in the way he's doing this type of work.
There are malicious actors out there that are using, for example, OpenClaw skills to try to steal cryptocurrency and to perform data exfiltration. So organizations need to be careful when they are allowing these new AI agents to roll out. Some things I recommend are, first, applying physical or virtual sandboxes — so maybe create a dedicated virtual machine or use a dedicated laptop for your experimentation. Second, limit the type of data access these agents have the ability to get. That means don't let them pull confidential information right off the bat; maybe use them for marketing or [00:24:00] for generating images, things like that, until you're confident in what you're doing.
And then third of all, apply traditional software supply chain security measures. So do software composition analysis to understand the dependencies of these things, do code review for open source projects to understand exactly what they're doing, and then also come up with an allowlist of components that you can bring into your environment.
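The allowlist measure at the end can be as simple as a gate that fails a build whenever an agent skill or dependency outside the approved set appears. A minimal sketch — the component names here are invented placeholders, not a vetted list:

```python
# Hypothetical approved-components list for AI agent skills/dependencies.
APPROVED = {"requests", "numpy", "image-gen-skill"}

def unapproved(installed: set[str]) -> list[str]:
    """Return, sorted, every installed component NOT on the approved list."""
    return sorted(installed - APPROVED)

# In CI you would fail the build on any hit; here we just report.
found = unapproved({"requests", "numpy", "crypto-wallet-skill"})
print("unapproved components:", found)
```

A real pipeline would feed this from a software composition analysis tool's output rather than a hand-typed set, but the gating logic is the same.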
Mehmet: Yeah, and I think we're gonna keep seeing these kinds of new threat vectors popping up every few weeks, maybe even every few days, I would say, because things are moving so fast. And this is where I would ask you, Walter: your main service is to help organizations make sure they're following the standards. How important is it for teams to [00:25:00] rely on services like the one that you offer?
Because if I want to do something internally, in-house, I imagine the team would just spend hours, if not days, learning about everything that's out there, and this is where you need the outside expert coming in with all the knowledge. Are we going to see this becoming important not only for the verticals we just discussed but for everyone? And is this where your services become, I would say, not just a nice-to-have but a must-have? How are you seeing this in the next few months — I'm not saying in a few years, because things are changing so fast?
Walter: I think by the end of the decade, every company will be more or less an AI company, just like every company these days is more or less a cloud company. [00:26:00]
So with that in mind, having an effective AI acceleration and risk management program is going to be table stakes. Now, some companies may want to build that in-house, if they have the resources to have a full-time expert focused on it and the ability to track the rapid changes in technological development. That is certainly an option, and some bigger enterprises are doing just that. At the same time, it definitely makes sense to have an expert come in who's a specialist focused on the area of AI risk management, because, like you alluded to, there are so many things out there to keep in mind that it's easy to miss items if you don't have a dedicated system for evaluating and managing that risk. So if you want to make all the mistakes along the way and learn things the hard way, then you can certainly do it yourself.
You know, StackAware has worked with companies who've tried to do it [00:27:00] themselves at first, and then we've come in after the fact and helped them accelerate things. Or you can go right to working with an expert and get that outcome on an accelerated timeline.
Mehmet: One point I'm also expecting to pick up more — I'm not sure if you are seeing it already, Walter, at least in the US — is in the world of startups, especially when they're about to get acquired or maybe raising funds. There's the due diligence that used to happen and all this stuff. I have a feeling that their infrastructure, their APIs, the way they train on the data — all the things that you just talked about during this episode — are [00:28:00] gonna become part of the due diligence, where you have this report and audit. Because it's another game if you have built on non-standardized security measures, especially when it comes to AI.
Have you started to see these kinds of conversations coming to you, Walter, maybe from, I don't know, M&A teams, or maybe some investors who want to understand: is this startup following the best practices?
Walter: We're definitely seeing a lot of outreach related to investor due diligence at StackAware.
That's because companies where AI is central to their value proposition need to be able to communicate externally to stakeholders like investors, boards, and customers that they are deploying AI in a responsible manner, and also in a way that is globally recognized — and that's what ISO 42001 certification gets you.
It's not a guarantee of anything, but it gives you the ability to communicate [00:29:00] clearly to these external stakeholders, like investors, that you are following best practices and that you have a documented way of addressing the risks related to the core value proposition of your company.
Mehmet: I'm seeing it coming, and I'm seeing it also as a differentiation. I would not say a moat — it's a big word — but I would say a competitive advantage for companies. It reminds me a lot of what's happening today with, as I said, SOC 2: the compliance standard for SaaS companies became something where, if you don't have it, you might lose contracts. I know in the US you have FedRAMP and these standards in government, and sometimes you need advanced audits if you want to deal with more sensitive places related to the government.
So I'm expecting to see this more, whether you are a startup or already an established company, and, [00:30:00] as you said, sooner or later you're gonna have AI systems deployed, whether inside your product or relied on to deliver your services. So there is no escape from that, for sure.
Now, I'm not a big fan of asking these questions, but because things are changing really fast: would you be surprised to start to see some vectors that maybe we never thought about before? Because AI is becoming so powerful that it can start to create its own frameworks to try to find weaknesses in the AI stack of other companies. Are we expecting to see this anytime soon, Walter?
Walter: We're definitely already seeing attackers use artificial [00:31:00] intelligence to accelerate intrusions of target networks, primarily in vulnerability exploitation and attack path identification. That's where generative AI is incredibly helpful from the adversary perspective.
So it's going to be a race between defenders, who are using artificial intelligence responsibly and securely to protect their networks, and attackers, who are using it to generate phishing emails, to find new weaknesses in software products, and to find ways to navigate into the core of target networks. It's going to be a battle between the two sides to see who can move faster and more effectively in their pursuits.
Mehmet: Great. Now, for the technical leaders listening to us or watching us who've got AI in production and have never done anything about this before: what do you think — other than contacting you, of course — [00:32:00] is the first thing they should be doing now, before even getting any expert into the house?
Walter: A key first step in managing your AI risk is doing an effective inventory of all the systems and models you have in use. And this is easier said than done. Oftentimes companies will track this in spreadsheets, which become outdated very quickly. So having an automated and accurate way to track everything that's in use is really critical, because that lets you do the next step, which is a risk assessment of all the systems in use.
After you do your inventory and your assessment, you may determine that some of the systems are not within your company's risk appetite. Maybe they're training on sensitive data. Maybe the outputs are not reliable. Maybe you could be breaching a regulatory obligation by using a system in a certain way.
The [00:33:00] risk assessment will help you find that out. Then, once you're done with the risk assessment, you can go into control design. A key control would be having a policy on the use of artificial intelligence, but you could also apply technical controls like preventing prompt injection, avoiding data poisoning scenarios, and looking at the data provenance of the models you're using.
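The inventory → risk assessment → risk appetite flow Walter describes can be sketched in a few lines. This is a minimal, hypothetical illustration only: the field names, scoring weights, and appetite threshold are assumptions for the example, not StackAware's actual methodology or any standard's requirements.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One entry in the AI system inventory (illustrative fields only)."""
    name: str
    vendor: str
    trains_on_customer_data: bool    # transparency / data-leakage concern
    handles_regulated_data: bool     # e.g. health or financial records
    output_drives_decisions: bool    # unreliable outputs matter more here

def risk_score(s: AISystem) -> int:
    """Crude additive score; a real assessment weighs likelihood and impact."""
    score = 0
    if s.trains_on_customer_data:
        score += 3
    if s.handles_regulated_data:
        score += 3
    if s.output_drives_decisions:
        score += 2
    return score

RISK_APPETITE = 4  # hypothetical threshold set by the business risk owner

# The inventory itself — in practice this should be generated
# automatically, not maintained by hand in a spreadsheet.
inventory = [
    AISystem("support-chatbot", "VendorA", True, False, False),
    AISystem("code-assistant", "VendorB", False, False, False),
    AISystem("claims-triage", "VendorC", True, True, True),
]

# Systems exceeding the appetite need controls redesigned or retirement.
flagged = [s.name for s in inventory if risk_score(s) > RISK_APPETITE]
print(flagged)  # ['claims-triage']
```

The point of the sketch is the ordering: you cannot score or flag a system you never inventoried, which is why Walter puts the inventory first.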
Mehmet: Yeah, this is what I wanted to hear from you, Walter. As we're almost coming to an end, I have to ask you something not related to the standards, but maybe a little bit about your journey as someone who founded a company and has been doing this for quite some time now.
What kind of leadership mindset do you need when you have these hard conversations with your customers? [00:34:00] Based on your background in national security and your long history of working in prestigious places,
what can we learn from you here, Walter, about having these hard conversations and pointing out to leaders, "Hey, you're doing something wrong here"?
Walter: The way I interact with my customers is as an advisor, and that's how I think security and risk management folks should operate.
I try to enumerate as clearly as possible the risks and the potential impacts to the company as a result of what they're doing or not doing. With that said, I think it's up to a cross-functional business leader to make the final decision about how to proceed, because if you look long enough or do enough assessments, everything is a risk, and [00:35:00] you don't want to get paralyzed by your analysis.
Something I learned from the Marine Corps is the importance of decision-making; sometimes not making a decision can be the riskiest thing of all. I see a lot of indecision within companies: are they going to develop a risk management program? Are they going to deploy this AI tool?
Are they going to pursue this opportunity? It's easier to talk about it for a long time, and sometimes harder to actually do it. I would say: make sure you're 70% certain it's the right decision, document your rationale, and then move forward. By making decisions more rapidly, you will develop more information.
You'll find out what is true and what is not, and that will help you move faster than your competitors, and potentially even adversaries like cybercriminals.
Mehmet: Yeah, and I think the 70% rule has been [00:36:00] validated even by academic studies, because if you wait until you have all the information, you're going to wait forever; you'll never have full information, especially when we're talking about a technology that is changing this fast.
Otherwise we'd wait forever, so that's great advice, Walter. I also liked that you pointed out there should be someone who takes ownership and responsibility for the liaison and communication, making sure everyone is on the same page.
Who would that be, usually? Is it maybe a chief AI officer, since we're seeing these titles nowadays? Who do you usually see taking on these responsibilities or duties?
Walter: I think there are two things here. The risk owner for a system, AI or otherwise, should be [00:37:00] the reward owner.
That means a business leader who gets a bonus, who gets compensated if a certain product hits a revenue target. That person should be making the decisions with respect to risk, and if something goes wrong, that person should feel the consequences of that risk materializing. But conversely, if that person hits the goals and does it in a secure and responsible manner, that person should reap the rewards.
Those are the people who should be making the final risk decisions. With that said, there should be a full team supporting them in an advisory capacity when they make those decisions. In the organizations I work with, about 80% of the leads for AI risk management are the chief information security officer or the security team.
I've also seen privacy teams do it. I've seen legal teams do it. I've even seen data science teams do it as a kind of self-governing approach, and in some cases I've seen [00:38:00] dedicated AI governance functions. So the key is having clear decision ownership, tied to reward ownership,
and then a skilled and knowledgeable team of advisors to support that decision-maker.
Mehmet: Right, and I think it also depends on the organization's size and how the org chart is set up. A hundred percent. Final and famous question, Walter: how can people get in touch and learn more?
Walter: The best way to find me is to look me up on LinkedIn, Walter Haydock, and follow me for AI governance, risk management, and security posts every day. And if you're interested in working with StackAware, definitely send me a direct message. I am very responsive on that platform.
Mehmet: Great. I'll make sure I put the link to your profile in the show notes, and also the link to the StackAware website.
[00:39:00] Walter, I really enjoyed the conversation. It's a very important topic, and I'm happy we had it. It's not the first time we've discussed AI, but it is the first time we've talked specifically about the new standard, ISO 42001. I think we're going to discuss AI governance and AI security more and more, and why it's important for every business leader to protect
their own estate, I would say, and at the same time protect their customers. Thank you very much for sharing your knowledge and your experience with us today. And as you said, people can reach out to learn more, and if they need any help, of course, they can get in touch with you.
With that said, this is for the audience: this is how I end my episodes. If you just found us by luck, thank you for passing by; I hope you enjoyed it. If you did, do me a favor: subscribe and share it with your friends and colleagues. And if you're one of the loyal [00:40:00] listeners
who come all the time, keep listening, and keep pushing us up the Apple Podcast charts in different countries: you've been doing great since last year, since 2025, and the trend is continuing into 2026. Thank you very much, and thank you for all the messages, the comments, and all your suggestions, sometimes for topics and guests.
And as I always say, stay tuned for a new episode very soon. Thank you. Bye-bye.