#573 AI Is Becoming a Commodity. The Real Game Is Value and Control With Shashank Tiwari
AI is no longer just about models, prompts, or experimentation. It is becoming infrastructure.
In this episode, I sit down with Shashank Tiwari, CEO and Founder, to unpack one of the biggest shifts happening right now: AI is rapidly commoditizing, and the real value is moving up the stack.
We explore how enterprises are moving from hype to real ROI, why AI agents introduce new risks, and how governance, control, and reliability are becoming critical in the age of autonomous systems.
This conversation goes beyond the noise to focus on what actually matters for builders, operators, and investors.
⸻
👤 About the Guest
Shashank Tiwari is the CEO and Founder of Uno.ai, a Silicon Valley-based company focused on AI-driven automation in governance, risk, and compliance (GRC).
With deep expertise in enterprise systems, AI agents, and risk management, Shashank works closely with large organizations in highly regulated industries such as banking, healthcare, and critical infrastructure.
His work focuses on automating human-centric tasks while maintaining accuracy, reliability, and control.
https://www.linkedin.com/in/tshanky/
⸻
🔑 Key Takeaways
• AI models are rapidly becoming commoditized infrastructure
• The real differentiation is shifting to applications, workflows, and execution
• AI agents introduce new categories of risk and governance challenges
• Enterprise AI adoption is moving from experimentation to ROI-driven use cases
• Automation must balance productivity with reliability and control
• The future of AI is solution-centric, not model-centric
• Coding is getting faster, but building products remains complex
• AI may increase productivity, but it also amplifies risks at scale
⸻
📚 What You’ll Learn
• Why LLMs are becoming the “operating system” of AI
• Where real value is created in the AI stack
• How enterprises are measuring AI ROI today
• Why AI agents create new threat vectors
• The challenges of AI governance and compliance
• Why “vibe coding” does not replace product thinking
• How organizations should think about control in autonomous systems
• What the future of AI applications looks like beyond hype
⸻
⏱️ Episode Highlights
00:00 Introduction and guest welcome
02:30 From generative AI to AI agents: what changed
05:00 Why AI is becoming commoditized
07:00 The myth and reality of AGI
10:30 AI and new risk landscapes
14:00 AI as a new threat vector in enterprises
18:00 Governance, compliance, and control challenges
22:00 Shadow AI and visibility gaps
26:00 Why you cannot “opt out” of AI
29:00 From hype to ROI: how enterprises are thinking
34:00 AI productivity vs real business impact
37:00 The reality of AI coding and “vibe coding”
43:00 Why building products is still hard
48:00 AI, creativity, and the future of development
51:00 What’s next: automation of human-centric work
54:00 Elevating GRC beyond processes
56:00 Closing thoughts
⸻
🔗 Resources Mentioned
• Uno.ai
• NIST AI Risk Management Framework
• ISO 42001 (AI Management Systems)
Mehmet: [00:00:00] Hello, and welcome back to an episode of the CTO Show with Mehmet. Today I'm very pleased to have joining me, for the third time, a great friend I was actually introduced to by a good friend as well, I believe three years ago. I have with me today Shashank Tiwari. He's the CEO and founder of Uno.ai. Shashank, it's a great pleasure to have you again here today.
I'm happy that we are recording this almost at the beginning of the year. We're still in January at the time of this recording, so it's good to see you again and also discuss maybe a little bit about, you know, what you've been doing in the past year. And also we're gonna discuss some trends. Without further ado, Shashank, just for folks who maybe are listening or watching this for the first time, just a reminder, you know, who are you and what are you currently building?
Shashank: Absolutely. Well, firstly, thank you so much for having me one more time. You know, it's been an absolute pleasure getting to know you a few years ago, and I think it's been quite awesome coming to your show over and over again, and as you mentioned, a third time, which [00:01:00] is, uh, quite outstanding. So I feel very blessed, very thankful for that.
Um, for those who may have not seen me on this show or have never heard of us: you know, we are a Silicon Valley based startup, um, a little over three and a half years old, leaders in the area of AI agent automation and AI agent platforms in the world of GRC, ERM, you know, TPRM and BCM. These are essentially the sort of sub-buckets of risk and compliance that are very critical for companies that are in banking, healthcare, you know, large critical industries, and, you know, sort of the public sector, right?
So that's the sector we kind of service. And our main value prop is very simple, right? Like, we try to automate away a lot of human-centric tasks. And by doing that, we obviously give a massive productivity boost. But while we are doing that whole productivity boost, we like to balance it with what is super important and critical for the enterprise, which is accuracy, reliability, you know, a system that is dependable, right?
So [00:02:00] that's what we bring to the table essentially.
Mehmet: Great, and thank you again, Shashank, for being here with me today. Um, I will start the discussion because, you know, I think it's everywhere in the news. It's top of mind maybe for a lot of decision makers in the industry when they hear about AI and, you know, AI agents. Things changed since we last spoke, because of course I remember the first time and second time the majority of the talk, you know, was about LLMs.
It was about, you know, the generative AI part of the house. Now we are talking a completely new level over here. So maybe if you can start, Shashank, on a high level, not necessarily just in your space: how have you seen this shift, and then, you know, how is this also shaping the way you decided with the team on how the product [00:03:00] should look in the age of agentic AI?
Shashank: Yeah, no, it's a very relevant question. And, uh, you know, the reality, Mehmet, is that the pace of innovation has been incredible, uh, not only last year, but, you know, over the last two, three years. Uh, so we've seen a lot of trends come and go. You know, in fact, we've seen a lot of this development in the broader world of AI where it seemed like this was really game changing, you know, this is going to really, kind of, you know, um, unearth some completely new possibilities. And then, a few months and sometimes even a few days later, we see yet another advancement which kind of out-competes it and almost makes it irrelevant or redundant.
Um, so the pace of innovation has been very fast. There's no denying that. Um, and certainly these systems, you know, primarily LLMs at the heart of it, but multiple other things in and around it, have accelerated the ecosystem at large, right? So it's about a lot more people who have gotten involved in this, uh, everything from users on one side, to builders and developers, [00:04:00] to people who integrate with it, to traditional companies, you know, who are sort of waking up to that reality of LLMs.
So there has been a lot of action. There's no two ways about that, right? Um, now, what I do see, despite all this excitement and despite all these sort of, uh, achievements of large language models, and in general the sort of, you know, call it transformer-based AI that has happened so far: um, being deep in the trenches, and having seen this technology mature, and, you know, having seen how it's going, um, I feel at this point in time, actually, it is not moving into further specialized technology. In fact, what we are gonna see is that it is gonna get more and more commoditized. And I think some of that is evident anyway, right? Like, some of that is pretty clear, because there are so many foundational models and they out-compete each other, you know, on sort of a weekly basis, each trying to be top of the leaderboard, and the next one comes in and displaces that for a little bit.
Um, and also the growth and [00:05:00] advancement is, again, happening in what I would say is a certain type of activities or a certain type of tasks. So eventually what's gonna happen, in my opinion, is that those kinds of tasks would get very commoditized. In other words, you will have multiple choices, uh, to accomplish the same end goal.
Those LLMs might come from big labs here in the United States, you know, close to home here in Silicon Valley. Or they might come from open source, you know, sort of cohorts coming out of Europe, or even China, or, you know, for that matter, even the Middle East. Um, or it might be from somewhere else. You know, every other country, every other geography is also kind of, you know, stepping up to now have their own local or, you know, sort of localized LLMs in play.
Uh, so eventually that'll become like the operating system, effectively, right? Like, there would be certain capabilities which are exceptional. But the funny thing is, the more exceptional and the more widespread it gets, uh, the less important and less special it becomes, right? For example, like today, if you were to look at your laptops, or for that matter [00:06:00] even your phone: the fact that it can multitask so many things at the same time, the fact that, you know, the level of audio-video rendering that these devices in our hands can now accomplish, they have come a long way from where they were even a few years ago.
Right. You've got a powerful computer, for example, in your pocket, right? Each one of us has it, and it's there everywhere, right? Pretty much available around the globe, and the prices have fallen, and, you know, they're there in multiple brands and multiple form factors, right? It's not just about one kind of technology. And the same thing is gonna happen to large language models. They'll become very commoditized, they'll become extremely prevalent, and they will probably become cheaper, right, over a period of time, because competition will drive the prices down. Uh, so then coming to your point of, well, where's this agentic tech or other things going: I think that's where we will see the emergence of what I would call more solution-centric or new types of approaches being built on top of this, right? Because it is getting pushed down the stack to become [00:07:00] commoditized, become the operating system, you know, become taken for granted. The question is, well, what comes on top of it, right? And so what is that special secret sauce or unique advantage that helps us all build something more interesting, right?
Or something more meaningful. So I feel like those will be different abstractions. We don't exactly know where it'll go, but it'll definitely be more domain-specific, more specialized techniques. It might be an amalgamation of techniques, might be completely new techniques. Um, it might be a mix of some things that are still in the realm of science fiction and some others that are good technologies but are reborn, you know, with new sort of adaptability and new applications, right? So that's what we're gonna see, effectively, I think, in the coming months and years. Um, as much as, you know, the popular media might say, no, it's gonna be smarter LLMs and AGI is gonna come, I don't think we are going there, actually, to be very honest.
Mehmet: You don't think so?
Shashank: I don't think AGI is coming the way it was originally conceived.
Mehmet: Okay.
Shashank: And this is, yeah, so this is a debate that, I think, uh, [00:08:00] we can all have all day long, right? Because of
Mehmet: course,
Shashank: by the nature of the fact that nobody defined very precisely what AGI meant. It was like a catch-all term that was thrown out there, uh, you know, saying this is super intelligence, right?
And so then we went down to a very foundational sort of question of what exactly super intelligence means, right? And so if we say super intelligence means the ability to remember things at scale or, you know, do human-like tasks at scale, well, maybe then AGI is already here, right, by exactly that definition.
Right. So it all hinges on the definition. But, you know, I think of AGI more as a thinking machine, you know, a human-like machine with fresh thinking, that can think on its own. Uh, you know, the classic sort of sci-fi incarnation of Skynet, you know, that's going to be as smart as the human. Um, I don't think LLM technology is getting us there at all, as much as anybody would like to, uh, say that.
Mehmet: Yeah. So, and this, you know, of course we will not debate this here, as you said. We're gonna take this, just, you know, my [00:09:00] 2 cents on this matter: I strongly agree with you when you said, like, if we think of a machine that can do a large number of tasks at scale, we are really having this now.
Shashank: Right?
Mehmet: Uh, and we see, like, you know, actually the majority of these LLM-producing companies, they now have these, uh, you know, very good agents that are good at what they do. I think they fixed the issue of, you know, the memory. Of course we still have the hallucinations and all these things, but, I mean, mm-hmm, uh, you know, the ability to have, like, extended memories, and, you know, being able to do this deep thinking, as they call it sometimes, you know, chain of thought, like each company, mm-hmm, they have their own terminology. Yeah. So if you consider this AGI, actually, by the way, by itself it's a great achievement, I mean, absolutely, what we have. Um, the way I see it, AGI is a machine that comes up, without me writing a [00:10:00] prompt, to go and, mm-hmm, do something which is meaningful for me. This is AGI, right? Like, for example, of course, you know, contemplating by itself and then saying, hey, like, we need to go and do research in this area, let's say, of physics or chemistry, mm-hmm, mm-hmm, and then doing an actual experiment, like, the same as a human will have the logic.
I think we're not there yet, and I'm not sure if we're gonna have this anytime soon. But yeah, it's a great achievement that we have been seeing for the past two, three years now. Now I want to focus a little bit, uh, Shashank, in terms of GRC, and for people who don't know what GRC is: it's governance, risk and compliance.
Shashank: Right.
Mehmet: Now AI actually added also its own challenges.
Shashank: Oh, it did, right.
Mehmet: For me, mainly risk, I would say. Mm-hmm. You correct me on the governance and compliance part, 'cause I'm not the expert there. But at least from a risk perspective, we started to see AI being a threat itself.
[00:11:00] Um, from a compliance perspective, I just, I think, shared a couple of minutes ago how, eh, you know, even people at a high level, in what should be the most secure places in the world, they go and they do stuff with LLM chatbots that they are not supposed to do. So how did AI also change the landscape of how we as humans see GRC in general?
Shashank: Yeah. Yeah, I think it's a very valid point, you know, what you're saying. And, uh, I think more than just GRC per se, right, which of course is a broad category of its own, you know, there's governance, which is about how we decide and run, you know, sort of our organizations, to risk and compliance. Uh, the broader, I guess, question at hand, and, you know, something we should think about, is: um, how is AI making us work differently, uh, in our current sort of organizational [00:12:00] setup and workflows, right? And so what is really happening is that, as this technology is becoming smarter, as this technology is sort of expanding, it is beginning to do what I would say are human types of tasks, right? And, I mean, in fact, to be more specific and give some examples: you know, we've got now pretty smart coding agents that write code on behalf of people, uh, the so-called vibe coding, right? That is becoming bigger and bigger lately. Or, uh, you know, AI stepping in to do a lot of human types of, you know, thinking or reasoning tasks, you know, where essentially it's, again, authoring content or, you know, helping make decisions, or sometimes even making decisions, right, as a part of the agentic flows. Um, the first question that comes up is, of course, you know, um, the autonomy around these systems. So these systems are autonomous. If the systems are gonna make decisions, uh, you know, they will obviously also have impact adversely as much as they will impact things positively, right? And especially if they're malicious in nature, or, you know, they in some way are made to behave in a malicious manner, they [00:13:00] could be even more harmful than humans, because, uh, they could really have impact at scale.
Right? So that's kind of the first premise that one has to think about. And then of course there's a second aspect of it, which is the humans and their interaction with these systems. And it could even mean something as simple as what we put into an LLM prompt, or what we expose ourselves to when we're interacting with these public, you know, large language models or AI systems.
You know, what are we divulging, you know, what kind of information are we making public in return, or, you know, putting out in the sphere, without realizing that that in itself imposes a whole bunch of risk and compliance parameters, right? So certainly, I think, as much as the technology steps in to help you maybe get better at managing some of these, simultaneously it's sort of opening its own new threat vector, right? Simultaneously it's becoming its own enemy, in a way, right? Where it's now, uh, exposing us to newer threats, completely new types of risks, completely new types of compliance nightmares that we hadn't thought of, right? And, [00:14:00] uh, on that trend, in fact, I'll mention, you know, like a couple of days ago I was speaking to a healthcare leader, um, who was, you know, sort of deep into this thought process around business continuity, as an example. Mm-hmm. And this healthcare leader was really sort of, uh, mulling over the fact that, you know, our supply chains have become very distributed and very global, and with so much of also, you know, just the broader sort of changing world order globally, uh, risks have increased in any case, right?
Because the threat vectors have increased, our dependency has become, you know, a little fuzzier than ever before. And so business continuity should look at vendors and third parties and fourth parties, et cetera, as a part of their risks. And then, certainly, as we were discussing, the fact was that now not only your systems, but your vendors' systems, your vendors' vendors' systems, and then of course, you know, everything else you consume, is also being influenced and powered by AI, right? In some [00:15:00] form or the other, they're all trying to infuse and bring in these AI-level automations. So now you're not only fighting with, let's say, a supply chain risk, which might be more systemic and human-centric, but you are now also dealing with business continuity in the era of things that you don't even fully understand, and things that are unpredictable, you know, and things that are interacting with systems that may have autonomy without us fully realizing what the boundary of that autonomy is, right?
So you might see the next big outage where flights won't take off and the power grid might fail because of some AI deciding to do it. Who knows, right? And it won't be surprising. And the question then would be, well, what does recovery look like in that universe? Because, um, today, if a switch fails or a malicious person does something, we know how to recover from those. How do we recover from an agentic system whose next move might be worse than the one that caused the outage? Um, so there are now a lot of those risks or newer kinds of, you know, sort of, uh, threat vectors [00:16:00] that are coming upon us. There's no two ways about that. You know, that's becoming front and center for sure.
Mehmet: You know, I have a lot of follow-up questions. I'll start with the first one that came to mind now. Um, one of the things that I read a lot about last year, and of course it's still continuing in 2026: you know, the problem in general, when it started with generative AI and people wanted to do this AI transformation, the problem was sometimes they didn't have enough data. Sometimes, you know, they didn't have enough... actually, they didn't document their policies, uh, processes, and so on.
Sometime, you know, they don't have enough. Uh, actually they didn't document their, their policies, uh, processes and, and, and so on. So. Are we still seeing the same thing now? Especially like in the domain you are in, like I have. And this is the other day actually, because you mentioned outages. I was discussing with someone and you know, I had a guest who was talking also about it, but he's more into the DevOps space and they're trying to mm-hmm.
Do like this, [00:17:00] the switch that can, you know, actually get you back. Um, so I said like, do companies actually have. Well, documentation and I have little bit, I can claim I have a little bit business continuity background in previous lives,
Shashank: right?
Mehmet: Yeah, yeah. With all these complexities now, do people have this in front of them?
Can they go and say... Of course, I don't want to point to a vendor, but it happened, so it's in the news. So let's say when the Cloudflare outage happened, or, for example, let's say one of the hyperscalers has an outage, do people actually have, Shashank, today a full map of what is affecting what in case of that? Because, you know, before doing anything, we need to have these. Are we still having these challenges in the enterprise today?
Shashank: Uh, very much so, I think. And, um, well, let me sort of set more context, right, before just giving a dry, you know, [00:18:00] simple answer of saying whether it is or it is not. Um, there are two or three factors that have occurred over the last year, right? And some of them are good and some of them are confusing, let me say that.
Mehmet: Right.
Shashank: Um, so for example, like, uh, yeah: do companies now have some sort of an AI governance policy and AI governance structure? Are they thinking about it? Uh, I think the answer is generally yes, right? Especially companies that have started using AI quite a bit, or, you know, where it's been quite present in the company. There is a policy, right? There is a governing structure or a body or some mechanism, and yes, they do think about it and discuss it and, you know, try to put some safeguards and structures around it.
So, like, you know, do they do that? Is there something of that sort? The answer is probably yes. Right, now the question is: how effective is that AI governance in place? Or is it effective at all? Um, for that matter, is it even relevant for it to be effective? Right? Um, it depends, [00:19:00] right? Like, that's where it starts becoming a little interesting, because, um, it's one thing to put a policy down on paper and another to kind of see how it manifests in terms of implementations and controls and, you know, downstream ramifications, and whether they are getting adjusted to the changing reality, which is changing every day, practically.
Right. Um, I think it's a whole spectrum. There are some companies that are actively trying to, uh, you know, discuss things around, hey, what does identity in the world of agentic AI look like? Mm-hmm. For example, that is quite a conversation. We even saw some proactive acquisitions lately where, you know, companies have jumped in and bought companies that are working on agent identity.
Uh, so there's a bit of that going on on one front, and we don't even know what that means. We don't even know how it'll pan out. But, you know, it's an interesting topic. So there are people who are kind of getting deep into the, you know, the technicalities and ramifications, and how you can control them, or should you even control them, you know, what does the boundary look like, et cetera, et cetera. Right. So there are some companies who are there, and then there are [00:20:00] some others who are treating this the more traditional way and saying, oh, can we put a firewall around it? Uh, can DLP solve my problem? Right? And some of this, yes, it will solve. Some of this, for sure it will. But this is a little more complex, because, you know, the moment you have AI doing human types of work, it's beyond simple filters, right? Because ultimately those are, you know, some form of filters, even though they're smart filters. This is also about the fact that, you know, by itself, an activity might look just normal, but it may have some interesting downstream activity or downstream impact, right? And so that's something that you may not even be able to control, right? Um, so that's something people aren't understanding, right. And then, last but not least, I think the more important bit is, uh, what are the real sort of on-the-ground controls and on-the-ground ways to, uh, manage it, to monitor it, to be on top of it, right?
Right. Now, some are taking the traditional approach, and there's nothing wrong with it. I think that's good. It provides a structure where [00:21:00] they're saying, well, let's get certified, you know, and let's follow a governance mechanism. There are some companies that are taking that route, where they're saying, okay, I'm gonna go follow the ISO 42001 spec, or use the NIST AI RMF, or, you know, some other sort of AI guardrail or AI framework. And there are quite a few available now, right? So you can pick the one that you like and, you know, use that as a sort of governing structure and say, okay, let me now see how I implement it and what I do. Um, and some others are saying, well, that's too formal, and that's more of an, uh, you know, exercise that gets me an auditor and gets me a certificate, but I don't know if that really helps me buy some real coverage.
So I'm gonna go the other route and, you know, start looking at it more from the bottom up: who's using it, you know. And then those folks are now doing a lot more of the same thing as the initiatives to figure out shadow IT or shadow, you know, sort of cloud instances being spun up in the past, and those were looked at as threats. Well, these guys are more in the discovery [00:22:00] mechanism. They're saying, let's focus a bit more on discovery. Let's try and understand where our employees, where our developers, where our teams are actually taking advantage of AI and, you know, how it is impacting us. And, you know, let's go figure that out. Um, but I feel, in all of them, and while all of them are good mechanisms, I don't think there are any bad mechanisms.
But all of them have holes, right? And sometimes the depth and breadth of these holes is unknown, right? So, right, um, that's where the question comes up, right? So, for example, I'll give you a simple example. Sure, let's say you have all the guardrails on shadow AI and nobody's deploying an AI application in your company. Let's just assume that, you know, a good clean slate. Um, even that doesn't mean you're protected from AI, because the IDE that you may be using, the payroll application that you may be using, the, you know, the CRM that you may be using, um, the communication system that you may be using [00:23:00]: that may be using AI behind the scenes.
Right. Um, and it probably is, right? So then the question is, um, your data is still going there. Your decisions are still being impacted even if you are not directly doing it, right? And so then the question becomes, um, you know, what all do you switch off? Are you gonna be one of those companies who will, you know, switch off everything and start living in, you know, what I would call today's corporate wilderness? Because that's what you'll end up, uh, having to do: basically opt out from everywhere, not allow any employee to use any of these AI features, you know, switch off AI from everywhere, right? And I don't think that's practical, and I don't even think it's possible, forget practical, right? Because there may be products today where you still have the option to switch it off. There might be products six months out, a year, two years out, where there might be, like, no product without AI: if you switch off AI, there is no product, right? Like, uh, right, there might not be one, right? So, [00:24:00] that's the whole sort of ramification here, right? It's good that initiatives are on, but I think we've got some distance to go.
Mehmet: Absolutely. You know, I'll tell you a funny story; maybe it'll come across as funny to you. So, uh, I think last week or the week before, someone called me and he said, like, hey, you meet a lot of people, and, you know, I know you follow the technology. I'm looking for a very special, uh, solution. I said, yeah, tell me, let's see if I can help, sure, for sure. He said, I'm looking for something that would not allow my people to, uh, put, like, company data in these LLMs. I said, inside the company? He said, no, outside of the company. I said, okay, I'm sorry to tell you that you need to wait until Elon Musk's Neuralink comes out, and then you can plant a chip in your employees' brains so they don't go and spill it out.
He said, what? I said, you can't. And I just gave him a simple example. I said, look, [00:25:00] you know, some of the company's data is already online, like, you know, things that you know about, and some of it is online and you are not aware, and this is where threat intelligence comes in. I said, but, you know, there's nothing that can stop, let's say, you know, someone, I will not say the CFO, but maybe someone who's in the finance department, who kind of, you know, knows the P&L. On a high level they still remember the numbers, and they're gonna go and put it in ChatGPT, maybe, uh, innocently, just to draw them a chart. Right? And then I said, they don't have to do it inside the company, because they can go home, they can use their own personal accounts, and they can spill this information out. I said, it's hard. So I think, you know, we used to talk in the early days of cybersecurity about, you know, awareness, and, uh, you know, how you need to, for example, not share your password with people you don't know.
Like, don't, mm-hmm, don't do that. I think now the AI part of, you know, [00:26:00] getting people up to speed, I'm not sure if you agree with me, Shashank, but this education, like why it's important, also, um, you know, explaining to people why they shouldn't put this information in the LLM, right. And, uh, probably more with companies to allow people to use LLMs but within the company boundaries, so they don't feel they need to do it in secret when they go back home. So maybe, I don't know how it's done, right. So this is on the point you just mentioned. Um, I don't know if you have anything you want to comment on this before I ask you the next question.
Shashank: No, no. I'm agreeing with you. I think that absolutely true. I mean, it is just becoming more and more pervasive, right?
In everything that we do as a part of our, not only organization, but you know, just generally life at large and in everything that we consume. Um, the same way that, you know, if you make an argument today. Saying somehow you cut out technology. Right. And there are some people who are able to do it because like, hey, my privacy gets invaded.
I don't want any [00:27:00] technology to be tracking me. Let's just say that you are on that quest. And sure, you get rid of your cell phone, you become a hermit, you don't go anywhere online, you get rid of all your online personas. Yeah, you might go that distance, but there are extremely few people who might be able to survive meaningfully in that kind of a mode.
Primarily because the whole world has become so connected. So it may not be you, but if you have to speak to somebody, well, you have to get on a messaging app. If you need to go and interact with somebody, you're probably in some sort of a network, social or otherwise, right? And
Mehmet: right.
Shashank: You know, uh, if you have a device.
I'm sure we still had some options till some time ago to say, hey, you can use a flip phone versus a smartphone. I'm pretty sure a few years out there won't be such a thing as a flip phone. It might be a vanity thing. Maybe it would be like today you buy those hand-cranked watches or something.
I mean, it'll go into that league, you know what I mean? It'll actually be an expensive [00:28:00] item, a collector's edition, but everything else will be smart, a so-called smartphone. And then you take that and notch it up with AI. Like, yeah, it's coming. I don't think there's any escaping it.
Escape is not the solution, for sure; that's what I'm agreeing with you on. I think it's a matter of managing it, driving awareness, helping people understand, and helping people make their own decisions. I think that's better than escaping, for sure.
Mehmet: Absolutely. Now I gotta ask you, Shashank, 'cause I'm sure, you know, with what you're currently doing at Uno, you speak with a lot of senior decision makers.
For whom, of course, cybersecurity is top of mind, but AI is also top of mind. So again, last year the headlines were full, sometimes from big companies doing this research, and they were saying, like, X amount.
I don't have the exact number, so that's why I'm [00:29:00] making it generic. Companies discovered that they spend this amount of money and get, I think, less than 10%, only some kind of ROI on their, mm-hmm, AI spending and all this. So now, in general and in what you do, what are leaders really looking for in terms of ROI?
What are the real metrics, I would say, that would tell you, Shashank, for example, what you are doing at Uno matters for us? Because, let's say, I don't know, cutting costs. Can you tell us, mm-hmm, a little more about the generic view of these senior leaders, whether in cyber or in the IT teams, who now maybe have imperatives to deploy AI, but at the same time don't want to waste their company's money?
Shashank: It's a good question. It's an important topic to [00:30:00] discuss and think about. So you're absolutely right, Mehmet. The initial spend was more euphoria, excitement driven, right? Like, every leader wanted to be on that wave, definitely didn't want to be left behind.
There's also a little bit of FOMO, if you may, that had been built up. So everyone wanted to jump in, get an AI project going, and bring this newer technology to their advantage, so they could grow the business faster and do more things, right? And then, as happens with every technology, and I don't think this is unique to AI,
you can jump into a technology, but the important thing is that you have to spend some time, energy, money, effort, more than anything else, to really map it back to what use cases would be applicable in your domain, in your company, in your business, where it could have the most impact, right? Or where it could benefit the most.
But when you're trying to rush through something in a hurry, you don't have that [00:31:00] time and energy to spend that effort. And I think that's what we saw happen in what I call phase one of AI adoption, where it was more of a rush to get AI projects out the door. And then of course there was also a lot of push from vendors and hype at large.
So people jumped in and AI projects were unfolded, POCs were conducted with no clarity on, forget ROI, even applicability sometimes. Things were built where it was unclear how that use case even applied to the business, or whether it was even the most relevant use case, or something they should really indulge in.
Right. So yeah, we went through that journey, and I think much of that was also talked about, from researchers in universities to industry analysts, saying, hey, 90% of projects are failing, people are disillusioned, et cetera, et cetera, right? But the ground reality is, that was the classic early adoption curve of every technology.
And so today, [00:32:00] as I speak to leaders, especially now, and it actually started late last year itself, the dialogue has changed from saying, hey, let's just have an AI project, to saying, let's figure out which are the right candidates where AI can actually help us. And as you rightly said, what does right candidate mean?
Well, right candidate, from an organizational perspective, simply means one of two things. It either increases the top line in some fashion or the other, or helps us manage the bottom line better, right? That's the beginning and end of most corporate decisions, really. And so that's where I think most of the ROI is being measured today.
Like, if you bring in AI, can we get more productive? Can we get things done faster? Can we do things at scale? Can we do more things with the same resources? Right? So that's the entire top-line-driven ROI on one side of the spectrum. On the other side, [00:33:00] there are a lot of pressures in the industry today, from competition to, again, just economic uncertainty.
There's also a desire to save costs, a desire to get more efficient, a desire to make sure that you manage your resources well. So there's a second sort of ROI being measured, by asking, hey, do I need to continuously hire people, or can I scale with technology beyond a certain point in time?
Can I save costs by, in some ways, using technology to replace manual activity, right? Can I save costs by avoiding certain steps I'd otherwise have to do? So it's one of the two that is now being measured quite furiously, I would say quite actively, in companies. And if that is justified, on either the top line or the bottom line, I think there's definitely still a lot of applicability.
And, funny you bring this up, in fact I was speaking to a customer today, towards the end of [00:34:00] my day. And this was exactly where the conversation was. We were really sitting together and trying to craft out the ROI, right? And the question was not just about blind ROI, but ROI where the situation is at least at par with, and maybe better than, the status quo.
And what I mean by that, I'll give you a more specific example. Sure. So for example, a lot of people are talking about vibe coding being a force multiplier, or about completely replacing your marketing agency with some AI chatbots, right? Like, these are very common things being discussed today.
Now, of course this is not a segment we work in, but I see these conversations happen all the time. And the question here is, okay, great, we understand the coding can be done faster, we know. But then what's the overhead of maintaining it? Can I actually take that to production quickly, or is this going to be more prototype code that looks fancy and gives quick wins?
But then ultimately I have to [00:35:00] spend twice the money to really take it live, right? So those kinds of questions are being asked today, which were not being asked even a few months ago, and this is a classic ROI question. It's looking at the total cost of ownership, as opposed to simply the small productivity gain from one task being done differently.
So I think that's happening today, everywhere, right? People want that reliability, the dependability of the current status quo. But of course, if we can do it faster, at more scale, with fewer resources, people will take it. That's where there is a massive demand out there.
Right. So yeah, that's where the conversation is today.
Mehmet: It makes a lot of sense, because I call it going back to basics, right? Like, yeah, regardless of what technology you're selling, whether it's AI or whatever it is, we talk about how I can take it from point A to point B, and this point A to point B can be reducing cost, increasing productivity.
As you said. [00:36:00] Maybe it's allowing them to increase revenue, because maybe they will be able to attract more customers, and all these things. And also one part which you play in, Shashank, because you deal with risks. So of course reducing risk, which actually leads to reducing cost as well.
Because if you are not compliant, you're gonna pay a fine. If you have a risk which is not uncovered or not well documented, again, it's a loss for the business. Now on the point of vibe coding, just my perspective, because there are two things I want to say that I would like to hear your opinion on, not about the vibe coding itself, but about AI writing code.
I think we know AI now can pretty much write, I would not say excellent code, but code that can show you something. Mm-hmm. And it's getting better.
Shashank: Yeah. Yeah.
Mehmet: But I put some blame, because some people see me also as a content creator in a way, although I'm not professionally a content creator.
[00:37:00] This is just a hobby, I call it, and a way to get people like yourself to share their thoughts. Of course, I share my thoughts as well. But a lot of content creators, I've seen them, I don't want to say I blame them, but they play a role in this hype about vibe coding. And I see these articles like, yeah, this is the last tool you would ever want.
And then you're gonna build all your systems, and, like, rest in peace, because now you can build I don't know what. And I'm telling people, look, I tried all the vibe coding tools. Of course not the advanced ones, I didn't go to Codex and these kinds of things, but I mean, the,
Shashank: right,
Mehmet: right.
The traditional suspects, like the Replits and the Vercels and, mm-hmm, the Lovables. And these things, I say, like, see, they are amazing. Amazingly good. Excellent.
Shashank: Right,
Mehmet: right. I said, but keep in mind, if you decide to use these tools to build products within your enterprise, prepare yourself to hire a [00:38:00] full-time product manager. Because it's not like you just built a CRM and that's it, you know.
Mm-hmm. What made the CRM a CRM is that it kept enhancing over time, kept adding features. You heard what your people want, and then you took it in. So it's not just about building the product with vibe coding. It's about what you are actually trying to achieve, and whether you have calculated, you know, and of course, again, we talk about the risks and the cybersecurity part.
Is it secure? And we saw a lot of examples where people built something using vibe coding tools.
Shashank: Mm-hmm.
Mehmet: Authentication is not there. Actually, the recent one, I think they changed the name, I forget what they call it. There was this Claude bot, spelled C-L-A-W-D. They had to change the name,
Shashank: right?
Mehmet: Yeah.
Shashank: Right.
Mehmet: It's been on the news. Yeah. So it's been on the news, and, you know, they figured, I'm just trying to get the right name now. Yeah. So they changed it to [00:39:00] some weird name. So, yeah, people are thinking AI is making things easy, but it's coming at the cost of security.
It's coming at the cost of the user experience also. What's your take on that, Shashank?
Shashank: Yeah, you said a lot of things, and I think almost everything you said is very relevant. So my take is that there are two or three things we need to think about, and we should maybe back up a little bit, up-level, and look at the broader, call it the productivity treadmill, if you may, if I were to call it that.
Right? Sure. So the adrenaline rush of vibe coding is high, there's no two ways about that, right? Like, today we are intoxicated with that vibe coding excitement. It's like, yeah, I can turn out applications in two days as opposed to doing it over longer periods of time. And if you really think about it, software development as a profession has been going in that direction for a few years, right?
From assembly [00:40:00] coding to higher-order languages, to IDEs, to libraries, to open source code, all of it already accelerating your development, where you don't have to write every single thing; you lean on them to build your systems. And then of course all the community help, with a very vibrant Stack Overflow and other things that were around for long periods of time. Developers have seen an increase in productivity and a change in the way they code.
That's already been happening, right? So that's not new. I know it has become more commonplace, and people are excited, like, yeah, I just write a prompt and it writes code for me. And sure, that's phenomenal compared to where we were before. But what we were doing even two years ago was phenomenal compared to what people used to do 10 years ago.
And it's amazingly phenomenal compared to what people used to do 20 years ago. I'll remind people who were probably not even born in that generation, having grayed my hair long enough, that [00:41:00] back in the late 1990s, when web servers were first generation, they were not as fast, and connectivity wasn't as smooth as it is today.
If you had a core dump or you had a bug, you actually looked up a physical manual. If I say that, most people would scratch their head and say, what are you talking about? Like, that's not a thing. But you actually looked up a physical manual.
If there was a database error or a programming error, you literally opened a book, a manual provided by the manufacturer of the technology, like an HP-UX book or an Oracle debug book, and you literally read through the stack traces, if you think about it in today's terminology. So that's where we were,
25 years ago, right? And now if you look at it today, that has transformed: from nobody ever opening a damn book to debug code, to people doing Google searches and going to Stack Overflow or something like that, right? And now that has [00:42:00] transformed with AI, where you don't even have to do that, right?
You write prompts and say, hey, this is what I would like to accomplish. So yes, the first thing is, productivity is improving, and yes, we'll keep going further. But as you rightly said, in terms of a product per se, the coding aspect is just a small part of the puzzle. And in fact, this is something that stuck with me for a long time, when I was early in my career and fortunate enough to work with some incredibly smart engineers over the years.
And there's one very simple statement, not even something completely new, but very meaningful, where the person was trying to emphasize that most code in the world is written once, but managed, evolved, and read multiple times over the years, over generations. And the same person was also emphasizing that many a time, when you read your own code that you wrote three [00:43:00] or four years ago,
it looks very alien. You don't follow your own code. And that's very common, because of course you moved on as a person; your ability to write code at that point in time was different. And then of course, when you're debugging and adding features on an existing code base, there's a certain level of understanding you have to have so that you can do it in a mature manner.
At that point in time, and even later, it's less about coding. It's more about understanding the user's needs, understanding the product management, as you rightly said, making architectural decisions, understanding the ramifications of what would happen if I changed the code. So there's a lot more decision making and activity beyond coding.
Coding is a very small piece of the puzzle. There's everything else outside of coding that really makes your product, and your code itself for that matter, more robust, right? Yeah. Now fast-forward that to today's time. When even the fastest of programmers were writing a few [00:44:00] hundred lines of code a day, right,
you had difficulty keeping yourself at pace with those few hundred lines of code a day. Now, with you writing prompts and the prompts spinning out hundreds of thousands of lines of code, perhaps in a single day, well, make your guess: how are you gonna stay on top of it? What do you know about, like you mentioned, security issues or design issues, or
stability issues, or design choices that have been made? You're not gonna go read the entire hundred thousand lines of code that has been turned out for you, because if you did, it would eat up the whole promise of productivity, wouldn't it? If you have to go back and review every line of code, what's the point of doing vibe coding?
You might as well just write it, right? So there's this weird dichotomy. More of it is not necessarily gonna make you better and faster. Sure, there would be some adrenaline rush. Yes, it would be excellent for fast prototyping. Yes, in some cases it would be incredibly beautiful. But it's not a [00:45:00] universal promise that will solve all problems.
So that's one part of the puzzle. The second part of the puzzle, which is equally important to think about, is that a lot of the code these LLMs produce today, and it doesn't matter which technology it is, is actually built, its premise and promise, on top of code that existed on the web before that.
So a lot of the current generation of smarts that, let's say, Claude Code, or any of the others for that matter, are putting out there, stands on the shoulders of open source code, right? Code fragments that were submitted to Stack Overflow, or blog posts that people wrote, because it's really the web that taught and trained these tools, if you may.
Now here's a dichotomy: as more and more people write AI-driven code, at some point, as is happening with AI slop in content as well, you are not really raising the [00:46:00] bar. A lot of the future generations of coding tools are going to be trained on code that is written by AI. Which means, effectively, if AI is making these errors or bringing these flaws in, it will actually infuse flaws at scale.
Yeah, if there are holes, there'll be holes at scale, right? So now you are going to inherit a different problem, right? Which is already happening with internet content, where people are already battling it. There is just so much AI slop content in the universe now that it's hard to even find meaningful stuff.
That's in fact often a pain now. On social media, people say, I don't wanna look at these AI-generated comments, show me a human-written comment, right? Or a human-made video. And so I think the same thing would happen with code. And then, last but not least on that spectrum, you would have seen this with the AI code generators.
And we are just getting deeper into the rabbit hole of vibe coding here, but vibe coding, if you may, [00:47:00] actually does very well with some common code, like Python and a few other such languages, and a certain type of coding style. And the reason for that is there was a lot more of that type of code available on the web for it to get trained on.
However, if you really step back, we have had technologies, tools, coding languages, and coding styles evolve over the years depending on what systems we have been building, and then some amalgamation. There was a time when it made sense for everything to be written in C/C++, because that was basically the lingua franca.
And then higher-level languages came about, and everyone had their day. There were the times of Java and the times of Python, especially in the data science era. We've seen Golang come along, we've seen multiple others, right? And these days, for example, Rust is kind of the new one people talk about, and I'm sure tomorrow's and the day after's languages will be different again.
Mehmet: True.
Shashank: [00:48:00] The question is, will you have enough corpus built out? Will you have enough creative people left around to really have that innovation and build out all those pieces in an organic, fresh manner, when everybody starts vibe coding? This is more of a human question I'm thinking of, right?
In the past we've come here because a lot of people, really hobbyists, open source contributors, educators, common people who just enjoyed coding, kind of accelerated that profession and accelerated the way coding emerged over the years, so that that is why we are here today, right?
As a society. And the question is, if everybody gives up on that, if everybody says, no, I would rather just write prompts and let the machine write it for me, I wonder if you could start seeing a plateauing and tapering out of creativity. Will there basically be a creativity void, which is a [00:49:00] very important part of coding, or conceiving, or thinking, or building something new?
So yes, we'll have a lot more clones of DocuSign, a lot more clones of Workday, and a lot more clones that look like Salesforce. But what about the next-generation Salesforce? Because when Salesforce was being written back in 2001, there was no Salesforce, right? When DocuSign or Airbnb was being thought of, there was no Airbnb, right?
So what I mean to say is that all these were new, fresh ideas that were born at the time, because they were conceived by people in creative ways, and then they took shape over the years in a certain way. And so I just feel like, is vibe coding alone the solution to the problem? And the answer, in my opinion, is no.
And as you rightly said, there's a lot more to it than simply accelerating your coding. So going back to that, and sorry for the long-winded answer here, but the [00:50:00] whole proclamation that, oh, all these SaaS companies are gonna die, and we'll see everything vibe coded away,
I don't buy into that promise. What I do buy into is that yes, there will be productivity enhancement. Yes, the style in which we code might change. Yes, the way we use these tools will certainly evolve. But is that gonna eat up the whole universe? Probably not. Not in my book. I don't think it's happening. There's so much more to the puzzle than simply faster coding.
Mehmet: Absolutely. I a hundred percent agree with you, Shashank, on this one. Yeah. And it's stuck in my head now, the same thing we used to call garbage in, garbage out about content. Now the same thing is gonna happen with code. Of course, time will show us what these machines will be able to do.
Maybe they will find a way to, I would not call it creativity, but more like maintaining this evolution of the way we do coding. Time will show us, of course. Shashank, as we are coming close to an end, what is [00:51:00] exciting about Uno.ai that you can share with us, maybe something coming in the near future?
Shashank: Yeah, you know, we have been stretching the boundary quite a bit and continuously innovating, while, I would say, staying grounded, right? Which is why we chose to do this sort of vertical, if you may, a very regulated space, a place that likes reliability and accuracy.
But we've been innovating fairly rapidly, both in terms of techniques and in terms of what the system can do. I'm very excited, very proud of the team and everybody who has contributed and been part of this journey so far, where we've gone from a fledgling idea into a real platform and system that multiple companies around the world are taking advantage of.
Everyone from very large Fortune 500 companies to big advisories has leaned on it, right? So very exciting on that front. Now, looking forward, [00:52:00] the way I think about it is that there are two pieces which are definitely in this year's plan, in 2026, and of course they'll keep evolving.
The one area where we definitely want to do more: we have made some really big strides around what I call automation of human-centric tasks.
Mehmet: Mm-hmm.
Shashank: So in our system today, you can not only do simple workflow automation, you can actually do complete automation around understanding a control, or writing a control test plan, or doing control analysis and validation.
So everything that a human would do, a lot of that gets automated totally on the platform, just as an example. Or even risk management, quantification, remediation assessment, even to the extent of reading and determining the tier of where things are. All this the system does, and it does it in a very reliable, accurate manner.
And so that's been quite phenomenal. So we wanna go further, right? We wanna do more and more of that, more of the human task automation, with the goal, of course, of keeping it accurate and [00:53:00] reliable. So that's something we are really looking forward to.
So that's a big one. The second one, which I think is equally exciting, is less, I would say, about technological innovation alone. I mean, there's a lot more technological innovation occurring, but we almost see ourselves in some ways as a movement in our industry. And that was not the mission when we started, but I think it has eventually converged into it, because we are also learning a lot more about the domain and sector. And what we see there is
a little bit of a dichotomy around the importance of this domain: governance, risk, compliance, third-party supply chain risk, business continuity. These are important topics, but because traditionally they have been very onerous or difficult to manage, many times they become very process driven,
where we are lost as humans more in the process than the spirit of what we want to accomplish. And I can't blame people, because it [00:54:00] is onerous, and it is very process driven in many cases. The fact that you have to create reams of paperwork or do a lot of work to bring it all together, sometimes that's very bogging down, right?
So you never elevate yourself to a point where you say, well, how do I mature my program? How do I really take advantage of it? You are always caught up in that daily fight, if you may. We want the industry to mature and go beyond the daily fight. And so this is where our first mission also comes in, where we are saying, if we automate away all the mundane, tedious, human-centric tasks that are not even fun, but are necessary before the higher, more advanced human decision making, hopefully we'll unlock the time, capacity, and mind space for people to really indulge in the
true excellence of the discipline, where people will enjoy having conversations about what risk really implies in their scenario. How do I manage it? Am I able to manage it? How far am I able to tame it? So we wanna take the industry [00:55:00] there. Like I said, that wasn't our goal when we started, but as we have started seeing more and more success, and also learning from our customers, partners, and the ecosystem at large,
that has certainly become our goal, right? So this year we will also spend a good period of time furthering that goal, and we don't wanna do it in isolation. We'll do that with the community, with the ecosystem members and partners. And we are certainly very open to ideas as well. So I very much welcome anybody who is
listening to this conversation and says, hey, I got an idea to make it better. Well, bring it on, right? That's something we definitely want to do. We want to remain humble, we want to remain hungry, and we wanna learn from you. So if you've got some interesting ideas we can add to the mix, and as a community we all then do more value-added work,
it would be fun, right? It would make us happy, and it would make the world better. So we are definitely very committed to that and excited about it. Those are the two things that [00:56:00] I'm looking forward to, and I'm hoping 2026 will be the year for more of that.
Mehmet: Great. And, you know, I have no doubt that you will have great success in that, Shashank. And just for the audience, I advise you to go to the website and get in touch as well. You know, I'm a little bit biased, because I consider Shashank a friend now. He's one of those people who delivers on what he says. Maybe for the audience listening for the first time, I advise you to go back to the first and second episodes.
We recorded them in 2024 and 2025, and you will see this journey and the path that Shashank has followed, delivering on something that really mattered to the customers. I would call it, as you said, grounded over hype, which is actually what we need a lot these days.
So, Shashank, I can't thank you enough. I know how busy you can be, and you stayed late into your night today for this [00:57:00] great conversation. So thank you very, very much for doing so. I'm gonna put the links in the show notes so people can get in touch. And this is how I usually end my episodes.
I tell the audience, if you just discovered us and you like it, please share it with your friends, colleagues, and as many people as you can, because we're trying to make an impact here. And if you are one of the people who keep coming back again and again, thank you very much. Everywhere I see you, you share the podcast with your friends.
I see that you also talk about the show, even without me being there. So I appreciate that, and I can see it in the results, because the traffic doesn't lie. And I can also see the top 200 Apple Podcasts charts in different countries. I'm still waiting, my friends in the US; it's the only country in North America we haven't entered yet.
So waiting for that. But yeah, this cannot happen without all your support. And as I always say, stay tuned for a new episode very soon. Thank you. Bye-bye.
Shashank: Thank you so much, Mehmet, for having me again on your show.
Mehmet: Thank you.