#598 AI Pilots Don’t Fail. Enterprise Systems Do | Omid Pakseresht, CEO of Goodfolio

In this episode of The CTO Show with Mehmet, Mehmet sits down with Omid Pakseresht, CEO of Goodfolio. Omid works on enterprise AI systems that move beyond pilots and into real business workflows.
The conversation reframes enterprise AI failure as a systems problem, not a model problem. Omid argues that most AI initiatives break because the workflow, ownership model, governance layer, audit trail, and adoption path were never designed properly. The model may work, but the enterprise system around it often does not.
If you are building, investing in, or leading enterprise AI adoption, this conversation gives you a clearer way to judge whether an AI initiative is ready for production or stuck as another pilot.
About the Guest
Omid Pakseresht is the CEO of Goodfolio, a company focused on helping enterprises build and scale AI systems inside real workflows.
His background is in product and technology, with a particular focus on finance. He has spent around 10 years building and scaling AI solutions in enterprise environments.
Omid is well placed to frame this signal because his work sits at the point where AI models meet workflow design, governance, compliance, and business outcomes.
LinkedIn: https://www.linkedin.com/in/omidpakseresht/
Website: https://goodfolio.com
Key Takeaways
- Most enterprise AI fails because the system around the model was never built.
- A working AI pilot is not proof that the business is ready for production.
- AI adoption fails when it is treated as a data science project.
- Workflow owners must be part of the AI design process from the beginning.
- Human-in-the-loop fails when humans become late-stage QA gates.
- AI can create new bottlenecks when upstream productivity increases faster than downstream capacity.
- Regulated AI needs audit trails, governance layers, risk monitoring, and clear decision rights.
- AI ROI must be tied to business outcomes, not seat counts or software usage.
What You Will Learn
- The difference between an AI tool and an AI system inside an enterprise workflow.
- How AI pilots fail after the proof of concept looks successful.
- Why model quality is rarely the biggest barrier to enterprise AI adoption.
- How compliance, governance, and auditability shape production AI.
- What changes when AI becomes embedded into regulated workflows.
- Why AI can move bottlenecks rather than remove them.
- How leaders should evaluate AI ROI through outcomes instead of software spend.
Episode Highlights
00:00 — Enterprise AI failure starts beyond the model
02:00 — Proofs of concept became the easy part
04:00 — Workflow fit beats model quality in adoption
05:30 — AI cannot remain a data science project
08:30 — Production AI needs more than a model
12:00 — Compliance workflows expose AI bottlenecks
17:30 — Human-in-the-loop needs a better framing
20:00 — Governance becomes table stakes for enterprise AI
24:00 — AI ROI must connect to business outcomes
28:00 — AI exposes process gaps before scaling
Resources Mentioned
- Goodfolio: https://goodfolio.com
- Inspector: Goodfolio tool for compliance review of marketing assets in regulated industries
- AI agents: discussed in the context of compliance workflows
- Model governance: discussed as a production requirement
- Evaluation pipelines: discussed as part of production AI systems
- Prompt engineering versioning: discussed as part of AI system management
- Risk monitoring: discussed as part of regulated AI adoption
- Data lakes: discussed as a comparison point for large enterprise technology projects
Listen Now
Available on all major podcast platforms and YouTube.
Connect with the Show
Follow The CTO Show with Mehmet for more conversations at the intersection of technology, startups, and venture capital.
Mehmet: [00:00:00] Hello, and welcome back to a new episode of The CTO Show With Mehmet. Today, I'm very pleased to have joining me Omid Pakseresht. He is the CEO of Goodfolio. Um, today's topic is a little bit, you know, top of mind. We've discussed it a lot, but there is no harm in repeating it, especially with people who are experts in this field, like Omid.
Uh, Omid, I don't like to steal much from my guest's time, so, you know, traditional question: tell us a little bit more about you, your background, your journey, and what you're currently up to, and then we can start the conversation from there. So the floor is yours.
Omid: Yeah. Hi. Thanks so much for, for having me.
Um, I'm CEO of Goodfolio. My background is primarily in products and, uh, technology, with a particular focus on finance. I've spent the last 10 years or so building, uh, building AI solutions and scaling AI solutions in enterprise. Um, [00:01:00] and so the problem that we're solving is that we think most, most enterprise AI fails, not because the model is wrong, uh, but because the system around it, um, isn't built.
So that's what we work on at Goodfolio, uh, and it's why we built a platform that solves that problem specifically.
Mehmet: Let me kind of start from where you ended. You know, can you elaborate more on this? Because it's very interesting, uh, saying it's not the model capability, it's system decisions and adoption. Um, where do you think most teams get this wrong?
Omid: Oh, that's a really good question. That's something that we, um, spend a lot of time in live environments and with prospects and clients actually discussing, that specific topic. Um, I think typically what happens is [00:02:00] that, thanks to, let's say, vibe coding and improved models, um, it's become increasingly easy to get a working POC or a version of that live.
Um, that has almost become the easiest part of doing any enterprise AI program, um, and the part that everyone sort of celebrates, 'cause it's quick to see something. Um, but that's hardly ever where it fails, right? Um, models in a proper business environment don't ship without an audit trail, without data plumbing, without decision rights, without actual buy-in and adoption from the people using it, um, without, yeah, pipelines, integrations, owners of different aspects of it, uh, you name it.
It's really [00:03:00] hardly ever just that sort of pilot. Um, and to me that's resulted in a bit of... you see a lot of pilots, and as a result, I mean, everyone sees the numbers, there are also a lot of failed pilots. Uh, so you end up seeing a bit of fatigue from enterprise and some leaders, like, "Okay, well, I don't wanna do another AI pilot.
Nothing's really working." Um, so I think it's focusing too much around what that POC looks like and not enough around what the systems and the processes and the people around it and the integrations and the workflows actually need to be able to do to support that AI initiative.
Mehmet: Right. Now, you mentioned a lot of, you know, reasons, or let's say failure patterns. If we want to rank them [00:04:00] between what is technical, what's organizational, or maybe something economic, how do you rank that, Omid?
Omid: Oh, that's really, um, that's really hard. And of course, the easy answer is to say it depends on the organization.
Uh, but I would say more frequently than not, it is actually the people and the culture, and really how it fits into the workflow, that's wrong in the first place. Um, if that's correct, or designed with incentives aligned and all the dependencies and frameworks in mind that need to be in place for something to scale, then it's much more likely to scale.
What doesn't end up being the problem that frequently, uh, at least in my experience, is the quality of the AI models. That one's, like, usually pretty good at the moment.
Mehmet: Now, [00:05:00] you know, like any other project and, you know, even the POC itself or the demo, whatever, sandbox, whatever you want to call it, it's a project, right?
And so we've seen ownership discussions regarding AI in the enterprise. So where do you think the ownership usually should be sitting, um, when it succeeds, and where have you seen it failing when it was sitting under a different ownership? Uh, and without, of course, finger-pointing, but is it engineering, product, or business?
Omid: Yeah. I mean, I think it's increasingly about, um, about people at the moment, and, let's say, the line between business and technology is kind of blending into one as well. Um, what I've seen not work, which is [00:06:00] the easier one to answer, is to view it as a data science project.
Um, that is not what it's trying to do. Uh, that is not what an AI system is actually trying to do. That doesn't work. Uh, so there has to be an element of business and technology involved, um, but most importantly, if there is a part of the workflow or a system you're trying to optimize, you need to have the people that have the most knowledge about that part of the workflow in the room,
otherwise you're gonna get that wrong very quickly.
Mehmet: Yeah. Saying that, Omid, how much of it also is, um, you know, the feeling of urgency from maybe, let's say, the board or the leaders or, you know, the managers: "Oh," like, you know, "if we don't get an AI project, we're gonna miss out," right? Because, you know, we're hearing that for companies who are not adopting AI in this [00:07:00] or in that, you know, catastrophe
is coming and, you know, we might go out of business. How much is it also, like, kind of a reaction, or let's call it panic mode, um, fear of missing out, in your opinion?
Omid: Yeah, I mean, there is a little bit of that, and to some degree that is justified, right? If your competitors are actually demonstrating ROI and growing really fast and removing their bottlenecks, then you do worry.
Uh, what I've seen is that if you do AI for AI's sake, if you're trying to push that on the team, uh, push that down, then, at least from our perspective, having conversations and engagements, it ends up being very difficult to find the right champions, and you need at least one.
You need a few, um, to make something happen. Um, but if the framing [00:08:00] is focused on growth and that's what you're trying to do, and there is an increasingly open opportunity for growth at the moment, then it becomes a much, much more exciting and, uh, fruitful conversation and moves on a lot quicker.
Mehmet: Right. Now, of course, we discussed kind of the problem. Now let's try to figure out the solution. Um, and, you know, within the same theme of shifting from models to systems, and what you just mentioned in your introduction: what does a real AI system look like in an enterprise? And if you can define, like, maybe the minimum components to claim that this is a production-grade AI system beyond just the model.
Hmm. [00:09:00]
Omid: That's a really good question. Um, I think the biggest distinction really is from a sort of a tool to a system. If you have a system in place, it is actually incorporated into your workflow as a team or as a business unit. Um, so that means that there are parts of the process that are sped up or renewed because of it.
That means there are likely multiple stakeholders interacting with the input or output of that thing. Um, that means that it is not something that sits in isolation on someone's laptop for them to improve their copy. Um, so if that is the case and you're in a business, then there need to be integrations, there needs to be model governance around it, there need to be [00:10:00] evaluation pipelines, versioning for prompt engineering and the models, risk monitoring, um, all of the stuff that you would expect.
Um, I sometimes think about it like this: if you're hiring a new team, you need an HR department around it, or if you're getting an agency or a contractor to do something, it can't sit in isolation. Uh, one is a system in the traditional sense, and one is just a temporary solution, right?
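To make that list concrete, here is a minimal sketch of the "system around the model": a wrapper that pins the model and prompt versions, names who holds decision rights, and appends an audit record for every call. Everything in it, the names, versions, and the stubbed model call, is a hypothetical illustration, not Goodfolio's actual platform.

```python
# Minimal sketch of the "system around the model": versioning plus an
# append-only audit trail. All names and versions are hypothetical.
import json, time, uuid

PROMPT_VERSION = "compliance-review-v3"   # changed via review, not edited ad hoc
MODEL_ID = "example-model-2025-06"        # pinned model version (placeholder)

def call_model(prompt: str) -> str:
    # Stand-in for the real model API call (client, retries, timeouts).
    return "NEEDS_REVIEW: risk disclaimer missing in paragraph 2"

def audited_call(inputs: dict, audit_path: str = "audit.jsonl") -> str:
    prompt = f"Review this marketing asset for compliance:\n{json.dumps(inputs)}"
    output = call_model(prompt)
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "model": MODEL_ID,
        "prompt_version": PROMPT_VERSION,
        "inputs": inputs,
        "output": output,
        "decision_rights": "compliance_officer",  # who may overrule the output
    }
    with open(audit_path, "a") as f:              # append-only audit trail
        f.write(json.dumps(record) + "\n")
    return output

print(audited_call({"asset_id": "A-102", "copy": "Guaranteed 12% returns!"}))
```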
Mehmet: Right. Do you see, like, a lot of changes required in the architecture when we move from experimentation to execution, to actual production?
Omid: Hmm. That's a good question. Maybe a non-conventional answer here. I am of the belief that the benefit of AI, and the best AI [00:11:00] systems, sit on top of, uh, and work in collaboration with, a lot of existing tools and systems.
Um, I think we had this phase of everyone wanting to do data lakes, um, and millions and millions invested in creating data lakes without an ROI attached to them. Um, I think the benefit of AI, or the framing that you can apply to it now, is that you can get to ROI really, really quickly.
Um, and that's, uh, yeah, that's the real difference for me.
Mehmet: Right. Now let's talk, you know, more in depth about, uh, embedding AI into the workflows, right? So, um, how does that look in action, right? And how do you make sure that, you know, these [00:12:00] workflows don't get broken by introducing AI?
Because, you know, regardless of the technology, and as someone, you know, who has sat on the two sides of the table, you know, whenever you introduce a new technology, people expect that something could break. Mm. So how does that look in, um, AI adoption in the enterprise?
Omid: Um, I think probably, yeah, maybe the best way to tackle that one is with an example. So, one of the tools that we have created, uh, Inspector, is focused on, um, compliance of marketing assets, uh, with a particular focus on regulated industries. Um, so if you think about it, if you work in a bank, say a retail bank, uh, [00:13:00] or an investment firm, anything you put out there, marketing or communication, the compliance department needs to have viewed and approved that.
Um, now, that's something that can lead to a lot of back and forth and a lot of delays and so on. Um, there are different ways you could tackle that, and organizations are at different stages of their journey. Um, but typically you would end up having either no processes, just managing it through emails and back and forth in spreadsheets.
Um, that's one level. Another level is that you could actually have, let's say, a ticketing system and notifications and things like that to streamline it. Um, then you could have tools that help, like AI tools that are helping you do the reviews and so on, helping the compliance officer. Um, and then finally, you could [00:14:00] have a slightly different system where agents for compliance become embedded into the process from the get-go, and the role of the compliance officer changes.
Now, at each of those stages, like the third or fourth stage that I described, there are reasons it could go wrong, but some of those reasons are exactly why that process could go wrong without the AI as well, right? So some of it is, okay, you got a missed risk, right? What is the appetite for that, right?
What type of risk are you okay with missing? Uh, how do you put checks in place to make sure that doesn't happen too often, or that you catch it quickly enough? So that one risk is actually not too different from the human risk that existed before, not too different from the risk of the process that was already in place.
But the other thing that I think [00:15:00] breaks, again, to think about that example, is that you start moving the bottleneck around, right? So because there is AI now, because you can do AI-generated content, you can do an unlimited amount of content really, really quickly. So what happens is that your marketing team is still doing that, um, and they're producing 10 times as much, and your compliance team is the same number of people, uh, and now they're just filtering through a lot more information that they have to deal with and review and so on.
So the bottleneck used to be, okay, how quickly can you create compliant content? For a part of that process, you've made it really quick to create content, but if you don't have tools to deal with that, then that workflow will break, whether that's missing risk or not [00:16:00] having the ability to take advantage of that opportunity with AI and so on.
It's really just that you've moved into a new type of workflow, and that introduces new types of risks. So that's, uh, the more interesting one, I think, to think about.
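The bottleneck shift Omid describes is easy to put in back-of-envelope numbers. The figures below are invented purely for illustration: upstream output grows roughly tenfold, downstream review capacity stays flat, and the backlog grows until the review step itself gets tooling.

```python
# Toy arithmetic for the moved bottleneck; all numbers are made up.
assets_per_week_before = 20     # marketing output pre-AI
assets_per_week_after = 200     # ~10x with AI-generated content
review_capacity = 40            # assets the compliance team can review per week

backlog_growth = assets_per_week_after - review_capacity
print(f"Backlog grows by {backlog_growth} assets/week")   # 160/week

# If an AI reviewer auto-clears routine assets and escalates, say, 20%,
# the human queue fits back inside capacity:
escalation_rate = 0.20
human_queue = assets_per_week_after * escalation_rate
print(f"Escalated to humans: {human_queue:.0f}/week (capacity: {review_capacity})")
```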
Mehmet: Yeah, I'm glad, because I prepared that question, and now, in the flow, it came just on time, uh, you mentioning the regulated environments, right?
And, um, this is something I discussed with a lot of other folks on the show: you know, building the trust, right? And designing AI in a way where, because we are all calling it, or it's called, I'm not sure who came up with the term, the human in the loop, right? Um, and providing the feedback.
Now, when we are shifting, [00:17:00] you know, and we should, I would say, consider, of course, it's like a technology project: you phase it from demo to full production. But in my opinion, AI is not just another tech migration, because you just described the marketing case and, you know, the compliance.
So now my question, Omid, is: in such regulated industries, how do you build the trust inside these processes, you know, and what could block AI adoption? Like, people say, "Hey, I would not use this," right? "I don't trust it. It's AI." Um, someone might have seen some news and say, "Hey, like, these models, they hallucinate."
How can I make sure that, you know, this agent or this model or this output will not hurt? Especially because, you know, in regulated environments, we have the audit, we [00:18:00] have the compliance that you just mentioned. So how do we make sure? Of course, we cannot claim it's 100% error-free, but at least we can reach 99.9999% compliance and, uh, be in good shape for auditing.
Omid: Hmm. I think we could do a whole separate show just focused on that one question. Um, I love the phrase human in the loop. Uh, it's kind of approached, I think, in reverse order in a lot of instances. Uh, it's designed such that the humans become a bit of a gateway to do QA work for the AI. It's kind of just saying, "Okay, this is an AI process we designed.
We're gonna go live with this. But, [00:19:00] oh wait, compliance needs to approve what happens." Um, and that obviously doesn't really work. That, to me at least, is the wrong framing of what human in the loop should be. Um, I think there should be, for example, some base principles of: okay, AI is making decisions, um, or assisting with making decisions.
There should be an audit trail, right? Um, the people who understand that part of the journey of that domain the best need to be part of the creation process as much as possible with that AI, whether that's with the prompting or the data that's fed into it, et cetera. And that actually really works in two ways.
One, it ends up being a better system. Um, and two, it ends up buying trust from the people that are eventually going to end [00:20:00] up relying on it the most as well. Um, I think there is another thing, and this is something that we are actually almost productizing across the different use cases that we do, and that's the governance layer around the process.
Like, do you actually understand what your AI is doing? Uh, why it's answering certain things, how it's performing? If it's drifting, does it need retraining? Do you have the best underlying model? All of this stuff, if it's not already, and it's not already, I think for any major AI system going live in an enterprise, in a year or two that'll be table stakes, right?
If we have those governance processes in place, then it gets a lot easier. Um, then you're like, okay, well, in the same way that I said you have HR departments, and in the same way that you have, uh, different types of back-end processes for [00:21:00] people, you have that for AI as well. That's the direction that it's going, at least.
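One of those governance questions, "is it drifting, does it need retraining?", becomes a concrete check once the human-in-the-loop reviews are recorded. A minimal sketch, with the agreement metric, window size, and threshold all placeholder assumptions rather than anything from Goodfolio's product:

```python
# Sketch of a drift check: compare recent AI-vs-human agreement against a
# baseline measured at sign-off. Metric and thresholds are assumptions.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline: float, window: int = 200, tolerance: float = 0.05):
        self.baseline = baseline            # agreement rate measured during the pilot
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # rolling window of agreement flags

    def record(self, ai_decision: str, human_decision: str) -> None:
        self.recent.append(ai_decision == human_decision)

    def drifting(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough data to judge yet
        rate = sum(self.recent) / len(self.recent)
        return rate < self.baseline - self.tolerance  # trigger review/retraining

monitor = DriftMonitor(baseline=0.95)
monitor.record("approve", "approve")        # fed from human-in-the-loop reviews
print(monitor.drifting())
```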
Mehmet: Right. Now, of course, when we talk about, um, AI projects and, you know, or any AI initiative, I would say, um, and like any other project, so we have deadlines, we have, you know, like, um, things that, you know, we need to achieve. Um, now knowing these, I would call them, uh, landmines from compliance and, you know, auditing and all this stuff, like what is, in your opinion, an acceptable, I would say, threshold between speed and compliance?
What have you seen? Like, is it just, "Yeah, let it break, and then we fix it," or, "No, let's make sure that it's 100%, you know, fine"? And how much will this actually affect the ROI? And the reason I'm asking [00:22:00] you, Omid, is because we know traditionally, in any project, the longer it takes, people will start to lose, I would say, the momentum that they had
when we started it, and people will start to go, "Mm," you know, like, yeah, it's dragging forever, it's not ending. And you see the enthusiasm, maybe, that even started with this project begin to fade away. So, any trade-offs which you have seen, um, that work or should be avoided, I would say?
Omid: I think the best kind of approach that I've seen is when people or organizations come into it with an outcome in mind already.
Uh, and then you can work backwards with what you are actually comfortable with and what you are not comfortable with. Um, typically, if the outcome is centered around growth, uh, centered around actually being able to go [00:23:00] into new areas and so on, then you can define the frameworks that allow you to do that.
I think the speed, and the delays, of AI demonstrating the possibility of ROI, that's almost, uh, something that should be done to start with, right? Demonstrate the possibility of ROI and then work around getting to that outcome.
I think those are the two things to optimize for. But the first one can't take long, otherwise, as you said, people lose interest. Uh, this is not a, yeah, full ERP rebuild. It's not something anyone wants to get into right now.
Mehmet: Right. Omid, we kept repeating the ROI, return on investment.
What's the best way to measure it, [00:24:00] right? There are a lot of opinions now. Even, you know, in the traditional SaaS business, we used to sell licenses and seats, and now people start to think, "Hey, should I measure the outcomes instead of just measuring in an Excel sheet how much I'm spending today versus how much I would spend in the future?"
Funny enough, people mix these two terms together. We know that it's not ROI, it was total cost of ownership. Mm. Uh, which, yeah, we also called ROI. Um, but if we want to really measure, you know, what are the best practices to say that, economically, yes, this was a really viable project, it helped our company because it did one, two, three?
So how do we count these one, two, three, and what are they, usually?
Omid: Yeah, that's a really good question. I mean, I think [00:25:00] the interesting part about AI, at least to me, is that it goes back to some first principles of business, uh, which is things that I think were, um, or could have been, forgotten with traditional software.
So there are things around, um, yeah, new business. There are things around, uh, growth, and there are things around the levers that affect those things. Um, that's ultimately what your ROI needs to be linked to. Um, now, increasingly, we try to build in a way, and deploy in a way, that the cost is attached to the outcome.
Um, because, yeah, if you go back to the previous conversation, if you know the outcome that you're trying to get, and you're getting [00:26:00] AI or building a system that helps you do that, then you should be able to link the cost to the outcome that you're achieving as well.
So yeah, I think there's a kind of philosophical approach of linking ROI to outcomes, or linking outcomes to usage, that's gonna become increasingly common in how people view investing in AI.
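As a toy contrast between the two accounting views, seat-based spend versus the outcome-linked framing Omid describes, here is a back-of-envelope calculation with entirely invented figures:

```python
# Seat cost measures what you spend; outcome value measures what you got.
seats, monthly_per_seat = 25, 60
seat_cost = seats * monthly_per_seat * 12          # $18,000/yr of spend

reviews_automated = 5_000                          # outcomes delivered per year
hours_saved_per_review = 0.5
loaded_hourly_rate = 80
outcome_value = reviews_automated * hours_saved_per_review * loaded_hourly_rate

roi = (outcome_value - seat_cost) / seat_cost
print(f"Value ${outcome_value:,.0f} vs spend ${seat_cost:,.0f} -> ROI {roi:.1f}x")
```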
Mehmet: Right. Now let's assume we did our first project and it went very successfully. We managed, as we call it in the world of startups, to cross the chasm
and shift from just POC and demos and sandboxing to a fully fledged, you know, operating and beneficial project. [00:27:00] Now, how, as an enterprise, can I scale this, take it beyond just one use case? Like, there are a lot of questions here, and by the way, I'm happy, 'cause I think I'm discussing this with you for the first time.
Um, people get too much into the details, so they focus, I think, on one project, and maybe they spend months, if not years, on just, you know, letting it succeed, without, in hindsight, seeing that, oh, hold on, we still have a whole other area where we can, you know, bring AI in.
So how can we, and I'm asking on behalf, of course, of leaders, how can we prepare, from the first project, this mindset to scale, both from, I would say, a planning perspective and maybe even, I don't know, an infrastructure perspective or a deployment-approach perspective? What can you share from what you have seen, Omid?
Omid: [00:28:00] Yeah. Uh, super good topic to discuss as well. I think it needs to go in two directions in parallel. Uh, I think one big reason that AI projects sometimes fail is actually that what the organization thinks is happening, um, whether with the people or their systems, and what is actually happening are quite different.
Um, so when you try to deploy AI into something that the organization itself doesn't understand, then that's, um, yeah, a little bit more challenging. Um, so I think part of the process is actually trying to identify those gaps, trying to understand your existing processes. You know, AI does a really good job of very quickly exposing [00:29:00] inefficiencies, uh, and gaps in data coverage, for example, uh, or data accuracy.
So really take those learnings to build on, rather than doing it as a one-off process of "I'm gonna design this AI that does this part of the process," and not realizing that it's gonna succeed because you understand things better, but it's also gonna expose some inefficiencies, and that's just par for the course.
And then in parallel, the other aspect of it is that there is a lot of what we just described that is repeatable, right? Um, and that's partly what we're focused on helping organizations do. Your governance frameworks, your data catalog, your models catalog, your, uh, let's [00:30:00] say, learnings layer, your model delivery, all of this stuff is something you're gonna do every time, right?
So building things in a way that you may be able to do other use cases with, that's obviously more art than science. Uh, but it's definitely sometimes forgotten as an opportunity. And to be fair, um, it wasn't really quite the same with SaaS, right? You would get a SaaS tool for a department, and it does a specific job, and
the next SaaS tool is equally as difficult to do for the next department, right? So this is, uh, a little bit of a new way of thinking, um, that needs to be applied to this.
Mehmet: Let me ask you something also from your experience, Omid. Is there such [00:31:00] a thing as an initiative, a project, that on paper looks very promising, but when you actually go try, and you finish it, or while trying to build it, you figure out that it will not work commercially?
Mm. Is that a situation you have come across, um, because, you know, it ends up like you don't need AI for this, maybe? Like it's overkill?
Omid: Yeah, for sure. For sure that's happened before. Uh, we try to avoid that. There's an opportunity cost to working on stuff that doesn't really have a clear linkage to an outcome.
Um, and once we go into engagements, whether we're doing a pilot or a deployment or deploying a new product, um, we really try very hard to be upfront about what [00:32:00] success looks like and what the outcome they're trying to achieve is. Uh, and really make sure that, uh, at least they're comfortable with it and we're comfortable with it.
I think there are definitely instances, without, uh, going into specifics, there are instances where organizations are thinking about second-order optimizations and they're kind of entirely missing primary-school stuff, right? They're trying to optimize a certain aspect of the marketing strategy, and they don't really know what goes out of the door in the first place, right?
So there's just a big gap there: you're not trying to apply AI to something to optimize it, you're trying to apply it to something that doesn't exist, right? One thing that does happen quite frequently, and this [00:33:00] is just as much an opportunity as it is a challenge, really.
I quite enjoy this one, but, um, quite frequently AI gets viewed as a bolt-on. Um, and organizations get, kind of, let's say, attracted to that bolt-on proposition, and then in the engagement it becomes clear that what it's supposed to be bolted onto isn't really there, or isn't really used. Um, so then it becomes, okay, well, you want this, but...
the system underneath it doesn't really do what it was supposed to do. Uh, so with the bolt-on: what are you bolting it onto? Um, and that's something to really think about. I mean, the dependencies of AI, um, are interesting. For me, as, uh, someone who is curious and likes learning new things, it's always engaging [00:34:00] conversations, and it leads into interesting places, and I like thinking about systems and redesigning them and so on.
But yeah, it can definitely lead to disappointment if you haven't really thought about the outcome and you haven't really thought about the whole system that you're trying to put in place.
Mehmet: Right. Now, if I'm a leader, I'm the CTO in my organization, or, you know, I'm in a decision-making position, I would say,
um, and seeing all these things that are changing very fast now, you know, so we started with the models, the systems, and now the agents, the agent frameworks: what kind of skills do I need to focus on having in my team so we can keep [00:35:00] sustaining and introducing more AI projects? What should I let them go and master?
Omid: Mm. Yeah, that's really interesting. Um, I think there is a kind of working backwards from the future. Um, I think the organizations that will do best are the ones that really focus on growing and, um, refining their edge and moving the, let's say, bottlenecks away from what their organizational ikigai is, effectively, right?
So then that means that you actually need to have relatively T-shaped people with deep domain expertise in a specific field that are really, really good in the part that [00:36:00] you care about as an organization or that department cares about as an organization. But they also have the ability to be entrepreneurial, design new systems, design new processes, and work with others to do that.
Um, and this was a bit of an overused word a few years ago, uh, but I think now it's even more relevant: I think, effectively, growth mindset matters more than everything else, um, because the tools are moving too quickly, right? It's the mindset, the curiosity, the ability to actually understand and apply it that matters the most.
Mehmet: Yeah, and assuming you are a beginner all the time, because, you know, this is part of the growth mindset: you keep needing to learn, unlearn, and relearn a few things. Um, yeah. Omid, this is what I do [00:37:00] in the final moments of every episode: anything that you wanted to share that maybe I didn't ask you about, and where people can get in touch?
Omid: Hmm. Um, yeah, this has been a pleasure. I'm very, uh, yeah, very happy to have been on the show. I think you can visit goodfolio.com to learn more about what we're up to: the platform itself, the philosophy, and the products that we have on offer. Um, and feel free to follow me on LinkedIn as well.
I've got some content that talks about exactly these topics. So that's a, yeah, good way to stay in touch.
Mehmet: Great. I will make sure that, you know, the link to your profile and to the company, Goodfolio, is available in the show notes. So people who are listening on their favorite podcasting app, you will find the links in the show notes.
If you're watching this on YouTube, you will find them in the [00:38:00] description. Uh, Omid, I can't thank you enough, really. Um, very content-rich, uh, thought-provoking, you know, about adopting AI in the enterprise. Um, I'm sure a lot of people will, you know, think twice about everything that was in their head, uh, based on the conversation today.
And myself, I learned a lot from you today as well, so thank you for sharing your thoughts with us. And this is how I end my episodes. This is for the audience: if you just discovered our podcast, thank you for passing by. I hope you enjoyed it. If you did, do me a small favor: subscribe, and share it with as many people as you can.
And if you are one of the people who keeps coming again and again to listen or to watch, thank you very much. Again, you are taking the podcast, in 2026, and I repeat this since last year, to new heights. We keep appearing in the top 200 Apple Podcasts charts across different countries, and every time I see new countries, I'm happy to see this.
Recently, I made a [00:39:00] small change: I moved the podcast category from business entrepreneurship to technology, because I think this is where we are starting to align the podcast more recently, uh, because we're focusing more on the tech, and, you know, AI is of course one of the topics. Still, despite this, the moment we shifted the category, we made it to the top 200 charts.
So I can't thank everyone enough for your support and for the continuous, um, you know, messages and feedback that you keep providing to me. So thank you very much for this. And as I always say, stay tuned for a new episode very soon. Thank you. Bye-bye.