#531 Cooperative AI: Mark Vange on Building Human-Centered Automation

In this episode of The CTO Show with Mehmet, we sit down with Mark Vange, founder of Autom8ly and former CTO at Electronic Arts, to explore what he calls “cooperative AI.”
Mark shares how decades of experience in gaming, enterprise software, and automation shaped his belief that the true power of AI lies not in replacing humans but in partnering with them.
From building video games at 13 to running global tech teams at EA and now leading AI-driven automation across industries, Mark explains how trust, context, and human design principles drive adoption and ROI.
⸻
👤 About Mark Vange
Mark Vange is a technologist, entrepreneur, and founder of Autom8ly, a platform that helps businesses implement AI automation through partnerships with vertical experts.
He previously served as Chief Technology Officer at Electronic Arts, guiding the company’s transition from boxed titles to online, mobile, and social games.
Today, Mark focuses on bridging AI’s technical potential with real-world business outcomes — building what he calls “a post-code world” where trust and attention are the new currencies.
⸻
💡 Key Takeaways
• Cooperative AI > Autonomous AI: The future of automation lies in AI that works with people, not instead of them.
• The Post-Code World: Code is no longer the moat — trust, domain knowledge, and market access are.
• Adoption Through ROI: Start with tasks nobody likes doing; deliver measurable value fast.
• Agility Re-Defined: Speed today isn’t about writing code — it’s about adapting architectures, partners, and trust frameworks.
• Compliance as an Enabler: Secure, compliant, and explainable AI wins enterprise adoption.
• Education Before Implementation: Clarity on the “why” behind automation is more valuable than hype or dashboards.
⸻
🎧 What Listeners Will Learn
• How Autom8ly’s partnership-driven model helps vertical experts scale with AI.
• The nine-step framework for identifying automation opportunities that deliver ROI.
• Why “vibe coding” is a trap — and what real engineering looks like in a cooperative AI world.
• How encrypted retrieval-augmented generation (RAG) enables privacy-first healthcare use cases.
• Why trust and attention now matter more than algorithms or lines of code.
⸻
🕒 Episode Highlights
00:02 — Mark’s journey from video-game prodigy to EA CTO
00:08 — How Autom8ly partners with industry experts to scale AI solutions
00:15 — Defining “Cooperative AI” and why it outperforms autonomous systems
00:19 — Human trust and the psychology of AI adoption
00:25 — Compliance and agility: balancing speed with enterprise trust
00:33 — Building configurable, LLM-agnostic AI systems
00:37 — The rise of the post-code world and the new moats of trust and access
00:42 — Encrypted RAG in healthcare: AI with privacy by design
00:46 — Advice for founders: start with real problems, not tech trends
00:50 — AI buzzwords to ignore and why clarity beats vibes
⸻
🧭 Resources Mentioned
• Autom8ly.com — Mark’s company website
• Mark Vange on LinkedIn: https://www.linkedin.com/in/markvange
[00:00:00]
Mehmet: Hello, and welcome back to a new episode of The CTO Show with Mehmet. Today I'm very pleased to have joining me from the US Mark Vange, who's the founder of Autom8ly. I like to give a teaser to the audience: as you might see [00:01:00] from the company name, we're gonna talk automation, of course, we're gonna talk AI, and we're gonna talk about some interesting use cases that Mark will let us know more about. But before that, the tradition of The CTO Show is that I like my guests to introduce themselves. So Mark, tell us more about you, your background, your journey, and what brought you to become the founder of Autom8ly.
Mark: Thank you. A pleasure to be here with you.
My background is, I was born in the Soviet Union a long time ago and, after some travels, ended up in Canada at the age of 10 or 11. I didn't speak English, but I did arrive the same day as the school's first Apple II computer. So I spent a lot of time being one of the really cool kids hanging out with the Apple II.

I sold my first little game when I was 11 or 12 or something, for the Apple II. By [00:02:00] 13 I wrote the first teleprompter in Canada, and by 16 I had, you know, a real company. In 1989, which was my first year of college, I entered college with 23 employees, and we were working in video games at the time. We published the then-Amiga version of Dragon's Lair, which became a top seller for that Christmas season. So I spent a lot of time in video games, and built and sold several companies over the years. At one point I sold a company to Electronic Arts, and I was Chief Technology Officer at Electronic Arts for a while, primarily focused on their interactive titles: all of the transition from boxed titles to online titles, Facebook games, mobile games, all that sort of stuff. I've done everything from video games to military communication systems, and I'm always fascinated by the way that technologies actually get used and adopted in the real [00:03:00] world.

So, you know, I'm a technologist. I love building tech. Tech is cool and I love playing with it. But what really motivates me is to understand how it gets used in the real world. On the video game side of things, we'd been using machine learning and AI for many, many years, so technologically I was very familiar with it. But in '23, '24, when we turned the corner on the cost and the model capability, and it started becoming something you could use in a real business environment, I got very interested in how adoption was actually going to happen. We know this technology is moving incredibly fast, but adoption still happens on a human scale. So what it'll look like in a business context in five years is what interests me the most. Autom8ly became [00:04:00] my vehicle for exploring that, and that's what we've been doing. We've developed a whole philosophy on how to work with AI and where we think AI is gonna be in the future, and in some ways we diverge from maybe the more common opinion. Hopefully through this hour we'll get a chance to talk about some of that stuff.
Mehmet: Great. And you know, I was really fascinated by your profile, Mark; it's a long history in the tech industry, and I'm sure myself and my audience will get a lot from you. You said Autom8ly is like your exploration vehicle, and you are my perfect profile, I would say, for the podcast. I was talking to someone earlier today and they were asking me, hey, why is it called The CTO Show? I was trying to explain that when I decided to start the podcast, the thing I spotted is that very few people are able to convey technical messaging in business-understood [00:05:00] terms, without jargon, while at the same time showing the added value. So this is exactly what we're trying to do. And because you decided to explore AI, I'll start from this, Mark. Of course, I'm sure you have seen tons and tons of use cases where AI can be the solution for real business problems. But as someone experienced, who's been doing this for a long time, I'm sure you started to narrow down your choices to something that you thought would make sense with customers. So walk me through this narrowing down, because when we say AI, machine learning, automation, everyone hears about them today. But as technologists, to pick one use case and build something around it: how did you decide where to start?
Mark: So, I'm gonna break your heart a little bit, [00:06:00] because actually that's not our go-to-market. Our actual go-to-market strategy is that we find partners who really understand a vertical, and we partner with them to really understand what the true motivations are, what the ROI considerations and the compliance considerations are, what all the levers of that vertical are. And so as technologists, as AI people, we focus on building very strong, configurable, and capable platforms. We have essentially tools, modules, that we then use to deliver capabilities for specific verticals. And for me, that's actually where the rubber hits the road a little bit. The technology's moving so fast that it seems to me trying to be the company for a specific vertical is more of a sales and [00:07:00] marketing and business capability, while staying on top of this tiger, which is the AI technology evolution, which is measured not in weeks or days; literally, it gets completely disrupted multiple times a day sometimes. So we focus on building very strong AI foundations, and then we build partnerships and relationships in a few different verticals where we have entrepreneurial partners. I joke that they may not know how to spell AI, but they know everything about that business. They have the Rolodex, they understand the actual economic motivations, and then we work with them to deliver solutions for those verticals.

So we have a vertical in 10DLC onboarding, which is this very specific compliance around telecom uses in the US. People who are in telecom in the US are familiar with this: anybody who [00:08:00] wants to send a text message, other than by typing on their phone, now needs to complete this application. Their brand needs to be approved, their campaign needs to be approved; there's all this paperwork. Every yoga studio and every restaurant in the US, if they want to send any marketing or transactional messages, has to complete this paperwork. And because we're working with a partner, who's now an actual business partner of mine, who comes from that space, we identified this very specific, very niche use case, which was costing them three to five hours per client, which is really hard for them to maintain. By automating that process and handholding that 10DLC onboarding journey for the end user, we can cut that down to 10 or 15 minutes of work for the operator [00:09:00] and increase the first-time success rate of the submission. But I, as a technology guy, wouldn't necessarily understand all the nuances of that use case. It took somebody who lives in that space to refine the needs so that we could use our toolbox of capabilities to build the solution.
Mehmet: That's a very interesting use case, Mark, and it's one of the main things we as technologists try to stress. You mentioned this use case where it's shrinking the time to onboard, so these businesses can send messages to their potential clients through their phones. Now, when building and designing that, there must be a large and complex infrastructure behind it; I'm just guessing. So when you want to design [00:10:00] such a big system (and I like your approach: you said you'd break my heart, but actually you didn't, because it's a partnership, as you mentioned, with entrepreneurial partners, so basically you're empowering them with the technology tools to fulfill the goal), still, on the technology side, there's scaling with AI and making sure it works flawlessly for maybe multiple operators. I'm not very expert in that specific area, so if you can explain it to us, Mark: you mentioned the nuances, but from a technology perspective, is it something unified that can fit any operator in the US, or does it require some customization when you go and implement it? And how does it scale? Because I'm sure the more they utilize it, maybe the more infrastructure they would need, the more [00:11:00] bandwidth they would need. So if you can walk us through this, Mark.
Mark: Absolutely. So, we are working with service providers, and these service providers have some kind of workflow platform, or they use a workflow platform from somebody else if they're a smaller player. So they'll be using, you know, a Crescendo or somebody like that as their backplane to deliver the services to their clients. And so we will work with the platform operator to integrate with their platform so that we can then deliver, with them and through them, to all of their other clients. In those cases we [00:12:00] are integrating with the platform. In some other cases, where the platforms are becoming a little bit almost customer-unfriendly and trying to hold on to the capabilities, we will go over the top: we will create Chrome plugins that literally kind of take over the screen, and then we interact with whatever platform is there.

One of the other verticals we're in is call centers, and one of the things we found is that a lot of the call center infrastructure providers suddenly turned off API access, because they realized there's all this money in transcription and call QA and so on, and they literally tried to lock it out and own it. So for those use cases we built a plugin that sits in Chrome, just captures the media from Chrome, and injects a little dialog box that can then provide the information, or do the conversation, or [00:13:00] whatever the solution is. So to summarize: when we have partners who understand that we can add value and add stickiness and add usability for their end users, we will work with them and integrate as part of their solution. But just because they don't want us to doesn't necessarily stop us from doing it, because as long as the service is delivered through a browser, I have an infinite number of ways that I can go over the top and provide the capability that their customers ultimately need and want, and that in many cases they're hoping to deliver someday but aren't necessarily delivering today.
Mehmet: Yeah. Mark, you mentioned something interesting which is at the core of adopting any technology: you talked about the ROI, the return on investment. Now, sometimes we measure return on investment by how [00:14:00] much money we're paying now and how much savings we're making over, let's say, three, four, five years down the road. But from a technologist's perspective, you also want to measure the effectiveness of such AI systems, again from a technological perspective, not only a business perspective. How can we put some numbers there to say, okay, we achieved this because we implemented, for example, this agent here, or, as you just mentioned, the Chrome extension; because we did this, this is the amount of, I don't know, maybe time? Is it usually time, when we talk from a technical perspective? Is it ease of use? What are the main things you would look at as a technologist when we come to the bigger picture of giving this ROI?
Mark: Okay. I'm gonna need to give [00:15:00] you a little bit of background before I can actually answer the question. The first bit of background is that we are big believers in what we call cooperative AI. A lot of the conversation is either generative AI, you know, like GPT (I ask a question, you give me an answer), or it is agentic AI, which people mostly think of as autonomous: I'm gonna build a solution that does something for me. We believe, as a philosophical foundation, that the best use cases are what we call cooperative. Rather than trying to replace people, you're accelerating people and making them more productive. Not only does that mean adoption is easier, with fewer social barriers and less friction for deploying solutions, it also means we can deploy solutions before they're 100% working. Trying to get perfect AI solutions is really hard, and sometimes you can get a lot of value out of something that's 75% correct or 95% correct, [00:16:00] but not a hundred percent correct. So: easier adoption, faster time to value.

In support of that, we have a methodology we've created, sort of nine steps. The first step of that methodology is identifying what should be automated first for any given vertical or use case. I joke, I ask people: what is your equivalent of washing the floors and cleaning the toilets? The stuff that nobody wants to do, the stuff that often takes a lot of time and a lot of energy, may be very error-prone, but also usually sits very early in the pipeline, which means there's gonna be a lot of analysis of the output before the customer product is ever generated. So there are a lot of places where we can catch any mistakes. That's thing one. Step three for us, after we do some analysis, is talking to our customer or our partner about what the metrics of [00:17:00] success are for these tasks. How do we know when the toilet is clean enough? How do we know when the floor is shiny enough? Those metrics are something we develop very early. And then the very final step, step nine for us, is when we have the system already up and running: we measure those metrics and we develop what we call a confidence measure. The confidence measure can then be used to drive the level of autonomy that you want to give to this AI process. It's a metric of [00:18:00] confidence based on specifically measuring outcomes. Those outcomes may be how much time we saved; they may be how correct it is; they may be how much it cost. Different use cases have different metrics, but eventually it all comes down to the confidence level, and the confidence level then allows us to make decisions about how much we use this AI process and how much we trust it.
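A confidence measure like the one Mark describes can be sketched as a weighted score over measured outcomes that gates how much autonomy the process gets. Everything below (metric names, weights, thresholds) is an illustrative assumption, not Autom8ly's actual formula:

```python
# Sketch of an outcome-based confidence measure gating AI autonomy.
# Metric names, weights, and thresholds are illustrative assumptions.

def confidence(outcomes, weights):
    """Weighted average of normalized outcome metrics (each in 0..1)."""
    total = sum(weights.values())
    return sum(outcomes[k] * w for k, w in weights.items()) / total

def autonomy_level(score):
    """Map a confidence score to how much the AI may do on its own."""
    if score >= 0.95:
        return "autonomous"      # AI acts; humans spot-check
    if score >= 0.75:
        return "human_review"    # AI drafts; a person approves
    return "suggest_only"        # AI only proposes; humans do the work

# Measured outcomes for a hypothetical process, normalized to 0..1.
outcomes = {"correctness": 0.92, "time_saved": 0.80, "cost": 0.70}
weights = {"correctness": 3, "time_saved": 1, "cost": 1}

score = confidence(outcomes, weights)
print(score, autonomy_level(score))
```

The key design point matches the conversation: autonomy is not decided up front; it is earned as the measured score crosses thresholds, the same way a new hire graduates from shadowing to bigger tasks.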
Mehmet: Cool. Regarding trust: I'm happy you ended with this point, because this was my next question too. One of the things we see when trying to adopt AI in any organization is, of course, the frictions. Sometimes, especially if they're trying to build a generative AI thing, they might not have all the data, or they might be missing guardrails for securing the data the proper way (I'll come back to privacy later). But one of the things I'm hearing a lot, with different opinions, is about convincing people to actually utilize the tools, trusting the tools. So how do we encourage people to see it that way? Because you said it's not to replace people; it's to co-work with people, to cooperate with people. As technologists, how can we convince people who are a little bit [00:19:00] skeptical, sometimes because of what they read? There are a lot of misleading articles out there, a lot of people who just scare people: AI is gonna do this, AI is gonna do that. As technologists, how can we change the perception, Mark, in your opinion?
Mark: So for us, our approach is: start with the tasks that people don't like doing. Start with those mechanical, menial, time-consuming things that we all have to do as we perform our work and deliver the stuff that we celebrate. You know, in the US we barbecue. Everybody talks about the brisket or the burger, but nobody talks about the fact that when you catch that fish, first you gotta scale it. Nobody likes scaling the fish, but you gotta scale the fish before you can bake the fish. So we start with [00:20:00] those sorts of tasks, which eliminates a lot of the resistance.

And at the other end of it, again, we use that confidence metric as a way to talk to those users, in a mature way, about what this thing is and what this thing isn't. It's not a magic box, and it's not a tool; think of it as a colleague, which is how we often describe it. When you hire a new person into a job, it doesn't matter how smart that person is: first you bring them in, then you maybe do some training, then they maybe shadow somebody, then you give them little tasks, then you give them bigger tasks. Eventually they might become your number one producer, but they're not gonna be your number one producer on day one. So we try to create an analogous process for onboarding these AI colleagues, as a way to overcome the barriers of adoption. It's not a tool; it's not "congratulations, we're now using Slack," because it's not [00:21:00] Slack. It is another entity in the process of delivering what your company does. It's not the company either, which is sometimes where overzealous technologists go wrong: here's a tool, and we think of it in the terms of, like, 2005. I build a SaaS tool, the SaaS tool works, you use my SaaS tool. AI is not like that.
Mehmet: It's completely different, a hundred percent. Now, there's also something I'm seeing a lot, because we've talked about automation and mixing AI with automation, which is the agentic AI everyone talks about. Some people, Mark (and it's funny, because on the previous episode we were talking about this from a different angle), there's a lot of noise out there. [00:22:00] And when I say noise, I'm not blaming anyone; it's good to create content to show people how things are done. But are you seeing a lot of people who try to implement AI, especially AI and automation, in an ad hoc way? Just because they saw someone who recorded a video using automation tools and plugging them into some, I don't know, MCP servers. And maybe for me, as someone trying to understand, they show it like it's very simple, but when I try to go and do it myself, I find it very complicated. Are you seeing this friction between what is shown by some people versus the real expectation of how to do this? And, I'm happy you mentioned the nine steps, which I can also see on your website: how much education needs to happen before anything, [00:23:00] even before we start asking them how they do things today, so they have the right expectation of what this tool you're building will be able to do for them?
Mark: Right. So I tend to talk to customers that are maybe a little bigger, that have people who are responsible for compliance, or for IT security, or for investor relations and investor rules and so on. When you talk to people like that, the biggest barrier isn't actually the technology; it's how do we answer the questions of all the other non-technical stakeholders [00:24:00] in deploying a solution to this company, which has real problems. You know, small entrepreneurial companies: we'll build it, we'll ask for forgiveness later.

Mehmet: Right.

Mark: That doesn't work in enterprise. So we really tend to focus on enterprise rather than, let's call it, retail end users. We find less of an issue around the technical education. If you've got 200, 500, a thousand employees, there are always at least a couple of people in the organization who know it well enough. Maybe they wanna build it themselves, and the boss brings me in and says, well, Mark's gonna build it, and they get a little annoyed because they wanna play with the tools. Often we can find ways of working with those guys. But in my experience, it's less about understanding what the thing can do and more about understanding how it blends into all of the other needs. Like, [00:25:00] how does the communications department know what the chatbot on your website is going to say? How does the investor relations person, who needs to make sure no news is published 30 days before the quarterly results (because that would be an actual violation with real penalties attached), make sure that your chatbot doesn't break those rules and suddenly cost the company millions of dollars? Those are the conversations that, for me, come before deployment. It's less about explaining to people what AI is, because they just assume they understand, and I can typically disabuse them of that more quickly than I can address the worries and the fears of non-compliance.
Mehmet: Talking about compliance, Mark: you mentioned, for example, the [00:26:00] telco use case, which is regulated, and if I look at many other verticals, they are highly regulated. I know you emphasize having proper SOC 2 compliant infrastructure and making sure the data is private and secure, as you mentioned, so we will not have the fines. But how do we balance what we're trying to achieve from an agility perspective, getting things done, with keeping the compliance and enterprise-grade trust? How do we keep that without losing either of the two, agility or compliance?
Mark: I think the notion of agility, honestly, in my mind, is just so rooted in the last decade. I kind of think of us as living in a post-code world, [00:27:00] in the sense that if I have a good spec, I can build almost anything in a weekend. It's almost crazy how much code I can generate, and how quickly, and how well it performs. So in spite of the fact that folks are publishing stories trying to push this narrative that vibe coding doesn't really work and it breaks: great, you can say that all you'd like. I'm shipping code with the help of AI each and every day, and I've been doing this for a long time, and the velocities we can achieve now were just unimaginable even three years ago, so that narrative doesn't make any sense to me at all. Agility, [00:28:00] as described in the ability to generate code, is just not a barrier. Agility in terms of being able to respond to needs, to respond to market forces, comes down to setting expectations and a common understanding of, again, the levers and priorities of this vertical, this company, this use case, before we ever start building. That's why our go-to-market focuses so much on having a partner who truly understands the vertical, so I can spend lots of time with that person really thinking about what's gonna move the needle, what the risks are, what questions the CFO is gonna ask me when we walk into that enterprise to sell the tool. That way we've built those protections, built that analysis, and at least built the metrics around being able to give real numbers on the error rate [00:29:00] to that CFO, because they have an error rate they deal with now. So let's compare apples to apples: we can compare the cost, we can compare the error rates, we can talk about it in a mature way. It's not fairy dust and magic and poof. It's not that.
Mehmet: Yeah. I mentioned MCP, the Model Context Protocol, and I know you're also contributing to that. We haven't discussed it yet on the podcast, and it just came to my mind that I never asked about it. As an expert, Mark, if you want to explain it to us in simple terms: what is MCP, what are we trying to achieve with it, and what could it be a solution for when we talk about MCPs?
Mark: Look, MCP is not a magical thing. MCP is just another version of an API. The difference from a traditional API to an MCP is that the MCP [00:30:00] comes with a description of what this API should be used for, relevant to the AI that then utilizes it. So it's a way not only of presenting the API, the functionality, but also of describing the functionality. Do we need to use MCP? No. We could use OpenAPI definitions and some kind of API interface and achieve the same goal. But what's powerful about MCP is that it's a standard, and so now we can just adopt the standard and not have to guess, you know, are we running on Windows or Mac or Android; we're using MCP. That's the power of it.

And one of the things that has, until very recently, been different about the introduction of AI compared to other technology phases is that there's been very rapid, [00:31:00] non-combative adoption of standards. Somebody puts up their hand and says, I've got a standard for something, and everyone says, great, let's just use it, because everyone's moving too fast. Anthropic says, I've got MCP; everyone says, great, let's just use MCP. Google says, hey, I've got A2A; everyone says, great, let's just use it. Because everybody understands these things have to be solved, and, whatever, you did the work of standardizing it. It's very open-source minded: this thing works, I'm just gonna use it and move ahead. Only literally this month did we finally hit an AI standard that is suddenly going Betamax versus VHS, and that's in AI payments. I find it very interesting that that's the one place where suddenly we are building camps. So now we've got [00:32:00] the Google one, sponsored with Visa and Mastercard, and then we've got Stripe offering their own. This is the first time in the AI cycle that we've actually formed camps; until now it's just been very easy adoption of standards. I know I went a bit off script there from your MCP question, but basically, yeah, MCP is just a way to have a standard interface to access services. No magic to it. It's just a way of encapsulating APIs in a way that's easy for an AI to consume.
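Mark's point that MCP is "an API plus a description" shows up directly in the shape of an MCP tool listing: each tool pairs a machine-readable input schema with natural-language text the model reads to decide when to call it. The tool below is hypothetical; only the name/description/inputSchema shape is taken from the MCP convention:

```python
# A minimal MCP-style tool descriptor. The schema tells the model *how*
# to call the tool; the description tells it *when* and *why*.
# The tool itself (lookup_office_hours) is a made-up example.
lookup_hours = {
    "name": "lookup_office_hours",
    "description": (
        "Return the opening hours of the company office nearest to the "
        "caller. Use this when a customer asks when an office is open."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "zip_code": {"type": "string", "description": "Caller's ZIP code"}
        },
        "required": ["zip_code"],
    },
}

print(lookup_hours["name"])
```

A plain OpenAPI endpoint could expose the same function, but without the usage description the model would have to guess what the endpoint is for; that description field is the whole difference Mark is pointing at.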
Mehmet: I'm happy you mentioned fast, and of course we were discussing, before I hit the record button, how fast things are going. Now, when building these AI tools, Mark (I'm just making things up; of course, you are the expert), let's say you decided that for this use case, this should be the LLM, without naming any, and this should be [00:33:00] the way we automate things. You build the prototype, and your partners, who understand the market very well, tell you: yes, this is exactly what we want. But after two weeks, or maybe one month (let's not exaggerate), things change, and you figure out that, okay, maybe this other LLM would be a better choice now. So how is the fast change in the technology affecting these decisions of what to put where, and which one to adopt?
Mark: That's a great question, and that's a very, very important thing that you bring up.
So we don't ever make that decision. We write everything configuration-driven. Like, literally request by request, I could change the config files and change my API provider, change the model I'm using, modify the prompt. Like, [00:34:00] we don't code anything. We create, you know, technology frameworks, then we do configuration client by client, even within a solution, right?
Because we're working with enterprises, each one's a little different. Each one needs something a little different. I'm not writing code for this stuff. I architect things so that they're very, very adaptable. So for example, I mentioned to you the plugin that we created for Chrome that we can use to assist the agents when they're talking on the phone to customers. This is a call center solution. You know, it listens to the call and puts up suggestions, so the customer starts asking, what are your hours, right? Mm-hmm. It looks up in the database and says, you know, the hours for the office near them are eight to five.
Okay? Um, I literally could activate that plugin for this conversation right [00:35:00] now, and it would look at the URL, which is StreamYard, and say, do I know what to do with StreamYard? And if StreamYard is one of the platforms that we support, like Google Meet or Zoom or whatever, then it'll know, okay, for this company, I can get the speakers' names from these fields in the page, and it sends that to the server.
The server says, okay, I know about StreamYard. I know exactly what you're looking at. Here is the use case we have. You know, Mark is signed in, so this is his knowledge base, right? Like, all this stuff is entirely configured, so I don't write a different plugin for every use case, or every website we go to, or every platform we integrate with.
It's literally all configuration-driven, and the configuration includes the prompts, the configuration includes the models. Right. Um, we use a lot of Python, and the configuration also [00:36:00] includes the configuration of the tools. So I can add a Python module and deploy a new tool call that uses that Python module, literally without stopping anything. So the next time the system calls OpenAI, the new tool's already in place and it can use it. Like, I haven't rebooted, I haven't restarted, I haven't done anything. It literally, like, that saves
Mehmet: time.
Mark: That's the possibility we build into it, because that's what it takes today. Because, as you say, we're in the middle of OpenAI DevDay week, right?
Suddenly we have new toys to play with. By next week they'll be in my code.
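The hot-swap setup Mark describes, where the model, prompt, and tool set are re-read from config on every request, might look roughly like this. The config schema, the `call_llm` stub, and the two lookup tools are all assumptions for illustration, not Autom8ly's actual code:

```python
# Sketch of a configuration-driven request path: provider, model,
# prompt, and available tools come from config, re-read per request,
# so swapping an LLM or deploying a new tool needs no restart.
CONFIG = {
    "client_a": {"provider": "openai", "model": "model-x",
                 "prompt": "You are a call-center assistant.",
                 "tools": ["lookup_hours"]},
}

# Runtime-mutable registry: adding an entry "deploys" a new tool call.
TOOL_REGISTRY = {"lookup_hours": lambda office: "8am-5pm"}

def call_llm(provider, model, prompt, tools):
    # Stand-in for a real provider API call; echoes its configuration
    # so we can see what the request would have used.
    return {"provider": provider, "model": model, "tools": sorted(tools)}

def handle_request(client_id):
    cfg = CONFIG[client_id]  # re-read per request, never cached in code
    tools = {name: TOOL_REGISTRY[name] for name in cfg["tools"]}
    return call_llm(cfg["provider"], cfg["model"], cfg["prompt"], tools)

# "Deploying" a change is pure config mutation, no reboot:
TOOL_REGISTRY["lookup_address"] = lambda office: "123 Main St"
CONFIG["client_a"]["tools"].append("lookup_address")
CONFIG["client_a"]["model"] = "model-y"  # next request picks this up
```

In a real system the `CONFIG` dict would be a file or database watched for changes, but the property Mark emphasizes is the same: the very next request sees the new model and the new tool.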
Mehmet: So, is code becoming a commodity? Because at some stage the code was the core thing. Yeah. So is it now becoming the abundant thing that I don't need to worry about? I'm [00:37:00] understanding you more, and it makes a lot of sense to me, by the way.
Is the important thing now how much you understand the use case you're trying to solve, rather than the code you write? Did I get that right?
Mark: Absolutely. Absolutely. Yeah. You know, I think I've said it already in this conversation, and I say this all the time: we're living in a post-code world. Code means nothing.
If you show me a use case, I can have it built in three hours. The kind of inherent company value of today is trust and attention. How much do you trust Mark, to give Mark your company's jewels and have him do something with those jewels, polish those jewels, cut those jewels, and not give you diamond dust at the end?
Right? And then access, which is again where our partners come in, right? If Mark calls Fred Smith, the president of company XYZ, Fred Smith doesn't know Mark from [00:38:00] Mehmet, from, you know, Joe Blow, right? So I have no value on my own. But because I'm coming in with a partner who he's known for 25 years, I get the benefit of that trust, right?
So these are the only things of value. The code, the technology, it means nothing anymore, in my mind. Right? Like, we're so used to thinking, I'm gonna spend two years and then the code capability is gonna be the moat, and it means nothing. Somebody who has market access can build any tool that I can build in a week.
Mehmet: So the moat becomes, and I'm saying this, how human you are in what you do. I mean this human connection, of course, and the trust, and your network, and, as you mentioned, how much market access you have. Because, you know, I receive a lot of pitch decks, and every time they say, [00:39:00] yeah, our moat, our value proposition, is we have AI-powered I-don't-know-what. And I ask, you know.
I'm not here to criticize people, but I ask them sincerely: guys, can you explain to me what stops someone from taking this, putting it into any of these vibe-coding tools, and getting almost the same thing? Maybe, yeah, they need to tweak it multiple times, but what's the difference?
Yeah. And I see people struggling on this point, because they think the code is still the priority, which it's not. The priority is actually the team: the team's knowledge, trust, and how well they can deliver the message in the proper way, so people can understand what they are trying to do.
So on that point, of course code, as you said, we are living in a post-code world. But when it comes to infrastructure, we [00:40:00] see a lot of talk about who owns, for example, the powerhouse of what makes the AI work. Are we going back? I'm asking you, Mark, because you've been in this longer than me.
And when I mention this to people, they hit me on the head: no, you're going back. I'm seeing that if you are an NVIDIA kind of company, or an AMD company, one of the ones who own all the infrastructure and the architecture of the AI, you actually have the moat, more than a software company.
And I'm telling them, actually, software is not eating the world anymore. It's something else that must be eating the world. What is that, Mark, in your opinion?
Mark: Well, as I said, for me, attention is the basic currency of the current world, not just in technology but well beyond. [00:41:00] And in enterprise, specifically, the trust that sits behind the attention.
You know, we are living in an attention economy. We can create so much content, we can create so much code, we can create so much of everything. The only thing that's actually valuable is the attention. Because otherwise you drop your rock into the lake and it disappears without ever making a ring, right?
Like, it's great you built software. If nobody ever notices it in the sea, in the mountain, in the tsunami of code that's being written right now, then it doesn't matter.
Mehmet: Yeah. From your perspective also, and you mentioned enterprise a lot, but are you seeing any other big opportunities for this cooperative AI adoption?
You gave examples from a couple of verticals. Are you seeing this also possible in maybe SMBs, in government? Where are we seeing or expecting the bigger opportunities [00:42:00] when it comes to cooperative AI?
Mark: I mean, I think that cooperative AI is going to be the dominant way that people interact with AI in five years.
That's kind of my thesis, if you will. Um, and that's in every walk of life. So for example, we've built one solution that's really cool for healthcare providers, right? So you walk into a clinic where you don't necessarily have access to a senior, experienced doctor. Right? You have a nurse, or you have a medic, somebody who can evaluate, who knows how to measure stuff, but they're not a surgeon, they're not a whatever, right?
So we built a solution where we can take the patient histories, the insurance histories, of millions of people. This is all their diagnostic codes, all their treatment codes, all their [00:43:00] billing codes. So we have their entire treatment history, right? Um, we take that data set, and because there's, you know, HIPAA and lots of compliance around it, that data set gets fully encrypted. Okay. We then use that data set to do RAG in a fully encrypted way. So now you walk into a clinic somewhere, and the nurse or the medic enters what's called the presentation: the age, what the complaint is, the blood pressure, all the stuff that they've gathered. That also gets encrypted.
We take that encrypted question, we put it into a RAG with the encrypted data, and it all stays encrypted the whole time. So you send in gobbledygook, it does RAG in gobbledygook, it sends back gobbledygook, and only that practitioner can decode the answer. And the answer is: here are the [00:44:00] highest-probability things that are going on, and here are the next tests and things you could do to validate it, based on doing RAG on all of this insurance information.
Okay. Wow. So really cool technology, because we're doing full RAG encrypted, right? And a very cooperative use case, right? Because the AI isn't talking to the patient, and ultimately that medic is using their experience and their ability to interpret, to say, yes, no, okay, I'll do this test. But also, suddenly, this person in the field has much faster access to the entire library of medical knowledge as shown by practice. And on top of that, now we can look at medical books, we can look at other sources of data to layer in.
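The data flow Mark describes, where the server only ever handles opaque data and only the practitioner can decode the answer, can be sketched very roughly. This is emphatically not Autom8ly's implementation: real encrypted retrieval would use vetted searchable or homomorphic encryption, and the keyed-hash matching and XOR cipher below are toy stand-ins chosen only to make the trust boundary visible:

```python
import hmac
import hashlib

# Toy sketch of "RAG over encrypted data": the server side sees only
# blinded tokens and ciphertext; the practitioner's key never leaves
# the client. NOT real cryptography, just the shape of the data flow.
KEY = b"practitioner-secret"  # hypothetical client-held key

def blind(term):
    """Deterministic keyed hash: equal plaintexts match, server learns nothing."""
    return hmac.new(KEY, term.encode(), hashlib.sha256).hexdigest()

def xor_crypt(data, key=KEY):
    """Toy symmetric cipher (XOR keystream). A stand-in, not real encryption."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

index = {}  # server-side store: blinded token -> list of ciphertexts

# Client side: index guidance under blinded symptom tokens, payload encrypted.
def ingest(symptoms, guidance):
    token = blind("|".join(sorted(symptoms)))
    index.setdefault(token, []).append(xor_crypt(guidance.encode()))

ingest(["chest pain", "age>60"], "High probability: cardiac; next test: ECG")

# Server side: match a blinded query token against the blinded index.
def retrieve(blinded_query):
    return index.get(blinded_query, [])

# Client side again: only the key holder can read the retrieved answer.
results = retrieve(blind("|".join(sorted(["age>60", "chest pain"]))))
answers = [xor_crypt(ciphertext).decode() for ciphertext in results]
```

The essential property, which survives even in this toy, is the one Mark emphasizes: gobbledygook in, gobbledygook matched, gobbledygook out, with decryption happening only on the practitioner's side.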
Mehmet: So, if I get it, and of course in healthcare it's the medic, the [00:45:00] nurse, but if I take what you just mentioned, Mark, and apply it to any vertical, it's like getting an expert on demand, with one click of a button. Exactly. And all with the compliance. I'm really impressed by doing this in encryption. It's something really big, because, and I know this from my old days in networking and network security, the big thing is, if you are trying to send a packet, for example, and you need some kind of acceleration or something like that, you need to decrypt the encrypted data so the device understands what the packet is before sending it. So you're telling me you can do this without decrypting the data at all. It gets decrypted only in-house, let's say, without leaving the premises. This is really, really [00:46:00] fascinating.
Um, since we're talking about these really cool technologies, if you want to advise people who are planning to start something with AI, Mark, and I'm seeing that you're touching on real use cases, using the technology, giving value: say I want to start a new startup today.
What do you advise me? When I plan to use AI in my startup, what should I avoid, and what should I not miss, what should I have in what I'm building?
Mark: Um, I think that as technologists, we often make the mistake of starting with a solution and then looking for the problem, right?
Yeah. What we really need to do now as technologists, and what we really need to teach new technologists, is how to figure out the problem first. Like, what [00:47:00] do we know about? What do we have access to sell? Where do we have either attention or trust, or ideally both? Because those are our differentiators, right?
Our ability to write code is not our differentiator. Our differentiators are those two things. So that's number one. Number two is the recognition that in a post-code world, what matters is actually the product and not the process. So, you know, all of us have been doing agile development for two decades, right?
Mehmet: Right,
Mark: But those of us who are old enough remember waterfall development. Yeah. Where you create the spec and then you write to the spec. Right. That is actually the correct methodology again, because you want the attention, you want the trust, you want to work with the stakeholders to identify [00:48:00] what actually needs to be built, and then you give that spec to the AI.
It does 95% of the work, you polish it, and then you have a company.
Mehmet: And if you want to repeat that again, it doesn't take the same time it used to take. You know, this is why people left waterfall, because it was so time-consuming. Although great products came out of the waterfall methodology, and sometimes, back in the day, when people asked me which one I prefer, I'd say there's nothing wrong with either.
Right? It depends on the use cases. The only reason people shifted to agile is that there were some use cases where you needed a different methodology, where waterfall wasn't working. So that really, completely makes sense.
Mark, in your opinion, we've talked about the cool stuff. What do you think is one of the buzzwords in AI that, when you hear it, you say, yuck, I don't want to hear this [00:49:00] anymore, it's so overrated?
Mark: Um, you know, vibe coding has come to mean a lot of things to different people. I think that vibe coding, as a way for people who don't understand software development to try to write software, is a giant trap that, you know, YouTube has created. To your earlier point, you see somebody saying, look what I created with these three prompts. Okay, now, exactly, thank you: you're not showing me the three other hours you spent cleaning it up, number one. And number two, you had no control over the product, right?
You are showing it to me backwards. You're saying, look at the great thing that this thing created. But in the real world, you want to tell it what to create and then have it create that, right? Like, make me a scheduling app actually isn't a real-world use case. I need a scheduling app that connects with this, and that does this, and this does this, [00:50:00] right?
So suddenly this whole vibing thing is not really where it's at. And as I said, I use automated coding all the time, but there's a correct methodology for it, which is more akin to waterfall development, where you develop very good requirements and then you let it do its thing, rather than, you know, vibing.
Right. So nothing wrong with the term, something wrong with what it has come to mean. Right? So that's the one thing. The other one is, um, you know, agentic AI, because it just means nothing. It just means so many things that it actually means nothing. Um, right. So I like talking about the outcome or the process.
Right. So I think cooperative AI, I think generative AI, agentic AI, to me makes sense in [00:51:00] the context of, you know, independent agents that do stuff for you: you launch them and they do something. To me, that's an agent. But "agentic" means so much more to different people, right, that it's become kind of like, you know, margarine: it doesn't mean anything.
So those are kind of my two.
Mehmet: Eh, yeah, I'm happy you mentioned those YouTube videos, because, exactly, I don't want to point at specific people. I respect everyone who tries to do something; maybe they don't have bad intent, and by the way, I can understand this. But showing it as, yeah, it's easy, and I did this and I did that. I can understand, you know, some people need to make a living out of it. But what I say is, don't show it as if it's so simple. I had a CTO who came on the show, who became a friend as well, and he was telling me, my biggest problem now is people who vibe-code, who are non-technical people.[00:52:00]
He said they are like family and friends, and I have to go and sit down and try to understand what they were trying to build, and I'm telling them, guys, it's not done this way. And I'm a little bit moderate here with my views. I tell people, if they are not technical: look, guys, similar to the no-code wave that was there, and by the way, I tried to enroll myself in this, although I can code, because I wanted to see how they see it, maybe it's a communication medium, a way to go to a technologist and tell them what you're trying to do, because maybe sometimes they're not getting your point, you're not technical enough to explain.
So maybe it'll help you to communicate, but it's not for building the main tool. And I can say, without naming any, I actually tried all of them. They're excellent, but if you are not an expert, they will not give you what you are actually aiming for. I tried even with very good prompts, but I was in this [00:53:00] mindset of assuming I don't know coding, giving it a prompt, and seeing what it comes up with.
And of course, like all AI, it can hallucinate, it can generate a bunch of stuff that makes no sense. And even
Mark: if it's giving you something that's working, the chance of it being secure, the chance of it being compliant, the chance of it being scalable, are approximately zero, unless you set out those requirements in the prompt.
And so, you know, I spend a lot of time, when I AI-code, conversing with, you know, GPT, Claude, whatever. But literally, I'll walk my dogs and I will talk about the architecture of the solution we're working on, and I'll put a couple of hours of that in, and then summarize it, and then generate the requirements from that.
Because I'm thinking about it like an architect. I'm thinking about all of the different angles, and identifying the edge cases, and identifying the [00:54:00] criteria for success, and all of that stuff. And then from that, develop the requirements, and then from that, develop the prompt.
So if you don't go through that process, which is basically waterfall, right? Um, yeah. If you don't go through that, then you're gonna end up with crap.
Mehmet: Yeah. And you know what freaks out some people who are trying to build something? Of course, I like to tease people sometimes. I say, guys, I think you're going to have some technical debt in the future.
And they say, what? What is technical debt? I say, yeah, because you tried to build it by yourself. You don't know how to scale it. Maybe the AI wrote the code in a way that's not scalable, and then maybe you start to use it, and maybe some people start to use it, and you figure out, oh, it's popular, there's demand for it, and now, okay, how do you take it to the next level? What are you going to do with it? You're probably going to end up giving it to someone who knows how to code, right? Maybe with AI or not, to rebuild it from scratch for you. [00:55:00] So I tell them, it's good just to get hands-on, maybe to understand how you create a small MVP and go show it to people around you, but it's not the actual end product that you will come up with.
To your point about agentic AI, I was joking with someone the other day. Of course, I work in sales as well, although I was an engineer; I work in sales but I still have my technical roots. And I told the guy, I'm irritated every time I see someone who maybe doesn't even know what AI stands for saying, yeah, the future is agentic AI. I said, what does that mean, the future is the AI? What does it mean? I don't like people who throw around these cool words. And I said, you know what, by the way, when I was still very early in the podcast, there were these early attempts at doing agents.
Yeah. There was [00:56:00] something called AutoGPT, and BabyAGI, and these open-source projects. And I was telling people just two weeks ago, guys, there's nothing new here. Of course it gets much better, but these concepts aren't new. Technologists always try to automate things, because this is what we like to do. We're lazy; we like to get things done fast. So I'm happy you mentioned this too, and I agree a hundred percent with you on that.
As we are coming close to the end, I'm really enjoying the discussion with you today. So where can people get in touch with you, Mark? Where can they find out more, whether they want to partner with you, or just to get in touch and learn more about what you're doing?
Mark: So, um, Autom8ly.com, that's A-U-T-O-M, the number 8, L-Y, dot com, is our website. I am the only Mark Vange on the whole internet, so you can find me on LinkedIn. You [00:57:00] can find me on a lot of social media platforms. We have a Medium channel as well, where we post stories and discussions about this stuff.
And yeah, especially for folks who are kind of entrepreneurial, and who want to figure out how to ride the tide, uh, this tiger, to greater success, but maybe don't have the technology chops: we'd love to talk to you guys and partner with you on bringing really good, capable solutions to your markets.
So that's kind of what we do.
Mehmet: Great. And I will make life easy for the audience: all the links you mentioned will be in the show notes, so they don't need to search. Whether it's the website, your LinkedIn profile, or the Medium channel, everything will be there. Mark, I can't thank you enough.
I know how busy it can get as a founder, as a business owner, so thank you for giving me one [00:58:00] hour of your time today. I appreciate that. And this is how I end my episodes. This is for the audience: if you guys just discovered this podcast by luck, I hope you enjoyed it. If you did, do me a small favor. As you can see, I try to get people like Mark, who are experts in what they're doing, to put reality in front of us and do meaningful things. Today we discussed AI without the buzz, through business cases, and how Mark can actually help you if you have the entrepreneurial spirit to build something on AI that solves real problems.
So my ask from you: if you liked the episode today and you liked the show, and I know there are a lot of people who say, yeah, we know about it, subscribe, and share it with your friends and colleagues. If you are one of the people who keep coming back again and again, thank you very much. Thank you for the messages and encouragement.
I couldn't keep doing this for almost three years now without all your support, and seeing [00:59:00] that I'm able to give value to you, which is the whole purpose of this podcast. Thank you for keeping us in the top 200 charts on the Apple Podcasts platform in multiple countries all this year, since January till now in October.
And thank you also for the encouragement and the support on everything. As I always say, stay tuned for a new episode very soon. Thank you. Bye-bye.