#592 Stop Treating AI Like Software. It Is Workforce Infrastructure | Karl Simon, CTO, Subatomic AI

In this episode of The CTO Show with Mehmet, Mehmet sits down with Karl Simon, Co-Founder and CTO at Subatomic AI. Karl is building orchestration infrastructure for AI agents and enterprise workflows, focused on turning AI into operational capacity rather than isolated tools.
AI adoption is often framed as a model problem. This conversation reframes it as a systems problem. The gap is not model capability but data quality, workflow design, and orchestration. The discussion breaks down why AI agents perform well in demos but fail in production, and why observability and context are now core requirements for enterprise AI.
If you are building, operating, or investing in enterprise AI systems, this conversation clarifies where value is created and where most implementations fail.
⸻
About the Guest
Karl Simon is the Co-Founder and CTO at Subatomic AI, a company focused on orchestration layers for enterprise AI workflows. His work centers on agentic systems, data integration, and operationalizing AI across business functions.
He has spent decades helping companies modernize across data, cloud, and AI systems, with a focus on automation, optimization, and enterprise-scale transformation.
He is building infrastructure that treats AI as a workforce layer, not a software feature.
LinkedIn: https://www.linkedin.com/in/karlsimon
⸻
Key Takeaways
- AI failures in enterprises are driven by data and workflow gaps, not model limitations
- AI agents succeed only when guided by structured workflows and bounded context
- Data quality issues scale faster with AI, amplifying errors across systems
- Observability is required to trust and operate AI in production environments
- Enterprise AI requires orchestration across multiple systems, not isolated tools
- AI should be treated as workforce capacity, not a software deployment
- SOPs and workflows must evolve continuously or AI will reinforce inefficiencies
- ROI from AI comes from time reallocation and revenue expansion, not just cost reduction
⸻
What You Will Learn
- Why AI models are not the primary bottleneck in enterprise adoption
- How data quality and context directly impact AI output reliability
- The difference between automation, integration, and orchestration in AI systems
- What causes AI agents to fail when moving from demo to production
- How observability frameworks enable trust and auditability in AI workflows
- The concept of AI coworkers and how they fit into enterprise operations
- What CTOs should prioritize first to achieve early ROI from AI
⸻
Episode Highlights
00:00 — AI models are not the real problem
02:00 — Orchestration is the missing layer in enterprise AI
04:00 — Why AI fails without context and trained data
06:30 — Data quality issues break AI systems at scale
09:00 — Orchestration vs automation and integration explained
12:00 — Trust, auditability, and observability in AI systems
16:00 — AI as workforce infrastructure, not software
20:00 — Can AI optimize broken enterprise workflows
27:00 — AI in regulated industries and compliance requirements
29:00 — Where to start for real AI ROI
35:00 — What changes in the next 12 to 18 months
⸻
Resources Mentioned
- Subatomic AI: https://getsubatomic.ai
- Deep Lens: Observability framework for AI workflows
- NIST: Security and compliance framework
- OWASP: Application security framework
- ISO 27001: Information security standard
⸻
Listen Now
Available on all major podcast platforms and YouTube
⸻
Connect with the Show
Follow The CTO Show with Mehmet for more conversations at the intersection of technology, startups, and venture capital
Mehmet: [00:00:00] Hello, and welcome back to a new episode of The CTO Show with Mehmet. Today I'm very pleased to have joining me Karl Simon, Co-Founder and CTO of Subatomic AI. Karl, thank you very much for being here with me today. I really appreciate the time you gave me to be a guest on the CTO Show. I don't like to take much time from my guests to tell us about themselves, so I'll keep that part to you. But I'll give a quick hint, a teaser, to the audience: we're going to talk about AI agents today, and about a lot of topics related to AI, Karl's point of view on what's happening in this domain, about orchestration, and about the challenges we're seeing with AI as a workforce.
Without further ado, thank you again, Karl, for being here with me today. A little bit about you, your background, your journey, and then we can dive into our discussion for today. So the floor is yours.
Karl: Oh, thank you, Mehmet. Again, I really appreciate being here on the show. I'm [00:01:00] looking forward to talking about AI and all the topics you just listed.
Yeah, quick background on me. I've been helping companies modernize for a few decades now, and it's been a passion of mine to always provide improved automation, optimization, or insights on data: from the early days of data warehousing to getting everyone onto the cloud, the introduction of social media, mobile, big data, AI, machine learning, and now generative AI.
I think this could be the most impactful step up for everyone who wants to modernize, because it allows them to adapt, evolve, and scale. Personally, I've done this with companies big and small, and I'm here to talk about where things are going.
Mehmet: Great, and thank you again, Karl, for being here with me today.
Now, I would start from what you are currently working on, what you're building. You call it an [00:02:00] orchestration layer for AI. If we want to summarize in one sentence the problem you are actually trying to solve inside enterprises, what would that be?
Karl: Yeah, we realized there are a lot of companies going after the single-tool, AI-wrapper, single-task problem.
What we recognized quickly is that the real pain is a series of source systems that don't talk to one another: for the orchestration of automation processes, for the orchestration of insights, to have a unified view of your client, your product, or whatever is driving your business that you need to understand more about, across sales, marketing, back-office operations, and beyond.
So orchestration is key to bringing all that information together from the CRMs, the ERPs, and so on. We focus on wealth management, so there are a number of systems like the custodial [00:03:00] systems, risk analysis, tax planning, and you name it. You have to bring it all together, in that example, to know how to advise your client. That's a good proxy for the problem, and you can apply it to any domain use case.
Mehmet: Right. Now, a lot of people think that AI fails because of the models, that they're not good enough, that they have issues, hallucination, all of that. You argue the opposite.
What are we missing here?
Karl: Actually, I would say the models are getting better and better, but there's a reason why companies like Subatomic exist. Number one, it's their training data, and they miss the context and relevance to your company, your workflows, your way. So hallucination becomes quite easy when you ask a simple question directly to the models.
[00:04:00] You need an agentic workflow, with agents that understand the step being performed within the workflow and that have the cognitive knowledge, your way of thinking, embedded within your workflows, to make sure the model is guided on how to interpret the question and then how to synthesize the answer.
Mehmet: Right. Now, if I want to take a real-life example, Karl, of how things start to break when we try to do these orchestrations: is it because the data is not ready? Is it because the existing workflows are not optimized for the AI world? Or is it maybe that sometimes no one is taking ownership?
What have you seen? Why do things break, I would say?
Karl: Yeah, well, it's a great question, [00:05:00] Mehmet, because everything from the data to the workflow execution needs to be ready for the moment. Data: this has been true forever, and it's just as true in AI, because AI will accelerate what's broken. Data cleansing, data completeness, data unification, standardization of the terms and the names.
I mean, the easy one is that my name can be spelled Carl with a C in one system, which is wrong, or Karl with a K, which is right. Someone might think my name is Karla because my middle name is Alan and it has the initial A. These don't match up across the systems, and that makes joining and unification across these separate sources difficult.
But the business terms as well: what does gross profit mean in your company? What does it mean to have marketing lift in your campaign as a result of executing messaging to the [00:06:00] right target? These are things that need to be clarified, so that when the systems bring a complete picture together, it's not built on an incomplete understanding of the data.
If you feed that into the model, you're going to get wrong answers. Then there's the workflow: you have to apply typical engineering practices. Your call to the large language model may fail; any step may fail. Typical retry logic is critically important. And then there are all the security issues that can happen at any point in the system.
AI introduces yet another opportunity for prompt injections, where you can actually get the model to provide answers that are inappropriate, or actually create breaches of your critical personal information.
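Karl's point about retry logic can be sketched in a few lines. This is a generic pattern, not Subatomic's implementation; `fn` stands in for whatever model client call you use:

```python
import random
import time

def call_with_retry(fn, max_attempts=4, base_delay=1.0):
    """Retry a flaky call (such as an LLM request) with exponential backoff.

    fn is any zero-argument callable; if every attempt fails, the last
    exception is re-raised so the surrounding workflow can handle it.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure to the workflow
            # exponential backoff with jitter to avoid synchronized retries
            time.sleep(base_delay * 2 ** (attempt - 1) * random.uniform(0.5, 1.0))
```

In a real workflow you would retry only transient errors (timeouts, rate limits) rather than every exception, and log each attempt for the observability layer discussed later in the episode.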
Mehmet: Right. Another thing we've seen, or heard about, especially when we talk about AI agents, is that they look great in demos, but when we [00:07:00] try to take them into production, they collapse.
How much of this is a technical problem versus an organizational problem? Is this also why we hear companies saying, hey, we tried to deploy AI and nothing changed, we didn't see the value proposition? What's happening behind the scenes that is causing these issues?
The reason I'm digging so much into the problem is that afterward we're going to talk about how you're solving it. So let's understand this part first.
Karl: You know, I'm glad you asked, Mehmet, because a lot of people are experiencing this, and it's unfortunate. A lot of the easy examples are when you try to use the model directly.
Again, you're not applying your own cognition, your workflows, the way you do business, your philosophies, your way, so it doesn't really align with what you want to see. Or people misuse the [00:08:00] model: they assume that just because it has a significantly large input context window, the model can recall all that information, but information ends up getting lost in the middle.
You really need to feed bounded context, stepwise, in an appropriate workflow, so that you have guardrails around what acceptable answers will be. You have the specified context to provide guidance on how to answer the question; you may even provide examples within that guidance. Ultimately, it's about the right size and focus for that given workflow step, so that you're not sending in contextual rot that will confuse the model. That is a really fine-tuned consideration that a lot of people unfortunately overlook.
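The "bounded context, stepwise" idea can be illustrated with a simple budgeted window. The character budget and prompt shape here are illustrative sketches, not a claim about any particular model or about Subatomic's approach:

```python
def stepwise_prompts(chunks, step_instruction, max_chars=2000):
    """Split source chunks into bounded windows and pair each window with
    the instruction for the current workflow step, so no single prompt
    exceeds the context budget."""
    window, size = [], 0
    for chunk in chunks:
        if window and size + len(chunk) > max_chars:
            # budget exceeded: emit the current window as one stepwise prompt
            yield step_instruction + "\n\n" + "\n".join(window)
            window, size = [], 0
        window.append(chunk)
        size += len(chunk)
    if window:
        yield step_instruction + "\n\n" + "\n".join(window)
```

Production systems typically count tokens rather than characters and split on semantic boundaries, but the principle is the same: each step sees only as much context as it needs.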
Mehmet: Right. Now, let's talk about the orchestration layer, [00:09:00] which we've mentioned a couple of times. Let's try to bring it down to simple terms. There are a lot of terms out there, and a lot of things that may be mixing people up.
People sometimes confuse it, and it happened when AI first arrived and I was talking to people, with automation, for example. Some people mix it up with APIs, middleware. I'm talking here about the orchestration part. So how do we define it, and how do we differentiate it from what we already had, which related more to automation and integration?
Karl: Right. So it is automation and integration, although within an agentic workflow you have different steps at times taking agency and making decisions about what to do next. That's very aligned with the typical human-based workflow that may exist. [00:10:00] Maybe one of the easiest examples: think about a manufacturing floor that needs to fulfill an order.
They get a pick slip, and it tells them all the items that need to be placed in the order. The bin might hit the assembly line or conveyor belt, and you start picking from the locations where a lot of the items are likely grouped together. But sometimes items are out of stock in the primary location, so you might coordinate with someone on the fly: can you get something from overstock so we can fill the primary location? These are the moments where stepwise reasoning is applied. Or there might not even be more inventory in overstock, and you have to decide either to partially ship or to backorder the entire order and wait for that last part, because without it, you can't [00:11:00] really ship a partial order that's sufficiently useful.
So all of this comes into play in the AI world, or at any level of automation. You need that reasoning logic. Before, automation was always execute steps one through five, with no decisioning necessarily; maybe some routing based upon case logic. But with AI, the automation and the integration steps to the different systems depend upon the request at that moment in time. It's not just a singular workflow. There could be a lot of agentic reasoning up front to decide: what is the real request here, and how should it be answered? And so the steps may be reordered according to that particular request.
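The pick-slip example maps naturally onto a small decision function. This toy sketch is my own illustration, not Subatomic's logic, but it shows how decisioning differs from a fixed "steps one through five" script:

```python
def fulfill(order_items, primary, overstock, partial_ok=True):
    """Decide how to fulfill an order: pick from primary stock, fall back
    to overstock, and if items are still missing, choose between a
    partial shipment and backordering the whole order."""
    picked, missing = [], []
    for item in order_items:
        if primary.get(item, 0) > 0:
            primary[item] -= 1
            picked.append(item)
        elif overstock.get(item, 0) > 0:
            # the "coordinate on the fly" branch: pull from overstock
            overstock[item] -= 1
            picked.append(item)
        else:
            missing.append(item)
    if not missing:
        return ("ship", picked)
    if partial_ok and picked:
        return ("partial_ship", picked)
    return ("backorder", missing)
```

In an agentic workflow, a step like `partial_ok` would itself be decided by reasoning over the customer and order context rather than hard-coded as a flag.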
Mehmet: And I think this is where data preparation plays a major role, so [00:12:00] the AI has the context. Now, the question that follows, and feel free to elaborate on it, comes from people who are new to this. By the way, I'm surprised that in talking to some business-oriented people, not the technical folks, the question they sometimes ask is: how can we rely on the AI outputs? How can I make sure that an agent, or an agentic workflow, will be able to make a decision that might be business critical? Are companies having this fear, Karl, and how can we answer these fears?
Karl: Yeah, I love that question, Mehmet, because number one, many companies are having that fear, but number two, [00:13:00] not enough companies are having that fear.
I mean, here you have your business reputation, your ability to deliver, and ultimately what will culminate in customer satisfaction at risk. So absolutely, people should be, first of all, distrustful that the result will come back correctly, and observability is key here. You need the capability to see exactly what happened, not just at the point where the model did the inference, but in the steps leading up to it.
Did it get fed the correct data? As you're pointing out, the appropriate context includes both the data, which is really the proxy for your knowledge, and the cognitive understanding of your processes, your way, with your philosophies and your reasoning. When it doesn't have that, when it's not properly fed in, of course the model is not going to provide you with the correct [00:14:00] answer.
So observability gives that to you. Now, a lot of people say, oh, well, we have logs. You hear that from a lot of companies.
Mehmet: Mm-hmm.
Karl: If you have a deep IT team, they can help you dig through, but it's not immediate, it's not easily repeatable without a lot of effort, and it doesn't give you assurances that everything was done correctly.
So for example, here at Subatomic we have something called Deep Lens. Deep Lens includes three different lenses. The first is introspection, or as we call it, Introspect. That allows you to dig into the actual steps and the reasoning applied in your workflow, so that you know it did the right thing. The second is Audit. We audit across all the different considerations for compliance, whether it's NIST, OWASP, ISO 27001, and many more. Then we have Eval. Eval actually [00:15:00] digs in and measures how well it did at every step along the way, including the final answer. So it gives you a full view into whether it's using your cognition, as a check against that ground truth, and whether the answers are correct, based upon reconciling them against proven techniques.
And if it doesn't, it will first try to course correct. It will course correct in the middle of the process. But then you can still observe, right immediately on that request, how well it did, even at the end.
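Deep Lens itself is proprietary, but the introspection idea Karl describes, recording what each workflow step received and produced so a run can be replayed and inspected, can be sketched minimally. Everything here (class name, fields) is my illustration:

```python
import json
import time

class WorkflowTrace:
    """Record inputs, outputs, and timing for each workflow step so a run
    can be inspected after the fact. A toy version of step-level
    introspection; real systems also capture prompts, model IDs, etc."""

    def __init__(self, workflow_name):
        self.workflow_name = workflow_name
        self.steps = []

    def record(self, step_name, fn, **inputs):
        """Run one step through the trace, capturing its inputs and output."""
        start = time.perf_counter()
        output = fn(**inputs)
        self.steps.append({
            "step": step_name,
            "inputs": inputs,
            "output": output,
            "elapsed_s": round(time.perf_counter() - start, 4),
        })
        return output

    def dump(self):
        # serializable audit record of the whole run
        return json.dumps({"workflow": self.workflow_name, "steps": self.steps})
```

The key design point is that every step passes through `record`, so the question "did it get fed the correct data?" is answerable per request, not reconstructed from scattered logs.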
Mehmet: Right. Yeah, absolutely. Karl, you frame AI as workforce infrastructure, not as software.
Karl: Yeah.
Mehmet: This is a big claim, a big shift. I understand it, and I understand the rationale behind it. But if you want to explain it to [00:16:00] other people: how would that look in practice? How is AI part of the workforce now, and infrastructure rather than software? Because when you tell people AI, the first thing that comes to their mind is coding, software, something I have to deploy.
Karl: Yeah. So what we believe is that when you're adopting AI, you're really hiring AI coworkers, and we call them exactly that: AI coworkers. You want these coworkers to work just like a human being would. If you have a question you can't self-serve, you answer it through some sort of knowledge base, which AI can do for you much more easily and scalably, or you ask someone who is the knowledge leader. Or you may ask for something to be done by a group, where you have a core person who really represents the best [00:17:00] practices of that group. You'll ask that person: can you help me do this? That represents the cross-functional nature of workflows.
When you have an AI coworker, you onboard them just like any other human being into your knowledge, your SOPs, your way. We think AI coworkers need to come in with the requisite skills for the purpose-driven job at hand and get onboarded to your playbooks. That is really the perfect analogy. And when you think about it from the perspective of evaluating any person, human or AI coworker, you would want to evaluate them the same way and decide: are these AI coworkers actually performing at the right level? If they're not, then you move on. Because ultimately, when you hire AI coworkers, you're paying for them.
You're thinking: if this AI coworker is so great, [00:18:00] how many human coworkers does it represent? Am I getting 1.5 human-equivalent coworkers in terms of production, and likely higher quality? Or am I getting five times? These are considerations every CTO has to think through, along with their executive team, about ROI. What is the intention behind using these AI coworkers, or using AI at all? And the AI coworker as a proxy for an actual person performing is the easiest way to think about it.
Mehmet: Right. Can I ask a kind of follow-up question, which is a little bit of a tangent? It popped up in my mind; I didn't prepare for it. You mentioned the SOPs, and you mentioned this similarity between how you hire and onboard a human and an AI coworker. [00:19:00] One of the things I saw at some stage in my career, a very long time ago, is that a company can have a problem with its own SOPs, right?
So you hire people to do the same thing, and they don't optimize. They just say, okay, this is how we do things here, this is the process, just execute. Now, would AI be able to go, and I'm saying this is kind of a science-fiction thing, go and say: hey, by the way, your SOPs are not optimized, we need to do something about it? Is that possible? And how much time would it take the AI coworker to understand that these SOPs are maybe outdated, that maybe something can be done in a better way?
The reason I'm asking you, Karl, is that from my personal [00:20:00] experience, what kills a lot of businesses over time is that they keep their old ways, because they always say, this is how we have always done it, we don't need to change. And then someone comes along and disrupts them. So can AI avoid this? That's the point I want to reach.
Karl: I think that's really smart. Yes, the answer is yes. So I'm going to step back for a moment. At Subatomic, we deployed that observability set of lenses called Deep Lens.
But we also have security built in as a first-class citizen. So when I talk about Deep Lens evaluating everything, that includes the security as well. We immediately give you point-in-time, per-request visibility, even into security. And then we roll everything up and identify patterns.
Think about it: when you want to evaluate a workflow, what are the key things you're evaluating for? You're evaluating for [00:21:00] throughput, execution time, accuracy, and completeness. With our Deep Lens Eval lens, you have both the single-request-at-a-time review and the aggregate pattern recognition.
You can identify whether it's hitting the mark on all those things, and you can build in AI, whether as part of the singular request execution or, what I would recommend, one that actually reviews the aggregate data and identifies patterns to give you better approaches, better recommendations.
I mean, we at Subatomic believe in the idea of adapt, evolve, and scale. If you're not adapting, evolving, and scaling your workflows, that's a critically missed opportunity. And that comes full circle to the beginning of our discussion, where I said modernizing your company means your workflows, your insights, your way.
There has [00:22:00] never been a better time to do this, maybe by orders of magnitude, with generative AI, because you can get that insight quickly and adapt your workflows. Even your organization can adapt and evolve, so that you can actually scale, increase output, and increase your customer satisfaction. That's key. So yes, you can embed AI to actually review and recommend.
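Karl's recommendation to review aggregate evaluation data for patterns could look like this in miniature. The score format (a dict of per-step scores per run) is an assumption for illustration:

```python
from statistics import mean

def flag_weak_steps(runs, threshold=0.8):
    """Aggregate per-step eval scores across many workflow runs and flag
    steps whose average score falls below a threshold, surfacing SOP
    steps that may need to be adapted or redesigned."""
    by_step = {}
    for run in runs:                      # each run: {step_name: score}
        for step, score in run.items():
            by_step.setdefault(step, []).append(score)
    return {step: round(mean(scores), 2)
            for step, scores in by_step.items()
            if mean(scores) < threshold}
```

This is the aggregate view Karl contrasts with single-request review: one run with a bad score is noise; a step that averages poorly across hundreds of runs is an SOP problem.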
Mehmet: Right. This is exciting, and I hope we see more adoption of this. Now, you also talk, Karl, about how good AI is invisible, right? What does that mean for business leaders? Is it that they just don't have to say, we have adopted AI for this? What's the trick to having a good AI which is invisible?
Karl: Right. Well, you, as a company representative, can talk about having AI. I think that's perfectly fine; that's not the invisible aspect of AI. It's really more about the fact that when things are working well, [00:23:00] you don't really think about them, but when something breaks, you know who is involved and you know what broke, and so it becomes very visible. So many of our workflow processes just happen, and they continue to do well because they are reliable, accurate, and ultimately successful, and that's what you want AI to be. That's the first layer of invisibility.
The second one is this: you don't have to know every single person involved in a workflow. You just know that if you contact somebody today in the human-based workforce, it will get taken care of. It may not have been that [00:24:00] person all by him or herself; there may have been many players in the background. It's the same thing with AI. We have this chief of staff, we call it the concierge, that knows the full AI organization: all the individual teams that are workflow- or domain-focused and need to be contacted. It involves the right people, in this case AI coworkers, to get involved and perform.
You, as a human being, don't have to know all those AI coworkers. You don't have to prompt while knowing every single AI coworker involved. You just need to state the general request, and you can expect that your chief of staff for the AI team, your concierge, knows exactly what you mean and can pull the right AI coworkers to perform the right workflows and their individual steps, with the cognition and knowledge base of data to do it your way.
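The concierge routing Karl describes can be sketched as simple dispatch. A real system would use an LLM or intent classifier rather than keyword matching, and the class and method names here are purely illustrative:

```python
class Concierge:
    """Toy 'chief of staff' router: match a request to a registered team of
    AI coworkers and dispatch it, so the requester never needs to know
    which coworkers are involved."""

    def __init__(self):
        self.teams = []  # list of (keywords, handler) pairs

    def register(self, keywords, handler):
        """Register a team by the topic keywords it covers."""
        self.teams.append((set(keywords), handler))

    def handle(self, request):
        words = set(request.lower().split())
        for keywords, handler in self.teams:
            if keywords & words:  # first team with a keyword match wins
                return handler(request)
        return "no team matched; escalate to a human"
```

The structural point survives the simplification: the caller states a general request, and routing to specific workers is the orchestration layer's job, not the user's.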
Mehmet: This will [00:25:00] affect the org chart a lot, I would say, Karl. But are we going to see, for example, supervisors or managers who are actually AI agents?
Karl: Oh, you mean for human beings? Yeah. I mean, we have them already for other AI coworkers, but I will say that is a very feasible future. I'm not suggesting it's an immediate future, because it takes a bit of organizational behavior and acceptance for adoption. I'm sure it's happening, though, at a subset of companies already.
Mehmet: Yeah, time will show how this gets taken to the next level. To be honest, if I go back in time, and maybe I'm a junior and I have to have [00:26:00] a supervisor which is an AI agent, I don't mind, because if it can teach me to do the job the right way, and it's customized to me, it might be better than having a human supervisor.
I always relate this to education, and we know that one of the biggest problems in traditional education is that the system is designed assuming everyone is on the same level, that everyone has the same cognitive abilities. Some people can write faster, some people can read faster, and not everyone can, and this is the problem in education. So if I take that to the workforce, and I have someone onboarding me, which is an AI agent in this case, at my own pace, and it can bring me up to speed to be productive in the company much faster, I wouldn't say no to it. I would actually love that. This is a personal opinion.
Now, if we want to talk about applying all of what we've discussed so far, Karl, [00:27:00] in highly regulated verticals and industries, whether it's healthcare or financial institutions: are we able to have AI outputs that are predictable and auditable as well in such environments?
Karl: Oh, absolutely. Again, this gets back to Subatomic's Deep Lens and Subatomic's security. We build both of them in as first-class citizens within everything executed in the system. And we have an elaborate set of reports that give you full transparency, and also interactive UIs where you can step through the processes and see the checks being done all along the way.
The roll-up is there, not just of performance and execution against your ground truth, but also of alignment with auditability across all the [00:28:00] different compliance and security frameworks. Now, not enough companies are doing this. We make it a point of emphasis; in fact, we intentionally differentiate in those areas because it's so critically important. The goal is for everyone to start demanding this as a client, and for all providers to ensure it becomes part of the core.
Mehmet: Right. Now, let's try to give some hints to CTOs, founders, and leaders in enterprise organizations who want to make a proper start with AI that delivers outcomes. Where do they have to start?
Karl: Oh, so you're asking: if they want to start having success with AI, what's a good approach?
Mehmet: Yes. [00:29:00] Okay. If they want something that can deliver outcomes, and back to the ROI you mentioned at the beginning: if they want the earliest ROI results possible, where do they have to start first? Fixing data, systems, workflows? Where should I start?
Karl: Well, absolutely, you have to fix the foundation, which is the data. You have to get the data right. Garbage in, garbage out has been stated for decades now, and again, AI is an accelerator of both good and bad. So you've got to get your data correct: unify it, standardize it, clean it, and have it prepared in a structure that can be rapidly ingested at runtime for its purpose.
That means a normal form for a lot of transaction-based information. If you have a medallion architecture, that's your silver layer, and then a dimensional model for rapid [00:30:00] insights, that's your gold layer. Those are key. So it's both the preparation of the data so that it's correct, and the actual structuring of the data so that it's consumable at runtime for its purpose.
Then think about all the use cases and the impact they would have in terms of velocity, cost optimization, and, when you reduce your cost, your ability to shift your focus to growth. Many of these different roles can now refocus on building revenue. In the case of wealth management, we like to talk about those who spend a lot of time preparing agendas for meetings with their top clients. They should be able to do that quickly, and in the meantime use the extra, freed-up time to basically grow their assets under management. That's a critical opportunity, and all of that goes into [00:31:00] ROI.
So stack-rank your top pain points. Think about them in the context of cost optimization, velocity, and accuracy and quality, but also in terms of how much you can grow your business as a result of all that saved time, and then go for it. Now, if it happens to be a complex process, again, there are companies out there like Subatomic that can help you get it simplified and delivered very quickly, through internal engineering practices where we actually have our own AI coworkers on staff to perform. But if you are thinking about a compressed time to value, then absolutely pick the use cases that have all those requirements and prerequisites well defined and ready to go.
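Karl's "stack rank your top pain points" advice can be made concrete with a rough scoring function. The weighting below, weekly value divided by delivery effort, is my illustration, not a Subatomic formula; the field names are assumptions:

```python
def stack_rank(pain_points):
    """Rank candidate AI use cases by a rough weekly-value-per-effort score:
    time saved (converted to cost) plus revenue upside, divided by the
    estimated effort in weeks to deliver."""
    def score(p):
        weekly_value = (p["hours_saved_per_week"] * p["hourly_cost"]
                        + p["revenue_upside_per_week"])
        return weekly_value / max(p["effort_weeks"], 1)
    return sorted(pain_points, key=score, reverse=True)
```

Even a crude score like this forces the conversation Karl describes: it makes cost savings, growth upside, and time to value explicit side by side, rather than picking the use case that is merely loudest.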
Mehmet: Right. Karl, let's say I want to get onboarded. Let's say we had our [00:32:00] first discovery meeting with you, and we decided to move forward. I'm trying to see, for me as an executive, as a technology leader of my company: is there anything I need to buy versus build? And since I'm getting the help on the agents from you, how would my first 90 days look if I start to get onboarded?
Karl: Sure. Well, if you're buying, basically, with a company like Subatomic, we walk you through a two-week discovery, a more extended discovery, to learn more about your processes. Your preparation for that needs to be gathering the right individuals who can reflect your workflows. Bring along those SOPs if you have them defined.
If you have defined decisioning and reasoning logic, bring that along too. But a lot of it is in people's heads, tribal knowledge. [00:33:00] And so part of the discovery is pulling out that tribal knowledge. It gets captured, if you allow recordings, in the meeting recordings, and then all that information, at least the way we do it, is fed into our series of AI coworkers that start planning, architecting, designing, and actually building your solution
Mehmet: mm-hmm
Karl: within days, including at least a small enough proxy of information or data that's already been unified, standardized, and
cleansed, so you can see outputs that are already 70 to 80% of the way there. And then you can expect, within 60 to 90 days, the latter part of a three-month engagement, the final 20 to 30% getting done with you, with a check-in every week showing you the latest changes based on the last meeting's recommendations or requests.
[00:34:00] So that's what it would look like, at least for a build-or-buy situation with Subatomic. I can't really speak to how other companies do it, but the preparation is the same as what we've been talking about: get your data in order, ready to be examined, profiled, and then identified for opportunities to improve.
Get all your SOPs, get the right people ready to go, and make sure you start socializing what you're doing throughout the company. That's the build-or-buy situation. Now, on the build side, you're going to want to think about how you're going to do it on your own. Do you understand how to build a context infrastructure that has a cognitive engine reflecting your ways of working,
a good memory that coincides with it, the further context infrastructure components, and a way to build in observability and security? If you have a smart plan for that, great. I will say that a lot of companies believe they do. They start down that [00:35:00] process and unfortunately realize it's a little more
rigorous and complex than they thought. So if you want to make sure it's done right, really dive deep into the latest practices that ensure the rigor is there, and build visible observability: not just capture traces, but actually have the traces brought to life in terms of identifying what happened and what went wrong.
Mehmet: You know, you repeated this a lot of times, and I think this is key: observability. It's something very important to actually understand what's going on. For me, that's a key takeaway from this. Now, maybe a little bit of a traditional question. Everything is moving fast, Karl. We
had this generative AI wave and then the jump to agentic AI so fast. What are you expecting in the [00:36:00] next 12 to 18 months to be the biggest shift in enterprise AI?
Karl: I think the biggest shift is a lot of what we've been talking about, because a lot of it is missing in the industry.
Mehmet: Mm-hmm.
Karl: But I do think one of the things that will quickly evolve is this: we're hearing from a number of clients already that they want to be efficient not only in front of their keyboard, but even driving to and from work. Now, that might not sound like a popular or happy thing to hear, that they're constantly thinking about work, but a lot of people, when they get home, want to be able to focus on family.
And the way they try to increase that time with family is by taking advantage of transit time. So that drives the opportunity for voice or audio: interacting with AI, giving it questions or instructions through voice, or maybe listening to a podcast of your day that's [00:37:00] auto-generated by the AI.
The AI coworkers say, "Well, Mehmet, good morning. Here's what you have on your calendar. Here are the key people you need to talk to, the key topics for those people or clients, what you want to get done before the end of the day, the clients you want to reach out to, and the key talking points you may have." You get that download before you've even taken the first step into the office.
So there are opportunities there. I do think people will refine everything we've talked about to more optimal states. And I think you're going to start hearing that people are really interested now in connecting externally with other companies within their supply chain, for example.

Mehmet: Mm-hmm.

Karl: So that agent-to-agent communication will start becoming an [00:38:00] increasingly important part of the rollout. We've heard about the protocols being developed, but we don't hear that much yet about clients thinking outside their own house. I think within the 12-to-18-month part of that overall timeframe, we're going to start seeing the first people, the first companies, doing that.
Mehmet: Yeah, that would be an interesting moment to see and watch, indeed. A traditional final question, Karl: where can people get in touch and learn more about Subatomic?
Karl: Oh, please reach out to me directly on LinkedIn, but you can also go to our website, getsubatomic.ai, where you'll learn more about what we're doing, and there's an opportunity to fill out a form and contact our general support
inbox. You can reach out to my co-founder as well. Sam is available at his first [00:39:00] name at getsubatomic.ai, and you can reach out directly by email to me, karl@getsubatomic.ai.
Mehmet: Great. Again, thank you very much, Karl, for this, I would say, eye-opening conversation, right?
About how the world is shifting towards agentic AI in the workforce. I'm highlighting some of the key points you discussed. First, we need to understand what we have; we need to have the observability; we need to work on the optimization part.
Then we implement the orchestration, and then we start to refine it. And the last thing, which you mentioned about the future: how we might see agents, or coworkers, talking to each other across companies. That would be very interesting. The links you mentioned will all be available in the show notes.
So if the audience is listening to this podcast on their favorite podcasting app, you'll find them in the show notes. If you're watching this on YouTube, [00:40:00] you'll find them in the description. And again, thank you, Karl, so much for this discussion. This is how I end my episodes; this part is for the audience, if you just discovered us.
Thank you for passing by. A small favor from me: if you can, just share it with more people. We try to educate and reach a lot of people who might not have heard yet about what we're discussing, the trends, and also to learn about great guests similar to Karl, who was with me here today, and what they are building.
And if you are one of the people who keep coming back again and again, the loyal fans, thank you very much. Your support is really unmatched. You have kept the podcast ranking in the Apple top 200 podcast charts across multiple countries since last year, and you're continuing the same this year.
Every week the mix of two to three countries changes, but we are always in the Apple top 200 chart in at least three to four countries. So [00:41:00] thank you very much. This doesn't happen by itself; it's because you are listening and referring more people. So thank you very much indeed, and we'll meet again very soon.
Thank you. Bye-bye.