#541 How Composability Transforms Engineering: Luv Kapur on Scaling Fast and Shipping Better
In this episode, Luv Kapur joins Mehmet to break down how composability is reshaping modern engineering. Luv is an engineering leader at Bit, working across their open source and enterprise platforms, and one of the earliest advocates for modular, reusable software as a way to unlock scale.
They explore why composability matters, how modular systems speed up delivery, and the cultural shift required inside engineering teams. Luv also shares real results from enterprise adoption, including faster iteration cycles, fewer defects, and measurable ROI in the eight-figure range. The conversation closes with a deep look into Hope AI, Bit's AI architect designed to orchestrate existing components rather than generate endless code.
This is a practical and insightful episode for any CTO, engineering leader, or founder navigating the next era of platform development.
⸻
About Luv Kapur
Luv Kapur is an Engineering Lead and Solutions Architect at Bit. His background spans platform engineering, dev tooling, internal systems, and leading enterprise adoption of composable software. He has helped teams move from monolithic and fragmented architectures to modular systems that enable real speed, discoverability, and developer empowerment.
He now works across Bit’s open source ecosystem and Bit Cloud for enterprise, helping organizations adopt composability and shift toward a more scalable engineering model.
⸻
Key Takeaways
• Composability is an operating model that enables teams to build with reusable building blocks and ship faster.
• Modular architectures reduce defects, improve consistency, and increase transparency across engineering teams.
• Discoverability and ownership are core success factors. Without them, composability collapses into fragmentation.
• AI should act as an orchestrator, not a generator. The future belongs to systems that reuse proven components.
• Enterprise ROI from composability is measurable, from reduced iteration time to real cost savings in the millions.
• Citizen developers will play a bigger role as AI unlocks access to complex internal systems.
• Engineers will still be needed, but AI will free them to solve harder and more meaningful problems.
⸻
What You Will Learn
• How modular software accelerates delivery
• Why enterprises struggle with legacy systems and how bottom-up adoption solves it
• How to measure success in composability using real metrics
• The cultural shift required for high-performing engineering teams
• How AI can guide architecture instead of generating more code
• The role of discoverability, ownership, and inner source in large organizations
• What Hope AI is and how it works as an AI architect
⸻
Episode Highlights
00:00 Introduction and guest background
03:00 What composability really means and why it matters
06:00 Modular architectures explained with real-world examples
10:00 What defines high performance engineering teams
14:00 Why companies fail when adopting composability
17:00 The shift from top-down mandates to bottom-up success
20:00 Tangible metrics teams can measure
23:00 AI as orchestrator versus generator
27:00 Why code reuse will define the next decade
31:00 Inside Hope AI and how it guides architecture
35:00 Enterprise results and real ROI
37:00 The future of platform development
41:00 Why engineers remain irreplaceable
42:00 How to connect with Luv Kapur
⸻
Resources Mentioned
• Bit (Open Source): https://bit.dev
• Bit Cloud (Enterprise): https://bit.cloud
• Luv Kapur on LinkedIn: https://www.linkedin.com/in/luvkapur/
[00:00:00]
Mehmet: Hello and welcome back to a new episode of the CTO Show with Mehmet. Today I'm very pleased to have Luv Kapur joining me. The way I love to do it, as all my guests know, is that I leave it to my guests to [00:01:00] introduce themselves. Luv is a technology leader at Bit, and we'll talk with him today about multiple topics: composability, high-performance teams, maybe buy versus build as well, of course AI, and a couple of other engineering topics. But without further ado, Luv, thank you again for being here with me today. What I always love to do is hand it over to my guest: introduce yourself, tell us more about you, your journey, and what you're currently up to, and then we can start the discussion from there.
So the floor is yours.
Luv: Alright, thank you so much. It's a pleasure to be here with you, Mehmet. I'm an engineering lead at Bit, where we build an end-to-end platform for composable software. Before joining Bit, I led a platform team at a fintech company, where we had 20 to 30 developers working [00:02:00] on internal tooling. Our goal was to make sure teams could develop software faster and more efficiently. So I've always had a love for dev tooling. As much as I love working directly on product, watching my work empower the developers who build those products has been very satisfying.
My career has mostly been about dev tooling, platform teams, and building internal systems. That's how I got into composability and Bit: I was actually one of Bit's first enterprise customers. When I was a platform lead, I got Bit adopted across the entire enterprise.
Once that work was done, I joined Bit so I could bring the power of composability to more than just one enterprise, to a larger group, whether startups or other enterprises. So now I work on both the open source and the enterprise software for us, and I also deal directly with [00:03:00] customers as a solutions architect, understanding how the integration goes once they adopt Bit.
Mehmet: Great, and thank you again, Luv, for being here with me today. Now, a traditional starting question:
tell me more, what do you mean by composability? Of course I know, but this is to demystify composability and modular architectures, what drew you to adopt them, and maybe the benefits that really clicked for you.
Luv: Absolutely.
Before I start: thinking about composability as just a software architecture pattern undersells it. It's way more than that; it's an organizational, operational model. Every time I tell people about composability, I ask them to envision Lego,
[00:04:00] and also to think about how manufacturing achieved consistency and how quickly we can now build in manufacturing itself. The core concept of composability is that you build with individual, modular, reusable, highly leveraged units,
which help you scale very fast, because there is consistency and there is quality. What it helps you do is build systems, and scale those systems independently, without being blocked. One of the key issues I see a lot of organizations face as they scale their engineering department and engineering headcount is that teams start getting coupled internally, getting blocked.
This is where composability shines. Composability says you are not building a feature in a system, you are building a system of features. It's the other way of thinking about it: [00:05:00] each artifact you produce as part of the development process is, in the end, an asset.
That asset can be used, leveraged, and rebuilt, and anything you do in the future can utilize the asset you have built; it isn't limited to the product it was originally built for. That has a tremendous impact. The consequence is that as you iterate on your software, as you build your teams, as you scale,
things get faster, more consistent, and easier to deliver. So the core concept, again, is that you decouple yourself. You build these highly modular, reusable artifacts as part of the development process, and you organize your teams the same way. If you think about
composability at the organizational level, each team is completely independent, they own end-to-end systems, their systems can easily be integrated internally and built on top of, and then the subsequent delivery of these systems, features, or [00:06:00] artifacts keeps getting faster because you build on top of them.
Mehmet: Cool, Luv. Can you give us an example? You mentioned Lego parts, so let's take a small organization that maybe has
sales, marketing, HR, and, I don't know, maybe finance. I'm just trying to explain it in a very simple way.
So instead of building one system that has all these things together, where, correct me if I'm wrong, when you try to add a feature you touch something and affect the whole thing, you build a module for finance, a module for HR, a module for sales, and then you keep iterating and enhancing each one by itself.
And then you can scale the whole [00:07:00] system together, because you have this modular approach. Correct me if I got it right.
Luv: No, you're right, you've got the gist of it exactly. The thing I try to emphasize is that you have to stop thinking in terms of internal deliverables,
like "finance needs a dashboard so we can visualize what we are spending on," or "analytics needs to go build a dashboard." You take a step back and the organization asks: what capability do I need to build? What needs to be shipped to have that capability internally?
Then, what does the capability do? That's the contract of a modular piece of software: it needs to display some things, or fetch things from somewhere, or organize and group things. You build this capability as a reusable module, and then you think about who will consume it.
So if you're building a capability for finance, you understand what my customers in finance are going to use it for and how they're going to [00:08:00] implement it, and then, if they have to extend or customize it, how they can do that directly. So you bake the capability in with a more holistic approach, and then you ship it and own it end to end.
I use the word capability loosely, because when you translate it into an actual deliverable there are a lot of pieces behind it: there might be extraction going on, an API server, a UI getting built on top of it. You try to modularize everything into a single unit, then you ship, and you make sure there are clear contracts for how it behaves and how it extends. What this empowers you to do is understand, as an organization at a given point in time, what your capabilities are.
When you start having these discussions, when product managers get into meetings and decide, "Hey, our stakeholders are asking us to build feature X, how can we ship this?", you're no longer playing a guessing game. This is where I think a lot of software teams [00:09:00] struggle:
estimates are very, very hard, because non-technical people are trying to understand how much time it takes to ship a technical deliverable. There's a disconnect, and there's nothing to blame on the non-technical user; the disconnect exists because, from a non-technical perspective, there are gaps and unknowns in understanding what it takes to ship something technical.
But once you start thinking in capabilities, "last quarter these are the capabilities we shipped, this is what they offer, can we build on top of them? Can we extend them? Have we done this in Q1 already? Let's take that system and build on top of it,"
now you're having more intelligent, more real, more accurate conversations. So that's the overall impact when you start working through composability.
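To make the "capability with a contract" idea Luv describes a bit more concrete, here is a minimal TypeScript sketch of a reusable module with an explicit contract and a single injected data source. All names here (SpendSummary, SpendAnalytics, createSpendAnalytics) are hypothetical illustrations, not Bit APIs.

```ts
// Minimal sketch of a "capability as a reusable module" with an explicit contract.
// The names are illustrative only.

// The contract: what the capability offers, independent of any one consumer.
export interface SpendSummary {
  department: string;
  totalSpend: number;
  period: { from: Date; to: Date };
}

export interface SpendAnalytics {
  // Fetch and aggregate spend for one department over a period.
  summarize(department: string, from: Date, to: Date): Promise<SpendSummary>;
}

// One implementation, owned end to end by a single team and shipped as its own module.
// The data source is injected, so the module stays reusable across consumers.
export function createSpendAnalytics(
  fetchAmounts: (department: string, from: Date, to: Date) => Promise<number[]>,
): SpendAnalytics {
  return {
    async summarize(department, from, to) {
      const amounts = await fetchAmounts(department, from, to);
      const totalSpend = amounts.reduce((sum, amount) => sum + amount, 0);
      return { department, totalSpend, period: { from, to } };
    },
  };
}

// A consumer (say, the finance dashboard) depends only on the SpendAnalytics contract,
// so the implementation can be extended or swapped without breaking callers.
```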
Mehmet: Great. Now there's another concept, or let's say a common term we hear a lot, Luv, which is the high-[00:10:00]performance platform.
In your experience, what really defines a high-performance platform, and how does composability enable that, in your opinion?
Luv: I have a few indicators. When everyone talks about high-performance, composable platforms, the foundation of it is the impact you get out of them at the end. The first is lead time: is it getting reduced?
What I mean by lead time is, when someone asks you for a feature and you deliver it, and then someone asks for a subsequent feature, how do you deliver? Are things slowing down or getting faster? Things should always be getting faster, because every time you deliver something, you have built a building block you can step on top of and reuse.
The second is adoption spread. One of the biggest issues I've seen inside organizations is that if a team is not high performing, they struggle with adoption. You can build the absolutely perfect piece of software, the absolutely perfect system, but if your users cannot seamlessly adopt it, [00:11:00]
you have failed; it's not a success. So adoption spread and adoption speed are very, very important. Another indicator of a high-performance culture or team is the ability to understand defects:
will shipping at scale, shipping more features, introduce more defects or reduce them? If a high-performance team is building through composability, the number of defects goes down as they ship more, because the building blocks they're using are battle-tested. These are pieces of software that are already in production, that have unit tests, integration tests, and end-user testing covered.
They've lived through all the phases of testing, and you're building on top of them. The last piece, which is very critical, is developer sentiment within the team. Every individual on the team should feel empowered; every [00:12:00] individual should feel that their contribution
is visible across the organization. That's what empowers developers to become high performers themselves. And in the end, I tell organizations this all the time: the most valuable asset you have is your people. If the people feel good, feel empowered, are delivering, and have visibility into their impact,
your culture fosters a lot of high performers.
Mehmet: Right. Now the question I would ask as maybe a follow-up on this one, and I know, as you said, you use the term a bit loosely: we see a lot of companies that want to go into this and achieve it through modularity and reusability, [00:13:00]
but when it comes to execution, they kind of hit a wall. Let me ask it this way: is it because of misconceptions they have, or other reasons? These companies really wanted to adopt this way of working, but maybe they implemented it, or tried to, in the wrong way.
In a nutshell, what goes wrong? Is it misconception? Is it wrong execution? And why, in your opinion, does this wrong execution sometimes happen?
Luv: From my experience, one of the biggest reasons companies fail at adoption is that a lot of composability initiatives come from a top-down approach.
From the top, the executives, especially technical executives, will say one of our mandates is to make our organization and our systems more composable, and they [00:14:00] try to enforce it top down. It doesn't work top down, because the problem is that it's an organizational shift
that needs to happen, not just a technical shift. And systems, especially in large enterprises, are legacy systems, complex systems, and the technical people maintaining those systems are risk averse.
No one is going to rewrite a system to become composable. No one is going to change the way they work because of a top-down mandate to adopt composability. The adoptions that have been successful have been bottom up, where you
start with a single product or a single system. I'll take the example of a design system, because it's an easy one to follow. The right approach is to pick a product or a system that you use internally, where your customers are internal, and you start
adopting composability with that system, and then you make your way [00:15:00] up. You pick a product or an internal system, whether it's a design system, you build it in a composable way, you measure it, you reuse it, you track lead time, and then you build a central catalog. Once you have a central catalog, you have enabled discoverability.
That's another place where companies fail: they don't understand that for composability to succeed, discoverability is key. And the reason discoverability is key is that one of the biggest pitfalls when companies go for composable adoption is fragmentation.
There's no consistency. If you can build these highly modular components but there's no consistency in how they're built, you cannot compose them at all. And if you don't have visibility into what exists and what doesn't, if I, as an engineering leader, can't understand what capabilities exist, what they do, and refer to and discover them, it leads to duplication.
It won't be the right way [00:16:00] to go about composability. So once you have started bottom up, taken a system, built it in a composable way, measured it, and built a catalog, the third step, which is very important, is to empower ownership.
End-to-end ownership of the system has to be fully in place. Maintainers need guidelines: this module, I own it, and here is how to contribute back and how to extend it. Then you go on to enable inner source. Inner source is the last missing piece:
to truly reuse through composability, you need a culture of inner source. All the software in this world is built on the backbone of open source. Open source has done tremendous things; the reason we are here is all the success we have had from open source. The only downside of open source is that
for enterprises with closed-source software and very specific needs, open source doesn't always work. But what does work is the principles of open source applied internally.
So the systems and software you [00:17:00] build, you build the way open source is built, the inner-source way. Now you have a catalog, you have discoverability, you have ownership. And then the last piece to tie it all off is that you need to reward reuse: you celebrate teams that use shared assets,
you measure teams that build on top of existing assets, and you encourage teams to go back, maintain, and enhance those shared assets. And we can dive into more specifics of the implementation, because there are various approaches. One way companies approach it is a home-and-away model, where you split your teams into a home team and an away team.
You pick a team you want to start bottom up with, that needs to use composability, and you split them. The home team is responsible for building these reusable, highly modular systems. But again, going back to what I said earlier: you can build the most beautiful system,
but if no one [00:18:00] adopts it, or if adoption and integration are painful, you're not successful. This is where the away team comes in: the away team takes the system built by the home team, joins one of the partner teams, helps them with integration, learns from it, and contributes back to the home team. Now you have a nice system within the organization that fosters this.
Mehmet: Cool. So this also requires, if an organization doesn't have it yet, a cultural shift internally, so everyone embraces this concept. Right, Luv?
Luv: Yes,
Mehmet: absolutely. Right?
Luv: Yes.
Mehmet: Right. Now I would like to see tangible things you can share when it comes to results,
because you mentioned the collaboration, the [00:19:00] ownership, and the transparency. But when it comes to more tangible results:
have you measured, for example, how much faster a product starts to get shipped, or how much more stable the product becomes,
maybe a lower number of bugs? Maybe you can highlight some tangible metrics, also for leaders who want to adopt this and start to measure the impact of adopting it in their organizations.
Luv: Absolutely, and that's the backbone of composability: the results are very, very tangible.
One thing that is important to be clear about is that you won't get the benefit of composability from step one. It takes time to start reaping the rewards. If [00:20:00] you start building composable systems in, say, Q1, what we have noticed is that subsequent features in Q2 start taking 25% less time to build, because you are reusing what you built in Q1.
It also reduces the number of defects, because the more new code you write,
the more the probability of introducing defects increases, since the surface area increases. This also ties back to the topic of composition versus generation and AI, which we can dive into later, but that's the reason: the more code you generate and write, the more defects. So you notice defects going down by at least a third, around 33%, as you compose more. The biggest
advantage we noticed, which we weren't expecting and which was a surprise when we started measuring, is
that the power you get from adopting composability is liquidity in headcount. Headcount is [00:21:00] one of the most valuable assets companies have; it's something you'll see departments fight over. You have a limited amount of headcount and an unlimited amount of requests coming from your end users.
You have priorities shifting because of macro factors; say you're going through a time when compliance is very important and priorities need to shift accordingly. Then you ask yourself, as an organization: how quickly can you move headcount from team A to team B without impacting performance and still deliver fast?
In a non-composable organization, it's very, very hard. The reason is that teams have very different ways of working, of developing, of building software, and there's no visibility at all. If I pick a person from team A and put them in team B, the ramp-up time is so long that you don't see the benefits of headcount liquidity.
But through composability, since these are modular systems with discoverability, standards baked in, and visibility, as the demands of the business [00:22:00] change, I can pivot. The business feels it has a superpower: I can take some people from team A, move them to team B, supercharge them, deliver, and then move them back.
I can change the structure of my teams very dynamically and organically within the organization, based on how the needs change. So these are the biggest tangible benefits we have seen:
iteration cycles and delivery speed for subsequent features improving, defects going down, and this headcount mobility, the liquidity, which lets leaders reorganize according to changing needs.
Mehmet: Great. Now you mentioned AI, and of course we can't skip talking about AI. So
what role can AI play here? Is it about [00:23:00] enhancing the process itself, maybe helping suggest how a team should, I'm not sure if this is even the right word, modularize what they have in place? Or is it more about helping developers with the bits they would be using?
If you can open this up more: the role of AI in this whole concept we've been discussing, composability and modularity, if that makes sense.
Luv: Absolutely. This is one of my favorite topics to dive into, because AI is having such a big impact on the world.
I wasn't there in the early two thousands when the internet happened, and that transformed everything for technology. We are going through a similar period, so it's very exciting. If you take a step back and think about what AI is, and specifically the large language models that have become so popular:
they are [00:24:00] generating text in a probabilistic way. They're probabilistic machines: you ask them a query and they generate something for you. Developers have adopted them to speed up the development process, so you don't have to write as much software manually; you can generate a lot of it. And the adoption has been crazy. But if you take another step back and think about the direction large language models are taking development in, you start noticing a pattern that doesn't sit right. If you ask any highly technical person in an organization
whether the bottleneck for delivering software faster is writing more code, most of them will say no, it's not the bottleneck. I don't need five hands to deliver software faster; my hands are not the blocker. The blocker is being able to understand a complex problem,
put it into an architecture, and then deliver it. That's the hard part. [00:25:00] But every AI product is going down the generation route, and the generation route has a lot of pitfalls. It means every developer is generating code that looks different, and every developer is generating more code than existed before.
That means the surface area for defects increases, which will not scale. It might work for a one- or two-person startup whose bottleneck really is headcount: they don't have enough people and they want to supercharge themselves.
For large enterprises with thousands of developers, this will create a big mess. You don't want to hand AI to a thousand devs, make it the Wild West, and say, go ahead, just start generating code and contributing it back. This is where composability comes in. As champions of composability, we say you want your AI to be an orchestrator,
not a generator. Think about it: your developers have built a lot of [00:26:00] valuable intellectual property internally, and by intellectual property I mean the software, the code that's been written. When new features need to be developed, I really don't want AI to generate brand-new code.
I want it to have visibility into what exists. Again, we go back to talking about capabilities: what capability exists, what needs to be built, what is the gap, what can I reuse and build on top of? I only need to build the delta between what exists and what doesn't; I don't need to regenerate everything.
Once you start using large language models through the lens of an orchestrator, a composer, rather than a generator, you extract immense value from them. Not only are developers now shipping more high-quality, battle-tested software, but
when product managers come into meetings and start interacting with the internal large language models, the model has [00:27:00] complete context of what exists internally, what has been delivered, what can be picked up from it, and how it will impact future delivery.
That is the real power of AI, and this is where we think the future lies, especially for the adoption of AI in large enterprises.
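To illustrate the orchestrator-versus-generator idea, here is a minimal TypeScript sketch under the assumption of a simple component catalog: the planning step matches required capabilities against existing components and only leaves the unmatched delta for generation. The catalog shape and the planFeature function are hypothetical, not Hope AI's actual API.

```ts
// Hypothetical sketch of "AI as orchestrator, not generator": consult a catalog of
// existing components first, and only generate code for the missing delta.

interface CatalogComponent {
  id: string;             // e.g. "design.ui/date-picker"
  capabilities: string[]; // what the component can do
  owner: string;          // owning team, for discoverability and contribution
}

interface Plan {
  reuse: CatalogComponent[]; // battle-tested components to compose
  buildNew: string[];        // only the capabilities with no existing match
}

function planFeature(requiredCapabilities: string[], catalog: CatalogComponent[]): Plan {
  const reuse: CatalogComponent[] = [];
  const buildNew: string[] = [];
  for (const capability of requiredCapabilities) {
    const match = catalog.find((c) => c.capabilities.includes(capability));
    if (match && !reuse.includes(match)) reuse.push(match);
    if (!match) buildNew.push(capability);
  }
  return { reuse, buildNew };
}

// Usage: the "generator" is only invoked for plan.buildNew, keeping the new-code
// surface area (and therefore the defect surface area) as small as possible.
const plan = planFeature(
  ["date-selection", "spend-chart", "export-csv"],
  [
    { id: "design.ui/date-picker", capabilities: ["date-selection"], owner: "design-system" },
    { id: "finance.charts/spend-chart", capabilities: ["spend-chart"], owner: "finance-platform" },
  ],
);
console.log(plan); // reuse: two existing components; buildNew: ["export-csv"]
```

The point of the sketch is the ordering: reuse is decided before any new code is generated, which is what keeps the defect surface small as delivery scales.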
Mehmet: Cool. I like what you said: we don't need to generate more code. And back to your point about open source, I think this is a cumulative heritage
of all the work that was done before the age of AI, all the people who created these repositories, [00:28:00] frameworks, templates, and so on. But still, Luv, and maybe I'm just wondering and seeking answers from you,
this maybe applies more to new teams or startups than to bigger companies. It's better to do it early: if they can adopt modularity and composability early, it's better for them, because it will help them scale better in the future,
and with performance, as you said. But maybe they would also need some
best practices around choosing the right technology, maybe the right platform. Do you see AI also acting as a smart consultant, or a kind of architect, that tells them, for this use case it's better to use this framework or that framework?
I'm going into the nitty-gritty a little bit [00:29:00] here, because expectations from AI are very high. Are we there yet? Do you think AI will also be able to advise on which technology fits us better, what kind of architecture we should use, which languages, which kind of databases will fit us, for example?
What's your take?
Luv: Absolutely, I think we are getting there; we are close. Especially if you look at the graph of how quickly AI is improving, it is absolutely mind-blowing to see how fast the changes are. So I think AI can absolutely play the role of a consultant and an architect
and do the guidance part. The reason is that it's trained on a lot of good work. Like you said, the heritage of open source is one of the primary sources AI has been trained on. So if you are starting fresh, if you're a small company trying to understand which direction to take for a given problem you're solving, AI will do a good enough job of guiding you.
[00:30:00] Where it gets even more useful is for larger enterprises that have already taken a direction, already picked the framework and the language. What differs between a single-person startup or an indie developer and a large enterprise is that
you don't just have the open standards; you have internal standards that make sense for you to scale. A 500- to 1,000-person development team can't just take an open source framework and expect everything to scale and work cohesively. They have developed internal standards so they can contribute and develop consistently, at scale.
This is where AI can play a very powerful role: if it understands your internal capabilities and your internal development standards, and it can guide you in the right direction to keep those standards and improve on them [00:31:00] as you scale, then you're reaping a lot of benefit from AI.
Mehmet: Right, Luv. I noticed on your website you have a platform you call Hope AI that does exactly what I was asking you about. So tell me a little bit more about this AI component you have at Bit.
Luv: This is one of the most exciting things I've ever worked on.
Hope AI is one of the world's first composable AI products. It plays the role of an architect instead of a code generator. It is not a code monkey; its job is not to write a lot of code, it's to write intelligent code for you and guide you toward the right architecture.
One of the biggest differences between Hope and other large language models, or Copilot and everything else, is that
Hope has the context of your organization's entire codebase, not just one codebase. We call it a [00:32:00] dependency graph. When you ask it something, it has an internal dependency graph that it looks up.
So as an organization that has adopted Hope, if you ask it to build a feature, or you give it a screenshot, its first instinct is to say: I'm going to take a step back and look at the dependency graph of what the organization has built. And our dependency graph is very intelligent:
it is structured so it knows which teams exist, what they're responsible for, which components they own and maintain, and what the API of each component is, so it can extend and use them. All of this information is baked into Hope AI. So when you ask Hope AI something, its first instinct is to take a step back,
look at the dependency graph, and propose an architecture. This is the difference: when I ask a typical large language model something right now, it just gives me an output, and then I have to go back and tell it this is not what I wanted, or that it's not correct, or give it more context. Hope says: [00:33:00] I'm not going to do anything yet.
I'm going to make a proposal, like an architect does. It comes back with: these are the components I think you need to build, these are the components I think we can reuse, these are the components I think we need to enhance, and this is how. It gives you this architectural plan first and asks you to review it.
You review it yourself, and when it looks good, you say: start. Then, for each component, it gives you the option to modify the individual prompts, so you get fine-grained control: first over the architecture that gets proposed and reviewed, then over how each piece of that architecture will be generated when it's implemented.
Then Hope starts generating the complete architecture. And Hope is capable of building more than just the front end; it builds end-to-end systems completely, whether that's databases or APIs, and it will deploy them for you. You can review it with your teams; there's a whole review process you can go through. And Hope shares the same workspace as you:
if [00:34:00] the agent is working and it makes a change, you can go into the same workspace, make a change of your own, and the agent picks it up. Both of you are working as collaborators in the same workspace, so your code changes are reflected immediately in what the agent is working on. That's what shines,
that's what is very unique about Hope AI: it's one of the world's first composable AI architects, one that works through composability itself.
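As a rough illustration of the kind of organization-wide dependency graph Luv describes, here is a hypothetical TypeScript sketch: components with owners and dependencies, plus a traversal that answers "who is affected if this component changes?". The data shapes and the findDependents function are assumptions for illustration, not Hope AI's internal model.

```ts
// Hypothetical sketch of an organization-wide dependency graph: components,
// their owning teams, and who depends on whom.

interface ComponentNode {
  id: string;          // e.g. "design.ui/button"
  owner: string;       // owning team
  dependsOn: string[]; // component ids this one builds on
}

type DependencyGraph = Map<string, ComponentNode>;

// Walk the graph upward: which components (and teams) are affected if `id` changes?
function findDependents(graph: DependencyGraph, id: string): ComponentNode[] {
  const affected: ComponentNode[] = [];
  const visited = new Set<string>([id]);
  let frontier = [id];
  while (frontier.length > 0) {
    const next: string[] = [];
    for (const node of graph.values()) {
      if (node.dependsOn.some((dep) => frontier.includes(dep)) && !visited.has(node.id)) {
        visited.add(node.id);
        affected.push(node);
        next.push(node.id);
      }
    }
    frontier = next;
  }
  return affected;
}

// Usage: a proposal that enhances "design.ui/button" can immediately list the
// components and owning teams that should review the change.
const graph: DependencyGraph = new Map([
  ["design.ui/button", { id: "design.ui/button", owner: "design-system", dependsOn: [] }],
  ["finance.ui/invoice-form", { id: "finance.ui/invoice-form", owner: "finance", dependsOn: ["design.ui/button"] }],
  ["hr.ui/leave-form", { id: "hr.ui/leave-form", owner: "hr", dependsOn: ["design.ui/button", "finance.ui/invoice-form"] }],
]);
console.log(findDependents(graph, "design.ui/button").map((n) => `${n.id} (${n.owner})`));
// → ["finance.ui/invoice-form (finance)", "hr.ui/leave-form (hr)"]
```

With ownership attached to every node, an architecture proposal can name the teams affected by a change, which is the discoverability-plus-ownership combination discussed earlier in the episode.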
Mehmet: Very cool. As I told you, it caught my eye when I was reviewing the website, and I decided to bring it into the conversation here.
Now, how much success have you seen in the enterprise? Is there a success ratio? Is this something that always succeeds with enterprises when they go for composable systems nowadays?
Luv: Absolutely. The success has been very transparent. [00:35:00] With our largest customers at Bit, we actually do ROI studies,
so we look at the tangible benefits they have gotten at the end. It's not just about adopting the next fad or craze in the industry; you need to see actual, tangible benefits. Our biggest customers have saved eight-figure amounts of money.
These are teams with thousands of developers building thousands and tens of thousands of components over the course of the last three to four years. They've taken a step back, done the ROI study with us, and put a value on it, and it's in eight figures of money and time saved. And that's just the money saved; it doesn't even account for the quality of the output they've gotten.
These are companies that started with us with a team of 10 developers building hundreds of components, and four years later they have thousands of developers building tens of thousands of components with us on the same platform. That speaks volumes about the value [00:36:00] they've gotten.
Mehmet: Yeah, I'm a numbers-driven guy, so I like to hear these stories and the way you measure the value proposition, I would say. Now, moving ahead,
Luv, how do you envision the future of platform development? We talked about AI, and there's a lot going on around AI as well. Of course no one has a crystal ball, I know, but with everything we've seen in the past three years, say, since, of course, LLMs have been around longer, but they started to explode in 2022,
where do you think we are heading, at least in the near future? I won't ask you about ten years or the long term.
Luv: I think, [00:37:00] and this is the best guess I can make from the facts in front of us, that we are speedrunning toward the democratization of software.
If you look at ten years ago, there was a new trend that picked up, which I saw especially when I was part of these enterprises leading platform teams: no-code tools and citizen developers. This was the biggest thing, because large enterprises have a lot of non-technical users
who work with internal systems they want to control or modify. My experience has mostly been with large financial institutions; I worked with a large pension firm that manages hundreds of billions of dollars in money movement a year. We had portfolio managers, and I'll take them as the example: highly intelligent people dealing with billions of dollars through the software.
We were building internal custom trading solutions for them. At some point, they understand their domain so well that they want to take control of a piece of the software stack and make changes themselves. So, as part of [00:38:00] platform leadership, we started developing a citizen development platform to enable them: built-in data pipelines giving them the data they need, built-in UIs letting them build their own UIs, giving them the freedom to express themselves and build things that
are useful to them. That scaled for a while, but it hit limits, because again, you're dealing with non-technical users. What I see happening now is that citizen developers are going to be able to contribute 80% of what current technical developers can, and they will feel that sense of freedom.
Non-technical users within large enterprises will be able to control the software they're using, which was built by technical people, contribute back to it, and make a real, tangible change to how it affects their own workflow, because large language models are going to make software so much more accessible to non-technical users.
That's the next [00:39:00] big phase we will see: the empowerment of citizen developers, from being just citizen developers to being actual developers working on the technical systems they use on an end-to-end basis.
Mehmet: But we will always need to have the engineers, right?
Luv: Absolutely, that is not going away. Every time someone tells me AI will take all software jobs, I ask them: is there a limited pool of work that needs to be done? If you work in a large enterprise, there is always more work than you can do, absolutely more.
The reason you don't work on the other stuff is that you don't have enough people, so you don't focus on those problems. What AI will do is free highly technical people to work on more complex problems at enterprise scale. The problems won't run out;
they'll just get more complex and more technical, and they will become achievable where they weren't before, because building complex software takes time and cost. If you reduce time and cost, that doesn't mean the developer goes away. It just means more work, and more complex [00:40:00] work, can get done.
Mehmet: Right. Yeah, that's my opinion as well, because there is something people forget about developers, I mean engineers, everyone, though I'm biased because I'm an engineer too: the creativity part. AI can think with you as a copilot, as you described it, and we know this, but AI will not go and find new ideas to work on,
or maybe it will not be able to be as creative as us. I'm not sure if that will change in the future, but at least in my interactions with
development leads, CTOs, everyone I've spoken with in the domain, even the people most into AI, who will tell you AI [00:41:00] is going to be everywhere,
they agree with me that AI is going to be a copilot, but we still need the human brain and mind to get new ideas and find new problems to solve. And as you said, we can always find work to do; AI will just make it easier, and maybe we can give AI the
less demanding, repetitive tasks to do for us. So, absolutely, I agree with you, Luv. Now, as we are almost at the end: any final thoughts you want to share, Luv, and where can people find out more and maybe get in touch with you?
Luv: Absolutely. The last thing I would say is that everyone should be excited about the time we're going through with AI and large language models.
This is one of the most exciting times humanity can go through, because of the impact it's having, and I'm looking forward to it. I'm looking forward to seeing what teams and companies build with this leverage. To reach out to us: bit.dev is our open source website, and Bit Cloud is our enterprise [00:42:00] cloud, where I work, where we build these composable systems, and where Hope AI lives.
You can reach out directly, check out what we've built, and see if it works for you. If you want to reach out to me directly, you can email me at luv@bit.dev or connect with me on LinkedIn. I'm always happy to have a conversation about composability, LLMs, or anything else. And again, thank you so much, Mehmet, for having me.
It was a great conversation; I had a great time.
Mehmet: It was my pleasure, Luv. And for the audience, I will put the links Luv just mentioned in the show notes, so you can find everything: the website, his LinkedIn profile, and the email, of course. And again, I want to thank you a lot,
Luv, for today. I also learned a lot about composability and reusability, and how organizations small and big can leverage this. As you mentioned, you gave us the ROI and how much can be saved, and I love to hear these stories, so thank you for sharing that with us.
And of course, I'm a big fan of technology, I'm an engineer, [00:43:00] so hearing from experts like yourself is always a delightful thing for me. And for the audience again: if you just discovered us now, thank you for passing by. I hope you enjoyed it, and if you did, please do me a favor, subscribe, and share this podcast with your friends and colleagues.
We're trying to make an impact, and the more people we reach, the more impact we can leave. And if you are one of the people who keep coming back, again, thank you very much for your support. Thank you for the feedback you send me; I read every single email you send.
So thank you very much for the encouragement, and thank you for keeping the CTO Show with Mehmet charting. I've been repeating this since the beginning of 2025: we've been charting in the top 200 in multiple countries this year, and this could not happen without you.
So thank you very much, and as I always say, stay tuned for a new episode very soon. Thank you. Bye-bye.

