Dec. 25, 2023

#277 AI Disruption: William (Bill) Raduchel Reflects on Technology's Transformative Journey - A Review of 2023 and Beyond

Join me as I sit down with the ever-insightful Bill Raduchel, who makes a triumphant return to the show to reflect on the astonishing developments in generative AI over the past year 2023. We reminisce about technology's historic milestones and draw parallels with the current AI revolution, pondering whether society can keep pace with such rapid advancements. Bill shares his wisdom on the AI landscape, discussing potential legal skirmishes over training data, copyright intricacies, and the rocky road to AI regulation. Our dialogue ventures into the effects of AI 'hallucinations' and the trust we place in these burgeoning technologies.

 

The conversation then shifts to the integral role third-party developers play in the technology ecosystem, harking back to Sun's success with its Solaris environment and paralleling it with OpenAI's strategic moves in the AI arena. We delve into OpenAI's organizational shifts and what they could mean for the future of AI development, and speculate on why tech behemoths like Apple and Google appear subdued in the AI uprising. Wrapping up this segment, we touch on the bold tactics employed by emerging AI entities, which may invite legal challenges that more conservative companies may shy away from.

 

Lastly, listen in as we discuss the profound influence of technology on society, tracing the lineage from the groundbreaking Lotus 1-2-3 to the potential of AI applications that lie ahead. I share a captivating tale of how a narrative video transformed the perception of spreadsheets and consider the future, where storytelling remains crucial for founders engaging with investors, employees, and customers. We round off with a discussion on the intriguing possibilities for mergers and acquisitions in the AI space, questioning the true value in a domain so deeply intertwined with open-source contributions and the critical importance of human expertise.

 

Check Bill's last book here:

https://amplifypublishinggroup.com/product/nonfiction/industries/technology/the-bleeding-edge

 

Transcript


0:00:01 - Mehmet
Hello and welcome back to a new episode of The CTO Show with Mehmet. Today I'm bringing back my guest, Bill Raduchel, who appeared on the show before. As it's the end of the year, what I'm doing is bringing back some of my guests to talk about the year in review and what happened, and because Bill also has a new book. So, Bill, thank you very much for being back. 

Thank you for having me. My pleasure, as always. So tell me, Bill, how has this year been so far? We are almost done. 

0:00:36 - Bill
Well, exciting. I mean, the advantage of being old is you've got history, and, you know, watching the rollout of generative AI is just like... I remember going to the then CES show, I guess it was Comdex, where Microsoft showed Windows 95 for the first time. And you saw everybody just, you know, ooh, ah, where it's going. And to me it was old hat, because I'd been at Xerox, and Xerox, of course, really invented that technology. But for most people it was just mind blowing. And, you know, that's where we are. 

I mean, I gave a talk to a professional group I belong to, not developers, in February about generative AI and where it's going. And, you know, they looked at me like I really had three heads. I think they all now are overestimating the impact. But I mean, we've just seen a sea change in the world. But every abstraction-layer change is a sea change. It changes everything. 

I mean, if you think about it, probably every application in the world gets rewritten in the next five years, because you've got a new abstraction layer. And, you know, the last time we saw this was 11 or 12 years ago with the iPhone and apps, and that became the abstraction layer through which most people talk to the technology that runs our lives. So you've got a new thing coming. And, you see, the laws of physics matter a lot to Nvidia in making its chips increasingly faster for doing the tasks that AI needs done. Again, you know, massive changes in terms of how the world works. I mean, the next three, four years are going to be exciting, and there are going to be winners and losers, and you're probably going to predict them wrong if you try to do it now. 

0:03:11 - Mehmet
Yeah, that's fascinating. But are we going too fast, Bill, do you think, with this AI? Because, you know, when I started the show this year, and I compare what we talked about at the beginning of the year, now we are almost at the end, a lot of things changed. Is this something healthy? Because you've also seen major changes; you talked about smartphones before, and about the GUI and all this. So how do you compare the speed of the change that's happening now? 

0:03:41 - Bill
Well, the speed of change, you've got, you know, sort of multiple things going on at once, which is going to guarantee high speed. One, we're changing the chips, and we're making the chips better and faster, so that is going to be a fundamental driver; they're also going to be cheaper. The second thing is the underlying technology. The transformer technology that's being used is getting better, and there are better versions of that. So now you've got, you know, a double effect. And then the third, although this is going to end up being quite contentious, is going to be the availability of training materials. Without the success of Google search, you could never have done what OpenAI did, because that made most of the knowledge of the world easily accessible for training. Now, there will be litigation over whether or not you can do that. 

Japan said yes, right? I understand the Japanese government has said yes, it's fair use; we want the gains, and it benefits society, so no, you can't use copyright as a reason to stop AI from being trained. I'm pretty much a copyright expert in the US, and I don't think the law here is going to sort out that way. But Congress can change that; it's just a law. So is it too fast? Maybe. I mean, I'm an economist by training, and economists used to have a concept called absorptive capacity, which is how much change an economy could absorb in a year. Society has an absorptive capacity too. We may be exceeding it. I was talking to a very senior journalist in the United Kingdom a few years ago, going through the changes that were coming, and he stopped me and he looked at me and he said, you know, Bill, this means revolution. So, I mean, it's going to be hard. What I don't understand is "regulate AI." People say regulate AI. 

0:05:53 - Mehmet
What do? 

0:05:53 - Bill
they mean? How do you regulate AI? I don't, you know, I have no idea how you regulate it. I don't have any idea how you regulate software. We don't regulate developers, so it starts there. Anybody can write software. If you go to the ChatGPT website and start building apps on top of it, you don't have to have the skills to go work at DeepMind; you've got to be skilled in your own art. So I have no idea how you regulate it. And I went to a presentation on the executive order that President Biden in the United States is trumpeting, and it's all a great industrial policy, and it matches up with their political goals, but how you make it happen, I don't know. How do you guarantee a quality of outcome with AI? I mean, it's in nice words, but nobody knows how to do it. 

I mean, again, the thing that I think should scare us, to your point, is never have we depended so much on a technology where we routinely talk about the fact that it hallucinates. I mean, you know, if I said, look, here's a self-driving car, the only problem is occasionally it hallucinates and drives you to places you hadn't thought of, you're never gonna get into it. But AI does hallucinate, and it's inherent in the design. I mean, it's a huge compression technology, and when it decompresses, sometimes it doesn't have enough information, so it makes it up. 
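Bill's compression analogy can be sketched in a few lines of Python. This is purely an illustrative toy, not how any real LLM works, and the `lossy_compress`/`decompress` functions are invented for this example: once fine detail is thrown away, the decoder has to fill the gap with a plausible guess, which is the sense in which a lossy system "makes it up."

```python
# Toy illustration of the lossy-compression analogy for hallucination.
# A lossy compressor discards detail; the decompressor can only emit a
# plausible reconstruction, never the exact original.

def lossy_compress(values, step):
    """Quantize each value to a coarse grid, discarding fine detail."""
    return [round(v / step) for v in values]

def decompress(codes, step):
    """Reconstruct: the decoder can only guess one plausible value per code."""
    return [c * step for c in codes]

original = [3.14159, 2.71828, 1.41421]
codes = lossy_compress(original, step=0.5)
reconstructed = decompress(codes, step=0.5)

# The reconstruction is plausible but wrong in the details:
# 3.14159 -> 3.0, 2.71828 -> 2.5, 1.41421 -> 1.5
errors = [abs(a - b) for a, b in zip(original, reconstructed)]
```

The reconstruction errors never reach zero; in the analogy, that irreducible gap between what was stored and what is emitted is where a model "fills in" content that was never there.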

Yeah, I mean, we're certainly outrunning people's ability to understand knowledge and where it goes. And the long-term consequence, I think, is that, you know, I worry we are turning our children into answer seekers and not problem solvers. And I know one thing for certain about the future of the world: we're gonna have problems to solve that are gonna be new and novel, and we're not gonna be able to go back and look at history to say, well, here's how they solved it. That isn't gonna be our fortune, and so we need problem solvers, we don't need answer seekers. And AI encourages people to become answer seekers, much as Google search has. I mean, kids today, you know, they go to YouTube, which is, of course, also Google, and they look for the answer, and if they can't find it, they don't know what to do. 

0:08:41 - Mehmet
Yeah, I was just reading a tweet earlier today, and someone was saying that his behavior changed from Googling something first to, I don't know if there is a verb for it yet, AI-ing it or ChatGPT-ing it. So yeah, this is the trend. 

0:09:02 - Bill
It's far more than half of the things I used to go to Google to search for. Now I use the app Poe, which gives me access to multiple LLMs, and I go there and that's where I ask, you know, unless it's really current. 

0:09:18 - Mehmet
Yeah, 100%. In your opinion, because, you know, you've seen before people who were left behind, people who didn't jump on a new revolution, let's call it: do you think OpenAI is the only company that won this AI revolution, at least for this year, and the others were left behind? Or is it not fair to say this, and there were some other players there who, next year, will maybe be even better than OpenAI? 

0:09:55 - Bill
Well, yeah, Nvidia has clearly won with CUDA. Right? The lock-in with Nvidia is its software, not its chips. Whether OpenAI will win or not, I think, comes down to its app store, just as, in the end, whether iPhone won or Android won came down to the app store, and where third-party developers will put their brains and their money. That's gonna be the attraction. I think OpenAI has not helped itself with stability. I mean, I think people are gonna be worried about investing there. And if Google does a great job with Gemini and can catch up on the app store idea, they're gonna be a competitor, but so will Meta, so will Cohere, and Anthropic, I think. 

Now it's a question of where do you get third-party developers. I mean, again, you know my history: I was at Sun, and the reason that Sun won so much was because we were able to attract third-party developers. The third-party developers wanted to develop on Sun, and the programmers wanted to develop on Sun. Okay, and you go hire them and say, well, no, we're gonna use Apollo or HP, and the developer goes, no, I don't wanna work on that, I wanna work on Solaris, because I see that my job value is higher if I understand Solaris than if I understand HP or Apollo. And so that's gonna be it. 

In the end, it's all a talent war. The next decade, the next two decades, the next century is all about talent. There isn't enough of it, and it's gonna be: how do you attract and hold and get these people on board? And that, I think, is where ChatGPT got an early lead. But the cynics would say that Sam Altman figured out that Google was gonna beat him, and the only way to win was to go early and try to preempt Google. And that was his strategy. It was a good one. He worked it and he played it well, and he got mindshare very early at a very low cost. But it's developers. I mean, if there are no apps on your iPhone... I gave my step-granddaughter a new iPad, which I thought was a really nice present. She thought it was useless, because when she took it out of the box it didn't have her games. People are pretty simple. If there's things there I can use, it's good. If there isn't, it's a nice, expensive something or other, but it's not useful for me. 

0:13:16 - Mehmet
Yeah. Speaking of Sam Altman, Bill, and this is one of the things that we also saw: is what happened behind this story, in the age of AI, a CEO being fired, rehired and all this, something which will be kind of the norm in an age where things change very fast? 

0:13:39 - Bill
Well, I mean, he has been rehired. There is a new board. The conflict was apparently with one of the directors, who's now gone. Adam D'Angelo is still there as a director. Bret Taylor is well known. Larry Summers is well known; I don't think Larry would lay any claim to being an AI expert. And it's got this odd structure, that it's a not-for-profit, but it owns a profit-making subsidiary into which it's taken a lot of money. 

I mean, come on, you know? I mean, that's all gonna make life bizarre, and that instability, I think, will hurt it. Because, again, you need people to bet their lives, their fortunes, their careers on you and start writing apps that use you, and you've got to convince people that that's a good decision, and if people think it is unstable and not guaranteed, they won't. I mean, attracting third-party investment is the key to success for any platform. Otherwise, you're not a platform, you're an app, and none of these people want to be an app; they want to be a platform. That means you have to attract third-party developers, and getting third-party developers on board is, you know, sort of like a mass courting system. Some companies do it well; some companies never figure it out. 

0:15:27 - Mehmet
Yeah, but Bill, one of the questions that also came up this year with AI is: where are the others? There are some names that people expected to be very active in AI, but we never heard anything, like, for example, Apple. Google, of course, did the counter-attack, I would call it, but still, with the demo that they did last week, people started to say they fabricated some of the scenes, and they pushed back on them. So why didn't we really see any of them? Okay, Elon Musk, you know, with Grok, but I didn't test it myself, so I cannot judge how good or bad it is. But still, we didn't see really a real, I would say, rival to what OpenAI has done. Why do you think that is the case? 

And you can disagree, of course, on that. 

0:16:25 - Bill
OpenAI didn't ask permission. It assumed it had the right to do that training, and I think companies that had huge market caps were much more reluctant to take that risk, because they're much more attractive targets for litigation, and they may have been waiting to get that sorted out. Apple, you know, Apple has a business model which requires you to have a need for an awful lot of computing power in your hand. Now, you don't need that for a modern smartphone. With 5G technology, you could make it a thin client and have everything running on a server someplace, and the phone could basically last for a decade, because it would be a keyboard, a mic, a camera and a screen. That's not good for Apple's business model. Apple has to figure out reasons why you want to carry information and computing in your hand, and so I don't think they are going to be talking about AI until they can figure out how to do what Google called Nano, which is, how do you build a model which can run on the phone, and then, ideally, that makes you buy a $2,000 phone instead of a $1,000 phone. So I think that explains Apple, and their ability to be secret is legendary and very cultural. Google, I think they rested on their laurels. They invented this stuff. This is all DeepMind. So maybe they just didn't worry about it. 

And Sergey Brin has apparently come back and said, yeah, I know I don't have a role, but I have the votes, and so he's playing a part, and he's important. I mean, again, one of the themes of the book I wrote is that the laws of physics matter a lot, but so do the human beings who figure out what they mean, and Sergey is certainly one of those people, and Sam Altman is another. So it's not... I mean, look, CP/M was the operating system of choice for PCs at one point in time, and people saw it being a huge empire, and it's gone. 

So it's early. It's early, I mean, I don't know. The claims around Gemini, I'm sure they're valid, but I also think they're minor. I think they packaged it up really nicely, and maybe gave a little bit of a false impression and took out some time, but the ability to do what it was doing is impressive, and the multimodal capabilities are how human beings want to interact. So now, with Gemini, you could see having a complete conversation with it, whereas with ChatGPT you can't, because you can't use images, you can't use sounds. 

0:19:56 - Mehmet
Yeah, and I think they integrated it somehow with their Bard. So Bard is powered by Gemini now, and I saw some improvements there. Now, regarding the book, because, you know, last time when you were with me, you also had your previous book, but now the book title itself is exciting: The Bleeding Edge. So tell me a little bit more about the themes in the book and the main topics you have discussed. 

0:20:27 - Bill
Well, I mean, a good friend of mine was the Dean of the Business School at Georgetown, and he asked me one day if I would do a course that would teach students basically war stories from my career, and that's what started the thinking that went into this. I just lived on the bleeding edge of information technology for decades, I mean starting in the early 70s, so 50 years, getting onto 60, of living there, and I was in most of the conversations. Along the way I met the people whom we all recognize. I worked with them at some places, I did deals with them, and I try to take away the lessons. And, you know, again, my three basic lessons are: one, the laws of physics matter, and if you're not changing the laws of physics, it's probably not going to be material. Secondly, it takes individuals who understand both how the technology works and what it means to go and drive the change. And third, it still comes down to relationships, and getting things done takes relationships, because it takes trust. It gets back to the thing: if you want people to invest in you as opposed to something else, they need to trust you, and I think the biggest weakness of AI is going to be building trust. I don't know how you build trust with an AI. I mean, we have hundreds of mechanisms that we've evolved over millions of years for whether or not I can trust you. AI lies with abandon, and, you know, it doesn't twitch when it's lying. Maybe we can figure out a way to guarantee that, but I don't know how. Software is software. So what I try to do is go through my time and use the examples of my life to explore how change happens and what it takes to make it happen. 

I mean, if you look at the smartphone, the essence of the smartphone is the realization that with a 700 megahertz processor, which just became available with the iPhone, voice could just be an application on the device. If you ever tried to build an application on slower processors... you know, the Nokia N95 was one of the best devices ever made, an incredible engineering feat, but every app that was written had to be voice aware. Every app had to understand that at any point along the way I could get a phone call and get interrupted, and the app had to handle that. That meant you had to be really smart; development was really hard, and it could easily break. You make voice an application, that's a problem for the operating system, and suddenly you don't have to worry about it. 

And that's what made Android and iPhone such a dramatic change: the app does not have to be voice aware. And, you know, ChatGPT, Gemini, it's very much the same: they go and solve so much in the background, so you don't have to worry about it. Well, that lets you innovate. That's why you're going to see the pace of innovation go up, because you don't have to do it all yourself. I mean, I wrote a piece the other day, and I actually used Claude from Anthropic, and I was able to write it in an hour instead of a week, because the basic research that I needed it did for me in seconds. So, I mean, the pace of innovation goes up. 

0:24:39 - Mehmet
Yeah, absolutely. And, you know, to your point about saving time: when OpenAI, before the event that happened with the firing and rehiring, announced GPTs, so you can customize your own GPT, I tried to create a GPT that does market research. I give it the topic, it goes and finds some numbers for me, and, you know, it's mind blowing. I showed it to someone who does this as his day-to-day job, and he said, wow, this can eliminate the need for me in the future. So yeah. 

0:25:12 - Bill
Well, the other one, I mean, you know, I think the pay has come down, but there's this new job category called prompt engineer, right? Six months ago, prompt engineers were getting up to $500,000 a year in salary, because knowing how to use these LLMs is suddenly a new and very valuable skill: knowing which LLM to use, how to ask the question. Because what's clear is that, even with ChatGPT, the way you ask the question affects the answer a lot. 

0:25:44 - Mehmet
A lot, true, 100%. Now, in the book, you mentioned a couple of names, big names, right? So the question that sometimes comes up when we discuss this is: why are we not seeing in the new generation, I don't know if the word is right, the same charisma, if you want, the same skill sets that we used to see in the generation of Steve Jobs, or, not very long back, Larry Ellison? For some people, the new generation is not up to the same level, where you'd say, oh, he's like Steve Jobs, he's like Bill Gates, he's like someone else. So is there, in the new generation, a lack of, let's call it, this charisma thing, in your opinion? 

0:26:53 - Bill
Yeah, I mean, the key skill... I mean, if you've watched the movie Oppenheimer, I think the general theme of the book and the movie is that only somebody with great organizational skills who understood the laws of physics could have done that job. And I don't think Bill Gates is any different. I don't think Steve Jobs is any different. I don't think Eric Schmidt is any different. I don't think Larry Ellison is any different. They're not necessarily the nicest, most friendly, even the most charismatic individuals, but people trusted them because they believed they knew where they were going. And, you know, I certainly see that in Steve Case, whom I worked for. Steve was not a great engineer, he wouldn't claim to be, but what he was able to do was perceive what human beings wanted to do with tech, and then package it up and keep it there. That was also Steve Jobs, and that was his strength. I mean, you know, the reason that AOL won was because of Steve Case. 

And Steve Case was given a demonstration of Windows 95 in 1995. And he looked at it and he said, boy, a lot of people are going to buy PCs this Christmas, and they're going to want to do something with them. They turn one on on Christmas Day, and they're not just going to sit there and say, gee, I got this big, expensive consumer of electricity. They're going to want to do something. And the obvious thing they could do was go online. And so he told his team: go out and buy up every inbound modem line. Now, the technology at the time was such that you had to have a number to call to get online, and that number had to exist somewhere on a telephone switch, the 5ESS, and then that would connect back to AOL. So AOL went out and bought them. In the fall of 95, they went out and bought up every modem line they could get. So Christmas came, and people did indeed do exactly what Steve thought they were going to do, which is: oh, I got this big new computer, and what do I do with it? Well, I guess I go online. And AOL had paid to be on the home screen of Windows, so they could get a line, they could connect. Competitors went out to buy modem lines, and there weren't any, and you can't create a modem line overnight. You've got to have a computer, a data center, power lines, physical wires. It took months, years, to go build up that capacity. And it's that instinct that mattered. 

Those are the people who changed it, and they haven't emerged yet. I mean, we've been in a very stable time. My God, it's 10 years, basically, where technology has not changed that much, right? I mean, I remember when people introduced Lotus 1-2-3 and VisiCalc and spreadsheets. We think of spreadsheets today like we think of toothpicks, but I tell you, when they came out, they weren't the obvious solution to world hunger. And yet today we couldn't think of living without them. 

In fact, I had a friend at McGraw Hill who invested heavily in building a course to teach people how to use spreadsheets. It was a complete commercial failure, and nobody liked it. It was a great course on Lotus 1-2-3, but nobody liked it. They gave it a score of about one and a half out of five. He was a smart man, so he went back and had a 30-minute video made on a VCR, and the course then went back out, and before the course started you played a 30-minute video that had Tom and Sally competing for a job. Sally used the spreadsheet, Tom did not. Sally got promoted, Tom did not. After that, the ratings went to 4.5 out of 5. Nothing in the course changed. Nothing, not one piece of the course was changed. 

Suddenly people go, oh, now I know why I should understand that. And we may be too early to see the next generation of people who see it. Money from AI is gonna be made in applying AI, except for a handful, Nvidia in particular. That's where the money is gonna be, and in the power plants, because AI takes an enormous amount of electric power. I mean, it's still remarkable that we're talking about that, but the brain uses about as much power as a light bulb. So we're still way ahead as humans. We don't need to have a new data center. 

I mean, you see people looking at how do I build my own nuclear reactor now, with Microsoft saying, well, to build these data centers, we really can't afford to buy utility power; we've gotta figure out how to build our own nuclear generators. I mean, if that's where we're gonna end up, that's gonna be an interesting situation. I'm sure they will do it differently than the utilities do it. But the latest effort at these small modular reactors seems to be about to fail. 

But, you know, I don't know. I mean, the people who drove change understood both. They understood people and they understood tech, and they could do all three. I mean, the tough job for a founder, I have always said, is they need to have three messages: one to investors, one to employees and one to customers. And almost anyone can do that, but they have to be consistent, and the problem is that most can't figure out a way to make those three narratives all conform. The genius of being a founder is that you figure out a way to have a narrative that investors, employees and customers all accept and like. 

0:33:39 - Mehmet
Yeah, yeah, this is great. I never really thought about this messaging thing that way, and I've seen a lot of people who mix things up and then get in trouble. You mentioned, and I think we discussed it last time, but now with the AI that actually dominated the whole year, and you mentioned, even when we started, who got access to the developers, who got access to the skilled people: now are we going to see more, I would say, mergers between big players and small players? Or would it be the big players maybe trying to acquire each other? Because at some stage, back to the story of OpenAI, people thought that Microsoft would buy OpenAI, but it didn't happen. So where are things moving, do you think? 

0:34:41 - Bill
You can't. I mean, what are you buying? The core software behind GPT is open source; the training data is out there. So you're buying essentially a packaged-up amount of money, tens of millions, maybe even hundreds of millions, in training data, and you're buying people. Well, none of those are easily packaged up into a transaction, so I don't know what you buy when you buy OpenAI. It's all people, I mean. 

I think one of the reasons why the blockchain has never taken off is that the number of people in the world who understand how to do it and make it is very small, and it reminds me of the Macintosh. The Mac never got a huge set of apps, and when I was at Xerox, we hired a consultant who came in, and we had dinner one night, and he said, you realize that all these apps were written by one of 50 people. The genius of the first Mac was the ROM, the read-only memory. It was brilliant, but it was so complicated most individuals couldn't handle it. Windows wasn't nearly as elegant a technology, but it was written so that mortals could use it, and the rest is history, right? The Mac never took off the same way, and I think the problem with the blockchain is that it's so complicated that you just don't have enough developers out there. AI, I think, has a much bigger pool. AI is closer to Windows than the Mac, not necessarily in how to do the training, but in how to use it. I think there are gonna be a lot of people who become skilled practitioners, and I think that will matter a lot. But, again, you can teach yourself a lot of this if you're reasonably smart and have knowledge of where to go. 

You can't easily teach yourself blockchain, and I always saw that as the limit, when somebody told me that there were only 500 people in the world who could do this. In the 90s, when I was at Sun, we figured that there were roughly 800 people in the world, we did a survey, who could actually work on the kernel of Unix. That was it. And 500 of them worked for Sun. And why did we win? Well, okay, you don't need any more data. Nobody else really could compete. 

When we were designing our own microprocessors, we would go out to recruit and come back with a list of 25 people in the world we thought could do the job. 25. I mean, the pools of talent are incredibly narrow, like the number of people who could really figure out how to do a transformer. And, I mean, when you look at these things, a transformer is just an algorithm with a trillion parameters. Now, scale matters; a trillion, a billion, whatever, is a lot, but that's it. 
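To make "an algorithm with a trillion parameters" concrete, here is a hedged back-of-envelope sketch in Python. The 12·d² per-layer count is the standard rough approximation for a transformer block, and the configuration numbers below are illustrative (they happen to be GPT-3-scale), not a claim about any particular model's internals.

```python
# Back-of-envelope parameter count for a decoder-only transformer,
# using the common approximation: each layer holds roughly
# 12 * d_model^2 weights (4*d^2 for attention, 8*d^2 for the MLP).

def transformer_params(n_layers, d_model, vocab_size):
    attention = 4 * d_model * d_model    # Q, K, V, and output projections
    mlp = 8 * d_model * d_model          # two linear layers with 4x expansion
    per_layer = attention + mlp
    embeddings = vocab_size * d_model    # token embedding table
    return n_layers * per_layer + embeddings

# A GPT-3-scale configuration: 96 layers, d_model 12288, ~50k vocabulary.
total = transformer_params(n_layers=96, d_model=12288, vocab_size=50257)
print(f"{total / 1e9:.0f}B parameters")  # prints "175B parameters"
```

Scaling d_model and layer count by a few times each pushes the same arithmetic from hundreds of billions toward the trillion Bill mentions; the point is that the "algorithm" itself is a short formula, and the size is all in the parameters.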

How do you get talent, how do you attract it, how do you keep it? And I don't think the big companies can. Bill Joy was one of the founders of Sun, a brilliant man, and he always had a law that the number of really bright people at an organization was proportional to the log of the number of employees, which of course means that the percentage drops dramatically with size. And, you know, he told me that in 1988, 35 years ago, and I've never seen it wrong. I mean, you just can't create enough great jobs that will attract and hold great people in a big company, because you get bureaucracy. You know, you deal with Amazon, and Amazon seems like the military. Well, it has to at a million employees. It isn't that it's the military; it's that an organization at a certain size can only run a certain way. 
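Bill Joy's rule of thumb can be illustrated in a few lines of Python. The constant k here is arbitrary, chosen only to show the shape of the curve: if the count of "really bright people" grows like log n, their share of headcount collapses as the company grows.

```python
import math

# Illustrative sketch of the Bill Joy rule cited above: bright people
# scale with log(headcount), so their fraction of staff shrinks with size.
# The constant k is arbitrary; only the shape of the curve matters.

def bright_people(n, k=10):
    """Estimated count of really bright people in an org of n employees."""
    return k * math.log10(n)

for n in [100, 10_000, 1_000_000]:
    b = bright_people(n)
    print(f"{n:>9} employees: ~{b:.0f} bright people, {100 * b / n:.4f}% of staff")
```

With k = 10, a 100-person company is a fifth bright people; a million-person company, a few thousandths of a percent, which is the bureaucracy point in numerical form.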

0:39:34 - Mehmet
Right. 

0:39:36 - Bill
I mean, I guess there are a few exceptions, you know, the British running India or the Catholic church, but they're not trying to implement complex, you know, software development. 

0:39:53 - Mehmet
Yeah. So, Bill, what do you think the trends will be in the next year? Of course, AI will be there, but do you think it will steal the scene from everything else? Or are we expecting some breakthroughs in maybe some other technologies? Or is the narrative already written, and does everything have to be in the space of AI? 

0:40:24 - Bill
Well, it's a good question, a very good question. I think AI is just an evolution of algorithms. They're just very complicated algorithms, and we're already in an algorithmic society. We're run by algorithms. Fortunes are created, rents are created, algorithms run everything, and more. 

I think software has a reckoning coming, whether it's AI or not. Unfortunately, public perception of AI is going to be driven as much by Tom Cruise and Mission Impossible as by the scientific discussion that's going on at the same time, or the OpenAI governance issues. The public is going to see that issue in Mission Impossible, and that's going to affect it. And I think the ability of software to remain beyond the purview of government is going to come into question. It already is. You see all these calls to regulate AI, regulate AI, and again, I think they're vacuous because they have no idea what they're saying. You ask how, and they look at you and go, well, you have to regulate it. What does that mean? Well, you appoint a commission. What does the commission do? Oh, I haven't gotten to that; we'll just appoint a commission. And so I think that the future of software is going to be one with a much greater intrusion of government into how we do this stuff. Now, you know, we live with that. 

If you're a civil engineer and you're designing a bridge, guess what? You have to have a certification. You can't just say, oh, I'm a civil engineer and I designed a bridge, let's go put people on it. That's not legal. If you're a civil engineer, you've got to have a license to be a civil engineer. And so I think that you're going to see, inevitably, that developers are going to have to get licensed. Not everybody is going to have to be licensed, but somebody who is licensed is going to have to take accountability, and so figuring out how to put accountability into software is going to become the issue. 

You see that issue in the misinformation and disinformation debate in the modern democracies, where people are saying, well, why did the algorithm do that? So now you're going to say that for whatever the algorithm is doing, some human being has to take accountability for its actions, and to have that, you've got to have licensing and other mechanisms come into place. And this is going to be a pain. If you're a CTO, you're going to hate this; it's going to make your life miserable because you already have enough issues. But okay, now I want to certify this. Oh, that's great. Okay, oh, by the way, we're totally dependent on Kafka. Now how do I, as the deploying engineer, certify Kafka? How does open source work in a world of accountability? Because open source is a world that thrives on no accountability. 

0:43:52 - Mehmet
Yeah, it's very complicated, very. Time will show us, I think, Bill, what will happen. You know, again, like last time, you told us great stories when we talked about AI. Just one thing about regulation: it's like a meme I saw the other day. It said whoever regulates AI will be left behind, and it was talking about Europe, because I think the EU decided to heavily, heavily regulate AI, and no one actually knows what they mean by that. 

0:44:30 - Bill
Well, it's a good point. Japan just said it's open season on copyrighted content, because they want to lead in AI, and they figure that's a legal thing you can do. You know, if you want to slow down AI and that is your policy goal, it's very simple: put an excise tax on GPUs. Tax GPUs. Every GPU gets a 50% excise tax, and okay, AI slows. It's done, in the United States at least. We and the EU and a handful of other countries can put an excise tax on GPUs, and you would slow AI. Or tax electric power going to data centers; that will slow AI. We can slow AI if we want to, but nobody really wants to. It's just a marketing slogan. And since AI is just an algorithm, at what point does building an algorithm become AI? Add a billion parameters, maybe? There's so much nonsense in the public debate, people talking about something about which they know nothing, but thinking they are absolute experts because they can turn on their iPhone. 

0:45:50 - Mehmet
Yeah, I agree 100% with you, Bill. Sometimes we see these nonsense things at some stage. You know, it's scary, we don't know how it's going to go, but they don't even know what they're talking about. So I agree with you on this, Bill. I really enjoyed the conversation again today. I will put the link to the book, The Bleeding Edge, so people can go and get it; there are really great topics discussed there. I liked the preface that I saw; it's amazing. And thank you again for coming, I guess, for the second time. You know, this is something I love. 

0:46:29 - Bill
No problem, I enjoyed the conversation. 

0:46:30 - Mehmet
Yeah. It's the end of the year now, so we're bringing guests back to talk about what we've seen this year and what to expect next. 

0:46:38 - Bill
I'd actually do an update any time. 

0:46:41 - Mehmet
We will do it. It will always be nice to have you on the show, Bill, so thank you very much for being with us. Thank you, bye-bye, and we'll see you again soon. Thank you, bye-bye.